title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Self-supervised video pretraining yields robust and more human-aligned visual representations | Accept (poster) | Summary: The research question addressed by the paper is whether self-supervised training on
videos leads to image features that are better aligned with human perception. To that
end, the authors propose a contrastive learning method that leverages natural
changes over time as different views, beyond adapted image augmentations. Moreover, an
attention mechanism is introduced that allows the model to compare image pairs based
on features from selected regions only. The model is trained on videos that have been
curated to better match the ImageNet category distribution than existing video datasets.
Trained this way, the model performs better than previous methods when transferred
to several standard benchmarks and shows a better alignment with human vision.
Strengths: - Training computer vision models to better align with human perception is an active
area of research with the potential to improve, e.g., the robustness of current
approaches. Self-supervised training on natural videos is closer to how humans learn
than standard training methods. Exploring this direction therefore is well motivated.
- The authors compare to a broad range of previous models on several computer vision
benchmarks. The newly proposed method improves over previous approaches in most cases.
- Several ablation studies are performed and reported in the supplement to dissect the
influences of the contributions on the improved performance.
Weaknesses: The paper aims at better aligning neural network features with human perception.
However, the range of models tested for alignment with human perception is much smaller
than the range of models tested for performance on computer vision benchmarks. In
particular, none of the other video-based self-supervised methods is tested and none
of the ablation studies considers alignment with human vision.
The title of the paper, "Self-supervised video pretraining yields human-aligned visual
representations", and the introduction therefore raised wrong expectations for me. Due to the
missing ablations, one cannot judge whether the improved human alignment is due to the
video pretraining or other contributions, such as the new training set or the
multi-scale contrastive attention pooling. Moreover, I find the title too bold given
that other methods perform much better in terms of human error consistency according to
the official benchmark repository (e.g., CLIP).
**Update:** Both points raised above have been addressed during the rebuttal. Therefore I improved my rating of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As described above, the paper has a much stronger focus on performance in terms of
computer vision (where the proposed method is very competitive) than alignment with
human vision. I would recommend changing the title and introduction to better align
with this focus and I am happy to increase my score if this issue is addressed.
L186ff needs clarification for me: When evaluating video understanding, the paper claims
that "VITO learns features that capture finegrained temporal deformations of objects".
However, VITO is based on image networks so that spatio-temporal features cannot be
learned.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss limitations of the paper due to the focus on a single architecture
(ResNet-50). Potential societal impacts are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and address the concerns below.
**“none of the other video based self-supervised methods is tested and none of the ablation studies considers alignment with human vision.”**
We agree this was a limitation of our current results and have addressed this by greatly expanding the set of models evaluated on our benchmarks and on human alignment (see global response). The results show that VITO outperforms all prior video pretraining methods on both the visual tasks and human alignment, and we have verified that these effects are strongest only with the combination of the contrastive attention pooling and the use of temporal deformations during training.
**“Due to the missing ablations, one cannot judge whether the improved human alignment is due to the video pretraining or other contributions, such as the new training set or the multi-scale contrastive attention pooling.”**
We refer the reviewer to our general response and ablation figures in the rebuttal PDF for a complete analysis of the impact of the ablations on the improved human alignment and robustness. In summary, we have verified that all components (dataset, contrastive attention pooling, and temporal augmentations) are necessary for the strongest performance. In addition, we find a particularly strong relationship between robustness/good human alignment with models that have been trained with the temporal augmentations.
**“Moreover, I find the title too bold given that other methods perform much better in terms of human error consistency according to the official benchmark repository (e.g., CLIP). [...] I would recommend to change the title and introduction to better align with this focus and I am happy to increase my score if this issue is addressed.”**
We refer the reviewer to the global response clarifying the intentions behind our claims and title (which we will incorporate in the text), and are happy to discuss/change based on what the reviewer thinks would make the most appropriate title. Regarding comparisons with CLIP, we first would like to note that CLIP in fact does not outperform our method on the saliency benchmarks which are a strong test of one aspect of perceptual alignment. Additionally regarding the shape-bias alignment, we have now included CLIP in our evaluations and while we find that it performs better than VITO, it is not by a very large margin. Moreover, CLIP is also trained with large-scale language supervision (400M image-text pairs), which is far larger than VideoNet, so we find it significant that we can obtain similar gains over standard ImageNet pretraining with far less data and supervision than CLIP training. More relevant are comparisons to image- and video-only self-supervised learning methods, where VITO outperforms prior work. Nevertheless, it would be very interesting to combine our proposed self-supervised video pretraining with multimodal language supervision.
**“When evaluating video understanding, the paper claims that "VITO learns features that capture finegrained temporal deformations of objects". However, VITO is based on image networks so that spatio-temporal features cannot be learned.”**
We fully agree that VITO cannot capture spatio-temporal features explicitly. However, as seen in the ablations (see main response), VITO does in fact perform significantly better when trained with temporal deformations. We believe this is due to the fact that VITO learns to attend to the features in an image which are more likely to be predictable under the fine-grained temporal deformations of objects. These features can be quite different than those that are predictable under standard image-based augmentations. Therefore, even though VITO does not capture the deformation explicitly, it is learning to become robust to the fine-grained temporal deformations of objects. We will clarify this in the main text.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I appreciate the extension of the results regarding the alignment with human vision, now including both an ablation study and more extensive comparisons to prior work. In my view these results strengthen the focus of the paper and I will improve my rating accordingly.
Thank you for clarifying the focus of your paper. As written in my original review, I see training on videos as a well-motivated direction regarding alignment with human vision. In the general response you clarify that you "do not mean to claim we have learned the most human-aligned, general visual representation". However, to me the title seems to claim exactly that; a more appropriate alternative could be "Self-supervised video pretraining improves human-alignment of visual representations".
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer very much for their prompt response and for appreciating our new results and being willing to update their score.
Regarding the title, yes we see the reviewer's concern and are happy to change the title to the reviewer's suggestion as we agree that it is more appropriate. We hope that this alleviates any remaining concerns. | Summary: This paper introduces a new video dataset (VideoNet), and architectural adjustments (VITO) that enable pretraining on video datasets to improve performance on downstream transfer tasks, such as segmentation, object detection, and generalization. The VITO network uses a ResNet backbone, and architectural changes include extracting spatial feature maps at the two penultimate blocks, applying a learned softmax function to obtain a weighting over features, and then projecting this into a final feature map. This feature extractor is trained using the InfoNCE loss on two sampled video frames, with additional image-space augmentations applied. Additional experiments in the supplementary material investigate the use of transformer backbones. The VideoNet dataset retrieves video clips matching the ImageNet categories, and applies an image classifier on these videos to filter for the correct category. Additional experiments compare the attention maps from the VITO model and other baselines to human ground truth from the ClickMe dataset.
Strengths: - The proposed method and training dataset presents strong empirical results. It outperforms, or is competitive with, image pretraining on ImageNet, video pretraining, or robust image pretraining on a variety of transfer tasks. Notably, it demonstrates improved robustness on datasets with image corruptions even when compared to models trained for robust classification.
- Ablations evaluate the independent contributions of both the dataset and the model design. With the same VITO model but different datasets, VideoNet outperforms AudioSet, a prior video dataset, on segmentation and object detection tasks. On the same dataset (VideoNet) but different models (MoCLR vs. VITO), VITO also outperforms the prior method (Table B.1).
- Additional experiments on Swin transformers show promising improvements. I think additional details on the transformer setup, and on which spatial feature maps are extracted in the transformer, would be helpful.
- This paper is well written and clear to follow.
Weaknesses: - The benefit of VITO seems to be primarily on transfer tasks. On standard classification, it falls short of ImageNet pretraining by ~10%, though it still outperforms other video-pretrained models. It is also slightly lower on object detection compared to the DINO representation. However, I think there are still valuable insights to be gained from the methodology and dataset presented here.
- There are some additional parts that could benefit from clarification. Please see the below questions section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - L124: what is the softmax normalized over?
- L130: is $a_\xi$ also a target network? It may be helpful to also illustrate the network $g$ in Fig. 1
- Fig 3: Why was the CLIP Resnet chosen for the attention map, as opposed to the attention maps extracted from CLIP or DINO transformer models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are adequately addressed. The current setup primarily explores the ResNet backbone with the InfoNCE loss, but there are promising initial results with transformer models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and now address the weaknesses/questions.
**Weakness 1: lower ImageNet validation numbers**
Yes, VITO does underperform supervised ImageNet pretraining on ImageNet classification by ~10% and other SSL ImageNet pretraining methods by 5-7%. This, however, can be attributed to the fact that ImageNet classification is an “in-distribution” task for models pretrained on ImageNet. Even though a separate validation set is used, the training images are sampled from the same distribution, as they share properties such as a single-object field of view, central cropping, and high resolution. Even though VideoNet matches the overall class distribution of ImageNet, we cannot expect our models to transfer as well to ImageNet, as it is still out-of-distribution in many respects. Indeed, VideoNet frames contain multiple objects and more global scenes, strong motion deformations, and wider variability in resolution and quality. Even so, VITO outperforms all ImageNet-trained models on benchmarks based on ImageNet distribution shifts. For example, VITO strongly outperforms all ImageNet pretraining on ImageNet-Vid and ImageNet-3DCC. These recognition benchmarks are arguably better tests of generalization under real-world conditions.
Additionally, VITO is competitive on all other scene understanding benchmarks and strongly surpasses ImageNet pretraining on transfer to video-based tasks (DAVIS, UCF). Therefore, on average, VITO generalizes across all tasks significantly better than comparable ImageNet pretrained models.
**Question 1: L124: what is the softmax normalized over?**
The softmax is normalized over space such that the attention weights across all spatial locations sum to 1. This encourages competition across spatial locations, forcing the attention to be more localized.
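As a hedged illustration of this answer (not the paper's actual implementation; the linear scoring vector `w` and the flattened `(H*W, C)` shapes are assumptions standing in for the learned attention module), a spatially-normalized attention pool might look like:

```python
import numpy as np

def spatial_attention_pool(features, w):
    """Pool a (H*W, C) feature map with softmax attention over space.

    `w` is a hypothetical (C,) scoring vector standing in for the learned
    attention head. The softmax runs over the spatial axis, so the weights
    across all locations sum to 1, creating competition between locations
    and encouraging localized attention. Returns the (C,) pooled embedding
    and the (H*W,) attention weights.
    """
    logits = features @ w                  # one logit per spatial location
    logits = logits - logits.max()         # subtract max for numerical stability
    attn = np.exp(logits)
    attn = attn / attn.sum()               # softmax over space: weights sum to 1
    return attn @ features, attn           # attention-weighted average over space

feats = np.arange(12.0).reshape(4, 3)      # e.g. a tiny 2x2 map with 3 channels
pooled, attn = spatial_attention_pool(feats, np.array([1.0, 0.0, 0.0]))
```

Normalizing over channels instead of space would not produce this spatial competition, which is the point of the answer above.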
**Question 2: L130: is a_xi also a target network? It may be helpful to also illustrate the network g in Fig. 1**
$a_\xi$ is also a target network, as the EMA is applied to all parameters of the target. We understand that the missing $g$ in Fig. 1 can be confusing and will update the figure and caption to emphasize its role.
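A minimal sketch of the EMA target update described here (the flat parameter-list representation and the `tau` value are illustrative assumptions):

```python
def ema_update(online_params, target_params, tau=0.99):
    """One exponential-moving-average step for every target parameter.

    target <- tau * target + (1 - tau) * online, applied uniformly to all
    parameters of the target network, including the attention module a_xi.
    """
    return [tau * t + (1.0 - tau) * o
            for o, t in zip(online_params, target_params)]
```

Because the same rule covers every parameter, the attention module's target copy lags its online copy exactly like the backbone does.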
**Question 3: Fig 3: Why was the CLIP Resnet chosen for the attention map, as opposed to the attention maps extracted from CLIP or DINO transformer models?**
We chose the CLIP ResNet for the attention map comparison as it provides an architecturally matched comparison to ours. We fully agree that scale in data and architecture can produce improvements, but our goal was to understand the impact of video pretraining on the image representations; for this we require comparisons that are matched in model capacity to isolate these effects.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thanks to the authors for the response and the additional clarifications and ablation experiments. I agree with reviewer jN1F that perhaps human alignment was not the principal focus of this paper, but overall I find the methodology presented here to demonstrate compelling results across a variety of tasks. I will retain my original rating. | Summary: The paper proposes a self-supervised method for learning image representations from videos.
Towards this end, a procedure to curate video datasets most suitable for such pre-training is proposed by selecting videos that best match the distribution of visual classes found in ImageNet.
Secondly, this dataset is leveraged through a contrastive training approach where the common global average pooling of video frame features is replaced with an attentional module that learns to select spatial regions of several network layers to construct the image embedding.
The learned representations are evaluated on a large set of downstream tasks, including spatial scene understanding (segmentation + detection), video understanding (segmentation + action recognition), and robust image classification.
Strengths: - The method is very well presented, and the paper is well written
- The downstream evaluation of the learned image representation is extensive, and the model achieves good performance across a variety of tasks
- The finding that a video dataset with similar category distribution to ImageNet appears to perform considerably better than many existing video datasets is interesting (see Fig 4)
Weaknesses: - I would have preferred to see more extensive ablation experiments to support the technical contributions of the paper (e.g., the attentional pooling module for contrastive learning). These ablations are currently only reported on a single task and dataset. It would be more convincing to see the influence on all the downstream tasks in Table 1 and to summarize these ablation results in a Table. As it is, it is unclear if the benefits of the proposed training are consistent across tasks or if most of the benefit is due to the data curation process.
- The data curation process relies fundamentally on reproducing the object category distributions of ImageNet in a video dataset. While it is interesting that this appears to lead to a benefit across many downstream tasks, it is unclear what explains these benefits. Is it greater visual diversity, lesser class imbalance, or maybe some other property?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would appreciate it if the authors could address the weaknesses outlined above. I'm particularly interested in how the technical contributions affect downstream results across multiple benchmarks (i.e., how do the ablations look when evaluated across the multiple benchmarks in Table 1)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and address the concerns below.
**Weakness 1: More extensive ablations**:
See our global response for a detailed discussion of the more extensive ablations we have now performed. Notably, we have verified that all components (dataset, contrastive attention pooling, and temporal augmentations) are necessary for the strongest performance. In addition, we find a particularly strong relationship between robustness/human alignment and training with temporal augmentations. We believe that strengthening this link is important and will update our main text accordingly.
**Weakness 2: unclear what explains VideoNet performance**
We agree with the reviewer that there is still much more to unpack about the VideoNet curation and what aspects are necessary or not. We have provided ablations comparing to YT8M and training on the far more diverse JFT-300M image dataset in Supp Table B.1. In both cases, VideoNet outperforms by a large margin across the scene understanding benchmarks. This suggests that the key feature has to do with the shape of the class distribution rather than just greater visual diversity. Specifically, we hypothesize that having densely sampled classes and lesser class imbalance (removing long tails) greatly improves performance because it provides more difficult yet useful negative examples for the contrastive loss. It remains to be seen whether this can be tested more empirically by controlling for different properties of the distribution, but we believe our work provides a solid basis for this investigation.
---
Rebuttal Comment 1.1:
Comment: I read the rebuttal and the other reviews. I appreciate the novel results in the rebuttal and agree with the overall positive assessment of the other reviews. I vote for accepting this paper. | Summary: This paper studies how to take advantage of natural temporal distortions in video to learn image spatial representations. The paper first proposes a VideoNet dataset that filters the video data from common video datasets by an ImageNet classifier. The paper further proposes to multi-scale attention pooling to improve the baseline algorithm. The algorithm trained on the proposed the dataset is evaluated on both image and video datasets. The paper also studies the human alignment of the learned attention.
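The curation step described in the summary above, retrieving clips for ImageNet categories and filtering them with an image classifier, could be sketched as follows (the function names, per-clip scoring rule, and threshold are illustrative assumptions, not the paper's actual pipeline):

```python
def curate_clips(clips, classify, target_class, threshold=0.5):
    """Keep clips an image classifier assigns to the queried category.

    clips: iterable of (clip_id, frames); classify(frame) -> {class: prob}.
    A clip is kept if any frame scores above `threshold` for the queried
    category, reducing the domain gap to the ImageNet class distribution.
    All names and the threshold are hypothetical, for illustration only.
    """
    kept = []
    for clip_id, frames in clips:
        # score the clip by the classifier's best confidence over its frames
        score = max(classify(f).get(target_class, 0.0) for f in frames)
        if score >= threshold:
            kept.append(clip_id)
    return kept
```

Running one such pass per ImageNet category would yield a video dataset whose class distribution mirrors ImageNet's, which is the property the reviewers highlight.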
Strengths: 1. Pre-training image representations with video datasets is a natural and valuable idea to explore.
2. The VideoNet dataset could be a valuable contribution to the community provided the authors have a plan to release it publicly.
3. The paper provided abundant experiments to demonstrate the effectiveness of the proposed algorithm.
Weaknesses: 1. Although using an ImageNet classifier to filter the video data is a practical idea to reduce the domain gap between video and image datasets, this also introduces unfairness to the comparison to other self-supervised methods. This is because the filtering process with an ImageNet classifier implicitly takes advantage of ImageNet labels, and ImageNet labels are very effective on downstream image tasks as validated by the authors in Figure 4.
2. It is unclear how much the temporal deformations provided by a video dataset help with pre-training VITO, which I think is one of the central questions for video contrastive learning for image downstream tasks. If temporal deformations are not useful, then why bother filtering a video dataset, which costs much more than an image dataset? I am not sure if T=0 in Figure B.2 is a baseline that generates two views from the same frame in VideoNet. If so, this would be a helpful ablation that could be highlighted in the main text.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have a few questions on the experiment settings:
1. In Table 1, why are some results different from what are reported in their original papers? For example, VFS [8] on DAVIS is reported to be 68.9 in its paper, but 67.8 in this manuscript. Is this 67.8 number produced by the authors with a different experimental setting? Are most of numbers in Table 1 produced by the authors by fine-tuning on the publicly available checkpoints?
2. Why are the UCF-101 results in Table 1 significantly lower than other self-supervised learning algorithms on video? For example, in Feichtenhofer et al. [34] a ResNet-50 pre-trained on K400 can easily obtain an over 90% accuracy on UCF101?
3. The experiments on ImageNet-3DCC in Figure 2 are interesting. I am wondering why only evaluate MoCLR as the representative self-supervised algorithm, while not others like DINO?
4. I am wondering the training cost on VideoNet. For example, how many ImageNet-equivalent epochs have been used and how many crops are there in each step?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and address the concerns below.
**Weakness 1: VideoNet curation**
We realize that the VideoNet curation procedure does involve utilizing the class distribution or implicit labels. However, most of the high-performance SSL methods we compare to were trained on ImageNet (which itself was manually curated), so we do not agree that there is anything unfair in our comparisons. Even the VINCE/CycleContrast video models leverage a similar curation strategy in the construction of their R2V2 dataset. In particular, we outperform ImageNet pre-trained methods on video tasks, OOD robustness, and human alignment, demonstrating that we can leverage the benefits of ImageNet’s class diversity along with complex spatiotemporal deformations of objects to learn more general visual representations. However, to additionally alleviate concerns, we have provided ablations of VITO using AudioSet and YT8M in Supp Table B.1. We find that using these uncurated datasets, VITO outperforms all prior video pretraining (including prior video pretraining leveraging the same datasets) and even outperforms an SSL baseline trained on the significantly larger, but uncurated, JFT-300M image dataset.
**Weakness 2: temporal ablations**
We agree with the reviewer that assessing the impact of the temporal deformations is a central question. Indeed, in our ablations in Fig. B.2, T=0 is the spatial version of training where augmentations are only generated from individual frames. Therefore, we see a significant increase in performance when including temporal deformations. We have now additionally shown the impact of temporal deformations through many more evaluations, specifically emphasizing their effect on increased robustness (ImageNet-3DCC recognition) and human alignment on shape-biased tasks. For more information, see the global response and rebuttal PDF figures.
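The T=0 vs. temporal-view contrast discussed here can be sketched as follows (a simplified illustration; the uniform sampling distribution and index-based gap are assumptions, not the paper's exact sampling scheme):

```python
import random

def sample_frame_pair(num_frames, max_gap):
    """Sample two frame indices at most `max_gap` apart as contrastive views.

    With max_gap == 0 both views come from the same frame (the spatial-only
    T=0 ablation); max_gap > 0 lets the natural temporal deformation between
    the two frames act as an additional, free augmentation on top of the
    standard image-space augmentations.
    """
    i = random.randrange(num_frames)
    j = random.randint(max(0, i - max_gap), min(num_frames - 1, i + max_gap))
    return i, j
```

Each sampled pair would then receive independent image-space augmentations before entering the contrastive loss.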
**Question 1: DAVIS inconsistency**
For fair comparison, we re-ran all the models we compare to by fine-tuning or evaluating linear classifiers (for ImageNet-based evals) on the publicly available checkpoints. Therefore, there are slight discrepancies in the DAVIS video segmentation numbers, but we believe all these comparisons are fair as they are all fine-tuned under the same conditions.
**Question 2: UCF low numbers**
Our UCF numbers are, to the best of our knowledge, the best performance based on a pure image-based backbone. In fact, in Supp Table B.5 it can be seen that with simple temporal pooling strategies we surpass many prior video-based backbones and can get close to the Feichtenhofer numbers (under the same pre-training epochs). All prior works that show accuracies in the 90% range use training procedures with 3D ResNets, (2+1)D ResNets, or other architectures that allow for temporal integration in the model during training. We think it is significant that we can achieve numbers that are not far from these with just spatial feature extraction and pooling of the output representations. In our new ablations (see global response), we see that this performance does not appear without the inclusion of temporal deformations during training, indicating that there is implicit knowledge of the features relevant for defining action categories that can be captured in a purely image-based representation.
**Question 3: ImageNet-3DCC**
See our global response. We have now added DINO and many of the prior video pretraining works (cycle contrast, vince, vfs) and our ablations. We find that the strong robustness of VITO only appears with the combination of its methodological components, and is not replicated by other powerful SSL methods such as DINO or other video pretraining efforts.
**Question 4: Computational budget**
We designed the VITO training procedure such that it matched the computational budget of all of the models we compare to that are trained on ImageNet. In each step, we train all models with 3 views, and the 300 epochs refer to ImageNet-equivalent epochs (i.e., the same number of iterations needed for 300 ImageNet epochs).
---
Rebuttal Comment 1.1:
Comment: As the discussion period is coming to a close, we want to thank the reviewer again for their detailed review and ask if they have any other questions or concerns that have not been addressed by our rebuttal response. | Rebuttal 1:
Rebuttal: We thank all reviewers for the feedback and acknowledging our work's clear presentation of a novel step towards learning more general human-aligned visual models. We will address global concerns here and individual comments separately. Please see the rebuttal PDF for new results.
**Ablations:** Reviewers emphasized the need for more evaluations on ablated models to justify the critical components of our final model. To answer these questions, we present new results evaluating multiple ablations of VITO along different dimensions: UCF101 action recognition, OOD image recognition (ImageNet-A/ImageNet-vid), recognition under natural corruption (ImageNet 3dcc) and human alignment (shape bias tasks). We evaluated the following models on these tasks, seen in Table 1, Table 2, and Figure 3 (right panel):
* MoCLR VideoNet: MoCLR applied to VideoNet frames, no temp. augmentations.
* VITO spatial: 2-scale attention pooling, no temp. augmentations.
* VITO (1scale w/o attn): All augmentations, 1-scale embedding, no attention pooling.
* VITO (2scale w/o attn): 2-scale embeddings, all augmentations, no attention pooling.
* VITO (1scale): Full VITO, 1-scale embedding.
* VITO (AudioSet): Full VITO, pretraining data: uncurated AudioSet.
These models exemplify the key method elements (spatial vs spatio-temporal deformations, multi-scale contrastive attention, and VideoNet dataset).
_Rebuttal Table 1_ data is consistent with the more limited ablations in App Fig. B2 and App Table B.1. First, incorporating temporal deformations contributes greatly to the best model performance (temporal VideoNet models outperform the VITO-spatial and MoCLR models on all benchmarks). Second, architecturally, the multi-scale aspect is the most significant component on its own, but when combined with attention pooling there is a synergistic effect that leads to significant improvements, especially on OOD recognition benchmarks. Moreover, VideoNet models outperform those trained on other datasets such as Audioset. Along with the ablations in App. Table B.1, this suggests that the importance of VideoNet is in shaping the class distribution rather than just having large visual diversity.
_Rebuttal Table 2_ shows that the full VITO method produces the best human alignment in terms of ceiled error consistency. Notably, other high-performing models are _all VITO ablations that were trained with temporal deformations_. This surprisingly includes the AudioSet pretrained model, even though it has poorer performance on the computer vision tasks. This result is quite striking, as previous image-based video pretraining methods perform quite poorly (VINCE [7], VFS [8], and CycleCon [9]). This suggests that our specific methodology better leverages the spatio-temporal data to produce learned representations that are more human-aligned. As suggested, we now include a comparison with CLIP, which does have better error-consistency than our best model. However, CLIP is trained with large-scale language supervision (400M image-text pairs), and is thus not directly comparable. More relevant are comparisons to image- and video-only SSL methods, where VITO significantly outperforms prior work.
_Rebuttal Fig 3_ (right panel): In recognition under natural corruptions (ImageNet-3DCC), VITO outperforms all ablated versions. Again, there is a striking gap between models trained with spatio-temporal deformations, which are far more robust than those trained only with spatial augmentations. Coupled with the alignment results, this strongly highlights the importance of video vs. image pretraining.
**Comparisons to MAEs and transformer-based architectures:** While we agree with the reviewers that transformer architectures would greatly benefit the work, this would require a significant replication effort that we leave to future work. It would be unfair to just compare ResNet-50 models with transformers due to the vast differences in expressive power. To reiterate, the goal of this work is to _compare image representations learned from videos with those learned from images, assuming matched model capacities_. In addition, some of our most important comparisons are with prior work that also attempted to learn image representations from video data. These also mostly performed experiments with ResNet-50s, informing our initial architecture choice to have apples-to-apples comparisons with prior work.
**Comparisons to SoTA video models**: Most SoTA video-based methods utilize specialized video architectures, and thus cannot be evaluated on general image-based tasks such as scene understanding, robust recognition, and human alignment. As a result, while these methods perform well on video tasks such as action recognition, they remain less general than ours and are less appealing candidates for robust, human-aligned visual representations.
Lastly, we wish to clarify potential confusion over our title and introduction. We do not mean to claim we have learned the most human-aligned, general visual representation. Rather, because most of the current literature in this space is solely focused on achieving more general representations via scale (larger image datasets and models), we wanted to probe alternative ways to achieve this goal. Our work shows that video pretraining can yield more general, robust, and human-aligned visual representations by leveraging natural spatio-temporal deformations, even at a relatively small scale. This contribution is complementary to scaling to larger architectures, datasets, and sources of supervision (e.g., language alignment). As an example, recent work has shown the benefits of combining image-level self-supervision and multimodal image-text alignment [Mu]. Naturally, combining our self-supervised video pretraining with multimodal image-text alignment appears to be a promising direction for future work.
[Mu] Mu, Norman, et al. "SLIP: Self-supervision Meets Language-Image Pre-training."
Pdf: /pdf/a3db5c8c155a50014b7afd05c2fe9dd731696448.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a SSL method using video pre-training to produce general visual representations for both image and video tasks. The proposed VITO pipeline includes a data curation process and a video SSL technique based on MoCLR. The authors conduct a comprehensive evaluation of VITO's performance on diverse benchmarks, spanning detection, segmentation, video segmentation, classification, and out-of-distribution object recognition. Additionally, the paper includes a comparison of human-alignment on two benchmarks.
Strengths: • The paper is well-organized and easy to follow. The proposed components for the method, including data curation and various modifications on MoCLR, all make sense to me.
• Ablation studies are conducted by the authors to validate the effectiveness of their proposed method.
Weaknesses: 1. The main method is based on the image SSL method MoCLR. While the authors demonstrate the effectiveness of multi-scale contrastive attention pooling, this component appears applicable to image SSL as well. Therefore, the technical novelty of applying an image SSL method to video datasets seems limited, especially considering the existence of other established video pre-training works.
2. The motivation behind choosing MoCLR as the starting baseline should be clarified. Given the availability of better contrastive learning methods and the superior performance of Masked-based pre-training (e.g., MAE), it would be beneficial to provide comparisons or discussions to justify this selection.
3. The data curation process plays a critical role in the overall pipeline. To gain more insights, additional ablation or analysis would be helpful.
3a. For example, what is the effect of the number of curated videos and of different filtering strategies?
3b. For the ablation (Figure 4), considering that performance can be significantly influenced by pre-training data and downstream dataset distribution, it would be better to verify on more downstream tasks instead of only one dataset.
3c. It would be interesting to investigate the scaling behavior of the training schedule, as 300 ImageNet epochs might be insufficient for large-scale video datasets.
4. In Table 1, incorporating more standard benchmarks would enhance the evaluation's comprehensiveness. For instance, Kinetics is a more widely adopted benchmark for video understanding compared to the small-scale UCF-101.
5. In B.5, VITO doesn’t show superior performance on image-based tasks; in particular, it has a significant drop on ImageNet compared to ImageNet-pretrained methods. It might be worth having some discussion around this, as it is also an important observation.
6. The paper lacks comparisons with some video-specific SSL methods, including contrastive learning-based and Masked-based methods (e.g., VideoMoCo, VideoMAE, MAE_ST, etc).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Overall I think the video pre-training direction and results are interesting.
However, my main concerns revolve around the limited evaluation and results presentation, which lack sufficient ablation studies and comparisons, as mentioned in the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and direct them to the global response for our summary. We will address each cited weakness point by point.
**Weakness 1: technical novelty**
We agree with the reviewer that the basic idea of applying an image-based SSL framework to a video dataset is not novel. In fact, this has been attempted previously in the many cited works we compare to in Table 1 (CycleCon, VINCE, VFS, VIVI, etc.). The novelty of our work lies in the careful methodological choices (for both the learning paradigm and dataset curation), which lead to significant empirical gains over all prior image-based video pretraining work. We concede that there are many successful works that specifically train video-based architectures on video datasets, but these methods are severely limited in their application to general scene understanding, robust recognition, and comparisons to human behavior, because they cannot be adequately modified to maintain high performance on image-based tasks. We emphasize that a key aspect of our work is demonstrating how image-based representations trained on video data can be far more general: maintaining strong performance on spatiotemporal tasks while far outperforming on static image tasks.
**Weakness 2: why MoCLR?**
We will clarify this in the text, but we chose MoCLR based on its strong performance in [1], which shows that it outperforms both contrastive (SimCLR) and non-contrastive (BYOL) formulations. We disagree that it is not a strong baseline. However, we have additionally compared to DINO in the robustness and alignment evaluations and find similar, and sometimes worse, behavior compared with MoCLR. We do not compare to all recent SoTA SSL methods such as MAE, primarily because they use significantly larger architectures (generally transformers) that would not be architecture-matched and would obfuscate the current set of results (see the general response for a longer discussion).
**Weaknesses 3a/3b: impact of curation is unclear**
Regarding VideoNet data curation and ablations, we refer the reviewer to our general response. Briefly, we present ablations on the 4 scene understanding tasks using two other prominent video datasets (AudioSet and YT8M) (Supp. Table B1). While the performance drops slightly using alternative video datasets, we see that our method performs far better than prior image-based video pretraining efforts. VideoNet curation helps to close the gap with ImageNet pretraining but it is not necessary to outperform prior work. Nevertheless, we agree with the reviewer that more work should be done to understand how our results are impacted by different curation strategies. In particular, we believe a purely unsupervised curation approach may be possible in which we simply measure the similarity of our dataset and ImageNet images (without labels) in an unsupervised embedding space to better align the two, rather than using a classifier (similarly to DINO v2). We leave this for future work.
**Weakness 3c: scaling epochs**
We agree that investigating scaling behavior to longer epoch schedules is valuable and interesting. We did not have time to run further experiments on this, but will add text to cite this as future work.
**Weakness 4: insufficiency of UCF101**
UCF-101 is to our knowledge quite standard and widely accepted. While it may be easier than some benchmarks, many prior works do evaluate on it (see Supp. Table B5). Additionally, there are very few prior works in image-based video pretraining that evaluate on Kinetics, and this subset of prior work is what we feel is the most important and direct comparison. Finally, many video-based pre-training methods in fact train on the Kinetics dataset, so for those models Kinetics is an in-distribution evaluation. Given these constraints, UCF-101 presents a reasonable example of an OOD video benchmark. We also note that beyond the UCF benchmark, we additionally evaluate on video segmentation via DAVIS, which is a standard benchmark in the field.
**Weakness 5: ImageNet numbers**
The reviewer notes that VITO does not show superior performance on image-based tasks, citing the ImageNet classification number. This is not true, as VITO does outperform many ImageNet pretraining methods on tasks which use OOD datasets (segmentation, detection, and OOD recognition). The reason VITO does not outperform other methods on ImageNet validation accuracy is that this is an in-distribution evaluation for models trained on the clean ImageNet dataset. While we align the class distribution of VideoNet to that of ImageNet, the frames themselves are significantly more diverse and noisy, meaning that ImageNet recognition is not strictly an “in-distribution” task for VideoNet pretraining, which makes the comparison unfair.
**Weakness 6: comparing to video models**
We provide some video-based comparisons in Supp Table B5 for comparing VITO on the UCF evaluation. However, on all image-based evaluations it is unclear how we can compare to these video-specific SSL methods as they all utilize video-specific architectures. It is well-known that these architectures cannot easily be adapted to handle image-based evaluations so we do not see an easy way to compare with these methods. This issue in fact exemplifies a benefit of VITO as it generalizes across image and video tasks seamlessly while maintaining strong performance in both domains.
---
Rebuttal Comment 1.1:
Title: quantitative comparisons with a masked image modeling approach
Comment: The reviewer requested comparisons with a masked image modeling baseline. While we initially did not find any such baselines using a comparable architecture, the recent work of Li et al. 2022 (A2-MIM [1]) develops a masked image modeling methodology that can be applied to ResNet-50 architectures and is highly performant.
To facilitate the discussion, we evaluated the A2-MIM models on multiple benchmarks. On the Geirhos human alignment benchmarks, we find that it does not improve over standard supervised training in human alignment and still significantly underperforms VITO. On the ImageNet-3DCC benchmark, while A2-MIM provides additional robustness (at the level of the stylized and robust ResNets), it still underperforms VITO significantly. For detailed numbers, see the tables below. We hope this provides an additional data point demonstrating that VITO outperforms a variety of state-of-the-art SSL objectives in this respect.
**Geirhos Human Alignment**
| **Method** | **Dataset** | **Accuracy difference (↓)** | **Observed Consistency (↑)** | **Ceiled Error Consistency (↑)** |
| :--- | :----: | :----: | :----: | :----: |
| VITO | VideoNet | **0.157** | **0.564** | **0.422** |
| A2-MIM | ImageNet | 0.197 | 0.520 | 0.325 |
| DINO | ImageNet | 0.236 | 0.504 | 0.291 |
**ImageNet-3DCC**
| **Method** | **Dataset** | **$\Delta$ Accuracy Severity 1** | **$\Delta$ Accuracy Severity 2** | **$\Delta$ Accuracy Severity 3** | **$\Delta$ Accuracy Severity 4** | **$\Delta$ Accuracy Severity 5** |
| :--- | :----: | :----: | :----: | :----: | :----: | :----: |
| VITO | VideoNet | **-14.3** | **-19.6** | **-24.8** | **-29.6** | **-34.1** |
| L2-Robust | ImageNet | -15.2 | -23.5 | -30.1 | -35.8 | -40.5 |
| A2-MIM | ImageNet | -16.3 | -23.4 | -30.5 | -36.6 | -42.2 |
| DINO | ImageNet | -19.4 | -27.7 | -34.7 | -40.6 | -45.5 |
[1] Li, Siyuan, et al. "Architecture-Agnostic Masked Image Modeling--From ViT back to CNN." arXiv preprint arXiv:2205.13943 (2022). | Summary: The paper proposes VITO, a new method for self-supervised video pretraining that learns general and human-aligned visual representations. It made several modifications over existing contrastive learning frameworks, including larger crop sizes, improved temporal sampling scheme, and multi-scale attention feature pooling for the projector. The paper also creates a video dataset (VideoNet) that aligns the class distribution with ImageNet, and partially redresses the imbalance between image and video learning. This improves spatial understanding compared to other video datasets.
Experiments are conducted on various tasks, including semantic segmentation / object detection on image, video object segmentation and classification, as well as OOD object recognition.
Strengths: This paper is well-organized and easy to follow.
The technical contribution and the way to do data curation make sense to me.
The results shows that VITO matches or exceeds image pretraining on spatial tasks like object detection and segmentation while
outperforming other video pretraining methods. This shows it learns a general representation.
The ablation is comprehensive.
Weaknesses: The human alignment evaluations are somewhat weak. Human alignment is a big area and whether saliency and shape bias tasks can serve as representative tasks for the evaluation concerns me. More comprehensive alignment benchmarks could be explored.
It would be great if the authors could also show baselines with the masked image modeling objective.
Overall I think this paper shows good results and presents things clearly. Thus my rating is weak accept. I encourage the authors to perform the above things I suggest to improve the quality of this paper further.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. Regarding the human alignment evaluations, we agree that more evaluations should be done; however, we focused on saliency and shape-bias tasks as these have been recent and prevalent tasks where many deep image models fail to capture important aspects of human behavior. In fact, the saliency benchmark that we evaluate on is an extremely large-scale study and we believe that this result is quite strong, given the fact that VITO provides greater saliency alignment than even the “harmonized” model (trained with an objective to explicitly match the human saliency maps). Additionally, shape vs texture bias has been a prevalent issue in the vision community given how strongly many models differ from humans. While the gap has closed for very large models trained on billions of images, our results demonstrate that even in a smaller scale setting, video pre-training may be a more natural solution. Nevertheless, more evaluations are always beneficial so if the reviewer has any suggestions for a preferable quantitative alignment benchmark, we are happy to evaluate and include this in the final paper.
Regarding masked image modeling, we understand that this is a dominant recent SSL paradigm. However, much of the masked autoencoding (MAE) literature is focused on using transformer architectures, due to the need for operating on patchified inputs. Because we seek architecture-matched comparisons, this makes comparing to any masked image modeling objectives difficult. We have provided a brief scaling evaluation using Swin transformers in Supp Table B.4, but leave it to future work to do a thorough comparison across transformer architectures.
---
Rebuttal Comment 1.1:
Title: quantitative comparisons with masked image modeling approach
Comment: The reviewer requested comparisons with a masked image modeling baseline. While we initially did not find any such baselines using a comparable architecture, the recent work of Li et al. 2022 (A2-MIM [1]) develops a masked image modeling methodology that can be applied to ResNet-50 architectures and is highly performant.
To facilitate the discussion, we evaluated the A2-MIM models on multiple benchmarks. On the Geirhos human alignment benchmarks, we find that it does not improve over standard supervised training in human alignment and still significantly underperforms VITO. On the ImageNet-3DCC benchmark, while A2-MIM provides additional robustness (at the level of the stylized and robust ResNets), it still underperforms VITO significantly. For detailed numbers, see the tables below. We hope this provides an additional data point demonstrating that VITO outperforms a variety of state-of-the-art SSL objectives in this respect.
**Geirhos Human Alignment**
| **Method** | **Dataset** | **Accuracy difference (↓)** | **Observed Consistency (↑)** | **Ceiled Error Consistency (↑)** |
| :--- | :----: | :----: | :----: | :----: |
| VITO | VideoNet | **0.157** | **0.564** | **0.422** |
| A2-MIM | ImageNet | 0.197 | 0.520 | 0.325 |
| DINO | ImageNet | 0.236 | 0.504 | 0.291 |
**ImageNet-3DCC**
| **Method** | **Dataset** | **$\Delta$ Accuracy Severity 1** | **$\Delta$ Accuracy Severity 2** | **$\Delta$ Accuracy Severity 3** | **$\Delta$ Accuracy Severity 4** | **$\Delta$ Accuracy Severity 5** |
| :--- | :----: | :----: | :----: | :----: | :----: | :----: |
| VITO | VideoNet | **-14.3** | **-19.6** | **-24.8** | **-29.6** | **-34.1** |
| L2-Robust | ImageNet | -15.2 | -23.5 | -30.1 | -35.8 | -40.5 |
| A2-MIM | ImageNet | -16.3 | -23.4 | -30.5 | -36.6 | -42.2 |
| DINO | ImageNet | -19.4 | -27.7 | -34.7 | -40.6 | -45.5 |
[1] Li, Siyuan, et al. "Architecture-Agnostic Masked Image Modeling--From ViT back to CNN." arXiv preprint arXiv:2205.13943 (2022). | null | null | null | null |
On Class Distributions Induced by Nearest Neighbor Graphs for Node Classification of Tabular Data | Accept (poster) | Summary: This paper studies machine learning problems on tabular data by converting them into node classification problems. The authors then formally analyze the Cross-Class Neighborhood Similarity (CCNS) and validate the performance on benchmark datasets.
Strengths: S1. The studied problem is very important.
S2. The paper offers theoretical analysis.
S3. The paper is well written and the idea is easy to follow.
Weaknesses: W1. More baselines are needed, such as other SOTA solutions in long-tail node classification.
W2. Neighbors’ sampling for balanced classes seems to be less novel [1]. The technical contribution is weak.
W3. Introducing graph data for tabular data has been studied a lot [2].
[1] Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation, ICML 20
[2] TabGNN: Multiplex graph neural network for tabular data prediction.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing that the problem we study is important, and that we managed to present the theoretical ideas in a clear form.
We address the questions/concerns of the reviewer below.
**W1:** Long-tail node classification baselines deal, depending on the definition, with the problem of power-law distributed degrees [*] or variable-sized graphs [**]. In our experiments, each dataset corresponds to a single graph where all nodes have exactly $k$ neighbors, so the inductive bias of the baselines suggested by the reviewer would not be advantageous in this context.
To further clarify, the goal of the experimental approach is to show that k-NN approaches are not a panacea that always improves performance on tabular data. If creating a k-NN graph were really helpful for the task, then we would observe improvements already with "more basic" methods like GCN and GIN.
[*] Long-tailed graph neural networks via graph structure learning for node classification, Applied Intelligence, 2023.
[**] On Size-Oriented Long-Tailed Graph Classification of Graph Neural Networks, WWW, 2022.
**W2:** We are not entirely sure there is a connection between the referenced paper on unsupervised domain adaptation and what we are currently proposing. Moreover, our paper is not about neighbor sampling for balanced classes.
We would appreciate if the reviewer could clarify this aspect.
**W3:** We thank the reviewer for suggesting an additional reference to be included in the paper. There are many works, like the ones we mention in Section 2, that generate a graph for tabular data and try to empirically evaluate the performance of graph machine learning methods. However, **none** of these studies approach the problem from a theoretical perspective, and therefore the conclusions these studies draw remain limited to specific datasets.
The novelty of our work mainly lies in a theoretical analysis suggesting that creating nearest neighbor graphs, and in particular k-NN graphs, might not help improve performance in a fully supervised setting. The experimental part is only meant to provide additional empirical evidence in support of the theoretical results.
In summary, while the introduction of graphs for tabular data has already been studied, it is still unclear why and whether it makes sense to do so in a fully supervised setup. Our work is the first theoretical study in this direction; we hope to have clarified this aspect.
--
We hope to have adequately addressed the points raised by the reviewer, but please let us know if further clarifications are needed.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Thanks for your response. I will raise the score to 5. | Summary: The paper studies quantitative measures to evaluate the usefulness of k-NN graphs when performing classification in datasets that do not originally come with a known graph topology. The authors argue that the use of k-NN graphs is not useful.
Strengths: A study of applying k-NN to construct a graph and then using standard GNN models for node classification. This is compared to using the baseline MLP that does not make use of any neighboring node information.
Weaknesses: 1. The paper is very heavy on assumptions, especially the use of Gaussian models to derive certain results. There is no empirical evidence that such models hold in the datasets used.
2. The numerical experiments are not convincing enough. Maybe I misunderstood how the experiments are conducted. A major concern is how the k-NN graphs are constructed and how the GNN models are applied.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In the definition (1), q_c does not seem to depend on the graph structure, unlike the original definition in [39]. How do you incorporate the notion of "neighborhood" here?
In fact, the whole development in Section 3.1 does not need to assume that we are performing node classification in a graph. Only in (9) do we use the notion of a node's neighborhood.
2. What is the purpose of the derivations in Prop 3.3 - 3.7? These are never used anywhere else in the paper or experiments (except for a very simple synthetic example). While I understand that the authors "leave these ideas to future investigations", there should be empirical studies to demonstrate that the results are of interest or useful.
3. The distribution p(\omega | c) in (3) has not been stated. Looking through the proofs and Prop 3.3, it seems to be assumed to be a Gaussian. Is this reasonable and is there empirical evidence for it?
4. The definition of SED in Prop 3.8 was never properly given. One has to look at the proof to realize the measure used is Lebesgue measure. Why is that when p(\omega | c) above is assumed to be Gaussian?
5. In the experiments, what distance metric is assumed to construct the k-NN graphs? The performance of different GNN models depends very much on the graph topology. Have different metrics been considered and do the authors' conclusions hold regardless of the metric?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Theoretical assumptions have not been comprehensively tested.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing constructive feedback and the opportunity to clarify unclear points.
**W1**: As we discuss in more detail in our answers below, we rely on Gaussian **mixtures**, which are known to be universal approximators. In addition, despite not covering all use cases, assumption 2 is often used in the literature because it is particularly attractive from a theoretical perspective [*].
In our opinion, assumptions 1 and 3 are quite reasonable considering the flexibility of Gaussian mixtures and the nearest neighbors construction (line 148) in the (implicitly Euclidean) message passing mechanism. In addition, we tested the non-linearity assumption of Section 3.2 empirically and showed that it does not affect the analyses.
[*] Sutton, Charles, and Andrew McCallum. "An introduction to conditional random fields." Foundations and Trends in Machine Learning 4.4 (2012): 267-373.
**W2**: We construct the experiment as follows:
- Every dataset is a set of $n$ samples to be classified. For the GNN models, we construct a single k-NN graph of size $n$ using the Euclidean distance between attributes (see also our answer to the question below).
- A GNN is then trained on top of this graph to carry out the original classification problem as a node classification problem.
Our empirical setting follows the rigorous one of [19], trying many configurations for all baselines. Please let us know if we missed something we need to better clarify about this.
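For concreteness, the graph-construction step can be sketched in a few lines of NumPy (random features stand in for a real tabular dataset; this is not the code used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 5
X = rng.normal(size=(n, 8))            # stand-in tabular dataset: n samples, 8 attributes

# Pairwise Euclidean distances between samples; exclude self-matches.
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)

# Each node is connected to its k nearest neighbors (a single graph of size n).
nbrs = np.argsort(d, axis=1)[:, :k]
A = np.zeros((n, n), dtype=int)
A[np.arange(n)[:, None], nbrs] = 1

assert (A.sum(axis=1) == k).all()      # every node has exactly k neighbors
```

A GNN would then treat `A` as the adjacency matrix and `X` as the node features when solving the original classification task as node classification.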
**Q:** In the definition (1), q_c does not seem to depend on the graph structure, unlike the original definition in [39]. How do you incorporate the notion of "neighborhood" here?
**A:** In nearest neighbor graphs, we can define a maximum neighboring distance $\epsilon$ from each node/sample. Hence, we incorporate the notion of neighborhood through the use of hyper-cubes (lines 150-152), which represent the set of all possible points (that is, neighbors) around the hyper-cube’s center (that is, the sample of interest).
**Q + Limitations:** What is the purpose of the derivations in Prop 3.3 - 3.7? ... **(partially omitted for space limitations)**
**A:** Improving the theoretical understanding of a phenomenon can influence the discovery of new practical methods, but not all theoretical insights have an easy and straightforward application. The theoretical developments of Section 3.1 are the first steps of a longer-term research plan on how to build *useful* synthetic graphs from tabular data, and to the best of our knowledge we are the first to do this from a theoretical perspective.
The practical use of these results is left to future works simply because we would require an additional paper to first create an algorithm and then test it properly and convincingly.
We hope this clarifies the long-term impact of our research and its relevance for future methods. We thank the reviewer for challenging us so that we can better improve the motivation for our studies in the paper.
**Q:** The distribution p(\omega | c) in (3) has not been stated. Looking through the proofs and Prop 3.3, it seems to be assumed to be a Gaussian. Is this reasonable and is there empirical evidence for it?
**A:** In the graphical model of Figure 1 (left), the distribution $p(x|c)$ (or equivalently $p(\omega | c)$) is a **mixture of Gaussians** conditioned on the class $c$. It is computed by marginalizing over the possible values of the variable $M$: $p(x|c) = \sum_m p(x|m,c)p(m|c)$. As an additional remark, mixtures of Gaussians are known to be universal approximators of distributions.
We will fix the paper accordingly.
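The marginalization can also be verified numerically; in the one-dimensional sketch below, the number of components, weights, means, and variances are illustrative choices, not values from the paper:

```python
import numpy as np

# Class-conditional mixture: p(x|c) = sum_m p(x|m,c) p(m|c), here with two
# Gaussian components (all parameters are illustrative).
means = np.array([-2.0, 1.5])
stds = np.array([0.7, 1.2])
p_m_given_c = np.array([0.3, 0.7])     # mixing weights p(m|c), summing to 1

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_x_given_c(x):
    # Marginalize over the latent component variable M.
    return sum(w * gauss(x, mu, s) for mu, s, w in zip(means, stds, p_m_given_c))

# Sanity check: the marginal density integrates to ~1 over a wide grid.
xs = np.linspace(-10.0, 10.0, 20001)
total = p_x_given_c(xs).sum() * (xs[1] - xs[0])
assert abs(total - 1.0) < 1e-3
```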
**Q:** The definition of SED in Prop 3.8 was never properly given. One has to look at the proof to realize the measure used is Lebesgue measure. Why is that when p(\omega | c) above is assumed to be Gaussian?
**A:** Thank you for spotting this; we will add the SED definition to the main paper at line 248. As discussed in our previous answer, $p(\omega | c)$ is a mixture of Gaussians, and the SED is the only distance for which there exists an exact, rather than approximate, closed-form expression for Gaussian mixtures [29].
**Q:** In the experiments, what distance metric is assumed to construct the k-NN graphs? The performance of different GNN models depends very much on the graph topology. Have different metrics been considered and do the authors' conclusions hold regardless of the metric?
**A:** We thank the reviewer for the insightful suggestion. All related works (with the exception of [50] that uses the cosine similarity) rely on the Euclidean distance for two main reasons:
- GNN models typically work in the Euclidean space when aggregating neighbors, hence the homophily assumption can be imposed - to a certain extent - only by computing nearest neighbors in terms of Euclidean distance
- Euclidean distance has an easy interpretation in terms of “how similar are the attributes of adjacent nodes”
Our analysis is set in this context, and we will clarify this in the paper. In addition, we considered the reviewer’s suggestion and ran additional experiments using the cosine similarity as in [50] to construct the k-NN graphs. As shown in the PDF in the general response to reviewers, we generally observe worse performance of the GNN methods with this alternative metric, for instance on Adult and eGrid.
This makes sense because, in addition to the above considerations, the cosine similarity is not as “expressive” as the Euclidean distance, since it ignores the magnitude of the vectors.
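To make the two graph constructions concrete, here is a minimal scikit-learn sketch on synthetic attributes; the data is random and only illustrates the API, it is not one of the benchmark datasets:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))  # synthetic tabular attributes

# Euclidean k-NN graph (used in the analysis) vs. cosine (as in [50]).
A_euc = kneighbors_graph(X, n_neighbors=5, metric="euclidean")
A_cos = kneighbors_graph(X, n_neighbors=5, metric="cosine")

# Cosine ignores vector magnitudes: positively rescaling each row
# leaves the cosine graph unchanged, while the Euclidean graph may change.
scales = rng.uniform(0.5, 2.0, size=(100, 1))
A_cos_scaled = kneighbors_graph(X * scales, n_neighbors=5, metric="cosine")
```

The last two lines demonstrate the magnitude-blindness argument: per-row positive rescaling does not alter any pairwise cosine similarity, so the cosine k-NN graph is identical before and after.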
--
We hope to have adequately addressed the points raised by the reviewer, as we understand their importance in the reviewer’s final decision, especially as regards the need for assumptions in a theoretical work. Thank you again for the opportunity to improve these aspects; we remain available should further clarifications be necessary.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I appreciate the efforts by the authors to address the reviewers' comments. I just have two additional remarks.
1. Gaussian mixtures are universal approximators only for distributions with densities and in the sense of the weak* topology. They cannot approximate distributions that have point masses. Moreover, even for a distribution with a density, one will not know how many mixture components to use without prior information. These are reasons why you do not see much more widespread use of Gaussian mixtures in applications.
2. I agree that theoretical understanding is very important. Therefore, I believe that this would have been a much better paper if it had focused more deeply on the theoretical development, with numerical experiments designed to test this theoretical understanding. For one, I appreciate the presented qualitative results more.
---
Reply to Comment 1.1.1:
Title: Response by Authors
Comment: We sincerely thank the reviewer for engaging in the discussion.
- We agree with the clarification: this is indeed what we (partially) wrote at lines 136-137. We should have been more careful in our rebuttal and we will further clarify this point in the paper.
- In Appendix A.4, Figure 4, we are additionally using the results of Prop 3.3-3.7 to approximate the CCNS of a synthetic dataset for varying values of $k$.
Also, the empirical results are what motivated us to work on the theoretical analysis, and a rigorous benchmarking in the fully-supervised setting is also lacking in the literature and is needed to set a baseline. The rationale behind the structuring of the paper was that presenting a mixed set of results could be positively received by a broader community.
As we value the reviewer’s feedback, we would like to do everything we can to improve the quality of the paper at this stage and possibly raise the score. Is there any additional analysis that we could provide to the reviewer to achieve this?
Thank you again for your time. We look forward to continuing the discussion. | Summary: This paper shows that using a k-NN graph in tabular data classification is not beneficial compared to an MLP. Specifically, based on Cross-Class Neighborhood Similarity, the authors theoretically derive the analytical forms of the squared error distance (SED) of the two approaches. In the experiments, they demonstrate that the SED of the method using a k-NN graph is smaller than the SED of the method using attributes directly. Furthermore, in the fully-supervised setting, they show that MLP exhibits performance comparable to methods using k-NN graphs.
Strengths: - They propose a theoretical framework to analyze the method using a k-NN graph.
- They show that the SED for using a k-NN graph is smaller than the SED for using attributes directly in synthetic data.
- They demonstrate that MLP exhibits performance comparable to the approaches using a k-NN graph in the fully-supervised setting.
Weaknesses: - [W1] As the authors mentioned, the assumptions seem strong. In particular, regarding assumption 1, attributes are unlikely to be independent in real-world data. Also, they fix the GNN architecture as GCN in Equation 9, but there exist many other effective GNNs.
- [W2] I understand that SED can be related to performance. However, the evidence seems insufficient to support the statement that methods with large SED would show better performance than methods with small SED.
- [W3] As far as I understand, the proposed framework gives only the analytic forms of SED, and the empirical evidence for the SED difference is only provided in the synthetic setting. This seems not enough to assert that using a k-NN graph offers no benefits compared to a structure-agnostic baseline.
- [W4] The analysis of the analytic forms is not provided in the main paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Following [W1], could you provide the correlation between attributes in the real-world datasets?
- Following [W2], would you provide evidence about the relation between SED and classification performance?
- Following [W3], could you provide more evidence that the SED for the methods using a k-NN graph is smaller than the SED of the methods using attributes directly?
- Following [W4], would you provide a detailed description of the analytic forms? Also, I'm curious about the conditions under which the SED of methods using a k-NN graph gets large, based on the analytic forms.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As the authors mentioned, the assumptions seem strong [W1].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing constructive feedback. Below, we address the reviewer's questions, each of which is related to a specific weakness.
**Q:** Following [W1], could you provide the correlation between attributes in the real-world datasets?
**A:** Please have a look at the PDF attached to the general response to reviewers, where we provide the Pearson correlation analysis for two datasets that have few attributes. However, we would like to remind the reviewer that *correlation does not imply causation*, hence we still cannot draw conclusions about the form of the true data generating distribution.
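For reference, the kind of correlation analysis mentioned above can be sketched in a few lines; the data below is synthetic, whereas the attached PDF uses the real datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))   # stand-in for a tabular dataset (n x d)
X[:, 1] += 0.8 * X[:, 0]            # inject one correlated attribute pair

R = np.corrcoef(X, rowvar=False)    # d x d Pearson correlation matrix
pairwise = R[np.triu_indices_from(R, k=1)]  # the 6 off-diagonal correlations
```

Note that even near-zero Pearson correlations would not certify independence, which is consistent with the caution above about the true data generating distribution.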
In our opinion, assumptions 1 and 3 are quite reasonable considering the flexibility of mixture models and the nearest neighbors construction (line 148). In addition, we tested the non-linearity assumption of Section 3.2 empirically and showed that it does not affect the corresponding analyses.
Assumption 2, instead, is widely used in the literature and makes graphical models particularly attractive from a theoretical perspective [*], although it is also known that it cannot cover all real-world cases.
[*] Sutton, Charles, and Andrew McCallum. "An introduction to conditional random fields." Foundations and Trends in Machine Learning 4.4 (2012): 267-373.
**Q:** Following [W2], would you provide evidence about the relation between SED and classification performance?
**A:** We can mention a few of the numerous works in the literature (see list below) where this relation, in the more general case of distance metrics and not just SED, is discussed and analyzed. Thank you for pointing this out, we will include a clarification with the references in the paper.
- Oneto, et al. "Do we really need a new theory to understand over-parameterization?." Neurocomputing 543 (2023).
- Biggio and Roli. "Wild patterns: Ten years after the rise of adversarial machine learning." Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2018.
- Elsayed, Gamaleldin, et al. "Large margin deep networks for classification." Advances in neural information processing systems 31 (2018).
- Vapnik,. "Statistical learning theory." Wiley series on adaptive and learning systems for signal processing, communications and control (1998).
- Oh, Il-Seok et al., "Analysis of class separation and combination of class-dependent features for handwriting recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 21.10 (1999).
**Q:** Following [W3], could you provide more evidences that the SED for the methods using a k-NN graph is smaller than the SED of the methods using attributes directly?
**A:** Because assumptions about data are inevitable, it is very hard to prove that a k-NN graph *never* helps. However, below we provide more theoretically inspired evidence as to why this should always be the case -- at least under our assumptions.
In the proof of Prop 3.8 (appendix), we observe that $p(h|c)$ is a mixture of Gaussians that differs from $p(x|c)$ only in the higher variance of the individual Gaussian components. In other words, the distribution $p(h|c)$ looks “flatter” than $p(x|c)$. At least in a one- or two-dimensional space, the flatter distributions $p(h|c)$ and $p(h|c')$ should therefore overlap more (and hence have a smaller SED) than the corresponding distributions in the original space. This intuition, despite its simplicity, is unfortunately hard to prove for technical reasons, for instance the dependence of the distance value on the specific parameters $\mu$ and $\Sigma$.
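While a general proof is hard, the intuition is easy to check numerically in one dimension: keeping the class means fixed and inflating the component variance (mimicking the "flatter" $p(h|c)$) shrinks the SED between the two class-conditional Gaussians. The parameters below are illustrative only:

```python
import numpy as np
from scipy.stats import norm

def sed_two_gaussians(mu1, mu2, var):
    # ∫ (N(x; mu1, var) - N(x; mu2, var))^2 dx, using
    # ∫ N(x; a, v1) N(x; b, v2) dx = N(a; b, v1 + v2)
    s = np.sqrt(2.0 * var)
    return 2.0 * (norm.pdf(0.0, scale=s) - norm.pdf(mu1 - mu2, scale=s))

sed_original = sed_two_gaussians(-1.0, 1.0, var=1.0)  # p(x|c) vs p(x|c')
sed_flatter = sed_two_gaussians(-1.0, 1.0, var=4.0)   # same means, inflated variance
# The flatter pair overlaps more, so sed_flatter < sed_original.
```

This is of course a single-component illustration of the variance-inflation effect, not a proof for general mixtures.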
We hope this addresses the reviewer's question, but please let us know if there is something else specific we should clarify.
**Q:** Following [W4], would you provide the detailed description about analytic forms? Also, I'm curious about the conditions when SED of methods using a k-NN graph gets large based on the analytic forms?
**A:** Due to space constraints, we could not include the analysis in the main paper and had to leave our considerations in the appendix (lines 659-672). We might, however, be able to include them in the camera-ready version.
This is a great question and related to our answer above. Without making assumptions over the values of the $\mu$ and $\Sigma$ of the true data generating distributions, one cannot easily see when the SED gets large. However, since the SED of the embedding space seems to be (empirically) always smaller than the SED of the original space and closely related in shape (theoretically), we can argue that the former should get larger as the latter increases. This happens when the class-specific distributions in the original space get farther away from each other, which makes the use of a k-NN graph even less useful because different classes become more separable.
--
We thank the reviewer for the questions, which touch on profound points of our work. We hope to have provided satisfactory answers and to have addressed the reviewer's doubts. We remain available to continue a constructive discussion should clarifications be needed.
---
Rebuttal Comment 1.1:
Title: Change my score to weak accept
Comment: I appreciate your detailed response. Most of my concerns are resolved. I change my score to weak accept in that the key message is significant and the logical reasoning seems correct.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We thank the reviewer for appreciating our efforts and for helping us to further improve the quality of our paper. | Summary: This submission provides a theoretical framework to explore the benefits of Nearest Neighbor (NN) graphs in the absence of a predefined graph structure. The Cross-Class Neighborhood Similarity (CCNS) is formally analyzed to evaluate the usefulness of structures within nearest neighbor graphs. The study also investigates the class separability induced by deep graph networks on a k-NN graph. Quantitative experiments conducted under full supervision demonstrate that employing a k-NN graph provides no advantages compared to a structure-agnostic baseline. Qualitative analyses indicate that the framework effectively estimates CCNS and suggests that k-NN graphs are not useful for such classification tasks. This highlights the need to explore alternative graph construction techniques.
Strengths: 1. The theoretical analysis is interesting.
2. The motivation is clear.
Weaknesses: To be honest, I am not familiar with this topic. I can only provide high-level comments as follows,
1. Equation (2) is not clear. For example, how is the equivalence between the two absolute values derived?
2. Figure 2 left is not very easy to understand. For example, what do the colors mean, and what is the meaning of each line?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read and assess our work. Below we provide an answer to the questions posed.
**Q1:** Equation (2) is not clear. For example, how to derive the equivalent between two absolute values?
**A:** The equivalence in Eq. 2 follows from the linearity of expectations and from the fact that $q_c(x)$ does not depend on $p(x'|c')$ and, vice versa, $q_{c'}(x')$ does not depend on $p(x|c)$, so each of the two resulting expectations simplifies to a single-variable one:
$\mathbb{E}_{p(x|c)p(x'|c')}[q_c(x) - q_{c'}(x')] = \mathbb{E}_{p(x|c)}[q_c(x)] - \mathbb{E}_{p(x'|c')}[q_{c'}(x')]$
Thanks for mentioning this, we will add it to the paper.
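A quick Monte Carlo sanity check of this factorization; the densities and the functions $q_c$, $q_{c'}$ below are arbitrary stand-ins, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)    # samples from a stand-in p(x|c)
xp = rng.normal(2.0, 1.5, 200_000)   # samples from a stand-in p(x'|c')
q_c = np.tanh                        # arbitrary q_c
q_cp = np.square                     # arbitrary q_{c'}

# The joint expectation splits because each term depends on one variable only.
joint = np.mean(q_c(x) - q_cp(xp))
split = np.mean(q_c(x)) - np.mean(q_cp(xp))
```

With paired samples the two estimates agree up to floating-point error, mirroring the exact identity under the product density.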
**Q2:** Figure 2 left is not very easy to be understood. For example, what are the colors mean, and what is the meaning of each line?
**A:** With Figure 2, we want to empirically test whether there exist tabular datasets (following our assumptions) where a nearest-neighbor structure can help to better discriminate the samples. Each curve/color corresponds to a different configuration of hyper-parameters (Table 4 appendix) for the data generating distribution of the dataset.
The value on the y-axis indicates whether the embedding space constructed using $k$-NN graphs and DGNs is more separable than the original space. To be more separable, the value should exceed 1, but we observe that the curves never cross this value and reach an asymptote as we increase $k$. Therefore, $k$-NN graphs do not seem to help when the data generating distribution behaves according to our theory.
--
We hope to have adequately addressed the concerns of the reviewer. If not, please let us know and we will do our best to further clarify these points.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response. My concerns are solved.
Since I am not familiar with this topic, I will follow the other reviewers' final judgment.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer CVih
Comment: Thank you for your message; we understand the reviewer's position.
On a separate note, since the rebuttal is closing soon, we were hoping you could still consider adjusting the score as we have addressed all the reviewer's concerns. Thank you again for your time. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive feedback. We did our best to answer all questions and we look forward to a fruitful discussion.
Attached you can find a PDF with additional results to answer the reviewers' questions. Please note that the Table is only partially complete because some experiments are still running; we will do our best to complete it or to update you during the discussion period.
Pdf: /pdf/b89533d32d6a7554302f8d2ed04177dc5afe4e63.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The main contributions of this paper are as follows:
1. The authors of the paper study Cross-Class Neighborhood Similarity (CCNS) for nearest neighbor graphs and provide its first lower bound.
2. They apply a Deep Graph Network (DGN) to an artificial k-NN graph and study how it affects the class separability of the input data.
3. They carry out a robust comparison between structure-agnostic and structure-aware baselines on a set of 11 datasets, and their empirical results agree with their theoretical findings.
4. Qualitative analyses based on the theory further suggest that using the k-NN graph might not be advisable.
Overall, the authors introduce a theoretical framework to understand the benefits of Nearest Neighbor (NN) graphs when a graph structure is missing and provide both theoretical and empirical evidence to support their conclusions. They try to raise awareness about the potential limitations of k-NN graphs for node classification tasks and advocate for the study of alternative graph construction techniques.
Strengths: 1. The authors introduce a theoretical framework to understand the benefits of Nearest Neighbor (NN) graphs when a graph structure is missing.
2. They provide a formal analysis of the Cross-Class Neighborhood Similarity (CCNS) in the context of nearest neighbor graphs and provide an analytical computation of the CCNS for Nearest Neighbor Graphs.
3. They carry out a robust comparison between structure-agnostic and structure-aware baselines on a set of 11 datasets that is in agreement with their theoretical results.
Overall, this paper provides both theoretical and empirical evidence to support its conclusions and makes a contribution to the understanding of the benefits and limitations of using k-NN graphs for node classification tasks.
Weaknesses: As the authors mentioned in the paper, they assume that the attributes are conditionally independent when conditioned on a specific mixture and class. These assumptions may not hold for all datasets and could limit the generalizability of their results.
Additionally, the authors’ analysis is focused on the fully-supervised setting, where all training labels are available. It is not clear how their conclusions would apply to other settings, such as the semi-supervised setting where only a subset of training labels is available.
Overall, while this paper provides valuable insights into the benefits and limitations of using k-NN graphs for node classification tasks, its conclusions are based on specific assumptions and conditions and may not apply to all situations.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How does your work relate to other dataset formats more than tabular data?
2. How would your conclusions apply to other settings, such as the semi-supervised setting where only a subset of training labels is available?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the merits and strengths of our work. Below, we will address the comments and questions that have been raised.
**On the independence assumption**
The conditional independence assumption has been widely used in the graphical models literature because it is particularly attractive from a theoretical perspective [*], although it cannot cover all real-world cases.
We also want to remark that this assumption is only needed in Section 3.1 to approximate the CCNS, and it is not required in Section 3.2 which deals with the more practical problem of class separability induced by DGNs.
Assumptions 1 and 3 are still quite reasonable considering the flexibility of mixture models and the nearest neighbors construction (line 148). In addition, we tested the non-linearity assumption of Section 3.2 empirically and showed that it does not affect the corresponding analyses.
Overall, the results we have obtained help us better understand the observed behaviour of models under specific conditions, but they also pave the way for other researchers to extend these results to more general scenarios, which is usually the case for theoretical works.
[*] Sutton, Charles, and Andrew McCallum. "An introduction to conditional random fields." Foundations and Trends in Machine Learning 4.4 (2012): 267-373.
**On the fully supervised setting**
Our results do not apply to the semi-supervised setting: it has already been shown in the literature (lines 31-48) that a k-NN graph can help in the semi-supervised setting, possibly due to a regularizing effect that prevents overfitting of the few labels.
For this reason, we study the only alternative scenario where the above conclusions have not been comprehensively tested yet, and our theory suggests an opposite result in the fully supervised context.
**Q:** How does your work relate to other dataset formats more than tabular data?
**A:** Our work is specific to tabular data tasks that are transformed into node classification tasks, as this is becoming an increasingly common approach.
**Q:** How would your conclusions apply to other settings, such as the semi-supervised setting where only a subset of training labels is available?
**A:** Please refer to our considerations above.
--
We hope to have adequately addressed the reviewer’s doubts. Please let us know if there is something we should further clarify.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your positive and valuable feedback.
We hope our responses addressed your concerns. As the rebuttal is closing soon, we would appreciate if the reviewer could consider raising the score accordingly. We remain available for further requests of clarification should they be needed. Your feedback is greatly appreciated. | Summary: This paper proposes to investigate the practice of using nearest neighbour graphs built upon tabular data in order to classify samples (then relying on a node classification model). The authors note that this practice has been shown to be relevant in the semi-supervised case, but not with full supervision - and that it often exploits the common idea that homophily is a desired graph property for node classification models. However, recent work shows that the difference between the distributions of labels of neighbours for nodes of separate classes (measured by the Cross-Class Neighborhood Similarity, CCNS) is what matters for their efficiency. This brings the authors to propose (1) a formal model for computing an approximation of these distributions under several assumptions, and (2) under the same assumptions, a study of the separability of classes induced by a simple node representation model relying on message passing on the NN-graph.
The authors then propose two sets of experiments to draw conclusions: (1) a study of the results of learning classification models directly on tabular data or on NN-graphs for 11 tasks; (2) qualitative experiments relying on their theoretical work, verifying whether, for a toy task of binary classification based on simple distributions built following their assumptions, (a) there exists a configuration where classes are more separable using a NN-graph structure, and (b) the quantities obtained via their approximation of CCNS concur with the conclusion of (a).
Strengths: - This paper is very well written, and easy to follow, despite the difficulty of the subject.
- This paper demonstrates experimentally that using nearest neighbour graphs built upon tabular data in order to classify samples does not help performance.
- This paper provides well-motivated theoretical arguments showing that a model built on message-passing for a k-NN graph will not improve class separability.
- Finally, this paper provides a theoretical framework (under assumptions) to build a tool, allowing to approximate the CCNS (an indicator of potential success of node classification models).
Weaknesses: - While the paper clearly discusses the limitations brought by the assumptions it makes, it would be interesting and useful to explore how significant they are.
- The utility of the theoretical developments for the experimental demonstration is not made completely clear.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - The purpose of the CCNS approximation proposed is a little unclear. Is it supposed to be a practical tool for testing the relevance of the k-NN graph for a classification task ?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer found our work well presented and motivated. Below, we comment on the points raised by the reviewer, hoping that it will better clarify the scope of this work.
**Q:** While the paper clearly discusses the limitations brought by the assumptions it makes, it would be interesting and useful to explore how significant they are.
**A:** Verifying how impactful the assumptions are is a challenge common to all theoretical machine learning problems like this one: we do not have access to the true data generating distribution of real-world datasets.
For instance, analyses that focus on variable correlations cannot ensure that there is a real causation between variables in the dataset, so the assumption of independence cannot be easily verified.
However, we have at least empirically tested the impact of non-linearities on the results by comparing sDGN with GIN and GCN, and we have observed no influence on performance. In addition, the use of a mixture of Gaussians in the data generating model allows us to represent any distribution with a sufficiently high number of mixture components, so it is not a limiting assumption.
**Q:** The utility of the theoretical developments for the experimental demonstration is not made completely clear.
**A:** Improving the theoretical understanding of a phenomenon can influence the discovery of new practical methods, but not all theoretical insights have an easy and straightforward application. The theoretical developments of Section 3.1 are the first steps of a longer-term research plan on how to build *useful* synthetic graphs from tabular data, and to the best of our knowledge we are the first to do this from a theoretical perspective.
In contrast, the theoretical studies of Section 3.2 are more directly related to the evaluation of various methods under the k-NN graph construction, which is why we could provide a thorough empirical analysis on that end.
--
We hope to have addressed the doubts of the reviewer, but please let us know if we should further clarify some aspects of our work. Thank you again for the constructive feedback. | null | null | null | null |
Federated Virtual Learning on Heterogeneous Data with Local-global Distillation | Reject | Summary: This paper proposes a method called FedLGD that utilizes distilled virtual data on both clients and the server to train FL models. To address the synchronization issue and class imbalance, the authors use iterative distribution matching to distill the same amount of local virtual data on the clients for local model training, thereby improving the efficiency and scalability of FL. The authors also reveal that training on local virtual data exacerbates the heterogeneity issue in FL. To address this problem, they use federated gradient matching to distill global data on the server and add a regularization term to the local loss function to promote the similarity between local and global features. They evaluate the proposed FedLGD method on benchmark and real datasets and show that FedLGD outperforms existing heterogeneous FL algorithms.
Strengths: 1. The authors use visualization to reveal the limitation of local data distillation in federated virtual learning, which makes the motivation of the proposed method clear.
2. The proposed method preserves local data privacy by leveraging averaged local gradients to distil global virtual data.
3. The experiment results validate performance improvement and privacy protection.
Weaknesses: 1. The initialization for data distillation requires each client to calculate the data statistics and the server to aggregate these statistics, which still raises privacy concerns since the statistics contain some private information. How about using random initialization or other strategies? The authors need to justify it.
2. Compared with VHL that uses untrained StyleGAN without further updates, the proposed FedLGD method needs to update the global virtual data iteratively. Therefore, it is not surprising that FedLGD outperforms VHL. If the StyleGAN can be updated the same number of times as FedLGD, does FedLGD still outperform it? This requires justification or experimental validation.
3. The structure of the proposed method is not clear enough, which makes it difficult to follow. The authors first present the overall pipeline and then describe each component. However, the connection between the components and the overall pipeline is not clear. This requires significant revision.
4. The presentation quality is not satisfactory. There are too many typos and grammatical errors. Some notations are unclear, e.g., $i$ represents both the data index and client index; the subscript $t$ disappears in many places; $\tau$ is a set but denoted as a scalar in the caption of Figure 2.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. For the selected rounds, is the regularization term $L_{con}$ needed for the local model update? As shown in Algorithm 1, the term disappears in these rounds.
2. In Table 1, why is VHL better than the proposed method on SVHN with IPC=10?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. As pointed out by the authors, data distillation incurs additional communication and computation cost. Further investigation is required to enhance the efficiency.
2. The proposed method performs well but lacks theoretical analysis to support its performance improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Privacy concern on sharing the local dataset statistics
Thank you for the comments. However, we would like to point out that FedLGD aims to reduce local privacy leakage *by training and sharing gradients w.r.t. local virtual data*. The global virtual data is designed for *regularizing local training on the feature extractors*, so that we can handle the feature heterogeneity issue among clients. We had reported using different initialization strategies for initializing both local and global virtual data on DIGITS datasets in Appendix C.7 (Table 9). Furthermore, we provide an alternative initialization that does not require sharing local data mean and variance for global data distillation in our rebuttal and report the consistent good results in rebuttal PDF (Table 1). The performances are almost identical for global virtual data to be initialized with either random noise or local statistics, so sharing local statistics to the server is in fact optional. We will add the discussion to the Appendix of our revised version.
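The two initialization options discussed above can be sketched as follows. This is a hypothetical illustration in our own notation, not the paper's code: option (a) draws Gaussian noise matched to the shared local statistics, while option (b) draws pure standard-normal noise and requires sharing no statistics at all.

```python
import random

def init_global_virtual(dim, local_mean=None, local_std=None, seed=0):
    """Initialize a flat global virtual data vector of length `dim`."""
    rng = random.Random(seed)
    if local_mean is not None and local_std is not None:
        # option (a): Gaussian noise using the shared local data mean/std
        return [rng.gauss(local_mean, local_std) for _ in range(dim)]
    # option (b): standard-normal noise, no local statistics needed
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

stats_init = init_global_virtual(4, local_mean=0.5, local_std=0.1)
noise_init = init_global_virtual(4)
assert len(stats_init) == 4 and len(noise_init) == 4
```

Either initialization is subsequently refined by the iterative distillation steps, which is consistent with the rebuttal's observation that the two options perform almost identically.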
We have provided more comprehensive analysis on the positive impacts of using FedLGD for privacy preservation in the **Justification of privacy** section of the general response.
> Justification on iteratively updating global virtual data v.s. StyleGAN
We would like to point out that training a StyleGAN to generate global virtual data in FL is a challenging task. First, it requires either a large in-domain global dataset or an algorithm that incorporates all clients' data; the latter is itself an open research topic that is beyond the scope of this paper. Second, updating a GAN requires clients and the server to host an additional StyleGAN model, whereas we rely on classification models only. Thus, enabling the global virtual data to be updated through global iterations is a key contribution of FedLGD.
> Explanation of the structure of Sec. 3
Thanks for the great comments on paper writing. We tried to put the notations in the same paragraph for easy reference and provided a notation table in Appendix A in our original submission. For the structure of Sec. 3, we had several internal discussions and carefully chose the current version for presentation. For example, we wanted to introduce the fundamental part of federated virtual learning, so we began with the introduction of local virtual data. Then, we introduced the heterogeneity problem caused by FVL and brought in the concept of global virtual data to formulate the local training objective of FedLGD (Eq. 3). In the last sentence of the paragraph, we pointed out the question we wanted to solve with our novel design of federated gradient matching ("At this point, a critical problem arises: What global virtual data shall we use?"). We will carefully read through and revise the paragraph to make it easier to follow.
> Typos and grammar errors
Thank you for pointing out our typos. We have corrected the caption of Fig. 2. We agree that our paper has a lot of notation, since we need to cover a variety of variables to explain our pipeline. We tried our best to carefully assign consistent notation to each variable and provided a notation table in Appendix A. We will perform additional rounds of proofreading to revise our notation.
> Explanation of $L_{Con}$ in Algorithm 1
Thank you for pointing out the subtle design of FedLGD. No, we would not add the regularization term for local training in the selected rounds because we required the averaged gradients w.r.t. *only* the cross-entropy loss from clients. Therefore, we disabled the regularization term in the selected iterations for synthesizing global virtual data.
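The round-dependent rule explained above can be sketched as a small conditional (function names are ours, not the paper's code): in rounds selected for global virtual data distillation, the contrastive regularizer is dropped so clients send gradients w.r.t. the cross-entropy loss only.

```python
def local_loss(ce_loss, con_loss, lam, selected_round):
    """Scalar training loss for one local step (illustrative sketch)."""
    if selected_round:
        # selected rounds: only cross-entropy, so the server receives
        # averaged CE gradients for global virtual data synthesis
        return ce_loss
    # regular rounds: cross-entropy plus the contrastive regularizer (Eq. 3)
    return ce_loss + lam * con_loss

assert local_loss(1.0, 0.5, 0.1, selected_round=True) == 1.0
assert abs(local_loss(1.0, 0.5, 0.1, selected_round=False) - 1.05) < 1e-9
```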
> Experimental results in Table 1
Thanks for your careful review. In the majority of cases, and as substantiated by the experimental results, our method demonstrates superior performance to VHL, with a significant margin exceeding 2%. Indeed, in two specific instances identified by reviewer 5yts, VHL exhibits better results, albeit by relatively narrow margins, e.g., SVHN+ConvNet (0.4%). This discrepancy may be attributed to the nature of SVHN, which consists of three-channel (RGB) images. As a result, the anchors generated by the StyleGAN in VHL might more effectively capture the RGB information. Yet, it is important to emphasize that under the same conditions, our method exhibits much better results compared to VHL for the other clients.
> Theoretical analysis of FedLGD
Please see the **Theoretical analysis of FedLGD** section in the general response.
[r1] Dataset distillation: A comprehensive review.
---
Rebuttal Comment 1.1:
Comment: The authors have resolved some of my main concerns (especially the privacy and initialization), and thus I have increased my rating.
---
Reply to Comment 1.1.1:
Title: Appreciate your feedback on our responses
Comment: We are pleased to learn that our explanations and clarifications have been positively acknowledged and that your major concerns regarding privacy and initialization are well addressed in our replies. Your comments and feedback have been immensely valuable. We would like to express our sincere appreciation once more for your time and dedication in reviewing our work.
Strengths: 1. The problem of data heterogeneity studied in this paper is important for applying federated learning in real-world scenarios.
2. The idea of this paper to solve the problem of data heterogeneity in federated learning through the dataset distillation method is novel.
3. The authors perform various experiments to analyze the proposed method
Weaknesses: 1. Some symbols are written differently. The authors should unify these symbols. In section 3.1, local virtual data is written as $\widetilde{D}_{i}$, but in sections 3.2 and 3.3 it is written as $\widetilde{D}^{c}$, and in Figure 2 as $\widetilde{D}_{t}^{c_{i}}$. In Eq. (3), $L_{Con}$ is a function of $\widetilde{D}^{g}$ and $\widetilde{D}^{c}$, but they do not appear in Eq. (4). The authors should give more details about Eq. (3) and (4).
2. Other approaches to address heterogeneity include personalized federated learning, e.g., FedAMP [1], Ditto [2], and KT-pFL [3]. However, these are not mentioned in the related work, and there is no experimental comparison with them.
3. The legend and curve in Figure 3a do not match. In Table 1, ResNet18 generally performs worse than CNN. The authors mention that overfitting may be happening (at line 275). The authors should increase the dataset size or use a smaller model to make the results more convincing.
4. According to the results in Figure 4, the visualization of FedLGD does not define the boundaries of each class well. Although Fig. 4 can show that FedLGD mitigates the data drift between the two clients, the degree of clustering of each class looks reduced. The authors should analyze this further.
[1] Huang, Yutao, et al. "Personalized cross-silo federated learning on non-iid data." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021.
[2] Li, Tian, et al. "Ditto: Fair and robust federated learning through personalization." International Conference on Machine Learning. PMLR, 2021.
[3] Zhang, Jie, et al. "Parameterized knowledge transfer for personalized federated learning." Advances in Neural Information Processing Systems 34 (2021): 10092-10104.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the proposed approach be generalized to other data types such as text and corresponding text models?
2. Can the proposed method be generalized to more client scenarios which is more practical in reality? Is the proposed method theoretically guaranteed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Explanation of the notations and symbols
In Sec. 3.1, we started with the classical FL setting and derived the FVL setting, so we used $\widetilde{D}_{i}$ to represent each client's virtual data. Beginning in Sec. 3.2, we introduced the global virtual data, so we used $\widetilde{D}^c_i$ and $\widetilde{D}^g_i$ instead. We thank the reviewer for the careful review of Fig. 2, and we will correct the notation to $\widetilde{D}^c$.
Regarding Eq. 3 & 4, we explained how $L_{Con}$ is related to $\widetilde{D}^g$ and $\widetilde{D}^c$ in lines 201-202. We apologize for the confusion and will add a detailed explanation of Eq. 3.
> Justification on the differences between FedLGD and PFL
Thanks for pointing out the PFL literature. This work focuses on the *classical FL setting*, where the objective is to optimize a single global model that generalizes well to heterogeneous clients, as stated in lines 131-132. Specifically, we aim to evaluate a single global model on the global distribution (a joint of local distributions), a concept often referred to as out-of-distribution (OOD) testing in the literature, such as [r1]. Conversely, PFL seeks to test various local models on their individual testing sets, an approach known as in-distribution testing. Given the differing goals and evaluation criteria, we chose not to include PFL in our comparison. Nonetheless, in response to your suggestion, we have added a discussion on PFL to our related work section.
[r1] Motley: Benchmarking heterogeneity and personalization in federated learning.
> Explanation of overfitting and Fig. 3a
We would like to confirm that the statements regarding the figure are correct. Line 275 explains why ConvNet performs better than ResNet. By "overfitting" we mean the phenomenon that when we use a small set of distilled virtual images as training data for heavy architectures, the model tends to overfit (please see Fig. 1 in the rebuttal PDF). As the reviewer suggested, we indeed showed that using more virtual data (larger IPC) and a smaller model (such as ConvNet) resulted in higher accuracy in Table 1. Notably, the overfitting observation is consistent with other data distillation literature [*45*], as stated in line 274.
We want to emphasize that the visualization in Fig. 3a serves a totally different purpose: showing that our proposed contrastive regularization loss performs better than MMD. There, we plotted the averaged accuracies of DIGITS and CIFAR10C separately for the given IPC and model, while varying the $\lambda$ of the different choices of regularization term.
> Explanation of Fig. 4
We plotted the tSNE figure on the *feature layer that the distribution loss was applied to*, in order to show whether our regularization term could group features from different clients together when they belong to the same class. Thus, the main purpose is not to obtain a clear boundary in this feature space but to check the grouping effect. We add a further layer for classification, which helps increase separability.
> The generalization of FedLGD on text data and models
We believe it is promising to generalize our method to text data and NLP models, given that recent studies have explored data distillation on text data [r1, r2, r3]. However, the data distillation strategies for text data and NLP models differ slightly from those for image data and CNNs, so we believe modifications are needed in the virtual data generation procedure. We believe our ideas and successful demonstration on heterogeneous images can inspire the community to explore the NLP area in the near future.
[r1] Data distillation for text classification.
[r2] Dataset Distillation with Attention Labels for Fine-tuning BERT.
[r3] Soft-label dataset distillation and text dataset distillation.
> Generalization of FedLGD on more client scenarios
We thank the reviewer for the question. We conducted experiments on the CIFAR10C dataset to show that FedLGD consistently performs well with 57 heterogeneous clients. In general, the computation cost and communication overhead do not increase as the number of clients increases, so we believe our proposed method scales with the number of clients.
> Theoretical analysis of FedLGD
Please see the **Theoretical analysis of FedLGD** section in the general response.
[r1] Dataset distillation: A comprehensive review.
---
Rebuttal Comment 1.1:
Title: Thank you for your comments and look forward to further discussions
Comment: Dear reviewer zf9v,
We appreciate the feedback and suggestions you've provided for our paper. We've put in our utmost effort to address these points thoughtfully and have included further details on privacy and theoretical analysis in the overall response. We sincerely hope our responses have resolved your concerns, and we remain fully available to answer any additional inquiries that may arise during the discussion phase.
Best Regards,
FedLGD authors | Summary: To solve the challenges of synchronization, efficiency, and privacy, this paper presents a local-global distillation mechanism for FL (FedLGD). In FedLGD, an iterative distribution matching scheme is proposed to distill global virtual data to alleviate the heterogeneous problem. Experiments have shown superiority of FedLGD compared with existing FL methods.
Strengths: 1. The whole pipeline of FedLGD is well depicted in Figure 2. Each component involved in the pipeline is carefully designed.
2. It is an interesting idea to solve the existing FL challenges from the virtual learning perspective. This can inspire future studies in this direction.
3. Experimental results look solid with sufficient implementation details.
Weaknesses: 1. It seems that only feature heterogeneity is considered in this work. How the proposed method performs on different heterogeneous cases should be discussed.
2. The definition of the small distilled dataset is not very clear, which can affect the reader's understanding of the motivation and the detailed technical parts.
3. Privacy concern. Since there are image-level data transferred between the server and clients, it is better to further discuss the potential privacy-preserving risks.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: What is the detailed setting of Figure 1? Is it reasonable to use the size of the overlapped area to represent the level of heterogeneity? Is there any theoretical basis? Also, will the results be different when using different synthetic schemes?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes. The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Justification of the client heterogeneity considered in FedLGD
We studied both *label and feature shift* among clients, as stated in the last paragraph of our Introduction (lines 64-76). Intuitively, generating the same number of Images Per Class (IPC) can balance label shift. In particular, we showed FedLGD can handle label shift in our CIFAR10C experiments, where the local datasets are sampled with a Dirichlet distribution. The effectiveness of FedLGD in mitigating feature shift was shown in the experiments on the DIGITS, CIFAR10C, and RETINA datasets.
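The balancing intuition above can be made concrete with a toy sketch (all names are illustrative, not the paper's code): distilling a fixed IPC per class yields label-balanced local virtual datasets even when a client's real label distribution is heavily skewed.

```python
def virtual_label_counts(real_counts, ipc):
    """Every class present at the client gets exactly `ipc` virtual images."""
    return {label: ipc for label in real_counts}

skewed = {0: 900, 1: 50, 2: 5}            # heavily imbalanced real data
balanced = virtual_label_counts(skewed, 10)
assert balanced == {0: 10, 1: 10, 2: 10}  # label shift removed by construction
```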
> Explanation of the definition of small distilled dataset
We explained our FVL setting in lines 134-140, where much smaller virtual data is used for local training in FL for synchronization and efficiency of the FL system, following the common purpose of the data distillation literature, including works in FL [*10, 16, 40*]. Here, 'small' refers to a comparison with the size of the local real datasets (line 138), and we empirically followed the original data distillation papers in setting the number of images per class to 10 and 50 [*45, 46*]. The word 'distilled' refers to the *method* for synthesizing local and global virtual data, as described in Sec. 3. We refer to 'distilled data' as 'virtual data' in alignment with a closely related work, VHL.
> Justification of the privacy concern on sharing global virtual data
Thanks for the great suggestion. We indeed carefully thought about privacy concerns in our original submission.
We first want to clarify that the 'image-level' information pointed out by the reviewer is actually inverted from the shared global model update, which is accessible to every client by default in classical FL such as FedAvg. We do NOT directly share local private images or locally generated images as other literature does, e.g., CVPR 2023 [*40*]. Instead, we share the local data mean and variance with the global server for global data distillation. Except for the local data statistics, we share the *SAME* level of information with the server as in FedAvg. We also showed that FedLGD preserves higher privacy against a Membership Inference Attack compared to FedAvg in Appendix D.6. In addition, we provide an alternative initialization that does not require sharing the local data mean and variance for global data distillation in our rebuttal and report consistently good results in the rebuttal PDF (Table 1). Lastly, in line 345, we stated that "Our future direction will be investigating privacy-preserving data generation" to further enhance FedLGD's privacy preservation.
> Explanation of the feature heterogeneity shown in Fig. 1
We directly fed the randomly sampled raw data from the two datasets into the tSNE plot in Fig. 1. We followed the visualization of VHL to inspect the data heterogeneity via a tSNE plot [*37*] (as depicted in lines 39-42 of our original submission). tSNE is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map [r1]. It can be observed that after distillation, the data distribution of client 0 became more clustered and separated from that of client 1. In addition, we statistically show the distance between real and virtual data in the rebuttal PDF (Table 2).
[r1] Visualizing data using t-SNE.
---
Rebuttal Comment 1.1:
Title: Thank you for your comments and look forward to further discussions
Comment: Dear reviewer DZFG,
We appreciate the feedback and suggestions you've provided for our paper. We've put in our utmost effort to address these points thoughtfully and have included further details on privacy and theoretical analysis in the overall response. We sincerely hope our responses have resolved your concerns, and we remain fully available to answer any additional inquiries that may arise during the discussion phase.
Best Regards,
FedLGD authors
---
Rebuttal 2:
Title: Appreciate your feedback on our responses
Comment: Dear Reviewer DZFG,
As the Discussion stage is about to end, we sincerely look forward to knowing whether our responses have addressed your initial questions. We are more than happy to answer any remaining concerns and greatly appreciate your input and feedback. Thank you!
Best Regards, FedLGD Authors | Summary: This paper introduces an approach on Federated learning using dataset distillation techniques
Strengths: 1. The idea of using dataset distillation for FL is interesting
2. The solution is reasonable
3. The experimental results show the effectiveness of the proposed approach
Weaknesses: 1. In a few equations, the details are not provided, e.g., $L_{CE}$ in Eq. 3 and $Dist$ in Eq. 5. The paper should be self-contained.
2. The technical contribution is low
3. In the experimental results (Table 1), can you highlight both the first and second place? In MNIST-M, the winner should be VHL/R: 85.7 > FedLGD's 85.2.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you try to provide details on those equations, see above?
2. I didn't see how the sizes of different clients play a role in the global data distillation. Can you provide some details?
Also, what is the overall objective functions for both local training and global training? The current equations are not very clear on this.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of using virtual data should be discussed. Are there any drawbacks to using fake data instead of real data?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Explanation of Eq. 3 and Eq. 5
We stated that $L_{CE}$ is the cross-entropy loss in line 197 and that $L_{Dist}$ is defined the same as in [*46*] in line 219. We apologize for the confusion; following your suggestion, we have added the detailed equations for $L_{CE}$ and $L_{Dist}$ in our revised version.
> Justification of our technical contribution
We would like to take the chance to re-emphasize our technical contribution in the following:
* We applied data distillation to the FL pipeline to handle label shift and asynchronization problems in FL, and we were the first to point out that data distillation may amplify the data heterogeneity among clients.
* We proposed Iterative Distribution Matching to iteratively inpaint the global information into local virtual data using the up-to-date global model.
* We proposed Federated Gradient Matching to efficiently incorporate the bi-level optimization problem of data distillation using gradient matching into the classical FL pipeline. The virtual global data could be used to regularize feature heterogeneity among clients.
* Through comprehensive experiments on benchmark and real-world datasets, we showed that FedLGD outperformed existing state-of-the-art FL algorithms.
> Experimental results in Table 1
Thanks for your careful review. In the majority of cases, and as substantiated by the experimental results, our method demonstrates superior performance to VHL, with a significant margin exceeding 2%. Indeed, in two specific instances identified by reviewer 5yts, VHL exhibits better results, albeit by relatively narrow margins: SVHN+ConvNet (0.4%) and MNIST-M+ResNet (0.5%). This discrepancy may be attributed to the nature of SVHN and MNIST-M, which consist of three-channel (RGB) images; as a result, the anchors generated by the StyleGAN in VHL might more effectively capture the RGB information. Yet, it is important to emphasize that under the same conditions, our method exhibits much better results compared to VHL for all other clients. Following the valuable suggestion, we have updated the bold highlighting in Table 1.
> Explanation of the effect of the size of different clients on global data distillation
We thank the reviewer for the clarification question. To make sure we answer your first question correctly, we assume two possible meanings of *size*: 1) the number of local real data and 2) the number of selected clients in each round of training. For 1), since we distill the same amount of local virtual data for each client, namely $|\tilde{D}^c_i| = |\tilde{D}^c_j|$ for $i, j \in [N]$ (see lines 162-163), the number of local real data does not directly affect global data distillation. For 2), we investigated different numbers of selected clients in each round of training, with the results presented in Table 2. We observed consistent performance w.r.t. the number of selected clients, which indicates stable global data distillation, since we use *averaged* gradients for updating the global virtual data (Eq. 5).
Regarding the reviewer’s second question, we would like to first re-state our overall training pipeline:
1. Initialize local virtual data.
2. If a selected iteration:
   - Clients: update local virtual data and model;
   - Server: update global virtual data and aggregate model.
3. Else:
   - Clients: update model;
   - Server: aggregate model.
We would like to clarify that we have two types of local training: 1) local model updating and 2) local virtual data distillation. The objective of 1) is depicted in Eq. 3, where we use cross-entropy loss and contrastive loss to train local models. The objective of 2) is depicted in Eq. 2, where we update the local virtual data with the MMD loss and the up-to-date global model.
Note that we do not train the global model with gradient descent (instead we perform FedAvg aggregation), so we interpret "global training" in the question as global data distillation. We update the global virtual data with gradient matching as depicted in Eq. 5, where we force the gradient on the global virtual data to match the averaged local gradients.
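The server-side step described above can be sketched as follows. This is a minimal, hypothetical illustration in our own notation: the server averages the clients' cross-entropy gradients and measures how far the gradient on the global virtual data is from that average (the actual $Dist(\cdot,\cdot)$ in Eq. 5 may be a different distance).

```python
def average_gradients(client_grads):
    """Element-wise average of per-client gradient vectors (lists of floats)."""
    n = len(client_grads)
    return [sum(g[i] for g in client_grads) / n
            for i in range(len(client_grads[0]))]

def matching_loss(global_grad, avg_grad):
    """Squared L2 distance between the global-data gradient and the average."""
    return sum((a - b) ** 2 for a, b in zip(global_grad, avg_grad))

avg = average_gradients([[1.0, 2.0], [3.0, 4.0]])
assert avg == [2.0, 3.0]
assert matching_loss([2.0, 3.0], avg) == 0.0   # perfect match
assert matching_loss([0.0, 3.0], avg) == 4.0   # mismatch penalized
```

In the actual method, this loss would be minimized w.r.t. the global virtual data (not the model), which is what "forcing the global gradient to match the averaged local gradients" means.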
> Explanation on the limitations of FedLGD
In the conclusion of our original submission, we pointed out the additional communication overhead and computational cost of data distillation. Note that if we train FL for many iterations, FedLGD can still be more efficient than training on real data, as we analyze in Appendix C.3. Additionally, as in the existing data distillation literature, models trained on virtual data may perform worse than those trained on real data. In FedLGD, there is a trade-off between this limitation and the benefits of using virtual data in FL, including improved model training efficiency, FL synchronization, and a stronger defence against inversion of the model parameters.
---
Rebuttal Comment 1.1:
Title: Thank you for your comments and look forward to further discussions
Comment: Dear reviewer 5ytS,
We appreciate the feedback and suggestions you've provided for our paper. We've put in our utmost effort to address these points thoughtfully and have included further details on privacy and theoretical analysis in the overall response. We sincerely hope our responses have resolved your concerns, and we remain fully available to answer any additional inquiries that may arise during the discussion phase.
Best Regards,
FedLGD authors
---
Rebuttal Comment 1.2:
Title: thanks for the clarification
Comment: The authors have addressed some of my concerns; I will increase my score. | Rebuttal 1:
Rebuttal: > Appreciation to the reviewers
We thank the reviewers for the positive comments about our FedLGD design (5ytS, DZFG, zf9v), FedLGD's effectiveness through comprehensive experiments (5ytS, DZFG, zf9v, TJpJ), the presentation of FedLGD (DZFG), and the privacy-preservation mechanism of FedLGD (TJpJ).
We would like to take the chance to re-emphasize our technical contribution in the following:
* In this work, we focused on Federated Virtual Learning (FVL), a new FL framework and a promising approach that applies data distillation-based synthetic data to the FL pipeline to handle label shift and asynchronization problems in FL. We proposed FedLGD, which incorporates iterative local and global data distillation to achieve good performance with limited amounts of distilled virtual data. We proposed Iterative Distribution Matching to inpaint the global information into local virtual data using the up-to-date global model. Through local virtual data distillation, *class-balanced* synthetic data are generated to facilitate FL training.
* We proposed Federated Gradient Matching to efficiently incorporate the bi-level optimization problem of data distillation using gradient matching into the classical FL pipeline. The virtual global data could be used to *regularize feature heterogeneity* among clients.
* Through comprehensive experiments on benchmark and real-world datasets under various settings, we showed that FedLGD outperforms the existing state-of-the-art FL algorithms.
We have carefully addressed the comments from the reviewers. We would sincerely appreciate it if the reviewers could reconsider their ratings of our paper. We are delighted to discuss further with you if any questions arise during the discussion phase. Thank you again for your invaluable time and suggestions.
> Justification of privacy
First, we would like to point out that FedLGD aims to reduce local privacy leakage *by training and sharing gradients w.r.t. local virtual data*. The global virtual data is designed for *regularizing local training on the feature extractors*, so that we can handle the feature heterogeneity issue among clients.
Second, we want to clarify that the shared 'image-level' information is actually inverted from the shared global model update, which is accessible to every client by default in classical FL such as FedAvg. We do NOT directly share local private images or locally generated images as other literature does [*40*]. Instead, we share the local data mean and variance with the global server for global data distillation. Except for the local data statistics, we share the *SAME* level of information with the server as in FedAvg. We also showed that FedLGD preserves higher privacy against a Membership Inference Attack compared to FedAvg in Appendix D.6.
In addition, we provide an alternative initialization that does not require sharing local data mean and variance for global data distillation in our rebuttal and report the consistent good results in rebuttal PDF (Table 1).
Lastly, in line 345, we stated that "Our future direction will be investigating privacy-preserving data generation" to further enhance FedLGD's privacy preservation.
> Theoretical analysis of FedLGD
Yes, we can show theoretical insights for FedLGD, which mainly rely on existing tools [*37*, r1].
First, the generalization performance of a model $f$ on distribution $P(x,y)$ can be analyzed using the statistical margin (SM) following [*37*]. Here $f$ is optimized using FedLGD by minimizing Eq. (3). Let $f = h \circ \phi$ be a neural network composed of a feature extractor $\phi$ and a classifier $h$. Since we obtain local virtual data $\widetilde{D}_c$ via distribution matching with real data, we assume $P_{\widetilde{D}} \approx P$. Similar to VHL [*37*] (Theorem A.2), we have the lower bound of SM for FedLGD:
$$E_{h \leftarrow P_g} SM(h, P) \geq E_{h \leftarrow P_g} SM(h, \widetilde{D}) - E_{h \leftarrow P_g}\left|SM(h, P_g) - SM(h, \widetilde{D})\right| - E_y\, d\big(P(\phi \mid y),\, P_{g}(\phi \mid y)\big),$$
where $P_g$ denotes the global virtual data distribution.
The key distinction between FedLGD and VHL lies in the last term, i.e., the distribution matching objective between $P$ and $P_g$. To maximize SM and achieve strong generalization, we want SM to have a tight lower bound; therefore, upper-bounding the last term is desired. Note that in VHL, the global virtual data are generated from an un-pretrained StyleGAN. In contrast, our approach employs the *gradient matching* strategy to synthesize the global virtual data. Due to the complexity of the data distillation steps, we consider the analysis of a kernel ridge regression model with a random feature extractor. Based on Proposition 2 of [r1], the first-order distribution matching objective (the last term) is approximately equal to gradient matching for each class (i.e., $\mathcal{L}_{Dist}$, Eq. 5). Namely, minimizing Eq. 5 in FedLGD implies minimizing the last term in that setting. Hence, using global virtual data generated via gradient matching gives the model's SM a tight lower bound and provable generalization.
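Schematically, and under the stated kernel ridge regression assumption, the correspondence invoked above can be written as follows. This is our own notation and a rough sketch, not the paper's exact statement:

```latex
\mathbb{E}_y\, d\big(P(\phi \mid y),\, P_g(\phi \mid y)\big)
\;\approx\; \sum_{y} \mathrm{Dist}\big(\nabla_\theta \mathcal{L}_{CE}(\widetilde{D}^g_y;\theta),\; \bar{g}_y\big),
```

where $\bar{g}_y$ denotes the averaged client gradients for class $y$; minimizing the right-hand side (the gradient matching objective of Eq. 5) then tightens the lower bound on the statistical margin.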
[r1] Dataset distillation: A comprehensive review.
Note: We denote the new references used in rebuttal as [r#].
Pdf: /pdf/c9dec29db1f456ba9bca73cc4a04ebfc2ec8c59b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Neural Algorithmic Reasoning Without Intermediate Supervision | Accept (poster) | Summary: The paper addresses neural algorithmic reasoning without supervision of the intermediate steps of reasoning. Typically, neural algorithmic reasoning requires supervision on these intermediate steps. The paper proposes a method that does not require intermediate supervision and achieves results competitive with fully supervised benchmarks. The method consists of a modified no-hint reasoning scheme that mimics the hint-supervision setting and a self-supervised objective using a contrastive loss term over nodes at each step.
Strengths: Overall the paper addresses an important problem of the integration of reasoning with neural networks.
The motivation is clearly argued in the paper. Neural algorithmic reasoning has been showing its strong capacity for structured reasoning as neural networks, but the approach requires dense supervision on intermediate steps. The proposed method mitigates this issue, reducing the amount of required effort and thus would extend the capacity and applicability of neural algorithmic reasoning for different tasks.
The proposed method achieves competitive results against fully supervised benchmarks and shows its advantages. The obtained results are interesting and promising. The source code is available in the supplementary material.
Weaknesses: My major concerns regarding this paper are as follows.
- The explanation of the method is somewhat unclear and not easy to follow. For example, Hint-ReLIC is repeatedly referred to and used as an explanation. However, the paper does not provide enough explanation of Hint-ReLIC (although textual explanations are distributed across different parts of the paper), which makes the presentation hard to follow. Readers would benefit from a somewhat more detailed explanation of this key related method in the background section in order to understand the proposed method.
- Experimental details are not explained enough. In Table 1, some cells do not contain values, but no specific reasons are clarified in the main text (if I am not missing them). In the right-most column of Table 1 and Table 2, values for Insertion/Bubble/Quick/Heap sort are all the same, even though the experiments are conducted on 5 different random seeds and stochastic gradient descent. This seems to imply that the proposed method produces exactly the same values for each different random seed across different tasks via stochastic optimization. If there is an explanation for why this happens, it would need to be mentioned in the main text.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Why are some values not available in Table 1?
In the right-most column of Table 1 and Table 2, values for Insertion/Bubble/Quick/Heap sort are all the same, even though the experiments are conducted on 5 different random seeds and stochastic gradient descent. Why does this happen?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Some limitations are argued in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your thoughtful review!
We agree that adding a more detailed explanation of the Hint-ReLIC method to the background section would make the paper easier to follow, and we will update this section accordingly. We are also happy to clarify anything further during the discussion period if needed.
We address your questions below.
> Why are some values not available in Table 1?
For the Hint-ReLIC method, the DFS scores were not reported by Bevilacqua et al. (2023) (as we mention in the footnote on page 7). For the last column, as mentioned in line 180 of the paper, our proposed regularization term is applicable only for part of the problems, which is also mentioned as one of the main limitations in Section 6.
> In the right-most column of Table 1 and Table 2, values for Insertion/Bubble/Quick/Heap sort are all the same, even though the experiments are conducted on 5 different random seeds and stochastic gradient descent. Why does this happen?
As the proposed no-hint regime does not use any information about the algorithm, all sorting tasks become the same and scores may differ only by a random seed, so we reported the same numbers for all sorting algorithms. But, as we mention in lines 303-306 of the paper, the standard no-hint regime uses the ground-truth step count for each algorithm, which produces the difference for sorting algorithms in the first column.
We hope that our response addresses your concerns and will be happy to answer any additional questions during the discussion period if needed.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: I thank the authors for the detailed clarification.
As the concerns are addressed adequately, I raise the score to 6 (from 5).
I defer to the other reviewers for the deep assessment of the novelty/originality of this work due to my limited knowledge. | Summary: This paper addresses the challenge of developing neural networks capable of performing algorithmic reasoning. The paper discusses disadvantages about the use of intermediate hints during training of algorithmic reasoners. The authors propose a regularisation term to ensure that meaningful representations are learnt during the neural execution of algorithms, even when no supervision occurs on intermediate steps.
Strengths: The paper is written clearly. Discussion of background and prior work is comprehensive. Source code is provided for reproducibility of the experiments. The paper is not very original, as it borrows concepts and formulations from [1]. However, the authors use these concepts in an alternative way.
While [1] uses the regularisation loss to build a causal model, the authors use it to ensure that a neural network computes the same representations (at all steps) for any two inputs x_1, x_2 for which the execution of the algorithm A would be the same. I find this idea a clever augmentation technique. Quantitative results show supposedly good improvements in sorting algorithms with respect to baselines.
[1] B. Bevilacqua, K. Nikiforou, B. Ibarz, I. Bica, M. Paganini, C. Blundell, J. Mitrovic, and P. Velickovic. Neural algorithmic reasoning with causal regularisation.
Weaknesses: As mentioned above, all concepts and regularisation terms are not new and are borrowed as-is from [1]. Experimental details are somewhat scarce and sometimes imprecise. For instance, the authors do not state the hidden dimensions of their models. Furthermore, this technique seems to be applicable to only a small portion of the problems in the CLRS benchmark, which greatly limits the potential of the method. Fundamentally, this technique seems to be useful only in the case of sorting, since on all other tested tasks it performs significantly worse. Moreover, training networks the way the authors propose will not align with any specific algorithm dynamics (as the authors correctly recognised in the limitations). In fact, judging from the sorting results (98.74% everywhere), it seems that the network has learnt a different algorithm w.r.t. insertion/bubble/quick/heap sort (or possibly only one of them).
The authors main point is that the regularisation term is forcing similar representations for the inputs to have the same execution trajectories, but there’s no experiments nor theoretical analysis confirming this property.
One thing I find very concerning is that the authors claim to fix the number of steps of the network (i.e., the number of steps in the “neural execution” phase) to be linearly dependent on the input size (i.e., O(n)). This means that the network must have learnt a sorting algorithm that achieves around 98% accuracy and runs in linear time, which feels very strange, unless the constant is high enough to behave well with arrays of 64 elements (the test size).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Q1) Can the Authors provide some insights on why the proposed technique only works for sorting algorithms?
Q2) The proposed technique seems likely to be effectively used in conjunction with supervision on hints: have the Authors tried that? Or do the Authors believe that it is not applicable?
Q3) Given that the manuscript reports 98.74% \pm 1.12 for all sorting algorithms, it looks like that the network is not aligning with any specific sorting algorithm, but potentially learns a different one. Have the Authors tried investigating in this direction?
Q4) What is the exact number of steps that the network performs during algorithm rollout?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thoughtful review of our paper!
First, we would like to emphasize one of our contributions - the modification of the no-hint mode towards being more similar to the hint-based one - which is not discussed in the review. While this contribution is technically simple, we believe it is important: a simple modification, applicable to all problems, demonstrates clear disadvantages of direct hint supervision on 14 of 18 problems ("Baseline" vs "No-hint (ours)" columns in Table 1). This supports the main takeaway of our paper: no-hint training is competitive with hint-based training and thus deserves more attention from the community.
We address the questions and concerns below.
> all concepts and regularization terms are not new and borrowed as-is from [1]
We note that the concept and the regularization from [1] are a standard contrastive technique; see, e.g., [2], [3]. However, the main contribution of [1] is the application of this technique to algorithmic reasoning in a particular way, and the way it is applied in [1] differs significantly from our work: [1] uses the contrastive term to align intermediate computations of the model to the ground-truth dynamics of the algorithm, while our contrastive term gives the model an additional signal about the problem without imposing any constraints on the computations the model needs to follow.
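For concreteness, a standard contrastive term of this family (cf. [2], [3]) can be sketched as follows; this is a generic InfoNCE-style illustration of pulling together node embeddings of inputs with the same execution trajectory, not our exact objective, and the names are hypothetical:

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, tau=0.5):
    # anchor: (d,) node embedding; positives/negatives: (k, d) embeddings.
    # Pull the anchor towards positives (same trajectory), push from negatives.
    def cos(a, B):
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))
    pos = np.exp(cos(anchor, positives) / tau)
    neg = np.exp(cos(anchor, negatives) / tau)
    # -log of the softmax mass assigned to the positives
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))
```

The loss is small when the anchor is close to its positives and far from its negatives, which is the invariance property the regularization encourages.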
> experimental details are somewhat scarce and sometimes imprecise. For instance, the authors do not state the hidden dimensions of their models
While we used the same setup as in the past works, we agree that all experimental details should be clarified in the text. We listed our hyperparameters in Table 1 in the [global response](https://openreview.net/forum?id=vBwSACOB3x&noteId=UUd9q6foXC) (please see the attached pdf).
> training networks the way authors propose in this paper will not align with any specific algorithm dynamics (as the authors correctly recognised in the limitation)
A model without hints indeed may not align with any algorithm's dynamics, but we consider this a desirable property, since
- Aligning to a particular algorithm can be suboptimal (lines 136-139 of the paper)
- Allowing the model to find optimal dynamics can lead to new ways of solving the problem (lines 140-142)
> There’s no experiments nor theoretical analysis confirming that the regularisation term is forcing similar representations.
To address this comment, we conducted an additional illustrative experiment. Please, see our global response and Fig. 1(b) in the attached pdf.
> One thing I find very concerning is that the authors claim to fix the number of steps of the network (i.e., the number of steps in the “neural execution” phase) to be linearly dependent on the number of input size (i.e., O(n)). This means that the network must have learnt a sorting algorithm that achieves around 98% accuracy and runs in linear time
We note that each processor step is a message-passing round, which operates (in the case of Triplet-GMPNN) over the full graph, so in terms of the problem size the per-step computational complexity is $O(n^3)$ (which is the upper bound for all considered algorithms). Hence there is no need for a high constant: we used 5 for graph problems and 1 for others. For additional clarity, we used exactly the same step count as the original Insertion sort hint trajectory, so this choice seems reasonable.
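As a toy illustration of why fully connected message passing is powerful for sorting-like problems (an illustration only, not our model): a single all-pairs round already lets every node compute its rank, assuming distinct values.

```python
import numpy as np

def ranks_one_step(values):
    # One all-pairs "message passing" round: node i receives the message
    # 1[v_j < v_i] from every node j and sums them, yielding its rank.
    v = np.asarray(values, dtype=float)
    return (v[None, :] < v[:, None]).sum(axis=1)
```

So the apparent "linear-time sorting" is unsurprising: each of the O(n) steps performs O(n^2) (or O(n^3) for triplets) work over the dense graph.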
Other options for step count are described in lines 286-288 of the paper.
We also include ablation on the step count for sorting (Table 2 from the global response).
Q1) In our opinion (which reflects the second limitation from Section 6), the reason is that invariance under order-preserving shifts reduces the solution space differently for different tasks. For example, learning a sorting algorithm from information about the relative order is almost trivial, while solving the MST problem given the relative order of edge weights still requires additional logic, such as checking cycles between edges, adding edges to the tree, etc., which is not enforced by the proposed regularization. We consider developing additional/stronger inductive biases a promising direction for future work.
Q2) We have not tried that and it seems applicable, but it does not reflect the key points of our paper: 1. The usual comparison hint vs no-hint is inaccurate due to the significant computation graph differences; 2. For some problems, it is possible to give a model additional inductive bias without any predefined constraints about underlying computations (such as the original algorithm trajectory).
Nevertheless, we believe that these two directions (hints vs no-hints) can converge to the point with a balance between strong inductive biases and the absence of intense supervision, which is closely related to your idea.
Q3) We would like to refer to Fig. 1(a) from the global response, which demonstrates that due to the parallel architecture, the model without hints is making most of the final predictions in a parallel way, which differs from the sequential nature of the sorting algorithms.
Q4) As mentioned above, it is $5 n$ for the graph problems and $n$ for the array problems.
We are happy to discuss further any of the raised weaknesses and questions!
[1] B. Bevilacqua, K. Nikiforou, B. Ibarz, I. Bica, M. Paganini, C. Blundell, J. Mitrovic, and P. Velickovic. Neural algorithmic reasoning with causal regularisation.
[2] Mitrovic, J., McWilliams, B., Walker, J. C., Buesing, L. H., and Blundell, C. Representation learning via invariant causal mechanisms
[3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations
---
Rebuttal Comment 1.1:
Title: Considerations on rebuttal
Comment: First of all, thanks to the Authors for their response and for clarifying previously obscure experimental details.
Let there be no doubts, both contributions (i.e., modification of the no-hint regime and addition of the self supervised term) are sensible and interesting investigations, especially considering targeting of NP-hard problems, where we clearly cannot supervise on intermediate hints.
However, there are still fundamental issues in the work:
1) From the authors’ response it has emerged that the data for all algorithms are cliques (even for sorting algorithms, where the data are usually arrays and could very well be represented as undirected chains); this should be specified clearly in the text. It is a direct consequence of using the Triplet-GMPNN processor. Being fully connected, cliques suit the "parallel processing" paradigm that the authors present in lines 136-139 as a motivation for introducing the self-supervised term. That is also the reason why their model could perform sorting in O(n) steps. The performance of the method was not tested in cases where algorithmic data are represented differently from cliques, or, equivalently, with a different processor (not based on fully connected graphs), making the assessment of the proposed approach impossible in such cases. Surely, one can always take the fully connected version of any graph when applying this method, but this would *greatly* impact scalability. The method would be inapplicable to large graphs.
2) Figure 1(b) shows that the self-supervised term does bring representations closer, for nodes having the same algorithm trajectory. However, I think that the authors must include also other baselines in that analysis, such as Hint-ReliC, and move the figure somewhere in the main paper. It is important to compare the behaviour of node representations also for other baselines, especially considering that GNNs in general tend to bring neighboring nodes' representations closer anyway. I expect the authors' proposed method still performs the best, but I believe it has to be verified and illustrated clearly.
3) Since the model is not aligning with any specific algorithm, it is not fair to present results on Heapsort, Quicksort, Bubblesort and Insertion Sort as four different results. Indeed, without supervising on hints, these four algorithms are indistinguishable from one another. This is confirmed by the fact that the model achieved exactly the same accuracy on all four algorithms, as I pointed out in the original Q3. Overall, this method seems to tackle *solving algorithmic problems* rather than *learning algorithms*. One may argue that in order to *solve* algorithmic problems, the network must learn an algorithm anyway. While I agree with this, I still find the comparisons within Table 1 not sensible, since those results show how well algorithms are imitated in principle.
4) Follow-up question from 3): I suspect that the authors used the same 5 seeds for all experiments on sorting, since the results are *exactly* the same and the algorithms are indistinguishable given the absence of supervision on intermediate steps. For the same reason, I believe the same should happen for the MST algorithms (Kruskal and Prim), but the results are different there. Did the authors use different seeds for Kruskal and Prim?
---
Reply to Comment 1.1.1:
Comment: Thank you for your involvement in the discussion and valuable comments!
Let us address the remaining concerns:
Q1) Let us explain why we believe that our contributions do not rely on the full graph.
First, parallel processing arises even with the mentioned chain-graph inputs, as a single message-passing step can update all nodes simultaneously. We also note that even on a chain graph, sorting could be performed in O(n) steps in a message-passing framework with global graph features (the presence of a global state requires only one additional node with $n$ additional edges). For example, in insertion sort, the insertion phase for each node can be done in $O(1)$ (as proposed by the original hints trajectory from the CLRS Algorithmic Reasoning Benchmark, where hints were designed for parallelizable but not necessarily fully connected architectures, such as, e.g., a chain).
Second, the proposed contrastive term is also not relying on the density of the graph, as this term can be used with any architecture.
Since both our ideas (no-hint modifications and contrastive term) can be applied to any processor, we have chosen Triplet-GMPNN as the state-of-the-art processor showing good results in the previous work (Ibarz et al., 2022; Bevilacqua et al., 2023). We agree that the details of Triplet-GMPNN should be added to the experiment setup and will do this.
Q2) We are grateful to the reviewers for their ideas for additional experiments and will definitely add all the figures and tables from the general response to the updated version of the paper. We would be happy to use the Hint-ReLIC models for additional experiments, however, the original implementation of this method is not published yet. We will add other hint-based sorting models (Bubble sort, Merge sort, Quicksort) as baselines to this comparison.
Q3) It is true that without hints different algorithms become indistinguishable from one another and it is more suitable to consider the original problem (e.g., sorting), rather than a particular way of solving it. But we note that the alignment of the model to the particular execution trajectory is usually not the goal, but mainly the approach to improve the generalization abilities of the models (Veličković et al., 2022). So the comparison of the OOD performance for different methods (such as, for example, aligning to Bubble Sort with hint supervision, to Merge Sort with Hint-ReLIC, or training the sorting problem without hints) is reasonable and crucial on the path to strongly generalizable reasoners. We also agree that demonstrating no-hint performance under the names of the particular algorithms can be confusing, however, such choice is used as a convention in previous work (Mahdavi et al. (2023), Bevilacqua et al., (2023)). We will clarify this in the text by adding the corresponding comment for the table. If you have any other suggestions for how to better present this result - we will be happy to incorporate them in the revised version.
Q4) Yes, as all the sorting tasks become the same and scores may differ only by a random seed, so we reported the same numbers for all sorting algorithms to highlight this. However, for the MST problem, while the underlying problem is the same and the same input/output can be used, there are differences coming from the CLRS Algorithmic Reasoning Benchmark: the Kruskal algorithm takes the graph as an input and the output is a binary mask on the edges from the MST, while the Prim algorithm takes as input the graph and the starting node and predicts for each node the pointer to another node (representing the order of extension of connected components). Such differences produce separate optimization problems.
We would like to know whether we have answered all the questions and concerns raised by the reviewer. We are happy to discuss further any of the raised weaknesses and questions! | Summary: The paper is about neural algorithmic reasoning, which is the task of building models that can execute classical algorithms. The paper focuses on learning algorithms without intermediate supervision, which means using only input-output pairs and not the steps (hints) of the algorithm. The paper proposes two improvements for this setting: a simple architectural change that makes the model more similar to the hint-based version, and a self-supervised objective that regularizes the model by forcing similar representations for inputs with the same execution trajectories. The paper shows that these improvements lead to competitive or state-of-the-art results on a subset of the CLRS benchmark. The paper argues that learning algorithms without intermediate supervision is a promising direction for neural algorithmic reasoning.
Strengths: 1. Learning algorithms without intermediate supervision is an important direction for neural algorithmic reasoning.
2. The contrastive term added is insightful and the corresponding results are promising.
Weaknesses: 1. The fundamental difference between the proposed "no-hint" and the original "no-hint" has not been stated clearly - the proposed "no-hint" seems to have only increased the hidden layer size or the network capacity. If so, the comparison against the original "no-hint" is unfair. Can the authors give a more detailed description of the proposed "no-hint" architecture? Also, please list the relevant hyperparameters of the network (such as size and number of layers), and conduct a comparative experiment on the network size.
2. The “step count” setting in section 5.2 is also confusing: is step count the main reason why the proposed no-hint performs better than the original no-hint?
3. Table 2 is somewhat confusing: No hint (with our updates) & Binary Search has 93.21 ± 1.10 which is different from 87.12 ± 2.23 or 85.29 ± 4.52 in Table 1. Also, I think taking the maximum of No hints (ours) and Add contr. (ours) as “ours” in Table 2 is not a good choice since they are different methods proposed.
4. Although the proposed new contrastive term can improve the performance without hints, I suspect that the design of this term is extremely challenging (which might have already been mentioned in Section 6, Limitations), even more so than obtaining hint data. This is because designing this term requires a deep understanding of the target algorithm itself and the designer must be an algorithm expert, while obtaining hint data only requires a corresponding executable algorithm. This may deviate from the original intention of "without intermediate supervision" and greatly reduce the actual effectiveness of this paper.
5. The writing needs to be improved.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Table 2 should be merged into Table 1 since only one column is different.
2. Can you show the latent representation change with the number of steps? Can it correspond to the hint of the algorithm?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations have been discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thoughtful review! Let us address the raised concerns and questions.
> Can the authors give a more detailed description of the proposed “no hint” architecture?
The main difference between the original no-hint version and the proposed one is the presence of an encoding-decoding stage after each processor step. The difference is demonstrated in Figure 2 of the paper and motivated by the presence of such stages in hint-based models. Such modifications add a small group of parameters for the encoder (green line in Figure 2) and decoder (red line in Figure 2) (encoder and decoder are single linear layers), keeping the hidden size of the model the same.
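Our reading of this modification can be sketched as follows (a toy sketch with random weights, where a dense layer stands in for the GNN processor; all names are hypothetical and this is not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_h = 5, 4, 16
W_enc = rng.normal(size=(d_in, d_h)) * 0.1       # single linear encoder
W_dec = rng.normal(size=(d_h, d_in)) * 0.1       # single linear decoder
W_proc = rng.normal(size=(2 * d_h, d_h)) * 0.1   # stand-in for the GNN processor

def no_hint_step(x, h):
    # Re-encode the decoded quantities before every processor step,
    # mirroring the encode/decode stages present in hint-based models.
    z = x @ W_enc
    h = np.tanh(np.concatenate([z, h], axis=-1) @ W_proc)
    return h @ W_dec, h  # decoded output fed to the next step, new hidden state

x, h = rng.normal(size=(n, d_in)), np.zeros((n, d_h))
for _ in range(n):  # n processor steps for an n-element input
    x, h = no_hint_step(x, h)
```

The key point is that the encoder and decoder are single linear layers applied at every step, so the hidden size stays the same while the computation graph matches the hint-based pipeline.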
> The comparison against the original “no-hint” is unfair
One of the key points of our paper is that the original (used in previous work) comparison hint vs no-hint, which demonstrated the advantages of using hints, is unfair due to the significant computation graph differences (left and middle parts of Figure 2). Our proposed modification makes the comparison hint vs no-hint more accurate (middle and right parts of Figure 2) and demonstrates the disadvantage of direct hint supervision over the no-hint version for most of the problems (columns "Baseline" and "No-hint (ours)" in Table 1).
> Also, please list relevant hyperparameters (like size, layers) on the network, and conduct a comparative experiment on the network size.
Model sizes and other hyperparameters are inherited from previous work. To address your question, we specified all the hyperparameters in Table 1 of the [global response](https://openreview.net/forum?id=vBwSACOB3x&noteId=UUd9q6foXC). Also, since our contribution is not a new model architecture, we believe that experiments with the network size are orthogonal to our work. However, we added some other ablations to the global response.
> The “step count” setting in section 5.2 is also confusing: is step count the main reason why the proposed no-hint performs better than the original no-hint?
No, for example, for sorting, we used the same step count as in the original Insertion sort hint trajectory ($n$ steps for input with $n$ nodes). Also, to answer this question, we conducted an ablation on the steps count for sorting (Table 2 from the global response), demonstrating almost the same performance for different step counts. The motivation behind our choice was to avoid an input-dependent number of steps (as, for example, for Heap sort) as it is unnecessary for the no-hint mode. So we made the simplest choice which is applicable to all the problems at the same time.
> the design of this item requires a deep understanding of the target algorithm itself and the designer must be an algorithm expert, while obtaining hint data only requires a corresponding executable algorithm
We definitely agree that finding augmentations is not trivial and requires a deep understanding of the algorithm; we will describe this in the limitations section. However, we also note that hints don’t come “for free” with a dataset generator, at least not the useful ones: Veličković et al. (2022) noted some difficulties with hint generation, such as compression (simple ‘for’ loops can be described as a single step for parallel architectures) or the ability to predict the next hint from the current one, which can represent a separate task for different architectures. Also, the same hints can be encoded/predicted differently, which can affect model performance. While we believe that for most tasks and architectures useful hint sequences can be found, the comparison between simple supervision on hints and the no-hint mode (“Baseline” and “No hints (ours)” columns in Table 1 of the paper) shows that for most problems (14 of 18) no-hint performance is better (or better hints are needed). We also note that the invariants/augmentations from the paper need to be discovered once per task, while helpful hints may need to be engineered separately for different architectures (line 131 in the paper).
> The writing needs to be improved.
We would be grateful if you could clarify which aspects of writing need to be improved, then we will revise the paper accordingly.
> Table 2 should be merged into Table 1 since only one column is different.
Our motivation for including a separate table was to use the aggregated numbers depending on the different hint usage modes, for highlighting the progress of no-hint and hint-based reasoners. While such a comparison of different methods can be imprecise, it serves as a proxy to answering the question: Given fixed model size and different strategies of hint usage, which strategy is preferable for different tasks? We think that combining two tables could be more confusing for readers as some results from Table 2 were taken not from Table 1, for example, for the DFS problem (see the footnote on page 7 of the paper).
> Table 2 is somewhat confusing: No hint (with our updates) & Binary Search has 93.21 ± 1.10 which is different from 87.12 ± 2.23 or 85.29 ± 4.52 in Table 1.
As we mentioned above, the motivation behind Table 2 was to aggregate the best results depending on hint usage mode. Binary search is the only problem that did not gain performance with our updates, so the best performance without hints is still 93.21 ± 1.10, which is taken from the first column.
> Can you show the latent representation change with the number of steps? Can it correspond to the hint of the algorithm?
Thank you for the suggestion! We refer to Fig. 1(a) in the global response.
We would like to know whether we have answered all the questions and concerns raised by the reviewer. We are happy to discuss further any of the raised weaknesses and questions!
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! Some of my concerns have been addressed but I still have the following ones:
**1. About the "no-hint" structure.**
The authors say that "Such modifications add a small group of parameters for the encoder (green line in Figure 2) and decoder (red line in Figure 2) (encoder and decoder are single linear layers), keeping the hidden size of the model the same." So my original understanding is correct, right? "no hint" seems to only add two linear layers, and the effect is to increase the network capacity.
Or, can the authors explain more about the design of the "encoder and decoder"? There is no evidence for why the added layers are named "encoder and decoder", because there is no demarcation between the so-called encoder and decoder, such as a corresponding loss, an explicit tensor-form transformation, etc. Think about seq2seq models: the demarcation between the encoder and decoder is very clear. In comparison, I cannot see such a demarcation in "no hint". That is why I take the "encoder-decoder" design as simply increasing network capacity and ask the authors to list the hyperparameters of the network.
Or, one simple question: What does the latent representation look like, and how do you supervise its training? "Hint" is not necessary here, but you must have some constraints, such as losses, architecture designs, or training algorithms, to make the latent representation behave like a hint. Otherwise, they are simply linear layers.
**2. The design of this item requires a deep understanding of the target algorithm itself and the designer must be an algorithm expert.**
Can the authors give a concrete example of the design cost of the two methods? This is important for validating the motivation.
---
Reply to Comment 1.1.1:
Comment: Thank you for being involved in the discussion and for all your questions, which highlight the parts of the paper that need to be additionally clarified!
**1. About the "no-hint" structure.**
>So my original understanding is correct, right? "no hint" seems to only add two linear layers, and the effect is to increase the network capacity.
Yes, the difference is just two linear layers, which are also present in the model with hint supervision. In other words, we argue that for a fair comparison with the hint model, one needs to remove the loss on the intermediate steps while keeping the computational graph the same, whereas the previous work removed these linear layers (Figure 2 of the paper).
>Or, can the authors explain more about the design of the "encoder and decoder"?
Such naming is inherited from the model with hints: the encoder maps abstract objects (node features, edge weights) to the latent space, while the decoder constructs abstract objects (e.g., a pointer from one node to another) from the latent space. We agree that for a no-hint model this naming is meaningful only for encoding inputs and decoding outputs, while for the intermediate steps there is no abstract meaning. Also, phrases like "we propose to run the no-hint model in the encoded-decoded mode" (line 161) are used only for consistency with hint-based models. We will make this more transparent in the text.
>What does the latent representation look like and how do you supervise the training of it?
Such a "decoded" intermediate state is a single logit for each edge (line 283), i.e., a binary mask over the edges, which is "encoded" into the adjacent node features at the next step of the computation. Such states have no additional constraints:
- the no-hint model is trained only with supervision loss on the output of the model (and not the intermediate steps);
- the proposed contrastive term forces invariance of these states for equivalent inputs, but these states are not required to 'behave like a hint' (though they potentially could).
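As an illustration of this point, here is a schematic toy (not the paper's actual architecture; all function names are hypothetical) contrasting the two training objectives over the same computational graph: the hint model adds a loss term per intermediate decoded state, while the no-hint model supervises only the final output.

```python
import numpy as np

def run_model(x, steps, encode, process, decode):
    """Shared computational graph: encode -> repeated processor steps,
    with each intermediate state decoded and re-encoded -> final decode."""
    h = encode(x)
    intermediates = []
    for _ in range(steps):
        h = process(h)
        intermediates.append(decode(h))   # decoded intermediate state
        h = encode(intermediates[-1])     # fed back into the processor
    return decode(h), intermediates

def no_hint_loss(output, target, loss_fn):
    # Supervision only on the final output; intermediate states are free.
    return loss_fn(output, target)

def hint_loss(output, target, intermediates, hints, loss_fn):
    # Additionally supervise every intermediate state against a hint.
    return loss_fn(output, target) + sum(
        loss_fn(p, h) for p, h in zip(intermediates, hints))

# Toy run: identity encode/decode and a "+1" processor.
mse = lambda a, b: float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
out, inters = run_model(np.ones(3), 2, lambda h: h, lambda h: h + 1, lambda h: h)
assert no_hint_loss(out, np.full(3, 3.0), mse) == 0.0
```

Both losses share the same forward pass, which is the point of the comparison: dropping the extra terms of `hint_loss` leaves the computational graph, and hence the model's capacity, unchanged.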
**2. The design of this item requires a deep understanding of the target algorithm itself and the designer must be an algorithm expert.**
>Can the authors give a concrete example of the design cost of the two methods?
First, we would like to note that we do not argue that finding the invariances is in some sense easier or harder than building good hints; both methods have additional costs. The main difference between the two methods is not the cost but that the invariance-based approach exploits the fact that models usually find solutions that are more aligned with their architectures (so aligning them to a particular trajectory can be a suboptimal or difficult task).
Answering your question:
We believe that for hint supervision, the main cost is aligning the model to the desired algorithm: for each architecture, one needs to decompose the algorithm's execution trajectory into a sequence of steps, each of which the model should predict. One also needs to carefully design the transitions between steps, making hints predictable from previous steps but not too simple, while keeping the execution trajectory relatively short. For example, consider the hint trajectories from the CLRS Algorithmic Reasoning Benchmark (Veličković et al., 2022) for the Insertion Sort algorithm. The insertion phase is "compressed" into a single message-passing step (which is possible due to the parallel processing of modern architectures); other options would be a different constant number of steps or $O(n)$ steps (similar to the ground-truth algorithm). Even this simple algorithm admits many different ways to convert it into a hint trajectory, and each particular choice directly affects the performance of the model, as intermediate predictions are supervised to correspond exactly to the hints. Given that for 14 of 18 problems the no-hint (ours) performance is better than hint supervision, we could say that existing hints/hint losses are possibly not optimal.
As for the proposed contrastive technique, the main cost is to understand for which inputs the algorithm will perform identically. For example, for comparison sort algorithms this is easy (we can consider inputs with the same relative order of the elements); for other algorithms it may require a deeper understanding of the underlying problem.
The common idea behind the contrastive term is utilizing the same execution trajectory for different inputs. Even the simple notion of the relative order of the inputs is applicable to several different problems, since sorting or taking the maximum occurs as a subtask in a variety of problems (e.g., in addition to the MST problem described in the paper, there is also building a heap/search tree from given values). Once such invariants are found, it is clear how to apply the contrastive term regardless of the architecture used.
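For comparison sorts, this equivalence-class notion can be sketched concretely (helper names are ours, purely for illustration): any strictly increasing map of the inputs preserves the relative order of the elements and therefore the trajectory of any comparison-based algorithm.

```python
import math
import random

def order_preserving_augment(xs, rng):
    """Apply a random strictly increasing map; relative order is unchanged."""
    a = rng.uniform(0.5, 2.0)   # positive slope keeps the map increasing
    b = rng.uniform(-1.0, 1.0)
    return [math.tanh(a * x + b) for x in xs]

def argsort(xs):
    return sorted(range(len(xs)), key=lambda i: xs[i])

rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(8)]
xs_aug = order_preserving_augment(xs, rng)

# A comparison sort only inspects relative order, so both inputs belong to
# the same equivalence class and induce identical execution trajectories.
assert argsort(xs) == argsort(xs_aug)
```

In a contrastive term, `xs` and `xs_aug` would then form a positive pair.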
We hope this answers the question and we are open to further discussion on the raised points. | Summary: This paper proposes a novel method for algorithmic reasoning without intermediate supervision. The core idea is to use a self-supervised objective that regularizes the internal computations. The authors evaluate the proposed method on the CLRS algorithmic reasoning benchmark and achieve state-of-the-art performance on a subset of problems. To the best of my knowledge, the contribution is novel and clear, and I recommend acceptance.
Strengths: This paper is well-motivated and relatively easy to follow. The design of the regularization for the self-supervision scheme is also quite intuitive. The evaluations are comprehensive and seem to support the effectiveness of the proposed approach. I appreciate the specific emphasis on potential limitations.
Weaknesses: While the main experiment/evaluation is informative, I would appreciate it if the authors addressed some potential ablations, such as varying the number of augmentations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the limitations section, the authors mention the requirement of a strong inductive bias for the self-supervised regularization scheme. I would appreciate it if the authors addressed the potential concern about computation with respect to search.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned in Strength, I appreciate authors' detailed inclusion of limitations. I do not think there are major limitations in addition to the said ones given the current content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and positive feedback!
Following your suggestion, we conducted several ablations (please see the details in the [global response](https://openreview.net/forum?id=vBwSACOB3x&noteId=UUd9q6foXC)):
- Different augmentation counts (Table 3 in the global response). Keeping the batch size equal to 32, we tried different distributions of positive and negative examples for each element in the batch and used them in the contrastive term, as described in lines 242-248 of the paper. This experiment shows that the method is not sensitive to the particular choice of augmentation counts. For the results in the paper, we used only one positive example, and the other 30 were used as negative examples.
- Different processor steps count for sorting (Table 2 from the global response).
- Experimental verification of stronger invariance of the model trained with the contrastive term (Fig. 1(b)).
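As a rough sketch of the batching scheme in the first bullet (a generic InfoNCE-style stand-in, not the exact contrastive term from lines 242-248 of the paper): with a batch of 32, each element gets one positive (its augmented version), and the remaining columns of the similarity matrix serve as its negatives.

```python
import numpy as np

def infonce_loss(z, z_pos, temperature=0.1):
    """InfoNCE-style contrastive loss: z[i]'s positive is z_pos[i]; the
    other B-1 columns of the similarity matrix act as its negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_pos = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = z @ z_pos.T / temperature                   # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))           # pull diagonal up

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 16))
aligned = infonce_loss(batch, batch.copy())              # positives identical
random_pos = infonce_loss(batch, rng.standard_normal((32, 16)))
assert aligned < random_pos
```

The loss is lower when each element's positive is close to it in embedding space, which is exactly the invariance the contrastive term encourages.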
Answering your question, we would also like to clarify our thoughts on stronger inductive biases for other problems. The core reason why the proposed contrastive term improves performance differently for different tasks is that the invariance under order-preserving shifts reduces the search space differently for different tasks. For example, learning the sorting algorithm from information about the relative order is almost trivial, while solving the MST problem given the relative order of edge weights still requires additional logic, such as checking cycles between edges, adding edges to the tree, etc., which is not forced by the proposed regularization. That is why we consider developing additional/stronger inductive biases a promising direction for future work. Does that answer your question? We are happy and open to further discussion on this topic.
Rebuttal: We thank all the reviewers for their feedback and many constructive suggestions and comments.
We have incorporated additional clarifications and experiments in the attached pdf, which contains:
**Table 1.** Parameters of the model and training procedure.
**Table 2.** Ablation on the steps count for the sorting problem. In the paper, we used the same steps count as the Baseline model for Insertion sort. We demonstrate that mimicking the steps count from other baseline models still shows the advantages of the proposed no-hint changes.
**Table 3.** Ablation on the augmentations count for the contrastive term.
**Figure 1(a).** Demonstration of the execution dynamics for the hint-based model vs the no-hint model on the sorting task.
While interpreting the underlying computations of a model is non-trivial, we can use the models' decoders to visualize the dynamics of the intermediate predictions. We took two models for the sorting problem: No-hint (ours) and Insertion sort (trained with hint supervision). Both models use the same steps count (64 steps for the test data). After each processor step, we can apply the output decoder to the node embeddings to see the intermediate predictions of the predecessor (recall that the output for the sorting problem is represented by the prediction of the predecessor node in the sorted order). For each step, we visualize the ratio of the intermediate pointers that are equal to the final output of the model.
We note that for hint-based models we have two decoders: the Hint decoder and the output decoder, both of which can be used for this purpose (orange and blue line, respectively).
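The visualized quantity can be computed in a few lines (a sketch with made-up names; in the experiment `step_preds` would come from applying a decoder to the node embeddings after each processor step):

```python
import numpy as np

def agreement_curve(step_preds, final_pred):
    """step_preds: (T, n) predecessor pointers decoded after each of T
    processor steps; final_pred: (n,) pointers decoded at the last step.
    Returns, for each step, the fraction of pointers already equal to
    the final output of the model."""
    return (step_preds == final_pred[None, :]).mean(axis=1)

# Toy trajectory: predictions converge to the final output over 4 steps.
steps = np.array([[0, 0, 0],
                  [2, 0, 0],
                  [2, 0, 1],
                  [2, 0, 1]])
curve = agreement_curve(steps, steps[-1])
assert curve[-1] == 1.0 and np.all(np.diff(curve) >= 0)
```

Plotting `curve` for each model/decoder would yield lines analogous to those in Fig. 1(a).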
We would like to highlight several insights from this experiment:
1. Models trained with supervision on the Insertion sort trajectory struggle to learn the relation between the intermediate predictions of the pointers and the output predictions (orange line). In other words, the model can potentially treat the hint sequence (and even each transition between hints) as a separate task, not directly related to the output of the algorithm. However, the execution trajectory of the models with hints inherits some sequential order of intermediate updates (blue line).
2. The model without hints has a much stronger tendency toward parallel processing: almost all output predictions are obtained in the first steps of the execution (green line).
**Figure 1(b).** We demonstrate that the proposed contrastive term forces the model to have similar representations for inputs from the same equivalence class. For this purpose, we took different pairs of equivalent inputs and, for each step of the execution, calculated the following ratio: if we take one node from one input and the closest node from the augmented input, what is the probability that the closest node represents the element with the same relative order (and not, for example, the closest value)?
While the model without the contrastive term (blue line) mainly relies on the relative order, we see that our contrastive term enforces this property with much more confidence (and speed).
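A possible implementation of this ratio (names hypothetical; the embeddings would come from the model at a given execution step):

```python
import numpy as np

def order_match_ratio(emb_a, emb_b, values_a, values_b):
    """For each node of input A, find the nearest node (in embedding space)
    of the augmented input B, and report the fraction of nodes whose nearest
    neighbour has the same relative order (rank), rather than, say, the
    closest raw value."""
    rank_a = np.argsort(np.argsort(values_a))   # rank of each element
    rank_b = np.argsort(np.argsort(values_b))
    dists = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)              # closest B-node per A-node
    return float(np.mean(rank_a == rank_b[nearest]))

# Toy check: if embeddings encode rank exactly, the ratio is 1.
vals_a = np.array([0.3, -1.2, 2.0, 0.7])
vals_b = np.tanh(vals_a)                        # order-preserving augmentation
emb = np.argsort(np.argsort(vals_a)).astype(float)[:, None]
assert order_match_ratio(emb, emb, vals_a, vals_b) == 1.0
```

Computing this ratio per execution step would yield the curves compared in the figure.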
We are happy to discuss these experiments further or answer any questions!
Pdf: /pdf/1f570b7dbd8699fa163a09b406ac7ff5e7c312d1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper focuses on neural networks that learn to execute algorithms, using the CLRS benchmark. It tackles the case of learning to execute when there are no hints available (no access to intermediate steps of the algorithm's trace). To do that, it proposes two modifications: the first is architectural, which is maintaining the latent representation of the current step and using it for the next step, which previous no-hint architectures discard and architectures using the hints use to decode the predicted hint and compute an additional loss by comparing with the ground-truth hint. The second one proposes a contrastive loss aimed to reduce the function search space, by using the following invariance: inputs that have the same (algorithmic) trajectory (belonging to the same equivalence class should have the same representation. The two modifications improve on the no hints baseline and in many cases match the model with hints.
Strengths: As the strengths outweigh the weaknesses of this paper, I recommend acceptance in its current form. I would be happy to increase my score if some of the weaknesses are addressed.
The strengths of the paper include great clarity and sensible proposals to improve no-hint performance, such as maintaining a similar computation graph and using invariances to reduce the search space in lack of stronger inductive biases. Moreover, the evaluation is thorough and the improvements brought show a clear step in the direction of accurately learning algorithms without hints.
Weaknesses: I think the paper would be greatly strengthened by improving the motivation towards doing no-hint learning. As the performance of the proposed models is not consistently better than that using hints, it is likely that at the moment, this is not enough evidence to convince towards porting to no-hint neural algorithmic reasoners for polynomial algorithms.
One way to improve this would be to strengthen it empirically (considering other invariances, introducing stronger inductive biases, extending the number of algorithms it is applicable to—perhaps applying the model to NP-hard problems, where having hints is not scalable because of exponential-time complexity), or providing ablation studies of some of the initial motivating points: the ability of no-hint architectures to do parallel processing, or new ways/insights of learning to solve the problem. I also think that the argument of human effort should be further clarified in the limitations: while hints are generated using the algorithm's implementation and therefore come "for free" with the dataset generator, finding augmentations requires understanding of the algorithm's invariances, which is also based on human effort.
It would be useful if a few phrases could be clarified:
- “We note that the computation of hint predictions and encoding them to the processor in the encoded-decoded mode can be expressed as an additional message-passing step over the nodes, edges, and graph features” could be made clearer by including information on how hints are computed
- "a latent representation of type mask located at edges” is not immediately obvious
- it is mentioned that the setup is the same as in previous work, but restating the training and testing regime (such as size of inputs) would be beneficial.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Please see weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have included a discussion on the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and constructive comments!
We would like to further support our motivation for investigating no-hint reasoners with an experiment (Fig. 1(a) from the global response) that demonstrates the execution trajectories of sorting models. We discuss our observations in the [global response](https://openreview.net/forum?id=vBwSACOB3x&noteId=UUd9q6foXC). In particular, we observe that models with hints may struggle to learn the relation between the intermediate predictions of the pointers and the output predictions. Also, the model without hints has a much stronger tendency toward parallel processing. Such insights, combined with the common intuition about the redundancy of over-engineering (as models usually find solutions that are more aligned with their architectures), represent the core of our motivation.
Also, thank you for your notes on the human effort for hints/augmentations - we definitely agree that finding augmentations is not trivial and requires a deep understanding of the algorithm, and we will describe this in the limitations section. However, we also note that hints don’t come “for free” with the dataset generator, at least not the useful ones: Veličković et al. (2022) noted some difficulties with hint generation, such as compression (simple ‘for’ loops can be described as a single step for parallel architectures) or the ability to predict the next hint from the current one, which can represent a separate task for different architectures. Also, the same hints can be encoded/predicted differently, which can affect model performance. While we believe that useful hint sequences can be found for most tasks and architectures, the comparison between simple supervision on hints and the no-hint mode (“Baseline” and “No hints (ours)” columns in Table 1 of the paper) shows that for most of the problems (14 of 18) the no-hint performance is better (or better hints are needed). We also note that the invariants/augmentations from the paper need to be discovered/developed once per task, while helpful hints may need to be engineered separately for different architectures (line 131 in the paper).
We also thank the reviewer for noticing parts that could be clarified in the text, we will describe them in more detail in our updated version. However, we are happy and open to clarifying something during the discussion period if needed.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! The main point I raised has been partially addressed through one of the new studies, strengthening my opinion that the paper should be accepted. I increased my score accordingly. | null | null | null | null | null | null |
An Exploration-by-Optimization Approach to Best of Both Worlds in Linear Bandits | Accept (poster) | Summary: The paper addresses the challenge of selecting an algorithm suited to the environment type, which is often unknown in real-world applications. This paper introduces the concept of best-of-both-worlds (BOBW) linear bandit algorithms that perform well in both stochastic and adversarial environments. Previous BOBW algorithms for linear bandit problems have achieved suboptimal regret bounds in stochastic environments due to an additional logarithmic factor. The authors use an existing approach called exploration by optimization (EXO), and prove that it achieves nearly optimal regret bounds in both regimes. EXO utilizes the exponential weight method to update the reference distribution and computes the sampling distribution and loss estimator to minimize an upper bound on regret. More precisely, the algorithm constructed using this approach achieves $O(d\sqrt{T \log T})$-regret in adversarial environments and $O(d^2 \log T/\Delta_{\min})$-regret in stochastic environments. The paper establishes a connection between the EXO approach and the SCRiBLe algorithm, which uses a follow-the-regularized-leader (FTRL) method with self-concordant barrier regularization. The authors also propose a variant called mean-oriented EXO that encompasses the EXO approach and allows for interpretation of existing SCRiBLe-type algorithms as EXO-based methods.
Strengths: The paper addresses the challenge of constructing best-of-both-worlds linear bandit algorithms that perform well in both stochastic and adversarial environments. The distinction between these environments and the need for an optimal algorithm for both is considered as a quite significant open problem. The paper demonstrates the effectiveness of EXO in achieving nearly optimal regret bounds in both stochastic and adversarial environments by establishing regret bounds.
The paper also establishes a connection between the EXO approach and the SCRiBLe algorithm and its extensions. This connection helps in interpreting existing algorithms within the framework of exploration by optimization.
The paper suggests that the framework of exploration by optimization and best-of-both-worlds regret guarantees can be extended to other sequential decision-making problems beyond linear bandits. It mentions partial monitoring problems and episodic MDPs as potential areas for future research and highlights the broader implications of the EXO framework in various decision-making contexts.
Weaknesses: While the paper presents an approach to constructing best-of-both-worlds linear bandit algorithms that perform well in both stochastic and adversarial environments, there are a few weak points that can be identified:
The paper assumes that the loss vectors are generated according to a specific model and that the environment falls under either a stochastic or adversarial setting. In real-world applications, the actual environment may not be strictly one or the other, and it might be challenging to accurately model the underlying process.
The paper primarily focuses on theoretical analysis and regret bounds, but it lacks an empirical evaluation or experimental results to validate the proposed algorithms. Without empirical evidence, it is challenging to assess the practical effectiveness and efficiency of the algorithms in real-world scenarios.
Improving a problem or solution by a log(T) term may not always be considered a significant breakthrough, depending on the context and the magnitude of the improvement.
The paper mentions that the proposed algorithms involve optimization problems (minimization problem (9)) and self-bounding techniques. While these techniques can lead to improved performance, they may introduce complexity and implementation challenges. The paper does not provide detailed insights into the practical feasibility and implementation aspects of the algorithms.
The paper does not explicitly discuss the limitations and assumptions of the proposed algorithms. It would be beneficial to address the assumptions made about the underlying environment, potential limitations of the algorithms, and scenarios where the algorithms may not perform optimally.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there any limitations or potential challenges in implementing the EXO-based algorithm in real-world applications?
The paper mentions the use of the self-bounding technique for analyzing regret bounds in stochastic environments. Can you explain how this technique is applied and what advantages it offers?
How do you think the algorithm interpolate between the two regimes? For example, How does it perform in corrupted stochastic environments or environments with adversarial corruptions?
Can you elaborate on the potential extensions of the EXO framework to other sequential decision-making problems beyond linear bandits? How can this framework be applied to partial monitoring problems or episodic MDPs?
Are there any plans for future empirical studies?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper focuses solely on theoretical analysis.
The paper mentions that EXO can be complex in terms of structure and implementation. The details of how these algorithms can be efficiently implemented in practice are not provided, leaving open questions about their feasibility and scalability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewers' thoughtful and comprehensive comments and feedback.
The reviewers' insights have greatly contributed to the improvement of our work, and we sincerely appreciate the time and effort you have invested in this review.
We hope our response below addresses your concerns.
> Are there any limitations or potential challenges in implementing the EXO-based algorithm in real-world applications?
Brief notes on the computational complexity and on the implementation of the algorithm can be found in Appendix E of the supplementary materials and in Remark 2.
Unfortunately, it is not known whether there is an efficient algorithm to solve the optimization problem (9), as we discussed in Appendix E, which is a major limitation for implementation. However, we do not necessarily need to solve the optimization problem (9) exactly, as noted in Remark 2. For example, it is expected that a polynomial-time algorithm achieving the regret bounds of Corollary 1 can be implemented given a separation oracle (or, equivalently, a linear optimization oracle) for $\mathcal{A}$. Indeed, for achieving the regret bounds of Corollary 1, it suffices to find $p$ and $g$ such that $\Lambda_{q_t, \eta_t} (p, g)$ is bounded by the RHS of (10). The construction of such $p$ and $g$ is provided in the proof of Lemmas 2 and 3 (Lines 242--261), which can be performed using a separation oracle for $\mathcal{A}$. In fact, we can obtain samples from $p$ by using techniques for log-concave sampling (e.g., [Lovász and Vempala, 2007]) and for computing a convex combination expression (cf. Carathéodory's theorem for the convex hull and [Schrijver, 1998, Corollary 14.1g]). However, the analysis of log-concave sampling and the calculation of $H(p)$ (which is required for constructing $g$), including consideration of calculation errors, can be highly complicated, and the computational cost can be very large, although polynomial (cf. [Hazan and Karnin, 2016, Corollary 6.2]). Therefore, finding more efficient implementation methods is also an important challenge. A more detailed discussion will be added in the revised version.
> How do you think the algorithm interpolate between the two regimes? For example, How does it perform in corrupted stochastic environments or environments with adversarial corruptions?
The proposed algorithm interpolates between the two regimes well. In fact, the regret bounds in this paper apply to a class of stochastic environments with adversarial corruptions, as noted on Lines 153--165. The discussion on Lines 153--165 and Theorem 1 together imply that the proposed approach achieves $O( \log T + \sqrt{C \log T} )$-regret (ignoring factors with respect to parameters other than $C$ and $T$) in stochastic environments with corruptions, where $C$ represents the total amount of corruption.
> The paper mentions the use of the self-bounding technique for analyzing regret bounds in stochastic environments. Can you explain how this technique is applied and what advantages it offers?
In the analysis based on the self-bounding technique, we exploit the self-bounding constraints (2) to obtain regret bounds in which the regret itself appears on the right-hand side, e.g.,
\\[
R_T(a^*) \leq \sqrt{ \beta (R_T(a^*) + C) \log T } .
\\]
A similar expression can be found in Lines 524--525 of the supplementary material. This expression can be interpreted as a quadratic inequality in $R_T(a^*)$, and solving it yields $R_T(a^*) = O( \beta \log T + \sqrt{C \beta \log T} )$. One advantage of this technique is that it also provides good bounds for stochastic environments with adversarial corruptions, i.e., for environments with $C > 0$.
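The last step is elementary algebra: with $L = \log T$, the inequality $R^2 \leq \beta L (R + C)$ gives $R \leq \frac{1}{2}\big(\beta L + \sqrt{\beta^2 L^2 + 4 \beta L C}\big) \leq \beta L + \sqrt{C \beta L}$, using $\sqrt{x+y} \leq \sqrt{x} + \sqrt{y}$. A quick numerical sanity check of this algebra (our own sketch, not from the paper):

```python
import math
import random

def max_root(beta, C, L):
    """Largest R satisfying R <= sqrt(beta * (R + C) * L), i.e. the positive
    root of R^2 - beta*L*R - beta*L*C = 0."""
    bL = beta * L
    return (bL + math.sqrt(bL * bL + 4.0 * bL * C)) / 2.0

rng = random.Random(0)
for _ in range(1000):
    beta = rng.uniform(0.1, 100.0)
    C = rng.uniform(0.0, 1e4)
    L = math.log(rng.uniform(10.0, 1e6))
    # The closed-form bound beta*L + sqrt(C*beta*L) dominates the exact root.
    assert max_root(beta, C, L) <= beta * L + math.sqrt(C * beta * L) + 1e-6
```

For $C = 0$ the two sides coincide at $\beta L$, recovering the purely stochastic $O(\beta \log T)$ rate.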
> Can you elaborate on the potential extensions of the EXO framework to other sequential decision-making problems beyond linear bandits? How can this framework be applied to partial monitoring problems or episodic MDPs?
We believe that the algorithmic framework and Theorem 1 can be extended to other problems, such as partial monitoring problems or episodic MDPs, relatively easily. However, the analysis of $\Lambda^*$ provided in Section 4.3, which is essential for exploiting Theorem 1, heavily relies on a structure specific to linear bandits, and we believe that extending this part to other problems would require a great deal of nontrivial thought.
> Are there any plans for future empirical studies?
While we agree with the importance of empirical studies, we have no plans to conduct experimental studies in the near future. As mentioned in our answer to your first question, we need to consider how to improve the implementation of the proposed algorithm in order to make it run in a realistic computation time for high-dimensional problems. We believe that full-scale numerical experiments will become possible only after an efficient implementation method is found.
References:
- E. Hazan and Z. Karnin. Volumetric spanners: an efficient exploration basis for learning. The Journal of Machine Learning Research, 17(1):4062–4095, 2016.
- L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures & Algorithms, 30(3):307–358, 2007.
- A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1998.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and the rebuttal. I am satisfied with the answer provided by the authors. I will not change my score. | Summary: This paper considers the best-of-both-worlds problem for linear bandits. They investigate an Exploration-by-Optimization approach for this problem and show $O(d^2 \log T)$ regret for the stochastic setting and $O(d\sqrt{T})$ regret for the adversarial setting.
Strengths: This paper investigates a new approach for Best-of-both-worlds problem in linear bandits.
In some special cases such as multi-armed bandits and hypercube, the authors provide improved results utilizing the problem structure.
Previous BoBW results mainly depend on FTRL-type algorithms; this work uses a new type of approach, which may hopefully be extended to BoBW problems with other structures.
Weaknesses: Given existing works on the BoBW problem for linear bandits, the convergence results provided by this paper are not optimal.
It is not clear what the advantage of the EXO algorithm is compared with standard FTRL-type algorithms. Can you give more discussion?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our paper.
We hope the following answers address your questions.
> Given existing works on the BoBW problem for linear bandits, the convergence results provided by this paper are not optimal.
As the reviewer pointed out, our bounds are not tight, and we regard this as an important open issue.
Section 6 discusses possible approaches to these issues,
and we intend to investigate the effectiveness of these approaches in future research.
> It is not clear what the advantage of the EXO algorithm is compared with standard FTRL-type algorithms. Can you give more discussion?
Conventional FTRL-type algorithms for linear bandits,
such as exponential weight algorithms with $p_t \approx q_t$ [Bubeck et al. 2012] and SCRiBLe with Dikin ellipsoid sampling [Abernethy et al., 2008, 2012],
have been originally designed for adversarial environments,
and do not achieve best-of-both-worlds regret bounds,
i.e.,
they do not have better bounds than $O(\sqrt{T})$ even in stochastic environments.
The advantage of EXO algorithms compared to this is that they achieve best-of-both-worlds regret bounds,
i.e.,
they have $O(\log T)$-regret bounds for stochastic environments.
References:
- J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In 21st Annual Conference on Learning Theory, 2008.
- J. Abernethy, E. Hazan, and A. Rakhlin. Interior-point methods for full-information and bandit online learning. IEEE Transactions on Information Theory, 58(7):4164–4175, 2012.
- S. Bubeck, N. Cesa-Bianchi, and S. M. Kakade. Towards minimax policies for online linear optimization with bandit feedback. In Conference on Learning Theory, pages 41–1. JMLR Workshop and Conference Proceedings, 2012.
---
Rebuttal Comment 1.1:
Title: Please engage in the rebuttal
Comment: Dear reviewer,
Please acknowledge the author's response and ideally tell us if that has changed your mind. | Summary: The paper establishes $\log T$-style instance-dependent regret upper-bounds for linear bandit algorithms built upon the FTRL framework and appropriate exploration-by-optimization style update steps.
Strengths: The $\log T$-style instance-dependent regret upper-bounds can be achieved on general convex action sets and a number of typical important specific action sets. The paper establishes the claimed results by first identifying a sufficient condition for the so-called self-bounding property for FTRL with exploration-by-optimization style update steps and thus $O(\log T)$ regret bounds (Theorem 1), then showing this condition is indeed satisfied under the continuous exponential weight methods. Overall, the paper is clearly written.
Weaknesses: Similar to most existing works on linear bandits that mainly discuss sample complexity instead of the actual computational cost, the paper by default assumes one can effectively and accurately perform the computation of EXO update steps. However, for most convex action sets except for some special ones, these steps may be computationally intractable on an ordinary RAM machine. I would be happy to see some discussion about the actual computation model (e.g., with oracle access to separation hyperplanes) and how to implement the proposed EXO algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See the weaknesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
We hope that our response below addresses your concerns.
> Similar to most existing works on linear bandits that mainly discuss sample complexity instead of the actual computational cost, the paper by default assumes one can effectively and accurately perform the computation of EXO update steps. However, for most convex action sets except for some special ones, these steps may be computationally intractable on an ordinary RAM machine. I would be happy to see some discussion about the actual computation model (e.g., with oracle access to separation hyperplanes) and how to implement the proposed EXO algorithms.
Brief notes on the computational complexity and on the implementation of the algorithm can be found in Appendix E of the supplementary material and in Remark 2.
We consider that one reasonable computation model would be the Real-RAM model equipped with a separation oracle for (the convex hull of) the action set $\mathcal{A}$ as the reviewer suggested.
Unfortunately, it is not known if there is an efficient algorithm to solve the optimization problems (9) as we discussed in Appendix E.
However,
we do not necessarily need to solve the optimization problem (9) exactly as noted in Remark 2.
For example,
it is expected that a polynomial-time algorithm that achieves regret bounds of Corollary 1 can be implemented given a separation oracle (or equivalently, a linear optimization oracle) for $\mathcal{A}$.
Indeed,
for achieving the regret bounds of Corollary 1,
it suffices to find $p$ and $g$ such that $\Lambda_{q_t, \eta_t} (p, g)$ is bounded by the RHS of (10).
The construction of such $p$ and $g$ is provided in the proof of Lemmas 2 and 3 (Lines 242--261),
which can be performed using a separation oracle for $\mathcal{A}$.
In fact,
we can obtain samples from $p$ by using techniques for log-concave sampling (e.g., [Lovász and Vempala, 2007]) and for computing a convex combination expression (cf. Carathéodory's theorem for convex hulls and [Schrijver, 1998, Corollary 14.1g]).
However, the analysis of log-concave sampling and the calculation of $H(p)$ (which is required for constructing $g$),
including the treatment of numerical errors, can be highly complicated,
and the computational cost, although polynomial, can be very large (cf. [Hazan and Karnin, 2016, Corollary 6.2]).
A more detailed discussion will be added in the revised version.
References:
- E. Hazan and Z. Karnin. Volumetric spanners: an efficient exploration basis for learning. The Journal of Machine Learning Research, 17(1):4062–4095, 2016.
- L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures & Algorithms, 30(3):307–358, 2007.
- A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1998.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the response. The discussion looks good to me. I will keep my score. | Summary: This paper studies the "best-of-both-world" problem for linear bandits, meaning that designing a single algorithm that simultaneously achieves a $O(\sqrt{T\log T})$ regret in the adversarial case and $O(\log T)$ regret in the stochastic case. They propose to use the Exploration-by-Optimization approach to obtain such bounds. The algorithm can be viewed as a continuous exponential weight algorithm with an optimally designed action sampling scheme.
Strengths: - The approach provides a unified and insightful way to obtain near-optimal best-of-both-worlds bounds for linear bandits and several of its special cases. The algorithm is conceptually simpler than previous work such as [Lee et al., 2021] and [Dann et al., 2023].
- The algorithm is based on a general framework of Exploration-by-Optimization, which has been proven to work for general adversarial decision making problems. Therefore, the proposed method has potential to be extended to more general cases, though the current analysis is only for linear bandits.
- There are several novel elements in the analysis, especially Lemma 3. It shows that continuous exponential weights with a simple sampling scheme already works well.
Weaknesses: - The analysis relies on the unique optimal action assumption, and the bound in the stochastic environment can be arbitrarily worse than the instance-dependent bound by Lattimore and Szepesvari (2017). However, these are general open questions in the field but not the specific issues of this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above sections.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you have invested in this review.
> The analysis relies on the unique optimal action assumption, and the bound in the stochastic environment can be arbitrarily worse than the instance-dependent bound by Lattimore and Szepesvari (2017). However, these are general open questions in the field but not the specific issues of this work.
As discussed in Section 6, we recognize the two points you raised as important issues.
Section 6 also discusses possible approaches to these issues,
and we intend to investigate the effectiveness of these approaches in future research.
---
Rebuttal Comment 1.1:
Comment: Thanks for the answer. I think this is a good work, and I keep my positive scoring. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Reflexion: language agents with verbal reinforcement learning | Accept (poster) | Summary: This paper presents a method by which LLMs can improve task performance through a form of learning in language: distilling unsuccessful attempts down to a "lesson learned" via a self-reflection process. The work shows that this idea leads to improved performance within a variety of task domains, and includes fairly thorough ablations, which I appreciated.
Strengths: I find this to be a fairly satisfying paper overall.
* The idea is compelling, and well executed.
* The experiments demonstrate that the approach is beneficial across three fairly diverse LM application domains, which shows that it is not simply a narrow improvement.
* The ablations are fairly thorough, and make the results more compelling.
* The paper is generally well-written and presented.
Weaknesses: I have a few suggestions on how the paper could be improved:
1) I think this work could be more thoroughly integrated with the existing literature on learning from language feedback, both within LLMs (e.g. https://arxiv.org/abs/2204.14146 and https://arxiv.org/abs/2303.16749) and RL agents (https://proceedings.neurips.cc/paper_files/paper/2022/hash/0113ef4642264adc2e6924a3cbbdf532-Abstract-Conference.html and https://arxiv.org/abs/2112.03753). Another recent paper https://arxiv.org/abs/2304.03442 proposed a similar reflection approach, though with different motivation; it came out quite recently though so there is no expectation to refer to it.
2) Evaluations on GPT-4 are challenging to interpret. We don't know what data the model was trained or tuned on, and even if the datasets used were generated after the pre-training cutoff, the model may have encountered them in the fine-tuning or RLHF stages. Indeed, OpenAI explicitly uses information submitted to their chat (at least) as training data; can we guarantee that no LeetCoder submitted the problems to GPT-4 to check their solutions? This problem is exacerbated if the benchmark has previously been evaluated against GPT-4, as it seems these problems have; there have been substantial concerns about GPT-4's coding performance evaluations being affected by data contamination. It would be best if the authors at least acknowledge these issues in the manuscript; ideally, they would also evaluate against an open language model for which the training process is more transparent, although undoubtedly such a model might not perform as well.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See above for some suggestions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your helpful suggestions!
### 1. Related work
Thank you for pointing out these, which we will cite and discuss!
### 2. Evaluation for closed-source large language models
- This is a general issue for all works that use closed-source LLMs, and motivated us to create our own LeetcodeHardGym Benchmark, which was specifically filtered to only contain problems created later than the training data cutoff date of GPT-4.
- Across all tasks, baseline methods based on GPT-4 are far from perfect, and Reflexion + GPT-4 significantly improves the performance, which indicates GPT-4 might not have memorized these tasks, and better prompting methods might be important.
- It is also possible to use open-source LLMs for Reflexion, e.g. see **General Response (2)**.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks to the authors for the response!
To clarify my comment on contamination: while I acknowledge that the new benchmark was created after the pretraining cutoff, we know that GPT-4's training has continued up to the present (at the very least to address vulnerabilities and introduce new features); it's quite reasonable to presume they use user chats for that training, and so it is still possible the model has been trained on these problems. The fact that it does better with Reflexion than alone is promising in that regard, but does not completely rule out the possibility (just because an LM has been trained on something does not mean it will always get it right). I still think it might be worth acknowledging this more explicitly, even if it is a general problem.
However, I do want to reiterate that I think this is a valuable work that should certainly be accepted.
---
Reply to Comment 1.1.1:
Title: Agree
Comment: Yes, we agree this is a valid concern. We will explicitly acknowledge such an issue and our attempts toward tackling the issue (e.g. new leetcode benchmark) in the revision.
Thanks again for finding our work valuable! | Summary: The paper introduces a novel framework, Reflexion, to improve the learning capabilities of language agents. Instead of traditional reinforcement learning methods, Reflexion uses linguistic feedback, where agents verbally reflect on task feedback and store these reflections in an episodic memory buffer to enhance decision-making. The authors demonstrate that Reflexion, which can incorporate various types and sources of feedback, outperforms baseline agents across diverse tasks, including text games, natural language question-answering, and code generation.
Strengths: 1. The paper introduces a unique framework, Reflexion, that uses linguistic feedback for reinforcement learning in language agents. This is a significant departure from traditional methods that rely on weight updates.
2. The paper offers a good explanation of their proposed methods and is easy to follow. Good visualizations are also included to help with reader understanding.
3. Abundant experiments have been conducted across 3 different tasks, covering text games, question answering, and code generation. A comprehensive analysis has also shown the effectiveness of the reflexion method.
Weaknesses: 1. Although reflexion shows significant improvements over baselines on three different tasks, I still feel that reflexion shows limited generalization ability on different tasks. The key modules in the framework, including evaluator and self-reflection, have varying designs for different tasks. In other words, the specific design details may only work on one task and may fail on the other ones.
2. The results for comparison in the question-answering task seem to be unfair. The Reflexion framework leverages the binary environment reward indicating whether the answer matches the ground-truth answer. This leaks a little information from the test question-answer pairs. At least the ReAct baseline did not include such information.
3. There are some related works mentioned in Section 2. However, there is no comparison between Reflexion and these methods in the experiment sections.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you offer some statistics on the running time of the reflexion framework? As it seems that the optimization process is time-consuming.
2. I am very curious about the comparison between reflexion and methods like self-refine and self-debugging.
3. Can you offer some detailed analysis of the differences between your method (reflexion) and simple ReAct+self-refine?
4. Is the quality of test cases generated in code generation tasks important? Are they guaranteed to be correct?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately introduced the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
### 1. the specific design details may only work on one task and may fail on the other ones
Please see the General Response.
### 2. Accuracy comparisons
You are right. The Reflexion performance on HotpotQA is not meant as a SoTA claim. Rather, it is intended to show how performances can be improved with multiple rounds of reflections. ALFWorld and Coding results are fair comparisons. We will better clarify it in the paper.
### 3. Comparison to other methods
- Self-refine and self-debugging: self-refine is designed for text generation tasks, and self-debugging is designed for coding tasks. In contrast, Reflexion is designed for general agent tasks, which covers text generation, coding, and also embodied tasks (e.g. AlfWorld).
- ReAct + self-refine: self-refine was not designed for acting tasks, cannot generate self-tests, and cannot incorporate environment or self-generated reward signals as Reflexion does.
- Experiments: In our programming ablation experiments, we omit various components, and these ablations are similar to baseline methods such as CodeT and Self-Debugging. Let us know if you think some other important baseline method is missing.
### 4. Running time statistics
We evaluated `starchat-beta` on HumanEval Python - 161 problems (as mentioned in the General Response) and recorded wall time on 1 NVIDIA H100 GPU.
*avg over 8 runs
Simple: 00:25:18 (hh:mm:ss)
Reflexion: 01:17:06 (hh:mm:ss)
### 5. Importance of tests
Yes, the quality of the tests in code generation is important and a bottleneck. In **Table 2**, we show that the high false positive rate of MBPP Python tests (relative to HumanEval Python) largely contributed to the inferior performance of Reflexion.
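To make this concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of how an `exec`-based evaluator can gate a candidate solution on self-generated unit tests, and how a weak test suite yields a false positive for a buggy solution:

```python
def passes_tests(candidate_src, tests):
    """Accept a candidate implementation iff every test assertion passes."""
    env = {}
    try:
        exec(candidate_src, env)  # define the candidate function in `env`
        for test in tests:
            exec(test, env)       # each test is an `assert ...` statement
    except Exception:
        return False
    return True

# A buggy `add` that returns 0 whenever x == 0:
buggy = "def add(x, y):\n    return 0 if x == 0 else x + y"
weak_tests = ["assert add(1, 2) == 3", "assert add(3, 0) == 3"]
strong_tests = weak_tests + ["assert add(0, 2) == 2"]

print(passes_tests(buggy, weak_tests))    # True  (false positive: bug never exercised)
print(passes_tests(buggy, strong_tests))  # False (the added test catches the bug)
```

A suite with false-positive tests, as in MBPP Python, accepts buggy candidates and so caps how much iterative refinement can help.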
---
Rebuttal Comment 1.1:
Title: Thanks for the Response
Comment: I think most of my concerns have been addressed by the authors. Thanks for the efforts during the rebuttal. Although some of the points I listed in the review seem to be framework limitations, I still find the paper interesting and useful. I will maintain my positive score. | Summary: This paper proposes a novel framework to do RL style learning through LLM. The idea is interesting. The authors have also done evaluation to show this technique/framework can work better than baseline methods w/ a few different tasks.
Strengths: The author proposed a novel framework to let LLM learn new tasks through trial and error. The framework seems general and can handle multiple tasks.
Weaknesses: 1. While the framework is general, the devil is in the details. It seems that each specific task requires different heuristics and tuning. Also, different tasks may require different LLMs/VLMs as the actor/evaluator/self-reflection modules.
2. In terms of ablation, since this paper focuses on the framework, it seems more useful to ablate different framework level choices. i.e. is the self-reflection module really the key component in this framework, or can we bypass it by some other way, such as prompting?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Will this framework also work well w/ VLMs? (if we also have vision/images as input)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
### 1. different heuristic/tuning/LLM/VLM for each task
- In this work, there is no finetuning or VLM used.
- We only used OpenAI GPT models as LLMs in the paper. In **General Response (2)**, we show it's also possible to use other LLMs, or use different LLMs for different roles (actor, evaluator, self-reflection). We believe such flexibility is a benefit, rather than an issue.
- Prompt design is task-specific to some degree, which is true for all prompting methods. However, Reflexion allows a principled way to design prompted roles (actor, evaluator, self-reflection) and their control flow. Also, the self-reflection prompt is mostly similar across different tasks.
- In summary, as you said, "the framework is general" to solve diverse tasks because it could flexibly incorporate task heuristics with principled prompt role design and control flow.
### 2. Necessity for self-reflection
- We cannot "bypass self-reflection by prompting", because self-reflection is done via prompting in our work.
- Ablating self-reflection is equivalent to ReAct/CoT baselines, which we reported in all tasks.
### 3. Possible application with VLMs
We believe the idea of generating and conditioning on text reflections should be possible and useful for any language-conditioned policies, including ones using VLMs. This is an interesting question and a good future direction. | Summary: The paper presents Reflexion, a framework to reinforce language agents by using language feedback. Reflexion agents verbally reflect on task feedback signals, then store their own reflective text in a memory buffer to improve their decision-making in future trials. Reflexion is flexible and effective across diverse tasks (sequential decision-making, coding, language reasoning). The paper also explores different types and sources of feedback signals, feedback incorporation methods, and agent types, and provides insights into how they affect performance.
Strengths: 1. This paper proposes Reflexion, a new framework for “verbal” reinforcement that conditions a policy on the agent’s “memory” which is beneficial for the development of LLMs, reinforcement learning, reasoning, code generation, and so on.
2. The results across diverse tasks including sequential decision-making, language reasoning, and programming are very promising with a large advantage.
3. This paper proposes a new benchmark, LeetcodeHardGym, consisting of 40 challenging Leetcode questions in 19 different programming languages, which is beneficial for the program synthesis and code generation community.
Weaknesses: 1. Various LLMs (the GPT family, the vicuna family, and models with different sizes) should be evaluated to show reflexion’s generalization ability across models. Although reflexion performs well, it is important to analyze its border.
2. Ablation study about long/short term memory. For example, in ALFWorld the baseline is react which resets the environment and starts a new trial whenever self-reflexion is suggested. I take it as “reflexion w/o long/short term memory”. Then, how about the performance of “reflexion w/o long-term memory (but w/ short-term memory)” and the performance of “reflexion w/o short-term memory (but w/ long-term memory)”?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. According to line 117, the evaluator is not necessarily an LLM, right? But it is pictured as “LM” in figure 2 which is a little bit confusing.
2. In algorithm 1, $\pi_\theta$ is not relevant to trajectory $\tau_{t-1}$, the short-term memory. But in figure 2 (a) and section 3, the Actor receives short-term memory as its input.
3. How does short-term memory work on programming tasks? Is reflexion = self-debugging + CodeT?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The author has discussed the limitations well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the great comments!
### 1. Other models
Please see **General Response (2)**, where we run additional experiments on closed and open source LLMs.
### 2. Short vs long-term memory
In this work, we define short-term memory as the current trajectory and long-term memory as the reflections. It is not possible to omit short-term memory, and a long term memory ablation would be the baseline ReAct/CoT experiment.
### 3. Evaluator
Yes, the Evaluator does not have to be an LLM. We will revise the figure.
### 4. Policy
Yes, the policy is conditioned on the trajectory (and possibly other reflections). We will update the algorithm specification.
### 5. Related work
Reflexion adds a self-reflection step and unit test generation step to Self-Debugging - and a self-debugging step to CodeT. So for coding Reflexion has important ideas of both self-debugging and CodeT, yet unlike self-debugging and CodeT, Reflexion is a more general framework that works for diverse tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for the experiments and response.
(1) Although Reflexion is a general framework, it still seems that on programming tasks "reflexion = self-debugging + CodeT".
(2) As a general framework for LLMs, there haven't been enough models considered in the evaluation (like the ones measured by CoT). But I understand that conducting more comprehensive testing requires more time. I hope that the authors can include a more comprehensive set of tests in the future.
For the effectiveness of the proposed method, I keep my score and vote for acceptance. | Rebuttal 1:
Rebuttal:
We appreciate all of the great feedback from our reviewers.
### 1. Related Work, Ablations, and Baselines
Reflexion is a general framework for reasoning, decision-making, and programming problems. A typical Reflexion loop consists of a generation, evaluation, and feedback function. Thus, the setups of the individual components differ across tasks, but the overall structure is the same. To demonstrate the requirement for self-reflection, we precisely removed individual components of the loop and evaluated accuracy. On HotPotQA (Figure 4C), we show that having simple episodic memory (full past trajectories) in the context was not as effective as the self-reflection instruction. On HumanEval Rust (Table 3), we show that removing the self-reflection step results in the same performance as the baseline accuracy. Many other related works such as Self-Debugging and CodeT lack at least one of: self-written tests, natural language reflections, or generalizability.
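As an illustration, the trial loop described above (actor → evaluator → self-reflection → memory) can be sketched as follows; the `toy_*` functions are hypothetical stand-ins for the LLM-backed roles, not the actual prompts used in the paper:

```python
def reflexion_loop(actor, evaluator, reflector, task, max_trials=3):
    """Trial loop: act, evaluate, and on failure store a verbal reflection."""
    memory = []  # long-term memory: episodic buffer of self-reflections
    trajectory = None
    for trial in range(1, max_trials + 1):
        trajectory = actor(task, memory)            # actor conditions on past reflections
        if evaluator(task, trajectory):             # binary success signal
            return trajectory, trial
        memory.append(reflector(task, trajectory))  # distill the failure into a lesson
    return trajectory, max_trials


# Hypothetical stand-ins for the LLM-backed roles: this toy actor only
# succeeds once a reflection about the earlier failure is in memory.
def toy_actor(task, memory):
    return "correct" if memory else "wrong"

def toy_evaluator(task, trajectory):
    return trajectory == "correct"

def toy_reflector(task, trajectory):
    return f"Attempt '{trajectory}' failed; avoid repeating it."

result, trials = reflexion_loop(toy_actor, toy_evaluator, toy_reflector, task="demo")
print(result, trials)  # prints: correct 2
```

The per-task differences amount to swapping in the appropriate actor, evaluator, and reflector while this control flow stays fixed.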
### 2. Different LLMs and Additional Experiments
- HotpotQA: We used `text-davinci-003` for the Actor and Self-Reflector.
- Alfworld: We used `text-davinci-003` for the Actor, Evaluator, and Self-Reflector.
- Programming tasks: We used `gpt-4` for the Actor, Evaluator, and Self-Reflector.
- Additional experiment #1:
- We ran HumanEval Python with `starchat-beta`, an open-source model that was pre-trained on a large open-source dataset of permissively licensed code, `The Stack`, as the Actor, Evaluator, and Reflector.
- We found that low reflection generation quality prevented improvement in performance.
- Additional experiment #2:
- We ran HotPotQA with `gpt-3.5-turbo` and `gpt-4` as the Actor, Evaluator, and Reflector to test Reflexion performance using stronger models.
  - Unsurprisingly, we found that Reflexion agents were still able to learn with the stronger models.
We ran two new experiments to evaluate Reflexion using `starchat-beta` on HumanEval, and `gpt-4` and `gpt-3.5-turbo` on HotPotQA.
| | HumanEval Pass@1 |
| -------- | ------- |
| Baseline | 0.26 (avg over 8 runs); SE = 0.00481 |
| Reflexion | 0.26 (avg over 8 runs); SE = 0.00305 |
StarChat is qualitatively and quantitatively not suitable for running a Reflexion loop. For example, given the following erroneous implementation:
```python
# Adds two numbers
def add(x, y):
if x == 0:
return 0
else:
return x + y
```
And feedback from the unit test suite:
```python
# passing tests
assert add(1, 2) == 3
assert add(3, 0) == 3
# failing tests
assert add(0, 2) == 2 # Output: 0
```
StarChat generates the following self-reflection, which misdiagnoses the failing test:
In this case, it appears as though there may have been an issue with scope or variable shadowing occurring within your function definition for add(). Specifically, in line 4 of your provided code snippet, where you define the local variable x with the value of 0, but then later reference the global variable x (which has no initial value assigned), it could potentially cause issues depending on how the rest of your function is written.
To resolve this error, I would recommend explicitly defining a new variable called zero which takes on the value of 0 at the beginning of your function, like so:
```python
def add(x,y):
zero = 0
if x==0:
return zero
else:
return x+y
```
This should ensure that the intended behavior of returning zero when x=0 remains intact while also preventing potential confusion around scoping/shadowing. Let me know if you need anything further!
On the other hand, we scaled up the models for Reflexion evaluation on HotPotQA. We found that Reflexion enabled CoT and ReAct agents to improve performance.
| Model | Baseline | Reflexion |
| -------- | ------- | ------- |
| CoT (GT) + text-davinci-003 | 0.60 | 0.77 |
| CoT (GT) + gpt-3.5-turbo | 0.57 | 0.71 |
| CoT (GT) + gpt-4 | 0.68 | 0.80 |
| ReAct + text-davinci-003 | 0.3 | 0.55 |
| ReAct + gpt-3.5-turbo | 0.26 | 0.38 |
| ReAct + gpt-4 | 0.39 | 0.51 |
We believe that Reflexion will be strengthened as LLMs improve given that the ability to effectively self-reflect is crucial to performance. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces Reflexion, a framework for in-context reasoning and decision-making for LMs. It allows the LM to iteratively attempt to solve a task and then reason about the results of its attempts and plan improved courses of action. The authors evaluate Reflexion on a diverse array of tasks (navigating and acting in text environments, question answering, code generation) and report impressive results, in some cases establishing new state-of-the-art. They also compare Reflexion to useful baselines and conduct instructive ablations.
Strengths: 1. The paper proposes a sophisticated planning algorithm (composed of short-term and long-term memory, gating mechanisms and feedback simulators) that can be run (almost) entirely in-context by an LM. I think this approach is very powerful and this paper does a good job showing its promise.
2. The paper is well-written and easy to follow. I like the error analysis paragraph accompanying each section.
3. I appreciate the effort put into thorough benchmarking of code generation capabilities, including translating MBPP and HumanEval to Rust and introducing LeetcodeHardGym, which prevents test cases from having been leaked into GPT-4 data.
4. The empirical results obtained by Reflexion are impressive, especially pass@1 for HumanEval.
Weaknesses: 1. It would be useful to compare Reflexion to baselines also in terms of inference efficiency. For instance, how many tokens are typically needed for a Reflexion run (vs e.g. ReAct or CoT) to reach an answer in a given task? Is the performance of Reflexion bottlenecked by context window size, and could we expect it to improve as context windows of LMs increase?
2. Similarly, I am a bit disappointed that authors don’t report any error bars. I understand inference costs, but the authors could at least compute error bars based on variance across problems (in code generation) or across tasks (in AlfWorld and HotPotQA). Alternatively, I’d rather see a less diverse task array than no error bars.
3. Authors frequently compare Reflexion to human cognition (e.g., “This is akin to how humans iteratively learn to accomplish complex tasks”, line 31-32; “similar to the way that humans remember fine-grain recent details while also recalling distilled important”, lines 141-143). I think these comparisons are overblown, not supported by cognitive science citations and potentially misleading. For instance, there are numerous dissimilarities between the kind of memory being used in Reflexion and human memory, and the topic deserves a more careful discussion than just highlighting superficial similarities in passing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It's not entirely clear to me which parts of the algorithm happen within the "main" context window of Reflexion and which are encapsulated into separate, local context windows that are then discarded (my understanding is that this is the case, e.g., for unit test generation). I'd appreciate if the authors could make that more clear.
2. How does Reflexion scale (down) with model size? Are GPT-4-level capabilities necessary for it to yield improvements or does it also work with `gpt3.5-turbo` and `text-davinci-003`?
3. How was the error analysis in section 4.1 conducted, i.e. who classified failures as being due to hallucination or due to insufficient planning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Certain limitations and societal impacts are discussed. I think it could be good to complement that with a discussion of risks associated with making LMs prompted to be agents act autonomously by allowing them to set their own subgoals and to plan with rich world models (e.g., Perez et al., 2022; Ngo et al., 2023). This could have unknown failure modes if those agents are given unrestricted Internet access and are not supervised.
Perez et al., 2022. Discovering Language Model Behaviors with Model-Written Evaluations
Ngo et al., 2023. The alignment problem from a deep learning perspective
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your feedback!
### 1. Token requirements and context
Reflexion computation scales linearly with respect to the number of episode attempts.
However, with regard to context sizes, Reflexion only keeps the last N reflections in the context ($N=1$ for coding and $N=3$ for HotpotQA and AlfWorld), so the context length is not much longer than the base ReAct or CoT context length.
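The bounded reflection memory described above can be illustrated with a simple sliding window (a sketch, not the authors' code; the reflection strings are placeholders):

```python
from collections import deque

# Illustrative sketch: keeping only the last N reflections bounds the
# extra context added on top of the base ReAct/CoT prompt
# (N=1 for coding, N=3 for HotPotQA and AlfWorld in the paper's setup).
memory = deque(maxlen=3)
for i in range(5):
    memory.append(f"reflection {i}")
print(list(memory))  # only the three most recent reflections survive
```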
### 2. Error bars
We were unable to run GPT experiments for many trials due to OpenAI API costs. See the **General Response (2)** for the error bars for StarCoder runs.
- "at least compute error bars based on variance across problems (in code generation) or across tasks (in AlfWorld and HotPotQA)": we are not sure what this means; we would be happy to provide any statistics based on existing results.
### 3. References to human cognition
Thank you for the feedback. We will tone-down lines 31-32 and lines 141-143.
### 4. Reflexion performance scaled down models
Please see the **General Response (2)**, where we show additional experiments with both stronger and weaker LLMs.
### 5. Error classification for Alfworld
Error cases were classified by a simple heuristic: if a trajectory contained more than 30 actions, we classified the trial as poorly planned. If a trajectory contained fewer than 30 actions but contained 3 consecutive identical action choices, the trial was classified as a hallucination. This heuristic was implemented after our initial experiments, as we found that agents were often unable to escape this repetitive action-choice loop.
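A minimal sketch of this heuristic (illustrative only; the label strings and thresholds as defaults are hypothetical names for the categories described above):

```python
# Sketch of the trajectory-classification heuristic described above.
def classify_failure(actions, length_cutoff=30, repeat_cutoff=3):
    if len(actions) > length_cutoff:
        return "poor planning"
    # look for `repeat_cutoff` consecutive identical action choices
    run = 1
    for prev, cur in zip(actions, actions[1:]):
        run = run + 1 if cur == prev else 1
        if run >= repeat_cutoff:
            return "hallucination"
    return "other"
```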
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I especially appreciate experiments with GPT-3.5 series models.
I feel my Question 1 from the review wasn't answered. Please let me know if it's not clear.
> "at least compute error bars based on variance across problems (in code generation) or across tasks (in AlfWorld and HotPotQA)": we are not sure what this means; we would be happy to provide any statistics based on existing results.
Sure! For instance, HumanEval consists of 164 problems. You could, for a given method, compute pass@1 for each of 164 problems and then average those pass@1 scores across those 164 problems and compute standard errors across 164 problems. This can work even if each pass@1 is estimated based on a single solution attempt (i.e. it's a binary variable; in this case, you can use the closed form of the variance of a binomial distribution).
---
Reply to Comment 1.1.1:
Comment: **Response to Question 1:**
Reflexion does not have a single "main" context window. Instead, there are separate implementation, evaluation, and reflection context templates. These templates may vary per task to include domain-specific information, such as the permissible action space. The content per step in the loop is specified below:
Implementation:
- memory (last N items in the list of reflections)
- problem specification
- previous implementation (programming tasks only)
Evaluation:
- previous implementation
- evaluation instruction
Reflection:
- previous implementation
- binary success status from evaluation
- reflection instruction
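An illustrative sketch of how these three local contexts might be assembled (the function names and joining format are assumptions, not the paper's code):

```python
# Hypothetical sketch: each Reflexion step builds its own local context
# from the components listed above, rather than sharing one main window.
def implementation_context(reflections, spec, prev_impl=None):
    parts = list(reflections) + [spec]   # memory (last N reflections) + spec
    if prev_impl is not None:            # programming tasks only
        parts.append(prev_impl)
    return "\n\n".join(parts)

def evaluation_context(prev_impl, eval_instruction):
    return "\n\n".join([prev_impl, eval_instruction])

def reflection_context(prev_impl, success, reflect_instruction):
    # `success` is the binary status produced by the evaluation step
    return "\n\n".join([prev_impl, f"success: {success}", reflect_instruction])
```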
**Error bars**
Thank you for the additional information! Initially, we did not do trial-level variance estimates over question success because HumanEval problems are not equal in difficulty. However, here is the standard error for HumanEval (python):
SE = sqrt((0.91)(1-0.91)/164) = 0.022
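This closed-form binomial standard error can be checked directly:

```python
import math

# Closed-form binomial standard error for pass@1 = 0.91 over 164 problems.
p, n = 0.91, 164
se = math.sqrt(p * (1 - p) / n)
print(round(se, 3))  # prints 0.022
```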
Thank you very much for your clarification! | null | null | null | null | null | null |
MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion | Accept (spotlight) | Summary: This paper proposes a new method to generate multi-view images, ensuring pixel-to-pixel multi-view consistency.
The multi-view consistency is guaranteed by the correspondence aware module, which is utilized in the latent diffusion multi-image generation process.
The proposed method outperforms previous works for tasks related with multi-view image generation, especially in the multi-view consistency.
Strengths: 1. The idea to guarantee pixel correspondence in the diffusion process is novel and reasonable.
2. The supplementary material provides codes, which show some details of the implementation.
3. The proposed method may improve the results for text-to-texture generation given meshes.
4. For different view images, the user can input different prompts.
Weaknesses: 1. To obtain the pixel correspondence for multi-view images, the proposed method requires depth maps as input.
2. The authors do not explain how to deal with depth occlusion, or with the situation where pixels in the target images are not in the view of the source image.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does the proposed method perform for objects? Could we use the proposed method to generate textures given object meshes?
2. How do the authors deal with depth occlusion and the situation where pixels in the target images are not in the view of the source image?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations and the work does not have negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer VE1c for constructive suggestions.
---
**W1:** Our current method is restricted to the generation of multi-view images where one-to-one correspondences are readily available. Unfortunately, this condition isn't fulfilled in casual multi-view image generation scenarios, and thus our method requires depth inputs. Developing methods that can accommodate casual multi-view image generation without the necessity of one-to-one correspondence should be an important direction for future work.
---
**W2:** We have enhanced the positional encoding (Figure 3 in the main paper) used in the correspondence-aware attention by incorporating depth differences between source and target views. Specifically, we project the depths of correspondences from the target views onto the source views and subsequently calculate the depth differences. Notably, in the case of occlusions, we do not apply any specific handling. We anticipate that the standard SD modules will interpolate or inpaint in the presence of occlusions by learning from the depth differences. For areas of occlusion where two images lack overlap, there's no requirement to ensure consistent outcomes. We updated the results in the global rebuttal's PDF file. Please refer to the *GR1 of the general response* for updated results on multiview depth-to-image generation.
---
**Q1:** We believe that MVDiffusion is able to generate textures given object meshes since our method can generate textures of a scene, as shown in the PDF file of the global response. Please refer to the *GR1 of the general response*.
---
**Q2:** Please see the *GR1 of the global response* for our answers. | Summary: This paper introduces MVSDiffusion, a diffusion framework that generates multi-view images with content consistency. The problem setting is interesting and essential in practice. The proposed correspondence-aware attention mechanism provides cues for multi-view consistency.
Strengths: 1. The paper proposes a novel problem setting: generating images for multiple camera poses with consistent contents.
2. The paper proposes a multi-view diffusion technique to obtain content consistency: extract features from other views guided by depth maps.
3. Panorama generation can also be improved via the multi-view diffusion framework.
Weaknesses: 1. MVDiffusion relies on given high-quality multi-view depth maps. However, depth maps obtained for leaves and strings may lack detail, which may impact the generated image quality.
2. The correspondences are derived from warped depth, so the method can only support rigid static scenes.
3. The interpolation module is time-consuming, but the time consumption of each module is not provided. Please report these timings.
4. The tailored super-resolution module is not novel.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The presented panorama is blurry compared with SD (pano). Why?
2. From my point of view, MVDiffusion is able to generate multi-view images and thus can synthesize images from novel 3D views instead of pure rotations (panorama). But all the presented image sequences are almost pure rotation. What's the problem?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The multi-view image generation relies on given geometries.
2. Regions that are not captured by the pre-defined camera poses cannot be recovered.
3. As the framework needs to interpolate camera poses between keyframes. The time consumption may be quite large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the constructive feedback and suggestions. We reply to the questions/concerns in the following:
---
**W1.** While our experiments are currently with the ScanNet dataset, we believe that the most pragmatic usage is the creation of textures for handcrafted scene meshes. These meshes typically possess flawless geometries. This holds significant potential for real-world applications, particularly in design, visual effects, and entertainment industries.
---
**W2.** Yes. Right now, our method only applies to static scenes, but non-rigid scenes should be future work since we can still obtain correspondences through non-rigid transformation.
---
**W3.** We measure the run time on a single A6000 GPU. For panorama generation, we generate 8 perspective images and do not use the interpolation module. For multiview depth-to-image generation, we measure the runtime of the generation and super-resolution modules by generating 12 images, and the runtime of the interpolation module by generating 4 interpolated images. The time consumption is as follows and will be added to the paper:
| Task | Generation module | Interpolation module | Super-resolution module |
|------------------|:----:|:-----:|:-----:|
| Panorama | 67s | / | 1120s |
| Multiview depth-to-image generation | 128s | 104s | 1443s |
---
**Q1.** There is still inconsistency between perspective images; when they are stitched together, the panorama becomes blurry. The reason is that we use a resolution of 256 * 256 for the generation module, so the latent space is low-resolution and pixel-level inconsistencies remain. This could be mitigated by training the generation module at a higher resolution (512 * 512). We show one example in Figure 1 of the rebuttal file, which is much clearer. Please refer to *GR3 of the general response* for updated results of panorama generation.
---
**Q2.** As we indicated in the abstract, our current method is restricted to the generation of multi-view images where one-to-one correspondences are readily available. Unfortunately, this condition isn't fulfilled in casual multi-view image generation scenarios. Therefore, developing methods that can accommodate casual multi-view image generation without the necessity of one-to-one correspondence is a valuable direction for future work. | Summary: This paper presents MVDiffusion, a new diffusion model to generate consistent multi-view images, e.g., panorama. The authors propose a novel correspondence-aware attention mechanism in order to enforce pixel-level correspondence and cross-view consistency. More specifically this mechanism is used in three modules: a generation module that generates consistent low-resolution multi-view images; an interpolation module that generates images in between; a super-resolution module. After injecting such module into Stable Diffusion Unet layers and fine-tuning the model on multi-view image dataset, the model can synthesize consistent multi-view images based on text or depth.
Strengths: 1. The main idea (improving multi-view consistency through a correspondence-aware attention mechanism) is novel, simple, effective and easy to understand. The generated results are very impressive, see figure 1, 4, 5, 6 and all the images in the supplementary.
2. The paper is well-written especially the method part where the main design is demonstrated clearly, see section 4.1 and figure 3. The authors also provide the code in the supplementary.
3. The comparison with previous methods and some straightforward baselines are comprehensive and the improvements are convincing.
Weaknesses: 1. The pipeline figure (Figure 2) could be further improved: this figure is supposed to make the audience understand the whole workflow without looking at the method text, however, for now, in this figure, it's a bit unclear to me how these modules are associated with each other at first glance.
2. Adding a brief section on failure cases would be great: the results in the main paper and supplementary are truly impressive, however, I'm also curious what are the common failure cases of MVDiffusion. Adding a short paragraph discussing this would be beneficial to the whole community.
3. I feel a bit confused about the panorama generation during inference time: in Line 145, it says "The generation module generates eight 256 × 256 images, and the super-resolution module upscales to eight 1024 × 1024 images". I am just wondering where's the interpolation module then? Or let me put this way, how many images are generated by the generation module and how many are generated by the interpolation module? I assume the key frames are generated by the generation module and more images in between are generated by the interpolation module?
4. How to handle the conflicts in the overlap region between two generated images? If I understand correctly, although correspondence attention works very well, it still cannot guarantee the overlap region between two generated images is *exactly* matched, so I'm just curious how this mismatch is handled. I asked this because the stitched 8 perspective images look very consistent.
5. Three modules are proposed in this paper, it would be great if an ablation study could be conducted such that the audience could understand the influence of each individual module better, e.g., how much difference it would make if removing the interpolation module?
6. In addition to the stitch image, it would be great if showing more *densely* (much more than 8) generated multi-view images and make it an animation, such that the multi-view consistency could be better evaluated qualitatively by the audience.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be great if the authors could respond the points mentioned above. Thanks!
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer x9w4 for the valuable questions and suggestions. We address the comments in the following:
---
**W1:** Thanks for your suggestion. For the final version, we plan to enhance Figure 2. We will illustrate a pipeline flow to show that images are initially created by the generation module, followed by the interpolation module (multiview depth-to-image generation), and finally by the super-resolution module. Additionally, we will include a note indicating that the interpolation module does not participate in the panorama generation.
---
**W2:** Considering our model allows users to dictate distinct prompts for each perspective image, significant content change can emerge when these prompts undergo drastic alterations. Please refer to *GR4 of the general response* for examples of failure cases. We will add more examples in the final version of the paper.
---
**W3:** In our panorama generation process, we do not employ the interpolation module. The reason for this is that the 8 perspective images we use already encompass a full 360-degree view without any occlusions. Therefore, the use of interpolation to fill in gaps or unseen areas becomes unnecessary. Conversely, the interpolation module is vital in multiview depth-to-image generation, when dealing with a large number of multi-view images. In cases where the quantity of images exceeds the memory capacity of a single GPU, the interpolation module steps in to effectively manage the images. We will clarify this in the final version.
---
**W4:** In our approach, we do not explicitly address conflicts, as our objective is for the diffusion model to learn to generate perfectly consistent images. In panorama generation, we don't see obvious inconsistencies. However, inconsistencies do occur in multiview depth-to-image generation. We believe these inconsistencies could arise from two key factors. Firstly, the presence of noisy depth data can introduce errors into the generation process, leading to inconsistencies in the output images. Secondly, the latent space employed might not have high enough resolution. This lack of granularity could result in the loss of pixel-level consistency, since the attention operation is conducted solely within the latent space.
---
**W5:** If we remove the interpolation module, we cannot generate images that cover the whole scene mesh, as presented in Figure 2 of the rebuttal file. As for the ablations on the super-resolution module, we are not able to present qualitative results in the single-page rebuttal PDF due to the space constraint, but we will include it in the final version.
---
**W6:** We have made the video, and the link has been sent to AC as guided by the emailed instructions.
---
Rebuttal Comment 1.1:
Title: thanks for the detailed response
Comment: Thanks a lot to the authors for the detailed response! I don't have more questions but just curious where I am able to see the video? (I think this is supposed to be a question for the area chair instead of the authors then).
---
Reply to Comment 1.1.1:
Comment: I have informed AC to send the link. | Summary: The paper proposes a multiview latent diffusion model that is aware of the correspondence between views. Equipped with correspondence-aware attention blocks, the proposed generation module, interpolation module and the super-resolution module help MVDiffusion outperforms existing works.
Strengths: The proposed Correspondence-Aware Attention block connects different views. Also, keyframes are first generated and then upsampled both spatially and temporally.
Weaknesses: My biggest concern with the paper is the evaluation of multi-view consistency. On L217 the authors mention using pixel-level similarity for multi-view consistency. Is the PSNR computed between the generated image and the ground-truth image? Does that mean depth-to-image generation is deterministic? It might be a good idea to conduct some evaluation in 3D. For example, train a NeRF with generated images and evaluate the difference between the rendered image and the MVDiffusion-generated image. Another way might be calculating some reprojection error, since we know the camera poses of the generated images.
Also, for panorama generation, the authors only compare with Stable Diffusion. It would be nice if DiffCollage or MultiDiffusion were also considered for comparison.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 2y4s for the constructive suggestions and feedback. We answer the questions as follows:
---
**1. Is the PSNR computed between generated image and ground truth image?**
No. The PSNR is calculated between two consecutive generated images in their overlapping regions (L219 in the main paper). The generation is not deterministic, so the generated images differ from the ground-truth images.
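For reference, the overlap-region PSNR can be sketched as follows (an illustrative sketch only, not the paper's code; how the overlap regions are cropped from the two images is not shown):

```python
import math

# PSNR between two overlap crops, given as flat lists of pixel intensities
# (illustrative; the paper computes this over consecutive generated images).
def overlap_psnr(region_a, region_b, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(region_a, region_b)) / len(region_a)
    if mse == 0:
        return float("inf")  # identical overlaps: perfect consistency
    return 10.0 * math.log10(max_val ** 2 / mse)
```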
---
**2. It might be a good idea to conduct some evaluation in 3D. For example, train a NeRF with generated images and evaluate the difference between rendered image and the MVDiffusion generated image.**
We conducted evaluations in 3D using TSDF fusion to integrate the depths and images into a mesh. For experiment details, please refer to the *GR1 of the general response* for new qualitative results.
---
**3. Would be nice if DiffCollage or MultiDiffusion is also considered for comparison?**
Please see the *GR2 of the general response* for a qualitative comparison with MultiDiffusion or DiffCollage. | Rebuttal 1:
Rebuttal: We thank all reviewers and appreciate the constructive comments and the recognition of novelty, and we are grateful for all the positive initial ratings (two accept, one weak accept, one borderline accept).
This general response provides updated figures and accompanying discussions to answer several key comments/questions from the reviewers. We also uploaded a video showing a rotating camera view, as requested by reviewer x9w4. The link to the video has been sent to AC, following the emailed instructions. We encourage every reviewer to watch this video.
We first summarize the key comments/questions covered in this response in the following:
---
### Summary of key questions/comments:
**S1.** Reviewer 2y4s asks for evaluating MVDiffusion in 3D.
**S2.** Reviewer 2y4s asks for a comparison with MultiDiffusion or DiffCollage.
**S3.** Reviewer x9w4 asks for failure cases of MVDiffusion.
**S4.** Reviewer xhAT asks why the presented panorama is blurry compared with SD (pano).
**S5.** Reviewer VE1c asks how MVDiffusion deals with the depth occlusion.
---
We will then address the comments/questions summarized above by referring to the new experimental results attached in the single-page PDF:
**GR1. (for S1 and S5) Updated results on multiview depth-to-image generation.**
In our rebuttal, Figure 2 showcases the ability of MVDiffusion to produce high-quality textures for scene meshes. The following modifications have been implemented: 1) We improved the positional encoding used in multiview depth-to-image generation, as depicted in Figure 3 of the main paper. This was achieved by integrating depth disparities between source and target views: we project the depth of correspondences from the target views onto the source views and then compute the resulting depth disparities, which also handles occlusions. 2) During the training phase, every training sample incorporates 12 consecutive keyframes. 3) During testing, we first employ the generation module to produce all the keyframes within a given test sequence; the interpolation module is then utilized to densify these images. Notably, even though our model has been trained with a frame length of 12, it generalizes to any number of frames. 4) Finally, we fuse the RGBD sequence into a cohesive scene mesh.
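The projection step described in 1) can be illustrated for a single correspondence with a pinhole camera. This is a hypothetical sketch, not the paper's code; `K` is assumed to be (fx, fy, cx, cy) and `T_ts` a 4x4 target-to-source pose.

```python
# Illustrative sketch: warp one target-view pixel with known depth into the
# source view and compute the depth difference fed to the enhanced
# positional encoding of correspondence-aware attention.
def depth_difference(u, v, depth_t, K, T_ts, source_depth_lookup):
    fx, fy, cx, cy = K
    # unproject the target pixel to a 3D point in the target camera frame
    x = (u - cx) / fx * depth_t
    y = (v - cy) / fy * depth_t
    p_t = [x, y, depth_t, 1.0]
    # transform into the source camera frame with the relative pose T_ts
    p_s = [sum(T_ts[i][j] * p_t[j] for j in range(4)) for i in range(3)]
    # project into the source image; the z-coordinate is the warped depth
    u_s = fx * p_s[0] / p_s[2] + cx
    v_s = fy * p_s[1] / p_s[2] + cy
    return source_depth_lookup(u_s, v_s) - p_s[2]
```

With an identity relative pose and a consistent source depth map, the difference is zero; occluded correspondences show up as large differences instead.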
---
**GR2. (for S2) Comparison with MultiDiffusion or DiffCollage.**
As pointed out in Line 64 of the paper, neither MultiDiffusion nor DiffCollage incorporates a camera model when generating 360-degree views. This leads to outputs that do not accurately represent true panoramas. In our rebuttal's Figure 5, we present the results of MultiDiffusion and MVDiffusion using the same prompts. In a true panorama, lines often appear curved due to the distinct perspective shifts inherent in panoramic photography. However, in images produced by MultiDiffusion, these lines remain linear, retaining the characteristics of a conventional perspective image.
---
**GR3. (for S4) Updated results of panorama generation.**
We trained the correspondence-aware attention block within our panorama generation module at a resolution of 512 x 512 on the Matterport3D indoor dataset, while keeping the original stable diffusion UNet weights frozen. As shown in Figure 1, our method is able to generate outdoor panoramas. The results indicate: 1) Even when trained on a limited dataset, MVDiffusion exhibits strong generalization capabilities thanks to the stable diffusion pretrained model, and 2) The clarity of the generated panorama is notably enhanced, with fewer artifacts, when trained at higher resolutions.
---
**GR4. (for S3) Failure cases of panorama generation.**
In Figure 3, we highlight a failure case of our panorama generation. Since our model allows users to dictate distinct prompts for each perspective image, significant content changes can emerge when these prompts undergo drastic alterations.
---
**GR5. At last, we update a new functionality of MVDiffusion, which supports extrapolating a perspective image to a full 360 view.**
We also demonstrate that our method can extrapolate a perspective image into a 360-degree view in Figure 4. Specifically, we use the pretrained stable diffusion inpainting model as the base generation model. In the UNet branch of the conditioned image, we append a mask of ones (in 4 channels) to the image. This concatenated image then serves as the input for the inpainting model, which ensures the content remains consistent. Conversely, in the UNet branch dealing with the generated images, we concatenate a black image (pixel values of zero) with a mask of zeros. This serves as the input, enabling the inpainting model to generate a completely new image based on the text it is provided.
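The two branch inputs described above can be sketched as follows. The latent shape, function name, and use of a 4-channel mask mirroring the latent are illustrative assumptions rather than the exact MVDiffusion code:

```python
import numpy as np

def branch_input(latent, keep_content):
    """Build the input to one UNet branch of the inpainting model.

    keep_content=True  -> conditioned-image branch: real latent + mask of
                          ones, so the inpainting model preserves the content.
    keep_content=False -> generation branch: black (all-zero) latent + mask
                          of zeros, so the model generates entirely new
                          content from the text prompt.
    """
    if keep_content:
        mask = np.ones_like(latent)          # mask of ones (4 channels)
        return np.concatenate([latent, mask], axis=0)
    zeros = np.zeros_like(latent)            # black image
    mask = np.zeros_like(latent)             # mask of zeros
    return np.concatenate([zeros, mask], axis=0)

latent = np.random.randn(4, 64, 64)          # hypothetical 4-channel latent
cond = branch_input(latent, keep_content=True)
gen = branch_input(latent, keep_content=False)
print(cond.shape, gen.shape)                 # both (8, 64, 64)
```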
Pdf: /pdf/5daf30b26b98be0a5de2a5c9bba6bbb6a3d15066.pdf | NeurIPS_2023_submissions_huggingface | 2023
Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model | Accept (spotlight) | Summary: The paper introduces an extension to the standard unconstrained features model in neural collapse theory by incorporating multiple layers above the unconstrained features. The study demonstrates that, under certain conditions and at optimality, all of the top layers exhibit all properties of neural collapse.
Strengths: -- The paper exhibits excellent writing and effectively surveys the existing literature.
-- The paper contributes to a promising direction in the theory of neural collapse by presenting a significant extension, and it conducts a non-trivial theoretical analysis that offers compelling evidence for the occurrence of neural collapse at intermediate layers. This extension aligns with previous empirical findings, verifying the theoretical predictions made in earlier studies.
Weaknesses: -- Like other papers in the field, the rationale behind asserting that SGD converges to a solution satisfying neural collapse solely based on optimality is not entirely clear. It would greatly benefit the paper if the authors could provide further clarification and expand upon this aspect.
-- While the paper focuses on the binary classification case, which is acceptable, it would be valuable to discuss the feasibility and potential challenges of extending the analysis to a multiclass setting.
-- I believe that expanding on the implications and interpretations of the results would significantly improve the paper. It would be beneficial for the authors to discuss how these findings relate to aspects like generalization and what implications they have for our understanding of deep learning in general.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are no negative societal issues with this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful review and positive feedback. We address all the points raised in the review below.
**The rationale behind asserting that SGD converges to a solution satisfying neural collapse solely based on optimality is not entirely clear**
*Response.* We fully agree with the reviewer that the ‘static’ global optimality analysis by itself does not prove SGD convergence. Showing SGD convergence (and, thus, providing a ‘dynamic’ analysis of DNC) is a challenging future research direction.
We note that the full SGD convergence is not completely established even for the standard neural collapse, with [7, 21] giving partial solutions based on the assumptions discussed in our related work section.
[12, 20, 42] try to argue the convergence by showing that the loss landscape of the unconstrained features model is benign. An equivalent statement for the DUFM is unlikely to even be true. In fact, [R1] shows that multi-layer deep linear networks exhibit spurious local minima. Including the ReLU activation intuitively does not simplify the loss landscape, nor does it make it benign.
**While the paper focuses on the binary classification case, which is acceptable, it would be valuable to discuss the feasibility and potential challenges of extending the analysis to a multiclass setting**
*Response.* This issue was raised by multiple reviewers. Hence, we have opted to respond to it in the global response above.
**It would be beneficial to discuss the practical implications of our findings and how they relate to topics such as generalization and general understanding of neural networks.**
*Response.* This is a good suggestion. We provide below three insights resulting from our theory.
*(i)* Our results suggest that the $l_2$ regularization leads the deep neural network to find solutions with well-conditioned matrices (orthogonal in the ideal DNC case). This in turn provides more robust models, which also suggests better generalization.
*(ii)* In principle, it should be possible to exchange the $l_2$ regularization on the weight matrices of the backbone with an $l_2$ regularization directly on $H_1$. In this way, we expect to be able to force the occurrence of the DNC more directly. Some works [1, 5, 6, 15, 16, 17, 32, 34] try to utilize the neural collapse for practical purposes (including transfer learning, out-of-distribution detection, robustness, and calibration) and this is a good way to better reach it.
*(iii)* As for the general understanding of neural networks, the DNC characterizes the ideal state toward which deep neural networks are incentivized to represent the data when trained with MSE loss and $l_2$ regularization. In particular, the network tends to reduce the within-class variability and create structured feature representations. This suggests that we should treat the terms “features” and “feature extraction” with caution, as two different samples from the same class will have the same feature representation, despite being significantly different in nature (such as two images of a dog with considerably different backgrounds or from considerably different dog species). The after-ReLU DNC2 results of Theorem 3 imply that two samples of different classes, having orthogonal yet non-negative feature representations, cannot share the activation at any neuron. This means that even seemingly similar samples with different labels do not share the same feature representation at any neuron, despite perhaps sharing some major features (such as two different species of dog with different labels, sharing most of the dog's visual properties and differing only in fine details).
[R1] Kawaguchi, Kenji. "Deep learning without poor local minima." Advances in neural information processing systems 29 (2016). | Summary: The paper studied Deep Neural Collapse (DNC) which extends the structure of Neural Collapse of the last layer to multi-layers in deep neural networks. Theoretically, this paper generalizes the established unconstrained features model to the deep unconstrained features model (DUFM) of multi-layer non-linear models for binary classification. Based on the deep unconstrained features model, they show the global optimum exhibits several extended properties. Empirical experiments demonstrate that the results of the optimized deep unconstrained features model and trained networks agree with the theory.
Strengths: 1. The paper generalizes the Neural Collapse to Deep Neural Collapse which describes the structure of the feature other than the last layer. The proof based on the deep unconstrained features model for binary classification is clear and easy to follow. It gives a framework to find and prove the feature structure of other layers in DUFM.
2. The extensive experiments and numerical results show that the theory successfully predicts the behavior of the deep unconstrained features model (DUFM), which exhibits the proposed DNC properties.
3. The paper is well written; it is easy to follow the overall logic and the idea of the method.
Weaknesses: 1. Though the model and proof consider Deep Neural Collapse, which applies to features beyond the last layer, the model and the proof are restrictive, covering only the binary classification, no-bias case. Many proofs and derivations rely on binary classification, like Lemma 5 - Lemma 7, which is the key to giving the conditional optimum of previous layers.
2. In addition, the DUFM adds a regularization term on the first-layer features, which does not entirely match the real training of neural networks.
3. Compared to the numerical results on the DUFM, the experiments for real training of neural networks are limited to the 3-layer ResNet20. More results for the training of other neural networks would make the experiments more convincing.
4. There are some typos in the proof, like L517 in the appendix where I think the $\sigma_1^2$ should be $s_{L, i}^2$.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
1. Most previous research [1, 2, 3] focuses on models with bias, while [3] also studies the unconstrained features model without bias and shows that the features exhibit an OF rather than the STF obtained under the UFM with bias. So I am just wondering if it is possible to extend the DUFM to the bias case and show the STF structure in DNC2?
2. There are some experimental results for the 3-layer ResNet-20 in the paper which also show DNC, but in [4] the results seem to show that neural collapse progresses gradually across the layers of the neural network, e.g., the measure of NC1 decreases exponentially. I think this may be caused by the gap between the DUFM and real trained networks. Are there more experiments on real trained networks, and do they exhibit DNC? If so, why is there such a difference between the results in this paper and the results in [4]? If not, does that mean there is a gap that requires another explanation for such a phenomenon?
3. All experiments based on DUFM in this paper add the regularization term of the first layer feature. By the derivation of the DUFM, it seems that possible to add a regularization term on the intermediate layer feature, for example, add $||H_5||_2$ and optimize $W_l, l \ge 5$, are there any experiment results can demonstrate in this case how the DNC performs?
Reference:
[1] Zhu, Z., Ding, T., Zhou, J., Li, X., You, C., Sulam, J., & Qu, Q. (2021). A geometric analysis of neural collapse with unconstrained features. Advances in Neural Information Processing Systems, 34, 29820-29834.
[2] Fang, C., He, H., Long, Q., & Su, W. J. (2021). Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences, 118(43), e2103091118.
[3] Tirer, T., & Bruna, J. (2022, June). Extended unconstrained features model for exploring deep neural collapse. In International Conference on Machine Learning (pp. 21478-21505). PMLR.
[4] He, H., & Su, W. J. (2022). A law of data separation in deep learning. arXiv preprint arXiv:2210.17020.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The framework and the proof relies on the binary classification and no bias case which make the DUFM not so correlated to the real trained networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive, insightful and engaging review, and for pointing out the strengths of our work. We address all the points raised in the review below.
**The paper only considers the binary classification and the bias-free case**
*Response.* For the extension to multiple classes, please see the global response. As for the omission of the bias term, we believe it is possible to extend our analysis to include the bias. In fact, Lemmas 5-9 are unchanged, and the only changes involve Lemma 4 and Lemma 11, which can also be adjusted to accommodate the bias. However, by the argument made in [29] that the ETF structure can be obtained from the orthogonal frame (OF) structure by simply centering it, we believe that including bias would not provide significant new insights on top of our existing results. Since the computations would get much more complicated, we thus opted to omit this case.
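For completeness, the centering argument from [29] referenced above can be paraphrased in a few lines (notation ours):

```latex
\text{Let } m_1,\dots,m_K \text{ be orthonormal (an OF) and }
\mu = \tfrac{1}{K}\textstyle\sum_{j=1}^{K} m_j \text{ their mean.}
\text{For the centered vectors } \tilde{m}_k = m_k - \mu,
\langle \tilde{m}_k, \tilde{m}_{k'} \rangle
  = \langle m_k, m_{k'} \rangle
    - \langle m_k, \mu \rangle - \langle \mu, m_{k'} \rangle
    + \langle \mu, \mu \rangle
  = \delta_{kk'} - \tfrac{1}{K},
```

which, up to scaling, is exactly the Gram matrix of a simplex ETF (diagonal $(K-1)/K$, off-diagonal $-1/K$, so the off-to-diagonal ratio is $-1/(K-1)$).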
**DUFM adds a regularization term for the first-layer features, which does not match the real training of neural networks**
*Response.* We agree with this point. Indeed, the regularization can be regarded as a simplification with respect to the (more complicated) nature of real training in which features are the output of regularized network layers. However, $l_2$ regularization on the unconstrained features is common to most works considering regularized UFM, see e.g. [2, 7, 25, 27, 28, 29, 30, 40, 42].
For standard last-layer neural collapse, global optimality was also demonstrated under different assumptions, such as $l_2$-norm constraints [19, 31, 36]. Thus, an interesting direction for future work is to consider constrained features instead of the $l_2$ regularization, in the context of the deep neural collapse.
Finally, we highlight that our results can be readily generalized to the case where $H_1$ is regularized with any Schatten-$2/L$ pseudonorm. This pseudonorm constitutes a candidate for a more accurate regularization term. In fact, [R1] shows that regularizing a sequence of weight matrices without the ReLU activation (i.e., in a deep linear network) is equivalent to regularizing their product with the Schatten-$2/L$ pseudonorm, where $L$ is the number of the weight matrices.
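To make this concrete, the equivalence shown in [R1] can be written as follows (our paraphrase; $\sigma_i(M)$ denotes the $i$-th singular value of $M$):

```latex
\min_{W_L W_{L-1} \cdots W_1 = M} \;\sum_{l=1}^{L} \|W_l\|_F^2
  \;=\; L \,\|M\|_{S_{2/L}}^{2/L},
\qquad
\|M\|_{S_p} := \Big(\sum_i \sigma_i(M)^p\Big)^{1/p}.
```

For $L=2$ this recovers the classical variational characterization of the nuclear norm, $\min_{UV=M} \|U\|_F^2 + \|V\|_F^2 = 2\|M\|_*$, and for $L=1$ it reduces to the plain Frobenius penalty $\|M\|_F^2$.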
**Experiments for real training of neural networks are limited to the Resnet20 with 3-layer MLP on top.**
*Response.* In the PDF attached to the general response above, we provide additional experiments:
*(i)* We consider VGG13 and DenseNet121 architectures, keeping the rest of the experimental setup the same as in Figure 1 of the paper.
*(ii)* We consider a 5-layer head on top of the ResNet20.
All these new experiments fully agree with the original one and support our theory.
**Q1: is it possible to extend the DUFM to the bias case and show the STF structure in DNC2?**
*Response.* We believe it is possible to do our analysis with the bias term included and this would yield results similar to the one-layer NC. Please see our response above in the weaknesses section.
**Q2: In [4] the results seem to show that the Neural Collapse progresses gradually across the layers of the neural network like the measure of NC1 decreases exponentially**
*Response.* We agree that there is a gap between the predictions of the DUFM and the progressive neural collapse which has been observed empirically in real networks, and investigating this gap is an important future direction. Let us make two comments in this regard.
*(i)* The progressivity of DNC may be due to the non-linearity of the data, which the neural network has to eliminate throughout the layers. One reason for the gradual removal of within-class variability through the layers could be the balance between the norms of the weight matrices of different layers. In fact, decreasing the within-class variability more quickly would require larger matrix norms, which is prevented by the regularization.
*(ii)* A dynamic analysis of the trajectory of gradient descent could also show a progressive neural collapse: the final layers reduce the within-class variability faster than the first ones, as training progresses.
Finally, we note that the phenomenon of progressive neural collapse is not entirely understood even at the experimental level: on the one hand, [8] observes a log-linear behavior in the DNC1 reduction; on the other hand, [26] observes a reduction which is no longer log-linear.
**Q3: Is it possible to directly regularize the intermediate features instead of the weight matrices from previous layers?**
*Response.* Yes, this is indeed possible. This would also provide a more direct way of enforcing the occurrence of DNC, which could be beneficial for the applications of neural collapse (including e.g. transfer learning, out-of-distribution detection, robustness, and calibration, see [1, 5, 6, 15, 16, 17, 32, 34]).
[R1] Shang, Fanhua, Yuanyuan Liu, and James Cheng. "Unified scalable equivalent formulations for Schatten quasi-norms." arXiv preprint arXiv:1606.00668 (2016).
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and additional experiments. I think most of the questions are well answered and further experimental results fit with the theory. So I change my Rating from 6 to 7. | Summary: The paper proposes a novel approach to investigating deep neural collapse in deep neural networks by presenting a generalization of the analytical framework for neural collapse (NC) to multiple non-linear layers. The paper introduces the deep unconstrained features model to demonstrate that the unique global optimum of the MSE loss for binary classification exhibits all the properties typical of deep neural collapse. However, the paper only considers the MSE loss without a bias term for binary classification, which may limit the applicability of deep neural collapse for other losses. Specifically, the theoretical conclusions may only be provably optimal in some cases.
Strengths: - The paper presents a generalization of the unconstrained features model to multiple non-linear layers, filling a gap in the existing literature.
- The paper contributes to a better understanding of deep neural collapse in binary classification deep neural networks, which has important implications for the design and optimization of deep learning models.
Weaknesses: - The theoretical analysis only involves binary classification, and it is unclear whether the proposed approach can be extended to multi-class classification problems, especially for a large number of classes that would change the final form (DNC2) of the resulting deep neural collapse.
- The paper only considers the MSE loss without a (last-layer) bias term, which may limit the applicability of deep neural collapse for other losses. Specifically, benefiting from the MSE loss without the bias term, the unique global solution admits closed-form results, which greatly simplifies the derivation of the proof.
- Neural collapse induces a simplex equiangular tight frame [1], which may not hold for other loss functions or if a bias term is included [2].
- The paper lacks extensive experiments to validate deep neural collapse. Additional experiments on a variety of datasets (especially for a large number of classes) and deep neural networks would help to better understand and confirm the prevalence of deep neural collapse.
- The paper could benefit from a more thorough discussion of the practical implications of the proposed approach for the design and optimization of deep learning models.
[1] Vardan Papyan, X. Y. Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. In *Proceedings of the National Academy of Sciences (PNAS)*, volume 117, 2020.
[2] Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, and Zhihui Zhu. On the optimization landscape of neural collapse under mse loss: Global optimality with unconstrained features. In *International Conference on Machine Learning (ICML)*, 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and insightful assessment. We address all the points raised in the review below.
**Extension to multi-class classification problems**
*Response.* This issue was raised by multiple reviewers. Hence, we have opted to respond to it in the global response above.
**The paper only analyzes the MSE loss without the bias term**
*Response.* We agree that using the MSE loss does not provide a full insight into the usage of other losses. Indeed, as we see in the literature on neural collapse, the analysis and even the statements for MSE and cross-entropy (CE) losses differ significantly. For instance, the neural collapse for CE loss is not well-defined without feature normalization or weight decay.
In our work, we perform the analysis with the MSE loss, as it provides more options for mathematical treatment. We highlight that the MSE loss is a reasonable choice, as [R1] reports that training with the MSE loss can perform on par with training with the CE loss. For these reasons, it is the setup of choice for a large number of papers on neural collapse [7, 21, 28, 30, 40, 41]. An extension to cross-entropy is a great direction for future work; however, the corresponding analysis is likely to be quite different. In fact, the CE loss does not admit closed-form solutions, and it typically yields a structure described by an equiangular tight frame (ETF), at least in the last layer.
As for the omission of the bias term, we believe it is possible to extend our analysis to include the bias. In fact, Lemmas 5-9 are unchanged, and the only changes involve Lemma 4 and Lemma 11, which can also be adjusted to accommodate the bias. However, by the argument made in [29] that the ETF structure can be obtained from the orthogonal frame (OF) structure by simply centering it, we believe that including bias would not provide significant new insights on top of our existing results. Since the computations would get much more complicated, we thus opted to omit this case.
**The paper lacks experiments to validate the neural collapse**
*Response.* We agree that validating the DNC is a key prerequisite for its theoretical study. However, this validation has already been extensively performed in recent work (see [8, 26]). Our contribution crucially builds upon this experimental evidence and provides a theoretical analysis of the same phenomenon.
That being said, we still provide experiments related to the main takeaway of our paper:
*(i)* We have performed a rather extensive set of experiments on DUFM: different depths (ranging from 2 to 10), different widths (from very small to as wide as 1000 neurons), different weight decays, and different learning rates. Many of these experiments are discussed in detail in the ablation study in Appendix C, where we provide a concise description of the effects these hyperparameters have on the emergence of DNC.
*(ii)* The experiments on real data validate the DUFM assumption, a point that was lacking in the related literature, see both the main paper and the additional results in Appendix D. We also provide extra experiments in the 1-page PDF attached to the global response above:
*(ii-a)* We consider VGG13 and DenseNet121 architectures, keeping the rest of the experimental setup the same as in Figure 1 of the paper.
*(ii-b)* We consider a 5-layer head on top of the ResNet20.
All these new experiments fully agree with the original one and support our theory.
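For reference, the within-class variability measured in such experiments is typically quantified by a ratio of within-class to between-class feature scatter at each layer. The sketch below is a generic version of this DNC1-style metric, common in the neural-collapse literature; it is not necessarily the exact metric used in the paper:

```python
import numpy as np

def nc1_metric(H, labels):
    """H: (d, n) features at some layer; labels: (n,) class labels.
    Returns tr(Sigma_W) / tr(Sigma_B), which tends to 0 under DNC1."""
    classes = np.unique(labels)
    mu_G = H.mean(axis=1, keepdims=True)              # global feature mean
    d_feat = H.shape[0]
    Sigma_W = np.zeros((d_feat, d_feat))              # within-class scatter
    Sigma_B = np.zeros((d_feat, d_feat))              # between-class scatter
    for c in classes:
        Hc = H[:, labels == c]
        mu_c = Hc.mean(axis=1, keepdims=True)
        D = Hc - mu_c
        Sigma_W += D @ D.T / H.shape[1]
        diff = mu_c - mu_G
        Sigma_B += (Hc.shape[1] / H.shape[1]) * (diff @ diff.T)
    return np.trace(Sigma_W) / np.trace(Sigma_B)

# Perfectly collapsed features: every sample equals its class mean.
labels = np.array([0] * 5 + [1] * 5)
H = np.stack([np.where(labels == 0, 1.0, -1.0),
              np.where(labels == 0, -1.0, 1.0)])
print(nc1_metric(H, labels))                          # -> 0.0
```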
**The paper could benefit from further discussion on the practical applications of the proposed approach**
*Response.* This is a good suggestion. We provide below two insights resulting from our theory.
*(i)* Our results suggest that the $l_2$ regularization leads the deep neural network to find solutions with well-conditioned matrices (orthogonal in the ideal DNC case). This in turn provides more robust models.
*(ii)* In principle, it should be possible to exchange the $l_2$ regularization on the weight matrices of the backbone with an $l_2$ regularization directly on $H_1$. In this way, we expect to be able to force the occurrence of the DNC more directly. Several works (e.g., [1, 5, 6, 15, 16, 17, 32, 34]) try to utilize the neural collapse for practical purposes (including transfer learning, out-of-distribution detection, robustness, and calibration), and this is a good way to better reach it.
[R1] Demirkaya, Ahmet, Jiasi Chen, and Samet Oymak. "Exploring the role of loss functions in multiclass classification." 2020 54th annual conference on information sciences and systems (ciss). IEEE, 2020. | Summary: This paper introduces the concept of deep neural collapse (DNC), extending the understanding of neural collapse to earlier layers of deep neural networks. It proposes the deep unconstrained features model (DUFM) as a theoretical framework to analyze DNC. The authors demonstrate that for binary classification, DNC is a globally optimal solution in the DUFM, providing rigorous evidence of its occurrence in multiple layers. Numerical experiments confirm that gradient descent efficiently finds solutions in line with the theory, exhibiting neural collapse in multiple layers. The paper concludes by highlighting open problems for future research, such as generalizing the analysis to multiple classes, exploring gradient dynamics in the DUFM, and understanding the impact of biases and loss functions on the collapsed solutions.
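Insight *(ii)* above, regularizing the free features $H_1$ directly rather than the backbone weights, can be sketched as a DUFM-style objective. The layer count, widths, and regularization strengths below are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dufm_loss(H1, weights, Y, lam_H=5e-3, lam_W=5e-3):
    """MSE loss of an L-layer head on unconstrained features H1, with l2
    (Frobenius) regularization on H1 itself and on every weight matrix."""
    H = H1
    for W in weights[:-1]:
        H = relu(W @ H)                  # intermediate layers with ReLU
    logits = weights[-1] @ H             # final linear layer (no activation)
    mse = 0.5 * np.mean((logits - Y) ** 2)
    reg = lam_H * np.sum(H1 ** 2) + lam_W * sum(np.sum(W ** 2) for W in weights)
    return mse + reg

rng = np.random.default_rng(0)
n, d = 8, 16                              # 8 samples (4 per class), width 16
H1 = rng.standard_normal((d, n))          # free (unconstrained) features
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]
weights.append(rng.standard_normal((2, d)) * 0.1)     # 2-class output layer
Y = np.kron(np.eye(2), np.ones((1, n // 2)))          # one-hot targets
print(dufm_loss(H1, weights, Y))
```

Both `H1` and `weights` would then be treated as trainable variables by gradient descent, as in the paper's DUFM experiments.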
Strengths: $\bf (1)\ \textbf{Strong Technical Analysis}:$ A key strength of this paper is its strong and rigorous technical analysis. The authors provide a solid theoretical framework, specifically the deep unconstrained features model (DUFM), to analyze and understand the phenomenon of deep neural collapse (DNC). Theorems and empirical evidence are presented to support their findings, enhancing the credibility and reliability of the results.
$\bf (2)\ \textbf{Extension to Multiple Non-linear Layers:}$ This paper addresses a crucial gap in the existing literature by extending the analysis of neural collapse beyond the last layer to multiple non-linear layers. By generalizing the unconstrained features model, the authors provide insights into deep neural collapse (DNC) occurring in deeper layers of neural networks. This extension contributes to a more comprehensive understanding of the behavior of deep networks during training.
$\bf (3)\ \textbf{Empirical Validation:}$ The paper provides empirical evidence to validate the theoretical framework. The authors demonstrate that by optimizing deep unconstrained feature models using gradient descent, the resulting solutions align well with the theoretical predictions. This empirical validation reinforces the credibility of the proposed modeling principles and their applicability in real-world scenarios.
$\bf (4)\ \textbf{Relevance and Contribution to the Field:}$ The paper addresses a significant topic of interest in the field of deep learning by focusing on the phenomenon of neural collapse in deep neural networks. By providing theoretical insights and empirical evidence, the paper contributes to the existing body of knowledge and expands our understanding of the behavior of deep networks during training.
Weaknesses:
$\bf (1)\ \textbf{Theorem 3 cannot explain the progressive neural collapse:}$ In Ref [8], the authors observe that each layer roughly improves a certain measure of data separation by an equal multiplicative factor. A similar observation is depicted in Figure 1 of the current paper. However, Theorem 3 contradicts this observation by demonstrating that for all layers $l \ge 2$, the within-class variability of features becomes zero, and the features form an orthogonal matrix. This discrepancy suggests that there may be additional factors or dynamics at play in the behavior of the layers beyond what is captured by the initial observation. Further investigation is necessary to reconcile these contrasting findings and gain a more comprehensive understanding of the behavior and properties of features in deep neural networks.
$\bf (2)\ \textbf{Role of activations:}$ One weakness of the paper is the limited discussion or exploration of the specific role and impact of activations in the context of deep neural collapse (DNC). While the paper acknowledges that DNC1 and DNC2 hold for layer $l \ge 2$ before and after activations, it does not provide an in-depth analysis or explanation of how the activations influence or interact with DNC.
$\bf (3)\ \textbf{Limited Scope:}$ The paper primarily focuses on deep neural collapse (DNC) in the context of binary classification. While this specific focus allows for in-depth analysis, it may limit the generalizability of the findings to other problem domains or network architectures. Further exploration of DNC in more diverse settings could strengthen the paper's applicability.
$\bf Typos:$ (1) At line 122, it should be $W_L \in \mathbb{R}^{K\times d_L}$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: $\bf 1.$ The results in this paper are based on the L-layer deep unconstrained features model. This omits all the structures in the data input. A natural question is whether the analysis can be extended to the case where the data input has some special structures, such as the case where the data is whitened, i.e., assuming $H_1H_1^T=I$ in Definition 1.
$\bf 2.$ What is the main technical difficulty in analyzing multiple-class cases?
$\bf 3.$ What is the role of the activations in deep neural networks? Why do DNC1 and DNC2 hold for layer l ≥ 2 before and after activations?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful assessment and for the positive evaluation of our work. We address the points raised in the review below.
**Theorem 3 cannot explain the progressive neural collapse**
*Response.* We agree that there is a gap between the predictions of the DUFM and the progressive neural collapse which has been observed empirically in real networks, and investigating this gap is an important future direction. Let us make two comments in this regard.
*(i)* The progressivity of DNC may be due to the non-linearity of the data, which the neural network has to eliminate throughout the layers. One reason for the gradual removal of within-class variability through the layers could be the balance between the norms of the weight matrices of different layers. In fact, decreasing the within-class variability more quickly would require larger matrix norms, which is prevented by the regularization.
*(ii)* A dynamic analysis of the trajectory of gradient descent could also show a progressive neural collapse: the final layers reduce the within-class variability faster than the first ones, as training progresses.
Finally, we note that the phenomenon of progressive neural collapse is not entirely understood even at the experimental level: on the one hand, [8] observes a log-linear behavior in the DNC1 reduction; on the other hand, [26] observes a reduction which is no longer log-linear.
**Lack of discussion on the role of activations**
*Response.* This is a good point, we will expand the manuscript accordingly. Our theorem indeed reveals an intriguing insight into the role ReLU plays in the DNC. As we see from Lemma 6, the optimal intermediate layer activations, i.e. $H_l$ in our notation, are non-negative.
This is due to the fact that the ReLU disregards any negative activation by zeroing it out. From our computations, it follows that producing strictly negative values is suboptimal for the norm of $W_l$ and, therefore, all the activations are non-negative.
**The paper only focuses on binary classification**
*Response.* This issue was raised by multiple reviewers. Hence, we have opted to respond to it in the global response above.
**Q1: Omitting the data structure + possible structure in $H_1$**
*Response.* We agree with the reviewer that the unconstrained features model removes all structure from the data. In fact, the main assumption underlying the model is that the network’s backbone eliminates any structure from layer $H_1$. However, though we consider multiple last layers, we still assume that we are looking only at the final part of the network; therefore, the unconstrained features model is motivated similarly to the last-layer collapse setting. Going beyond unconstrained features, understanding the effect of additional structure is certainly a natural question.
Note that the rank of $H_1$ under DNC1 is at most $K$, and the row-dimension of $H_1$ is $d_1$. Thus, for $H_1H_1^T=I$ to hold, we need to look at $\bar{H}_1$ (as defined in line 116) instead of $H_1$ and we need $d_1 = K$, which is rather restrictive. Hence, in general, the assumption $H_1H_1^T=I$ is not compatible with DNC1.
Instead of considering an $l_2$ regularization on $H_1$, several papers (see e.g. [19, 31, 36]) assume the features to be normalized, which can be regarded as a way to add structure in the input data. Another attempt to assume *some* structure is done in [29], where the authors consider features that are only approximately unconstrained and admit an error. It is an interesting future direction to quantitatively understand the effect of such modeling assumptions on our results.
**Q2: Main challenges in analyzing the multiclass case**
*Response.* Please see the global response.
**Q3: The role ReLU plays in our theory**
*Response.* The reason why DNC1 and DNC2 hold before and after the activation is rather subtle, and it follows from the results of Lemma 6. One intuitive way of reasoning about this is as follows: ReLU disregards negative values, hence it is useless for the network to create negative features. The computations of Lemma 6 formalize this idea: the network avoids creating negative features (before the application of ReLU), and by doing it, it yields a solution with the smallest possible Frobenius norm. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the positive feedback on our work. We reply to reviews separately and we address here one point raised by all reviewers.
**The paper only considers the binary classification case. Are the results generalizable to the multi-class classification and if yes, what are the main technical challenges?**
*Response.* We regard the generalization to multiple classes as an important open problem. We discuss below two technical challenges in this generalization:
*(i)* For more than two classes, Lemma 9 is false. We provide a counterexample in Appendix B for a multi-column matrix $H$ and $L=2$. This makes it difficult to obtain the optimal value of $H_2^*$, which is crucial for the rest of the analysis. On the other hand, even if Lemma 9 does not hold in full generality, [29] argues that the inequality cannot be violated for the (supposedly optimal) orthogonal matrices $H$. This indicates that counterexamples to Lemma 9 have to be ill-conditioned, which leads to an increase in the loss also in the multi-class case. Therefore, obtaining some bounds on $||H||_*$ could suffice to generalize the proof.
*(ii)* Computing the optimal solution of Lemma 5 appears to be challenging for multiple classes. In fact, a component-wise application of the ReLU activation can increase the rank of a matrix. In other words, even if $X$ is low rank, $\sigma(WX)$ can still have high rank, which significantly complicates the analysis.
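The rank-increasing effect of a component-wise ReLU mentioned in *(ii)* can be checked with a tiny numerical example (our own illustration, not from the paper; for simplicity we take $W = I$, so $\sigma(WX) = \sigma(X)$):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# A rank-1 matrix: the second row is -1 times the first.
X = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
assert np.linalg.matrix_rank(X) == 1

# Component-wise ReLU zeroes out the negative entries, yielding the
# identity matrix, which has full rank 2.
Y = relu(X)
print(np.linalg.matrix_rank(Y))  # 2
```

So even a rank-1 input can become full-rank after the nonlinearity, which is the obstruction to the low-rank arguments mentioned above.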
**Enclosed PDF:**
Some of the reviewers asked us to provide additional experiments supporting the validity of DUFM as a modeling principle. Therefore we provide experiments where we trained VGG13 and DenseNet121 as an alternative to ResNet20 for the 3-layer DUFM head and alternative ResNet20 training for 5-layer DUFM head. The results are in full accordance with those in the paper.
Pdf: /pdf/137d9cadda77c02e8dbd4bff8c952de59473502d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion | Accept (poster) | Summary: This paper proposes a distributional RL algorithm, PQR, that uses distributional return estimates for exploration by updating with a greedy distributional Bellman operator for time-varying, random risk measures. The paper demonstrates that, under a simple concentration condition on the random risk measures over time, the means of return distributions learned by PQR converge to the optimal Q-values. It is argued that the use of random risk measures (as opposed to static risk measures) produces updates that are less biased away from the optimal Q-values. It is shown that PQR demonstrates favorable exploration performance in a simple chain MDP and in Atari.
Strengths: The paper presents a well-motivated algorithm for leveraging return distributions for exploration in (deep) RL. The idea of using random risk measures in greedy distributional Bellman updates to veer less from the optimal Q-values is novel and interesting. Furthermore, the empirical results of the PQR algorithm appear to be quite good.
Weaknesses: The proposed algorithm itself may not be substantially novel -- I suspect similar results would hold for QR-DQN with a dynamic distortion risk measure that tends to the identity sufficiently quickly.
The theoretical results are fairly weak. Firstly, convergence of the return distributions is only established with respect to their first moments. Moreover, the upper bound equation in Theorem 3.3 is difficult to interpret, and I am not sure what one should conclude from this upper bound.
Additionally, the influence of the schedule $(\Delta_t)_{t\geq 0}$ and the Dirichlet parameter $\beta$ is not really studied -- I suspect, at least in theory, that there are more optimal data-based schedules.
## Grammatical / non-major issues
L4 "present a novel distributional reinforcement learning that selects..." -> "present a novel distributional reinforcement learning **algorithm** that selects..."
Last sentence in the abstract cites [6] as a risk-sensitive distributional RL algorithm, but this is not really a distributional RL algorithm.
L144 mentions the perturbed probability distribution $\xi\mathbb{P}$, but this doesn't really type-check. $\mathbb{P}$ is a probability measure (maps measurable sets to probabilities) and $\xi$ is a probability density function (mapping elements of the sample space to reals), so the product $\xi\mathbb{P}$ is not exactly correct.
Theorem 3.4 needs to be made more precise. It makes a claim about the fixed point of the Bellman optimality operator, but I think you meant the PDBOO.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: L172 says "PDBOO has a significant distinction in that it performs dynamic programming that adheres to the risk-neutral optimal policy..." -- what does it mean to "adhere to the risk-neutral optimal policy"? The PDBOO updates are still performing greedy updates w.r.t. risk-sensitive policies.
As mentioned above, convergence of return distributions is not really established at all under the PDBOO update -- we only know about the convergence of their means. But if the dynamics of the other statistics of the return distributions are essentially unknown, how can you be sure that the random risk measures are actually estimating anything meaningful?
Is there any correspondence between the PDBOO targets and the "Thompson Sampling" approach introduced by Riou and Honda (2020) http://proceedings.mlr.press/v117/riou20a/riou20a.pdf? There may be an interesting connection there, which could perhaps help add precision to the statement about "adhering to the risk-neutral optimal policy".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the time and effort in reviewing our paper. Please find below our response to the main points raised in the review.
## The proposed algorithm itself may not be substantially novel.
Of course, any scheduling that is sufficiently close to QR-DQN can yield a risk-neutral optimal solution, but this is not sufficient for exploration. Since our theory provides sufficient conditions for risk-neutrality, we have made the algorithm as exploratory as possible by choosing a schedule that satisfies non-trivial boundary conditions. We also believe that our proof of convergence for the time-varying Bellman operator is sufficiently novel.
## Convergence of the return distributions is only established with respect to their first moments.
I think this is a very important question. In general, the return distribution in distributional RL does not have a unique fixed distribution. This is because there can be multiple greedy policies, since greedy policies are defined only in terms of expectations and do not constrain the shape of the distribution. If we could define a total ordering on the set of greedy policies (e.g., select the action with the lowest variance when expected values are equal), then there would be a unique fixed distribution [1,2].
The same is true for the situation we face, and the total ordering is likewise required for convergence at the distribution level. If the total ordering is well-defined so that the uniqueness of the fixed distribution is guaranteed, then our fixed distribution is the same as the solution of the standard distributional Bellman optimality equation. We will add more details in the revised version in Section 3.2.
[1] Bellemare, Marc G., Will Dabney, and Rémi Munos. "A distributional perspective on reinforcement learning." International conference on machine learning. PMLR, 2017.
[2] Bellemare, Marc G., Will Dabney, and Mark Rowland. Distributional reinforcement learning. MIT Press, 2023.
## The influence of the schedule $\Delta_t$ and $\beta$
We have analyzed the effect of learning as a function of the size of the initial value of $\Delta_t$ and PQR shows consistent results unlike other algorithms. **In the global response, we additionally experiment with different forms of scheduling for PQR.**
For the hyperparameter $\beta$, we chose a weight of the form $c \cdot \mathbf{1}_N$ to draw symmetrically from the simplex. If the value of $c$ is too large, the sampled weights concentrate around the center of the simplex, which is not much different from the original mean operator. We therefore ran a grid search over $c \in \{0.05, 0.1, 0.5, 1\}$ and chose a small value so that the perturbation gap is large enough, sampling $\xi$ in the form of spikes near the edges of the simplex.
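To illustrate why a small concentration parameter gives spiky weights, here is a minimal sketch (our reconstruction, not the authors' code; the atom count `N` is illustrative) of sampling symmetric Dirichlet weights and measuring how far they sit from the uniform mean operator:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # number of quantile atoms (illustrative choice)

def sample_xi(c, n_samples=10_000):
    """Sample perturbation weights from Dirichlet(c * 1_N)."""
    return rng.dirichlet(c * np.ones(N), size=n_samples)

# Spikiness measured by the average maximum coordinate:
# ~1/N means close to the uniform mean operator, ~1 means a
# spike at a simplex vertex (large perturbation gap).
for c in [0.05, 1.0, 10.0]:
    xi = sample_xi(c)
    print(f"c={c:5}: mean max weight = {xi.max(axis=1).mean():.2f}")
```

Small $c$ pushes the average maximum weight toward 1 (spiky, exploratory), while large $c$ pushes it toward $1/N$ (nearly uniform), matching the trade-off described above.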
## Reference [6] is not really a distributional RL algorithm
Thank you for the careful review. We'll fix the citation in that paragraph and include another reference.
## Expression issue of $\xi\mathcal{P}$
This was a sloppy expression. We'll fix it in the main text as follows,
- line 144: which can be interpreted as the expectation of $X$ under perturbed probability distribution $Q(dw) = \xi(w)P(dw)$.
## Fixed point of the Bellman optimality operator in Theorem 3.4 seems to mean the PDBOO.
Since Theorem 3.4 shows that the fixed point of PDBOO is the same as that of a standard DBOO, this statement reflects our meaning well. We state this once more in lines 193-194 and 197.
## Connection between the PDBOO targets and the Thompson Sampling?
We agree that the suggested method is similar to our approach. Thanks for suggesting the reference, it seems to be quite an important paper. The proposed nonparametric TS algorithm seems to have many similarities with our approach in that it uses a weighted sum of Dirichlet distributions for action selection. Their situation is different from ours in that they record all visits and reward histories, but perhaps extending their work to value function approximation approach would deepen the connection. The improved version of nonparametric TS algorithms controls the Dirichlet parameter, while PQR controls the upper bound of the perturbation gap, which seems to have a different approach but similar effect.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I have increased my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive response
Comment: We appreciate your positive response, and we're glad that our response was clear. We will incorporate your thoughtful feedback into a revised version. | Summary: The authors propose a new exploration strategy that is more principled than epsilon-greedy or DLTV of Mavrin et al. (based on truncated variance). The proposed exploration strategy is to based on a novel perturbed distributional Bellman optimality operator, where the goal (Eqn. 1) is to minimize the distributional Bellman objective subject to the perturbation being sampled from an uncertainty set; the uncertainty set is defined as importance weighting the next state's return distribution in the Bellman operator, where the radius of the set decreases with time. The authors show that by appropriately decaying the set's radius, the algorithm can learn the optimal value function, and provide experimental validation on N-chain and Atari, showing competitive performance to IQN and Rainbow.
Strengths: 1. Perturbing the expectation with a density ratio is a simple but neat idea.
2. Experimental results seem pretty promising, given that QR-DQN's performance significantly improves with the proposed strategy, sometimes even surpassing IQN.
Weaknesses: 1. I am not fully sold that perturbing the Bellman operator is a principled way of modeling the epistemic uncertainty of our estimates. In particular, the paper seems to suggest that optimism is actually biased and unprincipled, while we know that, in both theory and practice, optimism in the face of uncertainty is principled for online RL and minimax optimal in most well-known settings.
1b. I think the paper would benefit from a clearer and more proper explanation of how prior settings are biased / unbiased, and how the proposed approach is better/worse than prior exploration strategies.
2. if I understand correctly, the proposed uncertainty set seems to be (s,a)-independent, which seems wrong. (in particular, if some parts of the state space was not visited often, that uncertainty should be high, while uncertainty should be low for frequented states).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Typically, distributionally robust optimization (DRO) optimizes the worst-case performance. However, in eqn. 1, you sample from the uncertainty set instead of taking a supremum. Can you give some intuition on why you care about the average performance over the set, rather than worst case?
2. Is the perturbation gap related/inspired by the dual form of dynamic risk with coherent risk measures?
3. Theorem 3.3 shows some kind of convergence of the learned $E Z^{(n)}$ to the true $Q*$ , but it seems that as n -> infinity, the bound does not necessarily to go zero. Doesn't this mean there will be some bias in the proposed approach?
4. To be clear, does your implementation only use QR-DQN + the proposed PQR exploration strategy? Do you make use of target networks, double Q-learning, prioritized replay, or any of the bag of tricks that Rainbow uses?
5. In Section 4.2, what is meant by "intrinsic uncertainty" of Atari? Without sticky actions, Atari is fully deterministic and there is no aleatoric uncertainty (only epistemic uncertainty), right?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see weakness/questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the time and effort in reviewing our paper. Please find below our response to the main points raised in the review.
## How does the perturbed Bellman operator is a principled way to modeling epistemic uncertainty?
We are not trying to model epistemic uncertainty through perturbations; rather, we are pointing out that using OFU (optimism in the face of uncertainty) without considering its entanglement with intrinsic uncertainty (commonly referred to as risk) leads to biased exploration and degrades performance.
In principle, OFU works well for most online RL tasks where epistemic uncertainty (e.g., the number of visits) can be modeled. However, as mentioned in lines 41-47, in the deep reinforcement learning case, it is difficult to separate epistemic uncertainty because learning a feature representation of the high-dimensional state-action space and updating the bellman target occur simultaneously. There have been several attempts to solve this problem, and we have proposed a simple yet effective exploration method that does not separate the two uncertainties.
## Proposed uncertainty set seems to be $(s,a)$-independent, which seems wrong.
That's a great point. First, we would like to note that our method is not exploration based on parametric uncertainty. The ambiguity set we define is for sampling perturbations $\xi$, which is a weight applied equally to each return distribution; the perturbation gaps differ for each $(s,a)$ depending on the shape of the distribution. Therefore, we believe that $(s,a)$-dependent elements such as exploration bonuses correspond to the perturbation gaps, not to the perturbations or ambiguity sets.
Also, note that $\bar{U}_\Delta$ is the definition that the upper bound of the perturbation interval for all $(s,a)$ is less than $\Delta$, which does not mean that all $(s,a)$ have the “same” amount of perturbation gap. Thus, for uncertain state-action which will have high parametric uncertainty, the estimated distribution will also have a high variance, so the perturbation gap will be large and still be chosen frequently.
## Give some intuition on why we care about the average performance, rather than the worst case.
If we use a min-max operator with respect to risk, the solution is always more conservative (pessimistic) than the standard solution as it always considers the worst-case scenario. On the other hand, we considered the average-case which has the possibility to maintain risk-neutrality. Specifically, we aimed to find a sufficient condition to ensure risk-neutrality by scheduling the ambiguity set of risks. As such, we only share the definition of ambiguity set by the DRO literature, but with a different goal of risk-neutrality.
## Relation/Inspiration by the dual form of coherent risk measures?
Coherent risk measures and perturbation gaps seem to be related, since we try to construct ambiguity sets according to the DRO literature. However, we were not inspired by the dual form and have not yet attempted to make the connection. The definition of the perturbation gap is a natural consequence of obtaining the upper bound of Theorem 3.3. Because it is a tractable and simple form, we were able to design a practical PQR by defining the perturbation gap as shown in the paper. It would be interesting to define the perturbation gap in a more sophisticated way.
## The bound in Theorem 3.3 does not necessarily go zero.
We cannot guarantee that $\Delta_t$ always goes to zero if it is unconstrained. Hence, in Assumption 3.2, we gave sufficient conditions to ensure that the upper bound always goes to zero. Please make sure that the subscript "$k=n$" in the summation of the right-hand terms is not misinterpreted as "$k=1$".
## Does our work only use the proposed PQR exploration strategy?
We implemented PQR as you would expect, and we believe it's the right way to make a fair comparison. We did not use any improvement techniques of RAINBOW to verify the effectiveness of our method alone. Please note that our baseline comparison, IQN-dopamine, also used $n=3$ multi-steps and still PQR achieved higher performance.
## What is meant by “intrinsic uncertainty” of Atari?
We apologize for not explaining this in more detail. It is true that the Atari game itself (without no-op or sticky actions) is completely deterministic. However, in v0 and v4, the versions we experimented with, OpenAI Gym additionally introduces random frame skipping, which is another source of intrinsic uncertainty. We will revise the sentence to make this clear.
Strengths: - The paper introduces the Perturbed Distributional Bellman Operator (PDBOO), a novel extension of the distributional Bellman operator. This approach enables diverse exploration while maintaining risk neutrality in distributional reinforcement learning. The authors also develop Perturbed Quantile Regression (PQR), a DQN variant that estimates quantiles based on PDBOO. The proposal represents a new and interesting direction of the study in distributional RL.
- The authors provide a solid theoretical analysis of PDBOO, demonstrating how the strength of the perturbation should be scheduled to achieve the unique fixed point of the Bellman optimality equation asymptotically. This result provides confidence in their proposed methods and valuable insights for algorithm development.
- The proposed PQR method is evaluated in a 4-states chain MDP and 55 Atari games. While the results showed room for improvement, the evaluation confirms the potential practical applicability of PQR and its ability to address the exploration problem while maintaining risk neutrality.
Weaknesses: - The paper lacks an explanation as to why the proposed approach is more effective compared to other methods. It is crucial to discuss under what tasks PDBOO is effective and when it might not be suitable to understand its value.
- For example, PDBOO would add random noise (perturbation) to the target return distribution for learning. Depending on the perspective, this could also have the side effect of making it more challenging to learn the return distribution due to the existence of noise. I would like to know if there are situations where PDBOO should not be applied.
- Also, it is not clear why the PDBOO approach is necessary: what the authors want to achieve with PDBOO might be accomplished not by perturbing the target return distribution for learning, but by perturbing the estimated return distribution at the time of action selection. This is somewhat supported by the results of p-DLTV in the N-chain MDP experiment.
- The overall experimental results seem weak and do not resolve the abovementioned questions. Furthermore, the improved performance of PQR is observed in a subset of games, and the improvement seems modest. There is no comparison with p-DLTV, which showed similar good performance in the N-chain MDP experiment.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How exactly does the proposed PQR select actions? Does it use an epsilon-greedy or softmax policy? More importantly, how does it compute the values of the actions which will be the input to the policy? Does it sample the perturbation \xi, as PDBOO does, and skew the distribution for calculating the mean? I think there are two timings at which the distribution can be perturbed by introducing ξ: during the parameter θ update, and during the action selection. This paper only seems to discuss the former, but considering that p-DLTV performs well similarly to the proposed method, wouldn't it also be valid to perturb the mean for the action selection?
- What is the type of policy for each baseline used in the experiments? Is it epsilon-greedy or softmax policy? I do not see it stated.
- Regarding the N-Chain with high intrinsic uncertainty experiment, the difference in the decaying rate of \Delta and c between PQR and DLTV or p-DLTV might also be significant. What are the authors' thoughts? It would be best to align the experiment conditions as much as possible.
- The authors used the N-chain task where risk-based exploration like DLTV obviously does not work well, so the presented results are reasonable. On the other hand, in a task where DLTV works well (e.g., both s_0 and s_4 have the same reward variance), I think it is essential to experiment to see how the proposed method performs to compare the proposed method fairly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The paper lacks a thorough discussion of the limitations of the proposed PDBOO and PQR. The authors should provide more insight into scenarios where the method might not be applicable or efficient.
- As pointed out above, why the proposed method is effective (for instance, as opposed to p-DLTV or simply increasing the temperature during action selection) remains unclear. It seems that the questions are not adequately addressed in the experiments. The authors should consider addressing these points to provide a more complete view of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and for taking the time to review our submission. Contrary to your concerns, we want to emphasize that we've already designed PDBOO in the direction you're thinking, and we have provided detailed answers below.
## Misconceptions about how PDBOO is designed
We will address this concern first, as it is the one that has caused the most misunderstanding. We want to emphasize that, as in p-DLTV, PDBOO only uses the perturbed return to determine the action that maximizes it, and updates the target by the “unperturbed” return distribution of that action. This is mentioned in lines 156-157 and in the pseudocode. To learn in the way you are concerned, the target must be updated by the perturbed pdf of $Z$ as $Q(dw) = \xi(w)P(dw)$. Specifically, our distributional Bellman update is written as
$T_\xi Z(s,a) = R(s,a) + \gamma Z(S', a^*(\xi))$, not
$T_\xi Z(s,a) = R(s,a) + \gamma (\xi Z)(S', a^*(\xi))$.
The latter is probably what you interpreted.
We are deeply concerned that this misunderstanding has had a significant impact on the evaluation of the paper, and we would like to add the following sentence to line 158 to avoid confusion.
- “Specifically, PDBOO perturbs the estimated distribution only to select the optimal behavior, while the target is updated with the original (unperturbed) return distribution.”
**In the global response, we add the comment that the target is updated with unperturbed return distribution.**
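The distinction can be made concrete with a minimal sketch (our own illustration with a made-up quantile table, not the authors' implementation): $\xi$ skews the atom weights only to pick the maximizing action, while the target uses the unperturbed quantiles of that action.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                       # quantile atoms per action (illustrative)
gamma, reward = 0.99, 1.0

# Hypothetical quantile estimates Z(s', a) for two actions.
Z_next = np.array([[0.0, 1.0, 2.0, 3.0],   # action 0: high spread
                   [1.4, 1.5, 1.6, 1.7]])  # action 1: low spread

# Perturbed action selection: weight the atoms by a sampled xi.
xi = rng.dirichlet(0.1 * np.ones(N))
a_star = int(np.argmax(Z_next @ xi))

# Target update uses the UNPERTURBED quantiles of a*:
# T_xi Z(s,a) = R(s,a) + gamma * Z(s', a*), not gamma * (xi Z)(s', a*).
target = reward + gamma * Z_next[a_star]
```

Note that `xi` never enters the `target` line: the perturbation influences which action's distribution is bootstrapped, not the distribution itself.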
## When is PDBOO effective/not effective?
PDBOO is more effective in environments with high intrinsic uncertainty, such as stochasticity from transitions, rewards, or observations, and less effective in near-deterministic environments. (However, this is not necessarily a disadvantage.) As an experiment, Appendix D.1 shows the results of varying the amount of intrinsic uncertainty: p-DLTV and PQR give consistent results, while DLTV shows a gradual degradation in performance as intrinsic uncertainty increases. This efficiency is crucial for achieving a better Bellman update, because the experimental results show that repeatedly choosing particular sub-optimal actions contaminates the return distribution through incorrect policy evaluation.
## Performance improvements in a subset of games and marginal improvements.
We respectfully disagree with your comment. To prevent performance exploitation from a subset of games, we have already provided 4 different performance metrics, including the median human-normalized score following the evaluation protocol of [1], in Table 1. Furthermore, the metrics compared to human and DQN scores support that PQR outperforms IQN and is competitive with RAINBOW. As mentioned in lines 307-309, RAINBOW applied several improvement techniques, while PQR only changed its exploration strategy, so the improvement cannot be considered marginal.
[1] Bellemare, Marc G., et al. "The arcade learning environment: An evaluation platform for general agents." Journal of Artificial Intelligence Research 47 (2013): 253-279.
## The type of policy for each baseline
Only DLTV, p-DLTV, and PQR explore in their own ways without using an epsilon-greedy schedule. DLTV's exploration method is mentioned in lines 102-106. It is noteworthy that PQR, like soft Q-learning, includes the exploration process in its Bellman operator. We mentioned in lines 97-99 and 307-308 that the remaining baselines QR-DQN, IQN, and RAINBOW all use epsilon-greedy schedules.
## Align the experiment conditions as much as possible
DLTV and p-DLTV have their own schedules, so we don't think it's appropriate to change them. We also set the scheduling of PQR to be close to $\frac{1}{t}$ based on theory. However, we think your concerns are valid, and it would be interesting to experiment with scheduling PQR like DLTV, even if it's not consistent with theory. We'll try to explain the various scheduling aspects in more detail.
## No experiment where DLTV works well.
We designed the N-Chain environment to test scenarios where the optimal action under the predefined risk measure is not necessarily the risk-neutral optimal action. Our experiments on the standard benchmarks, 55 Atari games and LunarLander-v2, have different reward designs, which confirms the generalizability of our action selection method. In other words, to design an environment where DLTV performs better, we would need to make the optimal solution of the mean-variance risk measure equal to the risk-neutral optimal solution. This is an artificial design that exploits a property of the optimal return distribution that should be unknown.
Despite the above issue of fair comparison, we can answer your comment, “compare the proposed method fairly” in the standard benchmark setting as follows.
In Figure 10 in Appendix D.3, we demonstrate the comparison on Atari with the 30 no-ops protocol. Among the injected stochasticities, 30 no-ops is known to introduce little randomness, because this protocol only affects the beginning of the episode. By the nature of the Atari games themselves, Pong has less intrinsic uncertainty because the opponent follows the same policy even as our agent is trained, whereas Seaquest has transitions that change over time, as its manual notes
*“so after each round, take a breath - enemy subs and sharks will increase in speed”*.
In summary, Pong has small intrinsic uncertainty, while Seaquest has larger intrinsic uncertainty (different transitions from a given state over time). In Figure 10, DLTV works well in Pong (low intrinsic uncertainty) but shows lower performance in Seaquest because of its high intrinsic uncertainty.
We want to emphasize that DLTV is a decent algorithm, but our proposed method outperforms it in cases of high intrinsic uncertainty, as shown in Figures 7, 9, and 10 and all result tables.
---
Rebuttal Comment 1.1:
Title: Additional Response by Authors
Comment: ## No comparison with p-DLTV
We apologize for the delay in writing about this issue due to the character limit. First of all, p-DLTV is an exemplary algorithm for comparing the OFU and randomized approaches very effectively. Also, PQR is a more general method than p-DLTV, which only considers second-order moments (variance), whereas PQR indirectly uses all the moments. We experiment with p-DLTV on some Atari environments in Appendix D.3, but do not include it in the main text for three reasons.
1. Reproducibility problem with DLTV: As we wrote in detail in Appendix D, DLTV was difficult to check for reproducibility because it did not provide raw scores. In Table 4, our implementation showed that DLTV had a Human Normalized Mean/Median Score of 603%/109% based on 40M frames, which is a marginal performance difference compared to QR-DQN's 505%/120%. Therefore, we decided to baseline only those algorithms that were reproducible, and DLTV and p-DLTV were excluded from the baseline for a fair comparison.
2. Hyperparameter sensitivity: While experimenting on several Atari games, we found that p-DLTV is very sensitive to its coefficient, c, as shown in Figure 6. As noted in Appendix C.1, we ran a grid search over six values of c, but the optimal value differed across environments depending on the scale of the reward, making this approach less effective. PQR, on the other hand, has the advantage of being inherently tunable, since $\xi$ is defined as a weight that is independent of scale, and thus showed robust performance.
3. Not consistent with our proposed theory: From a theoretical perspective, p-DLTV does not satisfy the sufficient conditions we proposed. Since we want to guarantee risk-neutrality while the agent explores, p-DLTV is only an intermediate algorithm, not the final goal. Furthermore, in the global response we experimented with PQR-$\sqrt{\log t/t}$, which has the same form of scheduling as p-DLTV, and found that it easily falls into suboptimality. | Summary: In this work, the authors address the issue of biased exploration caused by a one-sided tendency on risk in action selection by proposing the method perturbed quantile regression (PQR). PQR selects actions by randomizing the risk criterion while retaining a risk-neutral objective. The authors also derive a sufficient condition for the convergence of the proposed Bellman operator without satisfying the conventional contraction property. Results are demonstrated in an extended N-Chain environment and the Atari suite, showing improved performance.
Strengths: The paper was written very well in a pedagogical manner which provided intuition and understanding of the challenges in the field surrounding the contribution of the method. Deep technical detail was provided with accessible explanations.
The motivation for the method was well grounded intuitively. The experiment section also directly confirmed the intended contributions, organizing each experiment and result around specific questions directly aligned to the motivation.
The background section was thorough and clear. Since the authors pulled together several topics which many researchers do not study all of, this presentation was very important and added a clear path of accessibility to the technical detail in the rest of the paper.
The supplementary material provided sufficient information to reproduce results including code. The authors also included substantial supporting theoretical information in proofs and a large amount of additional convincing experiments.
The contribution to the community regarding managing risk in exploration strategies is important and interesting.
Weaknesses:
The authors claim limitations about epistemic and parametric uncertainty metrics as well as optimism in the face of uncertainty approaches which I thought were not backed up in the paper strongly enough given that these claims are the premise of the entire work. The paragraph in the introduction which presents the limitations describes it as a broad issue, yet only one paper is cited as an example. To provide more context, it would help to give at least more citations here to back up the importance of this challenge. A deeper explanation like the one given for the DLTV paper is not necessary for every additional citation.
Please provide more comments in Algorithm 1 (like the current text “Select greedy action…”). The algorithm is fairly clear but with the density of notation, recalling the variable references would be much easier for the reader with textual reminders in the algorithm block.
My primary concerns regarding this work are related to the experiments in Section 4.1 which require clarification. Figure 4 is quite hard to read since the text is very small. Reformatting could make these plots readable when the paper is printed. Furthermore, the results in Figure 4 are very hard to understand due not only to the legibility but also due to insufficient explanation in the experimental results section and figure. See Questions for more information.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am unsure if I am correctly interpreting the results in Figure 4. Is the lower standard deviation around the dotted line for the mean a1 considered to be a better estimation result? Given that this experimental setting is new (the authors indicate that they adapt the N-Chain environment and cite work - DQN - which presents very different result metrics), substantially clearer explanation is required about the connection between the presentation and the claims.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss limitations or broader impact. This discussion should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the time and effort in reviewing our paper. Please find below our response to the main points raised in the review.
## Provide more context to back up the importance of this challenge.
We will add a few more papers in line 50 that attempt to use distributions for exploration following DLTV.
- Ramtin Keramati, Christoph Dann, Alex Tamkin, and Emma Brunskill. Being optimistic to be conservative: Quickly learning a cvar policy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 4436–4443, 2020.
## Provide more comments in Algorithm 1
Thank you for your consideration. **In the global response, we add more detailed explanations in Algorithm 1**, including comments about scheduling for $\Delta_t$ and refinement to a weighted function.
## Reformatting and more explanation about Figure 4
Thanks for the details. We will try to make the font size as large as possible.
Also, you are right that a smaller standard deviation around $a_1$, the blue curve, means a better estimate. Regarding the explanation of Fig. 4, we will add information about the ground truth return distribution of $s_0$ and $s_4$ to the caption of Fig. 3 to make it easier to check whether the estimate of the red/blue curve is close to the ground truth.
- Ground truth return of $s_0$ : $\gamma^2 \mathcal{N}(10, 0.1^2)$
- Ground truth return of $s_4$ : $\gamma^2 [\frac{1}{2} \mathcal{N}(5, 0.1^2) + \frac{1}{2} \mathcal{N}(13, 0.1^2)]$
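For concreteness, the two ground-truth distributions above can be sampled with a few lines (an illustrative sketch only; the discount factor $\gamma = 0.99$ and the sample size are assumed values, not taken from the paper):

```python
import numpy as np

def sample_ground_truth_return(state, gamma=0.99, n=100_000, seed=0):
    """Draws samples from the ground-truth return distributions listed above.
    gamma and n are assumed values used purely for illustration."""
    rng = np.random.default_rng(seed)
    if state == "s0":                        # unimodal: gamma^2 * N(10, 0.1^2)
        return gamma**2 * rng.normal(10.0, 0.1, size=n)
    if state == "s4":                        # even mixture of two narrow Gaussians
        means = rng.choice([5.0, 13.0], size=n)
        return gamma**2 * rng.normal(means, 0.1)
    raise ValueError(state)
```

A histogram of `sample_ground_truth_return("s4")` shows the two modes near $\gamma^2 \cdot 5$ and $\gamma^2 \cdot 13$, making it easy to check whether an estimated distribution recovers the bimodal shape.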
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thank you very much for your additional details and responses to my questions. With these clarifications, I raised my confidence from a 3 to a 4 for my accept rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive response
Comment: We appreciate your positive response and we're glad that it has helped to increase your confidence in our paper. We will incorporate your thoughtful feedback into a revised version. | Rebuttal 1:
Rebuttal: # Global Response from Author
We thank all the reviewers for their valuable comments on our paper.
First, we added a revised **description of the pseudocode** and **additional experimental results for the schedule of $\Delta_t$** in the PDF, based on feedback from some reviewers.
The additional experiments were conducted in Pong-v4 where the maximum score is 21.0 and the baseline is described as follows:
- PQR : Our main algorithm, $\Delta_t = O(1/t^{1+\epsilon})$
- $\frac{1}{t^0}$ : $\Delta_t = O(1)$, to check the results for a fixed-size ambiguity set; it can be seen that this affects convergence.
- $\sqrt{ \frac{\log t}{t}} $: $\Delta_t = O(\sqrt{\log t/t})$ as in the scheduling of DLTV, which theoretically does not correspond to the sufficient condition we presented.
- OPT : The output vector sampled from the Dirichlet distribution is fixed to $[0,0,...,1]$, forcing the agent to estimate only optimistically.
- $\sqrt{ \frac{\log t}{t}} $ + OPT : A hybrid of the two methods above.
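The schedules above can be written out explicitly; a small helper (hypothetical names, with an assumed value of $\epsilon$) makes the asymptotic difference clear:

```python
import math

def delta_schedule(name, t, eps=0.1):
    """Delta_t schedules compared in this experiment (eps is an assumed value)."""
    if name == "pqr":     # O(1/t^{1+eps}): satisfies the proposed sufficient condition
        return 1.0 / t ** (1.0 + eps)
    if name == "fixed":   # O(1): fixed-size ambiguity set
        return 1.0
    if name == "dltv":    # O(sqrt(log t / t)): (p-)DLTV-style schedule
        return math.sqrt(math.log(t) / t)
    raise ValueError(name)

rates = {name: delta_schedule(name, 10_000) for name in ("pqr", "fixed", "dltv")}
```

Note how slowly the DLTV-style schedule decays relative to $O(1/t^{1+\epsilon})$, so residual perturbation persists far longer than the sufficient condition allows.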
Pong-v4 is a simple and easy environment in which the maximum score of 21.0 can easily be achieved by many distRL algorithms (QR-DQN, IQN, RAINBOW).
The game's description for Pong-v4 is as follows:
*A player or team scores one point when the opponent hits the ball out of bounds or misses a hit. The first player or team to score 21 points wins the game.*
In this experiment, our proposed PQR is the only method that stably achieves the maximum score with very low variance.
In the case of optimism, we can see that it learns quickly in the early stages but converges without reaching the maximum score, which is similar to its behavior in N-Chain.
In the case of a fixed ambiguity set, it converges to a suboptimal policy with very low performance, demonstrating the necessity of a time-varying schedule. Finally, when a schedule mimicking p-DLTV is applied to PQR, performance also degrades. From this, we believe that the proposed sufficient condition is quite tight.
Also, we would like to summarize and answer some common issues raised by several reviewers.
## Definition of a risk-neutral objective
Several RL papers propose new frameworks with different roles for the optimal solution by modifying the original objective function (e.g., maximum entropy RL and risk-sensitive RL). In this paper, we propose a novel Bellman operator for which, **even if the objective function is modified, the optimal solution remains the same as before** and only the exploration performance is improved. Since we want to obtain the same optimal solution as the original even though we change the objective, we call it a *risk-neutral objective*, which tries to avoid a one-sided risk tendency.
## Why the randomized approach is more effective than OFU in deep RL
While OFU is a principled criterion for exploration in bandit or tabular RL, it is already known that randomized approaches, including Thompson sampling, empirically outperform OFU, while there are not many papers that theoretically analyze the exact reason why.
Meanwhile, in deep RL, there have been attempts to use the information for exploration from distRL, which estimates the return distribution. DistRL tries to capture intrinsic (aleatoric) uncertainty, but the variance of the estimated distribution is mixed with parametric (epistemic) uncertainty.
We argue that using the estimated variance as the OFU is problematic and can be mitigated by a randomized approach.
The reason is that **the intrinsic uncertainty that remains in the estimated variance is irreducible during learning, and applying the OFU will bias the exploration towards risk-seeking behaviors.** We demonstrate this biased exploration phenomenon and its significant performance degradation. We also show in numerous experimental settings that the randomized approach is highly effective in avoiding this bias. Furthermore, we provide a sufficient condition for PDBOO to converge to the original optimal solution while applying a randomized risk criterion.
Pdf: /pdf/1d57c3b0cf7e19d6e8171bee5a50f234ad92df18.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses exploration via distributional RL which has, to date, been most recently performed under an _optimism in the face of uncertainty_ (OFU) paradigm. This paper loosely characterizes this OFU approach as problematic as it results in biased exploration, resulting in sub-optimal value distributions. To remedy this pitfall, this paper proposes a stochastic perturbation of the distributional Bellman operator. Clear theoretical development of the proposed operator gives way to practical algorithm development building on top of quantile regression. Empirically, the proposed algorithm PQR is demonstrated to have clear performance benefits.
Strengths: The paper sets up a really interesting juxtaposition between the advantages of the distRL paradigm in accounting for aleatoric uncertainty but attempting to resolve epistemic uncertainty through exploration. As set out in the introduction of the paper, this presents an appealing motivation to perhaps determine more effective exploration strategies that are cognizant of this apparent mismatch.
The methods presented in this paper to address the one-sided tendency on risk, resulting in a practical algorithm, are reasonable and seem to neatly satisfy the intended contributions set out in the paper.
I was impressed at the extent by which the authors provide sufficient background in distributional RL in Section 2. Many papers take shortcuts here however this paper lays out relevant concepts from which the remaining development of the methods introduced in the paper are easier to understand from this grounding.
The technical formulation of the PDBOO is clear and well written. The definitions help to outline important concepts that motivate the further development of the methods.
I appreciate how the authors approach Section 4, clearly laying out the objectives of their empirical study. This helps to frame the impressive experimental performance achieved by the introduced PQR algorithm.
Weaknesses: It’s not clear what is meant by “without losing the risk-neutral objective” (mentioned in the abstract)…
In line 43, it could be helpful to again list out what the two types of uncertainty are to more directly connect this sentence to what has been introduced conceptually earlier in the paper.
The authors seem to use “deep” RL interchangeably with “distributional” RL in the introduction. This makes the framing of the work a little challenging to follow initially. Better clarification of prior works in the non-distributional deep RL from distributional deep RL would help make this easier to understand and would strengthen the beginning sections of the paper.
The statements made in lines 50-53 are pretty strong, as in they seem to be a result of some analysis or in the worst case are derived from opinionated intuition. Some additional justification with formal explanation (whether through a toy example or reference to later section or citation to a paper) would help make the writing easier to accept and be persuaded by. This is especially important because the “one-sided tendency on risk” appears to be a major foundation for the proposed PDBOO.
It would be helpful if the authors expanded more on their discussion at the end of Section 2.2 since DLTV appears to introduce biased exploration that is addressed in this work. A couple of sentences addressing the limitations of DLTV and how the work presented in this paper addresses those would be great. The sentence that ends the preceding paragraph (Line 100-101) is a decent example of setting up a clear understanding of the contributions set forth in the paper.
Many of the claims about why PQR performs well on n-chain and in Atari are speculative. The paper would be significantly improved if more rigorous analysis connecting the theoretical treatise in Sections 3.2 and 3.3 to empirical performance and how it differs from the baselines would be great.
The discussion around the empirical results is somewhat disappointing. It would additionally be interesting to see any ablations that could be done on PQR to help tease apart the effect the different components of the algorithm have on performance. I suppose that this is partially addressed with the hyperparameter plots in Figure 6 but I think it would be helpful to see how sensitive the resulting policy is to the scope of the ambiguity sets used.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Is the guarantee of risk-neutrality found in expectation? E.g. is it a response to evenly choosing a risk-averse obj vs a risk-seeking obj?
The writing in lines 41-45 is pretty difficult to follow. It’s apparent that the authors are taking care to concisely convey their thoughts but fail to do so clearly.
Line 46: It’s not clear what “side-effect” is being referred to here. Additionally, it would be helpful to perhaps directly reference what section in the paper is being referred to in this sentence as the authors indicate that they’ve explored the effects of OFU in deep RL and the effects it has on handling the two types of uncertainty.
Line 61: Could you call PQR a “stochastic optimization” approach to help situate the work among other prior work? This isn’t a major concern of mine. I’m just thinking that it could be an easier way to describe the approach rather than calling it a “randomized” approach.
Line 93: The loss function here is simply the Huber loss, no? It’s not overly important to include this much detail to be honest unless the paper heavily develops further insight directly from these equations (applies to Section 2 in its entirety). Otherwise this just feels like a regurgitation of information that’s already established in the prior literature.
Line 128-129: Why is it important to have an interpretation of different risk levels as constructing an options framework? How does this help build the proposed methods introduced in this paper?
Line 178: Does “the standard” here mean the standard distributional Bellman operator?
Line 186: What is $B_\xi$? How does this relate to the symmetric Dirichlet distribution used to construct each ambiguity set? This seems to be addressed in Lines 210-212.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Several technical statements that are crucial to the development of the proposed methods/contributions are vague and are not precisely justified. For example, it would be really helpful (and would help me agree/accept the claims) in Lines 170-174 if it were shown precisely how Eq. 1 maintains risk-neutral policy performance. As currently written we have to take the authors’ word. Additionally, the statement of using a min-expectation vs. a min-max operator should be developed and justified. This is a new concept that hasn’t been discussed previously.
It’s not clear how the time-varying ambiguity sets are decoupled from a changing estimate of the return distribution Z. As formulated, PDBOO seems to ignore that $Z_\theta$ is changing through time.
The paper omits at least one relevant paper, Moskovitz, et al (NeurIPS 2021) where an ensemble of distributional critics is used to balance between optimism and pessimism for continuous control. It would be informative to hear from the authors why this paper, and its corresponding approach, should or shouldn’t be included as a comparative baseline.
>Moskovitz, Ted, et al. "Tactical optimism and pessimism for deep reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 12849-12863.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First of all, we appreciate the time and effort you put into our paper. We will revise our paper based on all feedback to improve the quality of the paper. We have responded to the key comments below.
## Clarity of “Without losing the risk-neutral objective”.
Previous work has focused on learning variations of existing objective functions such as maximum entropy or risk-sensitive. To emphasize that our work does not modify the objective of the existing distributional Bellman update, we refer to it as a risk-neutral objective. However, this does not seem to convey the meaning well, so we will change “without losing the risk-neutral objective” to “to avoid one-sided risk tendency” for clarity.
## Strong statements in line 50-53, unclear what ‘side-effect’ is being referred to here.
Thanks for pointing this out for readability. We will revise the paper as follows:
In Section 4, although DLTV is the first attempt to introduce OFU in distRL, we found that consistent optimism on the uncertainty of the estimated distribution still leads to biased exploration. We will refer to this side-effect as "one-sided tendency on risk", where selecting an action based on a fixed risk criterion degrades learning performance.
## Connecting the theory to empirical performance and further ablations.
While DLTV and p-DLTV do not satisfy our sufficient conditions, **we additionally experiment with different forms of $\Delta_t$ scheduling for PQR in global response**, such as $1/t^0$ and $\sqrt{ \log t /t}$, for a more robust comparison. We will also provide a table that shows the performance of the resulting policy as the hyperparameter varies.
## Guarantee of risk-neutrality
Risk neutrality is guaranteed by a sufficient condition on $\Delta_t$ and updating the target with an unperturbed return of selected action. For example, DLTV is a well-motivated distributional RL method with the unperturbed return updates, but it does not theoretically guarantee the risk neutrality because the bonus decay schedule is heuristically defined by the Hoeffding inequality.
Empirically, selecting behaviors based on randomized risk criteria plays an important role in eliminating the biased exploration of traditional OFUs, where behaviors with high intrinsic uncertainty are explored only because of the consistent positive bonuses (we call this "one-sided tendency on risk"). In our experiments, we confirm the above claim by showing that p-DLTV with only Gaussian noise injection outperforms DLTV with fixed optimism.
## Interpreting as a stochastic optimization.
There is a slight difference. Stochastic optimization aims to obtain a robust solution from a fixed ambiguity set, while we shrink the ambiguity set to avoid one-sided risk tendency, so our goal is different. The word "randomized" is a common expression in the exploration literature [1,2], and we think it is an appropriate description because our approach is aligned with the existing works. We will explain these differences in more detail in lines 170-179 of the revised version.
[1] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, and Lin Yang. Randomized exploration in reinforcement learning with general value function approximation. In International Conference on Machine Learning, pages 4607–4616. PMLR, 2021.
[2] Ian Osband, Benjamin Van Roy, Daniel J Russo, Zheng Wen, et al. Deep exploration via randomized value functions. J. Mach. Learn. Res., 20(124):1–62, 2019.
## Relation with $B_{\xi}$ and the construction of ambiguity set
$B_{\xi}$ is a constant that describes uniform boundedness (such as $L$-Lipschitz, smoothness) and we just use it for theoretical rigor. Since we consider $\xi$ to be the weight for $N$ quantiles, $B_{\xi}$ is always less than or equal to $N$. It plays no direct role in constructing the ambiguity set. To avoid confusion, we will use this expression only in the appendix for the proof and remove it from the main text.
## How does Eq1 maintain risk-neutral policy performance? More discussion on min-expectation vs min-max operator.
If we use a min-max operator with respect to risk, the solution is always more conservative (pessimistic) than the standard solution as it always considers the worst-case scenario. On the other hand, we considered the average-case which has the possibility to maintain risk-neutrality. Specifically, we aimed to find a sufficient condition to ensure risk-neutrality by scheduling the ambiguity set of risks. As such, we only share the definition of ambiguity set by the DRO literature, but with a different goal of risk-neutrality.
## PDBOO seems to ignore that $Z_\theta$ is changing through time
Line 156 defines PDBOO for a fixed $\xi$, but we extend it to a time-varying PDBOO below, denoted $\xi_t$. The ambiguity set is constructed to sample a perturbation $\xi$ that yields the same weighted expectation of the return distribution $Z$ for all actions.
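One way a time-varying set could be realized is sketched below; note that the function names and the interpolation toward the uniform (risk-neutral) weights are our illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def sample_xi_t(t, num_quantiles=8, eps=0.1, rng=None):
    """Illustrative sketch only: draw a perturbation weight xi_t from an
    ambiguity set that shrinks around the uniform weights 1/N over time,
    following a Delta_t = O(1/t^{1+eps}) schedule."""
    if rng is None:
        rng = np.random.default_rng()
    delta_t = 1.0 / t ** (1.0 + eps)                  # sufficient-condition schedule
    uniform = np.full(num_quantiles, 1.0 / num_quantiles)
    draw = rng.dirichlet(np.ones(num_quantiles))      # symmetric Dirichlet sample
    return uniform + delta_t * (draw - uniform)       # shrink toward risk-neutrality
```

As $t$ grows, $\xi_t$ collapses to the uniform weights, so the perturbed selection reduces to the ordinary risk-neutral greedy choice regardless of how $Z_\theta$ evolves.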
## [Moskovitz, et al.] as a comparative baseline
Thanks for suggesting a novel related work. As we understand it, their work controls pessimism/optimism based on the Exponentially Weighted Average Forecasting algorithm from the MAB literature. In this manner, their work has a similar point in terms of changing the risk criteria, but it is questionable whether the EWAF algorithm is still valid in deep RL, where the agent updates the target and uses function approximation.
It is difficult to use TOP-TD3 as a baseline because our work is based on a value-based algorithm targeting a discrete action space, while TOP-TD3 experimented with continuous control using an actor-critic architecture. Note that our main baselines QR-DQN, DLTV, IQN, and RAINBOW were all evaluated on the Atari environment, which has a discrete action space. However, since TOP-TD3 has a similar goal of addressing intrinsic (aleatoric) uncertainty in exploration, we will add it to our related work.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed responses
Comment: I appreciate the detailed responses from the authors. I am satisfied by the answers which have addressed my major concerns and correcting some areas that I misunderstood. As a result, I have decided to raise my score from 6 to 7.
Based on the responses and demonstrated efforts to revise the paper (as recommended by all reviewers) I am confident that the promised changes will result in a stronger paper, one suitable for publication.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive response
Comment: We're glad that our answer satisfied you, and we appreciate your positive feedback.
We'll incorporate your constructive feedback into the revised version. | null | null | null | null | null | null |
End-To-End Latent Variational Diffusion Models for Inverse Problems in High Energy Physics | Accept (poster) | Summary: The paper introduces a diffusion model based approach to tackle inverse problems. The method is then applied to High Energy Physics in order to reconstruct kinematic quantities.
Strengths: The paper is well written with clear structure, definitions, and figures. The paper includes testing of the proposed methodology, VLD, on an important HEP inverse problem, which is unfolding semi-leptonic tt events. It is benchmarked on this task against other state-of-the-art algorithms such as CINN, LDM, and VDM, as well as variations of the authors' own proposed algorithm. Their VLD algorithm does appear to significantly outperform the other considered algorithms.
Another strength of this paper is the fact that the authors are deliberate in construction of their network, explaining the advantages and necessity of each of the components.
Weaknesses: Limited testing of the proposed unifying architecture is a weakness of the paper. Indeed, the authors do acknowledge that further testing on different event topologies would be beneficial, yet including at least one more benchmark would increase the confidence in the algorithm’s performance.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Page 4, line 143, typo, should be “in an abstract”
2. Page 6, line 204, please define E to be energy, m mass, p momentum, and mention that you are using the HEP convention of setting c to be 1.
3. Please discuss the reason why UC-VLD outperforms the VLD on some metrics and vice-versa.
4. Figure 4, b quark, I don’t really see the bimodal nature of the distribution
5. Figure 4, neutrino, how is the neutrino “truth” line showing the bimodal nature?
6. What would happen with the LDM’s performance if a different prior weight is given?
7. What would happen in the benchmarks if the CINN uses the MMD loss instead?
8. To further show the improvement of the described unified training loss, it would be beneficial to show what would happen if the various components would be trained in parallel.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind comments and detailed questions. We address the limitations comment in the global rebuttal. We answer the questions here.
1. & 2. We thank the reviewer for noticing these points. We have done a pass through the paper and fixed these presentation issues along with other small wording changes.
3. We thank the reviewer for their insightful question. Unfortunately, we do not yet have an intuitive explanation for this behavior. The complexity is further compounded by the absence of a standard (and computationally feasible) method for measuring the distance between high-dimensional distributions. Each of the metrics we provide captures different aspects of distribution distance. We think the bin-independent metrics are more sensitive to outliers and long tails whereas the binned metrics focus more on the distance between the high-mass regions of the distribution. An examination of the detailed appendix results reveals that the UC-VLD performs more favorably with long-tailed features such as energy, but less so with the peaked mass term.
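The rebuttal's distinction between bin-independent and binned distribution distances can be made concrete with a toy sketch (an editorial illustration with made-up data, not the paper's actual metrics): a coarse histogram distance lumps the whole tail into one bin, while a transport-style distance grows with the tail's magnitude.

```python
# Toy illustration: binned vs bin-independent distances react differently
# to long tails. Names and data here are illustrative only.

def wasserstein_1d(u, v):
    """1D Wasserstein-1 for equal-size samples: mean |diff| of sorted values."""
    u, v = sorted(u), sorted(v)
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def binned_tv(u, v, edges):
    """Total-variation distance between normalized histograms on shared bins."""
    def hist(x):
        counts = [0] * (len(edges) - 1)
        for val in x:
            for i in range(len(edges) - 1):
                if edges[i] <= val < edges[i + 1]:
                    counts[i] += 1
                    break
        n = sum(counts)
        return [c / n for c in counts]
    p, q = hist(u), hist(v)
    return sum(abs(a - b) for a, b in zip(p, q)) / 2

base   = [0.0, 0.0, 1.0, 1.0]
tail_a = [0.0, 0.0, 1.0, 10.0]   # moderate outlier
tail_b = [0.0, 0.0, 1.0, 100.0]  # extreme outlier
edges  = [0.0, 2.0, 1e9]         # one coarse "tail" bin

# The binned metric cannot tell the two tails apart...
assert binned_tv(base, tail_a, edges) == binned_tv(base, tail_b, edges) == 0.25
# ...while the bin-independent metric grows with the tail magnitude.
assert wasserstein_1d(base, tail_a) == 2.25
assert wasserstein_1d(base, tail_b) == 24.75
```

This mirrors the rebuttal's observation: outlier-heavy features (such as energy) move the unbinned metrics more, while peaked features are captured by the binned ones.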
4. & 5. We regret the confusion surrounding the bimodal posteriors. Only the neutrino $\eta$ posterior is expected to be bimodal. This arises because neutrinos are not directly measurable at the detector, and their kinematics must be inferred from the event's missing energy. For events with a single neutrino, the missing neutrino energy defines a quadratic constraint on the $\eta$ term which often leads to two configurations satisfying both energy and momentum conservation. In Figure 4, we notice that the empirical estimate fails to capture this expectation, whereas the LVD captures this behavior without explicit information about this expectation. We also note that the truth sample will still be a single value sampled from this theoretically bimodal distribution. We have clarified this phenomenon in the revised text to better describe this expectation and our results.
6. This is an interesting question. We use a low prior weight in accordance with the LDM paper, as we replicate their VAE pre-training. We suspect that, since our use case makes the latent dimension larger than the data dimension, the prior loss is dominating the pre-training step. This is because reconstruction loss may trivially be near-perfect without a bottleneck, so the main constraint for the VAE is the Gaussian prior over the latent space.
7. The MMD (Maximum Mean Discrepancy) loss was used in the CINN paper to more accurately represent the true distributions associated with kinematics, such as the peaked mass term. We could not incorporate this MMD loss into our variational framework in a natural way as we employ a Gaussian VAE which necessitates a mean squared error loss term for the reconstruction. Instead, we chose to incorporate an additional self-consistent physics-informed regularization loss to enhance mass reconstruction, citing analogous strategies previously implemented for other generative models in physics. These two approaches share similar goals, although a detailed comparison of these methods could be fruitful for future work. We use exactly the same loss for all methods with the goal of evaluating only the generative model architecture's effect on performance.
8. We agree with this reviewer and in fact we reported the results of such an experiment in Table 1. In this table, we compare the component-wise training (LDM) to the unified end-to-end training (VLD), and show that the latter outperforms the former by roughly one order of magnitude. Since we use identical network architectures for all methods, the primary difference between these two methods is the unified training. We have further clarified this point in the revised version.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for the time taken to write the rebuttal and addressing the points I raised. | Summary: This paper proposes a unified framework, which combines the latent diffusion model and variational diffusion model, and applies the method to the inverse problem in the field of High Energy Physics. The loss terms for VAE and variational diffusion model are combined to achieve end-to-end training. The proposed VLD and corresponding variants achieve state-of-the-art performance as shown in experiments.
Strengths: * A unified framework to support end-to-end training for latent variational diffusion model.
* The proposed VLD and variants achieve state-of-the-art performance compared to existing generative models.
Weaknesses: Overall, the main contribution of this paper is the unified framework to combine latent diffusion model and variational diffusion model, where the core lies in the extra VAE loss (3rd term in Equation 8).
1. Technical novelty seems limited, since the only contribution could be summarized as an extra loss term in diffusion model.
2. The experiments to emphasize the importance of the extra loss term are limited. Among all the baselines, the LDM at L243-245 is the most similar method to the proposed one, where the only difference is the extra loss term that leads to an end-to-end training process. The small gaps between LDM and C-VLD in Table 1 also suggest the similarities. The main question is: is the comparison between LDM and VLD fair enough? The details about training LDM are missing, such as whether they adopt the same feedforward block as VLD, especially how to pretrain the VAE and what is the performance of the pretrained VAE. Existing experiments seem insufficient to support that the benefits of VLD are from the unified training process.
3. Since there are results for C-VLD and UC-VLD, which mainly differ in the conditional signal, is it possible to compare LDM with similar settings?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How to define that the dataset at L273 includes enough variations for the high energy physics? For example, is there any existing similar settings for reference?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Technically, this paper provides a unified way, which seems like a straight-forward combination of the loss terms, to train a latent variational diffusion model in an end-to-end manner. However, experiment results are insufficient to support the importance of the extra loss term, or the benefits brought by the unified training process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful questions.
1. We respectfully disagree with this characterization. Many important contributions in machine learning, such as weight regularization and the original VAE, may be characterized in this reductive manner if one overlooks the reasoning behind the respective loss modification. The primary contribution of this paper is a formal derivation of a unified variational architecture, together with its application to an important inverse problem in physics and an assessment of each component of this architecture. End-to-end training of this architecture in a theoretically justified fashion necessitates the addition of a new loss term, derived from the more foundational variational objective. Our experiments demonstrate that the main performance advantage originates from this unified end-to-end training of all the components, allowing the latent space to be fine-tuned to the diffusion task, in contrast to VAEs trained without diffusion or methods lacking a latent space.
2. We respectfully disagree with the reviewer's interpretation of the results. The VLD and LDM share identical network architectures for the denoising network, detector encoder, and VAE. The primary distinction between these two methods lies in pre-training the VAE (LDM) as opposed to training all components end-to-end (VLD). We have highlighted this distinction in the revised text to avoid confusion. Our findings, presented in Table 1, reveal that the regular VLD significantly outperforms the LDM, with the sole discrepancy being the unified training. We did observe that including a conditional decoder unexpectedly worsened performance, a phenomenon we discuss in the text in the paragraph starting on line 321. We hypothesize that this reduction may stem from the conditional decoder's sensitivity to minor errors in the diffusion process during inference.
3. We thank the reviewer for their interesting suggestion. While it may be possible to formulate a pre-trained LDM which uses a conditional VAE, doing so is non-trivial. The design would need to decide between using individual conditioning encoders for each phase or sharing the encoder for both phases. This is further complicated by the fact our domain lacks a pre-trained conditioning akin to CLIP for text. We encourage future work to examine this problem, but one of the benefits of our unified approach is that this change is simpler to implement while maintaining a common framework. | Summary: The paper benchmarks several network architectures on a real-life problem in particle physics, i.e. the problem of inverting the effect of limited detector resolution and guess the features of a given collision from what is actually observed in the detector (unfolding). Considering several metrics to assess the accuracy of a given unfolding, the authors show that a novel architecture for diffusion models provides the best performance.
Strengths: Very solid analysis, with clear explanation of the various steps.
Shows potential progress in applications, thanks to novel architecture
Weaknesses: none
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: none
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind comments and support of our paper. | Summary: Thanks for the author's rebuttal. I have read all of the rebuttals and reviews and decided to keep the current rating.
The current work proposes an extension of the variational diffusion model, called the variational latent diffusion model. It is applied to inverse problems in the high energy physics field and tested on a single topology. The result shows that the distance is three times smaller than for current latent diffusion models.
Strengths: Originality: The model is a combination of two existing models, the latent diffusion model and the variational diffusion model. Moreover, the author defined an appropriate loss function to train the model.
Quality: The result demonstrated that the proposed model outperforms other baseline models.
Significance: The experiment provides unique data in the high energy physics field.
Weaknesses: The paper has several typos and unclear points. Moreover, some claims are not well supported by the experiment results. The reviewer is confused by some of the presented points. Details can be found in the questions part.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. There is no general caption for Figure 1. Moreover, the sub-caption is not consistent with the sub-figure. For example, ($e$) in the caption while ($e^{-}$) in the subfigure, ($\nu$) in the caption while $\bar{\nu}$ in the subfigure, and $d$ in the caption while $\bar{d}$ in the subfigure. Moreover, subfigure (b) is too small to see the detail; the reviewer did not get the point of how it is related to your problem or the model. The font in this subfigure is too small to see. The reviewer suggests presenting subfigure (b) in a better way to give more insights into your problem or model.
2. There are several typos in the paper. For example, line 8, ``latent variation diffusion model``, is supposed to be ``latent variational diffusion models`` Line 192, the $\hat{\cdot}$ is on $z_t$ or $t$?
3. Some definitions are not clear. For example, how did you calculate $\hat{E}$ and $\hat{p}$ from the predicted value in Equation 10? What is the norm of the $\lambda_c|\cdot|$ in Equation 10? Is it $L_1$ norm or $L_2$ norm?
4. The author uses a deterministic encoder and claims it is better than the variational encoder. According to the author, the latter only has limited benefits while increasing training variance and complexity. However, there is no experiment result to support the claim. More ablation study is needed to justify the claim. Still, it sounds strange to the reviewer that the model is called a variational latent diffusion model but only uses a deterministic encoder.
5. The author proposed three variants of the model. Conditional, unconditional, and the last one, conditional encoder and unconditional decoder. The way of presenting the results confuses the reviewer. Firstly, which one do you promote to use in the conclusion? It seems in Table 1. VLD is better in the latter three metrics, while the UC-VLD is slightly better in the first three metrics. However, Figure 2 shows the framework of VLD, while all the Figures in the result part and the appendix part only show the results for the UC-VLD. If the authors want to promote VLD, consider adding relevant plot results for VLD. If the UC-VLD is better, consider changing Figure 2 to reflect this point.
6. A physics-informed consistency loss is proposed with a hyperparameter $\lambda_{c}$; how did you adjust the value of this hyperparameter? Moreover, what would be the effect of including and not including this loss?
7. The author claims the model is aimed at high-dimensional inverse problems. However, the training cost for the $55$-dimensional variables is already expensive ($24$ hours). Moreover, the designed latent space has a higher dimension than the input, instead of the commonly used lower latent-space dimension for compression. This design could introduce additional costs, and the reviewer is concerned with the scalability of the current model to higher dimensional problems.
8. What is the limitation of the current work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The author did not mention the limitations explicitly. I would be curious what would be the limitation of the proposed model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful questions and comments.
1. We agree with the reviewer that Figure 1 could benefit from additional context. The intent of this figure was to offer a visual illustration of parton configurations and detector observations, as subsequent sections describe this data solely in terms of kinematic vectors. We have augmented the captions to clarify this intent.
2. We thank the reviewer for noticing these detailed errors. We have performed a pass through the paper and fixed these errors along with other small wording changes. We have also added the missing Figure number for Figure 1 which did not render in our original submission.
3. As we train our model on the task of reconstruction, the inputs and outputs of our network consist of the parton momentum vectors, defined by the components $(M, \log E, p_x, p_y, p_z)$. These vectors represent specific physical quantities, such as mass and energy, and the network provides approximate values for these kinematic parameters. In response to this question and comments from other reviewers who requested clarification on the letter symbols and their associated physical quantities, we have added further details to this description. In relation to Equation 10, as all values are scalars, the bars represent a simple absolute value rather than a vector norm. We have clarified this with an explicit formulation in terms of mean absolute error.
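A minimal sketch of the consistency term as described in this answer (an editorial reconstruction from the rebuttal text, not the authors' code; the exact form of Equation 10 may differ): in the HEP convention $c = 1$, the mass implied by a four-momentum is $m^2 = E^2 - (p_x^2 + p_y^2 + p_z^2)$, and the penalty is a scaled mean absolute error between predicted and implied mass.

```python
import math

def consistency_penalty(preds, lam_c=0.1):
    """Mean absolute error between the predicted mass and the mass implied by
    the predicted four-momentum, using the HEP convention c = 1:
    m^2 = E^2 - (px^2 + py^2 + pz^2).
    `preds` is a list of (m, log_E, px, py, pz) tuples, matching the parton
    parameterization described in the rebuttal; lam_c = 0.1 mirrors the
    reported loss scale. Illustrative only.
    """
    total = 0.0
    for m, log_e, px, py, pz in preds:
        e = math.exp(log_e)
        m_sq = e * e - (px * px + py * py + pz * pz)
        m_implied = math.sqrt(max(m_sq, 0.0))  # clamp unphysical negatives
        total += abs(m - m_implied)
    return lam_c * total / len(preds)

# A self-consistent prediction incurs (numerically) zero penalty:
# E = 5, p = (0, 3, 0) -> implied m = sqrt(25 - 9) = 4
assert abs(consistency_penalty([(4.0, math.log(5.0), 0.0, 3.0, 0.0)])) < 1e-9
# Predicting m = 0 for the same four-momentum is penalized by 0.1 * |0 - 4|:
assert abs(consistency_penalty([(0.0, math.log(5.0), 0.0, 3.0, 0.0)]) - 0.4) < 1e-9
```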
4. We use a deterministic encoder only for the detector (conditioning) inputs, not the parton encoder. We have clarified this distinction in the text. We note in the text that we have the option to make the detector encoder variational, but we did not expect any benefit from this approach. Our decision aligns with previous work, including methodologies like CVAE and LDM, which similarly utilize deterministic encoders for their conditioning. This choice was also informed by preliminary experiments, which were insufficiently conclusive to include in our study. We have therefore also revised this section to more precisely articulate that we opted not to explore a variational encoder for the detector observations, owing to both a lack of compelling motivation and following existing precedent.
5. We thank the reviewer for identifying the discrepancy and the absence of detail in Figure 2, as well as for their insightful comment. We have revised Figure 2 to include annotations that highlight the differences between the three variations, thereby providing additional visual intuition to their definitions. We have included this updated figure in this response. We have also clarified that we view the UC-VLD as the best model for this dataset due to its performance on the bin-independent metrics and present its results in the graphical comparisons.
6. The consistency loss scale functions as a hyper-parameter, comparable to the weight regularization scale, and requires empirical tuning. Our computational limitations precluded a comprehensive hyper-parameter sweep for this parameter, so we selected a conservative value of 0.1, guided by the loss magnitudes observed during training. We observed that omitting this loss leads to a broader mass reconstruction, failing to capture the sharp peak in the mass features, and this trend was consistent across all models. To ensure a fair comparison, we applied the same loss and loss scale for all models.
7. We are indeed looking into higher dimensional problems and problems where the data could have a variable number of dimensions. Diffusion models are infamously very computationally expensive to train, even within existing application domains. We note that our problem has 55 dense features, where every feature is approximately independent and meaningful; this is in contrast to images, which have much higher dimensionality, but each individual feature is less informative. Due to this difference, we opt to make our latent space with higher dimensionality than our data to allow the VAE to learn a fine-tuned latent space for the diffusion objective.
8. We address this question in the global rebuttal. | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their detailed comments and questions. We begin with a discussion of the limitations as requested by two of the reviewers and then answer the remaining questions in individual responses. We also include a document with updated figures.
Limitations
--------------
We note in the text that the experiments performed in this study are limited to parton unfolding of a specific event topology and on simulated data. The method is general and may be applied to arbitrary topologies, with our choice guided by the limits of current baseline methods. We think this exploration is robust as many practically explored topologies are simpler than our semi-leptonic $t\bar{t}$ test-case. We note in the text that the next step would be to perform particle-level unfolding, which is not specific to a particular topology, and we believe this technique may be applied to this more general problem as well with little modification. Another common limitation of training on simulated data is the imperfection of the simulation and the possibility of skewing the results compared to real detector data. We note in the text that we must adjust the model's simulation bias by employing real data, and we cite potential techniques, such as the iterative approach of ICINN, to accomplish this adjustment. We have added further discussion to the text to illustrate these limitations and future solutions.
Pdf: /pdf/62d0715428604ef2a854deb5c428020d60078f28.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive Algorithms for Relaxed Pareto Set Identification | Accept (poster) | Summary: This paper studies a less common setting in which arms are multi-variate distributions. In the exploration period, the agent seeks to identify the arm which is Pareto optimal.
This problem is formulated as a fixed-confidence identification problem. The most novel aspect is that the fixed-confidence identification problem is extended to multi-objective settings.
This paper proposes a novel sampling rule called Adaptive Pareto Exploration. The central idea is to identify the two contentious arms and sample both or one of them.
The paper also gives the theoretical guarantee that the proposed method can recommend a correct set with high probability after at most a given number of iterations.
Strengths: This paper proposes a principled method and provides theoretical guarantees.
Weaknesses: It seems trivial to me to extend the single-objective best arm identification problem to the multi-objective setting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the proposed method identify the full (relaxed) Pareto front?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
* We would like to clarify from your summary that our algorithm is not meant to identify "the arm which is Pareto optimal" as there might be more than one Pareto optimal arm. We propose a sampling rule called Adaptive Pareto Exploration that when combined with different stopping rules, can solve different objectives: identify the entire Pareto set ($0$-PSI in our terminology), identify the entire Pareto set and possibly a few extra arms close to it ($\varepsilon_1$-PSI), identify at most $k$ arms that are (nearly) Pareto optimal ($\varepsilon_1$-PSI-$k$) or identify a "representative" subset of the Pareto set ($(\varepsilon_1,\varepsilon_2)$-PSI). We hope that this clarifies our contribution and answers your question.
* We strongly disagree that it is trivial to extend the single-objective best arm identification problem to the multi-objective setting. While PSI (Pareto set identification) is a natural extension of the best arm identification problem in standard bandits, PSI is actually a more challenging problem. In PSI we don't know the number of optimal arms beforehand: it can go from $1$ to $K$, while in BAI the classical assumption is that there is a single optimal arm. When relaxations are further taken into account, there are multiple valid solutions (e.g. for $\varepsilon_1$-PSI-$k$ when the size of the Pareto set is larger than $k$, any subset of size $k$ is a valid guess), which creates some additional difficulties even in the uni-dimensional setting [10]. It is true that our sampling rule shares a common structure with that of existing adaptive BAI algorithms which rely on playing two contentious arms ($b_t$ and $c_t$ in our paper), often the empirical best arm and a contender (LUCB, UGap or more recently Top Two algorithms). However, it actually took us several iterations to come up with the right definitions of $b_t$ and $c_t$ for the PSI problem, which interestingly do not exactly coincide with the choices in prior BAI algorithms when $D=1$ (see Appendix E). You can also read our answer to reviewer 9Ln4 for a better intuition about our definition of $b_t$ and $c_t$, which will be included in our revision. As a start, in the multi-objective case we no longer have an "empirical best arm" but a set of arms of random size that could equally be optimal. Likewise, many results and techniques known so far for BAI do not directly apply to our setting and our analysis required specific results to pin down the behavior of our algorithm: Lemma 3 and Lemma 4. We will be happy to sketch their proof (and their crucial ingredient, Lemma 8, currently stated in the appendix) if we are given an extra page for our revision.
---
Rebuttal Comment 1.1:
Title: reply
Comment: I have read the response. Since I am not an expert in this field, I am not confident in my score.
Strengths: One strength of the paper is its focus on addressing the problem of Pareto optimality in bandit literature, which is an underexplored area. The paper introduces a novel problem setup and clearly defines the goals for different scenarios related to Pareto optimal identification, including the identification of Pareto optimal sets, near Pareto optimal sets, and at-most $k$ near Pareto optimal sets.
The paper excels in providing a comprehensive discussion on sample complexity upper bounds, highlighting the theoretical aspects of the problem. It also establishes connections to the existing literature on Pareto optimality in bandits, demonstrating a solid understanding of the research landscape.
Furthermore, the paper offers ample experimental evidence to support its claims, including experiments conducted on real-world datasets. This empirical validation strengthens the credibility of the proposed approaches and enhances the practical relevance of the research.
Weaknesses: One weakness of the paper is the absence of references to real-world applications that could benefit from the proposed framework. For instance, discussing potential applications such as A/B testing in clinical trials would enhance the practical relevance and broader impact of the research.
Another weakness is the lack of discussion of lower bounds in the main paper. Given that the problem setup for Pareto set identification is relatively new, it would be valuable to explore more properties of the lower bounds to gain insights into the tightness of the derived upper bounds. This would provide a more comprehensive understanding of the problem and its inherent complexities.
The paper lacks a thorough discussion on the computational complexity of the $\epsilon_1$-APE-$k$ algorithm. Considering the importance of computational efficiency in practical applications, it would be beneficial to address this aspect.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: One potential weakness is the lack of information regarding scalability issues encountered when running large-scale experiments using the $\epsilon_1$-APE-$k$ algorithm. It would be valuable to understand whether the algorithm faces any challenges in handling larger datasets or more complex scenarios. Additionally, providing references on the scale of parameters used in A/B testing for applications beyond COVID datasets would further strengthen the practicality of the proposed approach.
Another point to consider is the limited discussion on the broader scope of utilizing Pareto sets in various applications apart from clinical trials. Exploring and discussing other potential domains where Pareto sets could be beneficial would enhance the paper's impact and shed light on additional practical applications.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitations or potential impact of their work discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your concerns and answer your questions below.
* We will add a paragraph about the computational complexity of $\varepsilon_1$-APE-$k$ in Section 5, with some details in Appendix H. The time complexity of $\varepsilon_1$-APE-$k$ is $\mathcal{O}(K^2D)$ and its memory complexity is $\mathcal{O}(K^2)$. The computational bottleneck is the computation of the $M(i,j,t)$ for each $(i,j) \in [K] \times [K]$, which requires a triple-nested for-loop over $[K]\times [K]\times[D]$. We have implemented the algorithm in C++17 compiled with GCC12. To give an idea of the run time, a single run on a random Bernoulli instance with $K=1000, D=10$ takes around 4 minutes for $0.1$-APE-$1000$ on a personal computer (a single 3GHz ARM core used, 8 GB RAM, 256 GB disk storage) with $\delta=0.01$ and no particular optimization. We are aware that due to its fully sequential nature, our algorithm may have a higher computational cost compared to uniform sampling strategies, which typically proceed in batches. However, in our implementation the most time-consuming operation was actually collecting a sample from the selected arm(s), especially for multivariate Gaussians. So that finally, in the experiments, our algorithm, which ultimately requires fewer samples, had in practice a smaller computational cost compared to PSI-Unif-Elim, which uses uniform sampling.
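For intuition on where the $\mathcal{O}(K^2D)$ per-round cost comes from, a minimal Pareto-set scan over all ordered arm pairs (an illustrative pattern only, not the authors' exact $M(i,j,t)$ computation, which uses confidence bounds rather than plain empirical dominance):

```python
def pareto_set(mu):
    """Return indices of Pareto optimal arms for a K x D matrix of means mu.
    Arm i is dominated by arm j if mu[j][d] >= mu[i][d] in every objective d,
    with a strict inequality in at least one. The double loop over arm pairs,
    each pair checked across D objectives, gives the O(K^2 * D) cost pattern.
    """
    K = len(mu)
    optimal = []
    for i in range(K):
        dominated = False
        for j in range(K):
            if j == i:
                continue
            D = len(mu[i])
            if all(mu[j][d] >= mu[i][d] for d in range(D)) and \
               any(mu[j][d] > mu[i][d] for d in range(D)):
                dominated = True
                break
        if not dominated:
            optimal.append(i)
    return optimal

# Two objectives, four arms: arms 0 and 1 trade off against each other,
# arm 2 is dominated by arm 0, and arm 3 is dominated by both 0 and 1.
mu = [[0.9, 0.2], [0.3, 0.8], [0.5, 0.1], [0.2, 0.1]]
assert pareto_set(mu) == [0, 1]
```

Note how, unlike best arm identification, the returned set can have any size from $1$ to $K$, which is the source of the additional difficulty the authors describe.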
* We will provide in our revision more examples of practical cases in which adaptively identifying a Pareto set is meaningful. In the current version of the introduction we mostly focused on the motivating examples used in our experimental evaluation, for which the $k$-relaxation is especially relevant, referring only briefly to other examples contained in [5] (see l.42-44). We will elaborate on those and provide others. For example in [5] the authors have evaluated the PAL (Gaussian Process based) algorithm on the SW-LLVM dataset, which is a dataset of 1023 compiler settings and 11 objective indicators (memory footprint, performance etc.). Since it is unlikely that a single compiler setting optimizes all the 11 objectives, it is meaningful to find compiler settings that are Pareto optimal. Another application is hardware design optimization [5,7]. The idea is similar to software design but the arms are different hardware implementations designed to solve a given problem. The usual objectives for this application include chip area, throughput, energy consumption and runtime.
Other applications include A/B/n testing for marketing or online recommender systems, in which it is common to jointly optimize multiple (possibly conflicting) objectives such as user behavioral metrics (e.g. clicks, streams, dwell time), supplier exposure objectives (e.g. diversity) and platform-centric objectives (e.g. promotions). We will add a reference to the paper (Mehrotra, Xue, Lalmas. *Bandit based optimization of multiple objectives on a music streaming platform*, KDD 2020). Although we are not aware of the scale of the parameters used for these kinds of applications, benchmarks of simple heuristics on undisclosed datasets in this paper have shown fair practical performance, which we are confident could be outperformed by our method. Going back to potential applications to adaptive clinical trials, we could also mention other examples besides vaccinology in which it is common to monitor several possibly conflicting objectives. For example, one can think of clinical trials combining patient-reported outcomes (e.g. quality-of-life questionnaires) and clinical outcomes (e.g. tumour remission), or clinical trials in neurocognitive diseases that use many different tests assessing different neurocognitive dimensions (e.g. executive functions and memory tests).
* Due to space limitations, we had to state our lower bound (Theorem 3) in the appendix, but we will move it to the main text if given some extra space for our revision, and also mention the existing lower bound of [6] (not for the $k$-relaxation). We emphasize that our lower bound is only a worst-case result (in the spirit of the one derived by [11] for the problem of finding a $k$-sized subset of the top $m$ arms in a standard bandit): we prove that there exist instances in which the sample complexity has to scale with our gaps. We leave as an open question whether a tighter lower bound, valid for every instance, can be derived; deriving such bounds in the presence of multiple correct outputs is known to be challenging [10].
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses, they helped me clear up some of my queries. | Summary: This paper extends the best arm bandit problem to a multi-objective version to identify the arms that are in the Pareto front.
Strengths: This problem is new and the author provides a clear practice motivation of this problem.
The theory study seems valid and complete.
Weaknesses: The math notation seems a bit over-complicated; I was wondering whether it could be simplified.
Overall, extending the best arm bandit problem to the multi-objective setting seems straightforward, which limits the novelty of this paper.
The experimental results look a bit hand-wavy. I understand that there might not be any algorithm specifically designed for such a problem, but I was wondering how the proposed method compares with some naively modified baselines?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In the definition of Pareto front, why we don't consider the variance of the distribution?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We comment about each weakness mentioned and answer your question below.
* Due to the multi-dimensional setting and the fact that we consider three relaxations simultaneously, we agree that the notation can be a bit heavy. We will double check if some simplification is possible.
* While PSI (Pareto set identification) is a natural extension of the best arm identification (BAI) problem in standard bandits, PSI is actually a more challenging problem. In PSI we don't know the number of optimal arms beforehand: it can go from $1$ to $K$, while in BAI the classical assumption is that there is a single optimal arm. When relaxations are further taken into account, there are multiple valid solutions (e.g. for $\varepsilon_1$-PSI-$k$ when the size of the Pareto set is larger than $k$, any subset of size $k$ is a valid guess), which creates some additional difficulties even in the uni-dimensional setting [10]. It is true that our sampling rule shares a common structure with that of existing adaptive BAI algorithms which rely on playing two contentious arms ($b_t$ and $c_t$ in our paper), often the empirical best arm and a contender (LUCB, UGap or more recently Top Two algorithms). However, it actually took us several iterations to come up with the right definitions for $b_t$ and $c_t$ for the PSI problem, which interestingly do not exactly coincide with the choices in prior BAI algorithms when $D=1$ (see Appendix E). You can also read our answer to reviewer 9Ln4 for a better intuition about our definition of $b_t$ and $c_t$, which will be included in our revision. As a start, in the multi-objective case we no longer have an ``empirical best arm" but a set of arms of random size that could equally be optimal. Likewise, many results and techniques known so far for BAI do not directly apply to our setting and our analysis required specific results to pin down the behavior of our algorithm: Lemma 3 and Lemma 4. We will be happy to sketch their proof (and their crucial ingredient Lemma 8 currently stated in appendix) if we are given an extra page for our revision.
* The experiments reported in the main paper are mostly meant to illustrate the reduction in sample complexity obtained when solving the $\varepsilon_1$-PSI-$k$ relaxation (for different values of $k$), compared to the state-of-the-art algorithm for the (unrelaxed) $\varepsilon_1$-PSI problem from [6], which we call PSI-Unif-Elim. In particular, when $k=K$ (the total number of arms), we are comparing two algorithms ($\varepsilon_1$-APE-$K$ and PSI-Unif-Elim) designed for the same objective, $\varepsilon_1$-PSI, showing that our proposal leads to a reduced sample complexity (by 25$\\%$ on average; Figure 1 and Table 2 of the main paper). In Appendix H.3.3, we further propose a naive modification of PSI-Unif-Elim for the $\varepsilon_1$-PSI-$k$ objective (Algorithm 3) and compare it to $\varepsilon_1$-APE-$k$. The results, reported in Figure 8, show that $\varepsilon_1$-APE-$k$ consistently has smaller sample complexity (up to 3 times smaller). We will put more emphasis on this last finding in our revision, as it is currently only briefly mentioned in l.302-303 of the main paper.
* We are not sure what you mean by ``considering the variance of the distributions in the definition of the Pareto front''. Right now, our goal (as in prior work) is to identify the Pareto set of the set of mean vectors, assuming that the marginal distributions of all objectives are sub-Gaussian with a known common bound on their sub-Gaussian parameter (l. 119-121). The setting and algorithms can be adapted to consider marginals that have different scales (i.e. different *known* bounds on their sub-Gaussian parameter), which is the case in our practical application, as explained in Appendix H.1. This amounts to identifying the Pareto set of the vectors
$\\{(\mu_i^{d}/\sigma^{d})_{d \in [D]}\\}_i$ where $\sigma^{d}$ is the sub-Gaussian parameter of criterion $d$. But maybe what you had in mind is to consider unknown variances and possibly a risk-averse version of the PSI problem, e.g. one in which the goal is to identify the Pareto set of the set $\\{\boldsymbol{\mu}_i - \alpha \boldsymbol{\sigma}_i^2\\}_i$ for some parameter $\alpha > 0$, where the vector $\boldsymbol{\sigma}_i^{2}$ contains the variance of each objective $d$ (which could now depend on $i$ as well). Adapting our algorithm to this setting would require significant effort (e.g. building confidence intervals on unknown variances) that is beyond the scope of this paper.
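For concreteness, here is a minimal sketch of the dominance check that defines the (possibly rescaled) Pareto set; the function name and the brute-force loop are our illustrative choices, not the paper's algorithm:

```python
import numpy as np

def pareto_set(mu, sigma=None):
    """Indices of Pareto-optimal arms among the (rescaled) mean vectors.

    mu:    (K, D) array of mean vectors, larger being better on every objective.
    sigma: optional (D,) array of known per-objective sub-Gaussian scales;
           when given, the check runs on the rescaled vectors mu[i, d] / sigma[d].
    """
    v = mu / sigma if sigma is not None else mu
    K = v.shape[0]
    optimal = []
    for i in range(K):
        # arm i is dominated if some j is >= on every objective and > on one
        dominated = any(
            np.all(v[j] >= v[i]) and np.any(v[j] > v[i])
            for j in range(K) if j != i
        )
        if not dominated:
            optimal.append(i)
    return optimal
```

Since dividing by positive scales preserves componentwise comparisons, the rescaling changes the gaps and confidence bounds that drive sampling rather than membership in the Pareto set itself.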
---
Rebuttal Comment 1.1:
Title: Thanks.
Comment: Thanks for the rebuttal. I keep my original score. | Summary: This paper studies a relaxed problem of Pareto set identification, in which the learner is required to identify a subset of the optimal arms. A single sampling strategy APE is proposed and then combined with various stopping rules to realize different relaxations of the original problem. In theory, this paper derives the sample complexity of these combinations. The proposed method is also validated empirically on a real-world scenario of Covid-19 vaccination.
Strengths: 1. Relaxed Pareto set identification is an important problem for multi-objective MAB. The proposed new framework may inspire new research in this field, if properly justified.
2. The proposed sampling strategy sounds novel, and the analysis is technically sound.
3. The experiment on selecting vaccination strategies against Covid-19 is interesting.
Weaknesses: 1. One of the main contributions of this paper is to propose a new problem of $\epsilon$-PSI-$k$ and provide an analysis framework for this problem. However, I think the motivation of $\epsilon$-PSI-$k$ should be explained in more detail. I am not sure if we need this new formulation as $(\epsilon_1,\epsilon_2)$-PSI has already dealt with the sparsity issue.
2. Some detailed explanations of the intuition of the proposed sampling strategy (and why it outperforms previous strategies in principle) may help better understand the novelty in the algorithm design.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your review. We address your two concerns about weaknesses below.
* Our motivation for the $\epsilon_1$-PSI-$k$ relaxation comes from possible applications to early stage clinical trials, e.g. in vaccinology as in our real-world scenario. In this context, the constraint to identify at most $k$ interesting arms that will be investigated in further phases of clinical development (phase II, phase III) comes from material constraints: the cost of producing distinct treatments at a larger scale, a maximum number of patients that has to remain an order of magnitude larger than the number of arms, etc. (see also lines 47-53 in our introduction). It is true that the parameter $\epsilon_2$ in the $(\epsilon_1,\epsilon_2)$-PSI relaxation also enforces some sparsity, but it provides no control over the size of the output, and such control is desirable in the above-mentioned scenario. Thus, we view these two relaxations as two different ways of ensuring sparsity, and one of our main contributions is to propose a single exploration strategy (APE) that can tackle both objectives. Our strongest results are for $\epsilon_1$-PSI-$k$, for which we manage to quantify the reduction in sample complexity resulting from the limitation $k$ on the size of the output, both in theory and in practice.
* Regarding the novelty in algorithmic design, we emphasize that our sampling rule is the first fully sequential sampling rule ever proposed for (any of the studied relaxations of) Pareto front identification. It is fully sequential in the sense that in each round $t$, the most informative arm $a_t$ is selected in a data-dependent way. All prior algorithms are based on a ``racing'' approach, i.e. they use uniform sampling and an accept/reject mechanism. It is known in single objective bandits that fully sequential algorithms usually outperform their racing counterparts (see [9]) and we extend this observation to the multi-objective setting. Following your suggestion, we will provide a more detailed explanation of the intuition for our sampling rule, which is currently a bit shallow. It is built in the spirit of the adaptive lower/upper confidence bound type algorithms in the single objective bandit setting, such as LUCB, LUCB++ or UGap. These three algorithms identify in each round two contentious arms: $b_t$, a current guess for the optimal arm (defined as the empirical best arm or the arm with the smallest upper bound on its sub-optimality gap), and $c_t$, a contender of this arm: the arm which is the most likely to outperform $b_t$ (in all three algorithms, it is the arm with the largest upper confidence bound in $[K]\backslash \\{b_t\\}$). Then, either both arms are pulled (LUCB, LUCB++) or the least explored among $b_t$ and $c_t$ is pulled. The originality of our sampling rule lies in how to appropriately define $b_t$ and $c_t$ for the multi-objective setting. The intuition for their definition is the following. Let $i$ be a fixed arm. Note that $M(i,j)>0$ for some $j$ if and only if there exists a component $d$ such that $\mu_i^d > \mu_j^d$ (recall that $M(i,j):= \max_d (\mu_i^d - \mu_j^d)$), i.e. $i$ is not dominated by $j$. Moreover, the larger $M(i,j)$, the more $i$ is non-dominated by $j$ in the sense that there exists $d$ such that $\mu_i^d \gg \mu_j^d$.
Therefore, $i$ is strictly optimal if and only if for all $j\neq i$, $M(i,j)>0$, i.e. $\alpha_i := \min_{j\neq i}M(i,j)>0$. And the larger $\alpha_i$, the more $i$ looks optimal, in the sense that for each arm $j\neq i$, there exists a component $d_j$ for which $i$ is way better than $j$. As the $\alpha_i$'s are unknown, we define $b_t$ as the maximizer of an optimistic estimate of $\alpha_i$. We further restrict the maximization to arms for which we are not already convinced that they are optimal (by Lemma 1, the arms in $OPT^{\varepsilon_1}(t)$ are (nearly) Pareto optimal on the event $\mathcal{E}$). Then, we note that for a fixed arm $i$, $M(i,j) < 0$ if and only if $i$ is strictly dominated by $j$. And the smaller $M(i,j)$, the closer $j$ is to dominating $i$ (for every component $d$, $\mu_i^d - \mu_j^d$ is small). Thus, for a fixed arm $i$, $\text{argmin}_{j\neq i} M(i,j)$ can be seen as the arm which is the closest to dominating $i$ (or which dominates it by the largest margin).
By minimizing a lower confidence bound on the unknown quantity $M(b_t,j)$, our contender $c_t$ can be interpreted as the arm which is the most likely to be (close to) dominating $b_t$. Gathering information on both $b_t$ and $c_t$ can be useful to check whether $b_t$ can indeed be optimal. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
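The selection of $b_t$ and $c_t$ described above can be sketched as follows; the single confidence radius per arm and the omission of the restriction to arms outside $OPT^{\varepsilon_1}(t)$ are simplifications of ours, not the paper's exact rule:

```python
import numpy as np

def select_bt_ct(mu_hat, beta):
    """One round of the b_t / c_t selection sketched above.

    mu_hat: (K, D) empirical means; beta: (K,) confidence radii.
    b_t maximizes an optimistic estimate of alpha_i = min_{j != i} M(i, j);
    c_t minimizes a pessimistic (lower-bound) estimate of M(b_t, j).
    """
    K = mu_hat.shape[0]
    M_hat = np.array([[np.max(mu_hat[i] - mu_hat[j]) for j in range(K)]
                      for i in range(K)])
    np.fill_diagonal(M_hat, np.inf)          # exclude j == i from the min
    # optimistic alpha: inflate M(i, j) by both arms' confidence radii
    alpha_ucb = np.array([np.min(M_hat[i] + beta[i] + beta) for i in range(K)])
    b = int(np.argmax(alpha_ucb))
    # pessimistic M(b, j): deflate by the radii, pick the closest dominator
    M_lcb = M_hat[b] - beta[b] - beta
    M_lcb[b] = np.inf
    c = int(np.argmin(M_lcb))
    return b, c
```

With all radii equal to zero, this reduces to picking the empirically most Pareto-optimal arm and its closest empirical dominator.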
Deep Equilibrium Based Neural Operators for Steady-State PDEs | Accept (poster) | Summary: The paper proposes a method combining Fourier Neural Operator (FNO) with the Deep Equilibrium Model (DEQ) for more efficient operator learning under noisy conditions. Overall, the paper is well-written with clear methods and theory, and some effective experimental results. It's a decent piece of work, and I recommend a weak acceptance. However, I believe there are still areas in the paper that could be improved.
Strengths: 1. The paper is well-written, with a clear presentation of the method and theoretical underpinnings.
2. The experiments show some positive results, demonstrating the potential utility of the proposed approach.
Weaknesses: 1. The paper lacks a thorough literature review. It hardly mentions operator learning methods based on attention, which have demonstrated significant advantages in many areas, such as Navier-Stokes equations and problems on irregular geometric regions. Therefore, I believe the authors should add some references [1,2,3] to cover these works.
2. Each experiment in the paper compares the effects of adding noise to either the input or output. However, the improvements observed in noisy settings do not seem to be as significant as those without noise. This makes me wonder why the authors considered these experiments necessary.
3. Additionally, as a new model structure, DEQ should be able to be combined with many other approaches, such as DeepONet or other neural operators. However, the paper seems not to mention this possibility and only tries to combine it with FNO. It would be interesting to see an exploration of the potential of DEQ in combination with other models.
References
1. Transformer for Partial Differential Equations' Operator Learning (https://arxiv.org/abs/2205.13671)
2. GNOT: A General Neural Operator Transformer for Operator Learning (https://arxiv.org/abs/2302.14376)
3. HT-Net: Hierarchical Transformer based Operator Learning Model for Multiscale PDEs (https://arxiv.org/abs/2210.10890)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and are encouraged to see that the reviewer finds the paper to be well-written with positive experimental results.
Please find our replies to some of your comments and concerns:
**Re: Lacking references to attention based operator learning.**
We apologize for not including the mentioned papers in our literature review and will be sure to include them in the Related Work section in the revised manuscript.
**Re: Importance of experiments with noise.**
The primary motivation behind showing results with added noise was to emulate real-world scenarios where the different physical quantities in a PDE are measured using sensors—since they are likely to incur noise in the observations.
We further note that, especially when trained with noisy inputs, the relative degradation in performance is smaller for the weight-tied architectures (FNO-DEQ and FNO-WT) than for FNO and FNO++. For example, for Navier-Stokes with viscosity 0.01 we only see a 7% decrease in the performance of FNO-DEQ (the best performing weight-tied model) versus a 23% decrease for the best performing FNO++ model.
Therefore, weight tied architectures are indeed an effective inductive bias for solving steady-state PDEs with neural operator frameworks.
**Re: Weight-tied DeepONets**
In principle, weight-tying and DEQs can be combined with other neural operators like DeepONets. We chose FNO in part due to its performance on various benchmark tasks in PDEBench [1]. We certainly hope that experiments with other operator architectures will be performed, by us and the community, and think this is fertile ground for future work!
[1] Takamoto, Makoto, et al. "PDEBench: An extensive benchmark for scientific machine learning." Advances in Neural Information Processing Systems 35 (2022): 1596-1611.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, thank you for your review and feedback! Since today is the last day of the author-reviewer discussion period, please let us know if there are any outstanding questions that we can help answer or clarify. | Summary: The paper proposes two weight-tied neural network architectures for solving steady-state partial differential equations (PDEs) using the universal approximation capabilities of neural networks. The first architecture is a weight-tied version of Fourier Neural Operators (FNO), while the second architecture is a Deep Equilibrium Model (DEQ) that uses black-box root finding algorithms to implicitly train the model. The paper shows that both architectures outperform existing methods on benchmark problems and can be used to learn efficient solvers for PDEs. The contributions of the paper include the introduction of the weight-tied FNO and FNO-DEQ architectures, as well as the demonstration of their effectiveness in solving steady-state PDEs.
Strengths: The main strengths of the paper are:
1. The paper proposes a new architecture called FNO-DEQ that uses weight-tied neural network layers to solve steady-state partial differential equations (PDEs). The architecture is based on the observation that the solution of most steady-state PDEs can be expressed as a fixed point of a non-linear operator.
2. The paper shows that FNO-DEQ outperforms other non-weight-tied architectures with 4x the number of parameters in predicting the solution to steady-state PDEs such as Darcy Flow and steady-state incompressible Navier-Stokes.
3. The paper demonstrates that FNO-DEQ and weight-tied architectures are more robust to both input and observation noise compared to non-weight-tied architectures, including FNO.
4. The paper leverages the universal approximation results of FNO to show that FNO-DEQ can universally approximate the solution operator for a wide variety of steady-state PDE families.
Weaknesses: Some potential limitations of the paper are:
1. The proposed FNO-DEQ architecture may not be applicable to all types of PDEs, as it is specifically designed for steady-state PDEs. Further research is needed to explore the effectiveness of weight-tied architectures for other types of PDEs.
2. The paper focuses on the empirical performance of the proposed approach and does not provide a detailed theoretical analysis of why weight-tying is effective for steady-state PDEs. A more rigorous theoretical analysis could provide deeper insights into the underlying mechanisms of the proposed approach.
3. The paper does not compare the proposed approach to other state-of-the-art methods for solving steady-state PDEs, such as finite element methods or spectral methods. A more comprehensive comparison could provide a better understanding of the relative strengths and weaknesses of different approaches.
4. The proposed approach may require more computational resources than other methods, as it involves solving for the fixed point of an implicit operator layer. This could limit its scalability to larger or more complex PDEs.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Refer to the weaknesses section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your positive review! Please find our responses to the concerns you raised below.
**Re: Proposed architectures not applicable to all PDEs.**
There are a multitude of different forms of PDEs, each with their unique characteristics that may or may not be efficiently modeled by a single neural network architecture. In our paper we show that for a *broad class of PDEs*, namely *steady state PDEs*, there are specific architectural choices that can outperform the baselines with much fewer parameters. We fully agree that charting the space of effective architectures for various PDE families is a major outstanding question in using machine learning methods for PDE solvers.
**Re: Lack of theoretical analysis of why weight-tying is beneficial for steady-state PDEs.**
We agree that more theoretical understanding of the benefits of different kinds of architectures is an exciting avenue for further research, and we hope that there is a concerted effort towards studying and designing architectures for PDEs by the machine learning community!
However, we would like to note that, in general, theoretically establishing separations between the performance (statistical or algorithmic) of different architectures is incredibly challenging. For example, sample complexity separations between convnets and fully connected networks are very poorly understood, and most of the known results hold only under very strong assumptions [4].
That being said, as we mention in our paper, there indeed are some theoretical motivations towards using weight-tied architectures for steady-state PDEs as established in the previous works of [1,2,3] which we use as a motivation for our work.
**Re: Comparison with finite element method and spectral methods.**
Our paper builds on neural operators (in particular Fourier Neural Operators (FNO) [5]), which have many benefits over fixed algorithms like finite element methods or spectral methods. Some of the benefits are:
- **Computational efficiency**: Neural operators learn a solution operator for an entire family of PDE, which has an inference time of just 0.005s compared to the 2.2s of spectral methods when used to solve Navier-Stokes equations [5]. Similar gains can be observed for many families of PDEs.
- **Robustness to discretization**: Neural operators (especially FNO) are robust to changes in discretization at inference time, that is they tend to perform well even if some of the training data is very coarsely discretized.
We refer the reviewer to [5] and related works for a more comprehensive overview of the benefits of neural operators.
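The discretization robustness comes from FNO's core Fourier layer, which places learnable parameters only on a fixed number of low-frequency modes. A minimal 1D sketch of ours (an actual FNO uses multi-channel complex weight tensors, several such layers, pointwise nonlinearities, and a parallel linear path):

```python
import numpy as np

def spectral_conv1d(u, weights):
    """Minimal 1D Fourier-layer sketch: FFT -> weight the lowest
    `modes` frequencies -> inverse FFT.

    u:       (n,) real-valued samples of a function on a uniform grid.
    weights: (modes,) complex multipliers (learned in a real FNO).
    """
    modes = weights.shape[0]
    u_hat = np.fft.rfft(u)                 # (n // 2 + 1,) complex coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]
    return np.fft.irfft(out_hat, n=u.shape[0])
```

Because the parameter count depends on `modes` and not on the grid size, the same weights apply unchanged to inputs sampled at any sufficiently fine resolution, which is the robustness to discretization mentioned above.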
**Re: Scaling to more complex PDEs.**
It is true that the computational overhead would increase with more complex PDEs; however, that is true for *all* neural network based PDE solvers and is not limited to FNO-DEQ or FNO-WT. In fact, we believe weight-tied and DEQ based architectures are better suited for more complex steady-state PDEs, since the memory used by DEQs for the backward pass is constant, i.e., O(1), whereas for the deeper architectures FNO and FNO++ the memory required increases with the number of layers.
However, as mentioned in our paper, training DEQs can be slow as we solve for a fixed point in the forward pass. Through use of approximate implicit gradients and with careful selection of hyperparameters, the training can be made faster. Further, when compared to a non-weight-tied network of a similar depth, the overhead due to fixed point solving is marginal, both in terms of compute and memory requirements.
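As a rough illustration of this memory argument, here is a toy fixed-point forward pass (ours, with a contractive affine map standing in for the weight-tied operator block): only the current iterate is stored, no matter how many iterations are run.

```python
import numpy as np

def deq_forward(f, x, z0, tol=1e-8, max_iter=500):
    """Solve z* = f(z*, x) by naive fixed-point iteration.

    Only the current iterate is kept, which is the O(1) activation-memory
    property mentioned above; DEQs additionally differentiate through z*
    implicitly rather than backpropagating through the iterations.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive "layer" f(z, x) = W @ z + x with ||W|| < 1; its exact
# fixed point is (I - W)^{-1} @ x, so the iteration can be checked directly.
W = np.array([[0.5, 0.1], [0.0, 0.4]])
x = np.array([1.0, 2.0])
z_star = deq_forward(lambda z, x: W @ z + x, x, np.zeros(2))
```

In a real DEQ, `f` would be the weight-tied network layer and the naive iteration is typically replaced by a faster root-finding routine such as Anderson acceleration or Broyden's method.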
[1] Marwah, Tanya, et al. "Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective." International Conference on Machine Learning. PMLR, 2023.
[2] Marwah, Tanya, Zachary Lipton, and Andrej Risteski. "Parametric complexity bounds for approximating PDEs with neural networks." Advances in Neural Information Processing Systems 34 (2021): 15044-15055.
[3] Chen, Ziang, Jianfeng Lu, and Yulong Lu. "On the representation of solutions to elliptic pdes in barron spaces." Advances in neural information processing systems 34 (2021): 6454-6465.
[4] Li, Zhiyuan, Yi Zhang, and Sanjeev Arora. "Why are convolutional nets more sample-efficient than fully-connected nets?." arXiv preprint arXiv:2010.08515 (2020).
[5] Li, Zongyi, et al. "Fourier neural operator for parametric partial differential equations." arXiv preprint arXiv:2010.08895 (2020)
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, thank you for your review and feedback! Since today is the last day of the author-reviewer discussion period, please let us know if there are any outstanding questions that we can help answer or clarify. | Summary: This research examined the solution of steady-state Partial Differential Equations (PDEs) using Fourier Neural Operator (FNO) based architectures. The author introduced a fixed-point iteration mechanism into the FNO framework, leading to the proposal of the weight-tied FNO and FNO Deep Equilibrium (FNO-DEQ) models. Comparative analyses revealed that these newly proposed architectures outperformed the traditional FNO when solving standard Darcy Flow and Navier-Stokes equations.
Strengths: This paper is well written, providing clear and rigorous mathematical definitions of the problem, its underlying theory, and the proposed solution.
The innovation of incorporating the fixed-point iteration mechanism (via the contraction mapping theorem) into the Fourier Neural Operator (FNO) is novel and captivating. The results clearly demonstrate a significant improvement when applying this technique to steady-state Partial Differential Equations (PDEs).
Further enhancing the strength of the paper, the author proves a universal approximation theorem for the FNO Deep Equilibrium (FNO-DEQ) model, which assures the boundedness of the approximation. This theoretical validation lends additional credibility and rigor to the findings.
Weaknesses: The paper doesn't provide any loss versus training epochs data, which would allow us to assess whether the training had indeed reached convergence (as well as to compare the speed of convergence). While it's assumed that convergence must have been achieved for the results shown in Tables 2 and 3, the absence of this specific data inhibits a more comprehensive understanding of the model's training process.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The paper doesn't provide a clear explanation regarding the practical implementation of the Fourier Neural Operator Deep Equilibrium (FNO-DEQ) model. The inclusion of a brief paragraph or section detailing its implementation would greatly enhance the reader's understanding and potentially facilitate the model's broader adoption.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The author focused only on the standard Darcy flow and Navier-Stokes problems, on which the classical FNO already works well. An immediate question is whether FNO-DEQ can be applied to more challenging PDEs (for example, 3D Navier-Stokes), where classical FNO struggles, to show its effectiveness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the encouraging review and feedback. We are glad that the reviewer finds our paper novel and captivating! Please find our replies to some of your comments below. We promise to make the corresponding changes in the camera-ready version of our draft to incorporate your suggestions.
**Re: Loss vs training epochs data and convergence.**
Thank you for your feedback! We agree that adding convergence plots will help towards a more comprehensive understanding of our methodology. We have added the convergence plots for Navier-Stokes with viscosity 0.01 in the attached PDF (Figure 1). The key observation is that while all the models converge to approximately the same train MSE value, the convergence of the test loss differs, i.e., DEQs and weight-tied networks reach a better test loss in fewer epochs.
We also note that the convergence plots for Darcy Flow and Navier-Stokes with viscosity 0.001 follow similar trends. We will add the convergence plots for all the PDEs and models in the final version of the paper.
**Re: Practical implementation and clarification.**
We provide a detailed description of the implementation and architectural details in the Appendix (Section A) of the supplementary material. For example, we note that all the networks are trained for 500 epochs with the Adam optimizer and an L2 weight penalty coefficient of 1e-4. The learning rate is set to 0.001 for Darcy flow and 0.005 for Navier-Stokes, with batch size 32. Further information about the training details of the DEQ architecture is provided in the Appendix (Section A) as well. We will restructure the draft to include the key implementation details earlier in the paper.
Finally, we will release our code (submitted as supplementary material) detailing the implementation details and appropriate documentation on how to reproduce our results along with the camera-ready version of the paper.
**Re: Standard Darcy Flow and Navier-Stokes problem where FNO already works well.**
We agree that applying FNO-DEQ and FNO-WT to more challenging PDEs is an important step towards showcasing the benefits of the proposed architectures, and we believe scaling our methodologies to more complex PDEs is a fertile ground for future work. However, we also would like to note that ours is the first paper to benchmark the performance of FNO (and FNO++) along with weight-tied models on steady-state 2D Navier-Stokes PDEs!
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. All my questions are well answered. | Summary: The paper tackles the problem of solving steady-state PDEs with weight-tied FNOs. The authors argue that instead of stacking multiple FNO layers with different parameters, repeatedly applying one FNO layer is a better choice. This hypothesis is motivated by the fact that steady-state PDEs are solved for fixed-point solutions, where evolving the PDE further will not change the solution. Moreover, instead of directly stacking the same FNO layer and hoping the fixed point can be reached, the authors propose to use a more advanced fixed-point solving method (FNO-DEQ). By exploiting the fixed-point property, the DEQ has smaller training memory usage but is slower. Experiments are conducted on Darcy flow and Navier-Stokes equations.
Strengths: - The paper is well structured.
- The proposed method leverages well the fact that the solutions are fixed points.
- Experimental results look good.
Weaknesses: - There are very few discussions on the convergence of the fixed point solution. For example, if we apply the FNO layers more times than during training, would the solution remain the same?
- There are two major differences between FNO-WT and FNO-DEQ. First, in the forward pass, FNO-WT directly applies the network multiple times while FNO-DEQ uses a fixed-point solver. Second, in the backward pass, FNO-WT directly backpropagates through the computation graph while FNO-DEQ uses implicit gradients. These two components seem independent of each other. For example, since FNO-WT in some sense also solves for a fixed-point solution, can we use the implicit gradient to train FNO-WT?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How is the speed of computing the implicit gradient?
- In Definition 3, shouldn't all functions map from $\Omega$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and response! We are glad that the reviewer finds that our paper is well structured and likes the empirical results. We address some of the concerns raised by the reviewer below:
**Re: A discussion on the convergence of the fixed point.**
Increasing the number of fixed-point solver iterations results in a better estimate of the fixed point. For steady-state PDEs, we expect the test error to decrease as the estimate of the fixed point improves. Furthermore, at inference time we observe that the test error improves (i.e., reduces) with an increase in the number of fixed-point solver iterations, even though the DEQ is trained with fewer solver steps.
We report these empirical results for Darcy flow and steady-state Navier-Stokes (viscosity 0.01) in Tables 1 and 2 of the attached PDF. For example, for Navier-Stokes with viscosity 0.01, at inference time the test MSE improves from 0.0847 with 24 solver steps to 0.0744 with 48 solver steps.
This further bolsters the benefits of DEQs (and weight-tied architectures in general) for training neural operators for steady-state PDEs. Moreover, performance saturates after a certain point once we have a reasonable estimate of the fixed point, hence showing that more solver steps stabilize to the same solution.
While we only show convergence graphs for the noiseless experiments due to space constraints, we will include similar results for all the PDEs as well as experiments with noise in the final version of the paper.
**Re: Difference between FNO-WT and FNO-DEQ.**
Both FNO-WT and FNO-DEQ leverage weight-tying (i.e., applying the same transformation at each layer) as the fundamental architectural choice. Where they differ is in the final aim of the forward pass: while FNO-WT performs a *fixed-depth* computation (that may or may not approach a fixed point), FNO-DEQ is explicitly trained to find/tend to a fixed point. Therefore, since FNO-WT might fail to converge to a fixed point, implicit gradients cannot be used for FNO-WT.
We refer the reviewer to Bai et al. [1] for further details on the difference between equilibrium models and traditional weight-tied models that repeat a transformation for a fixed depth.
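To make the distinction concrete, here is a minimal scalar sketch of the two forward passes. This is an illustrative toy, not the paper's FNO implementation; `f` stands in for a generic weight-tied block that happens to be a contraction, so both iterations approach the same fixed point:

```python
import math

def f(z, x):
    # One generic weight-tied block: a contraction in z, since |0.5| * Lip(tanh) < 1.
    return math.tanh(0.5 * z + x + 0.1)

def weight_tied_forward(x, depth):
    # FNO-WT style: apply the same block a fixed number of times.
    z = 0.0
    for _ in range(depth):
        z = f(z, x)
    return z

def deq_forward(x, tol=1e-10, max_iter=1000):
    # FNO-DEQ style: iterate until an (approximate) fixed point z* = f(z*, x).
    z = 0.0
    for _ in range(max_iter):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z_star = deq_forward(0.3)
assert abs(f(z_star, 0.3) - z_star) < 1e-8  # z* is (numerically) a fixed point
```

Because `f` here is a contraction, the fixed-depth unrolling also converges to the same `z_star` when `depth` is large enough; the DEQ view differs in that it explicitly targets the fixed point and can therefore differentiate through it implicitly rather than through the unrolled graph.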
**Re: Speed of computing implicit gradients**
We use phantom gradients [2] to compute approximate implicit gradients. These gradients are lightweight and fast to compute, as they only require a backward pass through a single FNO block a small number of times (once for Darcy flow, and three times for Navier-Stokes). More details on phantom gradients are included in Lines 455-459 of the Supplementary Material.
**Re: Clarification on the domains of functions.**
We note that since the projection operator $\mathcal{P}$ is a function from $\Omega$ to $\mathbb{R}^d$, the domain for all the other functions in the composition would be $\mathbb{R}^d$. We will ensure that this is more clear in the final version of the paper.
[1] Bai, Shaojie, J. Zico Kolter, and Vladlen Koltun. "Deep equilibrium models." Advances in Neural Information Processing Systems 32 (2019).
[2] Geng, Zhengyang, et al. "On training implicit models." Advances in Neural Information Processing Systems 34 (2021): 24247-24260.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. While deep equilibrium models are new to me, I believe this is an interesting paper. I feel some theoretical analysis of the convergence would improve the presentation, but I think this should be optional. I will increase my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and the detailed comments, and are encouraged to see the overall positive response for our paper! We hope that we have sufficiently addressed all the questions and comments posed by the reviewers in their individual responses.
Furthermore, we have also attached a PDF with relevant figures and numbers that accompany our individual replies to all the reviewers.
Pdf: /pdf/97825e29eece6e4e61b4ab59a99ee2fe09c622ce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Winner Takes It All: Training Performant RL Populations for Combinatorial Optimization | Accept (poster) | Summary: This paper presents a multi-decoder (population) neural network structure and training method to solve the combinatorial optimization problem. In particular, the paper presents a model update method in which multiple decoders can specialize in different types of problem instances. The method proposed in this paper shows promising results in TSP, CVRP, KP, and JSSP experiments.
Strengths: **S1.** The idea of agent population and the training method used for building it are novel.
**S2.** While the idea is simple and clear, the accuracy improvement from the proposed method is remarkable. When performing inference in a fast and simple form (i.e., without iterative search at inference), Poppy showed superior results compared to POMO with 16 samples or a POMO 16 ensemble.
**S3.** The method proposed in the paper is applicable to various types of CO problems.
**S4.** The idea of applying the multi-agent method of this paper to the existing POMO training method is excellent.
Weaknesses: There seems to be no major flaw in the methods presented in the paper. However, there are two opinions related to the TSP/CVRP experiment results.
**W1.** Recent studies that have shown good results in TSP and CVRP experiments are omitted as baselines in Tables 1 and 2, for example [1, 2, 3]. Especially in the case of CVRP experiments, EAS[1], DPDP[2] and SGBS[3] have outperformed LKH3.
**W2.** HGS [4,5] shows better performance than LKH3 as a meta-heuristic algorithm. I recommend that paper authors consider adding HGS as a meta-heuristic baseline in Table 2.
**References**
[1] Andre Hottung, et al. Efficient active search for combinatorial optimization problems. International Conference on Learning Representations, 2022.
[2] Kool, Wouter, et al. Deep policy dynamic programming for vehicle routing problems. Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 2022.
[3] Jinho Choo, et al. Simulation-guided Beam Search for Neural Combinatorial Optimization. Advances in Neural Information Processing Systems 35, 2022.
[4] Thibaut Vidal, et al. A hybrid genetic algorithm for multidepot and periodic vehicle routing problems. Operations Research, 2012.
[5] Thibaut Vidal. Hybrid genetic search for the cvrp: Open-source implementation and swap* neighborhood. Computers & Operations Research, 2022.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: **Q1.** In Algorithm 1, the reward of the second-best agent $R(\tau_i^{**})$ is used as the baseline (lines 169\~171), but Algorithm 2 seems to use the POMO shared baseline of the best agent's result (Algorithm 2, lines 7\~9). In other words, the parameters are updated by applying the POMO training algorithm only to the best agent for each instance. Is that correct? Please provide a more detailed explanation of the parameter update in Algorithm 2.
**Q2.** How to determine the appropriate number of populations? Are there criteria, considerations, etc. for the decision?
**Q3.** Can you show the change in accuracy, training time, inference time, etc. according to the change in the number of population?
**Q4.** Algorithm 1 Input - Missing 'H' in 'number of training steps H'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and appreciating the novelty, simplicity and applicability of our method. We hope our answers address the reviewer’s remaining concerns.
> Q1. In Algorithm 1, reward of the result of the second best agent $R(\tau_{i^{\ast\ast}})$ was used as the baseline(169-171), but Algorithm 2 seems to have used the POMO shared-baseline of the result of the best agent as the baseline(Algorithm 2, 7-9). Please provide a more detailed explanation.
We are grateful to the reviewer for raising this point; indeed we discuss two possible choices of baseline for our training algorithm. In both cases, we rollout our entire population on the problem and train only the best agent, but can consider:
* POMO baseline: Use the standard shared-baseline introduced in [2] as per Algorithm 2. This is a minimal modification to our re-implementation of POMO and so was initially used for all presented results in the paper.
* Analytical baseline: As presented in lines 169~171, when subsequently deriving the analytical gradient of the population-level objective, we find that the baseline is not the average performance of all agents (i.e., the POMO baseline), but rather the performance of the second-best agent only.
Given this result, we have since re-run our experiments using the analytical baseline, which we provide below. In summary, we find that in almost all cases the analytical baseline provides performance better than or equal to the POMO baseline. We believe this highlights both that our intuitive "winner takes it all" approach works well even with slightly different choices of baseline, and that our theoretical analysis correctly predicts a modified baseline that can be used in practice with strong performance.
We will include the new results in the appendix of the revised manuscript and add a summary of the above discussion in section 3 of the main text.
Optimal gap (%):
### TSP:
| Method/Size | 100 | 125 | 150 |
|-------------|------|------|------|
| POMO (16 samples) | 0.27 | 0.42 | 0.81 |
| Poppy 16 (POMO baseline) | 0.07 | 0.14 | 0.27 |
| Poppy 16 (analytical baseline) | 0.06 | 0.13 | 0.28 |
### CVRP:
| Method/Size | 100 | 125 | 150 |
|-------------|------|------|------|
| POMO (32 samples) | 0.78 | 1.06 | 2.03 |
| Poppy 32 (POMO baseline) | 0.52 | 0.71 | 1.43 |
| Poppy 32 (analytical baseline) | 0.48 | 0.71 | 1.43 |
### KP:
| Method | Optimal gap |
|--------|-------|
| POMO (16 samples) | 0.006 |
| Poppy 32 (POMO baseline) | 0.0005 |
| Poppy 32 (analytical baseline) | 0.0001 |
### JSSP:
| Method | Optimal gap |
|--------|-------|
| Single (16 samples) | 7.2 |
| Poppy 16 (POMO baseline) | 6.2 |
| Poppy 16 (analytical baseline) | 5.5 |
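The winner-takes-all selection with the analytical (second-best) baseline discussed above can be sketched as follows. This is an illustrative toy, not the authors' code; the rewards are hypothetical values standing in for per-agent returns on one problem instance:

```python
def winner_takes_all(rewards):
    """Given each agent's reward on one instance, pick the single agent to
    train and compute its advantage against the second-best reward (the
    analytical baseline); all other agents receive no gradient."""
    ranked = sorted(range(len(rewards)), key=lambda i: rewards[i])  # ascending
    best, runner_up = ranked[-1], ranked[-2]
    return best, rewards[best] - rewards[runner_up]

# Toy population of four agents evaluated on one problem instance.
rewards = [1.2, -0.4, 2.1, 0.7]
best, advantage = winner_takes_all(rewards)
assert best == 2                     # agent 2 has the highest reward
assert abs(advantage - 0.9) < 1e-9   # 2.1 - 1.2: winner vs. runner-up
```

By construction the winner's advantage is non-negative, and only the winning agent's policy gradient is backpropagated, which is what lets different agents specialize on different instance types.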
> Q2. How to determine the appropriate number of populations? Are there criteria, considerations, etc. for the decision?
We have experimentally observed that the performance increases with the size of the population (see Tables 1-2, Figure 3). However, larger populations incur higher training and inference costs. We also hypothesize that the population performance could collapse when the population is arbitrarily large, as briefly mentioned in Sec. 5 (lines 308-314). Practically, the primary limits to population size are the availability of compute during the training stage, as well as the inference budget, since populations provide various performance-time trade-offs (Fig 7).
> Q3. Can you show the change in accuracy, training time, inference time, etc. according to the change in the number of population?
We briefly describe the cost of training in TSP with 100 cities in lines 214-216. More precisely, single agent baselines were trained for 5D, 1W, 1D, and 1D for respectively TSP, CVRP, KP and JSSP, while our largest populations were trained until convergence for 4D, 5D, 3D and 3D, which we will add to the paper. Additionally, we want to emphasize that Poppy is already performant with less training budget: Figure 3 (left) outlines the performance of Poppy after just a few hours of training, which is already sufficient for Poppy 4 to reach the performance of POMO (100 samples). The inference times (“Time” column) and accuracies (“Obj.”) for increasing population sizes are studied and provided in Tables 1 (TSP), and 2 (CVRP), and in Fig 7.
> Q4. Algorithm 1 Input - Missing 'H' in 'number of training steps H'.
Thank you, we will fix it in future versions of the paper.
> W1. Recent studies that have shown good results in TSP and CVRP are omitted as baselines (EAS, DPDP and SGBS).
The results presented in the main paper are achieved with pure inference without a search mechanism, which is why we excluded more expensive inference methods like EAS from the main paper. However, we present a comparison to these methods (EAS and DPDP) in Appendix C, referenced on line 322, which we will outline more clearly in the paper. We implement a naive stochastic sampling strategy to give a sense of the performance of Poppy with a larger time budget matching the number of rollouts of EAS. We show that this variant of Poppy can outperform EAS and DPDP even without an explicit adaptation at test time, further highlighting the performance of our approach. We also thank the reviewer for the reference to SGBS, which we will include in future versions.
We would also like to emphasize that EAS and SGBS are search methods used on top of pretrained models, and as such can be considered improvements orthogonal to Poppy. Indeed, in principle, Poppy could be combined with EAS and fine tuned at inference if the inference budget is large enough. We believe that leveraging populations of policies for more efficient inference-time search is a promising direction for future research but is beyond the scope of our current work.
> W2. HGS shows better performance than LKH3.
We thank the reviewer for the reference. We will add it in future versions of the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors addressing my concerns with detailed and appropriate responses, as well as conducting additional experiments. However, I will keep my rating for this paper unchanged. I have no further questions. | Summary: This paper proposes a construction method that learns a population of agents to improve the exploration of the solution space of combinatorial optimization problems. Experiments demonstrate that the proposed method improves the solving efficiency of four popular NP-hard problems.
Strengths: 1. The paper is well-written and easy to follow.
2. Experiments demonstrate that the proposed method achieves state-of-the-art RL results on four popular combinatorial optimization problems.
Weaknesses: 1. The motivation is unconvincing. The authors provide a motivating example of a naïve decision-making environment in Figure 1. However, it would be more convincing if the authors could provide an example of combinatorial optimization problems, as this paper aims to develop effective methods to solve combinatorial optimization problems.
2. The idea of learning a population of agents with diverse policies is incremental, as [1] has proposed a similar method.
3. The authors claim that their proposed method can produce a set of complementary policies. However, there is no proof for this claim. Thus, it would be more convincing if the authors could provide theoretical and/or empirical proof for this claim.
4. The experiments are insufficient.
(1) It would be more convincing if the authors could evaluate the proposed method on combinatorial optimization problems with larger sizes, such as TSP with 1000 cities.
(2) Some important baselines are missing. First, the authors may want to compare their method with MDAM [1] on the packing problems. Second, the idea of learning a population of agents is similar to learning an ensemble of agents. Thus, the authors may want to compare their method with baselines with the ensemble learning trick.
(3) It would be better if the authors could provide a time analysis of training cost, as training a population of agents may take much longer time than that of baselines.
[1] Xin, Liang, et al. "Multi-decoder attention model with embedding glimpse for solving vehicle routing problems." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 13. 2021.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to Weaknesses for my questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We believe our answers address the concerns raised and hope these enable the Reviewer to reconsider their assessment.
> W1. [The motivation] would be more convincing if the authors could provide an example of combinatorial optimization problems [in Fig. 1].
The motivating example in Figure 1 was chosen because it illustrates the limitations of learning an optimal policy using a single agent: it cannot learn an optimal policy, whereas a population of two agents can. We agree that it is important to relate this to CO problems as these are the focus of our work. However, as we state on lines 154-158, this problem setting can in fact be seen as a toy model for the challenge of solving an NP-hard CO problem in a sequential decision-making setting. Concretely, we posit that the maze prevents the agent from being able to reason over the outcome of its actions, and thus it must learn heuristics over the expected (but uncertain) returns. If we instead consider the first action of a problem such as TSP, the number of possible unique tours that could follow from each action is exponentially large and, for any reasonably finite agent capacity, essentially provides the same obfuscation over the final returns.
We accept that we should have made this link more clear in our manuscript and are happy to extend the discussion of Figure 1 to include a summarized justification.
> W2. The idea of learning a population of agents with diverse policies is incremental, as [1] has proposed a similar method [MDAM].
Whilst MDAM and Poppy both share the intuition of training a population of agents for CO problems, there are substantial differences in both methodology and performance. We provide a comparison in the Related Works (lines 84-92) but to summarize:
* MDAM trades off performance with diversity by jointly optimizing policies and their KL divergence. Poppy drives diversity by maximizing population-level performance *without* using any explicit diversity metric (which, as we argue in lines 173-175 and 326-328, is a major benefit of our approach).
* MDAM only drives diversity at the first step of a trajectory since computing the KL divergence for the whole trajectory is intractable. Poppy, by contrast, can generate diverse policies over the entire trajectory.
* Poppy scales better with population size than MDAM (which requires computing the KL divergence for each pair of agents and thus is only scaled to 5 agents in [1]).
* Poppy significantly outperforms MDAM in TSP and CVRP (Tables 1 and 2).
In summary, we do not believe that MDAM can be used to detract from Poppy which makes significant contributions both algorithmically (a novel framework for population-based CO) and practically (with significant performance benefits).
> W3. The authors claim that their proposed method can produce a set of complementary policies. However, there is no proof for this claim.
We do provide empirical evidence to support the claim that Poppy can produce a set of complementary policies and will ensure that this is highlighted in revised versions. Specifically:
* In Fig 3 (right), it can be seen that the *average* agent performance in Poppy 8 has an optimality gap of 1.1%, far from the 0.4% of POMO. However, the performance of the population is 0.09%, outperforming even "POMO (100 samples)" despite using roughly 12 times fewer rollouts. This can be interpreted as complementarity: alone, each agent performs worse than POMO, but together they perform far better.
* Poppy outperforms ensemble methods (Table 1 and 2), showing that it is not reduced to a population of diverse agents.
* In App. D.1, Fig. 4 (lines 546-557), we show that every agent contributes to the whole population performance.
> W4.1. It would be more convincing if the authors could evaluate the proposed method on CO problems with larger sizes.
Even though this work does not specifically focus on scalability, our method can easily be applied to larger instances since our method is agnostic to the network architecture. Concretely, the Poppy framework can be applied to architectures designed specifically for scalability, e.g. DIMES [1]. However, we use the POMO [2] architecture as it is widely established as the default option in our setting [4, 5, 6] and allows for direct comparison to the SOTA baseline methods. In this context, we scale to the same instance sizes as previous works [2-6].
> W4.2.1.The authors may want to compare their method with MDAM on the packing problems.
The authors of MDAM did not report the performance of their method in KP and JSSP. We also note that on TSP and CVRP, where MDAM reports results, MDAM is not the strongest RL baseline to which we compare: POMO (ensemble) outperforms MDAM in all settings, and is in turn always outperformed by Poppy. Therefore, we believe that it is appropriate to focus on the strongest published baselines for later experiments; and hence use POMO (single and ensemble) for KP and the JSSP-specific RL SOTA method, L2D [3].
> W4.2.2. The authors may want to compare their method with baselines with the ensemble learning trick.
We have indeed performed these experiments, which are denoted by POMO X (ensemble), where X is a number of agents. The results stated in Tables 1, 2, and 3a for TSP, CVRP, and KP, respectively, demonstrate that Poppy outperforms ensembles.
> W4.3. It would be better if the authors could provide a time analysis of training cost.
Single-agent baselines were trained for 5D, 7D, 1D, and 1D for respectively TSP, CVRP, KP and JSSP, while the largest populations we provide were trained until convergence for 4D, 5D, 3D and 3D, which we will add to the paper. Additionally, we want to emphasize that Poppy is already performant with less training budget: Figure 3 (left) outlines the performance of Poppy after just a few hours of training, which is already sufficient for Poppy 4 to reach the performance of POMO (100 samples).
---
Rebuttal Comment 1.1:
Title: AC note: Please engage with authors
Comment: Hi Reviewer 51fw,
Please engage with the authors. They have put in a significant amount of effort to respond to the concerns, and improve their work. I'd encourage you to respond asap and give the authors an opportunity to continue improving their work!
---
Rebuttal Comment 1.2:
Title: Thanks for the authors’ rebuttal
Comment: Thanks for the authors' response. However, my major concerns 1, 3, and 4 have not been properly addressed. Thus, I would lean toward rejection given the current status of my communication with the authors.
1. (Concern 1 in weaknesses) The authors do not provide a convincing example for the motivation.
2. (Concern 3 in weaknesses) It is unclear to me why Fig 3 demonstrates that Poppy can produce a set of complementary policies.
3. (Concern 4 in weaknesses) My concerns 4.1 and 4.2 have not been properly addressed. | Summary: The paper proposes a new training procedure that allows to train a diverse set of policies for solving combinatorial optimization problems. Most existing approaches for these problems train a single policy/agent and aim to construct a solution in a single shot (or by sampling multiple solutions). In contrast, the authors propose to use a population of agents that is trained such that a diverse set of solutions can be obtained by solving a problem instance by each population member. To this end, the paper proposes a new training objective that aims to increase the maximum reward over all K population members. This is implemented by backpropagating only through the best performing agent for each instance during the training phase. Experimentally, the authors show that this approach is indeed able to learn a set/population of complementary policies. Furthermore, the results demonstrate that the performance of a population trained via the proposed approach is clearly superior to a population in which all agents are trained independently in standard-fashion.
Strengths: - While other papers have proposed to train a population of agents, the proposed training procedure (that always only trains the best agent of the population per instance) is novel.
- The proposed methods show a significant performance improvement on the considered problems. For example, on the traveling salesperson problem the method finds solutions with a gap to optimality of 0.07% (in comparison to a gap of 0.27% of the state-of-the-art POMO method). To the best of my knowledge, the proposed method is currently the best neural combinatorial optimization method for quick solution generation (i.e., without a search component).
- The method succeeds in training a population of complementary agents. The authors show experimentally that the average solution quality decreases during the training while the best performance over all agents increases. This means that the agents successfully specialize on specific strategies.
- The authors evaluate their method on 4 different combinatorial optimization problems which demonstrates the wide applicability of the technique. On all problems, the method significantly improves the performance.
- The considered problem of learning heuristics for combinatorial optimization problems has gotten a lot of attention in the literature and is a very promising research area.
- The proposed method is simple and can be easily applied to other neural construction methods. Thus, it is likely that other researchers will use the proposed technique in their work.
- Overall, the paper is well written and clearly organized.
Weaknesses: - Overall, the additional training of a population with the proposed method is quite resource intensive because only one of the K rollouts is used for backpropagation. In some settings, the additional resources needed for the population training phase might not be worth the obtained performance improvements.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The paper does not discuss limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and for outlining the strengths of our work (novelty, performance, applicability, clarity). While there are no questions, we are happy to provide some feedback on a comment made in the review.
> Overall, the additional training of a population with the proposed method is quite resource intensive because only one of the K rollouts is used for backpropagation. In some settings, the additional resources needed for the population training phase might not be worth the obtained performance improvements.
We agree with the reviewer that the training process can be intensive. However, we would like to emphasize that the practical cost of training the population is still on the order of the cost of training a single agent (for example, Poppy 16 costs 80% of the training of the initial single agent, as we briefly mention on lines 214-215). Crucially, the cost of training the population is substantially reduced by cloning the final few layers of a pre-trained single agent to initialize the population and then fine-tuning each agent. Future work could consider strategies that enable the early detection of poorly performing agents and hence avoid full rollouts for them.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I keep my very high rating and believe that the paper should be accepted. | Summary: The paper proposes that populations of agents can produce better results than using single agents. This leads to a new "Poppy" algorithm which performs policy gradient updates only on the agent which produced the highest reward. This is then applied to combinatorial optimization problems using attention-based architectures to demonstrate fast and competitive results.
Strengths: * The proposed "Poppy" algorithm appears to be a novel policy gradient variant. I do wonder, however, whether such an algorithm can be extended to general RL settings rather than only combinatorial optimization.
* Overall the paper is a clean read and presentation is strong. I did not have any issues with understanding.
* Experimental results are reasonable. The method demonstrates good performance and low inference costs.
Weaknesses: The core issue with the proposed method is that no explicit "diversity" objective is being optimized, which is a theoretical weakness. However, the paper does defend against this through multiple explanations (e.g., agents will tend to specialize) and empirical evidence (e.g., the best agent improves while the population average worsens).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * Could the proposed Poppy RL algorithm be more generally applicable to any MDP? If so, which problems (outside of combinatorial optimization) do you think can be most benefited?
* Following up on my weakness section - In which types of MDPs/cases could "diversity" behavior not appear from Poppy training, since diversity is not an explicitly optimized objective?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations section appears comprehensive. No issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for their appreciation of the presentation and clarity of the work. We hope our answers address all concerns and are sufficient for the reviewer to reconsider their assessment.
> Could the proposed Poppy RL algorithm be more generally applicable to any MDP? If so, which problems (outside of combinatorial optimization) do you think can be most benefited?
The Poppy framework (cloning a single agent, then applying the Poppy objective) is applicable to any model architecture. However, Poppy only makes sense in settings where (i) there is a distribution of problem instances and (ii) these instances can be attempted multiple times at inference (e.g., self-driving cars would not fit into this category). A lot of environments have these two features, including combinatorial problems, text/code generation problems or even theorem proving. Moreover, environments which are based on a simulator that we can repeatedly use satisfy (ii), as several attempts can be performed virtually before acting.
We make a brief reference to this question in lines 324-326, however, we agree that this is a natural question and will extend this to include the above discussion in the final version of the manuscript.
> In which types of MDPs/cases could "diversity" behavior not appear from Poppy training, since diversity is not an explicitly optimized objective?
We note that diversity is not the aim of Poppy; rather we aim to maximize performance - therefore we expect diversity to emerge if, and only if, it increases the population-level performance (i.e. taking the best result from any agent in the population on each sampled problem instance). Correspondingly, if only a single policy can achieve the highest practical returns across all problem instances we would expect the population to collapse to have only one agent providing all of the performance (as the other agents are never the best on a sampled problem and thus never get trained). Trivial examples would be when the training distribution consists of only a single problem instance, or when a single simple strategy (that can be exactly learned by an agent) can reach optimal performance on all problem instances (e.g. if we train on only TSP problem instances where the shortest tour is obtained by always moving to the nearest unvisited city).
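The winner-takes-the-gradient update underlying this behavior can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the agent parameters, rewards, and REINFORCE-style gradient estimates are stand-ins):

```python
import numpy as np

def poppy_update(params, rewards, grads, lr=0.1):
    """Update only the agent that achieved the highest reward on this instance.

    params : (K, d) array of per-agent parameters
    rewards: (K,)   array of per-agent rewards on one problem instance
    grads  : (K, d) REINFORCE-style gradient estimates (stand-ins here)
    """
    best = int(np.argmax(rewards))        # winner takes the gradient
    new_params = params.copy()
    new_params[best] += lr * grads[best]  # all other agents are left untouched
    return new_params, best

# Toy population of K=3 agents with 2 parameters each.
params = np.zeros((3, 2))
rewards = np.array([1.0, 3.0, 2.0])       # agent 1 is best on this instance
grads = np.ones((3, 2))
params, best = poppy_update(params, rewards, grads)
```

Because only the best-performing agent receives a gradient on each instance, an agent that is never the best on any sampled instance is never trained, which is exactly the collapse scenario described above.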
> The core issue with the proposed method is that no explicit "diversity" objective is being optimized
We firmly believe that this is, in fact, a key strength of our approach, for the following reasons:
* Population diversity is only a proxy for population performance. Poppy directly optimizes the target metrics and attains diversity as a by-product, which is more aligned with our objective of solving CO problems.
* Diversity is tricky to measure and thus to optimize. The KL divergence has been used in CO (MDAM [6]), but it is challenging to scale, as the complexity grows as the square of the population size, and it was only applied to the very first action, which limits the performance (as we argue on L85-L92). Another approach, in the line of quality-diversity methods, would be to encode explicit behavioral markers. However, this can be difficult to implement in the case of CO problems, where there are no canonical behavioral markers, so these would have to be handcrafted for each specific problem.
* Even if we were to suppose that we have a way to measure and optimize diversity, it is not clear how to balance the RL and diversity objective. Indeed, different problems may necessitate various degrees of diversity and too much diversity can hurt final performance: this implies tuning the hyperparameter used in the loss trade-off, as done in [6].
* We demonstrate empirically that optimizing our objective produces a set of diverse agents (e.g., see Figures 4-6 in the Appendix) across several domains *without* handcrafting any problem-specific notions of diversity.
---
Rebuttal Comment 1.1:
Title: AC Note: Please engage with the authors
Comment: Hi Reviewer Ym2T, Please engage with the authors. They have put in a significant amount of effort to respond to the concerns, and improve their work. I'd encourage you to respond asap and give the authors an opportunity to continue improving their work!
---
Rebuttal Comment 1.2:
Title: Keeping current score of acceptance
Comment: I thank the authors for their time writing their response. I do think the paper has adequately defended its position on the diversity issue, and has achieved good results.
The proposed method is indeed interesting, and it appears to be a more general technique than just for combinatorial optimization (although its generality is not explored in this paper). However, at the moment the paper focuses mostly on combinatorial optimization cases, and so the impact of the paper may be limited.
Thus I am fine with lukewarm acceptance at the moment.
---
Reply to Comment 1.2.1:
Comment: We appreciate the Reviewer's feedback and are thankful our work's merit isn't in doubt.
The Reviewer comments on a possible narrow appeal due to our focus on CO problems. We respectfully differ in this view. Combinatorial Optimization is a significant and long-standing research area, with related papers frequently presented at top venues like NeurIPS. For reference, we've listed several recent NeurIPS publications on neural CO at the end of our response.
To further address the Reviewer’s concern, we're open to expanding our introduction and related work sections to underscore real-world applications of CO problems, such as routing, scheduling, and chip placement.
However, we fully appreciate that gauging the impact of specific topics can be subjective and simply wished to take this chance to share our perspective. We're grateful for the reviewer's constructive feedback and their positive evaluation of our research.
**NeurIPS 2022**
* Malherbe et al. “Optimistic Tree Searches for Combinatorial Black-Box Optimization”.
* Choo et al. “Simulation-guided Beam Search for Neural Combinatorial Optimization”.
* Kim et al. “Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization”.
* Wang et al. “Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation”.
**NeurIPS 2021**
* Kim et al. “Learning Collaborative Policies to Solve NP-hard Routing Problems”.
* Ma et al. “Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer”.
* Wang et al. “A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs”.
* Kwon et al. “Matrix encoding networks for neural combinatorial optimization”. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed feedback on our manuscript. We summarize some common points across reviewers:
* *Not having an explicit objective for diversity is an advantage, not a weakness.* We empirically show that Poppy attains diversity as a by-product of optimizing the proposed population objective. Crucially, Poppy does not require any problem-specific notion of diversity, which makes it generally applicable to a wide range of problems while still achieving SOTA performance across them. See answers to Reviewers Ym2T and 51fw for more details.
* *The cost incurred by training the population is comparable to that of training a single agent.* See answers to Reviewers wqBo, 51fw and ybFK for details.
In the future, we will *update* the paper in response to the reviewers’ comments. The main changes include:
* Include the results reported in the answer to Reviewer ybFK.
* Make the link between Fig. 1 (motivation for populations) and CO problems clearer.
* Clarify why the policies learned by the agents are complementary using Fig 3.
* Explain why Poppy is potentially applicable to problems other than CO.
* Include the solver HGS as a baseline for CVRP.
* Include SGBS as a baseline in Appendix C.
Additionally, we provide below several references used in the answers provided to the reviewers:
[1] Qiu et al. 2022. DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems.
[2] Kwon et al. 2020. POMO: Policy Optimization with Multiple Optima for Reinforcement Learning.
[3] Zhang et al. 2020. Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning.
[4] Hottung et al. 2022. Efficient Active Search for Combinatorial Optimization Problems.
[5] Kool et al. 2019. Attention, Learn to Solve Routing Problems!
[6] Xin et al. 2021. Multi-decoder attention model with embedding glimpse for solving vehicle routing problems. | NeurIPS_2023_submissions_huggingface | 2023 |
Improving Adversarial Robustness via Information Bottleneck Distillation | Accept (poster) | Summary: This paper proposes Information Bottleneck Distillation (IBD) to improve adversarial robustness from the perspective of the information bottleneck principle. Specifically, two distillation strategies are proposed to boost the information bottleneck. Unlike existing works, this paper utilizes the predictions of robust models to maximize the mutual information. Also, the authors design an adaptive feature distillation based on the attention mechanism to help the student model inherit knowledge from the teacher model. The experimental results demonstrate that the proposed strategies are effective in improving adversarial robustness.
Strengths: S1: This paper is technically sound, and the motivation and formulation of the proposed method are elegant.
S2: Formulating the information bottleneck objective from the perspective of adversarial robustness distillation is novel and well-motivated.
S3: The difference with the existing works is clearly demonstrated and discussed.
Weaknesses: W1: Some derivation details in this paper are overly simplified and jumpy, making it difficult to understand. For example, how is the last step of the derivation of Eq. (11) obtained?
W2: The proposed methods rely heavily on a robust teacher model; hence, an experimental evaluation of the effect of varying teachers on the proposed methods should be conducted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As mentioned in W1 and W2, how is the last step of the derivation of Eq. (11) obtained? And what is the effect of varying teachers on the proposed methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The derivation details are not very clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! Here is our response to the concerned questions.
#### **Q1: Some derivation details in this paper are overly simplified and jumpy**
**A1**: Thank you for your comment, and sorry for the confusion. To clear this up, we provide the detailed derivation of Eq. (11) below. Eq. (11) is
\begin{equation}
\begin{aligned}
\mathop{\mathbb{E}}\limits_{p(x)p(z|x)} \left[ \log \frac{p(z | x)}{q(z | t)}\right] = \mathop{\mathbb{E}}\limits_{p(x)} \left[ \int p(z | x) \log \frac{p(z | x)}{q(z|t)} d z \right] =
\mathop{\mathbb{E}}\limits_{p(x)} \Big[\mathrm{KL} \big(p(z | x) || q(z|t) \big) \Big].
\end{aligned}
\end{equation}
Thus, we optimize the $\mathrm{KL}$ divergence between the feature likelihood $p(z|x)$ and the appropriate feature probability $q(z|t)$.
In IBD, we parameterize Gaussian densities $p(z|x)$ and $q(z|t)$ using neural networks, where the mean of $p(z|x)$ and $q(z|t)$ are the intermediate features from $f_s$ and $f_t$, respectively, and the variance is set to an identity matrix.
The $\mathrm{KL}$ divergence between two Gaussian distributions $p$ and $q$ can be obtained by
\begin{equation}
\begin{aligned}
\mathrm{KL}(p, q) & =-\int p(x) \log q(x) d x+\int p(x) \log p(x) d x \\
& =\frac{1}{2} \log \left(2 \pi \sigma_2^2\right)+\frac{\sigma_1^2+\left(\mu_1-\mu_2\right)^2}{2 \sigma_2^2}-\frac{1}{2}\left(1+\log 2 \pi \sigma_1^2\right) \\
& =\log \frac{\sigma_2}{\sigma_1}+\frac{\sigma_1^2+\left(\mu_1-\mu_2\right)^2}{2 \sigma_2^2}-\frac{1}{2}
\end{aligned}
\end{equation}
In IBD, $\mu_1$ is $f_t(x)$ and $\mu_2$ is $f_s(x)$. $\sigma_1$ and $\sigma_2$ are identity matrices.
As a result, Eq. (11) can be calculated as:
\begin{equation}
\begin{aligned}
\mathop{\mathbb{E}}\limits_{p(x)p(z|x)} \left[ \log \frac{p(z | x)}{q(z | t)}\right] = \mathop{\mathbb{E}}\limits_{p(x)} \Big[\mathrm{KL} \big(p(z | x) || q(z|t) \big) \Big]
= \mathop{\mathbb{E}}\limits_{p(x)} \left[ \frac{1}{2} \big\|f_t(x) - f_s(x)\big\|^2 \right],
\end{aligned}
\end{equation}
where the remaining constant terms cancel because both variances are fixed to the identity matrix.
When applying Eq.(11) as an objective to optimize the DNNs, a particular challenge is how to choose appropriate intermediate features from models to calculate the loss, since the different intermediate features tend to have different levels of information, especially when the student and teacher models have different architectures. Therefore, we proposed an attention-based feature distillation strategy to achieve this optimization objective.
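As a quick sanity check on the closed form above, the univariate KL formula with both variances fixed to 1 reduces to half the squared distance between the means (an illustrative script with hypothetical scalar means, not part of the paper's code):

```python
import numpy as np

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)) between univariate Gaussians."""
    return np.log(sigma2 / sigma1) + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2) - 0.5

# With both variances set to 1 (as in IBD, where the means come from f_t and f_s),
# the KL collapses to 0.5 * (mu1 - mu2)^2 -- the squared-error feature-matching term.
mu_t, mu_s = 1.1, 0.3  # hypothetical scalar feature means
kl = kl_gaussian(mu_t, 1.0, mu_s, 1.0)
```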
**In order to avoid confusion about the derivation process, we will give the detailed derivation steps of each formula in the new version.**
#### **Q2: How would the performance of teachers affect that of student models?**
**A2**: Thank you for your good questions. In our submitted manuscript, we only used one robust teacher network for a fair experimental comparison. Following your comments, we conducted an ablation experiment using different teacher models to verify the impact of the teacher's robustness on the performance of the student model. We conducted this experiment on CIFAR-10 with two student models (ResNet-18 and WideResNet-34-10) and five teacher models of varying robustness. The results are shown in the table below. We can observe that different robust teacher models have a significant positive benefit on the student model. For the ResNet-18 student model, we find that the robustness of the student does not increase monotonically with that of the teacher.
As the teacher model (WideResNet-34-20) becomes more complex, the robustness of the student model decreases, compared to WideResNet-34-10. This may be due to the large gap in the architecture of the teacher model and the student model.
This phenomenon is called **Robust saturation** [1].
For the WideResNet-34-10 student model, we found that in most cases, the student’s robustness can surpass that of the teacher model.
We think there are two reasons for this. One is that the performance of the teacher model is not very strong.
The other is that the teacher model provides robust soft labels to alleviate overfitting and improve performance.
Therefore, in most cases, it is expected that the student model exceeds the teacher model, but when the teacher model is strong enough, it is not easy for the student model to surpass the teacher model (e.g., WideResNet-76-10).
| Teacher | Natural | AutoAtt | Student | Natural | AutoAtt | Student | Natural | AutoAtt |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Resnet-18 | 84.09 | 48.71 | Resnet-18 | 83.74 | 50.52 | WRN-34-10 | 84.41 | 53.94 |
| Resnet-34 | 85.94 | 50.57 | Resnet-18 | 84.92 | 49.84 | WRN-34-10 | 85.79 | 54.17 |
| WRN-34-10 | 84.92 | 53.08 | Resnet-18 | 83.17 | 52.11 | WRN-34-10 | 84.21 | 55.65 |
| WRN-34-20 | 85.65 | 56.82 | Resnet-18 | 82.82 | 51.64 | WRN-34-10 | 84.73 | 55.71 |
| WRN-76-10 | 88.54 | 64.25 | Resnet-18 | 85.28 | 51.98 | WRN-34-10 | 86.61 | 57.12 |
Ref:
[1] Revisiting adversarial robustness distillation: Robust soft labels make student better. ICCV 2021.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response to the initial review. Having carefully considered their feedback in conjunction with the comments from other reviewers, I decide to maintain my initial rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Dvz1
Comment: Thank you for considering our response and providing feedback. We greatly appreciate your valuable comments to further strengthen our work. Thank you again for your time and input. | Summary: This paper takes a closer look at the information bottleneck principle and show that specially designed robust distillation can boost information bottleneck, benefiting from the prior knowledge of a robust pre-trained model and presents the Information Bottleneck Distillation (IBD) approach. What’s more, this paper also propose two distillation strategies to match the two optimization processes of the IB, respectively. The experimental results demonstrate the effectiveness of IBD.
Strengths: This article is the first to design a distillation objective function for the information bottleneck, called the IBD objective, and then proposes two distillation strategies to perform the two optimization processes of the information bottleneck for further use in adversarial training.
Weaknesses: See Questions part.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1.The author should explicitly list the main contributions of this work, which would helps readers especially not with prior knowledge of relevant methods quickly understand the core content of the article and the value of the work, and also save reading time.
2.This work proposes IBD by combining robustness distillation and information bottleneck. I am concerned about the motivation of IBD. What problem does IBD solve for robust distillation or information bottleneck in adversarial training tasks?
3.The proposed IBD is followed by two distillation strategies corresponding to the two optimization processes of the information bottleneck. Could the author explain more about these two processes? My understanding is that these two strategies are designed for the two terms of the IB objective, which require different optimization methods, and are then integrated into the final objective function.
4.From the experimental results on CIFAR-10 in Table 1, although the proposed method improves robustness, it seems to do so by sacrificing clean accuracy. Does the author have any ideas for improving this trade-off?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! Here is our response to the concerned questions.
#### **Q1: The main contributions of our work.**
**A1**: Thank you for your comment and fruitful advice. The main contributions of our work include as follows:
1) **Theoretically**, we utilize conditional variational inference to construct a lower bound to estimate the mutual information and reformat the IB principle by using the adversarial robustness as the prior for learning features, which is termed Information Bottleneck Distillation (IBD).
2) **Algorithmically**, to realize IBD, we propose two distillation strategies to match the two optimization processes of the information bottleneck, respectively.
First, we utilize robust soft label distillation to maximize the mutual information between latent features and output prediction.
Second, we present an adaptive feature distillation that automatically transfers relevant knowledge from the teacher model to the student
model, so that it can restrict the mutual information between the input and latent features.
3) **Experimentally**, we conducted extensive experiments on various benchmark datasets such as CIFAR and ImageNet.
The results show the effectiveness of our IBD in improving the robustness of DNNs against most attacks (e.g., PGD-attack and AutoAttack), and our IBD behaves more robustly than state-of-the-art methods.
#### **Q2: What problem does IBD solve for robust distillation or information bottleneck in adversarial training tasks?**
**A2**: Thank you for your comments and sorry for the confusion. Our motivation is to improve the robustness of the model through the efficient optimization of IB.
1) How does IBD improve the optimization of IB? IBD assists the calculation of the mutual information $I(x; z)$ by introducing an adversarially robust prior for each sample. Therefore, when calculating mutual information, IBD provides customized and reliable prior information according to different samples, rather than a uniform prior. Our purpose is to let the model learn more relevant information and discard nuisance information.
2) From the perspective of robust distillation, previous adversarial robust distillation methods only optimize the output distribution of the model, while ignoring the feature information of the model.
Previous work [1] has shown that there are two kinds of features in the neural network: robust and non-robust features.
Therefore, we adopt the robust model as the teacher model and use the adversarial robustness as the prior for learning robust features.
We propose an adaptive feature distillation strategy to automatically transfer robust features from the teacher model to the student model, thereby improving the robustness of the student model.
#### **Q3: Could the author explain more about the two processes of IB?**
**A3**: Thank you for your comments and sorry for the confusion. IB expresses a trade-off in intermediate features $Z$ between relevant information for the prediction $Y$ and nuisance information about the input $X$. The objective of IB can be formulated as follows:
$
\max I(Z; Y) - \beta I(X; Z),
$
where $I$ denotes mutual information and $\beta$ controls the trade-off between the two terms.
IB involves two processes. One is maximizing the first term ($I(Z; Y)$), which ensures a strong correlation between the learned features and the target label; this correlation is the relevant information.
The other is maximizing the second term ($-I(X; Z)$), i.e., compressing the features so that they discard nuisance information about the input while keeping as much relevant information as possible.
For the two processes in IB, we propose two distillation strategies to match the two optimization processes, respectively. First, we utilize robust soft label distillation to maximize the mutual information between latent features and output prediction. Second, we present an adaptive feature distillation that automatically transfers relevant knowledge from the teacher model to the student model, so that it can restrict the mutual information between the input and latent features. Finally, we integrate the two proposed optimizations into a final objective function.
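Schematically, the two strategies could be combined into a single loss of the following shape (an illustrative sketch only: the function `ibd_style_loss`, the unit softmax temperature, and the `beta` weighting are stand-ins, not the paper's Eq. (16)):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def ibd_style_loss(student_logits, teacher_logits, student_feat, teacher_feat, beta=0.5):
    """Schematic two-term objective: a soft-label KL term against the robust
    teacher's predictions (matching the I(Z;Y) maximization) plus a
    feature-matching term (matching the I(X;Z) restriction)."""
    p_t = softmax(teacher_logits)  # robust soft labels from the teacher
    p_s = softmax(student_logits)
    kl_term = float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))
    feat_term = float(np.mean((student_feat - teacher_feat) ** 2))
    return kl_term + beta * feat_term
```

The loss vanishes when the student exactly matches the teacher's predictions and features, and grows as either term diverges; any real implementation would compute both terms on adversarial examples within an adversarial training loop.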
#### **Q4: Does the author have any ideas for improvement on this trade-off?**
**A4**: In our final optimization objective (Eq. (16)), we introduce the hyperparameter $\alpha$, which trades off adversarial robustness against natural accuracy.
We conduct ablation experiments on this trade-off. The results are shown below; when we set $\alpha = 0.9$, our method achieves the best adversarial robustness.
| Alpha |0.0 | 0.1 |0.2 |0.3 | 0.4 |0.5 |0.6 |0.7 | 0.8 |0.9 |1.0 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Natural | 87.28 | 85.32 | 84.74 | 84.45 | 84.25 | 84.02 | 83.55 | 83.62 | 83.48 | 83.17 | 83.04 |
| AA | 9.31 | 47.31 | 49.67 | 51.43 | 51.45 | 51.57 | 51.79 | 51.82 | 51.96 | 52.11 | 52.04 |
In addition, to improve this trade-off, we train IBD with additional training data [2]. Our IBD then achieves 87.82% natural accuracy and 61.22% adversarial robustness under standard AutoAttack, which significantly improves the trade-off.
Ref:
[1] Adversarial Examples Are Not Bugs, They Are Features. NeurIPS 2019.
[2] Fixing Data Augmentation to Improve Adversarial Robustness. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: Thanks to the authors' response which has addressed my main concerns, and I will raise my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer wRTW
Comment: Thank you for considering our response and providing feedback. We greatly appreciate your valuable suggestion and will incorporate additional explanations and experimental results into the revised manuscript to further strengthen our work. Thank you again for your time and input. | Summary: This paper draws inspiration from prior studies that suggest robust models can offer strong prior information, thereby enhancing both the robustness and uncertainty of the model. Accordingly, we propose a new Information Bottleneck (IB) objective, which is designed to distil robustness in the context of a Variational Information Bottleneck (VIB).
Strengths: Pros:
1. The proposed Information Bottleneck Distillation (IBD) method can significantly improve the robustness of Deep Neural Networks (DNNs), protecting them against most attacks such as the PGD-attack and AutoAttack.
2. The IBD method optimizes the information bottleneck efficiently and effectively by maximizing mutual information between intermediate features and output prediction via soft label distillation, and restricting the mutual information between the input and intermediate features via adaptive feature distillation.
3. The adaptive feature distillation transfers appropriate knowledge from the teacher model to the student model, resulting in a more accurate estimation of the student feature distribution.
4. The method was extensively tested on various benchmark datasets, including CIFAR and ImageNet, demonstrating its effectiveness and robustness compared to state-of-the-art methods.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for your valuable and encouraging comments on our work!
Strengths: 1. This paper revisits variational information bottleneck from the perspective of robustness distillation, which utilizes the intermediate features extracted by a pre-trained teacher model to approximate $q(z)$.
2. The experimental results demonstrate that the proposed method improves the adversarial robustness against both the white- and black-box attacks.
Weaknesses: 1. This paper contains two hyperparameters, namely $\alpha$ and $\beta$. The authors should also analyze the impact of $\alpha$ on the proposed method.
2. In the experiment part of this paper, the classical adversarial training methods are SAT and TRADES. These benchmarks are somewhat old, and it is recommended to use newer training methods as baselines to boost the universality of the methods.
3. There are some minor typos in this paper, such as inconsistent presentation tenses, errors in the use of singular and plural (line 99), and the representation of the cross-entropy function in Eq.(2).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors point out the potential limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! Here is our response to your questions and concerns.
#### **Q1: The impact of the hyperparameters $\alpha$**
**A1**: Thanks for your comment. The hyperparameter $\alpha$ trades off adversarial robustness against natural accuracy.
We conduct ablation experiments to verify the trade-off. The results are shown in the following table. When we set $\alpha = 0.9$, our method can achieve the best adversarial robustness.
| Alpha |0.0 | 0.1 |0.2 |0.3 | 0.4 |0.5 |0.6 |0.7 | 0.8 |0.9 |1.0 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Natural | 87.28 | 85.32 | 84.74 | 84.45 | 84.25 | 84.02 | 83.55 | 83.62 | 83.48 | 83.17 | 83.04 |
| AA | 9.31 | 47.31 | 49.67 | 51.43 | 51.45 | 51.57 | 51.79 | 51.82 | 51.96 | 52.11 | 52.04 |
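For illustration, the $\alpha$ trade-off above can be sketched as a convex combination of a natural (clean) loss term and a robust (adversarial) loss term. This sketch is hypothetical: the paper's exact objective is not reproduced in this rebuttal, so `natural_loss` and `robust_loss` are illustrative stand-ins.

```python
def traded_off_loss(natural_loss: float, robust_loss: float, alpha: float) -> float:
    """Convex combination of the two terms.

    Larger alpha emphasizes adversarial robustness; smaller alpha emphasizes
    natural accuracy (alpha = 0.9 gave the best AA score in the table above).
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * robust_loss + (1.0 - alpha) * natural_loss

# With alpha = 0.9 the robust term dominates the combined objective.
loss = traded_off_loss(natural_loss=0.4, robust_loss=1.2, alpha=0.9)
```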
#### **Q2: Some benchmarks are somewhat old**
**A2**: Thanks for this helpful suggestion.
To verify the effectiveness of our method, we not only compare with classical adversarial training methods (i.e., SAT and TRADES) but also with several state-of-the-art adversarially trained models on the robust benchmark [1], such as LBGAT [2] (2021), LTD [3] (2021), and LAS-AT [4] (2022), all of which were published in recent years.
As shown in the Table, we can observe that the proposed IBD improves the adversarial robustness by $\sim 1.2\%$.
Furthermore, when combined with AWP [5], our IBD also surpasses the previous state-of-the-art models reported by the benchmark, where every small margin of improvement is significant. **Note** that our method does not use any additional datasets.
| Method | WRN | Natural | AA |
| ---- | ---- | ---- | ---- |
| SAT | 34-10 | 84.92 | 53.42 |
| LBGAT | 34-20 | 88.70 | 53.57 |
| TRADES | 34-20 | 86.18 | 54.39 |
| LTD | 34-10 | 85.02 | 54.45 |
| IBD | 34-10 | 83.33 | 55.65 |
| TRADES + AWP | 34-10 | 85.26 | 56.17 |
| LAS-AT + AWP | 34-10 | 84.98 | 56.26 |
| LTD + AWP | 34-10 | 86.28 | 56.94 |
| IBD + AWP | 34-10 | 85.21 | 57.18 |
#### **Q3: Some minor typos.**
**A3**: Thank you very much for kindly pointing this out. We have corrected these typos and carefully checked the manuscript to ensure that it is typo-free.
Ref:
[1] RobustBench: a standardized adversarial robustness benchmark. 2020
[2] Learnable boundary guided adversarial training. ICCV 2021
[3] LTD: Low temperature distillation for robust adversarial training. 2021
[4] LAS-AT: Adversarial training with learnable attack strategy. CVPR 2022
[5] Adversarial weight perturbation helps robust generalization. NeurIPS 2020
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive responses and new results. I decide to raise the score and hope the authors can incorporate the new results in the revision.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer rCCu
Comment: Thank you for considering our response and providing feedback. We greatly appreciate your valuable suggestion and will incorporate the new experimental results into the revised manuscript to further strengthen our work. Thank you again for your time and feedback! | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful and constructive comments on our work. During the rebuttal period, we carefully addressed all the comments and suggestions raised by all reviewers. We hope that our response has properly addressed the comments of the reviewers and that its overall contribution, quality and clarity are now significantly improved! Thanks all! | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper aims to improve the adversarial robustness of deep neural networks. From the perspective of the Information Bottleneck, a knowledge distillation method is proposed. It makes use of intermediate features and logits from a robust teacher to get priors for guidance in training of the student model.
Experiments on CIFAR-10, CIFAR-100, and ImageNet are conducted. Clear improvements are obtained when compared with previous methods.
Strengths: 1. The paper gives some insights into knowledge distillation from the perspective of Information Bottleneck.
2. The distillation method shows impressive results on model robustness.
Weaknesses: 1. From the perspective of knowledge distillation, feature-based methods have already been explored by previous methods, like [9], [ref1].
As the paper claims, the major difference between the proposed method and previous distillation methods is that an adversarially-trained robust model is used as the teacher.
[ref1] Fitnets: Hints for thin deep nets. ICLR 2015.
2. It is interesting that the students can have stronger robustness than their teachers.
How would the performance of teachers affect that of student models?
Will the robustness of student models be enhanced when a more robust teacher model is deployed?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitations are discussed in the supplementary file.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! Here is our response to your questions and concerns.
#### **Q1: From the perspective of knowledge distillation, feature-based methods have already been explored by previous methods.**
**A1**: Thanks for your comment. Our approach indeed resembles the feature-based distillation methods.
However, we would like to highlight that the motivation behind our IBD and its optimization techniques are fundamentally different from those of previous methods:
1) We utilize conditional variational inference to construct a lower bound to estimate the mutual information and reformulate the IB principle by using adversarial robustness as the prior for learning features, which we term Information Bottleneck Distillation (IBD). The optimization of IBD complies with the Information Bottleneck principle, which makes the target model learn more relevant information and discard nuisance information.
2) From the perspective of robust distillation, previous adversarial robust distillation methods only optimize the output distribution of the model, while ignoring the model's feature information. Previous work [1] has shown that there are two kinds of features in a neural network: robust and non-robust features. Therefore, we adopt a robust model as the teacher and use adversarial robustness as the prior for learning robust features. We propose an adaptive feature distillation strategy to automatically transfer robust features from the teacher model to the student model, thereby improving the robustness of the student model.
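As a rough illustration of how the two distillation terms combine, the sketch below pairs a temperature-softened soft-label term with a simple feature-matching term. The function names, the plain (unweighted) MSE, and the `beta` weight are our own stand-ins; the paper's adaptive feature distillation is more involved than this.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_label_distillation(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened output distributions
    (the term that encourages I(Z; Y) to stay high)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def feature_distillation(student_feats, teacher_feats):
    """Mean-squared error between intermediate feature vectors
    (the term that restricts I(X; Z) toward the robust teacher's features)."""
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / len(student_feats)

def ibd_loss(student_logits, teacher_logits, student_feats, teacher_feats, beta=1.0):
    """Hypothetical combined objective: soft labels plus weighted feature matching."""
    return (soft_label_distillation(student_logits, teacher_logits)
            + beta * feature_distillation(student_feats, teacher_feats))
```

When the student exactly matches the teacher, both terms vanish, which is the intended fixed point of the distillation.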
#### **Q2: How would the performance of teachers affect that of student models?**
**A2**: Thank you for your good questions. In our submitted manuscript, we only used one robust teacher network for fair experimental comparison. Following your comments, we conduct an ablation experiment by using different teacher models to verify the impact of the teacher's robustness on the performance of the student model. We conduct this experiment on CIFAR-10 with two student models: ResNet-18 and WideResNet-34-10, and five different teacher models which have different robustness. The results are shown in the Table. We can observe that different robust teacher models have a significant positive benefit on the student model. For the ResNet-18 student model, we find that the robustness of the student does not increase monotonically with that of the teacher.
As the teacher model (WideResNet-34-20) becomes more complex, the robustness of the student model decreases compared to WideResNet-34-10. This may be due to the large gap between the architectures of the teacher and student models.
This phenomenon is called **Robust saturation** [2].
For the WideResNet-34-10 student model, we found that in most cases, the student’s robustness can surpass that of the teacher model.
We think there are two reasons for this. One is that the teacher model's performance is not very strong.
The other is that the teacher model provides robust soft labels that alleviate overfitting and improve performance.
Therefore, in most cases the student model is expected to exceed the teacher model, but when the teacher model is strong enough, it is not easy for the student to surpass it (e.g., WideResNet-76-10).
| Teacher | Natural | AutoAtt | Student | Natural | AutoAtt | Student | Natural | AutoAtt |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| ResNet-18 | 84.09 | 48.71 | ResNet-18 | 83.74 | 50.52 | WRN-34-10 | 84.41 | 53.94 |
| ResNet-34 | 85.94 | 50.57 | ResNet-18 | 84.92 | 49.84 | WRN-34-10 | 85.79 | 54.17 |
| WRN-34-10 | 84.92 | 53.08 | ResNet-18 | 83.17 | 52.11 | WRN-34-10 | 84.21 | 55.65 |
| WRN-34-20 | 85.65 | 56.82 | ResNet-18 | 82.82 | 51.64 | WRN-34-10 | 84.73 | 55.71 |
| WRN-76-10 | 88.54 | 64.25 | ResNet-18 | 85.28 | 51.98 | WRN-34-10 | 86.61 | 57.12 |
Ref:
[1] Adversarial Examples Are Not Bugs, They Are Features. NeurIPS 2019.
[2] Revisiting adversarial robustness distillation: Robust soft labels make student better. ICCV 2021. | null | null | null | null | null | null |
UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild | Accept (poster) | Summary: The paper proposes a unified diffusion framework, UniControl, for more fine-grained controllable generation based on input visual conditions, utilizing two modules: a mixture-of-experts adapter that extracts features from different visual conditions, and a task-aware HyperNet that extracts language-based task embeddings to control visual generation.
Strengths: 1. The paper conducts a relatively comprehensive evaluation of the proposed method, including human evaluations.
2. The framework can be useful as it unified different modalities for visual controllable generation using diffusion models.
3. The authors promise to open-source a training dataset of 20M triplets, which can benefit the whole community.
Weaknesses: 1. **The writing is not polished and is missing details, which can hinder readers' understanding**. For example, Figure 2 should also mention the parameters of the encoder and decoder, since I initially thought the introduced parameters come only from the MOE adapters and HyperNet. The authors should also highlight the differences and **similarities** (e.g., the encoder and decoder are copied) between their work and ControlNet. In Section 3.3, how is hybrid task generalization done? Since your model only conditions on one visual modality when computing the score in one forward pass, do you compute scores individually for each condition during generation? If so, what sampling technique did you use? One sampling technique I can think of is the conjunction operator from Composable Diffusion [1], or do you use some other existing technique? For zero-shot task generalization, I also find it confusing and don't quite understand how it is done. Mentioning it in the supplementary is not sufficient, since the supplementary is meant to provide details that don't affect overall understanding.
2. **The claims are quite shaky.** The authors claim that the model needs to tackle the misalignment of low-level features from different tasks. However, based on Figure 4, I don't see much misalignment in the baselines, i.e., ControlNet. Beyond visual quality, I don't see much advantage over ControlNet; sometimes the single-task ControlNet is even better in some aspects. For example, for object-bounding-box-to-image, ControlNet seems to understand there should be two separate benches instead of the one long bench generated by the proposed model. In addition, since one unified model is trained for a longer time, the model could be more robust, directly improving overall visual quality.
3. **The zero-shot generalization ability could be misleading.** Suppose that in training there are no instructions that perform colorization. I don't expect language itself to bridge the gap between visual signals and language to perform such unseen tasks. For example, in Figure 5, such zero-shot results don't necessarily reflect that the model truly understands the given instructions. One way to check this is to reverse the whole process: for example, can you do image decolorization using your prompt, or blur the image instead? Since training images are mostly high-resolution color images, the model could automatically utilize learned priors to make images higher quality, which doesn't necessarily mean it is doing what it is told to do. If so, then the model just does general reconstruction when prompted with something unknown.
4. **Missing quantitative evaluations.** As stated, one contribution is to show that the visual quality of such controllable generation can be improved, then it is needed to include quantitative comparisons for image fidelity using metrics such as FID and KID, though I don't think they are good metrics but it provides enough insights.
5. **Lacking relevant baselines.** There are other existing adapter methods for such generation; for example, T2I-Adapter [2] and GLIGEN [3] seem highly relevant, and neither of them is used as a comparison in the paper.
6. **Related work.** If one of the contributions is to combine different modalities for compositional generation, then it would be important to add existing works related to compositional generation.
[1] Liu et al., Compositional Visual Generation with Composable Diffusion Models (ECCV 2022) \
[2] Mou et al., T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models. \
[3] Li et al., GLIGEN: Open-Set Grounded Text-to-Image Generation (CVPR 2023)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How are both hybrid tasks generalization and zero-shot new task generalization done? For example, a detailed procedure (i.e., sampling technique) will be useful for better understanding.
2. There are 6 tasks overlapped with ControlNets, each of which is trained for 100K iterations. However, there are 8 tasks in your multi-task model, but it is trained for 900K, which in fact is not a fair comparison. If I misunderstood, feel free to point that out.
3. The method mainly uses modalities that are spatial, so is it capable of using global context, e.g., color, texture, to guide image generation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have included limitations and broader impact such as training data bias.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your suggestions and questions about our paper. Your concerns are addressed as follows.
**Q1: UniControl vs ControlNet**
Components and #params of whole UniControl Model and Multi-ControlNet
| | Stable Diffusion | ControlNet | MoE-style Adapters | TaskHyperNet | Total|
|--|--|--|--|--|--|
| UniControl| 1065.7M | 361M | 0.06M | 12.7M | 1.44B|
| Multi-ControlNet| 1065.7M | 361M * 9 |- | - | 4.315B|
Compared with the original ControlNet model (Stable Diffusion + ControlNet), the increase in UniControl's size is 0.09%, amounting to an additional 12.76M parameters. Notably, UniControl's versatility spans nine distinct tasks. In contrast, Multi-ControlNet (an assembly of multiple ControlNets) needs a single Stable Diffusion and nine separate ControlNets to achieve comparable results. UniControl (1.44B #params) greatly reduces the complexity compared with Multi-ControlNet (4.315B #params) while achieving the same goal with better generation quality. The Stable Diffusion model (including U-Net, CLIPText, VQVAE) is directly copied from its official checkpoint, which is the same as ControlNet's Stable Diffusion part.
**Q2: Details of zero-shot hybrid and new task inference**
Thank you for your suggestions. The work presented in [1] indeed offers a novel perspective on compositional generation via multiple conditions. However, UniControl adopts a distinct approach, detailed as follows:
*Hybrid Task:* As illustrated in the left subfigure of Fig. 3, the features from two condition inputs are integrated through an addition operation. This fused feature set is consolidated into a single tensor, which is subsequently fed into the ensuing ControlNet modules that are interleaved with modulated zero-convs. The entire inference procedure follows the standard DDIM protocol, the same as ControlNet and single-task UniControl.
*New Task:* Similar to the hybrid task, we feed the novel condition into a weighted ensemble of MoE-style adapters. An example configuration might be “depth: 0.6, seg: 0.3, canny: 0.1” for the colorization task. This process yields one feature tensor that is fed to subsequent modules. Again, the inference follows ControlNet's standard methodology, employing the regular DDIM.
We appreciate the reference to [1] and plan to incorporate it into our forthcoming manuscript.
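As a toy sketch of the weighted-ensemble step described above, the function below blends per-task adapter outputs into a single feature tensor. Plain Python lists stand in for real feature tensors, and the adapter functions are hypothetical stand-ins for the MoE-style adapters.

```python
def mix_adapter_features(condition, adapters, weights):
    """Blend per-task adapter outputs with task-specific weights.

    `adapters` maps task name -> feature extractor; `weights` maps task
    name -> mixing weight (e.g. {"depth": 0.6, "seg": 0.3, "canny": 0.1}
    for the colorization example in the rebuttal).
    """
    mixed = None
    for task, w in weights.items():
        feats = adapters[task](condition)
        if mixed is None:
            mixed = [w * f for f in feats]
        else:
            mixed = [m + w * f for m, f in zip(mixed, feats)]
    return mixed

# Toy adapters: each "extracts" features by scaling the condition input.
adapters = {
    "depth": lambda x: [2.0 * v for v in x],
    "seg":   lambda x: [3.0 * v for v in x],
    "canny": lambda x: [4.0 * v for v in x],
}
weights = {"depth": 0.6, "seg": 0.3, "canny": 0.1}
fused = mix_adapter_features([1.0, 1.0], adapters, weights)
```

The fused tensor would then be fed to the subsequent ControlNet modules, with inference following the regular DDIM procedure.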
**Q3: More quantitative evaluations and baseline methods**
Thank you for your suggestion. We've conducted quantitative analysis to include classic baselines such as GLIGEN [1], T2I-adapter [2], and ControlNet [3]. Our experimental setup remains consistent with Sec. 4.3 of the main manuscript, utilizing DDIM as the sampler with a guidance score of 9. With a collection of over 2,000 test samples sourced from Laion and COCO, we've assessed a wide range of tasks covering edges (Canny, HED), regions (Seg), skeletons (Pose), and geometric maps (Depth, Normal).
FID Scores
| | GLIGEN | T2I-adapter | ControlNet | UniControl |
|--|--|--|--|--|
| Canny | 24.9 | 23.6 | **22.7** | 22.9 |
| HED | 27.8 | - | 25.1 | **23.6** |
| Depth | 25.8 | 25.4 | 25.5 | **21.3** |
| Normal | 27.7 | - | 28.4 | **23.4** |
| Seg | - | 27.1 | 26.7 | **25.5** |
| Pose | - | 28.9 | 28.8 | **27.4** |
[1] GLIGEN: Open-Set Grounded Text-to-Image Generation. CVPR 23.
[2] T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models. arXiv 2302.08453.
[3] Adding Conditional Control to Text-to-Image Diffusion Models. arXiv 2302.05543.
**Q4: Claims are shaky.**
ControlNet trains separate models for different tasks, so there is no parameter sharing across tasks and hence no misalignment for ControlNet. The MoE adapter is specifically designed to solve the misalignment issue for UniControl. Notably, UniControl's versatility spans nine distinct tasks. In contrast, Multi-ControlNet (an assembly of multiple ControlNets) needs a single Stable Diffusion and nine separate ControlNets to achieve comparable results. UniControl (1.44B #params) greatly reduces the complexity compared with Multi-ControlNet (4.315B #params) while achieving the same goal with better generation quality.
For the bench example you mentioned (Fig. 4), the prompt is “A women sitting on a bench near a statue, checking her phone”. UniControl generates a real woman checking her phone, while ControlNet generates two statues, not checking her phone.
**Q5: Zero-shot generalization ability is misleading**
Directly using an outpainting model for inpainting tasks can be challenging, since the model tends to leave a sharp change over the mask boundaries, as shown in the attached PDF file. Currently, we think the generalization ability mainly comes from the combination of different visual conditions by assigning different weights to the visual inputs. If we want stronger generalization ability on zero-shot tasks, we agree that we would need to train for them specifically.
**Q6: More related works [1,2,3]**
Thanks for pointing out the highly related works, we included [2,3] for comparison and will cite and discuss [1] in our upcoming version.
**Q7: Fair comparison with ControlNet in training cost**
Iteration counts are not directly comparable because the training configurations differ. Instead, we compare training cost in GPU hours. UniControl is trained for ~5000 GPU hours on A100-40G. This is comparable to the overall training cost of the individual ControlNets, ~2500 GPU hours on A100-80G.
| | Canny | HED | Pose | Seg| Depth| Normal|Sketch | Total|
|--|--|--|--|--|--|--|--|--|
| Hours |600 | 300 | 300 | 400 | 500 | 200| 150 | 2450 |
| GPU |A100-80G | A100-80G | A100-80G | 3090TI | A100-80G |A100-80G | A100-80G | |
**Q8: Inclusion of global context such as color and texture as condition**
We plan to include new conditions to UniControl similar as the T2I-Adaper-Color. Moreover, we can integrate the LoRA or Dreambooth to UniControl to control the style or texture of generative results.
---
Rebuttal Comment 1.1:
Comment: I greatly appreciate the authors' rebuttal, and it addresses most of my concerns.
Please incorporate what the authors promised in the next version of the paper.
I have raised my rating to borderline accept.
---
Reply to Comment 1.1.1:
Title: After Response
Comment: We sincerely thank the reviewer HFTZ for your detailed feedback, and are happy to hear that our rebuttal addressed your concerns! As suggested, we will certainly make revisions to the manuscript to add these quantitative comparisons and clarifications of related research works.
Best regards,
Authors of 6000 | Summary: UniControl is a diffusion-based image generation model that can condition on natural language input as well as multiple types of visual inputs (e.g. edge map, depth map). The framework is built upon components of Stable Diffusion Models, an MOE adapter and a ControlNet modulated by a task-aware HyperNet. The model demonstrates abilities of conditioning on one or multiple visual inputs at a time, as well as visual inputs that it has not seen during training.
Strengths: - UniControl extends ControlNet to work with multiple tasks and shows that the tasks help each other so as to improve performance on single task metrics as well.
- the MOE adapter set up easily allows the model to condition on multiple visual inputs.
Weaknesses: - The model has a certain complexity, as it involves multiple modules such as the MOE adapter, the ControlNet and a HyperNet.
- It needs to maintain two sets of SDM parameters. (This is the same as ControlNet, though.)
- MOE adapter means more parameters to be added with each new task added.
- Task instruction is needed for both training and inference, and is handled by a separate module than the language prompt, even though both are text prompts. This can be a downside if the task instruction is unknown or not well defined.
- It is not clear what $c_{task}$ is used by the task-aware HyperNet during hybrid-task or zero-shot new-task inference.
Ablation study would be helpful to show the importance/usefulness of the proposed components:
- model performance without the MOE adapter.
- model performance without conditioning on $c_{task}$, or merge $c_{task}$ into $c_{text}$ (in a thoughtful way). This will show whether the task-aware hypernet is needed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - HyperNet should be cited?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations and broader impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your suggestions to enrich our paper. Your concerns and questions are addressed as follows.
**Q1: Complexity of UniControl**
Components and #params of whole UniControl Model and Multi-ControlNet
| | Stable Diffusion | ControlNet | MoE-style Adapters | TaskHyperNet | Total|
|--|--|--|--|--|--|
| UniControl| 1065.7M | 361M | 0.06M | 12.7M | 1.44B|
| Multi-ControlNet| 1065.7M | 361M * 9 |- | - | 4.315B|
Compared with the original ControlNet Model (incorporating Stable Diffusion + ControlNet), the complexity of UniControl has seen a marginal increase of 0.09%, amounting to an addition of 12.76M parameters. Despite this modest augmentation, UniControl's capability extends to nine distinct tasks. In contrast, the Multi-ControlNet (an ensemble of several ControlNets) demands a separate Stable Diffusion plus nine individual ControlNets to accomplish the same tasks. This underscores the efficiency of UniControl, which at 1.44B parameters, significantly trims the complexity when compared to the 4.315B parameters of Multi-ControlNet, all the while achieving a comparable objective.
**Q2: MoE adapter means more parameters to be added with each new task added.**
The MoE-style adapters are exceptionally lightweight, with only ~0.06M parameters in total, so their size and associated computation cost are marginal compared with the entire model.
**Q3: Downside of task instruction**
As explained in Sec. A.2 of the Appendix, we utilize a predefined set of task instructions that can be systematically matched to each respective task, ensuring a consistent and transparent process. When it comes to unknown tasks, our model demonstrates robustness towards a spectrum of new instructions. This is largely attributed to the clustering of their embeddings, due to their semantic similarities.
The TaskHyperNet is efficient in size, comprising just 12.7M parameters, and there are techniques for further acceleration. One such technique involves offline collection of task embeddings. By doing so, we can bypass the need for instruction-to-embedding inference and instead directly retrieve the pre-calculated task embeddings, expediting the process.
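The offline-collection idea above can be sketched as a simple lookup table built once for the fixed set of task instructions. The class and the `embed_fn` stand-in (for CLIPText + TaskHyperNet) are hypothetical names for illustration only.

```python
class TaskEmbeddingCache:
    """Precompute embeddings for a fixed set of task instructions so
    inference can skip the instruction-to-embedding forward pass."""

    def __init__(self, embed_fn, instructions):
        # All embedding computation happens once, offline.
        self._table = {ins: embed_fn(ins) for ins in instructions}

    def lookup(self, instruction):
        # At inference time this is a dictionary read, not a model call.
        return self._table[instruction]

calls = []
def fake_embed(text):
    """Stand-in embedder that records how often it is invoked."""
    calls.append(text)
    return [float(len(text))]

cache = TaskEmbeddingCache(fake_embed, ["canny edge to image", "depth map to image"])
_ = cache.lookup("canny edge to image")  # no new fake_embed call at lookup time
```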
**Q4: How does Task-aware HyperNet use in the zero-shot tasks?**
In the context of zero-shot tasks, we defined the task instruction in a straightforward way. Examples include designations like "image inpainting" or "segmentation map and human skeleton to image." These instructions are subsequently processed by CLIPText and the Task-aware HyperNet to derive the requisite task embeddings.
**Q5: Missing Ablation Study**
Thank you for pointing this out. In response, we've conducted an ablation study, specifically focusing on the MoE-Style Adapter and TaskHyperNet. The table contains the FID scores. It is noticeable that the full-version UniControl (MoE-Style Adapter + TaskHyperNet) constantly outperforms the ablations.
| MoE-Adapter | TaskHyperNet | Canny | HED | Depth | Normal | Seg | Pose | Avg|
|--|--|--|--|--|--|--|--|--|
| x | x | 27.2 | 29.0 | 27.6 | 28.8 |29.1 | 30.2 | 28.7 |
| ✓ | x | 24.5 | 26.1 | 23.7 | 24.8 | 26.9 | 28.3 | 25.7 |
| ✓ | ✓ | **22.9** | **23.6** | **21.3** | **23.4** | **25.5** | **27.4** | **24.0** |
**Q6: Cite HyperNet**
Thank you for your suggestion. It is our mistake. We have updated these two references [1,2] in our latest manuscript.
[1] HyperNetworks. ICLR 17.
[2] Continual Learning with Hypernetworks. ICLR 20.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the rebuttal. Regarding Q1, what I meant by "complexity" is not equivalent to the number of parameters. A method can be complex yet computationally cheap (e.g. it can contain a lot of lightweight components) or simple but expensive (e.g. a large-scale LLM). I still do think the method is on the complex side. With the added ablation study I am willing to raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Title: After Feedback
Comment: Dear Reviewer y5r6,
We sincerely appreciate your constructive comments and positive recognition of the paper. The ablation study will be added to the new version of the manuscript. We agree with your point about methodological complexity, since unified controllable generation is a brand-new and challenging task. Compared with the direct baseline, Multi-ControlNet, we believe UniControl has lower complexity in both inference and training. We will continue to explore simpler and more effective solutions in this area.
Best regards,
Authors of 6000 | Summary: This paper introduces UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a singular framework. UniControl enables pixel-level-precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. For this purpose, the authors augment pretrained text-to-image
diffusion models (ControlNet) and introduce a task-aware HyperNet to modulate the diffusion models, enabling the adaptation to different C2I tasks simultaneously. UniControl was trained on nine unique C2I task and demonstrated excellent zero-shot generation abilities with unseen visual conditions.
Strengths: 1. The paper is clearly written and easy to follow.
2. The related work section covers the most relevant papers in the field.
3. The approach produces excellent image generation results.
4. The experimental evaluation is convincing.
Weaknesses: 1. It relies too much on existing image generation methods (ControlNet).
2. A more scientific contribution would have been expected.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Here are my concerns:
1. The paper claims in the abstract that: "...UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes". Maybe I missed something, but could the authors elaborate more on this statement? What are the other 'single-task-controlled methods' they refer to? I only found a comparison with ControlNet.
2. Could you adapt your approach to work with other Image Generation software (besides Stable Diffusion)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are addressed in the paper in a dedicated section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable time to review this paper. Your concerns are addressed below.
**Q1: More comparison with the single-task-controlled methods**
Thank you for your valuable suggestion. In response, we've expanded our quantitative analysis to include classic single-task-controlled methods such as GLIGEN [1], T2I-adapter [2], and ControlNet [3]. Our experimental setup remains consistent with Sec. 4.3 of the main manuscript, utilizing DDIM as the sampler with a guidance score of 9. With a collection of over 2,000 test samples sourced from Laion and COCO, we've assessed a wide range of tasks covering edges (Canny, HED), regions (Seg), skeletons (Pose), and geometric maps (Depth, Normal).
The following FID table demonstrates that our UniControl consistently surpasses the baseline methods across the majority of tasks. Notably, UniControl achieves this while maintaining a more compact and efficient architecture than its counterparts.
FID Scores
| | GLIGEN | T2I-adapter | ControlNet | UniControl |
|--|--|--|--|--|
| Canny | 24.9 | 23.6 | **22.7** | 22.9 |
| HED | 27.8 | - | 25.1 | **23.6** |
| Depth | 25.8 | 25.4 | 25.5 | **21.3** |
| Normal | 27.7 | - | 28.4 | **23.4** |
| Seg | - | 27.1 | 26.7 | **25.5** |
| Pose | - | 28.9 | 28.8 | **27.4** |
[1] GLIGEN: Open-Set Grounded Text-to-Image Generation. CVPR 23.
[2] T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models. arXiv 2302.08453.
[3] Adding Conditional Control to Text-to-Image Diffusion Models. arXiv 2302.05543.
**Q2: Adapt UniControl method to work with other Image Generation software (besides Stable Diffusion)**
Yes. UniControl is versatile and can be adapted to various diffusion-based models, including the likes of Deep-floyd [1]. To facilitate this integration, we project the embeddings from UniControl onto the new model's backbone using cross-attention layers or linear mapping. However, it's essential to note that this integration necessitates re-training UniControl to ensure seamless alignment with the new backbone.
[1] https://github.com/deep-floyd/IF
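As a toy sketch of the linear-mapping option described above (all dimensions and names here are our own illustrative assumptions, not details from the paper), the control embeddings would simply be projected onto the width expected by the new backbone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 4 condition tokens of width 768 from UniControl,
# mapped to a new backbone expecting width 1024. In practice W and b
# would be learned during the re-training step mentioned above, not random.
unicontrol_tokens = rng.normal(size=(4, 768))
W = rng.normal(size=(768, 1024)) * 0.02
b = np.zeros(1024)

projected = unicontrol_tokens @ W + b
print(projected.shape)  # (4, 1024)
```

The cross-attention alternative would instead let the new backbone attend to the UniControl embeddings as keys/values; the linear map is simply the cheapest way to match dimensionalities.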
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I am satisfied with the authors' rebuttal, therefore I maintain my initial rating.
---
Reply to Comment 1.1.1:
Title: After Rebuttal
Comment: Dear Reviewer Haxt,
Thank you for your constructive comments and support of the paper! We will include the new experimental results in the next version of the manuscript.
Best regards,
Authors of 6000 | Summary: This paper presents a method for controlling the output of a diffusion
model with multiple modalities of reference images, e.g. edges,
segmentation, depth, etc. It can be seen as proposing a multi-task
version of ControlNet. Experiments in the paper show the multi-task
approach outperforms the single-task, and also allows zero-shot
applicability to novel tasks such as colorization.
Strengths: - The paper effectively demonstrates architecture modifications for multi-task controlling model
and shows that it is generally better than the single-task (Table
1).
- The model generalization to tasks that were not in the training set such
as colorization, deblurring or inpainting is quite remarkable.
- The paper introduces a dataset with 20M multi-modal condition
training pairs.
Weaknesses: The paper presents a well engineered solution to achieving a
multi-task version of ControlNet, and show generalization to some new
tasks. Some decision justifications are not backed up by ablation
experiments and the experiments on task generalization are
demonstrated only with a few visual examples.
- If I understood correctly, the "Mixture of Experts" component is manually selecting the encoder for each modality. In that case, calling this module an MoE is not justified, since it could be misleading.
- Unless I missed it, the different components in the design are not
ablated. What is the contribution of the Hypernet task embedding?
- How do you explain the poor performance of ControlNet in Normal-to-Image (Fig. 6)?
- Quantitative comparison for the generalization to new tasks is
limited to a few visual examples. Some of the examples are even
repeated three times (segmentation+skeleton); for that case, it may
be more convincing if the paper showed three different results.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful comments and suggestions for our submission.
**Q1: Missing Ablation Study**
Thank you for pointing this out. We've conducted an ablation study with FID scores as follows. Our experimental setup adheres to Sec. 4.3 of the main paper, employing DDIM as the sampler and a guidance score of 9. Drawing from a collection of over 2,000 test samples sourced from Laion and COCO, we applied six tasks for evaluation, spanning edges (Canny, HED), regions (Seg), skeletons (Pose), and geometric maps (Depth, Normal). It's evident that the UniControl model (incorporating both MoE-Style Adapter and TaskHyperNet) consistently surpasses the results of its ablations.
| MoE-Adapter | TaskHyperNet | Canny | HED | Depth | Normal | Seg | Pose | Avg|
|--|--|--|--|--|--|--|--|--|
| x | x | 27.2 | 29.0 | 27.6 | 28.8 |29.1 | 30.2 | 28.7 |
| ✓ | x | 24.5 | 26.1 | 23.7 | 24.8 | 26.9 | 28.3 | 25.7 |
| ✓ | ✓ | **22.9** | **23.6** | **21.3** | **23.4** | **25.5** | **27.4** | **24.0** |
**Q2: UniControl is an engineered solution for multi-task version of ControlNet**
UniControl is deeply influenced by the principles of HyperNetwork [1] and multi-task visual learning such as Taskonomy [2]. It embodies the concept of "Control over the Control" or "Meta Control." We believe that unifying diverse visual modalities and tasks within a single framework is not just an engineering endeavor, but a significant scientific challenge, since it requires preserving both single-task discriminability and multi-task generality.
[1] HyperNetworks. ICLR 17.
[2] Taskonomy: Disentangling Task Transfer Learning. CVPR 18.
**Q3: Justification of MoE**
Yes. It is not a true MoE, as its assembly is not learnable. Because of this distinction, we name it an MoE-style adapter rather than MoE in the paper. But the high-level ideas are parallel, since each module caters to a specific modality, effectively functioning as an "Expert" network. When dealing with zero-shot tasks, the adapter's weights can be autonomously determined through the computation of similarity scores derived from task embeddings.
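As an illustration of this similarity-based weighting, here is a minimal sketch under our own assumptions (the embedding dimension, task names, cosine similarity, and softmax normalization are ours for illustration, not confirmed details of the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained task embeddings, one per expert branch of the
# MoE-style adapter; values and dimension (8) are purely illustrative.
task_embeddings = {t: rng.normal(size=8) for t in ("canny", "depth", "seg")}
zero_shot_embedding = rng.normal(size=8)  # embedding of an unseen instruction

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Weight each expert by the similarity of its task embedding to the
# zero-shot instruction, normalized to sum to one via a softmax.
sims = np.array([cosine(zero_shot_embedding, e) for e in task_embeddings.values()])
weights = np.exp(sims - sims.max())
weights /= weights.sum()

print(dict(zip(task_embeddings, np.round(weights, 3))))
```

The resulting weights would then mix the outputs of the pretrained expert adapters for the unseen task.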
**Q4: Question of Fig. 6 about the poor result of Normal-to-Image ControlNet**
This can be attributed to the differing training data used for ControlNet and UniControl. Since ControlNet hasn't made its training data public, we were compelled to create our own datasets for both training and testing. In our approach, we selected images from Laion with a high resolution (>=512) and aesthetic scores exceeding 6. It's plausible that the quality of these images surpasses that of the original ControlNet’s training data for these tasks.
**Q5: More results on zero-shot tasks**
Thank you for your suggestion. We've incorporated additional zero-shot results in the Appendix. It's worth noting that the zero-shot capabilities of UniControl are a welcome byproduct. An exhaustive exploration of arbitrary zero-shot success deserves its own comprehensive study and extends beyond the scope of our current focus. We remain hopeful that future research in this direction will exhibit the reliable zero-shot generalization capacities akin to those observed in LLMs.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal.
Comment: I thank the authors for the rebuttal and the new ablation studies showing the contribution of the Hypernet.
After reading the other reviewers' comments and authors' response, I am raising my score by one point.
---
Reply to Comment 1.1.1:
Title: After Feedback
Comment: Dear Reviewer J9bc,
We sincerely thank you for your helpful comments, and we are happy to hear that we've addressed most of the concerns. The new experimental results will be integrated into the next version of the manuscript.
Best regards,
Authors of 6000 | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for your valuable time in reviewing this paper. We sincerely appreciate your constructive comments and questions to make this paper better. Below, we respond to each concern in order.
Pdf: /pdf/0e6ce502ea59837a78d0dcf97bcc75ff12c2bcdc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents UniControl, which unifies multiple visual controlling conditions into a single unified model. To achieve this, the authors introduce a task-aware HyperNet to modulate the diffusion models, enabling adaptation to different condition-to-image (C2I) tasks simultaneously. UniControl is trained on nine unique C2I tasks and demonstrates impressive controlled generation quality. It also demonstrates zero-shot generation abilities with unseen visual conditions, including condition combination and new conditioning. User studies show that UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes.
Strengths: - The paper addresses an interesting and important research topic, unifying the controllability of different models into a single model.
- The authors compare the results with ControlNet and demonstrate better qualitative results.
- The zero-shot task generalization and instruction combination aspects of the proposed method are intriguing and valuable.
Weaknesses: - The paper lacks quantitative evaluations for the alignment of the generated content and conditional inputs. For segmentation and bounding box, pretrained detectors could be used for evaluating the alignment, following the configuration in [1]. Although ControlNet does not handle bounding box conditions, a baseline for object detection instruction-following would be [1].
[1] Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., ... & Lee, Y. J. (2023). Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22511-22521).
- Zero-shot task generalization seems underexplored, and there is a lack of detailed analysis.
- Inpainting results are not surprising as they share a strong alignment in terms of task instruction with the pretraining with outpainting.
- For deblurring and colorization, it would be better to illustrate how the model can perform zero-shot task generalization. Does the capability mainly come from the task instruction encoder?
- If so, it would be more intriguing to demonstrate results with more tasks that can really benefit from the zero-shot task generalization. It is easy to obtain data for colorization, inpainting. It would be valuable to demonstrate results on tasks that may be hard to collect training data. For example, for scribble to image, ControlNet utilizes strong human-crafted data augmentation to synthesize the scribbles. Can UniControl generalize well to this case?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors please clarify if the difference between Ours-Single and ControlNet is only the training data / training schedule or there are other differences? From the results in Fig. 6 and Fig. 7, it seems that Ours is much better than ControlNet, while Ours is only slightly better than Ours-Single.
- Where does the performance gain from ControlNet to Ours-Single mainly come from? Is it because of the better dataset and/or longer training? Note that this is not a criticism, so I put it here in the questions. But I think a clarification or analysis in this regard could be helpful for the readers and the research community.
- Cost comparison with ControlNet. In L38-39, the authors mention "Retraining a separate model is necessary to handle a different modality of visual conditions, incurring non-trivial time and spatial complexity costs." However, UniControl requires 5000 GPU hours on A100, while ControlNet training for a single model is usually just 100-300 hours. This somehow invalidates this point.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable time to review this paper. We sincerely appreciate your constructive comments. Your concerns are addressed below.
**Q1: More quantitative evaluations for the alignment of the generated content and conditional inputs**
Thank you for highlighting this. We agree with your feedback and have incorporated additional quantitative evaluations. We've included FID metrics and average precision (AP) metrics, adopting the methodology from GLIGEN [1] and utilizing pre-trained detectors.
In addition to GLIGEN [1] as you suggested, we've also incorporated T2I-adapter [2] and ControlNet [3] as baselines, given their foundational contributions to this domain. Our experimental setup adheres to Section 4.3 of the main paper, and for sampling, we employ DDIM with a guidance score of 9. We've collected over 2,000 test samples from Laion and COCO, and conducted evaluations across six diverse tasks: these span edges (Canny, HED), regions (Seg), skeletons (Pose), and geometric maps (Depth, Normal).
Our FID table, presented subsequently, showcases evaluations related to visual quality. A clear observation from the table is that our UniControl model consistently surpasses the baseline methods across the majority of tasks. Notably, UniControl also offers a more compact architecture compared to the single-task methods, which typically need multiple checkpoints (one per task).
FID Scores
| | GLIGEN | T2I-adapter | ControlNet | UniControl |
|--|--|--|--|--|
| Canny | 24.9 | 23.6 | **22.7** | 22.9 |
| HED | 27.8 | - | 25.1 | **23.6** |
| Depth | 25.8 | 25.4 | 25.5 | **21.3** |
| Normal | 27.7 | - | 28.4 | **23.4** |
| Seg | - | 27.1 | 26.7 | **25.5** |
| Pose | - | 28.9 | 28.8 | **27.4** |
Taking your advice, we've integrated the pre-trained YOLO-v4 detector. We deployed our UniControl model to generate images corresponding to the bounding box masks and captions from the COCO14-Val dataset. The summarized AP scores in the ensuing table demonstrate that our method is superior to GLIGEN.
AP Scores on COCO (YOLO-V4)
| | GLIGEN | UniControl |
|--|--|--|
| AP | 24.0 | **26.2** |
| AP_50 | 42.2 | **45.0** |
| AP_75 | 24.1 | **26.3** |
[1] GLIGEN: Open-Set Grounded Text-to-Image Generation. CVPR 23.
[2] T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models. arXiv 2302.08453.
[3] Adding Conditional Control to Text-to-Image Diffusion Models. arXiv 2302.05543.
**Q2: More details and exploration of the zero-shot tasks**
*Inpainting and Outpainting:* While inpainting and outpainting might appear related, they are fundamentally distinct. Inpainting heavily leverages the contextual information from unmasked regions, necessitating a precise match. Conversely, outpainting has more freedom, with the generative model prioritizing prompts to envision new content. Directly using an outpainting model for inpainting tasks can be challenging, since the model tends to leave a sharp change across the mask boundaries. Our pretrained UniControl, thanks to intensive training across multiple tasks, has learned edge and region-to-image mappings, which assists in preserving contextual information. We've incorporated visual comparisons in the attached pdf for further clarity.
*Deblurring and Colorization:* UniControl processes blurred or grayscale images as visual cues, relying on textual prompts and specific task instructions like "grey to image" or "image deblurring" during inference. The MoE-style adapters are structured based on the similarities observed among the pre-trained tasks, as explained in Sec. 3.3. For instance, the colorization task uses weights as "depth: 0.6, seg: 0.3, canny: 0.1". This adaptability stems from a blend of task-specific instructions and the MoE-style adapters. Misaligned adapters or inappropriate task instructions compromise the model's performance.
*Scribbles:* Thank you for your suggestion. Indeed, our model demonstrates a promising capacity to generalize under scribble conditions, showing parallels to the ControlNet's ability, even though UniControl hasn't been directly trained using scribble data. Our pdf file provides additional results illustrating the scribble-to-image generation.
**Q3: Ours-Single vs ControlNet**
The only difference between Ours-Single and ControlNet is the training data. The authors of ControlNet have not released their training data, so we had to reimplement the ControlNet model using our collected dataset, MultiGen-20M. Notably, MultiGen-20M is set to be the first open-sourced conditional visual generation dataset in this area. The variance observed between Fig. 6 and Fig. 7 results from these differing datasets. We filtered images from Laion with higher resolutions (>=512) and aesthetic scores (>6), whose quality is likely better than that of the original ControlNet’s training data. Data quality is essential for visual performance; therefore, Ours-Single is better than the official ControlNet on these tasks.
**Q4: Fair comparison of training cost**
Thank you for your careful observation. Indeed, the ~5000 GPU Hours we've stated are similar to the cumulative training cost of multiple ControlNets. For our work, we utilized the A100-40G, whereas ControlNets employed the A100-80G. Let's break down ControlNets' training costs:
| | Canny | HED | Pose | Seg| Depth| Normal|Sketch | Total|
|--|--|--|--|--|--|--|--|--|
| Hours |600 | 300 | 300 | 400 | 500 | 200| 150 | 2450 |
| GPU Type |A100-80G | A100-80G | A100-80G | 3090TI | A100-80G |A100-80G | A100-80G | |
As the table shows, most ControlNets were trained on A100-80G GPUs, for a total training cost of ~2,450 GPU hours. In contrast, we used the A100-40G, which is less powerful than the A100-80G [4].
[4] https://www.topcpu.net/en/gpu-c/a100-sxm4-40-gb-vs-a100-pcie-80-gb
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. Most of my concerns are addressed by the authors' response, and I thus maintain my initial rating: weak accept. Please add these quantitative results in the revised paper. Thanks.
---
Reply to Comment 1.1.1:
Title: After Response
Comment: Thank you again for your understanding and constructive feedback. We are pleased to hear that most of your concerns have been addressed. As suggested, we will certainly make revisions to the manuscript to add these quantitative comparisons.
Best regards,
Authors of 6000 | null | null | null | null | null | null |
You Shall not Pass: the Zero-Gradient Problem in Predict and Optimize for Convex Optimization | Reject | Summary: The paper first characterizes the 'zero-gradient' issue---a challenge associated with learning a model in the 'predict-then-optimize' paradigm---in terms of the number of active KKT constraints of the optimization problem. It then proposes a surrogate optimization problem for which the zero-gradient does not arise and evaluates these surrogates on 2 domains.
Strengths: * The paper addresses an important problem, i.e., zero-gradients in predict-then-optimize.
* The paper proposes a novel surrogate.
Weaknesses: 1. **The 'zero-gradient theorem' is not novel:** The paper claims to 'discover and explain the zero-gradient problem in P&O for convex optimization'. However, a number of predict-then-optimize papers acknowledge the zero-gradient issue and propose their own surrogates, e.g., Elmachtoub and Grigas [2017] and Wilder et al. [2019]. In fact, for well-defined LPs (with unique solutions), it is known that the task performance/SPO loss is piecewise constant. In contrast, it is known that if the function and constraints are strongly convex, e.g., the portfolio optimization problem in the experiments with $\lambda > 0$, there are no zero-gradients. As a result, it's not clear why this is the domain that the paper chooses to run experiments on...
1. **The experiments have no baselines:** While the paper does provide a surrogate that does not run into a zero-gradient issue, the bar for publication is typically higher, i.e., that this specific surrogate outperforms others from the literature. As the related work section notes, there are other ways to get around the zero-gradient issue, like creating surrogate problems by adding quadratic/exponential regularization terms, however there are no comparisons to any methods not presented in the paper, not even simple baselines like 2-stage, random and optimal.
1. **The paper is poorly written:** There is almost no information about the contributions in the abstract and introduction and, as noted above, the paper does not adequately engage with past work.
### Update (20 Aug 2023)
After the discussion with the authors, reading the other reviews and thinking about this paper more, I find myself still recommending rejection. Here are the reasons:
1. The *submitted* version of the paper lacks any discussion of or comparison to related work, and also overclaims (e.g., "the first to discover the zero-gradient theorem"). While the authors have refined their position significantly in the rebuttals (and much improved their contribution as a result), I believe that (a) there still remain important unanswered questions, and (b) these changes lead to a paper with significantly different claims. I discuss both of these in terms of specific contributions below.
1. **Zero-gradient Theorem:** In the responses, the authors acknowledge that the zero-gradient issue has been known for optimization problems with linear objectives. The modified claim, as I understand it, is that the characterization of zero-gradients in this paper is significantly different/improved from the existing understanding. However, (a) the proof is not particularly novel (imo) because it formalizes existing knowledge in the language of KKT conditions, and moreover (b) it is not clear whether this characterization is significantly more powerful than the existing understanding (specifically, do gradients lie in the null space of the normals of the active constraints when the Jacobian matrix isn't zero, and what does this mean intuitively?). I believe that the strength of this contribution is dependent on the answers to these two (imo) unanswered questions.
1. **r-Smoothing Surrogate:** While avoiding zero-gradient issues is definitely a desirable property for a surrogate, it is by no means *sufficient* for good performance - a number of papers in the literature do not run into zero-gradient issues. As a result, r-smoothing needs to be better motivated and compared to past work. While the authors have done an admirable job of running experiments in the author response period, there are some papers that do very similar things that the authors have not compared to, e.g., Sahoo et al. (2022). I think there is work that remains to be done in situating their surrogate in the context of the large body of recent work on PtO.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. How does your proposed approach do in comparison to baselines from the literature?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: There are no limitations discussed; in contrast, I believe this paper over-claims its contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. The zero-gradient theorem is not novel**
It is indeed well known that differentiating through linear programs is impracticable due to zero/undefined gradients. However, to the best of our knowledge, the zero-gradient problem for *nonlinear* convex optimization was not known before.
The main theoretical results of our paper, Lemma 3.4 and Theorem 3.5, demonstrate that differentiation of generic convex problems can yield non-informative, zero gradients. In fact, we show that the cause of the zero-gradient problem is not the objective function, but the non-smooth points of the constraints set $\mathcal{C}.$
To additionally reiterate the essence of the zero-gradient problem, we would like to refer to Figure 1. The right half of the figure depicts the constraints set $\mathcal{C}$ defined as a three-dimensional cube. Consider the red point denoted by $\hat{x}$ and the gradient cone $G(\hat{x})$ depicted by the orange cone at this point. By Property 3.2 (KKT conditions), the point $\hat{x}$ is the solution of the internal optimization problem, $\arg\max_{x\in\mathcal{C}} f(x,\hat{w})$, iff the gradient of its objective, $\nabla_{x}f(\hat{x}, \hat{w})$, lies in $G(\hat{x})$.
Now, if this gradient lies in the interior of the gradient cone (i.e., complementary slackness holds), we can clearly see that infinitesimal changes of $\hat{w}$ cannot move $\nabla_{x}f(\hat{x},\hat{w})$ outside of $G(\hat{x})$, and hence cannot change the solution $\hat{x}$. Therefore, the Jacobian $\nabla_{\hat{w}}x^\ast(\hat{w})$ is zero. We would like to emphasize that this result is *completely independent* of the function class of $f$.
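This cube argument is easy to check numerically. The sketch below is our own illustration (not code from the paper): it maximizes a linear objective $w^\top x$ over the cube $[-1,1]^3$, whose closed-form maximizer is $x^\ast(w)=\mathrm{sign}(w)$, and estimates the Jacobian by finite differences:

```python
import numpy as np

def x_star(w):
    # Maximizer of the linear objective f(x, w) = w @ x over the cube
    # C = [-1, 1]^3: each coordinate saturates at the sign of w_i.
    return np.sign(w)

w_hat = np.array([1.0, 0.5, -0.3])  # solution sits at a vertex of the cube

# Finite-difference Jacobian of x*(w) with respect to w.
h = 1e-4
jac = np.column_stack([
    (x_star(w_hat + h * e) - x_star(w_hat - h * e)) / (2 * h)
    for e in np.eye(3)
])

# All three box constraints are active at the vertex, so small perturbations
# of w leave x* unchanged: the Jacobian is the zero matrix.
print(jac)
```

Because the solution sits at a vertex where the gradient cone is full-dimensional, every column of the estimated Jacobian is zero, matching the claim above.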
Importantly, the zero-gradient problem can occur even if the constraints are non-linear. For example, suppose $x\in\mathbb{R}^2$ and $\mathcal{C}$ is a convex lens -- an intersection of two disks. Then, $\mathcal{C}$ has two vertices where the gradient cone is two-dimensional. Hence, if the gradient of the internal objective $\nabla_xf(x, \hat{w})$ lies in the interior of either of these cones, the Jacobian $\nabla_{\hat{w}} x^\ast(\hat{w})$ is a zero matrix.
In the experiments, we aim at demonstrating that the zero-gradient problem indeed occurs in practice. We consider the portfolio optimization problem an interesting benchmark as it allows us to smoothly vary its true objective between linear and quadratic regimes.
Figures 3a and 4 of the main paper show that the performance of our $r-$smoothing approach is better than that of the standard algorithm. We believe that this happens due to the zero-gradient problem, but we also agree that the performance plots are not sufficient evidence. To support this point, we ran additional experiments. In the attached PDF file, Figure 2 shows the norm of the loss function gradient, $\|\nabla_{\theta}f(\hat{x}, w) \|_2$, and the number of active constraints during training. We compared the exact differentiation [2] of the QP approximation and our $r-$smoothing method for linear and quadratic versions ($\lambda=0$ and $\lambda=2$) of the portfolio optimization problem.
These experiments demonstrate that with training, more constraints become active, and consequently, the gradient norm decreases. This process can be observed in both linear and quadratic cases, but is much more prominent in the former. As the QP internal problem is used in both cases, the difference can not be explained by the properties of the internal problem itself. Our explanation, also provided in lines 310-312 of the main paper, is that when the true objective $f$ is linear the true optimal solution lies on the boundary of the feasibility set $\mathcal{C}$. Because of that, the gradient of the loss function $\nabla_{x}f(\hat{x}, w)$ pushes the predicted solution $x^\ast(\hat{w})$ to the boundary of $\mathcal{C}$, and hence more constraints get activated. Then, based on Lemma 3.4, the null-space of the Jacobian $\nabla_{\hat{w}}x^\ast$ becomes larger, and hence the zero-gradient problem is more likely to occur.
**2. The experiments have no baselines**
To the best of our knowledge, differentiation through convex programs [2] is considered to be the ultimate approach for convex non-linear P\&O problems, as it computes the exact derivative (which we show can be non-informative). We are not aware of other works studying approximations for the P\&O loss in this case. However, it is true that there exist various methods for approximate differentiation of linear problems. In the attached PDF file, we compared our $r-$smoothing method against SPO+ surrogate loss [3] (labeled ''SPO+'' in the figure), mean-squared error $\|\hat{w}-w\|^2_2$ (''MSE''), and perturbation-based approach [4] (''perturbed''). The method labeled ‘’standard’’ corresponds to using the QP approximation and computing its exact derivative using the results from [2]. This method is equivalent to quadratic regularization from [1], and hence we did not include the latter in the baselines. As SPO+ and perturbation-based approaches are only applicable in the case of linear problems, we used the linear portfolio optimization problem ($\lambda=0$) and the OPF problem in these new experiments. The results demonstrate that our method performs better than the baselines.
**3. The paper is poorly written**
Thank you for pointing out this omission. We will describe our contribution properly in the abstract and introduction. We will also try to provide better explanations for our main results.
[1] Bryan Wilder et al. Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization (2019)\
[2] Akshay Agrawal et al. Differentiable convex optimization layers (2019)\
[3] Adam N Elmachtoub and Paul Grigas. Smart predict, then optimize (2017)\
[4] Quentin Berthet et al. Learning with differentiable perturbed optimizers (2020)\
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Responding point-wise.
### Novelty of zero-gradient theorem
Despite the fact that nothing has ever been explicitly proved for the nonlinear case, I don't believe that the theorem is particularly novel for the following reasons:
1. While I understand that the literature has typically talked about this issue in the context of linear programs, **the argument described in the paper's proof is exactly the same as that for the linear case**—(a) small perturbations to the parameters lead to the same decisions leading to zero gradients, and (b) this happens at the corners/edges of polytopes (or, generalizations thereof like the cylinder in Figure 1). For example, Figure 1 in Elmachtoub and Grigas (2017) and (to a lesser extent) Figure 1 in Berthet et. al. (2020) have the 2D equivalent of the Figure 1 in this paper. IMO, the paper's proof just frames this argument using the language of KKT conditions.
2. **It is also understood in the literature that this is not because of the linearity of $f$**, but rather an interaction between the objective and the constraints. The only reason this issue doesn't typically arise in QPs is that (a) the optimum of the quadratic term (typically the origin) is in the interior of the feasible region and (b) the gain from the quadratic term by moving away from the boundary outweighs the gain from the linear term in moving towards the boundary. If either of these conditions is broken, there can be zero-gradient issues, so I don't doubt that zero-gradient issues arise in the experiments. For example, even in the simple 1D case, if we're trying to find the gradients of $x^*(\alpha) = \arg\min_x \alpha x^2 - 10x$ s.t. $-1 \leq x \leq 1$ (which is a QP) with respect to $\alpha$, for values $0 < \alpha < 5$ we would have zero gradients because the gain from moving away from the boundary is not high enough. The literature doesn't typically talk about general non-linear problems because there aren't many efficient ways to solve such problems even when the parameters are known.
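As a sanity check, this 1D example can be verified numerically (an illustrative sketch we add here; `x_star` is the closed-form solution of the box-constrained quadratic, not code from the paper):

```python
import numpy as np

def x_star(alpha):
    """Solution of argmin_x alpha*x^2 - 10*x subject to -1 <= x <= 1.

    The unconstrained minimizer is x = 5/alpha; clipping enforces the box.
    """
    return float(np.clip(5.0 / alpha, -1.0, 1.0))

def d_x_star(alpha, eps=1e-6):
    """Central finite-difference estimate of d x*/d alpha."""
    return (x_star(alpha + eps) - x_star(alpha - eps)) / (2 * eps)

# For 0 < alpha < 5 the unconstrained optimum 5/alpha lies outside the box,
# so the constraint x <= 1 is active and the derivative w.r.t. alpha is zero.
zero_grad = d_x_star(2.0)       # 0.0: solution stuck at the boundary x = 1
interior_grad = d_x_star(10.0)  # about -0.05: interior solution x = 0.5 moves
```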
### Literature related to r-smoothing-like surrogates
While it's true that the papers cited above are the most popular solutions for P&O, there has been a lot of recent work that is closer to the $r-$smoothing method that the paper proposes. For example, Sahoo et al. [1] suggest using a surrogate gradient along with a projection component that seems very similar to the $x^*_{QP}$ surrogate proposed in the paper. The form of $x^*_{QP}$ also seems similar to recent E2E approaches (e.g., [2]) where the goal is to predict the decision directly and then use projection to ensure that it's feasible. It seems like the "decisions" such E2E models would learn are similar to the parameters $\hat{w}$ of $x^*_{QP}(\hat{w})$.
[1] Sahoo, Subham Sekhar, et al. "Backpropagation through combinatorial algorithms: Identity with projection works." arXiv preprint arXiv:2205.15213 (2022).
[2] Cristian, Rares, et al. "End-to-End Learning for Optimization via Constraint-Enforcing Approximators." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
### Empirical Results
I also have a couple of questions about the new results:
1. **Exact may not be the "ultimate approach":** As described above, there may be zero gradients even for QPs if the quadratic term is too weak. This is why the idea from Wilder et al. (2019), i.e., to *tune the L2 regularization as a hyperparameter* in order to have good performance, is different from the "exact" approach described in your experiments. It would be useful to know how well such a baseline does.
1. **Linear Case:** SPO+ typically does quite well in the linear domain. Even MSE is provably optimal in the limit of infinite data and model capacity. They also don't run into the issue of having too many active constraints. However, there seems to be a big gap between the performance of your approach and these past approaches... Do you have any intuition for why $r-$smoothing does so much better in your case?
---
Reply to Comment 1.1.1:
Comment: **Novelty of zero-gradient theorem.**
It is true that our proof resembles the argument for the linear case. This is not surprising, as we generalize the known zero-gradient phenomenon to the general convex optimization case. We do not think that this resemblance makes the results less important or novel. We agree, however, that our claim to have discovered the zero-gradient problem might cause misunderstanding. We will adjust the paper to avoid any over-claiming and to make the relation between our result and prior works clear.
In short, our main theoretical result can be rephrased as: *The number of active constraints determines the dimension of the null space of the Jacobian $\nabla_{\hat{w}}x^\ast(\hat{w})$. Moreover, the normals to the active constraints form a basis of this null space. Hence, if the loss function gradient is contained in the span of these normals, the total gradient is zero.*\
The known fact about linear problems is:
*In the vertices of the constraint set, the Jacobian of the linear problem is a zero matrix*.
Clearly, these two statements are not equivalent. We agree that deriving the former from the latter is not very complicated technically, but it is still an important result.
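For a concrete special case, the restated result can be seen directly with a box-constrained projection QP (a minimal illustration of ours, not the paper's general setting): the Jacobian of $x^\ast(\hat{w}) = \mathrm{clip}(\hat{w})$ is diagonal, with a zero entry for every active constraint, so the null-space dimension equals the number of active constraints.

```python
import numpy as np

def x_star(w_hat, lo=0.0, hi=1.0):
    """Projection QP: argmin_x ||x - w_hat||^2 subject to lo <= x <= hi."""
    return np.clip(w_hat, lo, hi)

def jacobian(w_hat, lo=0.0, hi=1.0):
    """Jacobian of x* w.r.t. w_hat: diagonal, zero where a box constraint is active."""
    interior = (w_hat > lo) & (w_hat < hi)
    return np.diag(interior.astype(float))

w_hat = np.array([0.5, 1.7, -0.3])  # 2nd and 3rd coordinates violate the box
J = jacobian(w_hat)
n_active = int(np.sum(J.diagonal() == 0.0))       # 2 active constraints
null_dim = w_hat.size - np.linalg.matrix_rank(J)  # null-space dimension is also 2
```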
The reviewer argues that our zero-gradient problem was, in fact, known before and that it does not really occur that often in practice. With all due respect, we disagree with both points and would like to explain our position. As an illustrative example, we will use the regularization method from Wilder et al. (2019) referenced by the reviewer. This method is designed to deal with the known version of the zero-gradient problem by adding $L_2$ regularization in the internal objective. However, as we explain below, it still suffers from the zero-gradient problem in 'our sense' (as in Theorem 3.9) both in theory and practice. We believe that this argument demonstrates that our zero-gradient problem indeed occurs in P\&O methods but it is not acknowledged anywhere.
Let $f_\gamma (x, \hat{w})=\hat{w}^\top x - \gamma\|x\|^2_2.$ Then, up to an additive constant independent of $x$, we can rewrite it as $f_\gamma (x, \hat{w})= -\gamma\|x-\frac{\hat{w}}{2\gamma}\|^2_2$. As mentioned by the reviewer, by increasing $\gamma$ we can ensure that $\arg\max_x f_\gamma(x, \hat{w})$ lies in the interior of the constraint set. However, it is **incorrect** that choosing such a $\gamma$ resolves the zero-gradient problem. Suppose the true objective we maximize is linear, $f(x, w)=w^\top x$ (e.g., the bipartite matching experiment from Wilder et al.). Then, with training, the solution to the internal (regularized) problem will move along $w$ until it reaches the boundary of $\mathcal{C}$. It will then move along the boundary and may enter a non-smooth vertex and get stuck there. Hence, independently of the hyperparameter $\gamma$, the solution will reach the boundary (assuming 'good' data and neural network) if the loss function is linear. Therefore, any method that differentiates through quadratic problems, even with tunable parameters, is theoretically susceptible to the zero-gradient problem.
On the experimental side, the "exact" approach described in our experiments **is** equivalent to the regularization from Wilder et al. (2019). As we show above, $f_\gamma (x, \hat{w})= -\gamma\|x-\frac{\hat{w}}{2\gamma}\|^2_2$ up to an additive constant. As a positive scaling factor does not affect the optimization problem, using $f_\gamma$ in the internal problem is equivalent to using $-\|x-\frac{\hat{w}}{2\gamma}\|^2_2$. Hence, if we scale the output of the neural network $\hat{w}$ by $\frac{1}{2\gamma}$, we see that the QP approximation is equivalent to quadratic regularization.
The scaling factor of the neural network output is a hyperparameter which we call $x_{scale}$, determine with grid search, and report in Tables 3 and 4 of the paper. Therefore, there is a one-to-one correspondence between the quadratic regularization method with its hyperparameter $\gamma$ and our QP approximation with hyperparameter $x_{scale}$. In the experiments in the PDF attached to the rebuttal, you can see that the standard method (exact differentiation of QP) is indeed initialized with no active constraints. However, during training, the solution moves to the boundary and the zero-gradient problem occurs.
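This one-to-one correspondence can be checked numerically on a simple feasible set (a sketch of ours assuming box constraints, where both problems have closed-form solutions; the experiments in the paper use more general $\mathcal{C}$):

```python
import numpy as np

def reg_argmax(w_hat, gamma, lo=0.0, hi=1.0):
    """argmax_x  w_hat^T x - gamma*||x||^2  over a box.

    The objective is separable; each coordinate's unconstrained maximizer is
    w_hat_i / (2*gamma), and clipping gives the box-constrained maximizer.
    """
    return np.clip(w_hat / (2.0 * gamma), lo, hi)

def qp_approx(v, lo=0.0, hi=1.0):
    """QP approximation: argmin_x ||x - v||^2 over the box, i.e. projection."""
    return np.clip(v, lo, hi)

w_hat = np.array([0.3, 2.5, -1.0])
gamma = 0.7
# Scaling the network output by 1/(2*gamma) makes the two methods coincide.
same = np.allclose(reg_argmax(w_hat, gamma), qp_approx(w_hat / (2 * gamma)))
```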
To summarize, we would like to emphasize that, to the best of our knowledge, in the context of predict-and-optimize, all existing methods designed to deal with non-linear problems are based on computing the Jacobians of these problems. Our paper shows that such methods are likely to face the zero-gradient problem if the optimal solutions happen to be on the boundary and proposes a solution to this.
---
Reply to Comment 1.1.2:
Comment: **Literature related to r-smoothing-like surrogates**
We thank the reviewer for providing these references; we were not aware of them before the rebuttal. We agree that the method of [1] is of the same nature as our $r-$smoothing, and we will include it in the discussion and related work. It is worth mentioning, however, that [1] focuses on the linear case, and it is not immediately clear how it can be extended to the non-linear case.
The paper [2] operates in a slightly different setting, as its goal is *to solve* a given optimization problem in a differentiable way. The authors present their method as a neural approximate differentiable solver. In the experiments, they show that they can train it to solve linear problems with good accuracy. It is not clear, however, how this result compares to the existing P\&O methods. Moreover, their method is probably also susceptible to the zero-gradient problem -- if the neural network indeed learns to solve the optimization problems, it will have zero gradients when the solution is at a vertex of the polytope.
**Empirical Results**
As the SPO+ loss [3] and the perturbed optimizers method [4] derive approximate ways to differentiate linear problems using different approaches, it is not an easy task to perform a theoretical comparison of them with our $r-$smoothing method. Our intuition is that the difference comes from the internal problems used: for $r-$smoothing, we use a QP internal problem, while the two other methods use linear internal problems. Hence, the latter have a simpler model class for $x^\ast(\hat{w})$, as it can only take values at the vertices of $\mathcal{C}$.
[1] Sahoo, Subham Sekhar, et al. "Backpropagation through combinatorial algorithms: Identity with projection works." arXiv preprint arXiv:2205.15213 (2022).\
[2] Cristian, Rares, et al. "End-to-End Learning for Optimization via Constraint-Enforcing Approximators." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
[3] Adam N Elmachtoub and Paul Grigas. Smart predict, then optimize (2017)\
[4] Quentin Berthet et al. Learning with differentiable perturbed optimizers (2020) | Summary: This paper identifies the zero-gradient problem in Predict and Optimize (P&O) for convex optimization and proposes a method to address it. The method is based on using a Quadratic Programming (QP) approximation for computing decisions, smoothing the feasibility region around the current solution to reduce the dimensionality of the null space to one, and adding a projection distance regularization term. The proposed method demonstrates significant improvements for convex P&O problems with many constraints and with the true optimum lying on the boundary of the feasibility set.
Strengths: Originality: The paper identifies a previously unnoticed problem in convex optimization and proposes a novel method to solve it.
Quality: The proposed method is technically sound, and the experiments demonstrate its effectiveness in addressing the zero-gradient problem.
Clarity: The paper is well-written and clearly explains the concepts and methodology.
Significance: The proposed method has the potential to improve optimization in convex P&O problems, which are common in various domains.
Weaknesses: Insufficient experiments: The paper might lack a comprehensive set of experiments or fail to compare the proposed method with alternative approaches. This could make it difficult for readers to evaluate the true effectiveness and novelty of the proposed method.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In Section 3.2, the authors present a Quadratic Programming (QP) approximation for computing decisions, which substitutes the original objective function f(x, w) with an alternative objective function $f_{QP}(x, w)$. Although it may appear counterintuitive that using a separate objective function does not influence the final solution, the key motivation behind the QP approximation is its ability to simplify Jacobian computations and tackle the zero-gradient problem. However, in the experimental section, the authors only demonstrate the effectiveness of the QP approximation in one case, varying the parameter $\lambda$ and presenting the results in Table 1. Consequently, I am curious about how this approximation performs in other optimization problems. Additionally, the authors only test their overall method in two cases, which may not sufficiently demonstrate the method's robustness and generalizability. I would appreciate further insights into the performance of the proposed approach across a wider range of problems and scenarios.
Is the zero-gradient problem universal in all predict and optimize for convex problems? For example, if the surrogate function is not KKT-based and instead uses an extra large penalty term to penalize the constraints, would the same phenomenon exist[1], or is it specific to KKT-based techniques or the standard technique mentioned in the experiments? It would be helpful if the authors could clarify the scope of their contributions and avoid overclaiming.
In Line 260, the authors introduce the parameter $\alpha$, but it seems that this parameter is not discussed in the experiments section. Can the authors provide more information on how $\alpha$ is chosen or tuned in the experiments, and how its choice affects the performance of the proposed method?
Does the term "standard" in the figure refer to the work of [2]? It appears that the authors do not explicitly mention this term in the main text. Can the authors clarify the connection between the "standard" and the cited work, and if possible, provide a clearer definition or explanation of the term within the paper?
References:
[1] A Surrogate Objective Framework for Prediction+ Programming with Soft Constraints. Advances in Neural Information Processing Systems, 2021.
[2] Differentiable convex optimization layers. Advances in Neural Information Processing Systems, 2019
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our work.
To the best of our knowledge, differentiation through convex programs [2] is considered to be the ultimate solution for convex non-linear P\&O problems, as it computes the true gradient. We are not aware of other works studying approximations for the P\&O loss in this case. However, it is true that there exist various methods for approximate differentiation of linear problems. In the attached PDF file, we compared our $r-$smoothing method against SPO+ surrogate loss [3] (labeled ''SPO+'' in the figure), mean-squared error $\|\hat{w}-w\|^2_2$ (''MSE''), and perturbation-based approach [4] (''perturbed''). As SPO+ and perturbation-based approaches are only applicable in the case of linear problems, we used the linear portfolio optimization problem ($\lambda=0$) and the OPF problem in these new experiments. The results demonstrate that our method performs better than the baselines.
1. We fully agree that it would be beneficial to test the QP approximation on a broader spectrum of problems. However, we could not find any benchmark problems for P\&O with convex, non-quadratic objectives. Instead, we ran an additional experiment with a modified portfolio optimization problem. We substituted the linear term in the objective with the LogSumExp:
$$f(x, w, Q)= \log(\sum_i e^{w_ix_i}) - x^\top Q x.$$
This problem does not necessarily make much practical sense, but it allows us to test how well the QP approximation works when the true objective $f$ is a convex, non-quadratic function. In Figure 3 in the attached PDF, we compare the QP approximation without (labeled 'QP') and with (labeled '$r-$smoothing') our $r-$smoothing technique to using the true function $f$ in the internal problem (labeled 'true $f$'). The results demonstrate that the QP approximation, both with and without smoothing, performs better than using the true $f$. We will run more experiments comparing the QP approximation with non-quadratic internal problems and include them in the appendix of the paper.
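For reference, the modified objective above can be evaluated with a numerically stable log-sum-exp (an illustrative sketch; the variable names and random test data are ours):

```python
import numpy as np

def f(x, w, Q):
    """Modified portfolio objective: log(sum_i exp(w_i * x_i)) - x^T Q x.

    The max-shift trick keeps the log-sum-exp stable for large w_i * x_i.
    """
    z = w * x
    m = np.max(z)
    return float(m + np.log(np.sum(np.exp(z - m))) - x @ Q @ x)

rng = np.random.default_rng(0)
n = 4
x, w = rng.random(n), rng.random(n)
A = rng.random((n, n))
Q = A @ A.T  # positive semi-definite risk matrix
val = f(x, w, Q)
```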
2. The zero-gradient problem is a property of convex constrained optimization problems. Essentially, in some regions, the solution mapping $x^\ast(\hat{w})$ might be constant in certain (or all) directions. Hence, our results affect those methods that are based on differentiating $x^\ast(\hat{w})$. The method from reference [1] operates in a different regime: it softens the constraints by adding them to the objective function. In this case, $x^\ast(\hat{w})$ becomes an unconstrained $\arg\max$. As there are no constraints, our results are not applicable here. However, the softened loss function (Eq. 6 in [1]) is non-convex and might have an arbitrarily complex landscape. Studying whether this landscape has flat regions would be an interesting task. We will also adjust the paper to make it clear what types of methods are affected by our results.
3. $\alpha$ is a hyperparameter that defines the weight of the projection distance regularization in the loss function, i.e., it determines how strongly $\hat{w}$ is pulled towards $\mathcal{C}.$ For each experiment (OPF problem; portfolio optimization problem with different values of $\lambda$), we determine the best value of $\alpha$ by running a grid search. Search spaces and final values are reported in the supplementary material, in Tables 1-4.
4. By ``standard’’, we indeed mean the exact method to compute the Jacobian $\nabla_{\hat{w}} x^\ast$ introduced in [2]. We will emphasize this more in the experiment's description.
[1] Kai Yan et al. A Surrogate Objective Framework for Prediction+ Programming with Soft Constraints. (2021)\
[2] Akshay Agrawal et al. Differentiable convex optimization layers (2019)\
[3] Adam N Elmachtoub and Paul Grigas. Smart predict, then optimize (2017)\
[4] Quentin Berthet et al. Learning with differentiable perturbed optimizers (2020)\
---
Rebuttal Comment 1.1:
Comment: I still have some concerns regarding the scope and claims of this paper. Although the authors have clearly stated that they are addressing the Zero-Gradient Problem in Predict and Optimize for Convex Optimization, I believe that this zero-gradient issue does not necessarily need to be discussed in every predict and optimize framework, as exemplified at least by the previous work I cited, [1]. | Summary: This paper studies predict and optimize problem which utilizes machine learning to predict unknown parameters of optimization problems. The paper identifies the zero-gradient problem and proposes a method to solve this issue. Additionally, the paper conducts an experimental study to verify the proposed method.
Strengths: 1. This paper is technically sound. The claims regarding the zero-gradient problem in the paper are well-supported by theoretical analysis. The assumptions are clearly presented, and proof ideas are discussed after each theorem. The efficiency of the proposed solution to the zero-gradient problem is verified by the experimental results.
2. The paper is well-organized. It begins by introducing the problem formulation of predict and optimize and discusses the typical methods used to solve the problem. Then, the paper introduces the zero-gradient problem along with the theoretical analysis. Finally, the proposed solution and experimental results are presented.
Weaknesses: The paper can be improved by also experimentally demonstrating the previously unnoticed zero-gradient problem claimed in this paper. Demonstrating the consistency between the theoretical findings and experimental observations would enhance the significance of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Line 90-91. The parameter $w$ is unknown. In this case, how can one minimize the loss function defined in Eq. (1)?
2. Is there any way to theoretically analyze and show how the proposed OP and local smoothing methods solve the zero-gradient problem?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to evaluate our work!
In the paper, Figures 3a and 4 show that the performance of the $r-$smoothing approach is better than that of the standard algorithm. We believe that this happens due to the zero-gradient problem, but we also agree that the performance plots are not sufficient evidence. To demonstrate that the zero-gradient problem really occurs, we ran additional experiments. In the attached PDF file, Figure 2 shows the norm of the loss function gradient, $\|\nabla_{\theta}f(\hat{x}, w) \|_2$, and the number of active constraints during training. We compare the standard approach and our $r-$smoothing method for linear and quadratic versions ($\lambda=0$ and $\lambda=2$) of the portfolio optimization problem. The results confirm this: in the standard method, more constraints become active during training and the gradient norm decreases. This phenomenon is more prominent when the true objective is linear. Our explanation is given in lines 310-312 of the paper:\
*... linear true objective pushes the decision $\hat{x}$ towards the boundary of $\mathcal{C}$, and hence it is more likely to enter points with a large gradient cone. For the more quadratic objectives, the true maximum is often in the interior of $\mathcal{C}$ and hence the zero-gradient problem occurs less often.*
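A diagnostic of the kind reported in Figure 2 of the attached PDF can be sketched as follows (our illustrative helpers, assuming inequality constraints written as $g_i(x) \leq 0$; not the paper's code):

```python
import numpy as np

def count_active(g_vals, tol=1e-6):
    """Number of inequality constraints g_i(x) <= 0 that are active at x."""
    return int(np.sum(np.abs(g_vals) < tol))

def grad_norm(grad):
    """Euclidean norm of the loss gradient, the quantity tracked during training."""
    return float(np.linalg.norm(grad))

# Example: box constraints x - 1 <= 0 and -x <= 0 at the point x = [1.0, 0.4].
x = np.array([1.0, 0.4])
g_vals = np.concatenate([x - 1.0, -x])  # [0.0, -0.6, -1.0, -0.4]
n_active = count_active(g_vals)         # 1: only x_0 = 1 hits its bound
```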
1. It is assumed that the parameter $w$ is unknown at the moment when the decision is made, but it is accessible during training, e.g., to evaluate the loss function. It is a common assumption for a supervised learning setup.
2. In Theorem 3.5, we demonstrate that the zero-gradient problem occurs when the gradient of the true objective $\nabla_{x}f(\hat{x},w)$ lies in the null space of the Jacobian $\nabla_{\hat{w}}x^\ast.$ In Lemma 3.4, we show that the dimensionality of the null space, in turn, is defined by the number of active constraints, i.e., the more constraints are active, the more dimensions the null space contains. The $r-$smoothing method introduced in Section 3.3 is based on approximating the Jacobian, $\nabla_{\hat{w}}x^\ast(\hat{w})\approx \nabla_{\hat{w}}x_r^\ast(\hat{w}),$ such that the null space of this approximation is always zero-dimensional (if no constraints are active) or one-dimensional (Property 3.8). Therefore, by design, the $r-$smoothing method encounters the zero-gradient problem only when the true gradient $f_x(\cdot, w)$ aligns perfectly with the one-dimensional null space of $\nabla_{\hat{w}}x_r^\ast(\hat{w})$. To deal with this, we introduce the projection distance regularization in Eq. 6.
It is worth mentioning that QP approximation on its own does not affect the zero-gradient problem. However, the Jacobian $\nabla_{\hat{w}}x^\ast$ for the QP approximation is simple to analyze. We exploit this in the proof of Theorem 3.9, which shows that computing gradient steps using $r-$smoothing combined with the QP approximation is guaranteed to at least not decrease the performance. | Summary: Predict+Optimize (P+O) is an emerging paradigm that lies in the intersection of classical optimization and machine learning. Specifically, it considers the setting where a parameterized optimization problem:
$$x^{\star}(w) = \operatorname*{argmin}_{x} f(x,w) \text{ subject to } x \in \mathcal{C}$$
must be solved yet the parameters $w$ are unknown. Given observational data $o$ that is correlated with $w$, a natural approach is to train a machine learning model $\hat{w} = \phi_{\theta}(o)$ so that $\hat{w} \approx w$. Then at test time, we solve
$$ x^{\star}(\hat{w}) = \operatorname*{argmin}_{x} f(x,\hat{w}) \text{ subject to } x \in \mathcal{C} $$
The secret sauce of P+O is, instead of training to minimize prediction error $\|\hat{w} - w\|^2$, to use a loss function aligned with the actual goal, i.e., one that encourages $x^{\star}(\hat{w}) \approx x^{\star}(w)$. There are several ways to do this, but one is to simply reuse the objective function and train $\phi_{\theta}$ so as to minimize
$$ \mathbb{E}_{(o,w)}\left[f(x^{\star}(\hat{w}), w)\right] $$
where $\hat{w} = \phi_{\theta}(o)$. Although a formula for the derivative of this loss is well-known, the core claim of this paper is that this derivative is less informative than previously thought. In fact, it is frequently zero. This idea is formalized through a theorem. The authors then propose a way to overcome this aptly named "zero-gradient" problem. Finally, the paper is rounded out by numerical experiments on two datasets.
Strengths: - The main strength of this paper is Lemma 3.4 and Theorem 3.5, which crystallize the core claim of this paper. This result is surprising, but I checked the proof to the best of my ability and I believe it is correct. This result is an important reality check for the field of Predict-and-Optimize.
- I enjoyed reading the proofs. The results of [1] were new to me. I liked the way strict complementary slackness is used in the proof of Lemma 3.4.
- Adding to the above, the authors do a good job of making their core results accessible through intuitive explanations and diagrams.
[1] Anthony V Fiacco. _Sensitivity analysis for nonlinear programming using penalty methods_ (1976).
Weaknesses: - Reusing the parameterized objective function $f(\cdot,w)$ as the loss function for training is not the only way to do P+O. One could also use the SPO+ loss [1], a perturbation based approach [2], or the least squares loss $\|x^{\star}(w) - x^{\star}(\hat{w})\|^2$ [3]. This should be mentioned as the zero-gradient theorem need not apply in these settings.
- I am perplexed at the stated motivation behind the quadratic programming approximation. While it is true that $f_{QP}(x,\hat{w}) = \|x - \hat{w}\|^2$ is strongly concave, and so on, it need not bear any relation to the actual problem we wish to solve, namely $f(x,w)$. So, this seems to run counter to the spirit of P+O. The only case that makes sense to me is when $f(x,w) = w^{\top}x$. Expanding out we get:
$$ f_{QP}(x,\hat{w}) = \|x - \hat{w}\|^2 = -2\hat{w}^{\top}x + \|x\|^2 + \|\hat{w}\|^2 $$
So ignoring the irrelevant $\|\hat{w}\|^2$ term, it appears the authors are simply proposing to add a quadratic regularizer, which has been explored thoroughly in the literature (see [4] and elsewhere). Could the authors comment on this?
- I find the motivation behind $r$-smoothing a little opaque too. It seems as though the solution to the $r$-smoothed problem $P_{r}(\hat{x},\hat{w})$ might not be feasible (i.e., might not lie in $\mathcal{C}$). Is this correct?
- Using the Jacobian $\nabla_{\hat{w}} x_r^{\star}(\hat{w})$ in place of $\nabla_{\hat{w}}x^{\star}(\hat{w})$ is, as you show, essentially the same as just replacing $\nabla_{\hat{w}}x^{\star}(\hat{w})$ with the identity (independent of what $r$ is). This procedure is already well-studied, see [3, 5--7]. These papers should be cited and discussed.
*Minor Stuff:*
- In Figure 2, the smoothed feasibility region $\mathcal{C}_r(\hat{x},\hat{w})$ is the disk (i.e., the interior of the circle), right? If yes, this should be made clear in the caption and figure. Right now it looks as though the feasible region is just the boundary.
- In Definition 3.7, as the scale of $r$ doesn't really matter, I'd recommend not normalizing and simply writing $c = \hat{x} - r\nabla_xf(\hat{x},\hat{w})$.
- The experiments in Section 4 feel like ablation studies (i.e., just removing one element at a time from your proposed approach). I would like to see some benchmarking results, e.g., comparing the performance of your proposed algorithm to existing P+O approaches. You may find the benchmarking software PyEPO useful for this [8].
[1] Adam N Elmachtoub and Paul Grigas. _Smart predict, then optimize_ (2017)
[2] Quentin Berthet et al. _Learning with differentiable perturbed optimizers_ (2020)
[3] Daniel McKenzie et al _Faster predict-and-optimize with three-operator splitting_ (2023)
[4] Bryan Wilder et al _Melding the Data-Decisions pipeline: Decision Focused learning for combinatorial optimization_ (2019)
[5] Samy Wu Fung et al _JFB: Jacobian-Free Backpropagation for Implicit Networks_ (2022)
[6] Zhengyang Geng et al _Is attention better than matrix decomposition?_ (2022)
[7] SS Sahoo et al _Backpropagation through combinatorial algorithms: Identity with projection works_ (2022)
[8] Tang and Khalil _PyEPO: A PyTorch-based end-to-end predict-then-optimize library for linear and integer programming_ (2023).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See "Weaknesses" above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *1. Insufficient benchmarks*\
Thank you for providing us with these references. We have not included more benchmarks in the submission since all the papers we are aware of focus on linear/combinatorial problems. The reason behind that is simple -- the zero-gradient problem was not noticed before, and hence the exact differential optimization method from [1] was considered to be the ultimate solution to convex nonlinear P\&O problems.
There indeed exist various methods to approximately differentiate linear problems. In the attached PDF, we compare our $r-$smoothing method against the SPO+ loss [2] (labeled ''SPO+'' in the figure), mean-squared error $\|\hat{w}-w\|^2_2$ (''MSE''), and the perturbation-based approach [3] (''perturbed''). We have not included the argmax loss [4], $\|x^\ast(w) - x^\ast(\hat{w}) \|_2^2$, as it is also susceptible to the zero-gradient problem (since it includes $x^\ast(\hat{w})$) and, in addition, optimizes a surrogate loss rather than the true objective.
As SPO+ and perturbation-based approaches are only applicable to linear problems, we used the linear portfolio optimization problem and the OPF problem in these new experiments. The results indicate that our local $r-$smoothing method outperforms all these benchmarks.
*2. QP approximation*\
We agree that the motivation for using QP approximation independently of the true objective $f$ might be unclear, so we elaborate on it here. In our eyes, P\&O is mostly about enforcing constraints on the output of predictive models, e.g., neural networks. Indeed, if the constraints set $\mathcal{C}$ is simple, e.g., a hypercube $\\{x\in\mathbb{R}^n| 0 \leq x \leq 1 \\}$, we do not need any of the P\&O methods. Instead, we can simply use an activation function that constrains the output (e.g., sigmoid) and then train the neural network to predict $\hat{x}$ by performing gradient descent on $f$. However, when constraints are more complex, this approach falls apart and we need a differentiable constrained optimization layer -- and this is exactly what P\&O provides us.
The key motivation behind P\&O [2] is that we do not need the internal objective $f(\cdot,\hat{w})$ to be similar to the true objective $f$ -- we only want it to yield a good decision $\hat{x}$. From this perspective, QP approximation seems like a logical next step, as it is the simplest constrained optimization layer that can output any point in $\mathcal{C}$.
When using QP approximation, we leave the heavy lifting related to ‘understanding’ the dependency between the features $o,$ the true objective $f$, and the optimal solution $x^\ast(w)$ to the predictor, while the P\&O module is used to enforce the constraints in a differentiable way.
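As an illustration of such a layer, the following sketch (our own construction for this discussion, not the implementation from the paper) solves the QP approximation $\min_x \|x-\hat{w}\|_2^2$ over a polytope $\{x \mid Ax \leq b\}$ with an off-the-shelf solver; an infeasible prediction is mapped to the nearest feasible decision:

```python
import numpy as np
from scipy.optimize import minimize

def qp_layer(w_hat, A, b):
    """QP approximation as a constrained layer: project the prediction
    w_hat onto the polytope {x | A x <= b} by minimizing ||x - w_hat||^2."""
    res = minimize(
        fun=lambda x: np.sum((x - w_hat) ** 2),
        x0=np.zeros_like(w_hat),
        jac=lambda x: 2.0 * (x - w_hat),
        constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
        method="SLSQP",
    )
    return res.x

# unit square [0,1]^2 written as A x <= b
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
x_star = qp_layer(np.array([0.5, 5.0]), A, b)  # infeasible prediction gets projected
```

In the P\&O pipeline the predictor is then trained through such a layer, differentiating the layer's output rather than matching $\hat{w}$ to $w$ directly.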
*3. Smoothing*\
We would like to share our intuition behind the $r-$smoothing method. In Section 3, we show that the zero-gradient problem arises at the vertices where the constraint set $\mathcal{C}$ is not smooth, i.e., where multiple constraints are active. If we could smooth all such vertices, it would resolve the zero-gradient problem almost entirely (the null space of the Jacobian can still be one-dimensional when the optimal solution is in the interior of $\mathcal{C}$). In fact, it is known (see, e.g., [5]) that any convex polytope can be approximated by a smooth convex set with arbitrarily good accuracy. Suppose that we use such a smooth approximation instead of $\mathcal{C}$. Then, the solution of the resulting problem can be made arbitrarily close to the true optimal solution, and yet the zero-gradient problem will be almost gone.
Taking this argument one step further, we can see that we do not really need to make the whole set $\mathcal{C}$ smooth. In fact, at every gradient step, we only want to know *what the Jacobian of the globally smoothed problem would look like for the current prediction $\hat{w}$*. The local $r-$smoothing method is designed to answer exactly that question.
Importantly, the $r-$smoothed problem is defined such that its solution equals the solution of the non-smoothed internal problem (see lines 224-225 of the paper); for that reason, we do not need to solve the smoothed problem at all. As we use the QP approximation, the Jacobian $\nabla_{\hat{w}} x_r^\ast(\hat{w})$ can also be computed explicitly, without differentiating the KKT conditions. In theory, we expect that it is possible to use other internal problems (e.g., with the original $f$) instead of the QP approximation, and then compute the Jacobian $\nabla_{\hat{w}} x_r^\ast(\hat{w})$ by differentiating the KKT conditions of the $r-$smoothed version of that problem. However, unlike with the QP approximation, we do not have a proof of the non-decrease property (Theorem 3.9) in this case.
*4. Similarities to Jacobian-free Backpropagation (JFB)*\
Thank you very much for providing these references; we were not aware of them. Indeed, our approach seems to be similar in spirit to the idea of JFB. Specifically, references [4] and [6] have a lot in common with our $r-$smoothing approach. However, we also see some differences: these works focus on the linear case -- [4] requires linear constraints and [6] needs a linear objective.
We will look deeper into JFB to better understand connections to our work and extend the related work section.
*5. Response to the minor points.*
- You are correct about Figure 2; we will adjust it accordingly. Thank you for pointing that out!
- Similarly, we agree with the remark about Definition 3.7.
- We addressed this in the new experiments as described in the first paragraph.
[1] Akshay Agrawal et al. Differentiable convex optimization layers (2019)\
[2] Adam N Elmachtoub and Paul Grigas. Smart predict, then optimize (2017)\
[3] Quentin Berthet et al. Learning with differentiable perturbed optimizers (2020)\
[4] Daniel McKenzie et al. Faster Predict-and-Optimize with Davis-Yin Splitting (2023)\
[5] Mohammad Ghomi. Optimal Smoothing for Convex Polytopes (2004)\
[6] S. S. Sahoo et al. Backpropagation through combinatorial algorithms: Identity with projection works (2022)
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks to the authors for addressing my comments so thoroughly! I will respond point-by-point.
1. Good job for implementing so many additional benchmarks in such a short time! In addition to testing other methods on the problem you introduce, I think it is crucial to test your method on problems already in the literature. The PyEPO library mentioned above makes this pretty easy.
2. Unfortunately this motivation behind the QP approximation makes even less sense to me. I think P&O is significantly more general than enforcing the output of a neural network to lie in a polytope $\mathcal{C}$. I maintain that the authors have essentially rediscovered the well-known trick of adding a quadratic regularizer to a linear objective introduced by Wilder et al. (I see you have experimented on a nonlinear objective described in the global rebuttal, but I don't understand how your QP approximation can achieve lower regret than using the true objective function.)
3. Thanks for clarifying the use of $r$-smoothing. I agree that it would be interesting to see how it works in conjunction with other internal problems.
4. I agree that clarifying the relationship between your work and JFB (also known as one-step differentiation) is crucial.
Since posting my review, I have thought more about this paper. It seems to me that the key ingredient to proving the zero-gradient theorem is strict complementary slackness. I believe this holds for all linear programs, and generically for semi-definite programs. What about more general constrained optimization problems? For example, does strict complementary slackness always hold for quadratic programs?
---
Reply to Comment 1.1.1:
Comment: 1. We agree that using more of the already existing test problems is important to strengthen our paper. The initial reason why we have not used problems from PyEPO is that they come from the combinatorial optimization domain and are used to evaluate linear/combinatorial P\&O methods. We will, however, extend our experiments with some of these problems to demonstrate the effectiveness of our method in this case.
2. We agree that the QP approximation we use is not novel and we do not claim it to be so. We treat it as a tool which, combined with $r-$smoothing, allows us to solve the zero-gradient problem both theoretically and experimentally. Besides, it offers a computational benefit, as its Jacobian can be computed analytically, without inverting the Hessian of the internal objective.
As for the experiments with the LogSumExp objective, we believe the explanation is as follows: the LogSumExp objective has $n + n^2$ parameters (corresponding to the weights $w$ and the positive definite matrix $Q$). Hence, it results in a much more challenging problem for the neural network. This holds generally, as the QP approximation has the minimum possible number of parameters (equal to the number of decision variables $n$).
However, we also admit that we ran this new experiment in a very short time, and we may not have found the optimal hyperparameters for the method that uses the true objective. We will rerun this experiment more thoroughly and report the results in the paper.
**Strict complementary slackness.**
Strict complementary slackness does not *always* hold, even in the case of linear problems. For example, consider the two-dimensional square $[0,1]^2$ as the constraint set and let $f_{lin}(x,w)=w_1x_1 +w_2x_2$. In this case, the optimal solution $x^\ast$ is not unique, but we can still use it as an example.
Consider the prediction $\hat{w}=(0, 1)$ and decision $\hat{x}=(0, 1)$, where two constraints are active. Let $n_1=(0, 1)$ and $n_2=(-1, 0)$ be the normals of these constraints. The point $\hat{x}$ is an optimal solution, but strict complementary slackness is violated -- $\nabla_x f_{lin}(x,\hat{w})=n_1 + 0n_2.$ Geometrically, this corresponds to the objective function gradient lying on the *boundary* of the gradient cone at $\hat{x}.$ In fact, as shown in Lemma 3.3, which rephrases known results from [1], $x^\ast$ is *non-differentiable* when strict complementary slackness is not satisfied. We can also see this geometrically -- slightly rotating the gradient of $f$ anti-clockwise ($w_1\downarrow$) will not change the solution, while rotating it clockwise ($w_1\uparrow$) will make it jump.
The same example holds for QP, e.g., consider an objective function $f_{qp}(x, \hat{w})= \|x-\hat{w}\|^2_2$ and let $\hat{w}=(0, 5)$.
In this case, the gradient at the optimal solution $\hat{x}=(0, 1)$ is also pointing up from $\hat{x}$. The same argument as for the linear case applies. In the QP case, however, rotating the gradient clockwise will not make the solution 'jump' but it will smoothly move it along the edge. Hence, in this case, $x^\ast$ has directional derivatives.
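This jump/no-jump behavior is easy to verify numerically. The sketch below (illustrative code written for this discussion, not from the paper) solves the linear problem by enumerating the vertices of the square and the QP problem by coordinate-wise clipping:

```python
import numpy as np

vertices = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)

def lin_argmax(w):
    """Maximize f_lin(x, w) = w1*x1 + w2*x2 over the square [0,1]^2:
    a linear objective attains its maximum at a vertex."""
    return vertices[np.argmax(vertices @ w)]

def qp_sol(w_hat):
    """argmin ||x - w_hat||^2 over [0,1]^2 is a coordinate-wise clip."""
    return np.clip(w_hat, 0.0, 1.0)

eps = 1e-3
# linear case: rotating the gradient clockwise (w1 up) makes the solution jump...
lin_argmax(np.array([eps, 1.0]))    # -> [1., 1.]
# ...while rotating it anti-clockwise (w1 down) leaves it at the same vertex
lin_argmax(np.array([-eps, 1.0]))   # -> [0., 1.]
# QP case: the solution slides smoothly along the top edge instead of jumping
qp_sol(np.array([eps, 5.0]))        # -> [0.001, 1.]
```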
In conclusion, the reviewer is correct that strict complementary slackness is crucial for our results: when it is violated, the Jacobian $\nabla_{\hat{w}} x^\ast$ is undefined. We would like to emphasize, however, that the points $\hat{w}$ that violate strict complementary slackness form a set of measure zero (as they require the gradient to land exactly on the boundary of a gradient cone) and hence can be neglected in practice.
[1] Anthony V Fiacco. Sensitivity analysis for nonlinear programming using penalty methods (1976). | Rebuttal 1:
Rebuttal: We thank the reviewers for providing us with valuable feedback. To address the comments related to benchmarks and experiments, we conducted additional experiments. We provide detailed responses individually for each reviewer, and in this text, we describe the new figures that can be found in the attached PDF file.
**Figure 1.** To respond to fair criticism regarding insufficient baselines, we implemented several new methods.
To the best of our knowledge, differentiation through convex programs [3] is considered to be the ultimate solution for convex non-linear P\&O problems, as it computes the true gradient. Hence, other existing P\&O loss methods are built for the linear/combinatorial case.
We implemented the SPO+ surrogate loss [1] (labeled ''SPO+'' in the figure), mean-squared error $\|\hat{w}-w\|^2_2$ (''MSE''), and the perturbation-based approach [2] (''perturbed''), and compared them to our $r-$smoothing method on the linear portfolio optimization problem ($\lambda=0$) and the OPF problem. The plots depict the test regret during training. In these experiments, our method significantly outperforms the baselines.
**Figure 2.** To provide additional evidence that the zero-gradient problem is indeed the reason why $r-$smoothing and projection distance regularization outperform the standard approach in Figures 3,4 of the main paper, we measure two new metrics -- the norm of the loss function gradient and the number of active constraints. We plot the average of each of these quantities over the training dataset for each training epoch. Using the linear ($\lambda=0$) and quadratic ($\lambda=2$) portfolio optimization problems, we compare $r-$smoothing and the standard method (computing the true Jacobian of the QP approximation by differentiating the KKT conditions). Panels (a-b) correspond to the linear problem, and (c-d) to the quadratic one. It can be seen that the gradient norm indeed decreases rapidly for the standard method. Additionally, we see that the number of active constraints increases, in line with the theoretical results of Section 3.1.
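The two quantities tracked in Figure 2 are cheap to compute per sample; a minimal sketch (our illustrative code, with a hypothetical polytope $Ax \leq b$ and an activity tolerance) looks like:

```python
import numpy as np

def zero_grad_diagnostics(x_hat, A, b, loss_grad, tol=1e-6):
    """Return the two Figure-2 metrics for one sample: the norm of the
    loss gradient, and the number of constraints A x <= b that are
    active at x_hat (slack below tol). In the actual plots these are
    averaged over the training dataset at each epoch."""
    slack = b - A @ x_hat
    n_active = int(np.sum(slack < tol))
    return np.linalg.norm(loss_grad), n_active

# toy check on the unit square: a decision sitting at the vertex (0, 1)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
grad_norm, n_active = zero_grad_diagnostics(
    np.array([0.0, 1.0]), A, b, np.array([0.0, 1.0])
)
# grad_norm == 1.0 and n_active == 2 (both constraints at the vertex are active)
```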
**Figure 3.**
We also received questions regarding the generalizability of the QP approximation method, which we tried to address in Figure 3.
We could not find any benchmark for P\&O experiments that has a convex, nonlinear, and nonquadratic objective. Instead, we ran additional experiments with a slightly modified portfolio optimization problem, substituting the linear term in the objective with a LogSumExp term:
$$f(x, w, Q)= \log(\sum_i e^{w_ix_i}) - x^\top Q x.$$
This problem does not necessarily make much practical sense, but it allows us to test how well the QP approximation works when $f$ is not a quadratic function. In Figure 3 in the attached PDF, we compare the QP approximation without (labeled 'QP') and with (labeled '$r-$smoothing') our $r-$smoothing technique to using the true function $f$ in the internal problem (labeled 'true $f$'). The results demonstrate that the QP approximation performs even better than using the true objective. We believe that this is due to its simplicity -- the QP approximation only requires $n$ parameters to be predicted, while the true function $f$ has $n + n^2$ parameters.
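For reference, the modified objective is straightforward to implement; the sketch below (illustrative, using SciPy's numerically stable log-sum-exp) also makes the parameter count explicit -- $w$ contributes $n$ parameters and $Q$ contributes $n^2$:

```python
import numpy as np
from scipy.special import logsumexp

def f_lse(x, w, Q):
    """Modified portfolio objective: the linear return term is replaced
    by log(sum_i exp(w_i * x_i)); the quadratic risk term is unchanged."""
    return logsumexp(w * x) - x @ Q @ x

n = 3
x, w, Q = np.ones(n), np.zeros(n), np.zeros((n, n))
value = f_lse(x, w, Q)  # with w = 0 and Q = 0 this reduces to log(n)
```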
[1] Adam N Elmachtoub and Paul Grigas. Smart predict, then optimize (2017)\
[2] Quentin Berthet et al. Learning with differentiable perturbed optimizers (2020)\
[3] Akshay Agrawal et al. Differentiable convex optimization layers (2019)\
Pdf: /pdf/250bf49f68313c6b00d157957d5c006c3bcd1035.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper focuses on the topic of "predict and optimize" and identifies the zero-gradient problem: the gradient of the optimal decision with respect to the parameters of the machine learning model may be zero. This can occur even when assuming convexity, smoothness, and strict complementary slackness. The authors introduce a QP approximation and an $r-$smoothing technique to address this problem, effectively reducing the likelihood of encountering the zero-gradient issue. The numerical results show the merits of their proposed approach.
Strengths: 1. Section 3.1 uses straightforward theoretical outcomes to shed light on a significant practical problem. This section is clear and insightful.
2. Section 3.3 stands out for its clarity and insight as well, particularly Theorem 3.9. Initially, I was skeptical that the gradient with local smoothness would accurately approximate the original gradient, but Theorem 3.9 effectively argued this point. Although the locally smoothed gradient may not ensure the fastest improvement in function value as the original gradient does, it can be guaranteed to at least be a non-decreasing direction.
3. Figures (b) and (c) look interesting, showing that the new methods outperform standard approaches during the final training phase. This aligns with the theories in Section 3.1, where, towards the end of training, the conditions described in Theorem 3.5 become more likely. This leads to training difficulties with the standard method, whereas the proposed techniques can overcome them. Further validation on this matter is needed, as indicated in the third point under "Weaknesses."
Weaknesses: 1. Sec 3.4, Algorithm 1. The notation seems unclear. What are the dimensions of $f_x$ and $\hat{f_x}$? Why can $f_x$ be directly multiplied with $\nabla_{\hat{w}}x^*_r(\hat{w})$? Does it mean the inner product of two vectors? What is the meaning of $f_x - f^0$, given that $f^0$ is a scalar while $f_x$ is a gradient (vector)?
2. Sec 4.1, equation (8). What is the meaning of $w$? Does $f(x,w)$ mean the original function defined in (7) or the QP approximation defined in (9)? This is quite critical: if (8) measures the regret based on the QP approximation rather than the original function, the experimental results would be meaningless; whereas if (8) measures the original function, the numerical results are sound.
3. Is it possible to provide the norms of the gradients you observed in the experiments? If the gradients calculated with your proposed approaches have larger norms than the traditional calculation way, it would be a more direct and strong support of your approach.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: see "Weaknesses"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: see "Weaknesses"
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for identifying the strengths of our work as well as highlighting some important drawbacks.
1. There is indeed a typo in Algorithm 1, thank you for pointing that out. As $f_x, \hat{f}_x$ are the gradients of the objective function, they are vectors of size $n.$ Then, the definition of $f^0$ contains a typo -- it should read as $f^0:=\hat{f}_x\frac{f_x^\top \hat{f}_x}{\hat{f}_x^\top \hat{f}_x}$. Hence, $f^0$ is also an $n-$dimensional vector -- the orthogonal projection of $f_x$ onto $\hat{f}_x.$ Then, the difference $f_x - f^0 \in \mathbf{R}^n$ is the component of $f_x$ orthogonal to $\hat{f}_x$.
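A quick numerical check of the corrected definition (our illustrative code, not from the paper) confirms that $f^0$ is the orthogonal projection of $f_x$ onto $\hat{f}_x$, so the difference $f_x - f^0$ is orthogonal to $\hat{f}_x$:

```python
import numpy as np

def orthogonal_residual(f_x, fhat_x):
    """Corrected Algorithm-1 quantities: f0 := fhat_x * (f_x . fhat_x) /
    (fhat_x . fhat_x) is the orthogonal projection of f_x onto fhat_x,
    so f_x - f0 is the component of f_x orthogonal to fhat_x."""
    f0 = fhat_x * (f_x @ fhat_x) / (fhat_x @ fhat_x)
    return f_x - f0

rng = np.random.default_rng(0)
f_x, fhat_x = rng.normal(size=5), rng.normal(size=5)
residual = orthogonal_residual(f_x, fhat_x)
residual @ fhat_x  # ~0: the residual is orthogonal to fhat_x
```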
2. We agree that Eq. 8 is not properly presented. Also, there is a typo: the middle bracket should end after $\phi$, not $w$. Eq. 8 defines regret for the general case (notation from Sections 3.1-3.3). It uses the true objective function $f$, and the solution method affects it only via $x^\ast(\hat{w})$. In the case of the portfolio optimization problem, the unknown parameters are defined as $w=(p, Q)$.
3. Thank you for this suggestion. In the attached PDF file, Figure 2 shows the loss function gradient norm, $\|\nabla_{\theta}f(\hat{x}, w) \|_2$, and the number of active constraints during training. We compare the standard approach and our $r-$smoothing method for the linear ($\lambda=0$) and quadratic ($\lambda=2$) versions of the portfolio optimization problem. The results confirm that the standard method suffers from the zero-gradient problem, especially when the true objective is linear. This provides further evidence for the explanation given on lines 310-312 of the paper:\
``... linear true objective pushes the decision $\hat{x}$ towards the boundary of $\mathcal{C},$ and hence it is more likely to enter points with a large gradient cone.'' | null | null | null | null | null | null |
MonoUNI: A Unified Vehicle and Infrastructure-side Monocular 3D Object Detection Network with Sufficient Depth Clues | Accept (poster) | Summary: This work focuses on the monocular 3D object detection task and proposes a unified 3D detection framework for both vehicle and infrastructure sides. In particular, to unify the diversity of pitch angles and focal lengths of multiple cameras, the authors propose a unified optimization target named normalized depth. Besides, they also design the 3D normalized cube depth of obstacle to promote the learning of depth information. They conduct extensive experiments on three datasets, including Rope3D, DAIR-V2X-I, and KITTI, and get promising results on them, especially on Rope3D and DAIR-V2X-I.
Strengths: 1. The main idea is easy-to-follow and the paper is well-written.
3. The proposed model gets SOTA performance on the infrastructure side, ie. Rope3D and DAIR-V2X-I datasets, and competitive results on the vehicle side, ie. the KITTI dataset.
4. Code will be open-sourced.
Weaknesses: The authors claim they proposed a unified 3D detection pipeline for both infrastructure and vehicle sides. Although the proposed method shows promising performance on the Rope3D and DAIR-V2X-I datasets, it does not generalize well to the vehicle side.
- The proposed method achieves about 16 AP (moderate setting) on the KITTI test set, while existing models such as [1] reach 17+.
- The authors only evaluate their model on KITTI, which contains just 7K images; evaluating it on large-scale datasets such as nuScenes and Waymo is required to show its effectiveness on the vehicle side.
- In Table 5, the experiments (a -> c) and (b -> d) show the pitch-based depth normalization is harmful to vehicle-centric detection.
- Also, Table 5 shows the effectiveness of the 'cube' design for vehicle-centric detection. Note that KITTI is a small dataset with sparse depth supervision, and the 'cube' design in fact densifies the depth supervision, which may be the underlying reason why it works. However, whether this design works when large-scale training data is available is still unclear. So I recommend the authors conduct more experiments on larger datasets.
Based on the above points, I am not sure whether designing a unified pipeline for both infrastructure and vehicle sides is meaningful, and more experiments are required to support the claim.
[1] Online Monocular 3D Object Detection with Adaptive Token Transformer, CVPR'23
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the quality, clarity, and effectiveness on infrastructure side of our paper. As you have stated, our method is easy-to-follow, and achieves SOTA performance on the infrastructure side and competitive results on the vehicle side. We also thank you for providing insightful suggestions. We will try to explain your concerns in the following part of this comment and are willing to discuss them further.
**Performance on vehicle side:** We apologize for the missing comparison methods. As shown in Table 3 in the global rebuttal, we have added the latest vehicle-side methods (including CMKD, LPCG, MonoATT and DEVIANT) for comparison. Our method as a unified framework is valuable, although it does not surpass all vehicle-side methods; reviewer rxxf has also acknowledged this point. In fact, among vehicle-side methods, ours improves substantially over its baseline, GUPNet (15.02->16.73). Compared with DID-M3D, which uses a similar framework, MonoUNI improves further (16.29->16.73) while having the advantage of not requiring additional data.
**More results on Waymo and nuScenes:** Thank you for your sincere advice. We validate our method on the Waymo val set and perform cross-dataset evaluation on KITTI-nuScenes, as done in MonoRCNN (ICCV 2021) [1] and DEVIANT (ECCV 2022 oral) [2]. On Waymo, as shown in Table 1 in the global rebuttal, our approach achieves overall superiority over GUPNet and DEVIANT. For example, our method has a 0.51% improvement on Level 1 under IoU=0.7 compared to DEVIANT. On nuScenes, we use the KITTI val model for inference. The results are shown in Table 2 in the global rebuttal. Although it does not exceed DEVIANT, it is still competitive.
**Pitch-based depth normalization on vehicle-centric detection:** Pitch-based depth normalization is not harmful to vehicle-centric detection. In fact, since the optical axis of a vehicle-side camera is parallel to the ground and the pitch angle equals 0, the pitch-based depth normalization does not take effect on the vehicle side, which is equivalent to not having this module. The difference between (a->c) and (b->d) in the KITTI results of Table 5 in our original paper reflects fluctuation between two otherwise identical experiments run with different random seeds.
**”Cube” design densifies the depth supervision:** Your understanding is correct, and this is also the motivation for our design of cube depth. We believe that dense depth supervision deepens the model's understanding of geometric information. We have added Waymo validation and KITTI-nuScenes cross-dataset validation following your suggestion.
**Significance of unified vehicle and infrastructure pipeline:** We believe a unified framework is meaningful. Both reviewers rxxf and Yjqg acknowledged this.
The main reasons are as follows:
* This unified framework eliminates the influence of pitch angles and focal length, so that subsequent 3D detection, whether it is on the infrastructure side or on the vehicle side, can use the same regression target for supervision;
* Essentially, our paper analyzes what information the model in 3D detection learns, which we consider to be geometric information. The normalized depth disambiguates the geometric information, while the cube depth increases the geometric information available for supervision. MonoUNI uses more geometric information for supervision in unambiguous scenes, which has a positive impact on both the vehicle and infrastructure sides.
* Following your comments, we have added more experiments to prove the effectiveness of our ideas and methods, including the results of Table 1 on Waymo, the results of Table 2 on nuScenes and the results of Table 4 for vehicle-infrastructure joint training. Cumulatively so far, we have proven our method on three vehicle-side and two infrastructure-side real datasets.
* In the Limitations part, we objectively acknowledge that our method does not include an additional design for mixed training between the vehicle and infrastructure sides. We believe that mixed training should be solved by adapting to the different appearance features of vehicles and infrastructure, which is somewhat different from the unified optimization targets and training pipeline emphasized in our paper and requires an independent additional method. Both reviewers rxxf and YJqg acknowledged this. Although no such method is designed here, we have conducted mixed training experiments to comprehensively evaluate our approach. As seen in Table 4 of the global rebuttal, MonoUNI loses fewer points under mixed training than GUPNet and SMOKE, indicating that our method better alleviates the additional complexity and visual ambiguity introduced by mixed training.
**Missing relevant papers:** We apologize for the omissions. We will add more excellent related work, such as MonoEF, MoGDE and MonoATT.
[1] MonoRCNN: Geometry-Based Distance Decomposition for Monocular 3D Object Detection.
[2] DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection.
Strengths: 1. The paper proposes a novel approach that unifies monocular 3D object detection for both vehicle and infrastructure sides, which is an important problem in autonomous driving.
2. The proposed approach addresses the challenge of constructing algorithms for the two sides based on different prior knowledge, by taking into account the diversity of pitch angles and focal lengths. The use of a new optimization target called normalized depth and the 3D normalized cube depth of obstacle as an additional supervision clue helps to improve the accuracy of monocular 3D detection.
3. The extensive experiments on three widely used benchmarks show that the proposed method achieves state-of-the-art performance on all three benchmarks without introducing any extra information.
Weaknesses: 1. Lack of key details. In formula (4), the authors mentioned the use of the definition of normalized depth to simplify the depth detection task. It is unclear whether MonoUNI needs to be built on the assumption of known pitch angle and focal length during model inference. If it does, then the work does not truly address the problem of different conditions for vehicle and infrastructure sides. If it does not, the authors need to explain why adopting a unified form would reduce the difficulty of learning, and the inference in line 161 is still unconvincing.
2. Insufficient contribution. The proposed 3D normalized cube depth is inspired by AutoShape and DID-M3D, and extends from regressing corner coordinates to regressing the depth within the foreground 3D box. The proposed method is too straightforward and lacks the necessary motivation to explain why the depth on the surface is important, which needs to be supported by experiments. Introducing surface depth may not make a significant difference compared to regressing corner depth, and may introduce additional errors due to the irregularity of foreground instances.
3. Mismatch between the motivation and method of the paper. The motivation of the paper is to design a new depth annotation, i.e., normalized depth, to unify monocular 3D object detection for both vehicle and infrastructure sides. However, according to line 268, if the two scenarios need to be trained separately, the proposed method's significance is limited, and it seems unable to achieve the desired Mono3D scheme that unifies vehicle and infrastructure sides.
4. Cross-dataset experiments. The paper aims to establish a unified depth detection scheme for both vehicle and infrastructure sides. It is desirable for the authors to demonstrate the model's cross-dataset capability and the effect of multi-dataset mixed training. Additionally, using extrinsic perturbation experiments on a single dataset seems to be a necessary verification method.
5. Minor errors. Each element in the formula needs to have a corresponding explanation, such as (5).
6. Missing relevant papers.
[1] MonoEF: Extrinsic parameter free monocular 3D object detection.
[2] Mogde: Boosting mobile monocular 3D object detection with ground depth estimation.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: My main concern is that the authors lack the necessary motivation and explanation for their design of the unified depth and the cube depth. The authors need to provide a detailed explanation of the rationale and purpose of the proposed designs. Currently, the proposed design approach does not appear to have significant differences from previous works.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The main limitation of this paper is that it does not implement training a single model to handle different settings across multiple datasets. Currently, the proposed approach cannot be considered as a solution to the problem of different settings between the vehicle and infrastructure sides.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the significance of the unify problem in autonomous driving. As you have stated, our MonoUNI achieves state-of-the-art performance on all three benchmarks without introducing any extra information. We also thank you for providing insightful suggestions. We will try to explain your concerns in the following part of this comment and are willing to discuss them further.
**Lack of key details:** We apologize for our inappropriate presentation. MonoUNI requires a known pitch angle and focal length. Whether on the vehicle-side real datasets (KITTI, Waymo, or nuScenes) or the infrastructure-side datasets (Rope3D and DAIR-V2X), the pitch angle and focal length are known and readily available (the vehicle-side pitch angle defaults to 0). Even in industrial applications, they are actively used as prior knowledge. Using the focal length and pitch angle is a way to address the problem of different conditions on the vehicle and infrastructure sides, and they are usually not counted as additional data (many 3D detection methods use intrinsic parameters for 2D-3D conversion). Both reviewers rxxf and YJqg acknowledged the value and significance of our solution.
**Method is “straightforward” and insufficient contribution:** Regarding contributions, we believe that simple and effective methods are worth advocating. Despite looking "straightforward", to the best of our knowledge, we are the first to directly introduce 3D bbox depth supervision in 3D detection without using any additional data. With GUPNet as the baseline for both methods, our results are superior to those of DID-M3D, which uses additional data (16.29->16.73), and a simple change yields a 1.71% improvement over GUPNet (15.02->16.73). Reviewer rxxf also considers our simple-but-effective design a strength.
**Motivation:** Compared with regressing corner depth, regressing cube depth provides denser supervision, and dense depth supervision is beneficial to 3D detection. Reviewer yykW also acknowledged this, and many existing papers have proved it, including DID-M3D (obstacle-surface depth supervision), AutoShape (dense CAD information), and MonoDDE (20 depths). That is where our motivation comes from. Essentially, our paper analyzes what information the model learns in 3D detection, which we consider to be geometric information. The normalized depth disambiguates the geometric information, while the cube depth increases the geometric information available for supervision. MonoUNI uses more geometric information for supervision in unambiguous scenes, which has a positive impact on both the vehicle and infrastructure sides.
**Experimental proof:** The clearly positive results of Table 5 (a)-(e) and (d)-(f) in our original paper prove that dense depth supervision via cube depth is very important. We also added experimental results on Waymo and nuScenes in the global rebuttal. Cumulatively, we have now validated our method on three vehicle-side and two roadside real datasets.
Regarding the irregularity of foreground instances, we have not observed such a phenomenon in our experiments. We also believe that if it exists for regressing cube depth, then regressing corner depth has the same problem, because the corner depths belong to the corner points of the 3D bounding box, which also do not really exist in the physical world. Even under this comparison, regressing cube depth remains superior to corner depth.
**Mismatch between the motivation and method:** We believe there is no mismatch problem. The purpose of our work is to unify the 3D detection tasks of the vehicle and infrastructure from the perspective of regression target and training pipeline, and we have achieved this goal. Both reviewers rxxf and YJqg acknowledged this.
In the Limitation part, we objectively acknowledge that our method does not include an additional design for mixed training between the vehicle and infrastructure sides. We believe that mixed training should be solved through adaptation to the different appearance features of vehicles and infrastructures. This is somewhat different from the unified optimization targets and training pipeline emphasized in our paper, and requires an independent additional method. Both reviewers rxxf and YJqg acknowledged this. Although no method is designed for it directly, we have conducted mixed-training experiments to evaluate our approach comprehensively. As seen in Table 4 of the global rebuttal, MonoUNI drops fewer points under mixed training than GUPNet and SMOKE, indicating that our method better alleviates the additional complexity and visual ambiguity introduced by mixed training.
**Cross-dataset experiments:** We validate our method on the Waymo val set and perform cross-validation evaluation on nuScenes, as done in DEVIANT. On Waymo, as shown in Table 1 of the global rebuttal, our approach achieves overall superiority over GUPNet and DEVIANT. For example, our method improves by 0.51% on Level 1 under IOU=0.7 compared with DEVIANT. On nuScenes, we use the KITTI val model for inference. The results are shown in Table 2 of the global rebuttal; although our method does not exceed DEVIANT, it is still competitive. The results of mixed training are explained in the answer above. Regarding external disturbance, since pitch-angle data augmentation is irreversible (it is a 3D->2D process; appearances generated by 2D->3D do not conform to the real situation), we did not add this experiment. Any further guidance on this experiment would be greatly appreciated.
**Minor errors:** We have diligently revisited our paper multiple times, ensuring that the revised version is comprehensive and reader-friendly. Each new module or concept is thoroughly elucidated to prevent any reader confusion.
**Missing relevant papers:** We apologize for omitting these methods. We will add more excellent related work, such as MonoEF, MoGDE and MonoATT.
---
Rebuttal Comment 1.1:
Comment: Thanks for making a detailed rebuttal; it answered most of my questions. I have a few remaining concerns. 1. MonoUNI requires some prerequisite parameters such as camera position and focal length, which will limit its use in real-world deployments; we know that there is no way we can measure them all. 2. I am not sure that regressing the cube depth is justified, as it is still rough and does not accurately model complex surfaces the way a CAD model would.
---
Reply to Comment 1.1.1:
Comment: First of all, thank you once again for your response. Regarding your two concerns, we provide the following explanations:
(1) To the best of our knowledge, in contemporary industrial implementations, particularly for 3D detection in contexts like autonomous driving (including vehicles and infrastructure), the pose and focal length of every camera sensor are obtained in advance. Focal length and camera pose information are also used in real-world industrial system applications. In academia, prevalent BEV (Bird's Eye View) 3D detection methods (both monocular and multi-view), such as CaDDN [1], BEVDet [2], and BEVDepth [3], extensively rely on both camera focal length and pose.
- Focal length: Cameras used for autonomous driving undergo strict intrinsics calibration (including distortion correction and adjustment of focal length and principal point). The intrinsics are the bridge between the 2D image and the 3D coordinate system, and the focal length is necessary prior information. Most methods use the focal length to convert between 2D and 3D results, such as GUPNet [4] and MonoDDE [5]. Therefore, we have not introduced any additional focal-length dependency, either in academia or in real-world deployments.
- Pose: For the vehicle-side method, we do not rely on the camera pose (pitch angle = 0). For the infrastructure side, the camera pose belongs to the camera extrinsic parameters. In real-world industrial deployments, precise calibration of camera extrinsics remains imperative: outcomes from the infrastructure-side camera must be transformed into a global coordinate system (e.g., WGS84 or UTM) for use by downstream systems (e.g., self-driving vehicles). Notably, datasets like DAIR-V2X [6] and Rope3D [7], devised for addressing practical application challenges, also recommend using camera poses and even ground equations.
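As an illustration of the two points above, here is a minimal sketch (ours, not code from the paper) of how a calibrated focal length and pose are typically consumed: the standard pinhole relation depth = f · H_3D / h_2D that methods such as GUPNet build on for 2D-3D conversion, and the extrinsic transform that moves a camera-frame detection into a global frame. All names and numbers below are hypothetical.

```python
import numpy as np

def depth_from_height(focal_px, h3d_m, h2d_px):
    """Pinhole depth recovery: depth = f * H_3D / h_2D (focal length as prior)."""
    return focal_px * h3d_m / h2d_px

def cam_to_world(points_cam, R, t):
    """Apply calibrated extrinsics to Nx3 camera-frame points: X_w = R @ X_c + t."""
    return points_cam @ R.T + t

# A 1.5 m tall car appearing 70 px tall under a 1400 px focal length sits at 30 m.
depth = depth_from_height(1400.0, 1.5, 70.0)  # -> 30.0

# Hypothetical roadside camera pitched 12 degrees, mounted 6 m above the ground.
pitch = np.radians(12.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(pitch), -np.sin(pitch)],
              [0.0, np.sin(pitch),  np.cos(pitch)]])
t = np.array([0.0, 6.0, 0.0])  # camera position in the world frame
box_center_world = cam_to_world(np.array([[2.0, 1.0, depth]]), R, t)
```

Without the calibrated R and t, the 30 m detection could not be placed in the shared coordinate system a downstream vehicle consumes.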
(2) In the case of not using additional data, regressing cube depth is a more adequate depth-supervision method: it seems "rough" but is effective. Using CAD models is indeed more accurate, but CAD data are difficult to obtain, whereas cube depth is a weakly (or non-) CAD-dependent solution, which is more practical for real-world deployments.
[1] Categorical depth distribution network for monocular 3d object detection.
[2] BEVDet: High-Performance Multi-Camera 3D Object Detection in Bird-Eye-View.
[3] BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection.
[4] Geometry uncertainty projection network for monocular 3d object detection.
[5] Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection.
[6] Dair-v2x: A large-scale dataset for vehicle-infrastructure cooperative 3d object detection.
[7] Rope3d: The roadside perception dataset for autonomous driving and monocular 3d object detection task. | Summary: This paper proposes an optimization target which unifies 3D detection problems for vehicle and infrastructure sides, by taking into account the diversity of camera pitch angles and focal lengths. Furthermore, the paper develops 3D normalized cube depth of obstacle to promote the learning of depth information.
The authors provide extensive experimental results on several monocular 3D detection benchmarks to prove the effectiveness of the proposed approach on both vehicle and infrastructure scenarios.
Strengths: 1. The authors notice that depth can be ambiguous under the influence of focal length given similar visual features, and conduct experiments to verify their claim.
2. They propose to decouple depth from focal length and the camera optical axis orientation, specifically, to learn a normalized depth which is unaffected by focal length and axis orientation, from which the real depth can be recovered. I think the proposed idea is novel, reasonable and also proved to be effective, which eases the model from predicting ill-defined depth from visual feature.
Weaknesses: The idea of normalized depth is good, but the article seems like semi-finished and needs to be completed carefully.
- In sec 3, the definition of H' is missing, though it can be speculated from figure 3.
- L#159 principle->principal.
- Many key components of the proposed method are only described in text or can only be found in the figures.
For instance, in sec 3.3, L#193, the authors say "Compared with only supervising the depth of the center point and corner points, the 3D cube depth is a sufficient way to utilize the depth information". So I guess you supervise the depth of all the points on the visible surface of the obstacle? How is it implemented? Mathematically, what are the outputs of the model and how are they supervised? I fail to find any formula definition of your training loss or of the "depth uncertainty" in Fig. 2. As a reader, I can roughly guess the meaning, but I would appreciate it if the authors could complete all the missing definitions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It is mentioned in sec 3 L#157 that the pitch angle theta can be calculated from the ground equation. I suppose the accuracy of the pitch angle is severely affected by the accuracy of the ground plane estimation, so I wonder how the parameters of the ground plane equation are estimated.
2. The derivation of the normalized depth is done in a degenerate 2D view instead of 3D. It would be better if the authors could provide a simple explanation or proof that neglecting the effect of the camera yaw angle is reasonable.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper mentioned that one model file to support 3D detection for both sides rather than separate training for the two sides. This is a work worth investing in, and it will solve the problem of vehicle-road collaborative perception very well. I look forward to the subsequent output of the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the motivation, novelty, rationality and effectiveness of our method. We especially thank you for supporting our insight of learning a normalized depth that decouples depth from the focal length and the camera’s optical axis orientation. As you have stated, our normalized depth eases the model from predicting ill-defined depth from visual features. We also thank you for providing insightful suggestions. We address your concerns below and are willing to discuss them further.
**About the article seeming like semi-finished:** First and foremost, we extend our gratitude for your meticulous review, and we sincerely apologize for any hurried writing in the initial submission. We have diligently revisited our paper multiple times, ensuring that the revised version is comprehensive and reader-friendly. Each new module or concept is thoroughly elucidated to prevent any reader confusion. Below are our responses to your comments.
* Added the explanation of H’ in section 3: extend a line from the obstacle's center point C along the camera's imaging-plane direction, intersecting line OP at point P'. The distance CP' is denoted as H’.
* Principle point was corrected to principal point.
* We will introduce a new subsection, 3.4, to provide a comprehensive and detailed account of our loss design. Your understanding of the 3D cube depth supervision approach is accurate: indeed, we supervise the depth of all points on the visible surface of the obstacle. During training, this is achieved by supervising points with 3D cube depth ground truth on the obstacle-level feature (7x7 size) after ROI-align while the remaining points without 3D cube depth ground truth are unsupervised. The methodology for supervising Bias Depth mirrors that of Cube Depth. Concurrently, we will include explanations and relevant references concerning Depth Uncertainty, initially introduced in [1]. This concept adds an extra layer of uncertainty to each depth prediction of the model, capturing observation noise from input data. As emphasized in [1], this approach enhances the loss's robustness against noisy inputs in regression tasks. During inference, each obstacle will produce a 7x7 cube depth, a 7x7 cube depth uncertainty, a 7x7 bias depth, and a 7x7 bias depth uncertainty. Cube depths and bias depths will be weighted based on their respective uncertainties to yield the unique cube depth and bias depth, which will be added to obtain the final actual depth.
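A rough sketch of the inference-time fusion described above may help. The Laplacian-style uncertainty loss follows the spirit of [1]; the exact weighting of the 7x7 grid is our assumption for illustration (the response only states that depths are "weighted based on their respective uncertainties"):

```python
import numpy as np

def uncertainty_loss(pred, target, log_sigma):
    """Aleatoric-uncertainty regression loss in the style of [1]:
    L = sqrt(2)/sigma * |pred - target| + log(sigma)."""
    return np.sqrt(2.0) * np.exp(-log_sigma) * np.abs(pred - target) + log_sigma

def fuse_by_uncertainty(depths, log_sigmas):
    """Collapse a 7x7 grid of per-point depth predictions into one value,
    giving low-uncertainty points higher weight (softmax over -log_sigma
    is our assumed weighting scheme, not necessarily the paper's)."""
    w = np.exp(-log_sigmas)
    w /= w.sum()
    return float((w * depths).sum())

# Hypothetical 7x7 maps predicted for one obstacle at inference time.
rng = np.random.default_rng(0)
cube_depth, cube_unc = 30.0 + rng.normal(0, 0.5, (7, 7)), rng.uniform(0.1, 1.0, (7, 7))
bias_depth, bias_unc = rng.normal(0, 0.2, (7, 7)), rng.uniform(0.1, 1.0, (7, 7))

# Final actual depth = fused cube depth + fused bias depth.
final_depth = fuse_by_uncertainty(cube_depth, cube_unc) + fuse_by_uncertainty(bias_depth, bias_unc)
```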
**About the ground plane equation:**
* For the vehicle-side, we default the camera optical axis to be parallel to the ground (pitch angle=0), so no additional ground plane equation data is required.
* For the infrastructure side, the ground plane equation is provided by the Rope3D and DAIR-V2X datasets. Their methods for estimating the ground equation are the same, namely least-squares plane fitting. During dataset construction, autonomous vehicles equipped with LiDAR scanned the collected scenes, and the resulting dense point clouds were used for ground extraction to obtain dense ground point cloud data, which is included in the datasets. Additionally, the datasets supply the ground plane equation (defined by coefficients a, b, c, and d) obtained through least-squares plane fitting on this data. The Rope3D and DAIR-V2X datasets actively encourage users to use the ground-equation data and the original ground point clouds, and recommend exploring different usage methods, because these data are also used in real industrial applications. The baseline methods proposed in both the Rope3D and DAIR-V2X papers also use these data.
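For readers unfamiliar with the procedure, here is a minimal sketch of least-squares plane fitting on ground points (our illustration only; the datasets ship the fitted coefficients a, b, c, d directly). The SVD-of-centred-points formulation is a standard way to solve this fit:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit a*x + b*y + c*z + d = 0 for Nx3 ground points,
    via SVD of the mean-centred cloud: the normal is the direction of least variance."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    normal = vh[-1]
    d = -normal @ centroid
    return normal[0], normal[1], normal[2], d

# Hypothetical noisy LiDAR ground points lying near the plane z = 0.1*y.
rng = np.random.default_rng(1)
xy = rng.uniform(-50, 50, (500, 2))
z = 0.1 * xy[:, 1] + rng.normal(0, 0.01, 500)
a, b, c, d = fit_ground_plane(np.column_stack([xy, z]))
# The pitch angle between the camera optical axis and the ground can then
# be derived from the recovered normal (a, b, c).
```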
**The effect of camera yaw angle for normalized depth:** The paper's schematic diagram (Figure 3) represents a simplified scenario where the vehicle orientation and camera optical axis are coplanar (parallel). Even when an angle exists between the camera's optical axis and the vehicle's orientation (i.e., yaw != 0), the derivation remains applicable. This is due to the fact that the normalized depth solely depends on two points: the depth calculation point (center point C of the obstacle) and its corresponding vertical projection point on the ground (bottom center point P of the obstacle). Irrespective of the camera's yaw angle, the scenario can be visualized as the vehicle rotating around PC while the camera is stationary. In this process, PC is not changed, thus the geometric modeling process remains unaltered and the normalized depth remains constant. We will incorporate this explanation into subsection 3.3.
**Limitations on separate training on vehicle and infrastructure datasets:** In the Limitation section of our paper, we objectively acknowledge that our method does not include an additional design for mixed training between the vehicle and infrastructure sides, because we believe that mixed training should be solved through adaptation to the different appearance features of vehicles and infrastructures. This is somewhat different from the unified optimization targets and training pipeline emphasized in our paper, and requires an independent additional method. We appreciate your agreement with this opinion. Although no method is designed to solve mixed training directly, based on comments from reviewer Lnrw, we have conducted mixed-training experiments to evaluate our approach comprehensively. As seen in Table 4 of the global rebuttal, MonoUNI drops fewer points under mixed training than GUPNet and SMOKE, indicating that our method better alleviates the additional complexity and visual ambiguity introduced by mixed training.
[1] What uncertainties do we need in bayesian deep learning for computer vision. | Summary: This paper proposes a unified architecture for vehicle and infrastructure-based monocular 3D object detection network. At its core, the paper puts forth the concept of normalized depth that is independent of camera intrinsic focal length and extrinsic pitch angle w.r.t the ground plane. As such, the network is applicable to cameras with varying focal length and mounting angle, while not affected by the ambiguity. Following DID-M3D, the framework decomposes the center depth into cube depth and the so-called bias depth. The experiments demonstrate significantly better performance in Rope3D and DAIR datasets.
Strengths: 1. The paper presents interesting new insights into the problem of monocular 3D object detection under varying focal length and pitch mounting angle. The varying-focal-length problem has been tackled with normalized depth by existing works in the context of vehicle-based 3D object detection, but handling the pitch angle is as yet under-explored. The paper for the first time derives a normalized depth of the object center that is independent of the pitch angle.
2. The method is simple yet effective. The experiments demonstrate that the proposed method yields significantly superior performance over the state-of-the-art, as shown in Table. 2.
3. The proposed framework is a unified framework for both vehicle-based and infrastructure-based 3D detection. This opens the door to new possibilities in combining research and data across these two domains. While the method currently requires separate training on each, it holds the potential for joint training that may improve both.
Weaknesses: 1. In Line 158, the paper approximates the angle \delta by replacing v_p with v_c, but does not discuss the implications of this approximation in practice. The approximation error could be large, especially for nearby objects and for smaller pitch angles in the camera mounting. How would this impact performance? In addition, why not let the network predict v_p as well? If the network is able to predict the position of the object center, i.e. v_c, what prevents it from predicting the position of the bottom center v_p?
2. While the performance gains on infrastructure cameras are significant, they are small on vehicle cameras, i.e. KITTI. In particular, the paper only compares with [1,24,44,29] while omitting other stronger existing methods, such as CMKD (ECCV 2022) and LPCG (ECCV 2022). It is fine that the method does not outperform these methods given its unique advantage of being a unified framework, but the paper should acknowledge this for readers’ better understanding. There are also concurrent works such as NeurOCS and Mix-Teaching with better accuracy, which are not necessary to compare against but would be good to discuss in related works for completeness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The motivation and the implication of approximating v_p with v_c are the main question I have. I hope the authors could address this in the rebuttal.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has discussed its limitations properly, i.e. the method currently requires separate training on the vehicle and infrastructure datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the motivation, significance, and potential of our method. We are pleased that you support our insight into the problem of monocular 3D object detection under varying focal length and pitch mounting angle. As you have stated, our network is the first to simultaneously avoid the ambiguity introduced by diverse focal lengths and pitch angles. We are also glad that you agree with its simplicity and effectiveness. Finally, we sincerely thank you for providing insightful suggestions. We address your concerns below and are willing to discuss them further.
**The motivation and implication of approximating v_p with v_c:** Firstly, we extend our gratitude for your thorough review and insightful suggestions. As you recommended, utilizing the model-predicted v_p is a method with reduced errors. In our initial experimental procedure, for the sake of simplification, we directly replaced v_p with v_c. To our pleasant surprise, this replacement resulted in a notable enhancement of 2.18% (Table 5 (a) and (c) in the original paper). Consequently, we inadvertently disregarded the influence of its inherent approximation error and the potential for additional improvements.
We have since conducted three experiments to analyze this problem comprehensively. Exp1 proved that the original scheme introduces an average relative error of 2.9% and a maximum relative error of 9.1%. Exp2 proved that predicting v_p can indeed further improve performance. Exp3 shows that the original scheme is indeed relatively weak at near distances, but directly predicting v_p alleviates the problem.
The details are as follows:
* **Exp1 (Statistical Error Analysis) :** On Rope3D dataset, we measured the v pixel distance between the center and bottom center of all obstacles. The maximum distance is 98.32 pixels (for the nearest vehicle), with an average of 31.7 pixels. With the minimum focal length at about 2100 pixels, the highest error in calculating $\tan(\delta)$ using v_c instead of v_p is 98.32/2100=0.0468, and the average error is 31.7/2100=0.015. The $\tan(\delta)$ range is [-540/2100, 540/2100]= [-0.257, 0.257], and the pitch angle range is roughly [10, 15]. This affects the cosine (0.9848 to 0.9659) and sine (0.1736 to 0.2588) of the pitch angle, resulting in a normalized pitch range of [0.8994, 1.0324]. The maximum absolute error from $\delta$ is 0.0468 * 0.2588=0.0121, averaging at 0.015 * 0.2588 =0.0039. The highest relative error is 0.0121/(1.0324 - 0.8994)=0.091, and the average relative error is 0.0039/(1.0324 - 0.8994)=0.029.
Even in the presence of this error, the normalization of the pitch angle brought a performance improvement of 2.18%, but there is still room for improvement.
* **Exp2 (Predicting v_p):** We added a head to predict the vertical coordinate of the bottom center point P, and used the predicted v_p to calculate the normalized depth. On Rope3D, the AP_3D increased from 81.55 to 82.63. Combined with cube depth, we added a head to predict the v coordinates on the second stage. As shown in Table 5 in global rebuttal, the AP_3D was increased from 92.45 to 92.61.
* **Exp3 (Rope3D Evaluation by Distance):** As shown in Table 5 in global rebuttal, we split the evaluation into different dimensions according to the distance between obstacle and camera, in order to explore the detection performance at different distances. The original scheme (v_c instead of v_p) has a small improvement (92.69->92.84) due to the introduction of errors for nearby obstacles, while using the model to directly predict v_p has a more obvious improvement (92.69->93.17). We believe that the slight drop in the 30-60m (93.10->93.05) is caused by fluctuation between two independent trainings.
In summary, there are other ways to compute v_p, like having the model predict obstacle corner points and deducing v_p geometrically. However, due to time limitations, we haven't pursued this approach. In the revised paper, we'll detail various v_p solution methods (v_c substitution, model-predicted v_p, and geometric solutions) along with respective experimental results, enhancing reader understanding.
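For transparency, the Exp1 arithmetic can be reproduced in a few lines (a sketch using only the numbers quoted above):

```python
import math

max_dv, avg_dv, f_min = 98.32, 31.7, 2100.0   # v-pixel offsets; minimum focal length
max_err_tan = max_dv / f_min                   # ~0.0468, worst-case error in tan(delta)
avg_err_tan = avg_dv / f_min                   # ~0.015
sin_15 = math.sin(math.radians(15.0))          # ~0.2588, worst-case pitch sine

norm_span = 1.0324 - 0.8994                    # normalized-pitch range quoted above
max_abs_err = max_err_tan * sin_15             # ~0.0121
avg_abs_err = avg_err_tan * sin_15             # ~0.0039
max_rel_err = max_abs_err / norm_span          # ~0.091, i.e. 9.1%
avg_rel_err = avg_abs_err / norm_span          # ~0.029, i.e. 2.9%
```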
**More methods on the vehicle side:** We apologize for omitting these methods. As shown in Table 3 of the global rebuttal, we have added the latest vehicle-side methods (such as CMKD, LPCG and MonoATT) for comparison. As you mentioned, our method is valuable as a unified framework, although it does not surpass all vehicle-side methods. In fact, on the vehicle side our method shows a large improvement over our baseline GUPNet (15.02->16.73). Compared with DID-M3D, which uses a similar framework, MonoUNI improves (16.29->16.73) while having the advantage of not requiring additional data. We will also add more concurrent work, such as NeurOCS and Mix-Teaching, to the related work.
**Limitations on separate training:** In the Limitation section of our paper, we objectively acknowledge that our method doesn't include specific adjustments for mixed training between the vehicle and infrastructure sides. We believe that addressing mixed training requires domain adaptation for the distinct appearance features of the vehicle and infrastructure sides, deviating from our paper's emphasis on a unified optimization target and training pipeline. We appreciate your agreement with this perspective. While our method doesn't address mixed training directly, based on comments from reviewer Lnrw, we've conducted mixed-training experiments to evaluate our approach comprehensively. As seen in Table 4 of the global rebuttal, MonoUNI drops fewer points under mixed training than GUPNet and SMOKE, indicating that our method better alleviates the additional complexity and visual ambiguity introduced by mixed training.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I would like to thank the authors for the rebuttal. I appreciate the new experiments studying the impact of the approximation of v_p, as well as having the network predict v_p. Please include the new experiments and address the other comments in the camera-ready version.
Rebuttal: **Global Rebuttal**
**Table 1: Monocular 3D detection performance of Vehicle category on Waymo val set**
|$\mathbf{IOU_{3D}}$ |Difficulty|Method|Reference|Extra|$\mathbf{AP_{3D}}$(all) | $\mathbf{AP_{3D}}$(0-30m) | $\mathbf{AP_{3D}}$(30-50m) | $\mathbf{AP_{3D}}$(50m+) | $\mathbf{APH_{3D}}$(all) | $\mathbf{APH_{3D}}$(0-30m) | $\mathbf{APH_{3D}}$(30-50m) | $\mathbf{APH_{3D}}$(50m+) |
|:---:|:---:|---|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|0.7|Level_1|CaDDN|CVPR2021|LIDAR|5.03|15.54|1.47|0.10|4.99|14.43|1.45|0.10|
|0.7|Level_1|GUPNet|ICCV2021|None|2.28|6.15|0.81|0.03|2.27|6.11|0.80|0.03|
|0.7|Level_1|DEVIANT|ECCV2022|None|2.69|6.95|0.99|0.02|2.67|6.90|0.98|0.02|
|0.7|Level_1|**MonoUNI**|None|None|3.20|8.61|0.87|0.13|3.16|8.50|0.86|0.12|
|0.7|Level_2|CaDDN|CVPR2021|LIDAR|4.49|14.50|1.42|0.09|4.45|14.38|1.41|0.09|
|0.7|Level_2|GUPNet|ICCV2021|None|2.14|6.13|0.78|0.02|2.12|6.08|0.77|0.02|
|0.7|Level_2|DEVIANT|ECCV2022|None|2.52|6.93|0.95|0.02|2.50|6.87|0.94|0.02|
|0.7|Level_2|**MonoUNI**|None|None|3.04|8.59|0.85|0.12|3.00|8.48|0.84|0.12|
|0.5|Level_1|CaDDN|CVPR2021|LIDAR|17.54|45.00|9.24|0.64|17.31|44.46|9.11|0.62|
|0.5|Level_1|GUPNet|ICCV2021|None|10.02|24.78|4.84|0.22|9.94|24.59|4.78|0.22|
|0.5|Level_1|DEVIANT|ECCV2022|None|10.98|26.85|5.13|0.18|10.89|26.64|5.08|0.18|
|0.5|Level_1|**MonoUNI**|None|None|10.98|26.63|4.04|0.57|10.73|26.30|3.98|0.55|
|0.5|Level_2|CaDDN|CVPR2021|LIDAR|16.51|44.87|8.99|0.58|16.28|44.33|8.86|0.55|
|0.5|Level_2|GUPNet|ICCV2021|None|9.39|24.69|4.67|0.19|9.31|24.50|4.62|0.19|
|0.5|Level_2|DEVIANT|ECCV2022|None|10.29|26.75|4.95|0.16|10.20|26.54|4.90|0.16|
|0.5|Level_2|**MonoUNI**|None|None|10.38|26.57|3.95|0.53|10.24|26.24|3.89|0.51|
**Table 2: Cross-dataset evaluation of the KITTI val model on KITTI val and nuScenes frontal val cars with depth MAE.**
|Method|KITTI VAL(0-20m)|KITTI VAL(20-40m)|KITTI VAL(40m+)|KITTI VAL(all)|nuScenes VAL(0-20m)|nuScenes VAL(20-40m)|nuScenes VAL(40m+)|nuScenes VAL(all)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|M3D-RPN|0.56|1.33|2.73|1.26|0.94|3.06|10.36|2.67|
|MonoRCNN |0.46|1.27|2.59|1.14|0.94|2.84|8.65|2.39|
|GUPNet|0.45|1.10|1.85|0.89|0.82|1.70|6.20|1.45|
|DEVIANT|0.40|1.09|1.80|0.87|0.76|1.60|4.50|1.26|
|MonoUNI|0.38|0.92|1.79|0.865|0.72|1.79|4.98|1.43|
**Table 3: Monocular 3D detection performance of Car category on Rope3D val, DAIR-V2X-I val and KITTI test sets.**
|Method|Reference|Extra Data|$\mathbf{AP_{3D}}$(Rope3D)|$\mathbf{R_{score}}$(Rope3D)|Easy(DAIR)|Mod(DAIR)|Hard(DAIR)|Easy(KITTI)|Mod(KITTI)|Hard(KITTI)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|M3D-RPN|ICCV2019|Depth/None|67.17|73.14|-|-|-|14.76|9.71|7.42|
|MonoDLE|CVPR2021|Depth/None|77.50|80.84|-|-|-|7.23|12.26|10.29|
|MonoFlex|CVPR2021|Depth/None|59.78|66.66|-|-|-|19.94|13.89|12.07|
|DID-M3D|ECCV2022|None/Depth|-|-|-|-|-|24.40|16.29|13.75|
|**CMKD**|ECCV2022|None/Depth|-|-|-|-|-|25.09|16.99|15.30|
|**LPCG+MonoFlex**|ECCV2022|None/Depth|-|-|-|-|-|25.56|17.80|15.38|
|**MonoEF**|CVPR2021|None/None|-|-|-|-|-|21.29|13.87|11.71|
|**DEVIANT**|ECCV2022|None/None|-|-|-|-|-|21.88|14.46|11.89|
|MonoCon|AAAI2022|None/None|-|-|-|-|-|22.50|16.46|13.95|
|**MonoATT**|CVPR2023|None/None|-|-|-|-|-|24.72|17.37|15.00|
|**MoGDE**|NeurIPS2022|None/None|-|-|-|-|-|27.07|17.88|15.66|
|Kinematic3D|ECCV2020|None/None|50.57|58.86|-|-|-|19.07|12.72|9.17|
|SMOKE|CVPR2020|None/None|72.13|76.26|66.03|62.24|60.71|14.03|9.76|7.84|
|GUPNet|CVPR2021|None/None|66.52|70.14|62.22|55.94|55.90|22.26|15.02|13.12|
|Imvoxelnet|CVPR2022|None/None|-|-|44.78|37.58|37.55|17.15|10.97|9.15|
|BEVFormer|ECCV2022|None/None|50.62|58.78|61.37|50.73|50.73|-|-|-|
|BEVDepth|AAAI2023|None/None|69.63|74.70|75.50|63.58|63.67|-|-|-|
|BEVHeight|CVPR2023|None/None|74.60|78.72|77.78|65.77|65.85|-|-|-|
|MonoUNI|None|None/None|92.45|92.63|90.92|87.24|87.20|24.75|16.73|13.49|
**Table 4: Multi-dataset mixed training under KITTI and Rope3D datasets.** "**mixed training**" means using KITTI + Rope3D training sets together for mixed training. The evaluation is performed on separate Rope3D and KITTI val sets.
|Method|Rope3D(mixed training)|KITTI(mixed training)|Rope3D(only training under Rope3D)|KITTI(only training under KITTI)|
|---|:---:|:---:|:---:|:---:|
|SMOKE|63.24|6.65|72.13|12.85|
|GUPNet|43.82|4.89|66.52|16.46|
|MonoUNI|87.89|13.62|92.45|17.18|
**Table 5: Rope3D Evaluation by different distance.**
|Method|$\mathbf{AP_{3D}}$(0-30m)|$\mathbf{AP_{3D}}$(30-60m)|$\mathbf{AP_{3D}}$(60-90m)|$\mathbf{AP_{3D}}$(90m+)|$\mathbf{AP_{3D}}$(all)|
|---|:---:|:---:|:---:|:---:|:---:|
|MonoUNI(without pitch normalization)|92.69|92.41|89.66|84.52|90.97|
|MonoUNI(v_p with v_c)|92.84|93.10|91.49|88.70|92.45|
|MonoUNI(model-predicted v_p)|93.17|93.05|91.52|88.62|92.61| | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
H2RBox-v2: Incorporating Symmetry for Boosting Horizontal Box Supervised Oriented Object Detection | Accept (poster) | Summary: This paper proposes to exploit the reflection symmetry as a new supervision to HBox-supervised oriented object detection. Several modifications are made to adapt the new self-supervised (SS) branch, including removing the angle subnet in the weakly-supervised (WS) branch and a CircumIoU loss for box regression. Experiments show a clear state-of-the-art result to previous Hbox-supervised oriented object detectors.
Strengths: 1. The paper tries to exploit the reflection symmetry to improve the Hbox-supervised object detector, which is interesting.
2. The paper is well-motivated and easy to understand.
3. The proposed H2Rbox-v2 achieves state-of-the-art performance in the Hbox-supervised oriented object detectors.
Weaknesses: Major concern:
1. The soundness of using reflection symmetry is somewhat low. According to page 5 line 130, if the network $f_{nn}(\cdot)$ is subject to both flip and rotate consistencies, then $f_{nn}(I_0)=\theta_{sym}$. I don't know why this equation holds. Clearly, we don't know the symmetrical axis of the image $I_0$. And if we rotate the image by a random angle $\theta$ in Way2 of Fig. 3, then $f_{nn}(I_2)$ would never be equal to $\theta_{sym}$. Even if the symmetrical axis were known, this reflection symmetry learning can only work, theoretically and ideally, under the setting of a single symmetric object. As a matter of fact, it would be more rational for the converse proposition to hold - "If the network $f_{nn}(\cdot)$ can always predict the symmetrical axis $\theta_{sym}$ of the image, then it must be subject to both flip and rotate consistencies." I suggest the authors reconsider the logic here.
2. Due to weakness 1, the novelty is also limited, since one can easily conclude that H2RBox-v2 simply adds one more branch with a new view generation on top of H2RBox, i.e., the left column of Fig. 2(b) ("vertical flipping"). The right column is random rotation, which is similar to the SS branch of H2RBox. This, I believe, is what brings the model its performance gains, not the so-called reflection symmetry learning. The experiments cannot support that reflection symmetry is important; they can only verify the effectiveness of adding the vertical flipping branch, which is supported by Tab. 6. The relationship between reflection symmetry learning and multi-branch self-supervised learning remains unclear.
3. The ablation studies provide a weak explanation for the effectiveness of the proposed components. In Tab. 4, adding the $l_s$ loss alone produces near-zero mAP on DOTA, while it is much higher on HRSC. Why do such drastic fluctuations occur? The PSC coder is just an angle encoder-decoder module, originally proposed to improve the performance of oriented object detectors. It should not be a critical factor affecting performance; as demonstrated by the PSC paper, its effect is ~2 mAP at most. This paper shows PSC is crucial in the proposed method; that is to say, without PSC, the proposed method performs much worse. This largely weakens the soundness of the designs in the SS branch. Additionally, PSC imposes the loss on two encoded phase-shifting patterns of angles, which inherently solves the boundary discontinuity, while the proposed SS branch decodes the phase-shifting patterns to angles and then calculates the loss on two angles, which introduces the boundary discontinuity again, thus undermining the contribution of the snap loss.
4. It is confusing to me that Fig.2 shows the angle is predicted by the SS branch, while all the other properties are predicted by the WS branch. That is to say, the inference time of H2Rbox-v2 (WS+SS) should be double of the WS branch, while H2Rbox only needs the WS branch for inference. I'm concerned about why the FPS of H2Rbox-v2 is equal to H2Rbox as shown in Tab. 2. The authors need to describe the inference process clearly.
Minor concern:
The baseline model is unclear in the ablation studies. In Tab. 4, if the baseline model is "w/o PSC and $l_s$", then what angle coder is used in the baseline? And what angles enter the Smooth-L1 loss? The same question arises for Tab. 5 and Tab. 6. Does that mean all the baseline models in these tables adopt the optimal strategies from the other tables? If so, the best results of Tab. 6 seem to be inconsistent with the others.
Overall:
While the proposed method achieves state-of-the-art performance, my initial rating score is 4. I have some concerns about the soundness of the reflection symmetry learning and the ablation studies. It is of vital importance to give an explanation and analysis of the rationality of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Suggestions:
1. For weaknesses 1 and 2, I suggest the authors rethink the logic of the analysis in Sec. 3.1. It would be better to restate the analysis on page 5 line 130 as the converse proposition - "If the network $f_{nn}(\cdot)$ can always predict the symmetrical axis $\theta_{sym}$ of the image, then it must be subject to both flip and rotate consistencies." And then the contrapositive - "If the network is subject to neither flip consistency nor rotate consistency, it cannot predict the symmetrical axis of the image." Therefore, we need to enforce both flip consistency and rotate consistency. H2RBox considers rotate consistency only, which leads to sub-optimal results.
2. There is a missing experiment, i.e., $\lambda=0$ in Tab. 6.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer Pshp
We sincerely appreciate your valuable suggestions. We hope our clarification helps you make a more informed rating of our work.
**Q1 It would be more rational that the converse proposition holds for symmetric axis prediction and flip/rotate consistency**
We think our (empirically verified) idea is beyond this converse proposition. We restate the main facts and our theory in H2RBox-v2 as:
- **Observation (facts):** Collect a set of images containing a symmetric object along an arbitrary angle, and use this set to train a three-branch like neural network supervised just by the flip and rotate consistencies respectively via the two of the branches **(without the supervision of any annotation)**. We show that the trained network is able to estimate the angle of the symmetric axis of the object.
- **Underlying theory:** We have proved in our paper (see Sec 3.1) that if a function always satisfies the flip and rotate consistencies for the input symmetric image, then the function's output (by certain derivation - see details in the paper) is exactly the angle of the axis of the symmetric image.
- **Handling multiple objects in images:** The above theory strictly holds in the single object case. With an assigner in the SS branch to match objects in different views (see Line 151), the consistency loss is calculated between these matches, so that our "single-object" theory can be applied to each matched object.
**Q2.a One more branch is the key to performance improvement rather than reflection symmetry learning**
- **V2 has completely different angle information acquisition mechanism from v1:** Refer to the general response.
- **Performance improvement is not due to one more branch:** A new experiment that randomly selects a flipped view (5%) or a rotated view (95%) in a single branch (based on $\lambda=0.05$ in Table 6) shows that the number of branches is not the key to improving performance, AP50/AP75:
| Methods | original (2 branches) | multiplex (1 branch) |
|:-:|:-:|:-:|
| H2RBox-v2 | 72.31/39.49 | 72.24/39.51 |
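The single-branch "multiplex" sampling described above can be sketched as follows (a minimal illustration with a symbolic rotation and a hypothetical `sample_view` helper, not the authors' code):

```python
import math
import random

import numpy as np

def flip_vertical(img):
    # vertical flip of an H x W image array
    return img[::-1].copy()

def sample_view(img, lam=0.05, rng=random):
    """Per training step, emit ONE self-supervised view: a flipped view
    with probability lam, otherwise a randomly rotated one (the rotation
    is kept symbolic here; a real pipeline would warp the pixels)."""
    if rng.random() < lam:
        return flip_vertical(img), ('flip',)
    return img, ('rotate', rng.uniform(0.0, math.pi))

img = np.arange(12).reshape(3, 4)
view, tag = sample_view(img, lam=1.0)  # lam=1 forces the flipped view
assert tag == ('flip',) and (view == img[::-1]).all()
```

A single branch drawing one view per step this way sees the same mix of flip and rotate supervision as two dedicated branches, which matches the table: performance is nearly identical.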
**Q2.b Experiments cannot support reflection symmetry is important**
Sorry for leaving the impression of "adding flip to H2RBox-v1", which is in fact not true. Essentially, we discard the use of geometric constraints from human annotations in obtaining angles (the way of v1), and the angles are fully learned from the reflection symmetry of objects (refer to the answer to Q2.a). We are not proving that symmetry improves v1, but rather that symmetry can replace v1's way of learning the angle.
**Q2.c Relation between reflection symmetry learning and multi-branch SSL**
The former is our theory that allows the network to perceive the angle of objects from their symmetry and the latter is a neural network implementation.
**Q3.a The loss (Tab. 4) produces near 0 mAP on DOTA but much higher on HRSC. Why did such drastic fluctuations occur?**
Without handling the boundary discontinuity, we empirically found that the loss could fluctuate over a wide range, even failing to converge. In comparison, when both PSC and the snap loss are used, training is very stable. The "drastic fluctuations" precisely show this instability and prove the necessity of PSC and the snap loss. We now give more discussion in the new version.
**Q3.b Poor performance w/o PSC: PSC would not be a critical factor, ~2 mAP at most**
We agree that the impact of boundary problems may be ~2 mAP in **supervised** setting. Yet the impact of boundary problem in our consistency-based self-supervised setting could be much greater than that in the supervised case, especially in terms of stability, as shown in Table 4. Our point is that, "the poor performance without PSC" suggests in fact H2RBox-v2 allows PSC to well address the boundary problem to avoid poor performance. PSC is indispensable in our approach - but it does not mean our other parts' design is not sound.
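For context on why PSC matters here, a phase-shifting-style angle coder can be sketched as below. This is only our reading of the general PSC idea (map the $\pi$-periodic angle to a phase and emit $N$ phase-shifted cosine channels), not the authors' implementation:

```python
import math

def psc_encode(theta, n=3):
    # map the pi-periodic angle to phase 2*theta, then emit n
    # phase-shifted cosine channels
    phase = 2.0 * theta
    return [math.cos(phase + 2.0 * math.pi * k / n) for k in range(n)]

def psc_decode(x):
    # recover the phase from the cosine channels via atan2, then halve
    n = len(x)
    s = sum(v * math.sin(2.0 * math.pi * k / n) for k, v in enumerate(x))
    c = sum(v * math.cos(2.0 * math.pi * k / n) for k, v in enumerate(x))
    return math.atan2(-s, c) / 2.0  # angle in (-pi/2, pi/2]

# round trip within the pi-periodic range
assert abs(psc_decode(psc_encode(0.4)) - 0.4) < 1e-9
```

Because the encoded channels vary smoothly as the angle crosses the $\pm\pi/2$ boundary, a loss on them (or on properly wrapped decoded angles) avoids the jump that a raw angle regression suffers.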
**Q3.c Calculating the loss on two decoded angles introduces the boundary discontinuity again**
The snap loss limits the difference between two angles into the $\pi/2$ (see Fig. 4a), thus the calculation won't introduce the boundary discontinuity again. This is why both PSC and snap loss are important to solve boundary problem in the self-supervised design.
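The wrapping behavior described here can be illustrated with a minimal sketch (a hypothetical `snap_angle_diff`, not the authors' loss code): the difference between two decoded angles is snapped into $(-\pi/2, \pi/2]$, so boxes that differ by a multiple of $\pi$ incur zero penalty.

```python
import math

def snap_angle_diff(pred, target):
    # wrap the angle difference into (-pi/2, pi/2]: angles that differ
    # by a multiple of pi describe the same oriented box
    d = (pred - target) % math.pi  # in [0, pi)
    if d > math.pi / 2:
        d -= math.pi
    return d

# theta and theta + pi are the same box -> zero difference
assert abs(snap_angle_diff(0.3 + math.pi, 0.3)) < 1e-9
```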
**Q4 H2Rbox-v2 predicts angle by the SS branch, while H2Rbox only needs the WS branch for inference. Why is the FPS equal?**
As shown in the WS branch in Fig. 2, due to parameter sharing, the inference is in the form of WS + Angle head from SS, so the efficiency is almost the same as that of H2Rbox.
To be precise, the only additional cost of v2 during inference is PSC decoding. Thus, FCOS/H2RBox-v1/H2RBox-v2 have similar inference times. With input shape (3, 1024, 1024), the accurate costs are:
| | H2RBox | H2RBox-v2 |
|:-:|:-:|:-:|
| Flops | 206.91 GFLOPs | 207.01 GFLOPs |
| Params | 31.92 M | 31.93 M |
**Q5.a When "w/o PSC and $l_s$", what is used in the baseline?**
"w/o PSC" means that the conv layer directly outputs the angle. "w/o $l_s$" means using smooth-L1 loss. We will add the description in our new version.
**Q5.b Do all the tables adopt the optimal strategies of the other tables? Tab.6 seems to be not in line with the others**
Yes, we adopt the optimal strategies if not otherwise specified. The best results are not in line with the others because we ran the same config ($\lambda=0.05$) more than once, i.e. DOTA: 40.39/72.59/39.18 (Tab. 6) vs. 40.69/72.31/39.49 (other Tabs.), HRSC: 56.76/89.63/62.93 (Tab. 6) vs. 58.03/89.66/64.8 (other Tabs.), and we forgot to update Tab. 6 to the finally adopted result. We will unify this result in the final version.
**Q5.c Missing $\lambda=0$ in Tab. 6**
The experiment is now added, which shows that the flip branch is necessary (AP50/AP75/AP):
| $\lambda$ | HRSC | DOTA |
|:-:|:-:|:-:|
| 0 | 0.32/0.00/0.06 | 66.37/25.03/31.60 |
Please let us know if there are further questions.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the detailed rebuttal. Most of my concerns are addressed. The main paper should include the experiment of Q2.a. The authors should add analyses on Q3.a, Q3.B, and Q3.c, and fix the issue of Q5.b. I'm willing to improve my rating score from 4 to 7.
---
Reply to Comment 1.1.1:
Comment: We will carefully fix these issues in preparing the final version. Thanks again for your nice suggestions and efforts on this paper! | Summary: This paper proposes an advanced solution for using horizontal bounding boxes as supervision to learn oriented object detectors.
The proposed method H2RBox-V2, a modification of the recent work H2RBox, has technical novelty and contribution, as it jointly uses weakly- and self-supervised branches to learn the rotation angle.
Experiments are validated on DOTA, HRSC and FAIR1M, and show state-of-the-art performance.
Strengths: + The overall idea to learn the rotated angles from both weakly- and self- supervised branch is very interesting.
+ The task itself, to learn rotated bounding boxes from horizontal bounding boxes, is a new setting for the rotated object detection community.
+ The performance improvement against prior arts is significant.
Weaknesses:
- Unclear methodology design and presentation.
(1) The authors claim to propose a new CircumIoU loss for this framework. Unfortunately, in the methodology section, especially Sec3.4, there is no term ‘CircumIoU loss’, and it is impossible to know which loss is the claimed novelty.
(2) Besides, given this unclear description, it is very difficult to judge its difference from, or novelty over, prior works such as [a], which also optimizes the detector via rotation angles.
[a] Arbitrary-Oriented Object Detection with Circular Smooth Label. ECCV 2020.
Other minor issues and comments for improvement:
- Please provide some rotation angle distribution visualizations from the proposed framework.
- Fig. 1, columns 2 and 4: the predictions from the proposed method and SAM-RBox do not seem to differ much. Please consider using more representative figures.
- Table 1, the performance gap: please explicitly mention which dataset leads to the 3.41% drop and the 0.07% improvement.
- Fig.2 looks very crowded. It can be better polished, and the size of each content can be made more fit.
- For loss function, please use \mathcal{} to distinguish it from scalars.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
The authors have addressed my questions in the rebuttal well.
______ before rebuttal _____________
Q1: Issues with flaws in the technical framework, both experimental and theoretical.
Q2: The details, design and clear presentation of the CircumIoU loss.
Q3: systematically address the unfair experimental comparison issue.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed my concerns well.
______________________before rebuttal ___________
The limitations are not properly discussed. The reviewer believes the technical flaws mentioned in the weaknesses section are more critical to the proposed framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer tfHb
Thank you for your time and helpful suggestions. We humbly point out that there may be some misunderstandings in the review, and we hope our clarification helps you make a more informed rating of our work.
**Q1.a Eq. 2 and Eq. 4 conflict: When k is an odd number, the nets learn the opposite rotation angle, i.e. $\theta$ and $\theta + \pi$**
Unfortunately, we did not clearly explain or emphasize a basic concept in oriented object detection: bounding boxes have a periodicity of $\pi$.
Angle $\theta$ and $\theta + \pi$ refer to the same bounding box. Similarly, symmetric axis of $\theta$ is also equivalent to symmetric axis of $\theta + \pi$. We will clarify it in our new version.
Based on our above clarification, Eq. 4 is the extended version of Eq. 2, which takes into consideration the equivalence between symmetric axes of $\theta + k\pi$.
**Q1.b Lack experiment: Can using the weakly-supervised branch to predict warrant the same rotation angle as the semi-supervised branch, or at least very close?**
We hope our above explanation could have eased your concerns and potential misunderstanding.
For your specific question here, strictly speaking, our method does not have a "semi-supervised branch"; do you mean the self-supervised branch? Actually, there is only one angle prediction during inference. If this misunderstanding arises from the multiple networks drawn in Fig. 2, we clarify that these networks/branches share the same parameters (see Line 149 and Fig. 2). Such a parameter-shared graph is widely used (e.g. also in H2RBox). Therefore, no matter which branch is used for inference, the result should be the same in theory.
We further use HRSC to compare the AP50 by using different branches for inference:
| WS | SS-Flip | SS-Rotate |
|:-:|:-:|:-:|
| 89.66 | 89.66 | 89.66 |
The above results confirm that with the same input image, all parameter-shared branches have the same output.
**Q2.a The authors claim to propose a new CircumIoU loss, but in the methodology, especially Sec 3.4, there is no term "CircumIoU loss"**
Thanks for your careful review and suggestion. The term "CircumIoU loss" in Sec 3.4 is at Line 182 and in Fig. 4 (b). In Fig. 4 (b), we use a graph similar to the well-known paper "IoU loss" to illustrate the calculation process.
We give a more specific description as follows, which will be added to our new version:
To calculate CircumIoU loss, the predicted box $B_\text{pred}$ is first projected to the direction of ground-truth box $B_\text{gt}$. The obtained projected box $B_\text{proj}$ is displayed as the dashed box in Fig. 4 (b). Afterward, the loss can be calculated as:
$-\ln \frac{intersection(B_\text{proj}, B_\text{gt})}{union(B_\text{proj}, B_\text{gt})}$
CircumIoU loss can enable H2RBox-v2 to use random rotation (RR) data augmentation to further improve the performance (as shown in Tab. 2), which is not supported by H2RBox-v1.
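Based on this description, the projection-then-IoU computation can be sketched as follows. This is only our illustrative reading of the rebuttal's two-step recipe (helper names are hypothetical, not the authors' implementation):

```python
import numpy as np

def corners(cx, cy, w, h, theta):
    # four corners of a rotated box (angle in radians)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ rot.T + np.array([cx, cy])

def circum_iou_loss(pred, gt):
    """Project the predicted box onto the gt orientation (the dashed
    circumscribed box of Fig. 4b), then take -ln IoU of the two
    now-axis-aligned boxes."""
    gx, gy, gw, gh, gtheta = gt
    c, s = np.cos(-gtheta), np.sin(-gtheta)
    rot = np.array([[c, -s], [s, c]])
    # predicted corners expressed in the gt-aligned frame
    pts = (corners(*pred) - np.array([gx, gy])) @ rot.T
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    # gt box is axis-aligned and centered at the origin in its own frame
    ix = max(0.0, min(x1, gw / 2) - max(x0, -gw / 2))
    iy = max(0.0, min(y1, gh / 2) - max(y0, -gh / 2))
    inter = ix * iy
    union = (x1 - x0) * (y1 - y0) + gw * gh - inter
    return -np.log(inter / union)

# identical boxes -> projected box equals the gt box -> zero loss
assert abs(circum_iou_loss((0, 0, 4, 2, 0.3), (0, 0, 4, 2, 0.3))) < 1e-9
```

Any angular misalignment enlarges the projected box relative to the ground truth, so the loss grows smoothly with the misalignment, which is what makes random-rotation augmentation usable.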
**Q2.b Difference or novelty against prior works such as Arbitrary-Oriented Object Detection with Circular Smooth Label (ECCV 2020)**
Circular Smooth Label (CSL) is about **supervised** learning that learns angle classification from **angle annotations**. However, as described in our paper's title and introduction, H2RBox-v2 is aimed at a different and more challenging setting -- **self-learning** oriented boxes from **horizontal box annotation** without human labeled angle. CSL cannot even work without labeled angles. Therefore, the difference (novelty) between them is huge.
**Q3.a Prior work H2RBox with 1x schedule and multi-scale (MS) has mAP of 74.40%. But in this submission, H2RBox is made deliberately low by removing its MS scheme. More importantly, all the H2RBox-v2 performance is reported with MS scheme. If consider MS for H2RBox, it can even outperform v2**
Thanks for your careful thoughts on our work and giving us the chance to clarify and improve the unclear part of the paper.
Please note that in fact all the H2RBox-v2 performances in this paper are reported with the **single-scale** scheme unless otherwise specified in Table 2 in the paper, where it shows the performance of **H2RBox-v2 using multi-scale: 77.97%**. Compared to the performance of H2RBox with MS reported in the original paper (74.40%), the improvement is even higher than the value 2.26% that we claimed in our paper (see line 229: H2RBox-v2 outperforms H2RBox by 2.26% = 72.31% - 70.05%).
We will add the following comparison (i.e. enrich/update Table 2). Instead of "74.40%", we will use our reproduced result 75.35% (also a higher baseline).
| Methods | 1x w/o MS | 1x w/ MS |
|:-:|:-:|:-:|
| H2RBox (R50) | 70.05% | 75.35% |
| H2RBox-v2 (R50) | 72.31% | 77.97% |
| Improvement of v2 | +2.26% | +2.62% |
**Q3.b The highest result of H2RBox-v2 79.75% mAP is made by the Swin-transformer. It is more meaningful to also report the H2RBox (in a fair way) with Swin**
Thanks for your suggestion. The new results are as follows (4 GPUs and batchsize=4 to speed up experiments in the limited rebuttal period):
| Methods | 1x w/ MS |
|:-:|:-:|
| H2RBox (Swin-Tiny) | 61.60% |
| H2RBox-v2 (Swin-Tiny) | 79.39% |
| Improvement of v2 | +17.79% |
|||
| H2RBox (Swin-Base) | 61.05% |
| H2RBox-v2 (Swin-Base) | 80.35% |
| Improvement of v2 | +19.30% |
On Swin-transformer, -v2 outperforms -v1 by a large margin. But since the original H2RBox paper does not report performance on Swin, it might be suspected that we deliberately lowered the performance of "H2RBox (Swin)" for comparison. Therefore, we contacted the authors of H2RBox about the H2RBox+Swin experiment. They told us that their experimental results are consistent with ours, but they have not found the reason for this phenomenon so far. Anyway, this result demonstrates that H2RBox-v2 is more robust.
**Q4 Other minor issues**
We will carefully follow these suggestions and revise the paper accordingly. Thanks again for your efforts on this paper!
Please let us know if there are further questions.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Many thanks for the time and effort from the authors.
Most of my concerns have been properly addressed, and I realize that my original pre-rebuttal rating was too low.
Especially, the clarification of significant improvement against H2RBox under *the same and fair setting* makes this work much better than the previous grade.
However, before the discussion period ends, still some minor concerns regarding **Q1.b** :
- Indeed, the authors have clarified that both backbones share weights before angle prediction. Does it involve any additional layers or parameters to map the frozen backbone feature to the angle?
- If the answer to the first question is yes, then empirically extracting the frozen features and learning the mapping between angle and backbone through the other branch may have some unforeseen impact.
- In this regard, it would be better for the authors to show some empirical outcomes on this aspect, so that my last remaining concern is resolved.
I am willing to improve my rating to the accept threshold if these final concerns can be properly addressed in the following week.
---
Reply to Comment 1.1.1:
Comment: Thanks for your prompt and responsible response. We try to get your point and our tentative response is as follows.
**Q1: Does it involve any additional layers or parameters to map the frozen backbone feature to the angle?**
Following backbone, there is an angle head, consisting of several convolution layers and a PSC decoder, to map the feature to the angle (see Fig.2b). The angle heads in different branches also share the same parameters. In fact, all heads, including regression, classification, center-ness and angle, their parameters are also shared, which is consistent with the classic detectors, i.e. RetinaNet and FCOS.
**Q2: Extract the frozen features and learn the mapping between angle and backbone through the other branch may have some unforeseen impact**
Please note that our backbone is not frozen **during training** -- the backbone, angle head, and other heads (i.e. regression, classification, center-ness) are updated together by SS and WS losses. Once the training is complete, both the backbone and the heads are frozen, and there is no need to generate SS views or learn the angle mapping **during inference**.
In oriented object detection, it is common to predict angle in a separate head/branch, e.g. in your mentioned prior work CSL, the angle is also separately predicted by an angle head which is trained by an independent angle loss. The detection results and the performance show that our design works well.
Please let us know if we misunderstood your questions, and also let us know if there are further questions. | Summary: This paper introduces H2RBox-v2, an innovative approach to further bridge the gap between HBox-supervised and RBox-supervised oriented object detection. It seeks to address the limitations of the original H2RBox model, which required high-quality annotations and large training datasets, and was incompatible with rotation augmentation. H2RBox-v2 augments the original model by adding a self-supervised branch that learns object orientations from inherent visual symmetry, and a weakly-supervised branch that incorporates a new CircumIoU loss to allow for random rotation augmentation.
Strengths: - **Originality:** While the paper builds upon existing work (specifically H2RBox), it infuses new concepts, the most significant being the utilization of symmetry for angle regression. This concept is highly innovative and introduces a novel angle of approach for oriented object detection. This creativity in applying the natural property of symmetry to enhance detection accuracy sets this work apart from previous methods and broadens the boundary of the field.
- **Quality:** The proposed H2RBox-v2 exhibits enhanced performance on various datasets compared to its predecessor, extensive ablation experiments have demonstrated the importance of each module.
- **Clarity:** The paper is well-structured and clearly presents the methodology and results, making the contributions of this research easily understandable.
- **Significance:** Bridging the gap between HBox-supervised and RBox-supervised oriented object detection significantly reduce the annotation costs.
- H2RBox-v2 has shown improved performance on various datasets and is specifically designed to cope with situations where H2RBox-v1 may underperform. This makes it a valuable contribution to real-world applications of oriented object detection.
Weaknesses: - **Theory is not fully comprehensive:** *L138-142* only discussed the case where a single instance is contained in an image. However, in reality, remote sensing images can contain many objects, especially in dense scenarios. Despite the theoretical proof being not fully comprehensive, their simplicity of idea and empirical effectiveness seem to be sufficient according to me.
- **Training overhead not detailed:** It is crucial that the model does not introduce additional overhead during the testing phase, but the paper does not discuss the computational overhead and the time taken for the model training process in detail
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would it be more effective to conduct the comparative experiments on the DOTA dataset, regarding the assertion that H2Rbox-v1 requires more training data than v2? Specifically, by training both v1 and v2 with varying percentages of DOTA data (namely 10%, 20%, and 50%), we could gain a more robust comparison of their respective performances.
- When the object is not in the center of the image, are Way1 and Way2 described in L124-132 still equivalent?
- In remote sensing scenarios, some objects may not possess symmetry, such as swimming pools and harbors. What impact does this have on the theory?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Nothing to report.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer W3Ay
Thanks for your positive comments and constructive suggestions. Your endorsement of our method gives us significant encouragement.
**Q1 The computational overhead and the time taken for the model training/testing process**
**Test phase:** In our submission, we only gave FPS for evaluating inference time (in Table 2). We now add the accurate computational costs (the additional cost of -v2 is due to PSC decoding) and we will also enrich the experiment section in our new version:
| | H2RBox (v1) | H2RBox-v2 |
|:------:|:-------------:|:-------------:|
| Flops | 206.91 GFLOPs | 207.01 GFLOPs |
| Params | 31.92 M | 31.93 M |
**Train phase:** H2RBox-v2 is slower than -v1 in training as it involves one more branch. Here we provide an additional experiment that randomly selects from 5% flip or 95% rotation in only one branch ("5%" is based on $\lambda=0.05$ in Table 6). The resulting AP50/AP75 are:
| Methods | original (2 branches) | multiplex (1 branch) |
|:---------:|:---------------------:|:--------------------:|
| H2RBox-v2 | 72.31/39.49 | 72.24/39.51 |
The multiplex version requires training time similar to -v1 while keeping the same high performance as -v2:
| FCOS | H2RBox-v1 | H2RBox-v2 | H2RBox-v2 (multiplex) |
|:-----:|:---------:|:---------:|:---------------------:|
| 5h10m | 7h10m | 8h56m | 7h7m |
**Q2 Comparative experiments on the varying percentages of DOTA dataset, regarding the assertion that H2Rbox-v1 requires more training data than v2**
Thanks for your nice suggestions. In the following table, we show that the gap between -v1 and -v2 becomes larger on the sampled version of DOTA dataset (30%, 10%). The AP50/AP75 are:
| Methods | Full | 30% | 10% |
|:---------:|:-----------:|:-----------:|:-----------:|
| H2RBox | 70.05/38.38 | 55.73/20.14 | 37.71/ 6.98 |
| H2RBox-v2 | 72.31/39.49 | 61.25/27.91 | 44.61/14.97 |
**Q3 When the object is not in the center of the image, are Way1 and Way2 still equivalent?**
Yes, actually for any image and any $\theta$, { flip about line $\theta$ } is equivalent to { flip vertically and then rotate by $2 \theta$ }. But when the object is not in the center, the input image becomes asymmetric, so $f_\text{nn}\left ( I' \right ) = f_\text{nn}\left ( I_0 \right )$ (on Line 125) does not hold.
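This identity can be verified directly with 2x2 transform matrices (taking "flip vertically" as $y \to -y$ about the origin):

```python
import numpy as np

theta = 0.7  # an arbitrary axis angle through the origin

# reflection about the line at angle theta
reflect = np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
                    [np.sin(2 * theta), -np.cos(2 * theta)]])

flip_v = np.array([[1.0, 0.0], [0.0, -1.0]])  # vertical flip: y -> -y
rot_2t = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
                   [np.sin(2 * theta),  np.cos(2 * theta)]])

# flip vertically, then rotate by 2*theta == reflect about the theta line
assert np.allclose(rot_2t @ flip_v, reflect)
```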
Technically speaking, when the object is not in the center, the situation is similar to multiple object detection. Although our theoretical study is performed on a single object, there is an assigner in the SS branch to match the center of objects in different views (see Line 151), and the consistency loss is calculated between these matched center points. With the assigner, our "single-object" theory can be applied to each matched object center, and this is the way our network can be used for multiple objects (and objects not in the center).
**Q4 Some objects may not possess symmetry, e.g. swimming pools and harbors. What impact does this have on the theory?**
While the training objects are preferred to be symmetric, which often holds in aerial images, our experiments show that the symmetry need not be strictly obeyed. H2RBox-v2 can still optimize for the most likely solution -- an approximate axis that divides the object into two "most mirrored" parts. This mechanism extends the applicability of H2RBox-v2 to most elongated objects. As a result, the per-class performance and the visualization in our supplementary material demonstrate that H2RBox-v2 still gives competitive performance on swimming pools and harbors.
Please let us know if there are further questions.
---
Rebuttal Comment 1.1:
Comment: I would thank the authors for addressing my concerns. Given my current rating of 7, I intend to maintain it, unless other reviewers introduce new issues that warrant reconsideration.
---
Reply to Comment 1.1.1:
Comment: We will carefully prepare the final version. Thanks again for your recognition and valuable suggestions! | Summary: This paper proposes a new horizontal-box-supervised rotated object detector. The proposed detector consists of two modules: a self-supervised branch for angle regression and a weakly supervised branch for horizontal box regression. This method is simpler than the previous one and shows a clear improvement over it.
Strengths: - The paper fully considers the independence and correlation between angle and horizontal box in rotation object detection. It only uses angle regression in self-supervised regression while using Circumscribed RBox IoU to associate angle regression and horizontal box regression in weakly supervised regression to obtain the rotated box.
- The paper greatly improves the performance of the hbox-supervised detector, approaching or even reaching the performance of some rbox-supervised detectors.
- The paper is well written, and the method is clear and well described.
Weaknesses: - Figure 2 seems a bit complicated, it would be better to highlight the main points to make it clearer.
- Although this article has achieved impressive results, it is important to consider the robustness of the model when horizontal bounding box annotations are not accurate enough, the article's experiments do not explicitly demonstrate the robustness towards inaccuracies in horizontal bounding box annotations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I have a question about the self-supervised branch in the proposed method. Why was it designed with two perspectives instead of using one perspective and randomly selecting multiple variations under that perspective? Additionally, would using more perspectives, such as different rotation angles, combinations of rotation and symmetries, or scaling variations, lead to performance improvements?
- Regarding Table 4, I noticed that the experiments on the DOTA and HRSC datasets show inconsistent results. Specifically, on the DOTA dataset, not using PSC yields an incorrect result, while on the HRSC dataset, not using PSC only causes a performance loss. This raises the question of whether PSC only improves performance or whether it has an indispensable impact on the model.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations:
N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Review k1cY
Thank you for the nice comments and valuable suggestions. After revising accordingly, the article is now clearer and more complete!
**Q1 Robustness to inaccuracies in horizontal bounding box annotations**
Thanks. We add random noise to the annotations and record AP50/AP75 under different noise levels on DOTA-v1.0. Noise=30% indicates Height = Height × [0.7, 1.3] and Width = Width × [0.7, 1.3], with both factors drawn from a uniform distribution. The results are as follows:
| Noise | 0% | 10% | 30% | 50% |
|:---------:|:-----------:|:-----------:|:-----------:|:-----------:|
| H2RBox | 70.05/38.38 | 69.19/35.24 | 67.39/26.02 | 61.66/14.55 |
| H2RBox-v2 | 72.31/39.49 | 71.68/36.33 | 71.11/34.12 | 67.88/21.56 |
Results show that when adding 30% random noise to the annotations, the AP50 of H2RBox-v2 drops by only 1.2%, less than that of H2RBox (2.69%), which demonstrates the better robustness of our method. We will also add these results to the new version to make it more convincing.
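For concreteness, the noise injection described above could be sketched as follows (a minimal illustration with our own function name, not the actual evaluation code):

```python
import random

def perturb_hbox(width, height, noise=0.30):
    """Scale box width/height by independent factors drawn uniformly
    from [1 - noise, 1 + noise], e.g. [0.7, 1.3] at the 30% level."""
    return (width * random.uniform(1 - noise, 1 + noise),
            height * random.uniform(1 - noise, 1 + noise))

w, h = perturb_hbox(100.0, 40.0, noise=0.30)
assert 70.0 <= w <= 130.0 and 28.0 <= h <= 52.0
```

Each annotation keeps its center but has its extent rescaled, so higher noise levels degrade both AP50 and (more sharply) AP75.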
**Q2.a About the self-supervised branch**
Thank you for bringing up this interesting idea. Our two-perspective paradigm is intuitively derived from our theory which involves two consistencies, and you provide a new multiplexing solution for symmetry-aware learning. According to your suggestion, we conduct two additional experiments:
1. Randomly select between flip (5%) and rotation (95%) in one perspective/branch.
2. Use one more perspective/branch with another random rotation angle.
The results (AP50/AP75) are as follows:
| Exp ID | original |rand sel. | +branch |
|:---------:|:-----------:|:-----------:|:-----------:|
| H2RBox-v2 | 72.31/39.49 | 72.24/39.51 | 72.05/39.18 |
The results show that the multiplexing setting reaches almost the same accuracy as the original one. We will also add this result in the new version.
**Q2.b Using scaling variations**
DOTA usually follows two protocols: with and without multi-scale (MS). In the "w/o MS" setting, using scaling can improve performance, but it would also make the comparison unfair if we integrated scaling while comparing with baselines without MS. In the "w/ MS" setting, by contrast, reflection/scaling are already used as augmentations. In this case, we expect that using them in view generation plays the same role as in augmentation, on the grounds that scaling does not change the angle, so the consistency loss between the original view and the scaled one is zero. We will further explore your suggestions in future work.
**Q3 Inconsistent results in Table 4. Does PSC have an indispensable impact?**
PSC is indispensable for the stability of our model.
Without PSC, we empirically observed that the loss could fluctuate over a wide range (possibly due to the angular boundary discontinuity), and training could even fail to converge. In comparison, when both PSC and the snap loss are used, training is very stable, without a single failure across our entire set of experiments.
The results you mention may appear inconsistent, but the underlying instability is consistent, and both results prove the necessity of PSC and the snap loss.
Please let us know if there are further questions.
---
Rebuttal Comment 1.1:
Title: My final decision
Comment: Thanks for the response. I think it all makes sense, and glad to see the authors added the experiment of using one perspective/branch, leading to an improvement in H2Rbox-v2’s performance compared to v1 without any adverse effects. Thanks for the good paper. Now I do not find any other fatal problems, and I will increase my rating from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: We will carefully revise and prepare the final version. Thanks again for your nice words and constructive suggestions on this paper! | Rebuttal 1:
Rebuttal: General Response:
We thank the reviewers for their time and constructive suggestions. The reviewers expressed appreciation on a few points:
1. writing/presentation (**k1cY**:The paper is well written, and the method is clear and well described; **W3Ay**: The paper is well-structured and clearly presented; **Pshp**: The paper is easy to understand)
2. motivation/methodology (**tfHb**: The idea to learn the angles is very interesting; **Pshp**: Exploiting the reflection symmetry to improve the object detector is interesting and the paper is well-motivated; **W3Ay**: The paper infuses new concepts -- utilizing symmetry for angle regression, which is highly innovative; **k1cY**: The paper fully considers the independence and correlation between angle and horizontal box, and it only learns angles in self-supervised regression)
3. experiments/results (**k1cY**: The paper greatly improves the performance, approaching or even reaching the performance of some rbox-supervised detectors; **W3Ay**: It exhibits enhanced performance on various datasets and extensive ablation experiments have demonstrated the importance of each module; **Pshp**: The proposed H2Rbox-v2 achieves state-of-the-art performance in the Hbox-supervised oriented object detectors)
However, there are also some major concerns, listed below, where we humbly suggest there may be misunderstandings.
Q1. **Unfair comparison** (**tfHb**: Comparison with prior work H2RBox is unfair. All the H2RBox-v2 performance is reported with the multi-scale scheme, but H2RBox is made deliberately low by removing its multi-scale scheme)
Our response: All the H2RBox-v2 performances are reported with the **single-scale** scheme unless otherwise specified in Table 2 in the paper. In our comparison (see line 229: H2RBox-v2 outperforms H2RBox by 2.26% = 72.31% - 70.05%), both sides are based on single-scale. When comparing on multi-scale scheme (according to the reviewer's suggestion), the improvement is 2.62%, even higher than the value 2.26% that we claimed.
Q2. **Soundness of the theory** (**tfHb**: Eq. 2 and Eq. 4 conflict: When k is an odd number, the nets learn the opposite rotation angle, i.e. $\theta$ and $\theta + \pi$; **Pshp**: I don't know why the equation $f_\text{nn}\left ( I_0 \right ) = \theta_\text{sym}$ holds)
Our response:
**to tfHb:** In oriented object detection, bounding boxes have a periodicity of $\pi$: angles $\theta$ and $\theta + \pi$ refer to the same bounding box. Similarly, the symmetric axis at angle $\theta$ is equivalent to the symmetric axis at $\theta + \pi$.
**to Pshp:** The equation is solved from $f_\text{nn}\left ( I_0 \right ) = \theta_\text{pred} = - \theta_\text{pred} + 2\theta_\text{sym}$, which is mathematically derived from the equivalence between Way 1 and Way 2 (see definition of the two ways in Line 124-127).
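For readers tracing the step, the full chain (same symbols as in Line 124-127) is simply:

$$
f_\text{nn}\left( I_0 \right) = \theta_\text{pred}, \qquad
\theta_\text{pred} = -\theta_\text{pred} + 2\theta_\text{sym}
\;\Longrightarrow\;
2\theta_\text{pred} = 2\theta_\text{sym}
\;\Longrightarrow\;
f_\text{nn}\left( I_0 \right) = \theta_\text{sym}.
$$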
Q3. **Improvement not due to the new theory** (**Pshp**: One more branch than H2RBox (v1) is the key to performance improvement rather than so-called reflection symmetry learning)
Our response: H2RBox-v2 has a completely different angle-information acquisition mechanism from v1. To prove that v2 learns the angle from the image (unlike v1, which learns it from the annotation), we provide an additional experiment (see the attached PDF in this rebuttal). We weaken the annotations of DOTA to square boxes so that they contain no angle-related information. The results show that in this experiment v1 fails to learn the correct angle, while v2 still finds it. This verifies the soundness of our theory (if our theory did not work, v2 would behave the same as v1 in this experiment).
We hope our clarification helps you make a more informed rating of our work. In the individual responses below, we provide answers to each raised weakness/question.
Best regards,
Authors
Pdf: /pdf/4b08abc5ca2d1ad422f7263f47d9fcba07e96db1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Egocentric Planning for Scalable Embodied Task Achievement | Accept (poster) | Summary: The paper introduces Egocentric Planning, a symbolic planning method that alternates exploration and task solving for long-horizon object-oriented POMDPs in environments with deterministic action effects. The presented method is used as the planner in a hybrid agent, SOTA on the 2022 ALFRED benchmark, with neural perception. The agent is based on a previous successful design (FILM), with the semantic SLAM in the latter replaced by a graph representing the current knowledge about the scene, alongside the novel open-loop planning algorithm exploiting this graph. The experimental results show a considerable improvement over FILM, at the cost of the longer trajectories required to succeed due to the time needed to gather information about the scene, along with an error analysis and ablations. An exciting result is the generalization to new task types not present in training.
Strengths: - The proposed method enables out-of-the-box generalization to new task types (within the same set of objects and relationships).
- The graph representation and planner algorithm work as a tailored alternative to semantic SLAM for object-oriented POMDPs with remarkable performance.
Weaknesses: - The presented approach is effective under the assumption of deterministic action effects, but in real-world usage this does not generally apply. The paper consequently discusses some possible ways to overcome this limitation, but also claims some better-performing alternatives should be considered as baselines rather than comprehensive solutions, since they are limited in scope. I'd encourage rephrasing that discussion, which, as is, could be interpreted as unfair to the alternatives.
- The need for an exploration phase, when the egocentric planner is in principle designed to determine when to explore, seems a little ad hoc. I wish there was a clearer explanation why this cannot be provided by the main algorithm.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How could the proposed framework be extended for multiple agents interacting with the environment?
2. Related to one of the weaknesses, what is missing in the egocentric planner to be capable of successfully exploring the environment until enough information is gathered to solve the task?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I think the paper makes a good job describing limitations of the proposed method, but, as mentioned in Weaknesses, I think the judgment of better performing alternatives as baselines, given the acknowledged limitations of the proposed method, could be seen as unfair, so I think an improved contextualization is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We appreciate the insight into our work in the multi-agent setting.
> **Strengths:**
>
> The primary contribution is that our method enables generalized solutions for new task types, using the same set of objects and relationships.
>
> **Weaknesses:**
>
> Your point on the effectiveness under deterministic action effects is taken. Our method is distinct from those with native support for non-deterministic actions. While the egocentric algorithm doesn't require deterministic actions, our implementation for ALFRED does. This iterative algorithm is our baseline for studying high generalization, even in deterministic domains.
>
> The exploration phase, as you noted, seems ad hoc. We don't propose a universal solution but applied a specific one for ALFRED, enough to win the competition. This approach is open to integration with other SLAM methods, offering a promising avenue for improvement.
>
> **Questions:**
>
> - "How could the proposed framework be extended for multiple agents?"
>
> A subset of multi-agent problems can be mapped into classical planning and adapted to our method, as per Muise et al. (2015).
>
> - "What is missing in the egocentric planner to explore the environment until enough information is gathered?"
>
> The egocentric planner's current optimization restricts exploration due to its limitation on actions and the imperfection of a learned Unet. Our initial steps of exploration help generate reliable candidates, but with improved depth perception and budget, the current setup could explore effectively. Future work may weight the graph with rewards associated with locations where objects can be perceived, guiding A* to those locations.
>
> **Limitations:**
>
> We acknowledge your thoughts on the judgment of alternatives as baselines and will revise the manuscript. Our focus is on generalization, whereas other methods may have different priorities.
---
Rebuttal Comment 1.1:
Title: Thank you for your answers
Comment: In first place, I would like to thank the authors for their responses. I would also like to clarify the point of one of my questions.
Q1. I was actually interested in a high-level description of the management of the information graph/mental state in each agent. For example, how would the state of objects that have undergone changes without the agent's interaction (e.g. caused by the environment, or by an external agent) be updated?
---
Reply to Comment 1.1.1:
Title: Managing Multi-Agent Interactions and State Updates
Comment: Thank you for the clarification. The central issue here pertains to the subset of changes that are relevant to the current plan. These changes might render actions inapplicable, causing the plan to fail in achieving its goal. To address this, the state can only be fixed through new observations or by communicating with other agents; however, let’s set aside the communication aspect for now.
The notion of unbounded world modifications would make planning fundamentally impossible, necessitating some assumptions on our part.
In the scenario of a bounded number of relevant world changes, it becomes straightforward to adapt the algorithm. Here, the state is updated after each action, and we can then inexpensively verify if the remainder of the current plan aligns with the current goal (e.g., using logical regression [1]), be it exploration or task achievement. Should this approach fail, we can replan. This simplistic idea might fall short in situations where changes lead to further exploration possibilities, like a corridor opening after moving furniture. In such a case, the agent won’t recognize the opportunity unless it revisits the location.
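The bounded-change adaptation sketched above could look roughly like the following (a hypothetical sketch; `plan`, `is_still_valid`, and `execute_and_observe` stand in for the underlying planner, the validity check such as logical regression [1], and the perception update, none of which are specified in the paper):

```python
def egocentric_execute(plan, is_still_valid, execute_and_observe, state, goal):
    """Execute a plan, replanning whenever external changes invalidate
    the remaining actions with respect to the current goal."""
    steps = plan(state, goal)
    while steps:
        action, rest = steps[0], steps[1:]
        state = execute_and_observe(action, state)  # state updated after each action
        if rest and not is_still_valid(rest, state, goal):
            rest = plan(state, goal)                # cheap check failed: replan
        steps = rest
    return state
```

The point of the cheap per-step validity check is that full replanning is only triggered when a relevant world change actually breaks the remaining plan.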
When it comes to modifications made by other agents, we can design new egocentric planning algorithms drawing inspiration from existing Multi-Agent planning literature. A relevant survey, such as the one by [2] Torreno et al., “Cooperative multi-agent planning: A survey,” can provide valuable insights into the deterministic cooperative setting. An egocentric planning algorithm for multiple agents might be derived from one of these algorithms, as it assumes the underlying planner’s relies on full observability.
To illustrate, let’s look at the FurnMove challenge [3], which operates in the same simulator as ALFRED. Agents must possess local information and policies, and in an algorithm centered on egocentric planning, each agent will maintain its own map. Implicit coordination can be achieved if every agent plans for all agents but executes only its own actions. This approach may mimic human coordination, where, knowing a piece of furniture must be moved, individuals instinctively grab the side closest to them.
- [1] C. Fritz and S. McIlraith. Monitoring plan optimality during execution. In Proceedings of the 17th International Conference on Automated Planning and Scheduling (ICAPS-07), pages 144–151, 2007.
- [2] Torreno, et al. “Cooperative multi-agent planning: A survey.” ACM Computing Surveys (CSUR) 50.6 (2017): 1-32. https://arxiv.org/abs/1711.09057
- [3]. https://ai2thor.allenai.org/FurnMove/ | Summary: The authors propose an approach combining symbolic planning and object-oriented POMDPs for symbolic planning, which gets extremely strong performance on the ALFRED benchmark and won the CVPR ALFRED challenge. Their approach uses PDDL, but extends it with a set of exploration-focused actions. They use a combination of explicit knowledge and heuristics (exploring close to a seen object for example) that are learned from data. They explore for 500 steps, using this to build a spatial-semantic graph (instead of an explicit map like many previous methods). They then use this with an off-the-shelf task planner to choose which actions to execute.
Strengths: - Building a spatial semantic graph seems like a better (more scalable) way of solving ALFRED tasks than explicit maps
- Strong performance on a well-respected benchmark
- A lot of great ideas, and some of the explanation is very good
- Great to see a *new* approach to solving ALFRED, not just building off of FILM/HLSM
Weaknesses: - The name "egocentric planning" doesn't make a lot of sense to me; everything in ALFRED is going to be some manner of egocentric planning
- Not clear how general this is - lots of engineering in the planning domain. More analysis would help here; ideally the authors could run on a different domain (OVMM might be one - ovmm.github.io), but there really aren't good options with strong existing baselines like ALFRED. Instead maybe they could describe in more detail how it would be applied to other domains.
- Writing could be improved. Several typos (Apporach --> Approach), IER (exploration algorithm) not being referenced by name in text.
- It's not really clear this is a *learning* contribution - learning components seem minor. I think this is ok, because it's still a useful result on a learning problem, but I could see the argument against it.
- Very engineering heavy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How is the spatial graph constructed? It seems like a lot of the detail here is lacking.
- How important are the 500 exploration steps? Seems like it should be able to work well without this, if it's really a good POMDP planner.
- Construction of the initial state was also a bit unclear to me.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Only applied to one benchmark
- Need for 500 exploration steps at the beginning seems weird
- Very dependent on perception models - perfect detection, depth, etc., which limits application to other domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We've addressed some of them in the general response.
> **Strengths:**
>
> - A lot of great ideas; some explanations are very good
> - Great to see a new approach to solving ALFRED, not just building off FILM/HLSM
We look forward to elaborating on these ideas.
> **Weaknesses:**
> - "Egocentric planning" doesn't make sense; everything in ALFRED is egocentric planning
Egocentric planning emphasizes using a full-observability planner for problems under partial observability. We intend to keep working in this direction. What would you suggest as a name?
> - Unclear generality; lots of engineering in the planning domain
See the general response for a discussion on planning domains. Our method emphasizes scalability and could be a target for learning.
> - More analysis needed; describe how it would apply to other domains
OVMM is a possible domain, but physical manipulation is beyond our scope.
> - Writing could be improved
Thank you. We'll correct typos and inconsistencies.
> - Not clear this is a learning contribution
Our focus is on integrating existing models. While perceptual models improve, multi-step generalization remains challenging.
> - Very engineering heavy
The task required initial domain engineering to create a set of action object types. The planning actions should generalize to objects beyond ALFRED.
- Generic: Perceptual models, Egocentric algorithm
- Environment: actions, object types
- Engineering: PDDL model, Semantic part updating
- ALFRED tuning: exploration, policy changes
> **Questions:**
> - How is the spatial graph constructed?
See general response.
> - Importance of 500 exploration steps?
Ours is a lightweight POMDP solver. Early exploration is less useful. SLAM methods could be used to update semantic maps.
> - Construction of the initial state?
The initial state comprises a location and perceived objects in the facing direction.
> **Limitations:**
> - Only applied to one benchmark
Indeed, but our method might enable other applications with trained models for the AI2Thor environment.
> - Very dependent on perception models
Our contribution abstracts the perception model, widening applicability as generalist models improve. | Summary:
The paper presents a modular approach that combines symbolic planning and object-centric Cost POMDP for solving ALFRED tasks. The proposed method demonstrates improvements compared to previous end-to-end and modular approaches, such as FiLM. Unlike methods like FILM, HLSM, and Prompter, EPA utilizes a semantic graph representation instead of a top-down 2D map.
The method incorporates an initial phase of 500 exploration steps to gather sufficient knowledge, which is crucial for determining an appropriate expandable initial state. Object information is selectively saved only when objects are within immediate reach (0.25m ahead). This selective saving facilitates the conversion of observations into a symbolic state and reduces the length of generated plans.
The authors' findings suggest that the performance enhancements achieved by EPA over FILM and similar approaches primarily stem from its iterative planning approach. This approach enables the agent to recover from failure scenarios through flexible subgoal ordering, leading to improved performance.
According to the authors, EPA achieved the second-highest ranking on the ALFRED leaderboard, closely following the Prompter method. The superiority of Prompter, however, is attributed to modifications in obstacle size and reachable distance rather than the use of prompts.
Strengths: 1. The paper demonstrates the use of preconditions and effects through PDDL to improve the overall success of long-horizon tasks, especially unseen success rate.
2. The paper combines the use of symbolic planning using learned vision and language models and highlights how certain aspects of generalization can be achieved by abstraction.
Weaknesses: 1. The current assumptions on semantic spatial graphs require random exploration for 500 steps to visit each location and form a node in the graph. The paper reports a drop in performance if this initial observation phase is ignored. This approach has two major limitations: (1) it dramatically increases the timestep overhead for the proposed agent as compared to the existing works. (2) it assumes a static unchanging environment after mapping and will likely fail in realistic environments with dynamic obstacles. Given the existing visual-inertial SLAM approaches, as noted by the authors, it seems that this issue can be mitigated. Some existing approaches for topological mapping [1] might also be relevant, and in turn, improve the path length weighted success rate (PLWSR).
2. Writing PDDL domain definitions and problems for each task is known to be a tedious coding task. Any errors in representing the available objects and actions would yield no plan. The current approach seems too close to reverse engineering the process of creating trajectories for the ALFRED task. While the authors report 150 hours for PDDL domain and problem definitions, a large chunk of the work involving object types and action predicates is already described in the ALFRED metadata. This does not give a reasonable perspective on how many hours it would take to scale and maintain this approach further, especially in the physical world.
*[1] Chaplot, D.S., Salakhutdinov, R., Gupta, A. and Gupta, S. 2020. Neural Topological SLAM for Visual Navigation. In CVPR.*
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you kindly provide some clarification regarding the semantic spatial graph? Specifically, what information does it contain and what is the average size of such a graph? The description in Appendix F2 suggests that the graph incorporates visual observations, segmentation, and depth. I would like to understand if this graph represents the visual scene "in front of" the agent. If so, I have a couple of related questions:
1. In case the agent is at the same location but facing a different direction, would the information on the graph be overridden?
2. If not, does each node in the graph contain a representation of the "360-degree visual observation" at that particular location? How are the "actions" represented as edges in this context?
2. I'm curious to know how the approach encodes common sense knowledge or visual knowledge (mentioned in Lines 57-58) as part of the domain definition in PDDL.
3. Could you please explain how the "exploration actions" are defined? In Appendix F3, Listing 3, it is mentioned that the agent cannot hold something and explore the environment. For example, can the agent actively search for the coffee after picking up the cup? I would appreciate some clarification on this.
4. I would like to understand how the possible predicates are listed in the PDDL. Additionally, how are the preconditions and effects identified for each predicate? Are they learned or inferred from visual observations, language goals, or the agent's interaction to gather information? Are these predicates hard-coded by a human? If so, I'm interested in understanding how this differs from the classical task planning setup.
5. It is not entirely clear how the generalization to new tasks is achieved, particularly when the language module is trained to output a task type out of seven tasks (as mentioned in lines 73-74). Could you please elaborate on the meaning of the statement in lines 308-309, "Our egocentric planning breaks a task into a set of goal states, autonomously generating an action sequence"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Overall, it appears that the current approach is heavily focused on reverse engineering the trajectory creation process for the ALFRED task, utilizing learned vision and language models. While the method demonstrates a high success rate on the ALFRED benchmark for unseen scenarios, there is room for improvement in terms of clarity and the overall significance of the proposed approach. It remains uncertain how applicable the use of semantic spatial graphs or PDDL domain definitions would be beyond the ALFRED simulated benchmark, particularly when considering real-world physical environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Some of your comments were addressed in the general response.
## Strengths
> The paper demonstrates:
> - The use of preconditions and effects through PDDL to improve the overall success of long-horizon tasks, especially unseen success rate.
> - The combination of symbolic planning using learned vision and language models, highlighting how generalization can be achieved by abstraction.
Those were part of our goals. Thank you.
## Weaknesses
> - The current assumptions on semantic spatial graphs require random exploration for 500 steps to visit each location and form a node in the graph.
> - Writing PDDL domain definitions and problems for each task is tedious. While 150 hours were reported for the PDDL domain and problem definitions, it remains unclear how this approach would scale and be maintained, especially in the physical world.
See general response on PDDL construction and semantic spatial graphs.
> [1] Chaplot, D.S., Salakhutdinov, R., Gupta, A. and Gupta, S. 2020. Neural Topological SLAM for Visual Navigation. In CVPR.
Thank you. We will add the citation. Very relevant.
## Questions
### Semantic Spatial Graph
- **Information and Size:** The semantic map contains visual and depth information. It stores actionable items, their class, average pixel distance, obstacles, and more. The average size of such a graph is around 500 from our observation.
- **Different Directions:** We encode unique nodes with different orientations, and there is no 360-degree observation. A detailed explanation of how the coordinates and edges are handled is provided.
### Domain Definition in PDDL
- **Common Sense and Visual Knowledge Encoding:** Extra goals can be added in PDDL, and LLM can help turn comments into extra PDDL goals.
- **Predicates and Preconditions:** Planning domains are available, human-written, or learnable. Anchor types are identified, and an egocentric algorithm solves planning problems centered on the agent location.
- **Generalization to New Tasks:** We tested on new tasks written directly as PDDL goals.
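For concreteness, a hedged sketch of what such a directly written goal might look like; the predicate and type names below are illustrative assumptions, not the paper's actual ALFRED domain:

```pddl
; Illustrative only: a "clean a mug and put it on a countertop" task
; expressed directly as a PDDL goal, independent of ALFRED's seven templates.
(:goal (exists (?m - mug ?c - countertop)
         (and (clean ?m) (on ?m ?c))))
```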
### Exploration Actions
- Exploration actions are defined to move to unexplored locations. They update the spatial graph and are affected by holding objects that obscure the view.
### Generalization and Task Handling
- **Language Module and Generalization:** We elaborate on how egocentric planning breaks a task into goal states and autonomously generates an action sequence.
- **Applicability and Reverse Engineering Concerns:** The approach focuses on ALFRED but generalizes to objects and planning actions beyond it. We address the reverse-engineering concern and the role ALFRED plays in evaluating generalization.
## Limitations
- **Clarity and Significance:** Improvement in terms of clarity and the overall significance of the proposed approach is needed.
- **Real-world Applicability:** Uncertainty regarding the applicability of semantic spatial graphs or PDDL domain definitions beyond the ALFRED simulated benchmark, particularly in real-world physical environments.
An answer regarding physical environments will be provided in the general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thoughtful questions. In response to your specific query about how our approach differs from the classical task planning setup, we would like to add:
We employ a classical planning model for ALFRED, but apply it to problems under partial observability where new objects may be revealed, both features beyond classical planning. The key advantage is that classical planning models are simpler to create. Our innovation lies in exploiting anchor predicates and exploration actions without requiring a more complex planning model.
Please let us know if there's any further clarification we can provide on the concerns you raised, and thank you again for looking into what we've accomplished. | Summary: This paper studies the problem of embodied tasks in which the agent needs to plan over long task horizons given natural language instructions. Current methods that use end-to-end training lead to entangled representations, which make it hard to solve the task. On the other hand, planning methods such as PDDL can produce high-quality actions given a well-defined problem specification. To solve the task, the paper proposes a method that consists of two parts: (1) goal-oriented exploration that aims to gather missing information, and (2) a classical planning method that aims to produce a feasible action. The proposed method is evaluated on the embodied benchmark ALFRED. The method improves the prior SOTA performance by 8.3%, winning the ALFRED challenge at the CVPR 2022 Embodied AI workshop.
To be more precise, the proposed method consists of several parts: (1) a visual module for semantic segmentation and depth estimation of the scene, (2) an egocentric planner for planning given the information gathered, (3) a semantic spatial graph for scene memorization. At the beginning of the task, the agent is provided with 500 steps to explore its surroundings. After that, the information is converted to a semantic spatial graph for the input of PDDL. Figure 1 shows the method. The algorithm iterates using the following steps: (1) find a path for reaching the goal, (2) if the goal is not reached, do exploration again.
Table 1 shows the main result of the paper. And the rest of the experiments section provides a detailed ablation study of the method.
Strengths: 1. The writing of the paper is clear and easy to follow. For instance, in the introduction, I can easily understand the motivation of the method.
2. The proposed method is a winning approach at the CVPR Embodied AI workshop in 2022. This shows that the method has been thoroughly tested, and the result is convincing.
3. In the AI era, it is nice to see a classical planning method is robust in such tasks over a neural network-based method.
4. The related work is reasonable. It covers important papers such as SayCan.
5. Overall, I think this is a good paper, and the method is elegant and promising.
Weaknesses: The proposed method could be very specific to the ALFRED task, in the sense that the method is solely optimized and engineered for the ALFRED task. For example, in some tasks, there is no abstraction of the action space, like OpenDrawer.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. If I want to deploy such a method in the real world, what would be the challenge and the additional step to do this?
2. Following the previous questions, what is the Sim-to-real gap here?
3. Will the method be able to handle tasks such as you need to open the drawer to find the coffee mug in it, instead of target objects being visible to the agent? I will say this is more challenging in the POMDP setting than the setting in the paper.
4. What are some failure cases? Could you provide a couple of examples?
5. Is it possible to integrate such a method with semantic exploration (https://devendrachaplot.github.io/projects/semantic-exploration)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper has clearly stated its limitations, including that the method cannot recover from irreversible failures, is sensitive to perception errors, and has no memory for belief tracking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. The connection with SayCan is interesting, as our work complements theirs.
> **Weaknesses:**
> The proposed method might be too specific to the ALFRED task, in the sense that the method is optimized solely for this task. For example, there's no abstraction of the action space, like OpenDrawer.
We agree that competitions—like metrics—often lead to specific optimizations. We tuned parameters, such as random exploration and object reachability depth. Yet, some optimizations are principled methods, like using graph reachability to eliminate irrelevant actions. This method is applicable beyond the ALFRED challenge.
> If I want to deploy this method in the real world, what would be the challenges and additional steps?
Our approach targets errors at the action level. For any application, ensure that actions perform as intended. In realistic environments, it is prudent to guarantee the robot's capabilities. Refer to our planning-based modular printer example.
> What is the Sim-to-real gap here?
Robust actions foster high accuracy. Ruml et al., JAIR 2011, describe a real-time modular printer. [Here's the video](https://www.jair.org/index.php/jair/article/view/10693). Noisy actions or unexpected results are outside our scope, but limited support is offered for failed actions.
> Will the method handle tasks like opening a drawer to find a coffee mug?
While we only support deterministic actions, classical planning can tackle partial observability. By following Albore, Palacios, Geffner, we can create hypothetical objects and optimistic actions. Upon interaction, if a mug is not found, the agent will seek it elsewhere.
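A minimal sketch (not the paper's implementation) of this optimistic replanning idea in the spirit of Albore, Palacios, and Geffner: assume a mug may hypothetically be inside each unopened drawer, act on the cheapest hypothesis, and revise the belief whenever an observation refutes it. All names here are illustrative assumptions.

```python
def find_mug(drawers, actually_contains_mug):
    """Open drawers under the optimistic assumption that each may hold the mug."""
    belief = {d: "maybe-mug" for d in drawers}  # hypothetical objects
    steps = []
    for drawer in drawers:                      # try hypotheses in order
        steps.append(f"open({drawer})")
        if actually_contains_mug(drawer):       # observation confirms hypothesis
            steps.append(f"pickup(mug, {drawer})")
            return steps
        belief[drawer] = "no-mug"               # refuted: replan for the next drawer
    return steps                                # mug not found anywhere


# The agent seeks the mug elsewhere after each failed hypothesis:
plan = find_mug(["d1", "d2", "d3"], lambda d: d == "d2")
# plan == ["open(d1)", "open(d2)", "pickup(mug, d2)"]
```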
> What are some failure cases?
Failures often arise from object recognition and distance estimation errors. An agent might misidentify an apple or collide with an obstacle. These failures are detailed in Table 2 on page 7.
> Is it possible to integrate this method with semantic exploration?
Thank you for the reference. Our plans can play the role of the local policy $\pi_L$ in SemExp, and we can integrate with other SLAM algorithms by using SLAM updates instead of our own.
> **Limitations:**
> The paper states the limitations, including irrecoverable failures, sensitivity to perception errors, and no memory for belief tracking.
Thank you. We do perform belief tracking in ALFRED. The semantic graph contains current information, and unvisited locations implicitly represent unknown objects and locations. Our method's open-world nature means we don't assume a finite number of objects. This relates to the 0-approximation notion by [Baral et al., AIJ 2000](https://www.sciencedirect.com/science/article/pii/S0004370200000436). | Rebuttal 1:
Rebuttal: ## Introduction
We would like to extend our heartfelt gratitude to the reviewers for their insightful comments and constructive criticism. The feedback has been encouraging and instrumental in helping us understand the significance of our work in the following areas:
- Our **modular approach** (R1, R3)
- The **role of symbolic planning** in solving a diverse set of tasks (R1, R2, R3)
- Our unique **notion of the semantic graph** (R4)
- The success we demonstrated in our entry in the **ALFRED competition** (all reviewers)
However, the reviewers have raised certain questions regarding the semantic graph, the symbolic planning model, and their relationship. We appreciate the opportunity to respond and provide clarity on these aspects.
## Core Contributions
Before diving into the detailed response, we wish to revisit the core of our contribution. Our focus is on environments with fixed, known types, relationships, and actions grounded on them. We aim to solve a series of tasks in the same environment, and our winning entry in the ALFRED challenge is a testament to the method that allows us to explain a particular implementation in depth.
### The Essence of Our Method
- We present a lightweight method for achieving goals that can be expressed and solved in the fixed environment.
- In ALFRED, we reuse existing perceptual models and employ a simple exploration strategy, so the novelty of our method resides in the planning part.
- While ALFRED features only seven classes of tasks, our method supports others as long as natural language processing can map the user goal into the fixed object types and relationships.
- We expect non-symbolic planning methods to perform better with more data or by focusing on specific tasks. However, our method serves as a simple baseline for studying systematic generalization and compositionality per domain.
### The ALFRED Challenge
- Our method's effectiveness is shown by our success in the ALFRED challenge.
- Though limited to seven task classes, our approach can support other tasks.
- We demonstrate additional tasks that can be solved in the implicit Object-Oriented POMDP, consisting of the ALFRED environment and standalone models for object detection and semantic parsing of the tasks.
### Object-Oriented POMDPs
R1 rightly pointed out issues in sections 4 and 5 of our manuscript describing Object-Oriented POMDPs and the core algorithm. In response:
- We have massively rewritten these sections.
- The definition of Object-Oriented POMDP is now more finely defined.
- In section 5, we define an Environment as a tuple \(\mathcal{E} = \langle A_\mathcal{E}, \mathcal{T}_\mathcal{E}, \mathcal{V}_\mathcal{E}, \mathsf{reset}, \mathsf{step} \rangle\), whose elements represent the set of parameterized actions, the object types, their possible values, and the \(\mathsf{reset}\) and \(\mathsf{step}\) functions that return the initial observation and execute actions.
- Our method's goal is to solve a series of tasks in the same environment, which we define as a Task, \(T_\mathcal{E} = \langle I_\mathcal{E}, G_\mathcal{E} \rangle\), setting the initial state and goal of the environment.
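The two tuples above can be sketched as plain data structures; the field names and example values below are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Set


@dataclass
class Environment:
    actions: Set[str]            # A_E: parameterized action schemas
    object_types: Set[str]       # T_E: object types
    values: Dict[str, Set[str]]  # V_E: possible values per type
    reset: Callable[[], Any]     # returns the initial observation
    step: Callable[[str], Any]   # executes an action, returns an observation


@dataclass
class Task:
    initial_state: Any           # I_E: sets the environment's initial state
    goal: Any                    # G_E: the goal to achieve


env = Environment(
    actions={"GotoLocation", "PickupObject", "OpenObject"},
    object_types={"Mug", "Drawer"},
    values={"Mug": {"clean", "dirty"}},
    reset=lambda: "initial-observation",
    step=lambda act: f"obs-after-{act}",
)
task = Task(initial_state="s0", goal="mug is clean and on a countertop")
```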
### Observations and Planning
Our approach entails three major components:
1. **Object-Oriented Environment**: The environment encapsulates object types, potential values, and actions.
2. **Egocentric Planning Algorithm**: This algorithm uses deterministic actions and re-planning, focusing on a fragment of Object-Oriented POMDPs where the set of objects is initially unknown.
3. **Symbolic Planning Model**: Assumed to be provided by the user, the symbolic model may be non-trivial but can be argued to be necessary for generalization.
We also introduce the notion of **anchor objects**, which may reveal the existence of new objects. This concept generalizes to other domains, allowing the agent to discover new options while navigating different environments, such as a warehouse or a web application.
### Alignment between Symbolic Model and Environment
The alignment between the symbolic model and the underlying environment can be challenging. However, in ALFRED, this alignment is facilitated by a spatial semantic map. For each unique environment, we use a spatial graph to encode the agent's location and direction, with movements becoming the edges. This simplifies navigation and serves as a top-down map.
### Engineering Decisions
Some reviewers raised questions about our engineering decisions in ALFRED, like the exploration strategy. We will respond to these directly but emphasize that our ALFRED entry demonstrated our approach successfully.
## Additional Considerations
- **500 Steps**: We need to address the choice of limiting actions to 500 steps, explaining the reasoning and potential outcomes if this constraint is not used.
- **Sim2Real Limitations**: Our approach does not support continuous actions or complex manipulation. We assume discrete actions.
- **Perception and Multimodal Models**: We utilize multimodal models such as CLIP [14] and DETIC [26] for open-vocabulary object detection.
- **Potential Applications**: Our method could be applied in various scenarios, as described in table 1 of "HomeRobot: Open-Vocabulary Mobile Manipulation."
## Conclusion
Our response aims to shed light on our method's novelty, the alignment between the symbolic model and environment, and other specific questions raised by the reviewers. We hope to have provided a comprehensive overview of our work and look forward to further discussions and feedback. Thank you again for your valuable insights.
Pdf: /pdf/58ee8ac578d763e31970a11ee79ad25af26dcdcc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose a hybrid approach leveraging neural perception models and symbolic planners for egocentric planning and task completion in embodied environments. They demonstrate their approach in ALFRED benchmark winning the 2022 CVPR challenge. Their central idea is the use of symbolic planners, which are typically used in fully observable settings. To deal with partial observability, they perform iterative exploration with symbolic planning. Structurally, their implementation is similar to the previous SOTA on ALFRED — FILM. However, instead of using a semantic map as FILM does, they use a graph structure for object and location information storage. This graph is updated through exploration and then used by the downstream symbolic planner.
Strengths: - The authors’ hybrid approach leveraging symbolic planners is novel amongst the modular approaches for long-horizon EAI tasks, and can be promising for future research.
- The authors won the CVPR 2022 EAI ALFRED challenge.
Weaknesses: - Poor clarity of exposition: I found the major aspects of authors’ technical approach poorly explained in the paper. This really limits reproducibility as well as use by the community IMO and consequently limits the value of the paper.
- Spatial graph: Unclear how the graph is built and what does it look like. The authors say that the graph encodes location as the node key and visual observations as values. However, it is unclear what is the co-ordinate system for location (consequently would we get the same graph for the same environment and task but different agent start position?), what happens when the agent visits the location more than once, and lastly, how is it used for the low-level policies that have to navigate given there is no map.
- Object-centric POMDP: The authors briefly mention the Cost POMDP but do not appropriately define or explain it at all in the main paper. Most definitions are pushed to supplementary. While it is okay to push nitty-gritty details to supplementary, without properly framing the problem and then grounding their approach in the framework, I am not sure if I can buy the authors’ claim that their approach is theoretically sound (L121).
- Exploration: Unclear how exploration is handled. The only description is in the intro (L52-56).
- The algorithm is not commented and contains symbols which are never defined or explained, making it difficult to understand what did the authors actually implement and why:
- Symbol Ge is never defined
- Unclear how cost c of action is computed and used in function $M^{PD}$.
- The authors say that the PDDL domain and environment definitions of actions are misaligned, which make sense (L149). However they never explain how this alignment is obtained in their approach.
- Furthermore, given the PDDL and planning terminologies might not be accessible/known to everyone, I encourage the authors to be more careful and clear in their descriptions. For instance, what exactly are anchor object types and why do we need them? Also unclear what other anchor object types can exist in EAI settings.
- I didn’t get much out of Sec.4 and 5 despite the fact that I work with symbolic planning and EAI and even when there is so much to explain about the approach as mentioned above. I recommend rewriting entire Sec.4 and 5.
- Generalization: Authors claim and attempt to show generalization in Sec.8.4. However, this section had no details whatsoever about the new tasks they use nor a reference to supplementary. I managed to find some details in supplementary. However, I am sure a general reader will be lost here. Furthermore, the authors claim that they do better on the new tasks and achieve zero-shot success 82% of the time, better than other methods, but do not show any such comparison in their work (Tab. 6, supplementary). Given this is one of the main claims of advantage of their approach, I’d like to see comparisons with neural and template-based approaches, and also with the Prompter method.
With the above (also see limitations section below), IMO the paper is not yet at par for publication at Neurips. In its current state, it can be a great workshop paper but requires major rewriting and additional comparisons e.g., for generalization claims otherwise.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Table1: main results should have bolded numbers and indications on which metrics need to be higher/lower for better task performance (e.g., with $\uparrow$ and $\downarrow$) to improve interpretability of results. Similarly sorting rows in Tab2 based on performance in unseen might be useful.
- L166: “This setup promotes more robust action sequences and generalization beyond the seven ALFRED-defined tasks” — not sure how?
- Results: Given that the LGS-RPA also comes close in terms of GC and SR (unseen) to the authors approach, I encourage the authors to discuss LGS-RPA in the result section as well.
- Tab:3 Unclear why the method only achieves ~60-70% GC with all ground truth available. Is this the upper bound? Why is the upper bound not 100% given that the env. is deterministic and we are using a symbolic planner?
- Is the planning performed from scratch every time? I imagine that the state of the world and thus the graph have incremental changes so wondering if the authors do anything smart to reduce the planning time at each iteration.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - The authors do not touch upon the issue of obtaining the PDDL domain in the first place, which is core to their approach. Unclear how this would scale for real world applications/agents and how errors in the domain file can impact their approach. Lastly, they mention that they can handle additional constraints, e.g., energy, during planning. However, in practice, I’ve found planning time to blow up when using durative and cost constraints. Unclear again how the authors' approach would scale for more complex problems. Also, on that note, I’d like to see planner time numbers for the tasks, perhaps in the supplementary.
- Similarly, the authors say that they are more robust to perception errors however they say they do so using hand-crafted policies for replanning (Sec.8.3). Unclear if the robustness is because of these handcrafted policies or because of their iterative planning approach combining exploration and symbolic planning.
- The section on broader impact and societal impact is missing. I'd encourage the authors to think about real world applications that their work might enable for this section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful review.
There are indeed issues in the manuscript.
We apologize for the lack of clarity.
As we stated, we have already improved part of the issues in the manuscript and will certainly further improve it thanks to your feedback.
The general response addresses the following issues mentioned in this review:
- Object-oriented POMDP and framing of the problem.
- Spatial semantic graph.
- Misalignment between the environment and the symbolic model.
- The PDDL provided by the user.
Here, we provide additional answers:
> **Spatial graph**: See the attached PDF and the general response.
The coordinates are (x-coordinate, y-coordinate, facing direction).
If the agent visits the same location more than once and faces the same direction as before, that will not create a new node.
Visiting the same room twice would create the same graph if we fixed the random seed for exploration, and all other components were deterministic.
If we let the agent explore the whole space in another order, the new resulting semantic graph would be equivalent to the previous one, except for object names created on the fly.
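A minimal sketch of this node-identity rule, assuming (as described above) that a node is keyed by (x, y, facing direction), so revisiting a pose does not create a new node while a new orientation at the same cell does. Class and field names are illustrative assumptions.

```python
class SpatialGraph:
    def __init__(self):
        self.nodes = {}     # (x, y, facing) -> accumulated observations
        self.edges = set()  # movement actions connecting poses

    def observe(self, x, y, facing, observation):
        key = (x, y, facing)
        if key not in self.nodes:   # revisiting the same pose adds no node
            self.nodes[key] = []
        self.nodes[key].append(observation)
        return key


g = SpatialGraph()
g.observe(0, 0, "north", "table")
g.observe(0, 0, "north", "mug")   # same pose revisited: merged into one node
g.observe(0, 0, "east", "sink")   # same cell, new orientation: a new node
```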
> **Algorithm issue**: Apologies.
The algorithm has been fixed.
For instance, $G_e$ is now part of the Task that the algorithm receives as input.
We define Tasks in the shared response.
> **Cost computation**: The generic algorithm supports a planner that minimizes or reduces cost.
The classical planner used in ALFRED does not use cost, but the plans tend to be short, as the domains are simple from the planning point of view.
> **PDDL terminology**: We agree it might not be accessible.
We will improve the manuscript by providing timely examples and leave the formal definitions for the supplementary material.
> **Generalization claim**: We fixed the references and improved the description to emphasize that we are solving new tasks, starting with a symbolic goal.
So, the 82% success rate is related to the 52.29% for w/ gt language in Table 3, valid unseen.
The comparison is not meaningful as these are different tasks.
> **Comparisons**: Implementing the new tasks in Prompter, FILM++ (Inoue et al. 2022), and FILM would require retraining the classifier with new tasks and creating templates by hand.
While our results rely on creating a symbolic planning model, its compositionality allows us to request the agent with unforeseen tasks.
The best course of action depends on the domain.
> **Publication readiness**: We hope this response has clarified how we will address the limitations of the current manuscript.
Please let us know if there is anything else that we could address, which might change your assessment regarding our submission’s acceptance.
> **Table 1 and 2 recommendations**: Will do.
For all the metrics, higher is better.
Success Rate demands to achieve the goal, while Goal-condition Success focuses on subgoals.
However, in planning, achieving subgoals independently might not predict the success rate and have low value for the user.
> **Clarification on robust action sequences**: We intended to say that our setup can solve tasks beyond the seven included in the ALFRED challenge.
We will improve Section 8.4 to make this connection clear.
The robustness of new tasks depends on the robustness of each action that might have been used in previous tasks.
> **Anchor object types**: We discussed them in the general response.
An anchor type in an extension of ALFRED could be a panel that shows the power status of appliances.
> **LGS-RPA comparison**: LGS-RPA is a very interesting piece of work.
However, we believe its improved accuracy is due to its landmark detection and local pose adjustment techniques, which are not the emphasis of our work.
We find it difficult to conduct a proper comparison.
> **Tab. 3 and upper bound**: There are two main reasons we cannot achieve 100% even with ground truth.
First, the simulator can get stuck when holding large objects close to a wall, and some objects spawned in corners cannot be reached due to the fixed 90° turns.
Second, our method does not build a graph for ‘turning up’ and ‘turning down’ actions, making some things unreachable.
> **Planning process**: As classical planners expect a fixed set of objects, we plan from scratch every time we reach an exploration goal that might reveal new objects, or if there is an execution failure that we use for revising the semantic graph.
> **PDDL scaling and constraints**: We discuss how the PDDL model is obtained in the general response.
Writing, learning, or debugging PDDL leads to higher generalization, and the right decision depends on the domain.
Regarding constraints, some can be mapped into classical planning with costs.
> **Planner time numbers**: In the beginning, replanning is frequent but fast.
When the graph is large enough, the plan can take up to 5 seconds but often takes around 1 second.
> **Robustness**: Navigating around objects is a weak point of our approach.
Our hand-crafted policy doubles down in this direction, following the ALFRED environment’s simplification.
The strength of our approach lies in exploiting skills and perception across tasks.
> **Broader and societal impact**: Thank you.
We will elaborate on the potential benefits of our approach in this section, including robustness and higher explainability.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed rebuttal
Comment: Thanks for providing various clarifications, a graphic for the graph, and updating symbols and descriptions. I am a little concerned that many technical descriptions in the paper have been changed but not reviewed fully (as in initial reviews), but I'll let the ACs decide if it would be okay to accept the paper in such a case. Either way, I'll raise my rating to "weak/borderline accept" -- I will not argue for the paper but I won't push back if other reviewers are supporting the paper/there is a champion for the paper.
- It would be good to provide clear distinction on LGS-RPA and why a comparison isn't possible in the main paper. Same for the tasks used for generalization, to explain why other baselines are not possible.
- I am also concerned about the planning time numbers for real-world applications, perhaps the authors can add that in their discussions/limitation section.
- Also glad to see more clarity on PDDL assumptions and prior work on learning PDDL. Hoping authors can do the same in the main paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for diving back in and being amenable to raising the score! All three suggestions would certainly make for a stronger paper, and we'll gladly make the additions to the final paper. Do let us know if you'd like us to surface any of that discussion here for further review, and thanks again for your insight! | null | null | null | null | null | null |
Temporal Continual Learning with Prior Compensation for Human Motion Prediction | Accept (poster) | Summary: The paper proposes a novel multi-stage training framework called Temporal Continual Learning (TCL) for Human Motion Prediction (HMP) to address the challenges of short-term and long-term predictions and the incorporation of prior information from past predictions into subsequent predictions. The Prior Compensation Factor (PCF) is introduced to compensate for the lost prior information, and an optimization objective is derived through theoretical derivation. The TCL framework can be easily integrated with different HMP backbone models and adapted to various datasets and applications. Extensive experiments on three HMP benchmark datasets demonstrate the effectiveness and flexibility of TCL.
Strengths: + Introducing the Temporal Continual Learning (TCL) framework, a multi-stage training framework that addresses the constraint between short-term and long-term prediction in Human Motion Prediction (HMP) and allows for better utilization of prior knowledge from short-term prediction to enhance the performance of long-term prediction.
+ Introducing the Prior Compensation Factor (PCF) to mitigate the issue of forgetting information during multi-stage training, and deriving a more reasonable optimization objective through theoretical derivation.
+ Exploring a new pipeline for the HMP task.
Weaknesses: + Writing:
+ Without periods in lines 113-117.
+ $P\left(Z_{k} \mid Z_{1} Z_{2} \cdots Z_{k-1} \theta\right)$ -> $P\left(Z_{k} \mid Z_{1} Z_{2} \cdots Z_{k-1}; \theta\right)$.
+ **The following concepts are confusing: short+long, short only, short then short+long. The explanation in Figure 1 is not easy to understand. I suggest providing a figure to illustrate the difference between the three concepts.** It is essential to your motivation.
+ **Please provide more experimental settings about the toy example (Figure 1).**
+ Methods part should be carefully written. Otherwise, it is somewhat confusing.
+ The motivation for designing the $\alpha$s is not clear. The authors name them Prior Compensation Factors in Section 3.2. Why can they compensate for forgotten knowledge when leveraging prior knowledge? Each is a scalar, so why can it reflect so much knowledge of complex motions?
+ For experiments:
+ Datasets: I suggest authors provide the results on AMASS datasets, which will make your work more solid.
+ Authors choose PGBIG as the backbone. More choices and comparisons of backbones should be presented in the main result (not in ablation). This verifies the main contribution of the paper.
+ I did not find the codes and demo videos in the supplementary. (Not necessary. If provided, more convincing. Pretrained models with inference codes are acceptable.)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: + It is clear that "our objective" is between the "actual objective" and "naive objective". In Figure 2, is "our objective" close to the "actual objective"? Why isn't it closer to the "naive objective"? Any intuition or proof?
+ Missing baselines or related work: [1, 2, 3, 4]. I would like to discuss with the authors about the choices of baselines.
+ Efficiency and the multi-stage pipeline. Recent research [5] suggests predicting motions in one stage, which is easier to train. Will the multi-stage training be harder to tune or train? Will it be more time-consuming? I would like to discuss this with the authors.
+ It will be great if authors can discuss or provide zero-shot adaptation experiments on other datasets.
**I would like to discuss with the authors based on their rebuttal. The experiments are not sufficient and the presentation of the paper is not good enough; however, the theoretical insights are interesting. Therefore, I give a weak accept score here. I will adjust my score according to the authors' responses, which I will check carefully. I would like to discuss the details of the paper with the authors and jointly improve its quality.**
[1]: Zhong, Chongyang, et al. "Spatio-temporal gating-adjacency GCN for human motion prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2]: Sofianos, Theodoros, et al. "Space-time-separable graph convolutional network for pose forecasting." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[3]: Bouazizi, Arij, et al. "MotionMixer: MLP-based 3D Human Body Pose Forecasting."IJCAI 2022.
[4]: Guo, Wen, et al. "Back to mlp: A simple baseline for human motion prediction." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.
[5]: Chen et al. "HumanMAC: Masked Motion Completion for Human Motion Prediction." arXiv preprint arXiv:2302.03665 (2023).
---
I revise my rating to borderline accept. See [detail](https://openreview.net/forum?id=v0GzRLvVp3&noteId=FWQ456OsEQ).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Authors discussed limitations.
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: **Part 1 (Part 2 is in global rebuttal)**
Thanks to the reviewer for the constructive comments. We have carefully addressed your concerns and provided detailed responses for each review.
**Q1: Some issues with the wording.**
Re: Thank you for pointing them out. We will correct them.
**Q2: Some questions of fig.1.**
Re: Thank you for your suggestion. "short+long" represents the training approach of the baseline model, where both short-term and long-term predictions are trained together. "short only" indicates that only short-term predictions are trained on the baseline model without considering long-term predictions. "short then short+long" means that after training the model using the "short only" approach, the model is further trained by combining both short-term and long-term predictions together. We will provide a figure to illustrate the difference between different concepts and express it in a clearer manner. As for the experimental setting of Fig. 1, we followed the implementation details of the backbone PGBIG.
**Q3: It is not clear about the motivation for designing αs. The authors name it Prior Compensation Factors in Section 3,2. Why can it compensate for forgotten knowledge when leveraging prior knowledge? It is a scalar and why can it reflect so much knowledge of complex motions?**
Re: A preliminary experiment showed that knowledge learned in short-term prediction can serve as a prior to facilitate learning of the far future. This motivated us to formally model motion prediction and propose a multi-stage training strategy that better exploits this prior knowledge. However, since we discovered that the prior knowledge acquired from previous tasks diminishes when switching stages, our motivation for designing the $\alpha$s is to address this issue and estimate the lost prior knowledge to assist in predicting subsequent tasks. Through a series of derivations, we obtain the final objective function in Eq. 5, where the $\alpha$s control the weighted combination of losses at different stages. Since fitting the long-term prediction dominates the learning process, the prediction model struggles to preserve the prior knowledge learnt from earlier short-term predictions. Consequently, the loss function assigns lower weights to the current task to mitigate the loss of forgotten information. Thus, $\alpha$s which is a vector does play a compensatory role for the forgotten knowledge and demonstrates effectiveness.
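As a rough illustration of the weighted combination described above, the stage-k objective could be sketched as follows. This is a hypothetical sketch on our part, not the authors' code: the function name, the per-task loss list, and the assumption that each term is weighted by $(1-\alpha)$ (as the reviewers' reading of Eq. 5 suggests) are all ours, and the paper's exact formulation may differ.

```python
def stage_objective(task_losses, alphas):
    """Hypothetical sketch of a stage-k training objective.

    task_losses: per-task losses [L_1, ..., L_k] for tasks Z_1..Z_k.
    alphas: estimated prior compensation factors, one scalar per task;
            each term is weighted by (1 - alpha), so a larger alpha
            (more prior knowledge lost) lowers that term's weight.
    """
    assert len(task_losses) == len(alphas)
    return sum((1.0 - a) * loss for a, loss in zip(alphas, task_losses))
```

For example, with losses `[1.0, 2.0]` and factors `[0.2, 0.5]`, the combined objective is `0.8 * 1.0 + 0.5 * 2.0 = 1.8`. Whether the weights enter exactly this way is precisely the point reviewers GqjD and aXof raised about Eq. 5.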
**Q4: Datasets: I suggest authors provide the results on AMASS datasets, which will make your work more solid.**
Re: We follow your suggestion and conduct experiments on the AMASS dataset. However, due to the unavailability of some sub-datasets ("Eyes_Japan_Dataset" and "BioMotionLab_NTroje") in AMASS, we could only perform experiments on a subset. We use the STSGCN [2] model you mentioned as the backbone. Detailed results can be found in Q8. The experimental results indicate that our strategy effectively enhances model training on this dataset.
**Q5: Authors choose PGBIG as the backbone. More choices and comparisons of backbones should be presented in the main result (not in ablation). This verifies the main contribution of the paper.**
Re: We select PGBIG as the backbone of our main experiments as it is the state-of-the-art architecture for human motion prediction under the common setting. However, we have also followed your suggestion and tested our approach with other backbones, such as STSGCN [2] (GCN-based), MotionMixer [3] (MLP-based), siMLPe [4] (MLP-based) and POTR [6] (Transformer-based). The results can be found in Q8. As can be seen, our training method still improves the performance of the original prediction backbone models.
**Q6: I did not find the codes and demo videos in the supplementary. (Not necessary. If provided, more convincing. Pretrained models with inference codes are acceptable.)**
Re: We have provided a link to the demo video to the AC. Due to NeurIPS' official policy that prohibits providing external links in the rebuttal, we will release our code and pre-trained models after this work is accepted for publication.
**Q7: It is clear that "our objective" is between the "actual objective" and "naive objective". In Figure 2, is "our objective" close to the "actual objective"? Why isn't it closer to the "naive objective"? Any intuition or proof?**
Re: Fig. 2 is an illustrative diagram showing the relative positioning of the "naive objective", "our objective" and the "actual objective". It serves to demonstrate that our objective is a better upper bound than the naive approach.
**Q8: Missing baselines or related work: [1, 2, 3, 4]. I would like to discuss with the authors about the choices of baselines.**
Re: Our approach can be flexibly applied to various backbones, such as [1, 2, 3, 4]. [1] enhances the generalization ability of GCNs using a gating network. [2] utilizes a space-time-separable GCN to extract features from different dimensions. [3] presents an MLP-based architecture that effectively leverages spatiotemporally aggregated features. [4] is a recent MLP-based model. Due to the lack of open-source code for [1], we can only report experimental results for [2], [3] and [4]. The experiment with baseline [2] was conducted on the AMASS dataset with mean per-joint position error as the evaluation metric. The experiments with baselines [3, 4] were conducted on the Human3.6M dataset, also with mean per-joint position error as the evaluation metric. We also conducted experiments on POTR [6] (a Transformer-based model), whose results are reported using Euler angle error as the evaluation metric. From the experimental results in Tables 6, 7, 9 and 10, it can be observed that our strategy consistently improves the performance of these backbones, validating the effectiveness of our proposed strategy. We will provide discussions of [1, 2, 3, 4] in subsequent versions of the manuscript.
---
Rebuttal Comment 1.1:
Title: Response#1 to authors (after rebuttal)
Comment: Thanks for your efforts. My concerns still exist.
I reply to the author's rebuttal ASAP to allow more time for the author's feedback.
+ For Re1, please show how you will revise the method part. How can I be convinced?
+ Authors did not provide a figure in the rebuttal pdf. It is not convincing.
+ I still do not know what $\alpha$s mean. "Thus, $\alpha$s which is a vector does play a compensatory role for the forgotten knowledge and demonstrates effectiveness." Are $\alpha$s vectors or scalars?
+ Note that "Eyes_Japan_Dataset" and "BioMotionLab_NTroje" are used in the HumanML3D dataset. Besides, [2] provides experiments on AMASS (author acknowledged in rebuttal). I am curious about why they cannot be used.
+ For Re5, the author did not get my idea. My comment is that the ablation should be your main table result to verify your claim.
+ For Re6, I will discuss with AC in the following process.
+ **Re7 is dodging my question.** In Figure 2, is "our objective" close to the "actual objective"? Why isn't it closer to the "naive objective"? Any intuition or proof? Can you provide any answer?
+ For Re8, will you release the code about mentioned experiments if accepted? Note that both reviews and responses will be open if accepted.
+ The zero-shot experiment is not convincing enough. The training and zero-shot experiments are both on the Human3.6M dataset, so it poses no real challenge for the method. A better choice is training on H3.6M and testing on AMASS. For code and the setting, please refer to `https://github.com/LinghaoChan/HumanMAC#zero-shot-prediction-on-amass`. Therefore, I will not improve my score based on the current experiments.
### I read other reviewers' concern during the rebuttal and provide following concerns.
+ I found that reviewer skai shows the same concern on $\alpha$s, and the authors did not provide the figure suggested by reviewer skai. What is the reason?
+ "Q6 by GqjD: In Eq. 5 the weights for each term is changed from \alpha to 1-\alpha, is this really valid?" I have the same concern with reviewer GqjD. Author did not provide any evidence to verify it.
---
Reply to Comment 1.1.1:
Title: Replying to the reviewer's comments (Part 1)
Comment: Thanks to the reviewer for the constructive comments.
**Q1: For Re1, please show how you will revise the method part. How can I be convinced?**
Re: We will revise the method part mainly in the following aspects. For the expressions regarding the training periods, we will explicitly define the human motion prediction problem, which aims to predict the future sequence over the period $T_h+1:T_{Z_K}$ conditioned on the observed history $T_1:T_h$. To achieve this, we first divide the future sequence into K segments, with the k-th segment ranging from $T_{Z_{k-1}}+1$ to $T_{Z_k}$, and define the task $Z_k$ as predicting the motions of the k-th segment. We will include this in the revision. As for the ';' before $\theta$, we will follow your suggestion and rewrite it. Also, we accidentally omitted an explanation in Eq. (3) that $\alpha$ is a random variable. With this explanation, the form of $\alpha$ in the loss function should be clearer. We hope the revision is now acceptable.
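For concreteness, the segmentation just described can be sketched as below. This is a minimal illustration under our own naming (the function, argument names, and 0-based indexing are assumptions, not the paper's implementation).

```python
def split_future_into_tasks(future_frames, boundaries):
    """Split a future sequence into K prediction tasks Z_1..Z_K.

    future_frames: the frames X_{T_h+1 : T_{Z_K}} as a list/array
                   (0-indexed relative to the start of the future).
    boundaries: cumulative segment end indices [T_{Z_1}, ..., T_{Z_K}]
                relative to the start of the future sequence, so task
                Z_k covers frames T_{Z_{k-1}}+1 .. T_{Z_k}.
    """
    segments, start = [], 0
    for end in boundaries:
        segments.append(future_frames[start:end])
        start = end
    return segments
```

For example, 25 future frames with boundaries `[5, 15, 25]` yield three tasks of 5, 10, and 10 frames, matching a short-to-long, easy-to-hard curriculum.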
**Q2: Authors did not provide a figure in the rebuttal pdf. It is not convincing.**
Re: Actually, the rebuttal pdf is already filled with the experiment result tables required by the reviewers, which occupy the entire page. Thus, limited by the one-page space constraint, we are unable to include a figure in the rebuttal pdf. Note that we have already explained Fig. 1 in our initial rebuttal response ("short only" indicates that only short-term predictions are trained on the baseline model, without considering long-term predictions; "short then short+long" means that after training the model using the "short only" approach, the model is further trained on both short-term and long-term predictions jointly), which we consider a clear description of the training strategy. In our revision, we will incorporate these into the figure and explicitly illustrate the sequences and strategy involved in the toy experiments.
**Q3: I still do not know what αs mean. "Thus, αs which is a vector does play a compensatory role for the forgotten knowledge and demonstrates effectiveness." Are αs vectors or scalars?**
Re: Each $\alpha$ is a scalar. Here, "$\alpha$s" refers to a vector containing multiple elements, where the i-th element is the scalar for the i-th training sample in a given stage.
**Q4: Note that "Eyes_Japan_Dataset" and "BioMotionLab_NTroje" are used in the HumanML3D dataset. Besides, [2] provides experiments on AMASS (author acknowledged in rebuttal). I am curious about why they cannot be used.**
Re: We have double-checked and confirm that the link to the AMASS dataset provided in [2] does not include the BioMotionLab_NTroje and Eyes_Japan_Dataset subsets. We suspect that these subsets are not available due to privacy or copyright concerns, since [2] is an early work published before 2021. Indeed, this is not an isolated case in the community. For example, raw videos in Human3.6M are no longer available, even though they could be downloaded before 2022.
**Q5: For Re5, the author did not get my idea. My comment is that the ablation should be your main table result to verify your claim.**
Re: Thank you for your suggestion. In the ablation study, we have provided the results on all of the datasets involved in the main experiment section. We will move the results to the main table in the final version.
**Q6: Re7 is dodging my question. In Figure 2, is "our objective" close to the "actual objective"? Why isn't it closer to the "naive objective"? Any intuition or proof? Can you provide any answer?**
Re: Indeed, Figure 2 is provided as a simple graphical illustration showing that our objective is closer to the "actual objective" than the "naive objective" is. Regarding your concern about whether "our objective" is closer to the "actual objective" or the "naive objective", we cannot provide a definite conclusion, as the distances depend on how well the model has learned. We would like to further clarify that this is not our main concern and it does not affect the conclusions derived in this work.
**Q7: For Re8, will you release the code about mentioned experiments if accepted? Note that both reviews and responses will be open if accepted.**
Re: Yes. We will release code of all the experiments if accepted.
---
Rebuttal Comment 1.2:
Comment: If authors finished resolving some sub-questions, you can provide sub-responses first.
---
Rebuttal 2:
Title: Revise my rating
Comment: Dear AC, reviewers, and authors,
I provide my latest rating here. I have carefully read the manuscript, supplementary material, authors' rebuttal, and the other reviews. Therefore, I provide my comments on the rebuttal and other reviews [here](https://openreview.net/forum?id=v0GzRLvVp3&noteId=7H0dti0atS). After many rounds of discussion, I have a better understanding of this manuscript.
During the rebuttal and discussion, the authors tried to resolve my concerns. **Following my code-level suggestions, the authors provide zero-shot experiments on AMASS, as well as training and prediction results on AMASS.** I appreciate the authors' efforts. Besides, the authors took my advice and detailed how they would revise the manuscript if accepted. This is serious and convincing, and the experiments of this work are more solid than in the first version. However, some of my concerns remain.
**This manuscript has some significant writing problems in the submission.** The caption of Fig. 1 is really hard to follow in the first version. The $\alpha$s are confusing for readers (reviewer skai raised this concern first as well). In the rebuttal pdf, there are no captions for the tables. It seems like a not-well-prepared submission.
When performing experiments on AMASS, the authors claimed multiple times ([first time in Re4](https://openreview.net/forum?id=v0GzRLvVp3&noteId=JXNkeD6aGJ), [second time in Re4](https://openreview.net/forum?id=v0GzRLvVp3&noteId=aMZNvA21Lp)) that the data could not be found. After I provided the authors with [guidance](https://openreview.net/forum?id=v0GzRLvVp3&noteId=gktdoOx4GB) for downloading the data, the authors finally found it ([see Re3](https://openreview.net/forum?id=v0GzRLvVp3&noteId=4grqIk0xER)). **The authors also acknowledged this carelessness during the [discussion](https://openreview.net/forum?id=v0GzRLvVp3&noteId=GCaELD7JRr).** Since AMASS (cited 650+ times) is a dataset commonly used by peers and the community studying human motion, **I have concerns about the professionalism of the authors**.
Overall, I think the quality of this work is at **the bottom ~10% of accepted NeurIPS papers**. That is to say, it is **marginally $\underline{\text{above}}$ the borderline**. Therefore, I will revise my rating **from WEAK ACCEPT to BORDERLINE ACCEPT confidently**.
Best,
Reviewer aXof
---
Rebuttal Comment 2.1:
Title: Replying to the reviewer's comments
Comment: Thanks for appreciating our efforts in this work. **Indeed, the contribution of this work is significant and its effectiveness has been demonstrated by extensive experiments, including the experiments on three datasets (Human3.6M, CMU-MoCap and 3DPW) reported in the manuscript and the new experiment on the AMASS dataset requested during the rebuttal.** We will follow your suggestion to improve our writing in the revision, including the statement of the $\alpha$s and the caption of Figure 1. We would appreciate it if you could consider how our work inspires and helps future research in the community. | Summary: This paper introduces a continual learning insight into human motion prediction. Through analysis of the performance relationship between short- and long-term prediction, a compensatory method is proposed in a multi-stage learning setting. On several widely used benchmarks, the proposed method is combined with different backbones and methods and compared with previous methods. Some decent progress is shown according to the analyses.
Strengths: + Splitting the long-term prediction into multiple stages and using the continual learning insight is interesting and non-trivial.
+ The method proposal looks sound and designed well.
+ A good comparison and discussion are given to support the proposed method.
Weaknesses: - What is the additional cost of using the proposed method? Please discuss the efficiency and the other possible cost.
- Lacking a vivid figure to illustrate the whole method pipeline before the detailed method introduction. Besides, the introduction part, especially the method description can be organized better and more logically.
- L128-130: Though embedding the "knowledge" into the trained parameters seems very reasonable, this is just an intuitive discussion. A more detailed and clear explanation is essential to illustrate what the knowledge really is, or what sign we can observe to tell whether the knowledge is utilized or not, to avoid a merely metaphysical or empirical discussion.
- Fig. 3: hard to read and discover the difference between methods.
- Lacking direct and clear experiments to show the effectiveness of the continual learning design, e.g., better avoidance of forgetting, better balance of the short and long predictions, etc.
- typo: Fig. 1: lacking space before the (
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Though there are some "traditional" settings for the length definition of the long and short terms, I wonder: if we change the 5- and 15-frame setting, does the analysis of Fig. 1 hold or change?
2. Learning rate in Eq. 9: choice discussion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please add a discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the constructive comments. We have carefully addressed your concerns and provided detailed responses for each review.
**Q1:What is the additional cost of using the proposed method? Please discuss the efficiency and the other possible cost.**
Re: The extra cost only occurs during training; the testing time is exactly the same as that of the baseline model. In practice, training models for short-term prediction is easier and fast to converge. Long-term prediction in our multi-stage training strategy also converges fast, since the model trained in earlier stages is employed as a pretrained model to provide prior information. We will include this discussion in the revision.
| | **stage 1** | **stage 2** | **stage 3** | **total** |
| ----------------- | ----------- | ----------- | ----------- | --------- |
| **Baseline** | 16h | - | - | 16h |
| **Baseline+Ours** | 8h | 6h | 5h | 19h |
**Q2:Lacking a vivid figure to illustrate the whole method pipeline before the detailed method introduction. Besides, the introduction part, especially the method description can be organized better and more logically.**
Re: Thank you for your suggestion. We will provide a figure to illustrate our pipeline and improve our presentation.
**Q3:A more detailed and clear explanation is essential to illustrate what is knowledge really, or what sign we can get to observe the knowledge utilized or not.**
Re: The knowledge refers to the information embedded in the model, which is exploited not only for completing the current prediction task but also for enhancing the performance of subsequent tasks. We can assess whether this knowledge has been utilized through the accuracy of long-term predictions. More details can be found in Q5.
**Q4:Fig. 3: hard to read and discover the difference between methods.**
Re: In Fig. 3, the upper image shows the "directions" action, in which the person maintains an upright position throughout the sequence. The results of PGBIG exhibit a bent posture during long-term prediction, whereas our method predicts states closer to the ground-truth (GT) position. In the lower image, the person first bends and then stands upright again. PGBIG maintains the bent posture throughout the long-term prediction, while our method accurately predicts the changes in posture. It is evident that our method demonstrates an improvement in long-term prediction effectiveness. Thanks for your suggestion; we will highlight the major differences in the visualization results among different methods.
**Q5:Lacking direct and clear experiments to show the effectiveness of the continual learning design, e.g., better avoidance of forgetting, better balance of the short and long predictions, etc.**
Re: Following your suggestion, we have conducted experiments on the Human3.6M dataset to evaluate how the performance on previous stages evolves as training continues on future stages. The results are shown in the tables below, where lower values indicate more accurate prediction. As shown, introducing the prior compensation factor alleviates the performance degradation of Z1 predictions from stage 1 to stage 3. Specifically, without prior compensation, the prediction error of Z1 increases by 0.83, whereas with prior compensation it increases by only 0.27. This result suggests that the prior compensation factor can effectively alleviate the forgetting issue. As a result, Z1 can offer more comprehensive priors for Z2 and Z3 predictions, resulting in better prediction performance compared to the training approach without prior compensation.
**without prior compensation:**
| | **Z1** | **Z2** | **Z3** |
| ----------- | ------ | ------ | ------ |
| **Stage 1** | 9.03 | - | - |
| **Stage 2** | 9.44 | 45.33 | - |
| **Stage 3** | 9.86 | 45.70 | 92.80 |
**Ours:**
| | **Z1** | **Z2** | **Z3** |
| ----------- | ------ | ------ | ------ |
| **Stage 1** | 9.03 | - | - |
| **Stage 2** | 9.10 | 44.43 | - |
| **Stage 3** | 9.30 | 44.62 | 91.37 |
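The forgetting measure implied by these two tables can be reproduced directly from the reported Z1 errors. A minimal sketch (the function name and the choice of first-vs-last stage difference as the metric are ours, inferred from the 0.83 and 0.27 figures quoted above):

```python
def forgetting(z1_errors_per_stage):
    """Increase in Z1 prediction error from the stage where Z1 was
    learned (stage 1) to the final stage; a larger value means more
    forgetting of the short-term task."""
    return z1_errors_per_stage[-1] - z1_errors_per_stage[0]

# Reading Z1's column down each table:
without_pcf = forgetting([9.03, 9.44, 9.86])  # ~0.83, no compensation
with_pcf = forgetting([9.03, 9.10, 9.30])     # ~0.27, with compensation
```

This reproduces the 0.83 versus 0.27 comparison stated in the response.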
**Q6: typo: Fig. 1: lacking space before the (**
Re: Thank you for pointing this out. We will add the missing space before the ( in Fig. 1.
**Q7:Though there are some "traditional" settings of the length definition of the long and short terms. I wonder if we change the 5 and 15 frames setting, is the analysis of Fig. 1 kept or changed?**
Re: If we set the short term to 15 frames, the analysis depicted in Figure 1 still holds. The detailed results are shown in the table below; they are consistent with the conclusion drawn from Figure 1, indicating that short-term predictions offer valuable prior information for improving long-term prediction performance.
| | **short+long** | **short only** | **short then short+long** |
| ---------- | -------------- | -------------- | ------------------------- |
| **80ms** | 10.53 | 9.88 | 10.20 |
| **1000ms** | 110.37 | - | 109.86 |
**Q8: Learning rate in Eq. 9: choice discussion.**
Re: We have exactly followed the implementation details of the backbone PGBIG and maintained consistency with its learning rate (initialize to 5e-3 and decrease the learning rate exponentially).
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: Thank the authors for the response and additional results. If the next version can be revised according to the promise above, my main concerns are addressed.
Looking forward to the other reviewers' discussions.
---
Reply to Comment 1.1.1:
Title: Replying to the reviewer's comments
Comment: Thank you for the response! We highly value your insightful feedback and we will incorporate your suggestions to the subsequent version accordingly. | Summary: This paper proposes to train human motion prediction networks by gradually increasing the prediction horizon.
This encourages the network to learn short-term predictions first and then leverage what it has learned to predict longer horizons.
The easy-to-hard curriculum makes the network learn more efficiently, as evidenced by the comparison to networks that learn all horizons at the same time.
Given the continual setting, forgetting prevention is needed and dealt with by the introduced prior compensation factors.
Experiments are carried out on three major benchmarks, and effectiveness is observed compared with selected baselines.
Strengths: The paper is well-written. The idea of training sequence prediction with gradually increased horizons is well instantiated in the context of human motion prediction.
The motivation is also clearly conveyed by the ablation in Figure 1.
Better performance is achieved when compared with two recent methods.
It is good that some derivation is shown to arrive at the final combined loss of predictions at different horizons.
The adaptive scheme from the derivation shows better performance than a hand-crafted fixed set of weights.
Weaknesses: The comparison is a bit weak. Shall compare with more recent methods, for example, "Back to MLP: A Simple Baseline for Human Motion Prediction, 2023."
Also, since the proposed method is not backbone-dependent, more evaluation is needed, for example, on transformer-based architectures.
Moreover, there are some questionable parts in the derivation, even though it seems that the final result may not be heavily affected.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) In lines 114-117, all predictions are conditioned on X_1:T_h, however, in Eq. 2, the conditions consist of previous predictions.
2) Eq. 3 assumes that the actual distribution is a degenerated distribution plus a positive bias, why is the bias always positive? This seems not reasonable as the degenerated one could be larger than the actual one.
3) Eq. 4 shows that under the bias assumption, the final loss can be treated as a linear combination of the loss terms for different horizons, however, in Eq. 5 the weights for each term is changed from \alpha to 1-\alpha, is this really valid?
4) Are the terms involving only \alphas really optimized by Eq. 8?
5) Not sure whether the presentation/derivation is really necessary given that a lot of approximation is needed, yet what we want is just a loss that weights the prediction error at different horizons, hopefully the adaptiveness is the key?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is addressed in the broader impact section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Part 1 (Part 2 is in global rebuttal)**
Thanks to the reviewer for the constructive comments. We have carefully addressed your concerns and provided detailed responses for each review.
**Q1: The comparison is a bit weak. Shall compare with more recent methods, for example, "Back to MLP: A Simple Baseline for Human Motion Prediction, 2023."**
Re: The mentioned approach is very recent work; following your suggestion, we have also conducted experiments with this model on the Human3.6M dataset. The experimental results are shown in Table 6. It can be observed that our proposed training strategy consistently improves the performance of the corresponding models. The average error of the baseline **siMLPe** **[1]** model is 68.76; when training it with our multi-stage strategy, the average error reduces to 67.57. This also verifies that our approach can be flexibly applied to various backbone models for human pose prediction, enhancing their performance for human motion prediction.
**Q2: Since the method proposed is not backbone-dependent, so more evaluation is needed, for example, transformer-based architectures.**
Re: Yes. Our training strategy does not depend on the backbone and can be flexibly applied to other models. In the manuscript, we have tested it with three different backbones: PGBIG [3] (GCN-based), LTD [4] (GCN-based), and Res. Sup. [5] (LSTM-based). Following your suggestion, we also included a Transformer-based backbone and tested it on the Human3.6M dataset. Specifically, we used POTR [2] as our backbone model, which leverages a Transformer as the primary framework in a parallel prediction manner. As in [2], we used Euler angle error as the evaluation metric. The experimental results are presented in Table 7. As expected, our method shows consistent improvements with this backbone as well, further validating that our proposed training strategy is flexible and effective in improving different prediction models.
**Q3:There are some questionable parts in the derivation, even though it seems that the final result may not be heavily affected.**
Re: Thank you for the suggestion. We have thoroughly validated our approach both theoretically and experimentally. We will carefully examine the details of the derivations to ensure the accuracy of our conclusions.
**Q4:In lines 114-117, all predictions are conditioned on X_1:T_h, however, in Eq. 2, the conditions consist of previous predictions.**
Re: Actually, the conditions in lines 114-117 and Eq.2 have different meanings. Specifically, in line 114-117, we define the prediction task $Z_k$ as predicting segment $k$, i.e., $X_{T_{Z_{k-1}}+1:T_{Z_k}}$, conditioned on the history $X_{1:T_h}$. While in Eq.2, the probability of $Z_k$ conditioned on $Z_1, …, Z_{k-1}$ suggests exploiting the beneficial knowledge of previous tasks $Z_1, …, Z_{k-1}$ to predict task $Z_k$. We will improve the writing to ensure a more concise and understandable presentation.
**Q5: Eq. 3 assumes that the actual distribution is a degenerated distribution plus a positive bias; why is the bias always positive? This seems not reasonable, as the degenerated one could be larger than the actual one.**
Re: Indeed, this bias is always positive. $P(Z_{k}|Z_{1}Z_{2}\cdots Z_{k-1}\theta)$ represents the most ideal scenario, in which the current prediction task can fully leverage the prior information provided by previous prediction tasks, whereas $P(Z_{k}|\hat{Z}_{1:k-1}\theta)$ corresponds to completing the current prediction task with incomplete prior information. Hence, utilizing complete predictive priors yields more accurate predictions than using incomplete information.
**Q6: In Eq. 5, the weights for each term are changed from $\alpha$ to $1-\alpha$; is this really valid?**
Re: Yes, it is valid. We have double-checked the derivations presented in the appendix material.
**Q7: Are the terms involving only \alphas really optimized by Eq. 8?**
Re: We actually optimize $\alpha$s by leveraging Eq.7 as the loss function. Eq.8 is used to estimate the $\hat{\alpha}_{Z_{1:k-1}\rightarrow Z_{k}}$ involved in Eq. 7 for the training in subsequent stages. The detailed training process can be found in Algorithm 1.
**Q8: Not sure whether the presentation/derivation is really necessary given that a lot of approximation is needed, yet what we want is just a loss that weights the prediction error at different horizons, hopefully the adaptiveness is the key?**
Re: Thank you for your insightful comments. The derivation is necessary. Aiming to compensate for the loss of prior knowledge when switching stages, we introduce α and obtain Eq. 3. Only by relying on Eq. 3 can we derive the final objective function in the form of Eq. 5. Although the objective function appears to be a dynamic weighting control, the underlying mechanism is derived from rigorous theoretical analysis. Specifically, our strategy promotes the model training process by estimating the extent of prior information loss. This estimation can only be achieved through a multi-stage training process; in contrast, the assignment of different task weights can be accomplished through a one-stage training approach. Moreover, arbitrarily designed dynamic weight control often fails to improve the training process effectively.
To validate the effectiveness of our derived objective function, we also conducted an experiment using a simple dynamic weighting approach in a one-stage training manner. The results, shown in Table 8, demonstrate that our strategy achieves better performance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Will keep my current rating and hope the authors can make the modifications as promised in future versions.
---
Reply to Comment 1.1.1:
Title: Replying to the reviewer's comments
Comment: Thank you for the response. We are grateful for your insightful review and will incorporate your suggestions into a later version accordingly.
---
Summary: This paper aims to enhance human motion prediction. The main contributions of this paper are:
1. The paper presents a multi-stage training strategy named Temporal Continual Learning to incorporate the learning of both short-term prediction and long-term prediction.
2. The paper introduces Prior Compensation Factor to better preserve prior information during the process of Temporal Continual Learning.
These optimizations are given through theoretical derivation.
Strengths: The paper has several strengths:
1. Overall, the paper is well-written with a clear and well-motivated introduction. The storyline to leverage the prior knowledge learned from short-term inputs to facilitate long-term predictions makes sense.
2. The proposed method is flexible and demonstrates good performance when applied to different human motion predictors, outperforming the state of the art on different datasets.
Weaknesses: The paper could benefit from a more thorough discussion of related work on Human Pose Prediction, such as in L23-24, transformer architectures include
- PoseGPT: Quantization-Based 3D Human Motion Generation and Forecasting. ECCV 2022
and graph convolution networks include
- Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors. ECCV 2022
The key insight is supported by L32-42 and Figure 1. I'm wondering if the comparison can be represented by a clearer illustration, e.g. using figures to explain different terms such as “short+long” and “short then short + long”. The current version looks a bit rough.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. typo in L119, should be T_{Z_{K-1}}
2. Figure 4 in the ablation study is a bit confusing. What is ablated here? Is the alpha value the same across different experiments?
3. The implementation defines a specific partition of sequence for the multi-task. I think it would be interesting to see the ablation on different partitions, e.g. different number of sub-tasks
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It would be helpful if the authors could provide videos to demonstrate the quality of generated human motion. I wonder if the multi-task learning for different motion segments will cause the discontinuity
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thanks to the reviewer for the constructive comments. We have carefully addressed your concerns and provided detailed responses for each review.
**Q1:** **The paper could benefit from a more thorough discussion of related work on Human Pose Prediction, such as in L23-24, transformer architectures.**
Re: Thank you for the suggestion. Some works attempt to develop human motion prediction models based on Transformer architectures. For instance, PoseGPT [1] focuses on generating diverse future poses with a GPT-like model, and [2] introduces spatial-temporal anchor-based sampling to generate diverse human poses. We will include these works and provide a more thorough discussion in the revision.
**Q2:** **The key insight is supported by L32-42 and Figure 1. I'm wondering if the comparison can be represented by a clearer illustration, e.g. using figures to explain different terms such as “short+long” and “short then short + long”. The current version looks a bit rough.**
Re: "short+long" means that the model is trained for both short-term and long-term predictions together. "short only" indicates that the model is trained only for short-term predictions, without considering long-term predictions. "short then short+long" means that after training the model for short-term prediction, the model is further trained on both short-term and long-term predictions together. We will follow your suggestion and provide a figure to illustrate the differences between these terms.
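As a purely illustrative aid (the quadratic losses, the toy SGD routine, and all names below are hypothetical stand-ins, not the actual prediction models), the three schemes differ only in which objective is optimized and from which initialization:

```python
def short_loss(theta):  # stand-in for the short-term prediction error
    return (theta - 1.0) ** 2

def long_loss(theta):   # stand-in for the long-term prediction error
    return (theta - 3.0) ** 2

def sgd(loss, theta, steps=200, lr=0.1, eps=1e-4):
    """Minimize `loss` by gradient descent with a central-difference gradient."""
    for _ in range(steps):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

theta0 = 0.0
# "short+long": a single stage trained on the joint objective
theta_joint = sgd(lambda t: short_loss(t) + long_loss(t), theta0)
# "short only": a single stage trained on the short-term objective alone
theta_short = sgd(short_loss, theta0)
# "short then short+long": joint training warm-started from the short-only model
theta_staged = sgd(lambda t: short_loss(t) + long_loss(t), theta_short)
```

In this toy convex setting the joint and staged schemes reach the same optimum; the point is only to make the composition of objectives and initializations explicit.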
**Q3:** **typo in L119, should be T_{Z_{K-1}}**
Re: Thank you. We will correct the writing error.
**Q4:** **Figure 4 in the ablation study is a bit confusing. What is ablated here? Is the alpha value the same across different experiments?**
Re: The purpose of the figure is to visualize the average value of α for each stage. As can be seen, more prior information loss is observed in later training stages, and our multi-stage training strategy can effectively reduce this information loss. Since α is learned in a data-driven manner, its value can vary across different experiments.
**Q5:** **The implementation defines a specific partition of sequence for the multi-task. I think it would be interesting to see the ablation on different partitions, e.g. different number of sub-tasks.**
Re: Thank you for your suggestion. We have conducted experiments with different numbers of tasks, and the results are shown in the following table. As can be observed, the model's performance improves as the number of tasks increases from 1 to 3, and it remains stable when the number of tasks becomes larger than 3.
| **number of tasks** | **1** | **2** | **3** | **5** | **8** |
| ------------------- | ----- | ----- | ----- | ----- | ----- |
| **Avg_err⬇** | 66.95 | 66.02 | 65.00 | 65.05 | 65.03 |
**Q6:** **It would be helpful if the authors could provide videos to demonstrate the quality of generated human motion. I wonder if the multi-task learning for different motion segments will cause the discontinuity.**
Re: In the proposed multi-stage learning strategy, the training process does not change the way the model predicts (parallel or auto-regressive); we only change the way the model's parameters are determined. As a result, we do not break the continuity of the results generated by the original model. We also examined the generated motion sequences and found them to be visually continuous, and we have provided a link to our demo to the AC.
**reference:**
[1] Lucas, T., Baradel, F., Weinzaepfel, P., & Rogez, G. (2022, October). Posegpt: Quantization-based 3d human motion generation and forecasting. In European Conference on Computer Vision (pp. 417-435). Cham: Springer Nature Switzerland.
[2] Xu, S., Wang, Y. X., & Gui, L. Y. (2022, October). Diverse human motion prediction guided by multi-level spatial-temporal anchors. In European Conference on Computer Vision (pp. 251-269). Cham: Springer Nature Switzerland.
---
Rebuttal 1:
Rebuttal: **(Part 2 for reviewer GqjD)**
**reference:**
[1] Guo, W., Du, Y., Shen, X., Lepetit, V., Alameda-Pineda, X., & Moreno-Noguer, F. (2023). Back to mlp: A simple baseline for human motion prediction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 4809-4819).
[2] Martínez-González, A., Villamizar, M., & Odobez, J. M. (2021). Pose transformers (potr): Human motion prediction with non-autoregressive transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2276-2284).
[3] Ma, T., Nie, Y., Long, C., Zhang, Q., & Li, G. (2022). Progressively generating better initial guesses towards next stages for high-quality human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6437-6446).
[4] Mao, W., Liu, M., Salzmann, M., & Li, H. (2019). Learning trajectory dependencies for human motion prediction. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9489-9497).
[5] Martinez, J., Black, M. J., & Romero, J. (2017). On human motion prediction using recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2891-2900).
**(Part 2 for reviewer aXof)**
**Q9: Efficiency and the multi-stage pipeline. Recent research [5] suggests predicting motions in one stage, which is easier to train. Will multi-stage training be harder to tune or train? Will it be more time-consuming? I would like to discuss this with the authors.**
Re: The multi-stage training approach we propose does not require complex tuning techniques. Moreover, it offers certain advantages over single-stage training. The temporal multi-stage training process leverages prior predictive information from earlier stages more effectively, thereby guiding the learning of the challenging long-term prediction. Simultaneously, training the short-term prediction separately is easier and results in better short-term predictive capabilities. Although our proposed strategy incurs an increase in training time, the testing time remains the same as that of the backbone model. Regarding the training process, due to the faster convergence of short-term predictions, fewer iterations are needed for training the first stage. Additionally, because the model can better utilize the prior information provided by short-term predictions, the subsequent stages of training also converge more easily. As a result, the overall training time does not increase significantly. The training times of the different stages are reported in Table 5. We will include this discussion in the revised manuscript.
**Q10: It will be great if authors can discuss or provide zero-shot adaptation experiments on other datasets.**
Re: We conducted zero-shot experiments based on your advice. Due to variations in the number of key-points and annotations across different human motion datasets, we conducted the zero-shot experiments on the Human3.6M dataset with the backbone trained only on stage 1 (task Z1). The results can be found in the table below. First, the zero-shot performance is not as good as training on all tasks. Second, there are variations in performance between different backbones, with PGBIG outperforming the others.
| | **Z2** | **Z3** |
| :----------------------------: | :----: | :----: |
| **Res. Sup.+Ours** | 83.23 | 146.80 |
| **Res. Sup.+Ours (zero-shot)** | 86.26 | 155.66 |
| **PGBIG+Ours** | 44.39 | 91.11 |
| **PGBIG+Ours (zero-shot)** | 76.84 | 122.59 |
**reference:**
[1] Zhong, C., Hu, L., Zhang, Z., Ye, Y., & Xia, S. (2022). Spatio-temporal gating-adjacency gcn for human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6447-6456).
[2] Sofianos, T., Sampieri, A., Franco, L., & Galasso, F. (2021). Space-time-separable graph convolutional network for pose forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 11209-11218).
[3] Bouazizi, A., Holzbock, A., Kressel, U., Dietmayer, K., & Belagiannis, V. (2022). Motionmixer: Mlp-based 3d human body pose forecasting. arXiv preprint arXiv:2207.00499.
[4] Guo, W., Du, Y., Shen, X., Lepetit, V., Alameda-Pineda, X., & Moreno-Noguer, F. (2023). Back to mlp: A simple baseline for human motion prediction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 4809-4819).
[5] Chen, L. H., Zhang, J., Li, Y., Pang, Y., Xia, X., & Liu, T. (2023). HumanMAC: Masked Motion Completion for Human Motion Prediction. arXiv preprint arXiv:2302.03665.
[6] Martínez-González, A., Villamizar, M., & Odobez, J. M. (2021). Pose transformers (potr): Human motion prediction with non-autoregressive transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2276-2284).
[7] Ma, T., Nie, Y., Long, C., Zhang, Q., & Li, G. (2022). Progressively generating better initial guesses towards next stages for high-quality human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6437-6446).
[8] Martinez, J., Black, M. J., & Romero, J. (2017). On human motion prediction using recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2891-2900).
Pdf: /pdf/9c3a76527d1d377d3005a63b0b2570ae1402d56d.pdf
Dataset Source: NeurIPS_2023_submissions_huggingface
Conference Year: 2023
---
Summary: The paper addresses the common trade-off between the short- and long-term prediction quality of 3D human motion prediction models. The proposed training technique improves the performance of the underlying models both on short- and long-term prediction horizons, where the models generally prioritize one and suffer from the other. It is a multi-stage training scheme resembling curriculum learning with an increasing prediction horizon. The prediction horizon is split into consecutive chunks of frames (i.e., stages). The model is trained iteratively for every stage by initializing the weights from the previous stage and considering all the stages so far. Different from curriculum learning, the proposed technique aims to preserve and leverage the prior information of the past stages by alleviating catastrophic forgetting in the future stages. To do so, the paper incorporates the "Prior Compensation Factor" in the training objective. Intuitively, for every training sample, the model predicts a weight parameter that dynamically distributes the total loss weight across stages. The proposed technique is evaluated using three different architectures. Experiments on various benchmarks show that it is highly effective.
Strengths: **originality**
The paper addresses a common and often neglected problem in the 3D human motion prediction domain. In fact, most prior works prefer to ignore the short- and long-term prediction trade-off and present results with separate model configurations. I find this effort valuable.
**quality**
The proposed technique is well-motivated and sound. I have some concerns regarding the theoretical analysis (see below), but the experimental results are solid.
**clarity**
The motivation is clear. The paper provides enough background to understand the problem setting. However, it is not straightforward to intuitively understand what the proposed technique does.
**significance**
I find the paper interesting and useful for the community. The experiments show that the proposed technique could improve the performance of the out-of-the-box models.
Weaknesses: There is a potential problem with lemma 3.1. It expects `b` to be between 0 and 1, a likelihood value from a probability mass function. For the proposed training objective (Eq. 7) to be held, the loss functions should be chosen carefully. In this work, it is implemented as a MSE loss (Eq. 6 and 7), which does not have an upper-bound. How about implementing the objective as a log-likelihood with a Gaussian output model? It may have negative values.
I had to spend some time getting an intuitive understanding of Eq. 7 (no, the “An intuitive explanation” section does not help). It boils down to a dynamic weighting scheme where the network learns to distribute the loss weights across prediction horizons (i.e. stages). I think this could be explained better in the paper. In fact, the training dynamics is very straightforward. If the MSE loss is high, then the predicted alpha should be higher to optimize the objective. We can say that the model learns to “assess the difficulty of the task”. Naturally, alpha values get higher in the future stages as it becomes more difficult. If you run the following code snippet, you get a very similar plot to the one in Fig. 4 ($\alpha$ values at different stages). It gives you the optimal $\alpha$ value for different MSE values. I kindly ask the authors to clarify if I am wrong or share their thoughts on this analysis.
```python
import matplotlib.pyplot as plt
from scipy.optimize import minimize_scalar
import math
import numpy as np

def objective(mse):
    return lambda alpha: (1 - alpha) * mse + (1 - alpha) * math.log(1 - alpha) + math.log(1 + alpha)

mse_values = np.arange(0, .5, 0.01)
best_alphas = []
best_res = []
for mse in mse_values:
    opt_res = minimize_scalar(objective(mse), bounds=(0, 1), method="bounded")
    best_alphas.append(opt_res.x)
    best_res.append(opt_res.fun)

plt.plot(mse_values, best_alphas)
plt.xlabel("MSE values (i.e., stages)")
plt.ylabel("Alpha values")
plt.show()
```
The authors can also try setting the $\alpha$ values for the “HC” ablation using this code snippet.
An ablation on the number of stages is missing. Similarly, its cost is not discussed. While it improves the performance of the underlying model, it introduces a significant overhead at training time.
It would be interesting to see the evolution of the performances on the previous stages as training continues on the future stages. A triangular matrix reporting the performance of stage K after training every stage > K.
Instead of following a dynamic approach (i.e., predicting an $\alpha$ per training sample), would it still work if a single, stage-wise $\alpha$ variable was trained?
This is merely a suggestion for the presentation. I think it would be more interesting to report the performance for the pairs (baseline, baseline + ours) in all benchmarks (as in Table 5).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I’ve raised my concerns and asked my questions in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper does not have a limitations section. I think it is clear that the proposed technique significantly increases training complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thanks to the reviewer for the constructive comments. We have carefully addressed your concerns and provided detailed responses for each review.
**Q1:** **There is a potential problem with lemma 3.1. It expects b to be between 0 and 1, a likelihood value from a probability mass function. For the proposed training objective (Eq. 7) to be held, the loss functions should be chosen carefully. In this work, it is implemented as a MSE loss (Eq. 6 and 7), which does not have an upper-bound. How about implementing the objective as a log-likelihood with a Gaussian output model? It may have negative values.**
Re: We would like to clarify that in our implementation, the objective is formulated as the log-likelihood of a Gaussian output model, in which the model output is defined by a Gaussian distribution whose likelihood values lie between 0 and 1. Thus, the requirements of Lemma 3.1 are explicitly satisfied. Taking the negative logarithm of the Gaussian likelihood then yields the MSE loss, whose values range over [0, +∞).
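For completeness, the step from the Gaussian output model to the MSE loss can be sketched as follows (a standard derivation, with $\sigma$ denoting the assumed fixed output scale):

```latex
-\log p(x \mid \mu, \sigma^2)
  = -\log\!\left[\frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\right]
  = \frac{1}{2\sigma^2}\,(x-\mu)^2 + \frac{1}{2}\log\!\left(2\pi\sigma^2\right),
```

so minimizing the negative log-likelihood is equivalent, up to the additive constant and the $1/(2\sigma^2)$ scale, to minimizing the squared error.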
**Q2: α** **values get higher in the future stages as it becomes more difficult. It's like the model is learning to evaluate task difficulty. I kindly ask the authors to clarify if I am wrong or share their thoughts on this analysis.**
Re: Our perspective on this differs slightly from yours. Rather than evaluating the difficulty of the current task itself, our proposed strategy estimates the extent of prior information loss, which can also be viewed as estimating the difficulty of transferring prior information from previous tasks to the current one. Our motivation is to effectively utilize the prior knowledge acquired from past tasks. To achieve this, we employ a multi-stage training strategy and introduce a prior compensation factor to estimate the extent of information loss. It is worth noting that only the multi-stage training method allows us to estimate the degree of prior information loss; in contrast, evaluating task difficulty in order to assign varying task weights can be achieved in one stage. This is a significant difference.
**Q3:** **The authors can also try setting the α values for the “HC” ablation using this code snippet.**
Re: Thank you for your suggestion. We have conducted an experiment, referred to as "HC2", using the α values calculated with the code you provided to validate this viewpoint. The results are shown in Table 1, from which we can draw two observations. First, the analytical α calculated by "HC2" also helps to mitigate the loss of prior information. Second, our learning strategy performs better than "HC2". This is because, in our learning process, α is treated as a distribution that estimates the loss of prior information from previous stages. By the definition of α, and given that the model's output is inherently a distribution, α should itself be considered a random variable following a certain distribution. Therefore, using the model to estimate the distribution of α is a more reasonable approach.
**Q4: An ablation on the number of stages is missing.**
Re: Following your suggestion, we have conducted experiments with different numbers of stages, and the results are presented in Table 2. As can be seen, the model's performance improves as the number of stages increases from 1 to 3, and it remains stable when the number of stages becomes larger than 3.
**Q5: It would be interesting to see the evolution of the performances on the previous stages as training continues on the future stages. A triangular matrix reporting the performance of stage K after training every stage > K.**
Re: Based on your suggestion, we conducted experiments on the Human3.6M dataset, as shown in Table 3 (lower values indicate smaller errors). It can be observed that as training continues, the performance on previous prediction tasks decreases slightly. Taking task Z1 as an example, the error increases from 9.03 to 9.30.
**Q6: Instead of following a dynamic approach (i.e., predicting an α per training sample), would it still work if a single, stage-wise α variable was trained?**
Re: We conducted experiments based on your suggestion, using the code you provided to calculate the average α value as the stage-wise α. The results are shown in Table 4. They show that the stage-wise α helps to mitigate the loss of prior information and aids the model's training process. However, this approach is not as effective as our strategy, largely because it approximates the loss of prior information at each stage in a coarse-grained manner, leading to imprecise control over the training process.
**Q7: I think it would be more interesting to report the performance for the pairs (baseline, baseline + ours) in all benchmarks (as in Table 5).**
Re: Thank you for your suggestion. We will report the performances in the form of **baseline and baseline+ours** **pairs**.
**Q8:Its cost** **is not discussed. While it improves the performance of the underlying model, it introduces a significant overhead at training time.**
Re: Thank you for your comments. We would like to point out that our multi-stage training approach does not lead to a significant increase in the overall training time compared with single-stage training (16h vs. 19h). The detailed training times on the Human3.6M dataset can be found in Table 5. In practice, training models for short-term prediction is easier and converges quickly. Training the long-term predictions in our multi-stage strategy also converges quickly, since the models trained in earlier stages are employed as pretrained models that provide prior information. We will include this discussion in the revision.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: I thank the authors for their rebuttal and additional results! Their rebuttal provides clarifications and addresses my main concerns.
I think interpretation of the $\alpha$ term depends on the perspective. I see the point in the authors' interpretation. I do not have further questions. I will follow the discussion with other reviewers and revise my score accordingly.
---
Reply to Comment 1.1.1:
Title: Replying to the reviewer's comments
Comment: Thank you for the response. We appreciate your valuable review and will incorporate your suggestions into a later version accordingly.
---
Title: Online robust non-stationary estimation
Paper Decision: Accept (poster)
Summary: This paper considers an online estimation setting where the learner observes a sequence of samples, which are drawn from a (previously determined but unknown) sequence of probability distributions, with some fraction of samples arbitrarily corrupted. In each round, the learner makes a decision, and regret is measured based on how far this decision is from that which would minimize their expected loss if no corruption occurred. The loss function is strongly convex, and the goal is to obtain total regret bounds which are sublinear in the time horizon $T$ (but may scale with some standard parameters like the diameter of the decision space, the (unknown) amount of distribution shift, the (unknown) amount of corruptions, a (known) gradient norm upper bound, and an (unknown) gradient covariance upper bound). The authors show that a tuned version of clipped SGD achieves the desired regret bounds, with some partial lower bounds. The proofs combine a novel inductive argument with martingale concentration techniques to provide high-probability regret bounds under arbitrary distribution shift, and the results are verified with some simple experiments pertaining to mean estimation and linear regression.
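For concreteness, the algorithmic template at the heart of the paper — SGD with gradient clipping, shown here for online mean estimation under sparse gross corruptions — can be sketched as follows (a minimal illustration; the paper's actual step-size and clipping-level tuning is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, lam):
    """Rescale the gradient so its norm is at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

def clipped_sgd_mean(samples, lr=0.1, lam=1.0):
    """Online mean estimation: SGD on 0.5 * ||theta - z||^2 with clipped gradients."""
    theta = np.zeros(samples.shape[1])
    for z in samples:
        grad = theta - z  # gradient of the per-sample squared loss
        theta = theta - lr * clip(grad, lam)
    return theta

# Stream from a distribution centered at 0, with sparse gross corruptions.
samples = rng.normal(0.0, 0.1, size=(500, 2))
samples[::100] = 50.0  # corrupt every 100th sample
estimate = clipped_sgd_mean(samples)
# Clipping caps the influence of each corrupted round at lr * lam,
# so the estimate stays close to the true mean 0.
```

Without the `clip` call, each corruption would move the iterate a constant fraction of the way to the outlier; with it, the damage per corrupted round is bounded by `lr * lam`.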
Strengths: - To the best of my knowledge, and as claimed by the authors, no work has addressed this issue of online, (outlier/heavy-tail) robust, and non-stationary convex optimization, and this setting seems important for many applications.
- There are several settings where their regret bounds are tight, and many improvements over the state-of-the-art
- The proposed algorithm is quite simple, and they empirically validate the choice of tuning suggested by their theory
- The paper is generally well-written, with thorough explanations of the problem setup, prior work, and the different components of their regret bounds (though I struggled a bit with understanding the analysis of their general regret bound)
Weaknesses: - It would seem more natural to measure regret via the excess risk
$\sum_{t=1}^T \left( \mathbb{E}_{Z \sim P_t}[\mathcal{L}(Z,\theta_t)] - \inf_{\theta \in \Theta}\mathbb{E}_{Z \sim P_t}[\mathcal{L}(Z,\theta)] \right).$
Do your results transfer to this metric? If not, why? Cor. 4.6-style results for the stationary setting without corruptions are usually stated with respect to this benchmark, as far as I am aware.
- In the stationary setting with corruptions, it is not clear to me that the diameter-dependence is necessary (and this is considered quite undesirable in the robust mean estimation literature). The authors' lower bound in Section 16 looks to require non-stationarity. Can a lower bound be provided without this requirement?
- There is a claim of dimension independence (modulo the diameter dependence), but I view dependence on the trace of $\Sigma$ as implicitly depending on the dimension in many cases of interest, so this seems a bit misleading
- It is rather unclear to me whether these results are fundamentally interesting, or whether the solution is somewhat standard and only new because this combination of problem settings hasn't been explicitly studied before. In particular, clipped SGD is a standard solution in this space, though their tuning analysis appears novel.
- If the analysis for the general case is of fundamental interest, I suggest that more space is spent in the appendix (or added page of a final version) describing the induction details - I found the appendix a bit hard to follow and did not verify correctness
Minor Nits:
- On line 48, the superscript for the footnote looks like a power. Right after that, the way the loss function is introduced read to me as if it was already used.
- In the abstract and elsewhere, "high-probability" is used as an adjective without an accompanying noun (presumably "regret bounds").
- Footnote 4 appears before its reference in Table 1
- "an" -> "a" on line 202
- "at-least" -> "at least" in Theorem 4.3
- Notation "m" for strong convexity constant conflicts with power of $T$ in Eq. 1
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition to the questions raised in the "weaknesses" section, I was curious if this approach could be adapted to incorporate differential privacy (since gradient clipping + noise is a common approach to private learning).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Assumptions are made clear, and I don't anticipate any negative impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Excess risk regret is a direct corollary of our result:**
As the loss function $\mathcal{L}$ is $M$-smooth (Assumption 1 in our draft), we have $\mathbb{E}[\mathcal{L}(Z,\theta_t)-\mathcal{L}(Z,\theta^*_t)]\leq M \| \theta_t- \theta_t^* \|^2$. Thus, a regret bound on the norm $\|\theta_t- \theta_t^* \|^2$ translates to a regret bound measured through the excess risk. We thank the reviewer for this pointer and will add this fact as a corollary.
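For completeness, the corollary can be sketched in one line; the sketch assumes, as an illustration, that $\theta_t^*$ minimizes the expected loss at time $t$, so the first-order term vanishes in expectation:

```latex
\mathbb{E}\!\left[\mathcal{L}(Z,\theta_t) - \mathcal{L}(Z,\theta_t^*)\right]
\;\le\; \mathbb{E}\!\left\langle \nabla\mathcal{L}(Z,\theta_t^*),\, \theta_t - \theta_t^* \right\rangle
      + \frac{M}{2}\,\bigl\|\theta_t - \theta_t^*\bigr\|^2
\;=\; \frac{M}{2}\,\bigl\|\theta_t - \theta_t^*\bigr\|^2 ,
```

since $\mathbb{E}[\nabla\mathcal{L}(Z,\theta_t^*)] = 0$ at the per-step minimizer. Summing over $t$ then converts the norm-based regret into an excess-risk regret, up to the constant factor from smoothness.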
**Finite diameter assumption is necessary in the presence of corruptions, even if there is no distribution shift**
***Proof Sketch:*** Similar to Prop 2.6, consider two scenarios for mean estimation. In one scenario, the uncorrupted samples are all drawn from a Dirac mass at $0$, but the first $\Lambda_T$ samples are corrupted with all $d$ coordinates set to $\mathcal{D}/\sqrt{d}$. In the other scenario, there are no corruptions, and the samples are all drawn from a Dirac mass at the point with all coordinates equal to $\mathcal{D}/\sqrt{d}$. In both scenarios, the first $\Lambda_T$ samples are identical. Thus, no estimator can distinguish between the two scenarios from the first $\Lambda_T$ samples and must incur regret at least $\Omega(\Lambda_T\mathcal{D})$.
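The two scenarios in the sketch can be simulated directly. The numbers below ($d$, $\Lambda_T$, $\mathcal{D}$) are illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative parameters for the two-scenario construction.
d, Lambda_T, D = 4, 5, 2.0
spike = np.full(d, D / np.sqrt(d))  # the point with all coordinates D/sqrt(d)

# Scenario 1: clean samples are a Dirac mass at 0, but the first
# Lambda_T observations are corrupted to `spike`.
scenario_1 = [spike.copy() for _ in range(Lambda_T)]

# Scenario 2: no corruptions; the Dirac mass itself sits at `spike`.
scenario_2 = [spike.copy() for _ in range(Lambda_T)]

# The first Lambda_T observations are bit-for-bit identical, so no
# estimator can tell the scenarios apart from this prefix alone.
identical = all(np.array_equal(a, b) for a, b in zip(scenario_1, scenario_2))
print(identical)  # True

# Yet the two true means differ by exactly D in Euclidean norm, so any
# single estimate is far from the truth in at least one scenario,
# giving regret on the order of Lambda_T * D.
gap = np.linalg.norm(spike - np.zeros(d))
print(gap)  # 2.0
```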
We will add a proposition in the revision showing the necessity of a finite diameter in the presence of corruptions, even when there is no distribution shift. This does not contradict [18], since corruptions occur at random instants in their model, whereas they occur at adversarially chosen instants in ours.
*In summary,* (i) finite diameter is necessary in the presence of corruptions, whether or not there is distribution shift, and (ii) an infinite diameter can be handled in the absence of corruptions even if there is distribution shift. See also the attached pdf.
**Regret depending on $\text{Trace}(\Sigma)$, the trace of the covariance matrix, is the classical definition of dimension-free regret in the statistics literature:**
We follow the standard terminology that deems a bound dimension-free if it depends on the trace of the covariance matrix rather than on the dimension times the maximum eigenvalue of the covariance matrix (c.f. [35, 36] of our attached draft). Concretely, for mean estimation of a $d$-dimensional vector, a regret bound that depends on $\text{Trace}(\Sigma)$ is deemed dimension-free, whereas a bound that depends on $d\nu_{max}(\Sigma)$, where $\nu_{max}(\Sigma)$ is the largest eigenvalue of the covariance matrix, is NOT dimension-free. A bound in terms of the trace is more favorable in high-dimensional settings (c.f. [35, 36]) since, by definition, $d\nu_{max}(\Sigma) \geq \text{Trace}(\Sigma)$. We will add this definition and discussion in the revision, stating that our bounds depend only on $\text{Trace}(\Sigma)$ and not on $d\nu_{max}(\Sigma)$, thereby making our results dimension-free.
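As a quick numerical illustration of the definition (using a hypothetical spiked covariance, not one from the paper), the gap between $\text{Trace}(\Sigma)$ and $d\nu_{max}(\Sigma)$ can be large:

```python
import numpy as np

# Hypothetical spiked covariance: one heavy direction in d = 100 dimensions.
d = 100
eigvals = np.ones(d)
eigvals[0] = 50.0
Sigma = np.diag(eigvals)

# Dimension-free quantity: Trace(Sigma) = 50 + 99 = 149.
trace = np.trace(Sigma)

# Dimension-dependent quantity: d * nu_max(Sigma) = 100 * 50 = 5000.
worst = d * np.max(np.linalg.eigvalsh(Sigma))

print(trace, worst)  # 149.0 5000.0
```

Since $\text{Trace}(\Sigma) \le d\nu_{max}(\Sigma)$ always holds, a trace-based regret bound is never worse and can be far smaller when the spectrum is concentrated in a few directions.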
**Our setting and insights are conceptually new and interesting! Our work is the first to extend online robust estimation beyond the i.i.d./stationary assumption:** We strongly believe our setting and results are new and interesting! To the best of our knowledge, this is the first work to study estimation in a streaming setting with heavy tails, distribution shifts, high-dimensional observations, and adversarial corruptions all at once. As we mention in our introduction, there is a plethora of work in the applied literature proposing heuristics for streaming settings with all these characteristics. Our work is the first to formalize the question, set benchmarks and desiderata, and present an analysis of an achievable algorithm along with lower bounds.
From a technical perspective, this work provides new insights. For example, we show in Proposition 2.6 (and in this response) that a finite diameter is necessary in the presence of corruptions.
The proofs are non-trivial and are *NOT* a corollary of existing results. Analyzing gradient-based methods for heavy-tailed *stationary settings without corruptions* is itself an actively emerging line of literature (see [24, 34, 43, 52, 57, 60]). Our work extends this line by providing the first analysis in the presence of distribution shifts and corruptions, based on different martingale concentration arguments combined with novel induction arguments.
Thus, we respectfully disagree with the claim that "*the work is not interesting since it only combines problem settings not studied before*". A key surprising result is that a simple, practical algorithm such as clipped SGD, with the right tuning, is simultaneously robust to drifts, corruptions, and heavy tails, and yields dimension-free results. Our work provides the insight that the learning rate should straddle the $O(1)$ rate known to be optimal in the absence of noise (and adaptive to distribution shift) and the $O(1/t)$ rate known to be optimal in the stationary setting with noise but no drift. Thus, we also respectfully disagree with the claim that "*clipped SGD is a standard/known solution for these problems*".
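As a toy sketch of this point (our own illustrative tuning, not the constants from Theorem 5.1), clipped SGD for online mean estimation with an intermediate step size $\eta_t = 1/t^{1/2}$ tracks a mean that shifts mid-stream despite heavy-tailed noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_sgd_mean(stream, alpha=0.5, clip=5.0):
    """Clipped SGD for online mean estimation under squared loss:
    theta_{t+1} = theta_t - eta_t * clip(theta_t - x_t), with eta_t = 1/t**alpha.
    alpha = 0 is the drift-adaptive O(1) rate, alpha = 1 the stationary-optimal
    O(1/t) rate; alpha = 0.5 straddles the two (illustrative tuning)."""
    theta, path = 0.0, []
    for t, x in enumerate(stream, start=1):
        g = np.clip(theta - x, -clip, clip)  # clipping tames heavy tails / outliers
        theta -= g / t ** alpha
        path.append(theta)
    return np.array(path)

# Heavy-tailed stream whose mean jumps from 0 to 3 halfway through.
T = 2000
true_mean = np.where(np.arange(T) < T // 2, 0.0, 3.0)
stream = true_mean + rng.standard_t(2.5, size=T)  # Student-t(2.5): heavy tails

est = clipped_sgd_mean(stream)
print(round(float(est[-1]), 2))  # settles near the post-shift mean of 3
```

A pure $O(1/t)$ rate would instead converge to the average of the two regimes, while a constant rate would remain noisier in the stationary stretches.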
***That said***, this is a first work in this space, and our paper leaves open several fundamental questions, listed in the conclusions, which are exciting avenues for future work. We will also take up the reviewer's suggestion and add more discussion (for example, highlighting Lemmas 19.9 and 20.10) in the additional page available for the camera-ready.
**Writing errors and corrections:**
We thank the reviewer for a thorough and careful read! We will make these corrections and other writing fixes in the revision.
**Connections with privacy:** Unfortunately, we don't have anything interesting to say here yet. Aggarwal et al. consider privacy implications in the stationary non-stochastic setting. Studying the price of privacy in a non-stationary setup with drifts, heavy tails, and corruptions is exciting future work.
*Aggarwal et al., The Price of Differential Privacy for Online Learning, ICML 2017*
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarifications. Your included lower bound without distribution shift was very helpful for my understanding. It's pretty interesting that there is such a contrast between randomly vs adversarially placed corruptions.
I am considering increasing my score, but I want to spend more time understanding some of the other reviewers' concerns. | Summary: This work studies robust sequential estimation in a non-stationary environment. A loss function is fixed in advance. The data-generating process is non-stationary over time, and hence the optimal parameter $\theta^*_t$, which minimizes the expected loss over the distribution at time $t$, changes over time. A policy returns an estimated parameter $\theta_t$ in each round. The goal is to minimize the regret, defined as the sum of distances between $\theta^*_t$ and $\theta_t$. The central question is: is there a policy that is free of distributional knowledge (i.e., moments of the data-generating distributions or stream complexity)? The authors answer this question by presenting a gradient-based algorithm with sublinear regret. These upper bounds match the known lower bounds in the no-noise and no-drift settings.
Strengths: Presented sublinear regret bounds that match known lower bounds in the no-drift or no-corruption settings.
Weaknesses: 1. I am not able to find significant novelty in either the problem formulation or the results. Maybe there is a good practical reason to consider this particular formulation but at least in this submission, the authors did not sell it well.
2. Writing: In general this paper is not written well.
- There are plenty of typos and grammatical mistakes, even in the abstract and formulation section. E.g., “A observation … “ in the abstract; line 133 “formalize”; “upto” -> up to; line 99, “to derive high-probability under any rate..”
- Consistency: do not use a concept before defining it. E.g., in the abstract, “neither the O(1/t)....” What is "t"? It seems lowercase "t" is not the same as "T".
- Vague language. Just to name a few examples: in the abstract, “A observation … can be used” – I don’t understand this line. In the definition of regret, what norm are we using? Line 96, “the data stream is subgaussian”: do you mean the distribution in each round is subgaussian, or the entire stream?
- Do not start a sentence with a mathematical notation; see e.g. “X_t is shown as …”
- The tone switches abruptly between very informal and very formal (and why use the word “desiderata” so frequently?)
**The above issues combined suggest that the submission was written in a rush.** I suggest the authors carefully polish the paper.
3. Lacking discussion of previous work. It seems that this problem can be reduced to the problem studied in “Non-stationary stochastic optimization” (Oper. Res. ‘14) by Besbes et al. Both papers propose gradient-based algorithms and use a total-variation quantity (“$\Phi_T$”) to measure the non-stationarity. I am wondering what previous results in OCO imply, and how the results compare.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: It seems that this problem can be reduced to the problem studied in “Non-stationary stochastic optimization” (Oper. Res. ‘14) by Besbes et al. Does that work already imply sublinear regret for some special case of this submission? I understand that in this submission the corruption is assumed to be adversarial, but this does not seem to be an essential consideration.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Fundamental improvements in the problem setting and results compared to Besbes et al.:**
We thank the reviewer for pointing out the missing reference and comparison to Besbes et al., which we will add in the revision. Here, we highlight two conceptual contributions our paper makes compared to Besbes et al.
**1. In-expectation bounds in Besbes et al. versus high-probability bounds in our paper.** Even in the absence of corruptions, we give high-probability regret bounds, while the work of Besbes et al. only gives regret bounds in expectation. This jump from in-expectation to high-probability bounds is both technically challenging and algorithmically insightful. The insight is that to obtain high-probability bounds, we need *clipped* SGD. On the other hand, since Besbes et al. only give bounds holding in expectation, they can get away without clipping. The necessity of clipping in heavy-tailed settings is not an artifact of the analysis, but is crucial for good empirical performance, as noted in the recent work [24]. Thus, a conceptual contribution we make is that even in the absence of corruptions, a different algorithm from that of Besbes et al., namely one that clips gradients, is required to obtain regret bounds holding with high probability. From a technical perspective, the proofs of high-probability bounds need different techniques than Besbes et al.: for instance, we need several martingale and induction arguments to arrive at high-probability bounds under heavy tails, while Besbes et al. have much simpler proofs based only on convexity, since they only give bounds in expectation.
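A small simulation (our own toy setup, not an experiment from the paper) illustrates why clipping matters for high-probability guarantees: with heavy-tailed Student-t noise, the unclipped running mean has a much heavier error tail than its clipped counterpart, even though both behave similarly on average:

```python
import numpy as np

rng = np.random.default_rng(1)

def final_estimate(samples, clip=None):
    """Online mean estimate via SGD with step 1/t on the squared loss;
    without clipping this is exactly the running sample mean."""
    theta = 0.0
    for t, x in enumerate(samples, start=1):
        g = theta - x
        if clip is not None:
            g = np.clip(g, -clip, clip)
        theta -= g / t
    return theta

# Heavy-tailed zero-mean noise: Student-t with 2.1 degrees of freedom
# (finite variance, but barely).
trials, n = 200, 500
err_plain, err_clip = [], []
for _ in range(trials):
    s = rng.standard_t(2.1, size=n)
    err_plain.append(abs(final_estimate(s)))
    err_clip.append(abs(final_estimate(s, clip=3.0)))

# Compare the tails of the error distribution (95th percentile over trials):
q_plain = np.quantile(err_plain, 0.95)
q_clip = np.quantile(err_clip, 0.95)
print(q_plain > q_clip)  # clipping yields a lighter error tail
```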
**2. The impact of corruptions, which we study, is not secondary; it is a fundamental algorithmic challenge!** There is a large line of work in statistics and algorithms on the design of algorithms in the presence of adversarial corruptions (for example, the book *Algorithmic Robust Statistics* by Ilias Diakonikolas and Daniel M. Kane, and the survey [35] cited in our manuscript). Given this large literature and sub-field of learning in the presence of corruptions, our result shows, rather surprisingly, that a simple algorithm such as clipped SGD is simultaneously robust to corruptions and drifts in the online setting. In light of this, we respectfully push back on the reviewer's claim that corruptions are only a secondary aspect of the problem.
In light of the above, we respectfully disagree that *our work is a direct corollary of the work of Besbes et al.*
**Reiterating the novelty in our work from the paper here in the response:**
**Novelty in the problem setting:** As mentioned in the introduction on page 1 of our manuscript, online estimation on data streams with high dimensions, heavy tails, and corruptions is a fundamental and important sub-routine in several applications. The conceptual improvement in our problem setting over prior work on online estimation is to go beyond the uncorrupted-data assumption and consider the impact of corruptions. The impact of corruptions is both critical in applications (as we show on page 1) and technically challenging (c.f. the book *Algorithmic Robust Statistics* by Diakonikolas and Kane). However, the impact of corruptions on estimation has mostly been studied in the offline setting. Our setup is the *first* to jointly consider the effects of corruptions and distribution shift for online estimation.
**Novelty in the results:** Ours is the first algorithm that is provably robust to outliers and can adapt to distribution shifts in high-dimensional, heavy-tailed data streams. No other algorithm or analysis achieves all of these simultaneously. Furthermore, we make conceptual contributions in the paper. For instance, we show in Proposition 2.6 that a finite diameter is a necessary criterion for meaningful performance in the presence of non-stationarities and corruptions. Our work also provides the insight that the learning rate should straddle the $O(1)$ rate known to be optimal in the absence of noise (and adaptive to distribution shift) and the $O(1/t)$ rate known to be optimal in the stationary setting with noise.
**Novelty in the analysis:** Providing high-probability finite-sample bounds for stationary data streams in the absence of corruptions and drifts is already challenging and is only recently being understood (see [24, 34, 43, 52, 57, 60]). All of these papers present an analysis only *in the stationary setting without corruptions*. Our work contributes to this line by providing the first analysis in the presence of distribution shifts and outliers, based on different martingale concentration arguments combined with induction arguments.
**Improvements to the writing:**
We thank the reviewer for a careful review and identifying issues in presentation. We propose to make these changes in the revision. | Summary: The paper studies the problem of online estimation in a setup that generalizes the stochastic i.i.d. input assumption. The authors consider a setting where the input distribution is allowed to change over time (a certain number of times), and furthermore the input is allowed to be adversarially corrupted (a certain number of times). The authors analyze the standard clipped SGD algorithm to tackle both of these issues simultaneously (Being a simple and implementable algorithm, clipped SGD has many favorable properties in practice). In essence, the paper establishes that the clipped SGD algorithm is "Lipschitz" with respect to distribution drift and contaminations.
Strengths: The paper studies an important problem setting in online convex optimization that gracefully generalizes the standard stochastic i.i.d. input assumption.
Weaknesses: 1. At a high level, my reservation with the paper is that it proposes a goal in terms of drift and corruption tolerance (on page 4) that seems a bit arbitrary. I was unable to see whether even one of these in isolation is understood and what the correct rates are in those settings. In particular, in all of the explicit examples that I could find in the paper, the lower bounds in terms of $\Delta_T$ (or $\Phi_T$) did not have any multiplicative term with $T$; see, for example, Proposition 2.6 and Section 11.
1. (How the distribution shift is measured) The paper defines the quantity $\Phi_t$ to be the number of drifts in the input sequence. However, all of the results in the paper have regret scaling with $\Phi_T$ times a polynomial in $T$. Is this necessary for algorithms that achieve vanishing regret? What are the best upper bounds and lower bounds for the regret in terms of $\Phi_T$ (without any outliers)? (See the first point for more context)
2. (How the clean error is measured) For outliers, the paper defines the quantity $\Delta_t$ to count the number of outliers in the input sequence. However, the results then depend on $\Delta_T$ multiplied by the diameter of the set and a polynomial in $T$. I am not sure if multiplicative dependence on $T$ is necessary for stochastic inputs (See the points above for more context). Are there lower bounds?
3. (What counts as dimension free? and Comparison with existing work) The paper lists their results as achieving dimension-independent errors, but they have suboptimal dependence on these quantities in the regret bounds. For example, heavy-tailed mean estimation (where the claimed results are somewhat immediate since they multiplicatively depend on the trace of the covariance matrix). The paper [18] is said to have dimension-dependent errors but I was unable to find the entry corresponding to [18] in Table 1 in the paper [18]. Their result on isotropic covariance matrices naturally uses $d$ samples.
*All references are based on the version uploaded to the supplementary material.*
---
## Recommendation
This is perhaps because I am not from this subfield but I am unable to appreciate the technical results of the paper (more comments below). Thus, I recommend weak reject, and I would be happy to change my score if the authors/other reviews convince me otherwise.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See above
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for making a thorough read and providing feedback on the paper. Part of the reviewer's questions are also addressed in the table in the attached pdf. Below here, we respond in text with the reviewer’s questions highlighted in ***bolded italics*** with our response below.
***“At a high level, my reservations with the paper are that the paper proposes a goal in terms of drift and corruption tolerance (on page 4) that seems a bit arbitrary. I was unable to see if even one of these in isolation is understood and what the correct rates are in those settings.“***
Indeed, settings with only distribution shifts or only corruptions are not yet completely understood, as we mention in the conclusion section. We have included a pdf in this rebuttal showing the best known bounds for the various scenarios and the new results our paper contributes. Summarizing the attached pdf: our paper is the first to give lower and upper bounds when there are corruptions, both in the presence and absence of distribution shift. In the absence of corruptions, prior works have characterized upper and lower bounds only in the absence of distribution shift. In the presence of distribution shifts, ours is the first high-probability upper bound; previous works gave regret upper bounds *holding only in expectation* for the setting with distribution drift but no corruptions. See the table in the attached pdf for specific details.
Proving high-probability results for heavy-tailed data is technically non-trivial. Providing a high-probability bound in heavy-tailed settings *without distribution shifts and corruptions* is itself new and an actively emerging field of literature (see [24, 34, 43, 52, 57, 60]). Our work contributes to and extends this line by providing the first analysis in the presence of distribution shifts and corruptions, based on different martingale concentration arguments combined with induction arguments, which we believe are interesting in their own right.
Practically, settings with drifts and corruptions are important in applications, where several heuristics have been proposed (see the introduction on page 1). Our work is the first to put such estimation tasks on a formal footing and to identify upper and lower bounds in the different regimes of presence and absence of drifts and corruptions.
***“In all of the explicit examples that I could find in the paper, all the lower bounds in terms of $\Phi_T$ (or $\Lambda_T$) did not have any multiplicative term with $T$; see, for example, Proposition 2.6 and Section 11.”***
*Lower bounds for drift:* As we mention in the attached pdf, Besbes et al.'s "Non-stationary Stochastic Optimization" (Operations Research, 2015) and [47] show that $(\Phi_T)^{1/3}T^{2/3}$ is a lower bound on the expected regret in the absence of corruptions. Further, Besbes et al. show that this bound can be achieved *in expectation*. However, since we seek regret bounds holding with high probability, our upper bounds have a gap from this lower bound. Concretely, we can only establish upper bounds of the form $T^{l}\Phi_T$ for some $l<1$ (Thm 5.1). In the conclusions section of our paper, we list as an open question (second bullet point) whether there exists an algorithm that obtains a high-probability regret bound of the form $T^{l}\Phi_T^{1-l}$ for some $l\in(0,1)$, which would close the gap to the lower bound of Besbes et al.
*Lower bounds for corruption:* Our work is the first to give non-trivial upper and lower bounds on the regret in the presence of corruptions in the online stream (see also the attached pdf). Our lower bound in Proposition 2.6 and the attached pdf show that one cannot aim for the standard statistical aspiration of an unbounded domain, i.e., $\mathcal{D}<\infty$ is needed. However, as the reviewer correctly identifies, our lower bound does not have any dependence on the time horizon, and we thus conjecture it to be loose. This is an artifact of our proof technique, where we only consider settings with zero variance. We believe that more sophisticated arguments with non-zero variance can recover a polynomial-in-$T$ term in the lower bound. Improving either the lower bound or the high-probability upper bound in the case of corruptions is challenging future work.
***“What counts as dimension-free?”***
We follow the standard terminology that deems a bound dimension-free if it depends on the trace of the covariance matrix rather than on the dimension times the maximum eigenvalue of the covariance matrix (c.f. [35, 36] of our attached draft). Concretely, for mean estimation of a $d$-dimensional vector, a regret bound that depends on $\text{Trace}(\Sigma)$ is deemed dimension-free, whereas a bound that depends on $d\nu_{max}(\Sigma)$, where $\nu_{max}(\Sigma)$ is the largest eigenvalue of the covariance matrix, is NOT dimension-free. A bound in terms of the trace is more favorable in high-dimensional settings (c.f. [35, 36]) since, by definition, $d\nu_{max}(\Sigma) \geq \text{Trace}(\Sigma)$. We will add this definition and discussion in the revision, stating that our bounds depend only on $\text{Trace}(\Sigma)$ and not on $d\nu_{max}(\Sigma)$, thereby making our results dimension-free.
***Entry corresponding to [18] in Table 1:*** Please look at the third row from the top.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
**Tightness of bounds**
After reading the rebuttal, I am rather surprised by the omission of the paper Besbes-Gur-Zeevi-13, here onwards [BGZ13], in the literature survey; thanks to the reviewer 6VQW for pointing it out. [BGZ13] does help put this paper in context and deserves a prominent discussion in the paper.
That being said, the results in the paper are rather unsatisfactory compared to [BGZ13]. Their bounds are always sublinear whenever $\Phi_T$ is sublinear, as opposed to the present paper. (Similarly, one expects sublinear regret whenever the corruption level is sublinear.) I understand that their regret bound holds only in expectation, but making their bounds hold with high probability should be relatively easy by adding clipping (high-probability bounds are usually nontrivial in high-dimensional settings where one wants finer control on $\textrm{trace}(\Sigma)$ and $\|\Sigma\|_2$).
**First row in the table in rebuttal** How is the first row obtained from the results in Catoni12 and Lugosi-Mendelson-19? Those results are for offline algorithms.
**Entry corresponding to [18] in Table 1**
My question regarding [18] in Table 1 was how the third row was obtained from the results in [18]. I do not see such result in [18].
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and energy and providing very valuable feedback! We clarify the questions raised in the response above where we paraphrase the question/concern in ***bold-italics*** followed by our response.
1. ***Our results are weak since we do not have sub-linear regret whenever the drift is sub-linear***: This question is stated as open problem 2 in Section 9 of our paper. The best lower bounds from [BGZ13] and [47] hint that it may be possible to find an algorithm resolving open problem 2. However, solving it is technically challenging, requiring new ideas, and is thus outside the scope of the present paper.
2. ***High-probability bounds should be relatively easy***: We disagree with this claim. Paraphrasing from our first rebuttal in this reply chain: *proving high-probability results for heavy-tailed data is technically non-trivial. Even simple settings without distribution shifts and corruptions form an emerging field (see [24, 34, 43, 52, 57, 60]). Our work contributes to this by providing the first analysis in the presence of distribution shifts and corruptions, based on different martingale concentration and induction arguments, which we believe are interesting in their own right.*
3. ***High-probability bounds are only nontrivial in high-dimensional settings***: Our results *are* high-dimensional, as we give an explicit characterization in terms of the trace and the largest eigenvalue of the covariance matrix. Theorem 5.1 does not assume a known upper bound on the second moment, unlike previous works such as [18, 47]. Nevertheless, the regret depends only on $\text{Trace}(\Sigma)$ and not on the dimension $d$.
4. ***First row of rebuttal***: The offline results state that at any time $t \in \{1,\cdots, T\}$, when there are $t$ samples for estimating the mean, a bound on the instantaneous regret holds. Summing the instantaneous regret over time $t=1$ through $t=T$ gives a regret bound for online mean estimation.
5. ***On the results in [18]***: Equation (8) in Theorem 4.2 of [18] can be translated into a regret bound, since it gives a formula for the instantaneous regret at time $n$; summing over time yields a cumulative regret bound. However, we point out that Theorem 4.2 in [18] is established under a weaker condition where the time instants of corruption are random rather than adversarially chosen (see line 61 of our submission). We will repeat this caveat in Table 1 in our revision. | Summary: The paper studies online estimation problems on data streams exhibiting challenging properties, including distribution drift, heavy tails, and outlier/anomaly corruptions. Formally, at each time step, (given all the data that has arrived) the algorithm needs to output an estimate of a certain unknown parameter so as to minimize the cumulative regret.
Consider the task of mean estimation as an illustrative example: At each time step, the algorithm receives a data point drawn from an unknown distribution and must estimate the mean of that distribution. The core challenges here are threefold:
1. Distribution drifting: The mean of the distribution from which the data point is drawn can change over time.
2. Heavy tail: The data's distribution might possess unbounded 3rd or higher-order moments.
3. Outliers: Observed data points could be significantly distorted or corrupted.
Interestingly, the paper reveals that a modified version of clipped Stochastic Gradient Descent (SGD) can attain sublinear regret, even when all three of the aforementioned challenges are present. A particularly notable insight is the necessity of an intermediate learning rate for optimal performance: while an $O(1)$ rate is ideal for addressing distribution drift and an $O(1/t)$ rate is best suited for noise, managing both simultaneously requires a rate of $O(1/T^\alpha)$, where $\alpha$ lies between 0 and 1.
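This trade-off is easy to see in a toy tracking experiment (illustrative step sizes of our own choosing, not the paper's exact tuning): the $O(1)$ rate adapts to a mean shift, the $O(1/t)$ rate averages over the whole stream and lands between the two regime means, and an intermediate $O(1/\sqrt{t})$ rate recovers the shift while still averaging out noise:

```python
import numpy as np

rng = np.random.default_rng(2)

def track(stream, step):
    """Online mean tracking: theta <- theta - eta_t * (theta - x_t)."""
    theta, out = 0.0, []
    for t, x in enumerate(stream, start=1):
        theta -= step(t) * (theta - x)
        out.append(theta)
    return np.array(out)

# Mean shifts from 0 to 5 at t = 1000; light Gaussian noise for clarity.
T = 2000
mean = np.where(np.arange(T) < 1000, 0.0, 5.0)
stream = mean + rng.normal(scale=0.5, size=T)

final = {
    "O(1)":       track(stream, lambda t: 0.1)[-1],        # adapts to the shift
    "O(1/t)":     track(stream, lambda t: 1.0 / t)[-1],    # running mean: stuck at ~2.5
    "O(1/t^0.5)": track(stream, lambda t: t ** -0.5)[-1],  # the straddling rate
}
for name, value in final.items():
    print(name, round(float(value), 2))
```

With step $1/t$ the recursion is exactly the running sample mean, so the final estimate sits near $2.5$, halfway between the pre- and post-shift means, while the other two rates end near $5$.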
While these findings are derived under certain assumptions regarding the parameter domain and loss functions, the authors further fortify their claims by proving several lower bounds. This demonstrates the indispensability of the stated assumptions.
Strengths: The paper effectively articulates the problem setting and its significance. Even with my limited familiarity with the subject, it's evident that the problem is complex and the results presented are notably substantive.
Weaknesses: N/A
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (The paper under review is outside my area of expertise, and I had not initially opted to review it. Unfortunately, I wasn't presented with alternative submissions for evaluation)
While I struggled to grasp the entirety of the analysis due to unfamiliarity with the methodologies used, I found the distinction between drift and corruption intriguing. It is a bit surprising to me that it is possible for an algorithm to even distinguish between *drift* and *corruption*, and that very different rates ($O(1)$ vs. $O(1/t)$) are required to handle them optimally. The two definitions appear to me to be essentially analogous. The paper's main result seems to confirm this intuition, but the analysis still treats the two issues separately. A more detailed discussion from the authors on this particular point would be beneficial.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thorough read and providing feedback on the paper.
**Distinguishing between drift and corruption:** Indeed, the reviewer’s intuition is spot on. Our lower bound in Proposition 2.6 and in the sketch in the attached pdf is based on the fact that drift and corruption, in a precise sense, are indistinguishable. Further, the analysis of our algorithm does not impose/prove that our algorithm can distinguish between drift and corruption. Rather, the analysis only shows that *for any data-stream* with total distribution shift $\Phi_T$ and corruptions $\Lambda_T$, the regret of clipped-SGD when appropriately tuned is bounded by an explicit formula of $\Phi_T$ and $\Lambda_T$ as given in Theorem 5.1.
We hope this clarifies the intuition the reviewer is seeking. | Rebuttal 1:
Rebuttal: Here, we address a common question asked by multiple reviewers -
***"What is the best known upper and lower bounds for the various settings of online estimation in the presence and absence of distribution shifts and corruptions?"***
We answer this in the table attached in the pdf, which we will add to the revised version.
To summarize the pdf, our work gives the first results for both upper and lower bounds in the presence of corruptions, whether or not there are distribution shifts. In the absence of corruptions, a lower bound was given in the prior work of Besbes et al. to handle distribution shifts, while ours is the first *high-probability* regret bound for heavy-tailed data under distribution shifts. Proving high-probability results for heavy-tailed data is technically non-trivial, with an emerging literature establishing bounds in the *stationary setting without drifts and corruptions* (see [24, 34, 43, 52, 57, 60]). Our work contributes to and extends this line of work by providing the first analysis in the presence of distribution shifts and corruptions, based on different martingale concentration arguments combined with induction arguments, which we believe are interesting in their own right.
Pdf: /pdf/512f4fd2dc1eef409dee9b407912b0b7be498d0b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the online robust estimation problem in possibly non-stationary environments. Under the assumption that the loss function is strongly convex, the authors propose an online clipped stochastic gradient descent algorithm with a tunable clipping parameter that is able to achieve both adaptiveness to distribution shift and robustness to heavy-tailed inliers and arbitrary corruptions. Moreover, the algorithm does not require distributional knowledge. Theoretical results and experiments are provided.
Strengths: This paper presents an online estimation algorithm that for the first time provably robust to heavy-tails, corruptions and distribution shift simultaneously. The soundness of the paper is supported by their theoretical results and experiments.
Weaknesses: The presentation of this paper lacks some organization, and the problem setup is not clear at the beginning. For example, $X_t$ is said to be the corrupted input, which is the sum of the sample $Z_t$ and the corruption $C_t$; the authors then want to estimate $\theta_t$, which is not reflected in $X_t = Z_t + C_t$ at all. Moreover, some of the contents in the appendix (simulations on real data) could be moved to the main body.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a thorough read and providing feedback on the paper.
**Regarding model definition:** The unknown vector $\theta^{*}_{t}$ is the minimizer of the expected loss function $\mathcal{L}$, where the expectation is with respect to the random vector $Z_t$. Concretely, $\theta_{t}^{*} = \arg\min_{\theta \in \Theta}\mathbb{E}_{Z_t}[\mathcal{L}(Z_t; \theta)]$. However, the estimator cannot directly observe $Z_t$, but can only observe $X_t := Z_t + C_t$. We will make this clarification in the revised version.
Further, the camera-ready version allows for one extra page, which we will use to bring in the key lemmas (Lemmas 19.9 and 20.10) from the analysis and details/plots on the real-data experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Based on your replies, I maintain my current score. | null | null | null | null | null | null |
Neural McKean-Vlasov Processes: Inferring Distributional Dependence | Reject | Summary: The submission considers McKean-Vlasov Stochastic Differential Equation models, which are a generalization of the more-familiar Ito processes. The difference is that the former additionally feature the evolution of the law of the process $X_t$ (denoted $p_t$) as well as $X_t$ itself. Such processes are the limit as the number of particles in a system approach infinity, and their finite-particle approximation (where $p_t$ becomes some empirical distribution) give rise to a model that exhibits temporal as well as between-particle interactions.
A key aspect of the work is to posit a model whereby $p_t$ enters only through a term $\mathbb{E}_{y_t \sim p_t}[\varphi(X_t, y_t)]dt$ for some *interaction function* $\varphi$. The advantage of such a formulation is that the transition density of the process then reduces to the solution of a PDE. The work discusses the properties of MV-SDEs of the aforementioned form that make them a desirable class of models (e.g., non-local interactions between paths, and the ability to incorporate jumps).
Three neural architectures are proposed, differing in particular for their approach as it relates to the modelling of $p_t$:
* Implicit Measure: which recasts the empirical measure as a single layer of a neural network (though I did not completely understand how this happens). The implicit measure architecture has the advantage of being able to cope with the missing data setting.
* Empirical Measure: where $\varphi(\cdot, \cdot)$ takes the form of a trainable neural network.
* Marginal Law: which involves learning a generative model.
Methods for parameter estimation are presented (Section 4), including methods for the missing data setting based on Brownian bridges. A result is proven (Proposition 5.1) regarding implicit regularization properties. A numerical study is conducted on a number of simulated data examples, real data (for which forecasting is also explored), as well as a generative modelling task.
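For readers less familiar with the finite-particle approximation mentioned in the summary, a minimal Euler-Maruyama sketch (our own illustration, not the authors' code) shows how the law $p_t$ is replaced by an empirical average over particles in the mean-field drift term:

```python
import numpy as np

def simulate_particles(f, phi, n=256, steps=100, dt=0.01, sigma=1.0, rng=None):
    """Euler-Maruyama for a finite-particle approximation of an MV-SDE.

    Illustrative simplification: the mean-field term E_{y ~ p_t}[phi(x, y)]
    is computed as an average of phi over all n particles.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n)
    for _ in range(steps):
        interaction = phi(x[:, None], x[None, :]).mean(axis=1)  # empirical mean-field term
        drift = f(x) + interaction
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x
```

For example, `f = lambda x: -x` with an attractive interaction `phi = lambda a, b: 0.5 * (b - a)` gives a mean-reverting system whose particles pull toward each other.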
Strengths: * The ideas contained in the submission are highly non-trivial, yet explored in impressive depth from a number of different angles in a sophisticated manner.
* The study of such techniques is impressively comprehensive in terms of both theory and different approaches (giving a number of methodological approaches, as well as some theory). The submission appears to represent the outcome of a significant body of work and investigation.
* The submission is very well written and presented. The numerical experiments appear well-executed.
Despite some familiarity with concepts in the submission, I am not an expert on SDE modelling, so cannot particularly comment on the novelty in that regard. However, the ideas in the submission appear like ones that are very natural to explore, have been explored well, and are worthy of publication in my opinion.
Weaknesses: * As much as I enjoyed the abstract presentation, it would be beneficial to have additional clarity as to settings where MV-SDE modelling may be of interest, so to contextualise the work, or where different approaches would be preferred. It is mentioned that few works have considered such models in machine learning tasks, and it would be beneficial to have a small discussion what the contribution could potentially be there (at a high level, that is).
* Related to the above, a clearer motivation of the task at hand would be beneficial. The paper takes an SDE-first viewpoint, but isn't the overall goal to fit some sort of particle approximation to the SDE? Some additional discussion and background would be beneficial.
* Some applications are mentioned at present, but it would be nice to have a small discussion along the lines of "MV-SDEs are most useful when the goal of interest is to model...".
* The derivation of the implicit measure architecture was something that I could not properly parse as currently written.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * The simplified case of fixing the diffusion coefficient is considered, and only looking at learning the drift. How strong is this restriction in terms of modelling potential? Does it exclude certain behaviour?
* Similar to the above, but for the factorization imposed on $b(X_t, p_t, t)$ (though I do understand the impetus for such is the PDE representation, I am curious about the other aspects).
* p4, l130: Additional background should be given regarding what "mean-field drift" is, and mean-field approximations of MV-SDEs in general. Those without a strong background in MV-SDE modelling will likely not understand the paper otherwise. Also, should "in (2)" not be "of the form in (2)", as (2) is the limiting case?
* p4, l153: It is not clear what the intended meaning of this equation or the sentence preceding it is. Is the intention to say that one is using an MLP with stochastic neurons? This section would benefit for additional details and clarity.
* p4, l151: Why does having few samples make it difficult to obtain an empirical measure? I would have thought this would simply make the computation easier.
* Regarding the title: "inferring distributional dependence" is a little vague, and perhaps could be replaced by something else that does the paper more justice and gives those without knowledge of what a MV-SDE is more of a clue what the paper is about (this is a very minor point, and just a suggestion).
Minor things:
* p5, l180: should $(X,p)$ be $(X_t, p_t)$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: A little additional discussion would be beneficial (see other parts of this review for specifics).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and invaluable comments.
We also appreciate the positive remarks.
We address the individual concerns below.
1. (Where MV-SDEs may be of interest)
The reviewer makes a good point, we will include additional discussion on the applications of MV-SDEs and their comparison to Ito-SDEs.
We will add the following note:
Ito-SDEs do not include interactions with other sample paths and are appropriate when modeling dynamics that are known to be independent, such as the movement of molecules that do not interact.
On the other hand, it is often natural that the dynamics of particles influence each other.
There are many examples where this interaction is important to model.
This is well illustrated by a double-well system that has two stable states. In the Itô case with no particle interaction, the probability of a particle switching states is exponentially small.
In the McKean-Vlasov case, particles can switch potentials through the influence of their mean-field interactions, as shown in [1].
Another example is modeling the beliefs of different agents who can influence each other's opinions, which can lead to polarization.
Additionally, MV-SDEs have appeared within the context of analyzing the behavior of neural networks, specifically the transformer architecture [2].
MV-SDEs also appear in inferring the trajectories of single cell RNA data [3].
The interactions between cells are important to model to maintain dynamics that are similar to the data.
[1] Garnier et al. "Large deviations for a mean field model of systemic risk." SIAM Journal on Financial Mathematics 2013.
[2] Sander et al. "Sinkformers: Transformers with doubly stochastic attention." AISTATS 2022.
[3] Chizat et al. "Trajectory inference via mean-field langevin in path space." NeurIPS 2022.
2. (Clearer task motivation and applications)
The reviewer makes a good point on the motivation.
While we started motivating with the two questions within the introduction, we should include additional context on how these questions apply.
We will rewrite the introduction to include the following:
"
In many scientific disciplines, the interaction of different agents is important to study and inferring these interaction properties from data remains a challenging task.
Curiously, similar ideas have also found their way in probabilistic modeling in machine learning settings but have not been very well studied through the lens of particle interactions.
For example, when inferring single cell RNA trajectories, McKean-Vlasov processes have been used to find the correct particle distributions [3].
Our goal is then to identify appropriate techniques for applying McKean-Vlasov processes in both (scientific and machine learning) disciplines and study their influences on the respective problems.
"
[3] Chizat et al. "Trajectory inference via mean-field langevin in path space." NeurIPS 2022.
3. (Derivation of implicit measure architecture)
We apologize for the confusion here.
Please refer to the mean-field layer rewrite in the response to all the reviewers.
4. (Fixed diffusion coefficient)
A fixed diffusion coefficient generally implies that the marginal density has roughly exponential decay.
The model and algorithms can work with an estimated or known constant diffusion coefficient (it only requires a small change in the ELBO to estimate this parameter).
Since we would like to focus on estimating the unknown drift which is related to process trend, we assume a known constant diffusion coefficient.
We thank the reviewer for pointing this out and we will include a discussion on estimating the diffusion coefficient in the appendix.
5. (Factorization of the drift)
We thank the reviewer for pointing these out and added a discussion on the factorization in the response to all reviewers.
6. (MLP architecture exposition)
We apologize for the confusion here.
The main point of this paragraph is to show that we can write $\mathbb{E}_{y \sim p_t} [\varphi(x - y)]$ as an MLP whose width tends to infinity, with activation function given by $\varphi$.
To see this, we need the weight to correspond to a delta function at the point $x$ and the bias to correspond to many samples from $p_t$.
This motivates why it is natural to consider a MLP for the task of estimating MV-SDEs and motivates how to build upon this idea to develop more general architectures to support MV-SDEs.
The architectures section begins with these thoughts on the MLP architecture and then continues by considering different modifications of the base MLP structure and their influences on the performance in some of the tasks of interest.
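The infinite-width intuition above can be made concrete in a few lines. The following is our own numerical illustration (not the paper's code): `tanh` is a stand-in for $\varphi$, the samples play the role of the biases drawn from $p_t$, and the uniform average is the output layer.

```python
import numpy as np

# A single "hidden layer" whose biases are samples y_1..y_m from p_t,
# whose activation is phi, and whose output weights are uniform 1/m:
# mean_field_layer(x) approximates E_{y ~ p_t}[phi(x - y)].
rng = np.random.default_rng(0)
phi = np.tanh                            # stand-in interaction function
samples = rng.standard_normal(10_000)    # y_1..y_m ~ p_t (here p_t = N(0, 1))

def mean_field_layer(x):
    return phi(x - samples).mean()       # (1/m) * sum_j phi(x - y_j)
```

As the number of samples m grows, this Monte Carlo average converges to the exact mean-field term, which is the sense in which the MLP "width tends to infinity" above.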
7. (Few samples in the empirical measure architecture)
The reviewer is correct that having only a few samples would make the computation easier (since the summation over a smaller number of points in the expectation would be easier to compute).
However, when considering too few samples, the estimation of the expectation may be incorrect.
Consider the case of a double well potential where at some time marginals we only observe samples from one of the potentials.
The empirical measure in this case would be biased to one particular mode of the distribution for that time marginal.
8. (Title changes)
This is a good point, we mainly want to emphasize that the architectures that provide the decomposition in terms of the MV-SDE includes distributional dependence.
We can possibly change it to "Distributional dependence in diffusion models", but we are open to other suggestions that the reviewer may have.
9. $(X, p) \to (X_t, p_t)$
The reviewer is correct, and we would like to thank the reviewer for pointing this out, we will correct it in the final manuscript.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their thorough response. I confirm that I have read it as well as the other reviews and their responses. I commend the authors not only on the details of their response, but in providing concrete examples of new explanations. I retain my score of "Strong Accept" as a result, though in light of the clarifications now increase my confidence score from two to three.
Regarding two minor points:
* (MLP architecture exposition) Thanks for the clarification. It may be beneficial to include it in the final paper, but this is at the authors discretion.
* (Title) I agree and understand the motivation of the title in the original submission, and confess I don't have any particularly better suggestions. This was a very minor point, and entirely up to the authors. I like the alternative being considered, but it may be best not to say "diffusion models" as, while entirely correct, the term tends to be more associated with a particular class of deep generative models these days. Perhaps the original title is indeed optimal after all!
---
Reply to Comment 1.1.1:
Comment: Many thanks to the reviewer for the kind and fast reply. We will include the clarification on the MLP architecture in the final paper. We agree that "diffusion model" tends to be more associated with a particular class of deep generative models and it may be good to retain the focus on MV-SDEs and inferring distributional dependence. Nevertheless, we would like to thank the reviewer for helping to bring up the suggestion of a title change. Once again, thank you very much for all your comments and feedback! | Summary: This paper considers the problem of parameter estimation from data when the underlying dynamical system is modeled by an MV-SDE. To represent the target MV-SDE, the authors propose two strategies: (i) expressing a layer in a neural network as an expectation with respect to a density and (ii) using generative models to capture the distributions that generate observations at different time stamps. With these strategies, the authors then propose to conduct parameter estimation via 1. maximum likelihood estimation, 2. estimation using a Brownian bridge, 3. explicit marginal law estimation.
Strengths: Please see the discussion below.
Weaknesses: 1. Poor presentation. Overall this paper is obscure and hard to follow. Here are some detailed examples.
* When stating the underlying MV-SDE model in Eq.(2), it is not clear which terms are known and which terms are to be learned. Specifically, do we know $f$ and $\phi$? Since we are interested in the problem of parameter estimation, both terms should be learned from the observations.
* It is not clear how the proposed implicit measure architecture can be carried out: in Eq.(6), it is not clear what $\mathbb{P}_t$ and $\mathbb{P}_0$ are and why we can estimate the Radon–Nikodym derivative $d \mathbb{P}_t/d\mathbb{P}_0$. Moreover, the authors motivate the implicit measure architecture as an alternative to the standard empirical measure approach since it can handle the situation when only samples from irregular time stamps are available. However, it is not clear why this is the case.
* In section 4.2, where the authors propose to estimate parameters using the Brownian bridge, it is not clear how the Brownian bridge comes into play.
* The maximum likelihood estimation in section 4.1 is proposed in previous work [Sharrock et al. 2021], but is presented as a strategy proposed by this work, which is misleading.
2. Lack concrete contributions. Partly due to the poor presentation of this work, this paper presents no clear contributions. For example, the proposed implicit measure architecture and the marginal law architecture are just two ways to rephrase the standard empirical measure. The results presented in section 5 are well-known results in the literature, e.g. the Wasserstein gradient flow structure of the MV-SDE.
3. A contradiction between the goal of this work and its fundamental assumption. While the authors motivate the study of MV-SDEs by the need to model jumps (discontinuous behavior) in time-series data, they also assume that the drift term of the MV-SDE is sufficiently regular. This is a clear contradiction. For an MV-SDE with regular drift and interaction terms, it can be proved that the characteristic flow is also regular.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: Please see the comments above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 1 poor
Limitations: Please see the comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We regret that the reviewer felt so negatively towards the work and could not find a single strength within the entire manuscript.
1. (Poor presentation)
We regret the reviewer found the presentation poor.
We will address the reviewer's points.
- Both $f$ and $\varphi$ are learned.
We would like to bring to attention that in lines 89-90 we stated that we "focus on estimating the drift, $b$, from data", then proceeded to decompose $b$ into $f$ and $\varphi$ in equation (2). In lines 136-137, we stated that we "denote a function $f$ parameterized by parameters $\theta$ as $f(\cdot;\theta)$", then proceeded to note in lines 142-144 that $f(\cdot;\theta)$ and $\varphi(\cdot;\theta)$ are represented with neural networks.
- $P_t$ is the marginal distribution at time $t$, $P_0$ is the base measure at an arbitrary time (call it `$0$').
Both are absolutely continuous with respect to the Lebesgue measure and the Radon-Nikodym derivative exists.
Unlike the EM architecture, the IM architecture estimates the time marginal law implicitly through a change of measure with the Radon-Nikodym derivative, which means we do not need a different set of samples at each time point.
For further details, please also refer to the mean-field layer exposition in the response to all reviewers.
- The Brownian bridge is used to sample paths between observed time margins for data collected at irregular time intervals.
We would like to bring to attention that the application of Brownian bridges to irregular time intervals as an interpolator in maximum likelihood estimation was stated in lines 208-210.
- The reviewer's comment that we present Sharrock et al.'s work as ours is *not true*.
First, we clearly cited the work in line 200.
Second, the proposed approach by Sharrock et al involves computing the stochastic exponential, which is a standard approach within most SDE parameter estimation techniques.
Third, our contribution is within the context of the relevant problems within the NeurIPS community, which Sharrock et al did not consider.
Out of the many works that describe SDE parameter estimation, we cited Sharrock et al.'s work due to its focus on MV-SDEs.
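Regarding the Brownian-bridge point above, a minimal numerical sketch of interpolating between two irregularly observed margins may help; this is our own hypothetical illustration of the bridge idea, not the paper's estimator. Intermediate values are drawn sequentially from the bridge's Gaussian conditionals.

```python
import numpy as np

def brownian_bridge(x0, x1, t0, t1, ts, sigma=1.0, rng=None):
    """Sample a Brownian bridge pinned at (t0, x0) and (t1, x1) on the
    interior grid ts (each s in ts must satisfy t0 < s < t1)."""
    rng = np.random.default_rng(rng)
    path, x, t = [], x0, t0
    for s in ts:
        # conditional mean/variance of the bridge given the current point
        # (t, x) and the fixed endpoint (t1, x1)
        frac = (s - t) / (t1 - t)
        mean = x + frac * (x1 - x)
        var = sigma**2 * (s - t) * (t1 - s) / (t1 - t)
        x, t = rng.normal(mean, np.sqrt(var)), s
        path.append(x)
    return np.array(path)
```

With `sigma=0` the bridge degenerates to linear interpolation, which is a quick sanity check on the conditional-mean recursion.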
2. (Lack of concrete contributions)
We regret the reviewer found our work lacking in concrete contributions.
The contributions of this work were clearly delineated in the introduction. The reviewer even described our contributions within their summary at the beginning of the review.
We restate that our main contributions are the architectures and the analysis while providing an estimation framework within the context of the architectures.
The analysis we provide linking the approximation capabilities have not been considered in the literature.
The analysis implies that modeling MV-SDEs using neural networks is appropriate and then considers specific architectures and their implications.
The reviewer's comment that ``the proposed implicit measure architecture and the marginal law architecture are just two ways to rephrase the standard empirical measure'' is incorrect. The standard empirical measure architecture relies on observations to compute the empirical expectation, the proposed architectures learn the measure from observations.
The reviewer's comment that ``the results presented in section 5 are well-known'' is incorrect. The implicit regularization analysis in section 5.1 is new and specific to this work while the application to the energy distance in 5.2 has not been done before (as far as we are aware).
To say that these are not worthy contributions would negate a large number of contributions within the machine learning literature, since many results pull from existing topics and consider them within a new context.
For example, the mathematics behind diffusion models is well known yet its introduction within the machine learning community has led to many advances in probabilistic modeling.
As such, we strongly disagree with the reviewer on this point.
3. (Contradiction between stated goals and assumptions)
We assumed that the drift is sufficiently regular in section 2.1, where we established the background. As is often done for the analysis, we impose restrictions on the processes to make the exposition and theoretical developments clear.
We then described discontinuous sample paths in section 2.3 to illustrate and motivate an interesting property of MV-SDEs which specifically is due to the interactions between particles. We will note in the discussion of the motivations, experiments (and specifically the mean-field atlas model and OU jump process), and limitations that we relax these assumptions and consider the performance of the proposed architectures on a wide range of scenarios that may not satisfy the assumptions.
This is a strategy in many machine learning papers, since in many scenarios it is often impossible to determine whether assumptions are satisfied.
This does not imply a contradiction, since often algorithms are applied in scenarios that have assumptions that are not verifiable or to demonstrate how they work beyond the scope of the original assumptions.
This is hardly a reason to reject a paper, but rather it showcases the versatility of the methods beyond the original theoretical constraints.
Separately, though not path discontinuities, we would like to bring to attention an interesting and related case of phase transitions with only interaction through weak attraction. Simulation parameters and proofs are given in [1].
We would also like to bring to attention that for the case of positive feedback, under relaxed assumptions on the drift, a simple proof is given in [2] Theorem 1.1 that the path is discontinuous.
[1] Garnier et al. "Large deviations for a mean field model of systemic risk." SIAM Journal on Financial Mathematics 2013.
[2] Hambly et al. "A McKean-Vlasov equation with positive feedback and blow-ups." Annals of Applied Probability 2019.
---
Rebuttal Comment 1.1:
Title: Response to the authors' rebuttal
Comment: First of all, I thank the authors for the detailed response.
Regarding to the presentation of the IM architecture, I think now I understand the meaning of proposed approach. Please let me know if the following restatement of the approach is correct:
The purpose of this "layer" is to approximate the mean-field interaction $\phi \ast p_t$. Since we have $\phi \ast p_t(x) = \int \phi(x, y) \frac{p_t}{\mu}(y) \mu(y) d y$, the authors propose to approximate this quantity by $\int \phi(x, y; \theta) h(t, y; \xi) \mu(y) d y$. Here, $h(t, y; \xi)$ is a neural network parameterized by $\xi$, used to approximate the time-varying importance weight $\frac{p_t}{\mu}$. Further, the authors then merge $\phi(x, y; \theta)$ and $h(t, y; \xi)$ and define this quantity as $\phi(\cdot, \cdot, t; \theta)$. With the abuse of the notations $\phi$ and $\theta$ ($\phi$ and $\theta$ defined in line 165 have a different meaning from the ones in Eq.4), I could not parse the sentence in lines 165-166.
Should my understanding be correct, I suggest that, for the ease of the audience, the authors very clearly state which quantity is approximated using a neural network and which parameters are to be learned.
In terms of the novelty of the proposed approach, I do not find the IM architecture and the ML architecture new as they are simply the importance weight and push-forward model commonly used in the ML community to represent distributions.
> the energy distance in 5.2 has not been done before (as far as we are aware).
Please check the diffusion-advection-interaction equation in Eq.(4.14) of [Santambrogio 2017]. This is a paper cited by the authors and I am surprised that the authors did not know this result.
> We would also like to bring to attention that for the case of positive feedback, under relaxed assumptions on the drift, a simple proof is given in [2] Theorem 1.1 that the path is discontinuous.
Interesting. I apologize for not noticing this work as you have cited it in line 126. However, I find it hard to formulate the SDE considered therein as an instance of the MV-SDE in Eq.(2) considered in your work. Could you please elaborate on this?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response, below we address the three main comments.
1. The reviewer's understanding of the IM architecture is now correct.
We note that a large part of the field of machine learning uses similar notions of importance sampling and push forward measures in many research directions, so the reviewer's questioning of the novelty of our paper because we also applied these techniques is surprising.
Additionally, we note a few of the other contributions that are new:
First, we study the effect of distributional dependence on generative modeling and time series.
This is a new direction and, as highlighted in the text, has high relevance due to research interest in diffusion models and the relationship to attention.
Second, the proposed architectures provide a practical way of applying distributional dependence to the problems of interest and empirically demonstrate improvements over existing methods.
Third, the analysis of the IM architecture provides intuitive properties on the regularization induced by the architecture.
This allows a user to gain an idea of how the process is learned using this particular architecture.
In that sense, we believe these are valuable contributions to the machine learning community, with particular emphasis to those working on generative modeling using diffusions.
2. We regret that the reviewer may be mixing the energy distance with energy functions. These are not the same. The equation 4.14 in Santambrogio 2017 describes an energy function related to the granular media equation.
However, we must emphasize that we are describing a specific functional that minimizes the *energy distance* between two probability distributions, *not an arbitrary energy function*.
We are not aware of an existing reference where the energy distance has been studied or derived within the context of gradient flows.
We then use this to motivate the experiments where we study how the different architectures minimize the energy distance with respect to a target distribution.
We would be happy to cite a reference if the reviewer knows of one regarding the energy distance.
3. We can represent this with drift $-\alpha \frac{\partial}{\partial t}\mathbb{E}[\mathbf{1}_{y_t \leq 0} ]$, $\alpha\in\mathbb{R}^+$. | Summary: The authors proposed two new methods of modelling McKean--Vlasov SDEs using a neural network, and studied their empirical performance.
Since I'm not an expert in this exact topic, I would like to ask the authors some questions first. I would be happy to raise my score further once I understand the paper better.
Strengths: The authors proposed methods that do not model a finite population of the particles, which is a very interesting alternative.
Weaknesses: Several parts of the paper seem unclear to me at the moment, and I will ask specific questions next.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Again, I would be happy to raise my score once my questions are adequately addressed.
1. Let me start with a basic question: under what type of conditions can we factor the drift $b$ into $f$ and $\varphi$ components as in equation 2? This seems to be an important simplification and I would like the authors to motivate this further. Perhaps the authors can provide some important examples in applications where this simplification is available.
2. The authors seem to also occasionally write $\varphi(x,y) = \varphi(x-y)$ fairly interchangeably, e.g. in equation 3. Can the authors clarify if this is intentional in any of the circumstances? For example, it seems like in the MLP representation case of Appendix A.1, it is necessary to use $\sigma = \varphi(x-y)$.
3. On a related note, when is $\varphi$ known a priori? It seems like if this is known, then we wouldn't need a neural network to estimate $\varphi$ in the empirical measure approach, we would just directly simulate the particle system?
4. The authors wrote that MLPs can model McKean--Vlasov dynamics, but in the derivations it seems like it would require the weights to be identity matrix, which fundamentally restricts the complexity of the bias $b$ since it can only be $d$-dimensional. So doesn't this mean that $\nu$ can at most be an average of $d$ Dirac-delta's, and not necessarily capable of representing a general distribution $p_t$?
I found the definition and discussion about the mean field layer quite confusing. I have several questions specifically dedicated to this.
5. What is the distribution $\mathbb{P}_t$, and how does the Radon--Nikodym derivative $\frac{d \mathbb{P}_t}{d \mathbb{P}_0}$ show up in equation 6?
6. When written along with $\varphi( X_t, W_0^{ (i) } )$, is the Radon--Nikodym derivative a function of $X_t$ or $W_0^{(i)}$? As in which measure is being changed here?
7. Can we interpret each of the $W_0^{(i)}$ as a hypothetical particle?
8. Most importantly, the authors suggested that the Radon--Nikodym derivative can be learned, and that as $n\to\infty$ this can also represent the drift. Can the authors provide more details on this part? I don't think either claim is clear at all, and I would like to understand the arguments behind this critical step.
With respect to the marginal law, I also wanted to ask a few questions.
9. Are the authors estimating the marginal law $P_t$ at each time of $t$? Does this imply that if the authors were to increase the number of time steps, this would require more estimates?
10. Can the authors provide more details about how $P_t$ is being estimated? I can't seem to find anywhere that the authors described the procedure for modelling this density.
On a high level, I also have questions regarding the empirical measurements of errors.
11. While an MSE going to zero of course implies the method is correctly modelling the underlying dynamics, it doesn't provide a relative scale of how the methods are performing. Can the authors provide a measure in terms of the Kolmogorov--Smirnov distance, i.e. the $L^\infty$ norm between the empirical CDFs of the true $p_t$ and the estimates?
12. Can the authors also demonstrate how the method improves as a function of computational power, e.g. in terms of the width of the network and maybe the number of particles for the empirical measures case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments, questions, suggestions for improvement, and the time spent reviewing the paper. We respond to individual points below.
1. (Linear factorization of the drift)
We added a discussion of this in the response to all reviewers.
2. (Different forms of $\varphi$)
We primarily use the form $\varphi(x-y)$ to describe some properties of MV-SDEs. MV-SDEs are often written in this convolutional form and its properties are well studied.
We use the general form $\varphi(x,y)$ in the proposed architectures to express more general interactions.
The reviewer is correct, we described the MLP with $\varphi(x-y)$. Extending the MLP to support arbitrary interactions is difficult since it requires an operation that repeats the weight for each input point.
We will note the role of the convolutional form in the description of the MLP architecture.
3. (When is $\varphi$ known)
The reviewer is correct, if $\varphi$ is known, then we can directly simulate the particle system.
In practice, it is not clear when $\varphi$ is known, since significant domain knowledge is needed.
A different strategy could be to constrain $\varphi$ to a known class of functions.
In the well-studied case where $\varphi$ is the gradient of a convex function, we can constrain the parameterization appropriately.
This would imply aggregation properties for the limiting particle distribution.
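To make concrete what "directly simulate the particle system" means when $\varphi$ is known, here is a minimal Euler–Maruyama sketch of an interacting particle system (the interaction, step sizes, and initial law are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt, sigma = 200, 50, 0.02, 0.5
x = rng.normal(2.0, 1.0, size=N)              # initial particle positions

# known attractive interaction: phi(x_i, x_j) = -(x_i - x_j)
phi = lambda xi, xj: -(xi - xj)

for _ in range(steps):
    # drift on particle i approximates E_{y ~ p_t}[phi(x_i, y)]
    # by the empirical mean over the N particles
    drift = phi(x[:, None], x[None, :]).mean(axis=1)
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# the attraction contracts the cloud around its mean while the
# noise keeps a finite stationary spread
```

With this toy attractive $\varphi$, the particle cloud contracts toward its empirical mean, which is the kind of aggregation behavior alluded to above.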
4. (MLP representing MV-SDEs)
These are great points, and we apologize for the confusion.
To gain some intuition, we first consider the 1 dimensional case where the input $x$, weight matrix $W$ and bias matrix $b$ are of sizes $1$, $K\times 1$ and $K\times 1$. We wish to obtain $K$ repeats of the input particle to interact with the $K$ particles represented by the bias. The weight is thus $K$ repeats of $1$ and the number of rows (or the width) $K$ can be arbitrarily large.
As $K\to\infty$, we obtain the expectation with respect to the true measure given as the values of the bias.
In the $d$ dimensional case, $x, W, b$ are of sizes $d, K\times d \times d, K\times d$ and $W$ is $K$ repeats of the $d$ dimensional identity matrix to obtain $K$ repeats of the input particle.
The expectation is then taken with respect to the $K \times d$ bias. As $K \to \infty$, we obtain the true expectation.
We will include this more detailed explanation in the derivation of the MLP architecture.
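The repetition construction described here can be sketched numerically as follows (a hedged illustration with made-up sizes and a tanh interaction; not the paper's code):

```python
import numpy as np

d, K = 2, 4
rng = np.random.default_rng(0)
x = np.array([1.0, -0.5])                  # input particle, dimension d
W = np.tile(np.eye(d), (K, 1, 1))          # K repeats of the d x d identity
b = rng.normal(size=(K, d))                # K hypothetical particles (the bias)

phi = np.tanh                              # convolutional interaction phi(x - y)

repeats = np.einsum("kij,j->ki", W, x)     # K copies of x, shape (K, d)
out = phi(repeats - b).mean(axis=0)        # empirical expectation over K particles

# identical to averaging phi(x - y) directly over the bias particles,
# i.e. an expectation under a sum of K Dirac measures
assert np.allclose(out, phi(x - b).mean(axis=0))
```

As $K$ grows, the average over the bias rows approaches the expectation with respect to the measure those rows represent.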
(Mean-field layer)
We apologize for the confusion and added a new exposition to this section in the general response to all reviewers.
We answer specific questions below.
5. (Distribution of $P_t$)
The distribution $P_t$ defines the particle distribution at time $t$.
Since each time marginal is absolutely continuous with respect to the Lebesgue measure, the Radon-Nikodym derivative exists.
That way, if we are approximating a drift of the form: $\mathbb{E}[ \varphi(x, y) ]$ with the expectation with respect to $P_t$, we can rewrite as $\mathbb{E}_{P_0}[\varphi(x, y)\frac{dP_t}{dP_0}]$.
6. (Radon-Nikodym derivative)
The Radon-Nikodym derivative is a function of $W_0$. The change of measure $\frac{dP_t}{dP_0}$ is applied to the base measure $P_0$ given by the weight matrix $W_0$.
This leads to an interpretation of the IM architecture as shared weights, re-weighted by the Radon-Nikodym derivative at each time.
7. (Interpretation of $W_0^{(i)}$)
Exactly, $W_0^{(i)}$ can be thought of as a particle from some distribution that is shared across all time marginals through the change of measure.
8. (Learning the Radon-Nikodym derivative)
We apologize for the issues in clarity.
We recall the goal to compute $\mathbb{E}_{y \sim P_t}[\varphi(x, y)]$.
If we consider this to be an empirical expectation, then we can rewrite it as $\frac1n \sum_{i=1}^n\varphi(x, y^{(i)})$ where $y^{(i)}$ are observations with distribution $P_t$.
This is where the concept of width of the mean-field layer comes into play -- as $n\to\infty$ this empirical expectation becomes exact.
We write it as an expectation with respect to the empirical measure which is a sum of Dirac measures.
Since we only want to compute an expectation at each time, we can rewrite it as an expectation with respect to a change of measure given by the Radon-Nikodym derivative.
In particular, the factor $\frac{d P_t}{d P_0}$ is approximated by a neural network with inputs $W_0$ and $t$.
This allows us to take the expectation with respect to the base measure defined by $W_0$ with a biasing term given by $\frac{d P_t}{d P_0}$.
9-10. (ML architecture and $P_t$)
We are sorry for the confusion; the marginal law is jointly estimated and penalized to be self-consistent via equations (10) and (11), which are repeated at all time intervals.
The estimation procedure is described in Algorithm 3.
In the implementation, we represent $P_t$ as a conditional normalizing flow (the GLOW model) where the conditioning variable is $t$.
The architecture is also described in Appendix C.3.
11. (KS Statistic)
This is a great point, we included additional results in the PDF on the KS statistic for the one-dimensional datasets.
12. (Performance as a function of width and particles)
We thank the reviewer for this suggestion, we included ablation studies in the PDF in the general response.
For some equations (e.g. the ones requiring jumps) we note empirically that when the width is increased, accuracy is improved until a saturation point.
For others (e.g. Kuramoto), the width parameter does not play as big a role; rather, maintaining the structure of the network is what improves performance relative to other methods, as shown in Appendix C.4.1.
We also included figures illustrating the improved convergence rate of the larger width architectures when $\varphi$ is known and unknown.
We suspect that this is due to favorable properties of the optimization landscape when including more parameters.
The EM architecture also improves with increased particles as expected.
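For reference, the Kolmogorov–Smirnov statistic discussed in point 11 — the sup-norm distance between two empirical CDFs — can be computed along these lines (an illustrative sketch, not the code used for the reported results):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max |F_a(x) - F_b(x)| over all x,
    where F_a, F_b are the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])                            # candidate maximizers
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

# identical samples are at distance 0, disjoint samples at distance 1
print(ks_statistic(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])))  # 0.0
print(ks_statistic(np.array([0.0, 1.0]), np.array([10.0, 11.0])))          # 1.0
```

Because both empirical CDFs are piecewise constant between sample points, evaluating at the pooled sample points suffices to find the supremum.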
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the reply and the added KS statistics. Some of my questions are addressed, but I would like to follow up with the others.
On questions 1, 2, 4, I think my main concerns are somewhat stylistic, but I believe this is important. In particular, I would like to read a paper that clearly defines the context of the problems it is solving, and that does not over-claim anything beyond what the claims amount to upon clarification. For example, I don't think it's fair to claim that MLPs can model MV dynamics, as the connection is quite weak in my opinion.
W.r.t question 5, is there a difference between $\mathbb{P}_t$ and $P_t$? Also when you say the distribution of the particles at time t, do you mean the joint law over all the particles?
For question 8, can you clarify exactly how you get the neural network to represent and learn this Radon--Nikodym derivative? I would like to understand the setup here; I'm genuinely not sure where the signal is coming from, what the loss is, etc.
So I know I asked for the KS statistic in the review only, so you didn't have much time to experiment further, but I am somewhat concerned the KS statistics are quite large. Since the CDFs are functions contained in $[0,1]$, I would hope the KS statistics are less than 0.1 by a successful method. A large KS statistic can mean many things, but most likely the numerical method is not quite capturing the same distribution. You can plot the two empirical CDFs, or histograms/kernel density estimates, so that you can visually examine them and that should paint a clear picture.
Honestly speaking, at this point I don't think I'm too convinced yet, but feel free to respond further so we can continue the discussion.
---
Reply to Comment 1.1.1:
Comment: We appreciate the follow up and would also like to thank the reviewer for the opportunity to discuss.
1. Please allow us to clarify our claims on neural architectures, only two proposed neural architectures for representing MV-SDEs, implicit measure (IM) and marginal law (ML) architectures, based on learned measures and generative networks (line 71).
These architectures are able to represent the general form of MV-SDEs, including general factorization of the drift and general form of interaction.
We will introduce the MLP architecture only for motivation, relegate the bulk of the MLP exposition to the appendix, and clarify in the main text the assumptions and weak connection of MLPs to MV-SDEs.
For consistency, we begin by clearly defining the context of the problem we are solving as modeling and inferring parameters for MV-SDEs with linearly factorized drift and general interaction:
$$b(X_t,p_t,t)=f(X_t,t)+E_{y\sim p_t}[\varphi(X_t,y)]$$
where $p_t=\mathrm{Law}(X_t)$. The particles $X_t$ are exchangeable and distributed as $p_t$.
Then, to prevent misunderstandings, we restate our contributions on neural architectures as only the implicit measure (IM) and marginal law (ML) architectures, with the mean-field components summarized as:
$$\mathrm{IM}(X_t): E_{y\sim p_t}[\varphi(X_t,y)]=E_{y\sim p_0}[\varphi(X_t,y)\frac{\mathrm{d}P_t}{\mathrm{d}P_0}]\approx\frac{1}{K}\sum_{i=1}^K[\varphi(X_t,W_0^{(i)},t;\theta)].$$
$$\mathrm{ML}(X_t): E_{y\sim p_t}[\varphi(X_t,y)]\approx\frac{1}{K}\sum_{i=1}^K[\varphi(X_t,Y_t^{(i)};\theta)], \quad Y_t^{(i)} \sim p(\varepsilon^{(i)},t;\phi).$$
where in the IM architecture, $\varphi(\cdot,W_0,t;\theta)$ is a MLP with inputs $X_t, W_0$ and $t$ that approximates the combination of the interaction function $\varphi$ with inputs $X_t$ and $y_t$, and the change of measure with inputs $y_0$, represented by weight $W_0$, and $t$;
and in the ML architecture $\varphi(\cdot, \cdot;\theta)$ is a MLP with inputs $X_t, Y_t$, and $p(\cdot, t;\phi)$ is a generative architecture with an input noise source $\varepsilon$ and conditioning on time $t$.
In addition, to prevent over claims, we will emphasize the weak connection of MLPs to MV-SDEs, specifically MV-SDEs with the linearly factorized drift and convolutional interaction, introduce it only for motivating the representation of expectations with MLPs and relegate the bulk of the MLP exposition to the appendix.
With regards to questions 1, 2, 4:
a. We motivate with the form of the linearly factored drift but this condition is not necessary in the proposed methods.
b. We consider the convolutional form of the MV process due to its ubiquity in the literature. We only require this structure in the MLP architecture and we do not impose this structure in the other architectures.
c. We note the MLP approximation of the MV process holds in the case where the convolutional form is given. The MLP is only used as motivation and is not used as a main contribution of the work.
It is not our intention to over-claim; please let us know what else the reviewer believes is oversold and we will make adjustments to the final manuscript.
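As a toy numerical illustration of the ML mean-field component restated in point 1 — sample $Y_t^{(i)}$ from a time-conditioned generative model, then average the interaction — with the learned conditional flow replaced by a simple reparameterized Gaussian (all names and choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 5000

def generative(eps, t):
    # stand-in for the generative model p(eps, t; parameters):
    # here Y_t = t + eps, so the time-t marginal is N(t, 1);
    # the paper instead learns a conditional normalizing flow
    return t + eps

def ml_layer(x_t, t, phi):
    eps = rng.normal(size=K)
    y = generative(eps, t)                  # Y_t^{(i)} ~ p(. | t)
    return float(phi(x_t, y).mean())        # (1/K) sum_i phi(x_t, Y_t^{(i)})

# with phi(x, y) = y the layer recovers the mean of the time-t marginal
est = ml_layer(0.0, t=1.5, phi=lambda x, y: y)
```

As $K$ grows, the Monte Carlo average converges to $E_{y\sim p_t}[\varphi(x,y)]$ for the marginal encoded by the generative model.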
2. $P_t$ and $\mathbb{P}_t$ are the same, and yes this refers to the joint law of all particles at time $t$. Apologies for the confusion here, we originally used $\mathbb{P}_t$ in the Radon-Nikodym derivative since that is conventionally written using the blackboard font but realized it may introduce more confusion when switching to referring to the density only.
We were aiming for a consistent notation and will use $p_t$ and $\mathrm{d} P_t$ in the final manuscript.
3. The Radon-Nikodym derivative can be understood as a positive function that takes as inputs a point $y$ and time $t$ and outputs the weight of that point at time $t$ such that the weighted expectation matches the true expectation.
To parameterize such an object, we need to represent a function $\lambda$ that maps $y, t \to \mathbb{R}^+$.
This can be done using a neural network.
Putting this together with the interaction function $\varphi$, we have
$$
\mathrm{IM}(X_t) := \frac1K \sum_{i=1}^K\varphi(X_t, W_0^{(i)}; \theta) \lambda(W_0^{(i)}, t; \theta) = \frac1K \sum_{i=1}^K \varphi(X_t, W_0^{(i)}, t; \theta)
$$
where $\lambda$ represents the weight on each hypothetical particle of the base measure and we can combine the product $\varphi(X_t, W_0^{(i)};\theta)\lambda(W_0^{(i)},t;\theta)$ into a single term $\varphi(X_t, W_0^{(i)}, t;\theta)$ represented with a single MLP.
For estimation, we perform maximum likelihood estimation (MLE) with the likelihood given in equation (8) derived from Girsanov's theorem that takes as input the modeled drift and the observations. The estimation procedure is general and works across the proposed architectures. We thus use the IM parameterization and perform the same estimation techniques that we proposed (MLE) in the rest of the text. If the drift is correct, an appropriate Radon-Nikodym derivative was learned. | Summary: This paper proposes a methodology for simulating McKean-Vlasov (mean-field) equations using standard function approximation techniques, e.g. neural networks. It provides mathematical intuition for these algorithms and evaluates them on a broad suite of benchmarks.
Strengths: The paper is very well written with clear exposition of its main points. The proofs of key claims seem broadly correct as well.
The methodology is broadly well justified and uses very intuitive ideas from stochastic analysis.
In particular, the adaptation of standard neural network techniques for simulating ODEs/SDEs is not entirely applicable here, and so the derived techniques need to account for the estimation of the particle density. The resulting algorithm is novel and an important independent contribution.
The experimental evidence, especially in the Gaussian case, seems to vindicate the intuition behind this algorithm, and the method clearly outperforms the chosen baselines.
Weaknesses: I would say that the ultimate idea is rather simple, i.e. approximating both the drift function (both interactive and interaction-free), and possibly the particle density with some kind of learned approximations.
I appreciate the inclusion of standard deviations in the Tables; however, these values seem, particularly in Table 1, to be quite large relative to the proposed gains.
I have some additional questions about the methodology. To summarize, I think this paper makes fairly solid contributions to the simulation of McKean-Vlasov equations, which is an important problem. If my questions are addressed, I would be amenable to raising my score.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How is the Radon-Nikodym change-of-measure learnt in the implicit model? This seems like a difficult task and I would be skeptical that it would be done without large amounts of reference data. Consequently, I am not sure what the clear advantage of IM over EM would be. Could the authors elaborate?
The usage of sample trajectories from Eq. (12) in the objective (11) seems quite non-trivial if the constraint in (11) is exact. In particular, I don’t see how (11) could be easily enforced. It seems that this is not being done exactly from Algorithms 3 and 4 in the appendix, so this should probably be clarified in the main text.
I would argue that Methods 3.2 and 3.3 are more similar than suggested in Figure 3, since both are essentially proposing to learn the densities (but one using a Radon--Nikodym derivative w.r.t. time $0$ and the other a generative model). However, the emphasis on neural networks in the implicit method is quite important, so perhaps the current organization is OK.
The first sentence in line 180 is redundant given the definitions.
Remark 3.2 is probably better cited from Villani, particularly Sections 8.3 or 9.6 of Topics.
**Typos:**
L. 591 -> I don’t think this should be pointing to section C.2.3, but rather to C.3 or something like that.
**References**
Villani, C., 2021. Topics in optimal transportation (Vol. 58). American Mathematical Soc..
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the well-thought-out feedback and comments.
We address these individually below.
- (Simplicity in the ideas)
We hope that, in addition to the methodological contributions, the motivation behind studying distributional dependence within the context of stochastic processes in machine learning is received as a new and interesting approach.
In addition, we showed with some preliminary results that compared to a generic MLP, using the MF layer and generative model brought about some theoretically and empirically justified improvements, such as the implicit regularization, relation to attention and the ability to model jumps without a jump noise process.
- (Large variances)
The reviewer is correct, the performances of the proposed methods in the real datasets are difficult to compare since they tend to have higher variances.
We note that for the probabilistic modeling experiments, we mainly wanted to highlight the performance compared to the MLP architecture, which in many of the scenarios the proposed methods show an improvement.
To add additional context, we included other common density estimation methods, but we are mainly testing whether the architectures explicitly informed by the McKean-Vlasov structure can improve under the same estimation technique.
For the synthetic experiments, the differences between the architectures become more apparent.
We will highlight this within the discussion of the results and the limitations.
1.a. (How the change of measure is estimated)
We apologize for the confusion regarding this computation.
In the implicit model, the base measure $P_0$ is given by the weight $W_0$, the change of measure $\frac{d P_t}{d P_0}$ is approximated by a neural network with inputs $W_0$ and time $t$.
The change of measure is estimated jointly with the network parameters through the ELBO.
We will rewrite according to the note on the mean-field layer in the general response.
1.b. (Advantage of IM over EM architecture)
For the EM architecture, the inputs to the network need to be the $N$ sample points that we observe at each time in order to compute the empirical expectation of the $N$ particles influencing the current particle.
For all time marginals, there is no learned measure. We need the population of $N$ particles, thus the name empirical measure.
If there are too few particles, the empirical expectation may have high variance. We added an ablation study on the number of particles in the PDF in the general response.
The IM architecture on the other hand represents the time marginal law $P_t$ implicitly through the base measure $P_0$ and the change of measure $\frac{d P_t}{d P_0}$. To compute the expectation, we do not need a different set of samples at each time point.
Specifically, the factor $\frac{d P_t}{d P_0}$ acts as an importance sampling weight applied to the base particles of $W_0$ which is shared across time.
It also acts like an interpolator in cases where there are too few data points.
In addition, if we consider the case of the stationary measure (i.e. $P_t \to P_\infty$), then the IM architecture needs to only estimate the best fitting weight $W_0$.
This is why we studied the implicit regularization of the IM architecture in section 5 so that we can better understand this interpolating behavior.
2. (Enforcing the constraint)
The reviewer is correct, the constraint is not exactly satisfied but included as a penalty term in the loss function during optimization.
This is done using sampling, where we sample the trajectory and compare the expectation of the samples versus the function itself.
We briefly described the procedure in Algorithm 3, and will further clarify this in the final manuscript.
3. (Differences between methods 3.2 and 3.3)
The reviewer is correct, both approaches model distributional dependence, one with a base measure and change of measure and the other with a generative model.
The main differences between the IM and ML architectures is the IM architecture requires integrating according to a sampling scheme like the Euler-Maruyama method to obtain marginal observations while the ML architecture allows sampling of arbitrary time marginals.
In that sense, we have access to the full density at each time marginal with the ML architecture.
4. (Redundancy in line 180)
We thank the reviewer for pointing that out, we will remove this line or rewrite it in a way such that we recall the properties of the original assumptions.
5. (Citation)
This is a great point. We originally cited Santambrogio's manuscript in Remark 5.2 due to its accessibility, but we will change the citation to Villani's book.
6. (Incorrect hyperlink)
We thank the reviewer for pointing that out, we will correct the link to Appendix C.3 in the final manuscript.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their prompt and comprehensive response. I agree in principle that this paper makes some important contributions in terms of building a practical framework for simulating MV-SDEs. Having reviewed both the response to my initial remarks and those of the other reviewers, it seems that the presentation of the paper could yet be improved in many aspects.
As the authors have addressed my main concerns (in particular I appreciate the additional KS statistics in the Response to Review DRq5, which are quite convincing), I will raise my score to a 7, contingent on improvements being made to the presentation in the final draft.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your detailed comments, which will significantly improve the final version of our paper. We will incorporate all the comments that you have made.
Again, a million thanks for your thoughtful review and comments. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their helpful feedback and all the comments in helping improve the paper. Here we include the common responses to all reviewers:
### Mean-field layer exposition
Let us first define the measure of the particles at time $t$ to be $P_t$ and an arbitrary base measure of the particles at a time (call it `$0$') as $P_0$.
Our goal for this architecture is to
a) represent the base measure $P_0$ as a sum of Dirac functions; and,
b) change the measure for each time such that the expectation is correct at each time, i.e. $\mathbb{E}_{P_t}[\varphi(x, y)]=\mathbb{E}_{P_0}[\varphi(x, y) \frac{dP_t}{dP_0}]$.
Both the base measure $P_0$ and the Radon-Nikodym derivative $\frac{dP_t}{dP_0}$ are modeled and learnt.
Before we describe how this is done, we want to note the motivations behind this representation.
We no longer need particles at all times $t$ -- rather, we just need to know the base measure and change of measure.
Furthermore, we can use this concept to interpolate between time points using the change of measure that is a function of the base measure and time.
To represent the base measure $P_0$, we consider a weight matrix $W_0 \in \mathbb{R}^{K \times d}$ with the number of rows (or the width) equal to $K$. Each row of the weight matrix may be seen as a hypothetical particle of dimension $d$.
To represent the change of measure $\frac{dP_t}{dP_0}$, we consider a neural network with inputs $W_0$ and $t$ to approximate $\frac{dP_t}{dP_0}$ for all $t$.
This is formalized by the mean-field layer which includes both of these components, $P_0$ and $\frac{dP_t}{dP_0}$.
We can think of this as a particular type of neural network where the weight $W_0$ representing $P_0$ is shared across different time steps through a re-weighting factor $\frac{dP_t}{dP_0}$.
The key is that by including the mean-field layer, we can easily and explicitly represent complex interactions between each input $X_t$ and the measure $P_t$.
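As a toy numerical check of this reweighting idea — here the Radon-Nikodym derivative is plugged in exactly rather than learned, and the base measure is an arbitrary Gaussian, so every choice is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4000
W0 = rng.normal(0.0, 1.0, size=K)   # base "hypothetical particles", P_0 = N(0, 1)

def rn_derivative(w, t):
    # exact density ratio dP_t/dP_0 for P_t = N(t, 1) against P_0 = N(0, 1);
    # in the architecture this factor is a learned neural network of (w, t)
    return np.exp(t * w - 0.5 * t**2)

def mean_field_layer(x, t, phi):
    # E_{y ~ P_t}[phi(x, y)] ~= (1/K) sum_i phi(x, W0_i) * (dP_t/dP_0)(W0_i)
    return float((phi(x, W0) * rn_derivative(W0, t)).mean())

# with phi(x, y) = y the layer should recover the mean t of P_t = N(t, 1)
est = mean_field_layer(0.0, t=0.7, phi=lambda x, y: y)
```

The same fixed particles $W_0$ serve every time marginal; only the reweighting factor changes with $t$, which is the interpolation property described above.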
### Important models with linearly factorized drift
The drift can be factored linearly in a number of models that have been studied in the literature; we list a few below:
- The mean-field FitzHugh--Nagumo equation, which models neural spikes.
- The Kuramoto equation which models oscillators.
- The opinion dynamics model which models the interactions of opinions of individuals.
These equations also appear in Appendix C.2.1 where we described the synthetic experiments.
We note, however, that the linear decomposition of the drift is not necessary to use the proposed architectures, but we present it this way for ease of exposition, and since many important models in the literature assume this form.
In implementation, the proposed architectures support drifts of the form $g(\mathbb{E}[ \varphi(x,y)], x)$, allowing application to more general scenarios without linearly factorized drift; therefore, no additional conditions are required.
We will motivate the linear factorization and also note the general representation in the final manuscript.
#### Tables and figures in the PDF
We include the following tables in the PDF:
1,2. The Kolmogorov–Smirnov statistic for experiments that are 1-dimensional.
3. Ablations on the accuracy of IM architecture with different widths.
4. Ablations on the accuracy of EM architecture given different number of observations.
We include the following figures in the PDF:
1,2. Ablations on the convergence of IM architecture with different widths.
Pdf: /pdf/3d7412c61ddc6def771be62d605cd6b6ed4e41bc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers | Accept (spotlight) | Summary: This paper proposes an efficient and interpretable context-based dynamic pruning method. They use additional query / key layers to generate dynamic attention masks and sparsify self-attention maps. They also introduce a sparse sigmoid and regularization term to control the sparsity. In experiments, they demonstrate the lowest performance degradation compared to previous static attention pruning methods at the same level of sparsity. The results of throughput and speed analysis show that the proposed method can achieve additional inference efficiency with minimal performance loss. They analyze the distribution of remaining contexts with respect to part of speech and the depth of the layer. They also demonstrate that the proposed method dynamically prune attention based on contexts through context switch experiments.
Strengths: - The proposed method is well-motivated and easy to follow.
- The proposed method efficiently increases interpretability and sparsity with minimal performance drop compared to previous research.
- They provide an extensive analysis of the relationship between context length, throughput, and speed in various parameter sizes.
- Based on the proposed pruning method, they devise efficient batched data structures for the optimized computation.
Weaknesses: - Quantitative comparison of throughput and speed with local/sparse attention would contribute to a comprehensive understanding.
- Further qualitative study of interpretability (Fig. 8) varying the sparsity level (gamma) would provide additional intuitive observations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Do you have expectations on how the zero-shot performance difference between the pruning strategies will change as the number of model parameters increases?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As written in the Limitations section, scalability studies on larger language models (>7B) would provide further insights into the dynamic pruning method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and the interesting questions. We are glad that the reviewer found the method easy to follow, and we hope our work sparks future work in the direction of efficient and more interpretable inference. In the following, we address comments made.
> Quantitative comparison of throughput and speed
Local and Sparse Attention have fixed logic for determining which tokens to drop. These methods therefore require fewer FLOPs for the same level of sparsity. For completeness, we present here preliminary results on the throughput for different levels of sparsity for a GPT-2-xl model on an NVIDIA RTX A5000.
```
+------------------------------------------------------+
|Sparsity \ Model|Local Attention|Sparse Attention|Ours|
+----------------+---------------+----------------+----+
| 0.4 | 640 | 641 | 570|
+----------------+---------------+----------------+----+
| 0.6 | 850 | 851 | 765|
+----------------+---------------+----------------+----+
| 0.8 | 1391 | 1390 |1190|
+------------------------------------------------------+
```
Numbers indicate throughput in tokens/s, when the original unpruned context length is 1000 tokens. Although Local and Sparse Attention mechanisms allow for slightly higher throughput, these gains are counteracted by a higher drop in performance. This is captured both when measuring upstream perplexity (Figure 4) and especially when evaluating zeroshot performance (Figure 5).
> Further qualitative study of interpretability.
Interpreting attention weights is in general a challenging task. Our sparse attention is in that regard unique for two reasons. Firstly, only a subset of the context tokens are attended to, so attention weights are concentrated only on the more important parts of the context. This allows us to study the significance of each token, depending on whether it is dropped or not. Secondly, by analyzing which tokens trigger pruning, we can better understand when and which parts of the context become irrelevant. We have included more results in Appendix C, which we hope help build better intuition. Specifically, we present the sparsity patterns per layer in Figures 12 and 13. We also visualize the dropping mechanism in Figure 14, including an artificial scenario where we expect dropping to occur, constructed by concatenating paragraphs of different content, in Figure 15. We also provide some additional results on the embedding nature of the dropped tokens.
However, we agree with the reviewer that more possibilities exist to study and interpret the nature of this attention. We hope that our approach offers concrete advantages towards such an interpretation. Some related references [1, 2, 3, 4].
Preliminarily, and complementary to Figure 8 (bottom-left), we present the alive probability for different part-of-speech elements and models trained with different sparsity levels. As $\gamma$ increases, the alive probability of different parts of speech decays differently.
```
+---------------------------------------+
|gamma| NOUN | DET |PROPN| VERB | ADV |
+-----+------+------+-----+------+------+
| 0.0 | 0.967| 0.940|0.985| 0.970| 0.982|
+-----+------+------+-----+------+------+
| 0.0 | 0.919| 0.763|0.964| 0.935| 0.932|
+-----+------+------+-----+------+------+
| 0.0 | 0.850| 0.532|0.884| 0.822| 0.783|
+-----+------+------+-----+------+------+
| 0.1 | 0.672| 0.305|0.775| 0.648| 0.587|
+-----+------+------+-----+------+------+
| 0.3 | 0.508| 0.170|0.621| 0.439| 0.354|
+-----+------+------+-----+------+------+
| 1.0 | 0.309|0.0851|0.439| 0.253| 0.212|
+-----+------+------+-----+------+------+
| 3.0 | 0.159|0.0506|0.274| 0.127| 0.135|
+-----+------+------+-----+------+------+
| 10. |0.0803|0.0338|0.132|0.0455|0.0553|
+---------------------------------------+
```
[1] Chefer, Hila, Shir Gur, and Lior Wolf. "Transformer interpretability beyond attention visualization." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[2] Bolya, Daniel, et al. "Token merging: Your vit but faster." arXiv preprint arXiv:2210.09461 (2022).
[3] Rigotti, Mattia, et al. "Attention-based interpretability with concept transformers." International Conference on Learning Representations. 2021.
> How will zero-shot performance change as the number of model parameters increases?
Across our experiments with models of varying parameter counts, a consistent trend has emerged: models with pruned contexts retain the performance levels of their unpruned counterparts, even at notably high sparsity levels.
The continuous evolution and advancement of efficient fine-tuning approaches [4, 5] gives us confidence that the extensibility of these techniques to even larger models (>7B) is within reach, despite computational constraints. Looking ahead, we are particularly enthusiastic, as this avenue of investigation holds substantial promise, particularly given its potential for effective deployment within real-world systems.
[4] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).
[5] Dettmers, Tim, et al. "Qlora: Efficient finetuning of quantized llms." arXiv preprint arXiv:2305.14314 (2023).
We thank the reviewer for the interesting points raised, which will be discussed in the main text.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification and additional results which are interesting.
I have raised the rating after reading the rebuttal. | Summary: The paper introduced a novel inference strategy for transformer models that focuses on inference efficiency. Instead of retaining all context tokens throughout the entire inference process, they gradually eliminate tokens as they move deeper into the layers. To determine which tokens to drop, they trained some small linear layers to predict the remaining relevant context. Their experiments revealed that approximately 80% of the tokens could be safely discarded without much adverse impact on downstream task performance or perplexity. As a result, this approach significantly reduces the computational resources required for inference when the context length exceeds 500 tokens.
Strengths: - The paper demonstrates excellent writing with a clear flow of ideas.
- The authors introduce a unique and innovative data structure that enables batch operations involving masked tokens.
- The experimental results indicate that in scenarios where long context (> 500 tokens) is involved in the inference process, the models can safely drop up to 80% of the tokens without any impact on perplexity.
Weaknesses: - **The choice of downstream tasks raises questions:** The downstream tasks evaluated in Figure 5 primarily involve small context sizes, and Figure 7 indicates that a smaller context leads to reduced throughput compared to the standard dense model. It would be more persuasive to demonstrate that task performance remains intact when longer contexts are required, while simultaneously achieving gains in inference efficiency.
- **Insufficient experiments with stronger base models:** It appears that the proposed method inevitably leads to performance degradation for larger and more capable base models, as evident from both Figures 5 and 7. This is likely because stronger base models are better at utilizing contexts, and additional contextual information enhances performance. To strengthen the paper's argument, the authors should include more results using larger and stronger base models (>1.5B parameters), such as LLaMA, Pythia, or even OPT.
- **Evaluating generation quality:** Though the authors perform evaluations on language modeling with perplexity, it does not necessarily align with generation quality. It would be helpful if the authors could further provide evidence that dropping context tokens does not affect generation quality.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The paper introduces the use of sparse sigmoid to gradually enforce sparsity on the context tokens. It is important to understand why sparse sigmoid techniques were chosen and whether they offer any specific advantages. Could other sparsity techniques, such as l0-regularization with hard-concrete distributions (Louizos et al., 2017), or techniques used in movement pruning (Sanh et al., 2020), achieve similar results?
- There seems to be a typo in Figure 5. The green curve should be labeled as "Sparse Attention".
- Figure 2 appears to be a bit confusing without proper legends. While it is clear that "X" denotes dropped tokens, the meaning of the red blocks is unclear. Could you provide an explanation or add appropriate legends?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors mentioned the limitation of the working being exclusively tested on autoregressive language models, and specifically GPT2 model family. The paper would be stronger if the author can show positive results on stronger base models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and the interesting points raised. We are glad to hear that they found our method description clear and easy to follow. In what follows, we would like to take the opportunity to address the primary concerns.
> Choice of downstream tasks and evaluation of generation quality
We thank the reviewer for raising this point. We have provided more task-specific performance results in our *global response*. The specific tasks selected better capture long-range dependencies and interactions across sentences and paragraphs.
> Stronger base models
We agree with the reviewer that generalization to stronger base models is not always straightforward. In our *global response*, we provide experimental evidence that our finetuning method works *without any adaptation* for other models, *pythia* in this case. We hope that the additional results serve as a convincing argument that the proposed method is applicable to a large class of models.
> Why sparse sigmoid?
We experimented with a few different ways to enforce sparsity, including $L_0$-regularization and using a sigmoid with a tunable temperature parameter [Kim et al., 2022]. We found that these techniques were unsuitable for enforcing exact sparsity, while also leading to training instabilities. The sparse sigmoid on the other hand, inspired by Peters et al. [2019], leads to exact zeros for $\alpha > 1$, while also leading to well-behaved gradients.
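To make the distinction concrete, here is an illustrative sketch of the sparse sigmoid's two closed-form regimes (a simplification assuming the gate score is compared against a zero logit; not our exact implementation):

```python
import math

def sparse_sigmoid(x, alpha):
    """Illustrative alpha-sigmoid. alpha=1 recovers the logistic
    sigmoid (never exactly 0 or 1); alpha=2 is the sparsemax-style
    case with exact saturation, giving the exact zeros we need;
    alpha -> inf approaches a step function. Only the two
    closed-form cases are sketched here."""
    if alpha == 1:
        return 1.0 / (1.0 + math.exp(-x))
    if alpha == 2:
        # sparsemax over logits (x, 0): exact 0 below -1, exact 1 above +1
        return min(1.0, max(0.0, 0.5 * (x + 1.0)))
    raise NotImplementedError("general alpha requires an iterative solve")
```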
Louizos et al. propose the hard concrete distribution, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard sigmoid. They also rely on a tunable temperature parameter to control the “softness” of the distribution, similar to the techniques we experimented with and found unsuccessful. We do not apply our sparsity regularization to the weights directly but to the products $(\mathbf{Q}\_{\text{int}}^{\ell})\_n^T (\mathbf{K}\_{\text{int}}^{\ell})\_j$. This is where we believe the instability emerges from, as gradients are back-propagated through the whole network.
Sanh et al. select the $\text{top}\_k$ entries that are active in the forward pass and use a straight-through estimator to update weights even if they are pruned in the forward pass. This pruning strategy is less dynamic, as the number $k$ of active elements is chosen in advance. In our experiments, we found that models trained with the same regularization exhibited different sparsity levels for different contexts, i.e. the model “understood” that some contexts require more information to be preserved than others.
Thank you for the relevant references, we will include them in the related work.
> Typo in Figure 5.
Thank you, we will fix this.
> Legend for Figure 2.
“X” denotes tokens that are currently being dropped (their cached key–value pairs still exist up to this point). Red boxes correspond to tokens that have already been dropped, for which key–value pairs are no longer cached. We will update the legend to make this clearer.
We thank the reviewer again for the interesting points raised, we will update the paper to discuss these in detail.
---
Rebuttal Comment 1.1:
Title: Thanks for your response!
Comment: I have read the response and think my questions and concerns are well explained and addressed. Though I highly recommend that the authors include larger models (LLaMA-7b) to strengthen the paper, I also believe the current results sufficiently demonstrate the key arguments. Thus I will keep the original score. | Summary: The authors propose a modified dynamic masking operation to the traditional multi-headed attention in transformers in order to allow models to learn to drop tokens at specific layers during training. In order to facilitate learning, they use a sparse sigmoid that is annealed to interpolate from a traditional sigmoid up to a step function in the infinite limit. They show that this dynamic sparse attention achieves lower perplexity at the same sparseness levels compared to other modified attention mechanisms, as well as better zero-shot accuracy and throughput. The proposed model shows promise for improving the efficiency and overcoming the quadratic time complexity of traditional dense transformers.
Strengths: Improving the efficiency of vanilla dense transformers is an important problem, and being able to learn the sparsification operation rather than setting a static prior is an interesting direction. The proposed method is simple and computationally efficient, and the manuscript is well written and clear. The results demonstrate marked improvements over other sparse attention mechanisms at similar sparsity levels as well as increased throughput.
Weaknesses: The main concerns I have are with how well this adaptive sparsity can be used when training from scratch and on long range dependency benchmarks such as Long Range Arena [1]. Since the experiments are initialized from GPT-2, all dense information is present, the model simply has to learn which information it can safely ignore. However, when training from scratch this optimization problem becomes much more difficult since falling into a local optimum with respect to pruning early on can reduce the model’s effectiveness in the future as the $\alpha$ annealing increases towards a step function. It would be interesting to see results (even on small models) on how much utilizing this mechanism during from scratch training affects perplexity, given the same compute budget.
[1] Long Range Arena: A Benchmark for Efficient Transformers. 2020. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The $\alpha$-sigmoid operator is a bit confusing to me. How are argmax values calculated efficiently and how are gradients propagated through the argmax operator in equation 8?
- Do the baseline GPT-2 models use FlashAttention in their implementation? If not, how does the throughput of the dynamically pruned model compare to one with FlashAttention?
- How sensitive is the training to the correct annealing of the $\alpha$ parameter?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the importance of efficient inference and the novelty of dynamically pruning the context as a successful alternative. In the following, we take the opportunity to address the comments made.
> How well this adaptive sparsity can be used when training from scratch
Please see also our *global response*. We do not train from scratch to avoid extra computational costs. Training from scratch is a possibility, and local minima can be avoided by using a schedule for the sparsity parameter. However, our method is **designed** for finetuning, which is the most compelling option.
> Long range dependency benchmarks
We agree with the reviewer that a close examination needs to be performed to determine whether long-range dependencies can be preserved. See our *global response* for some additional results on benchmarks that better capture some of these interactions. Long Range Arena represents a suitable and attractive benchmark for encoder models, where computational benefits from sparsity are minimal. This is because there is no possibility for a cumulative dropping effect, like the one we introduce through Eq. (6). We could not find a suitable benchmark for autoregressive models. As longer contexts become more relevant, however, we are sure that such benchmarks will emerge, as hinted by concurrent work [1].
[1] Mohtashami, Amirkeivan, and Martin Jaggi. "Landmark Attention: Random-Access Infinite Context Length for Transformers." arXiv preprint arXiv:2305.16300 (2023).
> How are gradients propagated through the argmax operator in equation 8?
The forward pass is calculated efficiently via a bisection approach, by iteratively narrowing down the interval that contains the exact solution. The backward pass can then be computed independently. For details on the backward pass, we refer to Section 3.4 of [2]. As this is essential to our work, we will update the text with a concrete pointer to the section of the cited paper.
[2] Peters, Ben, Vlad Niculae, and André FT Martins. "Sparse sequence-to-sequence models." arXiv preprint arXiv:1905.05702 (2019).
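For intuition, a generic bisection solve for $\alpha$-entmax might look as follows (an illustrative sketch of the idea, not the exact routine from [2]): the threshold $\tau$ is bracketed, then the bracket is narrowed until the resulting probabilities sum to one.

```python
def entmax_bisect(z, alpha=1.5, iters=60):
    """Illustrative bisection for alpha-entmax (alpha > 1): find tau
    such that sum_i max(0, (alpha-1)*z_i - tau)^(1/(alpha-1)) = 1,
    then return those probabilities."""
    zs = [(alpha - 1.0) * v for v in z]
    expo = 1.0 / (alpha - 1.0)
    # The sum is >= 1 at tau = max(zs) - 1, equals 0 at tau = max(zs),
    # and decreases monotonically in tau, so the root lies in between.
    lo, hi = max(zs) - 1.0, max(zs)
    total = lambda tau: sum(max(0.0, v - tau) ** expo for v in zs)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total(mid) >= 1.0:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return [max(0.0, v - tau) ** expo for v in zs]
```

Entries far enough below the maximum score receive exactly zero probability, which is the property our context-dropping mechanism exploits.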
> Do the baseline GPT-2 models use FlashAttention in their implementation?
For all attention operations, we use FlashAttention as provided by `scaled_dot_product_attention` in *pytorch-2.02* (L462-463 in Appendix). FlashAttention becomes increasingly important as sequence length increases, so removing it would further highlight our gains with respect to the baselines. Thank you for pointing this out; we will highlight this in the main text.
> How sensitive is the training to the correct annealing of the parameter?
We provide some results in Appendix B and more specifically in Figure 10. In short, increasing $\alpha$ rapidly does not allow for the new interaction parameters to be properly tuned, i.e. the achieved sparsity is worse for the same perplexity. Increasing it too slowly leads to solutions of equal quality, but requires more compute for the finetuning phase.
We thank the reviewer again for all the questions, we will update the paper to highlight the points raised.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. My questions have been answered but I will keep my score. | Summary: Given the trend in large language models, it is a pretty important problem to search to efficient architectures. In this direction is the line of work to make the attention component efficient by introducing sparsity in the attention block and allowing every token to attend only a subset of the previous tokens.
The paper presents usage of Adaptively Sparse Attention whereby
1) The network learns to drop parts of the context no longer required.
2) The tokens dropped at every layer are independent, and a different set of tokens might be chosen to be persisted at every layer.
3) The network is trained using sparse sigmoid functions that introduce sparsity during training itself, in contrast to some works that introduce sparsity only during inference.
Experiments show that this methodology can lead to pretty strong performance (with minimal perplexity loss), even with 80% sparsity.
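A toy sketch of the mechanism summarized above (hypothetical names and shapes, not the paper's code): an interaction score per cached context token is passed through a sparse gate, and keep decisions multiply cumulatively across layers, so that a token once dropped stays dropped.

```python
def sparse_gate(score):
    # alpha = 2 "sparse sigmoid": exact 0/1 saturation outside (-1, 1)
    return min(1.0, max(0.0, 0.5 * (score + 1.0)))

def layer_keep(q, keys, keep_prev):
    """One layer's keep weight per cached context token, for the
    current token's interaction query q (plain lists of floats)."""
    keep = []
    for k_vec, prev in zip(keys, keep_prev):
        score = sum(qi * ki for qi, ki in zip(q, k_vec))
        keep.append(prev * sparse_gate(score))  # cumulative: dropped stays dropped
    return keep
```

A token whose keep weight reaches exactly zero can have its cached key–value pair discarded, which is where the memory and throughput savings come from.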
Strengths: The work presented in the paper is pretty innovative and impactful in this direction as finding efficient transformer architectures is key to sustain and grow the research and production usage of these large language models.
The model successfully exploits the fact that the attention matrix in these models are pretty sparse and additionally encourages that with sparse sigmoid-like function to mask out some tokens.
Experiments show pretty strong performance even with 80% sparsity in the network.
Weaknesses: Given that the model performs pretty well in terms of perplexity even with huge sparsity, it would be interesting to analyze whether there is a class of NLP tasks that is significantly affected (like knowledge-intensive tasks?), or perhaps translation or similar tasks where the structure of the input sentence matters, if the model learns to forget stopwords very early.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Was there any analysis done on which set of NLP tasks get affected the most with such sparsity?
- For inference on a given device, do we have numbers on the increase in the max sequence length that can be supported?
- Do these changes impact the training time throughput in any way?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: None discussed in paper, and nothing important that I can think of.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and for acknowledging the impact of efficient inference in autoregressive models. Here, we discuss the comments raised.
> Was there any analysis done on which set of NLP tasks get affected the most with such sparsity?
Such an analysis is indeed interesting and would enhance our intuition. We have provided some preliminary evidence (see our *global response*) on task-specific performance and on zeroshot performance on some additional tasks that we consider interesting in the high-sparsity regime. Thank you for raising this point, we will update the paper with a concrete discussion on this matter.
> For inference on a given device, do we have numbers on the increase in the max sequence length that can be supported?
We agree with the reviewer that increasing the maximum sequence length during inference on a given device is one of the most promising outcomes of pruning the context. In our case, the maximum context length is limited by the maximum supported sequence length of the pre-trained GPT model, see also our *global response*. Provided a base model that supports larger sequence lengths, the benefits for fixed hardware are substantial (see Table 1 of the attached PDF in our *global response*). We also present results for *pythia* models that already support longer context windows (see Figure 2 of the attached PDF in our *global response*).
> Do these changes impact the training time throughput in any way?
See also our *global response*. Training FLOPs are only marginally affected. Still, we only finetune pre-trained models, to avoid even the smallest increase in training throughput.
We thank the reviewer again for all the questions. We truly believe that addressing them helped improve our work significantly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! Nice work.
I have read through the responses and will keep my original scores. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for taking the time to review our paper and for the valuable feedback. Here we address common points raised and present more experiments that we believe further strengthen our findings.
**How sparsity affects specific NLP tasks (@65eF, @LhSf, @uWWw)**
Evaluating GPT models pretrained by language modelling objective is in general *challenging* and different techniques have been developed based on the final intended use. This challenge motivated us to showcase zeroshot accuracy to downstream tasks, in addition to evaluating perplexity for upstream tasks. The tasks we selected are commonly used, see e.g. Dettmers et al. [2022], Frantar et al. [2023b] or other popular benchmarks [gpt4all](https://gpt4all.io/index.html), [ilm-eval](https://tju01.github.io/ilm-eval/#?benchmark=lm-evaluation-harness). We agree with the reviewers, however, that some tasks may be affected more by sparsity compared to others. For that reason, we provide per-task zeroshot performance in the attached PDF, see Figure 1. We also include zeroshot results for additional tasks, that require long-range dependencies. It becomes clear that some tasks are affected more than others, according to our intuition.
**Impact on training time (@65eF, @LhSf)**
The dropping mechanism introduces a *marginal* computational cost in terms of FLOPs, as highlighted in Figure 6 (left). Furthermore, our approach can be applied as a post-processing step given an initial model trained in a standard fashion, decreasing the training cost even further. Training from scratch is also a possibility, given a suitable decay schedule for the sparsity parameter, but fine-tuning is more compelling as it can be applied to existing pretrained models. Also note that one could tune our hyperparameter $\gamma$ dynamically during training to achieve a desired level of sparsity, e.g. by monitoring the running average of the sparsity and increasing/decreasing $\gamma$ in small steps (similar to ADA [1] in GANs). Since, for evaluation purposes, we wanted to obtain multiple models with different levels of sparsity, we did not do this.
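As an illustration of this $\gamma$-tuning idea, such a controller could look roughly like the following (a hypothetical ADA-style sketch; names and step sizes are made up, and we did not use such a controller in our experiments):

```python
class SparsityController:
    """Hypothetical ADA-style controller: nudge the regularization
    strength gamma so that a running average of the observed
    sparsity tracks a target value."""
    def __init__(self, gamma=0.1, target=0.8, step=0.01, beta=0.9):
        self.gamma, self.target, self.step, self.beta = gamma, target, step, beta
        self.running = 0.0
    def update(self, batch_sparsity):
        # Exponential moving average of the sparsity seen this batch.
        self.running = self.beta * self.running + (1 - self.beta) * batch_sparsity
        if self.running < self.target:
            self.gamma += self.step                       # too dense: regularize harder
        else:
            self.gamma = max(0.0, self.gamma - self.step)  # sparse enough: back off
        return self.gamma
```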
**Additional Models and Inference Compute (@65eF, @uWWw, @jYJo)**
As the original context grows, pruning the context makes an increasingly significant difference, as more opportunities for pruning exist. This is clearly demonstrated by Figure 7 (bottom right). Generally, enlarging the context length of Transformers is a fascinating area of research with novel techniques currently being proposed, e.g. [2, 3]. In our experiments, we are limited by the maximum positional encodings of the base GPT models. To evaluate how our method performs with larger contexts, we additionally present preliminary results on *pythia* language models. *Pythia* uses rotary position embeddings [4], as do many current state-of-the-art LLMs, e.g. LLaMA 1 and 2.
We present results on “Perplexity vs sparsity” in the attached PDF, in Figure 2. These results demonstrate that *our pruning technique works out of the box for different positional encodings* as well. The longer contexts supported by our base model (2048 for pythia models) additionally allow for higher levels of sparsity at the same performance, here measured as perplexity (Figure 2 in the attached PDF). In the future, we intend to also evaluate larger models, potentially with the use of LoRA [5]. We again highlight that our approach is not specific to a particular language model; it can be applied to any autoregressive transformer architecture.
An additional benefit of our pruning strategy is that we can accommodate longer initial contexts for the same hardware, as raised by reviewer **@65eF**. Table 1 in the attached PDF showcases the maximum potential context windows for different batch sizes. Our strategy accommodates inference with much longer contexts, for a fixed device.
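As a back-of-the-envelope illustration of this benefit (a hypothetical helper; actual memory use depends on the implementation), the KV cache stores key and value vectors per layer for every kept token, so pruning a fraction of tokens proportionally extends the feasible initial context for a fixed memory budget:

```python
def max_context_len(mem_bytes, batch, n_layers, n_heads, head_dim,
                    dtype_bytes=2, sparsity=0.0):
    """Rough KV-cache budget: each kept token stores a K and a V
    vector in every layer. Pruning a fraction `sparsity` of tokens
    lets roughly 1/(1 - sparsity) times more initial context fit
    in the same memory. Illustrative numbers only."""
    per_token = 2 * n_layers * n_heads * head_dim * dtype_bytes
    return int(mem_bytes / (batch * per_token * (1.0 - sparsity)))
```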
[1] Karras, Tero, et al. "Training generative adversarial networks with limited data." Advances in neural information processing systems 33 (2020): 12104-12114.
[2] Press, Ofir, Noah A. Smith, and Mike Lewis. "Train short, test long: Attention with linear biases enables input length extrapolation." arXiv preprint arXiv:2108.12409 (2021).
[3] Chen, Shouyuan, et al. "Extending context window of large language models via positional interpolation." arXiv preprint arXiv:2306.15595 (2023).
[4] Su, Jianlin, et al. "Roformer: Enhanced transformer with rotary position embedding." arXiv preprint arXiv:2104.09864 (2021).
[5] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).
Pdf: /pdf/914ec744891c766d6009c59b45fb8d075b79c802.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Training Your Image Restoration Network Better with Random Weight Network as Optimization Function | Accept (poster) | Summary: This work introduces a novel and orthogonal approach by exploring the potential of using a random weights network as a loss function. The authors carefully design the random weights network with theoretical constraints based on mathematical manifolds. To validate the proposed solutions, extensive experiments have been conducted on mainstream image restoration tasks. The results consistently demonstrate the effectiveness of the approach.
Strengths: There are several strengths here:
1. This paper presents a pioneering exploration of the potential of using random weights network as a loss function. With a clear motivation and a series of interesting experiments, this study offers valuable insights into the applicability of the proposed approach, potentially shaping the direction of the loss function community.
2. The proposed designs seamlessly integrate into existing methods, resulting in performance improvements. A thorough set of ablation studies provides strong evidence to validate these findings.
3. The paper is well-written and maintains a high level of readability, ensuring that it is easily understandable and accessible to readers.
Weaknesses: There are several weaknesses here:
1. To ensure clarity, it is recommended to provide detailed information regarding the experimental settings, including the specific methodologies and procedures employed. This will allow readers to have a clear understanding of how the experiments were conducted.
2. The figures and tables in the paper lack consistent style, indicating the need for a thorough review by the author to identify and correct any errors. Additionally, it would be beneficial to address potential limitations and investigate the extent to which the random weights network can be applied to various tasks or datasets.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The authors are recommended to provide a detailed response addressing each concern raised in the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, detailed information.**
Thanks for pointing out this issue. First, due to page constraints, we presented the relevant methodologies and procedures in the supplementary materials. Furthermore, we will share the source code to elucidate the experimental setup details. Lastly, we will thoroughly review and elaborate on the settings in the revision to ensure readers have a clear comprehension of the conducted experiments.
**2, consistent style.**
First, we will meticulously review the entire paper, ensuring a uniform style for figures and tables while rectifying any errors. Our proposed random weights network, serving as a loss function, addresses data bias and holds theoretical applicability across various image restoration models.
Following your suggestion, we expanded the application of our proposed loss functions to broader image restoration tasks, including super-resolution. Due to time constraints, we focused on the representative image super-resolution model RCAN [1] with 2-4x scaling factors to assess effectiveness. These results further validate our assertions.
|setting | RCAN | +Taylor| +INN| +Zerofilter|
|----|----|----|----|----|
|X2 |38.27 |38.35 |38.35 |38.36|
|X4 |32.63 |32.71 |32.69 |32.69|
[1] Image super-resolution using very deep residual channel attention networks, TPAMI 2020.
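For context, PSNR figures like those in the table above follow the standard peak signal-to-noise ratio definition; the following is a minimal illustrative sketch (assuming 8-bit images with peak value 255, not the evaluation code used for the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of one intensity level gives 20*log10(255), about 48.13 dB.
ref = np.full((8, 8), 100, dtype=np.uint8)
out = np.full((8, 8), 101, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 48.13
```

Published super-resolution results are often computed on the luminance channel after boundary cropping; those benchmark-specific details vary and are omitted here.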
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks for the author's detailed reply. The rebuttal addressed my concerns. After carefully reading other reviews and the rebuttals, I agree with authors' claims on the extra training cost and differences from existed methods. Thus, I would raise my rating for this work. | Summary: This paper seeks to explore the untapped capabilities of random weights networks as a loss function. Inspired by mathematical manifolds, the authors propose innovative and straightforward solutions for random weights networks based on rigorous mathematical properties. Extensive experimental results across various image restoration tasks validate the efficacy of these solutions, showcasing their plug-and-play nature and ability to enhance model performance while preserving the original model and data configuration as the baseline. The novelty and interest of the idea are noteworthy.
Strengths: 1. Innovative approach: The authors propose a novel concept of utilizing a well-designed random weights network as a loss function, offering a plug-and-play solution that leads to remarkable performance improvements when integrated into existing baselines. This approach avoids the need for complex network architecture designs, making it highly appealing in the field of efficiency.
2. Theoretical foundation: The design of the random weights network is derived from rigorous mathematical manifolds, ensuring a solid theoretical basis. Furthermore, the authors have tailored the random sampling strategies to enrich the manifold representation, adding depth to the approach.
3. Comprehensive experiments: The paper provides extensive comparison experiments in both the main paper and the appendix, showcasing the advantages of the proposed flowchart. The inclusion of ablation studies and motivation analysis further strengthens the findings, ensuring convincing evidence of the method's effectiveness.
Weaknesses: 1. In all the tables, the authors are advised to highlight the best results for a clear illustration. In addition, more visual comparisons are required in the main body.
2. The authors have performed sufficient ablation studies. However, the corresponding experimental configurations, such as convolution kernel sizes, need to be detailed.
3. It would be better if the authors presented more experimental analysis.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weaknesses part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, visual comparison.**
Thanks for pointing out this issue. As you suggested, we will highlight the best results for a clear illustration. In addition, we will provide more visual comparisons in the main body to enrich this work.
**2, experimental configuration.**
Thanks for pointing out this issue. First, due to the page limit, we have shown the corresponding experimental configurations, such as convolution kernel sizes, for the ablation studies in the supplementary materials. Second, we will release the source code to clarify the details. Finally, we will check and detail the corresponding settings in the revision.
**3, experimental analysis.**
Thanks for pointing out this issue. Due to the page limit, we have provided only partial experimental analysis; we will add analysis of the underlying working mechanism to enrich this work.
Strengths: + The idea of using random weight networks as image restoration loss functions is interesting.
+ The quantitative evaluation is performed on several image restoration tasks.
Weaknesses: - The paper has a major technical flaw. The abstract states that the proposed loss functions do not incur additional training computational cost. This is unreasonable because the gradients of these loss functions require additional computation cost. To be effective, the proposed loss functions must be used in conjunction with a pixel loss and are more complex than the pixel loss. It consumes more GPU memory and time during training. The paper should report the additional training cost or the extra training time.
- The quantitative improvements are not significant. From Table 1 to Table 9, the proposed loss functions have limited impact on the PSNR and SSIM results. For example, MPRNet is a representative denoising method, but its PSNR gain is less than 0.1 dB. In addition, the paper reports MPRNet achieves 39.24 dB on the SIDD dataset, which is far behind the PSNR result of 39.71 dB in the original paper of MPRNet. I suspect that this paper does not train MPRNet to convergence, and it is unreasonable to compare different loss functions without full convergence. As far as I know, NAFNet [a] achieves state-of-the-art 40.30 dB on the SIDD dataset. Are the proposed loss functions applicable to NAFNet?
- Lack of visual results in the main paper. As a paper on image restoration, it is unreasonable that the main paper does not contain any visual results. Moreover, the visual results in the supplementary materials have negligible differences, which suggests the proposed loss functions are ineffective.
- Lack of evaluation on more general image restoration tasks. The paper selects image enhancement, image denoising, and pan-sharpening as the image restoration tasks. Are the proposed loss functions applicable to more general image restoration tasks such as super-resolution?
[a] Chen et al. “Simple Baselines for Image Restoration”, ECCV, 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not mention any limitations of the proposed method. I believe some discussions on the perceptual quality are necessary since the quantitative improvements are limited and the visual results in the supplementary materials have negligible difference.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, technical flaw.**
1) Since our proposed loss functions are used alongside pixel loss, there is no added computational burden. Pure pixel loss optimization can lead to local optima and oscillations. Conversely, our approach mitigates local oscillation and enhances model convergence. For instance, while DnCNN with pixel loss required 50K iterations to converge, integration with our designs only necessitated 30K iterations.
2) Regarding training memory, our lightweight random weights networks, detailed in the supplementary materials, ensure minimal memory overhead. Thus, employing our designs introduces only marginal and inconsequential memory demands throughout training, compared to using pure pixel loss.
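To illustrate the loss design discussed in this rebuttal (a frozen, randomly initialized network paired with a pixel loss), here is a small NumPy sketch. This is our own illustrative reconstruction, not the authors' implementation; the single random convolution layer and the weighting factor are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random 3x3 convolution kernels: sampled once, never trained.
KERNELS = rng.standard_normal((4, 3, 3))

def random_features(img: np.ndarray) -> np.ndarray:
    """Feature maps from a single frozen random conv layer (valid padding)."""
    h, w = img.shape
    feats = np.empty((len(KERNELS), h - 2, w - 2))
    for k, ker in enumerate(KERNELS):
        for i in range(h - 2):
            for j in range(w - 2):
                feats[k, i, j] = np.sum(img[i:i + 3, j:j + 3] * ker)
    return feats

def total_loss(pred: np.ndarray, target: np.ndarray, weight: float = 0.1) -> float:
    """Pixel L2 loss plus a feature-space term from the frozen random network."""
    pixel = np.mean((pred - target) ** 2)
    feat = np.mean((random_features(pred) - random_features(target)) ** 2)
    return pixel + weight * feat

x = rng.standard_normal((8, 8))
print(total_loss(x, x))            # identical images → 0.0
print(total_loss(x, x + 0.1) > 0)  # → True
```

Because the random network is frozen, it adds only a forward/backward pass through a small fixed feature extractor on top of the pixel loss, which is consistent with the memory argument made above.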
**2, quantitative improvements.**
1) To ensure comprehensive experimentation, we introduced a state-of-the-art model, MPRNet, for validation. Given dataset limitations, the current top-performing models naturally approach the dataset's performance upper bound, so it is understandable that additional efficient designs yield only minor gains.
2) Our random weights network, serving as a loss function, counters data bias and offers theoretical generality across image restoration models. Following your advice and due to time constraints, we adopted NAFNet as the baseline for evaluating our operator's efficacy. Results are provided below:
|dataset | NAFNet | taylor | INN | zerofilter |
|----|----|----|----|----|
|SIDD | 40.30| 40.41 | 40.38 | 40.46|
**3, visual results.**
Regarding qualitative outcomes, we'll heed your suggestion to include an increased number of visual results in the main paper.
**4, general image restoration tasks.**
Following your suggestion, we expanded the application of our proposed loss functions to broader image restoration tasks, including super-resolution. Due to time constraints, we focused on the representative image super-resolution model RCAN [1] with 2-4x scaling factors to assess effectiveness. These results further validate our assertions.
|setting| RCAN | +Taylor | +INN | +Zerofilter |
|----|----|----|----|----|
|X2 | 38.27 | 38.35 | 38.35 | 38.36|
|X4 | 32.63 | 32.71 | 32.69 | 32.69|
[1] Image super-resolution using very deep residual channel attention networks, TPAMI 2020.
---
Rebuttal 2:
Comment: Dear Reviewer UxgD,
Thank you for being a reviewer for NeurIPS2023, your service is invaluable to the community!
The authors have already submitted their feedback and I noticed that you don't appear to have submitted a new round of comments.
Could you examine rebuttals and other reviewers' comments, and open up discussions with the authors and other reviewers?
Regards, Your AC | Summary: This paper explores the notion of using random weight networks as a constraint during the training process for image restoration. This approach aims to encourage the network to learn more robust features and produce better results, addressing the limitations of traditional optimisation methods and deep learning-based methods. By incorporating the random weight network as a constraint, the authors validate the approach towards improving image restoration performance.
Strengths: 1. The authors provide sufficient theoretical insights behind the formulation of using a randomly initialised network as an auxiliary loss function during the optimisation process.
2. The ablation studies are elaborate and cover a wide variety of initialisation configurations and examine its effect on final restoration performance.
Weaknesses: 1. Experimental Setting section is repeated
2. The proposed approach is similar to [1, 2, 3], and without a discussion of the differences, the contribution of the proposed work is weak; specifically, the identification of different distributions and their impact on final restoration performance should be discussed.
3. The authors should discuss the impact of utilising multiple network architectures on the overall training period as well as memory requirements.
4. The impact of initialisation distribution should be discussed, which is missing. Furthermore in the qualitative results the authors should also provide corresponding input and ground truth images for easier evaluation.
5. While the authors evaluated the impact of network structures by replacing the CNN with transformer architectures in the ablation, other configurations, such as using transformer-based restoration networks, and the implications of using a lightweight optimisation network aren't considered. These ablations are necessary to identify the overall implications of using different strategies during optimisation.
[1] Gallicchio, Claudio, and Simone Scardapane. "Deep randomized neural networks." Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019). Springer International Publishing, 2020.
[2] Herrera, Calypso, et al. "Optimal stopping via randomized neural networks." arXiv preprint arXiv:2104.13669 (2021).
[3] Tarvainen, Antti, and Harri Valpola. "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results." Advances in neural information processing systems 30 (2017).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Kindly address the comments raised in weakness section
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations arising from space but not the limitation of their methodology, which was the original objective.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1, typos.**
Thanks for highlighting the issue! We'll thoroughly review the entire paper and correct all typos and grammar errors.
**2, differences to other works.**
1) Our method centers on designing the loss function: inspired by functional theory, we employ a random weights network with a strict mathematical manifold as a loss constraint. In contrast, the suggested works aim to find a suitable subnet within a random network to function as the working network, in the spirit of lottery-ticket theory. This leads to a clear distinction: our proposed random network serves as the loss function, while theirs serves as a feed-forward network. Theoretical assurance for our approach is straightforward due to its strict mathematical foundation.
2) As per your suggestion, we will incorporate a detailed discussion of the differences to enhance the content.
3) Our work delves into identifying various distributions, as discussed in Tables 8 and 9.
**3, memory requirements.**
First, we explore the effects of employing diverse network architectures on training duration in our supplementary materials. Moreover, all random weights networks in our study are lightweight; specifics are outlined in the supplementary materials. Consequently, using multiple network architectures introduces only minimal memory demands throughout the training process.
**4, initialization distribution.**
First, we address the impact of the initialization distribution, as demonstrated in Table 8 and Table 9, with the corresponding analysis in the ablation studies section. Furthermore, we will heed your advice and include input and ground truth images alongside the qualitative results.
**5, network structures.**
All our random weights networks are lightweight; see the supplementary materials for detailed configurations. For model depth, specific experiments are in Table 6 and Table 7. For the transformer-based approach, we validate our claim using the INNformer model and its results. Additionally, due to time constraints, we tested the effectiveness on SwinIR for image denoising, as shown in the following table.
|dataset|SWINIR | +Taylor | +INN | +Zerofilter |
|----|----|----|----|----|
|Set12 | 31.01 | 31.22 | 31.29 | 31.27 |
|BSD | 29.50 | 29.63 | 29.67 | 29.65 |
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I thank the authors for their detailed reply which addressed my concerns. Thus I upgrade my rating for this work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Can semi-supervised learning use all the data effectively? A lower bound perspective | Accept (spotlight) | Summary: This paper investigates whether improvements in the rates of sample complexities are possible for semi-supervised learning compared with supervised learning and unsupervised learning (up to a change in sign), for the specific case of a classification problem where each class is a symmetric Gaussian. It is found that no improvement in the rate can be made, but it is possible to improve the constants.
Strengths: 1 I think it is a very relevant question to investigate whether SSL can improve performance in theory over sup and unsup, and this can have a great impact
2 I like the deep theoretical analysis where minimax upper and lower bounds are derived
3 the theory is complemented by some experiments to validate the theoretical findings
Weaknesses: 4 the work misses a very relevant recent survey concerning this question in general:
Mey, A., & Loog, M. (2019). Improvability through semi-supervised learning: A survey of theoretical results. arXiv preprint arXiv:1908.09574.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 5 for Algorithm 3, how does it make sense that $t$ is an input? should this not be learned also? similarly, I see the same problem with Theorem 6. This would imply the algorithm has knowledge of the distribution? Or is there a universal (distribution independent) t?
6 Similarly, in the experiments, these parameters seem to be tuned on a holdout labeled set. Why is that appropriate?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for appreciating the importance of the problem as well as the thorough theoretical and experimental analyses in our paper. We are grateful for the feedback that you have provided and hope that our answers address the points raised in your review.
> 5 for Algorithm 3, how does it make sense that $t$ is an input? should this not be learned also? similarly, I see the same problem with Theorem 6. This would imply the algorithm has knowledge of the distribution? Or is there a universal (distribution independent) t?
The reviewer is correct in noting that the algorithm takes $t$ as a hyperparameter, and that different values of $t$ lead to different estimators. The $t^\star$ that achieves a smaller risk than the switching algorithm via Theorem 6 indeed depends on $s$ (for both algorithms that we are comparing), which is a distributional parameter that describes the hardness of the problem. Theorem 6 belongs to a standard type of result that shows the existence of a hyperparameter for which the estimator achieves certain favorable properties [Wei et al.; Li et al.; Meinshausen et al.; van de Geer]. In practice, it is very common to use standard data-dependent model selection tools that can also achieve good performance. We show this empirically and leave an analysis of the impact of model selection with held-out data as an exciting direction for future work. We refer the reviewer to the “General comments” for a more detailed discussion on this topic. For Theorem 6 specifically, we would furthermore like to emphasize that both SSL-S and SSL-W with optimal $t^\star$ require the knowledge of $s$.
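To make the symmetric 2-GMM setting and the roles of labeled versus unlabeled data concrete, here is a purely illustrative simulation; it is not the paper's switching algorithm, and the sample sizes, signal strength, and estimators are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([2.0, 0.0])  # true class mean: labels y = ±1 and x ~ N(y * mu, I)

def sample(n):
    """Draw n points from the symmetric two-component Gaussian mixture."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, 2))
    return x, y

x_l, y_l = sample(10)    # small labeled set
x_u, _ = sample(2000)    # large unlabeled set (labels discarded)

# Supervised estimate of mu from labeled data only.
theta_sl = np.mean(y_l[:, None] * x_l, axis=0)

# Unsupervised estimate of the component direction (up to sign) from the
# second-moment matrix; a handful of labels then resolves the sign.
_, eigvecs = np.linalg.eigh(x_u.T @ x_u / len(x_u))
direction = eigvecs[:, -1]
sign = np.sign(np.mean(y_l * (x_l @ direction)))
theta_ssl = sign * direction

x_test, y_test = sample(5000)
def acc(theta):
    return np.mean(np.sign(x_test @ theta) == y_test)

print(acc(theta_sl), acc(theta_ssl))
```

In this toy setting the unlabeled data pins down the component direction up to a sign, and a few labels resolve that sign; choosing between such estimators in practice is exactly the kind of model-selection question discussed above.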
> the work misses a very relevant recent survey concerning this question in general
Thank you for bringing this to our attention! As a survey paper that is aimed at the same question as us, it does indeed discuss a lot of prior work that our paper is based on. We will include this reference in our related work section in the revised version.
References:
[Wei et al] – Y. Wei, F. Yang, M. Wainwright. Early stopping for kernel boosting algorithms: A general analysis
with localized complexities, 2017.
[Li et al] – M. Li, M. Soltanolkotabi, S. Oymak. Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks, 2020.
[Meinshausen et al] – N. Meinshausen, P. Buhlmann. High-dimensional graphs and variable selection with the Lasso, 2006.
[van de Geer] – S. van de Geer. On tight bounds for the Lasso, 2018. | Summary: The paper presents a detailed analysis of semi-supervised learning (SSL) algorithms. Specifically, the authors establish lower bounds for 2-Gaussian mixture model distributions, revealing that no SSL algorithm can improve the sample complexities of optimal supervised or unsupervised learning. However, SSL can improve their error rates by a constant factor. The authors propose an algorithm that achieves lower error than both supervised and unsupervised learning, and conduct experiments on synthetic and real-world data.
Strengths: - The paper investigates the learning capacity of SSL algorithms and theoretically establishes a lower bound, which is an inherently interesting and meaningful endeavor.
- The paper is well-organized and easy to comprehend. The authors provide a clear overview of the problem and their proposed solution.
Weaknesses: - Some details about the method and theory part should be further elaborated on, see Questions below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The paper only considers Gaussian mixture distributions. Could more complex distributions be considered in future work to accommodate more real-world problems?
- The paper only considers the worst-case scenario. Is it possible to consider milder cases? For instance, could a data-dependent bound that relates to problem difficulty be derived?
- The statement on line 103 seems to be erroneous: when using the UL algorithm, why can't labeled data be considered as unlabeled data, so that UL can use $n_l + n_u$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - As mentioned in the Questions section, the authors should provide further clarification on the method and theory parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for appreciating the clarity of our manuscript and the importance of the contribution presented in it. In the “General comments” we address your question regarding possible extensions of our results to more general distributions, beyond GMMs. In what follows, we answer the remaining points in the review.
> The paper only considers the worst-case scenario. Is it possible to consider milder cases? For instance, could a data-dependent bound that relates to problem difficulty be derived?
We would like to point out that one of the novel characteristics of the lower bound that we derive is that it adapts to the problem difficulty, which in the setting considered in the paper is quantified by the signal-to-noise ratio $s$. This is in contrast to existing lower bounds for SSL [6, 18, 32] that only focuses on the worst-case scenario in which SSL achieves the same sample complexity as SL. We hope that our answer addresses the point that you have raised. If you have further questions regarding this aspect, we would like to kindly ask you to let us know, and we would be happy to provide further clarifications.
> The statement on line 103 seems to be erroneous: when using the UL algorithm, why can't labeled data be considered as unlabeled data, so that UL can use n_l+n_u?
Thank you for raising this point! This paragraph should indeed be revised.
First of all, it is indeed true that UL+ can use the labeled data without the labels in the first stage, when it performs unsupervised learning. However, typically, in empirical papers, UL+ style algorithms (e.g. SimCLR, SwAV etc) do not use the labeled data in this way since $n_u =\omega(n_l)$, and hence, it would not make a significant difference. This fact is also reflected in our rate comparisons: even if we allow UL+ to use the samples in $\mathcal{D}_l$ as unlabeled data, only the last line of Table 1 would change by a small constant in the regime when $n_u / u(n_l) = \Theta(n_l)$. More importantly, the main takeaway from Table 1 would not change: SSL cannot be both better than UL+ and SL at the same time since there is no regime for which $h_l$ and $h_u$ are simultaneously $0$. For the above reasons and in order to reduce notational overhead, we chose the slightly simpler definition of UL+ as algorithms that do not use the labeled set as unlabeled data. However, we acknowledge that you raise a very natural question, and, if it helps to avoid confusion, we would be happy to revise the definition of UL+ to also use the labeled data without the labels.
Secondly, the paragraph you have referenced (lines 103-110) is intended to show how UL+ algorithms are not making the best use of the available data. This shortcoming is not immediately obvious in some of the practical settings where UL+ style algorithms were applied in prior work, i.e. where unlabeled data is many orders of magnitude more numerous than labeled data. However, the “wasteful” aspect of UL+ algorithms becomes clear when $n_u$ is, for instance, only larger than $n_l$ by a constant factor. The issue is caused by the two-stage nature of UL+, which selects and commits to a small set of estimators using the unlabeled set, before choosing among them using labeled data.
Finally, in this paragraph (lines 103-110), we consider an exaggerated scenario in order to illustrate this failure of UL+. This is, of course, not a scenario likely to occur in practical applications, where $n_u$ is usually not smaller than $n_l$. We do acknowledge, however, that the current phrasing can lead to confusion, and will change it in the final version of the manuscript to better reflect our intention.
We again appreciate your time and effort in critically reviewing our paper. We will be happy to promptly answer any further questions you may have about our manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I don't have any further questions. | Summary: This paper analyzes the effect of SSL methods to improve the error bound. The results suggest that SSL cannot improve over the statistical rates of both SL and UL at the same time, but it is possible to improve the errors by a constant factor. Simple experiments on synthetic and small-scale real-world data validate the theoretical findings.
Strengths: - This paper studies an interesting and important problem: whether semi-supervised learning can effectively use all the data.
- A detailed theoretical study is provided on the 2-GMM distribution, which sheds light on this problem.
Modification:
- Though simple, the proposed SSL switching algorithm is interesting and insightful.
Weaknesses: - The theoretical analysis is based on GMM, which introduces a strong assumption about the data distribution. This limits the scope of application of the theoretical analysis.
- The proposed SSL switching algorithm introduces an extra hyperparameter s/t, and it seems that the algorithm is sensitive to the choice of the hyperparameter. Though it can be chosen by holdout labeled validation set, labeled data are relatively rare in the SSL setting. Is it possible to estimate the value of the hyperparameters?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the concerns in the weaknesses part.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation of this paper is that the analyses mainly focus on GMMs, which limits the scope of application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for appreciating the importance of the problem that we study as well as the significance of the insights that follow from our theoretical analysis. We are also grateful for bringing up the important observation that in the SSL setting, model selection based on only the unlabeled data is much more desirable than using a labeled validation set. As we argue in the “General comments” above, we find that selecting hyperparameters using a margin-based criterion computed on unlabeled validation data leads to similar results as the ones currently presented in the manuscript. We provide as evidence for this claim some key experimental results that we include in the attached PDF file. We will update all experiments to use the unsupervised model selection metric and stress that this does not affect in any way the takeaways of our experimental analysis.
We are thankful for your time and effort in reviewing our paper. In case you have any further questions regarding this manuscript, we will be happy to answer them promptly. | Summary: The paper provides a tight lower bound for semi-supervised learning for the 2GMM model. It compares the minimax rate with supervised and unsupervised learning. The authors also provide supporting experiments on real-world and synthetic datasets.
Strengths: 1. The paper is very well-written. The authors have effectively communicated their ideas, making it easy for readers to follow and understand the research.
2. The results presented in the paper go beyond existing prior work (when s > 0).
3. The authors have conducted experiments to support their theoretical analysis. This empirical validation strengthens the credibility of their findings and enhances the practical relevance of the proposed approach.
Weaknesses: The paper relies on strong assumptions regarding the 2GMM and linear model. This might limit the generalizability of the proposed method to other scenarios or data distributions.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Line 65: There seems to be a typographical error in the notation "A(D_l, D_l)." Please clarify this notation.
2. Line 121: I assume that 'u' represents a function mapping the number of labeled data points to the corresponding number of unlabeled data points. Can you please confirm this understanding?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our results and empirical validation as well as the clarity of our writing! We are grateful for the points that you have brought up in the review! We addressed your comment on the 2-GMM distribution in the “General comments” and now answer your remaining specific questions.
> There seems to be a typographical error in the notation "A(D_l, D_l)." Please clarify this notation.
Indeed, the notation should read $\mathcal{A}(\mathcal{D}_l, \mathcal{D}_u)$, that is, the algorithm $\mathcal{A}$ is applied to the labeled set $\mathcal{D}_l$ and the unlabeled set $\mathcal{D}_u$ – we will correct this and other typos in the final version of our manuscript.
> I assume that 'u' represents a function mapping the number of labeled data points to the corresponding number of unlabeled data points. Can you please confirm this understanding?
Your understanding is indeed correct.
We again appreciate your time and effort in reviewing the paper. We will be happy to answer any further questions you may have about our manuscript.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal. I feel they have adequately addressed my questions and concerns. I am leaning toward leaving my score as it is. | Rebuttal 1:
Rebuttal: We extend our sincere thanks to the reviewers for their thorough review of our manuscript and for the constructive feedback provided. We are heartened by the acknowledgment of the importance of our research question: whether SSL algorithms can enhance performance over optimal SL and UL algorithms. It is particularly encouraging that all reviewers considered our theoretical analysis to be "inherently interesting" and that it "goes beyond prior work". Furthermore, we are gratified that our experimental analysis, which "validates our theory", was deemed "insightful" by the reviewers.
We deeply appreciate the feedback and recognize its helpful role in improving our manuscript. In the ensuing comments, we address the prominent concerns raised in the reviews. Specific feedback from the reviewers will be attended to in our individual responses.
**Theoretical analysis focuses solely on GMM distributions.**
We agree that it would be interesting to obtain theoretical results that go beyond GMM distributions in future work. Nevertheless, we would like to emphasize the following:
Analyzing the lower bound for GMM distributions is insightful in its own right:
Not only is it common to model various real-world distributions as a mixture of Gaussians [Bouguila et al], but this choice also allows our analysis to yield precise upper and lower bounds that depend on the hardness of the problem. In particular, this is reflected by the dependence on the signal-to-noise ratio $s$ in our results.
Previous works on SSL with concrete upper bounds have also focused on 2-GMMs (or slight generalizations thereof) for likely similar reasons (e.g. [16]). Moreover, giving lower or upper bounds for this commonly analyzed setting has the added benefit that it allows for comparisons with prior works.
Going beyond GMM distributions requires fundamental advances in the analysis of UL algorithms, which we consider to be beyond the scope of this paper. For instance, it is only recently that the (tight) lower and upper bounds for UL on isotropic 2-GMMs have been proven [12, 13, 22, 35], and precise rates for more general distributions are even more difficult to come by.
Having said that, we agree with reviewers that analysis of more general distributions would be interesting and leave that to future work.
**Algorithms 1 and 2 take in the switching point/weighting coefficient as parameters.**
The primary goal of this paper is to prove a “hardness”-dependent minimax lower bound and show that it is tight. For the purpose of the latter, we prove a simple upper bound for the SSL-S algorithm (with oracle knowledge of $s$). As a second contribution, we prove that there exist algorithms that can improve over the error of SSL-S by a constant factor and give one example (SSL-W with a particular choice of $t$) – papers of such flavor are common, e.g. the theoretical analyses of early stopping [Wei et al; Li et al], or Lasso [Meinshausen et al; van de Geer].
Even though our optimality guarantees for the algorithms hold for specific hyperparameters $s$ and $t$ that are derived from theory and unknown to the method, in practice, we can estimate them using held-out data, which is what we show empirically in our experiments (see also the next section of the “General comments” about labeled vs. unlabeled validation data).
We agree that it would be desirable to prove guarantees for data-dependent hyperparameter search with an appropriate model selection procedure. We believe this is one of the many exciting directions for future work that could follow from our paper.
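Abstracting away the paper's exact estimators (which are not reproduced here), the switching idea can be illustrated on a toy symmetric 2-GMM. Everything below — the SL and UL estimators, the SNR-threshold rule — is a hypothetical sketch for intuition, not the paper's Algorithm 1 or 2:

```python
import numpy as np

def sl_estimator(X_l, y_l):
    # Supervised estimate of the mean direction: average of y_i * x_i.
    return (y_l[:, None] * X_l).mean(axis=0)

def ul_estimator(X_u):
    # Unsupervised estimate: top eigenvector of the sample second-moment
    # matrix (the sign is ambiguous, as it must be for UL on a symmetric GMM).
    moment = X_u.T @ X_u / len(X_u)
    _, vecs = np.linalg.eigh(moment)  # eigenvalues in ascending order
    return vecs[:, -1]

def ssl_switch(X_l, y_l, X_u, snr_hat, threshold=1.0):
    # Toy switching rule: trust the UL estimate when the estimated
    # signal-to-noise ratio is high, otherwise fall back to SL.
    if snr_hat > threshold:
        theta = ul_estimator(X_u)
        if theta @ sl_estimator(X_l, y_l) < 0:  # fix the sign with labels
            theta = -theta
        return theta
    return sl_estimator(X_l, y_l)
```

The sign ambiguity of the UL estimate (a symmetric 2-GMM is invariant to flipping all labels) is resolved with the few labeled points, which mirrors why even scarce labels remain useful in the high-SNR regime.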
**In experiments, the switching point and weighting coefficient are selected using labeled data, which is scarce in SSL settings.**
We thank the reviewers for bringing up this point which is indeed important for applying the algorithms in practice. We have now run additional experiments using unsupervised validation data for model selection and observe the same trends as when using a labeled validation set. We use a margin-based metric for hyperparameter selection i.e. we choose the hyperparameters that lead to the model with the largest (average) margin on the unlabeled validation set. We attach the revised version of Figures 3a and 3b in the PDF file and will change all experiments to use an unlabeled validation set. Notably, the new figures preserve the important trends that we discuss in Section 4.2, namely: 1) SSL-W is always better than SL and UL+; and 2) the gap between SSL-W and SL decreases with the SNR, and the gap between SSL and UL+ increases with the SNR.
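The margin-based selection rule described here can be sketched in a few lines; the function names and the linear-score setup below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def average_margin(scores):
    # Average unsigned margin |f(x)| over unlabeled validation points:
    # models that classify confidently score far from the decision boundary.
    return float(np.mean(np.abs(scores)))

def select_by_margin(candidates, X_val):
    # candidates: dict mapping a hyperparameter value to a fitted linear
    # classifier (weight vector). No labels are used for the selection.
    return max(candidates, key=lambda h: average_margin(X_val @ candidates[h]))
```

With a 2-GMM separated along the first coordinate, a classifier aligned with the separation direction attains a larger average margin than an orthogonal one, so it is the one selected — without touching any labels.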
References:
[Bouguila et al] – N. Bouguila and W. Fan. Mixture Models and Applications, 2019.
[Wei et al] – Y. Wei, F. Yang, M. Wainwright. Early stopping for kernel boosting algorithms: A general analysis with localized complexities, 2017.
[Li et al] – M. Li, M. Soltanolkotabi, S. Oymak. Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks, 2020.
[Meinshausen et al] – N. Meinshausen, P. Buhlmann. High-dimensional graphs and variable selection with the Lasso, 2006.
[van de Geer] – S. van de Geer. On tight bounds for the Lasso, 2018.
Pdf: /pdf/080789b9ef9e027c071d24a91ba67016d0568711.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation | Accept (poster) | Summary: This submission proposes a semi-supervised pseudo-label-based method for 3D point cloud segmentation with sparse label annotations. Instead of thresholding on the confidence of pseudo labels, the authors propose to use all unlabelled points and encourage high-confidence pseudo labels by regularizing with their entropy. The authors propose a theoretical analysis of their approach and experiments where they show that their method improves the low-label regime for several backbones and several datasets.
Strengths: - the authors propose a theoretical analysis with some insight
- The experimental results are convincing
- The experiments are extensive: 4 datasets and 4 baselines
Weaknesses:
- The entire ER+DA analysis leads to using a loss which is the cross-entropy between the pseudo labels and the prediction, which is already sensible and does not need to be seen as a special case of a more general setting that is never explored anyway. The fact that KL(p,q)+H(p)=CE(p,q) was known and did not need two full pages of motivation and equations (some parts are interesting, such as the gradient limits, but would be better suited in the appendix). Equation (6) is completely logical and can be used directly.
- On the other hand, the most interesting part of the paper is the pseudo-label generation hidden in Appendix B, which, as far as the reviewer knows, is novel. The reviewer thinks that the non-backpropagable momentum update of the class prototypes used for pseudo labels makes this method work. Without this, if p could be learned along with q, then equation (6) would be meaningless as p=q is a trivial solution, and the pseudo labels would not help at all. The paper would need to be rewritten to highlight this hidden mechanism, which is brushed over in a single sentence in the main paper yet sits at the method's core.
- Equation (6) reduces to classic cross entropy when all points are labelled, and yet the proposed methods improve the backbones in the fully supervised setting?
- The writing is subpar and lacks rigour. This leads to imprecise or even outright false statements at the core of the motivation. For example, the authors state that regularizing by the entropy decreases the "noise" in pseudo-labels when it actually encourages confident distributions and does not affect noise. There are many vague and uninformative sentences, some dealing with critical aspects of the paper. Some variables are also not introduced. The article remains overall understandable with the suppmat open in another tab.
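The decomposition the reviewer cites — cross-entropy equals KL divergence plus the entropy of the target (first) distribution — is a one-line identity that can be checked numerically; a minimal sketch with generic discrete distributions, not the paper's notation:

```python
import numpy as np

def entropy(p):
    return float(-np.sum(p * np.log(p)))

def cross_entropy(p, q):
    return float(-np.sum(p * np.log(q)))

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

# For any distributions p, q with full support: CE(p, q) = KL(p || q) + H(p)
```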
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Overall, the reviewer believes a very good idea is hidden in the paper; but, it takes a lot of effort to see through the subpar writing, imprecise statements, core ideas hidden in the appendix, and confusing notations.
The reviewer also believes the authors focused on the wrong part of their contribution.
As of now, the paper is not publishable. But if the authors put in the work to improve clarity and rigour, and focus on the important part of their contribution, this could become an impactful paper. The amount of effort might be too much for a rebuttal.
Q1) Equation (6) reduces to classic cross entropy when all points are labelled, and yet the proposed methods improve the backbones in the fully supervised setting?
S1) Swap the derivation of (6) and the pseudo-label generation between the appendix and the main paper.
S2) Add detailed proofs of Table 1 in the appendix
S3) Remove the notion that lower entropy = less noise
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and your acknowledgment of our extensive experiments and the provided theoretical analysis. In the following, we address your concerns carefully.
---
### S1. Motivation and swapping contents.
We very much appreciate your confirmation of our idea.
Besides the formulation of the ERDA loss, we also agree that the pseudo-label generation scheme is indeed another important component of our method. However, we believe that the motivation behind the ERDA loss is more beneficial for the weakly supervised 3D segmentation task and can provide better insights to the community.
Regarding the motivation, we would like to mention that, some existing papers [3,4,5] also adopt prototypes for pseudo-label generation in this 3D segmentation task.
Moreover, while methods such as [4,6] also utilize momentum-update for pseudo-label generation, they generally fail to utilize dense pseudo-labels due to the low-quality pseudo-labels.
This drives us to devise an effective learning scheme of ERDA to tackle the issues of using dense pseudo-labels.
Therefore, we highlight the design of ERDA in our paper.
Regarding the discussion under general setting, we would also like to mention that, the discussion is related to the mutual learning on pseudo-labeling [1,2], as also discussed in Sec.2 "pseudo-label refinement". While different types of loss are proposed in both 2D and 3D domains [1,2,3,5] during the use of pseudo-labels, there is generally a lack of comparison among these losses.
We thus motivate the ERDA from a more general setting for better comparison.
Lastly, we will revise our paper and will certainly add more discussions about our pseudo-label generation process, such as by including Fig.3, and will simplify our existing descriptions about motivation to avoid redundancy.
---
### Q1. Improvement for fully supervised learning.
We would like to first clarify that, in fully-supervised setting, we generate the pseudo-labels for all points and regard the ERDA loss as an auxiliary loss for fully-supervised learning.
We would also like to mention that the promising improvement brought by ERDA for the fully supervised setting is truly a surprising benefit as we designed the ERDA primarily for improving learning on unlabelled data.
Regarding this, we hypothesize that the ERDA for pseudo-labels can help stabilize fully supervised learning.
Considering that there could be noises in the ground-truth labels [7,8], pseudo-labels from ERDA learning may provide unexpected benefits.
Additionally, we also provided more ablation studies in fully-supervised setting in **Tab.R1** above.
Interestingly, we find that the distribution alignment (DA) shows more benefits than entropy regularization (ER), but both terms are beneficial.
We would like to further explore their relationship under full supervision as a promising future work, but it may be beyond the scope of this paper that focuses on weak supervision.
Lastly, we would revise to include more discussion regarding fully-supervised setting in the paper.
---
### S2. Detailed proofs of Table 1
Thanks for your advice and we will revise our paper accordingly.
---
### S3. Notions about entropy and noise.
Regarding the confusion about the connection between "entropy" and "noise" in our paper, we would like to first clarify that we actually consider 'noise' as uncertain predictions in our paper, which we believe aligns with the definition of entropy [9], while this formulation of "noise" does not strictly refer to the general concepts like incorrect predictions. As a result, there may be confusion about connecting "entropy" with "noise" in our paper. We will revise our paper to better clarify these concepts and refine our original statements.
Regarding symbols, some symbols are not defined when introducing the implementation of pseudo-label generation, as they are moved into the appendix due to the limited space. We will revise and include them together with Fig.3 in the main paper for better clarity.
---
### References:
[1] Feng et al. DMT: Dynamic mutual training for semi-supervised learning (Pattern Recognition) \
[2] Wang et al. Repetitive Reprediction Deep Decipher for Semi-Supervised Learning (AAAI 2020) \
[3] Zhang et al. Perturbed Self-Distillation: Weakly Supervised Large-Scale Point Cloud Semantic Segmentation (ICCV 2021) \
[4] Xu et al. Weakly Supervised Semantic Point Cloud Segmentation: Towards 10x Fewer Labels (CVPR 2020) \
[5] Zhang et al. Weakly supervised semantic segmentation for large-scale point cloud (AAAI 2021) \
[6] Zhang et al. Semisupervised Momentum Prototype Network for Gearbox Fault Diagnosis Under Limited Labeled Samples (TII 2022) \
[7] Ye et al. Learning with Noisy Labels for Robust Point Cloud Segmentation (ICCV 2021) \
[8] Song et al. Learning from Noisy Labels with Deep Neural Networks: A Survey (TNNLS 2022) \
[9] Shannon. A Mathematical Theory of Communication (1948)
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Thank you for the clarification.
Please make sure to include the positioning of your pseudo-label generation module with respect to the work you mention here.
The current presentation of the paper seems somewhat oblique. Specifically, the authors only explore the scenario where lambda=1, which leads the ERDA loss to merely simplify to standard cross-entropy. Given this, I recommend one of two approaches:
- Streamline this section to maintain clarity and coherence.
- Explore varying lambda values. This would validate the in-depth theoretical discussion and provide readers with a more comprehensive understanding of the benefit of the ERDA loss compared to more standard approaches.
Overall, the paper showcases promising ideas, and the results appear commendable. My reservations primarily pertain to its presentation. The emphasis, in my opinion, seems to be on less impactful aspects of an otherwise interesting paper. I've adjusted my rating accordingly.
---
Reply to Comment 1.1.1:
Title: Follow-up Response
Comment: We sincerely thank you for your positive comments and the suggestion that helps refine the quality of this paper.
We will surely include the pseudo-label generation module with the discussion both here and in the appendix. We hope this could alleviate your concern regarding the balance.
We will also revise the existing discussion on motivation and general setting to maintain clarity, as also suggested in our previous response to your S1.
Besides, it is worth mentioning that we have explored the situations when $\lambda$ takes various other values in Tab.7(b). We will refer to these experiments earlier in the discussion to provide a more comprehensive understanding.
Lastly, we also sincerely thank you for your adjustment on rates. | Summary: This paper proposes two losses for point cloud semantic segmentation. The first one is Entropy Regularization (ER) loss, which makes the pseudo-labels have low entropy and thereby be confident (like one-hot vectors). The other one is Distribution Alignment (DA) loss, which is a KL divergence between the pseudo-labels and the predictions. Extensive experiments show that the combination of these two losses is not only beneficial in a weakly supervised setting but also in a fully supervised setting.
Strengths: 1. The proposed method is extremely simple, but theoretically sounding and effective in practice.
2. Extensive experiments strongly support the benefit of the proposed method.
3. Overall presentation is great. Especially, section 3.2 is notably helpful to understand the working logic behind the proposed method.
Weaknesses: 1. Could the authors provide a visualization of the (1) entropy map, (2) pseudo-label, and (3) KD loss map of the point cloud, and how they change as training proceeds? The figure would be extremely helpful to elucidate how the proposed method works.
2. Actually, it was quite surprising for me that the proposed method brings performance gain even in a fully-supervised setting. I think more discussion is required for this. Also, it would be great if the authors could provide an ablation study of Table 7(a) in a fully-supervised setting. I expect that DA would be much more beneficial compared to ER in this case, but not sure of course.
3. I understand why the authors refrained from comparing with some super-voxel-based approaches for a fair comparison. However, using a super-voxel partition is one of the widely used settings in 3D semantic segmentation. Hence, to clearly demonstrate the superiority of the proposed method, it would be much better if the authors could provide the performance of ERDA using a super-voxel partition.
4. Some related works are missing.
Joint Learning of 2D-3D Weakly Supervised Semantic Segmentation (NeurIPS 2022)
Box2Mask: Weakly Supervised 3D Semantic Instance Segmentation Using Bounding Boxes (ECCV 2022)
Weakly Supervised 3D Segmentation via Receptive-Driven Pseudo Label Consistency and Structural Consistency (AAAI 2023)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The results of the experiments on 2D semi-supervised semantic segmentation (SSSS) are interesting. I think the less significant improvement is mainly due to the gap between the weakly-supervised 3D semantic segmentation task and the SSSS task. Maybe the proposed ERDA can make benefit from 2D WSSS (using some points or scribbles).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are properly mentioned in the paper (section 5).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely thank you for your positive comments and the overall acknowledgment of our deceptively simple yet effective method. In the following, we address your concerns carefully.
---
### Q1. Visualization of training process.
We thank you for your advice on inspecting the training process through visualization.
For better understanding, we visualize the ER and DA terms, together with the pseudo-labels in **Fig.R4** above, where we find pseudo-labels can capture meaningful estimation for different semantic classes.
Since the KL distance indicates the difference between pseudo-labels and model predictions, we observe that, though the pseudo-labels are similar to the model predictions at an early stage, they diverge from the predictions around complex and cluttered areas as training proceeds. This may show that the pseudo-labels capture additional information that helps model learning.
Moreover, we find pseudo-labels can also provide estimations of different entropy, thus different levels of certainty, for different semantic classes and areas. This may indicate that our pseudo-labels could capture some underlying knowledge of the difficulties of various 3D scenes and classes, which then benefits the model learning.
---
### Q2. Effects of ERDA on fully-supervised learning.
We thank you for your constructive suggestion.
As shown in **Tab.R1** above, we perform more experiments under the fully-supervised setting for a better investigation.
Indeed, we find the empirical results follow your expectation that distribution alignment (DA) shows more benefits than entropy regularization (ER), and it indeed further boosts our performance.
Besides, we may also note that both terms are beneficial compared with the baseline.
Regarding this, we hypothesize that the noise-aware learning of ERDA on pseudo-labels could stabilize fully-supervised learning, considering that ground-truth labels may suffer from the problem of label noise [1,2].
We would also include more discussion regarding full supervision in the paper.
---
### Q3. Comparing with super-voxel methods.
When adapting our method to popular super-voxel methods such as OTOC [3], we find that OTOC requires iterative training, which is both time- and resource-consuming.
We could thus not fully reproduce their results.
Instead, since OTOC is based on a popular voxel-based 3D CNN, Minkowski U-Net, we adapt our method to its baseline (without iterative training). We find that, when using 1% labels, we improve the baseline from 63.4 to 66.8 mIoU by directly adding the proposed ERDA learning, without hyper-parameter search or further adaptation.
Additionally, we would first like to note that our performance has already surpassed the results of popular super-voxel methods in weakly supervised point cloud segmentation, such as OTOC [3], under a fair comparison where no dataset-level meta-knowledge is available, such as on the S3DIS dataset.
For S3DIS, they provide only "1pt" performance, which is 50.1, while ours is 52.0, using a similar 3D CNN baseline.
---
### Q4. Missing related work.
We thank you for your careful read. We also recognize these works are related and would include them in our paper: \
"Joint 2D-3D" proposes a novel learning setting to leverage paired 2D-3D data to improve weakly-supervised learning in both 2D and 3D domains.\
"Box2Mask" proposes to use box annotation as weak supervision for 3D instance segmentation, inspired by hough voting and box clustering.\
"Receptive-Driven" designs three consistency constraints to enhance their pseudo-labels, by utilizing multi-scale consistency, spatial consistency, and semantic consistency.
---
### Q5. Extension to 2D weak-supervised semantic segmentation (WSSS).
We also agree that there could be gaps between the 3D weakly-supervised setting and 2D semi-supervised semantic segmentation (SSSS), especially regarding the form of available labels.
We thank you for your insights and for pointing out a promising future direction about using point-based or scribble-based supervision to make benefits from 2D WSSS.
Considering the similarity in type of supervision, *e.g.* point-based supervision, we would expect our method to be effective as well. We will explore this direction in the future.
---
### References:
[1] Ye et al. Learning with Noisy Labels for Robust Point Cloud Segmentation (ICCV 2021)\
[2] Song et al. Learning from Noisy Labels with Deep Neural Networks: A Survey (TNNLS 2022)\
[3] Liu et al. One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation (CVPR 2021)
---
Rebuttal Comment 1.1:
Comment: Thanks for all your effort.
All of my concerns are appropriately addressed.
I would keep my rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your positive comments as well as your acknowledgment. | Summary: This paper proposes a novel learning strategy to regularize the generated pseudo-labels and narrow the gaps between pseudo-labels and model predictions. It introduces an Entropy Regularization loss and a Distribution Alignment loss for weakly supervised learning in 3D segmentation tasks. The approach can better leverage unlabeled data points and achieves state-of-the-art performance under different settings.
Strengths: 1. The proposed losses can work with other frameworks to consistently boost their performance, and can potentially achieve even better results with future, stronger approaches.
2. The approach can improve many existing approaches by a significant margin.
Weaknesses: I think the motivation and design of the proposed losses needs further analysis.
1. Entropy Regularization loss: does it filter out some high-frequency predictions? How to balance the effects of filtering high-frequency components and removing noise?
2. Distribution Alignment loss:
(a) Does it always improve the performance to encourage the consistency of prediction and pseudo labels?
(b) Are the pseudo labels derived from network predictions? If so, they should be the similar thing. In what cases they tend to be similar and in what cases they are different? This part seems not very clearly presented.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your acknowledgment of the performance gain of our method and its potential to improve future stronger baselines. In the following, we address your concerns carefully.
---
### Q1. Entropy Regularization (ER) on high-frequency predictions.
We would like to mention that, since the ER works on the soft pseudo-labels on a per-point basis, it tends to reduce noise by reducing the occurrence of confusing pseudo-label predictions, *e.g.* predictions with a confidence score around 0.5 in the case of binary classification, rather than spatially smoothing out high-frequency predictions such as areas around scene boundaries and edges.
Although the high-frequency predictions might still be influenced by reducing the level of confusion, we believe that these influences are generally positive. This is because we observed that the edge areas predicted by the model trained with ERDA loss become cleaner and more accurate, as in original Fig.2. We will discuss this more in our paper.
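A toy illustration of the per-point claim above; temperature sharpening stands in here for the effect of minimizing entropy and is an assumption for illustration, not the paper's actual update:

```python
import numpy as np

def pointwise_entropy(probs):
    # probs: (N, C) soft pseudo-labels, one row per point. Entropy is
    # computed independently per point, so lowering it sharpens each
    # distribution without spatially smoothing neighboring points.
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def sharpen(probs, T=0.5):
    # Temperature sharpening: pushes each row toward a one-hot vector.
    p = probs ** (1.0 / T)
    return p / p.sum(axis=1, keepdims=True)
```

Sharpening never raises the per-point entropy, and strictly lowers it for any non-uniform row (e.g. a confusing 0.9/0.1 prediction), while a neighboring point's distribution is left untouched.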
---
### Q2. Questions on Distribution Alignment (DA) term.
Regarding your sub-question **(a)** about whether DA consistently improves, we have performed experiments under different settings to support this, as in Tab.7(a) and Tab.7(b).
In addition, we have also evaluated the effectiveness of DA under fully-supervised setting.
As shown in **Tab.R1** above, we find DA to be beneficial as well. More interestingly, we notice that the DA term brings even larger gains than the ER term, while both terms remain beneficial.
We would like to further explore their relationship under full supervision as a promising future work, but it may be beyond the scope of this paper that focuses on weak supervision.
Regarding your sub-question **(b)**, we would like to clarify that the pseudo-labels are generated based on the features and prototypes that are projected by a projection network from the backbone features, as shown in Fig.3 in Appendix B.
The pseudo-labels and model predictions are thus not necessarily the same or similar.
For example, we offer the visualization of pseudo-labels, model prediction, and ground-truth labels in Fig.7 of Appendix F. We find that pseudo-labels can provide different estimations and diverge from the model predictions on complex and cluttered areas, such as boards on walls.
Additionally, we also provide a visualization of how pseudo-labels evolve as the training proceeds, as in **Fig.R4** above. Since the KL-distance (DA term) indicates the similarity between pseudo-labels and model predictions, we find that pseudo-labels gradually learn to differ from the model predictions on cluttered areas, which could regularize the model learning, e.g., by preventing the model from overfitting.
We hope this could shed some light and we will add more discussion and details regarding the pseudo-label generation as well as its comparison with model prediction in our paper.
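For clarity on how this divergence can be quantified, the per-point KL map underlying **Fig.R4** can be computed as in the following minimal numpy sketch (illustrative only; names and shapes are placeholders, not our exact implementation):

```python
import numpy as np

def kl_per_point(p, q, eps=1e-12):
    """Pointwise KL(p || q) between pseudo-labels p and predictions q.

    p, q: (num_points, num_classes) soft distributions.
    Returns a (num_points,) divergence map that can be visualized to
    see where pseudo-labels diverge from model predictions.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

pseudo = np.array([[0.8, 0.2], [0.5, 0.5]])
pred   = np.array([[0.8, 0.2], [0.9, 0.1]])
div = kl_per_point(pseudo, pred)
# The first point agrees (KL near 0); the second, e.g. a cluttered
# area, diverges and would appear as a bright region in the map.
```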
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the authors' rebuttal. Some of my concerns are addressed, and I keep my score as "borderline accept".
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your positive comments. | Summary: This paper considers the task of weakly supervised 3D scene semantic segmentation, where only a limited number of points in each training scene are given labels. Assuming a baseline system that operates within a pseudo-label paradigm, the paper proposes a new set of regularizing loss terms, that aim to (1) reduce pseudo-label entropy and (2) align the distribution of the pseudo-labels and network predictions. Under a default weighting strategy, these terms simplify into cross entropy from the pseudo-labels to the network predictions. Under a wide variety of experimental settings, the paper shows that incorporating this term leads to improved performance for 3d scene semantic segmentation, regardless of the level of supervision.
Strengths: This is a well-written, clear, and compelling paper on an important topic of interest to the community. While the introduced technique is not terribly complex, its benefits are well-justified, and the paper provides substantial analysis to support its inclusion: investigating how gradients from this loss term behave under different prediction settings, and why those gradients align with desirable properties. Further, the paper provides extensive ablation experiments that experimentally support this analysis, and show that all of its components lead to improved performance on the domains under investigation.
The strongest point for the paper is in its thorough and overwhelmingly positive experimental results. For multiple datasets of 3d scenes (all standard), under multiple levels of supervision, adding this loss to an array of baseline models always improves performance, and outperforms previous state-of-the-art models on competitive benchmarks. Substantial improvements are observed when labels are severely limited, and even under fully supervised settings, including this loss term is helpful. From the presented evidence, it seems likely this term should be widely useful for this task and domain in future work, as it presents robustly strong performance under a myriad of framings and settings.
Weaknesses: My biggest outstanding question is to what extent this technique can offer benefits for other domains? In its formulation, there is nothing specific to 3D scene segmentation, so ostensibly it could be generally useful for other weakly supervised domains that employ pseudo-labels. Some evidence is provided that it can transfer to image segmentation, but it would also be interesting to consider domains like 3D shape segmentation. The initial results (on domains other than 3d scenes) provided by the paper are encouraging, but a more thorough analysis would of course strengthen the paper, and likely dramatically improve the reach/impact of the contribution.
Relatedly, I would like to see more analysis / discussion about the situations in which this term is helpful. Is it always beneficial to include such a term, no matter the domain / task? For instance, I could imagine that when the initial pseudo-labeling mechanism is highly inaccurate, this term might actually be harmful for learning. For 3D scene segmentation, my prior is that pseudo-labeling techniques are largely successful because strong locality cues in this domain can often be used to propagate labels to nearby unlabeled points with a relatively high degree of confidence; so the quality of initial pseudo-labels for 3D scene segmentation might be higher than would be expected for other domains of interest. It would be interesting to consider the effect that the “goodness” of the pseudo-labeling mechanism has on the final model performance, which could potentially be evaluated in a synthetically designed experiment that introduces “corruption” (at varying levels) into the distributions produced by the pseudo-labeling network. A deeper understanding of how the various components of the system interact with the added loss terms would be beneficial, and may give insight into what other domains and systems may benefit from the insights this paper provides.
Minor:
The formatting of Table 2 can be improved. The red highlights are distracting and largely unneeded, as they overstate information. Consider replacing the red text coloring with italics, or better yet, marking only the columns where the added loss term does not improve the baseline.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Perhaps the most surprising result in the paper is that the method improves baselines even under full label supervision. While I don’t doubt that the trend is “real”, as the experimentation seems robust and well-designed, I was not quite satisfied by the explanation given on lines 250-254. Is this explanation claiming that the “gt” labels have noise, so using ERDA, which is “noise-aware”, can help regulate and remove the noise present in them? If so, this seems like a testable hypothesis (e.g. analyzing differences between pseudo-label predictions and gt label predictions). While I don’t think it’s required to have a complete explanation for this phenomenon, the paper should either clarify the explanation here, or simply say that it is unknown why ERDA offers benefits in this paradigm, and that fully understanding it would require further study.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: "Limited" limitations are given, see weaknesses section as to other potential limitations that should be explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your acknowledgment of both our theoretical and empirical analysis, as well as the potential broader impact. In the following, we address your concerns carefully.
---
### Q1. Application to other tasks.
We also agree that our method could be extended to other tasks, such as the suggested 3D shape segmentation. As in **Tab.R2** above, we find that ERDA also yields improvements over the baseline for weakly-supervised shape segmentation.
Along with experiments on semi-supervised image segmentation, these results may indicate that our method can be extended to different domains and tasks.
In the future, we will investigate a more generic formulation of our method with more complete analysis to benefit other tasks like classification and detection.
---
### Q2: Corruption on pseudo-labels and its relation to model performance.
Thanks for the insightful advice to investigate our method from the perspective of noise; we provide related experiments in **Tab.R4** above.
We find that, although corrupted pseudo-labels do affect the model performance, ERDA appears to be relatively robust. In particular, when adding a relatively large noise $\mathcal N(0,0.1)$ to our cosine distances, ERDA is still able to improve the baseline from 59.8 to 65.55 mIoU. Moreover, from the perspective of the final training loss on the segmentation task, model training is barely affected across different noise levels, which may indicate the robustness of ERDA learning.
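For reference, the corruption protocol can be sketched as follows (illustrative numpy code with placeholder shapes and names; our implementation differs in details): Gaussian noise $\mathcal N(0,\sigma)$ is added to the prototype-feature cosine similarities before the softmax that produces soft pseudo-labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupted_pseudo_labels(feats, protos, sigma=0.1):
    """Soft pseudo-labels from cosine similarities to class prototypes,
    with Gaussian corruption N(0, sigma) injected into the similarity
    scores before the softmax (illustrative protocol)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    c = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = f @ c.T                                   # (points, classes)
    sim = sim + rng.normal(0.0, sigma, sim.shape)   # corruption
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

feats = rng.standard_normal((100, 16))   # placeholder point features
protos = rng.standard_normal((13, 16))   # placeholder class prototypes
p = corrupted_pseudo_labels(feats, protos, sigma=0.1)
```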
---
### Q3: Improvement under full supervision and the potential effect of overcoming potential label noise.
We thank the reviewer for the suggestion to analyze the label noise for a better investigation into the benefits of our method for fully-supervised learning.
While we were also surprised to find our method effective under full supervision,
we hypothesize that the ground-truth labels may contain noise that affects model performance, as has been recognized in general tasks as well as in point cloud datasets [1,2].
More concretely, we checked the difference between the generated pseudo-labels and the ground-truth labels by estimating the accuracy of pseudo-labels *w.r.t.* ground-truth labels.
The results show that even the top-5 accuracy of pseudo-labels *w.r.t.* ground-truth labels only reaches 98.6%, not 100%. Since the level of uncertainty is reduced by optimizing with our ERDA loss, this phenomenon partially reveals that the ground-truth labels may contain noise that increases uncertainty and thus affects model training to some extent.
Regarding this, we acknowledge that such divergence between pseudo-labels and ground-truth data is worth further analysis.
Besides, in **Tab.R1** above, we also perform more ablations under full supervision for a deeper understanding. We find that, with full supervision, while both terms are beneficial, the DA term demonstrates larger benefits than the ER term, and we observe a further improvement in performance.
We would like to further explore their relationship and how they are effective under full supervision as a promising future work, but it may be beyond the scope of this paper that focuses on weak supervision.
Lastly, we will revise the paper to include more discussion of the fully-supervised setting.
---
### Q4: Tab.2 formatting.
Thanks very much for your advice. We will revise our table accordingly.
---
### References:
[1] Ye et al. Learning with Noisy Labels for Robust Point Cloud Segmentation (ICCV 2021)\
[2] Song et al. Learning from Noisy Labels with Deep Neural Networks: A Survey (TNNLS 2022)
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed and well-written response. I remain very positive on this paper, and would like to see its inclusion to the conference.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your positive comments and greatly appreciate your acknowledgment. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their time and effort in providing feedback.
Here, we provide more experiments and visualizations for better analysis and understanding of our paper, including Tables **R1**, **R2**, and **R3** and Figure **R4** mentioned below.
Pdf: /pdf/152264468c63757bbae346a2b907f239f3fb5b7d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions, which introduces an Entropy Regularization loss and a Distribution Alignment loss for weakly supervised learning in 3D segmentation tasks, resulting in an ERDA learning strategy.
Strengths: This paper solves an interesting problem, and the experimental results support its conclusions.
Weaknesses: 1. There is some confusion in Figure 1:
a) For (a), there are generally two ways of generating pseudo-labels in self-training weakly supervised methods. One way is to input both the sample and its augmented version into two different or shared-weight networks (with gradient updating); the other is to input them into a student network and a teacher network updated via EMA (with only the student network being updated). I think the author's general description of sparse pseudo-labels in (a) is inadequate, which makes it difficult to establish a connection with (b) and understand the essential differences between them.
b) I think the author's naming of (b) is inappropriate. It only optimizes the pseudo-labels p and predictions q simultaneously, without reflecting the "dense" aspect.
2. I appreciate the author's theoretical analysis of entropy regularization and distribution alignment, as well as the evaluation of different loss combinations via formula derivation. However, when I saw Line 160 and Table 7, I wondered whether the best result is achieved when lambda=1, at which point the ERDA loss simplifies to a single cross-entropy-based loss that optimizes both p and q. My question is: to my knowledge, simply optimizing p with gradient updates at the same time should not bring such a high performance gain; I speculate that the diversity of different perturbations as input enables the model to learn the geometric invariance of features, and the ERDA loss is just the icing on the cake.
3. In Line 207, the author uses a prototypical pseudo-label generation process due to its popularity and simplicity. However, in Table 7(a), the baseline with pseudo-labels already reaches 63.3%. I would like to see the difference between the results of prototypical pseudo-labels and the plainest pseudo-labels.
4. I am a little confused about the author's approach under the "fully" setting. Generally, the PL strategy is applied to unlabeled data, and then prototypes, perturbation, or contrastive learning are leveraged to improve the model's robustness to the unlabeled data. However, in the fully-supervised experiment, the author applies ERDA to labeled data, so how can we talk about pseudo-labeling? There is also no gradient update for p.
5. Table 1 is difficult to understand, and I think this part is the core of the method. Therefore, the author needs to provide a detailed description of it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See section weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are some issues not explained clearly, see Section Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts and we are grateful for your confirmation of the novelty and effectiveness of the proposed method. In the following, we address your concerns carefully.
---
### Q1. Descriptions in Figure 1:
Thanks for your advice.
For your question **a)**, we would like to mention that the concrete pseudo-label generation pipeline we studied falls in the category of prototypical pseudo-labels, which has been adopted with various modifications in point cloud segmentation [1] and other related fields [2,3]. We will revise our descriptions to make them clearer and more accurate.
Regarding your question **b)**, we agree that the connection between our naming and the 'dense' concept is somewhat implicit.
In general, we would like to mention that, with our ERDA learning on pseudo-label generation, the benefits of dense pseudo-labels can be better exploited;
in contrast, existing methods suffer from noise in dense pseudo-labels and require label selection, and thus use only limited sparse pseudo-labels.
We thus highlight the ERDA optimization in Fig.1(b) and only hint at the resulting dense pseudo-labels there, with the corresponding performance comparison in Fig.1(c).
We will revise Figure 1 to better convey the connection between our method and the 'dense' concept.
---
### Q2. Reason for high-performance gain.
We thank you for your acknowledgment of our theoretical analysis.
We would like to first clarify that, as discussed in our paper, optimizing the pseudo-label generation network with a cross-entropy-like loss has the effect of both reducing the entropy of pseudo-labels and maintaining consistency of the label distributions.
We believe the high performance gain is supported by the effective exploitation of ALL unlabeled data due to the more appropriate utilization of pseudo-labels, which has been demonstrated by our experimental results.
While other similar formulations (*e.g.*, when $\lambda\neq1$) could also be promising, their improvements are not as significant as our cross-entropy-like ERDA loss, as in Tab.7(b).
Secondly, we would also like to mention that we follow the training of baseline and do not impose any specifically designed perturbation or weak-to-strong consistency.
Lastly, we also demonstrated that the performance would be worsened without using our method when generating dense pseudo-labels, as in Fig.1(c) and Tab.7(c).
This may also suggest that the diversity of different perturbations would not be helpful and even harmful if not using our ERDA loss.
As a result, we believe our method is far more than icing on the cake.
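As a side note on the $\lambda=1$ simplification discussed above, the identity behind it, $H(p) + \mathrm{KL}(p\|q) = H(p,q)$, i.e., entropy regularization plus distribution alignment equals the cross-entropy from pseudo-labels to predictions, can be checked numerically; the following numpy sketch is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def H(p):        # entropy of pseudo-labels (ER term)
    return -np.sum(p * np.log(p))

def KL(p, q):    # distribution alignment (DA term)
    return np.sum(p * np.log(p / q))

def CE(p, q):    # cross-entropy from pseudo-labels p to predictions q
    return -np.sum(p * np.log(q))

p = rng.dirichlet(np.ones(13))   # random soft pseudo-label
q = rng.dirichlet(np.ones(13))   # random model prediction
erda = H(p) + 1.0 * KL(p, q)     # ERDA loss with lambda = 1
# The sum collapses exactly to the cross-entropy H(p, q).
assert np.isclose(erda, CE(p, q))
```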
---
### Q3. The difference between prototypical pseudo-labels and plainest pseudo-labels.
By removing the momentum prototype, we further explore the effect of our plainest pseudo-label generation and achieve 62.3 mIoU, which is not significantly different from the 63.3 mIoU achieved with the momentum prototype in Tab.7(a).
Besides, we may refer to Zhang et al. [1]. Our pseudo-label generation is closely related to theirs: they adopt classic prototypical pseudo-labels, which can be viewed as the "plainest" prototypical pseudo-labels with no momentum, and achieve 61.8 mIoU. More concretely, in Fig.1(c), we have included [1] for comparison with our prototypical pseudo-labels (blue), and it does not yield a significant difference.
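For completeness, the difference between the two variants can be sketched as follows (illustrative numpy code; the momentum value and shapes are placeholders, not our exact configuration): the "plainest" prototype is simply the current mean of labeled class features, while the momentum prototype maintains an exponential moving average of those means.

```python
import numpy as np

def update_prototype(proto, class_feats, momentum=0.999):
    """One update step for a class prototype.

    momentum = 0       -> 'plainest' prototype (current batch mean)
    momentum near 1    -> momentum prototype (EMA of batch means)
    """
    batch_mean = class_feats.mean(axis=0)
    return momentum * proto + (1.0 - momentum) * batch_mean

rng = np.random.default_rng(0)
proto = np.zeros(16)                       # running prototype
feats = rng.standard_normal((32, 16))      # labeled features of a class
plain = update_prototype(proto, feats, momentum=0.0)
ema   = update_prototype(proto, feats, momentum=0.999)
```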
---
### Q4. Improvement of ERDA in fully-supervised setting.
We thank you for pointing this out. We would like to first clarify that, in the fully-supervised setting, we generate pseudo-labels for all points and regard the ERDA loss as an auxiliary loss for fully-supervised learning (so there do exist updates on p).
We would also like to mention that the promising improvement brought by ERDA in the fully-supervised setting is truly a surprising benefit, as we designed ERDA primarily to improve learning on unlabeled data. Regarding this, we hypothesize that the ERDA on pseudo-labels can help stabilize fully-supervised learning. Considering that there could be noise in the ground-truth labels [4,5], pseudo-labels from ERDA learning may provide unexpected benefits.
To answer this question better, we have also performed more ablations in fully-supervised setting in **Tab.R1** above.
Interestingly, we find that distribution alignment (DA) shows more benefits than entropy regularization (ER), though both terms are beneficial.
We would like to further explore their relationship under full supervision as a promising future work, but it may be beyond the scope of this paper that focuses on weak supervision.
We will also include more description and discussion of the fully-supervised setting in the paper.
---
### Q5. Better Tab.1 discussion.
We agree that Tab.1 is at the core of our method and thus dedicate Sec 3.2 to its discussion. We also realize that prolonged formulas may hinder readability. In addition to the visualization of Tab.1 offered in Appendix C, we will revise the paper to facilitate easier and better understanding; for example, we can include some of the visualizations of Tab.1 in the main paper, as in Appendix C.
---
### References:
[1] Zhang et al. Weakly supervised semantic segmentation for large-scale point cloud (AAAI 2021) \
[2] Zhang et al. Semisupervised Momentum Prototype Network for Gearbox Fault Diagnosis Under Limited Labeled Samples (TII 2022) \
[3] Li et al. MoPro: Webly Supervised Learning with Momentum Prototypes (ICLR 2021) \
[4] Ye et al. Learning with Noisy Labels for Robust Point Cloud Segmentation (ICCV 2021) \
[5] Song et al. Learning from Noisy Labels with Deep Neural Networks: A Survey (TNNLS 2022)
---
Rebuttal 2:
Comment: I appreciate the authors' answer, and I keep my original positive score.
---
Rebuttal Comment 2.1:
Comment: We sincerely thank you for your positive comments. | null | null | null | null | null | null |
Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications | Accept (poster) | Summary: This work presents a novel graph mixup strategy, which focuses on synthesizing a 'midpoint' graph between two graphs based on the graph structure-signal product metric space. The authors utilize the Fused Gromov-Wasserstein (FGW) distance to achieve this and also propose a method to accelerate the computation of FGW distance solvers by relaxing the polytope constraint. The effectiveness of the proposed method is verified through experiments on molecule and social network datasets.
Strengths: - The draft is well-written, and the proposed methods are technically sound with theoretical analysis.
- Performance improvements compared to the recent baseline are significant.
Weaknesses: - Efficient computation is a key aspect when deploying augmentation strategies in real-world applications. However, Table 3 is not conclusive to me in its current form. Please provide the computation cost for the vanilla model and other baselines to better understand the tradeoff between performance gain and efficiency.
- Lack of qualitative analysis. It remains somewhat unclear whether the synthetic 'midpoint' graphs possess semantic meaning. For instance, in the case of molecules, specific functional groups determine their properties. If these key groups are missing in the synthetic 'midpoint' graphs, it might mislead the network. To provide readers with more insights regarding the 'midpoint' graphs, including real samples of synthetic graphs is recommended.
- The experiments are currently limited to small-scale graph datasets (1K~4K). To demonstrate the applicability of the proposed method to medium/large-scale graphs, conducting experiments on larger-scale datasets such as the OGB benchmarks is highly recommended.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I noticed that Table 1 shows that the recent state-of-the-art baseline, G-Mixup, performs worse than the vanilla model in many cases. Is there a reason that the authors conjecture for these results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations of their work in the draft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and suggestions. We made every effort to address all the concerns. In the following, we quote your comments and then give our detailed response point-by-point.
> **W1. Please provide the computation cost for the vanilla model and other baselines to better understand the tradeoff between performance gain and efficiency.**
We provide runtime comparisons between our methods and the compared baselines, as well as the time cost for vanilla model training, in the public response. Please check **Q1** in the **public response** for details.
> **W2. To provide readers with more insights regarding the 'midpoint' graphs, including real samples of synthetic graphs is recommended.**
Here we provide two mixup examples of FGWMixup in the attachment PDF file in the **public response** for a more comprehensive qualitative analysis. The subfigures on the left and middle are the original graphs to be mixed up, denoted as G1 and G2, respectively. The subtitle denotes the mixup ratio $\lambda$. The subfigure on the right is the synthetic mixup graph. The node features are one-hot encoded and distinguished by feature ID and corresponding color. In Example 1, we can observe that the mixup graph adopts an overall trident structure and a substructure (marked green) from G1, and adopts several substructures from G2 (marked red), finally formulating a new graph sample that combines properties from both graphs. In Example 2, we can observe that the mixup graph is quite similar to G2, but breaks the two marked edges (indicated by red arrows) and formulates two disconnected subgraphs, which is identical to the overall structure of G1. Moreover, in both examples, we can observe that the preserved substructures are not only topologically alike, but also highly consistent in node features. These examples demonstrate that **FGWMixup can both preserve key topologies and generate semantically meaningful node features**. We will add the qualitative analysis to the Appendix of our paper.
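To make the construction behind these examples concrete, the following toy numpy sketch shows the barycentric update that produces a mixup graph from two inputs. It is illustrative only: it assumes two equal-size, pre-aligned graphs and uses the diagonal coupling $T=I/n$ as a stand-in for the couplings returned by an actual FGW solver; under that alignment the midpoint reduces to a weighted interpolation of adjacency matrices and node features, whereas real couplings handle the node-matching problem.

```python
import numpy as np

def fgw_mixup_midpoint(C1, F1, C2, F2, T1, T2, lam=0.5):
    """Barycentric mixup of two graphs (C: adjacency, F: node features)
    given couplings T1, T2 from the mixup graph to each input graph.
    Follows the standard FGW barycenter update; the couplings here are
    placeholders for an FGW solver's output."""
    n = T1.shape[0]
    w = np.full(n, 1.0 / n)  # uniform node weights of the mixup graph
    C = (lam * T1 @ C1 @ T1.T + (1 - lam) * T2 @ C2 @ T2.T) / np.outer(w, w)
    F = (lam * T1 @ F1 + (1 - lam) * T2 @ F2) / w[:, None]
    return C, F

n = 4
rng = np.random.default_rng(0)
C1 = rng.integers(0, 2, (n, n)).astype(float); C1 = (C1 + C1.T) / 2
C2 = rng.integers(0, 2, (n, n)).astype(float); C2 = (C2 + C2.T) / 2
F1, F2 = rng.standard_normal((n, 3)), rng.standard_normal((n, 3))
T = np.eye(n) / n   # pre-aligned graphs: diagonal coupling
C_mix, F_mix = fgw_mixup_midpoint(C1, F1, C2, F2, T, T, lam=0.5)
# With aligned nodes the midpoint is the plain average of both graphs.
```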
> **W3. To demonstrate the applicability of the proposed method to medium/large-scale graphs, conducting experiments on more large-scale datasets such as the OGB benchmark datasets is highly required.**
This paper focuses on data augmentation, which mainly tackles the circumstances of data insufficiency. Therefore, it is more persuasive to observe results on small datasets. Furthermore, in former works [17, 18], datasets from TUDataset are widely adopted as the benchmark. Hence, we follow their settings and present the results on those datasets as the main results. However, we also have considered validating the effectiveness of our method on larger datasets, and **we have reported experimental results on large OGB datasets with 40K+ samples** (see Appendix E.2), and our methods also outperform all baselines.
> **Q1. The recent state-of-the-art baseline, G-Mixup, performs worse than the vanilla model in many cases. Is there a reason that the authors conjecture for these results?**
On the one hand, **G-Mixup does not consider the joint modeling of graph structures and node features**. As we discussed in the Intro, ignoring the interaction between the graph structure and signal spaces may degrade the quality of graph data augmentation.
Another reason lies in that **the node matching problem remains unsolved using the linear interpolation strategy** introduced in G-Mixup to mix up two graphons. When calculating graphons, G-Mixup only focuses on aligning node distributions of graphs from the same class, but ignores the alignment across different classes. Therefore, the graphons of different classes are not ensured with an aligned node distribution. Thus, simply conducting the Euclidean addition operation on two unaligned graphons is not appropriate.
The two factors together may lead to a meaningless mixup graphon and probably introduce noises to the dataset, thus leading to performance decay.
---
Rebuttal Comment 1.1:
Title: Acknowledgement to the authors' rebuttal
Comment: I appreciate the authors for their response. After carefully checking the authors' rebuttal and considering the comments from other reviewers, I'm pleased to note that most of my concerns have been well addressed. I agree with the authors' claim regarding the computation cost. As a result, I raise my score from 4 to 5. For the final revision, I kindly request the inclusion of further analysis regarding W2.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer Ake3
Comment: We would like to express our sincere gratitude for your thoughtful reviews and for considering our rebuttal. We will certainly include the further analyses on time complexity, mixup quality, etc. in our final version.
Thank you once again for your time, effort, and the positive assessment. Your comments have been invaluable in refining our research. | Summary: Authors study the problem of graph data augmentation for graph-level classifications. They propose a mix-up strategy based on the computation of Fused Gromov-Wasserstein(FGW) “mid-point” (or barycenter) between a pair of graphs from the training dataset. To solve these optimization problems, they adapt a recent single-loop GW solver [28] to the FGW setting (i.e including node features in the OT problem) and provide a theoretical analysis of the resulting algorithm. Finally, they benchmark their novel mix-up strategy against SOTA mix-up approaches on five real-world datasets using 4 different GNN architectures and show that their method achieves better performances than its competitors.
Strengths: - Authors propose an interesting mix-up strategy capable of incorporating both structure and feature information via the FGW distance.
- They investigate two FGW solvers to estimate the inherent soft (attributed) graph matching problems in a novel task of graph dataset augmentation which is also interesting for the active optimization literature on (F)GW.
- They partially extend the analysis done in [28] for GW to FGW.
- They benchmark their FGWmixup against various SOTA mix-up strategies with 4 different backbones on 5 relatively small real-world datasets (small graphs, around 1k to 4k samples). The supplementary material also contains convincing experiments on the ogbg-molhiv dataset containing 40k+ samples, and a study of the robustness of FGWmixup to label corruption.
- They study different mix-up strategies w.r.t. the size of the FGW barycenters, with pairwise (local) and global strategies (e.g. proportional to the median size in the dataset)
- Overall I find the paper well-written
Weaknesses: - 1. *[addressed by authors]* **The FGWmixup is badly positioned with respect to the G-mixup** and the current literature on GW (resp. FGW) for the estimation of graphons (resp. attributed graphons):
From my understanding, the G-mixup [18] strategy comes down to estimating one graphon per class, then performing mix-up by sampling a graph from a linear interpolation between a pair of these graphons. [A] actually showed that a GW barycenter estimates graphons better than all methods considered in the G-mixup paper. As discussed in [B], we can consider that an analogous result holds for estimating attributed graphons using FGW barycenters. These observations highlight the following weaknesses / missing points in the authors' overall reasoning as currently illustrated in the paper:
- a) G-mixup should be benchmarked using GW solvers which correspond to the FGW solvers studied by the authors. Note that [A] rather studied the Proximal Point solver introduced in [C], which seems to lead to lower performances on several tasks than the single-loop solver in [28] which coincides with your FGWmixup*.
- b) The extension of the G-mixup strategy using FGW instead of GW should also be discussed and benchmarked. In the same spirit, an ablation study with $\alpha = 1$ should be considered for the FGWmixup. Basically, I would expect your paper to identify which is better between a global mix-up strategy (G-mixup) and a pairwise one (FGWmixup) for data augmentation. From my point of view, the first one is favorably biased by a class (as a unique barycenter represents a class), but in the meantime the generated graphs are poorly representative of the graph dataset manifold, as only one graphon is taken per class. Whereas your FGWmixup is not really biased by a class (potentially too local), but the generated graphs are intrinsically more representative of the input graphs.
- 2. *[partially addressed by authors; the effect of removing the marginals' cost is still unclear]* The single-loop solver for FGW problems used in Algorithm 2 should be better positioned w.r.t. the one in [28] for GW. It is an exact adaptation of [28] adding the linear OT cost from node features. Moreover, you propose to solve an equivalent problem in which you remove the fixed terms on the marginals. This should be made clear in the paper.
- a) As you remove fixed terms on the marginals, you solve an equivalent problem to get an OT solution. De facto, equation 4 is wrong, as you do not consider exactly the gradient of FGW. Please make this clearer in the paper (maybe via different notations for the two objective functions).
- b) In the spirit of [28, Table 7], the feasibility error, i.e., the errors on the marginals, should be benchmarked across the solvers used. I’m worried that removing the marginal terms from the FGW cost makes this feasibility error worse. This analysis should also be completed with an analysis of the estimated FGW distances across the FGW solvers used, over a dataset considered in the mix-up experiments.
- c) The theoretical results, especially proposition 2, are fairly easily proven by following [28]. This reduces the credit we can give to your theoretical contributions.
- 3. [*partially addressed by the authors; the extension to full learning of the 'midpoint' distribution requires more in-depth analysis and will be the subject of future work.*] Regarding the form, there are still some points that can be easily improved:
- a) L93: $\mu$ is not a probability measure, but a probability vector. Also, L95: references for ‘normalized’ degree distributions?
- b) FGW (semi-)metric properties hold on the space of metric spaces quotiented by the notion of strong-isomorphism, like graphs depicted by distance matrices and invariant to permutations. Otherwise, following [D] it can be extended to any graph representations but w.r.t the notion of weak-isomorphism, hence FGW rather defines a pseudo-metric.
- c) I find the optimization parts confusing and heavy to read. I believe that their readability could be clearly improved by referring efficiently to specific steps in the algorithm and/or specific updates, instead of the confusing nested loop vocabulary.
- d) I got confused by Equation 2 being a nested bi-level optimization problem; could you further elaborate? In my opinion it is simpler than that, knowing that $\tilde{A}$ and $\tilde{X}$ are not involved in the coupling constraints.
- e) Equation 2: In practice it seems that you do not consider learning the masses of the barycenter, which removes a degree of freedom from your model. Learning those has actually been shown to be beneficial for many graph representation learning tasks (see [51, 52, 54]). Could you further justify your choice, and I guess address this as a current limitation of your work? (Considering that it is not difficult to deduce a subgradient for the masses when using Bregman-projection-based solvers.)
- f) I believe that the paragraph ‘Effects of Mixup Graph Sizes’ could also take into consideration [E] where graph size also is a current limitation and does not exactly fall into the mentioned problem of graph size generalization. I believe that the latter problem mostly focuses on the question: can we generalize to big graphs when learning on small graphs ? (maybe I’m wrong)
[A] Xu, Hongteng, et al. "Learning graphons via structured gromov-wasserstein barycenters." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 12. 2021.
[B] Xu, Hongteng, et al. "Learning Graphon Autoencoders for Generative Graph Modeling." arXiv preprint arXiv:2105.14244 (2021).
[C] Xu, Hongteng, et al. "Gromov-wasserstein learning for graph matching and node embedding." International conference on machine learning. PMLR, 2019.
[D] Chowdhury, Samir, and Facundo Mémoli. "The gromov–wasserstein distance between networks and stable network invariants." Information and Inference: A Journal of the IMA 8.4 (2019): 757-787.
[E] Brogat-Motte, Luc, et al. "Learning to predict graphs with fused Gromov-Wasserstein barycenters." International Conference on Machine Learning. PMLR, 2022.
*Update after rebuttal : The authors have fully or partially addressed most of my concerns through their rebuttals and discussions. Considering that the authors have undertaken to amend the paper and supplementary material accordingly, I increase my grade from 4 (borderline reject) to 6 (weak accept).*
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I refer the authors to the weaknesses' list above for first suggestions and questions. Some further questions/suggestions to potentially make the paper clearer follow:
Q1. Could you please make explicit the thresholding method used to derive new samples from the FGWmixup strategy?
Q2. Could you report an ablation study with respect to the alpha parameter in FGW for the validated values {0.05, 0.5, 0.95} ?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I think discussing the weaknesses I've mentioned and answering my questions detailed above would help identify and work around the other limitations of their current work. If the authors manage to address these, I will happily increase my rating.
This work has no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and suggestions. We made every effort to address all the concerns. In the following, we give our detailed response point-by-point. W denotes Weakness, and Q denotes Questions.
- **W1:**
- a) We want to point out the fact that G-Mixup does not apply GW metrics to estimate the graphons (as they introduce in Table 1 and Table 6 of their paper [18]). In our implementation, we select the USVT method as the graphon estimator for G-Mixup. Indeed, it is also practical to apply the GW metric for graphon estimation. We also provide the experimental results of G-Mixup+GW and G-Mixup+GW* (single-loop solver) on PROTEINS in Table 1 of the PDF file from the **public response**. No matter which metric is applied for graphon estimation, we want to emphasize that G-Mixup does not model the joint distribution of the graph structure and signal spaces, and regards them as two disentangled and independent factors. However, **our main contribution and the greatest advantage compared with G-Mixup come from the joint modeling of graph structure and signal spaces**. This also explains why FGWMixup can outperform it.
- b) Just as the reviewer pointed out, G-Mixup can be extended to attributed graphons using the FGW estimator introduced in [B]. With attributed graphons, G-Mixup can also address the joint modeling problem. However, this is **NOT** the contribution or the core idea of G-Mixup, and we think **the comparison of our method with the original G-Mixup is already sufficient to prove the effectiveness of our core idea – solving the joint modeling problem with FGW**. Yet, like the reviewer, we are also interested in the comparison of sample-wise and class-wise mixup methods. Hence, we conduct experiments with the extended version of G-Mixup with the FGW graphon estimator, and the results are shown in Table 1 of the PDF file from the **public response**. We can observe that FGW does not significantly improve the performance of G-Mixup, which still cannot outperform our methods. The main reason lies in the fact that the node matching problem remains unsolved under the linear interpolation strategy of two graphons introduced in G-Mixup. Though the intra-class node matching has been done with FGW graphon estimation, **the graphons of different classes are not ensured to have an aligned node distribution**. Simply adding two unaligned graphons is inappropriate.
- c) As for the ablation study of $\alpha$=1 for FGWMixup, we will introduce it in our answer to Q2.
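To make the alignment issue concrete, here is a minimal numpy sketch (illustrative only, not code from G-Mixup or from our implementation; the toy graphons and the sampling routine are our own assumptions) of linear interpolation between two step-function graphons whose blocks are permuted relative to each other: the entrywise average erases all block structure, even though the two graphons are isomorphic up to a permutation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(graphon, n):
    """Sample an n-node simple graph from a K x K step-function graphon."""
    K = graphon.shape[0]
    u = rng.integers(0, K, size=n)            # latent block of each node
    probs = graphon[np.ix_(u, u)]             # edge probability for each pair
    A = (rng.random((n, n)) < probs).astype(int)
    A = np.triu(A, 1)                         # keep upper triangle, no self-loops
    return A + A.T                            # symmetrize

# Two toy step-function graphons whose block orderings are NOT aligned.
W1 = np.array([[0.9, 0.1], [0.1, 0.9]])      # assortative
W2 = np.array([[0.1, 0.9], [0.9, 0.1]])      # same structure, blocks permuted

lam = 0.5
W_mix = lam * W1 + (1 - lam) * W2            # entrywise (G-Mixup-style) interpolation

# Because the graphons are unaligned, W_mix is flat 0.5 everywhere:
# all block structure is destroyed before sampling.
A = sample_graph(W_mix, 50)
```

This is exactly the failure mode described above: the two graphons encode the same community structure, yet their unaligned entrywise sum yields a structureless Erdős–Rényi-like model.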
- **W2:**
- a) After removing the constant terms in the FGW equations, we indeed should use another notation for this new optimization objective, and the gradient is calculated on this new objective instead of the original FGW. We will clarify the notation in the paper accordingly.
- b) For the analysis on infeasibility, please refer to **Q2** in the **public response** for details.
- **W3:**
- a) Sorry for the mistake, and we will correct $\mu$ as a probability vector. The normalized degree distribution is introduced in [C] (also [49] in our paper), and we will add this reference where we introduce this distribution.
- b,c) We will clarify the pseudo-metric property of FGW according to [D] in the paper and change the expression of the loop vocabulary to specific steps in the algorithm.
- d) If we substitute Eq.(1) into Eq.(2), we can find that Eq.(2) becomes a nested minimization problem, where the optimization over $\pi$ involves the calculation of $\tilde{X}$ and $\tilde{A}$ (according to Eq.(1)), and the minimization over $\tilde{X}$ and $\tilde{A}$ relies on the optimal $\pi$. A classic paradigm to solve this problem is to iteratively solve the inner and outer optimization based on the local optima [a]. Actually, we think traditional solvers of (F)GW barycenters [24, 26] are based on this paradigm, and Alg.1 in our work is adapted from those solvers.
- e) We have considered whether to learn the masses of nodes ($\mu$) in the mixup graph when designing our method. In fact, both choices are practical. For instance, [26] fixes the $\mu$, while [51, 52, 54] adaptively optimize it. However, in our current version, we find that FGWMixup has already outperformed SOTA methods without adaptively learning $\mu$. Considering that this design does not influence the core contribution of our method and the pages are limited, we have not explored the adaptive $\mu$ version in our current paper and regard this as our future work.
- f) In the paragraph ‘Effects of Mixup Graph Sizes’, we actually do not claim to solve the graph generalization problem, but discuss why a fixed graph size is not a good option in our method. We explain that when we add a large proportion of graphs of a fixed size to the training set, the graph size distribution will be hugely biased and will generalize poorly to the test distribution. Especially for small graph sizes (0.5x Median), as we mentioned in the paper, they will make the distribution peak at small sizes, which will ‘struggle to generalize to larger ones’ and lead to consistently worse performance, as shown in Fig.1.
- **Q1:**
- Our thresholding strategy is introduced in Appendix C. Here we further clarify the concept of ‘density difference’ in this section. The density of a graph G of N nodes is defined as the number of edges / N(N-1), denoted den(G). The density difference between the mixup graph $\tilde{A}$ and the two original graphs $A_1$, $A_2$ is |den($\tilde{A}$) – ($\lambda$ den($A_1$) + (1-$\lambda$) den($A_2$))|, and we seek a threshold for discretizing $\tilde{A}$ that minimizes this density difference.
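The density-difference criterion can be sketched in a few lines of numpy (an illustrative re-implementation under stated assumptions, not the paper's actual code; the grid search over candidate thresholds and all names are our own simplifications):

```python
import numpy as np

def density(A):
    """Density of an adjacency matrix with no self-loops: off-diagonal mass / N(N-1)."""
    n = A.shape[0]
    return (A.sum() - np.trace(A)) / (n * (n - 1))

def threshold_barycenter(A_mix, A1, A2, lam, n_grid=101):
    """Binarize the soft barycenter A_mix at the threshold rho minimizing
    |den(A_mix > rho) - (lam * den(A1) + (1 - lam) * den(A2))|."""
    target = lam * density(A1) + (1 - lam) * density(A2)
    best_rho, best_gap, best_A = 0.0, np.inf, None
    for rho in np.linspace(0.0, 1.0, n_grid):
        A_bin = (A_mix > rho).astype(int)
        np.fill_diagonal(A_bin, 0)
        gap = abs(density(A_bin) - target)
        if gap < best_gap:
            best_rho, best_gap, best_A = rho, gap, A_bin
    return best_A, best_rho

# Toy example: a soft symmetric 4-node barycenter and two binary endpoints.
rng = np.random.default_rng(1)
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
A2 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
S = rng.random((4, 4)); A_mix = (S + S.T) / 2
A_disc, rho = threshold_barycenter(A_mix, A1, A2, lam=0.5)
```

The discretized adjacency matrix inherits the symmetry of the soft barycenter and matches the target density as closely as the grid allows.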
- **Q2:**
- Please refer to our response to **Q1** of Reviewer ZCHq for details. (If invisible, we will add them to the comments below)
[a] Dempe, Stephan, and F. Mefo Kue, Solving discrete linear bilevel optimization problems using the optimal value reformulation, Journal of Global Optimization 68 (2017): 255-277.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for your replies, I consider this initial rebuttal compelling and I will increase my grade. Follows some remarks and questions:
**About your answers addressed to all reviewers**: The running time comparison is indeed important and should be reported in the supplementary material. Overall if I understood correctly authors' experiments I agree with authors on the fact that the higher computational cost of FGW mixup compared to other augmentation methods is not a huge bottleneck as there is a lot of room to improve those and it constitutes one of the main concern of the current FGW literature (better solvers, better initialisations, GPU friendly solvers etc) while performances' improvements are consistent. Moreover, FGWmixup remains a pre-processing for GNN models that require extensive hyper-parameter tuning and validation. Two small questions to clarify this runtime table:
i) For FGWmixup, does it correspond to the total runtime while performing your FGW mixup on CPUs without explicit parallelization over pairs of graphs? ii) Did you estimate the G-mixup + GW graphons using a Block Coordinate Descent? Even in this case, no change is needed, but I am confident that at this scale (PROTEINS or NCI1) a better choice would be SGD+Adam, both in terms of speed and precision, as for GW-based dictionary learning methods.
The analysis of feasibility errors is interesting. Could you please complete it by providing the same marginal error measures as in [28]?
**W1** a-b) I agree with authors' rebuttal. Moreover complementary experiments in the PDF in my opinion are important and should be at least reported in the supplementary material and mentioned in the main paper. It shows that FGW in itself (compared to GW) is not enough and proper interpolation schemes are essential. I believe that the natural extension of G-mixup to G-mixup+FGW exhibits well the limitations of linear interpolation between attributed graphons and even more on attributed graphon estimates (modelling at the core of G-mixup), which would require a much more consequent amount of samples to work well when node features come into play.
**W3** e) Indeed I believe that this extension deserves to be discussed in the sense that it would really allow the mixup graphs to fully leverage FGW abilities providing non-uniform nodes relative importance which would may be well merged with novel graph pooling techniques e.g weighted means or OT-based pooling.
f) I did not want to suggest that you were claiming to solve the graph generalization problem. In FGW-based structured prediction (cf. [E]), the question of predicting graphs with proper sizes is also omnipresent. In both the mixup and [E] contexts, one wants to seek a good understanding of the graph data manifold that includes this notion of graph sizes. So indeed, median sizes and so on seem clearly too limited, and the convex combination of sizes you suggested is a good idea, but the questions of the existence of optimal sizes and whether they coincide with your convex combination remain open and interesting.
Q1. Some thresholding methods were also used e.g in the original paper of FGW to reconstruct shortest path matrices from barycenters, or in the semi-relaxed GW paper for graph completion but perhaps more with a view to reconstruction. To mimic the latter, could you please compare your method benchmarking L2 differences with the following method (detailed only for GW): Denote the GW barycenter $C_{\lambda}$ between $C_1$ and $C_2$ with corresponding OT plans $T_1$ and $T_2$. i) Find best thresholds $\rho_i$ minimising $||C_i - (T_i C_{\lambda} T_i^\top > \rho_i)||$. ii) compute the discrete $\tilde{C}_{\lambda}$ with threshold $\lambda \rho_1 + (1-\lambda) \rho_2$.
Something that bothers me is: if you have to modify the structure of the FGW barycenter so that a given GNN model can read it, why not modify the barycenter node features too, so that the thresholded and raw barycenters are as close as possible in the FGW sense?
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer WT1n
Comment: We genuinely appreciate your kind and timely response, and we address your further concerns point-by-point as follows:
- **Questions about the runtime table:**
- i): Yes, we report the total runtime of FGWMixup on CPUs without parallelization over graph pairs.
- ii): Yes, we still use the block coordinate descent method for a fair comparison.
- **Questions about the feasibility error analysis:**
Sure, we also provide the L2 norm of the differences between the row and column marginals given by the two solvers on PROTEINS.
| Row marginal L2 diff | Column marginal L2 diff |
|-------------|-------------|
| 1.478e-9(4.448e-9) | 0.0016(0.0023) |
From the results, we can also observe that the differences between the marginals are quite small, demonstrating that the single-loop solver will not lead to huge infeasibility.
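The measure reported in the table above is generic: for any coupling $T$ with target marginals $(\mu, \nu)$, the feasibility errors are the L2 norms of the marginal residuals. A minimal numpy sketch (illustrative only, not the solvers' actual code; the function name is hypothetical):

```python
import numpy as np

def marginal_errors(T, mu, nu):
    """L2 feasibility errors of a coupling T w.r.t. target marginals (mu, nu)."""
    row_err = np.linalg.norm(T.sum(axis=1) - mu)  # error on the first marginal
    col_err = np.linalg.norm(T.sum(axis=0) - nu)  # error on the second marginal
    return row_err, col_err

# Toy coupling between uniform marginals on 3 and 4 points.
mu = np.full(3, 1 / 3)
nu = np.full(4, 1 / 4)
T = np.outer(mu, nu)                # the independent coupling is exactly feasible
row_err, col_err = marginal_errors(T, mu, nu)
```

An exactly feasible coupling gives errors at machine precision; any perturbation of $T$ shows up directly in these two numbers, which is what the table quantifies for the single-loop solver.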
- **W1 a-b)** We are happy that the complementary results and our explanation helps. We will certainly add them to our paper to better clarify that a proper interpolation scheme is essential for graph augmentation.
- **W3 e)** We have attempted to add the extension of adaptive optimization of $\mu$. However, it is not that practical to incorporate this design directly into our current Block Coordinate Descent optimization framework. There are two main reasons: 1) the optimization w.r.t. $\mu$ is constrained to the simplex and does not have an analytic solution as X and A do. Hence, the optimization requires extra Proximal Gradient Descent iterations in the outer loop, making the time complexity significantly higher. 2) In graph dictionary learning [51], $\mu$ is adaptively learned from the whole dataset with sufficient samples, whereas in the mixup scenario, $\mu$ can only be adjusted through two samples. This design may dramatically enlarge the degrees of freedom when solving the mixup problem and probably lead to unstable solutions with insufficient samples. Furthermore, [54] does not adaptively learn $\mu$, as they have relaxed the simplex constraints on the target marginal. [52] learns $\mu$ through supervision from downstream tasks, which is not applicable for data preprocessing. In a nutshell, we have not yet discovered a proper method to accomplish the adaptive optimization of $\mu$. Though most previous works [24, 26] assume a known and fixed $\mu$ when solving the barycenter, we still believe it is indeed a valuable question to be explored. We will do more research on this topic in future work.
- **W3 f)** Sorry for our misunderstanding. This indeed is an interesting question. We think the existence of optimal sizes hugely relies on the generation process of the graphs. Only by assuming that graphs are generated from a stationary stochastic process, the optimal sizes can be estimated with proper prior and sufficient data. However, estimating the optimal size for a generic random graph process can be extremely hard, which is not the main focus of this work. On the other hand, our strategy has been validated as effective so far, and we will explore the optimal graph size problem in the future.
- **Q1.**
1) We guess there might be some typos in the thresholding method that you introduced. The symbol $\rho_i$ is designed to threshold $T_i C_\lambda T_i^{\top}$, which might be inappropriate to discretize $C_\lambda$. Moreover, $C_i$ and $T_i C_\lambda T_i^{\top}$ seem not to be in the same metric space. We also surveyed the two thresholding methods that you mentioned [24, 54]. We use the one in [54] to compare with ours and benchmark on the NCI1 dataset with 500 mixup pairs. The following metrics are reported: 1) averaged L2 of the difference matrix (L2), 2) averaged percentage of non-zero entries in the difference matrix (non-zero%), and 3) percentage of identical matrices given by two methods (identical%).
| L2 | Non-zero\% | Identical\% |
|-------|-----------|-------------|
| 1.925 | 0.69% | 40.6% |
We can observe that the two thresholding methods present quite similar results. Hence, we think that our thresholding method is empirically reasonable and effective.
2) Like the original FGW paper, the intention of thresholding is to make the structural cost matrix a discrete graph adjacency matrix that can be read by GNNs. However, current node features can already be read by GNNs, thus we only use the threshold discretization to approximate the graph structure without changing the node feature.
We appreciate the motivating insight into making the thresholding stricter in the FGW sense. It is true that modifying node features based on the current discrete structure can make it closer to the ideal optimum of the FGW barycenter. Yet, this may incorporate extra rounds of optimization on node features with more computation costs. We will consider further optimizing this thresholding method in our future work. But this does not affect our contribution to the core problem we addressed, that is, realizing joint modeling in graph augmentation. | Summary: This paper proposes a new graph data augmentation method for graph-level classifications. To address the limitation of existing methods, the authors consider the joint interaction between the graph structure and node features by finding an optimal inter-graph node matching strategy. Furthermore, the authors introduce a relaxed FGW solver to accelerate the proposed method FGWMixup. Experiments show that the FGWMixup outperforms multiple baselines on five datasets using different GNN backbones.
Strengths: Overall the paper is well-written and easy to read.
Using the Fused Gromov-Wasserstein distance metric space seems to be an interesting solution for mixing graphs.
The authors provide theoretical analysis for accelerating FGWMixup by improving the convergence rate.
Weaknesses: My major concern is about the claim that most existing graph data augmentation methods only consider one of the graph structure space and node feature space. There exist many graph augmentation methods (adversarial augmentation, local graph augmentation, automated graph augmentation and etc). Many of them change both graph structure and node features to generate augmented graphs. I don't think this work is the only one considering the joint modeling problem.
Besides, the authors claim the importance of the interaction between graph structure space and node features space. I think more explanations/analyses are needed to show that augmented graphs by FGWMixup can preserve key topologies of the original graphs and have semantically meaningful node features simultaneously.
For the experiments, the improvement in performance is not significant compared to the high std. I would suggest the authors include experiments on the OGB benchmark (https://ogb.stanford.edu/docs/graphprop/), since the std on OGB is typically small compared to TUDataset. Furthermore, except for some graph mixup methods, the authors only compare with DropEdge and DropNode. It would make the experimental results more convincing if the authors could include some advanced graph data augmentation methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There are multiple hyperparameters in the proposed FGWMixup, ex. $\alpha$ in equation 1. How to choose the hyperparameters? How sensitive is the performance gain to the hyperparameter tuning?
The authors provide experiments about the running time between FGWMixup and FGWMixup*. How does FGWMixup compare to the other baselines? Does the improvement of the performance come from a much higher computational cost?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes. The authors have discussed the limitation of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and suggestions. We made every effort to address all the concerns. In the following, we quote your comments and then give our detailed response point-by-point.
> **W1. Many existing works change both graph structure and node features to generate augmented graphs. I don't think this work is the only one considering the joint modeling problem.**
We want to emphasize that **we do NOT claim that existing methods only consider one of the graph structure and the node feature**, but express that **most existing works regard them as two disentangled perspectives and consider the augmentation on the two parts independently (separately)**. This means most works do not consider the effects of node features when generating new graph structures, and vice versa. We provide two examples (G-Mixup, ifMixup) in the Intro, and more are listed in the Related Works. However, as we mentioned in L.45-46 in the Intro, the graph structures and node features are correlated with each other, and it is essential to depict this correlation while generating new graph samples. Hence, we propose to consider the joint modeling problem to enhance the quality of graph augmentation.
> **W2. More explanations/analyses are needed to show that augmented graphs by FGWMixup can preserve key topologies of the original graphs and have semantically meaningful node features simultaneously.**
We have presented two mixup examples in the attachment PDF file in the **public response**. In Example 1, we can observe that the mixup graph adopts an overall trident structure and a substructure (marked green) from G1, and adopts several substructures from G2 (marked red), finally formulating a new graph sample combined properties from both graphs. In Example 2, we can observe that the mixup graph is quite similar to G2, but breaks the connection of two marked (red arrow pointed) edges and formulates two disconnected subgraphs, which is identical to the overall structure of G1. Moreover, in both examples, we can observe that the preserved substructures are not only topologically alike, but also highly consistent in node features. These examples demonstrate that **FGWMixup can both preserve key topologies and generate semantically meaningful node features**. We will add this analysis to the Appendix of our paper.
> **W3. It would be more convincing to include larger datasets and more data augmentation baselines.**
This paper focuses on data augmentation, which mainly tackles the circumstances of data insufficiency. Therefore, it is more persuasive to observe results on small datasets. Furthermore, in former works [17, 18], datasets from TUDataset are widely adopted as the benchmark. Hence, we follow their settings and present the results on those datasets as the main results. However, we also have considered validating the effectiveness of our method on larger datasets, and **we have reported experimental results on large OGB datasets with 40K+ samples** (see Appendix E.2) and our methods also outperform all baselines.
For the compared baselines, we think mixup-based methods are the state of the art in graph augmentation. Former works such as ifMixup and G-Mixup have included some other graph augmentation methods as baselines (e.g., Subgraph, NodeAttrMasking) and have proven their superiority to those methods. Therefore, we think it is sufficiently convincing to adopt SOTA mixup methods as the compared baselines.
> **Q1. How to choose the hyperparameters? How sensitive is the performance gain to the hyperparameter tuning?**
As we introduced in Appendix D.4. L674, the hyperparameters are selected by grid search on validation sets. We have provided the analysis on hyperparameters such as graph sizes and GNN depths in Section 3.3. Here we additionally provide the sensitivity analysis w.r.t. $\alpha$ valued from {0.05, 0.5, 0.95, 1.0}. Note that $\alpha =1.0$ falls back to the case of GW metric where node features are not incorporated.
Results on PROTEINS:
| $\alpha$ | GIN (FGWMixup) | GCN (FGWMixup) | GIN (FGWMixup*) | GCN (FGWMixup*) |
|----------|----------------|----------------|-----------------|-----------------|
| 0.95 | 0.7502(0.0386) | **0.7601(0.0319)** | **0.7520(0.0330)** | **0.7520(0.0303)** |
| 0.5 | **0.7538(0.0258)** | 0.7547(0.0356) | 0.7457(0.0330) | 0.7457(0.0352) |
| 0.05 | 0.7486(0.0240) | 0.7493(0.0274) | 0.7439(0.0301) | 0.7484(0.0316) |
| 1.0 | 0.7457(0.0262) | 0.7440(0.0357) | 0.7394(0.0386) | 0.7466(0.0291) |
Results on NCI1:
| $\alpha$ | GIN (FGWMixup) | GCN (FGWMixup) | GIN (FGWMixup*) | GCN (FGWMixup*) |
|----------|----------------|----------------|-----------------|-----------------|
| 0.95 | **0.7832(0.0265)** | **0.7837(0.0240)** | **0.7727(0.0271)** | 0.7847(0.0174) |
| 0.5 | 0.7742(0.0193) | 0.7793(0.0168) | 0.7723(0.0247) | 0.7766(0.0148) |
| 0.05 | 0.7762(0.0237) | 0.7800(0.0100) | 0.7659(0.0214) | **0.7893(0.0191)** |
| 1.0 | 0.7591(0.0293) | 0.7727(0.0092) | 0.7720(0.0169) | 0.7771(0.0197) |
From the results, we can observe that 1) when FGW falls back to GW ($\alpha$ =1), **where node features are no longer taken into account, the performance will significantly decay** (generally the worst among all investigated alpha values). This demonstrates the importance of solving the joint modeling problem in graph mixup tasks. 2) $\alpha$=0.95 is the best setting in most cases. This empirically implies that **it is better to conduct more structural alignment in graph mixup**. In practice, we set $\alpha$ to 0.95 for all of our reported results. We will add this analysis to the paper.
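The role of $\alpha$ can be read off directly from the FGW objective, which (in one common convention, with square loss) is a convex combination of a node-feature cost and a structure cost; at $\alpha = 1$ the feature term vanishes and the objective reduces to pure GW. A minimal numpy sketch (illustrative only, not the paper's solver; the coupling and cost matrices here are toy values):

```python
import numpy as np

def fgw_cost(pi, M, C1, C2, alpha):
    """FGW objective of a coupling pi:
    (1 - alpha) * <M, pi>  +  alpha * sum_{ijkl} (C1[i,k] - C2[j,l])^2 pi[i,j] pi[k,l]."""
    feat = np.sum(M * pi)                                     # linear (Wasserstein) term
    L = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2    # L[i,j,k,l] = (C1_ik - C2_jl)^2
    struct = np.einsum('ij,ijkl,kl->', pi, L, pi)             # quadratic (GW) term
    return (1 - alpha) * feat + alpha * struct

rng = np.random.default_rng(0)
n = 4
C1, C2 = rng.random((n, n)), rng.random((n, n))   # intra-graph structure matrices
M = rng.random((n, n))                            # pairwise node-feature distances
pi = np.full((n, n), 1 / n**2)                    # independent coupling, uniform marginals

gw_only = fgw_cost(pi, M, C1, C2, alpha=1.0)      # node features ignored entirely
fgw_95 = fgw_cost(pi, M, C1, C2, alpha=0.95)      # mostly structural alignment
```

At `alpha=1.0` the value is independent of `M`, which mirrors the ablation above: node features drop out of the matching problem entirely, and performance decays accordingly.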
> **Q2. How does the runtime of FGWMixup compare to the other baselines? Does the improvement of the performance come from a much higher computational cost?**
We provide runtime comparisons between our methods and baselines in the public response. Please check **Q1** in the **public response** for details.
---
Rebuttal 2:
Comment: Dear Reviewer,
The authors have provided a comprehensive response to your review. Could you please confirm if it addresses your concerns? With the rebuttal deadline fast approaching, we would greatly appreciate your feedback before then.
Thank you for your time and consideration.
---
Rebuttal Comment 2.1:
Title: Reply to authors
Comment: Thanks for the rebuttals.
I have some remaining questions.
For the results on ogbg-molhiv dataset, why the performance of the baseline models vGCN and vGIN are much lower than the OGB leaderboard? Based on my previous experiments, ignoring edge features should not cause such a significant performance drop. Besides, in Table 6 of the Appendix, I didn't see a significant reduction in the variance when using vGIN as backbones, I believe the authors made a false claim in lines 706-710.
For the baselines, I agree that previous mixup methods compare with other simple data augmentation methods such as Subgraph, and NodeAttrMasking. However, there is no comparison between mixup methods and other advanced methods such as [1][2].
[1] Kong, Kezhi, et al. "Flag: Adversarial data augmentation for graph neural networks." CVPR 2022.
[2] You, Yuning, et al. "Graph contrastive learning automated." ICML, 2021.
[3] Luo, Youzhi, et al. "Automated data augmentations for graph classification." ICLR 2023.
---
Reply to Comment 2.1.1:
Title: Reply to Reviewer ZCHq
Comment: Thank you for your reply, and we address your further concerns point-by-point as follows:
> **Q1. The difference in the vGIN/vGCN performances between ours and OGB leaderboard.**
Except for ignoring edge features, the difference in the experimental results can be attributed to the experimental environments. In our experiments, vGIN and vGCN are implemented with the dgl library, so the results are not directly comparable to those on the OGB leaderboard, which are obtained with the torch_geometric library. More importantly, the same vanilla model is consistently deployed for every compared graph augmentation method. **We suggest that it is the performance gain of each augmentation method that deserves more attention, rather than the absolute performance of the vanilla model.**
> **Q2. The std reduction of vGIN is not significant.**
Indeed, the std of vanilla vGIN is quite small, but its performance is also relatively low. Thus, we believe that vanilla vGIN is consistently affected by the underlying noise. In contrast, **our method provides a 2.5%+ performance gain with only a 0.01% std increase, which shows that it helps resist such noise and improves model robustness.** Moreover, we find that although the investigated augmentation methods improve the performance of vGIN, they also lead to much higher variance. Our method, however, can **effectively improve the performance while simultaneously keeping the variance at a low level**. From this perspective, we can also conclude that our method is better at guaranteeing model robustness.
> **Q3. Some other baseline methods?**
In response to your query regarding the baseline comparison, we want to assure you that our choice of baselines is consistent with the standard evaluation protocols of previous graph mixup works (such as G-Mixup and ifMixup). Our baselines are selected from those used in previous works, together with the previous mixup methods themselves. Hence, this setup already provides a meaningful assessment of the effectiveness of our method within the existing context.
We acknowledge the articles you mentioned and have taken their comparisons into account. However, not all of the mentioned articles are appropriate baselines. For example, [2] is a graph contrastive learning framework whose negative samples come from augmentation methods (including DropNode, Subgraph, etc., which we have considered or mentioned). We believe our method can be incorporated into their framework, but should NOT be compared against it.
We also want to claim that **the core contribution of this work is a novel graph mixup method that jointly models the interaction between graph signal space and structure space**. We think **it is more necessary and convincing to compare our methods with the SOTA mixup methods to validate our contribution**.
In conclusion, we believe that our approach provides a well-rounded evaluation framework that captures the essence of the problem while maintaining consistency with previous evaluation practices. We hope this explanation clarifies our contribution and its alignment with existing evaluation standards. | Summary: This paper addresses a gap in graph data augmentation for graph-level classifications, where existing methods mainly focus on augmenting graph signal space and graph structure space separately, overlooking their mutual interactions. The authors formulate the issue as an optimal transport problem that considers both graph structures and signals, and propose a novel graph mixup algorithm called FGWMixup. FGWMixup seeks the "midpoint" of source graphs in the Fused Gromov-Wasserstein (FGW) metric space. The authors further introduce a relaxed FGW solver to improve the scalability and performance of FGWMixup, and experimental results across five datasets and various GNN backbones demonstrate its effectiveness in enhancing the generalizability and robustness of GNNs.
Strengths: S1. Innovation in Graph Mixup Method: The paper introduces FGWMixup, a novel graph mixup method that is formulated as an optimal transport problem. This innovative approach aims to find the optimal graph at the "midpoint" of two source graphs in terms of both graph signals and structures. This unique formulation could potentially provide a more effective way to combine graphs compared to traditional methods.
S2. Comprehensive Evaluation: The paper comprehensively evaluates the proposed method on five widely-used graph classification tasks from the graph benchmark dataset. The use of four different types of backbones for the evaluation further demonstrates the versatility and robustness of the FGWMixup method. This extensive evaluation provides strong evidence of the effectiveness of the proposed method across various tasks and settings.
S3. Theoretical Analysis and Optimization: The paper provides a thorough theoretical analysis of the proposed FGWMixup method, including the convergence of the algorithm and the correctness of FGWMixup. This rigorous theoretical foundation strengthens the credibility of the proposed method. Additionally, using an accelerated algorithm to reduce the computational complexity is a commendable effort to address potential efficiency concerns.
Weaknesses: W1. High Time Complexity: The proposed FGWMixup and Accelerated FGWMixup methods have a relatively high time complexity. This could limit their applicability in scenarios where computational resources or time are constrained. The paper would benefit from a more detailed analysis of the time complexity of these methods, including how it scales with the size and complexity of the input graphs.
W2. Lack of Results for Varying Beta Distribution Parameter: The paper needs results showing the impact of changing the beta distribution parameter 'k' on the performance of the proposed methods. This parameter could significantly influence the distribution, and it would be informative to understand how its variation affects the results. This could also provide insights into how to choose the best 'k' for different scenarios.
W3. Effects of Relaxed Projection: The paper could provide more discussion about the potential negative effects due to the relaxed projection. While this approach may improve the algorithm's efficiency, it could also introduce inaccuracies or instability in the results. A deeper exploration of these trade-offs would be beneficial.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1. The algorithm may sacrifice some feasibility due to the relaxed projection. Can you provide some potential effects?
Q2. How does the computational complexity of FGWMixup compare to other graph mixup methods?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not explicitly address the potential limitations of the proposed approach. The authors could test their method on node classification or link prediction tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and suggestions. We made every effort to address all the concerns. In the following, we quote your comments and then give our detailed response point-by-point.
> **W1/Q2. Analysis and comparisons of computational complexity:**
We provide runtime comparisons between our methods and compared baselines as well as the computation complexity analysis in the public response. Please check **Q1** in the **public response** for details.
> **W2. Sensitivity analysis on varying beta distribution parameter $k$:**
Empirically, we follow the $k$ setting in [a], where the mixup method was first proposed. We also provide a sensitivity analysis of the Beta distribution parameter $k$ on the PROTEINS and NCI1 datasets as follows:
Results on PROTEINS:
| k | GIN (FGWMixup) | GCN (FGWMixup)| GIN (FGWMixup*)| GCN (FGWMixup*)|
|-----|----------------|----------------|----------------|-----------------|
| 0.2 | **0.7502(0.0386)** | **0.7601(0.0319)** | 0.7520(0.0330) | 0.7520(0.0303) |
| 0.5 | **0.7502(0.0267)** | 0.7547(0.0312) | 0.7359(0.0238) | 0.7429(0.0462) |
| 1.0 | 0.7493(0.0293) | 0.7565(0.0258) | 0.7386(0.0281) | **0.7538(0.0341)** |
| 2.5 | 0.7439(0.0107) | 0.7430(0.0412) | **0.7610(0.0297)** | 0.7421(0.0452) |
Results on NCI1:
| k | GIN (FGWMixup)| GCN (FGWMixup)| GIN (FGWMixup*)| GCN (FGWMixup*)|
|-----|----------------|----------------|----------------|-----------------|
| 0.2 | **0.7832(0.0265)** | **0.7837(0.0240)** | 0.7727(0.0271) | 0.7847(0.0174) |
| 0.5 | 0.7637(0.0206) | 0.7800(0.0140) | 0.7725(0.0209) | **0.7871(0.0149)** |
| 1.0 | 0.7659(0.0239) | 0.7798(0.0150) | **0.7732(0.0178)** | 0.7810(0.0171) |
| 2.5 | 0.7771(0.0267) | 0.7796(0.0173) | 0.7701(0.0214) | 0.7796(0.0102) |
From the results, we can find that **Beta(0.2, 0.2) is the overall best-performing setting**, although there are a few cases where other settings perform better. In our opinion, different datasets and backbones prefer different optimal settings of $k$, but we should choose the one that performs best overall across various settings.
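For reference, the mixup ratio $\lambda$ is drawn from Beta($k$, $k$); the following stdlib sketch (function name is ours) illustrates why a small $k$ such as 0.2 concentrates $\lambda$ near 0 or 1, keeping each mixed graph close to one of its two sources:

```python
import random

def sample_mixup_ratio(k):
    """Draw the mixup weight lambda ~ Beta(k, k).
    Small k (e.g. 0.2) pushes lambda toward 0 or 1, so the mixed
    graph stays close to one of the two source graphs;
    k = 1.0 makes lambda uniform on [0, 1]."""
    return random.betavariate(k, k)

random.seed(0)
samples = [sample_mixup_ratio(0.2) for _ in range(2000)]
# with k = 0.2, most of the mass sits near the endpoints 0 and 1
frac_extreme = sum(1 for s in samples if s < 0.1 or s > 0.9) / len(samples)
```

The distribution is symmetric around 0.5 for any $k$, so the expected mixing weight of the two source graphs is always equal; $k$ only controls how far individual draws deviate from 0.5.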
> **W3/Q1. More exploration of the infeasibility introduced by the relaxed FGW solver:**
First of all, we conduct an experiment to analyze how much infeasibility or inaccuracy has been introduced by our single-loop FGW solver compared with the strict CG solver. We randomly select 1,000 pairs of graphs from PROTEINS dataset and apply the two solvers to calculate the FGW distance between each pair of graphs. The distances of the $i$-th pair of graphs calculated by the strict solver and the relaxed solver are denoted as $d_i$ and $d^{\*}_i$, respectively. We report the following metrics for comparison: i) MAE (mean absolute error of FGW distance, $\frac{1}{N}\sum |d_i - d^{\*}_i|$), ii) MAPE (mean absolute percentage error of FGW distance, $\frac{1}{N}\sum \frac{|d_i - d^{\*}_i|}{d_i}$), iii) mean FGW distance given by the single-loop solver ($\frac{1}{N}\sum d^{\*}_i$), iv) mean FGW distance given by the strict CG solver ($\frac{1}{N}\sum d_i$), v) L2-norm of the difference between two transportation plan matrices (divided by the size of the matrix for normalization). The results are shown as follows:
| MAE | MAPE | mean_FGW | mean_FGW\* | T_diff |
|----------------|----------------|----------|-----------|-----------------|
| 0.0126(0.0170) | 0.0748(0.1022) | 0.2198 | 0.2143 | 0.0006(0.0010) |
We can observe that the MAPE is only 0.0748, which means the FGW distance estimated by the single-loop relaxed solver differs from that of the strict CG solver by only 7.48%. Moreover, the absolute error is around 0.01, which is quite small compared with the absolute value of the FGW distances (~0.21). We can also find that the L2-norm of the difference between the two transportation plan matrices is only 0.0006, meaning the two solvers give quite similar transportation plans. All these results imply that **the single-loop solver does not introduce substantial infeasibility or make the estimation of the FGW distance inaccurate**.
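For clarity, metrics (i) and (ii) defined above can be computed as in the following stdlib sketch (variable names are ours; illustrative only):

```python
def solver_error_metrics(d_strict, d_relaxed):
    """MAE and MAPE between the FGW distances d_i from the strict CG
    solver and d*_i from the relaxed single-loop solver."""
    n = len(d_strict)
    mae = sum(abs(a - b) for a, b in zip(d_strict, d_relaxed)) / n
    mape = sum(abs(a - b) / a for a, b in zip(d_strict, d_relaxed)) / n
    return mae, mape
```

An MAE and MAPE of zero would mean the relaxed solver reproduces the strict solver's distances exactly; the small values in the table above indicate close agreement.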
From another perspective, we do not think the infeasibility has entirely negative effects. As we have analyzed in our experiments (Table 1), FGWMixup* sometimes even performs better than FGWMixup. We attribute this to the subtle infeasibilities introduced by the relaxed single-loop solver. By incorporating those infeasibilities, FGWMixup* may **occasionally generate more diverse examples, which potentially enlarge the input space and thus bring opportunities to improve the generalizability of GNN models.**
[a] Zhang Hongyi et al., mixup: Beyond Empirical Risk Minimization, ICLR 2018
---
Rebuttal Comment 1.1:
Title: Ack of rebuttal
Comment: I thank the authors for the rebuttal. It addressed most of my concerns and I hope these discussions and remedies can be properly incorporated into the final version. I have raised my overall rating.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer JSkL
Comment: Thanks again for all the constructive suggestions for improving the quality of our work. We will incorporate those discussions and remedies into our final version. | Rebuttal 1:
Rebuttal: Thanks for all the constructive suggestions and comments concerning our paper from five nice reviewers. Here we provide our responses to some questions that are commonly asked by the reviewers.
> **Q1. Can you present the runtime comparison between your methods and compared baselines? Can you further analyze the time complexity of FGWMixup?**
We present the average runtimes of different mixup methods, together with the per-fold training times of the vanilla backbones, on the PROTEINS and NCI1 datasets as follows. As Reviewer WT1n requested, we also include G-Mixup with GW graphon estimators (denoted as G-Mixup+GW) in the table.
| | DropEdge | DropNode | G-Mixup | ifMixup | G-Mixup+GW | FGWMixup | FGWMixup* |
|:--------:|:--------:|:--------:|:-------:|:-------:|:----------:|:--------:|:----------:|
| PROTEINS | 0.192 | 0.229 | 6.34 | 2.08 | 2523.78 | 802.24 | 394.57 |
| NCI1 | 0.736 | 0.810 | 10.31 | 5.67 | 9657.48 | 1711.45 | 637.41 |
| | GCN | GIN | Graphormer | GraphormerGD |
|----------|--------|--------|------------|---------------|
| PROTEINS | 47.13 | 52.68 | 2636.98 | 2371.31 |
| NCI1 | 209.28 | 338.72 | 2701.04 | 6175.99 |
We can observe that FGWMixup(\*) and G-Mixup+GW are slower than the other data augmentation methods. The main reason is that the complexity of calculating (F)GW distances between two graphs is cubic ($O(mn^2+nm^2)$) [a], where $m, n$ are the sizes of two graphs. Moreover, when calculating barycenters, we need an outer loop with T iterations and M graphs. In total, the time complexity of mixing up two graphs of size $n$ is $O(MTn^3)$. FGWMixup\* boosts the efficiency by enhancing the convergence rate and reducing the required iterations T (see Table 5 in Appendix E.1 for more details), whereas G-Mixup+GW will have to go over the whole dataset to calculate graphons, which is much more time-consuming than FGWMixup.
However, trading higher computational cost for better performance has been a recent trend in technical development (e.g., GPT-4). Moreover, **we believe that the current time complexity of FGWMixup\* remains acceptable given the time cost of model training (especially for Graphormers) and our performance improvements**, since most compared graph augmentation methods cannot effectively enhance model performance, as shown in Table 1.
More importantly, in practice, the main computational bottleneck of (F)GW-based methods is the CPU-based OT network-flow solver in the current implementation, which builds on the most widely used POT library. In other words, GPU-based network-flow algorithms have not yet been applied in current computation frameworks, and mini-batch parallelization is not yet deployed in POT. However, recent works [b, c] from NVIDIA have focused on accelerating network-flow algorithms on GPUs, which may well be integrated into CUDA and allow a huge acceleration of GW solvers in the near future. **Hence, we firmly believe that the lack of GPU parallelization of our method is only a temporary problem.** Just as other methods that brought enormous contributions in the past (e.g., MLPs, LSTMs) were initially slow when proposed, they became efficient with subsequent hardware support.
> **Q2. Analysis of infeasibility of the single-loop FGW solver?**
First of all, we conduct an experiment to analyze the infeasibility of our single-loop FGW solver compared with the strict CG solver. We randomly select 1,000 pairs of graphs from PROTEINS dataset and apply the two solvers to calculate the FGW distance between each pair of graphs. The distances of the $i$-th pair of graphs calculated by the strict solver and the relaxed solver are denoted as $d_i$ and $d^{\*}_i$, respectively. We report the following metrics for comparison: i) MAE (mean absolute error of FGW distance, $\frac{1}{N}\sum |d_i - d^{\*}_i|$), ii) MAPE (mean absolute percentage error of FGW distance, $\frac{1}{N}\sum \frac{|d_i - d^{\*}_i|}{d_i}$), iii) mean FGW distance given by the single-loop solver ($\frac{1}{N}\sum d^{\*}_i$), iv) mean FGW distance given by the strict CG solver ($\frac{1}{N}\sum d_i$), v) L2-norm of the difference between two transportation plan matrices (divided by the size of the matrix for normalization). The results are shown as follows:
| MAE | MAPE | mean_FGW | mean_FGW\* | T_diff |
|----------------|----------------|----------|-----------|-----------------|
| 0.0126(0.0170) | 0.0748(0.1022) | 0.2198 | 0.2143 | 0.0006(0.0010) |
We can observe that the MAPE is only 0.0748, which means the FGW distance estimated by the single-loop relaxed solver differs from that of the strict CG solver by only 7.48%. Moreover, the absolute error is around 0.01, which is quite small compared with the absolute value of the FGW distances (~0.21). We can also find that the L2-norm of the difference between the two transportation plan matrices is only 0.0006, meaning the two solvers give quite similar transportation plans. All these results imply that **the single-loop solver does not introduce substantial infeasibility or make the estimation of the FGW distance inaccurate**.
From another perspective, we do not think the infeasibility has entirely negative effects. As analyzed in our experiments (Table 1), FGWMixup* sometimes even performs better than FGWMixup. We attribute this to the subtle infeasibilities introduced by the single-loop solver. By incorporating those infeasibilities, FGWMixup* may **occasionally generate more diverse examples, which potentially enlarge the input space and bring opportunities to improve the generalizability of GNN models.**
[a] Gabriel Peyre et al., Gromov-Wasserstein Averaging of Kernel and Distance Matrices, ICML 2016
[b] https://on-demand.gputechconf.com/gtc/2017/presentation/S7370-hugo-braun-efficient-maximum-flow_algorithm.pdf
[c] https://mate.unipv.it/gualandi/talks/Gualandi_Aussois2020.pdf
Pdf: /pdf/ca231a177f048776936ab7a59556d443c71aae6b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors study a new method "FGWMixup" for graph data augmentation.
For the two input graphs $G_1, G_2$, they propose to construct a synthetic graph $\tilde G$ by optimizing the weighted distance sum (2). To further improve the efficiency of the algorithm, they relax the polytope constraint on the transport matrix $\pi$ (that its row sums and column sums match the source and target distributions) into two alternately enforced constraints (separately on the row sums and column sums); they summarize the new solver in Algorithm 2.
They also study the performance of the proposed method/solver through empirical experiments.
Strengths: - originality:
- While the rough idea of using OT for data augmentation is standard, this paper proposes a different solution and a new relaxed FGW solver, which is novel to me.
- quality:
- The new method/solver they propose does work and can provide comparable performance to existing mixup methods.
- significance:
- This paper provides an OT-based mixup method, which can be a good reference for self-supervised learning on graphs.
Weaknesses: - quality
- Some empirical results may be unconvincing due to the small size of datasets. I leave a related comment below.
- clarity:
- Some concepts are not clearly illustrated. See the questions below. With the unclear illustration, the theoretical justifications for some claims are hard to follow.
- significance:
- Even with the new approximate solver, I suspect the new FGWMixup is much slower than previous mixup methods, which can make the work less attractive to practitioners.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In solving (2), they "fix the node probability distribution of $\tilde G$ with a uniform distribution" (Line 129). It makes sense, while it can be better to add more justification, either empirically or theoretically.
2. The concept of “Bregman projection” in Line 164, 171 is not well explained.
3. The statement of Proposition 1 is confusing.
- "Let $\pi_t$ be the Sinkhorn iterations" is unclear. What's the relationship between $\pi_t$ and the following $\pi_{2t}, \pi_{2t+1}$?
- "Let $\pi^*$ be the unique optimal solution" needs more explanation, optimal w.r.t. what?
- What's the definition of $\mu_t, \nu_t$?
- Does the proposition mean the new solver may not converge to $\pi^*$?
4. The statement of Proposition 2 is also confusing.
- The concept of “FGW function” is not defined.
- The concept of “normal cone” is not well explained.
- Actually the follow-up remark after Prop 2 is not very informative. Please provide more details and explain why the distance bound implies FGWMixup∗ "will converge close to the ideal optimum". What's the scale of $\tau/\rho$?
5. The experiments only involve small datasets. Can the authors add some medium to large datasets, and also report the runtime of different mixup methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and suggestions. We made every effort to address all the concerns. In the following, we quote your comments and then give our detailed response point-by-point. References given as numbers correspond to the reference IDs in our paper; those given as letters are provided at the end of our response.
> **Q1. Justification on the uniform distribution selection of $\tilde{G}$:**
In most cases where we have no prior knowledge of the node importance, works [a, 24] empirically choose a uniform distribution for $\mu$. There are also works [51, 52, 54] that adaptively optimize it, which incorporates another degree of freedom into the model. Both approaches are reasonable and practical. In our current version, we choose the former, and we find that FGWMixup already outperforms SOTA methods with a fixed uniform distribution of $\mu$. Considering that this design does not influence the core contribution of our method, and given the page limits, we have not explored the adaptive-$\mu$ version of FGWMixup in the current paper; we regard it as future work. We will add more explanations and references for this design choice to the paper for better justification.
> **Q2: The concept of "Bregman Projection":**
We introduce the Bregman projection $\phi$ in Appendix A.2, Proposition 3. Specifically, the Bregman projection is a function $\phi()$ used to calculate the Bregman divergence $D_\phi(x,y)$. More concretely, in the Mirror Descent algorithm, due to the geometric constraint of the feasible set $\mathcal{X}$, we map $x$ to the dual space, take a gradient step there, and then map it back to the primal space (illustrated in Fig. 3 in Appendix A.2). The dual space is precisely produced by this Bregman projection, which maps the primal space with $\nabla \phi()$.
> **Q3: Confusions about Proposition 1 - Definition of notations and meaning of the proposition:**
Sorry for the confusion. We clarify the questions as follows:
1) *What's the relationship between $\pi_t$ and $\pi_{2t}, \pi_{2t+1}$?* $\\{ \pi_t \\}$ is the sequence of couplings generated at each update step $t$ of the Sinkhorn iterations. At even-numbered steps ($2t$), we optimize the first marginal $\mu$, and at odd-numbered steps ($2t+1$), we optimize the second marginal $\nu$. This is precisely the alternating update procedure in Alg. 2.
2) *What is $\\pi^{\*}$ optimal w.r.t?* $\pi^*$ is the optimal solution of the FGW distance, which can also be regarded as a function of $\pi$ (i.e., $f(\pi)$), as shown in Appendix A.1.
3) *What is the definition of $(\mu_t, \nu_t)$?* As we mentioned in Prop.1, $\pi \in \Pi(\mu, \nu)$. Analogously, $(\mu_t, \nu_t)$ is the marginals of $\pi_t$.
4) *The meaning of the proposition?* This proposition demonstrates the faster convergence rate of the single-loop solver: the divergences between the optimized marginals $\mu_t, \nu_t$ and the target marginals $\mu, \nu$ decay quadratically in the iteration count ($O(t^{-2})$) under the single-loop solver, whereas the divergence of the strict solver over the joint distribution $\pi$ decays only linearly ($O(t^{-1})$).
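To make the even/odd alternation concrete, here is a minimal pure-Python sketch of entropic-OT Sinkhorn iterations (illustrative only, not our actual FGW solver; names are ours), where each loop pass performs one even-step row rescaling and one odd-step column rescaling:

```python
import math

def sinkhorn(cost, mu, nu, reg, iters=200):
    """Entropic-OT sketch of the alternating updates described above:
    even steps (2t) rescale rows to match the first marginal mu,
    odd steps (2t+1) rescale columns to match the second marginal nu."""
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / reg) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # even step: enforce the first marginal mu
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        # odd step: enforce the second marginal nu
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # transport plan pi_t = diag(u) K diag(v)
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

At any finite $t$, only the most recently enforced marginal is matched exactly; the other is matched approximately, which is exactly the sense in which the iterates $(\mu_t, \nu_t)$ converge to $(\mu, \nu)$.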
> **Q4: Confusions about Proposition 2 - Definition of concepts, meaning of the proposition, and the scale of the upper bound:**
Sorry for the confusion. The explanations are as follows:
1) *What does the FGW function refer to?* The FGW function is introduced in Eq.(1), and it can be regarded as a function of $\pi$ ($f(\pi)$). We have introduced this in Appendix A.1.
2) *The definition of "normal cone"?* The concept of the normal cone is introduced in Definition 1 in Appendix A.2.
3) *What is the scale of $\tau/\rho$ and why will our algorithm "converge close to the ideal optimum"?* Prop. 2 gives an upper bound ($\tau/\rho$) on the distance between the optimum given by our relaxed solver and the ideal optimum. In fact, as introduced in L.557 and L.177, the entropic regularization coefficient $\rho$ is equivalent to $1/\gamma$, where $\gamma$ is the step size of the MD update, and $\tau$ is a fixed constant determined by various factors (see Appendix B.2). Note that the step size of MD can be adjusted arbitrarily: **when we select a sufficiently small step size, the upper bound $\tau/\rho$ becomes arbitrarily small**. This shows that FGWMixup* can converge arbitrarily close to the ideal optimum.
> **Q5: Requirements of larger datasets and runtime of different mixup methods:**
1) This paper focuses on data augmentation, which mainly tackles circumstances of data insufficiency; therefore, results on small datasets are the more persuasive evidence. Furthermore, in prior works [17, 18], datasets from TUDataset are widely adopted as the benchmark, so we follow their settings and present the results on those datasets as the main results. Nevertheless, we have also validated the effectiveness of our method on larger datasets: **we report experimental results on large OGB datasets with 40K+ samples (see Appendix E.2)**, where our methods also outperform all baselines.
2) We present the runtime analysis of different methods in the public response. Please refer to our explanations in **Q1** of the **public response** for details.
[a] Xu Hongteng, et al. Representing graphs via Gromov-Wasserstein factorization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(1): 999-1016.
---
Rebuttal 2:
Comment: Dear Reviewer,
The authors have provided a comprehensive response to your review. Could you please confirm if it addresses your concerns? With the rebuttal deadline fast approaching, we would greatly appreciate your feedback before then.
Thank you for your time and consideration.
---
Rebuttal 3:
Comment: I appreciate the response from the authors. After reading it and the other reviews, I plan to raise my score from 5 to 6.
However, I would like to encourage the authors to clearly indicate the related appendix contents in the main text; I believe some readers would be similarly confused by the current form, like me. Furthermore, I would urge the authors to indicate the limitations in the next revision: 1. the current slow implementation, and 2. we cannot "select a step size as small as possible" in practice due to the time constraint.
---
Rebuttal Comment 3.1:
Title: Reply to Reviewer Cjiv
Comment: We would like to express our genuine gratitude for your thoughtful and detailed reviews and for considering our rebuttal. Sorry again for our missing indications of the related Appendix contents in our main texts. We will make these remedies and supplement the limitations that you have mentioned in our final version.
Thank you once again for your time, efforts, and positive assessments. Your comments have been constructive and helpful in refining our research. | null | null | null | null | null | null |
Optimal approximation using complex-valued neural networks | Accept (poster) | Summary: The paper studies approximation rates of shallow complex-valued neural networks (CVNN) with general non-polyharmonic activation functions. First, the paper establishes upper bounds for the error of approximation of polynomial and general smooth functions by CVNNs. Then, various aspects of the optimality of these bounds are discussed. First, it is shown that these bounds are tight assuming continuous weight selection. Next, a special activation function is presented for which the bound can be improved. Finally, it is shown that for a standard sigmoid-type activation function the rate bound is tight.
Strengths: **Contribution.** The paper seems to be the first paper that establishes optimal approximation rates for CVNNs with general non-polyharmonic activation functions. Previous results were either limited to specific activations or only established the universal approximation property, without convergence rates. Moreover, the paper comprehensively analyzes the optimality of its convergence rates. It proves their tightness assuming a continuous weight selection and shows that faster rates can be achieved if this assumption is dropped and a specially designed activation is used. It should be noted, however, that the CVNN model considered in the paper does not seem to be very important, and that most results of the paper are analogs of existing results for usual real-valued neural networks - see Weaknesses below.
**Quality and clarity.** The paper is very well written. The theorems are clearly stated, sketches of key ideas of the proofs are provided in the main text. Proof details are provided in a large appendix and also seem to be carefully written, though I did not study all of them closely.
Weaknesses: **Questionable significance of the CVNN model.** The usual real-valued neural networks are important both mathematically and practically. They represent simple and natural non-linear models whose significance is well-established. In contrast, CVNNs do not seem to be significant from either perspective. Mathematically, they mix holomorphic linear operations with generic non-linear operations. This combination does not seem to have interesting analytic properties (except for the original observation by Voigtlaender that the condition of non-polynomiality for RVNN gets replaced by non-polyharmonicity for CVNN) or obvious practical or computational meaning. The paper does not explain why CVNNs are important, instead referring to a small number of papers from 2016-2018 where a similar structure was applied.
**Limited conceptual novelty.** The paper generally adapts existing methods and results from the real-valued to the complex-valued setting - admittedly with many extra new twists. The established convergence rate is the same as for the respective real network with doubled real dimension.
**High technicality for a conference.** This is quite a technical paper with 40 pages of proofs in the appendix. It is unlikely that any NeurIPS reviewer properly checks all these, so the paper might be more suitable for a journal with a more comprehensive review process. However, this is only a minor point, since the paper makes a good effort to present key ideas and sketches of proofs already in the main text.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Why specifically is the particular computational structure used in CVNNs (a layer of holomorphic linear operations + a layer of pointwise non-holomorphic non-linear operations + another layer of holomorphic linear operations) important?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback and in particular for the positive comments on the quality and clarity of the paper.
In the following, we individually answer your comments:
> The usual real-valued neural networks are important both mathematically and practically. They represent simple and natural non-linear models whose significance is well-established. In contrast, CVNNs do not seem to be significant from either perspective. Mathematically, they mix holomorphic linear operations with generic non-linear operations. This combination does not seem to have interesting analytic properties (except for the original observation by Voigtlaender that the condition of non-polynomiality for RVNN gets replaced by non-polyharmonicity for CVNN) or obvious practical or computational meaning. The paper does not explain why CVNNs are important, instead referring to a small number of papers from 2016-2018 where a similar structure was applied.
> Why specifically is the particular computational structure used in CVNNs (a layer of holomorphic linear operations + a layer of pointwise non-holomorphic non-linear operations + another layer of holomorphic linear operations) important?
We agree that, compared to RVNNs, CVNNs are less widely used. One reason for this is certainly the limited availability of widespread libraries for implementing and training CVNNs. Nevertheless, there has been sufficient interest in CVNNs that such a library has in fact been implemented (https://github.com/NEGU93/cvnn). We would also like to mention the following more recent papers that find CVNNs to be advantageous for tasks that naturally involve complex numbers. We will include them in the final version of our paper:
- https://doi.org/10.1109/ICASSP39728.2021.9413814
- https://doi.org/10.1007/s11265-022-01793-0
- https://doi.org/10.1002/mrm.28733
- https://doi.org/10.1002/nbm.4312
The intuitive reasons why CVNNs perform better than RVNNs in some application areas are the following: in applications where complex-valued inputs naturally occur, it makes sense to use complex arithmetic in the applied machine learning models as well. In particular, the use of phase-preserving complex activation functions such as the complex cardioid or the modReLU should be mentioned. Note that phase preservation cannot be achieved when non-trivial real activation functions are applied separately to the real and imaginary parts of complex-valued neurons. Likewise, the holomorphic nature of the linearities (i.e., the linearities are linear over $\Bbb{C}$) is crucial for a faithful handling of the phase.
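To make the phase-preservation property concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the bias value used in modReLU is arbitrary). Both activations rescale the modulus of a nonzero input while leaving its phase untouched:

```python
import numpy as np

def modrelu(z, b=-0.5):
    # modReLU: thresholds the modulus, keeps the phase of z.
    r = np.abs(z)
    return np.maximum(r + b, 0.0) * z / np.maximum(r, 1e-12)

def cardioid(z):
    # Complex cardioid: scales the modulus by a phase-dependent
    # factor in [0, 1], again keeping the phase.
    return 0.5 * (1.0 + np.cos(np.angle(z))) * z

z = 2.0 * np.exp(1j * 0.7)  # modulus 2, phase 0.7
for f in (modrelu, cardioid):
    assert np.isclose(np.angle(f(z)), 0.7)  # phase preserved
```

By contrast, applying a real nonlinearity such as ReLU separately to real and imaginary parts generally changes the phase of the input.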
As you have already pointed out, CVNNs exhibit quite interesting mathematical properties, in particular regarding the interplay of the properties of the activation function and the expressivity of the resulting CVNNs. The universal approximation theorem states that shallow CVNNs are universal if and only if the activation function is non-polyharmonic, but it is a priori not clear at all that the property of not being polyharmonic is enough to guarantee *optimal quantitative* approximation rates. Our paper shows that this is the case.
> The paper generally adapts existing methods and results from the real-valued to the complex-valued setting - admittedly with many extra new twists. The established convergence rate is the same as for the respective real network with doubled real dimension.
Our proof indeed required "many extra new twists" compared to the proof of the real-valued case.
There are several novel results and ideas in our paper. Particularly noteworthy is Prop. 2.1, which connects the notion of non-polyharmonicity with the non-vanishing of Wirtinger derivatives at a single point. This is non-trivial and central to our proof. The other proofs, too, are not simply a matter of "replacing $\Bbb{R}$ by $\Bbb{C}$" but rather require substantial changes, mostly because real-valued activation functions are 1D objects, whereas complex activation functions are multi-dimensional (cf. the differences between ODEs and PDEs).
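For the reader's convenience (these are the standard definitions, writing $z = x + iy$), the Wirtinger derivatives in question are

$$\frac{\partial f}{\partial z} = \frac{1}{2}\Big(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\Big), \qquad \frac{\partial f}{\partial \overline{z}} = \frac{1}{2}\Big(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\Big), \qquad \Delta f = 4\,\frac{\partial}{\partial z}\frac{\partial f}{\partial \overline{z}},$$

so a smooth function is polyharmonic exactly when $\Delta^m f \equiv 0$ for some $m \in \mathbb{N}$.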
The established convergence rate is indeed the same as for RVNNs with doubled real dimension. Note, however, that we prove this bound to be optimal (assuming a continuous weight selection) so that a better bound is not to be expected anyways.
> High technicality for a conference. This is quite a technical paper with 40 pages of proofs in the appendix. It is unlikely that any NeurIPS reviewer properly checks all these, so the paper might be more suitable for a journal with a more comprehensive review process. However, this is only a minor point, since the paper makes a good effort to present key ideas and sketches of proofs already in the main text.
Including such a long paper at NeurIPS is not completely unusual. See for instance https://papers.nips.cc/paper_files/paper/2021/hash/c82836ed448c41094025b4a872c5341e-Abstract.html
We are happy that you appreciate our efforts to present the key ideas and sketches of proofs in the main text.
---
Rebuttal Comment 1.1:
Comment:
Thank you for your answers. Your arguments look convincing. I'm increasing my score.
I still have an impression, though, that your setting is a special case of a more general setting that would be more natural. The usual shallow real networks are constructed from layers that obey two kinds of constraints: linear layers obey the linearity constraint, while the pointwise nonlinear layers obey the constraint that the action is pointwise. My understanding is that your setting extends this setup in the following way. First, you group the variables in blocks of two (real/imaginary parts), and then you impose Cauchy-Riemann conditions on the linear layers. In the linear case the Cauchy-Riemann conditions reduce to a pair of linear algebraic conditions in each $2\times 2$ block of the respective matrix. You also replace pointwise nonlinearity by blockwise nonlinearity. Now, my understanding is that your results show that these modifications do not break the classical approximation rate. This suggests that none of the specific modifications that you do (blocking, Cauchy-Riemann, blockwise activation) actually has any effect on the approximation rate. So if you divided the variables into blocks of say three variables rather than two, and imposed some blockwise linear algebraic constraints other than Cauchy-Riemann, and used a suitably non-degenerate blockwise activation, would this also preserve the classical rate?
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive feedback and for increasing the score! We greatly appreciate it.
It is indeed an interesting question whether our results can be embedded into an even more general setting, where arbitrarily big groups of neurons are considered. And indeed, some older works (such as "Quaternionic Neural Networks for Associative Memories" by Teijiro Isokawa, Haruhiko Nishimura, and Nobuyuki Matsui) have for instance considered quaternion-valued neural networks or even so-called Clifford-algebra-valued neural networks (although we are not aware of significant real-world applications of these).
However, when going beyond the setting of complex numbers and quaternions, the considered structures become much less canonical, i.e., it is not clear which algebraic constraints one should impose in the case of, e.g., blocks of three neurons. This is mostly because it is not possible to endow $\mathbb{R}^3$ with an algebraic structure that represents a sensible multiplication (allowing for division). In fact, it is not possible to endow $\mathbb{R}^s$ with such a structure whenever $s \notin \{1,2,4\}$ (these are exactly the cases of the usual real numbers, the complex numbers, and the quaternions). This is known as Frobenius' theorem for real division algebras.
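To make the $s = 4$ case concrete, here is a small sketch (our own illustration of standard material, not from the paper) of the Hamilton product, the multiplication that makes $\mathbb{R}^4$ a real division algebra: it is associative and every nonzero element is invertible, although commutativity is lost.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qinv(q):
    # Every nonzero quaternion is invertible: conjugate / squared norm.
    w, x, y, z = q
    return np.array([w, -x, -y, -z]) / np.dot(q, q)

one = np.array([1., 0., 0., 0.])
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
assert np.allclose(qmul(i, j), -qmul(j, i))  # non-commutative: ij = -ji
assert np.allclose(qmul(i, qinv(i)), one)    # inverses exist
```

No analogous multiplication exists on $\mathbb{R}^3$, which is why the choice of blockwise algebraic constraints becomes non-canonical there.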
Finally, the set of "good" blockwise activation functions would heavily depend on the chosen algebraic constraint. Thus, one should probably first settle the question of universality in these general settings, and then study the question of approximation rates.
We would expect that the reachable approximation rate (under suitable assumptions such as continuity) again agrees with the "expected" rate, but this is not clear at all and would have to be carefully investigated. | Summary: The paper studies approximation error bounds for complex-valued neural networks. The authors rely on several techniques from the work by Mhaskar (1996) and prove several theorems in the paper. I will summarize two important results here.
1. Given a function from $C^k$, an optimal error bound is proved under some conditions on the activation function and the hypothesis of continuous weight selection. Furthermore, the complex-valued neural network achieving this bound has a universal first layer.
2. When the hypothesis of continuous weight selection is dropped, the authors prove that there exists an activation function allowing the complex-valued neural network to achieve a better bound. No optimality is claimed in this case. On the other hand, there is an activation function such that the bound cannot be improved up to logarithmic factors.
Strengths: The novelty of this paper is clear. The paper provides error bounds for complex-valued neural networks using more general activation functions and shows that the optimal bound can be achieved under the hypothesis of continuous weight selection. The results in this paper generalize the results in the previous work [9]. The problem is well-motivated, and the theorems proved are sound. Overall, the paper is well-written, and I enjoyed reading the paper.
Weaknesses: The hypothesis of continuous weight selection is not a practical assumption. The optimal choice of the network weights is discontinuous in general (see the reference below). It would be better if the authors could make this clear in the paper.
Kainen, Paul C., Věra Kůrková, and Andrew Vogt. "Approximation by neural networks is not continuous." Neurocomputing 29, no. 1-3 (1999): 47-56.
Under this assumption, the error bound can be proved to be optimal. However, no optimality results are provided if this assumption is dropped. It would be more convincing if the authors could provide insights into the difficulties that arise in proving such statements, and potential workarounds.
The results provided are similar to [9], so it would be more interesting if the authors could describe the difficulties they had to overcome to prove these results. Given the high similarity of the results, I would expect more discussion in the related work section or in the description of the proof sketch.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Line 28: Would it be more precise if referencing the work by Cybenko in 1989?
Line 140: What is your definition of a smooth function? Please make it precise.
From Theorem 3.1, the complexity of an approximating complex-valued network is established for a polynomial. Would it be possible to use the Stone-Weierstrass theorem to establish an approximation bound? This seems to be a natural step to apply.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have sufficiently addressed the limitations of the paper in Section 5. In my view, the greatest limitation of this work is the use of continuous weight selection for guaranteeing the optimality of the error bound. It would be more convincing if the authors can give some insights into this in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you enjoyed reading our paper!
In the following we address each of the points that you raised.
# Point 1
> The hypothesis of continuous weight selection is not a practical assumption. The optimal choice of the network weights is discontinuous in general (see the reference below). It would be better if the authors could make this clear in the paper.
Thanks for suggesting that "Approximation by neural networks is not continuous" might be a relevant reference for our paper. We will include it in the final version and point out that selecting the **best approximating NN** of a given size for a given function is discontinuous.
In addition:
1. The continuity assumption is satisfied in many practical scenarios. In practice NNs are trained by variants of SGD. Here the training algorithm does not have direct access to the (unknown) ground-truth function $f$, but can access $f$ only through the training samples $(x_i,y_i)$, where $y_i =f(x_i)$ or $y_i =f(x_i)+\mathrm{noise}$. One can show that each gradient update depends continuously on the training samples and therefore also on the ground-truth function $f$. Strictly speaking, this continuity is only guaranteed to hold if the activation function is continuously differentiable. Therefore, for any continuously differentiable activation function the continuity assumption is satisfied in practice.
2. The paper mentioned above shows that selecting **the best** approximating NN for each function $f$ is a discontinuous operation. This, however, is not the setting that we consider. We show that for $\varepsilon=c\cdot m^{-k/(2n)}$ each function in the $C^k$-unit ball can be approximated using a shallow CVNN with $m$ neurons up to error $\varepsilon$, but it could be that several functions in the $C^k$ unit ball can in fact be approximated much better. Thus, the mentioned paper does not fully apply to the setting we consider.
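Purely as illustrative arithmetic (our own sketch; the constant is set to $1$ here, whereas the true $c = c(n,k)$ can be large), inverting the rate $\varepsilon = c \cdot m^{-k/(2n)}$ shows how the required width grows:

```python
def required_width(eps, n, k, c=1.0):
    # Invert eps = c * m**(-k / (2 * n)) for the number of neurons m.
    return (c / eps) ** (2 * n / k)

# With fixed smoothness k, doubling the complex input dimension n
# squares the width needed for the same accuracy -- the curse of
# dimensionality visible in the C^k setting.
m1 = required_width(1e-2, n=2, k=4)
m2 = required_width(1e-2, n=4, k=4)
assert abs(m2 - m1**2) < 1e-4
```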
# Point 2
> (...) it would be more convincing for the paper if the authors could provide insights into difficulties that arose in proving the statements and potential workarounds.
The following are the main difficulties:
1. Our Theorems 4.2 and 4.3 show that, if one drops the continuity assumption, there is no single approximation rate that is the optimal rate jointly for all smooth activation functions. This means that one would need to perform a separate analysis for different (classes of) activation functions.
2. For deep NNs (with more than two hidden layers) with general smooth activation function it is not possible to derive any non-trivial lower bounds without assuming continuity, since there exists an activation function with the property that NNs of constant size using this activation function can approximate any continuous function to arbitrary precision (see [22, Theorem 4]). Note that [22] considers real-valued NNs, but the results can be transferred to CVNNs with a suitable choice of the activation function. Hence, a lower bound in the case of unrestricted weight selection can, if at all, only be derived for shallow NNs.
3. In the real-valued case, fully general lower bounds for the approximation capabilities of shallow NNs have been derived by using results from [15] regarding the approximation properties of so-called ridge functions, i.e., functions of the form $\sum_{j=1}^m \phi_j(\langle a_j, x\rangle)$ with $a_j \in \Bbb{R}^d$ and $\phi_j: \Bbb{R} \to \Bbb{R}$. We are interested in generalizing these results to higher-dimensional ridge functions of the form $\sum_{j=1}^m \phi_j (A_j x)$, where each $\phi_j : \Bbb{R}^s \to \Bbb{R}$ and $A_j \in \Bbb{R}^{s \times d}$. This would imply lower bounds for shallow CVNNs. However, such a generalization seems to be highly non-trivial and is outside the scope of our paper.
We will include a brief discussion of these points in the final manuscript.
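As a concrete sketch of the objects in point 3 (our own illustrative code, with made-up $\phi_j$; for $s = 1$ the construction reduces to a classical ridge function):

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, m = 5, 2, 3  # input dim, block dim, number of terms

def generalized_ridge(x, A_list, phi_list):
    # sum_j phi_j(A_j x) with A_j in R^{s x d} and phi_j : R^s -> R.
    # For s = 1 this is the classical ridge sum_j phi_j(<a_j, x>).
    return sum(phi(A @ x) for A, phi in zip(A_list, phi_list))

A_list = [rng.standard_normal((s, d)) for _ in range(m)]
phi_list = [lambda v: float(np.sin(v).sum()) for _ in range(m)]
x = rng.standard_normal(d)
y = generalized_ridge(x, A_list, phi_list)
assert np.isfinite(y)
```

A shallow real network is the special case $s = 1$ with $\phi_j(t) = c_j\,\sigma(t + b_j)$, which is why lower bounds for ridge functions translate into lower bounds for shallow networks.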
# Point 3
> The results provided are similar to [9] so it would be more interesting if the authors could describe what difficulties they have overcome to prove these results. Given the high similarity of the results, I would expect more discussions in the related work section or in the descriptions of the proof sketch.
1. As outlined in Section 1.1.2 our results differ *significantly* from [9]: we consider general activation functions, we show that the rate proved in [9] for deep NNs can in fact already be achieved using shallow NNs, and can be improved by a log factor.
2. Our proof techniques differ significantly from those in [9]. While in [9] the target function is approximated by many polynomials of low degree, combined with a partition of unity, we approximate the target function by a single polynomial of high degree. For approximating the polynomial using a NN, [9] uses a construction specific to ReLU which was pioneered by Yarotsky [34]. In contrast, we use Wirtinger derivatives and divided differences which can be done for quite general activation functions.
We will include a discussion of the proof techniques in the final manuscript.
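To illustrate the divided-difference ingredient mentioned above (a standard numerical-analysis tool; the function $\exp$ here is only an example, not the paper's activation), the $p$-th divided difference over clustered points recovers $f^{(p)}(x)/p!$:

```python
import math

def divided_difference(f, xs):
    # Recursive definition: f[x0] = f(x0),
    # f[x0..xn] = (f[x1..xn] - f[x0..x_{n-1}]) / (xn - x0).
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) -
            divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

# The third divided difference of exp over points near 0 approximates
# exp'''(0) / 3! = 1/6.
dd = divided_difference(math.exp, [0.0, 1e-3, 2e-3, 3e-3])
assert abs(dd - 1.0 / 6.0) < 1e-3
```

This is the mechanism by which derivatives of the activation, and hence polynomials, can be emulated by small linear combinations of shifted and scaled neurons.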
# Questions
> Line 28: Would it be more precise if referencing the work by Cybenko in 1989?
We will cite the work by Cybenko in the final version. The reason why we cite [19] is that it considers very general activation functions.
> Line 140: What is your definition of a smooth function? Please make it precise.
Thanks for noting that the definition is missing. By "smooth" we mean $C^\infty$-functions in the sense of real variables. We will include that in the final version of the paper.
> From Theorem 3.1, the complexity of an approximating CVNN is established for a polynomial. Would it be possible to use the Stone-Weierstrass theorem to establish an approximation bound? This seems to be a natural step to apply.
Using the Stone-Weierstrass theorem, one could establish qualitative results. However, this theorem is not quantitative. Theorem J.15 can be seen as a quantitative version of the Stone-Weierstrass Theorem for approximating $C^k$-functions.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses. My concerns are fully addressed, and in light of this, I have increased my rating by 1. I hope the authors keep their promises and deliver the extra analyses and clarifications in their final version. One of the concerns raised by Reviewer GqHo is about the constant $c(n,k)$. I think this is not that problematic given that it is independent of $\epsilon$. However, I believe it is helpful for the reader to know an upper bound on the constant, why the constant is huge, and perhaps some potential approaches to improve it. I also agree with the authors that the curse of dimensionality cannot be avoided without making strong assumptions about the function family. | Summary: This paper studies the approximation power of complex-valued neural networks (CVNNs). They show that the approximation error is of order $m^{-k/(2n)}$, where $m$ is the number of neurons, $k$ is the smoothness of the target function, and $n$ is the input dimension of the neural network. They also show that this approximation rate is optimal under some mild assumptions.
Strengths: 1. This paper derives the approximation bounds of CVNNs for any continuous activation functions.
2. Furthermore, they show that the approximation bounds they obtain are optimal with a natural continuity assumption.
3. For several specific activation functions, the authors derive the upper bounds without the assumption of the continuity of weight selection.
Weaknesses: 1. For general activation functions, the authors need to assume the continuity of weight selection to derive the optimal approximation rates. It would be more convincing if they could show results for general activation functions without the assumption of the continuity of weight selection.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback for our paper.
Your main criticism of our paper is that
> It would be more convincing if they could show results for general activation functions without the assumption of the continuity of weight selection.
We agree that it would be interesting to establish lower bounds also in the case of discontinuous weight selection, as we mention in the Limitations section of our paper. However, in full generality this is a highly non-trivial problem, a full resolution of which is outside the scope of this paper. In particular, we would like to mention the following points:
1. The continuity assumption is natural: In practice neural networks are trained by variants of stochastic gradient descent. Here the training algorithm does not have direct access to the (unknown) ground-truth function $f$, but can access $f$ only through the training samples $(x_i, y_i)$, where $y_i = f(x_i)$ or $y_i = f(x_i) + \mathrm{noise}$. One can show that each gradient update depends continuously on the training samples and therefore also on the ground-truth function $f$. Strictly speaking, this continuity is only guaranteed to hold if the activation function is continuously differentiable. Therefore, for any continuously differentiable activation function the continuity assumption is satisfied in practice.
2. For *deep* neural networks (with more than two hidden layers) with general smooth activation function it is *not* possible to derive any non-trivial lower bounds without assuming continuity, since there exists an activation function with the property that neural networks of constant size using this activation function can approximate any continuous function to *arbitrary* precision (see [22, Theorem 4]). Note that [22] considers the real-valued setting but the results can be transferred to complex-valued networks with a suitable choice of the activation function. Hence, a lower bound in the case of unrestricted weight selection can, if at all, only be derived in the case of shallow networks.
3. In the real-valued case, fully general lower bounds for the approximation capabilities of shallow neural networks have been derived by appealing to the results from [15] regarding the approximation properties of so-called ridge functions. Here, a ridge function is of the form $\sum_{j=1}^m \phi_j (\langle a_j, x\rangle)$ with $a_j \in \mathbb{R}^d$ and each $\phi_j: \mathbb{R} \to \mathbb{R}$. We are interested in generalizing these results to higher-dimensional ridge functions, i.e., functions of the form $\sum_{j=1}^m \phi_j (A_j x)$, where each $\phi_j : \mathbb{R}^s \to \mathbb{R}$ and $A_j \in \mathbb{R}^{s \times d}$. This would imply lower bounds for shallow complex-valued neural networks. However, such a generalization seems to be highly non-trivial and is outside the scope of our paper.
4. We already discuss the issue of possibly discontinuous weight selection to some depth in Theorems 4.2 and 4.3. Theorem 4.2 shows that there is a smooth activation function for which one can even achieve the rate $\mathcal{O}(m^{-k/(2n-1)})$ which shows that without the continuity assumption and for this special activation function the upper bound of $\mathcal{O}(m^{-k/(2n)})$ is not sharp. In contrast, Theorem 4.3 shows that for some special activation function the upper bound of $\mathcal{O}(m^{-k/(2n)})$ is actually sharp (up to logarithmic factors), even in the case of possibly discontinuous weight selection.
In combination, these theorems show that in the case of unrestricted weight selection one cannot derive a sharp approximation rate holding simultaneously for all smooth activation functions. Specifically, Theorem 4.2 shows that such a bound would have to be smaller than $m^{-k/(2n-1)}$, while Theorem 4.3 shows that it would have to be greater than $(m \cdot \ln (m))^{-k/(2n)}$. This means that for the case of discontinuous weight assignment one has to perform an individual analysis for different activation functions. | Summary: The paper studies the expressive power of complex-valued neural networks. It is shown that depth-2 networks with non-polyharmonic (the complex equivalent of non-polynomial) activations can approximate any continuous function on a compact domain, with an error that decays polynomially in the width and exponentially in the smoothness of the target function, but that nevertheless suffers from the curse of dimensionality. This upper bound is also shown to be optimal in general, and furthermore, there are tailored examples where one can slightly improve the approximation rate.
Post-rebuttal:
I still find the technical contribution of the paper rather fair (as my original score indicates). My main concern with the paper is that its impact feels somewhat limited as it provides a guarantee which suffers from the curse of dimensionality. While I understand that this cannot be evaded in general, obtaining a result which is tight in the worst case where the worst case is intractable is a clear limitation. While I appreciate the fact that there are other older results in the literature which are highly cited and obtain such results for the real setting, this paper's merits should be considered compared to what is already known, and from this perspective, obtaining results in the complex setting that are analogous to the real setting is of interest but not groundbreaking. I have therefore decided to keep my original score.
Strengths: - The problem feels overall well-motivated, and CVNNs seem like an interesting model to study.
- The analogies and differences between real and complex neural networks highlighted by the results in this paper are interesting.
Weaknesses: - From a technical perspective, it seems like the main technical contribution here is to adapt existing techniques to the CVNN setting. Most results seem to take existing proofs and just apply them to the complex setting, which doesn't feel novel enough.
- The exact dependence on the rate of approximation is unclear. There's a 'constant' $c(n,k)$ which hides some dependence on the input dimension and smoothness parameters, which could be potentially huge (see questions below).
- The separation established in Section 4 by Theorem 4.2 and Theorem 4.3 is not particularly strong, and is only significant in cases where the input dimension $n$ is very small.
- I didn't find the main approximation result in Theorem 3.2 very surprising. This is an analogous result for something which is already known for real-valued networks, so obtaining this result for complex-valued functions feels incremental. In particular, it would be of greater interest, in my opinion, to study cases where complex-valued networks attain an *efficient* approximation in the input dimension $n$, since these have stronger practical implications.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Questions:
- "complex-valued networks behave significantly different from real-valued networks": I don't see how this is 'significantly different'. The second property does indicate a difference, but the first characteristic is analogous (non-polynomial and non-polyharmonic activations).
- There's no clear motivation for the very technical choice of the $C^k$-norm in the paper. Can you provide a more intuitive explanation for it and why it is chosen? In particular, why is this choice not stylized for obtaining the main result?
- "It is crucial that the size of the networks considered in Theorem 3.1 is independent of the approximation quality $\varepsilon$": Why is this important?
- What is the quantity $c(n,k)$ which appears in many of the theorems? The bounds in the paper are interesting since they allow us to study the dependence of the accuracy attained as a function of, say, $n$ for some fixed $k$; but the dependence of $c(n,k)$ on $n$ isn't clear. I tried to better understand it by looking at the proofs in the appendix, but it's not made explicit there either. $c(n,k)$ could, potentially, have magnitude $m^{k/(2n)}$, which would render the upper bound vacuous. Why is this not the case? Is this the same quantity in both the upper bound in Theorem 3.2 and the lower bound in Theorem 4.3? If not, it could make either bound very loose despite appearing to provide a tight result.
Comments:
- Abstract: "the real-valued case is supported by a firm mathematical foundation" -- I wouldn't say our mathematical understanding of deep learning is firm.
- The abstract doesn't explicitly state what are the functions the main result applies to.
- Line 27: The universal approximation theorem dates back to 1989. See "Approximation by Superpositions of a Sigmoidal Function" by G. Cybenko
- "CVNNs have the same excellent approximation properties as real-valued networks.": Why is this conclusion reached? The results in the paper hold for a very broad class of smooth function and therefore suffer from the curse of dimensionality in the worst case.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - I think that the lack of clarity in the theorem statements regarding $c(n,k)$ is problematic, yet this is not discussed in the paper. I urge the authors to clarify this and discuss this more clearly.
- The limitations of the provided results which suffer from the curse of dimensionality should be clearly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback. We agree that CVNNs are an interesting model to study.
# Weaknesses
> Most results seem to take existing proofs and just apply them to the complex setting.
We do not think that this is a fair assessment of our work. There are several novel results and ideas in our paper. Particularly noteworthy is Prop. 2.1, which connects the notion of non-polyharmonicity with the non-vanishing of Wirtinger derivatives at a single point. This is non-trivial and central to our proof. The other proofs, too, are not simply a matter of "replacing $\Bbb{R}$ by $\Bbb{C}$" but rather require substantial changes, mostly because real-valued activation functions are 1D objects, whereas complex activation functions are multi-dimensional (cf. the differences between ODEs and PDEs).
> I didn't find the main approximation result in Theorem 3.2 very surprising. [...]
We don't think this result is "boring": It is highly non-trivial (and maybe surprising) that the correct complex generalization of "non-polynomial" (for our problem) is "non-polyharmonic". [33] shows that this is the correct generalization for *universality* of **shallow** NNs, but already in the case of NNs with two hidden layers this stops being true. Therefore, it is a priori not clear that the notion of "non-polyharmonic activation function" is the correct one for deriving *quantitative* approximation bounds such as studied in our paper.
> it would be of greater interest [...] to study cases where CVNNs attain an *efficient* approximation in the input dimension $n$
In this paper, we study $C^k$-functions, for which the curse of dimensionality is inevitable. We agree that studying other function classes (such as generalizations of the Barron class to CVNNs) is interesting, and plan to do this in the future.
> The exact dependence on the rate of approximation is unclear. [...]
See below.
> The separation established in Section 4 by Theorems 4.2 and 4.3 is not particularly strong [...]
The purpose of these two theorems is to show that it is impossible to derive upper and lower bounds with a *matching* approximation rate for all smooth activation functions and a possibly discontinuous weight assignment. We just care that the two rates are **distinct**, not that there is a strong separation.
# Questions
> "CVNNs behave significantly different from RVNNs": I don't see how this is `significantly different'. [...]
The difference that we emphasize here is that, to obtain universality, there are more admissible activation functions for *deep* NNs than for *shallow* NNs in the complex case, which is not the case for RVNNs.
> There's no clear motivation for the very technical choice of the $C^k$-norm in the paper.
The $C^k$-space and the norm that we use are standard and widely used in the literature. Maybe you refer to the definition below l.116, where we wrote "$\partial^{\mathbf{k}}$" instead of "$\partial^{\mathbf{k}} f$". We will fix this in the final version.
> "It is crucial that the size of the NNs considered in Theorem 3.1 is independent of the approximation quality ": Why is this important?
It is important to obtain the final approximation result regarding the equation in l.184-185. Moreover, it is noteworthy that for polynomials the network size is independent of the error, which is not the case for our other results.
> Abstract: [...] I wouldn't say our mathematical understanding of deep learning is firm.
We agree this formulation should be changed. We propose "is supported by a growing mathematical foundation".
> The abstract doesn't explicitly state what functions the main result applies to.
Thanks for noting that this is not entirely clear. In the final version we will make this more explicit.
> Line 27: The universal approximation theorem dates back to 1989. [...]
We agree that the work by Cybenko should be cited and we will do so in the final version. We cited [19] since it is the first work to consider *arbitrary, non-polynomial* activations.
> "CVNNs have the same excellent approximation properties as RVNNs.": Why is this conclusion reached? The results in the paper [...] suffer from the curse of dimensionality [...]
We mean "excellent" in the sense that CVNNs attain the same approximation rate as RVNNs for $C^k$-functions. This is the *optimal* rate that can be achieved with $m$ (continuously selected) parameters (Theorem 4.1). You are correct in noting that even though the results are optimal they suffer from the curse of dimensionality (COD). We plan to study alternative function classes for which CVNNs do not suffer from the COD in future work.
> What is the quantity $c(n,k)$ which appears in many of the theorems? [...] $c(n,k)$ could, potentially, have magnitude $m^{k/(2n)}$ which would render the upper bound vacuous. [...] Is this the same quantity in both Theorem 3.2 and Theorem 4.3? [...]
The constant $c(n,k)$ only depends on $n$ and $k$ and is **independent** of the number of neurons $m$. Therefore, it cannot grow like $m^{k/(2n)}$. Roughly, $n$ and $k$ are taken as fixed and we ask how quickly the approximation error decays (asymptotically) as $m\to\infty$.
We don't compute $c(n,k)$ explicitly but note that the dependence is indeed exponential in $n$, even for large $k$ (e.g., $k = n$). However, this is not a shortcoming of our proof but a fact of life, as follows from "Approximation of infinitely differentiable multivariate functions is intractable" by Novak and Woźniakowski, Journal of Complexity (2009).
The constants that appear in theorems 3.2 and 4.3 are not the same, but this is standard in the literature for similar results. Precisely studying the constant $c(n,k)$ is extremely difficult and has only been done in a few very special cases (see the paper by Novak et al. above).
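Schematically, the shape of the bound under discussion, as far as it can be inferred from this exchange (the precise statement is in the paper's Theorem 3.2), is

```latex
\|f - \Phi_m\|_{\infty} \;\le\; c(n,k)\, m^{-k/(2n)}
\quad \text{for every } f \in C^k,
```

for a suitable CVNN $\Phi_m$ with $m$ neurons, where $c(n,k)$ depends only on $n$ and $k$, not on $m$.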
> The limitations of the provided results which suffer from the curse of dimensionality should be clearly discussed.
The final version will discuss this in greater detail.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal response
Comment: Dear authors,
Thank you for your detailed response which addresses the questions I raised. I appreciate your intentions to incorporate some of my suggestions into the paper.
I want to point out that a setting where $n$ is taken as fixed is limiting. While such results are mainly of interest when $n$ is moderate, the resulting approximations quickly become inefficient as $n$ grows. I understand that this is the best that can be done in the worst case, but this ignores potential "average case" results and might be very loose in many instances. Such a limitation should be clearly discussed in the paper. Moreover, even though $c(n,k)$ is independent of $m$, cases where it behaves as, e.g., $k^n$ still make the upper bound weak.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. We understand that you are dissatisfied with the dependence of our approximation bounds on the input dimension, which makes them subject to the curse of dimensionality, and we agree that it is important to study alternative function classes for which neural networks can avoid the curse. We strongly disagree, however, with the conclusion that this makes our results uninteresting, or not worth publishing.
In the following, we try one final time to make our point.
1. As we already pointed out in an earlier response, for the $C^k$ function class that we consider, the curse of dimensionality is an inescapable fact of life and cannot be avoided. This holds in a worst-case sense, but quite likely also in an "average case" sense. Indeed, the paper "Phase Transitions in Rate Distortion Theory and Deep Learning" by Grohs, Klotz, and Voigtlaender shows this optimality in an average sense in a slightly different, but closely related setting. We are strongly convinced that this result also extends to our setting. Verifying this, however, is outside the scope of this paper.
2. In the real-valued setting, there are several highly influential (well published and highly cited) works that are subject to the same limitations as our result. As selected examples we mention the following:
- "Neural networks for optimal approximation of smooth and analytic functions" by Mhaskar
- "Error bounds for approximations with deep ReLU networks" by Yarotsky
- "Optimal approximation of piecewise smooth functions using deep ReLU neural networks" by Petersen and Voigtlaender
- "Optimal approximation of continuous functions by very deep ReLU networks" by Yarotsky
- "The phase diagram of approximation rates for deep neural networks" by Yarotsky and Zhevnerchuk
- "Deep network approximation characterized by number of neurons" by Shen, Yang, and Zhang.
This underlines that such results are of high interest in the community.
3. Our results are not strictly limited to the class of $C^k$ functions. As an important auxiliary result (which might be of independent interest), we show that CVNNs can well approximate algebraic polynomials. There are natural and widely studied classes of functions that can be very well approximated by polynomials; for instance this holds for certain classes of holomorphic functions; see e.g. Example 2 in the paper "Approximation of smooth functionals using deep ReLU networks" by Song, Liu, Fan, and Zhou. For instance, for this class, our results on the approximation of polynomials using CVNNs would imply that using CVNNs with $N$ neurons, one can obtain an approximation error bound of $C \cdot \rho^{- N^{1/(2n)} / 5}$, where $C > 0$ and $\rho > 1$ only depend on the size of the polyellipse on which the considered functions are holomorphic, but not on the dimension. This bound is still subject to the curse of dimensionality, but much less than our bound for $C^k$ functions. This shows that our results and proof techniques can be useful to tackle the question of alternative function classes for which the curse can be avoided. We emphasize that there are many other function classes that can be well approximated by polynomials; for these, our results will thus be helpful.
We will be happy to add a brief discussion of these points to the final version of the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning | Accept (poster) | Summary: This paper presents a method to enhance the random sampling component of hyperband. The authors propose replacing it with a combination of random sampling, prior-based sampling, and incumbent-based sampling. They also suggest adjusting the proportion of these samplers based on the current state in the hyperparameter optimization process. The authors perform experiments on a series of benchmarks and compare the proposed method with multiple classical HPO baselines.
Strengths: This work might be useful in some particular situations.
Weaknesses: - The motivation for this work may appear artificial, aiming to reduce the cost of hyperparameter optimization in the age of deep learning. However, the paper lacks a detailed explanation of how the proposed adjustments to the sampling component can make hyperparameter optimization more practical and cost-effective in the era of deep learning. Can the authors provide a more comprehensive reasoning process to support this claim?
- Technically, this method is essentially a combination of prior work, including multi-fidelity optimization [1], expert priors [2], and local search [3]. The authors' contributions primarily build upon and benefit from these existing approaches, rather than introducing original ideas. The authors declare that their approach fulfills all the desired requirements for application to deep learning, but this claim is essentially derived from the benefits provided by multi-fidelity optimization.
[1] Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., & Talwalkar, A. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1), 6765-6816.
[2] Hvarfner, C., Stoll, D., Souza, A., Lindauer, M., Hutter, F., & Nardi, L. (2022). $\pi$BO: Augmenting acquisition functions with user beliefs for bayesian optimization. arXiv preprint arXiv:2204.11051.
[3] Wu, Q., Wang, C., & Huang, S. (2021, May). Frugal optimization for cost-related hyperparameters. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 12, pp. 10347-10354).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Why does modifying the random sampling component of HyperBand help decrease the expenses associated with tuning deep learning models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: As a hyperparameter optimization algorithm, the proposed method encompasses several hyperparameters that may significantly influence its final outcomes. These include the hyperparameters associated with the perturbation operation and those controlling the proportion of the three sampling methods. However, there is currently a dearth of corresponding ablation studies that specifically investigate the individual contributions of these hyperparameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments.
We would request you elaborate on your thoughts on the situations in which our work could be useful. Understanding this could give us perspective on how to address your concerns.
---
> The motivation for this work may appear artificial...the paper lacks a detailed explanation of how the proposed adjustments to the sampling component can make HPO more practical and cost-effective in the era of deep learning. Can the authors provide a more comprehensive reasoning process to support this claim?
> Why modifying the random sampling component of HyperBand (HB) can help decrease the expenses associated with tuning deep learning models?
---
1.
**a)** We respectfully disagree with the statement of artificial motivation. In many cases, human experts have strong knowledge about hyperparameter settings that are likely to perform well. E.g., if you’re optimizing GPT-4, you do not run HyperBand (HB), but you rather reuse previously known good hyperparameters, maybe with small local adjustments. PriorBand exploits the same prior sampling approach and marries it with multi-fidelity optimization.
**b)** In the original Hyperband paper, the following is stated in Future Work:
"_Finally, **Hyperband can benefit from different sampling schemes aside from simple random search** ... meta-learning can be used to **define intelligent priors informed by previous experimentation**. Finally, as mentioned in Section 2, exploring ways to combine Hyperband with **adaptive configuration selection strategies** is a very promising future direction._"
As such, the original authors share the opinion that this is a good direction for increased performance; the results confirm this hypothesis.
**c)** _Fig. 2_ shows that augmenting HB’s random sampling with prior sampling improves anytime performance. To account for failure modes, a local search around the incumbent is introduced as the third sampler, and a dynamic weighting algorithm is designed to trade off the 3 samplers. _Fig. 1_ illustrates the improved anytime performance of this method. _Section 7_ and _Appendix F_ show strong anytime performance for PriorBand under good priors. _Tables 10-14_ show per-dataset performance gains even under budgets equivalent to 4-5 model trainings.
**d)** Improving anytime performance directly translates to lower compute requirements for HPO, allowing practitioners to find suitable model configurations at lower costs than previously existing methods. Our work thus contributes to making HPO practical for DL.
**e)** Our experiment design illustrates results for a maximum budget of 5 model trainings when 4 parallel model trainings are possible. Whether such a budget is feasible for HPO is heavily subjective. However, we believe it captures tractable hardware and compute resources in many more cases than previously possible.
---
> Technically, this method is essentially a combination of prior work, including multi-fidelity optimization [1], expert priors [2], and local search [3].
> The authors declare that their approach fulfills all the desired requirements for application to deep learning, but this claim is essentially derived from the benefits provided by multi-fidelity optimization.
---
2.
**a)** In the realm of HPO, Multi-fidelity Optimization (MFO) and expert priors represent two key paradigms, with HB [1] and $\pi$BO [2] serving as their respective representatives in our work. Neither one of these in isolation is sufficient to fulfill all desired requirements. To our knowledge, there is no existing work which efficiently merges these two paradigms. Our contribution, the Ensemble Sampling Policy (ESP), is the first to propose a general yet systematic approach to doing so robustly, allowing for multi-fidelity algorithms to interface with expert priors and fulfill the required desiderata. Moreover, the ESP enables an explicit expert prior interface to an entire family of multi-fidelity algorithms.
**b)** The referenced paper [3] can be integrated into our related literature, thank you. Our proposed method’s contribution over [3] is that we are agnostic to the exact form of local search used. Sampling from the unit sphere [3] aligns with one of our local search ablations ("hypersphere" in _Fig. 16_), but we found it to exhibit higher variance and to be less robust than the local mutation in PriorBand. Sampling from a neighborhood sphere introduced additional hyperparameters and the issues mentioned in _L811-816 (Appendix E.2.3)_.
---
> As a HPO algorithm, the proposed method encompasses several hyperparameters … a dearth of corresponding ablation studies that specifically investigate the individual contributions of these hyperparameters.
---
3. In Fig. 3 of the rebuttal PDF, we have added ablation studies on the local search hyperparameters (Appendix E.2.5) as pointed out. We ablate over the standard deviation of the Gaussian around the incumbent configuration and the mutation rate, that is, the probability of selection of a hyperparameter for perturbation. Our chosen default setting for these hyperparameters, which we keep fixed across all our experiments, is amongst the better choices, but performance is very robust to these hyperparameters.
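A hedged sketch of such a local mutation operator around the incumbent (the exact PriorBand implementation may differ; `mutation_rate` and `std` mirror the two ablated hyperparameters, and all names and values here are illustrative):

```python
import random

def mutate_incumbent(incumbent, bounds, mutation_rate=0.5, std=0.25):
    """Perturb each hyperparameter of the incumbent with probability
    `mutation_rate`, adding Gaussian noise whose standard deviation is
    `std` times the width of that hyperparameter's range; the result
    is clipped back into the search space."""
    child = {}
    for name, value in incumbent.items():
        lo, hi = bounds[name]
        if random.random() < mutation_rate:
            value += random.gauss(0.0, std * (hi - lo))
        child[name] = min(max(value, lo), hi)  # clip into the box
    return child

random.seed(0)
incumbent = {"lr": 0.01, "weight_decay": 1e-4}
bounds = {"lr": (1e-5, 1.0), "weight_decay": (0.0, 1e-2)}
child = mutate_incumbent(incumbent, bounds)
```

The claimed robustness to these two hyperparameters means fixed defaults can be kept across benchmarks.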
### ___
We hope that our explanations were satisfactory and brought about more clarity. If so, we would appreciate it if you consider increasing our score. If you have any further questions and comments we would be very glad to discuss them.
### References
[1] Li et al. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1), 6765-6816.
[2] Hvarfner et al. (2022). $\pi$BO: Augmenting acquisition functions with user beliefs for Bayesian Optimization. arXiv preprint arXiv:2204.11051.
[3] Wu et al. (2021, May). Frugal optimization for cost-related hyperparameters. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 12, pp. 10347-10354).
---
Rebuttal Comment 1.1:
Comment: I think most of my concerns are addressed. I will raise my score. Thanks for your comments.
---
Reply to Comment 1.1.1:
Title: Re: Official Comment by Reviewer meuK
Comment: We thank you for your revision; it is much appreciated.
We are eager to address any remaining concerns you may have, in order to enhance your confidence in our work.
We also welcome feedback to improve the final draft. | Summary: This paper proposes PriorBand, an extension of HyperBand that adds expert priors and a novel sampling technique to replace random sampling in HB, called the Ensemble Sampling Policy (ESP). The ESP allows the algorithm to lean on the expert prior, but also use the current incumbent in case the prior is non-optimal. Under good and bad priors, the authors demonstrate the efficacy of this approach, showing additionally that the ESP also improves the performance of other Successive Halving (SH)-based methods.
**Rebuttal:** The authors have clearly addressed my comments in their rebuttal. I have updated my score from 5 to 7.
Strengths: - My favorite part of this paper is Section 7.2, which shows that the ESP proposed in this paper extends to other SH-like HPO methods as well. This contribution should perhaps be highlighted more.
- The paper's verbiage is very clear, and examples presented in the introductory sections are useful aids to readers who may not be familiar with HyperBand's workings.
Weaknesses: - My major concern is in the use of average relative rank to demonstrate efficacy. While a lower rank indicates better performance, it does not indicate whether that performance delta is statistically significant. As opposed to confidence intervals over *ranks*, I would much rather see the actual improvement in per-dataset metrics. For example, the original HyperBand paper [1] showed the test error over wall time for multiple HPO algorithms.
- minor: In L130, you're missing a period.
- L182: typo: modelling --> modeling
[1] Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., & Talwalkar, A. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. The journal of machine learning research, 18(1), 6765-6816.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In Alg. 2, L5, can you explain why $p_\pi$ is used instead of $p_{\hat{\lambda}}$?
- In Fig. 14, it seems the lines have confidence intervals. How are these computed, and how many repeats of the experiment were used?
- In Appendix D.3, the authors describe the way a "good" prior is generated, using the best of 25 configurations. In practice, this would add computational overhead to the overall process of training a model with good hyper-parameters. In that sense, perhaps it should be emphasized that the Bad prior results are more useful to a practitioner, who, following their intuition/expertise/random choice, might possibly use a non-optimal prior (i.e., without running anything). Can you comment on this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations of their work appropriately in the main text and the supplementary materials.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad to hear that the reviewer appreciated our presentation and found a favorite part too! Thank you for your review and comments.
---
> My major concern is in the use of average relative rank to demonstrate efficacy. While a lower rank indicates better performance, it does not indicate whether that performance delta is statistically significant. As opposed to confidence intervals over ranks, I would much rather see the actual improvement in per-dataset metrics. For example, the original HyperBand paper showed the test error over wall time for multiple HPO algorithms.
---
1. Thank you for raising this important point. We agree that relative ranks only show part of the picture, and we used it throughout for ease of aggregation across far more benchmarks than, e.g., the Hyperband paper, for additional robustness of results. Aggregating raw performance across benchmarks has to be done carefully to avoid individual benchmarks being over-represented due to different scales of performances; we now did this using average normalized regret. Fig. 1 and Fig. 2 in the _rebuttal PDF_ include per-dataset regret plots for good priors and average normalized regret for different prior strengths, respectively. Due to page restrictions, in the main paper, we can only include either the aggregated average rank or the average regret. Your suggestion on how to effectively communicate and persuade the reader with Appendix support is welcome, but we will make sure to add regret plots in a camera-ready version, including a pair (good-bad priors) of per-dataset normalized regret plots.
---
> In Alg. 2, L5, can you explain why $p_\pi$ is used instead of $p_{\hat{\lambda}}$?
---
2. We acknowledge any potential confusion that may have arisen from our use of $p_\pi$ in Alg. 2. We shall rename $p_\pi$ in L1 and RHS of L5-6 in Alg. 2 to $p_\pi^{\mathrm{old}}$ in the camera-ready.
L3 in Alg. 1 computes the probability of sampling uniformly random and the remaining probability is assigned to $p_\pi$. The role of Alg. 2 now (taking the current $p_\pi$ as input) is to split this probability between the prior sampler and the incumbent-based sampler. L5 in Alg. 2 thus assigns the determined proportion of this probability to $p_{\hat{\lambda}}$ and in L6 $p_\pi$ is updated as well.
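A minimal Python sketch of this allocation, with all function and variable names hypothetical (this is not the paper's exact Alg. 2, only an illustration of splitting the non-uniform probability mass proportionally between the two informed samplers):

```python
def split_sampler_probabilities(p_uniform, prior_score, incumbent_score):
    """Split the probability mass not reserved for uniform random
    sampling between the prior-based and incumbent-based samplers,
    proportionally to (hypothetical) scores tracking how well each
    sampler has performed so far."""
    p_rest = 1.0 - p_uniform
    total = prior_score + incumbent_score
    p_incumbent = p_rest * incumbent_score / total
    p_prior = p_rest - p_incumbent  # remainder goes to the prior sampler
    return p_prior, p_incumbent

# Example: 40% uniform sampling; the incumbent sampler is trusted
# twice as much as the prior sampler.
p_prior, p_incumbent = split_sampler_probabilities(
    0.4, prior_score=1.0, incumbent_score=2.0
)
```

The three resulting probabilities always sum to one, so they can be used directly as sampler weights.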
Please let us know if this clarifies the confusion or whether you have any further questions.
---
> In Appendix D.3, the authors describe the way a "good" prior is generated, using the best of 25 configurations. In practice, this would add computational overhead to the overall process of training a model with good hyper-parameters. In that sense, perhaps it should be emphasized that the Bad prior results are more useful to a practitioner, who, following their intuition/expertise/random choice, might possibly use a non-optimal prior (i.e., without running anything). Can you comment on this?
---
3. This is a crucial misunderstanding; please let us explain.
**a)** The best-of-25-samples methodology is strictly an experimental setup protocol to obtain a sense of what a good configuration is. That is, we are not suggesting running 25 random configurations before running PriorBand (with that budget we want to long be done with the optimization), but rather we assume that a practitioner may have existing knowledge based on earlier tuning experience or the literature. (If you’re, e.g., optimizing GPT-4, you use your knowledge gathered from GPT-2 and GPT-3.) For our experimental evaluation, we require a simple, general and reproducible protocol and therefore followed prior work on Bayesian optimization with expert priors [1]. However, we acknowledge that this type of prior may not perfectly match a practitioner’s prior. In _Appendix D.3, Fig 13_, we show the performance of random samples produced from these priors to help judge their relative quality.
**b)** We agree that the issue of prior quality is a relevant one. _Tab. 10_ displays PriorBand’s performance across prior strengths, compared to other algorithms. PriorBand’s performance degrades gracefully even in the face of a highly adversarial prior (the worst of 50000 random samples), as it maintains the performance of HyperBand on a 12-evaluation budget and only lags behind slightly on a 5-evaluation budget. This adversarial prior is highly unrealistic; if a user doesn’t know which hyperparameter values would work well they can always specify a uniform prior, in which case prior samples are additional random samples.
Given the prevalence of manual tuning, we ultimately believe that practitioners are substantially more well-informed than a vanilla (random) approach. As such, we believe that they should generally have a positive influence on PriorBand’s performance.
---
> In Fig. 14, it seems the lines have confidence intervals. How are these computed, and how many repeats of the experiment were used?
---
4. We would like to refer you to _Section 6.3_, _Appendices D.4_ and _D.5_ where we detail our experimental setup. We hope this addresses your question about Fig. 14. If not, we are happy to clarify further.
### ___
We appreciate your feedback and believe to have addressed all the points you raised. Given our additional results for your main concern of only showing rank-based results and our clarification of the crucial misunderstanding of the source of priors (25 random points being far beyond our total budget), we would greatly appreciate it if you were to increase your score correspondingly. If you have additional questions we would be more than glad to discuss them.
We shall certainly fix the typos you pointed out, in the final draft.
### References:
[1] Hvarfner et al. (2022). $\pi$BO: Augmenting acquisition functions with user beliefs for Bayesian Optimization. arXiv preprint arXiv:2204.11051.
---
Rebuttal 2:
Comment: The authors have sufficiently addressed my major concern with the paper. I have accordingly revised my score from 5/4 to 7/4.
---
Rebuttal Comment 2.1:
Comment: We appreciate and thank you for the revision!
Any general comment for improving a camera-ready draft is most welcome! | Summary: In the paper "PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning" the authors propose an extension to the well-known HPO methods Hyperband by adapting the way how candidate hyperparameter settings are sampled. To this end, the authors propose to use a weighting mechanism to balance between three sampling distributions: random, locally random close to the incumbent, and a prespecified prior. The latter allows experts to inject beliefs about the optimum. In the empirical study PriorBand is found to perform best on average among the considered methods.
Strengths: + PriorBand compares favorably to its competitors and allows for hyperparameter optimization with comparably low budgets.
+ The paper is very well written and the presentation in general is excellent.
+ The authors put very much effort into making their work reproducible, openly accessible and easy to understand.
+ For the self-imposed desiderata, PriorBand offers the most desired properties.
Weaknesses: - The work is pretty incremental. The only original contribution of this work is to weight the different probability distributions from which candidates are sampled.
- It is not clear what effect the incumbent-sampling has on the overall performance. At least I could not find any ablation studies in this regard. Is it really necessary to include incumbent-sampling? What would happen if only random sampling is balanced against prior-sampling? Is it then harder to retain decent performance in the light of bad priors?
- Nothing is stated about the impact on theoretical guarantees that are known to hold for Hyperband. Does the theoretical framework of Hyperband still apply for the changed distributions?
- When comparing different hyperparameter optimizers, only relative ranks are reported.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - What is the individual effect of incumbent-based and prior-based sampling?
- What is the impact on the theoretical guarantees for Hyperband? Are all the assumptions still fulfilled?
- What is the definition of "relative rank"? Is it only a rank or does it also include performance differences?
- Where do these desiderata stem from? Although they all seem intuitive it is not clear whether they are exhaustive and what the coverage is like.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Limitations of PriorBand are well discussed. However, it is not entirely clear to what extent the desiderata for HPO in the age of deep learning is limited or maybe even exhaustive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for appreciating the strong performance of PriorBand, our paper presentation and our efforts to make research open-source and reproducible.
---
> What is the individual effect of incumbent-based and prior-based sampling?
---
1.
**a)** We apologize that this was not sufficiently clear from the main paper. We actually do report multiple ablation studies on the individual components of PriorBand in the appendix. The incumbent-based ablation can be found in _Appendix E.2, Fig. 17_, showing that, for any prior quality and fidelity correlation, the incumbent-based sampling improves performance. _Appendix E.3_, _Fig. 19_ also highlights the role of how the weights of each sampler adapt to lend robustness to PriorBand.
**b)** The effect of prior-based sampling can be found in _Fig. 1_ and _Fig. 2_ (as a substitute for random search, RS) and in _Fig. 5_ (the effect of good/bad priors on the outcome of optimization), with complementary results in _Fig. 21_ and _Fig. 22_. While a bad prior naturally degrades performance initially, its long-run impact is negligible compared to RS.
If the reviewer has additional ablations in mind, we would be happy to add them to the camera-ready.
---
> The work is pretty incremental. The only original contribution of this work is to weight the different probability distributions from which candidates are sampled.
---
2. While your observation holds merit, we believe our contribution facilitates new directions in hyperparameter optimization (HPO) and adds novelty to the existing literature.
Our work is the first, to our knowledge, to empower experts to enhance multi-fidelity optimization through their knowledge and intuition. It is principled and well-motivated, with thorough ablations, and enables effective model-based extensions. Based on the gap it identifies and fills, we believe our work is novel.
We would also like to mention that the established HPO algorithms we compare to may themselves be considered incremental: ASHA differs from SuccessiveHalving by a simple if-condition, BOHB uses Bayesian Optimization (BO) as the sampler within HyperBand (HB), etc. Nevertheless, they all provide unique benefits and are extensively used algorithms.
PriorBand adds to this list by uniquely supporting expert priors, additionally empowering _all_ such algorithms to interface with expert priors through our contribution of the Ensemble Sample Policy (ESP), making our approach novel in scope, design, and application. Moreover, its robustness and simplicity make it extremely practical - a largely overlooked criterion in the BO/HPO community.
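For intuition, the core of such an ensemble sampling policy, drawing each candidate from a weighted mixture of samplers, can be sketched in a few lines (a toy illustration with made-up samplers, weights, and a one-dimensional search space; not PriorBand's actual implementation):

```python
import random

def ensemble_sample(samplers, weights, rng=random):
    """Draw one configuration from a weighted mixture of samplers.

    samplers: dict name -> zero-argument callable returning a config.
    weights:  dict name -> non-negative weight (need not be normalized).
    """
    names = list(samplers)
    total = sum(weights[n] for n in names)
    pick = rng.random() * total
    acc = 0.0
    for n in names:
        acc += weights[n]
        if pick <= acc:
            return samplers[n]()
    return samplers[names[-1]]()  # guard against floating-point edge cases

# Hypothetical mixture: a fixed share of random search, with the rest
# split between a prior-centered and an incumbent-centered sampler.
config = ensemble_sample(
    {"random": lambda: {"lr": 10 ** random.uniform(-5, -1)},
     "prior": lambda: {"lr": 1e-3},
     "incumbent": lambda: {"lr": 3e-3}},
    {"random": 0.3, "prior": 0.4, "incumbent": 0.3},
)
```

In the actual method the weights themselves adapt over the run; here they are fixed purely for illustration.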
---
> Nothing is stated about the impact on theoretical guarantees that are known to hold for HB. Does the theoretical framework of HB still apply to the changed distributions?
> What is the impact on the theoretical guarantees for HB? Are all the assumptions still fulfilled?
---
3.
**a)** Thank you, we would like to clarify that PriorBand ensures a constant proportion of random sampling at the highest fidelity (_Eq. 2_) and thus can trivially be proven to be no more than a constant factor worse than HB in the worst case. We did not add this theorem due to its simplicity, and since the important practical reductions of anytime regret over HB come from the local search component and the user prior, NOT from random sampling, theoretical results for these speedups remain elusive.
We are hopeful that we can apply a multi-armed bandit formulation to the selection of sampling ratios in the future to obtain a regret bound compared to the best of the candidate sampling strategies.
**b)** While it's feasible to apply HB's multi-armed bandit formulation to PriorBand by considering the local search as a form of exploitation, we believe this topic deserves its own dedicated paper. As such, we consider it as potential future work.
---
> When comparing different hyperparameter optimizers only relative ranks are _(used?)_
> What is the definition of "relative rank"? Is it only a rank or does it also include performance differences?
---
4.
**a)** Our ranking procedure is aggregated to show robustness across benchmarks and is described in detail in _Appendix D.5_.
We agree that ranking plots only show part of the picture and therefore have now also uploaded plots showing average normalized regret in our _rebuttal PDF_, whose conclusions remain the same.
**b)** It ranks algorithms based on their best solutions at a given time. We compute these rankings per benchmark for one seed, average them across benchmarks for robustness, and repeat across seeds to calculate the mean and standard error of the overall ranking.
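The ranking procedure above can be sketched as follows (a hedged illustration; the data layout and function names are our own, not the paper's evaluation code):

```python
import statistics

def rank(values):
    """Rank algorithms 1..n by loss (1 = lowest loss)."""
    order = sorted(values, key=values.get)
    return {algo: i + 1 for i, algo in enumerate(order)}

def aggregate_ranks(results):
    """results[seed][benchmark][algo] = best loss found at some time t.

    Returns per-algorithm (mean, standard error) of the rank averaged
    across benchmarks, with mean/SE taken across seeds.
    """
    per_seed = []  # one {algo: mean rank across benchmarks} per seed
    for benchmarks in results.values():
        ranks_per_bench = [rank(v) for v in benchmarks.values()]
        algos = ranks_per_bench[0]
        per_seed.append({a: statistics.mean(r[a] for r in ranks_per_bench)
                         for a in algos})
    out = {}
    for a in per_seed[0]:
        xs = [s[a] for s in per_seed]
        se = statistics.stdev(xs) / len(xs) ** 0.5 if len(xs) > 1 else 0.0
        out[a] = (statistics.mean(xs), se)
    return out
```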
---
> Where do these desiderata stem from? Although they all seem intuitive it is not clear whether they are exhaustive and what the coverage is like.
> ...it is not entirely clear to what extent the desiderata for HPO in the age of deep learning is limited or maybe even exhaustive.
---
5. Our desiderata extend those stated in the influential BOHB paper by Falkner et al. [1], which combines BO and HB, together with recent insights [2, 3] that manual search is preferred to (more sophisticated) HPO and that, when HPO is adopted, simple yet effective algorithms like HB are preferred. As such, “Expert Beliefs” and “Simplicity” were well-grounded additions to that list. We believe our list is thorough, but we invite the reviewer to suggest anything that might be missing to make it truly exhaustive.
### ___
We'd appreciate knowing if our responses have satisfied you and if our clarified perspective might improve your evaluation. Any further questions, comments or suggestions for enhancing the paper are most welcome.
### References:
[1] Falkner et al., BOHB: Robust and Efficient Hyperparameter Optimization at Scale, 2018
[2] Bouthillier and Varoquaux, Survey of machine-learning experimental methods at NeurIPS2019 and ICLR2020, 2020
[3] Schneider et al., HITY workshop poll, NeurIPS, 2022
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you very much for the thorough rebuttal and the clarifications. Most of my points are reasonably addressed in your comments and revisions.
However, the aspect of being incremental remains and I do not find the argument that there exist other incremental works very convincing. Furthermore, I do not consider myself a deep learning practitioner, therefore I unfortunately need to decline the invitation of providing more desiderata. Yet, I would wish for a survey among practitioners, on what they would actually desire from HPO methods to incorporate them into their daily routine when working with such expensive training processes. I would agree with the authors that reducing the cost for HPO might indeed be a key point but maybe not the only one. In general, I would not expect such studies but in this paper on a presentation level, this was very prominently exposed to the reader.
Appreciating the work of the authors in their rebuttal and the practical use of PriorBand, I will increase my score to weak accept but only to that level due to the remaining points.
---
Reply to Comment 1.1.1:
Comment: We thank you for your revision, it is much appreciated.
Regarding your point about a survey of practitioners discussing their needs, there are related works. In [1], a survey was conducted showing that tuning algorithms are either only partially adopted or not adopted at all in over 60% of cases. This survey includes demographic breakdowns and can provide insight into what makes HPO easy to integrate into existing pipelines, depending on the end-user or application.
A more recent and relevant survey was conducted among various practitioners, from novices to experts, in different ML research areas [2]. The key findings are well summarized in Figures 1, 2, and 3 of that paper. Their survey identifies _increased model performance_ and _decreased compute requirements_ as the top two requirements for a practitioner to adopt HPO.
Our list of desiderata includes these requirements while our empirical evaluation of PriorBand supports them.
The third goal in Hasebrook et al. [2], focused on _reducing practitioner effort_, aligns closely with our aim for _simplicity_, which led us to adopt HyperBand as the foundation for PriorBand. Our approach employs straightforward early stopping and ranking, mirroring manual tuning practices. The survey [2] also mentions how practitioners often intend not just to tune but to understand step by step _“what is working and what is not”_ (a quote from the survey [2]). This perspective underscores the significance of our Expert Prior Interface in an HPO algorithm. Additionally, _Fig. 19_ in _Appendix E.3_ illustrates how practitioners can perform post-hoc analysis to gauge whether the prior input remains pertinent to the problem at hand.
Our specific list of desiderata aims to thus capture such requirements that make HPO more amenable in practice. Our contributed algorithm PriorBand ticks all these boxes while allowing model-based extensions, that is, PriorBand can be leveraged with Bayesian Optimization, thereby covering a large set of requirements as reported in Hasebrook et al. [2].
We agree that it will be useful to back up our desiderata with references to these surveys and will do so in the camera-ready version; thanks a lot for the suggestion! Ours is the first work satisfying these desiderata; in particular, it is the first work to show how expert priors can be effectively applied to a multi-fidelity algorithm that not only benefits an HPO run but also is generally robust to any kind of expert input, making the decision of using HPO with expert priors a potential default in practice. Also, our contribution of dynamic weighting of samplers can be applied to _any_ algorithm, which means that adding an expert prior interface to an existing implementation of an HPO algorithm is rather trivial while satisfying the desiderata we identify.
### ___
We highly appreciate your time and comments and are thankful for your feedback. If there are any other points we can clarify to increase your confidence further or improve our final draft, we are happy to hear them.
### References:
[1] van der Blom et al., AutoML Adoption in ML Software, 2021
[2] Hasebrook et al., Practitioner Motives to Select Hyperparameter Optimization Methods, 2023
Title: Re: Re: Rebuttal by Authors | Summary: This paper presents PriorBand, a hyperparameter optimization (HPO) algorithm designed specifically for deep learning models. PriorBand fulfills six key requirements and tackles the shortcomings of existing HPO methods that are unsuitable for DL. It leverages cheap proxy tasks while considering expert input. The algorithm eliminates the need for a naive solution for integrating expert domain knowledge into HPO. Experiments across a wide array of DL tasks are conducted to demonstrate the efficiency and robustness of PriorBand.
Strengths: + The proposed algorithm is simple yet effective. The augmented prior based on the existing HyperBand algorithm is technically sound and exhibits clear improvement over baselines.
+ The paper is well-written and well-structured. Motivation is clearly stated before the introduction of detailed algorithms (e.g., Sect. 4).
Weaknesses: - Though the experiments conducted in the paper are quite comprehensive, I found most of them are moved to the appendix where the figures in the main paper are of limited information (most are about robustness to bad priors). I would suggest authors rearrange the paper by moving figures from Appendix F to the main text.
- In Table 1, given the fact that all the baseline methods satisfy "Mixed search spaces" + "Speedup under parallelism", I would suggest the authors remove these two rows as they are also not the major technical contributions of the proposed method. Maybe instead, replace with one sentence in the caption highlighting both the baselines and the proposed method can satisfy these two criteria.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We’d like to thank the reviewer for their kind remarks of appreciation for the method itself and we are delighted to read that the effort put into the presentation, structure, and motivation made the paper clear.
---
> Though the experiments conducted in the paper are quite comprehensive, I found most of them are moved to the appendix where the figures in the main paper are of limited information (most are about robustness to bad priors). I would suggest authors rearrange the paper by moving figures from Appendix F to the main text.
---
1.
**a)** We appreciate that the reviewer took the time to read our Appendix and we agree with the reviewer's remarks at large. We decided on a set of plots aggregated across tasks in the main paper, that are focused on the primary message of how the Ensemble Sampling Policy (ESP) improves performance. For the camera-ready, we plan to move _Fig. 21_ and _Fig. 22_ to the main paper.
**b)** As an alternative, we now also created average normalized regret plots which we show on the _additional page_ we can upload during the rebuttal. They could substitute average relative ranks as an alternative way of communicating aggregated results over benchmarks.
We kindly request the reviewer to check these plots to see if they are a more suitable set of visualizations for the final main draft.
---
> In Table 1, given the fact that all the baseline methods satisfy "Mixed search spaces" + "Speedup under parallelism", I would suggest the authors remove these two rows as they are also not the major technical contributions of the proposed method. Maybe instead, replace with one sentence in the caption highlighting both the baselines and the proposed method can satisfy these two criteria.
---
2. Thank you for pointing this out, it is indeed correct. However, we would like to clarify that we do not intend the desiderata to be technical contributions, but an overview of the HPO approach landscape. In fact, in terms of technical contributions, the Ensemble Sampling Policy, which is our major contribution, improves both anytime performance and final performance and can therefore not be isolated into any one of the 7 criteria in Table 1. In the end, we believe users of an HPO system are agnostic to *how* a desideratum such as efficiency is fulfilled, as long as it is indeed fulfilled.
### ___
We value the reviewer's input and recommendations and invite the reviewer to explore the additional PDF we've provided. We are happy to address additional questions to increase your confidence in the review.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Thanks for the authors' reply. The rebuttal has resolved my concerns. I am not mainly working on hyperparameter optimization so I cannot faithfully judge the novelty + the technical contribution of the paper compared with the existing SOTA ones, as concerned by Reviewer QGiN and meUK. However, the comprehensive comparison in the paper has convinced me of the method's effectiveness, so I will maintain my positive score. | Rebuttal 1:
Rebuttal: To all reviewers and chairs,
We upload the permitted extra PDF with plots addressing a few points raised across the reviews.
**Figure 1)** shows the influence of a "good" prior on PriorBand, per-dataset under the reproducible protocol for prior generation in our experiments (_Appendix D.3_). This is an alternative to the mean relative ranks in the draft.
In this plot, we show the mean normalized regret over iterations for PriorBand per benchmark. The regret is calculated by normalizing all values between the minimum and maximum values seen across all included algorithms and seeds for each benchmark.
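A minimal sketch of this min-max normalization (the data layout and names are illustrative assumptions, not the script used for the figures):

```python
def normalized_regret(traces):
    """Min-max normalize incumbent losses on one benchmark.

    traces: dict algo -> list of best-so-far losses over iterations,
            pooled over all algorithms and seeds for this benchmark.
    """
    all_vals = [v for trace in traces.values() for v in trace]
    lo, hi = min(all_vals), max(all_vals)
    span = (hi - lo) or 1.0  # guard against a degenerate benchmark
    return {algo: [(v - lo) / span for v in trace]
            for algo, trace in traces.items()}
```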
**Figure 2)** shows that the absolute performance of PriorBand is significant over the baselines, something not visible in relative ranking plots. It shows the mean normalized regret averaged across all benchmarks under different strengths of priors. This demonstrates both strong anytime and final performance of PriorBand across different prior qualities, under tractable budgets.
The second from right plot in this figure is the aggregated view of Figure 1.
**Figure 3)** highlights that our chosen hyperparameters for PriorBand offer a good balance between exploration and exploitation. This figure shows two ablation studies over the PriorBand local search hyperparameters: (i) the standard deviation of the local perturbation, and (ii) the mutation rate, which determines the chance of selection of each hyperparameter for perturbation.
More exploitative local search hyperparameters naturally give gains under strong priors. However, given the goal of being robust across all possible scenarios, a conservative local search (our default for PriorBand) offers steady anytime gains under different prior qualities. This allows PriorBand to be truly practical for HPO.
We have provided further captions to explain the figures but we are happy to clarify further if required.
We are looking forward to an engaged discussion period!
Pdf: /pdf/6166eb54b7239345571763f81a18027936f8925d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Minimax Forward and Backward Learning of Evolving Tasks with Performance Guarantees | Accept (poster) | Summary: This paper presents an incremental minimax risk classifier (imrc) to effectively utilize forward and reverse learning and explain evolutionary tasks. Furthermore, the performance improvements provided by forward and backward learning can also be described analytically based on the expected quadratic changes of the task and the number of tasks.
Strengths: The effects and limitations of IMRCs are verified by rich experiments, which show that IMRCs can better exploit forward and backward learning to adapt to evolving tasks and improve the effective sample size.
Weaknesses: However, the ablation studies have some deficiencies, and the scale of the experiments is relatively small.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The tasks in the article are mainly set up as highly similar consecutive tasks. Does continual learning over such highly similar tasks have practical applications, and can this method have broad prospects as the technology develops?
2. The article should describe the task premise more clearly: what distinguishes the similarity in similar consecutive tasks from the evolution in evolving tasks, and why are highly similar consecutive tasks more consistent with the principles of evolution?
3. The experiments demonstrate the roles of forward and backward learning, but is backward learning alone feasible, and does forward learning have an inhibitory effect on backward learning?
4. In Figure 1 of the article, although the intended meaning can be roughly understood, the graph lines are somewhat confusing; in particular, the meaning of the black line is unclear.
5. In Figure 2 of the article, the meanings of j in the first red square and k in the second red square are not clear, and the symbols introduced in Section 4.2 are inconsistent with the flow-chart symbols in Figure 2, which is liable to confuse readers.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: see questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The assumption in the paper can better describe datasets with evolving tasks than the usual i.i.d. assumption**
The paper's assumption describes scenarios in which the underlying distributions change as a random walk with independent increments. This type of assumption is often used to describe processes such as stock prices. The paper's assumption can better describe datasets with evolving tasks than the usual i.i.d. assumption, as described in the general response and shown in Figure 7 in Appendix G and Figure 2 in the attached pdf. We will also include other examples of evolving tasks in the introduction and describe the evolving aspect of the datasets used in the experiments, as follows.
Examples of evolving tasks are the classification of portraits from different time periods [1] and of spam emails over time [41]; in these problems, the similarity between consecutive tasks (portraits and emails of consecutive time periods) is significantly higher.
The proposed method is evaluated using 12 datasets composed of evolving tasks. Six datasets are formed by images with characteristics that change over time: tasks in the Yearbook dataset correspond to portraits from different years over more than a century; tasks in the ImageNet noise dataset correspond to images with increasing noise factors; tasks in the DomainNet dataset correspond to images with decreasing realism; tasks in the UTKFaces dataset correspond to face images with increasing ages; the Rotated MNIST dataset contains rotated images with increasing angles; and tasks in the CLEAR dataset correspond to images with a natural temporal evolution of visual concepts. The rest of the datasets are formed by streams of tabular data that are segmented by time: the Power Supply dataset contains power supply records over time; the Usenet datasets contain a stream of messages from different newsgroups that are sequentially presented to a user; the German dataset contains information of bank clients over time; the Spam dataset contains malicious and non-malicious emails received over time; and the Covertype dataset contains cartographic information of a forest area over time.
**Feasibility of backward learning**
The proposed IMRC method allows backward learning alone, since backward mean and MSE vectors are obtained recursively from those for the succeeding task. We obtain the backward mean and MSE vectors by applying the forward learning recursions in retrodiction. These vectors can then be used to obtain the IMRC parameters by solving the optimization problem (3). In addition, the backward ESS in Theorem 3 shows the performance improvement of the backward learning techniques. This ESS shows that backward learning can achieve positive transfer similar to forward learning. Theorem 3 also shows that combining forward and backward learning results in a higher ESS. Such qualitative consequences of the theoretical results are corroborated by the numerical results in Section 5 and Appendix G, for instance in Figures 4, 5, and 6, which show the average classification error of single-task learning, forward learning, and forward and backward learning. Therefore, such results show that forward and backward learning improves performance in comparison with forward learning alone.
**Additional explanations of Figure 1**
Following the reviewer's suggestion, the updated caption of the figure will more clearly describe its contents, including the meaning of the black line. Figure 1 in the paper shows an example of evolving tasks and how the proposed techniques can adapt to evolving tasks and effectively exploit forward and backward learning. The example of evolving tasks is the classification of portraits from different decades; in this problem, the similarity between consecutive tasks is significantly higher. The black line in the figure represents the evolution of the underlying distributions that characterize the different tasks (black dots). IMRCs minimize the worst-case error probability over uncertainty sets that can include the underlying distribution. For each task, the proposed methodology obtains a single-task uncertainty set (blue hexagons) leveraging information only from the corresponding task (blue arrows), and a forward uncertainty set (green hexagons) from the forward uncertainty set for the preceding task (green arrows) and the single-task uncertainty sets. Then, we obtain forward and backward uncertainty sets (red hexagons) from the forward and backward uncertainty sets for the succeeding tasks (red arrows) and the forward uncertainty sets. Such transfer of information significantly increases the ESS of each task, represented in the figure by the size of the uncertainty set, since smaller uncertainty sets correspond to a higher ESS.
**Additional explanations of symbols used in Figure 2**
We thank the Reviewer for pointing out the typo of j in the first red square and k in the second red square in Figure 2. We will fix it in the new version of the paper. In addition, we will clarify the symbols used in Figure 2 and in Section 4.2. Figure 2 in the paper depicts the flow diagram of the proposed IMRC methodology for each j-th task, while the text in Section 4.2 describes the procedure of the proposed IMRC methodology at each step k. The IMRC method obtains, for each j-th task, the forward mean vector $\tau_j^\rightharpoonup$ leveraging information from preceding tasks and the sample average $\tau_j$. Reciprocally, backward mean vectors $\tau\_{j+1}^{\leftharpoondown k}$ for each j-th task are obtained leveraging information from the k-th task through the sample average $\tau_k$. Then, the forward and backward mean vectors $\tau_j^{\rightleftharpoons k}$ are obtained from the forward mean vectors and the backward mean vectors. Note that the forward mean vector and the sample average are obtained for each j-th task at step j. Then, at each step k, the IMRC method obtains the forward mean vector for the k-th task and the forward and backward mean vectors for each j-th task.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed feedback. I will keep my score. | Summary: IMRCs (Incremental Minimax Risk Classifiers) are a technique for incremental learning of a growing sequence of classification tasks. The key feature of IMRCs is their ability to exploit the similarity between consecutive tasks by leveraging both forward and backward learning.
Forward learning utilizes information from preceding tasks to improve the classification performance of the current task. It recursively updates the mean and mean squared error (MSE) vectors by considering the difference between the current task and the preceding task. Backward learning, on the other hand, uses information from the current task to improve the classification performance of preceding tasks.
Strengths: (1) IMRCs leverage the evolving nature of tasks in a growing sequence. By utilizing both forward and backward learning, IMRCs effectively exploit the similarity between consecutive tasks.
(2) By leveraging information from preceding and succeeding tasks, IMRCs can incorporate additional knowledge and enhance classification performance.
(3) IMRCs minimize the worst-case error probabilities over uncertainty sets. This means that they provide robust classification performance, even in the face of uncertainty or variations in the underlying distributions of tasks.
Weaknesses: (1) The forward and backward learning processes, along with the uncertainty set updates, can require significant computational resources and time. How about the training time compared to other methods?
(2) The selection of appropriate uncertainty sets can be challenging and may require prior knowledge or assumptions about the task distributions.
(3) The authors assume that the distributional changes between tasks (p2-p1, p3-p2, ...) are independent and have a mean of 0. How can IMRCs be extended or modified to handle scenarios with non-zero-mean changes between consecutive tasks or explore novel tasks outside the existing sequence?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness part.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Complexity of IMRCs and running time in comparison with other methods and for different backward steps**
In the final version of the paper, we will show that the running time of IMRCs is similar to other state-of-the-art techniques and increases moderately with the number of backward steps and the number of tasks, as described in the general response. Figures 1(a) and 1(d) in the attached pdf show the average running time per task for different numbers of backward steps and sample sizes using the Yearbook and Spam datasets; and Figures 1(b), 1(c), 1(e), and 1(f) in the attached pdf show the running time for different numbers of backward steps, numbers of tasks, and sample sizes $n = 10$ and $n = 100$ using the Yearbook and Spam datasets. Such figures complement those presented in Table 6 in the current Appendix G. These results show that the complexity of IMRCs increases linearly with the number of backward steps and the number of tasks, in agreement with the analysis in Section 4.2.
**The assumption in the paper is not stronger than the usual i.i.d. assumption**
In this paper, we develop techniques designed for evolving tasks that can be mathematically modeled assuming that the changes between consecutive distributions $\mathrm{p}_{2} - \mathrm{p}_1, \mathrm{p}_3 - \mathrm{p}_2, \ldots$ are independent and zero-mean. Specifically, the assumption in the paper describes scenarios in which the underlying distributions change as a random walk with independent increments, so that it can more accurately describe evolving tasks. For instance, such type of assumption is often used to describe processes such as stock prices. The assumption in the paper differs from the usual i.i.d. assumption but is not necessarily stronger. The main difference between both assumptions is that with the random walk assumption each difference between consecutive distributions is independent of the others, while with the i.i.d. assumption each distribution $\mathrm{p}_i$ is independent of the others.
**Mean of changes between consecutive distributions**
IMRCs can be modified to handle scenarios with non-zero-mean changes between consecutive distributions. If $m_j$ is the mean of the change between the $j$-th and the $(j-1)$-th task, then the mean vectors evolve over time steps through the linear dynamical system
$$\boldsymbol{\tau}_j^\infty = \boldsymbol{\tau}\_{j-1}^\infty + \boldsymbol{w}_j$$ where vectors $\boldsymbol{w}_j$ for $j \in \{2, 3, \ldots, k\}$ are independent with mean $m_j$. Then, we can obtain recursions for the mean and MSE vectors, after some algebra, from the Kalman filter recursions and fixed-lag smoother recursions. Such recursions are given by the recursions in the paper after subtracting the expectation of the changes between consecutive distributions.
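As a sketch, a single forward step under this modified model reduces, in the scalar case, to a Kalman-style predict-and-update in which the drift mean $m_j$ enters the prediction step (the function name, the scalar simplification, and the explicit variance arguments are our assumptions, not the paper's multivariate recursions):

```python
def forward_update(mu_prev, mse_prev, m_j, q_j, sample_avg, r_j):
    """One forward-learning step for a scalar mean parameter.

    Random-walk model: tau_j = tau_{j-1} + w_j with E[w_j] = m_j and
    Var[w_j] = q_j; sample_avg is the task-j sample average with
    variance r_j.
    """
    # Predict: propagate the preceding estimate through the expected drift.
    mu_pred = mu_prev + m_j
    mse_pred = mse_prev + q_j
    # Update: precision-weighted fusion with the new sample average.
    gain = mse_pred / (mse_pred + r_j)
    mu = mu_pred + gain * (sample_avg - mu_pred)
    mse = (1.0 - gain) * mse_pred
    return mu, mse
```

Setting m_j = 0 recovers the zero-mean random-walk case assumed in the paper.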
IMRCs can be used in scenarios with an ever-increasing sequence of tasks. For each new task in the sequence, IMRCs obtain mean and MSE vectors for the new task using forward learning recursions, and then obtain mean and MSE vectors for each task in the sequence using forward and backward learning recursions.
---
Rebuttal Comment 1.1:
Title: Reviewer spbD
Comment: Thank you for the detailed feedback. I will keep my score which leans toward acceptance. | Summary: This paper presents a novel method to tackle continual learning, called IMRCs which is able to exploit forward and backward learning to account for evolving tasks. The authors provide theoretical guarantees on the performance of IMRCs in terms of the tasks' expected quadratic change and the number of tasks. Authors compared the proposed method with prior work on multiple datasets, sample sizes, and number of tasks.
Strengths: + The idea seems interesting and performance guarantee in the context of continual learning is a promising research direction.
+ A thorough theoretical analysis of IMRCs is provided.
Weaknesses: - My main concern about this paper is the experimental section. The diversity of the datasets and the experiments with different sample sizes are a plus, but at the same time the setting differs from the common continual learning setting in the literature, and the baselines are outdated: the field of continual learning is rapidly growing and numerous works have been developed between 2021 and now.
- The paper does not discuss the computational complexity of IMRCs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The authors mention they used the original hyper-parameters for the baselines. But some methods such as MER, GEM, and EWC, which were introduced on image data, cannot be taken as-is and applied to tabular datasets without tuning hyper-parameters. How did the authors ensure a fair comparison? Note that results for some methods such as GEM and MER are roughly equal to those of EWC (Table 1), but in vision applications this is not expected, and EWC is always outperformed by every single memory-based method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Computational complexity of IMRCs**
In the final version we will further discuss the computational complexity of IMRCs, the running time for different numbers of backward steps and tasks, and the running time in comparison with the state-of-the-art methods, as described in the general response.
**Scope of the paper and experimental comparisons with the rapidly growing set of algorithms for continual learning**
As the reviewer points out, the field of continual learning is rapidly growing. This paper aims to present methodological contributions that make it possible to effectively exploit forward and backward learning and to account for evolving tasks. The proposed IMRCs determine classification rules minimizing worst-case error probabilities over uncertainty sets that can contain the sequence of evolving underlying distributions. Specifically, we propose forward learning techniques that recursively use the information from preceding tasks to reduce the uncertainty set of the last task, and we propose forward and backward learning techniques that use the information from the last task to further reduce the sequence of uncertainty sets obtained with forward learning. In addition, we describe the performance guarantees of the methods introduced and analytically characterize the increase in ESS provided by the presented methods in terms of the expected quadratic change between consecutive tasks and the number of tasks. The main contributions of the paper are new methodological approaches and the theoretical analysis of the methods presented. The paper also includes sizeable experimental results with multiple datasets and baseline methods, but an exhaustive experimental comparison that includes the most recent techniques is beyond the scope of the paper.
The existing techniques for concept drift adaptation are designed for evolving tasks but only aim to learn the last task in the sequence, while the existing techniques for continual learning aim to learn the whole sequence of tasks but are designed for situations where tasks' similarities do not depend on the time steps when tasks are observed. The methodology proposed fills this gap in the existing literature and presents techniques that simultaneously learn a sequence of tasks and account for evolving tasks.
**Hyper-parameters for the baselines and comparison methods**
As described above, the main goal of the paper is to present methodological contributions that enable learning a sequence of evolving tasks, rather than improving performance relative to the most recent state-of-the-art methods on each specific dataset. While hyper-parameter fine-tuning is a crucial step in achieving the best possible results, it often involves a significant amount of computational resources and extensive experimentation. Given the focus on methodological contributions, we demonstrate the effectiveness of the techniques introduced by using default hyper-parameters for all datasets, methods, and sample sizes.
EWC is outperformed by GEM and MER on most vision datasets, such as the Yearbook, I. noise, DomainNet, R. MNIST, and CLEAR datasets. For the tabular datasets, results for GEM and MER are roughly equal to those of EWC. This fact can be due to the strong similarity between consecutive tasks in the latter datasets, which have often been used as benchmarks for concept drift adaptation. Note that GEM, MER, and EWC are designed for situations where tasks' similarities do not depend on the time steps when tasks are observed, so their performance is expected to be similarly limited in situations where the similarity between consecutive tasks is markedly higher.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough rebuttal. I appreciate the effort taken to provide clarity on the computational complexity aspect. However, I still have reservations regarding the experimental section. While I concur with the authors' assessment of the paper's contributions, I believe that the experimental design should align more closely with these contributions. Specifically, I find it important to address the rationale behind EWC, a regularization technique with inherent limitations, performing at a comparable level to memory-based methods. The latter possess greater flexibility in retaining past knowledge due to their capacity to store samples. The authors' justification, centered around the "strong similarity between consecutive tasks in later datasets," suggests that forgetting might not be a significant challenge in the selected datasets. Consequently, even a less potent method like EWC could exhibit satisfactory results. Thus, I question the reliability of such datasets as suitable benchmarks. Furthermore, I respectfully disagree with the statement that "GEM, MER, and EWC are designed for situations where tasks’ similarities do not depend on the time steps" being a fair reason for their comparable performance. These methods have undergone testing on continual learning datasets with temporal disparities in the past, and the outcome trends have persisted.
In conclusion, after thoroughly reviewing both the critiques and responses, I am inclined to revise my initial assessment. I acknowledge the strengths highlighted by myself and other reviewers. As a result, I am willing to adjust my score accordingly and raise it from 3 to 5.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for his/her careful reading of the responses in the rebuttal and the appreciation of the paper's main contributions. In addition, we would like to thank the reviewer for raising the score from 3 to 5.
In the final version of the paper, we will clarify that the main aim of the paper is to present methodological contributions as well as the theoretical properties of the methods proposed. | Summary: The paper studies a specific case of continual learning over a sequence of tasks. The authors extend the typical i.i.d. assumption of the task meta-distribution to the case where only the differences between subsequent task distributions are assumed to be independent and zero-mean. Under this assumption, they devise two versions of a minimax risk classifier and prove performance guarantees that improve upon single-task bounds by relying on the effective sample size instead of just the number of samples in a task. The effective sample size is no smaller than the single-task size and can be larger if the quadratic change between consecutive tasks behaves in a favorable way, e.g. decreasing over time. The authors also present an experimental study of the proposed algorithms and show that they either improve the baseline performance or achieve similar performance across 12 datasets.
Strengths: ### Originality
This is a novel approach that relies on independent differences between subsequent task distributions rather than on the typical i.i.d. assumption.
### Quality
* Paper studies the presented algorithm theoretically and experimentally.
* Illustrative theoretical guarantees that are always no worse than a single task with a clear influence of the task distribution properties in the form of effective sample size.
* Experiments use 12 different datasets and cover 7 baselines.
* The authors provide insights into the impact of the number of tasks and the number of samples on the performance of different variants of the algorithm.
### Clarity
The paper is well written and easy to follow in general.
### Significance
There are numerous ways in which the i.i.d. assumption can be relaxed, and the paper studies one of them. The significance will depend on how close the new assumption is to practical scenarios. From the experimental setup it seems to be a rather specific case that relies too much on exact task ordering, thus I would expect the paper to have average impact from a theoretical point of view, but quite limited impact in practice.
Weaknesses: The paper doesn’t have major weaknesses. There are things that can be improved in the presentation and discussing limitations, which I cover in questions section.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The following things would help with the presentation:
* The notations used are overloaded (e.g. by using double superscripts) and sometimes are hard to follow. The paper would benefit from a small table with all notations in one place.
* Experiments present the results for a sample size of 10 without any additional comment on the exact value of 10. The appendix shows that the general conclusion is valid irrespective of the sample size, so it would be good to highlight that.
* The main assumption on the task distribution needs to be made more precise and explicit in the theorems (introduce assumptions separately and then refer to it in theorems). A discussion on how realistic it is and providing more than one example would go a long way.
* Theoretical analysis needs a discussion on different regimes where the ESS improvements would be realised and what they mean for the task distributions process. Authors do have a short discussion of that after Theorem 4, but it lacks the connection between different regimes of nd^2 and process properties. For example, $nd^2 \leq \frac{1}{j^2}$ translates into a rather fast convergence requirement of the task distribution process.
* It would be great to discuss what happens to the algorithm when the main assumption breaks. Does it go back to single task performance? How would one figure this out in practice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the approach are not adequately discussed. I've made a few suggestions in the questions section on how to address that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Table with main notations used in the paper**
In the final version of the paper we will include Table 2 in the attached pdf that shows the main notations used. Specifically, such table shows the notation used for the mean vector, the confidence vector, the MSE vector, the ESS, the uncertainty set, and the minimax risk. In addition, symbols without superscript represent variables obtained using single task learning; symbols with right harpoon superscript represent variables obtained using forward learning; symbols with left harpoon superscript represent variables obtained using backward learning; and symbols with right left harpoon superscript represent variables obtained using forward and backward learning. Due to space constraints, we will include the table in the appendices and will reference it in the Notation paragraph.
**IMRCs are successful irrespective of the sample size**
We will follow the reviewer's suggestion and highlight that the improved performance of IMRCs compared to the state-of-the-art techniques is similar irrespective of the sample size. We will clarify that the results for $n = 10$ samples per task in Table 1 in the main paper are complemented with results for $n = 50, 100$, and $150$ samples per task in Tables 4, 5, and 6 in Appendix G. These results show that the improved performance of IMRCs compared to the state-of-the-art techniques is similar for different sample sizes.
**More formally describe the paper's assumption and reference to it in theorems**
We will follow the reviewer's suggestion and more formally describe the assumption in Section 2, as follows. Most existing continual learning techniques are designed for tasks characterized by distributions $\mathrm{p}_1, \mathrm{p}_2, \ldots$ such that the tasks' distributions $\mathrm{p}_i$ are independent and identically distributed (i.i.d.) for $i = 1, 2, \ldots$. In the following, we propose techniques designed for evolving tasks that are characterized by distributions $\mathrm{p}_1, \mathrm{p}_2, \ldots$ such that the changes between consecutive distributions $\mathrm{p}_{i+1} - \mathrm{p}_i$ are independent and zero-mean for $i = 1, 2, \ldots$. In addition, we will refer to this assumption as the "evolving tasks assumption" in Section 2 and in the theorems.
**Suitability of the assumption in the paper for evolving tasks and additional examples**
The assumption in the paper describes scenarios in which the underlying distributions change as a random walk with independent increments. This type of assumption is often used to describe processes such as stock prices. The assumption in the paper can better describe real-world datasets with evolving tasks than the usual i.i.d. assumption, as described in the general response and shown in Figure 7 in the current Appendix G and Figure 2 in the attached pdf. We will also include other examples of evolving tasks in the introduction and describe the evolving aspect of the datasets used in the experiments, as follows.
Examples of evolving tasks are the classification of portraits from different time periods [1] and the classification of spam emails over time [41]; in these problems, the similarity between consecutive tasks (portraits of consecutive time periods and emails from consecutive years) is significantly higher.
The proposed method is evaluated using 12 datasets composed of evolving tasks [9, 1, 34–41]. Six datasets are formed by images with characteristics/quality/realism that change over time: tasks in the Yearbook dataset correspond to portraits from different years over more than a century; tasks in the ImageNet noise dataset correspond to images with increasing noise factors; tasks in the DomainNet dataset correspond to images with decreasing realism; tasks in the UTKFaces dataset correspond to face images with increasing ages; the Rotated MNIST dataset contains rotated images with increasing angles; and tasks in the CLEAR dataset correspond to images with a natural temporal evolution of visual concepts. The rest of the datasets are formed by streams of tabular data that are segmented by time: the Power Supply dataset contains power supply records over time; the Usenet datasets contain a stream of messages from different newsgroups that are sequentially presented to a user; the German dataset contains information on bank clients over time; the Spam dataset contains malicious and non-malicious emails received over time; and the Covertype dataset contains cartographic information of a forest area over time.
**Discussion on regimes of the ESS improvements**
In the updated version of the paper we will further discuss the regimes of the ESS improvements shown in Theorem 4 and Figure 3, as follows. The increase in ESS can be classified into three regimes depending on the relationship between the task index $j$, the sample size $n$, and the expected quadratic change $d^2$. The ESS becomes proportional to the total number of tasks $k$ if $nd^2$ is rather small (very small sample sizes or very slow changes in the distribution); the ESS quickly increases when $nd^2$ becomes small (reduced sample sizes and moderate changes in the distribution); and the ESS is only marginally larger than the sample size for sizeable values of $nd^2$ (large sample sizes or drastic changes in the distribution).
**IMRCs' behavior when the similarity between consecutive tasks is small**
IMRCs are designed for situations where consecutive tasks are significantly more similar and the expected quadratic change ($d^2$) between consecutive tasks is small. If, instead, this expected quadratic change is large, the forward and the forward-and-backward mean vectors in equations (6) and (10), respectively, become the mean vector of single-task learning, as hinted by the reviewer. Such cases can be detected in practice because the estimate for $d^2$ in equation (7) would become large. In these cases, the performance of IMRCs reduces to that of single-task learning.
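The reduction to single-task learning for large $d^2$ can be illustrated with a precision-weighted update (a hypothetical scalar analogue of Kalman-style estimation; `combine` and all numeric values are ours, not equations (6), (7), or (10) from the paper):

```python
def combine(prev_mean, prev_mse, sample_mean, sample_mse, d2):
    """Precision-weighted combination of the propagated previous-task
    estimate (whose MSE grows by d2, the expected quadratic change) and
    the current task's sample mean."""
    propagated_mse = prev_mse + d2
    w = sample_mse / (propagated_mse + sample_mse)  # weight on previous info
    return w * prev_mean + (1 - w) * sample_mean

prev_mean, prev_mse = 0.0, 0.5
sample_mean, sample_mse = 1.0, 0.5

# Small expected change: previous-task information is exploited.
small_change = combine(prev_mean, prev_mse, sample_mean, sample_mse, d2=1e-6)
# Large expected change: the estimate collapses to the single-task
# sample mean, i.e., single-task learning.
large_change = combine(prev_mean, prev_mse, sample_mean, sample_mse, d2=1e6)
```

With `d2=1e-6` the result sits between the two means, while with `d2=1e6` the weight on the previous task vanishes and the estimate is essentially the current sample mean.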
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. You have addressed most of my questions/concerns except for the last one: "It would be great to discuss what happens to the algorithm when the main assumption breaks. Does it go back to single task performance? How would one figure this out in practice?"
I probably didn't express myself well enough. What I meant was adding a discussion on the limitations of your main assumption. When will your approach stop working? The main assumption "breaks" when consecutive distributions stop being independent and/or zero-mean. You have clarified how to adapt the approach for the non-zero-mean case in one of the other rebuttals, but what would happen when the distributions are dependent? Obviously your theoretical bounds won't hold, but would the approach reduce to learning each task independently? The paper would definitely benefit from this information.
Overall, after reading the rebuttal and other reviews, my recommendation remains the same.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for his/her careful reading of the responses in the rebuttal and the positive feedback provided that will help us describe the methods' reliance on the assumptions.
The proposed method should improve performance as long as consecutive tasks have a higher similarity (evolving tasks), even if their differences are dependent. IMRCs obtain mean vectors using those of the consecutive tasks and the expected quadratic change between consecutive tasks $\boldsymbol{d}_j^2$. If $\boldsymbol{d}_j^2$ is large, the mean vectors become the mean vectors of single-task learning; while if $\boldsymbol{d}_j^2$ is small, the mean vectors become the mean vectors of standard supervised classification. The main limitation of the methods proposed is that IMRCs cannot exploit strong similarities among non-consecutive tasks. The methods presented are designed for situations in which tasks are evolving, in the sense that consecutive tasks often have a higher similarity. | Rebuttal 1:
Rebuttal: In the responses below we are confident that we have addressed all the comments and questions made by the reviewers. Please let us know if you have any additional inquiry so that we can completely clarify any aspect of the paper during the rebuttal period.
We plan to use the extra page allowed to clarify the questions raised by the reviewers as described in the responses below. In this general response, we clarify the two main questions raised by the reviewers regarding the running time of the methods proposed and the assumption for evolving tasks.
**Running times in comparison with other methods and for different backward steps**
Table 6 in Appendix G and Table 1 in the attached pdf show that the running time of the IMRC method is similar to that of other state-of-the-art methods, and Table 6 in Appendix G and Figure 1 in the attached pdf show that the running time of IMRCs increases moderately with the number of backward steps. Specifically, Table 1 in the attached pdf shows the average running time per task of IMRCs in comparison with the state-of-the-art methods, and Figure 1 in the attached pdf shows that the complexity increases linearly with the number of backward steps and the number of tasks. Such results complement those presented in Table 6 in the current Appendix G and agree with the theoretical analysis in Section 4.2. This analysis shows that, for $k$ steps, IMRCs have computational complexity $\mathcal{O}((b+1)Kmk)$ and memory complexity $\mathcal{O}((b+k)m)$, where $K$ is the number of iterations used for the convex optimization problem (3), $m$ is the length of the feature vector, and $b$ is the number of backward steps.
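The stated complexity can be sanity-checked with a toy operation count (illustrative only; `imrc_ops` is our own name and simply evaluates the $\mathcal{O}((b+1)Kmk)$ expression, it does not measure real running time):

```python
def imrc_ops(k, b, K, m):
    """Operation count matching O((b+1)*K*m*k): for each of the k tasks,
    one forward update plus b backward updates, each running K
    optimization iterations over a length-m feature vector."""
    return (b + 1) * K * m * k

# With no backward steps (b = 0) versus b = 3 backward steps:
base = imrc_ops(k=100, b=0, K=50, m=10)
with_backward = imrc_ops(k=100, b=3, K=50, m=10)
```

The count grows linearly in both the number of backward steps and the number of tasks, matching the linear trends reported for Figure 1 in the attached pdf.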
**The assumption in the paper more accurately describes evolving tasks than the usual i.i.d. assumption**
Figure 7 in Appendix G and Figure 2 in the attached pdf show that the assumption in the paper can better describe datasets with evolving tasks than the usual i.i.d. assumption. We evaluate the paper's assumption that the changes between consecutive tasks are independent and zero-mean by assessing the partial autocorrelation of the mean vectors. Partial autocorrelations are the usual tool to assess whether a process is a random walk (see Section 4 in Cowpertwait and Metcalfe (2009)). In particular, the partial autocorrelation at any lag would be zero if tasks are i.i.d., while the partial autocorrelation at lag 1 is larger than zero if tasks satisfy the assumption of Section 2. Figure 7 in the current Appendix G and Figure 2 in the attached pdf show the averaged partial autocorrelation of the mean vector components +/- their standard deviations for different lags. These figures show a partial autocorrelation clearly non-zero at lag 1, which reflects a dependence between consecutive mean vectors in accordance with the assumption of Section 2.
Paul SP Cowpertwait and Andrew V. Metcalfe. Introductory time series with R. Springer Science \& Business Media, 2009.
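The partial-autocorrelation diagnostic can be sketched as follows (our own illustration with simulated data, not the paper's experiment; at lag 1, the partial autocorrelation equals the ordinary lag-1 autocorrelation, which the helper below computes):

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation, which equals the lag-1 partial
    autocorrelation of the series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
k = 2000
iid_means = rng.normal(size=k)               # i.i.d. task means
walk_means = np.cumsum(rng.normal(size=k))   # random-walk task means

pacf_iid = lag1_autocorr(iid_means)    # near zero for i.i.d. tasks
pacf_walk = lag1_autocorr(walk_means)  # clearly non-zero for a random walk
```

The i.i.d. series yields a lag-1 value near zero, while the random-walk series yields a value near one, mirroring the non-zero lag-1 partial autocorrelation reported for the evolving-task datasets.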
Pdf: /pdf/52a8e6642ec44195d77ba9237e978b6732358df3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Though class-incremental learning has been studied the most as a default setting for continual learning, task-incremental study could be crucial for other areas. This paper focuses particularly on the case where tasks being introduced over time are interrelated under specific conditions. Under the supervised learning framework, assuming the task distributions $p_t$ (of instances and labels at any time t) are independent and identically distributed would be a strong and less realistic condition. In contrast, this paper covers the case where the difference of two consecutive task distributions is independent, making the difference between two distant tasks have increasing variance over time. The authors then propose the Incremental Risk Minimization Classifier (IMRC) method, which effectively leverages forward and backward learning algorithms, improving the performance against existing methods on various datasets, sample sizes, and numbers of tasks.
Strengths: (1) The paper formally defines a non-trivial task-incremental learning setting whose assumption is more realistic than the earlier approach.
(2) The paper analytically shows performance guarantees and effective sample sizes as well as forward and backward learning algorithm.
(3) The paper addresses empirical improvement on multiple datasets against existing state-of-the-art algorithms.
Weaknesses: (1) The current draft is hardly self-contained. Readers must consult the appendix frequently. At a minimum, having explanations of the methods and datasets in Table 1 would significantly increase the readability.
(2) The run-time trade-off of considering the backward process (possibly for different sample sizes) is missing.
(3) Characteristics of each experimental dataset, i.e., how much they agree or disagree with the independence assumption across consecutive task distributions, are missing.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: (1) Providing comprehensive details about the different tasks and their distinctive characteristics helps readers better understand the contributions of the paper beyond the theoretical excitement. Formalizing the time-evolving task with an independence assumption on consecutive task differences sounds exceptional. It would be great to know how much this assumption holds or is violated on the individual tasks in the experiments.
(2) In addition to (1), accessing real examples or qualitative explanations about hard cases (that were not successful in the previous approaches, but exceptionally successful with the current IMRC) will be useful. For example, it seems IMRC is distinguished notably from other algorithms on P. Supply, Usenet1, and Spam datasets. Understanding the rationale will be insightful.
(3) Most importantly, run-time trade-off for doing backward process must be measured for making the provable guarantees practically effective. As the trade-off would change for different number of tasks and sample sizes, practitioners require better guidance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No specific limitations are described or probed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Explanations about methods and datasets**
In the final version of the paper, we will further describe the methods and datasets used in Table 1 to make the main paper more self-contained as suggested by the reviewer. Specifically, we will include the following comments on the datasets and methods.
The proposed method is evaluated using 12 public datasets composed of evolving tasks [9, 1, 34–41]. Six datasets are formed by images with characteristics/quality/realism that change over time: tasks in the Yearbook dataset correspond to portraits from different years over more than a century; tasks in the ImageNet noise dataset correspond to images with increasing noise factors; tasks in the DomainNet dataset correspond to images with decreasing realism; tasks in the UTKFaces dataset correspond to face images with increasing ages; the Rotated MNIST dataset contains rotated images with increasing angles; and tasks in the CLEAR dataset correspond to images with a natural temporal evolution of visual concepts. The rest of the datasets are formed by streams of tabular data that are segmented by time: the Power Supply dataset contains power supply records over time; the Usenet datasets contain a stream of messages from different newsgroups that are sequentially presented to a user; the German dataset contains information on bank clients over time; the Spam dataset contains malicious and non-malicious emails received over time; and the Covertype dataset contains cartographic information of a forest area over time.
In Section 5, we compare the results of IMRC methods with 7 state-of-the-art techniques [4, 16, 3, 5, 8, 9, 15]: the GEM method [4] is a technique developed for continual learning based on experience replay and learns each new task using stochastic gradient descent with inequality constraints given by the losses of preceding tasks; the MER method [16] is a technique developed for continual learning based on experience replay and learns each new task using sample sets that include stored samples from preceding tasks; the EWC method [5] is a technique developed for continual learning based on regularization and learns each new task using regularization parameters based on the relevance for preceding tasks; the ELLA method [3] is a technique developed for continual learning based on dynamic architectures and learns each new task transferring knowledge from a shared basis of task models; the Condor method [8] is a technique developed for concept drift adaptation based on weight factors and adapts to evolving tasks by weighting the models in an ensemble at each time step; the AUE method [15] is a technique developed for concept drift adaptation based on weight factors and adapts to evolving tasks by incrementally updating all classifiers in an ensemble and weighting them with non-linear error functions; and the DriftSurf method [9] is a technique developed for concept drift adaptation based on sliding windows and adapts to evolving tasks by restarting the model when a change in the distribution is detected.
**The assumption in the paper more accurate describes evolving tasks than the usual i.i.d. assumption**
We will show that the assumption in the paper can better describe datasets with evolving tasks than the usual i.i.d. assumption as described in the general response. Specifically, Figure 2 in the attached pdf shows the averaged partial autocorrelation of the mean vectors components +/- their standard deviations for different lags using Power Supply, Covertype, and R. MNIST datasets. Such results complement those using Yearbook and UTKFaces datasets presented in Figure 7 in the current Appendix G. Figure 7 in Appendix G and Figure 2 in the attached pdf clearly show that the paper's assumption better describes evolving tasks than the usual i.i.d. assumption. Note that in the i.i.d. case the partial autocorrelation at lag 1 would be 0, while such figures show a partial autocorrelation clearly non-zero at lag 1.
**IMRCs are especially successful in Power Supply, Usenet1, and Spam datasets**
As the reviewer points out, IMRCs significantly improve performance in comparison with other algorithms on the Power Supply, Usenet1, and Spam datasets. Such datasets are used as benchmarks in concept drift adaptation [9, 39, 41] and have a markedly strong similarity between consecutive tasks. As described in the first response, the Power Supply dataset contains power supply records over time; the Usenet1 dataset contains a stream of messages from different newsgroups that are sequentially presented to a user; and the Spam dataset contains malicious and non-malicious emails received over time. Existing techniques designed for continual learning do not account for evolving tasks, and existing techniques designed for concept drift adaptation do not exploit backward learning. IMRCs are successful in the datasets cited by the reviewer because the methods proposed account for evolving tasks and effectively exploit forward and backward learning.
**Running times for different backward steps, number of tasks, and sample sizes**
The running time of IMRCs increases moderately with the number of backward steps and the number of tasks as described in the general response. Figures 1(a) and 1(d) in the attached pdf show the average running time per task for different numbers of backward steps and sample sizes using Yearbook and Spam datasets; and Figures 1(b), 1(c), 1(e), and 1(f) in the attached pdf show the running time for different numbers of backward steps, number of tasks, and sample sizes $n = 10$ and $n = 100$ using Yearbook and Spam datasets. Such results complement those presented in Table 6 in the current Appendix G. These results and the analysis in Section 4.2 show that the complexity of IMRCs increases linearly with the number of backward steps and the number of tasks.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal.
Comment: I thank the authors for their feedback on my comments and suggestions. I will keep my score based on their clarifications. | null | null | null | null | null | null |
Sample-efficient Multi-objective Molecular Optimization with GFlowNets | Accept (poster) | Summary: Molecule generation involves the optimization of multiple potentially competing objectives simultaneously. As evaluating these objectives can be a time-consuming and costly task, sample efficiency is paramount. This work proposes a multi-objective Bayesian optimization approach leveraging GFlowNets to tackle this problem. The authors propose a preference-based decomposition and a hyper-network-based parameterization for incorporating preferences and conditioning on those. They also employ a hindsight-experience replay strategy to utilize offline data during learning. The hyper-network-based GFlowNet is then used as an acquisition function optimizer to find a diverse set of molecules on the empirical Pareto front.
Strengths: - The paper covers an important and impactful topic of multi-objective optimization in molecule design.
- The experimental results, in principle, are strong.
- It is an original combination of existing techniques.
- Many benchmarks are compared.
- Ablation study
Weaknesses: - It is not directly evident to me how this work is different from Pareto GFlowNet that was proposed in [GFlowNet Foundations](https://arxiv.org/pdf/2111.09266.pdf)
- Some parts remain somewhat unclear to me. How is the hyper-network trained? Is it stable, does it do what it is expected to do?
- I am somewhat missing the link between the surrogate model and HN-GFN from what is described in the main paper. Why do we actually need the surrogate model? Can't the GFlowNet propose diverse candidates by itself?
- Perhaps the work would benefit from a clearer description and/or toy tasks to clear this up.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - It would be nice if the authors could provide some more information about the test problem. What is the search space of the molecules, what atom types, how are impossible molecular structures avoided, etc.? Are the building blocks created in such a way that they can always be connected to each other without violations?
- Why a batch size of 100 and 8 rounds? In what setting would a practitioner be able to perform 100 wet lab experiments simultaneously? Perhaps with a simulator, this would be possible. But wouldn't the surrogate models (at least from the BO qEHVI and qParEGO) benefit from lower batch sizes and more iterations? This could change the benchmarks significantly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for the insightful and constructive comments!
> It is not directly evident to me how this work is different from Pareto GFN proposed in GFlowNet Foundations.
While the concept of Pareto GFN was theoretically discussed in GFlowNet Foundations, we are among the first to study and instantiate this concept for MOO, and we address practical challenges (not discussed thoroughly in the original theoretical exposition) to potentially make sample-efficient molecular optimization a reality. We extensively study the impact of the conditioning mechanism (Sec5.1) and surrogate models (Sec4.4). Moreover, we propose a carefully designed hindsight-like off-policy strategy (Sec4.3) which is rarely studied for MOO (in both the RL and GFN literature).
> How is the hypernetwork trained? Is it stable, does it do what it is expected to do?
We are sorry for our unclear description. Hypernetworks generate the weights of a target network based on inputs. In our implementation, the hypernetwork takes the preference vector $λ$ as input and outputs the weights of the prediction heads on top of the MPNN encoder. In short, the parameters of the hypernetwork are optimized like normal parameters, while the parameters of the prediction heads are the output of the hypernetwork.
In our experiments, the hypernetwork is stable and does what it is expected to do. We attribute this to our design where only the weights of prediction heads are conditioned with hypernetwork while the weights of the encoder are shared among all preference vectors (Sec4.2.1). We study various conditioning mechanisms and find that the hypernetwork-based approach performs best.
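To make the mechanism concrete, here is a minimal NumPy sketch of the idea (variable names and the one-layer hypernetwork are illustrative assumptions on our part, not the authors' code): the hypernetwork's own parameters are the only trainable ones, while the prediction head's weights are its outputs, conditioned on the preference vector.

```python
import numpy as np

def hyper_head_forward(lam, h, hyper_W, hyper_b, embed_dim, out_dim):
    """Illustrative sketch (not the authors' implementation): a one-layer
    hypernetwork maps the preference vector `lam` to the flat weights of a
    linear prediction head, which is then applied to shared-encoder
    features `h`. Only `hyper_W` / `hyper_b` are free parameters; the head
    weights W, b below are *outputs* of the hypernetwork."""
    params = lam @ hyper_W + hyper_b                       # hypernetwork forward pass
    W = params[: embed_dim * out_dim].reshape(out_dim, embed_dim)
    b = params[embed_dim * out_dim:]
    return h @ W.T + b                                     # preference-conditioned head
```

Training would backpropagate through `params` into `hyper_W` / `hyper_b`, which is why the head adapts smoothly as the preference vector changes while the encoder stays shared.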
> I am somewhat missing the link between the surrogate model and HN-GFN. Why do we actually need the surrogate model? Can't GFN propose diverse candidates by itself? Perhaps the work would benefit from a clearer description and/or toy tasks to clear this up.
We apologize for our unclear description. We propose to extend GFN for MOO. We first consider a synthetic single-round scenario (Sec5.1) where the oracle is assumed to be callable as many times as necessary, without the need for a surrogate model. In this case, GFN can directly optimize the reward defined by the oracle. In practice, realistic oracles are extremely expensive to evaluate. We thus focus more on the BO scenario (Sec5.2) where the oracle is given an evaluation budget. Then we need a cheap statistical surrogate model to approximate the expensive oracle. As the surrogate model cannot exactly reproduce the oracle's full behavior, an ideal one should provide a calibrated uncertainty estimate. By incorporating uncertainty into the reward function, GFNs search not only the space where the model prediction is high (exploitation) but also the space where the uncertainty is large (exploration). This behavior is paramount for BO.
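The exploitation/exploration combination described above can be sketched as a UCB-style reward (the weighted-sum scalarization and the `beta` coefficient here are illustrative choices on our part, not necessarily the paper's exact acquisition function):

```python
import numpy as np

def acquisition_reward(mu, sigma, lam, beta=0.1):
    """Sketch: reward high surrogate predictions (exploitation) and high
    predictive uncertainty (exploration), scalarized by the preference
    vector `lam`. `mu`, `sigma`: per-objective surrogate mean and std for
    one molecule; `lam`: preference weights summing to 1."""
    ucb = np.asarray(mu) + beta * np.asarray(sigma)  # upper confidence bound per objective
    return float(np.asarray(lam) @ ucb)              # preference-weighted scalarization
```

With `beta = 0` the policy would only chase high predictions; a positive `beta` also rewards regions where the surrogate is uncertain, which is the exploration behavior needed for BO.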
> It would be nice if the authors could dedicate some more information about the test problem. What is the search space of the molecules, what atom types, how are impossible molecular structures avoided, etc? Are the building blocks created in such a way that they always can be connected to each other without violations?
We appreciate the reviewer's helpful reminder. We follow the molecule environment of [1], where the molecules are generated by sequentially attaching a fragment (from a predefined vocabulary of building blocks) to an atom of the partially constructed molecules. The maximum trajectory length is 8, with the number of actions varying between 100 and 2000 depending on the state (the larger a molecule, the more possible additions exist), making $|\mathcal{X}|$ up to $10^{16}$. The blocks are chosen via the process proposed in JT-VAE [2]. To ensure valid molecular structures, we only predict an action (which block to attach) for each stem of the graph (stems are atoms to which new blocks can be attached). We will revise our writing to detail the experimental settings.
> Why a batch size of 100 and 8 rounds? In what setting would a practitioner be able to perform 100 wet lab experiments simultaneously? Perhaps with a simulator, this would be possible. But wouldn't the surrogate models (at least from qEHVI and qParEGO) benefit from lower batch sizes and more iterations? This could change the benchmarks significantly.
While these choices are unusual for common problems in BO, they are common but challenging in molecular optimization and biological sequence design, where a large-batch, low-round setting is desired, as candidates can be synthesized and evaluated in parallel in biochemical experiments. For example, DyNA PPO [3] considers 10 rounds with a batch size of 100. The original GFlowNet paper [1] even considers a batch size of 200.
We agree that the surrogate models would benefit from lower batch sizes and more iterations. However, in practice, the time cost of proposing candidates is negligible compared to evaluation. Especially when the candidates are proposed by generative models, these novel candidate molecules may not be available for purchase, and customized synthesis may take weeks, if not more, for each round; hence we consider such an experimental setting to be more practical in chemistry-related fields. As we noted in line 322, the latent space optimization (LSO) methods (qEHVI, qParEGO, and LaMOO) only support 160 rounds with batch size 5 due to memory constraints. Nevertheless, the LSO methods still fall far short of the combinatorial optimization methods. We believe the reason is that the latent space learned by generative models (HierVAE and JT-VAE) is not sufficiently expressive and discriminative.
[1] Flow network based generative models for non-iterative diverse candidate generation
[2] Junction tree variational autoencoder for molecular graph generation
[3] Model-based reinforcement learning for biological sequence design
**We hope our response can alleviate your concerns. Please let us know if you have any additional questions.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's comprehensive response and clarifications. That clears quite some things up for me.
I will increase the score to a 6. | Summary: A GFlowNet for molecular optimization conditioned on the preference weights of multiple objectives is proposed. To be precise, the model is trained to sample molecules from a target space with probability proportional to some combination of reward functions (weighted sum or Chebyshev scalarization) and trained with varying weights of the rewards, with the GFlowNet conditioning achieved by a hypernetwork parametrization. The system is tested in both single-step and Bayesian optimization / active learning settings.
Strengths: - The writing is generally clear, problem is introduced well and placed in context.
- Good results on the tasks studied relative to the chosen baselines
- Two interesting GFlowNet contributions not specific to molecules:
- Study of conditioning algorithms (FiLM and Concat variants), with the hypernetwork-based approach found to perform best.
- The use of a conditional replay buffer, which is, as far as I know, new in GFlowNet literature (a replay buffer was used with GFlowNets in [arXiv:2202.13903, UAI 2022] but in a vanilla RL manner).
- Both are illustrated well with ablation studies.
- The code is a helpful addition that aids in understanding the details of what was done.
Weaknesses: Meta-weakness: Due to my closely following the literature, I was aware of an earlier submission of this paper to ICLR 2023. The present submission is almost identical to ICLR version (post-rebuttal) and therefore cannot incorporate the feedback of the reviewers. That said, to avoid biasing myself here, I did not read the ICLR reviews and this assessment is entirely my own.
Comparison with prior work:
- There is the paper "Multi-objective GFlowNets" [MO; arXiv:2210.12765, ICML 2023], which also studies GFlowNets for multiobjective Bayesian optimization by conditioning on scalarization weights. Can you comment on the differences in problem setting or method with that paper?
- The attached code includes the implementations of two baselines, in `code/generator/gfn.py`, but they are not included in the paper:
- MOReinforce, which is a natural candidate for an RL-based baseline that can readily share the parametrization of the agent with the GFlowNet.
- GFlowNet with trajectory balance [TB; arXiv:2201.13259, NeurIPS 2022]. Trajectory balance is the objective used in nearly all work on GFlowNets for molecular optimization after [5], but for some reason it is not evaluated or mentioned in the text. In particular, it is evaluated on molecule synthesis in both [TB] and [MO].
Small issue in presentation/math: There are some inconsistencies in definitions related to GFlowNets. Equation (1) writes $R(s)$ for the reward received when terminating at $s$, as in [5]. However, the text immediately following it talks about terminal and nonterminal states (with nonterminals having reward 0), and the text above says there is a special termination action that leads to terminal states. This is more consistent with the convention from [TB], where terminal states have no children and are the only states with nonzero reward. This is not a major bug as the two conventions are equivalent, but it would be good to be careful.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please see above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for the insightful and constructive comments!
> There is the paper "Multi-objective GFlowNets" [MO; arXiv:2210.12765, ICML 2023], which also studies GFlowNets for multiobjective Bayesian optimization by conditioning on scalarization weights. Can you comment on the differences in problem setting or method with that paper?
Our work is actually concurrent with [MO]. They propose two variants, MOGFN-PC and MOGFN-AL, for MOO in the single-round scenario and the Bayesian optimization (BO) scenario, respectively. Indeed, when HN-GFN is used as a stand-alone optimizer outside of MOBO (Section 5.1), it is similar to MOGFN-PC except for using different conditioning mechanisms. As for MOBO, MOGFN-AL is a vanilla GFlowNet whose reward function is defined as a multi-objective acquisition function (NEHVI). In our early experiments, we also tried to directly optimize this acquisition function. Unfortunately, we found this approach to be ineffective as the number of objectives increases, because the value of NEHVI becomes so tiny that the policy cannot be effectively optimized. This problem is exacerbated when the exponent of the reward is large. [MO] considers only two objectives, while we consider settings with up to four objectives, which exposes the aforementioned challenges that we address in this work. In addition to extending GFlowNet, we propose a carefully designed hindsight-like off-policy strategy which is rarely studied for MOO (in both the RL and GFlowNet literature).
> The attached code includes the implementations of two baselines, in code/generator/gfn.py, but they are not included in the paper:
> - MOReinforce, which is a natural candidate for an RL-based baseline that can readily share the parametrization of the agent with the GFlowNet.
> - GFlowNet with trajectory balance [TB; arXiv:2201.13259, NeurIPS 2022]. Trajectory balance is the objective used in nearly all work on GFlowNets for molecular optimization after [5], but for some reason, it is not evaluated or mentioned in the text. In particular, it is evaluated on molecule synthesis in both [TB] and [MO].
Thanks for the careful review of our code. 1) Indeed, MOReinforce (arXiv: 2203.15386, ICLR 2022) was included in the paper (referred to as P-MOCO). We believe this misunderstanding is caused by inconsistent naming: [P-MOCO] is termed MOReinforce in [MO], while we use the original name P-MOCO. 2) In our early experiments, we found that TB performs worse than FM, and we encountered a numerical issue with NaN in the predicted logits. We contacted the author of [TB] and described the situation, and he later confirmed that 'I did also get the error you reported when I ran the code myself.' Hence, we have chosen to use FM. In [MO], they did evaluate it on molecule synthesis. However, in their ablation experiments on the fragment-based molecular generation task (Figure 3c in the ICML version and Table 6 in 2210.12765v1), we found that only when an extremely large exponent (96) is set can MOGFN-PC outperform the baselines (Table 2 in the ICML version and Table 3 in 2210.12765v1). To some extent, we argue that setting such a large exponent is unreasonable, especially for sampling algorithms like GFlowNets, because the probability of sampling molecules with slightly lower rewards (before the exponential operation) will be very low. Put differently, the sampling distribution is concentrated close to the modes, and the diversity of the candidates is sacrificed.
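The diversity argument above can be illustrated with a toy computation (a sketch of the general GFlowNet sampling rule $\pi(x) \propto R(x)^\beta$, not code from either paper):

```python
import numpy as np

def sampling_probs(rewards, beta):
    """GFlowNets sample x with probability proportional to R(x)**beta.
    A very large exponent concentrates almost all probability mass on the
    single best mode, sacrificing diversity."""
    p = np.asarray(rewards, dtype=float) ** beta
    return p / p.sum()
```

For two modes with rewards 1.0 and 0.9, the exponent beta = 1 keeps both reachable, while beta = 96 makes the slightly weaker mode essentially unsampleable.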
> There are some inconsistencies in definitions related to GFlowNets. Equation (1) writes $R(s)$ for the reward received when terminating at $s$ as in [5]. However, the text immediately following it talks about terminal and nonterminal states (with nonterminals having reward 0), and the text above says there is a special termination action that leads to terminal states. This is more consistent with the convention from [TB], where terminal states have no children and are the only states with nonzero rewards. This is not a major bug as the two conventions are equivalent, but it would be good to be careful.
Thanks again for your patience and helpful reminder. We will carefully revise these inconsistencies in definitions to ease the understanding of GFlowNets.
**We hope our response can alleviate your concerns. Please let us know if you have any additional questions.**
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Understood about baselines. I hope that this discussion will be added to the revised version of the paper.
I am also aware of the NaN-in-logits issue with TB. I believe that past work reported the performance at the last state before NaN, but expect this would happen earlier with a higher reward exponent.
Overall, I am satisfied with the response; having read the other reviews and responses, I maintain my score. Thank you again for the clarifications! | Summary: This paper proposes the use of GFlotNets to address the problem of sample-efficient multi-objective molecular optimization, an important problem in various scientific discovery application - such as materials design and drug discovery.
The key idea proposed in this work is to leverage hypernetwork-based GFlowNets - referred to as HN-GFN - to optimize the acquisition function for multi-objective Bayesian optimization (MOBO).
The goal is to enable efficient sampling of a diverse high-quality batch of molecular candidates from an approximate Pareto front.
Strengths: The proposed hypernetwork-based GFlowNet as a MOBO acquisition function optimizer provides a novel and intuitive way of using GFlowNets for sampling novel molecular candidates, where a "unified" GFlowNet is trained that considers the distribution of different reward functions corresponding to different preference vectors instead of relying on a single GFlowNet for a fixed preference vector.
This may allow the resulting model, HN-GFN, to naturally explore the various trade-offs between multiple objectives that may compete with one another by adapting to the varying input preference vector.
The proposed method builds on the recently proposed and widely popular GFlowNets and extends its flexibility for multi-objective molecular optimization by incorporating a hypernetwork-based approach.
Additionally, the adoption of a hindsight-like off-policy strategy is proposed to improve the learning efficiency, and ultimately, the multi-objective molecular optimization performance.
Overall, the paper is well-organized and written in a clear manner, the proposed method is well-motivated and novel, and the performance evaluation results demonstrate the potential advantages of the proposed HN-GFN.
Weaknesses: Although the batch size may significantly affect the overall computational cost as well as the optimization performance, there is no discussion on the impact of selecting a specific batch size nor any empirical evaluation based on different batch sizes.
While the evaluation results provide some preliminary evidence of the potential advantages of HN-GFN, the evaluations in the current study are limited to (virtually) a single problem: i.e., inhibition of GSK3β + JNK3 (with potential additional considerations for synthesizability and drug-likeness).
Additional examples are needed to more convincingly demonstrate the general applicability (and merits) of the proposed HN-GFN.
I suggest providing further evaluation results based on other benchmark problems often used for evaluating other generative models (e.g., JT-VAE, HierVAE, etc.)
In Figure 2, only trends for JNK3 are shown, but it would be helpful to show the optimization trends for GSK3β as well for completeness.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the questions and suggestions in the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The conclusion section includes a very brief discussion of a limitation of the current method and suggests directions for future work.
The broader implication of the work is not explicitly discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for the insightful and constructive comments!
> Although the batch size may significantly affect the overall computational cost as well as the optimization performance, there is no discussion on the impact of selecting a specific batch size nor any empirical evaluation based on different batch sizes.
Thanks for pointing this out. We have conducted experiments to discuss the impact of different batch sizes. Just as expected, our HN-GFN benefits from lower batch sizes and more rounds. As GFlowNet amortizes the cost of optimization during training, the overall computational cost is proportional to the number of rounds. From the perspective of the trade-off between computational cost and optimization performance, we believe that 8 rounds with a batch size of 100 is suitable for the design and benchmarking of algorithms for molecular optimization.
| # Rounds × BS | 16 × 50 | 8 × 100 | 4 × 200 |
| :-----| ----: | ----: | ----: |
| HV | 0.442 ± 0.012 (0.026 $\uparrow$)| 0.416 ± 0.023 | 0.373 ± 0.038 (0.043 $\downarrow$)|
| Run-time | 2 $\times$ | 1 $\times$ | 0.5 $\times$ |
> While the evaluation results provide some preliminary evidence of the potential advantages of HN-GFN, the evaluations in the current study are limited to (virtually) a single problem: i.e., inhibition of GSK3β + JNK3 (with potential additional considerations for synthesizability and drug-likeness). Additional examples are needed to more convincingly demonstrate the general applicability (and merits) of the proposed HN-GFN. I suggest providing further evaluation results based on other benchmark problems often used for evaluating other generative models (e.g., JT-VAE, HierVAE, etc.)
Thanks for pointing this out. GSK3β + JNK3 (with potential additional considerations for synthesizability and drug-likeness) has been the most widely-used benchmark for multi-objective molecular optimization since it was proposed in [1]. For further diverse evaluation, after reviewing the related literature, we consider the following objective combinations: dopamine type 2 receptor (DRD2) + QED + SA [2] and soluble epoxide hydrolase (sEH) + QED + SA [3]. Our HN-GFN still achieves significantly superior performance to P-MOCO (the best baseline for GSK3β + JNK3).
| | DRD2 + QED + SA | | sEH + QED + SA | |
| :-----| ----: | ----: | ----: | ----: |
| | HV | Div | HV | Div |
| P-MOCO | 0.710 ± 0.015 | 0.209 ± 0.056 | 0.649 ± 0.020 | 0.445 ± 0.005 |
| HN-GFN | **0.853 ± 0.001** | **0.735 ± 0.045** | **0.694 ± 0.010** | **0.828 ± 0.002** |
> In Figure 2, only trends for JNK3 are shown, but it would be helpful to show the optimization trends for GNK3β as well for completeness.
We appreciate the reviewer's helpful reminder. We only show trends for JNK3 due to the space limitation. We will include the trends for GSK3β as well.
[1] Multi-objective molecule generation using interpretable substructures.
[2] Multi-constraint molecular generation based on conditional transformer, knowledge distillation and reinforcement learning.
[3] Flow network based generative models for non-iterative diverse candidate generation.
**We hope our response can alleviate your concerns. Please let us know if you have any additional questions.**
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the clarification and the additional results.
While the rebuttal doesn't significantly change my overall evaluation (which was already on the positive side), I have updated the score on the presentation.
Thanks again. | Summary: This work addresses molecular design with GFLOWNET, an algorithm learning a sampling policy $\pi$ proportional to the reward function, i.e. where $\pi(x) \propto R(x)$. In particular, this work tackles the essential yet under-adressed multiobjective optimization setting. They do so using a hypernetwork (conditioned on a preference vector) providing weights for the sampler.
Strengths: In my opinion the presentation is clear and the motivation behind this work is nicely presented. In addition, the paper is well written and easy to follow. As a side note, its recap of GFlowNets is interesting and a worthwhile reminder that eases the understanding of the paper.
The approach is both original and sound, boosting its potential use in practical applications.
Finally, the presented results are conclusive and seem to validate the intuition behind the method.
Weaknesses: The main weakness of the paper is, in my opinion, the lack of diverse experiments and of explanation about them. See questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1 - A part of the protocol is unclear to me: are the generated molecules evaluated on the objective using an actual oracle or using the learned surrogate function?
2 - How are the objectives computed? Is it “physics” based or does it rely on learning a surrogate model? Then, how are the generated molecules evaluated?
3 - What is the length of the generated molecules? Is it a hyperparameter of the model?
4 - How sensitive are the results to the original dataset $D_0$? In my personal experience, GFlowNets tend to be rather unstable or difficult to train due to the sparse feedback (and depending on the sequence size), and 200 initial molecules seem low to be able to learn a robust policy.
5 - Does the proposed hindsight-like off-policy strategy amount to the proposition of [1], i.e. to hybridize learning between generated molecules vs. already generated molecules?
6 - What are the variables predicted by the MPNN?
7 - Can the authors discuss their implementation choice regarding the GFlowNet? For instance, [1] uses an MLP as the sampler.
[1]: Jain et al, Biological sequence designs with Gflownets.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: A limitation of the paper is the lack of diversity in the experimental settings. For instance, the paper GFlowNets for biological sequence design presents optimization results for several benchmark datasets such as GFP. Can the authors comment on the ability of their method to adapt to such an ‘off-line’ setting (which is essential in biological sequence design)?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer for the insightful and constructive comments!
> How are the objectives computed? Is it “physics” based or does it rely on learning a surrogate model? Then, how are the generated molecules evaluated? Are the generated molecules evaluated on the objective using an actual oracle or learned surrogate function?
For molecular optimization, the true objective values should be obtained by conducting expensive wet-lab experiments. However, real experimental evaluation makes the benchmarking of algorithms difficult and time-consuming. While physics-based simulation sounds rigorous, it consumes too many computing resources and may not be accurate enough to justify its extensive usage under many circumstances. Therefore, we follow the evaluation scheme adopted by prior related works (e.g., [1]) and use prediction models trained on an external dataset (not used for training HN-GFN) as our oracle functions.
At each round of BO, during the training of HN-GFN, the generated molecules (trajectories) are evaluated using the learned surrogate function. Then the trained HN-GFN is used to propose a diverse batch of candidates, which are evaluated using the oracle.
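The round structure described above can be summarized in a short sketch (the callables are placeholders standing in for the components named in the text, not the authors' API):

```python
def mobo_loop(oracle, fit_surrogate, train_hn_gfn, propose_batch,
              initial_data, num_rounds=8, batch_size=100):
    """Sketch of the outer Bayesian-optimization loop: each round fits a
    surrogate on the data observed so far, trains the preference-conditioned
    policy against the surrogate-based reward, proposes a diverse batch,
    and spends oracle budget only on that batch."""
    data = list(initial_data)
    for _ in range(num_rounds):
        surrogate = fit_surrogate(data)               # cheap stand-in for the oracle
        policy = train_hn_gfn(surrogate)              # HN-GFN trained on surrogate reward
        batch = propose_batch(policy, batch_size)     # diverse candidate molecules
        data += [(x, oracle(x)) for x in batch]       # expensive oracle evaluations
    return data
```

Note that the oracle is called only `num_rounds * batch_size` times in total; all other evaluations during policy training go through the surrogate.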
> What is the length of the generated molecules? Is it a hyperparameter of the model?
We are sorry for our unclear description. The length is not a hyperparameter; we allow the agent to learn a special stop action and generate up to 8 fragments.
> How sensitive are the results to the original dataset? In my personal experience, GFlowNets tend to be rather unstable or difficult to train due to the sparse feedback (and depends on the sequence size) and 200 initial molecules seem low to be able to learn a robust policy.
We argue that the robustness of the policy will not be affected by the number of initial molecules for the following two reasons. 1) The 200 initial molecules are mainly used for training the surrogate models which determine the reward. 2) The training of GFlowNets primarily relies on trajectories obtained through online sampling. Moreover, in BO, the initial molecules are commonly randomly sampled, and their evaluation is also included in the budget. Hence, the number of initial molecules is kept relatively low, unlike offline model-based optimization, where performance is greatly influenced by the quality of the initial molecules. For a fair comparison, we use the same fixed set of random initial molecules. To confirm our point, we conducted experiments with different sets of random initial molecules and found that the results are not sensitive to this choice.
| | |GSK3β + JNK3 + QED + SA | |
|:-|:-|-:|-:|
| | Starting HV | Final HV | Improvement |
| HN-GFN (fixed) | 0.101 | 0.416 ± 0.023 | 0.315 |
| HN-GFN (random) | 0.082 ± 0.018 | 0.402 ± 0.018 | 0.320 |
> Does the proposed Hindsight-like off-policy strategy amounts to the proposition of [1], i.e. to hybrid the learning between generated molecules vs already generated molecules?
Indeed, [1] only incorporates the trajectories from the available observed dataset, rather than those generated during training. Inspired by the empirical observation that GFlowNets benefit from offline trajectories, we adopt the concept of a replay buffer, widely used in RL, to store and share high-performing molecules among policies by re-examining them with different preferences. Our strategy is specifically designed for MOO and has not been actively explored (in either the RL or GFlowNet literature) prior to our work. Furthermore, the proposition of [1] is a special case of our strategy (when $\gamma=0$, the third paragraph in Sec 4.3). In Figure 2 (right), our off-policy strategy ($\gamma>0$) consistently outperforms that of [1] ($\gamma=0$).
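One plausible reading of this hindsight-like mixing can be sketched as follows (this is our illustrative interpretation of the $\gamma$ mixing, not the authors' implementation; the `score` callable and uniform fallback are assumptions):

```python
import random

def hindsight_batch(buffer, lam, gamma, k, score, rng=random):
    """Sketch: a fraction gamma of the offline batch is taken from the
    molecules that score best when *re-evaluated under the current
    preference* lam; the remainder is sampled uniformly from the buffer.
    gamma = 0 falls back to plain replay of observed molecules."""
    rescored = sorted(buffer, key=lambda mol: score(mol, lam), reverse=True)
    n_top = int(gamma * k)
    return rescored[:n_top] + rng.sample(buffer, k - n_top)
```

The key point is the re-scoring step: a molecule that was mediocre under the preference it was generated with may be a high-reward example for a different preference vector, so the buffer is shared across preferences.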
> What are the variables predicted by the MPNN?
There are two MPNNs in our algorithm. 1) The MPNN serving as the policy network takes as input the partially constructed molecular graphs and predicts edge flow $F(s,s')$ and state flow $F(s)$. Specifically, for each stem of the graph (an atom where the policy can attach a new block), MPNN predicts $F(s,s')$ representing the unnormalized probability of attaching each block to this stem. For the stop action, we perform a global pooling and predict $F(s)$. 2) The MPNN serving as the surrogate model takes as input the complete molecular graphs and predicts multiple objectives simultaneously.
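As a toy illustration of the first MPNN's two outputs (the array names, shapes, and the exponential link below are our own placeholders, not the paper's architecture):

```python
import numpy as np

def policy_flows(stem_embeddings, graph_embedding, W_edge, w_stop):
    """Sketch of the policy head: per-stem edge flows F(s, s') over candidate
    blocks, plus a scalar state flow F(s) used for the stop action."""
    # Edge flows: one positive score per (stem, block) pair.
    edge_flows = np.exp(stem_embeddings @ W_edge)      # shape (n_stems, n_blocks)
    # Stop action: a scalar flow from the globally pooled graph embedding.
    stop_flow = float(np.exp(graph_embedding @ w_stop))
    # Forward policy: normalize all flows leaving the state into probabilities.
    flows = np.append(edge_flows.ravel(), stop_flow)
    return edge_flows, stop_flow, flows / flows.sum()
```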
> Can the authors discuss the choice regarding the flow network? For instance [1] use a MLP as a sampler.
Following [2,3], we used an MPNN for a fair comparison. In general, MPNNs represent molecules well, and other GNNs fit our framework as well. Likewise, [1] uses an MLP for a fair comparison. In our experience, 1D CNNs and Transformers are better suited for sequence design.
> A limitation of the paper is the lack of diversity in the experimental settings. [1] presents results for several benchmark datasets such as GFP. Can the authors comment on the ability of their method to adapt to such an offline setting (which is essential in biological sequence design)?
Thanks for pointing this out. To the best of our knowledge, offline multi-objective benchmarks are rare. After reviewing the related literature, we chose RFP from LaMBO [4] (optimizing red-spectrum fluorescent protein sequences to maximize stability and SASA) and adapted the online setting to an offline setting. Following [1], we generate 128 candidates starting from the same $|\mathcal{D}| = 512$ examples as [4]. Our HN-GFN significantly outperforms LaMBO.
| Method | Relative HV |
| :- | -: |
|LaMBO|1.086 ± 0.010|
|HN-GFN|**1.268 ± 0.089**|
[1] Biological sequence design with GFlowNets.
[2] Flow network based generative models for non-iterative diverse candidate generation.
[3] MARS: Markov molecular sampling for multi-objective drug discovery.
[4] Accelerating Bayesian optimization for biological sequence design with denoising autoencoders.
**We hope our response can alleviate your concerns. Please let us know if you have any additional questions.**
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for their valuable response and additional technical details that significantly improve my understanding of their work. I believe the discussion regarding the choice of the datasets could be included in the main paper.
I raise my score to 5. | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We greatly appreciate the reviewers' time, valuable comments, and constructive suggestions. Overall, the reviewers describe our paper as "well-organized" (9YCN, BkDd, hutP) and as studying "an important problem" (9YCN, BkDd, Dc63); they acknowledge the novelty and soundness of our methodology (9YCN, BkDd, hutP, Dc63), our strong experimental results (hutP, Dc63) and ablation studies (hutP, Dc63), and the potential use and advantages of the proposed method (9YCN, BkDd).
In the author response period, we made diligent efforts to address reviewers' concerns and provided additional experimental results to further verify our contributions. The summary of our main efforts is presented as follows:
- We have provided a detailed explanation for the protocol, experimental details, and implementation details. (To 9YCN, BkDd, hutP, and Dc63)
- We have further conducted diverse experiments (offline biological sequence design and different molecular optimization benchmarks) to demonstrate the general applicability of the proposed method. (To 9YCN and BkDd)
- We have further conducted experiments analyzing the impact of the initial dataset to demonstrate that the results are not sensitive to it. (To 9YCN)
- We have further discussed and conducted experiments analyzing the impact of batch size to demonstrate that our setting (a batch size of 100 and 8 rounds) is reasonable and suitable for the design and benchmarking of molecular optimization algorithms. (To BkDd and Dc63)
In our individual responses, we provide detailed answers to all the specific questions raised by the reviewers. Further discussions are welcomed to facilitate the reviewing process toward a comprehensive evaluation of our work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Energy-Based Sliced Wasserstein Distance | Accept (poster) | Summary: This paper introduces a new slicing distribution for the Sliced-Wasserstein distance, based on the energy-based model framework. The authors study its theoretical properties and conduct numerical experiments on EBSW.
Strengths: - The study of the idea is organized and clear.
- Theoretical quantities of interest are derived.
- Various sampling methods are provided.
Weaknesses: - The method is not compared with the vanilla Distributional Sliced-Wasserstein method, which should be the baseline to beat. The fact that only the vMF-DSW method is benchmarked is a weakness, as the learned distribution is unimodal, which prevents it from being very expressive.
- I think it would be good to have a theorem studying the following sample complexity, that is more of interest for practical reasons:
$ \mathbb{E}[EBSW_p(\mu_n,\nu_n;f) - EBSW_p(\mu,\nu;f)]$
- Experiments are a bit light, I think a generative modeling experiment would be nice to have as this is one of the main applications of the SW distances.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors explain why experimental results are not compared to the DSW distance?
- It appears to me that EBSW falls into the family of adaptive Sliced-Wasserstein distances ( https://arxiv.org/pdf/2206.03230.pdf , which doesn't seem to be cited in the related work section). Can the authors comment on that and whether this framework can help studying this distance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None. I am willing to discuss with the authors and accordingly modify my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback and comments and want to express our thanks. Our responses are as follows. We remain open to additional questions and further discussion.
**Q19**: The method is not compared with the vanilla Distributional Sliced-Wasserstein method which should be the baseline to beat. The fact that only the vMF-DSW method is benchmarked is a weakness, as the learned distribution is unimodal, which prevents the model to be too expressive.
**A19**: We would like to refer the reviewer to the global rebuttal for additional experiments comparing our method with vanilla DSW. In the submission, the primary reason we chose the von Mises-Fisher (vMF) distribution as the family of slicing distributions is its simplicity and parameter efficiency. The vMF distribution has only two parameters: the location parameter (a d-dimensional vector) and the concentration parameter (a scalar). Therefore, performing stochastic optimization for vMF is expected to be more stable. In contrast, using an overparameterized family of distributions, such as the push-forward distribution with neural networks in vanilla DSW, can be computationally expensive. For instance, using an MLP neural network could increase the computational complexity to be proportional to $d^2$ due to matrix multiplications, as opposed to the original scaling of $d$ in SW and EBSW. Additionally, vanilla DSW requires a constraint to control the concentration of the push-forward distribution, making the optimization less interpretable and potentially more challenging. Moreover, solving vanilla DSW necessitates an admissible regularizing constant and involves using duality, which could further complicate the optimization problem.
It's important to note that we are not challenging DSW in its population form but rather in its computational form. Utilizing a powerful family of slicing distributions with a high number of parameters would incur increased computational time, memory usage, and hyperparameter tuning. In our paper, we present a budget-constrained comparison, specifically the scaling constraint of $n \log n$. Consequently, we believe that using vanilla DSW may not offer significant improvements over vDSW. We would like to emphasize the main advantage of EBSW, which lies in its simplicity of use (choosing an energy function is easier than selecting a family of distributions) and computational efficiency (parameter-free and optimization-free).
**Q20**: On two-sided sample complexity
**A20**: Thank you for your insightful question. Proposition 1 can be referred to as the one-sided sample complexity, which is useful in certain settings, e.g., quantization of a distribution (clustering). The sample complexity you mentioned is called the two-sided sample complexity. Normally, we can directly derive the two-sided sample complexity from the one-sided sample complexity by using the triangle inequality. Unfortunately, the triangle inequality of EBSW has not been proven in our paper, due to the expressiveness of the energy-based slicing distribution. Specifically, while we can prove the triangle inequality using the triangle inequality of Wasserstein distance and the Minkowski inequality for a fixed slicing distribution, it is not trivial to establish this inequality for the adaptive slicing distribution of EBSW. We believe that proving the triangle inequality for EBSW presents a challenging theoretical problem, and we leave this open problem for future work.
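To make the missing step explicit: for any (pseudo-)metric $D$ satisfying the triangle inequality, two applications of that inequality give

```latex
\left| D(\mu_n,\nu_n) - D(\mu,\nu) \right|
\;\le\; D(\mu_n,\mu) + D(\nu_n,\nu),
```

and taking expectations bounds the two-sided sample complexity by two one-sided terms. It is precisely this first inequality that is currently unavailable for EBSW.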
**Q21**: Experiments are a bit light, I think a generative modeling experiment would be nice to have as this is one of the main applications of the SW distances.
**A21**: We would like to mention that it takes about one day to complete the training of a deep point-cloud autoencoder on a Tesla V100 GPU. Therefore, we believe our experiments are relatively intensive compared to the current literature on proposing new SW variants [1] [2]. Nevertheless, we agree that generative modeling is one of the main applications of SW distances. For detailed additional experiments on deep generative modeling, we refer the reviewer to the global rebuttal.
[1] Generalized sliced Wasserstein distances, Kolouri et al
[2] Spherical Sliced Wasserstein, Bonet et al
**Q22**: Can the authors explain why experimental results are not compared to the DSW distance?
**A22**: We do use vDSW in the paper as an instance of DSW, which is more interpretable and simpler. As discussed in **Q19**, vanilla DSW requires more parameters and computation than vDSW due to its use of neural networks for the push-forward slicing distribution. However, we have added a comparison with DSW in the rebuttal PDF, which shows the favorable performance of EBSW. We refer the reviewer to the global rebuttal for detailed experimental results.
**Q23**: It appears to me that EBSW falls into the family of adaptive Sliced-Wasserstein distances ( https://arxiv.org/pdf/2206.03230.pdf , which doesn't seem to be cited in the related work section). Can the authors comment on that and whether this framework can help studying this distance?
**A23**: Thank you for providing the reference to the new ICML 2023 paper. We will include this paper in our references and discussions. In that paper, the authors define adaptive sliced Wasserstein distances (ASW) based on an optimization problem, similar to DSW. In contrast, EBSW is an optimization-free variant of SW. EBSW employs an adaptive slicing distribution that is explicitly constructed, unlike the implicit constructions in ASW and DSW. It would be interesting to explore whether we can formulate EBSW as the optimal solution of ASW and DSW under certain choices of the family of slicing distributions and regularity conditions. With such a connection, we could leverage the theoretical results from the mentioned paper to study EBSW in greater detail. Overall, we believe this direction holds great promise, and we leave this investigation for future research.
---
Rebuttal 2:
Title: Thank You
Comment: Dear Reviewer krQV,
Thanks for your reviews of our paper. Since the discussion period between the authors and the reviewers was already over and we have not heard from you during this period, we would be grateful if the reviewer could let us know if all your questions are addressed to some extent. If you are satisfied with our answers, we hope that the reviewer will consider adjusting your score.
Best,
The Authors
---
Rebuttal Comment 2.1:
Title: Answer to rebuttal
Comment: Dear authors, I would like to apologize for the late answer and to thank you for the rebuttal.
I am happy with the comparison with DSW that I hope will be added to the paper.
I have the feeling that a little bit more effort should have been put into proving the triangle inequality for EBSW and the two-sided sample complexity even though I don't know how hard it is to prove.
I am, however, raising my score from 5 to 6, as I think the authors did a good rebuttal. | Summary: This paper proposes an extension to Sliced Wasserstein Distance (SW), an approach for measuring distances between distributions by computing the average of the energy of the 1-d Wasserstein distances between 1-d projections. The authors argue that moving towards non-uniform ways of sampling the projections is key, and subsequently show that the current benchmark approach that does so (Distributional Sliced Wasserstein) adds another optimization over a parametric distribution family, which may not yield stable results and is more computationally expensive. Subsequently, they propose a simple non-parametric extension to SW called Energy based SW (EBSW) that achieves the same objective of sampling non-uniformly, i.e. sampling projections with larger Wasserstein distance with a greater probability, by considering energy functions $f$ which are monotonically increasing w.r.t. the sliced 1-d Wasserstein distance itself. Theoretical properties are highlighted, including the convergence properties of their proposed metric. Next, the authors propose a range of Monte Carlo estimation methods to approximate EBSW, including deriving the necessary gradient expressions to be used in some of their subsequent applications. Lastly, over multiple experiments ranging from point-cloud gradient flows to deep point-cloud reconstruction, the authors demonstrate that EBSW is faster (Importance Sampling approach), and yields distributions closer to the ground truth in most cases than other SW-based benchmarks.
Strengths: I really liked reading the paper and going through the various findings in it. The following are what I consider to be some of the main strengths of this work. (i) The paper is really well-written. All sections were intuitive to read and easy to understand for me, and this also applied to the Appendices, including the proofs of the Theorems and Propositions. (ii) The contributions of the paper are very clear and supported by the theoretical and empirical results. All proposed benefits of the approach have been verified by the authors in theory and practice. (iii) The experiments are well motivated and relatively exhaustive. The proposed approach is tested in a wide range of problems and the results convincingly showcase the benefits of EBSW. (iv) The background section is well organized and is an effective introduction to SW-based metrics in general.
Weaknesses: The paper is overall interesting and well-written. However, there are a few points noted below that can improve the work further.
1. I feel that the theoretical results can be given a bit more perspective w.r.t. the other SW-based metrics. Overall, I felt the theoretical results in Theorems 1, 2 and Propositions 1 and 2 are relatively intuitive to show. Some more specific insights on EBSW could be interesting (perhaps with constrained energy functions, i.e. Lipschitz constrained $f$).
2. I understand the overall direction for moving towards measures that look disproportionately at projections which have larger distances (lines 37-38). However, I feel that a few more words to elaborate on intuitively why one must move towards non-uniform distributions in the Sliced Wasserstein setting could be useful.
3. One of the main questions I have is the role of $p$, which is fixed for all experiments in this work. I feel that the authors can elaborate a bit on whether trying larger values of $p$ can achieve a similar goal to what EBSW-e achieves (More details in questions).
4. The result tables are convincing and informative w.r.t. the tested methods. However, in some of the visual depictions, I found it hard to see any significant differences between EBSW and some other variants (even in the Appendices). Mainly, the color transfer experiments were a bit hard to discern for me in terms of relative performance
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition to the list of weaknesses above, below are some questions and general suggestions for the authors to address:
1. I noticed that the authors fix the value of $p$ to 2 for all experiments. I wonder if increasing $p$ can achieve a similar effect to using a monotonically increasing energy function (such as the exponential distribution in this work). Because intuitively, it seems to me that increasing $p$ eventually also gradually assigns more importance to the projections with values near to the maximum. I’m mainly asking this because increasing $p$ still falls under SW-p, and thus could perhaps have similar sample complexity and computational load. Could you please provide a theoretical explanation on how changing $p$ fundamentally differs from EBSW with an increasing energy function?
2. For Figure 2, the authors state that the gradient flows for EBSW are smoother than other approaches. Perhaps describing some visual cues to elaborate on that point can make it easier to follow.
3. Although four different ways of estimating the EBSW variants are proposed, I only see IS-EBSW being reported in the main paper. Furthermore, having looked at the results in the appendix, it seems that in almost all cases the variants have roughly similar performance, and IS-EBSW seems to lead to better results in most cases. It also seems that the time complexity of all algorithms is similar; also, from Table 3 in C.1, it seems that in most cases IS-EBSW is faster as well. Thus, my question is: in discussing these other approaches, are there benefits to the other proposed MCMC variants?
4. Are there any cases where the choice of an exponential energy function $f$ doesn't work? It would be intuitive to me that, considering a more general family of exponentials in $e^{(\lambda x)}$, the optimal "width" of the slicing distribution as set by $\lambda$ may be different for different cases. For instance, can there be any cases where the underlying distribution enforces a much larger or smaller $\lambda$ than one? Or is $\lambda=1$ in some sense universal?
5. As a follow-up to the previous question, I would assume that for the optimization results in Tables 1 and 2, as the generated distribution edges closer to the ground truth distribution, would it be profitable to increase $\lambda$ higher than one? As then the distances would all get very small, and potentially max-SW would start to look more appealing.
6. The theoretical results are interesting and important, however, could the authors put into perspective how the theoretical properties of EBSW compare to other SW based measures? It seems to me that both Theorems 1 and 2 may hold for SW based measures (at least SW, not sure about max-SW and DSW). Similarly, for Proposition 2 it seems from the proof that it would hold for both SW and max-SW (not sure about DSW). Additional theoretical results in this direction, or at the least a rough intuitive discussion on how these results would compare for the other SW counterparts, would be useful. This would put more into perspective how EBSW's theoretical properties compare to the other SW benchmarks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations have been discussed, but I feel there is scope to discuss more limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the constructive feedback. Our responses are outlined below. We are eager to participate in further discussions.
**Q10**: On Lipschitz constrained $f$
**A10**: Regarding the Lipschitz energy function, the polynomial function is Lipschitz and the exponential function is locally Lipschitz. Lipschitzness could lead to a potential notion of robustness for the distance. Specifically, when two measures are contaminated with additive noise, a significant change in the Wasserstein value between two close projecting directions could be due to the noise. Therefore, using a Lipschitz energy function could penalize the weight given to such projecting directions, which is beneficial.
From a Lipschitz energy function, we can show that the energy-based slicing distribution has a Lipschitz probability density function (pdf) since the SW distance is Lipschitz with respect to the projecting direction [1]. However, an exact computation of the Lipschitz constant is not trivial in the case of the SW distance.
[1] Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances, Goldfeld et al.
**Q11**: Why moving towards non-uniform distributions in the SW setting is useful.
**A11**: It benefits applications that involve a sequence of measures converging to a target measure, e.g., gradient flow and generative modeling. When using SW as the criterion to drive the sequence, the current measure is driven toward the target measure from all directions equally. This is undesirable since two measures can be close in some directions yet significantly different in others. Hence, the uniform distribution leads to sequences that converge more slowly.
**Q12**: Whether trying larger values of $p$ can achieve a goal similar to EBSW-e, and how changing $p$ fundamentally differs from EBSW.
**A12**: As shown in Proposition 1, EBSW is an upper bound of SW for any $p$. Adjusting $p$ amounts to changing the notion of ground metric on the support sets of the two measures. However, changing $p$ in SW does not affect the slicing distribution, which remains uniform. Therefore, larger values of $p$ cannot achieve a goal similar to EBSW, since EBSW has a non-uniform slicing distribution. To verify this intuition further, we rerun the gradient flow experiments for SW with $p=3$ and $p=10$ in the global rebuttal.
In contrast, changing $p$ does affect EBSW, since the energy distribution is proportional to the composition of $f$ and the $L_p$ norm. Due to the flexibility in choosing $f$, the interaction between $f$ and the norm is expressive and worth careful future investigation.
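To illustrate the difference numerically, here is a sketch (ours, not the paper's exact estimator) of plain SW with uniform slices next to an importance-sampled EBSW with the exponential energy, where the importance weights under a uniform proposal reduce to a softmax over the slice distances; for equal-size point clouds, the one-dimensional Wasserstein distance is computed by matching sorted projections:

```python
import numpy as np

def sw_and_ebsw_e(X, Y, n_proj=128, p=2, seed=0):
    """Monte Carlo sketch: uniform-slice SW_p vs. softmax-weighted EBSW-e."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # uniform on sphere
    # 1D W_p^p per slice: match sorted projections (equal-size clouds).
    Wp = np.mean(np.abs(np.sort(X @ thetas.T, axis=0)
                        - np.sort(Y @ thetas.T, axis=0)) ** p, axis=0)
    sw = np.mean(Wp) ** (1 / p)                  # uniform slicing distribution
    w = np.exp(Wp - Wp.max())                    # exponential energy, stabilized
    w /= w.sum()                                 # softmax importance weights
    ebsw = np.sum(w * Wp) ** (1 / p)             # upweights large-distance slices
    return sw, ebsw
```

Since the softmax weights increase with the slice distance, the weighted average is never below the uniform one, in line with the upper-bound relation of Proposition 1.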
**Q13**: On the visual comparison in color transfer.
**A13**: In our experience, one way to see the difference is to look at the levels of yellow and orange. In more detail, the target image contains more yellow than orange, while the reverse holds in the source image. From that perspective, the image from EBSW has more yellow and less orange than the images from the other distances.
**Q14**: On visual cues to elaborate on the performance of EBSW in gradient flow.
**A14**: From the figure, we observe that the legs of the table (the source point-cloud) are bent in SW, Max-SW, and vDSW in the first step and do not look like a real table. In contrast, EBSW creates a more realistic table with a sharp connection between the legs and the tabletop, and the legs are not bent. In later steps, we see that the point-clouds from EBSW have fewer stray points on the surfaces, e.g., on the cover of the light bulbs.
**Q15**: Benefits of the proposed MCMC variants?
**A15**: As discussed in the global rebuttal, the MCMC variants are slow since they are not parallelizable. However, they can benefit the approximation when the slicing density is very peaky. In that case, importance sampling might be inaccurate, since the proposal distribution cannot yield good samples from the concentrated mass of the slicing density. Moreover, importance sampling can be unstable due to huge values of the density ratio between the slicing density and the proposal density. MCMC can overcome these issues with a good transition distribution and achieve a better approximation.
**Q16**: On the energy function $e^{\lambda x}$... Would it be profitable to increase $\lambda>1$?
**A16**: With the exponential energy function, importance sampling reverts to using the Softmax function. Your suggestion leads to the annealed exponential function, which yields the annealed Softmax function and better control of the energy density, at the cost of one additional hyperparameter. A potentially useful choice is $\lambda = 1/p$, which prevents changes of $p$ from affecting the slicing distribution, as discussed in **Q12**.
Increasing $\lambda$ leads to a peakier slicing density and makes the density closer to the Dirac delta at the max slice. However, a peaky density is hard and unstable to approximate, as discussed in **Q15**, and such peaky densities are undesirable since they are strongly affected by noise, as discussed in **Q10**.
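The role of $\lambda$ can be seen directly in the importance weights; a small sketch in our own notation:

```python
import numpy as np

def annealed_softmax(slice_distances, lam):
    """Weights proportional to exp(lam * x), computed stably.

    lam -> 0 recovers uniform weights (plain SW averaging), while a large
    lam concentrates almost all mass on the max slice (Max-SW-like).
    """
    z = lam * (slice_distances - slice_distances.max())  # shift for stability
    w = np.exp(z)
    return w / w.sum()
```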
**Q17**: How do EBSW's theoretical properties compare to the other SW variants?
**A17**: Theorem 2 and Proposition 2 mean that EBSW retains the nice properties of SW and Max-SW, i.e., inducing weak convergence and suffering no curse of dimensionality. However, in Theorem 1, the triangle inequality of EBSW has not been proved yet, due to the adaptive slicing distribution. Given a fixed slicing distribution, the triangle inequality can be proved using the metricity of the Wasserstein distance and the Minkowski inequality. In contrast, it is not trivial to obtain the triangle inequality with the energy-based slicing distribution of EBSW.
**Q18**: On Limitation
**A18**: We would like to refer the reviewer to the global rebuttal for a discussion of the limitations of our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed replies to my questions and comments. I am happy to keep my rating. Thanks again!
---
Reply to Comment 1.1.1:
Title: Response to Reviewer
Comment: We want to thank the reviewer for keeping the score at 6. We will include our discussion in the revision of the paper.
Best regards, | Summary: This paper proposes an energy-based slicing distribution for mapping the original distributions into one-dimensional spaces in which the Wasserstein distance is computed.
Strengths: This paper proposes an energy-based slicing distribution for mapping the original distributions into one-dimensional spaces in which the Wasserstein distance is computed. The proposed method shows great advantages over other slicing distributions, such as those of Sliced Wasserstein, Distributional Sliced Wasserstein, and Max Sliced Wasserstein. The authors also explore different sampling methods to approximate the value of the EBSW distance.
Weaknesses: 1. An introduction to optimal transport should be included in the Background section.
2. Putting a title on each row of Figure 2 could help readers compare the different SW methods more easily.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Definition 1, does $\theta$ represent parameters in the slicing distribution or just the transport function? If $\theta$ are parameters, why can the model be called parameter-free?
2. In general, the energy function should be defined as $f:(-\infty, \infty)^d \rightarrow [0, \infty]$. Why do the authors define it as in line 144?
3. Could the proposed distance measurement be used in generative models, like GANs and EBMs?
4. Is there any comparison with other probability distance measurement methods, like KL divergence, in the experiments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: This paper has proposed an energy-based slicing distribution to compute the Wasserstein distance between two distributions. My questions mainly concern the application of the energy function and the comparison with methods based on other probability distance measures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We begin by expressing our gratitude for the insightful feedback and comments provided in the review. In response, we offer the following explanations and answers to your inquiries. We welcome any further questions and are open to engaging in a more in-depth discussion on the matter.
**Q5**: An introduction about optimal transport should be included in the Background section.
**A5**: Thank you for your suggestion. Due to the space constraint of the main paper, we skip the definition of optimal transport and Wasserstein distance. From your feedback, we realize that it affects the readability of the paper. Therefore, we will add those backgrounds to the paper in the revision.
**Q6**: Put a title for each row in Figure 2 could help reader easier to compare different SW methods.
**A6**: Thank you for your suggestions. We will add the legend to the figure for better readability.
**Q7**: In definition 1, does $\theta$ represent parameters in the slicing distribution or just the transport function? If $\theta$ are parameters, why could the model be called as parameter-free?
**A7**: Thank you for your insightful question. In Definition 1, $\theta$ is the realization of the distribution, not a parameter. For a more detailed explanation: in DSW, $\theta \sim vMF(\epsilon,\kappa)$, where $\epsilon$ and $\kappa$ are parameters, while EBSW has no such parameters. Therefore, EBSW is parameter-free. To elaborate further, EBSW is defined through the energy function, which is non-parametric and easier and more flexible to choose than a family of distributions as in DSW. In addition, in their computational forms, Max-SW and DSW require additional hyperparameters for optimization, such as the learning rate, the number of gradient steps, the optimizer, and so on, while EBSW has only one hyperparameter, the number of projections, like the vanilla SW. Overall, this is why we refer to EBSW as parameter-free and optimization-free. We will include this explanation in the revision.
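In symbols (our notation, following the description above): the vMF slicing density on $\mathbb{S}^{d-1}$ carries learnable parameters $(\epsilon, \kappa)$, whereas the energy-based density is determined entirely by the two measures and the fixed energy function $f$:

```latex
\pi_{\mathrm{vMF}}(\theta; \epsilon, \kappa) \propto \exp\!\big(\kappa\, \epsilon^{\top} \theta\big),
\qquad
\pi_{f}(\theta; \mu, \nu) \propto f\!\big(W_p^p(\theta \sharp \mu,\, \theta \sharp \nu)\big).
```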
**Q8**: In general, the energy function should be defined as $f :(−\infty,\infty)^d\to [0,\infty]$. Why the author defines it as in line 144?
**A8**: Thank you for your question. We would like to clarify it as follows. The energy function takes as input the $p$-power Wasserstein distance between two projected one-dimensional measures, which is a non-negative scalar; hence the domain of the energy function is $[0, \infty)$. Based on the energy function, we can define the energy-based slicing distribution, which is a distribution on the unit hypersphere with a probability density function (pdf) $f: \mathbb{S}^{d-1} \to (0, \infty)$, as described in Definition 1. The reason we avoid having the point $0$ in the image of the energy function is to prevent the slicing distribution from being undefined when the two measures coincide. This ensures that the distribution is well defined and can be used safely in practical applications. We will add a more detailed explanation in the revision.
**Q8**: Could the proposed distance measurement be used in generative models, like GANs and EBMs?
**A8**: The answer is yes. The reason we focus on point-cloud applications is that point clouds have disjoint supports, which other divergences such as the KL divergence, Jensen-Shannon divergence, and f-divergences cannot handle. EBSW can absolutely be used in generative modeling. We would like to refer the reviewer to the global rebuttal for a detailed additional result on deep generative modeling. Overall, IS-EBSW-e has shown favorable performance compared to SW and DSW in generative modeling. However, we suggest treating this additional result only as evidence that EBSW is versatile, since we had only limited time in the rebuttal period.
**Q9**: Is there any comparison with other probability distance measurement methods, like KL divergence, in the experiments?
**A9**: As stated in **Q8**, the KL divergence (a member of the f-divergence family) is not suitable for handling measures with disjoint supports. The reason is that the KL divergence relies on the density ratio, which becomes undefined when the denominator equals 0. Consequently, the KL divergence cannot be applied directly to compare two point clouds. If one wishes to use the KL divergence between two point clouds, it becomes necessary to convert them into voxels (3D bins). Nevertheless, this transformation is not differentiable, rendering it unsuitable for deep learning applications involving point clouds. Therefore, using the KL divergence in such scenarios is not practical.
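As a toy illustration of this point (our own sketch, not an experiment from the paper), two one-dimensional point clouds with disjoint supports yield an infinite empirical KL divergence, while the Wasserstein distance remains finite and informative:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two 1-D point clouds with disjoint supports.
p = np.array([0.0, 0.1, 0.2])
q = np.array([5.0, 5.1, 5.2])

# Empirical histograms on a shared grid: wherever p has mass, q has none,
# so the density ratio inside KL = sum(hp * log(hp / hq)) divides by zero
# and the divergence blows up to infinity.
bins = np.linspace(-1.0, 7.0, 17)
hp, _ = np.histogram(p, bins=bins, density=True)
hq, _ = np.histogram(q, bins=bins, density=True)
mask = hp > 0
with np.errstate(divide="ignore"):
    kl = float(np.sum(hp[mask] * np.log(hp[mask] / hq[mask])))  # infinite

# The 1-D Wasserstein distance stays finite: every point travels
# a distance of (approximately) 5.
w1 = wasserstein_distance(p, q)
```

Converting the clouds to histograms is exactly the voxelization mentioned above, and the binning step is what breaks differentiability.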
Moreover, it is important to note that the KL divergence is not symmetric, while the Wasserstein variants are all symmetric. The Wasserstein distance and its variants, such as the Earth Mover's Distance, do not depend on the order of their two arguments and thus provide a symmetric measure for comparing two probability distributions.
For a comparison in generative modeling, training conventional SNGAN can be seen as an application of Jensen Shannon divergence (which can be seen as a symmetric version of KL) which achieves a FID score of 21 [1]. By using the same architecture, we achieve a FID score of 13.24 with IS-EBSW-e in the rebuttal pdf.
[1] Spectral Normalization for Generative Adversarial Networks, Miyato et al
In summary, due to its limitations with disjointed support measures and its lack of symmetry, KL divergence is not the most appropriate choice for comparing point clouds directly. Wasserstein-based distance metrics offer more suitable alternatives for handling these scenarios, especially in deep learning applications involving point clouds.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback from the authors. I will maintain my rating.
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: We would like to thank the reviewer for keeping a positive score of 6. We are happy to discuss more if the reviewer still has questions.
Best regards,
Authors | Summary: This paper proposes a new variant of the sliced Wasserstein distance, inspired by energy-based models, called the energy-based sliced Wasserstein (EBSW) distance. The proposed method models an energy function over slices, from which the slices can be sampled; three sampling techniques are proposed. The paper evaluates the proposed distance on three tasks: point-cloud gradient flow, color transfer, and point-cloud reconstruction. The results show that the new variant helps training converge faster and makes the transition between the two distributions smoother (in terms of the Wasserstein and sliced Wasserstein distances getting small faster).
Strengths: The paper has the following strengths
- A clear motivation to use the energy function. Similar to SW, the proposed energy-based SW enjoys a non-optimization based computation via sampling using the energy function.
- A detailed theoretical analysis of the proposed distance, accompanied by a nice experimental setup to evaluate its performance.
Weaknesses: In general, I like the simple, yet interesting proposal from the paper. However, I also have a few concerns:
- It would be reasonable to evaluate EBSW in a more complex task such as image generation, similar to that where DSW is introduced. It seems like only IS-EBSW enjoys the low-computation benefit but other EBSW variants do not.
- It is not clear to me why EBSW is better than DSW while DSW finds the most separating directions. Especially the fact that EBSW converges faster.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see comments in Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not include a Limitation discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to extend our appreciation for the valuable feedback and comments provided in the review. In response, we would like to address your questions as follows. We are readily available to address additional inquiries and to engage in further discussion.
**Q1**: It would be reasonable to evaluate EBSW in a more complex task such as image generation, similar to that where DSW is introduced.
**A1**: We acknowledge the importance of image generation as a task that sliced Wasserstein variants can excel at. In our paper, we did not include generative modeling due to the well-known instability associated with training generative models using mini-batch stochastic training algorithms. Instead, we focused on a stable setting for comparison and demonstrated the benefits of EBSW in the gradient flow application, which can be seen as a simplified and stable version of generative modeling. Nonetheless, we appreciate the reviewer's feedback and have taken it into consideration. To address this concern, we have now added the results of generative modeling to our paper. This addition showcases EBSW's applicability to various tasks. For detailed experiments and results, we refer the reviewer to the global rebuttal section, where we provide an overview of the experiments and outcomes. Overall, EBSW still shows a favorable performance compared to SW and DSW in this task. Nevertheless, due to the limited time of the rebuttal period, we suggest using the generative modeling result only as a reference for the versatility of EBSW.
**Q2**: It seems like only IS-EBSW enjoys the low-computation benefit but other EBSW variants do not.
**A2**: Thank you for your insightful comment. The reason IS-EBSW is fast is that its computational algorithm is fully vectorizable, i.e., we can stack the supports of measures into matrices, stack the projecting directions into matrices, and then perform computations on matrices (e.g., matrix multiplication) that are parallelizable.
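To make the vectorization concrete, here is a minimal NumPy sketch of an importance-sampling EBSW estimator with an exponential energy (our own illustration under stated assumptions: a uniform proposal on the sphere and self-normalized importance weights; it is not the paper's reference implementation):

```python
import numpy as np

def one_d_wasserstein_p(x_proj, y_proj, p=2):
    # Closed-form 1-D W_p^p between empirical measures with equally many
    # equally weighted points: sort each column of projections and average
    # |x_(i) - y_(i)|^p.
    return np.mean(np.abs(np.sort(x_proj, axis=0) - np.sort(y_proj, axis=0)) ** p, axis=0)

def is_ebsw(x, y, L=100, p=2, rng=None):
    """Importance-sampling EBSW sketch with an exponential energy.

    All L projections are handled by a single matrix product, which is why
    this variant is fully vectorizable (and parallelizable on GPU).
    """
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # L proposal directions drawn uniformly from the unit hypersphere.
    theta = rng.standard_normal((d, L))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)
    w = one_d_wasserstein_p(x @ theta, y @ theta, p)  # (L,) projected W_p^p values
    weights = np.exp(w - w.max())                     # exponential energy, stabilized
    weights /= weights.sum()                          # self-normalized importance weights
    return float((weights * w).sum() ** (1.0 / p))
```

All $L$ projections reduce to one matrix product `x @ theta` followed by column-wise sorts, which is the sense in which the computation maps onto parallel matrix primitives.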
For sampling importance resampling (SIR-EBSW), the key bottleneck is the resampling step. In particular, our implemented sampling algorithm is not parallelized since, to the best of our knowledge, the computation library (PyTorch) does not support it.
Regarding Markov chain Monte Carlo (IMH-EBSW and RMH-EBSW), the sampling algorithms are naturally sequential; hence, they are not parallelizable. To the best of our knowledge, we could use multiple parallelizable Markov chains as a solution. However, that kind of speeding-up method needs a careful investigation of the mixing time of the MCMC algorithm to design the number of chains and the number of time steps for keeping a budget-constrained algorithm.
However, we would like to recall that all of the specific computation algorithms discussed in the paper (IS, SIR, MCMC) are just basic variants within their vast literatures and can be significantly improved. Therefore, we leave this investigation for future work, since the main focus of the paper is the energy-based slicing distribution and EBSW. At the moment, we recommend importance sampling (IS-EBSW) as the main computation method for EBSW due to its fast computation and simplicity.
**Q3**: It is not clear to me why EBSW is better than DSW while DSW finds the most separating directions. Especially the fact that EBSW converges faster.
**A3**: We would like to direct the reviewer to the global rebuttal concerning the discussion on DSW. In our paper, we present a budget-constrained comparison that places a computational time limit on distances, defined by a scaling constraint of $n \log n$. As a result, DSW might not attain optimality. Furthermore, an optimal design for the slicing distribution in DSW remains unknown. In our study, we employ the von Mises Fisher family for the slicing distribution, chosen for its simplicity and efficiency. Nevertheless, it is still slower than EBSW with importance sampling. In our additional experiment, we also use DSW with an implicit slicing distribution induced by a neural network; however, it still does not perform as effectively as IS-EBSW-e in applications. To clarify, we are not questioning DSW in its population form, but rather its practical usage, which presents challenges such as sub-optimality and high computational complexity. In conclusion, EBSW with importance sampling emerges as the preferred choice due to its simplicity, effectiveness, and efficiency.
**Q4**: The paper does not include a Limitation discussion.
**A4**: Thank you for pointing this out. In the revision, we have gathered all limitations of the papers into a new paragraph. We would like to refer the reviewer to the global rebuttal for the discussion about the limitation of the paper. Here, we would like to summarize the discussion. As discussed in **Q2**, one limitation of EBSW is its non-parallelizable MCMC variations, resulting in sluggish computations. Furthermore, selecting suitable energy-based functions presents a challenge in applications. Finally, demonstrating the triangle inequality of EBSW is difficult due to the complexity of the energy-based slicing distribution. Overall, as discussed in the global rebuttal, these limitations are specific to the current paper itself, rather than EBSW.
---
Rebuttal Comment 1.1:
Title: Thank you for the responses!
Comment: Thank you for the detailed responses. Please see my additional comments:
Q1. Thank you for the additional results on image generation. I notice that the FID of DSW is significantly better than that reported in the original DSW paper. Also, all the methods have quite similar FIDs, which makes it difficult to draw a conclusion, as mentioned in the rebuttal.
Q3. It's quite unfair to report the performance of DSW when it does not reach optimality. I think it is ok to report the performance of DSW even if it is better, although it incurs more computation. For example, one can trade off between different estimation approaches depending on how much computational budget is available. Currently, the experimental results seem to favor the proposed method in the quantitative metrics as well, which can be misleading.
For the other questions, thank you for the clarifications. I will consider all the responses in my final rating of the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer
Comment: Thank you for your reply,
On Q1: The FID score is better because we utilize a stronger backbone for the generative models, i.e., ResNet50. As mentioned in the global rebuttal, the main aim of the additional generative modeling experiments is to show the broad applicability of EBSW.
On Q3: It is worth noting that we did not try to report DSW unfairly. It is hard to know when DSW reaches its optimality, and doing so could take a lot of computation. Due to the time limitation of the rebuttal, we had to focus on the budget-constrained setting, where we fix the computation budget for all baselines. Since there are only 2 hours left in the discussion period, we cannot run additional experiments. However, we will add experiments in the revision that allow DSW a large number of optimization updates. In the paper, we mainly focus on the computational aspect, since EBSW is motivated by the computational limitations of previous variants. We will highlight this focus further in the revision.
Since the paper is borderline now, we would be grateful if the reviewer could increase the score if all questions are addressed to some extent. We are happy to discuss more if the reviewer is not satisfied with our answers.
Best regards, | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewers for their time and feedback. We would like to summarize some additional experiments in the rebuttal PDF and address common questions from the reviewers. Other questions are addressed in the corresponding rebuttal of reviewers.
1. **Additional experiments on deep generative modeling.** As suggested by Reviewer **QgRs** and Reviewer **krQV**, we conducted additional experiments on deep generative modeling using the framework proposed in [1]. We compared SW, vanilla DSW (which utilizes an MLP neural network to model the slicing distribution as suggested by Reviewer **krQV**), and IS-EBSW-e on CIFAR10 and CelebA datasets. For SW and IS-EBSW-e, we set L=100. For DSW, we set $L=10, T=10$, and reported the best result for $\lambda \in \{1, 10, 50\}$ and slice learning rate ($\eta \in \{0.001, 0.01, 0.1\}$). We provided both quantitative results (FID score and IS score) and qualitative results (randomly generated images) in Figure 1 of the rebuttal PDF. Overall, we observed that IS-EBSW-e yielded the lowest FID score and the highest IS score. However, due to time limitations in the rebuttal, we recommend using this result as an example to showcase the versatility of EBSW, acknowledging that we ran experiments only once.
[1] Amortized Projection Optimization for Sliced Wasserstein Generative Models, Nguyen et al.
2. **Comparison to DSW.** As suggested by Reviewer **krQV**, we also incorporated vanilla DSW into our applications. Firstly, we applied the gradient flow with DSW and presented the results in Figure 2 and Table 2 of the rebuttal PDF. We observed that vanilla DSW performed better than v-DSW but required more computation time. Overall, DSW still falls short compared to IS-EBSW-e in terms of performance and computational efficiency. Similarly, we included DSW in the deep point-cloud reconstruction application, where DSW's performance was comparable to v-DSW but inferior to our proposed IS-EBSW-e.
Furthermore, we would like to emphasize the advantages of EBSW over DSW. Utilizing DSW involves two main steps: designing a family of distributions over the unit hypersphere and selecting the best member through an iterative optimization procedure. Both steps present challenges: in the first step, selecting a suitable family remains an open question, and in the second step, the optimization often suffers from local minima and is computationally expensive due to its iterative nature. Moreover, when performing statistical inference, using DSW (and other optimization-based SW variants, e.g., Max-SW) leads to a minimax problem known to be difficult to optimize. In contrast, EBSW is optimization-free, parameter-free, and stable. Although EBSW requires choosing an energy function, this process is easier and more flexible than selecting a family of distributions. Furthermore, EBSW can be computed asymptotically exactly thanks to the consistency of importance sampling and the mixing properties of Markov chains.
In the paper, our criticism of DSW pertains to its computational form rather than its population form. Specifically, we provide an approximate budget constraint for DSW and EBSW (in terms of scaling constraint of $n \log n$) in our experiments, which means that DSW might not achieve its optimality. Additionally, we use the von Mises Fisher slicing distribution for DSW, which might not be the optimal choice. However, it's worth noting that the optimal choice for DSW is still unknown, and adopting a more expressive family could lead to increased memory and computation costs, which is unfavorable in a budget-constrained comparison setting.
Overall, EBSW introduces a novel idea of energy-based slicing distribution that directly and explicitly conditions two input measures, resulting in a simple, effective, and efficient computation. Therefore, we believe it is easier to use EBSW in practice.
3. **Increasing $p$ in SW.** As suggested by Reviewer **vaua**, we increased the value of $p$ in SW, i.e., $p = 3$ and $p = 10$, in gradient flow. However, we found that increasing $p$ does not lead to better performance for SW. The reason is that changing $p$ does not affect the slicing distribution in SW; it remains uniform. In contrast, the slicing distribution of EBSW changes adaptively. Therefore, increasing $p$ in SW does not have the same effect as in EBSW.
4. **Limitations of the paper.** We will add a new paragraph to discuss the limitations of our paper, as suggested by Reviewers **QgRs** and **vaua**. In summary, the first limitation of EBSW is that its MCMC variants are not directly parallelizable, resulting in slow computation. Thus, conducting a more detailed investigation of customized MCMC methods for EBSW is crucial to unlocking its true performance. Additionally, the effective choice of energy-based functions is challenging. In the paper, we use simple and computationally effective energy-based functions, such as the exponential function and the polynomial function. However, there is ample room to explore other types of energy functions, such as the annealed exponential function and Lipschitz function, as suggested by Reviewer **vaua**. Finally, proving the triangle inequality of EBSW is challenging due to the expressiveness of the energy-based slicing distribution, which poses difficulties in deriving the two-sided sample complexity of EBSW, as questioned by Reviewer **krQV**. Overall, we believe these limitations pertain to the current paper itself, not EBSW.
Best regards,
Authors,
Pdf: /pdf/168901127184f035ca236440bf1d748e444a1fb3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Interpretable Characteristic Kernels via Decision Forests | Reject | Summary: The authors show that the random forest induced kernel is characteristic, and they empirically study the validity of the kernel for independence and k-sample testing.
Strengths: The authors show the connection between the characteristic property of the kernel and random forest. The topic is interesting and important.
Weaknesses: The presentation should be improved. Since detailed explanations about random forest are missing, how we can connect the kernel methods to random forest is not clear. I think that is the most important background of this paper, so it should be explained more. In addition, the novelty of the paper is not clear to me since the theoretical results in this paper seem to depend strongly on the random forest setting, but the setting is not clearly explained.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In Theorem 2, does $\phi_w$ become injective by the construction of the kernel through random forest? I think specific properties coming from the random forest setting should be explained clearly in the main text.
Minor comments:
- In the supplementary material, there are so many CSV files, but we only need codes and a PDF file for the appendix of the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As I also mentioned in the weakness part, since the setting and the background are not sufficiently explained, the novelty and the nontrivial part of this paper are not clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the thorough review!
**Weakness and Limitation:**
We will incorporate more background information on random forest kernels and existing works related to hypothesis testing. Moreover, we will include an updated conclusion to emphasize the main contribution of the paper, i.e., the direct utilization of this proximity matrix as a valid and consistent kernel choice for hypothesis testing, an approach not explored in the existing literature.
Here are the proposed changes for the first paragraph of the introduction section (Lines 23-24) to provide a better review of random forest works:
> …This proximity matrix functions as an induced kernel or similarity matrix for the decision forest. The connection between random forest and the induced kernel is well-established, with research demonstrating that random forest can essentially be perceived as a kernel method, generating kernels that surpass conventional ones [1,2, 3]. Generally, any random partition algorithm has the potential to generate a proximity matrix and be conceptualized as a kernel.
Then by the end of the third paragraph (Line 44) we will add the following on existing works regarding random forest for testing.
> ... There are existing studies that utilize random forests for hypothesis testing, such as F-test and feature screening [4], as well as two-sample testing [5]. However, these studies utilize other random forest outputs rather than the proximity matrix. To the best of our knowledge, there has been no research that explores the characteristic properties of the induced kernel, nor any work that directly employs the induced random forest kernel for tasks related to independence or two-sample testing.
Then in the discussion section at the end of the paper (Lines 198-199), we will emphasize the major contribution, and the potential for other random forest algorithms:
> ... The primary contribution of the paper lies in directly leveraging the induced kernel from random forest for achieving consistent and valid hypothesis testing for independence and two-sample. We explored the theoretical properties of this approach while showcasing its numerical benefits. With the aid of the recent chi-square test procedure for kernel-based tests, the complexity of our proposed method is
to compute the test statistic and p-value. Additionally, owing to the data-adaptive nature of the random forest algorithm, the resulting kernel appears to effectively adjust to the inherent data structure, thereby enhancing testing power. Other kernels, whether derived from variations of the standard random forest [1,2] or from the proximity matrix of alternative random partition or forest algorithms [6, 7], may also be used in the framework.
**Additional references:**
1. Alex Davies, Zoubin Ghahramani, “The Random Forest Kernel and creating other kernels for big data from random partitions”, arXiv:1402.4293.
2. Erwan Scornet, “Random forests and kernel methods”, arXiv:1502.03836.
3. Gerard Biau and Erwan Scornet, "A random forest guided tour”, TEST, 2016.
4. Tim Coleman, Wei Peng, Lucas Mentch, “Scalable and Efficient Hypothesis Testing with Random Forests ”, Journal of Machine Learning Research.
5. Simon Hediger, Loris Michel, Jeffrey Näf, “On the use of random forest for two-sample testing”, Computational Statistics and Data Analysis, 2022.
6. Tyler Tomita et al., “Sparse Projection Oblique Randomer Forests”, Journal of Machine Learning Research, 2020.
7. Pierre Geurts, Damien Ernst, Louis Wehenkel, “Extremely randomized trees”, Machine Learning, 2006.
**Question 1:**
Regarding the theorem question: yes, $\phi_w$ is injective due to the construction of the random forest. As the area of each leaf region goes to zero, two different observations must lie in two different leaf regions as $n$ increases, which effectively makes $\phi_w$ injective for continuous random variables.
We will include the following brief explanation in the theorem section on why the leaf region goes to zero, and make it clear that the assumptions require properties from random forest construction:
> Note that a crucial assumption in this context is that the area of each leaf region approaches zero. This asymptotic behavior is generally satisfied by most continuous and smooth random variables when employing a standard random forest. For instance, consider a scenario where the sample data has a continuous distribution within its support. In the default configuration of a random forest, each leaf node accommodates a relatively small number of observations, typically 1 in regression or 5 in classification. Consequently, as the sample size increases to infinity, the area of each leaf node converges to zero.
**Question 2:**
Considering the extended runtime required to generate the power-curve plots (numerous settings, multiple sample-size intervals, and many replicates), we decided to provide the power data as CSV files in the initial submission, hoping this would facilitate figure reproduction and result replication without rerunning the code. We will remove these CSV files, retaining only the code. Note that both the code and the CSV files will be accessible on GitHub for public reference later.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I'm still concerned about readability. I wonder formal explanations (not just by giving references, but by giving mathematical definitions and formulae) of the setting of random forest and relation between random forest and kernel methods are missing.
---
Reply to Comment 1.1.1:
Comment: Yes, we agree that formal definitions of random forest and the kernel will enhance readability. We feel that this is very important, and so propose adding a new section 2.3 in the Preliminaries:
### Random Forest and the Proximity Kernel
Given a dataset of $(x_i, y_i)$ for $i = 1, \ldots, n$, we employed the standard Classification and Regression Trees (CART) algorithm (1, 2) to build each tree. In the context of an independence test, where $y_i$ is continuous, the construction of the tree involves identifying the optimal dimension $j$ and the corresponding optimal split point $s$ that satisfies the minimization problem:
$$
\min_{j, s} \left[ \min_{c_a} \sum_{x_i \in R_a(j,s)} (y_i - c_a)^2 + \min_{c_b} \sum_{x_i \in R_b(j,s)} (y_i - c_b)^2 \right].
$$
Here, $c_a$ and $c_b$ are the sample means within the regions $R_a$ and $R_b$ respectively. These regions are essentially half-planes within the current region. For instance, for the initial split, $R_a = \\{x\_i|x\_i^{j} \leq s\\}$ and $R_b = \\{x_i|x_i^{j}>s\\}$.
The criterion above is essentially the mean squared error within each region for regression. When $y_i$ is categorical, such as in the case of two-sample testing, the same algorithm is used, but the minimization is performed over the Gini index to gauge impurity of the region.
In essence, the CART algorithm identifies the optimal pair $(j,s)$ to iteratively build a tree structure on the sample data, aiming to decrease an objective function within each region, until a predefined stopping criterion is met. By default, the stopping criterion is five observations within each region for regression trees and, in classification, we typically stop when each region is 'pure' (only has samples from a single class). Upon reaching the stopping criterion, the resulting regions within the tree are commonly referred to as leaf nodes.
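The split search above can also be written out explicitly. The following purely didactic sketch (our illustration, not the optimized routine used by standard libraries) exhaustively scans midpoints between sorted feature values:

```python
import numpy as np

def best_split(x, y):
    """Exhaustive search for the CART regression split (j, s) that minimizes
    the summed within-region squared error."""
    best = (None, None, np.inf)  # (dimension j, split point s, total SSE)
    n, p = x.shape
    for j in range(p):
        # Candidate split points: midpoints between consecutive unique values,
        # so both child regions are always non-empty.
        vals = np.unique(x[:, j])
        for s in (vals[:-1] + vals[1:]) / 2:
            left = x[:, j] <= s
            sse = ((y[left] - y[left].mean()) ** 2).sum() + \
                  ((y[~left] - y[~left].mean()) ** 2).sum()
            if sse < best[2]:
                best = (j, s, sse)
    return best
```

Library implementations avoid the repeated mean computations by sorting each feature once and updating the region sums incrementally as the split point sweeps through the data.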
Random forests are ensembles of CARTs (3,4). Each tree is constructed by resampling the training data with replacement. In our experiments, we used 500 trees. The time complexity of random forests is $\mathcal{O}(mpn \log(n))$, where $m$ is the number of trees, $p$ is the dimensionality of the data, and $n$ is the sample size.
Finally, the standard proximity kernel is computed as follows:
**Definition 2.** *Given a forest of $m$ trees, let $\phi_w \in \mathbf{P}, w \in 1, \ldots, m$ denote a single decision tree. The proximity kernel matrix is computed by
$$\mathbf{K}^{\mathbf{x}}\_{ij} = \frac{1}{m}\sum\limits_{w = 1}^{m}[\mathbb{1}(\phi_w(x_i) = \phi_w(x_j))],$$
where $\mathbb{1}$ is the indicator function that observations $x_i, x_j \in \mathbf{x}$ are in the same leaf node in each tree.*
The proximity kernel lies in $[0,1]$. As a simple example, if $x_i$ and $x_j$ always lie in the same leaf node in every tree, then $\mathbf{K}^{\mathbf{x}}\_{ij}=1$. If out of $100$ trees, there are only $10$ trees where $x_i$ and $x_j$ belong to the same leaf node, $\mathbf{K}^{\mathbf{x}}\_{ij}=0.1$.
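Definition 2 translates directly into code. The sketch below (our illustration) computes the proximity kernel from a leaf-assignment matrix, e.g. the output of scikit-learn's `RandomForestClassifier(...).apply(X)`, which returns the leaf index of every observation in every tree:

```python
import numpy as np

def proximity_kernel(leaves):
    """Proximity kernel from a leaf-assignment matrix.

    `leaves` has shape (n, m): entry (i, w) is the index of the leaf that
    observation i falls into in tree w. K[i, j] is the fraction of trees in
    which i and j share a leaf, so K lies in [0, 1] with diag(K) == 1.
    """
    # Broadcast-compare leaf ids: (n, 1, m) vs (1, n, m) -> (n, n, m) booleans,
    # then average the indicators over the m trees.
    same = leaves[:, None, :] == leaves[None, :, :]
    return same.mean(axis=2)
```

For large $n$, the $(n, n, m)$ intermediate array can be avoided by accumulating one $n \times n$ co-occurrence count per tree.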
**References:**
1. Leo Breiman, "Classification and regression trees", Routledge, 1984.
2. Trevor Hastie, Robert Tibshirani, Jerome Friedman, "Elements of Statistical Learning", Springer, 2001.
3. Leo Breiman, "Random Forests", Machine Learning, 2001.
4. Leo Breiman, "Bagging predictors", Machine Learning, 1996. | Summary: The paper show how decision forest can be used to induce a kernel for k-sample testing.
Strengths: Better-than-SOTA results.
Weaknesses: The paper is extremely hard to follow, particularly Section 3, which I was unable to follow without going deep into the referenced papers. The same goes for the related works, which are simply mentioned in passing in Section 5.1. A reader not directly familiar with all of them is left wondering how they differ, and why.
Additional comments:
- L18: "are an ensemble".
- L18: "They are highly effective".
- Given that we're dealing with matrices, I'd suggest using \mathds{1} to indicate indicator functions, rather than I, which is usually used for identity matrices.
- Figure 4 only reports 2 competitors: what about the others?
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Could not formulate questions due to difficulty in understanding the paper.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitedly comprehensible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review the paper!
**Additional Comments 1-3:**
Thank you for pointing out the typos, we fixed them and also changed notation for the indicator matrix.
**Additional Comments 4:**
Figure 4 exclusively showcased two competitors, HHG and HSIC, for specific reasons: MGC detected the same peptide as KMERF, while HHG and HSIC identified a slightly different set of significant peptides; all other methods (DCor, CCA, RV) failed to identify any significant peptides. For more clarity, see the updated Figure 3 in the attached PDF. This figure contains a more comprehensive caption, along with the addition of the base case where all peptides (dimensions) are incorporated into the classification task, representing all other methods.
**Weaknesses:**
We sincerely appreciate your feedback regarding Section 3 and Section 5. While the detailed background information on these sections was initially included in the draft, it was regrettably omitted from the submission due to space constraints. In our revised draft, we will reinstate this information in the appendix for comprehensive reference.
For the main method in Section 3, we will provide detailed information regarding each step. Specifically, we will provide the following:
- in step 1, we will explain the parameters and the default implementation utilized for the standard random forest.
- in step 3, we will present the motivation and rationale underpinning the use of the unbiased transformation. This transformation serves to achieve $E[corr]=0$ under conditions of independence, which, in turn, facilitates the successful application of the chi-square test.
- in step 5, an in-depth explanation will be provided as to why the chi-square test is a fast, valid, and consistent alternative to the permutation test. Furthermore, we will explain the standard permutation test for other methods.
Then, for the competitor methods in Section 5.1, the updated appendix will include a thorough exploration of each method, elucidating their known properties and providing concise comparative insights. Specifically, we will incorporate the following information:
- DCor and HSIC: A detailed account of the statistics and testing process for both DCor and HSIC will be included. It will also be highlighted that DCor and HSIC are fundamentally the same methods, with DCor operating on distance matrices and HSIC on kernel matrices.
- MGC: It is a local and adaptive version of DCor. We will explain its use of local distance computation and its capability to identify optimal local neighborhoods, rendering it adept at identifying nonlinear relationships.
- HHG: It is a rank-based method, specifically tailored for capturing nonlinear dependencies effectively.
- CCA and RV: They are multivariate extensions of Pearson correlation, so their strengths lie primarily in linear dependence scenarios.
Overall, the updated appendix will provide the general audience with detailed information on all the methods without the need to go through other references. | Summary: In this paper, the authors introduced a new method called KMERF, which employs random forest for kernel construction. Through their algorithm, they were able to establish that the kernel they created has certain properties, namely being positive definite and asymptotically characteristic. The authors also demonstrated that KMERF has better statistical power than other independent methods. This makes KMERF one of the pioneering learned kernels that has been proven to be asymptotically characteristic.
Strengths: I find the proposed method to be impressive and well-organized. The paper is also well-written. It's worth noting that the number of learned kernels with characteristic properties is limited, so KMERF could be a valuable contribution to the kernel learning field. While it's not unexpected for a learned kernel to show better statistical power, the decent false discovery rate of KMERF is intriguing.
Weaknesses: 1. There is concern about potential inflation from post-selection inference since the kernel being proposed is learned. To address this, I suggest that the authors conduct additional calibration tests of KMERF. For instance, they could create various simulation scenarios (similar to those presented in Fig1 and Fig2) involving a mix of causal and null features, and demonstrate the well-controlled FPR/FDR.
2. Further details are required in Fig1 and Fig2, such as parameter values and exact equations, to fully understand how each simulation setting was carried out.
3. It would be beneficial to have a more comprehensive analysis of the runtime.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In supplementary proof theorem 1, line 8, it is not clear to me why "Each block matrix is always positive definite".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: KMERF has been proven to exhibit asymptotic characteristics when the sample size n and the number of trees m approach infinity. However, in practical applications, m is often limited to prevent overfitting and computational issues. Providing insights on KMERF's behavior under finite n and reasonable m would be greatly beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the valuable comments and questions. Below is the response to all the weaknesses and questions.
**Weakness 1:**
As shown in Figure 1 and Figure 2, the test power equals the type I error under independence. This supports the validity of the test, meaning that the test will not inflate the false positive rate during post-selection inference. To substantiate this point, we conducted an additional simulation, presented in Figure 1 of the attached document, which shows no inflation.
In essence, we generated ($x_1$, $x_2$, $x_3$, $x_4$, $x_5$, $x_6$), where each $x_i$ is drawn independently from a $\mathcal{U}(-1, 1)$ distribution. Then we set $Y = x_1 + x_2^2 + x_3 + \epsilon$, where $\epsilon$ is a noise term. This establishes a relationship between the first three variables and $Y$, while the last three variables are independent of $Y$. This is repeated for 100 replicates; in each replicate we generate sample data, then compute the KMERF statistic and p-value between each of $x_1, \ldots, x_6$ and $Y$. We define the true positive rate as how often the dependent variables are flagged as significant (p-value < 0.05), and the false positive rate as how often the independent variables are flagged as significant. The computations were performed for each variable, and we report the average true positive rate among the first three variables and the average false positive rate among the last three variables. As expected, the figure shows that the true positive rate goes to 1 and the false positive rate stays near 0.05.
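To illustrate the shape of this calibration simulation, here is a minimal pure-Python sketch (editor's illustration, not from the paper). A simple permutation test of Pearson correlation stands in for the KMERF statistic, so the numbers are only illustrative:

```python
import random

def pearson(x, y):
    # sample Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def perm_pvalue(x, y, n_perm=100, seed=0):
    # permutation p-value for |correlation|; KMERF would plug in its
    # forest-induced kernel correlation as the statistic instead
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        rng.shuffle(yp)
        if abs(pearson(x, yp)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(1)
n = 200
xs = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(6)]
# Y depends only on the first three variables
y = [xs[0][i] + xs[1][i] ** 2 + xs[2][i] + rng.gauss(0, 0.1) for i in range(n)]
p_dep = perm_pvalue(xs[0], y)   # x_1 is dependent: expect a small p-value
p_null = perm_pvalue(xs[5], y)  # x_6 is independent: p-value roughly uniform
```

Averaging `p_dep`-style and `p_null`-style rejections over replicates gives the true positive and false positive rates described above.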
**Weakness 2:**
For Figure 1 and 2, the exact parameters and equations for the simulations are included in appendix section C. The specific hyperparameters (n,p,q) employed in each test are specified in the figure captions. The code to generate them and replicate all figures will be publicly available on GitHub.
**Weakness 3:**
A detailed running time analysis is provided as follows, which will be added to the main paper:
> The KMERF method consists of three major components: random forest, distance correlation computation, and hypothesis testing. For the number of trees ($m$), the number of dimensions ($p$), and the number of samples ($n$), the complexity of the random forest is $\mathcal{O}(mpn\log{n})$, the correlation computation complexity is $\mathcal{O}(n^2)$, and the chi-square testing complexity is $\mathcal{O}(1)$. This overall process is fast and scalable.
For reference, in the attached post-selection figure (Figure 1 in the appendix), it took about 45 seconds in a Python Jupyter notebook (2019 MacBook Pro with a 2.3 GHz 8-Core Intel Core i9) to compute all six p-values for the sample data at $n = 2000$.
**Question 1:**
Thank you for pointing out this typo; we have amended the manuscript with the change. Indeed, each block matrix should be positive semidefinite: since the diagonal blocks are matrices of ones, zero eigenvalues exist.
In the context of kernel theory, a kernel is considered positive definite when $\sum_{i,j} a_i a_j k(i,j) \geq 0$. Therefore, when the kernel matrix or proximity matrix is positive semidefinite (psd) for any sample data, the kernel function is positive definite (pd). Note that this slight difference on psd / pd is merely a terminology discrepancy between matrix theory and kernel methods.
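As a small numerical illustration of this point (editor's sketch, not from the paper): a block-diagonal matrix whose diagonal blocks are all-ones matrices has a nonnegative quadratic form for every vector, i.e., it is psd, even though zero eigenvalues exist:

```python
import random

def quad_form(K, a):
    # computes sum_{i,j} a_i * a_j * K[i][j]
    n = len(a)
    return sum(a[i] * a[j] * K[i][j] for i in range(n) for j in range(n))

# block-diagonal kernel matrix with all-ones blocks of sizes 2 and 3
blocks, n = [2, 3], 5
K = [[0.0] * n for _ in range(n)]
start = 0
for b in blocks:
    for i in range(start, start + b):
        for j in range(start, start + b):
            K[i][j] = 1.0
    start += b

rng = random.Random(0)
# the quadratic form equals the sum over blocks of (sum of a in block)^2 >= 0
min_val = min(quad_form(K, [rng.uniform(-1, 1) for _ in range(n)])
              for _ in range(200))
# the form hits exactly zero for a = (1, -1, 0, 0, 0): psd but not pd
zero_val = quad_form(K, [1.0, -1.0, 0.0, 0.0, 0.0])
```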
**Limitation 1:**
Concerning the number of trees, we have included an additional Figure 2 in the attached PDF. This figure examines the effect of varying values of the number of trees $m$ on the testing power for three distinct relationships. As depicted in the attached figure, it is evident that the number of trees has negligible impacts on the testing power of KMERF for these relationships and choices of $m$.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed the majority of my concerns and questions. It would become a big plus if the authors can provide more theoretical insight into why this post-selection inference procedure won't incur potential inflation.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising this insightful question! You are right to have concerns about whether learning labels affects the validity of the test. To better justify the validity, we will add the following text into the paper:
When using the permutation test rather than the chi-square test, any post-selection inference is valid. This would involve calculating $c\_k^{n}(\mathbf{x}, \mathbf{y}\_{\pi})$ for $100$ permutations, wherein $\pi$ is a random permutation of the sample indices. The p-value would then be computed as $Prob(c_k^{n}(\mathbf{x}, \mathbf{y}_{\pi}) > c_k^{n}(\mathbf{x}, \mathbf{y}))$. As permutations effectively break the pairwise dependence between each pair of samples, the resulting permuted statistics closely approximate the null distribution of the test statistic. The only assumption permutation tests make is that the observations are exchangeable under the null hypothesis (1), which is not violated by learning the kernel.
In the paper we use the chi-square test in step 5, i.e., compute
$$
p=1-F\_{\chi^{2}\_{1}-1} \left(n \cdot \frac{c_k^{n}(\mathbf{x}, \mathbf{y})}{\sqrt{c_k^{n}(\mathbf{x}, \mathbf{x}) \cdot c_k^{n}(\mathbf{y}, \mathbf{y})}}\right),
$$
where $\chi^{2}_{1}$ is the chi-square distribution of degree $1$.
This step is demonstrated to be approximately valid for kernel correlation using an unbiased kernel in (2).
In particular, it is derived in (3) that under independence between $(X,Y)$, the limiting distribution of an unbiased kernel correlation satisfies
$$
\left(n \cdot \frac{c\_k^{n}(\mathbf{x}, \mathbf{y})}{\sqrt{c\_k^{n}(\mathbf{x}, \mathbf{x}) \cdot c\_k^{n}(\mathbf{y}, \mathbf{y})}}\right) \stackrel{D}{\rightarrow} \sum\limits\_{i,j=1}^{\infty} w_{ij} (\mathcal{N}\_{ij}^{2}-1),
$$
where the weights satisfy $w_{ij} \in [0,1]$ and $\sum\limits_{i,j=1}^{\infty} w_{ij}^{2} = 1$, and $\mathcal{N}\_{ij}$ are independent standard normal distributions. Then it was proved in (2) that the weighted summation of densities is bounded by a chi-square distribution on the upper tail, leading to
$$
\left(n \cdot \frac{c\_k^{n}(\mathbf{x}, \mathbf{y})}{\sqrt{c\_k^{n}(\mathbf{x}, \mathbf{x}) \cdot c\_k^{n}(\mathbf{y}, \mathbf{y})}}\right) \preceq\_{\alpha} \chi^{2}\_{1}-1
$$
for sufficiently large $n$ and sufficiently small $\alpha$. Here $\preceq\_{\alpha}$ means upper tail dominance, i.e., $V \preceq\_{\alpha} U$ in upper tail at probability level $\alpha$ if and only if
$$
F\_{V}(x) \geq F\_{U}(x)
$$
for all $x \geq F\_{U}^{-1}(1-\alpha)$. The above dominance is valid for sufficiently large $n$ and adequately small values of $\alpha$ (numerically verified to be approximately $0.08$). This makes the chi-square test a valid test (and in fact slightly conservative) in practical scenarios.
This leads to the following Lemma:
**Lemma 1.** *The forest-induced kernel satisfies the assumptions of Zhang et al. (3), and KMERF produces an unbiased kernel correlation, such that chi-square test is a valid test of independence under proper $\alpha$.*
Therefore, despite being an adaptive kernel construction, the random forest proximity kernel still adheres to the established properties within the existing kernel literature. Therefore, both the permutation test and the chi-square test can be used in KMERF as a valid test. Specifically, when $X$ and $Y$ are independent, the random forest construction effectively results in a random partition of $X$, as $Y$ does not carry any information about $X$, and the resulting kernel correlation converges to $0$ under independence.
Consequently, the kernel correlation derived from the random forest and the resulting chi-square test should not cause inflation in the test statistic nor lead to any issues with post-selection inference.
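As a concrete sketch of the step-5 computation described above (editor's illustration; it uses the closed form $F_{\chi^2_1}(x)=\operatorname{erf}(\sqrt{x/2})$ for the chi-square CDF with one degree of freedom, and the statistic values are made up):

```python
import math

def chi2_1_cdf(x):
    # CDF of the chi-square distribution with 1 degree of freedom:
    # F(x) = erf(sqrt(x / 2)) for x >= 0
    return math.erf(math.sqrt(x / 2.0)) if x > 0.0 else 0.0

def chi2_test_pvalue(normalized_stat, n):
    # p = 1 - F_{chi2_1 - 1}(n * stat); shifting the argument by +1
    # reduces the centered variable chi2_1 - 1 to the plain chi2_1 CDF
    return 1.0 - chi2_1_cdf(n * normalized_stat + 1.0)

p_null = chi2_test_pvalue(0.0, 100)   # zero correlation under independence
p_dep = chi2_test_pvalue(0.1, 1000)   # large normalized kernel correlation
```

A normalized statistic of zero yields $p = P(\chi^2_1 > 1) \approx 0.317$, while a large statistic drives the p-value toward zero, matching the upper-tail dominance argument.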
**References:**
1. Phillip Good, "Permutation, Parametric, and Bootstrap Tests of Hypotheses", Springer, 2005.
2. Cencheng Shen, Sambit Panda, Joshua Vogelstein, "The chi-square test of distance correlation", Journal of Computational and Graphical Statistics, 2022.
3. Qinyi Zhang, Sarah Filippi, Arthur Gretton, Dino Sejdinovic, "Large-scale kernel methods for independence testing", Statistics and Computing, 2018. | Summary: In this paper, random forest induced kernel/proximity is combined with distance correlation and a recently developed chi-square test method to form a hypothesis testing method that is useful for independence testing and k-sample testing. The authors prove that the kernel is asymptotically characteristic and therefore the proposed method is valid and consistent for a sufficiently large sample size. Through experimental evaluation of statistical power for independence and two-sample testing on synthetic data, the authors show that their method performs better than other tests for the majority of simulation settings, and is good at identifying important features. When applied to real biomarker data, the method successfully identifies a potentially valuable marker for pancreatic cancer.
Strengths: - The method is easy to implement and seems powerful. It inherits the advantages of random forest, e.g., working very well on small datasets, almost no need for parameter tuning and preprocessing, easy-to-access high-quality implementations, etc.
- The paper is well-written and easy to follow.
- Theoretical properties of the method are investigated. The results seem correct.
- The chi-square test is much more efficient than the permutation test.
- Theorem 2 seems novel.
- Experimental evaluation is well conducted and looks convincing.
Weaknesses: - The novelty of this method is limited because the combination is straightforward. The random forest induced proximity is well-known in the literature.
- The related work on random forests for hypothesis testing is not surveyed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - When does the premise of Theorem 2 (part area goes to zero) hold?
- What are the labels for supervised RF training?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of this work have been discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the review and the questions. Indeed, the main innovation is not about the random forest kernel itself, as the proximity matrix is well-established within the random forest framework. Our main contribution is the direct utilization of this proximity matrix as a valid and consistent kernel choice for hypothesis testing, an approach not explored in existing literature. In the realm of kernel testing, it is widely recognized that different kernels can yield distinct testing performance outcomes. Please find the proposed edits below on how we intend to expand the random forest background in the introduction, as well as to emphasize the main contribution in the conclusion.
**Question 1:**
Thank you for the question! This is indeed an important component that should have been clarified. We will include the following explanation after Theorem 2:
> Note that a crucial assumption in this context is that the area of each leaf region approaches zero. This particular asymptotic behavior is generally met by most continuous and smooth random variables when employing a standard random forest. For instance, consider a scenario where the sample data has a continuous distribution within the support. In the default configuration of a random forest, each leaf node accommodates a relatively small number of observations, typically 1 in regression or 5 in classification. Consequently, as the sample size increases to infinity, the area of each leaf node converges to zero.
**Question 2:**
The simulations considered two types of hypothesis tests: testing for independence and testing for two-sample. For the RF training on two-sample testing, the labels are binary, specifically 0 and 1. Here, 0 signifies that the samples belong to one group, while 1 indicates that the samples belong to another group. Then, for RF training on independence testing, the labels are simply Y, representing a 1-dimensional response variable throughout the simulations.
**Proposed Edits to Introduction and Conclusion:**
Here are the proposed changes for the first paragraph in the introduction section (Lines 23-24) to provide a better review of random forest works:
> …This proximity matrix functions as an induced kernel or similarity matrix for the decision forest. The connection between random forest and the induced kernel is well-established, with research demonstrating that random forest can essentially be perceived as a kernel method, generating kernels that surpass conventional ones [1,2, 3]. Generally, any random partition algorithm has the potential to generate a proximity matrix and be conceptualized as a kernel.
Then by the end of the third paragraph (Line 44) we will add the following on existing works regarding random forest for testing.
> ... There are existing studies that utilize random forests for hypothesis testing, such as F-test and feature screening [4], as well as two-sample testing [5]. However, these studies utilize other random forest outputs rather than the proximity matrix. To the best of our knowledge, there has been no research that explores the characteristic properties of the induced kernel, nor any work that directly employs the induced random forest kernel for tasks related to independence or two-sample testing.
Then in the discussion section at the end of the paper (Lines 198-199), we will emphasize the major contribution, and the potential for other random forest algorithms:
> ... The primary contribution of the paper lies in directly leveraging the induced kernel from random forest for achieving consistent and valid hypothesis testing for independence and two-sample. We explored the theoretical properties of this approach while showcasing its numerical benefits. With the aid of the recent chi-square test procedure for kernel-based tests, the complexity of our proposed method is $\mathcal{O}(n^2)$ to compute the test statistic and p-value. Additionally, owing to the data-adaptive nature of the random forest algorithm, the resulting kernel appears to effectively adjust to the inherent data structure, thereby enhancing testing power. Other kernels, whether derived from variations of the standard random forest [1,2] or from the proximity matrix of alternative random partition or forest algorithms [6, 7], may also be used in the framework.
**Additional references:**
1. Alex Davies, Zoubin Ghahramani, “The Random Forest Kernel and creating other kernels for big data from random partitions”, arXiv:1402.4293.
2. Erwan Scornet, “Random forests and kernel methods”, arXiv:1502.03836.
3. Gerard Biau and Erwan Scornet, "A random forest guided tour”, TEST, 2016.
4. Tim Coleman, Wei Peng, Lucas Mentch, "Scalable and Efficient Hypothesis Testing with Random Forests", Journal of Machine Learning Research.
5. Simon Hediger, Loris Michel, Jeffrey Näf, “On the use of random forest for two-sample testing”, Computational Statistics and Data Analysis, 2022.
6. Tyler Tomita et al., “Sparse Projection Oblique Randomer Forests”, Journal of Machine Learning Research, 2020.
7. Pierre Geurts, Damien Ernst, Louis Wehenkel, “Extremely randomized trees”, Machine Learning, 2006.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. I will keep the score unchanged. | Rebuttal 1:
Rebuttal: Thank you all very much for taking the time to thoroughly review our paper and provide valuable feedback. Attached is a one-page document containing figures we will refer to in each of our rebuttals.
Pdf: /pdf/35ae1f2b90103c050d359aa5186e9682ebd2a2f8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this study, the authors proposed a new kernel KMERF for independence testing.
In KMERF, multiple decision trees are constructed as in Random Forest, and the number of trees in which two data points belong to the same leaf node is calculated.
This count is used as the kernel value between the two points to form a kernel matrix.
The p-value is then computed using a method similar to HSIC.
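The proximity construction in the preceding sentences can be sketched in a few lines (editor's illustration; `leaf_ids[t][i]` denotes the leaf that sample `i` reaches in tree `t`, as would be produced by a fitted forest):

```python
def forest_kernel(leaf_ids):
    # leaf_ids: m x n matrix of leaf indices (m trees, n samples);
    # K[i][j] = fraction of trees in which samples i and j share a leaf
    m, n = len(leaf_ids), len(leaf_ids[0])
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same = sum(leaf_ids[t][i] == leaf_ids[t][j] for t in range(m))
            K[i][j] = same / m
    return K

# toy example: two trees, three samples
K = forest_kernel([[0, 0, 1],
                   [0, 1, 1]])
```

The resulting matrix is symmetric with ones on the diagonal, and it is this kernel that is then plugged into the HSIC-style test statistic.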
The authors proved that KMERF is a characteristic kernel in the limit as the number of data points and the number of decision trees approach infinity.
This property guarantees that KMERF is an effective kernel (asymptotically) for independence testing.
Through experiments using synthetic data, the authors reported that independence testing with KMERF outperforms other kernel-based methods in terms of statistical power.
They also showed that by estimating the feature importance of the Random Forest, they can estimate the input features contributing to the independence testing.
This ability of estimating feature importance is considered an advantage of KMERF.
Strengths: The strengths of this study lie in the theoretical analysis of KMERF and the experimental results.
For valid independence testing, it is required to demonstrate the (asymptotic) characteristic kernel property of KMERF.
The theoretical proof in this study is therefore essential.
Moreover, the experiments report that the testing using KMERF exhibits higher statistical power compared to other methods using different kernels.
Additionally, a unique feature of KMERF is its ability to estimate the important features contributing to the testing by examining the feature importance of the Random Forest.
Weaknesses: A weakness of this study is the absence of a review of existing random forest-based kernels.
For example, [7] presents a kernel based on random partitions similar to KMERF.
Moreover, measuring the similarity of two data points using the number of trees with the same leaf node has been used for several data analysis tasks, and packages such as rfProximity have been publicly released.
When compared to these existing methods, Steps 1 and 2 of KMERF can be considered equivalent to rfProximity.
The absence of the aforementioned review poses a problem in properly evaluating the novelty of this study.
In fact, due to the similarity with rfProximity, the novelty of this study lies not in the random forest-based kernel itself (Steps 1 and 2), but rather in the application to testing in Steps 3--5, as well as demonstrating the asymptotic characteristic kernel property of KMERF.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Q1. Please review the existing random forest / partition-based kernels.
* Q2. What is the novelty and the advantage of KMERF compared to the kernels in Q1?
---
I would like to thank the authors for the detailed reply. The proposed updates look reasonable.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors mentioned some possible future directions that are not addressed in the current study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing the paper and the valuable suggestions!
**Question 1:**
Thank you for pointing out the limited coverage of existing research on random forests. We will expand the background section to include more works of random forest and existing connections to kernel and hypothesis testing. Please find the proposed edits below on how we intend to expand the background.
**Question 2:**
Indeed, the core innovation is not about the random forest kernel itself. After all, the proximity matrix is an established and well-known component within the random forest framework. Instead, what sets our work apart is the direct utilization of this proximity matrix as a valid and consistent kernel choice for hypothesis testing – a novel approach not explored in existing literature. In the realm of kernel testing, it is widely recognized that different kernels can yield distinct testing performance outcomes. Therefore, this paper pioneers the potential of random forest kernels for hypothesis testing, and presents compelling theoretical groundwork and satisfactory numerical outcomes. This work also opens up future possibilities for delving deeper into other random forest or random partition methods within the hypothesis testing framework. Please find the proposed edits below on how we intend to emphasize the innovations.
**Proposed Edits to Introduction and Conclusion:**
Here are the proposed changes for the first paragraph in the introduction section (Lines 23-24) to provide a better review of random forest works:
> …This proximity matrix functions as an induced kernel or similarity matrix for the decision forest. The connection between random forest and the induced kernel is well-established, with research demonstrating that random forest can essentially be perceived as a kernel method, generating kernels that surpass conventional ones [1,2, 3]. Generally, any random partition algorithm has the potential to generate a proximity matrix and be conceptualized as a kernel.
Then by the end of the third paragraph (Line 44) we will add the following on existing works regarding random forest for testing.
> ... There are existing studies that utilize random forests for hypothesis testing, such as F-test and feature screening [4], as well as two-sample testing [5]. However, these studies utilize other random forest outputs rather than the proximity matrix. To the best of our knowledge, there has been no research that explores the characteristic properties of the induced kernel, nor any work that directly employs the induced random forest kernel for tasks related to independence or two-sample testing.
Then in the discussion section at the end of the paper (Lines 198-199), we will emphasize the major contribution, and the potential for other random forest algorithms:
> ... The primary contribution of the paper lies in directly leveraging the induced kernel from random forest for achieving consistent and valid hypothesis testing for independence and two-sample. We explored the theoretical properties of this approach while showcasing its numerical benefits. With the aid of the recent chi-square test procedure for kernel-based tests, the complexity of our proposed method is $\mathcal{O}(n^2)$ to compute the test statistic and p-value. Additionally, owing to the data-adaptive nature of the random forest algorithm, the resulting kernel appears to effectively adjust to the inherent data structure, thereby enhancing testing power. Other kernels, whether derived from variations of the standard random forest [1,2] or from the proximity matrix of alternative random partition or forest algorithms [6, 7], may also be used in the framework.
**Additional references:**
1. Alex Davies, Zoubin Ghahramani, “The Random Forest Kernel and creating other kernels for big data from random partitions”, arXiv:1402.4293.
2. Erwan Scornet, “Random forests and kernel methods”, arXiv:1502.03836.
3. Gerard Biau and Erwan Scornet, "A random forest guided tour”, TEST, 2016.
4. Tim Coleman, Wei Peng, Lucas Mentch, "Scalable and Efficient Hypothesis Testing with Random Forests", Journal of Machine Learning Research.
5. Simon Hediger, Loris Michel, Jeffrey Näf, “On the use of random forest for two-sample testing”, Computational Statistics and Data Analysis, 2022.
6. Tyler Tomita et al., “Sparse Projection Oblique Randomer Forests”, Journal of Machine Learning Research, 2020.
7. Pierre Geurts, Damien Ernst, Louis Wehenkel, “Extremely randomized trees”, Machine Learning, 2006.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I would like to thank the authors for the detailed reply.
The proposed updates look reasonable.
I will keep my score. | null | null | null | null | null | null |
Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork | Accept (poster) | Summary: This paper presents a Data-free Subnetworks (DSN) approach for task incremental learning. With a random initialized neural network, DSN learns a task-specific neuron-wise mask to find an optimal subnetwork for a new arriving task, and performs data-free replay for transferring the knowledge to the past tasks. DSN achieves superior performance on four benchmarks.
Strengths: - The paper's approach of designing a task-incremental learning method motivated by the Lottery Ticket Hypothesis is reasonable and novel.
- The emphasis on backward knowledge transfer, often overlooked in previous incremental learning methods, is commendable.
- The proposed method outperforms many State-Of-The-Art methods across four benchmarks under multiple evaluation metrics, demonstrating its effectiveness.
Weaknesses: - It is confusing how backward knowledge transfer is achieved through data-free replay. By definition, backward knowledge transfer refers to the process in which new knowledge is transferred to old tasks to potentially improve their performance. However, it seems that the replayed samples (impression crafts) only represent knowledge of old tasks and may not incorporate new knowledge for improving performance in those tasks.
- The related work section can be further improved by providing detailed comparisons with 1) architecture-based methods (such as [39][41]) that also learn masks to designate different subnetworks for different tasks; 2) incremental learning methods [40][1*] that also consider backward knowledge transfer; and 3) [2*] that is also motivated by Lottery Ticket Hypothesis.
Ref.
[1*] Class-incremental learning with cross-space clustering and controlled transfer, ECCV 2022;
[2*] On the Soft-subnetwork for few-shot class incremental learning, ICLR 2023
- The method details and formulas need clarification. For example, Eq. (9) lacks an explanation of how to calculate L_{IC}, the optimization objective does not include the variable x, and the discussion on the validity of the Dirichlet distributions representing the task output is less convincing.
- Knowledge transfer is only performed between current task and the most similar previous task. Considering more similar tasks and evaluating the associated cost could potentially lead to improvements.
- Some minor issues: 1) Case errors, such as “Without loss of generality, A classical TIL scenario, ...”; 2) Brackets missing on line 185; 3) Fig. 6 is not referred to in the manuscript.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Which direction(s) does the proposed method belong to, regularization-based, rehearsal-based, or architecture-based?
- What does the f represent in Eq. (1) and (2)?
- What is the difference between a neuron-wise mask and a weight-wise mask?
- What does “bridge the gap between the layer embedding and layer mask” mean?
- Could you provide more details about the structure of the over-parameterized deep neural network used in this method, such as the number of layers and neurons in each layer?
- In Algorithm 1, what do B1, ..., B_{argmax(S_t)} represent?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes. The authors have discussed the limitations regarding the efficiency and capacity issue of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Response to "*backward knowledge transfer*"
Thanks for this concern. After training a new task and measuring task similarity, our data-free replay produces impression crafts of the most similar task. For backward knowledge transfer, since we treat the subnetwork (rather than samples) as the knowledge, we merge the optimal mask of the current task into that of the most similar old task (cf. Knowledge Transfer in Sec. 3.4). This allows the old task to share neurons used by the new task, enabling the transfer of new knowledge. We then use the impression crafts to update the mask of the old task. This process adapts the old task's mask to accommodate the new knowledge, effectively transferring valuable information from the new task to past tasks. Thus, impression crafts serve as a crucial bridge for carrying over and integrating the latest knowledge into the existing model, leading to improved performance and informed adaptations across the entire task sequence.
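The mask-merging step described above can be sketched as follows. This is a minimal, hypothetical illustration with toy binary neuron masks (names and values are ours, not the authors' implementation): the old task's mask is unioned with the current task's mask, so the old task may reuse neurons learned later.

```python
# Illustrative sketch of merging an old task's neuron mask with the
# current task's mask (union), so the old task can draw on new neurons.

def merge_masks(old_mask, new_mask):
    """Union two binary neuron masks (one entry per neuron)."""
    return [o | n for o, n in zip(old_mask, new_mask)]

old_task_mask = [1, 1, 0, 0, 0]   # neurons used by the old task
new_task_mask = [0, 1, 1, 1, 0]   # neurons used by the current task

merged = merge_masks(old_task_mask, new_task_mask)
print(merged)  # [1, 1, 1, 1, 0]
```

The impression crafts would then be used to fine-tune this merged mask for the old task, as the response describes.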
## 2. Response to "*related work*"
Thanks for this suggestion. [1*] proposes two distillation-based objectives for class incremental learning. This method can utilize the feature space structure of the previous model to preserve the representation of previous classes. Besides, a controlled transfer is introduced to approximate and condition the current model on the semantic similarities between incoming and prior classes, maximizing positive forward transfer and minimizing negative backward transfer. However, it cannot achieve a positive backward transfer. [40] analyzes the conditions under which updating the learned model of old tasks can be beneficial and lead to backward knowledge transfer. It should be noted that this method is based on SVD, which leads to high computational costs for high-dimensional data. [2*] is similar to WSN [3]. Nevertheless, it focuses on a few-shot setting, which is accomplished by jointly learning model weights and adaptive non-binary soft masks, with the major subnetwork minimizing catastrophic forgetting and the minor subnetwork preventing overfitting. Both [2*] and [39] fail to take backward knowledge transfer into consideration. The major contribution of [41] is the induction of the biological neural system into the "M-P neuron model", which sheds light on utilizing the neural network to handle complex tasks.
## 3. Response to "*Eq.(9)*"
Thanks for this concern. $\mathcal{L}_{IC}$ is the common cross-entropy loss.
## 4. Response to "knowledge transfer"
Thanks for this concern. The most important contribution of DSN is to enable knowledge transfer from the current task to previous tasks (i.e., backward knowledge transfer). The main additional computation costs arise in this procedure, due to impression crafting and fine-tuning on the most similar task. In practice, the complexity of backward knowledge transfer was considered during model design. Specifically, transferring newly learned knowledge to multiple similar old tasks can result in significant time costs (as well as memory costs) that outweigh the benefits. Thus, as stated in the paper, we only perform backward knowledge transfer to the most similar task. According to Table 2, our proposed DSN does not cause a large time consumption. We also provide additional experiments in which DSN transfers to more old tasks (cf. Response 1 to Reviewer bsXa). We find that this version can further improve the model's performance; however, the time cost increases significantly.
## 5. Response to "*some minors*"
Thanks for pointing out these minor issues; we will revise them in the next release.
## 6. Response to "*directions*"
Strictly speaking, our approach DSN falls under architecture-based. Unlike rehearsal-based solutions, we do not retain any samples from the old tasks.
## 7. Response to "*f*"
Thanks for this concern. $f$ denotes a hypernetwork without any mask.
## 8. Response to "*weight-wise mask & neuron-wise mask*"
Thanks for this question. A weight-wise mask associates each weight in the hypernetwork with a mask value that determines whether that weight is used; note that its size equals the number of parameters. In contrast, a neuron-wise mask associates each neuron in the hypernetwork with a mask value that determines whether that neuron is used.
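The size difference between the two mask types can be made concrete with a toy calculation for a single fully-connected layer (hypothetical helper names; the layer sizes follow the PMNIST MLP mentioned in Response 10):

```python
# Mask-size comparison for one fully-connected layer with n_in inputs
# and n_out output neurons.

def weight_mask_size(n_in, n_out):
    return n_in * n_out   # one mask entry per weight (weight-wise, WSN-style)

def neuron_mask_size(n_in, n_out):
    return n_out          # one mask entry per output neuron (neuron-wise, DSN-style)

# e.g. the first layer of the 784-2000-2000-10 MLP used for PMNIST:
print(weight_mask_size(784, 2000))   # 1568000 entries
print(neuron_mask_size(784, 2000))   # 2000 entries
```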
## 9. Response to "*layer embedding & layer mask*"
Thanks for this question. The layer embedding is a trainable vector, and each mask value is a score within the range of 0 to 1 that numerically determines whether the corresponding neuron should be activated.
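One plausible way to read this, sketched below under our own assumptions (sigmoid squashing and a 0.5 activation threshold are illustrative choices, not necessarily the paper's exact mechanism): the trainable embedding is mapped to (0, 1) scores, and a neuron is activated when its score crosses the threshold.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def embedding_to_mask(embedding, threshold=0.5):
    """Map a trainable per-neuron embedding to a binary activation mask."""
    scores = [sigmoid(e) for e in embedding]          # scores in (0, 1)
    return [1 if s >= threshold else 0 for s in scores]

layer_embedding = [2.0, -1.5, 0.1, -3.0]   # hypothetical trainable values
print(embedding_to_mask(layer_embedding))   # [1, 0, 1, 0]
```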
## 10. Response to "*network details*"
Thanks for this concern. We have presented the details of the network architecture in Appendix C.3, as shown below.
- MLP for PMNIST and RMNIST: We follow [39] and start with 784-2000-2000-10 neurons with ReLU activations.
- CNN for CIFAR-100 and TinyImageNet: We also follow [39] and extend a modified version of AlexNet for the first task. In more detail, it has three convolutional layers with 64, 128, and 256 filters, with 4×4, 3×3, and 2×2 kernel sizes, respectively, followed by two fully-connected layers of 2048 neurons each. We use rectified linear units as activations and apply a 2×2 max-pooling operation after the three convolutional layers.
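As a quick sanity check (our own illustrative helper, not part of the paper), the parameter count of the 784-2000-2000-10 MLP above can be computed as:

```python
# Count parameters of a fully-connected network: for each layer,
# weights (n_in * n_out) plus biases (n_out).

def mlp_param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

print(mlp_param_count([784, 2000, 2000, 10]))  # 5592010
```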
## 11. Response to "Algorithm 1"
Sorry for this confusion. According to the class definition in **Output Space Modeling**, each $B_i$ denotes the number of impressions for the $i$-th class of task argmax$(S_t)$, where $C_{argmax (S_t)}$ indicates that task argmax$(S_t)$ has $C_{argmax (S_t)}$ classes.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts to respond to all the concerns raised by the reviewers. Clarifications on the concepts and notations are encouraged to be added to the revised manuscript for better understanding. However, I still have some concerns about the role of data-free replay in transferring new task information to past tasks. For me, data-free replay is more likely a way to avoid catastrophic forgetting of past task knowledge after merging a new mask into the old mask, than a way to transfer new knowledge to the old task. Since one major contribution of this paper is positive backward knowledge transfer, I am reluctant to support acceptance at the current stage.
---
Reply to Comment 1.1.1:
Title: data-free replay
Comment: ## Response
Thanks for the comments and valuable time. We use the impressions obtained by our data-free replay to evaluate whether the merged subnetwork (as we treat the task network as the knowledge) can promote the old task. In our experiments, we obtain positive results (cf. BWT). In practice, even if we remove the data-free replay from DSN, DSN still does not suffer from the CF problem, thanks to the mask mechanism. In fact, catastrophic forgetting and backward knowledge transfer are not equivalent. As opposed to most work on overcoming catastrophic forgetting, this study centers on achieving backward knowledge transfer. However, migrating new knowledge to old tasks is a tricky endeavor because the principle of continual learning does not allow access to old samples.
Existing replay-based methods [1][2] have limitations in achieving forget-free performance and are susceptible to interference from other tasks. Drawing inspiration from the Lottery Ticket Hypothesis, our proposed DSN achieves forget-free performance by employing task-specific masks. Other architecture-based methods [3][4] are also forget-free, but they neglect backward knowledge transfer, and their task-specific masks remain fixed after task training. Our key observation is that while the parameters of neurons used in previous tasks must be protected and cannot be updated, there are trainable parameters that continuously adapt as new tasks arrive. To illustrate, consider $task_0$. In our DSN approach, optimal task-specific masks are assigned to $task_0$ during training. However, once $task_1$ is trained, the model parameters are updated, and there may be an opportunity to further optimize the architecture for $task_0$. Therefore, we retrain $task_0$ while keeping the parameters of neurons used in both $task_0$ and $task_1$ fixed, ensuring protection against catastrophic forgetting while allowing the model to assign additional neurons specifically to $task_0$, leveraging newly acquired knowledge. Previous methods have attempted similar approaches [5][6]; however, they preserve previous data in a replay buffer for retraining old tasks, which raises concerns regarding data privacy and memory overhead. To address this, we introduce our data-free replay module, which generates impression crafts that approximate the representation of past tasks [2][7]. If there are any further questions, please do not hesitate to ask; we are delighted to provide a demonstration.
[1] Tiwari, Rishabh, et al. Gcr: Gradient coreset based replay buffer selection for continual learning. In CVPR, 2022.
[2] PourKeshavarzi, Mozhgan, et al. Looking back on learned experiences for class/task incremental learning. In ICLR. 2021.
[3] Kang, Haeyong, et al. Forget-free Continual Learning with Winning Subnetworks, In ICML, 2022.
[4] Serra, Joan, et al. Overcoming catastrophic forgetting with hard attention to the task. In ICML, 2018.
[5] Ke, Zixuan, et al. Achieving forgetting prevention and knowledge transfer in continual learning. In NeurIPS, 2021.
[6] Ke, Zixuan, et al. Continual learning of a mixed sequence of similar and dissimilar tasks. In NeurIPS, 2020.
[7] Nayak, Gaurav Kumar, et al. Zero-shot knowledge distillation in deep networks. In ICML, 2019. | Summary: In this work, the authors explore task-incremental learning through the lens of the Lottery Ticket Hypothesis (LTH). They contend, primarily from an LTH perspective, that a distinct task necessitates merely a sparse collection of neurons, hence using only a compact sub-network for its operation. Subsequently, every new task can integrate itself within these compact sub-networks, thereby enabling the overall network to learn new tasks sequentially.
The methodology employed by the authors is intriguing. Initially, a hypernetwork is set up at random, followed by the gradual learning of the model's parameters in tandem with masks that are specific to each task. These masks play a crucial role, determining which neurons and corresponding weights should be utilized or rendered static for the impending task.
Notably, the authors strive to enhance previously acquired knowledge via a process they term 'backward knowledge transfer'. This involves determining the most similar task previously encountered, creating a data impression of that task, and subsequently refining the mask/weights of the preceding tasks. This concept stands as a key contribution of this study.
Strengths:
1. The organization of the paper is commendable, effectively guiding readers through the progression of ideas, hypotheses, methodology, results, and conclusions. This well-structured approach allows for a clear understanding of the study and its outcomes.
2. The concept of data-free backward transfer presented in this work is notably intriguing. This aspect, overlooked by existing methods, has been adeptly integrated into the authors' framework, contributing to its uniqueness and potential impact in the field.
3. The authors have executed a compelling array of experiments in their work. Their holistic analysis of their method in relation to Forward Knowledge Transfer, Backward Knowledge Transfer, capacity issues, efficiency, and sensitivity analysis, is thoroughly detailed and insightful. This comprehensive evaluation underscores the robustness of their methodology and enhances the overall credibility of their findings.
Weaknesses: 1. The present work bears a significant similarity to the research conducted by Kang et al. [1], which also advocates for a similar framework leveraging the Lottery Ticket Hypothesis (LTH) and employs binary masks to learn task-oriented sub-networks. It would be advantageous for the authors to highlight their unique additions, modifications, or contributions vis-à-vis this recent analogous work, as this would underscore the true value and novelty of the current work.
2. For each new subtask, the presented method learns a mask embedding corresponding to the task. The current manuscript would have been more complete if the authors had discussed the space complexity of the masked embeddings learned with respect to the tasks, comparing it to [1].
Minor:
1. A more concrete review/literature survey of data-free replay should have been performed. For example, somewhat related works like [2] and [3] have not been discussed. Nonetheless, I encourage the authors to explore more related works that have utilized data-free replay in various other contexts.
[1] Haeyong Kang, Rusty John Lloyd Mina, Sultan Rizky Hikmawan Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, Chang D. Yoo. Forget-free Continual Learning with Winning Subnetworks, In ICML, 2022.
[2] Liu, Huan, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, and Jin Tang. "Few-shot class-incremental learning via entropy-regularized data-free replay." In ECCV, 2022.
[3] Choi, Yoojin, Mostafa El-Khamy, and Jungwon Lee. "Dual-teacher class-incremental learning with data-free generative replay." In CVPR Workshops, 2021.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In connection with Objective 8, the authors introduce a capacity constraint to the current task, which may impact the task's performance due to the limitation it imposes on the neural network's capacity. I would be keen to hear an explanation from the authors on the extent to which this constraint influences the efficacy of the current task.
2. Considering the methodology used to measure the accuracy of task $argmax(S_t)$, I notice that the authors operate under the assumption of not having access to the previous task's data. Given this constraint, how do the authors determine the accuracy of the task post fine-tuning $argmax(S_t)$ and subsequently compare it with its preceding accuracy? Could they elaborate on their approach to evaluating accuracy under these specific conditions and how they ensure the validity of their comparisons?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: In relation to the proposed method, I understand that tasks are stored as parts of a subnetwork. This is an intriguing concept, yet I find myself questioning its scalability when considering a large number of diverse tasks. Given that the total upper capacity of the neural network is fixed, I am uncertain if the proposed method would accommodate scaling up effectively. It seems to me that a broad and varied set of tasks may exceed the inherent capacity limits of the neural network, potentially leading to capacity exhaustion or compromise in task performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Response to "*WSN*"
Thanks for this suggestion.
(1) DSN devises a neuron-wise mask mechanism to select neuron-affiliated weights for new task learning.
(2) DSN enables positive knowledge transfer in both forward and backward directions. In particular, the data-free replay mechanism in DSN regards the trained subnetwork as a past experience and uses it to craft impressions regarding the past samples, which does not require holding any actual samples related to past tasks.
The comparison details:
(1) WSN mainly relies on a weight-wise mask to select the optimal subnetwork from a hypernetwork, but the selected subnetwork is not a continuous structure of a layer-dependent neural network, because WSN selects only a portion of the weights attached to each neuron. WSN also maintains a binary mask whose size equals the number of parameters, resulting in significant resource consumption. In contrast, DSN devises a neuron-wise mask to choose the optimal subnetwork, which ensures the completeness of the subnetwork. Meanwhile, since our masks are neuron-level, they are lightweight. According to Table 3, DSN requires fewer mask entries.
(2) WSN has well addressed the CF problem, but WSN does not incorporate backward knowledge transfer. That is, newly acquired knowledge cannot help facilitate any old task. In contrast, DSN is capable of achieving positive gains on old tasks. Meanwhile, in backward knowledge transfer, we consider the data inaccessibility of the old tasks. And we use the data-free replay to effectively avoid this limitation.
Thus, DSN is significantly different from WSN. We believe that DSN offers a new paradigm that focuses more on positive backward knowledge transfer and considers the concern of data unavailability.
## 2.Response to "*space complexity*"
For instance, given a task network containing a two-layer MLP and a classifier, where each layer consists of $n$ neurons, the neuron-mask embedding for each layer has length $n$. Thus, the space complexity per layer is $\mathcal{O}(n)$, since the mask size equals the number of neurons. In contrast, WSN incurs $\mathcal{O}(n^2)$. According to Table 3, DSN requires fewer mask entries.
## 3. Response to "*minor*"
Thanks for your significant suggestions. In the next release, we will thoroughly revise all the typos and also engage in a comprehensive discussion of the related works.
## 4. Response to "*Objective 8*"
Thanks for this question. In Eq.(8), we set a hyperparameter $\eta$ to control the capacity preference for each new task. The higher the value of $\eta$, the lower the number of neurons activated for the new task. In particular, there is no capacity limit when $\eta =0$.
First, in our main experiment, as shown in Fig. 6, when $\eta = 0$ all neurons are utilized, yet the accuracy is the worst; evidently, the model cannot recruit new neurons to learn new knowledge as more tasks arrive. A larger $\eta$ means the model reserves more room for new task learning. However, we found that an excessively large capacity constraint does not lead to a significant improvement in accuracy; in other words, excessive reuse of old neurons can also cause underfitting when training new tasks.
Second, in Appendix D.2, we conducted another experiment with a varying number of tasks (up to 100) to validate performance, with the same hypernetwork settings as in the main experiment. As Fig. 10 shows, compared to HAT and WSN, the accuracy of all three methods decreases as the number of tasks increases, mainly because the hypernetwork is too small. Nevertheless, DSN always performs best as the number of tasks increases.
## 5. Response to "*old task accuracy*"
Thanks for this question. As we consider the unavailability of old samples for task argmax$(S_t)$, we are motivated by zero-shot learning and treat the trained subnetwork as the past knowledge of task argmax$(S_t)$; its output space is also already determined. Thus, we can craft the input space of task argmax$(S_t)$ using data-free replay. As these impression crafts are determined by the network of the old task, we use them to obtain a reference ("ground-truth") accuracy. Then, we merge the optimal mask of the current task into that of the most similar task, which allows the old task to share neurons used by the new task, enabling the transfer of new knowledge. Next, we use the impression crafts to update the mask of the old task. DSN keeps the updated mask for the old task only if the updated subnetwork shows a significant improvement in accuracy over this reference.
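The acceptance rule at the end of this response can be sketched as below. The numeric margin is our own illustrative assumption for "significant improvement"; the paper does not specify it here.

```python
# Hypothetical sketch: keep the old task's updated mask only if the
# crafted impressions score clearly better on the updated subnetwork
# than on the original one.

def keep_updated_mask(acc_reference, acc_updated, margin=0.5):
    """Accept the update only on a clear accuracy gain (percentage points)."""
    return acc_updated - acc_reference >= margin

print(keep_updated_mask(92.0, 93.1))  # True  -> keep updated mask
print(keep_updated_mask(92.0, 92.2))  # False -> revert to old mask
```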
## 6. Response to "*capacity limitation*"
Thanks again. As Fig. 4 and Fig. 11 show, as the number of tasks increases, the network does not reach its limit, indicating that DSN effectively utilizes acquired knowledge to handle new tasks. Additionally, we conducted further experiments to evaluate whether a progressive network can improve accuracy, enabling a random expansion of neurons when a new task arrives. The results indicate that the model does not significantly benefit from expansion, while the expansion leads to a substantial increase in parameters, as demonstrated below. Your insightful suggestion sheds light on a new perspective; we will explore how to effectively utilize newly added neurons in future work.
| Data | Fixed | Accuracy(%) | Layer1 | Layer2 | Layer3 |
| --- | --- | --- | --- | --- | --- |
| PMNIST | Yes | 98.24 | 2000 | 2000 | N/A |
| | No | 98.29 | 2273 | 2331 | N/A |
| RMNIST | Yes | 97.73 | 2000 | 2000 | N/A |
| | No | 97.75 | 2256 | 2385 | N/A |
| CIFAR100 | Yes | 75.17 | 64 | 128 | 126 |
| | No | 75.21 | 354 | 440 | 509 |
| TinyImageNet | Yes | 46.56 | 64 | 128 | 256 |
| | No | 46.58 | 391 | 345 | 576 |
---
Rebuttal Comment 1.1:
Title: Response to the "old task accuracy" still seems vague
Comment: Thanks to the authors for providing their response!
However, I am still finding it hard to comprehend how the crafted impressions are used to get the "ground-truth accuracy", especially given that these impressions are essentially pseudo-samples. In a similar vein, the authors in their response describe the following, "DSN decides to maintain the updated mask for the old task only if the updated subnetwork shows a significant improvement in accuracy over the ground-truth", again, the process of obtaining the "ground-truth accuracy" remains ambiguous.
---
Reply to Comment 1.1.1:
Title: old task accuracy
Comment: ## Response
Thank you for your valuable time and comments! After obtaining the optimal subnetwork for the current task $t$, we find its most similar task argmax$(S_t)$. We obtain a set of impressions for each class of task argmax$(S_t)$ via the optimization in Eq.(9). We then perform backward knowledge transfer by merging the masks of the two tasks and fine-tuning. Before that, in our implementation, we record two indicators as references ("ground truths"), namely the accuracy and loss of the impressions on the original subnetwork (the "old subnetwork") of task argmax$(S_t)$. When fine-tuning task argmax$(S_t)$, if the impressions are significantly more accurate on the updated subnetwork, or the loss is significantly reduced relative to the old subnetwork, then we decide to update the mask of task argmax$(S_t)$.
Strengths: - The proposed method achieves a better performance than other methods on the three studied benchmarks.
- Up to my knowledge, few papers addressed positive backward transfer in CL. It is nice to see more work addressing this aspect.
- Analysis of the reusability of the neurons over tasks is provided.
- Efficiency analysis is considered.
- Different metrics are evaluated Accuracy, BWT, and Trans, which assess different requirements in CL.
Weaknesses: - Backward transfer to one only past task is not that convincing. Multiple previous tasks could benefit from the new knowledge if they are similar enough. This is my main concern. Since backward transfer is one of the main contributions, I expect more investigation into that point.
- Despite the fact that most of the paper is easy to follow, Section 3.3 lacks an overview at the beginning that can link the following information to how it contributes to the method. It becomes more clear after reading Section 3.4.
[suggestion] Maybe the authors can consider reordering these two sections or clarify a bit more in Sec 3.3
- The paragraph explaining the backward transfer is very brief (Line 125-221), and some sentences are not clear. More elaboration could be useful. Algorithm 1 is helpful, though.
- An analysis that assesses the validity of the proposed similarity measure (even on a toy experiment) could be helpful.
- Minor: text in figures 3 and 4 is not readable.
- Minor: typo in the last line in Algo.1 “hyep”
- Some closely related works are missing. Not necessarily to empirically compare with them as you provide a comparison with one recent related method (WSN), but at least to have a discussion in the related work section. Examples of the missing works are below.
o Gurbuz, Mustafa B., and Constantine Dovrolis. "NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks." International Conference on Machine Learning. PMLR, 2022.
o Sokar, Ghada, Decebal Constantin Mocanu, and Mykola Pechenizkiy. "Spacenet: Make free space for continual learning." Neurocomputing 439 (2021): 1-11.
o Sokar, Ghada, Decebal Constantin Mocanu, and Mykola Pechenizkiy. "Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer Nature Switzerland, 2022.
o Wang, Zifeng, et al. "SparCL: Sparse continual learning on the edge." Advances in Neural Information Processing Systems 35 (2022): 20366-20380.
o Yin, Haiyan, and Ping Li. "Mitigating forgetting in online continual learning with neuron calibration." Advances in Neural Information Processing Systems 34 (2021): 10260-10272.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the contribution of each proposed components (neuron wise mask and data free replay) to the performance gain in DSN? Do you have an ablation for that?
- L 215 “Instead, we only allow the parameter update in the task-specific classifier (head) while freezing any parameters in the subnetwork.” This sentence is not clear. Does it mean that in backward transfer, you only update the head?
- Looking at Table 1 and Table 2 together and comparing WSN to DSN, do you think that the improvement gain in performance worths the additional costs (i.e., double the training cost in some cases). In other words, Are there specific cases in that one can definitely choose DSN over WSN?
- Is the proposed method applicable for more realistic scenarios i.e. class-incremental learning?
- Do you have some thoughts on how this approach can be further extended/improved to allow for backward transfer to multiple tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Limitations and social impact are not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Response to "*multiple previous tasks*"
Thanks for this concern. We consider that transferring new knowledge to multiple old tasks can incur time costs (as well as memory costs) that outweigh the benefits; thus, we only perform backward knowledge transfer to the most similar task. Here, we also provide experiments in which DSN transfers to more old tasks, as shown below. We find that this version can further improve model performance; however, the time cost increases significantly.
| Dataset | Multiple old tasks | Accuracy(%) | BWT(%) | Elapsed Time |
| --- | --- | --- | --- | --- |
| PMNIST | No | 98.24 | 0.01 | 2.43h |
| PMNIST | Yes | 98.28 | 0.03 | 4.75h |
| RMNIST | No | 97.73 | 0.02 | 2.18h |
| RMNIST | Yes | 97.88 | 0.06 | 3.95h |
| CIFAR100 | No | 75.17 | 0.02 | 1.21h |
| CIFAR100 | Yes | 75.34 | 0.07 | 10.92h |
| TinyImageNet | No | 46.56 | 0.04 | 1.54h |
| TinyImageNet | Yes | 46.61 | 0.06 | 8.78h |
## 2. Response to "*Sec. 3.3*"
Thanks for your insightful suggestions. In the upcoming release, we will present an overview of this section. In addition, we will thoroughly revise all the typos and also engage in a comprehensive discussion of the related works.
## 3. Response to "*Ablation study*"
Thanks for this question. In our forward knowledge transfer, the role of the neuron-wise mask is to select an optimal subnetwork architecture for the newly coming task. That is to say, it will affect the new task learning. As for data-free replay, it aims to produce impression crafts for backward knowledge transfer. Hence, this component will significantly affect the accuracy improvements of old tasks. In other words, it is directly related to backward knowledge transfer.
We provided the ablation study as follows.
| Dataset | Neuron-wise mask | Data-free replay | Accuracy(%) |
| --- | --- | --- | --- |
| PMNIST | No | Yes | 97.99 |
| | Yes | No | 98.13 |
| | Yes | Yes | 98.24 |
| RMNIST | No | Yes | 97.42 |
| | Yes | No | 97.65 |
| | Yes | Yes | 97.73 |
| CIFAR100 | No | Yes | 74.28 |
| | Yes | No | 74.81 |
| | Yes | Yes | 75.17 |
| TinyImageNet | No | Yes | 46.02 |
| | Yes | No | 46.41 |
| | Yes | Yes | 46.56 |
For the neuron-wise mask in the above table, if the choice is "No", we use the weight-wise mask instead. For data-free replay, if the choice is "No", we disable this module. The results show that removing the neuron-wise mask has a larger impact on model performance, which also suggests that our neuron-wise mask is better than the weight-wise mask. Moreover, removing the data-free replay component also degrades performance, which demonstrates that DSN enables knowledge transfer to old tasks.
## 4. Response to "*L 215*"
Sorry for the confusion. During backward knowledge transfer, we will not change any parameters in the hypernetwork as it could cause interference with other tasks. But, we update the masks of the most similar old task as well as the task-specific head. That is to say, the subnetwork architecture regarding the most similar old task can be adjusted while keeping the parameters fixed. In this way, this operation will not cause interference with other old tasks as well as the newly coming task.
## 5. Response to "WSN and DSN"
Thank you for this great question. We have the following explanation.
WSN mainly relies on the weight-wise mask to select the optimal subnetwork from a hypernetwork. In practice, the selected subnetwork is not a continuous structure of a layer-dependent neural network, because WSN selects a portion of the weights that are relevant to each neuron. As a result, the subnetwork selected from the hypernetwork cannot be separated into a single, complete network architecture. That is to say, every task in the continual learning system would need to download this huge hypernetwork in order to complete model inference, which may be unrealistic in future scenarios such as pervasive computing. In addition, WSN needs to maintain a binary mask whose size equals the number of parameters, which also results in significant resource consumption.
WSN and many other prior works have well addressed the CF problem, i.e., zero forgetting or forgetting-free, but they do not consider backward knowledge transfer well. That is, newly acquired knowledge cannot help facilitate any old task. In contrast, our DSN is capable of achieving positive gains on old tasks. Meanwhile, in the process of backward knowledge transfer, we also consider the data inaccessibility of the old tasks. And we use the data-free replay mechanism to effectively avoid this limitation.
Therefore, we believe that our DSN offers a new paradigm that focuses more on positive backward knowledge transfer and takes into account the concern of data unavailability for old tasks.
## 6. Response to "*class-incremental learning*"
Thanks for this question. In practice, our proposed DSN can apply to more contexts, including class-incremental learning. [1] also gives a detailed theoretical study showing that task-incremental learning can be naturally transformed into class-incremental learning with the help of out-of-distribution detection.
[1] Kim et al. A theoretical study on solving continual learning. NeurIPS 2022.
## 7. Response to "*future plan*"
Thanks again. We plan to devise an efficient mechanism that aims to craft representative impressions. Besides, we conduct backward transfer on past tasks in sequential order, sorting them based on their similarity to the current task. We monitor the performance of past tasks after backward transfer. If their performance improves, we proceed to the next task in the sequence. We continue this process until the performance of the old tasks no longer shows enhancement from knowledge transfer. This iterative approach ensures that we prioritize transferring knowledge to the most relevant tasks first.
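The similarity-ordered backward-transfer loop described above can be sketched as follows. This is a minimal illustration, not the paper's code: `similarity`, `transfer`, and `evaluate` are hypothetical callbacks standing in for the actual mask-similarity measurement, mask-merging/classifier update, and task evaluation.

```python
def sequential_backward_transfer(past_tasks, similarity, transfer, evaluate):
    """Visit past tasks in decreasing similarity to the current task and
    keep transferring knowledge while performance keeps improving.

    similarity(task) -> float score against the current task
    transfer(task)   -> applies backward transfer (updates mask/classifier)
    evaluate(task)   -> task accuracy, queried before and after transfer
    """
    improved = []
    # The most similar old task is tried first.
    for task in sorted(past_tasks, key=similarity, reverse=True):
        before = evaluate(task)
        transfer(task)
        after = evaluate(task)
        if after <= before:
            break  # stop once backward transfer no longer helps
        improved.append(task)
    return improved
```

The early `break` implements the stopping rule above: once an old task no longer benefits, less-similar tasks are not visited.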
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer bsXa
Comment: Thank you for your response and for providing extra results. I have read your rebuttal.
I am quite surprised that transferring the knowledge to multiple previous tasks led to a minor improvement in performance. Can the authors comment on that?
For the ablation study, it is nice to have a baseline that both components are not used such that we can assess the effect of each.
---
Reply to Comment 1.1.1:
Title: accuracy
Comment: Thank you for your valuable time and suggestions. First, the results in the table above are average accuracy rates. Second, during backward transfer to multiple old tasks, it is difficult to further improve accuracy via mask merging and classifier updating due to the decreasing similarity of the old tasks. That is, old tasks with lower similarity can hardly benefit from knowledge transfer.
With regard to the second recommendation, we need some time to prepare such a baseline. We will update the comments later (we have updated the table in the following comment. Thanks for your great suggestion!). | Summary: This paper focuses on task continual learning and attempts to enhance elastic knowledge transfer across tasks that arrive sequentially. With the help of masks, it achieves forward and backward knowledge transfer.
Strengths: This paper is well written and easy to follow. The proposed method can achieve backward knowledge transfer with the help of mask similarity.
Weaknesses: 1. Why can masks identify the similarity of tasks? The mask of the new task is not determined before learning, so how is knowledge transfer completed at the beginning?
2. During the forward transfer, the parameters of the old task were fixed to prevent catastrophic forgetting, while during the reverse transfer, the parameters of the old task were optimized. The two seem somewhat contradictory. Can you unify the two to improve learning efficiency?
3. The EWC and IMM algorithms used in the experiment are both class incremental learning. How can they be compared to Task incremental learning?
4. The operation of convolutional layers is somewhat different from fully connected layers, and the paper lacks effective descriptions of other structures. Does the convolutional part regard each convolutional kernel as fully connected in the experiments?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Response to "*mask & knowledge transfer*"
*(1) Forward knowledge transfer*: Thanks for this question. Our solution is to use the mask mechanism to determine the subnetwork architecture of each arrived task from the hypernetwork $\mathcal{H}$. Thus, any subnetwork is a subset of $\mathcal{H}$. For a newly coming task $t$, the neuron-wise masks regarding $t$ are dynamically changed during the training process, as they are conditioned on the layer embeddings (cf. Eq.(3)). In other words, the embeddings corresponding to the masks are also trained during the task training process, and the final masks for task $t$ are determined only when the training performance is optimal. As defined in Eq.(5), DSN's mask operation will only update the parameters that are not used in previous tasks. We regard the neurons used in previous tasks as synapses that only take on the role of passing messages between layers. Hence, the neurons reused from past tasks enable forward knowledge transfer to the new task.
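A minimal sketch of this masked update rule (our own illustration under simplifying assumptions, not the paper's actual Eq.(5)): only parameters belonging to neurons unused by previous tasks receive gradient updates, while reused neurons stay frozen and merely forward information.

```python
def masked_update(params, grads, used_by_old_tasks, lr=0.1):
    """One gradient step that freezes parameters of reused neurons.

    params: flat list of parameter values, one per neuron (simplified).
    grads: matching list of gradients for the current task.
    used_by_old_tasks: booleans; True means a previous task uses this
    neuron, so its parameter is kept fixed to avoid forgetting.
    """
    return [
        p if used else p - lr * g
        for p, g, used in zip(params, grads, used_by_old_tasks)
    ]
```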
*(2) similarity tasks*: Due to the unavailability of old samples, we cannot measure task similarity using the approach of [1], which relies on the distribution of data from previous tasks. In our solution, we treat the subnetwork determined by mask mechanism as knowledge rather than data. Since the subnetworks all originate from the same hypernetwork, with the optimization of Eq. (8), we enforce the new task to reuse more old neurons. Therefore, we measure whether two tasks are similar or not by mask similarity, which is actually subnetwork structure similarity.
[1] Ke Z, Liu B, Huang X. Continual learning of a mixed sequence of similar and dissimilar tasks. NIPS, 2020.
## 2. Response to "*backward knowledge transfer*"
Sorry for this confusion. Above we have explained the procedure for forward knowledge transfer; please allow us to explain backward knowledge transfer here. For a newly coming task $t$, we obtain its optimal masks after training convergence. As we claimed, the newly learned knowledge could be beneficial for past tasks. Thus, in backward knowledge transfer, we first measure task similarity via mask measurement (cf. Eq.(10)) to find the most similar old task (i.e., task argmax($S_t$)). Then, we employ our data-free replay to generate impression crafts. As claimed in the paper, these impressions are used to determine whether we need to adjust the subnetwork architecture of the most similar task. During backward knowledge transfer, we will not change any parameters in the hypernetwork, as that could cause interference with other tasks. But we can update the masks of old task argmax($S_t$) as well as the task-specific classifier. That is to say, the subnetwork architecture regarding task argmax($S_t$) can be adjusted while keeping the parameters fixed. In this way, this operation will not cause interference with other old tasks or with the newly coming task. Based on this interference concern, we cannot unify the two knowledge transfer procedures in our solution due to the sequential nature of the learning process.
## 3. Response to "*EWC & IMM*"
Thanks for this concern. EWC[1] and IMM[2] have been widely adopted as the baselines for Task incremental learning such as [3][4][5].
[1] Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan et al. "Overcoming catastrophic forgetting in neural networks." *Proceedings of the national academy of sciences* 114, no. 13 (2017): 3521-3526.
[2] Lee, Sang-Woo, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. "Overcoming catastrophic forgetting by incremental moment matching." *Advances in neural information processing systems* 30 (2017).
[3] Serra, Joan, Didac Suris, Marius Miron, and Alexandros Karatzoglou. "Overcoming catastrophic forgetting with hard attention to the task." In *International conference on machine learning*, pp. 4548-4557. PMLR, 2018.
[4] Qin, Qi, Wenpeng Hu, Han Peng, Dongyan Zhao, and Bing Liu. "Bns: Building network structures dynamically for continual learning." *Advances in Neural Information Processing Systems* 34 (2021): 20608-20620.
[5] Masana, Marc, Tinne Tuytelaars, and Joost Van de Weijer. "Ternary feature masks: zero-forgetting for task-incremental learning." In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 3570-3579. 2021.
## 4. Response to "*convolutional layers*"
Thanks for this concern. We depicted the complete architectures of the CNN in Appendix C.3. Following HAT and WSN, our CNN is a modified version of AlexNet containing three convolution layers and two fully-connected layers. Indeed, convolution layers are different from MLPs. In our experiments, we treat each filter in the convolution layers like a 'neuron' in an MLP, and use the mask operation to determine which filters will be activated for the newly coming task's training. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Building upon the principles of the Lottery Ticket Hypothesis, this paper introduces a hypernetwork model embedded with a series of competitive "ticket" sub-networks. Each of these sub-networks is designed to excel at its corresponding task, with a particular emphasis on knowledge transfer. Furthermore, this model promotes not just forward knowledge transfer, but also supports backward transfer of knowledge.
Strengths: Pros:
1. The DSN method facilitates not only forward knowledge transfer but also backward transfer, offering a more dynamic and comprehensive learning paradigm that closely mimics human cognitive processes.
2. The DSN method uses a neuron-wise mask and data-free memory replay, which can significantly save computational resources compared to maintaining a binary mask equal to the number of parameters.
3. By using the mask to select neurons that have been used in earlier tasks and keeping their corresponding weights unchanged, the DSN method effectively addresses the issue of catastrophic forgetting that is common in continual learning scenarios.
4. The data-free replay mechanism treats the trained subnetwork as a past experience and uses it to craft impressions of past samples, which bypasses the need to retain actual past samples, thus addressing privacy concerns and computational overhead.
Weaknesses: Cons:
1. The DSN method may involve complex processes and computations, such as measuring mask similarity scores and fine-tuning the most similar tasks, which might make the method computationally intensive and challenging to implement.
2. While task-specific masks help in achieving better model performance, they add another layer of complexity to the training process and may lead to overfitting if not handled correctly.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Response to "*complex processes*"
*(1) Similarity measurement*: Thanks for this concern. First, the mask similarity measurement is a simple vector-based computation that uses the cosine distance to obtain similarity scores between different tasks. Second, our masks are neuron-level, which further decreases the computation cost. For instance, given a task network containing a two-layer MLP and a classifier, where each layer consists of $n$ neurons, the neuron mask length for each layer is $n$. Thus, the measuring complexity between two tasks is $\mathcal{O}(n)$ due to the element-wise computation. In contrast, the complexity of the existing weight-wise mask mechanism is $\mathcal{O}(n^2)$.
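As a concrete sketch of this measurement (assuming flat binary neuron masks; not the paper's exact code), the cosine similarity between two tasks' masks is a single O(n) pass over the neurons:

```python
import math

def mask_cosine_similarity(mask_a, mask_b):
    """Cosine similarity between two tasks' neuron-wise binary masks.

    Each mask is a flat list with one entry per neuron, so comparing
    two tasks costs O(n) in the number of neurons, versus O(n^2) for
    weight-wise masks of a fully-connected layer.
    """
    dot = sum(a * b for a, b in zip(mask_a, mask_b))
    norm_a = math.sqrt(sum(a * a for a in mask_a))
    norm_b = math.sqrt(sum(b * b for b in mask_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # a task that uses no neurons is similar to nothing
    return dot / (norm_a * norm_b)
```

For binary masks this score grows with the fraction of neurons the two subnetworks share, which is what makes it usable as a proxy for task similarity.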
*(2) Fine-tune the most similar task*: Thanks for this concern. First, the most important contribution of DSN is to enable knowledge transfer from the current task to the previous task (i.e., backward knowledge transfer). The main additional computation costs happen in this procedure due to the impression crafting and the most similar task fine-tuning. In practice, the complexity of the backward knowledge transfer has been considered during the model design investigation. Specifically, transferring newly learned knowledge to multiple similar old tasks can result in significant time costs (as well as memory costs) that outweigh the benefits. Thus, as we claimed in the paper, we only make backward knowledge transfer to the most similar task. According to Table 2, we can find that our proposed DSN does not cause a huge time consumption. Correspondingly, how to further compress the training time is the focus of our future research. In addition, compared to more recent state-of-the-art methods such as WSN, our number of parameters is significantly less than theirs, making the network more lightweight.
*(3) Implementations*: In our Supplementary Material, our source codes were available and the implementation details were provided in the Appendix.pdf file.
## 2. Response to "*masks*"
Thanks for this important concern. Several previous studies such as HAT and WSN demonstrate that the mask mechanism can be used to determine a subnetwork for each task from a large hypernetwork. As suggested by achievements in network pruning and the Lottery Ticket Hypothesis, a hypernetwork is usually over-parameterized and may even bring negative impacts on task performance compared with a smaller network. That is to say, many neurons or weights do not make any contribution to task performance. In a continual learning context, ours and previous studies such as WSN consider that using the mask can discover a more compact subnetwork for each task. To reduce the effect of chance and randomness such as overfitting, our main experiments report results averaged over multiple training rounds (cf. Table 1).
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the authors' response, but I still got a few concerns as follows:
1. **Complex Processes**:
- **Similarity Measurement**:
- While it's acknowledged that the mask similarity measurement is a simple vector-based computation using cosine distance, the effectiveness of cosine similarity in capturing true mask similarity remains a concern. This is especially true given the potential high dimensionality of some neural networks.
- **Fine-tune the Most Similar Task**:
- The rationale for performing backward knowledge transfer to only the most similar task is understood, but it does prompt further inquiry. For instance, how do you ensure that the most similar task is the most beneficial for knowledge transfer? Transferring to only one task might miss out on critical shared knowledge between multiple tasks.
2. **Masks**:
- The introduction of mask mechanisms, inspired by other studies such as HAT and WSN, and its relation to the Lottery Ticket Hypothesis is intriguing. However, a key point of contention here is the assumption that a hypernetwork is usually over-parameterized. This assumes a universal pattern across different tasks and datasets, which might not be the case.
- The use of averaging results across multiple training rounds to mitigate concerns like overfitting is commendable. Nevertheless, it would be valuable to see a more comprehensive investigation into how these masks affect model robustness, especially when introduced to out-of-sample or adversarial data.
**Conclusion**:
While the proposed DSN method is no doubt innovative and offers promise, there remain critical considerations that need addressing for it to be widely accepted and implemented in diverse scenarios.
---
Reply to Comment 1.1.1:
Comment: Thanks for the comments and valuable time.
**#Similarity Measurement**
As each mask entry corresponds to a neuron in the task network, mask measurement is a vector-level operation while weight-based measurement is a tensor-level one. In a high-dimensional context, in addition to the network scale, weight-based similarity measurements, and even similarity measurements based on task data, are affected by the input dimensions, while the mask measurement complexity is only related to the task network scale.
We also conducted an experiment to examine the effect of our similarity measurement. Following [1], we obtain the task similarity between two tasks using their real samples, and we obtain the mask similarity from our DSN. Fig. 1 shows the similarity results on PMNIST after training task 10. We can observe that our mask similarity serves a similar function to the task similarity in [1]. However, as we claimed before, we cannot use the task similarity directly due to the unavailability of old samples. We would have submitted a PDF containing the visualized figure; however, we missed the deadline. Sorry for the oversight.
| | Mask Similarity(%) | Task Similarity(%) |
| --- | --- | --- |
| Task 1 | 46.39 | 45.92 |
| Task 2 | 53.26 | 49.32 |
| Task 3 | 57.49 | 54.43 |
| Task 4 | 63.94 | 59.68 |
| Task 5 | 69.42 | 66.05 |
| Task 6 | 72.06 | 75.69 |
| Task 7 | 73.41 | 72.21 |
| Task 8 | 74.75 | 78.79 |
| Task 9 | 78.39 | 80.15 |
[1] Ke Z, Liu B, Huang X. Continual learning of a mixed sequence of similar and dissimilar tasks[J]. Advances in Neural Information Processing Systems, 2020, 33: 18493-18504.
**#Mask**
In our Response 1 to Reviewer bsXa, we have provided additional experiments. We consider that transferring new knowledge to multiple old tasks can result in significant time costs (as well as memory costs) that outweigh the benefits. Thus, we only make backward knowledge transfer to the most similar task. Herein, we also provide experiments showing that DSN can transfer to more old tasks, as below. We find that this version can further promote model performance; however, the time cost is significantly increased. | null | null | null | null | null | null |
Annotator: A Generic Active Learning Baseline for LiDAR Semantic Segmentation | Accept (poster) | Summary: This work benchmarks several point selection approaches for label-efficient LiDAR point cloud semantic segmentation. Their proposed criterion leads to better results than existing selection approaches and good generalization with few annotations. They use several settings depending on the accessibility of auxiliary data to overcome the "cold-start" problem.
Strengths: - the proposed criterion is sensible and leads to good results
- the experiments are extensive
- the definition of three different setting (no auxiliary data, pretrained model, auxiliary annotated data) is helpful
- active learning is a critical problem for the industrialization of ML methods and is not studied nearly enough
Weaknesses: - the entire contribution boils down to the "voxel confusion degree" selection criterion. Yet the authors present the entire field of active learning as a contribution
- the quality of writing is quite low, with many misused technical terms (softmax cross-entropy) and vague statements (active learning [is] an optimum paradigm).
- the authors only compared their approach to 2 selection criteria [27,76], but could have used many other approaches, such as [25, 35, 58, 59, 78, 84]. Otherwise, they need to explain why these approaches do not apply
- The results seem dubious. Annotating only 5 voxels (how many points total?) leads to such strong performance in a setting with many classes, even when using 40-million-parameter networks trained from scratch with no particular measures taken to avoid the extreme overfitting bound to occur?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1) How do you explain that large networks can be trained with cross entropy and only a handful of points? How many epochs are you using?
Q2) Why not compare to any of the other DA approaches [25, 35, 58, 59, 78, 84]
S1) proof read the entire text looking for mistakes, vague and imprecise statements.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: not given
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the comments from the reviewer KrDz. We have answered all the questions and sincerely hope they can address the concerns.
**Q1: the entire contribution boils down to the "voxel confusion degree" selection criterion. Yet the authors present the entire field of active learning as a contribution.**
*Motivation*: Recently, there has been significant interest in achieving label-efficient LiDAR semantic segmentation. Both AL within the same data distribution and DA across different data distributions hold promise as solutions to alleviate the manual annotation burden. Nonetheless, **the absence of a standardized baseline** could hinder the advancement of research. Further, AL coupled with DA has great practical significance, which has been explored in 2D image classification. Yet, **it is still unclear how to make it work well in 3D domains**.
*Purpose*: This paper aims to present a simple and general active learning baseline for LiDAR semantic segmentation via the voxel-centric selection. With its simplicity and strong performance across various backbones (e.g., MinkNet, SPVCNN, SalasNet, PolarNet, etc.), regardless of in distribution or out of distribution setting (i.e., AL, ASFDA, and ADA), and robustness over simulation-to-real and real-to-real scenarios, we hope this baseline can facilitate future research.
*Contribution*: We would like to emphasize that the main contributions lie in three aspects.
- a label acquisition strategy (VCD) is **more robust and diverse** to select samples efficiently under a domain shift;
- a voxel-centric online active learning can largely reduce the labelling cost of enormous point clouds. Particularly, only requiring **1,000x fewer annotations** can reach a satisfactory performance;
- **generally applicable** for various network architectures (voxel-, range-, and BEV-view, etc), settings (in distribution and out of distribution), and scenarios (simulation-to-real and real-to-real).
**Q2: the quality of writing is quite low, with many misused technical terms (softmax cross-entropy) and vague statements (active learning [is] an optimum paradigm)**
Thank you. We will fix minor mistakes and revise paper thoroughly.
**Q3: the authors only compared their approach to 2 selection criteria [27,76], but could have used many other approaches, such as [25, 35, 58, 59, 78, 84]. Otherwise, they need to explain why these approaches do not apply.**
This question is very interesting and valuable. **First**, we would like to clarify that our goal is to build a simple and general baseline, in which we benchmark several selection strategies, i.e., Random, Entropy[76], and Margin[27], since they are typically used to verify a proposed selection strategy. **Second**, as mentioned before, there is no standardized baseline. Thus, it might be unfair to compare with some works across different scenarios and settings. Here, we compare our method with other works [25, 35, 58, 59, 78, 84] as much as possible on the basis of the benchmarks presented in this paper. These works can be categorized into three groups:
1. *Active learning for 2D image classification [58, 84].* Core-set [58] proposes a diversity-based sample selection strategy for image classification, where a clustering algorithm is adopted, and DUC [84] estimates uncertainty from the perspective of a Dirichlet distribution. We adapt these two selection strategies into our baseline and compare with them on two simulation-to-real tasks based on MinkNet. The detailed results are reported in `Table R2 of the one-page PDF`. Please note that our method continues to deliver the most favorable outcomes in both cases.
2. *Active learning for 3D LiDAR semantic segmentation [25, 35, 59].* Lidal [25] proposed a frame-level selection strategy. Less [35] and SSDR-AL [59] first introduce a pre-segmentation stage (a heuristic algorithm in [35] and superpoints in [59]) and then conduct the labeling process on outdoor and indoor scenes, respectively.
We first compare with Lidal and Less on the outdoor KITTI dataset as follows.
method|budget|MinkNet|SPVCNN
-|-|-|-
Lidal|1%|47.5|48.5
Less|0.1%|52.8|51.1
Ours|0.1%|53.7|52.8
Our method is simpler yet achieves better performance. Then, following [59], we conducted an experiment on the indoor S3DIS dataset. Detailed results and analyses can be found in the response to Q6 for Reviewer dFZX.
3. *Squeezeseg [78]* is an end-to-end road-object segmentation method with CNNs and CRFs and synthetic data (GTA-V) is used to train the model. It is not applicable to our setting.
**Finally**, we would like to emphasize that your suggestion is very helpful, and we will pay attention to it when revising the paper.
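For reference, the Entropy [76] and Margin [27] criteria benchmarked in this baseline follow the standard uncertainty definitions on softmax outputs; the toy sketch below assumes those standard definitions and is our illustration, not the paper's implementation.

```python
import math

def entropy_score(probs):
    """Predictive entropy of a softmax distribution.

    Higher entropy means the model is more uncertain, so entropy-based
    selection labels the samples with the largest score first.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

def margin_score(probs):
    """Gap between the two most confident classes.

    A smaller margin means the model is more uncertain, so margin-based
    selection labels the samples with the smallest score first.
    """
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]
```

Random selection needs no score at all, which is why these three make a natural set of reference strategies for a new criterion.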
**Q4: the results seem dubious. Annotating only 5 voxels (how many points total?) leads to such strong performance in a setting with many classes. Even when using 40 million parameters-networks trained from scratch and no particular measures taken to avoid the extreme overfitting bound to occur?**
Thanks for the valuable question. On average, the total number of annotated points (5 voxels) in a frame (64,000 points in total) is about 35. `Figure R2 of the one-page PDF` provides curves of training loss and validation loss on task of SynLiDAR->POSS under AL (train from scratch) and ASFDA (train from an auxiliary model) settings. Both models are based on MinkNet (21.7 MB). The training loss and validation loss have been steadily decreasing and the final validation loss is smaller than the training loss as well, verifying that there is no extreme overfitting. Another finding is that the validation loss of ASFDA is smaller than that of AL, which confirms the power of auxiliary model. We will add this analysis in the revision.
**Q5: How many epochs are you using?**
The pre-training process uses 10 epochs and others (AL/ASFDA/ADA) use 50 epochs.
**Q6: limitations: not given**
Actually, we have discussed the limitations in the Conclusion. We will make it more clear in a separate section.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: The authors have addressed all my queries and reservations.
While I initially perceived the contribution as too limited, several points have swayed my opinion:
- the authors make a good point relative to the lack of standardized baseline, and their method makes a good stepping stone in that direction.
- they added more comparisons that still show the merits of their approach.
- the method, while simple, yields superior results. This makes its simplicity an asset.
I also recognize that some of my earlier concerns arose from my own minsunderstanding (specifically regarding the 5 voxels equating to 35 points).
Considering the above factors, I am now leaning towards endorsing this paper and have subsequently revised my rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your suggestions
Comment: We really appreciate your comments and reply. We believe that our work can establish a simple and general baseline for label-efficient LiDAR semantic segmentation, and hope our observation can inspire more subsequent label-efficient works. | Summary: This paper introduced a baseline for active learning called Annotator for LiDAR point cloud semantic segmentation. The paper includes an analysis of various active selection strategies, including random selection, entropy-based selection, margin-based selection, and a novel strategy called voxel confusion degree (VCD). The study investigates three different setups: active learning (AL), active source-free domain adaptation (ASFDA), and active domain adaptation (ADA). Through experiments conducted on multiple Simulation-to-Real and Real-to-Real benchmarks, the proposed method demonstrates its effectiveness.
Strengths: 1. The paper is well-written and easy to follow.
2. Considering the significant challenge of annotating point-wise point cloud data in LiDAR segmentation, exploring active learning approaches for this task is valuable and important.
3. The idea of annotator is simple but effective.
4. The authors conducted comprehensive benchmarks across different setups, including AL, ASFDA, and ADA. This extensive evaluation covers several novel investigations that have not been explored before.
Weaknesses: 1. This paper lacks in-depth insights and analysis. What is the fundamental intuition behind it? Why do these active selection strategies contribute to performance gains? Furthermore, what sets the proposed VCD apart and enables it to achieve superior performance improvements?
2. The selection of voxel grids plays a crucial role in determining the final performance, particularly when it comes to random selection. However, this paper did not include error bars in relation to the random seed for multiple experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why did the annotator achieve significantly smaller performance gains in SynLiDAR->KITTI in Table 2?
2. A minor issue: The per-class IoU performance of PolarMix is provided in its supplementary material, which is missing in Tables 2 and 3.
3. Please also refer to the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper discusses the limitations of their work in the conclusion. However, it is suggested to have a separate section dedicated specifically to addressing these limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer a35W for the very constructive comments. We are glad that the reviewer acknowledges that the task is valuable and important, the idea is simple but effective, and the investigation is novel. Here we address the biggest concern raised by the reviewer, i.e., the fundamental intuition behind voxel-centric selection, and hope our response resolves it.
**Q1: This paper lacks in-depth insights and analysis. What is the fundamental intuition behind it?**
Thanks for the valuable question. We would like to clarify that the intuition of this paper is to establish a simple and general baseline for label-efficient LiDAR semantic segmentation. To mitigate the annotation burden in model training, most existing paradigms delve into active learning (in distribution) or domain adaptation (out of distribution) separately. However, there is **no unified setting and baseline**, which hinders research. Indeed, the two classes of approaches are deeply intertwined, but **little work has been done to consider their combination in 3D domains**. This paper aims to deliver a simple and general online active learning baseline built on voxel-centric selection. It unifies active learning and domain adaptation for LiDAR semantic segmentation. Moreover, sufficient and convincing empirical results provide insights for label-efficient 3D applications.
**Q2: Why do these active selection strategies contribute to performance gains?**
First, we would like to clarify that image/frame-, region-, and point-based selection strategies already exist in previous research. Active learning aims to identify informative instances for labeling via a selection strategy. Thus, given a limited budget, a good selection strategy picks the most informative instances for labeling; training the model on these instances is what produces the performance gains.
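For concreteness, the classic uncertainty-based strategies mentioned in the reviews (Entropy, Margin) can be sketched as scoring functions over softmax outputs. This is an illustrative sketch with made-up inputs, not the paper's implementation:

```python
import numpy as np

def entropy_score(probs):
    """Predictive entropy per instance: higher = more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def margin_score(probs):
    """Negative top-2 margin per instance: higher = more uncertain."""
    top2 = np.sort(probs, axis=-1)[..., -2:]
    return top2[..., 0] - top2[..., 1]

# Toy softmax outputs for three instances over four classes.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident
    [0.40, 0.35, 0.15, 0.10],  # ambiguous top-2
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain
])
# Both scores rank the uniform prediction as most informative to label.
assert np.argmax(entropy_score(probs)) == 2
assert np.argmax(margin_score(probs)) == 2
```

An active learner would label the instances (here, voxels) with the highest scores first.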
**Q3: What sets the proposed VCD apart and enables it to achieve superior performance improvements?**
Our VCD design is inspired by an observation: the traditional AL setting often focuses on learning a model from scratch rather than adapting under domain shift. In practice, models are trained in an auxiliary domain and deployed in a new domain of interest. In this case, existing AL strategies are less effective since uncertainty estimates on the new domain may be miscalibrated. By contrast, our label acquisition strategy (VCD) is tailored to estimate category diversity instead of uncertainty within a voxel, which is more robust under domain shift. Specifically, we first generate a pseudo label for each point within a voxel based on the model prediction. Second, we count the percentage of each category within the voxel. Third, we take the entropy of this statistical information as the voxel's score. Finally, we select voxels with high scores. The **higher the score, the more predicted classes within a voxel**, and we believe this helps to train the model after annotation. As shown in Fig. 5, we clearly see that the true class distribution of SemanticPOSS is exactly a long-tail distribution while our method can pick out more voxels that contain tail classes. Also, per-class results in Tables 3-4 of the main paper and `Table R1 of the one-page PDF` show the superiority of our method in both tail and majority classes.
**Q4: The selection of voxel grids plays a crucial role in determining the final performance, particularly when it comes to random selection. However, this paper did not include error bars in relation to the random seed for multiple experiments.**
This is a valuable question regarding the reliability of Annotator's results. Actually, the current results in our experimental section are the mean accuracy across three runs with different random seeds, and we include error bars on the SynLiDAR $\to$ KITTI task using three random runs in `Figure R1 of the one-page PDF`. We will add error bars on all benchmarks in the revision.
**Q5: Why did the annotator achieve significantly smaller performance gains in SynLiDAR->KITTI in Table 2?**
This question is very interesting and valuable. Please note that the performance gain (0.1% under AL / 2.5% under ASFDA / 2.8% under ADA in SynLiDAR $\to$ KITTI in Table 2) is measured against the state-of-the-art baseline method. We think the smaller numerical improvement is attributable to several factors. First, there are challenging differences in data distribution and annotation quality between the synthetic SynLiDAR dataset and the real-world KITTI dataset. Second, for simplicity, we keep all hyperparameters unchanged when changing the backbone from MinkNet to SPVCNN.
**Q6: A minor issue: the per-class IoU performance of PolarMix is provided in its supplementary material, which is missing in Tables 2 and 3.**
We apologize for missing per-class IoU performance of PolarMix. Here we report them in the following tables and will add them in the revision.
SynLiDAR $\to$ KITTI (MinkNet)
method|car|bi.cle|mt.cle|truck|oth-v.|pers.|b.clst|m.clst|road|park.|sidew.|oth-g.|build.|fence|veget.|trunk|terra.|pole|traf.|mIoU
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
PolarMix|76.3|8.4|17.8|3.9|6.0|26.6|40.8|15.9|70.3|0.0|44.4|0.0|68.4|14.7|69.6|38.1|37.1|40.6|10.6|31.0
SynLiDAR $\to$ POSS (MinkNet)
method|pers.|rider|car|trunk|plants|traf.|pole|garb.|buil.|cone|fence|bike|grou.|mIoU
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
PolarMix|32.6|39.1|25.0|11.9|64.2|5.8|29.6|15.3|44.8|13.3|23.8|10.7|79.0|30.4
Compared with PolarMix, a strong DA method, we observe that traditional AL is more powerful than DA since it is allowed to query a limited amount of data for labeling. This paper shows that, given limited annotations, DA (i.e., ASFDA & ADA) can achieve on-par performance with fully-supervised counterparts.
**Q7: Limitations: it is suggested to have a separate section dedicated specifically to addressing these limitations.**
Thanks for the valuable suggestion. We will add a separate section to discuss the limitations in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: The response has addressed most of my concerns. Given that the authors emphasize the significance of this work as a baseline for future studies, could you please provide information regarding the potential release of the code? If so, when is the expected release date?
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback on our response.
Comment: Dear Reviewer a35W,
We're glad to hear that your concerns have been addressed. We have included the code for review in the preliminary submission. In terms of releasing the code, we are actively working to prepare the codebase for public distribution. We understand the value of reproducibility in research and are committed to making the code available to the community.
At this stage, we expect to release the code within the next month. We will ensure that the code is well documented and easy to use, so that other researchers can build on our work and continue to contribute to the field. We appreciate your interest in our code release and will keep you updated on its progress.
Thank you for your understanding and continued interest in our research.
Best regards,
Submission2070 Authors. | Summary: This work proposes a general and efficient data annotation pipeline, namely Annotator, to label LiDAR data for semantic segmentation. Specifically, the proposed method introduces a voxel-centric online selection strategy to determine which voxels should be annotated by humans. Voxel confusion degree (VCD) is thus proposed based on the classification labels predicted by a semantic segmentation network. To prove its efficiency, the authors also conduct experiments on domain adaptation, using 1,000 times fewer annotations.
Strengths: This work proposes a new criterion, voxel confusion degree (VCD), to evaluate the certainty of a voxel. Using this criterion, the authors appear to need far fewer annotations to achieve better performance.
The experiments support the claim of this paper: the proposed method uses the least annotations and achieves better performance.
Weaknesses: 1. In Fig. 2, (ii) is labeled ASFDA (source-free domain adaptation), yet stage (1) is pre-trained with source data. This is a bit confusing.
2. The VCD is well defined, but how to use it to determine the next voxel to annotate is not clear. Should we choose a voxel with a larger VCD? Why? The motivation or insights are missing.
3. Algorithm 1 is not very informative; it reads more like a generic active learning pipeline.
4. The proposed method Annotator can only be applied to voxelized data. How does the size of the voxels affect the number of annotation rounds?
5. The compared baselines all use voxel-based annotation. How about labeling points in BEV? In other words, there are many ways to annotate LiDAR data. Why is voxel the best? Or is the proposed method best suited to voxel-based partition?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1 whether the proposed method can be applied to indoor semantic segmentation?
2. Why choose voxel based partition?
3. In Tables 1, 2, 3 The oracle performance (full annotations) should be provided for comparisons.
4. Can the proposed method be applied to other tasks, such as detection or instance segmentation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the concerns properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer dFZX for the very valuable comments. We are glad that the reviewer acknowledges that our new criterion can achieve better performance with very few annotations. Below we address the concerns about our paper and hope our responses resolve them.
**Q1: In Fig. 2 (ii), it is ASFDA source-free domain adaptation but in (1) pre-trained with source data. This is a bit confusing.**
We apologize for the confusion. To be clear, only the source model (not the source data) is available in source-free domain adaptation. Fig. 2 illustrates both (1) the source model pre-training process and (2) the adaptation and active learning process (ii). Thus, in (1) the source data is used only to pre-train the source model; it is not available when adapting the model in (2).
**Q2: The VCD is well defined. How to use it to determine the next annotated voxel is not clear. Should we choose a voxel with larger VCD? Why is that? The motivation or insights are missing.**
Yes, a voxel with a larger VCD will be chosen. Recall that we first obtain the pseudo label $\hat{y_i}$ of each point $x_i$ and then divide the points within a voxel $v_j$ into $K$ clusters: $v_j^{<k>}=\{x_i\in v_j,\hat{y_i}=k\}$. We can then collect statistical information about the categories in a voxel. Next, we take the entropy calculated on the percentage of each distinct class as the voxel confusion degree of each voxel, that is, VCD($v_j$)$=-\sum_{k=1}^{K}\frac{|v_j^{<k>}|}{|v_j|}\log \frac{|v_j^{<k>}|}{|v_j|}$, where $|\cdot|$ denotes the number of points in a set. The insight is that if there are many predicted classes within a voxel, annotating it should be especially helpful for training the model. As shown in Fig. 5, we clearly see that the true class distribution of SemanticPOSS is exactly a long-tail distribution while our method can pick out more voxels that contain tail classes. Also, per-class results in Tables 3-4 of the main paper and `Table R1 of the one-page PDF` show the superiority of our method in both tail and majority classes.
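As a minimal sketch (our own illustration, not the paper's code), the VCD of a single voxel can be computed directly from the pseudo labels of its points:

```python
import numpy as np

def vcd(pseudo_labels, num_classes):
    """Voxel confusion degree: entropy of the per-class fraction of
    pseudo-labeled points inside one voxel, i.e. |v_j^{<k>}| / |v_j|."""
    counts = np.bincount(pseudo_labels, minlength=num_classes)
    frac = counts[counts > 0] / counts.sum()
    return float(-np.sum(frac * np.log(frac)))

# A "pure" voxel (one predicted class) scores 0; a voxel whose points
# spread over many predicted classes scores higher and is selected first.
pure = vcd(np.array([2, 2, 2, 2]), num_classes=4)
mixed = vcd(np.array([0, 1, 2, 3]), num_classes=4)
assert pure == 0.0 and mixed > pure
```

Selection then amounts to ranking all candidate voxels by this score and annotating the top ones within the budget.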
**Q3: Algorithm 1 is not very informative but more like active learning pipeline.**
Good suggestion. Most of the lines in Algorithm 1 describe the generic active learning pipeline, on top of which we add the additional settings, namely ASFDA and ADA (line 2), and show how to perform voxel-centric selection and model training (lines 5-6). We will consider describing our algorithm in text instead.
**Q4: The proposed method Annotator can be only applied to voxelized data. How does the size of voxels affect the final annotation rounds?**
This question is very interesting and valuable. The voxel size $\Delta$ is set to 0.25 by default for the selection process and 0.05 for the training process. In the case of SemanticPOSS, for example, there are about 30,000 voxels per frame. Five rounds of selection are conducted, one voxel per round, resulting in a 1,000x reduction in annotations. When the voxel size is small, the number of voxels becomes large, which increases the number of selection rounds. Here, we conduct experiments with different voxel sizes (from 0.05 to 0.35) for the selection process. The following results are on SynLiDAR $\to$ POSS based on MinkNet.
$\Delta$|0.05|0.1|0.15|0.2|0.25|0.3|0.35
-|-|-|-|-|-|-|-
\# voxel per frame|64973|54543|43795|36091|30414|25992|22539|
\# selection rounds|11|9|7|6|5|4|4|
mIoU in AL|39.6|42.9|43.9|44.2|44.9|45.1|44.8
mIoU in ASFDA|40.0|46.2|46.0|48.0|48.2|48.3|48.0
It can be seen that the performance of the large voxel grid ($\Delta$>0.2) is more robust to the noise and sparsity of the point clouds. We will add the results to the revision.
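To illustrate why a smaller $\Delta$ inflates the number of voxels (and hence the number of selection rounds for a fixed per-round budget), here is a toy occupancy count via coordinate quantization; the random point cloud and grid sizes are made up for illustration, not taken from the paper's pipeline:

```python
import numpy as np

def count_voxels(points, delta):
    """Number of occupied voxels when points are quantized at grid size delta."""
    keys = np.floor(points / delta).astype(np.int64)
    return len(np.unique(keys, axis=0))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(5000, 3))  # toy "frame" of 5,000 points
# A coarser grid merges points into fewer voxels, so the same per-round
# budget covers the frame in fewer selection rounds.
assert count_voxels(pts, 0.25) < count_voxels(pts, 0.05)
```

The same quantization-by-floor idea underlies voxel-based selection in general: each point maps to an integer grid key, and all points sharing a key are annotated together.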
**Q5: The compared baselines are all voxel based annotation. How about labeling points in BEV? In other words, there would be many ways to annotate LiDAR data. Why voxel is the best? Or is the proposed method best suitable for voxel based partition?**
Thanks for the valuable question. First, we would like to clarify that image/frame-, region-, and point-based selection strategies exist in previous research. The first two require an offline stage, which may be infeasible at large scales. The last one is costly due to the sparsity of point clouds. By contrast, voxel-based selection queries salient and exemplar regions and annotates all points within each region, which is more efficient and flexible. It can be easily applied to different methods, e.g., point-, voxel-, range-, and BEV-view. Beyond the results of the voxel-view method reported in the main paper, we have added experimental results on a range-view method, SalsaNext, and a BEV-view method, PolarNet, in `Table R1 of the one-page PDF`. Our method still brings a large improvement using either a range- or BEV-view method with a limited budget.
**Q6: whether the proposed method can be applied to indoor semantic segmentation?**
Yes, our method can be applied to indoor semantic segmentation. Following SSDR-AL [59], we have conducted experiments on the S3DIS dataset. We report the percentage of labeled points required to achieve 90% accuracy for different methods based on RandLA-Net as follows.
method|Random|Entropy|Margin|SSDR-AL [59]|Ours
-|-|-|-|-|-
budget|40.9%|46.7%|43.0%|11.7%|9.9%
It is observed that our method needs 1.8% fewer labeled points than SSDR-AL to achieve 90% of the performance of the fully-supervised counterpart.
**Q7: In Tables 1, 2, 3 The oracle performance (full annotations) should be provided for comparisons.**
Actually, "Target-only" in Tables 1, 2, 3, 4 denotes the oracle performance (full annotations in the target domain). Compared with the oracle performance, we find that our Annotator with 5 voxels per frame being labeled can obtain on-par or better performance.
**Q8: Can the proposed method be applied to other tasks, such as detection or instance segmentation?**
This question is very interesting and valuable. Please refer to the `response to Q3 for reviewer zYAC`. | Summary: This paper presents Annotator, a general and efficient active learning baseline for LiDAR semantic segmentation, which can adapt to different settings and scenarios with minimal annotation cost. Annotator consists of a voxel-centric online selection strategy that exploits the local topology and structure of point clouds to query the most informative voxel grids for annotation. Annotator can also leverage an auxiliary model to address the cold start problem. The paper evaluates Annotator on two datasets (SynLiDAR and SemanticKITTI) with two network architectures (MinkNet and SPVCNN), and shows that Annotator can achieve on-par performance with the fully supervised counterpart using 1,000x fewer annotations and outperform existing methods under various active learning settings. The paper states that Annotator is a simple and general solution for label-efficient 3D perception.
Strengths: **Originality**: The paper proposes three ideas: a voxel-centric online active learning baseline, a label acquisition strategy (VCD), and a framework that is generally applicable across different network architectures.
**Quality**: The paper provides a thorough **experimental evaluation** of Annotator on several simulation-to-real and real-to-real LiDAR semantic segmentation tasks, using different network architectures and baselines. The paper also conducts **ablation studies** to analyze the impact of different components of Annotator, such as voxel size, selection strategy, and auxiliary model. The paper demonstrates that Annotator can achieve **on-par or superior performance** with the fully supervised counterpart using 1,000x fewer annotations, and significantly outperform other state-of-the-art methods.
**Clarity**: The paper also provides sufficient background information and related work to situate the contribution of Annotator in the context of existing literature on LiDAR perception, active learning, and domain adaptation. The Supplementary Material is well-written and contains enough details and information as needed.
**Significance**: The paper addresses an important and challenging problem of label-efficient LiDAR semantic segmentation, which has many applications in autonomous driving, robotics, and 3D scene understanding.
Weaknesses: **Major Issues:**
- **Insufficient novelty and contribution**: this paper performs an analysis of several common selection strategies, e.g., Random, Entropy, and Margin, all of which were proposed in previous research. The newly proposed VCD strategy lacks justification for its design and contains too few details (L195-L199). The pipelines of the distinct active learning settings (e.g., AL, ASFDA, and ADA) seem natural and basic.
- **Insufficient results for experiments:** Although the authors state in the main text, "***Experimentally***, we find that the large voxel grid is also more robust to the noise and sparsity of the point clouds", they provide no experimental results of different $\Delta$ in their main text and Supplementary Material.
- **Insufficient justifications**: For example, about **Voxel-centric** selection, some justifications are missing in this paper. Why choosing voxel-based selection rather than other methods, e.g., point-based, range-image-based, BEV, etc (L101-102)? Any advantages and limitations of **Voxel-based** methods?
**Minor Issues:**
- **Readability**: Some figures and text in figures (e.g., Figure 6 and Figure A3-A4) are too small for readability. Authors should attach more detailed images/visualization to the supplementary material.
- **Variables undefined** although they are obvious in meaning: e.g., $X_t$ in L155, $X_s$ and $Y_s$ in L157.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Is the term '*cell*' (L190) the same meaning as '*voxel*' (L189)? If they're interchangeable, it might be better to use the same word '*voxel*'; if not, it's a good idea to explain their differences, relations, or the definition of '*cell*'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Authors should be rewarded that the limitations and potential negative societal impact are explicitly mentioned in the Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer N28Y for the appreciative and constructive comments. We are encouraged that the reviewer recognized our three original ideas, thorough experiments, sufficient background, and well-written supplementary material. Below, we respond to the issues and hope that our responses address the concerns.
**Q1: Insufficient novelty and contribution**
We would like to emphasize that the main contributions lie in three aspects.
- a label acquisition strategy (VCD) that is **more robust** and selects **more diverse** samples efficiently under domain shift;
- a voxel-centric online active learning baseline that largely reduces the labeling cost of enormous point clouds. In particular, it reaches performance close to the fully-supervised counterpart while requiring **1,000x fewer annotations**;
- **general applicability** across different network architectures (voxel-, range-, and BEV-view, etc.), settings (in distribution and out of distribution), and scenarios (simulation-to-real and real-to-real).
**Q2: this paper performs an analysis of several common selection strategies, e.g., Random, Entropy and Margin, which are all proposed in previous research.**
Indeed, selection strategies such as Random, Entropy, and Margin typically serve as baseline methods to verify the effectiveness of a proposed method. In this work, we introduce a novel label acquisition strategy (VCD) and a voxel-centric online active learning baseline. Therefore, we first test the common strategies within our voxel-centric selection (the response to Q6 gives justifications), and then compare VCD against them to show its superiority. In other words, these common selection strategies are not technical contributions of the paper but are necessary to form our baseline.
**Q3: The newly proposed VCD strategy lacks justification for its design, and contains too few details (L195-L199).**
Our VCD design is inspired by an observation: the traditional AL setting often focuses on learning a model from scratch rather than adapting under domain shift. In practice, models are trained in an auxiliary domain and deployed in a new domain of interest. In this case, existing AL strategies are less effective since uncertainty estimates on the new domain may be miscalibrated. By contrast, our label acquisition strategy (VCD) is tailored to estimate category diversity instead of uncertainty within a voxel, which is more robust under domain shift. Specifically, we first generate a pseudo label for each point within a voxel based on the model prediction. Second, we count the percentage of each category within the voxel. Third, we take the entropy of this statistical information as the voxel's score. Finally, we select voxels with high scores. The **higher the score, the more predicted classes within a voxel**, and we believe this helps to train the model after annotation.
**Q4: The pipelines of distinct active learning settings (e.g., AL, ASFDA, and ADA) seem natural and basic.**
This question is very interesting and valuable. To mitigate the annotation burden in model training, most existing paradigms delve into active learning (in distribution) or domain adaptation (out of distribution) separately. However, there is **no unified setting and baseline**, which hinders research. Indeed, the two classes of approaches are deeply intertwined, but **little work has been done to consider their combination in 3D domains**. This paper aims to deliver a simple and general online active learning baseline built on voxel-centric selection. It unifies active learning and domain adaptation for LiDAR semantic segmentation. Moreover, sufficient and convincing empirical results validate the necessity of these distinct settings.
**Q5: Insufficient results for experiments: provide no experimental results of different Δ.**
We agree with the reviewer, and thus conduct experiments with different Δ while keeping the same budget for the selection process on SynLiDAR->POSS. The results are as follows:
Δ|0.05|0.1|0.15|0.2|0.25|0.3|0.35
-|-|-|-|-|-|-|-
AL|39.6|42.9|43.9|44.2|44.9|45.1|44.8
ASFDA|40.0|46.2|46.0|48.0|48.2|48.3|48.0
It can be seen that the performance of the large voxel grid is more robust to the noise and sparsity of point clouds.
**Q6: Insufficient justifications: For example, about Voxel-centric selection, some justifications are missing in this paper. Why choosing voxel-based selection rather than other methods, e.g., point-based, range-image-based, BEV, etc (L101-102)? Any advantages and limitations of Voxel-based methods?**
Thanks for the valuable question. First, we would like to clarify that image/frame-, region-, and point-based selection strategies exist in previous research. The first two require an offline stage, which may be infeasible at large scales. The last one is costly due to the sparsity of point clouds. By contrast, voxel-based selection queries salient and exemplar regions and annotates all points within each region, which is more efficient and flexible. It can be easily applied to other methods, e.g., point-, voxel-, range-, and BEV-view. Beyond the results of the voxel-view method reported in the main paper, we have added experimental results on a range-view method, SalsaNext, and a BEV-view method, PolarNet, in `Table R1 of the one-page PDF`. Our method still brings a large improvement using either a range- or BEV-view method. Additionally, a drawback is that one may need to tune the voxel size for different scenarios. We find that a large voxel grid (Δ>0.2) is more robust for outdoor scenes in our experiments.
**Q7: Minor issues: readability & variables definition.**
Thank you. We will improve the readability of the paper and add a table with all the variables and their detailed descriptions.
**Q8: Is the term 'cell' (L190) the same meaning as 'voxel' (L189)?**
Yes, both "cell" and "voxel" denote the smallest unit of selection. We will use the same term "voxel" through the paper.
---
Rebuttal Comment 1.1:
Title: A good paper
Comment: Thank you for additional experiments. The authors have addressed all my concerns and questions.
After reviewing the rebuttal, I am satisfied with the justifications provided for the voxel-centric online active learning approach. The additional ablation study on voxel size is useful for understanding the impact of the hyperparameter. I agree that the voxel-based selection strategy provides an efficient way to query salient regions in large 3D point clouds.
I believe this is a solid contribution. The voxel-centric online active learning approach could be valuable for label-efficient LiDAR segmentation. Therefore, I am inclined to accept this paper and have raised my final rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your help
Comment: We really appreciate your valuable suggestions and fruitful discussions. We hope this work can build a standard baseline and contribute to label-efficient LiDAR segmentation research. | Rebuttal 1:
Rebuttal: Dear reviewers and AC,
We sincerely thank all the reviewers for their positive comments and helpful feedback that have certainly helped improve the quality of this paper. We have uploaded the responses w.r.t. each reviewer together with `the one-page PDF`.
In response to the comments, we have carefully revised and enhanced the manuscript with the following additional discussions and experiments:
1. Add discussion about the cost (computation v.s. annotation) balance in AL paradigm.
2. Provide additional experiments that apply the proposed method to both range-view and BEV-view methods.
3. Provide additional comparisons with existing methods including two AL methods for image classification, two AL methods for outdoor LiDAR semantic segmentation and one for indoor scenes.
4. Add the error bars for the results.
5. Add additional experiments to analyze the effect of voxel size.
6. Provide the analysis of training the network with only a handful of points.
7. We shed light on some potential applications of our Annotator, such as indoor semantic segmentation, LiDAR object detection task.
We hope our responses sincerely address all the reviewers' concerns.
Thank you very much for your time and consideration.
Best regards,
Submission2070 Authors.
Pdf: /pdf/37f66480afcf61b4eb8415c512064ced2dfe84ef.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a voxel-centric online active learning baseline that efficiently reduces the labeling cost of enormous point clouds and effectively facilitates learning with a limited budget. The contribution of this paper can be summarized in three aspects:
1. A voxel-centric online active learning baseline that efficiently reduces the labeling cost of enormous point clouds and effectively facilitates learning with a limited budget.
2. A novel label acquisition strategy, voxel confusion degree (VCD), that requires 1,000 times fewer annotations while reaching segmentation performance close to that of the fully supervised counterpart.
3. General applicability: it works for different network architectures (e.g., MinkNet, SPVCNN, etc.), in-distribution or out-of-distribution settings (i.e., AL, ASFDA, and ADA), and simulation-to-real and real-to-real scenarios with consistent performance gains.
Strengths: 1. Active learning is not entirely new in this field, but reducing the data volume requirement by a factor of roughly 1,000 is still very impressive.
2. Active learning coupled with domain adaptation has great practical significance, but little work has been done to consider the problem in 3D domains. The Annotator aims to minimize human labor in a new domain, regardless of whether samples from an auxiliary domain are available or not.
3. Methodologically, Annotator adopts a voxel-based representation for structured LiDAR data, which is different from the scan-based representation in UniDA3D.
4. Furthermore, Annotator can perform online active sampling that is more efficient than UniDA3D in terms of computation cost.
Weaknesses: 1. The cost of developing an AI-based solution includes multiple stages: 1) data collection; 2) data annotation; 3) model training; 4) model deployment and integration. One of the most costly stages, both in time and money, is annotation. The AL paradigm proposes to trade computation for annotation, using computation to reduce the amount of data that needs human labeling. In this paper, an additional pre-training phase is added; it is unclear how to evaluate the cost balance, e.g., the increased cost of computation vs. the reduced cost of human annotation. In some sense, this is not a weakness/limitation of this particular paper, but rather applies to the whole AL paradigm.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Voxel-based methods are one of the major and popular representations for LiDAR perception tasks; however, there do exist other representations, such as range-view and point-based methods, where the concept of a voxel does not exist. The question is: how does the proposed method apply to non-voxelization-based methods?
2. Segmentation tasks require dense point-wise annotation, but detection tasks only require, to some extent, much sparser box-level annotations. How does the proposed method apply to detection tasks? One would expect that the data-volume reduction persists but is not as impressive as 1000x.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No potential negative societal impact was noticed by the reviewer, to the best of their knowledge.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer zYAC for the detailed summary and constructive comments. We are glad that the reviewer acknowledges that the problem has great practical significance, the method is novel and generally applicable, and the experiments are very impressive. Here we answer all the questions and hope they can address the concerns.
**Q1: In this paper, an additional pre-training phase is added, it is unclear how to evaluate the cost balance, eg, increased cost of computation vs reduced cost of human annotation. In some sense, this is not the weakness/limitation of this particular paper, but rather applies to the whole AL paradigm.**
We agree with the reviewer that the trade-off between computation cost and annotation cost is a key problem in active learning. In this paper, we mainly focus on reducing the cost of human annotation while reaching segmentation performance close to that of the fully supervised counterpart. Taking the task of SynLiDAR $\to$ SemanticKITTI (KITTI) as an example, the simplest setup is performing active learning within the KITTI dataset. In this case, no extra pre-training phase is included, but the performance is **less than satisfactory**. The reason might be the lack of prior information to select an initial annotated set (i.e. the *cold-start problem*). To address this, we utilize open-access datasets, especially synthetic datasets, to train an auxiliary model via a pre-training phase that serves as a warm-up stage, allowing for a smart selection of the data in the first round. Importantly, the pre-training phase runs for only a few epochs and is conducted only once, so its cost (both time and money) is **very low**. Here we provide a detailed analysis of the cost.
phase|total epoch|running time (hours)|mIoU
-|-|-|-
pre-train on SynLiDAR|10|2.34|22.0
AL on KITTI|50|18.04|53.7
ASFDA on SynLiDAR $\to$ KITTI|50|18.39|54.1
ADA on SynLiDAR $\to$ KITTI|50|28.48|**57.7**
This demonstrates that our method can achieve a good balance between high performance and low cost (computation & annotation). In the future, we will explore more efficient ways to reduce the cost of both computation and annotation. We will add these results in the revision.
**Q2: voxel based methods is one of the major and popular representations for lidar perception tasks, however there do exists other representations such as range view and point based methods, where the concept of voxel does not exist. question is, how does the proposed method apply to non-voxelization based methods?**
Thanks for the valuable question. The main contribution consists of a plain voxel-centric online active learning baseline and a novel label acquisition strategy. As a general baseline, the proposed method can be easily applied to non-voxelization-based methods. Here, we have conducted experiments on a range-view method, SalsaNext [a], and a BEV-view method, PolarNet [b]. The per-class results on the task of SynLiDAR$\to$POSS under the AL setting with a budget of only 10 voxels are shown in the following tables.
SalsaNext:
method|pers.|rider|car|trunk|plants|traf.|pole|garb.|buil.|cone|fence|bike|grou.|mIoU
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
Random|22.8|10.4|30.9|15.6|66.6|8.3|5.5|0.0|57.2|10.6|26.6|40.6|74.7|28.4
Entropy|33.1|18.6|32.8|20.1|64.7|11.2|5.5|12.7|52.6|4.3|40.2|45.2|76.8|32.1
Margin|33.6|28.0|29.8|24.5|61.3|20.6|12.5|18.4|48.0|0.0|27.7|38.2|71.1|31.8
**Ours**|39.7|31.8|32.0|26.2|64.7|17.7|11.8|13.7|53.9|13.1|40.9|45.1|76.5|**35.9**
Target-only|52.7|40.2|39.2|28.1|71.5|28.3|18.7|8.0|66.1|16.7|50.1|51.0|79.3|42.3
PolarNet:
method|pers.|rider|car|trunk|plants|traf.|pole|garb.|buil.|cone|fence|bike|grou.|mIoU
-|-|-|-|-|-|-|-|-|-|-|-|-|-|-
Random|40.8|0.1|36.6|3.1|74.0|1.3|17.8|0.0|69.3|0.0|50.3|50.5|76.1|32.3
Entropy|50.3|11.8|44.5|10.0|71.0|19.1|13.3|0.0|63.6|0.0|45.4|48.8|77.9|35.0
Margin|42.8|24.3|22.0|19.4|64.2|17.5|20.1|4.0|54.7|0.0|33.0|35.4|64.1|30.9
**Ours**|55.9|39.2|44.4|22.4|70.3|28.7|18.6|6.9|64.3|21.7|51.9|51.7|76.2|**42.5**
Target-only|62.3|51.8|66.3|22.8|75.5|29.4|21.8|4.8|74.9|46.1|61.3|57.2|80.8|50.4
It can be seen that our method still brings a large improvement using either range- or BEV-view backbones with only the limited budget. However, **the performance gains are smaller than those of the voxel-view counterparts**. Also, to achieve 85% of the fully-supervised performance, the required budget is twice as large. We suspect that some annotations from voxel-centric selection might be invalid in other non-voxelization representations. We will add these discussions in the revision for better understanding.
[a] Cortinhal et al. Salsanext: Fast, uncertainty-aware semantic segmentation of lidar point clouds. In ISVC 2020.
[b] Zhang et al. Polarnet: An improved grid representation for online lidar point clouds semantic segmentation. In CVPR 2020.
**Q3: segmentation tasks require dense point-wise annotation, but detection tasks only require, to some extent, much sparser box level annotations, how does the proposed method apply to detection based tasks? one would expect that the data volume reduction persists but not as impressive as 1000x.**
This question is very interesting and valuable. For active learning in the 3D object detection task, most strategies are offline: a few frames are first selected from the entire dataset, and then all boxes in each frame are annotated manually. Our Annotator, on the other hand, is an online strategy that allows querying and annotating within each frame.
Therefore, some changes need to be made to adapt to the detection task: 1) frame-level selection should be used; 2) VCD should be reformulated as follows: in each frame, obtain a pseudo-label for each box, collect statistics on the predicted categories of all boxes, and compute VCD based on the percentage of boxes belonging to each category. In other words, in the original algorithm, a frame plays the role of a voxel and a box plays the role of a point within the voxel. We will explore the application to other LiDAR perception tasks in the future. | null | null | null | null | null | null |
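The confusion-based scoring described above can be sketched as follows. This is an illustrative reading of the rebuttal, not the paper's exact formula: the entropy-style `confusion_degree` function and its name are assumptions, standing in for whatever statistic the paper computes over the predicted-category percentages.

```python
from collections import Counter
import math

def confusion_degree(pseudo_labels):
    """Illustrative confusion score for one voxel (or, in the detection
    variant, one frame): the entropy of the predicted-category distribution
    over its points (or boxes). The paper's exact VCD formula may differ."""
    counts = Counter(pseudo_labels)
    total = sum(counts.values())
    fractions = (c / total for c in counts.values())
    # Higher entropy -> predictions disagree more -> better query candidate.
    return -sum(p * math.log(p) for p in fractions)

# A region whose predictions are confidently one class scores 0 and is
# skipped; one split across classes scores higher and is queried first.
assert confusion_degree(["car"] * 10) == 0.0
assert abs(confusion_degree(["car"] * 5 + ["rider"] * 5) - math.log(2)) < 1e-9
```

The same function applies unchanged to either granularity: pass point-wise pseudo-labels for a voxel, or box-wise pseudo-labels for a frame.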
Mechanic: A Learning Rate Tuner | Accept (poster) | Summary: This paper focuses on a method (MECHANIC) for tuning the learning rate of any given base optimizer, compatible with any base optimizer along with a learning rate schedule*. Let u_1, u_2, ..., u_t be the steps of the base optimizer up until now; then MECHANIC chooses an s_t such that the neural network weights are set to w_{init} + s_t (\sum_i u_i). They do this by leveraging a similar scheme developed for online convex optimization. They implement this method with some minor changes to simplify the implementation and improve the convergence speed (by using momentum and a variant of weight decay).
The authors show that in experiments on masked language modeling and image classification, using MECHANIC leads to faster optimization (although it does sometimes lead to worse generalization).
*One thing to note is that this is not a replacement for learning rate schedulers. In the CIFAR10 plots (Fig 2), there is still a need for learning rate decay. Rather, it seems to help over the tuned baselines for any given learning rate schedule.
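The iterate construction summarized above can be sketched in a few lines. This is a minimal illustration of w_{init} + s_t (\sum_i u_i) only, not the full MECHANIC algorithm; the `scale_fn` callable is a hypothetical stand-in for the online learner that actually produces s_t.

```python
def rescaled_iterate(w_init, update_sum, new_update, scale_fn, t):
    """One step of the rescaled-iterate scheme: the base optimizer proposes
    update u_t, and a learned scalar s_t is applied to the running SUM of
    all updates so far (not to u_t alone)."""
    update_sum = [a + u for a, u in zip(update_sum, new_update)]  # sum_{i<=t} u_i
    s_t = scale_fn(t)                                             # learned scale factor
    w_t = [w + s_t * d for w, d in zip(w_init, update_sum)]       # w_init + s_t * sum
    return w_t, update_sum

# With a constant scale, every base update is effectively multiplied by s;
# MECHANIC's contribution is learning s_t online instead of tuning it.
w, acc = rescaled_iterate([0.0, 0.0], [0.0, 0.0], [1.0, -2.0], lambda t: 0.1, 1)
assert w == [0.1, -0.2]
```

Note how this matches the intuition given later in the rebuttals: the scale multiplies the slowly changing sum of steps, which is more stable than rescaling each individual step.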
Typos:
Line 32: The end quote is flipped.
Line 17 in algorithm: s_{t+1,n} should be s_{t+1,i}.
Line 113: Text is missing.
Table 3, last line: MAdamW should be M-LION.
Strengths: Improving over tuned baselines is an important contribution. The authors do so on large-scale deep learning benchmarks such as BERT and JFT-300M. It is a bonus that the authors are able to do so by using a theoretically motivated algorithm.
Weaknesses: The results would have been more impressive if the authors had used more recent baselines such as Mosaic ML’s BERT codebase.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could the authors provide some intuition about the fact that s_t is multiplied to the sum of the steps taken by the network till now rather than the next step (as is done in learning rate schedulers).
2. It is surprising that the learned value of s (Figures 1 and 3) is usually << 0.1. For comparison against tuned learning rates, I would have expected s to be not significantly smaller/larger than 1. Does this suggest that the learning rates being compared to could have been reduced by a factor of 10 without significant effects? Could the authors provide some intuition about the learnt values of s?
3. The main body says that lamba is always set to .01 while Table 2 says that lambda=.1 in one of the cases. Was lambda tuned for MECHANIC? If so, was weight decay tuned for the base optimizers?
4. I may have missed the justification referred to in “This seems like a significant issue to disregard, but we will provide mathematical justification presently.”. Could the authors point where this is? Also, I think the first line of the proof of 3.1 is assuming this i.e. that g_t is independent of s_t.
5. Line 100: What is “s”? The previous lines only had s^{o}.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments and work reviewing our paper! We have included answers to your questions below, and we’d be happy to elaborate further.
Intuition for scaling by the sum of the steps: At a high level, the sum of the steps is a more “stable” value because it changes slowly. Individual steps, in contrast, change direction very quickly and so make it difficult to stabilize on the correct learning rate. In a technical sense, this manifests in analyses of algorithms that attempt to learn the scale factor applied to each step often needing to solve optimization problems with high variance or Lipschitz constants and so are unable to ensure quick convergence.
The scale factor is applied to the base learning updates with no learning rate (or, if you prefer, a learning rate of 1.0). This is why s is smaller than 1.
We did tune lambda for mechanic, as well as weight decay for the base optimizers. We found that without lambda, mechanic still performed well, but not quite as well (especially on smaller datasets). Please see the response to reviewer mCwi and the ablation plots included in the general response for more detail.
The justification is provided in Theorem 1, for which the regret of the base algorithm is measured with gradients evaluated at rescaled iterates rather than the original base iterates. We are not sure what you mean by the first line of the proof of 3.1: what is 3.1? We don’t think we ever assumed any kind of independence: in fact, our proofs do not assume any kind of stochastic structure at all and hold even for completely adversarial gradient or loss sequences, which is typical in online convex optimization. Specifically, our analysis has the following overall structure:
First, by convexity one has $\mathbb{E}\left[\sum_{t=1}^T F(x_t) -F(\mathring{x}) \right]\le \mathbb{E}\left[\sum_{t=1}^T \langle g_t, x_t - \mathring{x}\rangle\right]$.
Second, we prove that for *any* sequence of vectors $g_1,\dots,g_t$, the quantity $\sum_{t=1}^T \langle g_t, x_t - \mathring{x}\rangle$ is small. In particular, this part of the analysis actually does not need the $g_t$ to be gradients of anything - they can be arbitrarily and even adversarially generated. Please check out the response for reviewer 7tau for more intuition behind the analysis.
Line 100: $s$ is the scale value used in the base implementation of SGD, as described in equations (1) and (2). $\mathring{s}$ is the “correct” scale value that we should have used.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: >3.1
I meant theorem 1.
> "The justification is provided in Theorem 1, for which the regret of the base algorithm is measured with gradients evaluated at rescaled iterates rather than the original base iterates. "
> "Second, we prove that for any sequence of vectors"
Does this point to a gap between the theorem and the application? Or do you mean the theorem is more general than needed?
> "or, if you prefer, a learning rate of 1.0"
I see, so it starts with a higher learning rate than the base algorithm and then learns to decrease the learning rate in a near optimal way. Then why is the schedule needed at all i.e. why does MECHANIC not decrease the learning rate automatically?
---
Reply to Comment 1.1.1:
Comment: "Does this point to a gap between the theorem and the application? Or do you mean the theorem is more general than needed?"
The theorem is more general than needed. It holds for arbitrary sequences of vectors, when in reality we only require it to hold for sequence of vectors that are stochastic gradients of a loss. This is actually pretty common in the analysis of optimization algorithms, although it seems very unintuitive at first blush - you'd think that the supposedly harder setting makes you lose something, but it turns out that the achieved convergence results cannot be improved even if you do assume stochastic structure. Basically, it turns out that the "worst case" sequence of vectors usually actually tends to be a simple stochastic sequence and so you don't gain much by making the stochastic assumption. Does this also clear up the confusion about independence in the first line of the proof?
"I see, so it starts with a higher learning rate than the base algorithm and then learns to decrease the learning rate in a near optimal way"
Not quite: it starts with a very small learning rate scale of $s_{init}$ (say 1e-8), and then very quickly increases the value to reach the "correct" scale - so quickly that it appears nearly immediate in the plots. In practice, it tends to actually overshoot a bit and then come back down.
You should compare $s$ with the scale factor employed after tuning (i.e. the maximum learning rate when a schedule is used), which is usually smaller than 0.1 as you observed. One might hope that mechanic would also learn to produce a schedule as well, but in practice including the schedule seems better (and our theorem only shows that it can find the single best scale for any given schedule, not that it will find an entire schedule). | Summary: This paper proposes a parameter-free technique to tune the learning-rate scale factor automatically, which can be applied to any given base optimization algorithm to match its performance given carefully tuned hyper parameters. This approach is mainly empirical in nature, though grounded in reduction from recent advancements in theoretical understanding of online convex optimization. The implementation is also "scale-free", i.e. invariant to scaling the stochastic gradients $g_t$ by a constant. The authors provide experimental evidence to support their claims, including both training from scratch as well as transfer learning, and across domains and model architectures.
Strengths: With the rapid rise in both demand and costs to train large models, the compounding effects of hyper-parameter tuning transitioned from being a technical inconvenience to a true bottleneck. While there have been some advancements in parameter-free optimization, any practical contribution to this field is highly valuable. A general-use, easy to implement extension to existing optimizers that act as a force multiplier and improves upon their results falls comfortably into this category.
In addition to providing some theoretical justification to their method, the authors also provide a detailed, easy-to-follow background to the field and its main results thus far. The paper also include some interesting intuitions by the authors, instead of post-hoc explanations of empirical findings.
The experiments performed by the authors include various domains and model architecture, and notably include both pre-training from scratch as well as fine-tuning which are known to behave differently.
Weaknesses: As stated by the authors themselves: their "primary contribution is not new theoretical development, but instead the translation between theory and practice". Since this paper is focused mainly on empirical evidence rather than new theoretical results, the burden of proof grows larger. While the authors do perform various experiments, there is key information that is missing to fully convince in the efficacy of the approach.
1. The experimental setting is lacking some details. Namely, how were the hyper-parameters for the base algorithm chosen? Following existing procedures while altering some hyper-parameters (e.g. dropping the NSP loss) may affect the results. Additionally, the given hyper-parameters are not guaranteed to be optimal in the first place. To make the claim that MECHANIC helps in achieving competitive results to carefully tuned optimizers, the base optimizers should indeed be carefully tuned.
2. Moreover, statistical significance of the results is not assured, as no measure for variance in the results is given. Very often, the advantage of the MECHANIC based training is small enough to potentially be attributed to the inherent noise in the evaluations.
3. It is not clear that all models converged in the presented results, as there are missing details. For example, Table 1 provides fine-tuning results of BERT-like models on GLUE, which underperform the results reported by Devlin et al. 2018 and Liu et al. 2019. This may be due to the difference in pre-training corpus, or due to under-training of the models. In the latter case, additional training may have changed the results.
4. There are missing experiments to show how well MECHANIC operates when the hyper-parameters of the base algorithm are not well tuned. If it fails in such cases, then one still has to tune the parameters of the base algorithm, and gains nothing by using this approach.
5. There are missing ablation tests - what is the effect of $\lambda$ and the effect of $\beta$? Also, what is the effect of learning rate schedulers on the performance of MECHANIC?
6. While the paper focuses on empirical findings, the authors suggest the method should be robust, but with no guarantees. For example, in lines 194-196 the authors claim that weight decay should not affect the operation of MECHANIC, but provide no proof (lines 195-196) and no empirical evidence.
7. While the method is presented as parameter-free, there is evidence that tuning its hyper-parameters may indeed be required. For example, lines 175-176 discuss the importance of setting $\beta$. While the authors try to obviate the need to tune it, the choice of defaults may not generalize well to other settings. Also, lines 239-240 mention that tuning $\lambda$ may be beneficial.
Some additional comments regarding the presentation of the paper:
* Algorithm 2 is referred to quite a lot and is being analyzed in the paper. As such, I think it will be beneficial for the reader if it is included in the body of the paper.
* In line 49, you state that "Empirical studies often take the route of "hyper-gradient descent"". However, this approach is rarely used in practice, and even in the work you cite, most empirical approaches do not follow it, nor compare to it.
* The "scale" by which the paper measures gains is not consistent. When MECHANIC is outperformed by base algorithms it is said to be "quite competitive", while when it has the advantage by smaller absolute gains, it is said to outperform. For example, see lines 247-248 which discuss the results of Figure 2.
There are a few typos and styling issues that should be fixed prior to final submission:
1. line 38, there is a missing "to" between "allowed" and "make"
2. line 93, the words "let and" were swapped
3. line 113, **missing key information** - the sentence of how you set the scaling factor is missing "and so by setting <MISSING> To show..."
4. In Table 2, there are at least 5 cases where the base algorithm achieved **identical** results to those of the MECHANIC tuning, while the table gives only the results of the MECHANIC tuning in boldface which is confusing and misleading.
5. line 218, "works" $\rightarrow$ "work"
6. line 219, "not that" is swapped
7. The bibliography style is inconsistent in its presentation. For example, [7,11,14,33] are all cited from CoLT, each with different styling. [20,21,36] are both from NeurIPS but with very different styles as well. Also 35 cites from arXiv differently from e.g. 38.
8. All references in the paper are not "clickable". Namely, figures, table and citation references are not clickable in the PDF
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How were the hyper-parameters of the base algorithms set? How are the results change if the hyper-parameter of the base algorithm are under-optimal? Was there a thorough tuning phase to verify the chosen hyper-parameters (including the total number of update steps) of the baseline algorithm were indeed correctly tuned?
2. What is the standard deviation in the given results?
3. What is the effect of $\lambda$ and the effect of $\beta$ ? Also, what is the effect of learning rate schedulers on the performance of MECHANIC?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitation of their work adequately. Namely, the authors mention the lack of theoretical guarantees or intuition into some of the results, as well as potential future work. There is no ethical considerations relevant in this case.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed comments! Your feedback will be very helpful to us in revising the paper. In the below we provide some more detail that we hope will address your main concerns.
**Main concern about missing empirical information**
We provide a lot more detail in the appendix section B. However, to specifically answer your questions:
```Namely, how were the hyper-parameters for the base algorithm chosen?```: We have actually tried to be faithful to the original baseline whenever possible. For instance, for our image classification experiments, we even use the exact code open-sourced by the original authors for both pre-training and fine-tuning, with the original hyperparameters (including the tuned learning rate) to limit any confounding factors. There is exactly one place where we deviate from the baseline, which is the removal of the NSP loss. We did that only to simplify our experiments, since it did not affect the performance of the baseline at all.
```To make the claim that MECHANIC .. the base optimizers should indeed be carefully tuned.```: You are absolutely right. Since the learning rate is the most significant hyperparameter for this study, for all baselines we either grid-swept over a reasonably large range of learning rates (e.g. BERT) or (to limit costs) used a well-tuned learning rate from the paper or the code open-sourced by the authors of the baseline (e.g. ViT pre-training). Exact values can be found in sections B.1 and B.2. We also note that as a sanity check, we report competitive results on the *baselines* when compared with what was reported by the original authors.
```Moreover, statistical significance of the results is not assured```: For results on small dataset like GLUE, you are right, it is hard to attribute the gains directly to Mechanic since there can be a lot of variance. But this is true even for the baselines. For that reason, similar to what was done in the original baseline papers, for all finetune runs, we report an average of 3 runs to counteract the variance. Finally, our aim is not to claim that with Mechanic one can get better performance than the baseline with well tuned learning rates, we merely hope to illustrate that it can help eschew the need for learning rate tuning for similar performance.
```It is not clear that all models converged```: We are a little confused by this concern. We did look again at the original BERT results and compared our ```baseline``` results with what they report. We would like to note that, contrary to the concern, for some GLUE datasets such as QNLI, SST-2 and QQP, with BERT-L we in fact report better or almost the same results as what was reported with BERT on the dev set (from Table 5 in Liu et al.). Let us know if we misunderstood your concern.
```There are missing experiments ..```: This is an interesting question and your point is well taken! We presented the case in which the other hyperparameters of the base algorithm (such as the schedule) are well-tuned. That said, we have observed that Mechanic continues to improve upon even poorly tuned baselines, although we did not perform enough experiments on this case for the original submission, as we felt that the well-tuned case was the one of most practical significance. Our hope is that Mechanic is indeed robust against other changes in hyperparameters, but there are still cases where Mechanic is currently less effective, such as when using dropout or normalized gradient updates like LARS and LAMB. These are certainly limitations of Mechanic and we will make sure to document them as such in our limitations section.
```\lambda, \beta and lr schedules```: [Also elaborated in the response to reviewer 2EmS] You are absolutely right that ablating hyperparameters may yield useful insights; however, in this case, beta (with the addition scheme), s_init and \lambda (if not too big) are robust enough not to produce meaningful differences in performance. Regardless, we are happy to add the plots if you think they help!
Inspired by your suggestion, one interesting ablation study is to change num_betas (*n* in Alg 1); the following table shows *n* vs. accuracy of ViT-B/16 on JFT-300M:
n|2|4|6|8
-|-|-|-|-
Acc|48.9|49.5|49.9|49.6
Interestingly, training runs with *n* < 6 perform worse and also show instabilities; however, increasing *n* beyond 6 leads to slightly worse performance.
**Tuning other hparams**
```If it fails in such cases, then one still has to tune the parameters of the base algorithm…```: We focused here on tuning only the scale factor for the updates, NOT in particular regularization constants or the schedule or any other aspect of optimization. Tuning this scale is typically one of the most important hparams, so removing the need to tune it is already a significant improvement that will certainly save computational resources. Note that without mechanic you still need to tune those parameters anyway. That said, we certainly acknowledge that our method does not completely obviate the need for ALL tuning.
```While the method is presented as parameter-free``` Please note that for our experiments we used the SAME values of beta/lambda across various models and datasets, providing evidence that in future tasks no alteration/tuning of mechanic is necessary. In the case of beta, the use of multiple beta values at once does actually come from some theoretical motivation for why this technique obviates the need to tune beta. For lambda we have less understanding and simply propose it as an empirical improvement. In practice it seems the default value is good, but certainly you could choose to tune it if you desire.
The sentence on line 113 should have ended after the parentheses, so the words “and so by setting” should have been deleted. The setting for $\mathring{s}$ in this particular example is provided later in the paragraph. See also our response to reviewer 7tau for a proposed alternative phrasing of this paragraph.
---
Rebuttal Comment 1.1:
Title: Thank you for your clarifications. Some follow-up questions.
Comment: Thank you for your clarifications.
Regarding the BERT/RoBERTa results: The results for BERT provided in Table 1 of Devlin et al. 2018 and Table 5 in Liu et al. 2019 are on the test set, and thus are expected to be lower to begin with. Furthermore, since the primary difference between RoBERTa and BERT is the addition of more training data (book corpus), the removal of the NSP loss (which was shown to improve results) and most importantly longer training, MECHANIC results should be best compared to RoBERTa’s validation results as presented in Table 8 of Liu et al. In every instance, RoBERTa outperforms your baseline, leading to my concern that the models may not have fully converged.
Regarding the missing experiments, you mentioned that for the baselines, you used well-tuned optimizers (either from the original authors or by performing a grid search). My concern here is the hyperparameters used for the model trained with MECHANIC. If they aren't calibrated correctly from the outset and the result is subpar, then one would need to perform hyperparameter tuning for the base optimizer in any case. Given that many of the experimental results don't show performance enhancement but rather produce results on par with well-tuned models, I wonder about the benefits of using MECHANIC if one still needs to perform this tuning.
---
Reply to Comment 1.1.1:
Title: answer to follow-up
Comment: Thank you for taking a look at our response promptly! We would be more than happy to provide additional clarification.
```Regarding the BERT/RoBERTa results```: We are again confused. We are looking at the first row of Table 5 from Liu et al. They mention that those results are **Single-task single models on dev** and not the test set. This is the exact setting that applies to our results too. We are comparing our results with **BERT-Large** (first row) from that table.
```Regarding why Roberta is not our baseline```: Note that we removed the NSP loss but the similarity of our setting to Roberta ends there. Most notably, we do **not** use 1) dynamic masking 2) Full doc sentences (bert uses short sentence pairs and we do too) 3) training for longer (like you mentioned) and much larger batch size (with differences in adam optimizer). These changes may explain the difference in performance. You are right that comparing to Roberta’s setting may also be a fruitful evaluation of Mechanic but we hope that this does not discredit our results on vanilla BERT (which is also well understood and a lot of optimization papers use as a baseline e.g. LAMB [1])
```Regarding the model not being fully converged```: If your concern is whether we linearly decayed the learning rate for the baseline to convergence, then yes we did! If you are asking why we didn’t train for as long as RoBERTa (or even longer), we would like to emphasize that pre-training runs are **very** expensive, and we do train for a fair comparison with the well-known vanilla BERT baseline for meaningful results. Additionally, if you are interested, we compare Mechanic on a variety of other benchmarks too; most notably, in Figure 2 we compare on IWSLT14 (LSTM) and BookWiki (GPT Transformer) in the language domain.
```Regarding calibration of mechanic hyperparameters```: We don’t follow the concern here. Mechanic is designed to remove the need to tune the learning rate scale factor and ONLY that need. Of course other hyperparameters (such as a schedule or batch size) might still need to be tuned when using mechanic. However, even just removing the need to tune the learning rate scale factor results in a significant saving as this is usually one of the most critical hyperparameters to get right. In fact, since our experiments used only the tuning from the BASE algorithm for these other hyperparameters, the performance of mechanic would only increase if we were to actually tune for mechanic. We mostly opted not to do this so as to illustrate that mechanic’s good performance is due to the automatic scale factor tuning and not some interaction with other hyperparameters.
[1]: https://arxiv.org/abs/1904.00962 | Summary: The paper proposes a scheme to automatically tune the learning rate scale factor for any gradient-based optimization algorithm. The method can be viewed as a practical realization of recent theoretical results in online convex optimization (OCO), which reduces the problem to minimizing the regret of a one-dimensional OCO algorithm. The authors test their proposal on a range of large-scale deep learning benchmarks, showing competitive performance compared to strong manually tuned baselines. It is also shown that the method can outperform a recently proposed learning rate tuner, D-adaptation, without requiring the modifications across base optimizers that D-adaptation does.
Strengths: * Strong empirical evaluation, which shows solid results against strong baselines in large-scale contexts where learning rate tuning is known to play a critical role.
* Great potential for practical applicability, as it provides an off-the-shelf wrapper that can be used on top of any gradient-based optimization algorithm.
* (At least part of) the approach is theoretically motivated -- and subject to theoretical analysis.
Given the above, I believe the project can be turned into a very strong submission -- provided the weaknesses below are addressed.
Weaknesses: Despite the paper's great potential, a major reservation I have, which justifies my rating, is about the clarity of the presentation -- at least for those without strong expertise in the relevant literature.
* The background section, which motivates the approach, is based on prior work -- specifically, recent progress in parameter-free online convex optimization (such as ref [28]). I think the authors can do a much better job at reproducing and explaining these known results (the fact that the presentation is plagued with misprints does not help).
For example, while Theorem 1, which bounds the regret of Mechanic in terms of the regrets of Base and Tuner, is relatively clear, the exposition of how it is exploited (end of Section 2.1 and Section 2.2) is very unclear to me. To be specific, in the right-hand side of the last equality on line 97, $\mathring{s}$ shows up in both regrets, which suggests some sort of tradeoff between both regrets -- I do not follow the reasoning that the problem is completely relegated to Tuner (for example, the optimal $\mathring{s}$ on line 101 from the BASE regret term looks distinct from the optimal $\mathring{s}$ on line 119 from the tuner regret). Furthermore, the paragraph between l112-l122 completely lost me.
I think it would be highly beneficial to revamp this exposition entirely for the sake of readability, especially for non-OCO experts like myself. (I formulate more specific comments / questions in the Questions section.)
* In Section 3 the specific form of Tuner (even in its simplified form after line 148) seems to come out of the blue.
I am not sure whether or not it was supposed to follow from the background analysis; if it does, I think being explicit about the transition would be helpful (of course the empirical justifications of some specific ingredients, as in 3.2 for weight decay, are completely fine).
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: I am confused by the notation (line 41 or line 95) suggesting that the MECHANIC iterates are mere interpolations of the initial point and the BASE iterates. It makes it look like one needs to run the base algorithm once completely to be able to define the MECHANIC iterates, which is clearly not the case from Algorithm 1. I see how MECHANIC depends on the BASE *updates* -- but these are built from gradients that depend on previous MECHANIC iterates, not on the BASE iterates. (The authors seem to acknowledge this abuse of notation in Footnote 1, but I cannot see the mathematical justification that the footnote is referring to).
I am not sure about the editorial choice that consists of motivating the algorithm with the OCO analysis.
I feel a better choice could be to first introduce the wrapper as an additional optimization step over the scaling factor $s$ with loss $\ell_t(x_1 - s \sum_t \mbox{base updates}_t)$, which I find intuitive and elegant in and of itself -- as it optimizes in the direction of what would have been a good scale in hindsight; then to introduce OCO subsequently for the sake of the analysis.
Miscellaneous:
* l38 allowed make <-- allowed to make
* Footnote 1: presently <-- below or shortly
* Inconsistent index for s_t between l41 and l42.
* From l45, I suggest to just create a new "related work" section. An explicit list of contributions in the introduction would also be helpful.
* l93 let and <-- and let
* l103 approximates of the gradient <-- approximates the gradient
* l110: isn't an example of such an advanced algorithm SGD initialized at s=0? (from l83).
* l110: A TUNER subscript is missing in the regret formula.
* l113 missing words: "so by setting To show" ...
* On Algorithm 1: l6 gk <-- gt. I suggest adding a definition of zt. Is x_ref not updated?
* Table 1: it would be nice to see standard deviations reported as well
* The last mentioned limitation in section A looks like an advantage to me (no validation set required).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 4 excellent
Limitations: Limitations adequately acknowledged in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your detailed review and feedback on the presentation - making sure everyone can follow the paper is a priority for us. Unfortunately, we cannot provide an updated revision, so below we’ve copied in some changes that will ameliorate the issue. We would be happy to hear your feedback, and we intend to make a careful edit of the paper with an eye to accessibility.
```For the paragraph starting at line 99, we propose to add detail as follows:```
“
With this result, finding the optimal $s$ can usually be completely relegated to $\texttt{tuner}$. Although the value $\mathring{s}$ appears in both terms of the sum $\text{Regret}\_{\text{linear}}^{\texttt{tuner}}(\mathring{s}) + \mathring{s}\text{Regret}\_{\text{linear}}^{\texttt{base}}(x^{\texttt{base}}\_1+(\mathring{x}-x^{\texttt{base}}\_1)/\mathring{s})$, it turns out that for essentially all known base algorithms, there is a particular value $\mathring{s}$ that causes $\mathring{s}\text{Regret}\_{\text{linear}}^{\texttt{base}}(x^{\texttt{base}}\_1+(\mathring{x}-x^{\texttt{base}}\_1)/\mathring{s})$ to obtain the optimal regret bound $\|\mathring{x}\|\sqrt{\sum_{t=1}^T \|g_t\|^2}$. This value for $\mathring{s}$ is unknown a priori, and depends on the data and the base algorithm. However, if $\text{Regret}\_{\text{linear}}^{\texttt{tuner}}(\mathring{s})$ is sufficiently small for this unknown value $\mathring{s}$, then overall we achieve the optimal regret bound without having to know $\mathring{s}$ ahead of time. Note that setting $\mathring{s}$ in this way can be done entirely in the analysis without modifying the algorithms, as justified by the infimum in the Theorem.
“
```For the paragraph at line 112:```
“
In the theoretical development of this technique, it is necessary to prevent the terms $\langle g_t, x_t^{\texttt{base}} - x_1^{\texttt{base}}\rangle^2$ from becoming too large (as otherwise $\text{Regret}^{\texttt{tuner}}$ is too large). Typically, this is accomplished by constraining the base algorithm to satisfy $\|x_t^{\texttt{base}} - x_1^{\texttt{base}}\|\le \rho$ for some user-specified arbitrary $\rho$. Enforcing such a constraint means that the regret bound (2) would only apply to $\|\mathring{x}\|\le \rho$, but ensures that $\langle g_t, x_t^{\texttt{base}} - x_1^{\texttt{base}}\rangle^2\le \rho^2 \|g_t\|^2$. Thus, by setting $\mathring{s} = \|\mathring{x}−x_1^{\texttt{base}}\|/\rho$, the combined algorithm obtains the optimal regret bound of $O(\|\mathring{x}−x_1^{\texttt{base}}\|\sqrt{\sum_{t=1}^T \|g_t\|^2})$ (amazingly, the value of $\rho$ is irrelevant!). In practice however, we do not attempt to explicitly enforce any such constraints and simply rely on the intuition that any non-diverging algorithm is unlikely to produce excessively large iterates.
“
```Regarding motivating the Tuner algorithm: We will first motivate a further simplified update with the following text:```
“
While the specific form of our tuner update is based upon more involved analysis, we can capture some intuition by appealing to the familiar SGD algorithm. First, notice that the “gradient” sent to Tuner is $h_t$. Thus, the SGD update would be $s_{t+1} = s_t - \eta h_t$ for some learning rate $\eta$. It turns out that the analytically optimal value for $\eta$ is $\frac{\mathring{s}}{\sqrt{\sum_{i=1}^T h_i^2}}$. Unfortunately, we do not know this value at first. Instead, at time $t$ we estimate the denominator with the running sum $\sqrt{\sum_{i=1}^t h_i^2}$ and the numerator with the “optimistic” approximation $\mathring{s}\approx s_t$, so that the update becomes $s_{t+1} = s_t - \frac{s_t h_t}{\sqrt{\sum_{i=1}^t h_i^2}}$. Unfortunately, this update is unstable and can result in exponential blowup in the $s_t$ values as bigger $s_t$ results in bigger learning rates. Much of the technical detail in our update is designed to deal with this instability, essentially by introducing a carefully designed decay into the $s_t$ update.
“
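To make the instability described above concrete, here is a tiny sketch of the simplified update $s_{t+1} = s_t - \frac{s_t h_t}{\sqrt{\sum_{i=1}^t h_i^2}}$ (our own illustrative code, not part of the proposed revision; the function name and the toy $h_t$ sequence are made up):

```python
import math

def simplified_tuner(h_seq, s_init=1e-8):
    """Run the simplified (unstable) update s_{t+1} = s_t - s_t*h_t/sqrt(sum_{i<=t} h_i^2)."""
    s, sum_h2 = s_init, 0.0
    history = [s]
    for h in h_seq:
        sum_h2 += h * h
        s = s - s * h / math.sqrt(sum_h2)
        history.append(s)
    return history

# When the h_t are consistently negative, each step multiplies s by
# (1 + |h_t|/sqrt(sum_h2)) > 1, so the scale grows multiplicatively --
# the blowup that the carefully designed decay in the actual Tuner prevents.
scales = simplified_tuner([-1.0] * 5, s_init=1.0)
```

The actual Tuner update adds decay terms on top of this; the sketch only illustrates why the naive version can blow up.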
```Regarding the intuition of taking the “gradient with respect to s” of $\ell(x^{\texttt{base}}_1 + s\cdot(x^{\texttt{base}}_T -x^{\texttt{base}}_1))$```: We agree this makes sense at a high level, but in fact the analysis is totally different and we could not see any way to formally justify the algorithm with this intuition. We were a bit wary of emphasizing it too much since we cannot be confident that variations based solely on this intuition will behave well. Nevertheless, we will flesh out the idea a bit more in the discussion after Theorem 1. This is actually related to your question about footnote 1 - our analysis is essentially a mathematical “trick” that enables us to sidestep issues that arise with more intuitive arguments.
```Regarding footnote 1, as well as the concern about needing to first do a run of the base algorithm```: The justification is Theorem 1, and you are correct that we do NOT need to run the base algorithm first.
Our analysis is based on regret and actually does not stipulate any particular meaning for the vectors $g_t$. The base algorithm is viewed as a black box taking as input any sequence of vectors $g_1,\dots,g_t$ and yielding some $x_{t+1}^{\texttt{base}}$. Although it is intuitive to set $g_t=\nabla f(x_t^{\texttt{base}},z_t)$, this is not assumed in our analysis. Nevertheless, all standard algorithms like SGD, AdaGrad etc ensure bounded regret for arbitrary sequences $g_t$.
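As a purely schematic illustration of this black-box view (our own toy code, not the paper’s Algorithm 1: the base is a closure producing update offsets, gradients are evaluated at the wrapped iterates, and the clamp on $s$ is an ad-hoc stabilization for the toy):

```python
import numpy as np

def mechanic_sketch(grad_fn, base_step, x1, steps, s_init=1e-8):
    """Schematic wrapper: the base algorithm is a black box mapping the
    gradient sequence to an accumulated offset delta_t; a scalar tuner learns
    s_t, and the wrapped iterates are x_t = x1 + s_t * delta_t."""
    s, sum_h2 = s_init, 0.0
    delta = np.zeros_like(x1)
    x = x1.copy()
    for _ in range(steps):
        g = grad_fn(x)                 # gradient at the *wrapped* iterate
        delta = delta + base_step(g)   # black-box base update accumulation
        h = float(g @ delta)           # scalar "gradient" fed to the tuner
        sum_h2 += h * h
        # Simplified tuner step with a crude floor to keep s positive.
        s = max(s - s * h / (np.sqrt(sum_h2) + 1e-12), s_init)
        x = x1 + s * delta
    return x, s

# Toy quadratic f(x) = 0.5*||x||^2, with an SGD-like base (lr 0.1):
x_final, s_final = mechanic_sketch(lambda x: x, lambda g: -0.1 * g,
                                   x1=np.ones(3), steps=100)
```

Note how nothing above requires running the base algorithm on its own iterates first: the base only consumes the gradient sequence, exactly as in the black-box view.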
```Other questions:```
Line 110: The “advanced” part of these algorithms is that they obtain the bound $\mathring{s}$ automatically, while SGD would require tuning the learning rate depending on $\mathring{s}$.
Line 113: The partial phrase “so by setting” should be deleted: the sentence ends after the parentheses (the value for $\mathring{s}$ is provided later in the paragraph).
Algorithm 1: $z_t$ represents the $t^{th}$ minibatch (see line 11). $x_{\text{ref}}$ should be $x_1^{\texttt{base}}$ (and so does not need updating).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and the clarifications, which will indeed be very welcome in the revision. Since clarity and presentation were my only major concern, for what I think is otherwise a strong submission, I've raised my score. | Summary: This paper develops a method to automatically determine the learning rate of an optimization algorithm. Their approach, referred to as Mechanic and motivated by theoretical advances in online convex optimization, determines a new learning rate for iteration t using an update that resembles an adagrad update plus an additional decay factor. Their method is tested in deep learning settings ranging from language to large scale vision, where it approaches and sometimes outperforms a "tuned baseline".
Strengths: This paper has many strengths:
- The problem that the authors explore is very important to the community. Even if their method is not a universal solution, any sound progress towards this target is potentially valuable. If the community succeeds in removing the need for tuning learning rates, significant resource use will be eliminated.
- Mechanic can be applied to any base algorithm, not only in theory but also in practice: the authors show results for Adam, Lion, and SGD.
- Mechanic is tested in various different domains from masked language modeling to large scale JFT pre-training.
- Mechanic has only two tunable hyper-parameters, s_init and the vector of betas.
Weaknesses: - The central weakness in this paper is the lack of clarity surrounding the "tuned baseline". How was that baseline tuned? What scheduler is used? Is it the best learning rate and if so out of how many searched? It would be great to see a line plot of LR vs. perf for the baseline and then a horizontal line for mechanic. This weakness is why I've listed the soundness as only fair.
- There are no ablation studies for the relevant hyperparameters s_init and the betas.
- Minor: There are some claims in the paper which I do not believe to be true. For instance, in the conclusion the authors claim that it is infeasible to manually tune a scale factor for each layer. However, there is work, e.g., mu-transfer which aims to specify a "corrected" learning rate for each layer.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Different base algorithms and problem settings result in different performance for Mechanic vs. the baseline. Do the authors have any ideas why this may be?
- How is performance affected by s_init and the betas?
- I was slightly lost in the explanation for how multiple betas were used in the section where the adagrad like update transforms into more of an adam like update. Could the authors extend this trick so that multiple betas could be used simultaneously in other contexts, e.g., for the adam estimators for the params?
- Can you explain the tuning that went into the baseline. Ideally a plot of LR vs. perf for the baseline could be contrasted with mechanic.
- Why do the authors compare only to D-adapt and not, e.g., DoG https://arxiv.org/abs/2302.12022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Main concern about baseline hyperparameter tuning**
We provide a lot more detail into the appendix section B. However, to specifically answer your questions:
```How baseline was tuned and Is it the best learning rate and if so out of how many searched```: For all baselines we either swept a grid over a reasonably large range of learning rates (e.g. BERT) or (to limit costs) used a well-tuned learning rate from the paper or the code open-sourced by the authors of the baseline (e.g. ViT pre-training). Exact values can be found in sections B.1 and B.2
```What scheduler is used```: Assuming you mean the LR scheduler, we used the schedules mentioned in the original baselines both with and without Mechanic. Specifically, a linear schedule for BERT and a cosine schedule for ViT. Note that Mechanic does not remove the need for a schedule (at least not yet); it simply learns the right scale factor to be used with a predefined schedule.
```It would be great to see a line plot of LR vs. perf for the baseline and then a horizontal line for mechanic.```: This is a great suggestion! We will post data for this during the discussion period and even add plots in the final version of the paper.
**Regarding betas and s_init**
You are absolutely right that ablating hyperparameters may result in useful insights; however, in this case, both beta (with the addition scheme) and s_init are robust enough not to produce meaningful differences in performance.
We have found that using a single value for beta required tuning of the beta value for each model/dataset separately. However, the “addition scheme” we used does not. This was motivated by theory (via the “combining regret bounds via addition” technique introduced by [33]). We found that as long as a reasonable beta value for the problem is present in the betas sequence (default), performance doesn’t change.
Regarding tuning of s_init, we intended the scheme to work with “obviously too small” values like 1e-8 (a property that is supported by the logarithmic dependence on 1/s_init in theory), but in practice changing s_init by even a few orders of magnitude does not affect performance much.
Let us know if you still want us to add those results in the paper, we would be more than happy to do so!
**Regarding the multiple beta trick**
The trick we used to combine multiple beta values in the tuner algorithm relies on a theorem relating to parameter-free online algorithms that states a “meta algorithm” created by adding the outputs of many sub-algorithms will have performance no worse than the best of the sub-algorithms (even if we do not know which one is best at first). Thus, since we did not know which beta value would be best at first, we simply added the outputs of many instances of Tuner with different beta values. In principle, one might be able to apply a similar trick with the beta values in a high dimensional algorithm like Adam, but this would require blowing up the memory and runtime of Adam by a factor of [number of beta values]. Since Tuner requires only O(1) memory and time, this blowup is irrelevant compared to the time needed to simply compute a gradient, but Adam requires O(d) time/memory where d=[number of parameters in the model], so this may incur more significant performance issues.
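The "combine by addition" pattern itself is easy to sketch (our own toy code: `SubTuner` is a made-up stand-in, and its EMA-style update is NOT the paper's Tuner, only a placeholder for "sub-algorithm with its own beta"):

```python
class SubTuner:
    """Toy 1-D sub-algorithm keeping its own beta (EMA decay)."""
    def __init__(self, beta, lr=0.01):
        self.beta, self.lr = beta, lr
        self.m, self.s = 0.0, 0.0

    def step(self, h):
        # Placeholder update: EMA of the scalar "gradient" h, then a step.
        self.m = self.beta * self.m + (1 - self.beta) * h
        self.s -= self.lr * self.m
        return self.s

def combined_scale(subs, h):
    """Meta-algorithm: feed h to every sub-instance and ADD their outputs.
    By the regret-addition argument, the sum is competitive with the best
    single beta, without knowing in advance which beta that is."""
    return sum(sub.step(h) for sub in subs)

subs = [SubTuner(beta) for beta in (0.9, 0.99, 0.999)]
s_total = combined_scale(subs, h=-1.0)
```

Since each sub-instance is O(1) in memory and time, running several of them adds negligible cost, which is the point made above about why the same trick would be expensive inside an O(d) algorithm like Adam.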
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: I look forward to seeing the data for the plot that you mention, and think it will be a good contribution to the paper.
---
Reply to Comment 1.1.1:
Title: Thank you and rebuttal round #2
Comment: Absolutely! Here are some additional data points that you requested. We hope that this resolves your concerns. Please let us know if you need any other information.
**Plots of learning rate vs performance of BERT models**
```
Adam Optimizer
| LR | | 5e-4 | 1e-3 | 2e-3 | 5e-3 | 1e-2 | Mechanic
---------------------------------------------------------------
| Acc | BERT-B | 71.1 | 71.5 | 71.5 | 71.5 | 71.3 | 71.7
| Acc | BERT-L | 75.0 | 75.4 | 75.4 | 74.6 | 74.4 | 75.3
```
```
Lion Optimizer
| LR | | 5e-5 | 1e-4 | 2e-4 | 5e-4 | 1e-3 | Mechanic
---------------------------------------------------------------
| Acc | BERT-B | 70.8 | 70.8 | 71.1 | 71.8 | 71.4 | 72.0
| Acc | BERT-L | 75.1 | 75.6 | 75.7 | 74.7 | Diverged | 75.5
```
**Ablations of s_init, \lambda and num_betas**
Since you were interested, we also re-ran the ablations of s_init, \lambda and num_betas using ViT-B/16 on JFT-300M with *M*-Adam. Definitely let us know if you require any other information.
```
| s_init | 1e-8 | 1e-7 | 1e-6 | 1e-5 | 1e-4
---------------------------------------------------------------
| Acc | 49.8 | 49.8 | 49.9 | 49.7 | 49.6
```
As we expected from theory, changing s_init by even orders of magnitude does not result in much difference in performance.
```
| wd (or \lambda) | 0 | 1e-3 | 1e-2 | 1e-1 | 1e0
---------------------------------------------------------------
| Acc | 49.7 | 49.8 | 49.9 | 49.7 | Diverged
```
We have observed that while \lambda is helpful in stabilizing Mechanic on some problems, as long as it is set to a reasonable value it does not affect performance by a lot.
```
| num_betas | 2 | 4 | 6 | 8
-----------------------------------------
| Acc | 48.9 | 49.5 | 49.9 | 49.6
```
Interestingly, we find that training runs with num_betas < 6 not only perform worse but also exhibit instabilities throughout the training run (we will include plots in the final paper). However, increasing num_betas beyond 6 does lead to slightly worse performance.
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution | Accept (spotlight) | Summary: In this work, the authors propose a new genomic foundational model that can be pretrained on the human reference genome. This model, called HyenaDNA, is built upon the previous model Hyena. The advantage of this model is that it can train on ultra-long sequences (up to 450k) at single nucleotide resolution with significantly fewer parameters. The authors introduce a warm-up technique to stabilize the training procedure, and they also employ a soft prompting method for downstream tasks. The authors show the efficacy of their framework through various experimental settings.
Strengths: 1. This work tries to solve an important problem: developing new foundational models for DNA sequences.
2. Previous genomic foundational models are usually only able to be trained on relatively short sequences (<10kb). It is hard for them to deal with ultra-long sequences, which is necessary to capture long-range dependencies. This work provides a strategy to train the model on these long sequences.
3. The experimental results shown in the manuscript seem promising.
4. Overall, the manuscript is well written.
Weaknesses: 1. It is a bit difficult to understand 3.1, especially for people who are not familiar with the previous Hyena manuscript. $x_1$, $x_2$, $v$, $W_{x_{1}}$, $W_{x_{2}}$, $W_{v}$ are not defined in the main text. Line 159, I am not sure what “input time” refers to. Line 162, I am not sure what “projections” refer to.
Some minor points:
1. Section 2.1, no definition of D, is it equal to 4 representing {A,T,C,G}?
2. Line 320, A.5 should be A.4
3. Line 350, A.4 should be A.6
4. Appendix line 613, Fig. 4.2 should be Fig 4.1.
5. Line 206 equation, equation number missing
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Line 206, it is not so clear what the embedding step refers to. Also, perhaps double-check the correctness of $x\in\mathbb{R}^{L\times(T+N)}$
2. It seems that for different experiments, the authors train different HyenaDNA (different hyper-parameters, or using input sequences with different lengths as the pretraining data). Is there any specific reason that the authors use different models instead of e.g. using the biggest model that can handle up to 450k context, for all the experiments?
3. Table A.3, for different tasks the authors use different hyper-parameters, how these hyper-parameters be chosen?
4. The authors indicate in the main text that their framework is more computationally efficient than Nucleotide Transformer since it has a significantly smaller number of parameters. However, in the appendix the authors mention that for each new task the entire model needs to be fine-tuned. How efficient is this fine-tuning compared to the fine-tuning procedure used in Nucleotide Transformer? What computational resources are required for fine-tuning HyenaDNA?
5. The authors compare HyenaDNA with Nucleotide Transformer in 17 datasets. However, in the paper of Nucleotide Transformer there are 18 datasets in total. Why is one dataset missing?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have pointed out some limitations of their work in the conclusion section. I don’t think there is any significant negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review! We’re glad you appreciate the importance of foundation models for DNA sequences, and the clarity of writing of the manuscript. Below we address concerns and clarifications the reviewer raised. We’re happy to answer any further questions the reviewer may have.
### Hyena method clarification
Thank you for pointing this out. We’ll be sure to modify the description for readers unfamiliar with the original Hyena work as well. Specifically:
Starting with the input $x$ (a length $L$ sequence of embeddings of size $D$), we apply three different parametric linear operators to obtain “projections” $x_1, x_2, v$, each of the same dimensions as the input. For example, $x_1 = T_{x1} x W_{x1}$ where $T_{x1}$ is the L x L Toeplitz matrix corresponding to a short convolution, and $W_{x1}$ a D x D weight matrix.
After computing the projections, we start applying the Hyena operator recurrently, starting with $v$. First, we compute $ z = D_{x1} v$ where $D_{x1} = diag(x_1)$ (gating), following up with the long convolution $T_h$ and another gate $D_{x2}$. Higher-order Hyena operators (with more projections) continue until all projections have been used in the recurrence, following the same pattern of diagonal and Toeplitz.
The phrase “input time” (on line 159) is meant to describe using time (or position in the sequence) as an input to the neural network parametrizing the convolution filter $h$. The inputs are the positions of the filter (kernel), and the outputs are the convolution weights for that position. Note: the use of “time” is a remnant of signal processing literature (which Hyena draws from), but can also refer to space or a position.
The term “projections” refers to a linear projection (applying a linear layer) to an input.
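A minimal numerical sketch of the order-2 operator described above (our own code; for brevity a single shared short filter is used for all three projections, whereas the description above gives each its own, and the per-channel filters and explicit loops are illustrative simplifications, not the actual implementation):

```python
import numpy as np

def causal_conv(x, k):
    """Causal convolution along the length axis; x: (L, D), k: (K, D) taps per channel."""
    L, _ = x.shape
    K = k.shape[0]
    out = np.zeros_like(x)
    for t in range(L):
        for j in range(min(K, t + 1)):
            out[t] += k[j] * x[t - j]
    return out

def hyena_order2(x, k_short, W_x1, W_x2, W_v, h_long):
    # Projections: short (Toeplitz) convolution followed by a dense map,
    # mirroring x1 = T_{x1} x W_{x1} in the notation above.
    x1 = causal_conv(x, k_short) @ W_x1
    x2 = causal_conv(x, k_short) @ W_x2
    v = causal_conv(x, k_short) @ W_v
    # Recurrence: gate D_{x1} (elementwise), long convolution T_h, gate D_{x2}.
    z = x1 * v
    z = causal_conv(z, h_long)
    return x2 * z

L, D = 8, 4
rng = np.random.default_rng(0)
y = hyena_order2(rng.normal(size=(L, D)), k_short=rng.normal(size=(3, D)),
                 W_x1=rng.normal(size=(D, D)), W_x2=rng.normal(size=(D, D)),
                 W_v=rng.normal(size=(D, D)), h_long=rng.normal(size=(L, D)))
```

In the actual model the long convolution filter `h_long` is parametrized implicitly by a small neural network of position, rather than stored as explicit taps.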
### Minor Points
Thank you for catching the fine grained details that were missing here.
1. In section 2.1, D refers to the model embedding dimension. We’ll define this explicitly in the paper.
2. through 5., we will edit these typos and omissions in the final paper.
### Questions
1. The embedding step uses a standard embedding function (e.g. nn.Embedding in Pytorch). In general, this maps an index value (like 0 through 3, representing indexes of DNA nucleotides) to a learned embedding space of dimension D (typically the model dimension).
1a. Indeed, we do finetune different pretrained models of different sizes depending on the downstream task. There are 2 reasons for this:
**a. Efficiency:** Using the “right” size model that is sufficient for the downstream task is far more compute efficient, since ultralong-range sequence models require more training time, data and parameters to perform well.
**b. Reduce overfitting:** the larger models tend to overfit quicker (a common occurrence across machine learning). This is especially true if we’re testing on a set of short-range tasks (<1k long). Using a large model (e.g. for 450k sequences) will overfit quicker on these short range sequences if there is not a sufficient number of samples. Many of the short range genomic datasets are indeed small, typically 10s of thousands of samples, and as few as 1200 samples.
2. In Table A.3, for the Nucleotide Transformer datasets, we performed an extensive hyperparameter sweep via grid search, a common hyperparameter search strategy. In practice, as we train, we use our intuition about which hyperparameters to adjust based on the training loss curves; for example, whether regularization needs to be adjusted to reduce overfitting.
3. Regarding computational efficiency compared to baseline models, the Nucleotide Transformer authors also finetuned on every new downstream task. In comparison though, the Nucleotide Transformer used 8 A100-80GB GPUs for about 50 mins on average to finetune each dataset, as described in their paper. We used a single A100-40GB GPU for 10 to 30 mins across the same datasets. Concretely, we provide results in Table R4 of the common response for a comparison of GPU-hours for pretraining and finetuning between baseline models and HyenaDNA. We will add these results to the appendix.
4. The 18th dataset of the Nucleotide Transformer (on splice site prediction) had a broken link during the time we were working on the submission. The dataset is now available publicly, and we have updated our results with this dataset (as shown in Table R2 of the common response). Thanks for pointing this out. We slightly underperformed the Nucleotide Transformer on this task (by 0.4 accuracy points).
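As a concrete illustration of the embedding step discussed in point 1 above (our own numpy stand-in for nn.Embedding; the vocabulary indices and embedding dimension are made up for illustration):

```python
import numpy as np

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}   # single-nucleotide tokens
D = 16                                      # model embedding dimension

rng = np.random.default_rng(0)
table = rng.normal(size=(len(VOCAB), D))    # learned during training in practice

def embed(seq):
    """Map a DNA string of length L to an (L, D) array: one row lookup per
    token index, the same semantics as nn.Embedding."""
    idx = np.array([VOCAB[ch] for ch in seq])
    return table[idx]

x = embed("ACGT")   # shape (4, 16)
```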
Thank you again for the thorough review. Please let us know if there are other things we can clarify.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I would like to thank the authors for their response. I have one follow-up question regarding the computational efficiency of fine-tuning. The authors mention in their comments that "the Nucleotide Transformer uses a parameter-efficient finetuning: only 0.5% parameters are updated.". However, the original paper says that they only need 0.1% parameters for fine-tuning. And if I understand correctly, fine-tuning HyenaDNA requires all the parameters. Therefore I am a little bit surprised that in Table R5, HyenaDNA can be much faster than NT-2.5B. In addition, the number of fine-tuned parameters of NT-2.5B should be significantly smaller than the number of parameters of DNABERT. However, as shown in Table R5, DNABERT is still significantly faster than NT-2.5B. Can the authors elaborate on that?
---
Reply to Comment 1.1.1:
Comment: Thank you for the comments, and for catching the mistake. It is indeed 0.1% parameters that are updated for the Nucleotide Transformer (NT) model. We will update that number for the NT in Table R5 of the common response.
Yes, it is also correct that HyenaDNA updates all its parameters during finetuning.
**In regards to how HyenaDNA is much faster (in overall GPU-hours) than the NT-2.5B even though it only updates 0.1% of its parameters (requiring forward and backward passes):**
- the NT model still needs to compute a very large forward pass over all its parameters, which consequently is still quite time consuming. As a rough benchmark, Li et al. (2020) showed that a neural network (including BERT, like the NT model) can have a backward pass that is 3x longer than a forward pass (in latency). So although the NT model saves a lot of compute time by freezing most of its parameters (saving the backward pass), its sheer size still requires a lot of compute (forward pass) compared to HyenaDNA (as well as DNABERT).
- Another reason for the large difference from the NT model in Table R5 is that HyenaDNA is pretrained on far less data while achieving competitive (or better) results than the NT model. We provided Table R5 to compare total GPU-hours (at the request of a reviewer) to provide a different perspective on efficiency, which can be viewed along multiple dimensions.
- Another efficiency dimension, shown in the original manuscript (Figure 4.1), is to control for the same model and data size and compare the runtime of a small, 2-layer model with attention vs. HyenaDNA. In that case, the main efficiency benefit of HyenaDNA appears primarily at longer sequences, where its cost grows nearly linearly ($N \log N$) vs. $N^2$ with attention.
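The $N \log N$ scaling above comes from evaluating the long convolution with FFTs instead of materializing an $N \times N$ operator. A minimal numpy sketch of an FFT-based causal convolution (illustrative only, not the paper's implementation):

```python
import numpy as np

def fft_causal_conv(h, u):
    """Causal convolution y[i] = sum_{j<=i} h[i-j] * u[j], in O(N log N)."""
    n = len(u)
    fft_len = 2 * n  # zero-pad so the circular FFT convolution becomes linear
    y = np.fft.irfft(np.fft.rfft(h, fft_len) * np.fft.rfft(u, fft_len), fft_len)
    return y[:n]  # keep only the causal prefix
```

A direct implementation of the same convolution costs $O(N^2)$, which is what makes very long implicit filters tractable via the FFT route.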
Thanks very much for the engaging and thoughtful questions.
### Citation
Li, Shen, et al. "Pytorch distributed: Experiences on accelerating data parallel training." arXiv preprint arXiv:2006.15704 (2020). | Summary: This paper presents HyenaDNA, an advanced genomic foundation model leveraging the Hyena language model's capabilities, which are based on implicit convolutions. The authors highlight the limitations of prior Transformer-based genomic models, which have been constrained by token lengths and therefore impeded accurate modeling of long-range genetic interactions. These models also relied on tokenizers or fixed k-mers, which resulted in loss of single nucleotide resolution and crucial genetic variations. HyenaDNA addresses these issues. The model's unique abilities, including its sub-quadratic scaling in sequence length, usage of single nucleotide tokens, and full global context at each layer, mark a significant advancement in genomics.
Strengths: Overall, this is an excellent paper! I appreciate the application of the Hyena model to DNA sequences. The utilization of advanced sequence modeling to tackle significant challenges in biology is commendable.
The concept of soft prompting for long-context models is exceptionally innovative and interesting!
Weaknesses: - To be frank, I believe the evaluation of short sequences may not be crucial for this paper, considering its primary focus on long-range modeling. I would suggest including a comparison with the Enformer model for a more comprehensive perspective. More benchmarks on the long-context task are necessary.
- Moreover, additional technical details should be incorporated either in the main text or the supplementary materials to facilitate a thorough understanding of the methodology used.
See more detail in the questions:
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would raise my rating if the authors resolve my concerns.
- On line 132 , it would be beneficial to showcase $ T_{i, j} $.
- Could you elaborate a bit more on Figure 3.1, specifically, what the arrows represent?
- Could you provide detailed information about the baseline CNN model used in the GenomicsBenchmark?
- Are there any results from training the model from scratch? The connection between the pre-training goal and the downstream task doesn't seem very apparent.
- How well did the CNN model perform in predicting on the Nucleotide Transformer benchmark?
- On lines 259 - 260, how would training from scratch for each task fare? If the advantages of pre-training are minimal, using $3200\times$ less pretraining data doesn't present a significant benefit.
- Could you detail the fine-tuning process in section 4.2?
- For Figure 4.1, it would be helpful to incorporate the data from table 4.1 to display the "upper bound" performance.
- It would be advantageous to always include a training-from-scratch CNN or dilated CNN on all tasks.
- The species classification result is quite impressive! Would it be possible to apply any Explainable AI (XAI) methods to discern the principles that the model has learned?
- It would be beneficial to update the supplement to include more details about the Long Convolution. I had to refer to a previous paper to understand this concept.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for the thoughtful review. We’re glad the reviewer is excited about the contribution of Hyena to DNA sequences, as well as the novelty of soft prompting for long-context. Below we address concerns the reviewer raised, and clarifications to technical details that were asked. We’re happy to answer any further questions the reviewer may have.
### Short vs long range evaluations
The suggestion to apply HyenaDNA to the Enformer [Avsec et al., 2021] task is a good one. We dwelled on applying HyenaDNA to Enformer for quite some time. After much debate, we made the strategic choice to focus on a general foundation model (as opposed to a supervised Enformer task), with the compute budget we had. Our hope was that we could reach more computational biologists who needed more expressive long-range backbone models that could be easily adapted for their specific downstream tasks. We absolutely believe Enformer would be an incredibly exciting application of HyenaDNA and leave that to future research.
That being said, we do showcase a number of long-range downstream tasks, including chromatin profile (8k long), a novel ultralong-range species classification task on up to 1M tokens, in-context learning tasks that use over 32k tokens, and biotype classification on up to 160k tokens.
We primarily included a high number of short-range benchmarks because of their accessibility and clear ability to compare against existing models. We hope that these make the case that HyenaDNA performs well on both short and long range tasks.
### Additional methodology details
Thank you for bringing these up. We’ll be sure to add further architecture and training details to section 4 and appendix of the manuscript. We will also release all the code (with colab notebook examples), and model weights.
### Questions
**Model and training clarifications**
- Thanks for pointing these out. We will add clarifications regarding $T_{ij}$ on line 132, more details on the long convolutions in the supplement section, and an upper bound for the tuneable tokens in Table 4.1. $T_h$ is the Toeplitz matrix representation of a convolution with filter $h$. It is a mathematically equivalent representation ($T_h u$ or $h * u$), with definition $T_{ij} = h_{i-j}$, i.e., $T$ is the matrix with the filter on each row, shifted by the column index.
- We will amend section 3.1 for clarity (also see Figure A.1 in the supplement for a block diagram of HyenaDNA). To address your specific question, in Figure 3.1 the arrows represent how an input flows through a single Hyena operator. Starting with the input $x$, separate projections are produced using 3 different $W$ matrices (dense layers). This is followed by 3 short convolutional layers, $T_{x_2}$, $T_{x_1}$, and $T_{v}$, creating $x_{2}$, $x_{1}$, and $v$. Then $v$ is element-wise gated by $D_{x_1}$ (a diagonal matrix/layer), followed by a long convolution by $T_{h}$, and finally another gate by $x_2$.
- We will add further details in the appendix on the GenomicBenchmarks baseline CNN. As described by the original authors, the CNN uses an embedding layer and 3 conv layers with 16, 8, and 4 filters. It uses batch norm and max pooling after each conv layer, followed by 2 dense layers. It is trained for 10 epochs with batch size 64. The model sizes range from 120k to 520k parameters, depending on the sequence length chosen. We did not apply this basic CNN to the Nucleotide Transformer datasets as it appeared to be a simple demonstration model by the dataset authors. We’ll be glad to add these by the camera-ready if the reviewer believes this would bring clarity to how HyenaDNA performs over previous methods.
- The finetuning procedure in section 4.2 has some additional details in the supplementary information (section A.2), but we will certainly provide further details in the camera-ready. Specifically, our finetuning procedure attaches a single linear layer to a pretrained HyenaDNA model (2 layers, d_model=128 and 256), pretrained on sequence length 1k. The output embeddings are averaged over all the tokens and then used to predict a class. Each dataset is finetuned separately and swept through a number of hyperparameters (via a grid search). We provide the hyperparameter ranges in Supplemental Table A.2 for the GenomicBenchmarks and Table A.3 for the Nucleotide Transformer datasets.
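As a supplement to the $T_h$ clarification above, the stated equivalence $T_h u = h * u$ with $T_{ij} = h_{i-j}$ can be checked numerically. A small sketch (illustrative, using a causal filter so entries above the diagonal are zero):

```python
import numpy as np

def toeplitz_from_filter(h, n):
    """Build T with T[i, j] = h[i - j]: the filter on each row, shifted by column."""
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):  # causal: h is only applied to past inputs
            T[i, j] = h[i - j]
    return T
```

Then `toeplitz_from_filter(h, n) @ u` matches the first `n` entries of `np.convolve(h, u)`, i.e. the matrix-vector product and the convolution are the same operation.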
**Scratch/pretraining**
- Thank you for the suggestion - we provided additional experiments training from scratch in the common response in Tables R1 to R3, along with our observations. We will add these to section 4 of the manuscript. The key takeaway was that on simpler tasks, pretraining boosted HyenaDNA's performance mildly to moderately.
- As difficulty increased (both the task itself and the sequence length), we observed greater performance gains from pretraining. See the common response and Table R1 and Table R3 for more in depth analysis of pretraining effects on downstream performance.
**Species classification**
We’re glad the reviewer found the species results impressive. We certainly think XAI methods can be applied to this task. Though our initial experiment focused primarily on showcasing the relative performance between longer sequence lengths, we think XAI methods applied to HyenaDNA would be an exciting future research direction, especially given the successful history of using convolutions in interpretability in genomics.
Thanks so much for the thoughtful and very detailed feedback and questions, it’s helped us better communicate our work!
### Citations
Avsec, Žiga, et al. "Effective gene expression prediction from sequence by integrating long-range interactions." Nature methods 18.10 (2021): 1196-1203.
---
Rebuttal Comment 1.1:
Title: concern about the short sequence benchmark
Comment: - The key concern is that I don't see much reason why the short sequence prediction is also improved, which is probably just because the CNN baseline is not so good (also, reviewer nDFb cannot reproduce this result). And based on the DeepSEA benchmark, I would say the difference is relatively small. The motivation should be the long context task from the design motivation view.
- For training from scratch, do you use the same hyperparameters, or do you tune them separately for the from-scratch setting? For example, training from scratch may need longer training or a smaller learning rate. Also, in the cross-species benchmark, do you also use the warm-up?
- Could you please provide a time comparison between training from scratch and the pretrain+fine-tune setting? This could be a great comparison to know if pre-training is necessary.
---
Reply to Comment 1.1.1:
Comment: Thanks for your engagement, we appreciate your feedback and interesting questions.
**1. Short range performance:** We agree with the reviewer, the focus should be on the long range design and capabilities of HyenaDNA. The short range benchmarks are primarily meant as a comparison for common benchmarks, and to show HyenaDNA is at least as good as existing models. They are also readily available, and use low compute resources to enable most in academia to test it out.
That being said, the short range models do allow us to test out our other design choices on DNA sequences. From our ablations in the common response Table R2, it appears other design choices also improve the short range tasks, including the single nucleotide tokenization, which is useful for any range.
We also agree that the CNN is a weaker baseline. Another reviewer did suggest using a stronger baseline on the GenomicBenchmarks with DNABERT, which is now provided in Table R1, and which achieves SotA on 1 of 8 GenomicBenchmarks datasets.
The Nucleotide Transformer is arguably a much stronger baseline model (previous SotA) and is evaluated on harder tasks across 18 datasets, on which HyenaDNA performs competitively as well. For the DeepSEA benchmark, HyenaDNA’s main benefit is its much smaller model size while performing similarly. Notably, this dataset is nearly saturated, so reducing the error rate becomes exponentially more difficult compared to, for example, the NT datasets. We look forward to continuing to apply HyenaDNA to longer-range benchmarks.
**2. Training from scratch:** For training from scratch, we sweep hyperparameters as well, though sometimes the best hyperparameters are the same for both scratch and finetuning. It did not appear that the best learning rates were significantly changed because of the finetuning vs scratch. What we did notice was that the training becomes much more stable with the pretrained models, i.e. the loss curves are smoother and less “jumpy”. This was especially true for the long range species classification task, where the ultralong sequences cause severe instability.
**2.a Species warmup:** Indeed, we use sequence length warmup on species classification for the ultralong-range (250k+).
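One simple way to realize such a sequence length warmup is a doubling schedule, sketched below. The start length and the doubling rule here are illustrative, not the exact schedule used in the paper:

```python
def seq_len_warmup(start, target):
    """Yield a warmup schedule of sequence lengths that doubles each stage,
    capped at the target length."""
    length = start
    while length < target:
        yield length
        length *= 2
    yield target
```

Training begins on the short, stable lengths and only reaches the ultralong target length in the final stages, which is what tames the instability mentioned above.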
**2.b Reproducing results:** Please see the common response **[dated Aug 13]** to reviewer **[nDFb]**, in which we provide a Dockerfile that contains the exact settings and launch commands to reproduce 5 sample datasets from the Nucleotide Transformer. It appears the reviewer was missing the correct code, hyperparameters, and pretrained weights. We're confident the correct code (found in the Docker image) will reproduce the reported scores.
**3. Pretrain vs scratch time:** Thank you for the suggestion. We have now added this information in Table R6 of the common response - comparing convergence times between scratch and pretrained finetuning on the Nucleotide Transformer datasets. Though they have similar training times for their respective top metrics (23 vs 26 mins on average per dataset), the pretrained models generally reach the same performance metric (as the scratch models) faster, and then continue to eke out more gains, albeit more slowly. We appreciate the suggestions and will put these results in the appendix of the manuscript.
Thanks very much for the great follow-up questions. | Summary: This paper introduces HyenaDNA, a genome foundation model based on the Hyena architecture that replaces attention layers with implicit convolutions. Though being 2500 times smaller, it achieves better performance than the state-of-the-art model.
Strengths: - The paper is clearly written.
- The proposed method achieves very strong performance with very few parameters and is able to process ultra-long DNA sequences.
- This paper brings in-context learning and tunable prompting to the area of DNA language models and shows great performance improvements.
Weaknesses: - The comparison over baselines is unfair.
- Since Hyena is fundamentally different from the Transformer-based architecture, simply comparing the number of parameters does not accurately reflect the model efficiency. I think it would be more accurate to compare the wall-clock time and GPU time used in both pre-training and downstream evaluation. Actually, according to Figure A.2, the model is slower than its attention-based variants when inputs are shorter than $10^4$, which is very common in genome analysis tasks. Therefore, it would be very interesting to see how efficient the model is on each downstream task compared to DNABERT and Nucleotide Transformer.
- All the previous models use standard or parameter-efficient fine-tuning for downstream evaluation. I think it is important to also fine-tune HyenaDNA on each downstream task to show how good it is as a *backbone* model and for a fair comparison with other models. It would also be interesting to see how the baseline models perform with the tunable prompting.
- This paper introduces a very strong model with a series of fancy techniques; however, as an academic publication, it fails to provide enough insights and explanations to the community about the superiority of its performance.
- According to the Hyena paper, the Hyena architecture achieves a similar level of performance (slightly better) as transformer-based models (e.g., GPT) with the same number of parameters in general language modeling. Therefore, the fact that it dominates transformer-based models in DNA language modeling indicates the existence of **fundamental errors** in all the existing works. If this is true, what are the errors? I can think of a few possibilities:
- Masked language modeling vs causal language modeling
- k-mer tokenization vs character-level tokenization
- attention mechanism fails to capture the semantics of DNA (however, AttnDNA performs very well)
- To be honest, I don't think any of the above can lead to such significant improvements. Thus, a thorough ablation study and experiments under the same setting are necessary.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does AttnDNA perform on the datasets used by Nucleotide Transformers?
- Can you implement Nucleotide Transformer and DNABERT on the GenomicBenchmarks datasets for comparison?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In sum, the work is technically sound and empirically strong. However, based on the above thoughts, I tend to reject it for now. If the concerns are clearly solved, I would be happy to recommend a strong acceptance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review of our work. We’re glad they appreciate the writing clarity, strong empirical results and the exploration of in-context learning and tunable prompting in genomics. Below we address the weaknesses and questions the reviewer described. We’re happy to follow up with any further clarifications the reviewer may have.
### Pretrain and Finetune Runtime Comparison
Thank you for suggesting to compare runtime during pretraining and finetuning. We’ve provided these results for HyenaDNA and baseline models in Table R4 and R5 in the common response, and will add a section on efficiency in the appendix of the manuscript. We share part of the efficiency discussion we plan to include below as well.
- Indeed, FlashAttention is faster than HyenaDNA on shorter sequences. Fortunately, HyenaDNA required far less pretraining time and data to more than make up for this, as noted by the high GPU-hour difference in Table R4.
- We agree that many existing genomic tasks are shorter range, though we believe this to be due to the *constraints* of current sequence models - and that demand for long-range models is prevalent (e.g., gene expression, histone modification, splice prediction, chromatin accessibility and structure). As computational biologists see that longer-range models are possible, the emphasis on genomic benchmarks will likely shift to long-range tasks, quickly moving the advantage to subquadratic-scaling models.
### Insights into Hyena’s performance
We’re glad the reviewer is interested in understanding where the HyenaDNA gains come from. We performed a series of ablations (details and results in Tables R1 to R3 of the common response) to help understand the effects of some of the factors the reviewer suggested, including: k-mer tokenization vs single character, attention vs Hyena, and causal vs. bidirectional.
The results of the ablation suggest each of these design choices contributed to gains over previous genomic FMs, and will be included in the manuscript in section 4 and the appendix. It may suggest fundamental errors (as the reviewer noted), or simply design choices that have yet to be explored in genomics and biology. We’re excited to further investigate other existing assumptions that may exist at the intersection of biology and machine learning.
### Finetune comparison (as a backbone model)
We wanted to clarify that in all downstream tasks (except for section 4.3 on in-context learning), we perform standard finetuning with HyenaDNA. This enables a fairer comparison between previous genomic FMs. We hope this alleviates the reviewer’s concern about whether HyenaDNA is used as a backbone in a similar fashion to baselines.
**Tuneable prompting on Baseline models:** For comparing baselines models with tuneable prompting, this unfortunately would require a significant redesign of their models. The most important constraint is the context length, which HyenaDNA opens up, but previous methods are limited to 512 or 1000 tokens. HyenaDNA uses up to 32k tuneable tokens, in addition to adding multiple samples within the context, which would simply not be possible on DNABERT and Nucleotide Transformer. That may be a really interesting future research direction, and we hope we’ve inspired others to pursue this.
### Questions
**1. AttnDNA on the Nucleotide Transformer (NT) datasets:** As suggested by the reviewer, we added results for AttnDNA on the NT datasets in the common response in Table R2. In all tasks, AttnDNA underperforms HyenaDNA (see common response for more analysis).
To summarize the main takeaways, AttnDNA was able to perform well on simpler tasks (e.g., promoter prediction), but struggled significantly, with very large gaps, on the more challenging histone mark tasks (the ones starting with “H”) as compared to HyenaDNA. This suggests that the Hyena operator itself also contributes to the boost in performance over previous attention-based genomic FMs.
**2. DNABERT finetuning:** As suggested, we finetuned DNABERT as a stronger baseline on the GenomicBenchmarks, shown in the common response in Table R1. Indeed DNABERT does reach SotA on 1 of 8 datasets, while HyenaDNA retains top performance on 7 of 8 datasets.
**Planned:** We plan to finish finetuning the Nucleotide Transformer model on the GenomicBenchmarks in the coming weeks. As these models are very large (500M to 2.5B parameters), we did not yet have the compute resources to complete training. We did finetune 1 of the 8 datasets for the Mouse Enhancer task. The 500M Nucleotide Transformer underperformed HyenaDNA by about 10 accuracy points (85.1 vs 75.2). If accepted, we can include the remaining 7 datasets in the camera-ready.
Thank you again for the suggestions to improve our work, they’ve helped us better understand and communicate the differences between our methods and previous genomic FMs.
---
Rebuttal Comment 1.1:
Comment: ***See comments 3/3 of common response for the analysis of AttnDNA finetuning on the Nucleotide Transformer datasets.*** | Summary: The authors train the Hyena operator model on the human genome and adapt it to downstream tasks in computational biology.
Strengths: Lots of clever things about this work that I really like:
Very good use case of Hyena model with very long-range dependencies
Curriculum learning is very clever and makes sense. Figure 3.2 is great to show this
I think Table 4.4 with the use of XGBoost is very simple but effective in getting a statistic about the utility of an embedding. With XGBoost, you are going to get a pretty reasonable and comparable result every time you use it.
The model capacity shown by perplexity and sequence length is good.
I really really like that it is able to work at single base pair resolution.
I’m impressed and happy that section 4.4.3 uses held-out chromosomes for training and testing (rather than random sampling). Since you are working with multiple species, is there still some aspect of data leakage in train/test sets because of synteny?
I think this is impressive being only trained on the human genome. I’m excited to see this model trained on all genomes!
Weaknesses: I think the GenomicsBenchmark claims about SotA could be improved. Definitions of SotA should be based on architecture, not the datasets. Since the dataset is newer than previously benchmarked models, I would recommend the authors evaluate the GenomicsBenchmark across other SotA models or architectures (like DNABERT and Nucleotide Transformer, which are mentioned in this paper). Alternatively, the SotA claim is weak at best and misleading at worst.
I think section 4.4.3 could have a very simple baseline of a kmer count model. Can a simple kmer based model do just as well, or poorly? I’m sure it is very fast.
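The k-mer count baseline suggested here is indeed cheap to build. A minimal featurizer sketch (hypothetical, not something from the paper):

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k=3):
    """Count occurrences of each DNA k-mer in a fixed vocabulary order,
    producing a 4**k-dimensional feature vector."""
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[v] for v in vocab]
```

The resulting fixed-length vector can then be fed to any off-the-shelf classifier (e.g., logistic regression or XGBoost) as the simple baseline the reviewer describes.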
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Figure 3.2 - Do longer sequences simply take longer to do a forward and backward pass? You show wall time. Are the number of steps relatively closer?
How much is the result of your model’s result is its architecture versus using tunable prompting? What would happen if you used tunable prompting for the Nucleotide Transformer?
How are the F1 and MCC scores for Table 4.2 generated? Which values are you comparing?
How was the finetuning in section 4.4.1 done? Is it not soft-prompting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Generally, I find the term “Foundation Model” to be ambiguous and unnecessarily buzzwordy. HyenaDNA is a large, pretrained model. Even though the authors have pretrained the model, it has to be finetuned and quite heavily engineered differently for each subtask. But, I guess this is what the field is wanting to say.
While I do like that the important aspects of the paper are emphasized, some of the color, boldness, different font, and italicization can be toned down a bit. For example: “Therefore, having both long-range context and single nucleotide resolution simultaneously” doesn’t need to be blue and bolded. Also, what is the author’s intent in using blue text versus bold text?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review of our work and are glad they appreciate many of the contributions including the long-range capabilities, single nucleotide resolution, and the curriculum learning introduced. Below, we address the concerns the reviewer made about the GenomicBenchmarks baselines, and include additional ablations requested.
### GenomicBenchmark Baselines
In the common response above, we added ablation results to address concerns the reviewer (and others made), including:
- Finetuning a stronger baseline using DNABERT, which achieved SotA on 1 of 8 datasets (with HyenaDNA outperforming on the other 7 datasets).
- HyenaDNA trained with a K-mer (K=6) tokenizer to compare the effect of K-mers vs. single-character tokenizers (see Table R1 in the common response). The K-mer tokenizer consistently degrades performance relative to the single-nucleotide tokenizer (by 2-10 accuracy points).
**Planned:** We plan to finish finetuning the Nucleotide Transformer model on the GenomicBenchmarks in the coming weeks. As these models are very large (500M to 2.5B parameters), we did not yet have the compute resources to complete training. We did finetune 1 of the 8 datasets for the Mouse Enhancer task. The 500M Nucleotide Transformer underperformed HyenaDNA by about 10 accuracy points (85.1 vs 75.2). If accepted, we can include the remaining 7 datasets in the camera-ready.
### Questions
**Longer sequence time:** Indeed, longer sequences do require more time for forward and backward passes. See figure 4.1 in the manuscript for the wall time (forward and backward pass) by sequence length. We compare times for both HyenaDNA and its attention counterpart.
**Data leakage / synteny:** That’s a really interesting point. It’s possible “data” leakage from this evolutionary process can occur, and it would be challenging to account for explicitly. We take comfort in that in the aim of the species classification task, we’re interested in the relative performance between long sequence lengths, over the absolute performance (in this example).
**Tuneable Prompting vs. Finetuning:** To be clear, all of our short-range tasks in section 4.2, as well as 4.4, use **standard finetuning**. This includes the chromatin profile task in 4.4.1 (as inquired by the reviewer). This makes the comparison between previous architectures more clear. The tuneable prompting approach is a standalone set of experiments in section 4.3 only, and considered “self-contained”. We’ll be sure to make this distinction more explicit in the manuscript, thank you for pointing this out.
**Tuneable prompting on Nucleotide Transformer:** For comparing baseline models with tuneable prompting, this unfortunately would require a significant redesign of their models. The most important constraint is the context length, which HyenaDNA opens up, but previous methods are limited to 512 or 1000 tokens. HyenaDNA uses up to 32k tuneable tokens, in addition to adding multiple samples within the context, which would simply not be possible on DNABERT and Nucleotide Transformer. That may be a really interesting future research direction, and we hope we’ve inspired others to pursue this.
**Foundation models terminology:** We acknowledge that this can be a contentious terminology to some. We took the stance that most effectively conveys what we hope and believe our model can do, which is to learn useful and general representations that can be applied to downstream tasks.
**Styling / bold font:** Thank you for bringing this up, we initially wanted to bold and highlight text we thought were key takeaways per section. We’ll be sure to readdress the format as to not be so distracting.
**F1 and MCC metrics:** The 18 datasets in the Nucleotide Transformer were sequence-level classification tasks (binary and 3-way). Using the model prediction for a DNA sequence, we’re able to determine a true/false positive or true/false negative with the label. F1 and MCC metrics are standard statistical methods used by the original authors of the Nucleotide Transformer.
Formally, these are calculated as:
$$\mathrm{F1} = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$$
$$\mathrm{MCC} = \frac{\mathrm{TP} \cdot \mathrm{TN} - \mathrm{FP} \cdot \mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}$$
Implementation-wise, we used predefined metrics from the scikit-learn package to calculate our F1 and MCC metrics. For the F1 score, we used the f1_score function [https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-f-measure-metrics] with the average flag set to macro. And for the MCC values, we used the matthews_corrcoef function [https://scikit-learn.org/stable/modules/model_evaluation.html#matthews-corrcoef].
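As an illustration of the formulas above, here is a minimal self-contained sketch computing binary F1 and MCC directly from confusion-matrix counts (the counts are made up for the example; in practice we use scikit-learn’s `f1_score` and `matthews_corrcoef` as noted):

```python
import math

def f1_score_binary(tp, fp, fn):
    """F1 = harmonic mean of precision and recall (binary case)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator

# Hypothetical counts for illustration only
print(round(f1_score_binary(tp=8, fp=2, fn=3), 3))   # 0.762
print(round(mcc(tp=8, tn=7, fp=2, fn=3), 3))         # 0.503
```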
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Thank you for your comments and continued discussion.
I will keep my score the same, so long as the claims about the GenomicsBenchmark in the abstract and text are updated.
---
Reply to Comment 1.1.1:
Comment: We appreciate your time! We will update the GenomicBenchmarks claim as you have suggested. Thank you. | Rebuttal 1:
Rebuttal: ### Common Response
We thank the reviewers for their time and in-depth reviews. We believe that addressing the reviewer’s feedback and questions has helped greatly improve the quality of our manuscript.
We are happy to hear the reviewers appreciated the strong empirical performance of HyenaDNA over previous methods **[YJ4C, LTPa, nDFb]**, the novelty/relevance of the application **[YJ4C, LTPa, nDFb, DWDg, JfVK]**, and the clear description of the methods **[YJ4C, nDFb]**.
### Ablations
Multiple reviewers suggested ablations, including downstream performance for scratch vs. pretrained models **[YJ4C, DWDg]**, K-mer vs. single character tokenizers **[YJ4C, LTPa, nDFb]**, causal vs bidirectional **[YJ4C, nDFb]**, and additional model baselines on the GenomicBenchmarks and Nucleotide Transformer datasets **[LTPa, nDFb]**.
Below, we provide a common response with updates and ablation results that were requested by reviewers **[YJ4C, LTPa, nDFb, DWDg]**. In these experiments, we further investigate how each design choice in HyenaDNA contributes to performance gains compared to baseline models.
### Updates
Since our submission, we are pleased to report that we have pretrained an even longer HyenaDNA model: 1M context length (vs. 450k in the submission), which is 500x longer than previous genomic FMs and 160x faster than attention.
The results from ablations are in the PDF, with a summary below:
- Several new choices in HyenaDNA contribute to gains over previous genomic foundation models (FMs): single character vs. k-mer tokenizer, Hyena operator vs. attention, and the causal vs. bidirectional Hyena.
- Pretraining has a greater effect on the more challenging tasks and as sequences become longer, e.g., boosting species classification accuracy by up to 30 points at 450k context.
- HyenaDNA uses far less compute for pretraining and finetuning. E.g. the Nucleotide Transformer uses 128 A100 GPUs to pretrain for 1 month, while HyenaDNA used 1 A100 for 80 mins to pretrain a model used on the same downstream tasks.
**We provide further details on the ablations in the sections below.**
### Scratch / Pretraining
Reviewers **[YJ4C, DWDg]** inquired about how much pretraining (vs. scratch) improves downstream performance. To address this, we include training from scratch on 3 groups of datasets: GenomicBenchmarks, Nucleotide Transformer, and Species Classification.
**1. GenomicBenchmarks (Table R1 PDF):** Pretraining boosted HyenaDNA by up to 3.5 acc points, and by 1.8 points on average. HyenaDNA already performed strongly from scratch, which made pretraining gains more difficult. For AttnDNA, pretraining is more important, boosting performance by up to 11.7 acc points and by 4.5 points on average.
**2. Nucleotide Transformer datasets (Table R2 PDF):**
On the more challenging tasks (histone marks, datasets starting with “H”), pretraining boosts HyenaDNA metrics by up to 21 MCC points on H3K4me3. For simpler tasks (with higher baseline values), such as the splice site and promoter tasks, the boost was smaller (0 to 1 accuracy points). Note: the 18 tasks use a mix of MCC, F1, and accuracy metrics, so an average comparison is less meaningful.
**3. Long-range species classification (Table R3 PDF):**
For species classification, pretraining becomes more important for longer sequences, addressing questions from reviewers **[YJ4C and DWDg]** about where pretraining helps in HyenaDNA. At sequence length 250k & 450k, the scratch/pretrain gap is 30+ accuracy points.
### K-mer tokenization vs single nucleotides
To ablate the impact of the K-mer tokenizer vs. single character **[YJ4C, LTPa]**, we used the same K-mer (K=6) tokenizer from the Nucleotide Transformer model, which had a vocabulary of ~4100. We then trained a scratch HyenaDNA model using this K-mer tokenizer on the GenomicBenchmarks. The K-mer tokenizer degraded performance for every dataset, from 2 to 10 accuracy points compared to (scratch) HyenaDNA with a single character tokenizer (see Table R1 PDF). The K-mer tokenizer is one factor in HyenaDNA’s gain, which we will add to section 4.2.
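To make the tokenizer difference concrete, here is a hedged sketch of the two schemes (the actual Nucleotide Transformer tokenizer has additional details, e.g. special tokens and leftover-base handling, which we omit here):

```python
def char_tokenize(seq):
    """Single-character tokenizer: the vocabulary is just the 4 nucleotides
    (plus special tokens)."""
    return list(seq)

def kmer_tokenize(seq, k=6, stride=6):
    """Non-overlapping k-mer tokenizer (illustrative sketch only). A 6-mer
    vocabulary has 4**6 = 4096 k-mers, consistent with the ~4100 vocabulary
    size noted above."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(char_tokenize("ACGT"))          # ['A', 'C', 'G', 'T']
print(kmer_tokenize("ACGTACGTACGT"))  # ['ACGTAC', 'GTACGT']
```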
### Bidirectional vs Causal
To ablate the impact of using a causal model **[YJ4C, nDFb]**, we implemented a bidirectional version of HyenaDNA and trained from scratch on the GenomicBenchmarks. The bidirectional version degraded performance on 7 of 8 datasets compared to the standard causal HyenaDNA (also from scratch), on average by 3.8 accuracy points. See Table R1 PDF. The bidirectional HyenaDNA was implemented by using a circular FFT convolution.
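To illustrate why a circular convolution makes the model bidirectional, consider the toy example below (a simplified sketch, not the actual Hyena kernel or its FFT implementation): a causal (linear) convolution lets position t depend only on positions ≤ t, while wrapping indices lets early positions see later ones.

```python
def causal_conv(x, k):
    """Linear (causal) convolution: y[t] depends only on x[0..t]."""
    return [sum(k[j] * x[t - j] for j in range(len(k)) if t - j >= 0)
            for t in range(len(x))]

def circular_conv(x, k):
    """Circular convolution (what an FFT convolution computes without
    zero-padding): indices wrap, so y[t] can also depend on x[t+1..]."""
    n = len(x)
    return [sum(k[j] * x[(t - j) % n] for j in range(len(k)))
            for t in range(n)]

x = [0.0, 0.0, 0.0, 1.0]  # impulse at the last position
k = [0.5, 0.25, 0.125]    # kernel reaching two steps back

print(causal_conv(x, k))    # [0.0, 0.0, 0.0, 0.5] -- no leakage from the future
print(circular_conv(x, k))  # [0.25, 0.125, 0.0, 0.5] -- positions 0 and 1 "see" position 3
```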
### DNABERT baseline
To compare against a stronger baseline on the GenomicBenchmarks, as suggested by reviewers **[LTPa and nDFb]**, we also finetune DNABERT. DNABERT reaches SotA on 1 dataset (Coding vs. Intergenomic) and matches HyenaDNA on another (Human Enhancers Cohn) (Table R1 PDF). DNABERT uses 110M params, while HyenaDNA uses just 400k params. We will add these results to section 4.2.
### AttnDNA finetuning
We finetune the AttnDNA model on the Nucleotide Transformer datasets, Table R2 PDF. AttnDNA and HyenaDNA are causal and use single nucleotide tokens, but AttnDNA significantly underperformed against its Hyena counterpart. This suggests the Hyena operator itself contributes significantly to the overall performance gains of HyenaDNA.
### Pretrain and Finetune Runtime Comparisons
**Pretraining compute:** Reviewers **[LTPa, nDFb, JfVK]** suggested also comparing efficiency by compute resources used (in addition to parameter count). When comparing actual GPU-hrs used for pretraining across baseline models, HyenaDNA is more efficient than baselines. See Table R4 PDF. We will put a full table of results in the updated manuscript.
**Finetuning compute:** We use the GenomicBenchmarks for finetuning and record the per-epoch runtime in Table R5 PDF. The Nucleotide Transformer uses parameter-efficient finetuning: only 0.5% of parameters are updated.
Pdf: /pdf/3610f5711e2910e188b64b062599c03360defbfb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This manuscript applies Hyena, a neural operator based on implicitly parametrized long convolutions and data-controlled gating, to the domain of DNA modelling.
The subquadratic complexity of Hyena enables scaling to context lengths of up to 450,000 at single nucleotide resolution.
This represents a significant improvement over previous methods based on dense attention, increasing pre-training sequence length by more than two orders of magnitude.
Pre-training is done using a sequence length warm-up scheme, such that sequences grow longer as training progresses.
The resulting model is benchmarked on a variety of downstream tasks using different adaptation methods.
Strengths: 1. (**Originality**) HyenaDNA is the first work to apply Hyena, a novel breed of implicitly parametrized long convolutions, to DNA modeling.
2. (**Originality**) The authors experiment with two adaptation strategies commonly used in NLP to apply the pre-trained model to novel tasks.
3. (**Quality**) The method is evaluated on a large number of genomic datasets introduced as part of related work. This firmly situates HyenaDNA within the existing literature.
4. (**Significance**) The proposed model outperforms state-of-the-art attention-based methods on the majority of presented downstream tasks with considerably fewer parameters and scales to extremely large context sizes of 450,000 tokens at single nucleotide resolution.
5. (**Clarity**) Overall, the manuscript is written clearly, and the arguments are easy to follow.
6. (**Clarity**) Architecture and hyperparameter details are clearly presented.
7. (**Clarity**) Including background task descriptions for the Nucleotide Transformer, the Biotype Classification, and the Chromatin Profile Prediction benchmarks are helpful for understanding the evaluation protocol.
Weaknesses: 1. (**Originality**) Beyond the application of Hyena to the DNA domain, the manuscript offers few new technical insights that could be of interest to the machine learning community. Sequence length warm-up is an established technique to mitigate training instabilities, and soft prompting, as well as few-shot fine-tuning, are commonly used adaptation strategies in the natural language processing community.
2. (**Quality**) Results are presented without any measure of deviation over multiple repetitions. This is true even for smaller scale fine-tuning experiments.
3. (**Quality**) For the GenomicsBenchmark, the authors should report more details about the CNN baseline (parameter count, number of layers, etc.), as well as training details and hyperparameters for AttnDNA. If the same hyperparameters are used for both HyenaDNA and AttnDNA, please state this explicitly.
4. (**Clarity**) Few-shot adaptation includes a tuning phase, making this adaptation method more similar to few-shot fine-tuning than in-context learning. Relevant literature should be cited.
5. (**Clarity**) The results reported in Table 4.1 for the Nucleotide Transformer do not seem to match with those reported in the original paper in Supplementary Table 6. Please explain this deviation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why is HyenaDNA pre-trained autoregressively rather than non-autoregressively, as was done in Nucleotide Transformers? Both methods would be feasible since downstream adaptation of HyenaDNA always includes a fine-tuning phase.
- Considering the limited vocabulary of just 4 nucleotides, it's somewhat unexpected that the reported perplexity score for a high-performing long-context model like HyenaDNA would be this high. Could you provide some insight on this? To underscore the significance of pre-training further, it would be valuable if you could supplement the evaluation of pre-trained embeddings using probing classifiers with an additional experiment where HyenaDNA is trained directly from scratch on the downstream datasets.
- The manuscript mentions that both a long context and single nucleotide resolution are crucial. However, K-mer tokenization could help reduce the temporal dimension, and previous work has shown that a higher k leads to improved results (Ji et al., 2021). Therefore, it would be beneficial to include an experiment that compares HyenaDNA models pre-trained with different tokenization schemes.
### References
- Ji, Y., Zhou, Z., Liu, H., & Davuluri, R. V. (2021). DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome. Bioinformatics, 37(15), 2112-2120.
### Minor comments
- Possible Typo in Table 4.1 row 2 (Coding vs. Intergenomic): the improvement over baseline performance (+3.5) does not match with the reported base performance. Both Base and HyenaDNA are reported to have the same accuracy.
- Typo on line 543 : sequence-level instead of sequence-leel
- Typo on line 544: that instead of taht
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors address the limitations of their method and experiments in the manuscript.
Possible malicious uses of HyenaDNA are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review of our work. We’re glad they appreciate the strength of the results, the originality and significance of the application, and the clarity of the manuscript. Below, we address the reviewer's concerns about the contributions, design, performance, and tokenization.
### ML community contributions
This work is representative of many problems in ML, namely how to efficiently capture statistics that span both local and long-range dependencies: the local spatial structure of objects and the large-scale correlations between those objects in images, local motion and long action sequences in video, and structure within phrases alongside long-range correlations in text.
Regarding technical relevance, we would like to highlight a few observations and insights we think are relevant to the ML community, and that could be used as a recipe for similar tasks.
**Ultralong sequence training:** Training on 1M context length data gives rise to unique instability challenges, which depart from the typical scaling of model size or datasets. Sequence length warmup has been explored in the NLP community in the past, notably with sequences only up to 2k tokens long [Li et al., 2022] and with small Transformers [Press et al., 2020]. Many questions remain about the schedule of increases (in particular for long sequences), learning rate adjustment, and how it affects token efficiency at ultralong sequences. We share one set of strategies and findings in genomics that proved performant.
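As a concrete but purely illustrative example of one such schedule, a simple strategy is to multiply the training sequence length by a fixed factor at each stage until the target context is reached (the stage durations and learning-rate adjustments used in our training are separate choices not shown here):

```python
def warmup_lengths(start_len, max_len, factor=2):
    """Yield the sequence length for each warmup stage, multiplying by
    `factor` per stage and capping at `max_len` (illustrative schedule only,
    not the exact HyenaDNA schedule)."""
    length = start_len
    while length < max_len:
        yield length
        length = min(length * factor, max_len)
    yield max_len

print(list(warmup_lengths(64, 1024)))  # [64, 128, 256, 512, 1024]
```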
**Single character training:** In NLP, the use of single characters consistently performs worse than aggregated tokens [Yu et al., 2023, Tay et al., 2021, Kalchbrenner et al., 2016]. Our work contributes a successful recipe for single character training that surpasses aggregated-character methods.
**Model vs vocabulary size:** Our work raises novel questions on the tradeoff between model size vs vocabulary size. Given that our design uses a small vocabulary and model size (orders of magnitude smaller) than previous SotA genomic FMs, it begs the question of whether a similar design (single character tokens with Hyena) can be applied to natural language, and if these findings can be transferred.
### Clarity concerns
- **GenomicBenchmarks CNN baseline details:** We will include more details of the CNN baseline model in appendix section A.2.1. As described by the original authors, the CNN uses an embedding layer and 3 conv layers with 16, 8, and 4 filters, with batch norm and max pooling after each conv layer, followed by 2 dense layers. It is trained for 10 epochs with batch size 64. The model sizes range from 120k to 520k parameters, depending on the sequence length chosen.
- **Hyperparameters:** We indeed use different hyperparameters between HyenaDNA and AttnDNA, and will clarify this in supplement section A.2.1.
- **Few-shot adaptation vs in-context learning:** We will make the distinction more clear with citations.
- **Nucleotide Transformer Table 4.1 results:** Thank you for pointing out the discrepancy. The Nucleotide Transformer paper has 2 sets of results: one in Figure 2 of the main paper, and one in Supplementary Table 6. We originally used Figure 2 from the main paper, but after speaking with the authors, Supplementary Table 6 indeed fits our procedure more closely by using the best test set results. We will update the values in our comparison in the manuscript; this is already reflected in the common response in Table R2. Notably, this does not change the per-dataset ranking of SotA performance for HyenaDNA.
### Questions
**1. Autoregressive vs bidirectional:** HyenaDNA was pretrained autoregressively (causal) because the initial results showed better downstream performance over a bidirectional Hyena (see ablation in our common response) that we experimented with. The autoregressive training also offered additional appealing properties:
- The Hyena model is naturally flexible to variable length sizes over BERT-style models that need to be explicitly fed variable lengths during training, since the causal model “sees” different lengths incrementally.
- Additionally, we wanted to explore the use of in-context (ICL) learning in genomics, as we believe ICL has driven a lot of innovation in natural language. An autoregressive model is more amenable to current ICL methods that leverage next token prediction for class prediction, for example.
**2. DNA perplexity**: We too were surprised by the relatively high perplexity for the given vocabulary size. We think this is a challenge inherent to the domain of genomics, and provide a few relevant data points:
- Biologically, genomes carry “junk” DNA that is difficult to decipher and/or not informative. One rationale for why this occurs can be viewed through an evolutionary lens: as mutations accumulate over time, a mutation is less deleterious if it occurs in “junk” DNA, as opposed to informative DNA [Zhang et al., 2012; Ohno, 1972]. Knowing whether a sequence is “junk” or informative is still an open question, making genomics overall particularly challenging.
- [Rajarajeswari and Apparao, 2011] presented DNA compression algorithms that sought to reduce bits per character (BPC, convertible to perplexity), achieving results in a similar range: an equivalent perplexity of 1.58 vs. our 1.54 (lower is better).
- [Benegas et al., 2022] trained a genomic plant Transformer (not FM) using single nucleotides, and observed similar perplexity.
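The BPC-to-perplexity conversion used in the comparison above is, for base-2 bits per character, simply perplexity = 2^BPC; a one-line sketch (the 0.66 BPC value below is an illustrative input, not a reported figure):

```python
import math

def bpc_to_perplexity(bpc):
    """Perplexity corresponding to a cost of `bpc` bits per character."""
    return 2.0 ** bpc

def perplexity_to_bpc(ppl):
    """Inverse conversion: bits per character from perplexity."""
    return math.log2(ppl)

print(round(bpc_to_perplexity(0.66), 2))  # 1.58 -- roughly the compression figure above
print(round(perplexity_to_bpc(1.54), 3))  # 0.623 -- our perplexity expressed in BPC
```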
**3. Pretraining vs. scratch:** Please see the common response for ablations on pretraining vs. finetuning.
**4. K-mer tokenization:** In the ablation presented in the common response in Table R1, we observed consistent (and significant) degradation in performance on the GenomicBenchmarks when using 6-mer tokenization on HyenaDNA and training from scratch, from 2 to 10 accuracy points.
**Limitations**
We will address the potential for malicious uses of HyenaDNA in the camera-ready.
---
Rebuttal Comment 1.1:
Title: Continuation of rebuttal response
Comment: ### Continuation of question responses
**4. K-mer tokenization:** We thank the reviewer for this suggestion. In the ablation presented in the common response in Table R1, we observed consistent (and significant) degradation in performance on the GenomicBenchmarks when using 6-mer tokenization on HyenaDNA and training from scratch, from 2 to 10 accuracy points.
**Typos**
We thank the reviewer for noting the typos, and we will make the necessary changes.
**Limitations**
We will address the potential for malicious uses of HyenaDNA in the camera-ready. We believe it is an important conversation to have for any potentially powerful and widespread technology.
### Citations
Li, Conglong, Minjia Zhang, and Yuxiong He. "The stability-efficiency dilemma: Investigating sequence length warmup for training GPT models." Advances in Neural Information Processing Systems 35 (2022): 26736-26750.
Press, Ofir, Noah A. Smith, and Mike Lewis. "Shortformer: Better language modeling using shorter inputs." arXiv preprint arXiv:2012.15832 (2020).
Benegas, G., S. S. Batra, and Y. S. Song. "DNA language models are powerful zero-shot predictors of non-coding variant effects." (2022).
Rajarajeswari, Pothuraju, and Allam Apparao. "DNABIT compress–genome compression algorithm." Bioinformation 5.8 (2011): 350.
Zhang, Zhe, et al. "Analyzing effects of naturally occurring missense mutations." Computational and mathematical methods in medicine 2012 (2012).
Ohno, Susumu. "So much" junk" DNA in our genome. In" Evolution of Genetic Systems"." Brookhaven Symposium in Biology. Vol. 23. 1972.
Tay, Yi, et al. "Charformer: Fast character transformers via gradient-based subword tokenization." arXiv preprint arXiv:2106.12672 (2021).
Kalchbrenner, Nal, et al. "Neural machine translation in linear time." arXiv preprint arXiv:1610.10099 (2016).
Yu, Lili, et al. "Megabyte: Predicting million-byte sequences with multiscale transformers." arXiv preprint arXiv:2305.07185 (2023).
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and their comments! We wanted to share a friendly reminder that the discussion period is about to end tomorrow morning at 1PM EDT. We hope the reviewer is able to take into account our response, including additional ablation experiments and clarifications provided, into the final evaluation of the work. If there is no update, we certainly respect the reviewer’s decision. Thank you! | null | null | null | null | null | null |
Uncovering Meanings of Embeddings via Partial Orthogonality | Accept (poster) | Summary: This paper aims to uncover the semantic meaning of embedding vectors
within a given space. The basic idea is to determine
a generalized Markov boundary by computing the cosine similarity of the orthogonally projected vectors within a subspace. The top K
candidates are then selected. Furthermore, the authors provide a theoretical
analysis of the concept of Independence preserving embedding in section
five.
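A minimal sketch of the greedy procedure this summary describes (my own reconstruction in plain Python, not the authors’ implementation; details such as using the absolute cosine similarity and tie-breaking are assumptions):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def project_out(v, orthonormal_basis):
    """Remove from v its components along an orthonormal basis, i.e. project
    v onto the orthogonal complement of the span."""
    r = list(v)
    for b in orthonormal_basis:
        c = dot(r, b)
        r = [x - c * y for x, y in zip(r, b)]
    return r

def orthonormalize(vectors):
    """Modified Gram-Schmidt, dropping (near-)dependent vectors."""
    basis = []
    for v in vectors:
        r = project_out(v, basis)
        n = norm(r)
        if n > 1e-12:
            basis.append([x / n for x in r])
    return basis

def greedy_markov_boundary(target, candidates, k):
    """Greedily pick k candidate indices: at each step, project the target
    and remaining candidates onto the orthogonal complement of the picked
    set, then take the candidate with the highest residual |cosine|."""
    picked = []
    for _ in range(k):
        basis = orthonormalize([candidates[i] for i in picked])
        vt = project_out(target, basis)
        best, best_sim = None, -1.0
        for i, c in enumerate(candidates):
            if i in picked:
                continue
            vc = project_out(c, basis)
            if norm(vt) < 1e-12 or norm(vc) < 1e-12:
                continue
            sim = abs(dot(vt, vc)) / (norm(vt) * norm(vc))
            if sim > best_sim:
                best, best_sim = i, sim
        if best is None:
            break
        picked.append(best)
    return picked

# Toy 3D example: the first candidate nearly spans the target, and the
# second best explains the residual after projecting the first one out.
print(greedy_markov_boundary([1.0, 0.0, 0.0],
                             [[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
                             2))  # [0, 1]
```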
Strengths: - the connection between graphical model theory and explanation of embeddings seems novel (although similar ideas are present in other fields, such as the study of knowledge graph embeddings)
- conceptually, everything is well defined and formally presented
- The research question is clear and meaningful.
- The structure of this paper is well-organized. In particular, in section
two, the authors explain the necessary background information clearly
Weaknesses: - experiments and results analysis is rather sparse, the paper has more focus on the theory and definitions
- experimental setup can be criticized (see comments below)
- related work with respect to knowledge graph embeddings could be more thorough. Several studies have focused on using projection or rotation techniques for KG embedding to predict the missing relationship between two entities [THW, SLH, SW]. Can these KG methods be adapted to uncover meaningful word embeddings?
[THW] Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He, and Bowen Zhou. Orthogonal relation transforms with graph context modeling for knowledge graph embedding. arXiv preprint arXiv:1911.04910, 2019.
[SLH] Tengwei Song, Jie Luo, and Lei Huang. Rot-Pro: Modeling transitivity by projection in knowledge graph embedding. Advances in Neural Information Processing Systems, 34:24695–24706, 2021.
[SW] Baoxu Shi and Tim Weninger. ProjE: Embedding projection for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the critical motivation for using the image-language pretrained model CLIP as the embedding? It would be better to compare it with other embedding techniques such as GloVe, word2vec, and BERT. And how does the dimensionality of the embedding vectors affect the performance of the proposed algorithm?
- This paper only provides five examples in Table 2 regarding meaningful semantic evaluation. Would it be possible to compute precise numerical results using semantic evaluation metrics to show the advantage after orthogonal projection?
- In section five, the author introduces the concept of IPE, which might be more straightforward to understand through toy examples or visual figures.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions! We're glad you found the ideas novel and the paper well written.
**Experiments**
We completely agree with the reviewer that the paper is more focused on theory and definitions. The reason we consider the pre-trained image-language model CLIP is that we found it tends to encode more interpretable semantic meanings. For example, for the target word “eggplant”, the word “purple” appears more meaningful in the CLIP embedding than in other text-only embeddings.
**Connection to knowledge graph**
Thanks for suggesting these related works. We’ll include them in the updated version. On the other hand, although the “independence model” is a concept closely related to the graphical model, its abstract notion has applicability beyond graphs. In fact, many independence models cannot be embedded in graphs.
**“This paper only provides five examples in Table 2 regarding meaningful semantic evaluation. Would it be possible to compute the precise numerical results using semantic evaluation metrics to show the advantage after orthogonality projection?”**
Thanks for the suggestion! We have run additional experiments by using Wu-Palmer similarities as the numerical estimates. Please see the global review for more details.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, it will be helpful in the discussion about the paper.
---
Rebuttal 2:
Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score. | Summary: This paper investigates the relationship between the semantics and linear algebraic structure of token embeddings. It proposes utilizing partial orthogonality to define the "Markov boundary" of token embeddings. Given that token embeddings have limited dimensions and the Markov boundary can consist of numerous embeddings, the authors suggest relaxing the definition of partial orthogonality. They subsequently introduce an approximate algorithm to identify this boundary, which iteratively locates embeddings with high cosine similarity to the target vector after projecting onto orthogonal complement subspaces. To justify the effectiveness of vector space, the authors present the concept of independence preserving embeddings, which serves as the foundation for studying linear algebraic independence in embedding vectors.
Strengths: 1. The paper formally discussed the relationship between meanings of tokens and their algebraic independence. It generalizes the idea of Markov boundary and relaxes its definition so it can be practically applied to word embeddings.
2. To validate the use of linear algebraic independence relationships between embeddings for studying their semantics, the author introduces the concept of independence preserving embedding. This concept demonstrates that embeddings maintain the independence structure of distribution, making the paper comprehensive and self-contained.
3. The authors conduct experiments using CLIP embeddings and demonstrate that their algorithm effectively identifies intriguing patterns between word embeddings, indicating that these embeddings possess semantic meanings.
Weaknesses: 1. Although the authors aim to study the independence relationship between word embeddings, they do not provide an evaluation metric to substantiate the effectiveness of the proposed method. The experimental results are presented as case studies with a limited number of words as examples, which may not be compelling for readers. A more robust experimental section would be beneficial.
2. The experiments conducted in the paper focus solely on the CLIP embedding model. While it is understandable that CLIP, being trained with visual information, may encode semantics that are more meaningful to humans, it would be interesting to explore whether the proposed method can be applied to other models that rely exclusively on text-based training.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The embeddings studied in this paper is static embedding. I wonder what if the embeddings are contextualized? For example, what would happen if the embeddings are processed by Transformers?
2. Throughout the section 3, the definitions of d and n are confusing. It says if d is smaller or equal to n, vectors are linearly independent. Should it be the opposite? And the same things happens to line 132.
3. Typo: In line 96, period should be replaced by comma.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and suggestions!
**“Although the authors aim to study the independence relationship between word embeddings, they do not provide an evaluation metric to substantiate the effectiveness of the proposed method.”**
The experiment section is divided into two parts. The first part tests the effectiveness of the proposed method in finding subspaces that can reduce residual correlations. The second part is the qualitative study, which evaluates whether the Markov boundary is semantically meaningful. In the paper, we measure semantic meaningfulness via human intuition. How to quantify this measure is an open question, and we feel it deserves its own paper. Nevertheless, we include some additional experiments that have numerical evaluations of our method. Please see the top review for more details.
**“The experiments conducted in the paper focus solely on the CLIP embedding model”**
Thanks for the suggestion! And you’re right. We found that the CLIP text embedding tends to encode more interpretable semantic meanings. For instance, for the word “eggplant”, the word “purple” appears more meaningful in the CLIP embedding than in other text-only embeddings.
**“The embeddings studied in this paper are static embedding. I wonder what if the embeddings are contextualized? For example, what would happen if the embeddings are processed by Transformers?”**
Thanks for the suggestion! It would be an exciting future direction to extend this work to contextualized embeddings. On the other hand, in this paper, our experiments are done on nouns whose meanings are less ambiguous like “eggplant” and “zebra” to demonstrate the effectiveness of Markov boundaries.
**“The definitions of d and n are confusing.”**
Thanks for your careful attention and sorry for the confusion. It is a typo. We’ll correct it in the updated version.
---
Rebuttal 2:
Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score.
---
Rebuttal 3:
Comment: Thanks for the response and the new experiments. It addresses some of my questions. However, I believe a high-quality paper should be tested systematically on a large corpus with scientific metrics, instead of the case study. I would argue to keep my current score.
---
Rebuttal Comment 3.1:
Comment: Thank you for your valuable feedback! We completely agree that testing the semantic relevance of learned Markov boundaries systematically with better metrics would be ideal. However, the dilemma we have is that, as far as we know, there are no good metrics in the literature. And we believe that large-scale evaluation with convincing metrics is an open problem and deserves its own work. Although there are existing metrics on semantic similarity that are based on WordNet, like the Wu-Palmer similarity, they do not fit our experiments on Markov boundaries. For example, the Wu-Palmer similarity score between “**eggplant**” and “**purple**” is only 0.167 but the score between “**eggplant**” and “**lemon**” is 0.667. However, because we want to construct the Markov boundary to be a minimal description set of the target word “**eggplant**”, one would expect to include “**purple**” instead of “**lemon**” despite what the WP score suggests. To see this, we asked ChatGPT to come up with a short description of the word “eggplant” and the answer is “a **purple** or dark-colored vegetable with a smooth skin, often used in cooking and known for its mild flavor.” This also fits human intuition.
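For concreteness, a minimal sketch of how Wu-Palmer similarity behaves on a hand-made toy taxonomy (the tree below is invented for illustration and merely stands in for WordNet, so its exact scores differ from the ones quoted above, though the direction of the mismatch is the same: taxonomic siblings score high, descriptive attribute words score low):

```python
# Toy Wu-Palmer similarity on a hand-made is-a taxonomy (invented for
# illustration; NOT WordNet, so the scores differ from the real WP values).
PARENT = {
    "eggplant": "vegetable", "vegetable": "food",
    "lemon": "fruit",        "fruit": "food",
    "food": "entity",
    "purple": "color", "color": "attribute", "attribute": "entity",
    "entity": None,
}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def depth(node):
    return len(path_to_root(node))           # root "entity" has depth 1

def lcs(a, b):
    # least common subsumer: first ancestor of b that is also an ancestor of a
    ancestors = set(path_to_root(a))
    return next(n for n in path_to_root(b) if n in ancestors)

def wu_palmer(a, b):
    # WP(a, b) = 2 * depth(LCS(a, b)) / (depth(a) + depth(b))
    return 2 * depth(lcs(a, b)) / (depth(a) + depth(b))

print(wu_palmer("eggplant", "lemon"))   # 0.5 : siblings under "food"
print(wu_palmer("eggplant", "purple"))  # 0.25: only the root in common
```

WP only sees is-a paths in the taxonomy, which is why it favors “lemon” over “purple” as a neighbor of “eggplant” even though “purple” is the better descriptor.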
The previous example showcases the difficulty of coming up with good semantic metrics. Nevertheless, we humbly argue that the experiments in the paper, along with the newly added ones in the rebuttal, support the claims made in the paper. In particular, the central hypothesis of the paper is that partial orthogonality of embeddings, and its byproduct the Markov boundary, carry semantic information. To verify this claim, we provide both _quantitative_ and _qualitative_ experiments. For qualitative experiments, we appeal to human intuition by comparing the principal angle between learned Markov boundaries of the target embedding and linear subspaces spanned by relevant word embeddings of the target. For quantitative experiments, _unlike qualitative case studies_, we come up with a numerical estimate that calculates the average principal angle between learned Markov boundaries and embeddings of target descriptions from a dataset we created that consists of target words and their corresponding short descriptions.
Although we do not claim the immediate practical impacts on a large scale, we believe that studying the Markov boundary of embeddings holds promise for understanding the inner workings of embeddings. This is one of our main scientific contributions and our empirical evaluations are designed to ascertain the utility of this claim. Because coming up with a good semantic metric is an open problem, we are of the opinion that it shouldn’t limit the contributions of our paper which is more focused on theories and definitions. | Summary: The central question (quoting the paper) is "How to make sense of an embedding vector in relation to other embedding vectors?" For that
purpose, the authors propose to generalize the idea of the Markov boundary to embeddings, with a relaxed adaptation of this notion to
cope with word embeddings peculiarity. The paper then introduces an algorithm to find (approximately) what is called the generalized Markov boundary for a given embedding. Empirical evaluations are carried out on CLIP.
Strengths: The real scientific goal should be first clarified before one can assess the strength of this proposition.
Weaknesses: Maybe I completely misunderstand this paper, but I cannot tell what its scientific goal is. For me, everything in the paper is confused: the notion of Markov boundary for vectors in the context of contextualized embeddings (like CLIP), and why so many formal definitions end, in the end, in a rough relaxation.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: I have no question.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review!
**“The scientific goal”**
We apologize for the confusion. In this paper, we hypothesize that semantic meanings have an independence structure. We use the abstract “independence model” to formalize this idea. On the other hand, embeddings, which are vector representations of words, should also have a similar independence structure. A natural candidate “independence model” for vector space is partial orthogonalities.
_Therefore, the main conjecture of this paper is that partial orthogonalities of embedding spaces encode semantic meanings._ To test this theory, we first study the theoretical aspect of the problem. In particular, we generalize the notion of the Markov boundary to embeddings. The relaxation is necessary because the intersection property of “partial orthogonality” rarely holds for embeddings. And then we verify the theory empirically in our experiment section. Specifically, we examine whether the Markov boundary effectively conveys "semantic meanings."
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Dear authors,
Thank you for your response here. You have clarified the scientific goal succinctly and I assure you we will take this answer into account in the upcoming discussion and decisions.
best
the ac
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply! We really appreciate your commitment to the quality of the review process.
---
Rebuttal 2:
Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score. | Summary: This paper presents some theory and a method for reasoning about information gain in embedding space via a relaxation of conditional independence, as well as some theory on independence preserving embeddings.
As information gain is inherently linked to independence, the paper focuses on defining a generalization of the Markov boundary that is meaningful in embedding space. The Markov boundary is a set of embeddings that "separate" the target embedding from all other test embeddings not in the boundary. Concretely, this means the cosine similarity between the projections of the target and test embeddings onto the orthogonal complement of the generalized Markov boundary should be 0. The proposed generalization relaxes elementwise orthogonality to distributional orthogonality, where the cosine similarities between the projections are allowed to cancel out, rather than all be 0. This criterion is motivated by practical concerns: embeddings are low-dimensional representations in which exactly orthogonal residuals are rare.
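The residual-orthogonality criterion described above can be sketched in a few lines of NumPy (a hand-rolled toy, not the paper's code): project the target and a test embedding onto the orthogonal complement of the candidate boundary's span, then take the cosine similarity of the residuals.

```python
import numpy as np

def residual_cosine(target, test, boundary):
    """Cosine similarity between the residuals of `target` and `test`
    after projecting out span(boundary).  `boundary` is a (d, k) matrix
    whose columns span the candidate Markov boundary."""
    Q, _ = np.linalg.qr(boundary)                 # orthonormal basis of span(boundary)
    P_perp = np.eye(boundary.shape[0]) - Q @ Q.T  # projector onto the orthogonal complement
    r_t, r_s = P_perp @ target, P_perp @ test
    return float(r_t @ r_s / (np.linalg.norm(r_t) * np.linalg.norm(r_s) + 1e-12))

# Tiny worked example in R^4: the boundary spans the first axis only.
d = 4
e = np.eye(d)
B = e[:, [0]]
target = e[:, 1] + 3 * e[:, 0]   # residual after projection: e2
indep  = e[:, 2] + 5 * e[:, 0]   # residual: e3 -> orthogonal to target's residual
dep    = e[:, 1] - 2 * e[:, 0]   # residual: e2 -> parallel to target's residual

print(residual_cosine(target, indep, B))  # 0.0: "separated" by the boundary
print(residual_cosine(target, dep, B))    # ~1.0: information about the target remains
```

A residual cosine of zero corresponds to the "separation" the boundary is supposed to achieve; the relaxed, distributional version requires this only on average over test embeddings.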
Finding a generalized Markov boundary for a single target embedding is then accomplished by sampling a number of random sets of embeddings, finding the top $K$ embeddings that contain information about the target given the random embedding sets, then constructing the boundary from within those top $K$ embeddings.
Separately, the paper addresses another question of how to embed a set of independence assumptions in a lower dimensional space. A theorem is presented that shows that one can preserve independence assumptions as residual orthogonality to some degree, depending on the dimension.
Experiments are presented, using CLIP embeddings, that show that generalized Markov boundaries can indeed be found, that projecting onto the orthogonal complement of the span of random embeddings is meaningful, and that the discovered generalized Markov boundaries are more aligned with the span of embeddings of related words than unrelated ones.
Strengths: 1. The first research question of reasoning about conditional independence and information gain in embedding space is appealing. Recent works in resolving ambiguity through dialogue, such as for image retrieval through 20 questions, reason over the space of individual images. However, if there are many images, this is not scalable. Intuitively, many of those images are likely to be very similar, motivating reasoning in the much lower dimensional CLIP embedding space.
2. The proposed definition of generalized Markov boundary and method for finding boundaries are reasonable.
3. The second research question about independence preserving embeddings is also worth studying for the same reason as above: potential applications would be very interesting.
Weaknesses: 1. The method and experiments for the generalized Markov boundary only involve a single target embedding.
1. The experimental evaluation only evaluates token embeddings, whereas the text encoder in CLIP can encode sequences. An experiment involving reasoning over sequence embeddings would make the paper much stronger, especially if the experiment involved a realistic task.
1. No experiments are performed for independence preserving embeddings.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Primarily, I would like to see experiments verifying Theorem 13 on dimensionality reduction in independence preserving embeddings.
1. Is the only difference between kernel mean embeddings and independence preserving embeddings the choice of a kernel with finite dimensional feature map?
1. What is the relationship to work on information theory with kernel methods [2]?
1. A more realistic application that utilizes the method developed in the paper would greatly strengthen the paper. One possible application is an image retrieval game such as 20 questions [1], where the goal is to retrieve the correct image out of a set by asking questions.
[1] White, Julia, et al. "Open-domain clarification question generation without question examples." Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021.
[2] Francis Bach. Information Theory with Kernel Methods. 2022. ⟨hal-03577992v2⟩
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I did not find a discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **“The method and experiments for the generalized Markov boundary only involve a single target embedding.”**
The goal of this paper is to find good descriptions/explanations of given embeddings with other embedding vectors. We feel it does not make much sense to try to describe/explain multiple word embeddings at the same time. However, one could consider embeddings of phrases and sentences.
**“The experimental evaluation only evaluates token embeddings, whereas the text encoder in CLIP can encode sequences.”**
Thanks for the suggestion! Because our objective is to uncover the semantic meanings of words, finding the semantic meanings of sentences is a much harder task and very difficult to define, although one could use contextualized embeddings to deal with polysemantic words. In this paper, to avoid ambiguity, we choose to study nouns whose meanings are less ambiguous like “eggplant” and “zebra”.
**“No experiments are performed for independence preserving embeddings.”**
The study of IPE is mainly used to answer the theoretical question of whether such embeddings are possible. Its practical application is an interesting future research direction. Nevertheless, we include some additional experiments on IPE. Please see the top review for more details.
**“Relation with kernel mean embedding and information theory with kernel method”**
That’s a good question! We will try to explain this better in the updated version. In general, kernel embeddings are infinite-dimensional embeddings that are more concerned with the moments of the distributions they are trying to embed. For kernel mean embedding, it’s the first moment, and for information theory with kernel methods, it’s the second moment. On the contrary, our independence-preserving embedding is a finite-dimensional embedding. Such a finite-dimensional embedding can be directly used in practice. The reason we can produce a finite-dimensional embedding is that we are only focused on the _structure_ of distributions, i.e., conditional independence statements, and not other statistical properties of the distributions.
**“A more realistic application that utilizes the method developed in the paper would greatly strengthen the paper.”**
Thanks for the suggestion! Finding an interesting application for our results is definitely an interesting next direction.
---
Rebuttal 2:
Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score.
---
Rebuttal Comment 2.1:
Comment: In response to the rebuttal, and in hindsight, my original score is too harsh and will be raised from reject to weak accept with lower confidence. The paper supports its 3 contributions of generalized Markov embeddings, empirical validation, and independence preserving embeddings.
My initial review was caused by a mismatch between the potential impact of the research question in this paper and the experimental validation presented. The paper presents theory towards embedding-based reasoning, and does not make a claim about large-scale empirical impact. The CLIP experiments in the paper are a small-scale study that validates the paper's claim. Further, larger-scale evaluation of the method is an opportunity for future work, and will not be held against the current paper.
---
Reply to Comment 2.1.1:
Comment: Thanks! We sincerely appreciate your time and effort to understand our work, and for updating your score accordingly. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful reviews and for recognizing the ideas presented in the paper as novel and appealing. In addition, we ran some quick experiments to additionally verify the effectiveness of our method and theory as suggested by reviewers. Tables and figures are included in the pdf file.
**Random projections**
In the paper, we show that top correlated words after random projections are more semantically meaningful by presenting some examples. To provide a numerical estimate as suggested by Reviewer zRa2, we adopt the Wu-Palmer similarity metric. We run the experiments in two settings. In the first setting, 1000 target words are randomly selected from the Brown corpus. As those words might contain rare and obscure ones, we run experiments in the second setting with 300 common nouns as target words. These words are selected by ChatGPT. Table 1 shows that the average WP similarities of top correlated words with target words are higher after random projections.
**Markov boundary**
In the paper, we demonstrate the effectiveness of our Markov boundary learning algorithm by measuring the smallest principal angle between Markov boundaries and random subspaces as well as some subspaces spanned by words relevant to target words. We present some selected examples in the paper. The additional experiments here provide numerical estimates. We first ask ChatGPT to give us a list of 50 common nouns, each with a short description. Next, we convert the short descriptions into embeddings using the CLIP text encoder. Finally, we compare the principal angle between description embeddings and subspaces spanned by Markov boundaries as well as randomly selected words. Figure 1 shows the learned Markov boundaries consistently have smaller angles with description embeddings.
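The principal-angle comparison used here can be sketched as follows (a toy NumPy version with random data standing in for CLIP embeddings, not the authors' code): the smallest principal angle between a vector and a subspace is the arccosine of the norm of the normalized vector's projection onto an orthonormal basis of that subspace.

```python
import numpy as np

def smallest_principal_angle(v, subspace):
    """Smallest principal angle (radians) between vector v and the
    column span of `subspace` (a d x k matrix)."""
    Q, _ = np.linalg.qr(subspace)
    v = v / np.linalg.norm(v)
    c = np.linalg.norm(Q.T @ v)            # cosine of the smallest angle
    return float(np.arccos(np.clip(c, 0.0, 1.0)))

rng = np.random.default_rng(0)
d, k = 512, 10                              # CLIP-like dimension, 10-word subspace
boundary = rng.standard_normal((d, k))
inside = boundary @ rng.standard_normal(k)  # lies in the span -> angle ~ 0
random_vec = rng.standard_normal(d)         # nearly orthogonal in high dimension

print(smallest_principal_angle(inside, boundary))      # ~ 0
print(smallest_principal_angle(random_vec, boundary))  # close to pi/2
```

A smaller angle between a description embedding and the boundary's span indicates the boundary captures more of the description's direction, which is the quantity being averaged in the experiment above.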
**IPE**
In the paper, we show the theories of independence-preserving embeddings. Here, we present additional empirical evidence for the theories. We randomly generate graphs using the Erdős–Rényi model with p = 0.01 and 1000 nodes. We first apply the IPE construction method to get the embeddings and then apply random projections. Figure 2 shows that without projection (dimension = 1000), the average absolute cosine similarities of residuals after projecting onto respective Markov boundaries are nearly zero. As the random projection dimension decreases, the average absolute cosine similarities increase slowly.
Pdf: /pdf/081e58a730ad42ad3a4695450165951fbb385ed5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds | Accept (poster) | Summary: This paper proposes a system of model compression methods for text-to-image models such as Stable Diffusion. The authors propose to apply robust training (i.e., stochastic depth training) to train a network robust to architecture changes and propose CFG-aware knowledge distillation during the compression of the UNet. Besides, the authors also give an in-depth study of the computation cost of different layers.
Strengths: 1. This paper focuses on a hot and interesting topic: compression of large generative models.
2. This paper has good evaluation and good experimental results.
3. Good writing.
4. I like the video in the supplementary material.
Weaknesses: 1. The main model compression methods are borrowed from previous methods. Robust training is essentially the same as stochastic depth training. The evaluation methods for architecture search are also widely used in NAS. In the knowledge distillation part, simply adding a knowledge distillation loss with CFG guidance does not show enough novelty.
2. About the experimental results. There have been many KD methods for Stable Diffusion (e.g., On Distillation of Guided Diffusion Models), and not enough comparison is provided. Besides, although the appendix gives some qualitative results, it would still be better to provide more.
3. One of the main obstacles in compression of large-scale generative models is the very large training cost. Although the authors provide their training settings in the manuscript, it would be better to provide more information about the training costs of robust training and the architecture optimization.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for the positive feedback and valuable comments. We appreciate that the reviewer acknowledges our paper studies a hot and interesting topic, and includes good evaluation and experimental results. We are glad to know the reviewer likes the video in the supplementary material. We address the raised concerns as follows.**
***
**Q1. About novelty.**
We humbly think our proposed architectural evolving and CFG distillation are both novel. First, we agree that the architectural evolving is inspired by the elastic depth, which we have cited on Line 117. However, to the best of our knowledge, we are the first to study how to efficiently optimize the architecture of large-scale text-to-image diffusion models, which has rarely been studied before. The process requires careful design and evaluation metrics. Second, we comprehensively study the effects of step distillation and propose a new CFG distillation that can better trade off the FID/CLIP score than existing works. Therefore, we think our proposed approaches are valuable and novel, which is also agreed by Reviewer 5sgC, Reviewer vQqa, and Reviewer 3k2m.
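For intuition, the elastic-depth idea that the robust training builds on can be sketched in a few lines of pure Python (a hypothetical stand-in, not the paper's implementation): each block is executed with some keep probability during training, so the network tolerates missing blocks, and a deterministic mask can then evaluate a candidate architecture without retraining.

```python
import random

class Block:
    """Stand-in for one residual/attention block of the UNet."""
    def __call__(self, x):
        return x + 1   # placeholder for the block's residual update

def forward(blocks, x, keep_prob=0.8, mask=None, rng=random):
    """Stochastic-depth style forward pass.

    During robust training each block is skipped at random (identity
    shortcut); during architecture evaluation a deterministic `mask`
    removes blocks instead."""
    for i, blk in enumerate(blocks):
        keep = mask[i] if mask is not None else (rng.random() < keep_prob)
        if keep:
            x = blk(x)
    return x

blocks = [Block() for _ in range(5)]
full = forward(blocks, 0, mask=[True] * 5)                           # all 5 blocks run
pruned = forward(blocks, 0, mask=[True, False, True, False, True])   # 3 blocks run
print(full, pruned)
```

The architecture evolving then scores such masks (e.g., by the CLIP-score change when a block is removed) instead of retraining each candidate from scratch.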
***
**Q2. About comparisons to KD methods for stable diffusion, and more qualitative results.**
We carefully reviewed the related literature and there are two works [a,b] that are highly related to our step distillation. The two works use progressive step distillation [a,b], and [b] proposes the w-conditioned model. In the paper, we have comprehensively compared our results with previous works [a,b]. For example, in Fig. 6(a), we compare our direct distillation with progressive distillation. In Fig. 6(b), we compare our results with the w-conditioning method proposed in [b]. The extensive comparison demonstrates that the strategy for our step distillation is better than existing works. Additionally, in Tab. 2 of the main paper, we further compare our efficient UNet with the model in [b], which has a similar architecture to SD-v1.5, and demonstrate that our model can achieve better FID under the same DDIM scheduler. Additionally, we notice that in Fig. 10 of [b], the distilled model is capped below a 0.30 CLIP score, showing degraded text-image alignment, while our 8-step model overcomes this issue and achieves even better CLIP than SD-v1.5 (approaching 0.31).
For the qualitative results, we provide images in Fig. 1 of the main paper, Fig. 7 and Fig. 10 in the supplementary file, and in the demo video. We further provide more generated images in the author response PDF (Fig. 3).
***
**Q3. About training cost.**
Thanks for the question about training costs. We provide more details in the following table, including the cost for robust training, efficient UNet fine-tuning, step distillation, and decoder compression. Overall, our model training only requires 8.6% of the training samples in comparison with training SD-v1.5 from scratch. Note here we report training iterations and total training samples because the GPU-hour estimation is dependent on GPU cluster specs, especially inter-node bandwidth, and thus inaccurate. Based on public information (by Huggingface), SD-v1.5 is trained for 30-60 days on 32 nodes, while replicating our entire workflow takes about 4 days on our 32-node cluster.
>| Stage | A100 GPUs | Batch size | 256x256 iters (K) | 512x512 iters (K) | Total training samples (M) |
>|-------------------------|:---------:|:----------:|:-----------------:|:-----------------:|:--------------------------:|
>| Robust training | 128 | 4 | - | 25 | 12.8 |
>| Efficient UNet finetuning | 256 | 8 | - | 55 | 112.6 |
>| Step distillation | 128 | 4 | - | 10 | 5.1 |
>| Decoder compression | 96 | 6 | - | 10 | 5.7 |
>| SD-v1.5 (from scratch) | 256 | 4 | 237 | 1304 | 1578.0 |
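The "Total training samples" column in the table above follows directly from the other columns (samples = #GPUs × per-GPU batch size × iterations); a quick arithmetic check:

```python
# Sanity check of the training-cost table:
# samples = (#GPUs) * (per-GPU batch size) * (iterations)
stages = {
    "Robust training":           (128, 4, 25_000),
    "Efficient UNet finetuning": (256, 8, 55_000),
    "Step distillation":         (128, 4, 10_000),
    "Decoder compression":       ( 96, 6, 10_000),
}
total = sum(g * b * it for g, b, it in stages.values())
sd15 = 256 * 4 * (237_000 + 1_304_000)   # 256x256 + 512x512 iterations

print(total / 1e6)                       # 136.32 (M samples)
print(sd15 / 1e6)                        # 1577.984 (M samples, ~1578.0M)
print(round(100 * total / sd15, 1))      # 8.6 (%), matching the claim above
```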
***
References:
[a] Salimans, Tim, et al., Progressive distillation for fast sampling of diffusion models. ICLR, 2022.
[b] Meng, Chenlin, et al., On distillation of guided diffusion models. CVPR, 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer oXj2,
Thank you for your valuable feedback and positive rating.
We provide additional explanations to help clarify our work. As the deadline for Author-Reviewer discussion is approaching, we would like to use this opportunity to see if our responses are sufficient and if any concern remains. Thanks again for your time.
Best,
Authors | Summary: This work develops a lightweight Stable Diffusion model with architecture compression and step reduction. For architectural compression, an efficient UNet architecture is obtained by evaluating the importance of individual residual and attention blocks, and an efficient image decoder is obtained via channel reduction and conventional knowledge distillation. For step reduction, a distillation that applies classifier-free guidance (CFG) during the training phase is introduced. Unlike previous methods that apply CFG during inference, the proposed approach achieves a better tradeoff between FID and CLIP score.
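The CFG-aware distillation described in this summary can be sketched schematically (a toy NumPy version built on the standard classifier-free guidance formula; the paper's actual loss also mixes in a vanilla distillation term with a weighting factor): teacher and student guided predictions are formed first, and the distillation loss is taken between them.

```python
import numpy as np

def cfg(eps_cond, eps_uncond, w):
    """Standard classifier-free guidance combination of noise predictions."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def cfg_aware_distill_loss(t_cond, t_uncond, s_cond, s_uncond, w):
    """Distill the *guided* teacher prediction into the guided student
    prediction, rather than the raw conditional one."""
    return float(np.mean((cfg(t_cond, t_uncond, w) - cfg(s_cond, s_uncond, w)) ** 2))

# Toy noise predictions (stand-ins for UNet outputs).
t_c, t_u = np.ones((2, 4)), np.zeros((2, 4))
print(cfg_aware_distill_loss(t_c, t_u, t_c, t_u, w=7.5))  # 0.0: student matches teacher
print(cfg_aware_distill_loss(t_c, t_u, t_u, t_u, w=7.5))  # 56.25: guided outputs differ
```

Training on the guided combination is what lets the distilled student reproduce the quality/text-alignment tradeoff that CFG normally provides at inference time.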
Strengths: * This work successfully compresses Stable Diffusion, one of the most famous foundation models, and enables its deployment on mobile devices with a two-second latency. I think this study can receive significant attention in both academia and industry.
* The results of the proposed architectural compression are impressive, although some questions regarding training computations and resources exist (described below).
* The improved step distillation is well-motivated and novel. I think this can be broadly applicable to diffusion-based large models beyond Stable Diffusion.
* I appreciate Figure 2 and Table 1, which effectively explain the compute cost of SD. In particular, Figure 2 is very informative and well illustrates architectural bottlenecks in parameters (inner stages) and latency (outer stages).
* The paper is very well-written and easy-to-follow. The terms and equations are properly described.
Weaknesses: * According to the implementation details, this work seems to use the following datasets and compute machines. The training cost feels VERY HUGE, in contrast to the authors’ initial claim at the introductory paragraph of Section 3 (i.e., pruning and architecture search require significant training compute, whereas the authors propose some methods to alleviate the issue). It seems like a training cost comparable to building the original SD model from scratch.
* [49] Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022.
* [38] Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022.
* [+] an internal dataset with high-resolution images to fine-tune our model for more pleasing visual quality.
* At least, 16 nodes x 8 GPUs per node = 128 A100 GPUs (We use 16 or 32 nodes for most of the training. Each node has 8 NVIDIA A100 GPUs with 40GB or 80GB memory)
* Would you clarify the number of training pairs AND the training hours used for each of (1) Robust Training in Section 3.1, (2) Image Decoder Compression in Section 3.2, and (3) Step Distillation in Section 4? I think the detailed description of training data/hours is quite important for readers and future research, especially under the popularity of large foundation models.
* Furthermore, are there any supporting materials or results to argue “From our empirical observation, the operator changes resulting from network pruning or searching lead to degraded synthesized images, asking for significant training costs to recover the performance.” at the introductory paragraph of Section 3? A recent, concurrent work* claims that it would be possible to obtain a small pruned UNet in SD with very small retraining cost.
* On Architectural Compression of Text-to-Image Diffusion Models, https://arxiv.org/abs/2305.15798
* Is there any overlap between the data used for architecture evolving (a smaller subset (2K images) of MS-COCO validation set [48] in line 130 at page 4) and the zero-shot MS-COCO set for final evaluation? If yes, it cannot be considered as zero-shot evaluation, because the development data for selecting proper blocks were already used and seen. Furthermore, I was wondering whether the type and quantity of data for architecture evolving would matter and affect the performance.
* Robust Training in Section 3.1 - Would you clarify whether random block dropping is applied to training from scratch (i.e., random initialization) or to retraining from the pretrained SD-v1.5?
* It would be better to describe the method details about compressing the image decoder. Could you describe the channel reduction and the type of synthetic data in detail? I was not able to find them although I have checked the supplementary material (e.g., Supple Section 2.2 VAE Decoder).
* It would be good to analyze the impact of gamma in Eqn (11), the loss weighting between the vanilla step distillation loss and the proposed CFG-aware distillation loss. How did the authors set this parameter? How did different gamma values affect the generation performance? I have checked Figure (6)-b to analyze the effect of CFG range of w in Eqn (10) and the CFG probability p of the loss mixing in Eqn (11), but the effect of gamma in Eqn (11) is not explained.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please check the above weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for the positive feedback and thoughtful comments. We appreciate that the reviewer acknowledges that our successfully compressed two-second text-to-image model can achieve significant attention in academia and industry, our step distillation is novel and can be broadly applied to models beyond Stable Diffusion, our results are informative and impressive, and our paper is well-written.**
***
**Q1. Training dataset and detailed training cost per each stage.**
Thanks for raising the question. In our training pipelines, we use 189M text-image pairs from Laion-5b with the shortest edge as 1024, and 194M text-image pairs from Coyo-700M with the shortest edge as 512. As mentioned in Lines 230-232, we report all the quantitative experiments in the paper by only using the above public datasets for fair comparisons with existing works. The data we used only accounts for 6.8% of the data in Laion-5b and Coyo-700M.
Some qualitative images, such as Fig. 1 in paper, are generated by using the model fine-tuned on additional dataset, as in Lines 232-233, which has 160M text-image pairs. Fine-tuning our model on the additional dataset is optional and achieves very similar quantitative results as public datasets, while sometimes generating more visually-pleasing images.
We provide detailed training costs in the following table, including the cost of robust training, efficient UNet fine-tuning, step distillation, and decoder compression. Overall, our model training only requires 8.6% of the training samples in comparison with training SD-v1.5 from scratch. Note here we report training iterations and total training samples because the GPU-hour estimation is highly dependent on GPU cluster specs, especially inter-node bandwidth, and thus inaccurate. Based on public information (Huggingface), SD-v1.5 is trained for 30-60 days on 32 nodes (8 GPUs per node), while replicating our entire workflow takes $\sim 4$ days on our 32-node cluster.
>| Stage | A100 GPUs | Batch size | 256x256 iters (K) | 512x512 iters (K) | Total training samples (M) |
>|-------------------------|:---------:|:----------:|:-----------------:|:-----------------:|:--------------------------:|
>| Robust training | 128 | 4 | - | 25 | 12.8 |
>| Efficient UNet finetuning | 256 | 8 | - | 55 | 112.6 |
>| Step distillation | 128 | 4 | - | 10 | 5.1 |
>| Decoder compression | 96 | 6 | - | 10 | 5.7 |
>| SD-v1.5 (from scratch) | 256 | 4 | 237 | 1304 | 1578.0 |
***
**Q2. Discussions on pruning cost and concurrent work.**
We tried to apply simple channel pruning on the UNet. Under similar training time as robust training and efficient UNet fine-tuning (more than 80K iterations), the model still struggles to recover the performance ($\sim 0.02$ drop in CLIP score). More advanced channel pruning methods are required.
Thanks for mentioning the concurrent work [a], which is an interesting and inspiring work! We notice three major differences between our work and [a] that might cause the differences in training cost. First, we do step distillation to reduce the inference time, while [a] does not. Second, we use the automatic approach for finding and adding blocks, while [a] uses a manually designed pipeline. The cost of manual labor is not clear. Third, and most importantly, we aim to achieve a similar or even better FID/CLIP score than the Stable Diffusion models, while [a] performs slightly worse than SD baseline (FID degrades by $2.71\sim 4.07$, CLIP degrades by $0.008\sim 0.03$). We will cite and discuss [a] in the revised paper.
***
**Q3. About the COCO 2K validation data for architecture evolving.**
There is no overlap between the 2K data for architecture validation and the data used for our evaluation on MS-COCO (6K or 30K).
As suggested, we show the results of using different data, i.e., a 2K subset from MS-COCO and a 2K subset from LAION, to validate actions in architecture evolving. As shown in the response PDF (Fig.2.a), using different data leads to very similar conclusions ($\Delta CLIP$) for each block, demonstrating that the block-importance estimation is stable with respect to the choice of validation set and strongly correlated across sets. For instance, both the COCO and LAION validation sets suggest the importance of Up.2.Attention, i.e., we should apply more attention modules in the Up.2 stage.
***
**Q4. About robust training.**
Robust training in Section 3.1 is applied to pre-trained SD-v1.5.
***
**Q5. Compression details of image decoder.**
We apply 50% uniform channel pruning to the image decoder, resulting in a compressed efficient image decoder with approximately $\frac{1}{4}$ of the size and MACs.
To train the efficient image decoder, we use only the text prompts from the LAION dataset. For each text prompt, we obtain the latent representation from the UNet of SD-v1.5 after 50 denoising steps with DDIM. We then forward the latent to both our efficient image decoder and the decoder from SD-v1.5 to generate two outputs, and distill the efficient decoder by minimizing the mean squared error between them. We provide the details on Line 148 and will expand them in the revision.
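For concreteness, the distillation objective described above can be sketched as follows (a minimal, self-contained illustration; `efficient_decoder` and `teacher_decoder` are hypothetical stand-ins, and the DDIM sampling and optimizer loop are omitted):

```python
def mse(a, b):
    """Mean squared error between two flat lists of decoder outputs."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def decoder_distill_loss(latent, efficient_decoder, teacher_decoder):
    """Distillation loss: match the efficient decoder's output to the
    frozen SD-v1.5 decoder's output on the same UNet latent."""
    return mse(efficient_decoder(latent), teacher_decoder(latent))

# Toy usage: a scaled "efficient decoder" vs. an identity "teacher decoder"
# on a fake 4-element latent.
latent = [0.1, -0.2, 0.3, 0.4]
loss = decoder_distill_loss(latent, lambda z: [2 * v for v in z], lambda z: z)
```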
***
**Q6. About gamma in Eq.11.**
Thank you for your valuable comment. We empirically set a dynamic gamma to adjust the original loss into a similar scale to step distillation loss. We provide a detailed ablation analysis of different scaling strategies in the response PDF (Fig.2.b) and will include it in the revision.
***
References:
[a] Kim, et al., On Architectural Compression of Text-to-Image Diffusion Models, 2023.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal review
Comment: Dear Authors,
I sincerely appreciate the time and effort for the rebuttal and would like to increase my score from 6 to 7 for the following reasons:
- The authors have thoroughly described the training computational cost. While a concern remains regarding whether many researchers can access such computing resources and reproduce this work, it should not be a determining factor in evaluating this work. I also deeply appreciate the detailed discussions on a channel pruning baseline and a concurrent study.
- The authors ensured that the evaluation results were genuinely zero-shot and conducted further examinations regarding the impact of different data on architecture evolution. Additionally, the choice of the hyperparameter gamma was investigated during this rebuttal. Thanks for the clarification and additional experiments.
- The starting point of robust training, i.e., pretrained SD-v1.5 (not trained from scratch), is understandable, and the details of compressing the image decoder are addressed.
- The newly conducted experiments on SD-v2 look impressive, and I believe they can attract attention from the community.
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: Dear Reviewer 3k2m,
Thank you for your positive response! We are glad to know that your questions have been answered. We will include the additional discussions and analysis in our revised paper.
Best regards,
Authors | Summary: This paper introduces a novel framework that significantly reduces the inference speed of text-to-image diffusion models to under two seconds on mobile devices. The authors propose two key techniques. Firstly, they introduce Efficient U-Net which enhances inference speed by eliminating redundant architectural components without compromising performance. Secondly, the authors present a novel step distillation method that incorporates a classifier-free guidance loss term, which further improves CLIP score. Extensive experiments are conducted on the MS-COCO dataset to validate the effectiveness of the proposed approach.
Strengths: - The paper tackles a timely and practically-relevant problem supported by a fair amount of experiments, and stands as the pioneering study in attempting to reduce the latency of text-to-diffusion models on mobile devices.
- The paper covers a good amount of relevant previous studies.
Weaknesses: - In lines 232-233, the authors mention the use of an additional dataset for the fine-tuning purpose. It is crucial for the authors to provide further clarification regarding the fair comparison issue in relation to this additional dataset.
- The proposed distillation scheme appears to involve a series of intricate choices, making it less readily applicable to other settings. For instance, can this distillation scheme be applied to SD-v2.0?
- The clarity of Algorithm 1 should be improved. For instance, the condition “T is not satisfied” and $\hat{A}$ need clarification. Also, please write the details regarding removing a group of actions in line 136.
- The readability of Section 4.1 can be enhanced, particularly by providing an explicit distillation algorithm. Further, the reason as to how step distillation and cfg loss trade off FID/CLIP score is unclear.
- While the image generation speed has been enhanced, the model still carries a memory burden.
I am willing to raise my score if above concerns are properly addressed during rebuttal.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In Fig 5, can authors provide the generated images under SD without ResBlocks (before robust training)?
- Just out of curiosity, did authors conduct experiments on transformer-based diffusion models (e.g., DiT [1]) as well?
[1] Peebles, William, and Saining Xie. (2022) "Scalable diffusion models with transformers."
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper provided limitations along with future research directions in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for the positive feedback and valuable suggestions. We appreciate that the reviewer acknowledges our paper introduces a novel framework with an efficient UNet and novel step distillation to significantly reduce the inference speed of text-to-image models, and stands as a pioneering study. We also thank the reviewer for agreeing that the problem studied in this paper is timely and practically relevant, with extensive experiments. We address the main questions in the following.**
***
**Q1. About the additional dataset.**
We report all the quantitative experiments in the paper by only using public datasets for fair comparisons with existing works, as mentioned in Lines 230-232.
Some qualitative images in the paper, such as Fig. 1 in the main paper, are generated by using the model fine-tuned on additional dataset, as mentioned on Lines 232-233. Fine-tuning our model on the additional dataset is optional and actually achieves very similar quantitative results as using public datasets, while sometimes generating more visually-pleasing images.
***
**Q2. Distillation scheme on Stable Diffusion V2.**
Thanks for the suggestion. Our distillation pipeline is a generic approach without relying on specific models. We conduct the distillation experiments on SD-v2 and plot the FID and CLIP scores in the author response PDF (Fig.1.a). As we can see, our 8-step distilled model achieves comparable performance to the 50-step SD-v2 model. Please note that we use the exact same hyper-parameters from the training of SD-v1.5 for the step distillation of SD-v2, and further tuning might lead to better results.
***
**Q3. Clarification of Algorithm 1.**
Thanks for your suggestion; we will improve the clarity of Algorithm 1. $\hat A$ refers to the desired action indicated by the scoring metric defined in Sec. 3.1. Here "$T$ is not satisfied" should be "latency objective $S$ is not satisfied". We apologize for the notation typo and will fix it in the revision.
For architecture optimization, the model is initialized from pretrained SD-v1.5, and only 'removing' actions ($\hat A^-$) are executed, looping until the target latency $S$ is satisfied. When the target latency is achieved, the evolving process reaches a vibrating equilibrium. If the latency falls below the objective, the 'adding' action ($\hat A^+$) is executed, as in Line 158 in Alg. 1.
For both actions ($\hat A^-$ and $\hat A^+$), we add or remove a group of blocks at a time to save evolving and validation costs. For instance, we remove 2 attention blocks in the first Downsampling stage at a time when these blocks are identified as less valuable (slow, contribute less CLIP score).
***
**Q4. About the detailed distillation algorithm in Sec. 4.1.**
Thanks for the suggestion. We provide an explicit step distillation algorithm in the author response PDF (Alg. 1) and will also include it in the revision.
***
**Q5. How step distillation and cfg loss trade off FID/CLIP score.**
With our CFG loss in step distillation, the student model learns, with some probability, to mimic two CFG steps of the teacher model with a single CFG step. Note that CFG uses the guidance scale to trade off the FID and CLIP score (as in Fig. 4 Left). Therefore, during step distillation, the sampled guidance range of the CFG loss and the probability of applying the CFG loss influence the FID/CLIP trade-off of the student model, which is inherited from the FID/CLIP trade-off of the teacher model.
As shown in Fig. 6(d), using a small range of CFG guidance scales, such as only 3 (the red line), the student CLIP score cannot be improved, since the teacher model only provides supervision with good FID but a weak CLIP score at a CFG scale of 3. On the other hand, using only the CFG loss without the vanilla distillation loss (the purple line), the student model has a weak FID score, since the teacher then exhibits overall good CLIP but weak FID behavior. Therefore, we use the probability of applying the CFG loss and the range of CFG guidance scales to trade off the FID/CLIP information from the teacher model, which is then learned by the student model.
Thanks for the suggestion and we will include the discussion in the revised paper.
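A minimal sketch of the sampling logic described above (illustrative names only; the teacher's two-step DDIM rollout and the actual distillation loss are omitted):

```python
import random

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional toward
    the conditional noise prediction by guidance scale w."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

def sample_distill_target(eps_uncond, eps_cond, p_cfg=0.5, w_range=(2.0, 8.0)):
    """With probability p_cfg, distill against a CFG-combined teacher
    prediction with a randomly sampled guidance scale; otherwise use the
    plain (vanilla) conditional prediction. Varying p_cfg and w_range
    shifts the FID/CLIP trade-off that the student inherits."""
    if random.random() < p_cfg:
        w = random.uniform(*w_range)
        return cfg_combine(eps_uncond, eps_cond, w)
    return eps_cond
```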
***
**Q6. About model memory.**
Thank you for raising the discussion. As mentioned in the Limitation section (Line 323), our current scope is the speed optimization of text-to-image models (within two seconds of runtime on mobile devices), and we leave storage size and running memory optimizations as future work. In Tab. 6 in the Appendix, we can still observe that our efficient model runs with almost 47% less memory on an NVIDIA A100 with TensorRT than SD-v1.5. We further obtain the running memory on an iPhone 13 Pro Max (iOS 16.5.1), benchmarked by Xcode. Our model requires 3.2G NPU memory (out of a total of 5.5G NPU memory) when using our demo app, while we cannot obtain a consistent estimate for the original SD-v1.5 due to out-of-memory issues on the NPU.
***
**Q7. Generated images under SD without ResBlocks.**
Thanks for the suggestion. We provide the generated images of SD-v1.5 without ResBlocks before robust training in the author response PDF (Fig.1.b). We will also include them in the revised paper.
***
**Q8. About experiments on DiT [a].**
Thanks for suggesting DiT, which is a relevant and great work; we will discuss it in the revision. In our work, we focus on accelerating large-scale text-to-image models on mobile devices. As of now, DiT only has models trained on ImageNet. Training DiT on large-scale datasets such as LAION for text-to-image generation requires re-designing the model architecture of DiT to incorporate text conditioning, as well as substantial computational resources. We are interested in incorporating architecture improvements from DiT in our future work.
***
References:
[a] Peebles, William, and Saining Xie. Scalable diffusion models with transformers. arXiv, 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the detailed response and the complementary experiments.
- The concern regarding the utilization of additional dataset has been well addressed.
- Authors have shown that the work can be extended to other models without heavy hyperparameter tuning.
- I now have a clear understanding of Algorithm 1.
For the above reasons, I raise my score from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vQqa,
Thanks for your prompt response and positive rating! We are glad to know your concerns are addressed, and we will include the additional discussions and analysis in our final manuscript.
Best,
Authors | Summary: Text-to-image generation (T2I) is getting popular and has a vast application value. This paper explores efficient T2I by reducing computational redundancy. They learn an efficient U-Net via data distillation and then decrease the required diffusion steps. They can achieve T2I on an iPhone 14 Pro in 2 seconds.
Strengths: + This paper is well-written and easy to follow.
+ The efficiency of T2I models is less explored in previous studies. The proposed efficient U-Net is valuable for future research and can be practical for real-world usage.
+ They provide an attractive demo video in their supplementary.
Weaknesses: + Why use ViT to encode the input prompt (Fig. 3)? From my best knowledge, ViT is for encoding images, and StableDiffusion borrows the text encoder (similar to BERT) from CLIP.
+ Since they aim at T2I on mobile devices, various devices should be compared/discussed. iPhone 14 Pro is currently one of the most powerful mobile phones. How about those mid-level phones? Or older versions of the iPhone. This study can make this paper more robust.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the Weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank the reviewer for the positive feedback and thoughtful comments. We appreciate the reviewer's acknowledgment that the paper proposes a valuable efficient U-Net for research and practical usage, is easy to follow, and provides an attractive demo video.**
***
**Q1. About using ViT for encoding the input prompt in Fig.3.**
Here we follow the model notation from CLIP [a], where ViT denotes the model variant comprising both the image encoder and the text encoder. In our model, we only use the text encoder to obtain textual embeddings from input prompts. Thank you for the suggestion; we will add more details in the revised version to better clarify the implementation.
***
**Q2. Latency on more devices.**
Thank you for the suggestion. We conducted latency analysis on more devices, such as the iPhone 12 Pro Max and iPhone 13 Pro Max, and will add these results in the revision. We use the latency benchmark tool from Xcode to obtain reproducible runtimes on different devices.
>| Device | Text Encoder (ms) | UNet (ms) | VAE Decoder (ms) | Overall (s) |
>|----------------|:-----------------:|:---------:|:----------------:|:-----------:|
>| iPhone14 Pro | 4.0 | 230 | 116 | 1.96 |
>| iPhone13 Pro Max | 5.7 | 315 | 148 | 2.67 |
>| iPhone12 Pro Max | 6.3 | 526 | 187 | 4.40 |
***
References:
[a] Radford, Alec, et al. Learning transferable visual models from natural language supervision. ICML, 2021. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you all for your positive rating and valuable comments. We upload a one-page PDF to include additional figures and algorithms. All other questions are addressed in the following individual responses. We sincerely look forward to having further discussions with you during Reviewer-Author Discussions if there are any other questions.
Thanks,
Authors
Pdf: /pdf/6e358922a7b6f6806b65175b51d5a29d129c54ef.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Domain Agnostic Fourier Neural Operators | Accept (poster) | Summary: This paper presents a novel method of extending Fourier Neural Operators (FNO) to irregular geometries using an indicator function to represent the shape of the geometric area. This area is then extended to a larger, regular area, thereby allowing FNO to handle irregular geometries. This is indeed an innovative approach, as the use of indicator functions in neural operators to handle irregular areas has not been done before. The methodology bears some similarities to the use of neural networks in PINNs to learn hard constraints, ensuring regular equations comply with constraints on the geometric boundary. Experimental results indicate that the proposed method achieves commendable performance with fewer parameters.
Strengths: The paper's idea is innovative: using indicator functions in neural operators to handle irregular geometries is a novel approach. The authors show through experimental results that the proposed method performs well with fewer parameters, which is a significant advantage.
Weaknesses: 1. The review of related work seems insufficient. The recent development of methods represented by transformers [1,2,3] that handle irregular geometric areas well is not mentioned or cited at all. Moreover, last year's ICLR Factorized FNO [4] also dealt with several non-uniform geometric area problems.
2. Regarding the technical aspect of the paper, I have a concern. The original irregular grid may be adaptive, implying that we may need a very high resolution when embedding it into a uniform grid. This might mean that the uniform grid's spacing needs to be smaller than the smallest spacing of the original grid, leading to considerable computation waste. The proposed method does not seem to avoid this waste.
3. I suggest that for datasets like airfoil and hyperelasticity, which have been detailed in other works, there is no need for further extensive description in the main text. From a machine learning researcher's perspective, the dataset's attributes, sizes, and challenges are more important than detailed principles of its generation, especially when these have already been thoroughly discussed in other studies.
Considering the above points, the authors first need to add a few key references, carefully consider the technical questions raised, and then heavily revise the paper. If the core technical challenge (question 2) can be addressed, I may consider increasing the score after the rebuttal. Otherwise, I would suggest that the authors continue to revise this work for submission to another conference or journal.
1. Choose a Transformer: Fourier or Galerkin (https://arxiv.org/abs/2105.14995)
2. Transformer for Partial Differential Equations' Operator Learning (https://arxiv.org/abs/2205.13671)
3. GNOT: A General Neural Operator Transformer for Operator Learning (https://arxiv.org/abs/2302.14376)
4. Factorized Fourier Neural Operators (https://arxiv.org/abs/2111.13802)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments.
**References on transformer-type neural operators and F-FNO**: We thank the reviewer for the kind suggestions and have added F-FNO [4] as a baseline in our comparison. As shown in the tables below, DAFNO outperforms F-FNO in both the hyperelasticity and airfoil design problems. Regarding the transformer-type neural operators, we would like to point out that [1] is based on regular grids and does not handle irregular geometric areas. While [2] and [3] can handle irregular geometric areas, they were officially published in April and July this year, respectively, which was around the same time this manuscript was submitted. We will cite these works in our revised paper.
**Handling non-uniform grids**: DAFNO can be easily combined with Geo-FNO to include either an analytical or trainable mapping from non-uniform/irregular mesh grids to uniform mesh grids. As a demonstration of this capability, we consider irregular grids in the airfoil problem and use a pre-computed function to map them to regular grids, as shown in Fig. 1 in the attached pdf file of the global response. We then train eDAFNO on regular grids and map the results back. The test loss on irregular grids using the learned eDAFNO model is 0.659\%$\pm$0.007\%, which is similar to the DAFNO results on uniform grids.
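A one-dimensional toy of the grid-mapping idea above (illustrative only; the actual mapping in Geo-FNO-style pipelines is two-dimensional and possibly trainable): resampling a field given on an irregular, ascending point set onto a uniform grid via piecewise-linear interpolation.

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at point x, with xs ascending."""
    if x <= xs[0]:
        return ys[0]
    for (x0, x1), (y0, y1) in zip(zip(xs, xs[1:]), zip(ys, ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

def to_uniform_grid(points, values, n):
    """Resample field `values`, given on irregular ascending `points`,
    onto n uniformly spaced points spanning the same interval."""
    lo, hi = points[0], points[-1]
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return xs, [interp(x, points, values) for x in xs]

# Toy usage: resample a linear field from an irregular grid onto 5 uniform points.
xs, vals = to_uniform_grid([0.0, 1.0, 4.0], [0.0, 1.0, 4.0], 5)
```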
Test errors for the hyperelasticity problem, where bold numbers highlight the best method.
| Model| 10 samples | 100 samples | 1000 samples |
| :------------- | :-----------: | :-----------: | :-----------: |
|eDAFNO | **16.446\%**$\pm$**0.472\%** | 4.247\%$\pm$0.066\% | **1.094\%**$\pm$**0.012\%**|
|iDAFNO | 16.669\%$\pm$0.523\% | **4.214\%**$\pm$**0.058\%** | 1.207\%$\pm$0.006\%|
|FNO w/ mask | 19.487\%$\pm$0.633\% | 7.852\%$\pm$0.130\% | 4.550\%$\pm$0.062\% |
|IFNO w/ mask | 19.262\%$\pm$0.376\% | 7.700\%$\pm$0.062\% | 4.481\%$\pm$0.022\% |
|Geo-FNO | 28.725\%$\pm$2.600\% | 10.343\%$\pm$4.446\% | 2.316\%$\pm$0.283\% |
|GNO | 29.305\%$\pm$0.321\% | 18.574\%$\pm$0.584\% | 13.007\%$\pm$0.729\% |
|DeepONet | 35.334\%$\pm$0.179\% | 25.455\%$\pm$0.245\% | 11.998\%$\pm$0.786\%|
|F-FNO | 35.672\%$\pm$3.852\% | 12.135\%$\pm$5.813\% | 3.193\%$\pm$1.622\%|
|UNet | 98.167\%$\pm$0.236\% | 34.467\%$\pm$2.858\% | 5.462\%$\pm$0.048\%|
|FNO w/ smooth $\chi$ | 17.431\%$\pm$0.536\% | 5.479\%$\pm$0.186\% | 1.415\%$\pm$0.025\% |
|IFNO w/ smooth $\chi$ | 17.145\%$\pm$0.432\% | 5.088\%$\pm$0.146\% | 1.509\%$\pm$0.018\% |
Test errors for the airfoil problem.
| Model | Train error | Test error |
| :------------- | :-----------: | :-----------: |
|eDAFNO | 0.329\%$\pm$0.020\% | **0.596\%**$\pm$**0.005\%** |
|iDAFNO | 0.448\%$\pm$0.012\% | 0.642\%$\pm$0.020\% |
|eDAFNO on irregular grids | 0.331\%$\pm$0.003\% | 0.659\%$\pm$0.007\% |
|Geo-FNO | 1.565\%$\pm$0.180\% | 1.650\%$\pm$0.175\% |
|F-FNO | 0.566\%$\pm$0.066\% | 0.794\%$\pm$0.025\% |
|FNO w/ mask | 2.676\%$\pm$0.054\% | 3.725\%$\pm$0.108\% |
|UNet w/ mask | 2.781\%$\pm$1.084\% | 4.957\%$\pm$0.059\% |
---
Rebuttal Comment 1.1:
Title: Feedback
Comment: After reading your rebuttal, I find most of my concerns resolved. I have raised my score to 5, but not higher, because I still think computing grid transformations to a uniform grid (as in Geo-FNO) is inefficient and has many limitations. Reading your revision, I found this is still a challenge; for example, for 3D airfoil problems, you would need very high-resolution grids, which might be computationally expensive.
Strengths: The paper is well written and clear. The different choices of architecture are well justified.
- It tackles an interesting and useful problem for real-world application of FNO.
- The idea of incorporating the domain information in the kernel is simple, but effective and well grounded, and the smoothing of the domain characteristic function helps avoid discontinuity problems.
- Experimental results on three datasets are strong.
- The crack propagation dataset is well considered and helps highlight the advantages of DAFNO, especially its ability to adapt to evolving topologies.
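The masked-FFT mechanism behind this idea can be sketched in one dimension (an illustrative toy, not the paper's exact eDAFNO/iDAFNO layer, which also includes a local linear term and additional corrections; `R` below is a fixed stand-in for the learned spectral kernel):

```python
import numpy as np

def smoothed_indicator(signed_dist, beta=5.0):
    """Smoothed characteristic function chi in (0, 1) from a signed
    distance to the domain boundary (positive inside the domain)."""
    return 0.5 * (1.0 + np.tanh(beta * signed_dist))

def masked_spectral_layer(h, chi, R):
    """One Fourier-layer pass where the integrand is masked by chi before
    the FFT, and the output is restricted to the domain by chi again."""
    out = np.fft.ifft(R * np.fft.fft(chi * h)).real
    return chi * out

# Toy 1D usage on a 64-point periodic box enclosing the domain |x| < 0.5.
x = np.linspace(-1.0, 1.0, 64, endpoint=False)
chi = smoothed_indicator(0.5 - np.abs(x))      # signed distance to |x| = 0.5
R = np.exp(-np.abs(np.fft.fftfreq(64)))        # fixed stand-in for learned kernel
y = masked_spectral_layer(np.sin(np.pi * x), chi, R)
```

Because the box is periodic and the mask vanishes smoothly outside the domain, the FFT remains applicable even though the physical domain is irregular.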
Weaknesses: - The comparison with several baselines is interesting, but given the recent improvements over FNO and Geo-FNO, having F-FNO [1] as a baseline seems important. In addition to neural operators, other grid-independent methods that can be used on irregular domains have been developed; for instance, [2] implements a continuous model using Implicit Neural Representations (INRs). It would have been interesting to see a comparison, or at least a discussion, of the advantages of DAFNO in light of these recent improvements in the community (e.g., computational cost).
- As explained in the limitations and clear from the methodology itself, the architecture struggles to handle non-uniformly meshed grids due to the inability to use FFT on such grids. This limitation significantly restricts the range of problems that can be efficiently tackled using this architecture. For example, if applied to real-world airfoil surrogate modeling problems like [3], it seems that DAFNO would not offer significant improvements over the original FNO.
- While the idea behind DAFNO is interesting and novel, the architectural improvement upon the regular FNO architecture seems limited. The authors address this issue in a ‘remark’ paragraph, but I am not totally convinced that it is a sufficient improvement, especially since the architecture does not appear to efficiently handle non-uniformly meshed grids.
- This is more a remark than a weakness. In Figure 3, the zero-shot super-resolution prediction from eDAFNO does not effectively highlight the resolution-invariance property, since there is no ground truth to compare it to. While some degree of "deblurring" is noticeable, it remains unclear whether the network successfully captures higher-frequency phenomena that may occur at higher resolutions. The appendix, specifically Figure 10, sheds light on this property for the crack propagation dataset; however, Figure 3 does not truly demonstrate it.
[1]: Tran, Alasdair, et al. "Factorized Fourier Neural Operators." The Eleventh International Conference on Learning Representations. 2022.
[2]: Yin, Yuan, et al. "Continuous PDE Dynamics Forecasting with Implicit Neural Representations." The Eleventh International Conference on Learning Representations. 2023.
[3]: Bonnet, Florent, et al. "AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for Approximating Reynolds-Averaged Navier–Stokes Solutions." Advances in Neural Information Processing Systems 35 (2022): 23463-23478.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I wonder why Geo-FNO is not in the baselines for the Crack propagation dataset. It would have been an interesting baseline, especially to see if it could adapt to evolving geometries better than a masked FNO. It is claimed that Geo-FNO cannot handle topology change, but once a continuous mapping is learned, it can be applied. I agree that it should not adapt well, but is there a specific reason for its absence in these experimental results?
2. Just to confirm my understanding, for the FNO baseline, the grid is interpolated as in Geo-FNO, and then a mask is given as an additional input?
3. I don’t quite understand the link from equation 5 to the equation before 6. Why is $\chi(x)$ a factor of $W^l h(x) + c^l$? Is it to ensure that DAFNO will not produce results for points outside of the domain?
4. Given the link of the smoothing function to a signed distance function (SDF) via the $\tanh(\beta\,\text{dist}(x, \partial \Omega))$ formulation, is there a reason for not directly using an SDF instead of the smoothing function? It may have given more (or different) information to the network, as in previous works (e.g., [4]).
5. Out of curiosity, have you experimented with other smoothing functions besides tanh (e.g., a logistic function)?
[4]: Guo, Xiaoxiao, Wei Li, and Francesco Iorio. "Convolutional neural networks for steady flow approximation." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.
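As an aside on question 5: the tanh and logistic smoothers coincide up to a rescaling of the sharpness parameter, since $\frac{1}{2}(1+\tanh(\beta d)) = \sigma(2\beta d)$, where $\sigma$ is the logistic function. A quick numerical check (illustrative helper names, not from the paper):

```python
import math

def tanh_smoother(d, beta=1.0):
    """Smoothed indicator via tanh: 0.5 * (1 + tanh(beta * d))."""
    return 0.5 * (1.0 + math.tanh(beta * d))

def logistic_smoother(d, beta=1.0):
    """Smoothed indicator via the logistic function with doubled sharpness:
    sigma(2 * beta * d) = 1 / (1 + exp(-2 * beta * d))."""
    return 1.0 / (1.0 + math.exp(-2.0 * beta * d))
```

So the choice between them only reparameterizes $\beta$; any genuinely different behavior would require a different family of smoothers.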
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitations are already presented in the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable suggestions.
**Comparison with additional baselines**: We have added F-FNO as an additional baseline in both the elasticity and airfoil problems. On the other hand, since INR focuses on learning a time-continuous dynamics model of the underlying flow, it is not applicable to these two baseline problems. We would like to point out that, when comparing F-FNO with DAFNO, our original conclusions still stand: DAFNOs consistently outperform F-FNO in accuracy, with halved computational time.
Test errors for the hyperelasticity problem.
| Model| 10 samples | 100 samples | 1000 samples |
| :------------- | :-----------: | :-----------: | :-----------: |
|eDAFNO | **16.446\%**$\pm$**0.472\%** | 4.247\%$\pm$0.066\% | **1.094\%**$\pm$**0.012\%**|
|iDAFNO | 16.669\%$\pm$0.523\% | **4.214\%**$\pm$**0.058\%** | 1.207\%$\pm$0.006\%|
|Geo-FNO | 28.725\%$\pm$2.600\% | 10.343\%$\pm$4.446\% | 2.316\%$\pm$0.283\% |
|F-FNO | 35.672\%$\pm$3.852\% | 12.135\%$\pm$5.813\% | 3.193\%$\pm$1.622\%|
The per-epoch runtime (second) in the hyperelasticity problem.
|Model | eDAFNO | iDAFNO | FNO | IFNO | Geo-FNO | F-FNO |
| :------------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
|runtime (s) | 2.00 | 1.70 | 1.81 | 1.62 | 5.12 | 3.41 |
Test errors for the airfoil problem.
| Model | Train error | Test error |
| :------------- | :-----------: | :-----------: |
|eDAFNO | 0.329\%$\pm$0.020\% | **0.596\%**$\pm$**0.005\%** |
|iDAFNO | 0.448\%$\pm$0.012\% | 0.642\%$\pm$0.020\% |
|eDAFNO on irregular grids | 0.331\%$\pm$0.003\% | 0.659\%$\pm$0.007\% |
|Geo-FNO | 1.565\%$\pm$0.180\% | 1.650\%$\pm$0.175\% |
|F-FNO | 0.566\%$\pm$0.066\% | 0.794\%$\pm$0.025\% |
**Handling non-uniform grids**: We would like to point out that DAFNO can be readily combined with the grid mapping technique in Geo-FNO to handle non-uniform grids. As a demonstration of this capability, we consider irregular grids in the airfoil problem, use a pre-computed function to map the irregular grids to regular grids, and then train eDAFNO. In this irregular grid set, we place more grid points near the airfoil to provide better resolution near the important parts, as suggested by the reviewer. The test loss of the learned eDAFNO model is 0.659\%$\pm$0.007\%, which is similar to the DAFNO results on uniform grids. In Fig. 1 of the pdf file attached to the global response we show the irregular mesh, and in Fig. 2 we plot the errors of Geo-FNO, DAFNO on regular grids, and DAFNO on irregular grids. One can see that while both DAFNOs substantially outperform Geo-FNO, the error contours from eDAFNO with the irregular mesh show a smaller mismatch region near the airfoil, verifying the flexibility of DAFNO in meshing and its capability to resolve fine-grained features in real-world modeling problems.
The original paper focused on uniform grids to highlight our simple but efficient architecture for embedding the characteristic domain encoding, which can be readily combined with the grid deformation technique in Geo-FNO to handle non-uniform grids, in addition to its unique capability of handling topological changes.
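To make the grid-mapping idea concrete, the following is a minimal 1D numpy sketch of a pre-computed mapping from a uniform computational grid to a non-uniform physical grid. The sinusoidal `deform` function and the `strength` parameter are hypothetical names for illustration only; they are not the exact mapping used in the experiments (the rebuttal only states that a pre-computed sinusoidal mapping was used).

```python
import numpy as np

def deform(s, strength=0.2):
    """Map uniform computational coordinates s in [0, 1] to a non-uniform
    physical grid that clusters points near s = 0.5 (a hypothetical
    sinusoidal mapping; strength < 1 keeps the mapping monotone)."""
    return s + strength * np.sin(2.0 * np.pi * s) / (2.0 * np.pi)

# Uniform computational grid and its non-uniform physical counterpart.
s = np.linspace(0.0, 1.0, 65)
x = deform(s)

# A field sampled on the non-uniform physical grid can be pulled back onto
# the uniform grid by interpolation, after which the FFT-based operator
# applies as usual.
f_physical = np.sin(4.0 * np.pi * x)      # field sampled at non-uniform x
f_uniform = np.interp(s, x, f_physical)   # field resampled at uniform s
```

The same pull-back can be done with any analytical or trainable monotone mapping; the key point is that the operator itself still acts on a regular grid.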
**Geo-FNO for crack propagation**: The crack example is presented intentionally as a problem where Geo-FNO falls short and cannot be used. The reason is that Geo-FNO requires a pre-defined or trainable isomorphism between a rectangular domain and the targeted irregular domains. In fracture problems, however, the domain undergoes severe topological changes, including the emergence of new holes and discontinuities/boundaries (as shown in Fig. 5), which are not known a priori. Such an isomorphism does not exist (intuitively, there is no way to define a mapping between each yellow region in Fig. 5 and a rectangular domain).
**FNO baseline**: The reviewer is correct that the grid in the FNO baseline is interpolated as in Geo-FNO, and then a mask is given. The settings as well as the interpolated benchmark datasets are consistent with the Geo-FNO paper.
**Explanation of $\chi(x)$ as a factor of $W^l(x)+c^l$**: Yes, the reviewer is correct. Multiplying $W^l(x)+c^l$ by $\chi(x)$ makes the values outside of the main domain zero. Additionally, it allows us to factor $\chi(x)$ out of the entire equation and present it in the form of Eq. 6.
**Using SDF as the characterization function**: We did not use the signed distance function (SDF) directly because it lacks physical meaning in our setting: the characterization function $\chi$ is a smoothed binary representation that provides near-local structural information. By multiplying it into the integral layer, we eliminate the interactions between points inside and outside the physical domain, similar to the molecular dynamics method [1]. As pointed out by [2], the SDF provides global structural information, and its effect is very different from the local information in a binary representation. In their setting, the SDF works but the binary representation does not, probably because their NN architecture differs substantially from ours: they use the SDF as an input, whereas we apply the characteristic function to the layer weights. Moreover, the tanh of the distance function serves as a smoothing function, while the SDF itself is not smooth.
[1] Hansson, Tomas et al. "Molecular dynamics simulations." Current opinion in structural biology 12.2 (2002): 190-196.
[2] Guo, Xiaoxiao et al. "Convolutional neural networks for steady flow approximation." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.
**Other smoothing functions**: We have also tested other smoothing functions, such as the Gaussian filter. The effect is very similar to that of tanh. This fact is also verified by Table 6 in our ablation study: as long as the characteristic function has a sufficient level of smoothness, the test error does not vary much.
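The tanh-of-distance construction described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code: `smoothed_chi` and the sharpness parameter `beta` are hypothetical names for the smoothed characteristic function and the smoothing-level hyperparameter discussed in the rebuttal.

```python
import numpy as np

def smoothed_chi(dist, beta=10.0):
    """Smoothed characteristic function from a distance-to-boundary field.

    dist < 0 inside the domain, dist > 0 outside. beta (a hypothetical
    name for the smoothing-level hyperparameter) controls sharpness:
    larger beta -> closer to a hard binary mask, smaller beta -> a
    smoother transition across the boundary.
    """
    return 0.5 * (1.0 - np.tanh(beta * dist))

# Example: a disk of radius 0.3 centered in the unit square.
xs = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(xs, xs, indexing="ij")
dist = np.hypot(X - 0.5, Y - 0.5) - 0.3   # signed distance to the circle
chi = smoothed_chi(dist, beta=20.0)        # ~1 inside the disk, ~0 outside
```

Any sufficiently smooth sigmoid-like profile (e.g. a Gaussian-filtered binary mask) would play the same role, consistent with the ablation in Table 6.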
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thank you for all your answers, which I found insightful. My two main reservations concerned the limited related work, which missed the F-FNO reference, and the potential inapplicability of the architecture to non-uniformly meshed grids. The authors have effectively addressed both of these concerns; consequently, I will improve my evaluation.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank the reviewer for their kind response and for raising the score. We sincerely appreciate the reviewer's valuable time and suggestions, which helped us to improve the quality of this work. | Summary: The Fourier Neural Operator (FNO) is a model in the field of neural operators that has successfully interpreted various physical phenomena. However, one of the issues with FNO is its limitation of learning only on rectangular domains. In Geo-FNO, this problem is addressed by lifting irregular domains to the latent space of rectangular domains.
The authors propose a novel convolution integral kernel through the domain characteristic function $\chi(x)$ and introduce DAFNO (Domain Agnostic Fourier Neural Operator). DAFNO outperforms the existing baselines on problems involving airfoils and hyperelasticity.
Strengths: Conducting research on a domain-agnostic physics simulator is a crucial endeavor. In particular, the field of neural operators, which the authors have explored, offers a highly efficient approach to interpreting recent physical information, thus holding significant potential.
Weaknesses: Could you provide a more detailed explanation of the mathematical motivation behind the domain characteristic function? Specific explanations of Eq. (4) and Eq. (5) would be necessary.
Due to the varying model sizes between the baseline models and the proposed model in Table 1, it is difficult to claim that the comparison is fair. A more equitable comparison would standardize all models to the same size.
The authors proposed a new neural operator model called DAFNO. However, besides modifying the internal structure of the iterative layers in Geo-FNO (specifically, splitting the structure of the iterative layers in Figure 2), it is challenging to identify any other distinct novelties.
Adding more baselines from the field of neural operators, such as MWT (multiwavelet-based neural operator) and WNO (wavelet neural operator), could lead to better experiments and improved evaluations.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Is there no experimental result for the benchmark dataset (Pipe), based on the Navier-Stokes equations, from the Geo-FNO paper? I regard it as one of the significant experiments in this domain.
Intuitively, it might be expected that DAFNO, with its iterative layers designed in parallel, could have larger training and inference times than baseline models such as FNO and Geo-FNO. Can you provide any measurement results on this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: There are no limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments.
**Mathematical motivation behind the domain characteristic function**: In order to obtain a truly domain-independent operator, we aim to hard-code the domain information into the architecture. The challenge is to maintain the applicability of the FFT while hard-coding the bounded, arbitrarily shaped domain. This aim is achieved if we enclose the bounded domain inside a rectangular box (so the FFT is applicable) but "cut" the interactions between the main domain and the exterior, as if they were two non-interacting bodies. This means that when the FNO integral is computed for a point $x$, integration over points $y$ that fall in the exterior should be avoided. This is achieved by adding $\chi(x)\chi(y)$ inside the integral. $\chi$ is 1 inside and 0 outside of the domain; as a result, the integrand becomes zero for $(x,y)$ pairs where one point falls inside and the other outside of the domain, since $\chi(x)\chi(y) = 0$ for such pairs. Hence, the main domain and the exterior are separated. With this simple but effective modification, the integral operator is still defined over a box and the convolution form of the integral is preserved. As a result, the FFT remains applicable while the bounded domain information is hard-coded into the architecture.
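This construction can be sketched in numpy (a minimal illustration under the stated factorization, not the authors' implementation): since the integrand $k(x-y)\,\chi(x)\chi(y)\,v(y)$ factors as $\chi(x)\,[k * (\chi v)](x)$, the masked integral reduces to masking the input, applying an FFT-based convolution over the enclosing box, and masking the output.

```python
import numpy as np

def masked_spectral_layer(v, chi, kernel_hat):
    """One FFT-based convolution with the chi(x)*chi(y) domain encoding.

    chi        : characteristic function on the enclosing box (1 inside,
                 0 outside the physical domain).
    kernel_hat : 2D Fourier transform of the convolution kernel on the box.
    The chi(x)*chi(y) factor in the integrand is equivalent to masking the
    input field before the convolution and the output after it, so the
    FFT over the rectangular box remains applicable.
    """
    conv = np.fft.ifft2(kernel_hat * np.fft.fft2(chi * v)).real
    return chi * conv

rng = np.random.default_rng(0)
n = 16
xs = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
chi = (np.hypot(X - 0.5, Y - 0.5) < 0.3).astype(float)   # binary disk domain
# A smooth periodic bump kernel, transformed once up front.
kernel_hat = np.fft.fft2(np.exp(-8.0 * (np.minimum(X, 1 - X) ** 2
                                        + np.minimum(Y, 1 - Y) ** 2)))
v = rng.standard_normal((n, n))
out = masked_spectral_layer(v, chi, kernel_hat)
```

The key property, verified below, is that field values in the exterior have no influence on the output inside the domain: the two regions behave as non-interacting bodies.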
**Unify model sizes in Table 1**: We have updated the total number of parameters in DeepONet to be on the same level as the other models ($\sim$2.5M), and the results are updated accordingly. Here, we point out that the only exceptions are the two implicit models (i.e., iDAFNO and IFNO), which have much fewer parameters. This is because these models are designed to be layer-independent in parameters: different layers share the same set of parameters, so the total number of parameters stays the same as the number of layers increases. To provide a fair comparison between explicit and implicit models, we employ the same architectural hyperparameters (number of layers, width, modes, etc.) for eDAFNO, iDAFNO, FNO, and IFNO, while keeping the number of parameters in all explicit models on the same level as the other baselines. As a result, the implicit models have far fewer parameters than the others.
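The parameter-count gap between explicit and implicit models follows directly from weight sharing. The toy sketch below illustrates this with a hypothetical dense layer (not the actual iDAFNO/IFNO Fourier layer): an explicit model stores distinct weights per layer, while an implicit model reuses one shared weight set across all iterative layers, so its parameter count is independent of depth.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 8, 4   # hypothetical channel width and number of layers

def iterative_layer(v, W, b):
    # Stand-in for one iterative layer; only the parameter accounting
    # matters here, not the actual DAFNO layer computation.
    return v + np.maximum(0.0, v @ W + b)

# Explicit model (eDAFNO/FNO-style): distinct weights for every layer.
explicit = [(rng.standard_normal((width, width)), np.zeros(width))
            for _ in range(depth)]
# Implicit model (iDAFNO/IFNO-style): one weight set shared by all layers.
shared_W, shared_b = rng.standard_normal((width, width)), np.zeros(width)

n_explicit = sum(W.size + b.size for W, b in explicit)
n_implicit = shared_W.size + shared_b.size
```

With identical architectural hyperparameters, the explicit model therefore carries `depth` times as many parameters as the implicit one, matching the nparams column in the table below.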
The total number of parameters and per-epoch runtime (second) in the hyperelasticity problem.
|Model | eDAFNO | iDAFNO | FNO | IFNO | Geo-FNO | GNO | DeepONet | UNet | F-FNO |
| :------------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
|nparams | 2.37M | 0.60M | 2.37M | 0.60M | 3.02M | 2.64M | 3.10M | 3.03M | 3.21M|
|runtime (s) | 2.00 | 1.70 | 1.81 | 1.62 | 5.12 | 98.37 | 940.12 | 5.04 | 3.41 |
Test errors for the hyperelasticity problem.
| Model| 10 samples | 100 samples | 1000 samples |
| :------------- | :-----------: | :-----------: | :-----------: |
|eDAFNO | **16.446\%**$\pm$**0.472\%** | 4.247\%$\pm$0.066\% | **1.094\%**$\pm$**0.012\%**|
|iDAFNO | 16.669\%$\pm$0.523\% | **4.214\%**$\pm$**0.058\%** | 1.207\%$\pm$0.006\%|
|FNO w/ mask | 19.487\%$\pm$0.633\% | 7.852\%$\pm$0.130\% | 4.550\%$\pm$0.062\% |
|IFNO w/ mask | 19.262\%$\pm$0.376\% | 7.700\%$\pm$0.062\% | 4.481\%$\pm$0.022\% |
|Geo-FNO | 28.725\%$\pm$2.600\% | 10.343\%$\pm$4.446\% | 2.316\%$\pm$0.283\% |
|GNO | 29.305\%$\pm$0.321\% | 18.574\%$\pm$0.584\% | 13.007\%$\pm$0.729\% |
|DeepONet | 35.334\%$\pm$0.179\% | 25.455\%$\pm$0.245\% | 11.998\%$\pm$0.786\%|
|F-FNO | 35.672\%$\pm$3.852\% | 12.135\%$\pm$5.813\% | 3.193\%$\pm$1.622\%|
|UNet | 98.167\%$\pm$0.236\% | 34.467\%$\pm$2.858\% | 5.462\%$\pm$0.048\%|
**Challenging to identify distinct novelties**: To our best knowledge (and also as pointed out by reviewers FUid and BzBv), our work has for the first time proposed to modify the internal structure of the iterative layers and encode domain geometry. As a result, DAFNOs are the first neural operator that can represent and handle dynamically changing domain topology (also pointed out by reviewer FUid).
**Adding more baselines such as MWT**: We appreciate the reviewer's suggestion. The reason we did not use them as baselines in our current work is that MWT requires the input grid sizes to be integer powers of 2 (i.e., $2^N$) and does not fit the benchmark datasets. We will include a discussion in the revised manuscript to acknowledge the contributions of MWT and WNO. Per the reviewer's suggestion, we have added Factorized FNO (F-FNO) as an additional baseline in both the elasticity and airfoil problems (see the table for the elasticity problem above, and the results for the airfoil problem in the global response), and the original findings still stand: DAFNOs consistently outperform all baselines.
**Baseline on the pipe flow example from Geo-FNO**: As suggested by the reviewer, we have added an additional test of DAFNO on the pipe flow example. As shown in Fig. 3 of the pdf file attached to the global response, eDAFNO achieves a performance similar to Geo-FNO: comparing the relative $L^2$ errors, eDAFNO's test error on the pipe dataset is 0.719\%, while the test error of Geo-FNO is 0.67\%. Comparing the maximum absolute error, eDAFNO has 0.051, while Geo-FNO has 0.061. This is probably due to the fact that all pipes have a very simple geometry, which can be accurately represented by the pre-specified mapping in Geo-FNO. We want to further comment that such a pre-specified mapping for grid deformation can also be readily added to DAFNO, as demonstrated in Fig. 1 of the pdf file attached to the global response for the airfoil problem. In that case, DAFNO would be exactly the same as Geo-FNO.
**Computational cost comparison**: We have already reported the runtime comparison in Table 4 in the Appendix (also copied above). As demonstrated in the table, DAFNOs' runtime is similar to FNO's, and iDAFNO and eDAFNO beat Geo-FNO by 67\% and 61\%, respectively.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering all my concerns and questions. Reflecting this, I will raise my score (borderline reject to weak accept).
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank the reviewer for their kind response and for raising the score. We sincerely appreciate the reviewer's valuable time and suggestions, which helped us to improve the quality of this work. | Summary: In this paper, the authors propose the Domain Agnostic Fourier Neural Operator (DAFNO), an FNO that can deal with irregular boundaries. While the classical FNO is limited by construction to rectangular domains, DAFNO simply includes a smoothed version $I(\cdot)$ of the characteristic function of the domain on which the data is defined. DAFNO is shown to outperform relevant baselines on 2-dimensional problems with irregular boundaries and one problem with topology changing over time.
Strengths: The paper is about a very simple yet effective idea: including a smoothed mask to model boundaries inside the integral operator. This approach is original to my knowledge and effective in terms of strong experimental evaluation relevant to practical applications, including being able to handle topology changes. The paper is clear in its description and well-placed in the current literature on neural operators and deep surrogate models for scientific ML.
Weaknesses: In my opinion, there is no major weakness in this paper. The main limitation could be that the approach is (particularly) simple, but I believe this does not need to be seen as a negative point. Sometimes, simpler is better. The experimental results are solid but do not encompass more complex settings such as 3D fluid dynamics. Another weakness is the lack of source code, which I believe should be provided, given the simplicity of the setting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the proposed method be extended to modeling systems with 3 or more dimensions?
- At line 262: “In general, the topology evolution rule that determines Ω(t) can be obtained from a separate neural network or from physics as is the case in the current example”, did you test out a separate neural network in the experiments as well?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation (about uniformly meshed grids) is mentioned. Other limitations are covered in the “weaknesses” section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable suggestions.
**High-dimensional problems**: DAFNOs are readily applicable to more complex settings and higher-dimensional problems, as neither the characteristic geometric encoding nor the smoothing technique is constrained to a specific dimension. It will just require more memory in higher dimensions.
**Release code**: We have uploaded our DAFNO code of the hyperelasticity problem on anonymous github and sent the link to the Area Chair. Our DAFNO package on other problems will also be made publicly available on github once the paper is accepted.
**Topology evolution rule as NN**: Not yet. Practically, one can calculate the topology evolution rule in the form of fracture energy [1] or a damage field [2] using a separate neural network, although that might introduce additional modeling error in long-term propagation. In this work we used physical laws so as to focus on resolving the geometric changes.
[1] Goswami, Somdatta, et al. "A physics-informed variational DeepONet for predicting crack path in quasi-brittle materials." Computer Methods in Applied Mechanics and Engineering 391 (2022): 114587.
[2] You, Huaiqian, et al. "Learning deep implicit Fourier neural operators (IFNOs) with applications to heterogeneous material modeling." Computer Methods in Applied Mechanics and Engineering 398 (2022): 115296.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your reply.
(The code has not been shared with us reviewers, but I trust the AC will check it.)
Given my concerns have mostly been solved and the response to the other reviewers, I am happy to raise my score!
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank the reviewer for their kind response and for raising the score. We sincerely appreciate the reviewer's valuable time and suggestions, which helped us to improve the quality of this work. | Rebuttal 1:
Rebuttal: We thank the reviewers for the constructive comments, for recognizing the importance/usefulness of our work (reviewers FUid, UPhM, dnax), the novelty and elegance of DAFNO's architecture (reviewers FUid, fNa3, dnax, BzBv), DAFNO's role as the first neural operator that can handle dynamically changing topologies (reviewer FUid), effective architecture and strong experimental results (reviewers fNa3, dnax), computational advantage with far fewer parameters (reviewer BzBv), and clarity in writing (reviewers fNa3, dnax).
**Being simple yet flexible**: We want to comment on the simplicity of our DAFNO model: as also pointed out by Reviewer fNa3, being simple does not need to be seen as a negative point. Sometimes simpler is better! It is the performance that matters the most, and the simplicity of DAFNO's architecture also facilitates implementation.
**Additional baselines and experiments**: In order to further enrich the baselines to compare to, we have added Factorized FNO (F-FNO) as another baseline in both the elasticity and airfoil problems. Our original conclusion still stands: DAFNOs consistently outperform all selected baselines. We also show the applicability of DAFNO on irregular/adaptive grids using the airfoil dataset, where the non-uniform grids are generated using sinusoidal functions and are highly adaptive in the vicinity of the airfoil. DAFNO achieves very similar accuracy compared to uniform meshes, while the error gets reduced near the airfoil.
**Release code**: We have uploaded our DAFNO code of the hyperelasticity problem on anonymous github and sent the link to the Area Chair. Our DAFNO package on other problems will also be made publicly available on github once the paper is accepted.
Test errors for the hyperelasticity problem, where bold numbers highlight the best method.
| Model| 10 samples | 100 samples | 1000 samples |
| :------------- | :-----------: | :-----------: | :-----------: |
|eDAFNO | **16.446\%**$\pm$**0.472\%** | 4.247\%$\pm$0.066\% | **1.094\%**$\pm$**0.012\%**|
|iDAFNO | 16.669\%$\pm$0.523\% | **4.214\%**$\pm$**0.058\%** | 1.207\%$\pm$0.006\%|
|FNO w/ mask | 19.487\%$\pm$0.633\% | 7.852\%$\pm$0.130\% | 4.550\%$\pm$0.062\% |
|IFNO w/ mask | 19.262\%$\pm$0.376\% | 7.700\%$\pm$0.062\% | 4.481\%$\pm$0.022\% |
|Geo-FNO | 28.725\%$\pm$2.600\% | 10.343\%$\pm$4.446\% | 2.316\%$\pm$0.283\% |
|GNO | 29.305\%$\pm$0.321\% | 18.574\%$\pm$0.584\% | 13.007\%$\pm$0.729\% |
|DeepONet | 35.334\%$\pm$0.179\% | 25.455\%$\pm$0.245\% | 11.998\%$\pm$0.786\%|
|F-FNO | 35.672\%$\pm$3.852\% | 12.135\%$\pm$5.813\% | 3.193\%$\pm$1.622\%|
|UNet | 98.167\%$\pm$0.236\% | 34.467\%$\pm$2.858\% | 5.462\%$\pm$0.048\%|
|FNO w/ smooth $\chi$ | 17.431\%$\pm$0.536\% | 5.479\%$\pm$0.186\% | 1.415\%$\pm$0.025\% |
|IFNO w/ smooth $\chi$ | 17.145\%$\pm$0.432\% | 5.088\%$\pm$0.146\% | 1.509\%$\pm$0.018\% |
Test errors for the airfoil problem.
| Model | Train error | Test error |
| :------------- | :-----------: | :-----------: |
|eDAFNO | 0.329\%$\pm$0.020\% | **0.596\%**$\pm$**0.005\%** |
|iDAFNO | 0.448\%$\pm$0.012\% | 0.642\%$\pm$0.020\% |
|eDAFNO on irregular grids | 0.331\%$\pm$0.003\% | 0.659\%$\pm$0.007\% |
|Geo-FNO | 1.565\%$\pm$0.180\% | 1.650\%$\pm$0.175\% |
|F-FNO | 0.566\%$\pm$0.066\% | 0.794\%$\pm$0.025\% |
|FNO w/ mask | 2.676\%$\pm$0.054\% | 3.725\%$\pm$0.108\% |
|UNet w/ mask | 2.781\%$\pm$1.084\% | 4.957\%$\pm$0.059\% |
The total number of parameters and per-epoch runtime (second) in the hyperelasticity problem.
|Model | eDAFNO | iDAFNO | FNO | IFNO | Geo-FNO | GNO | DeepONet | UNet | F-FNO |
| :------------- | :-----------: | :-----------: | :------------- | :-----------: | :-----------: | :------------- | :-----------: | :-----------: | :-----------: |
|nparams | 2.37M | 0.60M | 2.37M | 0.60M | 3.02M | 2.64M | 3.10M | 3.03M | 3.21M|
|runtime (s) | 2.00 | 1.70 | 1.81 | 1.62 | 5.12 | 98.37 | 940.12 | 5.04 | 3.41 |
Pdf: /pdf/b3e745f424915e156d46c767b15ec730516a4243.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Fourier Neural Operators (FNOs) are a popular method for modeling physical systems such as different types of PDEs. However, in order to use the FFT to make FNO efficient, the input needs to be a regular grid. The authors study the question of irregular grid inputs for FNO, as well as problems with changing topologies. They propose to add a smoothed characteristic function in the integral layer architecture. This directly encodes the topology into the architecture, and it allows the architecture to continue to use FFT while allowing irregular grid inputs. The authors apply this domain agnostic method to the regular FNO as well as implicit FNO. The authors run experiments on material modeling, airfoil simulation, and material fractures.
Strengths: - This is the first neural operator that can handle learning with topology changes. This is a useful property to handle fracture problems, such as material fracture or earthquakes.
- The technique is relatively simple: adding a characteristic function to the integral kernel and adding a smoothing term, but it is novel and elegant that the characteristic function encodes the topology directly in the architecture. This is how the authors’ technique can learn changing topologies, when, say Geo-FNO learns a fixed mapping and cannot learn changing topologies.
- The authors experiment on three problems: hyperelasticity, airfoils, and crack propagation, and they also perform ablation studies to separate the characteristic function and the smoothness.
Weaknesses: - My biggest concern is that the technique may not be able to handle highly irregular topologies or topologies with fine-grained features, for two separate reasons. As shown in Figure 1, the resulting grid from the authors’ technique is still uniform, but with a characteristic added, to tell what is inside or outside the topology. Since the grid is still uniform, this may not work well for a topology that is very irregular or has fine features. This is in contrast to Geo-FNO, which learns a mapping, allowing it to handle highly irregular topologies and also to focus more grid points on the “important” parts of the topology and fewer grid points on the coarse / less important parts.
Furthermore, the smoothing function could magnify the above problem, making it even harder to model delicate parts of the topology. I think of this as a weakness because the authors say that handling irregular topologies is one of the main contributions of their work.
- The technique is fairly simple: it boils down to adding a characteristic function with smoothing. But on the other hand, this technique is the first to handle dynamically changing topologies.
- The authors did not release code or mention releasing code. If they authors released code (anonymously during submission) it would have a bigger impact.
- Why not compare airfoil with the other baselines? Also, why do the 3rd and 4th panels of Figure 4 look like there is a vertical line discontinuity in the center of the image?
- Why not compare the cracking to more baselines than just FNO, such as Geo-FNO?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can the authors comment on all of the weaknesses listed above?
Also, I wonder if Geo-FNO can be combined with DAFNO, to handle more complex topologies and also be able to handle changing topologies?
Do you plan to release your code? It could even be released during the rebuttal with e.g. https://anonymous.4open.science/.
I would raise my score if some or all of the points from the "weaknesses" section are addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss limitations in Section 5: they only consider changing domains where the grid is uniform, and they only consider problems with the same boundary conditions as the domain changes. I agree that it would be good to add a mapping for grid deformation in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable suggestions.
**Highly irregular topologies or topologies with fine-grained features**: DAFNO can be readily combined with the grid mapping technique in Geo-FNO to handle non-uniform grids. No modification of the NN architecture is required; one just needs to include an analytical or trainable mapping from non-uniform/irregular mesh grids to uniform mesh grids. As a demonstration of this capability, we consider irregular grids in the airfoil problem, use a pre-computed function to map the irregular grids to regular grids, and then train eDAFNO. In this irregular grid set, we place more grid points near the airfoil to provide better resolution near the important parts, as suggested by the reviewer. The test error is provided in the table below: the test loss of the learned eDAFNO model is 0.659\%$\pm$0.007\%, which is similar to the DAFNO results on uniform grids. In Fig. 1 of the pdf file attached to the global response we show the irregular mesh, and in Fig. 2 we plot the errors of Geo-FNO, DAFNO on regular grids, and DAFNO on irregular grids. One can see that while both DAFNOs substantially outperform Geo-FNO, the error contours from eDAFNO with the irregular mesh show a smaller mismatch region near the airfoil, verifying the flexibility of DAFNO in meshing and its capability to resolve fine-grained features.
**Smoothing function**: We treat the smoothing level $\beta$ as a hyperparameter, tuned according to the validation error, to either smooth the boundary or keep the original boundary untouched. This is illustrated in Fig. 8 in the appendix of the paper, where the original boundary is not smoothed because of a very large smoothing level $\beta$. As such, $\beta$ and hence the smoothing level are automatically chosen based on the intrinsic resolution of the topology in the training and validation datasets: if the topology contains fine-grained features, a smaller $\beta$ (and a smoother $\chi$) will result in a larger validation error and will not be chosen. We will include a discussion about this in the revised manuscript to strengthen the paper.
**Release code**: We have uploaded our DAFNO code of the hyperelasticity problem on anonymous github and sent the link to the Area Chair. Our DAFNO package on other problems will also be made publicly available on github once the paper is accepted.
**Airfoil with the other baselines**: The hyperelasticity problem serves as a thorough comparison against available baselines, after which we picked the best-performing baselines for the airfoil problem. This is consistent with the settings in Geo-FNO. In order to further enrich the baselines, we have added another recent FNO variant, Factorized FNO (F-FNO), as a baseline in both the elasticity and airfoil problems. Additionally, as mentioned above, we added eDAFNO applied to irregular grids to highlight the fact that eDAFNO is not constrained to uniform grids. Our conclusion still holds: DAFNOs consistently outperform all selected baselines. The "vertical line discontinuity" in Fig. 4 is a physical phenomenon called a shock wave, which reflects the nature of the transonic flow governed by the Euler equations [1]. As discussed in [2] and [3], the capability to capture such discontinuities is considered an important metric for classical numerical methods.
[1] Luk, Jonathan, and Jared Speck. "Shock formation in solutions to the 2D compressible Euler equations in the presence of non-zero vorticity." Inventiones mathematicae 214.1 (2018): 1-169.
[2] Roe, Philip L. "Characteristic-based schemes for the Euler equations." Annual review of fluid mechanics 18.1 (1986): 337-365.
[3] Hennemann, Sebastian, et al. "A provably entropy stable subcell shock capturing approach for high order split form DG for the compressible Euler equations." Journal of Computational Physics 426 (2021): 109935.
**Cracking with Geo-FNO**: The crack example is presented intentionally as a problem where Geo-FNO falls short and cannot be used. The reason is that the Geo-FNO requires a pre-defined or trainable isomorphism between a rectangular domain and the targeted irregular domains. However, in fracture problems the domain undergoes severe topological changes including the emergence of new holes or discontinuities/boundaries (as shown in Fig. 5) which is not known a priori. Such an isomorphism does not exist (intuitively, there is no way to define a mapping between each yellow regions in Fig. 5 to a rectangular domain).
Test errors for the hyperelasticity problem:
| Model| 10 samples | 100 samples | 1000 samples |
| :------------- | :-----------: | :-----------: | :-----------: |
|eDAFNO | **16.446\%**$\pm$**0.472\%** | 4.247\%$\pm$0.066\% | **1.094\%**$\pm$**0.012\%**|
|iDAFNO | 16.669\%$\pm$0.523\% | **4.214\%**$\pm$**0.058\%** | 1.207\%$\pm$0.006\%|
|FNO w/ mask | 19.487\%$\pm$0.633\% | 7.852\%$\pm$0.130\% | 4.550\%$\pm$0.062\% |
|IFNO w/ mask | 19.262\%$\pm$0.376\% | 7.700\%$\pm$0.062\% | 4.481\%$\pm$0.022\% |
|Geo-FNO | 28.725\%$\pm$2.600\% | 10.343\%$\pm$4.446\% | 2.316\%$\pm$0.283\% |
|GNO | 29.305\%$\pm$0.321\% | 18.574\%$\pm$0.584\% | 13.007\%$\pm$0.729\% |
|DeepONet | 35.334\%$\pm$0.179\% | 25.455\%$\pm$0.245\% | 11.998\%$\pm$0.786\%|
|F-FNO | 35.672\%$\pm$3.852\% | 12.135\%$\pm$5.813\% | 3.193\%$\pm$1.622\%|
|UNet | 98.167\%$\pm$0.236\% | 34.467\%$\pm$2.858\% | 5.462\%$\pm$0.048\%|
Results for the airfoil design problem:
| Model | Train error | Test error |
| :------------- | :-----------: | :-----------: |
|eDAFNO | 0.329\%$\pm$0.020\% | **0.596\%**$\pm$**0.005\%** |
|iDAFNO | 0.448\%$\pm$0.012\% | 0.642\%$\pm$0.020\% |
|eDAFNO on irregular grids | 0.331\%$\pm$0.003\% | 0.659\%$\pm$0.007\% |
|Geo-FNO | 1.565\%$\pm$0.180\% | 1.650\%$\pm$0.175\% |
|F-FNO | 0.566\%$\pm$0.066\% | 0.794\%$\pm$0.025\% |
|FNO w/ mask | 2.676\%$\pm$0.054\% | 3.725\%$\pm$0.108\% |
|UNet w/ mask | 2.781\%$\pm$1.084\% | 4.957\%$\pm$0.059\% |
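For readers unfamiliar with the "FNO w/ mask" baseline appearing in the tables above, the idea can be sketched in one dimension as masking the field before and after a truncated spectral multiplication. This is our own minimal illustration (function names and the 1-D setting are ours), not the baseline's actual implementation:

```python
import numpy as np

def masked_spectral_layer(u, mask, weights, modes):
    """One 1-D Fourier layer on a masked field, in the spirit of the
    'FNO w/ mask' baseline: zero the field outside the irregular domain,
    apply a truncated spectral multiplier, and re-mask the output.
    (Illustrative 1-D sketch, not the authors' implementation.)
    """
    u_hat = np.fft.rfft(u * mask)                   # transform the masked field
    out_hat = np.zeros_like(u_hat)                  # complex, same length as u_hat
    out_hat[:modes] = u_hat[:modes] * weights[:modes]
    return np.fft.irfft(out_hat, n=u.size) * mask   # suppress leakage outside the domain

# Identity check: keeping all modes with unit weights returns the masked field.
u = np.linspace(0.0, 1.0, 8)
mask = np.array([0, 1, 1, 1, 1, 1, 0, 0], dtype=float)
out = masked_spectral_layer(u, mask, np.ones(5, dtype=complex), modes=5)
```

The sketch highlights why such a baseline can struggle near boundaries: the hard mask introduces discontinuities whose spectral leakage is only crudely removed by re-masking.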
---
Rebuttal Comment 1.1:
Comment: Thank you for preparing the rebuttal to my review and the other reviews.
I appreciate that you released code, added the F-FNO baseline, and ran airfoil with more baselines. I agree with the clarification of the shock wave phenomenon and about running baselines for airfoil and cracking. I also particularly like the example of airfoil with an irregular grid.
Overall, I am impressed by the rebuttal and I raised my score.
I have a few more questions. What is F-FNO? I think it is the same thing as tensorized FNO (TFNO)? In that case, I am wondering why the parameter count is higher for TFNO than for FNO? Did other hyperparameters change?
I also wonder if DAFNO can be applied with the tensorized technique? And, although the new experiment with DAFNO on an irregular grid is great, it would be interesting to have DAFNO with a trainable mapping. It seems like a learnable irregular mapping would complement the domain adaptive part well? Note that I know it is only 4 days before the end of the discussion period, so this is not a request for more experiments, but a question, e.g., for future work.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank the reviewer for their kind response. We sincerely appreciate the reviewer's valuable time and suggestions, which helped us to improve the quality of this work.
**Connections between F-FNO and T-FNO**: The reviewer is correct that F-FNO [1] has a fundamentally similar architecture to TFNO [2], since both adopt Fourier factorization, with some small differences such as the improved residual connection and training techniques in F-FNO. The reason why F-FNO has more parameters than FNO (as also pointed out in the F-FNO paper) is that F-FNO uses a bigger hidden size in the elasticity problem. To guarantee a fair comparison, we kept the original settings of F-FNO as in [1], since the total number of parameters is on a similar level to the other methods.
[1] Tran, Alasdair, et al. "Factorized Fourier Neural Operators." The Eleventh International Conference on Learning Representations. 2022.
[2] Kossaifi, Jean, et al. "Multi-Grid Tensorized Fourier Neural Operator for High Resolution PDEs." (2022).
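To see why factorization frees budget for a larger hidden size, a back-of-the-envelope count of per-layer spectral parameters can help. The formulas and numbers below are illustrative assumptions on our part, not the exact settings used in [1] or [2]:

```python
def full_spectral_params(modes_x, modes_y, width):
    # Dense 2-D spectral weight: one width x width mixing matrix
    # per retained (kx, ky) mode pair, as in the original FNO.
    return modes_x * modes_y * width * width

def factorized_spectral_params(modes, width, ndim=2):
    # Factorized variant: one width x width matrix per retained mode,
    # per spatial dimension, so the count grows additively in dimensions.
    return ndim * modes * width * width

full = full_spectral_params(12, 12, 32)    # 147,456 per layer
fact = factorized_spectral_params(12, 32)  #  24,576 per layer
```

Because the factorized spectral weights are so much cheaper, a factorized model can spend the saved budget on a wider hidden channel, which is consistent with the parameter counts discussed above.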
**Future extensions on DAFNO**: We appreciate the reviewer's valuable suggestion. Both the tensorized architecture and the data-driven grid mapping should be readily applicable to DAFNOs. We agree with the reviewer that combining these two techniques with DAFNO would indeed be a very interesting direction, which we will consider in future work. | null | null | null | null | null | null |
Small Total-Cost Constraints in Contextual Bandits with Knapsacks, with Application to Fairness | Accept (poster) | Summary: This paper deals with the CBwK problem with a fairness constraint of equalized average costs between groups. The authors propose a dual algorithm: PGD for CBwK (with adaptive stepsize) and provides regret bounds.
Strengths: The problem statement is pretty clear and the related work is relatively thoroughly presented. The limitations are clearly and explicitly stated.
Weaknesses: The algorithm (PGD) seems to be established in the online optimization and bandit literature, and the proofs also seem standard. Also, the regret bound is only superior under some circumstances (e.g., it does not work for soft constraints, and the bound is loose when, for example, the null action has null reward). My major concern is about novelty. I think that to stand out from the existing papers, the authors may want to specify a narrower scheme and claim the contributions accordingly.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The authors make an assumption that the respective proportions $\gamma_g$ are known. Could you please discuss the limitations of such an assumption? And what analytical convenience does this assumption buy you? Are there any less restrictive alternatives?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the reading but would respectfully disagree with the evaluation.
For the PGD algorithm (and the proofs) being standard: Yes, the PGD approaches in the CBwK literature are standard, as we acknowledge and as other reviewers point out (see, e.g., Agrawal and Devanur 2016). Our main point, however, is the adaptive step-size tuning of PGD. We carefully explain in lines 229-244 why and how previous contributions provided suboptimal tunings for PGD, leading to the impossibility of going below the T^{3/4} total-cost constraint. Section 3.2 then provides our main contribution: a careful adaptive tuning of the hyperparameters of PGD, allowing for total-cost constraints as small as 1/\sqrt{T} up to logarithmic factors. We agree and acknowledge that its analysis partially relies on some known building blocks.
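The generic PGD dual update underlying this discussion can be sketched as follows. This is a schematic illustration with hypothetical names and a simple decreasing step-size schedule; the paper's actual contribution, the adaptive tuning of the step size and projection radius, is not reproduced here:

```python
import numpy as np

def pgd_dual_update(lam, cost, budget_rate, step_size, lam_max):
    """One projected-gradient step on the dual (price) variable.

    The dual price rises when the estimated per-round cost exceeds the
    per-round budget, and is projected back onto [0, lam_max].
    (Schematic only; step_size and lam_max tuning is the hard part.)
    """
    return np.clip(lam + step_size * (cost - budget_rate), 0.0, lam_max)

# Toy run: the learner spends only while the price is low; the price
# then stabilizes so that average spending tracks the budget.
lam = np.zeros(1)
for t in range(1, 101):
    step = 1.0 / np.sqrt(t)                        # simple decreasing schedule
    cost = np.array([1.0 if lam[0] < 0.5 else 0.0])
    lam = pgd_dual_update(lam, cost, budget_rate=0.1, step_size=step, lam_max=10.0)
```

With a fixed schedule of this kind the dual price oscillates around its equilibrium; a poor choice of schedule or projection radius is exactly what previous tunings suffered from in the small total-cost regime.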
For the regret bound being only superior under some circumstances (e.g., not working for soft constraints, loose bound when for example null action has null reward): We would like to discuss more in depth this comment, which we object to.
- First, Section 3.3 explains in detail why our bounds in terms of the norms of \lambda^* are sharper than the usual OPT/min B bounds of the literature. This new form of the bound is important in view of the lower bounds alluded to in lines 330-331, and because it does not require the existence of a null-cost action. So, our bounds do provide an improvement upon the literature.
- Second, the literature was so far only (and rightfully) interested in hard constraints. We introduce in the appendix the concept of soft constraints, as we noticed some new phenomena in this setting compared to the usual hard-constraints setting. We actually deal with them in an optimal manner, but with a different algorithm---a primal strategy, see Section F in the supplementary material. Dealing with such (non-classical) soft constraints is an open problem for the PGD strategy considered in the main body. It is also an open problem for all previous strategies in the literature (which did not cover this aspect). To sum up, we have identified tackling soft constraints as a possible extension to CBwK theory, not addressed so far in the literature, and for which we provide partial results (with a primal algorithm). This, in our opinion, cannot be considered a weakness; on the contrary.
- Third, the 'loose bound' is the classical bound in the literature, and it is generally optimal. We mention in Limitation 2 that faster rates should, however, be possible in some very specific settings---a possibility that was not identified at all in the literature. The precise study of this very specific, yet interesting, setting would require space not available in a NeurIPS format. This new insight should therefore be seen as an open problem stimulating future research, rather than a weakness of our contribution.
For not claiming the contributions: We summarized them in lines 48-59 in Section 1.
For the respective proportions gamma_g being known: In practice (at least in the example by Chohlas-Wood et al. [2021]), this should be a mild restriction, and it is not unheard of in the fairness literature. It amounts to having a reasonable knowledge of the breakdown of the population into subgroups. We use this assumption in line 156: the cost function is then known, as required. Replacing T \gamma in line 154 by the empirical number of individuals in the group would induce many additional technicalities, while presenting only limited practical interest, given that the fairness literature has little issue with assuming a reasonable knowledge of the breakdown of the population into subgroups.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough rebuttal. I believe my understanding of this paper is at a reasonable level. And my major concern stays as in the review: that the contribution seems incremental to me (in terms of the algorithm especially, as well as the analysis). Nevertheless I in general agree this paper is well presented, and would like to increase my score to borderline, and leave it to the AC's discretion. | Summary: The problem studies the contextual bandit with knapsack problem with contexts coming from a continuous set, signed costs, and under the assumption that expected reward and cost functions can be uniformly estimated. The learner aims at maximizing their cumulative rewards while guaranteeing constraints of the form $\sum_{t=1}^T c_t\le TB$, where the rhs $TB$ may be as small as $T^{1/2+\epsilon}$. The proposed algorithm is based on a projected gradient descent scheme with adaptive step size. The author discuss applications of the algorithm to problems related to fairness.
Strengths: The authors present an interesting set of results. The model is well motivated and presents some interesting complications wrt the traditional BwK framework. The paper is generally well written and mostly clear.
Weaknesses: I don't have any major concern.
Some minor comments:
- Line 113: "go go"
- It would be good to mention closer to Assumption 1 that, in order to derive guarantees, you need strict feasibility.
- Paragraph starting at line 158 lacks convincing examples.
- Assumption 2: why are you defining $\beta$ there?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Can you provide some further details about the statements of the paragraph starting at line 158 (e.g. what scenarios do you have in mind when you say "typically")?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, limitation are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and fully agree with the evaluation.
For the statements of the paragraph starting at line 158, including twice the word "typically": We agree that this paragraph would benefit from some rewriting. For the total spendings B_total, we had in mind the example by Chohlas-Wood et al. [2021], where a constant fraction of the T individuals may benefit from some costly action, hence a linear total budget T B_total; or, put differently, B_total is larger than some positive constant. This is better detailed on page 31 in the appendix. For the fairness threshold \tau, we meant that in some cases we would ideally have \tau = 0, but due to central-limit-theorem fluctuations, this is not possible and a \tau with a minimal value of the order of 1/\sqrt{T} has to be considered. In general, Chohlas-Wood et al. [2021] want to ensure fair treatment among groups, so they would pick some \tau that is rather small (though they would be ready to have it larger than 1/\sqrt{T}, as this entails some positive discrimination and is effective in practice). All in all, we would entirely rewrite this paragraph, suppressing all occurrences of 'typically' and summarizing the ideas above.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I confirm my positive score. | Summary: This paper studied contextual bandit problems with knapsacks (CBwK). Under this setting, at each time step, a scalar reward is obtained and vector-valued costs are suffered. The agent aims to maximize the cumulative rewards while ensuring that the cumulative costs are lower than some predetermined cost constraints.
In this setting, total cost constraints had so far to be at least of order $T^{3/4}$, where $T$ is the number of time steps, and were even typically assumed to depend linearly on $T$.
This paper imposed a fairness constraint by applying the CBwK model and introduced a dual algorithm based on projected-gradient-descent updates in order to deal with total-cost constraints of the order of $\sqrt{T}$ up to poly-logarithmic terms.
================
The score is kept unchanged.
Strengths: 1. This paper is in general well organized and presents a detailed literature review.
2. The motivation for considering fairness within the CBwK setting is well explained and convincing.
3. Beyond the strengths of the proposed algorithms, the authors also listed the limitations of this work in Section 4.
Weaknesses: 1. My major concern is that no numerical results are presented in this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I suggest to compare the performance of the proposed algorithm and some other algorithms with numerical experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and generally agree with the evaluation.
For the weakness raised: We only provide (very) preliminary simulations (see pages 33-35), rather illustrating how to successfully deal with the fairness constraints. What these simulations do not illustrate, however, is how useful adaptive step-sizes are: this is because total-cost constraints are never broken therein during the regime k=0. We were too short on time to provide improved simulations investigating the usefulness of the adaptive step-sizes and offering comparisons to other algorithms. We will provide such simulations in the final version of the paper (including a summary thereof in the main body).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Summary: The paper considers the problem of general contextual bandits with knapsacks in the regime where Omega(sqrt{T}) <= B <= O(T^{3/4}). The paper provides a new algorithm that is based on prior methodologies of primal-dual algorithm, but instead of relying on them as black-boxes, updates the dual variables using adaptive step-sizes (for the same algorithm using fixed step-size reduces to prior works (e.g., Agrawal and Devanur 2016). The main contribution of this paper is to notice this fact, show a scheduling scheme for the adaptive step-size and prove regret bounds in this regime.
Strengths: + The proposed algorithm is clean and builds on understanding from prior works. Notion of using adaptive step-sizes is common in the optimization literature and bringing that understanding/techniques to contextual bandits with knapsacks is novel. Combining this with existing primal-dual approach is natural.
+ The paper is well-written and easy to understand. The paper also provides problem dependent lower-bounds and surprisingly (for modern papers) honestly discusses the limitations of the algorithm (e.g., not being able to prove a worst case regret bound, when B is actually large).
Weaknesses: - Overall, I do not find the fairness example particularly motivating or the right example. The paper is useful for the mathematical/algorithmic insights. Not sure if this example adds any value to the paper.
- One place where this paper could have really shone is to actually run simulations and show that even in practice the adaptive step-size actually improves the regret in the claimed range. In particular, it would tease out the effect of inability to analyze prior works better vs needing new algorithmic technique (such as adaptive step-size) conclusively.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Nothing in particular, that could help change my mind. Please see my suggestions above on simulations.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This is a mathematical paper and no societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading and generally agree with the evaluation.
For the fairness example: It was our own motivating example for developing a CBwK theory able to handle small total-cost constraints. (Chohlas-Wood et al., 2021, who first introduced this example, could not propose regret bounds; see lines 62-73.) Admittedly, better examples could be provided, and it is clear that breaking the artificial T^{3/4} barrier in the literature was an interesting problem per se.
For simulations: Indeed. We only provide (very) preliminary simulations (see pages 33-35), rather illustrating how to successfully deal with the fairness constraints. What these simulations do not illustrate, however, is how useful adaptive step-sizes are: this is because total-cost constraints are never broken therein during the regime k=0. We were too short on time to provide improved simulations investigating the usefulness of the adaptive step-sizes and offering comparisons to other algorithms. We will provide such simulations in the final version of the paper (including a summary thereof in the main body). | Rebuttal 1:
Rebuttal: We generally agree with the evaluations by Reviewers aT3N - EATd - Hs7n but respectfully disagree with the evaluation by Reviewer 3rBq. We explain below in detail the main two issues we disagree with:
- The main algorithmic contribution is not the PGD approach per se, but its adaptive step-size tuning (see lines 229-244), for which we present a novel contribution, namely, handling the total-cost regime Omega(sqrt{T}) <= T B <= O(T^{3/4}).
- Our regret bound is sharper than existing bounds in the literature (with some norm of \lambda^* replacing the larger OPT/min B term, see Section 3.3) and does not require a null-cost action. That being said, we have identified possible improvements and extensions (listed in Section 4) that are not addressed in the literature, and that we propose as open problems for future research. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation | Accept (poster) | Summary: The paper introduces CL-NeRF, an approach for efficiently adapting Neural Radiance Fields (NeRFs) to real-world scene changes over time. CL-NeRF focuses on continual learning and requires only a few new images to adapt to changes while retaining the memory of unaltered areas. It consists of two main components: a lightweight expert adaptor for adapting to new changes and evolving scene representations, and a conflict-aware knowledge distillation learning objective for memorizing unchanged parts. The authors also propose a new benchmark with comprehensive metrics to evaluate Continual Learning of NeRFs.
Strengths: 1. The inclusion of a benchmark is a valuable contribution as it provides a standardized evaluation framework for assessing the performance of continual learning methods applied to NeRFs.
2. The provision of rendered data sets generated by CL-NeRF is beneficial for the research community.
Weaknesses: The paper's weakness lies in the assumption that continual learning is necessary for NeRFs and the relevance of this assumption to the problem at hand. NeRFs have distinct characteristics that differentiate them from traditional machine learning problems. Firstly, NeRFs do not typically require a vast amount of training data, as they focus on scene-specific reconstruction rather than generalization (i.e., the plain NeRFs studied in the paper, not SRN or pixelNeRF). One scene -- one NeRF.
Additionally, NeRFs do not necessarily face the issue of inaccessible training data, as the training data can be obtained by rendering images of the scene. Therefore, the need for continual learning, which assumes the unavailability of training data, may not be applicable to NeRFs.
Furthermore, the paper does not adequately address a simple baseline approach __for the scene change challenge__. By rendering images for the new given poses, it becomes straightforward to identify areas where the scene changes occur. This straightforward method eliminates the need for complex continual learning techniques in this specific scenario.
---
After rebuttal:
I have updated my score to be borderline accept. The authors' response regarding large-scale tuning of a trained NeRF model has convinced me of the value of their proposed method. While the experiments do not directly validate performance in the large-scale scenario, the current results demonstrate promising capabilities that could benefit future research in this direction. All other reviewers value the contribution of the continual learning nerf method in this paper.
However, I still have some comments about the writing of the paper to avoid potentially misleading readers on the concept of continual learning:
1. The mention of "catastrophic forgetting" may require more explanation. Most readers likely associate this problem with the paper "Overcoming catastrophic forgetting in neural networks" [1], which highlighted it as a serious challenge for sequential learning tasks. But NeRF does not assume sequential learning, since all previous data is stored. The authors could emphasize that even with full data retention, fine-tuning can still be expensive without a continual learning scheme.
2. It may be helpful to include some discussion of efficient network tuning methods as motivation, like the well-known LoRA scheme. Although this paper uses MLPs, parameter-efficient tuning techniques for other networks like transformers are highly relevant to the problem setting.
[1] Kirkpatrick, James, et al. "Overcoming catastrophic forgetting in neural networks." Proceedings of the national academy of sciences 114.13 (2017): 3521-3526.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why do we need to assume that the training data is not available? The training images themselves are already memorized by NeRFs.
2. Why not just compare the image before change and after change?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The problem setting is problematic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Fafa,
Thank you for appreciating our approach. We will address your comments below.
**W1: The assumption that continual learning is necessary for NeRFs and the relevance of this assumption to the problem.**
Thank you for the comment. We understand your concerns regarding continual learning for NeRFs. However, we believe that there are potential scenarios where continual learning is urgently needed:
1) For large-scale scene applications, like city-level scene reconstruction/novel view synthesis, capturing images can be cumbersome and require expensive devices such as drones to capture the entire city. Thus, a setting that requires fewer images for training would alleviate the burden of data capture.
2) As changes may occur frequently, it would be time-consuming to gather training data and retrain the model after each change. For example, if there are ten scene changes, capturing the entire scene ten times is prohibitively expensive. The burden is further heightened by the need for camera calibration and pose estimation, especially when dealing with a large number of images.
3) Our investigation may also be useful for resource-constrained platforms, such as smartphones, where storage resources are an important consideration. This issue is bypassed by our continual learning formulation, as we only need to retain the few images that reflect scene changes.
Moreover, the primary challenge that our study addresses lies in adapting to changes without encountering conflicting supervisory signals, rather than whether we can use information from a pre-trained NeRF or the amount of training data available. Even when the data are available, our designs, especially the adaptor and conflict-aware mechanism, are still useful.
We specifically address the challenge of learning new content with conflict in our paper. By using our method, we can adapt to changes and maintain the integrity of scene representations, enabling us to adjust to scene changes with minimal effort (i.e., 10-20 training images and 10-20 minute training time).
**W2: Does not adequately address a simple baseline.**
In fact, we have employed the straightforward method you mentioned, as detailed in Section 4.1, to identify and generate the pseudo mask for supervising the training of the mask logit. However, the main objective of our continual learning task is to learn a new neural radiance field from the new data that captures scene changes.
Simply identifying areas where scene changes occur does not address the problem of adapting the neural radiance field to the new scene content while maintaining unaltered scene representation. Our proposed continual learning techniques aim to provide a more effective solution to this challenge by leveraging the newly captured images and minimizing the forgetting of previously learned content.
**Q1: Why do we need to assume that the training data is not available?**
Please refer to the question **W1: The assumption that continual learning is necessary for NeRFs and the relevance of this assumption to the problem**.
**Q2: Why not just compare the image before change and after change?**
Please refer to the question **W2: Does not adequately address a simple baseline**.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed explanation.
> For large-scale scene applications, like city-level scene reconstruction/novel view synthesis, capturing images can be cumbersome and require expensive devices such as drones to capture the entire city.
This makes sense to me. (But I recommend the authors to add this kind of illustrations in the paper to highlight this insight. Currently, the indoor scene is a little misleading.)
This concern again aligns with the criticism raised by Reviewer f2t1 regarding the validation: __The number of scenes in the experiment is not large and all of them are synthetic.__
The claim is handling large scale scenes with a lot of images, but all results are on a toy dataset. But I agree that validating the ideas could still be of value, I will see how other reviewers comment on this and join the majority.
> Moreover, the primary challenge that our study addresses lies in adapting to changes without encountering conflicting supervisory signals, rather than whether we can use information from a pre-trained NeRF or the amount of training data available.
I cannot really get the meaning of this challenge. What does `without encountering conflicting supervisory signals` mean here? Do you mind explaining a little bit further? Thanks.
> In fact, we have employed the straightforward method you mentioned, as detailed in Section 4.1, to identify and generate the pseudo mask for supervising the training of the mask logit.
Apologize if I didn't make it clear. I mean that for experiments in Tab 2, Fig 4,5, the baseline can be discussed.
> Simply identifying areas where scene changes occur does not address the problem of adapting the neural radiance field to the new scene content while maintaining unaltered scene representation.
We can just sample points from those unchanged areas and add a loss asking the MLP outputs to remain unchanged.
---
Reply to Comment 1.1.1:
Comment: Reviewer Fafa,
Thanks for your response and suggestions!
> This makes sense to me. (But I recommend the authors to add this kind of illustrations in the paper to highlight this insight. Currently, the indoor scene is a little misleading.) This concern again aligns with the criticism raised by Reviewer f2t1 regarding the validation: The number of scenes in the experiment is not large and all of them are synthetic. The claim is handling large scale scenes with a lot of images, but all results are on a toy dataset. But I agree that validating the ideas could still be of value, I will see how other reviewers comment on this and join the majority.
1) We conduct experiments on two more challenging real-world indoor and outdoor scenes containing various objects and environments to demonstrate the generalization ability of our algorithm. For the indoor scene, we capture images using a camera with LiDAR (specifically, an iPhone 14 Pro) and employ the Record3D App with ARKit to ensure accurate camera poses. On the other hand, the outdoor scene is captured using a DJI drone, focusing on a large-scale subject (the audience seats on a basketball court), with COLMAP utilized to calculate the camera poses. The results (Table R1, Figure R1 in the provided PDF file) reveal that Fine-tuning suffers in old tasks, and Memory Replay struggles with new tasks, highlighting our method's robustness and adaptability across varying environments.
2) Owing to time limitations within the rebuttal period, our research has not been extended to encompass city-scale real-world scenes at this time. Despite this constraint, we remain confident that our methodologies and findings hold potential for practical applications. In the future, we intend to collaborate with industry partners to explore expanding our research to broader and more comprehensive scales.
---
Reply to Comment 1.1.2:
Comment: > I cannot really get the meaning of this challenge. What does without encountering conflicting supervisory signals mean here? Do you mind explaining a little bit further? Thanks.
In the task of adapting a pre-trained NeRF to scene changes, conflict is inherent. When utilizing both old training data and newly acquired or estimated images from a pre-trained NeRF, conflicts can emerge within the altered region (e.g., a newly added apple). We have tested these two cases:
1) **Case 1 - Utilizing Old Training Data**: Our baseline algorithm, MR, trains on both original and new data without handling conflicts. Results are shown in Tables 1-3 and Figures 4-5 of the main paper, with further details in the supplementary materials and the provided PDF.
2) **Case 2 - Leveraging Pre-trained NeRF**: Our ablation study **w/o Expert** trains on new data and uses the KD strategy with a pre-trained NeRF without addressing conflicts. Results are presented in Table 4 in the main paper.
From the results, we can conclude that without properly handling conflicting signals, new task performance in altered regions is unsatisfactory. To address the conflict problem, we introduce a conflict-aware knowledge distillation method, effectively utilizing information from both old training data and a pre-trained NeRF, resulting in a remarkable performance.
---
Reply to Comment 1.1.3:
Comment: > Apologize if I didn't make it clear. I mean that for experiments in Tab 2, Fig 4,5, the baseline can be discussed.
It appears there may be some misunderstandings. Our lightweight expert adaptor can identify areas of scene changes, aligning with the method you referenced. Hence, the results are already addressed in Table 2, Figure 4, and Figure 5.
Specifically, our expert predicts a mask logit for each sampling point to indicate if it has been altered (with a logit close to 1 for a newly added region). Using this logit, we fuse original and new predicted features, as outlined in Equation 6 and Figure 3, alongside a mask loss defined in Lines 212-214. This mask loss can further assist the training process for mask logit estimation.
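For intuition, the mask-guided fusion described above could be sketched as follows. The tensor names and the sigmoid squashing are our illustrative assumptions; the authoritative formulation is Equation 6 in the paper.

```python
import numpy as np

def fuse_features(feat_old, feat_new, mask_logit):
    """Blend per-point features from the frozen NeRF and the expert
    adaptor using the predicted change mask (illustrative sketch of
    the fusion in Eq. 6; the names here are not the paper's)."""
    m = 1.0 / (1.0 + np.exp(-mask_logit))  # logit -> probability of "changed"
    return m * feat_new + (1.0 - m) * feat_old
```

A strongly positive logit (mask value near 1) selects the expert's features for newly added regions, while a strongly negative logit keeps the original features for unchanged regions.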
---
Reply to Comment 1.1.4:
Comment: > We can just sample points from those unchanged areas, and add a loss to ask the MLP outputs unchanged.
Sampling points from unchanged areas can be divided into two cases:
1) **Case 1**: If we attempt to sample points from unchanged areas in old images, we must first define which regions within the old images have changed. This identification may not be feasible. Directly using old training data will result in conflict in the altered region, as we discussed above.
2) **Case 2**: If we rely on sampling points from unchanged areas in new images, a different problem arises. Given that the number of new images is typically limited, the model may face difficulties in effectively mitigating the forgetting phenomenon using this restricted data.
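As an illustration of the Case 2 baseline, sampling unchanged points and constraining the MLP outputs could look like the following sketch; all function and variable names are our assumptions, not the paper's.

```python
import numpy as np

def masked_sampling_loss(pred_rgb, new_rgb, pred_out, frozen_out, changed_mask):
    """Sketch of the masked sampling baseline (Case 2): a photometric
    loss on samples in changed regions against the newly captured
    images, plus a consistency loss asking the MLP outputs on
    unchanged samples to match the frozen pre-trained NeRF."""
    changed = changed_mask.astype(bool)
    photometric = np.mean((pred_rgb[changed] - new_rgb[changed]) ** 2)
    consistency = np.mean((pred_out[~changed] - frozen_out[~changed]) ** 2)
    return photometric + consistency
```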
We further explore Case 2 with an experiment using pseudo mask ground truth, denoted $\hat M_\text{gt}$. Our fine-tuning process for the pre-trained NeRF includes calculating two losses. First, we sample points from the changed areas, calculating the photometric error between the predicted image and the newly captured images. Second, we sample points from the regions that remain unchanged, and we expect the MLP outputs to remain consistent. The experiment is carried out on the Whitehouse dataset, utilizing the ADD operation. The results below compare our algorithm with and without the proposed lightweight expert adaptor (i.e., the same structure as the pre-trained NeRF). The findings indicate that this simple masked sampling strategy results in unsatisfactory performance in the old task.
| Algos | Old | New |
|--------|--------|--------|
| Masked Sampling | 28.97 | 23.64 |
| Ours w/o Expert | 31.24 | 23.42 |
| Ours | **32.33** | **25.29** |

---

Summary: This paper tackles the task of continual learning of NeRF, which aims to adapt NeRFs to real-world scene changes over time using a few new images. To prevent the forgetting problem during adaptation, the authors propose CL-NeRF. CL-NeRF consists of two key components: an expert adaptor for adapting to scene changes and a conflict-aware distillation scheme for memorising unchanged parts. The expert adaptor learns to encode local changes and can be learned from just a few new images. The conflict-aware distillation scheme is designed to preserve the original scene representation for unchanged areas via student-teacher knowledge distillation. Moreover, a new continual learning benchmark for NeRF is introduced to evaluate the proposed method.
Strengths: 1. This paper tackles a practical task, which aims to adapt NeRFs to real-world scene changes over time with minimal data.
2. The paper is well written and easy to follow.
3. New datasets and evaluation metrics are proposed for the introduced task.
Weaknesses: 1. The experimental results are not very well presented. For example, there is no textual explanation for Table 1. What does ‘SEQ’ represent in Table 2, given that there are also results for a sequential operation in Table 3? The quantitative results are shown without a clear accompanying explanation.
2. Following the previous comment, the qualitative results in Figure 4 are not very intuitive. For example, the new and old scenes are very different after the adding operation in the first column. From my understanding, it should only be adding the apple to the old scene? Similarly for the Move and Replace operations, the new and old scenes seem very different.
3. The predicted mask plays an important role in detecting changing area, it would be better to show some qualitative or quantitative results on how well the mask logits are predicted.
4. The scale of the network will increase as more operations are added, since the expert adaptor is scene-specific.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the performance gap between the proposed method and the scenario when new data is abundant? The latter scenario is upper bound for the proposed task.
2. How much testing data is used for each operation? Take the ‘ADD’ operation for example: we can add different objects to different scenes. Is the result based on just one scenario or multiple scenarios?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations and potential negative societal impact are resolved in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: Dear Reviewer tKPz,
Thank you for appreciating our approach. We will address your comments below.
**W1: Experimental results are not very well presented.**
1) We regret the omission of the analysis for Table 1 and appreciate your comments. Table 1 uses the Backward Transfer Metric (BTM) and Forgetting Metric (FM), which are both employed to quantify the forgetting issue. BTM quantifies the level of forgetting by measuring the cumulative gap between the original model performance ($PSNR_{i,i}$) and the final performance ($PSNR_{T,i}$), whereas FM directly measures actual performance after T operations. Details are explained in Section 4.2 (evaluation metrics) in our paper.
2) SEQ stands for sequential operation, as defined in Section 4.2. In both Tables 2 and 3, 'SEQ' refers to the same operation. However, we assess this operation from two different perspectives. In Table 2, we evaluate the performance (i.e., PSNR) of the final model after a series of operations, while Table 3 assesses the performance at each stage. This distinction provides a more comprehensive understanding of our method's performance throughout the entire process.
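The two metrics described in point 1) could be sketched as follows; the indexing convention `psnr[t][i]` (PSNR on task i after training stage t) and the exact signs and averaging are our assumptions, and the authoritative definitions are in Section 4.2 of the paper.

```python
def backward_transfer(psnr):
    """Backward Transfer Metric (sketch): cumulative gap between the
    final performance PSNR[T][i] and the performance PSNR[i][i]
    measured right after task i was learned, averaged over old tasks."""
    T = len(psnr) - 1  # index of the final stage
    return sum(psnr[T][i] - psnr[i][i] for i in range(T)) / T

def forgetting_metric(psnr):
    """Forgetting Metric (sketch): the actual performance on the old
    tasks after all T operations, averaged."""
    T = len(psnr) - 1
    return sum(psnr[T][:T]) / T
```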
**W2: Qualitative results in Figure 4 are not very intuitive.**
Yes, only adding an apple to the old scene. The difference in qualitative results comes from the different camera views. The new scene images are rendered from camera poses that reflect scene changes (i.e., object-centric camera pose trajectory), while the old scene images are rendered from camera poses where the scene remains unchanged. They belong to different areas; thus, we render them to depict the effective adaptation of new scenes and the preservation of unchanged areas. We sincerely apologize for any confusion and will clarify this point in the final version of our paper.
**W3: How well the mask logits are predicted?**
We have included a visualization of the predicted mask in Figure R4 to demonstrate its quality. A pixel value approaching 1 (white) indicates a high probability that the pixel belongs to an altered region, whereas a lower value suggests an unaltered region. This visualization offers a qualitative insight into the effectiveness of our mask logits in detecting changes within the scene.
**W4: The scale of the network will increase when more operations are added.**
We would like to clarify that we employ only one expert throughout our process. For task-1, no experts are used. At task-2, one expert is added. For task-3 and subsequent tasks, we continue with just one expert, utilizing the original model combined with the expert from the previous task to distill knowledge into the current expert. This approach allows our method to efficiently adapt to new tasks while preserving knowledge from previous tasks without the need to add multiple experts.
In our current exploration, this strategy is sufficient to achieve reasonable performance (see Table 1-3 in the main paper) and outperform baseline methods without increasing model complexities. We believe that adding one expert at a time may further boost performance, albeit at the cost of increased model complexity. We will study the trade-off between model complexities (adaptor numbers) and performance in the final version of our paper.
**Q1: What is the performance gap when new data is abundant?**
To demonstrate the upper bound of our proposed task, Figure R5 shows the relationship between performance and the number of images, specifically for the ADD operation on the Whitehouse dataset. The figure reveals that while increasing the number of training images does enhance performance, the improvement is relatively minor.
**Q2: How many testing data for each operation? Add different objects to different scenes? Is the result based on just one or multiple scenarios?**
1) For synthetic datasets such as Whitehouse, Kitchen, and Rome, the testing data comprises approximately 20 images.
2) In our study, we primarily focus on basic operations involving a single object. Additionally, we have conducted tests with multiple objects added simultaneously, and the results are presented in Table R3 and Figure R3.
3) Results in each table correspond to a specific scenario; however, our tests encompass several synthetic and real-world datasets.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. Most of my concerns are resolved in the rebuttal, and I will keep my original rating of borderline accept.

---

Summary: This paper addresses the challenge of reducing the data and time cost of retraining NeRF when the scene changes. To this end, the paper proposes two key components: a trainable network for the changing part of the scene, and a conflict-aware network for the unchanged part of the scene. With these two components, CL-NeRF achieves high training efficiency for updating the changing part of the scene while preserving the unchanged part well. A new benchmark based on this challenge is also proposed.
Strengths: * When there are changes in the scene, such as adding or removing objects, CL-NeRF can update the previously modeled results with very few photos and remember the unchanged parts.
* By using a lightweight expert adapter to predict masks, CL-NeRF can perceive which parts have been changed and only modify the features related to the location of the scene change. The conflict-aware self-distillation mechanism can reduce conflicts in the supervisory signals.
* A new dataset has been established to evaluate under the condition of scene changes.
Weaknesses: * The number of scenes in the experiment is not large and all of them are synthetic. However, this is understandable due to limitations in time and equipment. It is highly recommended to evaluate the method on more real scenes.
* Training DyNeRF using Memory Replay (MR) methods and simultaneously using photos with both changed and unchanged scenes may lead to ambiguity, which may lead to unfair comparisons. A better MR method may be partitioning the data based on whether the region has changed or not.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * It is possible that the poor performance of DyNeRF in scene representation is due to the use of MR methods. If this is the case, how will the performance of DyNeRF be when using only the images of the unchanged parts of the scene and the images of the changing parts separately?
* The experiment only includes a few scenes, all of which are synthetic. There are no experiments with real scenes. It would be beneficial to include experiments with more scenes in future studies.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed limitations in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: Dear Reviewer f2t1,
Thank you for appreciating our approach. We will address your comments below.
**W1: Evaluate the method on more real scenes.**
We conduct experiments on two more challenging scenes (real-world indoor and outdoor scenes) containing various objects and environments to demonstrate the generalization ability of our algorithm.
For the indoor scene, we capture images using a camera with LiDAR (specifically, an iPhone 14 Pro) and employ the Record3D App with ARKit to ensure accurate camera poses. On the other hand, the outdoor scene is captured using a DJI drone, focusing on a large-scale subject (the audience seats on a basketball court), with COLMAP utilized to calculate the camera poses. The results of these experiments are presented in Table R1 and Figure R1. We observe that Fine-tuning (FT) exhibits severe forgetting in old tasks, while Memory Replay (MR) underperforms in new tasks, particularly in qualitative results. These findings effectively showcase the robust performance of our methods across varying environments, further demonstrating the versatility and adaptability of our proposed algorithm.
**W2: Training DyNeRF using a better MR method.**
1) It is important to note that the original DyNeRF does not include a memory replay (MR) operation as our method does. DyNeRF performs poorly without any MR, as discussed in section B.4 of the supplementary materials. Therefore, in Table 2 of the main paper, we compare DyNeRF with a robust MR method that retains all previous-stage data.
2) Following your suggestion, we also explore more accurate MR methods on DyNeRF by manually excluding the conflict views, with results presented in Table R4. MR1 represents the original MR method from our main paper, while MR2 excludes frames with conflict regions in the old task, which have been manually removed. This manual selection effectively avoids conflict, enhancing new task performance but reducing the original image count, thus weakening old task performance.
3) Our method demonstrates superior performance compared to DyNeRF with various MR strategies. Importantly, our approach does not rely on using any previously captured images for training, whereas DyNeRF with the augmented MR method requires storing old images from all previous stages. This distinction highlights the efficiency and effectiveness of our proposed method in handling changing scenes without the need for extensive storage and manual intervention.
**Q1: How will the performance of DyNeRF be when using data without conflict?**
Please refer to the question **W2: training DyNeRF using a better MR method**.
**Q2: It would be beneficial to include experiments with more scenes.**
Please refer to the question **W1: evaluate the method on more real scenes**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, which has resolved most of my concerns. It is highly recommended to supplement the additional contents to the final version. Besides, dynamic NeRF methods are one of the most important streams of related works, but some recent representative dynamic NeRF works are missing, including but not limited to HyperNeRF, Hexplane, K-planes, TiNeuVox, dycheck *etc*. It is suggested to have a careful check and include the missing works.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer f2t1,
thank you for recognizing our approach. We sincerely apologize for overlooking the related work and acknowledge the importance of dynamic NeRF in our study. We will address this in our final version.
In our original version, we categorized dynamic NeRF works [1-5,8] into two groups: scene flow (or deformation) estimation, and time-aware design. However, given that some time-aware approaches also utilize time for deformation estimation, we now categorize these works into two streams: deformation field estimation and the incorporation of time-aware inputs.
HyperNeRF [6] and TiNeuVox [7] are categorized under deformation estimation since they estimate deformations to a canonical space. Hexplane [9] and K-planes [10], which introduce time-aware features from specific planes, fall under the time-aware inputs incorporation category. Additionally, Dycheck [11] offers a reality check on these dynamic NeRF works.
Here are the revisions and will incorporate them into our final version:
This line of work usually takes videos containing dynamic objects as inputs and can be divided into two lines: deformation field estimation [1-7] and the incorporation of time-aware inputs [8-10]. Dycheck [11] offers a critical assessment of recent developments in dynamic NeRFs.
[1] Neural scene flow fields for space-time view synthesis of dynamic scenes.
[2] Dynibar: Neural dynamic image-based rendering.
[3] Nerfies: Deformable neural radiance fields.
[4] D-nerf: Neural radiance fields for dynamic scenes.
[5] Nerfplayer: A streamable dynamic scene representation with decomposed neural radiance fields.
[6] HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields.
[7] TiNeuVox: Time-Aware Neural Voxels.
[8] Neural 3d video synthesis from multi-view video.
[9] HexPlane: A Fast Representation for Dynamic Scenes.
[10] K-Planes: Explicit Radiance Fields in Space, Time, and Appearance.
[11] Monocular Dynamic View Synthesis: A Reality Check.

---

Summary: The authors propose CL-NeRF, which tries to solve the problem of rendering scenes that evolve over time using a few images of the altered scene while retaining information about the unaltered regions. The proposed method contains two key components - 1. an expert adapter, to adapt to new regions, and 2. a conflict-aware knowledge distillation to preserve the original scene representation in the unchanged regions. The paper also introduces a new benchmark to evaluate NeRF-based methods on a continually evolving scene, particularly focusing on a few key operations including ADD, REMOVE, MOVE, and DELETE. The method claims strong adaptation to scene changes, requiring minimal images captured in the changed area while still ensuring high rendering quality in the unchanged regions.
Strengths: - The proposed task is novel, i.e. given sufficient images at time t=0 to train a NeRF, we want to adapt the radiance field (at time t=t > 0) to new changes in the scene with as few images in the altered regions.
- The paper is easy to read and follow. All the components in the proposed method are well-motivated.
- The authors introduce a new continual learning benchmark for NeRFs, containing 3 scenes with 4 single-step operations and a sequence of them. The method is compared against sufficient baselines, outperforming them by a sufficient margin.
Weaknesses: - There are not many significant weaknesses, however, some components of the method e.g. knowledge distillation to prevent forgetting have been seen in some form (without being conflict aware) in [1]. This does not decrease the novelty of the current work, but it would be interesting to evaluate these methods in the proposed benchmark.
- It would also be interesting to see how this method would compare against a Generalizable NeRF baseline that uses the memory replay of the previous task (perhaps a bit more selective to not cover the altered regions) and the new training images captured in the altered region.
1. MEIL-NeRF - Memory Efficient Increment Learning of Neural Radiance Fields.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In the sequential setup, are new experts added to the network from the previous task? i.e at task-1 there are no experts, task-2: 1 expert is added, task-3: 2 experts are added, etc.
- Is it necessary for the captured images in the later tasks to be local? i.e. only capturing the altered regions? What would happen if the captured images are more global and covered the entire scene?
- In the current benchmark, every operation only manages to alter one area. It would be interesting to evaluate multiple altered regions and how the current method would scale in such cases.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see any negative social impact on their work. The authors have discussed limitations in the supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: Dear Reviewer fqk8,
Thank you for appreciating our approach. We will address your comments below.
**W1: Evaluate [1] in the proposed benchmark.**
Thank you for the insightful advice. Unfortunately, due to the limited rebuttal time and the unavailability of MEIL-NeRF's [1] code, it is challenging for us to reproduce their work and provide a comparison in this response. Nevertheless, we appreciate the suggestion and will include a comparison result in the final version of our paper. It is worth noting that MEIL-NeRF [1] is designed for novel view synthesis tasks involving static scene stream data, whereas our approach addresses the efficient adaptation of NeRFs to dynamic real-world scene changes using minimal new images. Furthermore, our method emphasizes preserving unaltered areas, adapting to new scenes, and resolving conflicts due to scene dynamics through continual learning, which are not addressed in [1].
**W2: Compare against a Generalizable NeRF.**
To address this concern, we conduct comparative analyses with a generalizable NeRF called IBRNet, as detailed in Table R2 and Figure R2. Our tests, conducted with 10 new images, reveal unsatisfactory performance and serious artifacts in IBRNet's performance (denoted as MR1). Additionally, we conduct further testing on IBRNet, employing a more selective MR method involving manual removal of conflict data (denoted as MR1*). However, the obtained results remain unsatisfactory. These can be attributed to its heavy reliance on neighboring frames. To further evaluate IBRNet, we test it in different settings by varying the number of images (denoted as MR2, MR3). Although increasing the number of images improves IBRNet's performance, it still falls short compared to our model.
**Q1: Are new experts added to the network from the previous task?**
We would like to clarify that we employ only one expert throughout our process. For task-1, no experts are used. At task-2, one expert is added. For task-3 and subsequent tasks, we continue with just one expert, utilizing the original model combined with the expert from the previous task to distill knowledge into the current expert. This approach allows our method to efficiently adapt to new tasks while preserving knowledge from previous tasks without the need to add multiple experts.
In our current exploration, this strategy is sufficient to achieve reasonable performance (see Table 1-3 in the main paper) and outperform baseline methods without increasing model complexities. We believe that adding one expert at a time may further boost performance, albeit at the cost of increased model complexity. We will study the trade-off between model complexities (adaptor numbers) and performance in the final version of our paper.
**Q2: Is it necessary for the captured images in the later tasks to be local?**
Yes, we opt for a local view approach in our training and evaluation for several reasons:
1) Changes within a scene typically occur in localized areas. Capturing images of altered regions helps reduce the number of images used for adapting to new changes.
2) This setting allows for a more accurate assessment of performance gains and progress, as the PSNR is calculated at the pixel level.
Our approach can also adapt to images that cover global views. This is because when the captured new images contain global views, the predicted mask logit makes the adaptor primarily learn the difference, and our proposed conflict-aware knowledge distillation mechanism ensures that altered areas contained in our previously captured images do not damage the training. This enhances the robustness of our approach to scenes with global views, as these may have more conflict regions.
**Q3: How the current method would scale in multiple altered regions?**
To showcase the effectiveness of our method, we have developed a benchmark where 10 objects are simultaneously added and distributed across various locations within the room. The results, presented in Table R3 and Figure R3, illustrate that our method performs admirably even in challenging scenarios, achieving 32.31 and 32.15 for the old and new tasks, respectively.
---
Rebuttal Comment 1.1:
Comment: I have gone through the author's rebuttal and the comments from the other reviewers. The rebuttal has addressed a few concerns and I was already leaning toward an `accept' (hence keeping my score).
Thank you for adding the comparison against IBRNet, and I am quite pleasantly surprised with its performance in this setting.
- I am a little confused about the white patch in Fig. R2 (in the case of IBRNet). Could the authors clarify potential reasons for the same?
- Could the authors also clarify how the replay data is selected in the case of MR1* and are MR2 and MR3 just more images added on top of MR1* (more specifically I would like to clarify if MR2 and 3 are selective?)
- Given the quite reasonable performance of IBRNet does this mean more improved methods like NeuRay, GPNR, GNT [1, 2, 3] could do even better? If it's possible I would highly appreciate one extra comparison given that there were several follow-up works after IBRNet. Nevertheless, I am still happy with the extra IBRNet experiment provided and would like to see it included in the final version of this paper.
1. Neural Rays for Occlusion-aware Image-based Rendering
2. Generalizable Patch-Based Neural Rendering
3. Is Attention All That NeRF Needs?
- A clarification: (On average) What is the number of training images used to train the model, the number of additional images included in every task (is it independent of the operation?), and the number of conflict images removed in the case of replay?
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer fqk8,
Thank you for your insightful feedback and valuable questions. We deeply appreciate your recognition. Please find our responses to specific questions below.
**Q1: Potential reasons for the white patch in Fig. R2.**
Thanks for the observation. We examine the source images used in predicting Figure R2 and discover that the region with the white patch in the target frame doesn't exist in the selected source images. Thus, this issue occurs when IBRNet projects a point from the target view to the source views, causing it to be in an invalid region (e.g., outside the image domain or behind the camera plane). If the point fails to locate a corresponding position in all source views, it is labeled as an invalid pixel, and IBRNet sets its value to a default value (white in this case). In summary, this incorrect projection results in an erroneous final predicted image. This effect further verifies that IBRNet is sensitive to the number of neighboring frames.
**Q2: How the replay data is selected in the case of MR1\*? Are MR2 and MR3 selective?**
Yes. We would like to add more explanations.
To avoid conflicts from previously captured images, we manually remove those images containing altered regions (e.g., the apple) from the old images, referred to as MR1*.
Without additional manual selection, MR2 and MR3 add more images on top of MR1. We evaluate MR2 and MR3 to see if more images improve performance. This is motivated by the unsatisfactory results of our initial MR1 setup; we speculated that with more images, IBRNet could perform better.
**Q3: Could more improved methods do even better?**
We also conduct an experiment using GNT [3], a follow-up to IBRNet, and the results on the Whitehouse dataset are detailed below. Despite exploring various MR methods, the results consistently fall short of ours. The new task's performance may suffer from using only ten images; similarly, the old task also lacks sufficiently dense reference views. Additionally, one of the generalized NeRFs (i.e., IBRNet) stresses in their paper (Section: Sensitivity to source view density) that their approach "degrades reasonably as the input views become sparser." Thanks for your suggestions; we will include these comparisons and analyses in the final paper.
| Algos | Old/New images | Old | New |
|--------|--------|--------|--------|
| GNT + MR1 | 200+ / 10 | 24.45 | 14.07 |
| GNT + MR1* | 200+ / 10 | 24.13 | 13.98 |
| GNT + MR2 | 80 / 80 | 18.61 | 15.17 |
| GNT + MR3 | 200+ / 80 | 26.94 | 15.80 |
| **Ours** | **0 / 10** | **32.33** | **25.29** |
**Q4: What is the number of training images, the number of additional images included in every task, and the number of conflict images removed in the case of replay?**
While the numbers of old and new images are independent of the operation, the count after removing conflict images is operation-dependent. Specifically, the number of additional images included in every task is set to 10.
For example, in Dataset Whitehouse, we train the original NeRF using 282 old images and then train our lightweight expert adaptor using only 10 newly captured images. For the MR1* setting, which is used in DyNeRF, IBRNet, and GNT, conflicts removed during the ADD operation (e.g., the apple) reduce the old images to 225. Besides, the DELETE operation (e.g., the sofa) decreases the number of old images from 282 to 217.

---

Rebuttal 1:
Rebuttal: Dear Reviewers and ACs:
Thank you so much for your time and effort in assessing our paper. We hope our rebuttal has addressed your concerns and are happy to discuss further if any remain. Thank you for helping us improve our paper.
Best regards, Paper 1565 Authors
Pdf: /pdf/e101ea25dad6444c57d33dc6095a71012d8fd6cd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work aims to tackle the challenge of efficiently adapting NeRFs to real-world scene changes in a continual learning setting. To achieve this, it develops two techniques, including an expert adaptor for adapting to new changes and a conflict-aware knowledge distillation scheme for memorizing unchanged parts. Experiments on self-collected datasets validate the superiority of the proposed method over other potential solutions for the continual learning of NeRF.
Strengths: 1. As the first work targeting the continual learning of NeRF, this work could provide insights and references for the community;
2. The proposed method can achieve notable improvements over other potential solutions for the continual learning of NeRF.
Weaknesses: I have the following concerns about this work:
1. The major concern is the limited technical contributions and novelty of the proposed framework. In particular, parameter-efficient tuning methods such as [1][2][3], which adapt parts of the pretrained model or newly added lightweight modules, have been widely adopted in the literature of continual learning and the proposed expert adaptor is an intuitive instantiation of these methods in the context of NeRF. And the conflict-aware knowledge distillation is more like a fine-grained memory replay thanks to the nature of 3D reconstruction tasks.
[1] "Dualprompt: Complementary prompting for rehearsal-free continual learning", Z. Wang et al., ECCV 2022.
[2] "Learning to prompt for continual learning", Z. Wang et al., CVPR 2022.
[3] "UFO: unified feature optimization", T. Xi et al., ECCV 2022.
2. It is not clear whether the proposed method can be generalized to other commonly adopted NeRF representations. For example, for voxel-based NeRFs like DVGO, the MLP part is small while most scene features are encoded in voxel embeddings. The authors are expected to discuss whether the proposed expert adaptor could also work in such scenarios.
3. What are the differences among the old tasks? If they are all unaltered regions, is the only distinction the changed camera poses? In addition, experiments on more diverse scenes, containing diverse objects and environments, are highly desirable.
4. One potential baseline is generalizable NeRF variants (e.g., IBRNet, MVSNeRF, and NeuRay), which can instantly render a new scene given a set of source views. The identified catastrophic forgetting issue can be considerably mitigated thanks to their cross-scene generalization capability even without any fine-tuning. The authors are expected to benchmark with this baseline.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have listed my questions in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This work targets continual learning of NeRFs, which boosts the data efficiency of NeRF training and thus does not suffer obvious negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer QJef,
Thank you for appreciating our approach. We will address your comments below.
**W1: Limited contribution and novelty, compared to [1,2,3].**
1) Existing methods in continual learning primarily deal with image classification tasks that involve adding new classes incrementally. In such tasks, there is usually no clear conflict between the old and new tasks, and therefore, the main focus is on adapting to new tasks and avoiding forgetting. In contrast, our work focuses on studying this problem within the context of NeRF, where a scene's neural radiance field is continuously evolving. In addition to overcoming the forgetting issue and adapting to new content, our task requires the model to effectively handle conflicting supervisory signals that may arise due to the overlap of old and new scene areas.
2) Our approach differs from methods like [1][2]. While these methods incorporate new trainable prompts for adapting to new tasks at the input level, we introduce new architecture designs that expand the network's capability to adapt to scene changes. Additionally, we have dedicated designs in place to handle conflicting supervisory signals. It is worth noting that the design of DyNeRF, to which we compared our approach, is similar to [1][2] in using trainable embeddings known as latent codes to capture time-specific information at the input. Our adaptor design demonstrates superiority over DyNeRF with or without Memory Replay (MR), as evidenced by the results shown in Table 2 and Figure 4 in the main paper, as well as Table 4 in the supplementary file.
3) In comparison to the current MR method, our approach effectively addresses conflicts, crucial to our problem, that are overlooked by existing methods. By mitigating the adverse effects caused by conflicting supervisory signals, our design significantly improves performance. This is evident from the comparison in Table 2, where, after executing the DELETE operation, our method achieves 34.77 on the old task compared to 17.21 in the pure data MR scenario.
In summary, we believe that our proposed framework offers novel contributions to the field of NeRF, effectively addressing the challenges of adapting to scene changes and handling conflicting supervisory signals.
**W2: Generalized to voxel-based NeRF (DVGO).**
In response to the concern about the generalizability of our proposed method to other commonly adopted NeRF representations, we are working on extending our approach to include voxel-based NeRFs like DVGO to demonstrate its adaptability.
In theory, it may also experience the forgetting phenomenon, as it first learns the old tasks and subsequently learns the new tasks under a different camera pose distribution. In such a scenario, our method could potentially help mitigate the negative impact. However, we regretfully cannot present the experimental results at this moment due to the considerable workload and the time constraints of the rebuttal period. Nevertheless, we will promptly update and provide the experimental results during the discussion period.
Besides, further experiments on diverse datasets in Table R1 highlight the general applicability of our proposed algorithm by demonstrating its robust performance in various environments.
**W3: What are the differences among the old tasks? Contain more diverse scenes?**
1) We clarify our approach for evaluating the old task as follows: Initially, we design a camera pose set that ensures comprehensive coverage of the given scene. This set is used to train an original NeRF model that represents the entire scene accurately. Following an operation or a series of operations, we can identify the unchanged areas by sampling camera poses that do not capture the alterations. To illustrate, when executing an ADD operation (such as adding an apple), we eliminate camera poses from the set that capture the apple. Finally, we evaluate the performance of the old task using the remaining poses from before the operation was carried out. If the old-task views cover only unaltered regions, then the old task assesses the rendering quality of the pre-trained NeRF.
2) We conduct experiments on two more challenging scenes (real-world indoor and outdoor scenes) containing various objects and environments to demonstrate the generalization ability of our algorithm. For the indoor scene, we capture images using a camera with LiDAR (specifically, an iPhone 14 Pro) and employ the Record3D App with ARKit to ensure accurate camera poses. On the other hand, the outdoor scene is captured using a DJI drone, focusing on a large-scale subject (the audience seats on a basketball court), with COLMAP utilized to calculate the camera poses. The results of these experiments are presented in Table R1 and Figure R1. We observe that Fine-tuning (FT) exhibits severe forgetting in old tasks, while Memory Replay (MR) underperforms in new tasks, particularly in qualitative results. These findings effectively showcase the robust performance of our methods across varying environments, further demonstrating the versatility and adaptability of our proposed algorithm.
**W4: Compare with generalizable NeRF.**
To address this concern, we conduct comparative analyses with a generalizable NeRF, IBRNet, as detailed in Table R2 and Figure R2. Our tests, conducted with 10 new images, reveal unsatisfactory performance and serious artifacts in IBRNet's renderings (denoted as MR1). Additionally, we test IBRNet with a more selective MR method involving manual removal of conflict data (denoted as MR1*), but the results remain unsatisfactory. This can be attributed to its heavy reliance on neighboring frames. To further evaluate IBRNet, we test it in different settings by varying the number of images (denoted as MR2, MR3). Although increasing the number of images improves IBRNet's performance, it still falls short compared to our model.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank the authors for their efforts in providing the rebuttal. Most of my concerns are properly or partially answered. Although my concerns about the novelty and the generality of the proposed method are still there, given that this is the first work targeting the continuous learning setting of NeRFs, I tend to increase my score to 5 and will discuss it with other reviewers to further adjust my final score.
---
Reply to Comment 1.1.1:
Comment: **W2 (supplement): Generalized to voxel-based NeRF (DVGO).**
As previously noted during our initial response, given the substantial workload and limited time for rebuttals, we are now presenting the experimental results regarding the extension of our approach on DVGO. This may address your valid concerns about the generalizability of our method. We sincerely appreciate your understanding and patience.
DVGO employs a small MLP for RGB prediction and two distinct voxel representations for encoding density and feature information. Consequently, in the FT baseline, both the two voxels and the MLP undergo fine-tuning. In the MR baseline, the same components are fine-tuned, with the integration of old and new data following the setting outlined in our main paper. Regarding our approach, we deploy two forms of lightweight expert adaptors:
**Case 1: Ours w/ MLP-based Expert**: We integrate an MLP-based expert adaptor with the original MLP to capture alterations in the modified region. During training, we only train the expert while maintaining the MLP and the feature voxel in an unaltered state. Since the scene's geometry - determined by the density voxel - changes with modifications to the scene, we also fine-tune the density voxel.
**Case 2: Ours w/ MLP-based + voxel-based Expert**: In addition to the MLP-based expert adaptor, we also introduce a voxel-based expert adaptor associated with the density voxel. The design of the expert is motivated by the insight that scene alterations are often localized, so most information learned from the previous scene is expected to be preserved. Specifically, the output of the voxel expert is added to the output of each original density voxel. Instead of extensively fine-tuning all model parameters, which could induce forgetting in unaltered regions, we focus on training only the voxel-based and the MLP-based expert adaptors, preserving the original density voxel and MLP in an unchanged state.
Furthermore, in both cases, we also utilize a conflict-aware Knowledge Distillation (KD) mechanism to maintain the consistency of density and color information in the unaltered regions. The mask logit is predicted by the MLP-based expert adaptor.
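To make the residual-expert design concrete, here is a minimal numerical sketch (hypothetical grid size, names, and loss; not the actual implementation): a frozen base density voxel learned on the old scene, plus a zero-initialized voxel expert whose output is added to the base output, with only the expert receiving updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base density voxel learned on the old scene (hypothetical 8^3 grid).
base_density = rng.normal(size=(8, 8, 8))

# Zero-initialized voxel-based expert adaptor: the adapted model starts out
# identical to the original one, and unaltered voxels stay unchanged.
expert_density = np.zeros_like(base_density)

def adapted_density(idx):
    # Expert output is added to the output of the original density voxel.
    return base_density[idx] + expert_density[idx]

# Toy adaptation step: only the expert is updated toward a new target density
# at an altered voxel; the base voxel receives no gradient.
idx, target, lr = (1, 2, 3), 5.0, 0.1
for _ in range(200):
    grad = 2.0 * (adapted_density(idx) - target)  # d(squared error)/d(expert)
    expert_density[idx] -= lr * grad
```

Because the base stays frozen and the expert is zero everywhere it was not updated, the old-scene prediction at untouched voxels is preserved by construction, which is the point of training only the adaptors.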
Additionally, we find that the original DVGO has difficulties with novel view synthesis, showing obvious artifacts in the Dataset Whitehouse. This might be due to its dependence on the density of training images. Thus, we conduct experiments on the Dataset Rome, which offers denser training views than the Dataset Whitehouse.
The table below compares the performance of our algorithm with the baselines. The results show that FT is prone to catastrophic forgetting, while MR struggles with new tasks. Pure KD (ours w/o Expert), which fine-tunes all the voxels and the MLP, also underperforms due to conflicting supervision signals originating from the original DVGO and the newly captured images in the modified region. Instead, our approach utilizes conflict-aware KD and an MLP-based expert that also predicts a mask logit identifying whether the input point is altered. These designs effectively mitigate forgetting on the old task and yield strong performance on the new task. The performance remains consistent when both MLP-based and voxel-based experts are employed. These experimental results affirm the generalizability and effectiveness of our approach.
| Algorithms | Old | New |
|--------|--------|--------|
| MR | **27.49** | 24.18 |
| FT | 23.85 | 24.76 |
| Ours w/o Expert | 26.89 | 24.43 |
| Ours w/ MLP-based Expert | 27.10 | 24.99 |
| Ours w/ MLP-based and voxel-based Expert | 27.15 | **25.07** |
Once again, we deeply value your comprehension and hope these experiments will effectively address your concerns. | null | null | null | null | null | null |
Dual control variate for faster black-box variational inference | Reject | Summary: The authors introduce a "dual" control variate for reducing gradient variance in black-box variational inference in the context of models that admit data subsampling (i.e. exhibit the required conditional independence). The dual control variate is joint in that it simultaneously addresses the two sources of Monte Carlo variance in ELBO approximations: that due to latent variable sampling and that due to data subsampling. The basic idea, in essence, is to leverage linear or quadratic ELBO approximations (which admit closed-form evaluation for e.g. Gaussian mean-field variational families) in conjunction with a running average of gradient estimates for each data point using the variational parameters from the last iteration in which each data point was accessed. Experiments demonstrate that the proposed method can substantially reduce gradient variance (in particular that due to data subsampling, which is often dominant), thus yielding better ELBO optimization (both w.r.t. wallclock time and final ELBOs obtained).
Strengths: The main strengths of this submission include the following:
- it addresses a general problem of relatively wide interest in the NeurIPS community (namely, how best to do black-box variational inference)
- it addresses a particular component of that problem that is often somewhat overlooked compared to other aspects (namely how best to do optimization)
- the experiments are pretty convincing in establishing the efficacy of the method
- the suggested method is technically sound and would appear to be pretty simple to implement
- the discussion in Sec 5 and the variance analysis (appendix B) help the reader conceptually place the proposed control variate alongside other alternatives
Weaknesses: The main weaknesses of this submission, as I see them, are the following:
- the notation is a bit confusing in some places
- some of the limitations of the method and/or extensions to more general problem settings are either not discussed or are insufficiently discussed
Let me expand on these points:
While the notation for this kind of paper will necessarily be somewhat clunky given the various sources of sampling variability that have to be carefully tracked, I think some improvements are possible. In particular I find the choice of M for the running gradient particularly suboptimal. Since capital N is a positive integer and little n and little m are used to index integers, one might expect that M is also a positive integer. I suggest that M be replaced by something like G(bold $w$) to avoid this confusion and to emphasize that G depends on the "table" of $w$s.
The authors consider a generic but still somewhat limited problem specification, in particular they do not consider local latent variables, model learning, or amortization. More discussion of these points would probably be of considerable interest to the reader (for more discussion see below). In addition one of the weaknesses of this method is the potentially large memory requirements, which are O(Nd) where d is the latent dimension. This needs to be *very clearly emphasized*.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Do you expect your method would work more or less unchanged for hierarchical models that feature both global and local latent variables? Or would the algorithm need to be adapted to that setting?
- Arguably besides the possibility of data subsampling for appropriate model classes, some of the biggest selling points of variational inference are amortization and model learning (i.e. learning point estimates for model parameters).
- I guess you could support model learning directly without any change to your algorithm, although I suspect the performance of your control variate would tend to decrease since now in addition to "$w$ drift" you'd also have "$\theta$ drift" (where theta is the model parameter). Can you please comment?
- I guess you could also support amortization (e.g. of local latent variables in hierarchical models) without any changes to your algorithm, although since amortization typically involves a neural network with a largeish number of parameters, this would make the memory requirements needed to cheaply compute the running average M intractably large. Can you please comment?
- typo: appproximation
- It would be great to see some ablation studies comparing first order to second order taylor approximations. How important is going to second order in practice?
- What kind of runtime performance would you expect if you used a multivariate Gaussian for the variational distribution? Would the dual estimator still be ~1.5 to ~2.5 times slower than computing control-variate-free gradients? Or would the gap increase?
- It should be possible to do a more fine-grained theoretical analysis for a model that admits more analytic control, e.g. bayesian linear regression. Such an analysis could be particularly valuable in further delineating the regimes in which the dual variate is expected to perform best.
- You state that "the total gradient variance is often dominated by variance from μ" [line 207]. Do you have any intuition for this claim?
- Comment: For a large dataset initializing the running mean M at the beginning seems like it could be a waste of compute time. It might make more sense to not use the control variate during the first epoch and use the first epoch to collect M.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: As discussed in the weaknesses section, I believe some of the limitations of the method (e.g. w.r.t. memory requirements and likely trouble with amortization) should be discussed and/or better emphasized.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and suggestions.
- Usage of "M": This notation comes from the SAGA paper, but we agree it likely causes confusion here. We plan to replace it with $g(w)$ or $\bar{g}$ in later revisions.
- Local latent variables: We agree that this is a limitation. In our PPCA experiments that involve local latent variables, we tackle the local latent $z_i$ by analytically marginalizing it out. However, subsampling for local latent variables is generally tricky and would require additional techniques such as amortization (see citations (1,2) below). We suspect this might be addressed by combining this dual control variate with that kind of amortization strategy. We will add a discussion on this point to the discussion section.
- Limitation on model learning: Our method could be applied with minimal changes in a model learning setting where one wishes to optimize the parameters of p at the same time as learning q. Roughly speaking, let p have parameters θ, and store and update those similarly to how w is updated now. However, it is possible that the method would tend to be less effective in that setting, since changes in p will mean that the stored parameters will tend to get "out of date" somewhat faster. So we don't intend to make any strong claims about practical performance.
- Memory usage: We agree that SAGA takes O(Nd) memory and will emphasize this in later revisions. However, note that we can also use an SVRG-like strategy, which does not require additional memory. We provide pseudo-code for an SVRG version of the dual control variate (Algorithm 2) below, along with experimental results with it on Australian (Figure 2) in the rebuttal PDF. The SVRG version shows performance comparable to SAGA, though it has an extra hyperparameter, the outer-loop size, and requires additional gradient evaluations at each iteration.
- First-order v.s. Second-order Taylor approximation: If we use a first-order Taylor as our approximation, the gradient with respect to $\mu$ would be constant, leading to a *constant* control variate of 0, which is identical to not using CV at all.
- Runtime performance for multivariate Gaussian: We expect the time complexity to remain of the same order, so the dual estimator should stay within a similar slowdown factor relative to control-variate-free gradients.
- Gradient noise dominated by $\mu$: We do not have a fully satisfying intuition, but we do confirm previous experimental observations. We attribute this to a combination of factors: with mean-field families there are fewer scale parameters, and they tend to be smaller in magnitude.
- Initialize using first epoch: Thanks for pointing this out. We already use this regime: we run the naive estimator for the first epoch and use that epoch to initialize M. We apologize for omitting this detail and will add it to the algorithm description in later revisions.
(1) Abhinav Agrawal and Justin Domke. "Amortized variational inference for simple hierarchical models." Advances in Neural Information Processing Systems 34 (2021): 21388-21399.
(2) Charles Margossian and David Blei. "Amortized Variational Inference: When and Why?" arXiv:2307.11018
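To spell out the first-order Taylor point above, here is a short sketch (the symbols $z_0$, $g_0$ are illustrative, not necessarily the paper's exact notation): with the reparameterization $z = \mu + \sigma \odot \epsilon$, a first-order approximation around a fixed point $z_0$ has a $\mu$-gradient that is constant in $\epsilon$, so the resulting control variate vanishes identically:

```latex
\tilde{f}(w;\epsilon) = f(z_0) + g_0^{\top}\big(\mu + \sigma \odot \epsilon - z_0\big)
\;\Longrightarrow\;
\nabla_{\mu}\tilde{f} = g_0,
\qquad
c_{\mu} = \mathbb{E}_{\epsilon}\!\left[\nabla_{\mu}\tilde{f}\right] - \nabla_{\mu}\tilde{f}
        = g_0 - g_0 = 0 .
```

Hence at least a quadratic term in the approximation is needed for the control variate to act on the mean parameters.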
--------------
```python
Algorithm 2: Black-box variational inference with the dual control variate (SVRG version)
Require: Learning rate λ, variational family qw(z), target p(z, x), update frequency m
Require: Estimator f(w; n, epsilon) whose expectation over n and epsilon is the negative ELBO from qw and p
Require: Approximate estimator tilde_f(w; n, epsilon) that has an expectation over epsilon in closed form
Initialize the parameter tilde_w
for s = 1, 2, . . . do
Let tilde_w ← tilde_w_{s−1} and w_0 ← tilde_w
Compute the full gradient of tilde_f at tilde_w: M = E_m E_eta grad(tilde_f(tilde_w; m, eta))
for k = 1, 2, · · · , m do
Sample n and epsilon
        Compute the base gradient g ← grad(f(w_{k−1}; n, epsilon))
        Compute the control variate c ← M − grad(tilde_f(tilde_w; n, epsilon))
        Update the parameter w_k ← w_{k−1} − λ (g + c)  # or use g + c in any stochastic optimization algorithm
end for
Update tilde_w_s ← w_m
end for
```
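As a sanity check on the subsampling part of the correction, the following toy sketch (synthetic quantities; not the paper's estimator or code) verifies numerically that adding c = M − grad(tilde_f(n)) leaves the gradient unbiased over data subsampling while reducing its variance whenever the approximation tracks the true per-datapoint gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Synthetic per-datapoint gradients g_n and their closed-form approximations
# t_n; the approximation is good, so t_n is highly correlated with g_n.
g = rng.normal(size=N)
t = g + 0.1 * rng.normal(size=N)

# M: the full average of the approximate gradients over the dataset.
M = t.mean()

# Control-variate-corrected per-datapoint estimates: g_n + (M - t_n).
corrected = g + (M - t)
```

Averaging `corrected` over all n recovers the full-batch average of `g` exactly, while its variance drops to that of the approximation error `g - t`.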
---
Rebuttal Comment 1.1:
Comment: Thanks for delineating what a SVRG version would look like.
Please do not forgot to discuss memory usage in the revised paper.
While the suggested algorithm is not shockingly original, I believe it makes a strong contribution to the literature, would be of interest to many in the NeurIPS community, and could be useful in practice. As such I continue to recommend publication.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response! We agree that memory is a crucial issue and will most certainly make the O(ND) requirements for the SAGA version unambiguously clear in the later revision. | Summary: This paper proposes a new control variate for black-box variational inference. In particular, the proposed "dual" control variate attempts to reduce the subsampling noise and Monte Carlo noise at the same time. For this, the paper utilizes an incremental gradient-like scheme. The performance of the new control variate is empirically verified on Bayesian inference tasks with large datasets.
Strengths: * While control variates have been an active area of research for BBVI, reducing the subsampling noise has certainly been a problem that hasn't been addressed. In fact, the paper shows that conventional control variate solutions do not solve this problem *at all*, despite the fact that subsampling is a major source of variance.
* The paper motivates the latter point by empirically computing the subsampling noise. Overall, the paper conveys the motivations for the proposed control variate very clearly.
* The proposed control variate based on incremental gradients seems fairly simple to implement, but not trivial. Thus, the proposed strategy has clear technical contribution.
* Empirical evaluations are thorough and adequate to show the superiority of the proposed control variate.
Weaknesses: * The paper builds on top of incremental gradient methods, which, as is typical of the optimization community, come with a heap of theoretical tools for rigorous analysis. Given this, it would have been amazing if the paper also provided some rigorous analysis of the proposed strategy.
* There seems to be some room for improvement in terms of paper organization and notation. The notation can be quite confusing at times, for example, lots of things happen behind innocent-looking superscripts. More on this below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: ### Major Comments
* What was the motivation behind calling the control variate "dual"? At least to people who have some experience in optimization, I believe the word dual would automatically trigger the notion of Lagrangian duality. How about "doubly" in accordance with the term "doubly stochastic?"
* The section organization seems to have room for improvement. For example, a general explanation of "general" control variates appears way back in Section 5.1. Considering readers that are not familiar with control variates, it would be better to start with a high-level explanation of how a control variate is supposed to work. A good place would be after Section 3, where the variance of BBVI is motivated to be a problem, and before Section 4 where a control variate is first introduced.
* Similarly, I felt that the whole discussion in Sections 5.1 and 5.2 would have been a very good way to inspire the readers before actually unveiling the dual control variate. While reading Sections 5.1 and 5.2 I had to constantly go back to Section 4 to remind myself how the dual control variate attempts to solve the problem.
* Algorithm 1 is not very helpful for precisely understanding the implementation. For example, in "Compute the control variate," what does "using ... $=M$" mean? Does this mean we simply plug $M$ in place of $ E_m E_{\eta} \nabla \tilde{f} \left(w^m; m, \eta\right) $ ? Then why denote $ E_m E_{\eta} \nabla \tilde{f} \left(w^m; m, \eta\right) $ ? I think here the authors need to clearly distinguish between values and computed quantities. Similarly, the notation for the running mean $M$ is too uninformative. I think something like $\tilde{g}$ or $\hat{g}$ would be more appropriate. This is also useful to understand the dimensionality/domain of each variable.
* Line 207: Doesn't this contradict the conclusions of [2,3]? From my understanding of [2], the $\mathcal{O}\left(d\right)$ dimensional dependence comes from the gradients with respect to the scale.
* Figure 4 Tennis: Do you have any insight as to why the ELBO for the dual control variate peaks and then decreases? This seems quite weird given that the control variate is unbiased.
* Caption of Figure 4: This is perhaps a minor point, but could you elaborate on why the gradient noise on Tennis is correlated? Is this because reshuffling was used here instead of iid subsampling?
* Section 6: Please include a discussion of other non-dual control variates, since this work, in the broader scheme of things, is a control variate paper. Also, why not include Section 5.3 in Section 6?
### Minor Comments
* Eq. (1): I was a little bit confused at first because I am so used to seeing the likelihood adjustment ratio as N/B, where B is the batch size. I think it would be useful to either use N/B throughout so that the notion that we're dealing with batches is clearer, or to mention in the text that the batch size is assumed to be 1.
* Line 59: The classic paper that first proposed doubly stochastic BBVI was [1]. I recommend adding it here.
* Table 1: How about using a bar plot instead of a table? I think it would be more appropriate since the relative magnitude is key here.
* Line 118: I think it would be better to explicitly show that the performance of a control variate is entirely driven by the covariance between the estimate and the control variate.
### References
* [1] Titsias, Michalis, and Miguel Lázaro-Gredilla. "Doubly stochastic variational Bayes for non-conjugate inference." In International conference on machine learning, pp. 1971-1979. PMLR, 2014.
* [2] Domke, Justin. "Provable gradient variance guarantees for black-box variational inference." Advances in Neural Information Processing Systems 32 (2019).
* [3] Bhatia, Kush, Nikki Lijing Kuang, Yi-An Ma, and Yixin Wang. "Statistical and computational trade-offs in variational inference: A case study in inferential model selection." arXiv preprint arXiv:2207.11208 (2022).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and detailed suggestions. Please see below for our response.
- Naming: We agree that the connotation of Lagrangian duality is an unfortunate clash of terminology and are certainly open to changing the name. Our reason for not using "doubly control variate" is that it is very similar to the name "double control variate" which is already used in a different context of discrete latent space models (citation (1) below). We would appreciate further advice on this point.
- Paper organization: Thanks for your suggestion. We agree that introducing details behind control variates earlier in Section 3 may improve clarity. We also believe that moving Sections 5.1 and 5.2 to an earlier part of the paper could help readers better understand the context. We plan to reorganize the manuscript in accordance with this suggestion.
- Notation in Algorithm 1: We agree that M may be unclear, and does not emphasize that M "lives in the same space" as the gradient estimates. Our choice of M followed the original SAGA paper, but we accept that a change would be better here. We plan to change the notation to $g(\mathbf{w})$ or $\bar{g}$ following reviewer NBCj's suggestion.
- Gradient variance from scale: It does not contradict [2, 3]. Those papers refer to full-rank Gaussians, whereas the statement in line 207 refers to the mean-field case as is discussed in "Approximation Based Variance Reduction for Reparameterization Gradients".
- ELBO peaks and decreases in Fig. 4, Tennis, left panel. We believe this happens because the step size 0.01 is too large. This behavior disappears in the second plot when we choose the best step size retrospectively. We also suspect that this may be caused by the use of Adam as the optimizer—as gradient variance decreases over time (especially with our control variate), the effective step size increases, meaning the size of the random fluctuations increases even with a fixed nominal step size.
- Correlated gradient noise in Tennis: We also found this behavior very strange at first. Yes, this is due to using dataset reshuffling instead of i.i.d sampling for implementing mini-batching. Essentially what happens is that midway through the epoch only a random subset of matches for each player has been sampled, which pushes the iterates a bit away from the solution, but at the end of each epoch, all matches for each player have been sampled and so that noise gets canceled out. (The cyclic behavior is exactly one epoch long.) Incidentally, note that this periodic behavior is almost completely eliminated by the dual control variate since it controls for the noise introduced by this kind of randomness in the minibatch selection. (It is somewhat reduced by S-miso as well.)
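The epoch-length cancellation described here can be reproduced in a small self-contained sketch (toy residuals, not the actual tennis model): under per-epoch reshuffling, the per-example gradient deviations sum to exactly zero at every epoch boundary, so the accumulated noise is periodic with period exactly one epoch.

```python
import random

random.seed(0)

# Toy "per-example gradient deviations": residuals around the full-batch
# gradient. By construction they sum to zero over the whole dataset.
resid = [i - 4.5 for i in range(10)]  # sums to 0.0

def drift_reshuffle(epochs):
    """Cumulative deviation when sampling by reshuffling each epoch."""
    drift, trace = 0.0, []
    for _ in range(epochs):
        order = resid[:]
        random.shuffle(order)       # without-replacement pass over the data
        for r in order:
            drift += r
            trace.append(drift)
    return trace

trace = drift_reshuffle(3)

# Mid-epoch the iterates are pushed away from the solution...
assert any(abs(d) > 0.4 for d in trace)
# ...but at every epoch boundary the deviations cancel exactly,
# producing noise with a period of exactly one epoch.
for epoch_end in (9, 19, 29):
    assert abs(trace[epoch_end]) < 1e-9
```

With i.i.d. sampling instead of reshuffling, no such exact cancellation occurs, which matches the explanation above.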
- Including more control variate references in Section 6: We will add more references in the final manuscript as long as we have enough space after revisions (if not we could provide a more exhaustive related work section in the appendix).
(1) Titsias, Michalis, and Jiaxin Shi. "Double control variates for gradient estimation in discrete latent variable models." In International Conference on Artificial Intelligence and Statistics, pp. 6134-6151. PMLR, 2022.
---
Rebuttal Comment 1.1:
Title: Further Response
Comment: Thank you for the clarifications. I am happy with the updated pseudocode addressed to Reviewer NBCj. Reorganizing the sections would make the paper much easier to read and free up some space (I see some redundancy from introducing/explaining the concept of control variates multiple times.) Nevertheless, I am now happy to increase my score.
As per the naming issue, I have looked up some synonyms for "double" or "two" and found words like "duo" or "bi-". Or, given that the paper addresses all of the stochasticity present in BBVI, maybe something like "total" or "complete" could also be considered. Also, given that Titsias chose "double," I think my original suggestion of "doubly" is also fine as it is not exactly the same. Hope this is helpful! | Summary: The paper presents a method for variance reduction in stochastic gradient estimation in doubly stochastic variational inference
where there exist two sources of variance: (i) Monte Carlo noise when sampling from the variational distribution and (ii) gradient
variance due to the minibatch sampling. The authors consider reparametrizable Gaussian variational inference and introduce a control variate that tries to reduce simultaneously both sources of variance. This control variate combines previous ideas, such as a second order Taylor expansion of the function as in [19], and it seems to reduce the variance in the presented experiments.
Strengths: The paper is very clearly written and all derivations appear to be correct. It also contains intuitive discussions of why the proposed "coupled" control variate can be useful, as opposed to other control variates that deal separately with each source of variance.
Certainly variance reduction is a very important topic for stochastic gradient estimation in variational inference, and the paper proposes a potentially useful method.
The experiments provide many details including running times.
Weaknesses: I was not so impressed by the experimental results, for two reasons. Firstly, the models used are quite small and it would be useful to include, e.g., a big neural network. Secondly, I am not sure if the comparison is done in a fair way for methods such as the "naive" method. This is because the "dual" method is more expensive and requires more gradient evaluations, such as the ones for the numerical approximation of the Hessian-vector products. Given that computations are dominated by the number of gradient evaluations, a fair comparison should try to match this number across different estimators. For example, for the "naive" estimator one could increase the minibatch size so that the number of gradient evaluations matches that of the "dual" method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why does the control variate in equation (16) control only the variance for $\mu$? Is this discussed in [9]?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are explained above regarding the current experimental comparison.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your careful review and suggestions.
- Choice of models: We would like to clarify that the models we experiment with have rather high latent dimensionality. The models presented in Fig. 4 have 7840, 12544, and 5525 latent variables, respectively. Certainly, larger models exist, but these represent a real challenge for existing algorithms as evidenced by the performance of the baselines.
- Fairness of comparison: Thanks for pointing this out. Following this suggestion, we conducted new experiments on MNIST—the results are shown in the PDF included as part of the general rebuttal. In these experiments, we use a larger batch size for naive and cv (300 and 200 respectively) to ensure all estimators have the same complexity per iteration. The dual estimator still shows better convergence than baseline estimators in this setup. We will include these results in the revised manuscript. It is worth pointing out that a primary goal of our original experiments was to illuminate the theory, making clear what the benefit of controlling variance is with the same (n,ε) values. But certainly, we agree that improvement in wall-clock speed is the ultimate goal. We partially addressed this by including the wall-clock time for each estimator in Table 2 and showing results in terms of wall-clock time in Figure 7 in the appendix (original submission). However, we agree that it is also useful to compare in terms of different minibatch sizes that hold the number of evaluations of ∇f per iteration constant, and we will include these results in the manuscript as well.
- Controlling variance for $\mu$: Yes, this is discussed in [9]. Briefly, $\mu$ accounts for most of the gradient variance in mean-field VI, as is illustrated by the first row in Figure 1 in [9]. We empirically confirm this, and our experiments show that controlling the variance on $\mu$ is sufficient to significantly improve convergence speed.
---
Rebuttal Comment 1.1:
Title: Thank you for your reply
Comment: Thanks for adding the comparison by matching the number of gradient evaluations. This makes the experimental evaluation more complete. | Summary: Existing stochastic methods for black-box variational inference only attempt to reduce the noise either due to data subsampling or Monte-Carlo sampling of the expectation. This paper proposed a new "dual control variate", which addresses both types of noise at the same time. In experiments, the proposed control variate is shown to perform favorably to existing baselines.
Strengths: 1) The paper disentangles the effects of noise through the data and noise from Monte-Carlo estimation of the expectation in the design of the new dual control variate, which addresses both at the same time.
2) The proposed control variate is shown to greatly improve performance on the considered examples, at seemingly minimal overhead. If the x-axis in the plots were wall-clock time, this would be clearer to see.
Weaknesses: The main weakness of the paper is in my opinion that the experimental evaluation is a bit lacking:
a) larger experiments would be desirable, see questions;
b) the plots were hard to read (difficult to distinguish the red and green curves), and could be plotted with respect to wall-clock time rather than iterations.
The algorithm looks promising, and I like that it can be used in a "black-box" fashion with any optimizer (e.g. Adam); but I believe it is not ready yet in its current state and could require a more thorough experimental evaluation to warrant a clear acceptance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Would the proposed dual control variate approach also work for VI in neural networks? Experiments on modern neural networks (e.g. ResNets, transformers) could be a way to convincingly demonstrate the utility of the proposed method.
How much additional overhead (memory & runtime) does it have when compared to, say, Bayes-by-Backprop when actually implemented in practice?
From Table 2 it seems that one would require three backpropagation passes rather than a single one for the naive method -- it would be interesting to see whether this additional effort has a real practical benefit (e.g. higher accuracy, faster convergence wrt wall clock time).
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: All limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and suggestions.
- Experiments and models used: Our experiments focus on Bayesian inference in mechanistic models, a well-recognized area. Many of the models we consider are high-dimensional, with latent dimensions ranging from 5000 to 12000 for the larger models (lines 211 to 213), and represent complex problems for approximate inference algorithms. Regarding Bayes-by-backprop (BBB), it is essentially mean-field Gaussian VI applied to Bayesian neural networks using reparameterization gradients (our naive estimator).
- Runtime: We plot convergence with respect to iterations for the sake of reproducibility, because wall-clock measurements are sensitive to implementation. However, we provide the exact time per iteration for all methods in Table 2, which allows comparison. We also provide results directly showing convergence in terms of wall-clock time in Figure 7 (appendix).
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their careful reviews.
In the PDF file, we provided the following additional content:
- Reviewer UG9s raised a concern regarding the fairness of the experiments; in particular, since dual requires extra gradient calls at each iteration, the baseline estimators should use a correspondingly larger batch size to ensure that different estimators have the same gradient-call budget per iteration. We have therefore included new experiments on MNIST (Figure 1) where we used different batch sizes for naive, cv, and dual: dual is 3 times more expensive than naive and 1.5 times more expensive than cv (in terms of gradient calls), so we use batch sizes of 300, 150, and 100 for naive, cv, and dual respectively such that they have the same cost per iteration. Under this condition, we still observe that dual performs better than the baseline methods.
- Reviewer NBCj raises concerns about the O(ND) memory cost of dual caused by the underlying SAGA; we agree that this is a potential limitation. In the PDF file, we present results (Figure 2) for the SVRG version of the dual control variate on Australian, and we observe performance close to the SAGA version. The SVRG version bypasses the additional memory cost by introducing an outer loop that computes the full gradient of the control variate every $m$ iterations and uses it as the expectation with respect to $n$. This variant of dual does not require additional memory but costs extra gradient evaluations and introduces an extra hyperparameter. In our experiments, we compute the full gradient every 5 epochs, equivalent to 0.2 additional gradient calls per iteration.
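The SVRG-style construction described above can be sketched on a toy problem (hypothetical least-squares loss and helper names `grad_i`/`full_grad`, not the paper's model): instead of storing one correction per data point as SAGA does, an anchor point's full gradient is recomputed periodically, and the estimator stays unbiased for the full gradient.

```python
import random

random.seed(1)
N = 50
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def grad_i(w, i):
    """Per-example gradient of the toy loss 0.5*(w*x_i - y_i)^2."""
    x, y = data[i]
    return (w * x - y) * x

def full_grad(w):
    return sum(grad_i(w, i) for i in range(N)) / N

# SVRG-style estimator: the only stored state is the scalar anchor and its
# full gradient (refreshed every m iterations), independent of N, unlike
# SAGA's O(N) table of per-example gradients.
w, w_anchor = 0.2, 0.7
g_anchor = full_grad(w_anchor)

def svrg_estimate(w, i):
    return grad_i(w, i) - grad_i(w_anchor, i) + g_anchor

# Unbiasedness: averaging over all indices recovers the full gradient.
avg = sum(svrg_estimate(w, i) for i in range(N)) / N
assert abs(avg - full_grad(w)) < 1e-9
```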
Pdf: /pdf/0acacf8d69c5f6ceea12515da9dae541ad100e81.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper addresses the drawback of the black-box variational inference framework for Bayesian posterior inference by proposing dual control variate that is capable of jointly reducing the variances from both data subsampling and Monte Carlo sampling. The experimental evaluations on various datasets demonstrates reduced variance and improved optimization.
Strengths: The paper is fairly written well. The background of the black-box variational inference (BBVI) is nicely described. The doubly-stochastic optimization problem in BBVI's gradient estimation is clearly explained involving two sources of randomness - Monte Carlo sampling from the variational posterior and data subsampling from the full dataset.
The proposed dual control variate that jointly controls Monte Carlo and subsampling noise in BBVI to create approximations of the target for each datum where the Monte Carlo noise can be integrated exactly, addressing both forms of noise and interactions between them.
Experimental evaluation and visualization of dual estimator.
Weaknesses: - Although the idea is novel, its usefulness/impact could have been explained better.
- The approximation function for the gradient estimators g_cv and g_dual could have been clarified.
- It is unclear how the noise in Monte Carlo sampling influences the noise in data sampling. Is there a way to measure it?
- What is the role of $\beta$ in equations (14-15)? Is it experimentally evaluated?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I am not fully familiar with this subject. However, I did get an idea of the problem and how the proposed idea could address it. Therefore, I will take the reviews of the other reviewers into consideration.
The main problem is the writing/explanation of the technical terms, which makes it hard for readers to understand the problem. The paper started well, but somehow the connection is lost between the experimental evaluation and a demonstration of the impact of the dual control variate in real-world applications.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is no issue with potential negative societal impact. However, the article does not address limitations (if any).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your careful review and suggestions.
- Presentation/clarity: We agree that the presentation could be improved. We will aim to better clarify technical terms and impact, as well as incorporate suggestions from reviewers SSK5 and NBCj.
- Approximation function: We appreciate that the current way the paper talks about the approximation function $\tilde{f}$ is not ideal. Looking at the paper closely, we see that we currently only very briefly mention in Section 4 that a Taylor expansion is a common choice, but we do not provide any details until Section 7. We agree it would be better to be more concrete earlier and will work to improve the manuscript in this respect.
- Monte Carlo and subsampling noise: When estimating the gradient, the two random variables $n$ and $ε$ are sampled independently. However, the amount of Monte Carlo noise will typically differ by minibatch. We attempt to measure the interaction in Figure 1 and Table 1, where ∇f(w;n) represents the gradient estimator with the Monte Carlo variable ε integrated out and ∇f(w;ε) represents the gradient estimator with the data subsampling variable n integrated out. We see that results vary from problem to problem (and by iteration inside each problem). However, at a high level, we see that interactions do exist, in the sense that the variances add "superlinearly". More precisely, the variance with both sources of noise ($V_{ε, n} ∇f(w;n,ε)$) is larger than the sum of the two independent sources of noise ($V_ε ∇f(w;ε) + V_n ∇f(w;n)$).
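The "superlinear" addition of variances noted here can be checked exactly on a toy gradient model (hypothetical numbers, not the paper's estimators): take ∇f(w;n,ε) = μ_n + σ_n·ε with ε ~ N(0,1). By the law of total variance the total is Var(μ_n) + E[σ_n²], which exceeds Var(μ_n) + (E[σ_n])² whenever σ_n varies with n, by Jensen's inequality.

```python
# Two equally likely minibatches n in {0, 1}, each with its own mean and
# Monte Carlo scale; epsilon ~ N(0, 1) independent of n.
mu = [0.0, 2.0]
sigma = [1.0, 3.0]

mean = lambda xs: sum(xs) / len(xs)

# V_n grad(w; n): variance after integrating out epsilon -> Var(mu_n)
var_n = mean([m ** 2 for m in mu]) - mean(mu) ** 2      # = 1.0

# V_eps grad(w; eps): variance after integrating out n -> (E sigma_n)^2
var_eps = mean(sigma) ** 2                              # = 4.0

# V_{eps,n} grad(w; n, eps): law of total variance -> Var(mu_n) + E[sigma_n^2]
var_total = var_n + mean([s ** 2 for s in sigma])       # = 1.0 + 5.0

assert var_total == 6.0 and var_n + var_eps == 5.0
assert var_total > var_n + var_eps   # variances add superlinearly
```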
- What is beta: It is the mixture weight of $c_\text{cv}$ and $c_\text{inc}$ for the ensembles approach from Section 5.3. We do not experimentally evaluate this approach because computing $c_\text{inc}$ is intractable in general (see line 158-164 for more details). We will emphasize this point in Sec 5.3 in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thanks for addressing my concerns.
Don't forget to improve the content organization and presentation clarity in the revised paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response! We will absolutely make these changes to the organization and presentation clarity—we agree that these would increase the impact of the paper. | null | null | null | null | null | null |
Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors | Accept (poster) | Summary:
This paper tackles the problem of 3D scene reconstruction from posed RGB-D images by learning an implicit occupancy function.
Unlike previous methods [58, 3, 65, Ref_DS] that directly use the depth values of individual rays (i.e. pixels), obtained from depth images, SfM pipelines, or depth prediction networks, as a supervisory signal (leading to better scene geometry, faster convergence [Ref_DS, Ref_IN], and faster inference [Ref_IN, 58]), this method first fuses the depth frames into a TSDF representation using an off-the-shelf method and then queries the TSDF to obtain the occupancy value corresponding to a particular point on a sampled ray. In other words, the off-the-shelf TSDF of the scene is used as a scene prior.
The motivation for relying on TSDF representation of the scene is that the depth images obtained from RGBD sensors suffer from missing depth values (or holes) which makes it challenging to supervise 3D point samples corresponding to those rays.
The primary contribution of this paper is to learn an attention function that, for a 3d point sample, estimates the weights for the linear combination of occupancy values which are predicted from the neural network and occupancy values obtained from the TSDF scene prior (as in, final_occupancy = w1*predicted_occupancy + w2*TSDF_obtained_occupancy_prior). In this sense, the TSDF prior becomes an attentive depth fusion prior.
The network is supervised using volumetric rendering of depth and colour as proposed in [31].
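The rendering step summarized above can be sketched with a minimal occupancy-based formulation along one ray (a common choice in [31]-style methods; the sample values here are made up, and the paper's exact weighting may differ): each sample's weight is its occupancy times the probability that the ray survived all earlier samples.

```python
# Minimal sketch of occupancy-based volume rendering along one ray.
def render_ray(occ, depth, color):
    """Render depth and color from per-sample occupancies o_i in [0, 1].

    Weight of sample i: w_i = o_i * prod_{j<i} (1 - o_j), i.e. the
    probability that the ray terminates exactly at sample i.
    """
    d_hat, c_hat, transmittance = 0.0, 0.0, 1.0
    for o, d, c in zip(occ, depth, color):
        w = transmittance * o
        d_hat += w * d
        c_hat += w * c
        transmittance *= 1.0 - o
    return d_hat, c_hat

# A ray hitting a fully opaque surface at its second sample returns exactly
# that sample's depth and color; everything behind it gets zero weight.
d_hat, c_hat = render_ray([0.0, 1.0, 0.7], [0.5, 1.0, 1.5], [0.2, 0.9, 0.1])
assert (d_hat, c_hat) == (1.0, 0.9)
```

Supervision then compares `d_hat` and `c_hat` against the observed depth and color per ray.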
Just like previous methods (e.g. [48, 58]), this method too relies on a hybrid representation: three discrete feature grids are learned, two for low and high-resolution representation of geometry and one for colour, and the tri-linearly interpolated feature value for a corresponding 3D point is decoded by corresponding MLP decoders to obtain occupancy and colour for that 3D point.
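The grid lookup mentioned above reduces to trilinear interpolation. A generic sketch on a scalar grid (illustrative only, not the paper's implementation; real methods interpolate feature vectors and decode them with an MLP) exploits the fact that trilinear interpolation reproduces any linear field exactly:

```python
def trilerp(grid, x, y, z):
    """Trilinearly interpolate grid[i][j][k] at continuous coords (x, y, z)."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    val = 0.0
    # Blend the 8 corners of the enclosing cell with product weights.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx)
                     * (fy if dj else 1 - fy)
                     * (fz if dk else 1 - fz))
                val += w * grid[i + di][j + dj][k + dk]
    return val

# A grid storing the linear field f(x, y, z) = x + 2y + 3z is reproduced
# exactly at any in-cell query point.
n = 3
grid = [[[i + 2 * j + 3 * k for k in range(n)] for j in range(n)]
        for i in range(n)]
v = trilerp(grid, 0.25, 1.5, 0.75)
assert abs(v - (0.25 + 2 * 1.5 + 3 * 0.75)) < 1e-9
```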
The truncated nature of the TSDF (its values lie in (-1, 1)) is also exploited to decide when to use low-resolution grid features alone to obtain occupancy, and when to rely on both low- and high-resolution grid features to predict occupancy and fuse it (using learned weights) with the TSDF-informed occupancy prior. Points whose TSDF values lie strictly within the truncation range are near surfaces, hence higher-resolution feature grids are necessary there, and in addition, the TSDF will explain such points the best.
The method is also applicable for SLAM settings where RGB-D frames are obtained incrementally in which the camera extrinsics, i.e. poses, are also optimized with the network parameters.
[Ref_IN] Kim, Mijeong, Seonguk Seo, and Bohyung Han. "Infonerf: Ray entropy minimization for few-shot neural volume rendering." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[Ref_DS] Deng, Kangle, et al. "Depth-supervised nerf: Fewer views and faster training for free." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Strengths: 1. The paper, barring a few places, is well written.
2. This paper has identified a limitation in supervising neural networks that learn implicit functions for dense 3D scene reconstruction using depth images: it is difficult to supervise predictions for points whose depth values are unavailable because they belong to holes in the depth images.
3. Authors have put in a reasonable effort to analyse the proposed method with good qualitative and quantitative analysis.
Weaknesses: 1. The paper lacks any substantial contribution
Explanation:
a. It relies on hybrid representation for learning implicit functions for dense 3D reconstruction which has been very well explored before in the same context ([48], [58], [19])
b. The use of pre-trained MLP decoders for predicting occupancy from trilinearly interpolated feature vectors from multi-resolution grids has been inspired by NICE-SLAM[65].
c. The TSDF for the scene is obtained by fusing the posed depth images using the off-the-shelf method.
d. The only contribution of this paper is towards learning an attention function that predicts the weights which are used to linearly fuse the predicted occupancies and occupancy obtained from a TSDF prior.
2. Given the way the attention function is learned, which is the primary contribution of this work, it almost looks unnecessary:
Explanation: As explained in the method section and also shown schematically in the pipeline figure (Fig. 1), the attention function takes the sum of the predicted low- and high-resolution grid occupancies, together with the occupancy obtained from the TSDF prior (normalized between 0 and 1), to predict the weights for the linear combination of the two (eq. 2, eq. 3, Fig. 1).
This means the attention function only chooses to learn the weights (alpha and beta, eq. 2, 3) by looking at only two floating point numbers which are between 0 and 1 based on errors in RGB and depth renderings. The RGB and Depth renderings are obtained from the final linear combination of the two occupancies (eq. 2, 3) and their quality would be inversely proportional to the rendering errors. I don't see what else the attention function can learn to pay attention to other than which one of the inputs is larger.
In which case, how would it be any different from simply using the softmax operator to combine the two occupancies instead of learning two weights?
Had the attention function taken features from the grids to reason about how well the network can represent the scene, it could make a better decision as to which is more important, the predicted occupancy or the TSDF prior occupancy. Accordingly, it would predict weights for the fusion of the two.
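To make the question concrete, consider a toy sketch (illustrative weights only, not the paper's trained network): a plain softmax over the two raw occupancies always upweights the larger input, whereas even a tiny learned map applied before the softmax can be made to prefer the smaller one, so the two are not equivalent in principle.

```python
import math

def softmax(a, b):
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb), eb / (ea + eb)

o_pred, o_tsdf = 0.3, 0.8   # inferred vs TSDF-interpolated occupancy (made up)

# Plain softmax over the raw inputs: the larger occupancy always wins.
w1, w2 = softmax(o_pred, o_tsdf)
assert w2 > w1

# A tiny "attention" map with hypothetical learned weights: a negative
# linear layer before the softmax flips the preference, so the fused
# weights need not follow the ordering of the inputs.
a1, a2 = softmax(-2.0 * o_pred, -2.0 * o_tsdf)
assert a1 > a2

# Fused occupancy as in a linear combination with normalized weights.
fused = a1 * o_pred + a2 * o_tsdf
assert abs(a1 + a2 - 1.0) < 1e-9
```

Whether the learned map actually uses this extra freedom, rather than collapsing to "pick the larger input", is exactly what the question asks the authors to demonstrate.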
3. The method is affected by the very problem it is trying to solve:
Explanation: The paper claims that missing depth values are a problem when using them raw for learning implicit functions for scene reconstruction, and that the fused TSDF prior would help mitigate this. However, from Figure 2 it is clear that the TSDF prior can also have holes, where the network has to rely on its own learned occupancy. This shows that the idea of using the TSDF is itself affected by the very problem it is trying to mitigate.
4. The paper also proposes that the missing depth due to occlusion leads to difficulty while supervising using depth images. However, when posed RGB-D frames of the scene are available, the points that are occluded in one frame would be visible in some other frames anyway. Consider, for example, the classic NeRF [27] paper or [Ref_DS], the novel view renderings are not affected by occlusions in different frames as the occluded ones are anyway visible in other views.
5. The proposed method is very similar to [19] in the sense that both rely on a fused TSDF of the scene, so it would be interesting to see how the proposed method fares against [19]. The qualitative results of [19] look quite similar to the ones shown in this paper, which motivates me to also ask for a quantitative comparison between the two.
6. The paper points the reviewer to the supplemental material for more results, but there is no supplemental material pdf for reference (the supplemental material also has the paper in it).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How is the proposed attentive depth fusion prior different from simply using a softmax operation over the predicted occupancies and the occupancy obtained from the TSDF prior? This question is motivated by the fact that the attention function only takes the two occupancies as input.
2. How is the error in the TSDF, accumulated due to the progressive fusion of depth images using their most recent pose estimates, handled?
Detail question: In a SLAM setting where we do not have all the RGB-D images beforehand, the TSDF is built by fusing the scene incrementally using the estimated poses. However, pose estimates from any front-end have some degree of error. This error propagates to the TSDF scene prior that is used to construct the final occupancy. The network is optimized along with the camera poses, but the TSDF does not seem to be updated based on the optimized camera poses. If, however, it is updated, would that not mean that at every mapping phase we have to fuse the entire TSDF again? This would in turn mean keeping an appropriate number of depth frames for this purpose. However, this method, like vMAP [17] and iMAP [41], stores keyframes whose main purpose is replay while learning incrementally, to tackle the catastrophic forgetting issue. No reference to this is provided in the paper.
3. How would the result look if, instead of a linear combination of the predicted occupancy and the occupancy from the TSDF prior, we used the one with the maximum weight? Could the results indicate that the attention function is redundant? This would also demonstrate whether there is any merit in using the TSDF as a prior for composing the occupancies, or whether learning to predict the TSDF from the TSDF prior is a better idea; [19] has three stages of training, the first of which is devoted to this.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors have not discussed any limitations. However, the following are two strong limitations of this work:
1. One strong limitation of this work is that it requires the availability of the TSDF prior even at test time. So what is the point of learning implicitly? The method is also capable of optimizing the camera poses and hence the resulting occupancy would be better (as also shown in the results). But then how is it different from running classic bundle adjustment and then fusing the optimized depth values into a more accurate TSDF using the optimized camera poses? Methods like MonoSDF [58] and FastSURF [19] do not need a fused TSDF at test time, and their results are quite comparable, if not better.
2. Ideally, the attention function could also be provided with features from the grids so that it could decide on the fusion weights based on features that are also being optimized. In this way, the learned features would be considered when deciding the weights for fusing the occupancies, helping the network to reason about the fusion based on the geometric understanding learned so far.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Misunderstanding in summary
* “...a particular point on a sampled ray for supervision. ”
We do not use the interpolated occupancy to supervise the inferred occupancy due to the inaccuracy or incompleteness of TSDF.
* “... TSDF will explain such points the best”
Fig.3 in our manuscript shows that, within the bandwidth, the TSDF may describe the surface less well (such as in the areas in blue) due to its inaccuracy.
2. Contributions
Our contributions on the learning framework and the attentive depth fusion prior should be recognized.
One is to show how attention should be used as a balance. Our ablation studies in Tab.7 and Tab.11 showed that it is beneficial to use attention within the bandwidth, and to use the inferred low-resolution occupancies outside the bandwidth without attention.
The other is the attention modeling itself. Using the interpolated and inferred occupancies without additional conditions performs the best in the ablation studies in Tab.10 and Fig.8. Along with our framework, we believe that how the focus should be arranged is also worth sharing with the community to benefit related research.
We did not claim the widely used modules are our contributions.
3. The attention network
3.1 Not always focus on the larger occupancy
We report an analysis in Fig.1 in the rebuttal. We sample points on the GT mesh and obtain the attention weights shown in Fig.1(a). At each point, we show its distance to the mesh from the TSDF in Fig.1(b) and to the mesh from the inferred occupancies in Fig.1(c), respectively. Fig.1(d) indicates where the former is smaller than the latter.
The high correlation between Fig.1(a) and Fig.1(d) indicates that the attention network focuses more on the occupancy that produces smaller errors with respect to the GT surface.
Instead, the red in Fig.1(e), which indicates where the interpolated occupancy is larger than the inferred occupancy, is not correlated with Fig.1(a). This lack of correlation indicates that the attention network does not always focus on the larger occupancy input.
3.2 We used softmax
We would like to point out a misunderstanding of another reviewer, sZZA. We did not directly regress two attention weights but used softmax to produce the attention weights (L166-167).
3.3 More attention alternatives
Beyond Fig.8 in the manuscript, we additionally reported more alternatives in Fig.2 in the rebuttal.
* Softmax normalization without MLP.
* Using the larger occupancy, or the one with the maximum weight.
* Using features as conditions.
Comparisons in Fig.2 in the rebuttal show that all these alternatives degrade the performance.
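For illustration, the design discussed above (a small MLP over the two occupancy inputs followed by a softmax, as opposed to the "softmax without MLP" alternative) can be sketched as follows. This is a hypothetical, untrained NumPy sketch; the layer sizes and names are our assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class OccupancyAttention:
    """MLP -> softmax over the two occupancy inputs; output is a convex combination."""

    def __init__(self, hidden=32):
        # Randomly initialized (untrained) weights, for illustration only.
        # The "softmax without MLP" ablation would normalize x directly.
        self.w1 = rng.standard_normal((2, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 2))
        self.b2 = np.zeros(2)

    def __call__(self, occ_tsdf, occ_inferred):
        x = np.stack([occ_tsdf, occ_inferred], axis=-1)          # (N, 2)
        h = np.maximum(x @ self.w1 + self.b1, 0.0)               # ReLU MLP
        logits = h @ self.w2 + self.b2
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
        alpha_beta = e / e.sum(axis=-1, keepdims=True)           # weights sum to 1
        fused = (alpha_beta * x).sum(axis=-1)                    # weighted fusion
        return fused, alpha_beta
```

Because the softmax weights sum to one, the fused occupancy always lies between the two inputs, which is the "balance" role the rebuttal describes.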
4. Holes in Fig.2 do not affect our method
Holes in Fig.2 are invisible areas caused by the absence of RGBD scans, not by missing depth values on depth images. We claimed that the TSDF can compensate for missing depth values thanks to overlaps of neighboring views; we did not claim that the TSDF can reveal invisible areas.
Moreover, the TSDF does not supervise our implicit representations, so its incompleteness is acceptable. Geometry priors or volume rendering can infer occupancies without the TSDF in invisible areas.
5. Occlusion-awareness is more important in SLAM than in view synthesis
We did not claim that the missing depth is caused by occlusion (L33).
Volume rendering relies on multiview consistency constraints to infer occlusion in each single view. Current methods randomly sample rays on different views, randomly sample points on each ray, and expect overlaps among these points to impose multiview consistency constraints during rendering. However, this randomness makes the constraints inefficient to obtain. It becomes even more difficult in SLAM, since a view may be used only once as a rendering target during training if it is not a key frame or is far away from the frame currently being processed, and only views before the current time step can be accessed. In contrast, interpolation on the TSDF can directly estimate a coarse occupancy for all points on a ray. It is not affected by the randomness in sampling, is consistent across views, and is much more occlusion-aware than supervision from a single depth map.
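To make the contrast concrete, interpolating the TSDF yields a coarse value for every sample on a ray in one deterministic step, with no reliance on random overlaps across views. A minimal sketch under assumed conventions (grid indexed as (z, y, x) in voxel units; all names illustrative):

```python
import numpy as np

def trilinear(grid, pts):
    """Trilinearly interpolate `grid` (D, H, W) at continuous voxel coords `pts` (N, 3)."""
    p0 = np.floor(pts).astype(int)  # lower corner of each enclosing voxel cell
    d = pts - p0                    # fractional offsets inside the cell
    out = np.zeros(len(pts))
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                idx = p0 + (dz, dy, dx)
                w = (np.where(dz, d[:, 0], 1 - d[:, 0])
                     * np.where(dy, d[:, 1], 1 - d[:, 1])
                     * np.where(dx, d[:, 2], 1 - d[:, 2]))
                out += w * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

# Every sample on a ray gets a TSDF value in one shot.
tsdf = np.ones((8, 8, 8))            # toy grid, all free space (+1)
origin = np.array([1.0, 1.0, 1.0])
direction = np.array([1.0, 0.0, 0.0])
t = np.linspace(0.0, 5.0, 16)
samples = origin + t[:, None] * direction
ray_tsdf = trilinear(tsdf, samples)  # coarse geometry for all ray samples
```

This is what makes the prior consistent across views: every ray that passes through the same cell reads the same fused values.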
6. Difference to FastSurf [19]
* [19] directly uses the TSDF as supervision.
* [19] only works in multiview reconstruction, not in SLAM.
* [19] cannot track camera poses.
* Our advantages are detailed in Fig.4 and Tab.1 in the rebuttal.
7. Supplemental materials
We are sorry for submitting the wrong supplemental material PDF. We will be happy to provide any important details.
8. Network difference to simply softmax
As explained in 3.2, we indeed use softmax as a layer of the MLP. Please also see our comparisons in 3.3.
9. Error accumulation in TSDF
As a tradeoff between accuracy and efficiency, we use the most up-to-date camera poses to fuse depth incrementally.
Our solution contains a pre-fusion stage and an after-fusion stage. After the tracking procedure at time step t, the pre-fusion stage first fuses the t-th depth image into a TSDF A, which has fused all previous depth images, using the estimated t-th camera pose. This leads to a TSDF B. The mapping procedure uses the TSDF B in learning and also produces an updated t-th camera pose. Then, the after-fusion stage re-fuses the t-th depth image into the TSDF A using the updated t-th camera pose. The new TSDF A will be used at time step t+1. We will add a discussion on this in our revision.
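The pre-fusion/after-fusion bookkeeping described above can be sketched as follows. This is a hypothetical minimal version in which `integrate` stands in for a real TSDF integration step (here a simple running weighted average over a voxel grid), and all names are illustrative:

```python
import numpy as np

class IncrementalTSDF:
    """Toy TSDF volume with standard weighted-average integration."""

    def __init__(self, shape):
        self.values = np.zeros(shape, dtype=np.float32)
        self.weights = np.zeros(shape, dtype=np.float32)

    def copy(self):
        other = IncrementalTSDF(self.values.shape)
        other.values = self.values.copy()
        other.weights = self.weights.copy()
        return other

    def integrate(self, frame_tsdf, frame_weight):
        # running weighted average over observed voxels
        total = self.weights + frame_weight
        mask = total > 0
        self.values[mask] = (self.values[mask] * self.weights[mask]
                             + frame_tsdf[mask] * frame_weight[mask]) / total[mask]
        self.weights = total

def process_frame(tsdf_a, frame_tsdf, w_tracked, w_refined):
    # Pre-fusion: fuse the t-th depth (tracked pose) into a copy -> TSDF B.
    tsdf_b = tsdf_a.copy()
    tsdf_b.integrate(frame_tsdf, w_tracked)
    # ... mapping uses tsdf_b and produces an updated t-th camera pose ...
    # After-fusion: re-fuse the t-th depth (updated pose) into TSDF A,
    # which is carried forward to time step t+1.
    tsdf_a.integrate(frame_tsdf, w_refined)
    return tsdf_b
```

In a real system the two `frame_tsdf`/weight volumes would be re-projected with the tracked and refined poses respectively; here they are passed in directly to keep the bookkeeping visible.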
10. Occupancy with the maximum weight
Please see our replies in 3.3. Fig.6 in the rebuttal indicates that the inaccuracy and incompleteness make the TSDF not a good direct supervision in our method.
11. Limitations
11.1 Depth images during test
Like MonoSDF, FastSurf, and NICE-SLAM, our method is overfitting-based: all of these methods need RGBD images to conduct test-time optimization (a stage called training that only uses a single test sample). Thus, using RGBD during test time should be a fair evaluation.
Learning implicitly means learning implicit representations for high-fidelity surface reconstruction.
11.2 Feature condition for attention
Please see our replies in 3.3.
---
Rebuttal Comment 1.1:
Title: After rebuttal analysis
Comment:
Response related to misunderstanding of summary:
1) "...a particular point on a sampled ray for supervision."
We do not use the interpolated occupancy to supervise the inferred occupancy due to the inaccuracy or incompleteness of the TSDF.
ANS: There was no misunderstanding, what was meant from the above line is that TSDF is used for better results. This can be understood from the very next line which states "... used as a scene prior" and not as a supervisory signal.
However, I acknowledge the inaccuracy in the pointed out line of my summary.
2) "... TSDF will explain such points the best"
Fig.3 in our manuscript shows that, within bandwidth, TSDF may describe less (such as areas in blue) due to its inaccuracy.
ANS: I would not term the above statement of my review as a misunderstanding.
This conclusion was drawn based on the very intent of giving more weightage to in-bandwidth points, as has been stated in L129-130 "Since the TSDF predicts signed distances for queries within the bandwidth with higher confidence than the ones outside the bandwidth, we only use the depth fusion prior within the bandwidth."
While the TSDF might describe some parts of such areas inaccurately, its ability to explain the within-bandwidth points better is exploited in the paper. Moreover, in Fig. 3, blue and red colors are used to show the attention value for "beta", and it is very clear that the attention network is quite rightly giving high values to "beta" for in-bandwidth points.
The statement does not tally with claims in the manuscript very well. For example, it goes against the very observations the manuscript has provided in L310-313: "We try to use attentive depth fusion priors everywhere in the scene with no bandwidth. The degenerated results indicate that the truncated area outside the bandwidth does not provide useful structures to improve the performance." From this it is logical to infer that areas within the bandwidth provide useful information to improve performance, which is also the crux of the paper. If not, why even bother to have this distinction between inside-bandwidth and outside-bandwidth regions and use the in-bandwidth points for fusion?
The second paragraph of the "Contributions" section of the rebuttal also states the same "...showed that it is great to use attention within the bandwidth, ...".
3) We would like to point out another reviewer sZZA's misunderstanding. We did not directly regress two attention weights but used softmax to produce attention weights (L166-167).
Ans: Again, there is no misunderstanding here either. I have not suggested anywhere in the review that the paper "regresses two attention weights". I think the authors have confused a rather important question: "how would it be any different from simply using a softmax operator to combine the two occupancies instead of learning two weights". By this question, I do not mean that two weights are being learnt. It is very clear in the paper that a softmax is used to produce attention weights. What is important here is the question asked.
Response with regard to answers provided for my concerns:
1) my responses to explanation given in "The attention network" segment of the rebuttal:
Color coding is not clear in Fig. 1(a). I see only red and blue. However, attention is a real number between 0 and 1. So, I would assume the color coding of Fig. 3 in the paper where red is higher and blue is lower. Looking at Fig 1(a-d), it seems that yes, the attention network does choose the inferred occupancy over the TSDF prior when necessary. The analysis depicted here answers my Question 1.
2) Answers provided for my concern raised in Weakness 3 of my review is not satisfactory.
What difference does it make whether the holes in Fig. 2 are invisible areas due to the absence of RGBD scans or due to the absence of depth values in the depth image? What is important here is the inability of the TSDF to help in such a circumstance. This is clearly visible from the quality of the mesh obtained from inferred occupancies in those areas shown in Fig. 2. So my concern raised in Weakness 3 is not answered by the provided explanation.
3) Concerns raised in Weakness 4 and 5 of my review are addressed and I am satisfied.
4) Limitations 1 and 2 are also well addressed.
5) Concerns related to "Error accumulation in TSDF" during online mode: I am satisfied with the explanation and with the decision of the authors to also put them in the discussion section of their revised manuscript.
Overall: The rebuttal answers most of my concerns except for Weakness 3, which I believe is a significant weakness of this work. Nonetheless, I acknowledge that there is some merit in this work and appreciate the authors' detailed response, which includes running more experiments. I would also request the authors to include the qualitative and quantitative analysis provided in the rebuttal for concerns raised in Questions 1 and 3 and Limitation 2 in the revised manuscript.
---
Reply to Comment 1.1.1:
Title: We appreciate your comments and the acceptance rating
Comment: Dear reviewer sZZA,
Thanks for your time, comments, and acknowledging our efforts in addressing most of your concerns. We really appreciate that you increased the initial rating to borderline accept.
Also, we are sorry for any misunderstanding of your review, if there was any; it was not intentional.
1. A particular point on a sampled ray for supervision
That is correct, the TSDF is not used as a supervisory signal.
2. The TSDF will explain such points the best.
We would like to further clarify this point and make it clearer in our revision.
* Compared to the TSDF outside the bandwidth, which is merely 1 or -1, we give full credit to the TSDF within the bandwidth (Fig.1), which contains more geometry details, and ignore the TSDF outside the bandwidth. This is also what L129-130 means. However, there are no weights involved to balance the TSDF outside the bandwidth and the TSDF within the bandwidth. Thus, saying "giving more weightage to in-bandwidth points" might not be appropriate.
* For each point within the bandwidth, the attention network learns to balance its TSDF interpolation within the bandwidth and its occupancy inferred by geometry priors or through volume rendering. We acknowledge the dominant role of the TSDF within the bandwidth in L244-246, since the attention network focuses more on the TSDF at most locations (areas in red) on the intersection plane in Fig.3, but obviously not at all of them. Thus, the areas in blue in Fig.3 show that the TSDF does not explain such points the best.
3. Softmax
We would like to further clarify this point and make it clearer in our revision.
* Yes, we do use softmax to produce the attention weights.
* Regarding “how would it be any different from simply using a softmax operator (the original question) to combine the two occupancies instead of two weights”, our reply to this question is the first alternative in 3.3 in our rebuttal. The difference lies in whether we use an MLP to transform the two occupancy inputs before normalizing them with a softmax. Fig.2 in the rebuttal shows that the MLP is vital to learning the fusion across the scene.
4. Color coding in Fig.1(a)
* The attention weights that we encode in Fig.1(a) in the rebuttal are real numbers between 0 and 1. The reason why the reviewer saw only red and blue is that these attention weights are very close to 1 and 0.
* The ways we visualize attention weights in Fig.3 in our paper and Fig.1(a) in the rebuttal may explain the difference. Fig.3 shows attention weights at locations on a randomly selected cross section over the scene, while Fig.1(a) directly shows attention weights at points sampled on the GT mesh. Obviously, almost all points sampled on the mesh are closer to the GT mesh than the ones sampled on the cross section. Thus, you may see more diverse colors in Fig.3.
5. Holes in TSDF
We would like to further clarify this point and make it clearer in our revision.
* In response 4 in our rebuttal, the reason we explain what causes the holes in Fig.2 in our paper is that it is part of our response to your concern that the TSDF may not be adequate to compensate for the missing depth values on depth images. What we wanted to say is that the TSDF can fill holes caused by noise using neighboring depth images, but it may still contain holes caused by the absence of RGBD scans.
* It would not make any difference if there are invisible areas due to the absence of RGBD scans. Our method does not care about holes in the TSDF, or even need to differentiate what caused them, since we have the ability to infer occupancy in invisible areas through geometric priors and volume rendering.
* Near a hole in the TSDF, it is highly likely that the area is outside the bandwidth. This indicates that our method will use the occupancies inferred from the coarse-resolution feature grid, and the attention network will not get involved in this area. Thus, the TSDF is not supposed to help there.
* The quality of the reconstructed surface is mainly limited by the challenging conditions in the context of SLAM. We estimate the camera poses, only access the frames before each current time step, and incrementally fuse depth images into a TSDF. Please refer to our response 1 (Low-quality completion) to reviewer QXhf for more details.
6. Revision
We will follow your instructions to update our paper by adding the numerical and visual comparisons conducted for the analysis in our rebuttal. | Summary: This paper proposed a simple yet effective approach of neural implicit surface reconstruction from RGB-D seuqences that leverages the reconstructed TSDF grids as a prior which effectively improve the reconstruction quality of fitting a neaural implicit surface from multi-view RGB-D images directly. The idea of using fused TSDF grids as a geometric prior is novel and makes sense as it comes for free from RGB-D sequences. The proposed attention mechnism of fusing the results from TSDF prior and neural implicit function is also novel and effective through experimental results. The authors have also demonstarted the effectiveness of their proposed methods and each of its components through extensive experiments, ablations and analysis. Except from some very minor issues, this is a good paper. For these reasons, I would reccomend "Weak Accept" at this stage.
Strengths: 1. The idea of using fused TSDF grids as a prior is novel. Normally, traditional TSDF fusion can achieve better accuracy in geometry reconstruction but falls short of filling the holes in occluded or unobserved regions. Neural implicit methods, on the other hand, are better at hole filling at the cost of slightly less accurate surfaces. Therefore, using TSDF fusion as a prior and combining these two methods is very natural and has a very clear motivation. Also, as the authors showed, the fused TSDF grid already encodes all historical frames and is occlusion-aware, which could also bring benefits to directly fitting to multi-view RGBD images. Moreover, this prior also comes with no extra cost. The proposed learnable attention module to fuse the output from two sources is also a "simple-yet-effective" solution.
2. Overall, the quantitative experiments are conducted extensively and are sufficient to show the effectiveness of the proposed methods. Quantitative results show the proposed method achieves SOTA reconstruction and tracking results compared to previous RGB-D neural implicit SLAM/surface reconstruction methods. Qualitative results also look promising and accompanied with detailed and clear analysis. Ablation studies are also well conducted and justified the effectiveness of each module and design choices.
3. The paper is very well written and easy to follow. The introduction and related work sections are very clear and show clear motivation of the proposed method.
4. The method could run both in an online SLAM manner and under batched offline setting, which increases the system's practical value.
Weaknesses: 1. Overall, the comparisons with RGB-D baselines seems convincing, but some of the experiments in Sec. 4.2 are comparing against baseline methods that rely on RGB-only input. For example in Tab. 2, 4 and Fig. 5. These are not completely fair comparisons as the proposed methods take measured depth directly. Or did you change some settings in the experiments, like supervising the proposed method with predicted depth instead of measured depth?
2. It would be good to also include performance analysis, such as memory and run-time comparison to previous methods, especially for the SLAM setting.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Some minor questions:
1. What's the resolution and voxel size used in the feature grids and TSDF grids?
2. Regarding the learnable attention module $f_a(\cdot)$, seems that it is only conditioned on the two occupancy predictions? Have you tried with more conditions?
3. When evaluating reconstruction quality on ScanNet (Tab. 4), what mesh culling strategy did you use? Also is it consistent across all the methods?
4. In the ablation studies on depth fusion (Tab. 8), why would there be difference between GT All and GT? If I understand correctly, the difference is just that GT All operates in a batched manner while GT is online. But this shouldn't cause difference when you run TSDF-fusion?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Experimental settings for comparing with multi-view reconstruction methods
Since our method focuses more on SLAM applications, which require both camera tracking and mapping, it is very hard to conduct completely fair comparisons with multi-view reconstruction methods. The comparison details are shown below.
The aspect where we have an advantage:
* We use GT depth images (while RGB-only methods do not).
The aspects where we do not have an advantage:
* We only observe the 0-th to (t-1)-th images before the current time step t (while multi-view reconstruction methods can observe all images during the whole training procedure).
* We only process frames sequentially (with an interval of 5 frames), and may see a frame only once during training if it is not a key frame or is far from the frame currently being processed (while multi-view reconstruction methods see all images many times during the iterative optimization).
We did try to use the estimated depth images to produce our results. However, we found that each estimated depth image used by MonoSDF needs a pair of scale and shift parameters to get normalized, which aligns the estimated point cloud to a sparse point cloud obtained from structure from motion. These scale and shift parameters are not consistent across different views, which makes it hard to fuse the estimated depth images into a plausible TSDF, even when using GT camera poses. Fig.7 in the rebuttal shows that such a TSDF fails to represent even a coarse structure of the scene and thus cannot be used as a depth fusion prior in our method. Instead, we tried GT depth maps and reported the results in Fig.5 and Tab.2 in the rebuttal. The improvement with GT depth images is marginal and still not as good as ours.
2. Computational comparisons
Thanks to the TSDF fusion implemented in CUDA, the runtime of our program (29min56s) is comparable to that of NICE-SLAM (29min19s) for a scene with 2000 frames. It takes only about 26s to fuse the depth images frame by frame incrementally. As for the number of parameters, our method contains 12.15M learnable parameters, comparable to the 12.20M in NICE-SLAM.
3. Resolutions
We follow NICE-SLAM and some TSDF fusion algorithms in using different resolutions in different scenes while keeping the voxel size the same across scenes. In a scene, the resolutions of the feature grids and the TSDF grid are determined by the size of the scene and the voxel size. Specifically, the voxel size of the low-resolution feature grid is 0.32, the voxel size of the high-resolution feature grid is 0.16, and the voxel size of the TSDF grid is 1/64. Using the same voxel size aims to generalize the geometry prior learned at the same scale of voxel grids.
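As a concrete illustration of this setup, a per-scene grid resolution can be derived from the scene bounds and a fixed voxel size. This is a minimal sketch with assumed scene bounds; the voxel sizes (0.32 / 0.16 for the feature grids) follow the reply, and units follow the scene coordinates:

```python
import numpy as np

def grid_resolution(scene_min, scene_max, voxel_size):
    """Number of voxels per axis for a fixed voxel size (epsilon guards float error)."""
    extent = np.asarray(scene_max, float) - np.asarray(scene_min, float)
    return np.ceil(extent / voxel_size - 1e-6).astype(int)

# Example: a scene with bounds [0, 0, 0] .. [6.4, 4.8, 3.2].
low_res = grid_resolution([0, 0, 0], [6.4, 4.8, 3.2], 0.32)   # low-res feature grid
high_res = grid_resolution([0, 0, 0], [6.4, 4.8, 3.2], 0.16)  # high-res feature grid
```

Fixing the voxel size (rather than the voxel count) is what lets a prior learned on one scene transfer to scenes of different extents.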
4. Attention network alternatives
As mentioned in L165-169, we tried different designs. For instance, without using softmax, we use a sigmoid to predict one weight alpha and use 1-alpha as the other weight beta, or we use coordinates as a condition. We found that the coordinate condition makes the fusion very sensitive to locations, which degrades performance. We reported these numerical and visual comparisons in Tab.10 and Fig.8 in the manuscript. We also conducted experiments in Tab.11 in the manuscript to explore whether the low-resolution or the high-resolution occupancy inferred through volume rendering should be attended to. Moreover, we conducted experiments with different conditions in Tab.4 and Fig.2 in the rebuttal, and all these alternatives degrade the reconstruction accuracy.
5. Culling strategy
We use the culling strategy introduced in MonoSDF to produce our results and the results of other methods on ScanNet.
6. GT All vs GT
The difference between GT All and GT in Tab.8 in our manuscript lies in whether we fuse all depth maps into a TSDF grid at the very beginning. GT All indicates that we do so and use the TSDF containing the whole scene to process all frames. GT indicates that we instead incrementally fuse the depth map at the current time step into the TSDF, which means we can only use the part of the scene observed before the current time step. Although the TSDF fusion itself is the same, how the attention network learns weights to balance the occupancy interpolated from the TSDF and the occupancy inferred through volume rendering differs under these two conditions. Thus, the TSDF difference during learning makes a difference in the results.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response, which has addressed most of my concerns. I believe this is a nice paper and there is also enough agreement among reviewers. I will keep my initial positive rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your kind words
Comment: Dear reviewer 3uRZ,
Thanks for your kind words. We appreciate your effort in reviewing our submission. Your comments are very helpful for us to improve our manuscript.
Best,
The authors | Summary: The paper proposes a pipeline for estimating scene geometry with TSDF represented with neural implicit function, based on multi-view RGBD inputs. The main novelty is a fusion mechanism which utilized fused depth geometry as prior, and fuses geometry prior with estimated geometry using attention-based weighting. The weighting also considers bandwidth of the TSDF representation, allowing for multi-resolution grid features to be leveraged. The results demonstrated superior performance w.r.t. geometry estimation when compared to methods like MonoSDF, and better pose estimation when compared to methods like NICE-SLAM.
Strengths: [1] The idea to fuse input geometry and estimated geometry, instead of using input geometry as constraints when optimizing estimated geometry, is in general novel and sound. The reasoning is, input geometry from fused depth maps may present high confidence in certain areas, and estimated geometry via MLPs may excel at other regions. By fusing these two instead of estimating all regions with MLPs, the method yields better geometry, and benefits pose estimation in SLAM applications. The insights here would be useful to the community to inspire better ways to leverage input geometry when training MLP-based TSDF fields.
[2] Extensive evaluation. The methods evaluates the method w.r.t. geometry estimation with state-of-the-art methods like MonoSDF, which is the main focus of the paper. Beyond geometry, the method also shows improved geometry benefits downstream tasks of pose estimation, and is able show superior performance over SLAM systems with implicit neural representations. Extensive ablation study is also applaudable, which justifies certain design choices, as well as providing visualization of what is learned with the attention weights.
[3] The paper is in general well-written and easy to follow.
Weaknesses: [1] Further clarification and justification are needed in various parts of the paper.
[1.1] For example, on why Transformers are not used to model the attention, the paper gives an unconvincing reason that people may conjecture that the performance improvements are mainly attributable to Transformers. However, it is unfair and premature to say so without providing actual results with a Transformer-based design. With those additional results, we could get a better idea of how important the architecture design of the attention mechanism is to the performance gain. Additionally, the design choices mentioned in Lines 165-169 require justification via ablation study.
[1.2] Additionally, on the explainability of the learned attention maps, the paper mentions in Line 246 that 'some areas' are paid more attention to, which is not sufficient. Further analysis of what part of the input geometry is prioritized, and what part is mostly learned, would give better insight into the attention weights and better justify the motivation of the proposed method. Without further analysis, the attention mechanism behaves more or less as a black box.
[1.3] Finally, additional details of the experiment setting are needed. For example, in comparing against MonoSDF, it is not clear whether input depth or estimated depth (with DPT) are used. In experiments where poses are optimized, it is not clear what the initializations look like, what pose representation and regularization are used, and what was the convergence.
[2] Paper writing. For example, the introduction section repeatedly mentions the core contribution of 'attentive depth fusion priors' without giving any explanation or hint of what it exactly is. As a result, contribution #2 is essentially explaining the 'attentive depth fusion priors'; however, it is not immediately clear that the 'attentive depth fusion priors' mentioned in contribution #1 is basically explained in #2. Additionally, more discussion should go into explaining important baselines like MonoSDF and NICE-SLAM, on how they differ from the proposed method, so as to better clarify the contributions and to provide further insight into why the proposed method performs better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: There are questions included in my reviews, mostly in Weakness 1.2 and 1.3. it would be great if the authors can address those questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable. No potential negative societal impacts are mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Attention network justification
We did report ablation studies justifying the attention network in our manuscript. We reported numerical and visual comparisons with different attention modeling alternatives in Tab.10 and Fig.8, respectively, where our current attention network produces the best performance. We also reported in Tab.11 whether the low-resolution or the high-resolution occupancy inferred through volume rendering should be focused on. Additionally, we compared the performance of our simple MLP and the more advanced Transformer network in the rebuttal. Due to the short rebuttal period, we only obtained some Transformer results that are comparable to ours, which indicates that it still needs more effort to tune the architecture.
2. Attention analysis
We visualize the learned attention weights on a cross section of the TSDF grid in Fig.3 in the manuscript, and also visualize the attention weights learned at different epochs in our video. Generally, the network focuses more on the occupancy interpolated from the TSDF in areas where the TSDF is complete, and more on the occupancy inferred through volume rendering in areas where the TSDF is incomplete. In areas where the TSDF is complete, the network also pays some attention to the inferred occupancy, because the occupancy interpolated from the TSDF may not be accurate, especially on the front-most parts of surfaces in Fig.3 in the manuscript.
Moreover, we additionally report a visual analysis of how attention works in Fig.1 in the rebuttal. We sample points on the GT mesh and get the attention weights in Fig.1(a). At each point, we show its distances to the mesh from the TSDF in Fig.1(b) and to the mesh from the inferred occupancies in Fig.1(c), respectively. Fig.1(d) indicates where the former is smaller than the latter. The high correlation between Fig.1(a) and Fig.1(d) indicates that the attention network focuses more on the occupancy producing smaller errors to the GT surface. In contrast, the red in Fig.1(e), indicating where the interpolated occupancy is larger than the inferred occupancy, is not correlated with Fig.1(a). This lack of correlation indicates that the attention network does not always focus on the larger occupancy input.
3. Experimental details
We are sorry for uploading the wrong supplemental material, which includes all the experimental details. Regarding the comparison with MonoSDF, we used the estimated depth maps to report their results. We also tried GT depth maps and reported the results in Fig.5 and Tab.2 in the rebuttal, but the improvement from GT depth maps is marginal and still not as good as ours. Regarding camera tracking, we followed NICE-SLAM and NICER-SLAM in using the GT camera pose of the first frame as the initialization, and we use the same regularization. The camera pose is represented as a 4x4 matrix including the rotation and translation. The optimization is regarded as converged after a certain number of iterations in the camera tracking and mapping stages, respectively. We will upload the supplemental materials to report the experimental details.
4. Paper writing
We will follow your suggestion to revise our manuscript, such as explaining our attentive depth fusion prior and rephrasing our contributions.
5. Discussions on difference to the latest
We will add more discussion on our advantages over MonoSDF and NICE-SLAM. In summary, our method uses the depth-fused TSDF as a prior, and employs an attention network to determine where and how much we should use it, with a potential correction from the occupancy inferred through volume rendering. Our attentive depth fusion prior is a more accurate guidance for learning neural implicit representations than using GT depth images as rendering supervision, because single depth images may contain holes and occlusions, which can be significantly reduced by the TSDF. Please see the comparison in Fig.6 in the rebuttal for our advantage over directly using the TSDF as supervision. Moreover, our method shows a novel way of balancing the confidence within and outside the bandwidth with attention networks, which also provides new perspectives on using a TSDF with neural implicit functions. Our ablation studies in the supplemental material also show that the attentive depth fusion prior improves surface reconstruction no matter whether we use depth images as rendering targets or not. Hence, the novel way of using depth information differentiates our method from other methods inferring neural implicit representations in either multi-view or SLAM settings, such as MonoSDF and NICE-SLAM.
---
Rebuttal Comment 1.1:
Title: Concerns addressed
Comment: Thank the authors for their reply. In my opinion, the rebuttal has addressed most of the concerns raised by Reviewer X87y.
---
Reply to Comment 1.1.1:
Title: Re: Concerns addressed
Comment: Dear AC,
Thank you so much for checking the review and our response. We are glad to know that our explanation and additional results addressed Reviewer X87y's concerns. We really appreciate your time and expertise.
Best,
The authors
---
Rebuttal Comment 1.2:
Comment: I would like to first thank the authors for the rebuttal. As mentioned by the AC, some of my original questions were addressed in the rebuttal with additional results; meanwhile, a few more have emerged which I would like to discuss further with the authors. Specifically,
[1] On justifying the attention model design. The authors compare the design with a few variants in both the main paper and the rebuttal, which is applaudable. I also understand that it might be impossible to complete experiments with a new Transformer-based design in a short rebuttal window. As such, I stick to my view that comparison with a Transformer-based design is useful to further justify the claims on design choice, and can be added in the final version given time.
[2] On the analysis of the attention maps. The authors include in Fig. 1 of the rebuttal more insights into the attention map, which is great. Regarding the textual analysis, the authors emphasize that more weight is placed on the TSDF where it is complete, which is intuitively straightforward given that the learned geometry will surely attempt to fill in 'missing holes'. However, I wonder whether more detailed insights are available. For example, considering that the TSDF may present more **high-frequency details** compared to the learned ones, will the learned weights favor the TSDF more at edges/corners? These are questions to be answered in a later version of the paper to provide more insight into what is learned by the method and why.
[3] Experiment setting. Thanks to the authors for clarifying this. However, one question has emerged on my side: by using RGBD inputs (with dense GT depth), the experiment setting may not hold for in-the-wild datasets without dense depth. The goal of MonoSDF is to demonstrate geometry reconstruction WITHOUT input depth WITH ABSOLUTE SCALE, and the proposed method will not be able to do so due to its need for DENSE ABSOLUTE DEPTH for TSDF fusion. Please correct me if I am wrong on this, but if my conjecture is true, the assumption of input depth may undermine the practicality of the proposed method on in-the-wild captures with RGB input only. And in this case, the comparison with MonoSDF has to be done by providing MonoSDF with the same GT input depth, instead of DPT-inferred ones.
I will hold my final rating until the authors respond to my comment [3] above.
---
Reply to Comment 1.2.1:
Title: Thanks for your questions and comments
Comment: 1. Design Justification
We will follow your advice and report additional comparisons in our revision.
2. Attention Analysis
We additionally reported a visual analysis of how the attention works in Fig.1 in the rebuttal. We sample points on the GT mesh and obtain the attention weights in Fig.1(a). At each point, we show its distances to the mesh from the TSDF in Fig.1(b) and to the mesh from the inferred occupancies in Fig.1(c), respectively. Fig.1(d) indicates where the former is smaller than the latter. The high correlation between Fig.1(a) and Fig.1(d) indicates that the attention network focuses more on the occupancy producing smaller errors with respect to the GT surface. In contrast, the red in Fig.1(e), indicating where the interpolated occupancy is larger than the inferred occupancy, is not correlated with Fig.1(a). This lack of correlation indicates that the attention network does not always focus on the larger occupancy input.
One important conclusion we currently draw from these comparisons is that the attention network focuses on the occupancy that produces smaller errors with respect to the GT surface, even without explicitly reconstructing surfaces from occupancies or knowing the GT surfaces. Fig.1(a) in the rebuttal shows that the attention network pays attention both to low-frequency geometry in the TSDF, such as the ground and the wall, and to high-frequency geometry, such as the wrinkles on the bed.
We agree that the TSDF may show more high-frequency geometry, but we should also be cautious about noise and sudden depth changes in depth images, which may make the TSDF inaccurate. Attention weights on cross sections in Fig. 3 in our manuscript show that the attention network may focus more on the learned occupancies (areas in blue) at the very front of the surface due to the inaccuracy of the TSDF.
We would like to conduct more analysis of where the attention network is more interested in high-frequency geometry than low-frequency geometry, and vice versa. We will follow your advice and report these experiments in our revision.
3. Comparison with MonoSDF
MonoSDF does not require GT depth images, but it does require dense depth at the right scale. Specifically, it first employs a monocular depth prior to predict depth images from RGB inputs, then leverages a least-squares criterion to solve for a scale and a shift that align the rendered depth image and the monocular depth image on each view, as shown in Eq. 13 of the MonoSDF paper and the compute_scale_and_shift function in the released code at code/model/loss.py. The scale alignment makes the rendered depth images and the monocular depth images comparable, which results in a dense depth image as supervision for the rendered image from each view angle. Although MonoSDF manipulates the rendered image to align its absolute scale to the one from the monocular depth prior in Eq. 13 of the paper, our results show that it also works well if we align the scale of the monocular depth images to the absolute scale of the rendered images.
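For intuition, the per-view scale-and-shift alignment described above reduces to a closed-form 2x2 least-squares solve. The sketch below is our own hedged illustration of that idea, not MonoSDF's released compute_scale_and_shift implementation (the function name and masking convention here are assumptions):

```python
import numpy as np

def scale_and_shift(pred, target, mask):
    """Solve min over (s, t) of sum(mask * (s * pred + t - target)**2) in closed form.

    pred:   flattened predicted (e.g. rendered) depth values
    target: flattened reference (e.g. monocular) depth values
    mask:   1.0 at valid pixels, 0.0 elsewhere
    """
    # Normal equations of the 2x2 least-squares system in (s, t).
    a00 = np.sum(mask * pred * pred)
    a01 = np.sum(mask * pred)
    a11 = np.sum(mask)
    b0 = np.sum(mask * pred * target)
    b1 = np.sum(mask * target)
    det = a00 * a11 - a01 * a01
    s = (a11 * b0 - a01 * b1) / det
    t = (a00 * b1 - a01 * b0) / det
    return s, t
```

Applying `s * pred + t` then puts the two depth maps on a common scale, which is what makes them comparable per view.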
We reported comparisons with MonoSDF using GT depth images in our response 3, "experimental details", in the rebuttal. The visual and numerical comparisons in Fig. 5 and Tab. 2 in the rebuttal show that GT depth images bring only marginal improvement to MonoSDF, which remains worse than ours.
We will also add these analyses and experiments in our revision. | Summary: This paper aims to improve the performance of 3D reconstruction from multi-view RGB-D images. The key innovation is the attentive depth fusion prior, which allows the networks to directly use the depth fusion prior together with the inferred occupancy as the learned implicit function. Experiments show that the proposed method works for both one-time fused TSDF and incrementally fused TSDF.
Strengths: 1. This paper aims to improve 3D reconstruction by better handling cases such as incomplete depth at holes. This is one limitation of existing work and is an important task.
2. The method section is clear and well explained, helping readers understand the details.
3. The authors have included video results that can help better understand the visual result quality of the proposed method.
4. Extensive comparisons and ablation studies have been done to show improvements over prior work and justify the design choices of all components.
Weaknesses: 1. L242 claims that the proposed method "plausibly completes the missing structures from TSDF" in Fig. 2. However, the surface completed by the proposed method seems to be of very low quality. It seems a simple non-learning mesh hole-filling algorithm on top of the meshes on the left of Fig. 2 could produce much cleaner results. Learning-based methods [A,B] that predict TSDF for indoor scene completion can also achieve much better results.
[A] Dai et al. SG-NN: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. CVPR 2020.
[B] Dai et al. Scancomplete: Large-scale scene completion and semantic segmentation for 3d scans. CVPR 2018.
2. It is great that the paper has conducted extensive quantitative experiments. However, there are currently 12 tables in the main paper, which seems a bit overwhelming. Indeed, some tables are not that critical and could be moved to the supplementary. Alternatively, the authors could present only the scene-average evaluation in the main paper for some tables and move the per-scene results to the supplementary. The freed space could instead be used to address some limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Since the proposed method is predicting multiple values for the same query points, what is the time increase over a naive neural implicit function (predicting a single signed distance for each query point)?
2. Minor editorial suggestions:
L173: it would be better to swap "color function c" and "occupancy function f" for better correspondence.
L191: includes -> include
L303: attentions -> attention
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not adequately addressed the limitations or potential negative societal impact. The authors could show some failure cases that indicate common patterns where the proposed method would fail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Low-quality completion
As described in L226-228, all completed meshes in the right column of Fig.2 are reconstructed in the context of SLAM. Under this setting, we do camera tracking on each frame, render RGB and depth images every 5 frames for reconstruction by minimizing rendering errors, fuse depth images into the TSDF incrementally using the estimated camera poses, and only access the 0-th to (t-1)-th GT images at the t-th time step. These conditions in SLAM result in dynamic and incremental context information that is much more incomplete and uncertain than the static and relatively complete context required by scene completion or mesh hole-filling algorithms. This kind of context information means that our method can only perform an extrapolation-like completion, since there is no chance to observe the whole scene. In contrast, data-driven or traditional hole-filling methods can access the whole scene and merely perform an interpolation-like completion, which makes a significant difference. As described in L241-242, Fig.2 aims to demonstrate our ability to keep all the correct structure in the TSDF and infer the missing structure through volume rendering.
Additionally, we report comparisons with data-driven and hole-filling methods such as SGNN and Filling Holes in Meshes [SGP03] in Fig.8 in the rebuttal. SGNN fails to fill holes in scenes with ceilings, and [SGP03] produces severe artifacts in empty space due to its limited ability to perceive the context.
2. Layout
We will follow your suggestions to improve the layout.
3. Time cost on more prediction
Actually, we predict only one occupancy at each location rather than multiple ones. Besides the one predicted by our neural network, we interpolate occupancies from the fused TSDF for query points. There is almost no time cost for the trilinear interpolation or the simple attention network. As for the TSDF fusion implemented in CUDA, it takes only about 25.9s to incrementally fuse all 2000 depth images frame by frame in a scene. For example, in room0 of Replica, the total runtime of our method is 29min56s, which is comparable to 29min19s with NICE-SLAM.
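To illustrate why the interpolation step is essentially free, here is a hedged sketch (our own illustration, not the authors' released code) of trilinear interpolation of occupancies from a fused TSDF grid at continuous query points; it amounts to a handful of vectorized array reads and multiplies per point:

```python
import numpy as np

def trilinear_interp(grid, pts):
    """Trilinearly interpolate `grid` (X, Y, Z) at continuous voxel coords `pts` (N, 3)."""
    i0 = np.floor(pts).astype(int)  # lower corner of each enclosing voxel
    f = pts - i0                    # fractional offset within the voxel
    x0, y0, z0 = i0[:, 0], i0[:, 1], i0[:, 2]
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = f[:, 0], f[:, 1], f[:, 2]
    # Blend the 8 voxel corners with their trilinear weights.
    return (grid[x0, y0, z0] * (1 - fx) * (1 - fy) * (1 - fz)
          + grid[x1, y0, z0] * fx * (1 - fy) * (1 - fz)
          + grid[x0, y1, z0] * (1 - fx) * fy * (1 - fz)
          + grid[x0, y0, z1] * (1 - fx) * (1 - fy) * fz
          + grid[x1, y1, z0] * fx * fy * (1 - fz)
          + grid[x1, y0, z1] * fx * (1 - fy) * fz
          + grid[x0, y1, z1] * (1 - fx) * fy * fz
          + grid[x1, y1, z1] * fx * fy * fz)
```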
4. Limitations
Since we do not directly model the lighting in the scene, our method does not work well with transparent or reflective surfaces, such as glass. Reflection may make depth sensors fail to capture depth information or break multi-view consistency in RGB images, both of which degrade the accuracy of reconstructed surfaces. However, this only causes our method to reconstruct a small area poorly rather than fail to reconstruct the whole scene. Please see Fig.3 in the rebuttal for more details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. After reading all reviewers' comments and the responses from the authors, I am leaning towards keeping my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the positive rating
Comment: Dear reviewer QXhf,
Thanks for the positive rating. We also appreciate your time and comments.
Best,
The authors | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments and for highlighting our novel and interesting idea (Reviewers x87y, 3uRZ), extensive evaluations and analysis (Reviewers QXhf, x87y, 3uRZ, sZZA), well-motivated and sound technical method (Reviewer QXhf), clear presentation (Reviewer QXhf), and well-written manuscript (Reviewers x87y, 3uRZ, sZZA).
We are also sorry for uploading the wrong supplemental material. As a result, reviewers missed some important details about experimental settings, discussions of results, visualizations, and additional ablation studies. We will be happy to provide any details during the following discussion period. We will upload the supplemental material if our submission is accepted.
We respond to each review below and include a rebuttal with tables and figures here.
We will release all reviews, discussion, and our rebuttal even if our paper gets rejected.
Pdf: /pdf/c92897f39d0e647ab34fb4bf1c8bb9deaeef4d17.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift | Accept (poster) | Summary: This paper proposes to combine contrastive learning and self-training for unsupervised domain adaptation. Experimental results on UDA benchmarks demonstrate the empirical effectiveness of the approach and a thorough study demonstrates the theoretical benefits.
Strengths: - The results on UDA tasks are consistently better than simple contrastive learning and self-training approaches.
- The theoretical analysis is very thorough and demonstrates the fundamental benefits of contrastive learning and self-training individually and of combining them in the case of UDA.
Weaknesses: - The proposed method STOC is vaguely described in L104-108. It would be clearer to describe it formally and in more detail.
- The contributions are not clearly stated. Among the methods presented in L90-108, please clarify that STOC is your contribution.
- Some popular UDA benchmarks are missing; many concurrent works use Office-31 and Office-Home. Evaluation on these datasets would be appreciated.
- Concurrent works are missing, such as [1, 2, 3]. Please position your work against these papers, and compare your approach both conceptually and empirically to the approaches presented in them. More generally, it seems that the presented method is compared only to simple baselines and not to methods specifically designed to tackle UDA tasks.
[1] DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation, Damodaran et al., ECCV 2018
[2] Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation, Jiang et al., ICML 2020
[3] Semantic-aware Message Broadcasting for Efficient Unsupervised Domain Adaptation, Li et al., ArXiv: 2212.02739
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - In Table 1, are concurrent models your reimplementations, or pretrained models from the SwAV and FixMatch papers?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations are discussed in the analysis, for exemple regarding the limited performance improvement on semi-supervised tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and for their thoughtful feedback. We will improve the exposition of the final version as per your suggestions (e.g., a bulleted list of contributions, algorithmic description of STOC).
> **The proposed method STOC is vaguely described in L104-108. It would be clearer to describe it formally in more details.**
We have updated the draft to include an Algorithmic description of STOC.
> **The contributions are not clearly stated. Among the methods presented in L90-108, please clarify that STOC is your contribution.**
Thanks for your suggestion; we have updated the draft to list our main contributions:
- We propose Self-Training Over Contrastive learning (STOC) to explore the benefits of combining self-training and contrastive learning.
- Our experiments across eight benchmark datasets highlight that: (i) in domain adaptation settings, self-training and contrastive learning offer significant complementary gains; and (ii) in semi-supervised learning settings, surprisingly, the benefits are not synergistic.
- We theoretically analyze these techniques to understand why self-training improves over contrastive learning under distribution shift in a simplified model of distribution shift.
Overall, our findings demonstrate that, under distribution shift, features produced by contrastive learning can yield a good initialization for self-training to further amplify gains and achieve improved performance, even when either method alone would yield comparatively worse performance.
> **Some popular UDA benchmarks are missing, many concurrent work are using Office-31 and Office-Home. Evaluation on these datasets would be appreciated.**
We would like to clarify that we indeed perform experiments on Office-Home and results are included in Table 1 and Table 2 (columns abbreviated with header OH). We do not include results on Office-31 because (i) the task is similar to Officehome; and (ii) Office-31 dataset is much smaller in size than OfficeHome.
> **Concurrent work are missing, such as [1, 2, 3]. Please position your work against these paper ... More generally ... only compared to simple baselines and not to... specifically designed to tackle UDA tasks.**
We thank the reviewer for pointers to the related work. We will update the draft to include discussion on these papers in our related work.
Please note that the main focus of our paper is to investigate when contrastive learning and self-training benefits are complementary, not to establish a new state-of-the-art for unsupervised domain adaptation. For this reason, we also do not use ImageNet pretraining, to avoid confounding our conclusions about contrastive pretraining. Fair comparisons to other domain adaptation algorithms would require repeating experiments with ImageNet-pretrained models, which we leave for future work. Nevertheless, recent large-scale studies [4,5,6] hint that self-training (e.g., FixMatch) and contrastive learning methods (e.g., SwAV) empirically outperform other existing approaches specifically proposed for domain adaptation (e.g., DANN, CDANN).
Specifically, for [1, 2, 3] we will include the following discussion. Jiang et al. propose an alternative to pseudolabeling to handle label imbalances. In our study, we do not focus on simulating label proportion shifts, and the datasets included in our study mainly involve shifts in the covariate distribution. Hence, the approach proposed by Jiang et al. would be similar to pseudolabeling in the absence of label proportion shifts, which we compare with in our additional experiments (see general response, where our trends continue to hold). Damodaran et al. proposed DeepJDOT, which uses an optimal transport loss to align the features on target data with features on source data; as part of future work it would be interesting to see if distribution matching methods also benefit from contrastive learning. Li et al. proposed a message passing variant to perform domain adaptation specifically for vision transformer architectures (i.e., ViT). In our work, we perform experiments with ResNet architectures, and it remains unclear how to extend Li et al. to our setting.
> **In Table 1, are concurrent models your reimplementation, or pretrained models from the SwAV and FixMatch papers ?**
All models included in our paper are trained again in our experimental setup with (adaptations of) the original code from the SwAV and FixMatch papers.
[4] Sagawa, Shiori, et al. "Extending the WILDS benchmark for unsupervised adaptation." arXiv preprint arXiv:2112.05090 (2021).
[5] Garg, Saurabh, et al. "Rlsbench: Domain adaptation under relaxed label shift." International Conference on Machine Learning. PMLR, 2023.
[6] Shen, Kendrick, et al. "Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the rebuttal and my concerns have been addressed. I am increasing my score to weak accept. | Summary: The paper explores the synergy of self-training and contrastive learning in semi-supervised learning (SSL) and unsupervised domain adaptation (UDA). It discovers their complementary effect in UDA. Furthermore, it proposes Self-Training Over Contrastive learning (STOC) to combine the benefits of the two approaches. The STOC algorithm pretrains the self-training model by contrastive learning on source and target data. Finally, it theoretically analyzes the outcome of initializing with contrastive learning under a specific distribution shift.
Strengths: - The paper provides good theoretical support and empirical evidence to defend their hypothesis. It provides valuable insights into the factors that contribute to the success of contrastive learning and self-training.
- The paper proposes a novel method that can improve unsupervised domain adaptation.
- The findings of this paper have important implications for incorporating unlabeled data.
- The paper is well-written.
Weaknesses: - The combination of semi-supervised learning with self-supervised learning has been proposed in prior work [1], which is not included in the related work. An updated variety of this method can constitute as a relevant benchmark.
- The assumptions about infinite unlabeled data, linear classifier $h$, and scaling the magnitude of the coordinates for augmentation, are restrictive.
- The benchmarks are not competitive. There are several self-supervised methods such as DINO that can supplement the benchmarks.
- There are typos such as in line 40: 'the strong strong results'.
[1] Zhai, Xiaohua, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. "S4l: Self-supervised semi-supervised learning." In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1476-1485. 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The role of the classification head architecture and deep fine-tuning is missing from the experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations and societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and positive feedback. In the next revision, we will **add discussion on Zhai et al. [1]** and correct typos. We hope our responses address any outstanding concerns. Please let us know if there are any additional questions.
> **The combination of semi-supervised learning with self-supervised learning has been proposed in prior work [1], which is not included in the related work.**
Zhai et al. [1] differs from our work significantly in two ways: (1) they only focus on the semi-supervised learning (SSL) setting where there is no distribution shift between the input distribution of the unlabeled and labeled sets, while we focus more on the distribution shift setting (DA), and further contrast our findings in SSL and DA settings; and (2) they use rotation prediction as their pre-training objective while we analyze contrastive learning objectives. We will include this discussion in our related work section (currently in Appendix B).
> **The assumptions about infinite unlabeled data, linear classifier , and scaling the magnitude of the coordinates for augmentation, are restrictive.**
Since the aim of our study is mainly to explain the effects of distribution shift between labeled source and unlabeled target distributions, we avoid the complications in the analysis stemming from the finite-sample nature of the data. Further, the main aim of our theoretical analysis is to explain our empirical findings (the complementary nature of CL and ST) in a specific setup for UDA and SSL, as opposed to describing generic conditions under which CL+ST outperforms CL/ST. Our choice of a linear classifier is common in theoretical analyses of contrastive learning and self-training [2,3,4], and our augmentation distribution has also been studied in prior work [3]. We note that our augmentations assume no knowledge of spurious features, and are similar in principle to augmentations like cropping or blurring that effectively reduce the magnitude of image features (see L191).
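As a toy sketch of this kind of augmentation (the function name and scaling range below are our own illustrative assumptions, not the paper's exact construction): each coordinate's magnitude is independently shrunk by a random factor, with no knowledge of which coordinates are invariant versus spurious:

```python
import numpy as np

def magnitude_scaling_augment(x, rng, low=0.5, high=1.0):
    """Randomly rescale the magnitude of every coordinate of x.

    All coordinates are treated identically, i.e. the augmentation
    assumes no knowledge of which features are spurious (analogous
    to cropping/blurring shrinking image features).
    """
    c = rng.uniform(low, high, size=x.shape)  # per-coordinate shrink factors
    return c * x
```

Because the factors lie in (0, 1], the augmentation only reduces coordinate magnitudes and never flips signs, mirroring the "shrinking" intuition above.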
> **The benchmarks are not competitive. There are several self-supervised methods such as DINO that can supplement the benchmarks.**
In DINO, the self-supervised objective is derived from knowledge distillation and is very different from the contrastive pretraining objective we study. Therefore, we do not include experiments with DINO. Extending our study from CL to other pretraining techniques like DINO is an interesting direction for further research. Our study of CL/ST and their combinations includes results on 8 datasets, each with two settings (SSL/UDA) and four methods (ERM, CL, ST, CL+ST). During the rebuttal period, we also ran experiments with additional combinations of CL+ST methods (beyond SwAV+FixMatch) that we will append to the revised paper. Please see the general response for these results.
> **The role of the classification head architecture and deep fine-tuning is missing from the experiments.**
We are not sure we understand this question completely. If the question is about the "classification head architecture", then we default to a linear-layer head for classification, as is common practice. In Section 4.5 and Appendix E.4 we provide experimental results for full finetuning in our simulations in Sec 4; for real-world experiments we always finetune the full backbone. Please let us know if we misunderstood your question, in which case we would be happy to provide further clarification.
> **The paper does not discuss the limitations and societal impacts of this work.**
We discuss limitations of our work in Appendix A. For the final version, we will move this discussion to the main paper.
[2] Shen, Kendrick, et al. "Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation." International Conference on Machine Learning. PMLR, 2022.
[3] Saunshi, Nikunj, et al. "Understanding contrastive learning requires incorporating inductive biases." International Conference on Machine Learning. PMLR, 2022.
[4] Chen, Yining, et al. "Self-training avoids using spurious features under domain shift." Advances in Neural Information Processing Systems 33 (2020): 21061-21071.
---
Rebuttal Comment 1.1:
Title: Post Rebuttal Response
Comment: Thank you for your response. Overall, the paper seems beneficial to present to the community.
Motivated by these findings, the paper delves into the theoretical study of this phenomenon through a toy Gaussian data model, where each example is composed of domain-invariant (useful for source and target) and spurious features (only useful for source). Experiments on the toy model are shown to follow similar trends to real data. Through analysis on this toy model, the paper provides the following intuitions:
(a) ST can learn a good target classifier if it starts from a good enough initial classifier
(b) ERM on source domain cannot provide a good initializer because it focusses too much on spurious features
(c) CL (with some "generic" augmentations) learns representations that focus more on domain-invariant features, thus providing better features for ST to improve, although the CL classifier itself is not sufficient for good target performance
This theoretical model provides many insights into the role of CL and ST. Finally the paper empirically verifies some of these claims through further probing experiments on real data.
Strengths: - The paper studies an unexplored (to my knowledge) space of combining two popular ideas for domain adaptation. The idea is natural and seemingly effective
- Experiments on standard datasets are convincing enough on the efficacy of the proposed approach. Theoretical analysis on the toy example seems solid, and provides useful intuitions & insights into why the proposed approach could work well. Overall the quality of the technical work seems good. The insights about "CL provides better features while ST improves the head" is a clean contribution
- The paper is clearly written and easy to follow. Sufficient intuitions are provided for many of the theoretical claims.
I didn't read through all the proofs, but the intuitions made sense and the contrastive learning results seemed believable. On the whole, I think this is a solid contribution and vote for accepting.
Weaknesses: - One minor criticism concerns the presentation of Section 5. It required multiple passes to follow, and it might help to include a clearer description of what is being tested. For instance, in L377 it would help to explain how the 14% number was calculated. Some more questions about this are in the next section
- A bit more discussion of prior work in the main paper would have been useful. Sections 5 and 6 do a bit of this in some parts, but I only partially understand how this analysis differs from previous ones, especially for self-training
- The paper discusses contrastive learning and self-training in general, but only tests one pair of methods (SwAV and FixMatch). Including more methods (even in one setting) would be useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (Q1) How many labeled examples were used for experiments in Section 5? I'm wondering if the target probe results in Fig. 3 are due to worse sample complexity rather than expressivity of the features themselves
(Q2) What are the limitations of the proposed theoretical framework? Does the toy model fail in crucial ways to capture realistic distributions? Does the theory for this model predict something that does not hold in practice. A discussion about this would be useful for future work on this topic
(Q3) Does the choice of contrastive learning/self-training method affect the findings?
(Q4) Does linear probe (instead of fine-tuning everything) also work well in practice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Q2 above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and positive feedback. In our revision, we will elaborate on the setup in Sec 5, and move the discussion on related work from App B to the main paper. We **add experiments with two additional combinations of CL+ST** algorithms (see general response) and hope that our responses address any outstanding concerns. Please let us know if there are any additional questions.
> **In L377 … explain how the 14% number was calculated.**
The 14% target probe performance difference is calculated based on the right half of Fig 3. The target linear probe performance of CL+ST (80.7%) is about 14% greater than that of ST (67.1%). The difference is higher when comparing CL with ERM. This leads us to believe that contrastive pretraining significantly raises the ceiling on target performance. We apologize for the confusion and will make this clearer in our final version.
> **More discussion about prior work, especially for self-training.**
We deferred discussion on extended related works to App B. Further, in App G.2 we specifically focus on distinguishing our analysis from prior self-training analyses that treat self-training as a consistency regularization objective. In principle, these works assume strong expansion assumptions on class conditional distributions, and ignore the challenges involved in propagating pseudolabels iteratively when the expansion assumptions are difficult to satisfy. If we get an extra page for the final version, we will move this discussion to the main paper.
> **Only one pair of CL+ST algorithms. Does the choice of contrastive learning/self-training method affect the findings?**
Please see the general response for our reply to this question. We have included results with BT as a CL algorithm and Pseudolabeling as self-training algorithm in the general response. This gives us two additional pairs for CL + ST. Overall, we observe that results with these two new algorithms match the trends we observed with our default choices. This highlights that the complementary benefits of self-training and contrastive learning hold across different variations of each of them. We thank the reviewer for their suggestion.
In theory, we analyze Barlow Twins which has been shown to be equivalent to spectral contrastive loss, and even other non-contrastive objectives (see L949 in App E.2 for more). Similarly, the self-training method we analyze in Sec 4 captures generic iterative pseudolabeling methods done in two stages.
> **How many labeled examples were used for experiments in Section 5?**
For each dataset the number of training examples for linear probes are of order $10^4$ (except for OfficeHome where the target dataset is of order 3000) obtained by 80:20 split of all the target data. The impact of finite sample nature of target samples is negligible because we only train a linear head over the representations, so the generalization error is very small $O(\sqrt{1/n})$ where $n$ is number of samples used to learn head.
> **What are the limitations of the proposed theoretical framework? Does the toy model fail in crucial ways to capture realistic distributions?**
Following are some limitations of our theoretical setup for analyzing the complementary nature of CL and ST, which we are actively exploring as directions of future work. We mention some of these in Appendix A.
- We only analyze the case of spurious correlations in source data, as opposed to more general shifts in the covariate distribution.
- Our theory assumes general augmentations that scale each feature independently (similar to [2]) and doesn’t capture more specific conditions for the augmentations.
- While empirically we see our trends hold with full finetuning (Sec 4.5, App E.4), we only analyze linear probing (as done in prior works [1, 3]).
> **Does the theory for this model predict something that does not hold in practice**
The only important trend that shows a deviation between theory and practice is the following: self-training performs as poorly as ERM (Fig. 2 and Thm. 2), whereas in practice self-training is not necessarily worse (Sec 3). This is mainly due to the normalization step in the self-training iteration (Eq. (2)), which is not done in practice. This normalization is done for mathematical convenience (similar to the analysis in [3]), so that we can control the $l_2$ norm of the self-training iterates. We thank the reviewer for bringing this up and will add this discussion to our theory section.
> **Does linear probe (instead of fine-tuning everything) also work well in practice**
We observe that training a linear head with ST (source labeled + target unlabeled) improves performance over just training the head with source labeled data. In fact, if we used the optimal stopping criterion in hindsight (using target labeled data), the benefits are almost as large as with full finetuning. However, since we do not have target labeled data in practice, we use held-out source data for early stopping and resort to full finetuning, which usually yields slightly better performance than only training the linear head.
In our theoretical setup, we observe that source early stopping works and yields optimal performance. Hence, to analyze CL and ST, we perform linear probing, which is amenable to theoretical treatment (as in prior works [1, 2, 3]).
[1] Shen et al. Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation.
[2] Saunshi et al. Understanding contrastive learning requires incorporating inductive biases
[3] Chen et al. Self-training avoids using spurious features under domain shift.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thanks for the response and clarifications. The new experiments with more contrastive and self-training methods would certainly be a useful additions. I maintain my positive opinion of this paper. | Summary: This paper investigates the complementary benefits of combining self-training and contrastive pretraining for domain adaptation under distribution shifts. Through an empirical study on 8 benchmarks, the authors demonstrate that applying self-training (FixMatch) after contrastive pretraining (SwAV) yields substantial accuracy gains over either approach alone. To understand this synergy, the authors analyze a simplified theoretical setup indicating that while contrastive pretraining can amplify signal along invariant features, it may retain dependence on spurious source-only correlations. On the other hand, self-training can effectively unlearn these spurious dependencies and improve linear transfer of representations, achieving optimal target performance. The authors further apply linear probing on representations to verify their theoretical findings empirically. Overall, this work highlights the potential of combining self-training and contrastive learning to address challenges posed by distribution shifts, grounded by both empirical evidence and theoretical analysis.
Strengths: **Originality**: The paper explores the promising yet underexplored direction of combining self-training and contrastive learning to address distribution shifts. The extensive empirical study systematically investigates their complementary benefits across diverse benchmarks. The theoretical analysis offers unique insights into their synergistic effects.
**Quality**: The empirical study is relatively extensive, spanning 8 datasets with careful controls. Despite simplifying assumptions, the theory makes falsifiable predictions that are precisely analyzed and align well with observations. Additional probing experiments further validate the theoretical intuitions.
**Clarity**: The paper is clearly presented overall. The problem setup and motivation are lucid, and the methods and experiments are thoroughly explained. The authors effectively convey the core conceptual messages and insights from their theoretical analysis.
**Significance**: Distribution shifts ubiquitously challenge ML model generalization in practice. This paper highlights the significant potential of combining self-training and contrastive learning to address this problem, demonstrating substantial empirical gains and proposing explanations grounded in both theory and experiments. Overall, I think this paper will be of interest to the ML research community of distribution shifts.
Weaknesses: **Small Pre-training Set**: The authors have chosen to train models using contrastive learning (CL) from scratch on a selection of small datasets from BREEDS and WILDS. The authors have justified this approach by stating, "We have opted not to use off-the-shelf pretrained models (e.g., on Imagenet [67]) to avoid confounding our conclusions about contrastive pretraining." However, in practice, pretraining CL models on small datasets is not a common practice, given the availability of powerful off-the-shelf CL models that have been pretrained on larger datasets (e.g., from SimCLR, iBot to DINOv2). If the paper's results only apply to CL models pretrained on small datasets and do not generalize to more powerful contemporary CL models, the utility of this work could be limited. I would recommend that the authors conduct studies of modern off-the-shelf CL models to strengthen this work. For instance, off-the-shelf ImageNet-pretrained CL models could be used as initialization, followed by self-training, and then evaluation on various shifted datasets (numerous ImageNet shifted datasets are available). Please note, it is not necessary to strive for results during the rebuttal period. This paper is already commendable in my view, I merely want to offer suggestions that could further enhance its impact.
**Limited Combination of CL+ST Algorithms** In this study, the authors have exclusively utilized SwAV+FixMatch, which are indeed effective algorithms. However, the presentation would be more compelling if a wider range of combinations were explored. Given that the authors employ Barlow Twins in their theoretical analysis, would it not be more fitting to also include Barlow Twins as a CL algorithm in the empirical studies?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and their detailed and constructive feedback. For the rebuttal, we **added new results with Barlow Twins** as the pretraining algorithm and some **preliminary results with Imagenet pretrained networks**. Please let us know if this addresses all outstanding concerns or if there are any additional questions.
> **Small Pretraining Set**
Thanks for your suggestion; we agree this would be an interesting direction of study that could further strengthen the results. We performed preliminary experiments where we finetune Imagenet-pretrained networks with FixMatch training (i.e., we replace contrastive pretraining with Imagenet pretraining), and we observe that FixMatch continues to improve over ERM (source-only) models, both pretrained on Imagenet.
| Dataset | ERM (Imagenet pretrained) |FixMatch (Imagenet pretrained) |
|--------------|-------|------------------|
| OfficeHome (avg on 3 shifts) | 47.1 | 57.4 |
| Visda | 44.9 | 72.8 |
Note, we did not perform these experiments with the BREEDS datasets, because they are subsets of Imagenet and Imagenet pretraining leaks information about the target. We plan to explore the impact of leveraging large off-the-shelf pretrained models, e.g., DINOv2 and CLIP, in our revision.
> **Results with Barlow Twins as a CL algorithm**
We have included results with BT as a CL algorithm and Pseudolabeling as self-training algorithm in the general response. Overall, we observe that results with these two new algorithms match the trends we observed with our default choices. This highlights that the complementary benefits of self-training and contrastive learning hold across different variations of each of them. We thank the reviewer for their suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. While the complementary ERM result is commendable, I believe there has been a misunderstanding. My intention was to suggest that you use Contrastive Learning (CL) models pre-trained on ImageNet instead of ERM models. For example, the official Barlow Twins model was pre-trained on ImageNet (https://github.com/facebookresearch/barlowtwins), and you can directly fine-tune it using FixMatch.
---
Reply to Comment 1.1.1:
Title: Updated results with Barlow Twins model pre-trained on Imagenet
Comment: We thank the reviewer for engaging in the discussion and apologize for misunderstanding your suggestion. We re-ran experiments as you suggested, i.e., with the Barlow Twins model pre-trained on Imagenet, on 4 datasets: Entity13, Nonliving26, Officehome, and Visda. We tabulate the results below. We continue to observe that FixMatch improves over ERM (source-only) models for Barlow Twins models pre-trained on Imagenet.
| | ERM (Imagenet BT) | ST (Imagenet BT) |
|---------------------------|-------------------|------------------|
| Entity13 | 81.0 | **85.4** |
| NonLiving26 | 62.3 | **69.7** |
| Visda (avg 2 shifts) | 52.5 | **69.8** |
| Officehome (avg 3 shifts) | 46.4 | **49.3** |
We will include these results in the updated draft, and we believe these results strengthen our findings as we can hope to leverage off-the-shelf contrastively pre-trained models to combine the benefits of CL and ST. | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their thoughtful feedback and are glad to see all of them recommending acceptance. Per their feedback we have **added experiments on more combinations of CL+ST** where we find our empirical findings on SWaV+FixMatch continue to hold. In the general response, we address one common concern shared by reviewers, and address others as individual responses to each reviewer. Please let us know if you have any additional concerns or if all outstanding concerns are addressed.
> **Limited combination of contrastive learning and self-training algorithms.**
In our experiments, we default to using the SWaV backbone for contrastive pretraining and FixMatch for self-training mainly because prior works [1, 2, 3] note that SWaV outperforms other contrastive pretraining methods like SimCLR, BYOL, etc. Having said that, for the rebuttal, we provide experimental results for two additional combinations: (1) SWaV + pseudolabeling (different self-training method), and (2) Barlow Twins + FixMatch (different contrastive pretraining method). Due to rebuttal time constraints we could only run on two datasets for each, but will add results on the remaining six in our revision.
| Dataset | ERM | ST (FixMatch) | CL (Barlow Twins) | STOC (Barlow Twins + FixMatch) |
|--------------|-------|---------------|-------------------|--------------------------------|
| Entity-13 | 68.32 | 77.93 | 81.04 | 86.23 |
| Nonliving-26 | 45.54 | 56.79 | 62.17 | 71.46 |
| Dataset | ERM | ST (Pseudolabel) | CL (SwAV) | STOC (SwAV+Pseudolabel) |
|--------------|-------|------------------|-------|------------------------|
| Living-17 | 60.31 | 69.34 | 74.14 | 79.81 |
| Nonliving-26 | 45.54 | 58.25 | 57.02 | 67.87 |
Results with these two new combinations match the trends in Sec 3 (i.e., we observe that STOC improves over CL and ST alone), similar to what we observed with SWaV+FixMatch. In fact, on these datasets Barlow Twins pretraining improves over SwAV pretraining. This reinforces that the complementary benefits of self-training and contrastive learning hold across different variations of pretraining and self-training objectives. We thank the reviewers for their suggestion and will include these ablations on all datasets in our revision.
[1] Sagawa et al. "Extending the WILDS benchmark for unsupervised adaptation." arXiv preprint arXiv:2112.05090 (2021).
[2] Garg et al. "Rlsbench: Domain adaptation under relaxed label shift." International Conference on Machine Learning. PMLR, 2023.
[3] Shen et al. "Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation." International Conference on Machine Learning. PMLR, 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | Accept (poster) | Summary: This work proposes a solution for multimodal multi-task training and demonstrates that the model trained via this strategy outperforms other approaches on zero-shot learning problems. The proposed solution mainly relies on existing components including AGD, the JAX library, DropToken, etc. It seems that this work can be helpful for multimodal-related work.
Strengths: 1) The proposed training strategy and solution can help related work in multimodal research area.
2) The training strategy demonstrated better zero-shot performance than other compared works.
Weaknesses: 1) If this work focuses on the training strategy, more studies on downstream tasks are needed.
2) Although the work focuses on the training solution, more analysis of the multimodal experimental results is needed. For example, an analysis of the improvement over other approaches, and especially in what kinds of cases the new approach improves performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Could the author demonstrate the major difference compared to AGD? In line 119, the author mentioned that it’s a more generic solution based on AGD, however, not clear about the innovation and difference.
2) How can the proposed approach be applied on these cases where one or two modalities are missing?
3) Will the proposed approach handle the situation like training with several different datasets together? And how to deal with the difference among them? Is there any other additional steps needed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing a review of our paper, here we try to answer some of the concerns and comments about the paper.
>If this work focuses on the training strategy, more studies on downstream tasks are needed.
What kind of downstream tasks are you looking for? We provide evaluation of zero-shot ImageNet, CIFAR, Kinetics, UCF, HMDB, and ESC in our main results table, which should already provide a large number of downstream benchmarks. Our ablations furthermore look at additional benchmarks in context of the various modalities. We also do not solely focus on the training strategy, but also highlight the importance of using the MoE architecture as a way to integrate many modalities into one model without harming performance, and that is confirmed in our results across many downstream evaluations and ablations.
>Although the work focuses on the training solution, more analysis of multimodal experimental results analysis are needed. For example, the analysis of the improvement over other approaches, especially what kind of cases the new approach help improve the performance?
We show specifically the improvement of our approach when integrating the different parts of our training strategy in various figures and tables (multimodal multitask AGD, Mixture-of-Experts, multi-scale resolution, etc.). We also compare with other similar models (for image-text MoE, we compare with LIMoE; for the previous SoTA on zero-shot video classification, we compare with VideoCoCa). See also our supplementary material for additional findings and results where we explain which methods are most important for our modeling and training strategies. If there is a more specific set of experiments or evaluations that is missing and weakens our claims, please report it so we can address it.
>Could the author demonstrate the major difference compared to AGD? In line 119, the author mentioned that it’s a more generic solution based on AGD, however, not clear about the innovation and difference.
We propose a novel generalization of the approach which has not been explored in any other work. Due to technical limitations of previous implementations of AGD, we were unable to find other methods training efficiently on many modalities each with their own dynamic input shapes; the most popular techniques tend to waste computation on padding, which we found to be a significant source of inefficiency (and graph compilation is quite important to be able to fully saturate the computation on an accelerator). By carefully constructing computation graphs that leverage primitives such as a compilation cache in a modality-agnostic way, we can maximize effective training FLOPs utilization across distributed accelerators. Previous instantiations of AGD only considered a special case where all of the inputs are fixed in shape/size. We show that this feature is quite important to be able to train efficiently on many different modalities at the same time. This is a point we may need to emphasize more strongly; we can provide more results on the training efficiency when using our accelerated AGD vs. prior approaches.
>How can the proposed approach be applied on these cases where one or two modalities are missing?
This is the core part of our accelerated AGD technique. Because each step only optimizes on a single dataset and alternates training between them, we can define a dataset with only the modalities we want to specify for that step. For example, if we want to optimize on audio+video data, we can compile a computation graph that takes audio data as input and computes classification, another computation graph that takes video data as input and also computes classification, and a third computation graph that computes an audio-video contrastive loss. So long as there exists a dataset definition that matches the modalities/objectives of at least one of the computation graphs, it can be used in training. So if we have a dataset with missing data/modalities, we simply leverage the computation graph compatible with the losses where that modality is missing and sample it as a new task.
>Will the proposed approach handle the situation like training with several different datasets together? And how to deal with the difference among them? Is there any other additional steps needed?
Yes, this is precisely our approach, and the major reason why we can obtain state-of-the-art zero-shot results. Without being able to train multiple datasets with different subsets of modalities we would not be able to make our approach work. We explain in section 3 how defining datasets in separate compiled computation graphs can allow for efficient multitasking, only computing on the data that is available, without relying on any expensive padding or gradient masking that other works typically rely on. For a concrete example, we might start training step 0 by computing the loss for a video-text-audio contrastive task on VideoCC, followed by an image classification task on JFT on training step 1, and then an image-text contrastive task on CC12M on step 2. We can schedule these datasets/objectives in any order because the computation graphs for each are cached after they are encountered for the first time, this is all automatic due to the `jax.jit` implementation (and this should also work with `torch.compile`, but we have not implemented this).
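The scheduling logic described above can be sketched in plain Python. All names below are hypothetical and chosen only for illustration; in the actual implementation the caching of compiled graphs is handled automatically by `jax.jit`.

```python
# Minimal sketch of AGD-style task scheduling with a compilation cache keyed
# by task signature (hypothetical names; not the actual implementation).
compiled_cache = {}

def get_compiled_step(modalities, objective):
    """Return a train-step function, compiling it only once per unique
    (modalities, objective) signature and reusing it afterwards."""
    key = (tuple(sorted(modalities)), objective)
    if key not in compiled_cache:
        # In practice, this is where jax.jit would trace and compile the graph.
        compiled_cache[key] = lambda batch: (key, len(batch))
    return compiled_cache[key]

# Alternating schedule over heterogeneous datasets: each step touches only the
# modalities present in the sampled dataset, so no padding or masking is needed.
schedule = [
    ({"video", "text", "audio"}, "contrastive"),   # e.g. VideoCC
    ({"image"}, "classification"),                 # e.g. JFT
    ({"image", "text"}, "contrastive"),            # e.g. CC12M
    ({"video", "text", "audio"}, "contrastive"),   # cache hit: reuses step 0's graph
]
for modalities, objective in schedule:
    step_fn = get_compiled_step(modalities, objective)
    step_fn(batch=[0, 1, 2, 3])

print(len(compiled_cache))  # → 3 (four steps, but only three unique graphs)
```

The key point is that compilation cost is paid once per unique signature, after which any ordering of datasets/objectives can be scheduled for free.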
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I would like to keep my original rating. | Summary: This paper proposes Integrated Multimodal Perception, which can use image, video, and audio datasets for multimodal and multitask training. The method is scalable, benefiting from the alternating gradient descent as it can alternate diverse modalities, loss functions, and datasets to perform gradient descent. Moreover, using a mixture of experts can help handle different modalities with only a modality-agnostic encoder. The multi-resolution training accelerates the training by using different spatial resolutions, sequence lengths, and batch sizes.
Strengths: 1. This paper provides a generic multimodal training solution involving input, model, and objective and successfully trained models with a combination of image, video, and audio datasets.
2. It achieves SOTA zero-shot results on video action recognition.
3. The paper is well-written and easy to follow.
Weaknesses: 1. Ablation studies show that alternating between the objectives on each step is better than combining them by summing them. Is this true for all cases or only for large-scale pretraining? Many papers on various topics sum up multiple losses for training. For example, in semi-supervised learning, the loss generally consists of two parts: a supervised loss and an unsupervised loss. We may need to give a proper context for this conclusion.
2. The method's two technical components, alternating gradient descent and mixture of experts, mainly follow previous works. This paper integrates them and applies them to a larger scale of pretraining.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Algorithm 1 shows the sampling uses step t and model state S_t, but Lines 142-143 say that sampling is directly proportional to the number of examples in each dataset. That is, the more samples a task has, the more likely the dataset gets sampled. It would be better to make the algorithm consistent with the descriptions. Moreover, in Algorithm 1, should there be another mini-batch sampling given a chosen task? Should the sampling function f be only for task/dataset sampling?
2. Line 227 mentions filtering examples close to the downstream tasks. How is the filtering conducted?
3. Why the ablation study in Section 4.3 uses spatial resolution 224 while the main results in Table 1 use resolution 256 during training?
4. The paper mentions using JAX in several places, like jax.jit compiling graph-of-interest at runtime, jax.checkpoint saving memory by not checkpointing any of the 152 intermediate model states, and jax.lax.scan accelerating the graph compilation. I'd like to know whether Pytorch can also do so. I know Pytorch also has checkpointing functionality, and Pytorch 2.0 provides torch.compile() API.
5. In Figure 3, a 224x224 image results in 196 tokens, while a 320x320 image corresponds to 400 tokens. After dropping 50% of the tokens, it has 200 tokens, which is longer than 196 tokens. Do you use padding to make them have the same length in experiments?
6. Line 237 mentions a patch size of 4x16x16. What does the 4 represent? An image generally has 3 RGB channels. Shouldn't it be 3x16x16? Line 238 says image resolution is 4x256x256, which also confuses me.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The experiments seem not to show whether pretraining with one modality can help improve another modality's downstream performance. For example, whether the image pretraining datasets help improve the downstream video action recognition results and whether using the video pretraining datasets can boost the downstream performance on ImageNet-1K.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for a very detailed review, we will try our best to address your comments.
>Ablation studies show that alternating between the objectives on each step is better than combining them by summing them. Is this true for all cases or only for large-scale pretraining?
This is a good point, and we are careful to note that this experiment is limited to the case of multimodal multitask pretraining, especially in the context of classification and contrastive losses. However, we should also note that we provide a theoretical basis in addition for our empirical results (see lines 124-127), i.e., it is proven that the gradients of tasks applied across multiple optimization steps are either equal or more marginally convex than summing the gradients, which can reduce the difficulty of the optimization problem. This is what motivated our initial experiment and works very well for our model setup, but we do not claim that alternating training would result in better downstream performance than summing gradients on any possible combination of tasks. We would argue that this is not a weakness, but simply another broad direction which is out of scope of the paper.
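To illustrate only the scheduling difference (this toy example is not the paper's convexity argument, just a sketch on two hypothetical quadratic losses), consider gradient descent under the two regimes:

```python
# Toy comparison of summed vs. alternating gradient steps on two quadratic
# losses L1(w) = (w - 1)^2 and L2(w) = (w + 1)^2. Purely illustrative.
def grad_l1(w): return 2 * (w - 1)
def grad_l2(w): return 2 * (w + 1)

lr, steps = 0.1, 200

# Summed objective: each update uses grad(L1) + grad(L2).
w_sum = 5.0
for _ in range(steps):
    w_sum -= lr * (grad_l1(w_sum) + grad_l2(w_sum))

# Alternating (AGD-style): each step applies only one task's gradient.
w_alt = 5.0
for t in range(2 * steps):
    g = grad_l1 if t % 2 == 0 else grad_l2
    w_alt -= lr * g(w_alt)

# The summed schedule reaches the joint minimizer w* = 0 of L1 + L2 exactly;
# the alternating schedule settles into an O(lr)-sized cycle around it.
print(round(w_sum, 4), round(w_alt, 4))  # → 0.0 -0.1111
```

On this convex toy problem both schedules end up near the same solution; the interesting claims in the paper concern the non-convex multimodal setting, where the per-step gradients can be marginally more convex than their sum.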
>The method's two technical components, alternating gradient descent and mixture of experts, mainly follow previous works. This paper integrates them and applies them to a larger scale of pretraining.
We propose a novel generalization of AGD which has not been explored in other works. Due to technical limitations of previous implementations of the approach, we were unable to find other methods training efficiently on many modalities each with their own dynamic input shapes; the most popular techniques tend to waste computation on padding which we found to be a significant source of inefficiency. By carefully constructing computation graphs that leverage primitives such as a compilation cache in a modality agnostic way, we can maximize effective training FLOPs utilization across distributed accelerators. Our accelerated AGD approach is also what unlocks the efficient use of multi-scale (multi-resolution) data, which to our knowledge, is a novel training approach applied to multimodal training not observed in other works.
>Algorithm 1 shows the sampling uses step t and model state S_t, but Lines 142-143 say that sampling is directly proportional to the number of examples in each dataset. That is, the more samples a task has, the more likely the dataset gets sampled. It would be better to make the algorithm consistent with the descriptions.
In this case, f(t, S_t) would simply compute over a probability distribution independent of t and S_t. We intentionally left this open to try with alternative sampling algorithms, such as [29], but we found the global sampling strategy to work well enough in combination with our other improvements. If you would prefer we reword this, we can do so.
>Moreover, in Algorithm 1, should there be another mini-batch sampling given a chosen task? Should the sampling function f be only for task/dataset sampling?
We consider the function f to only be for choosing the dataset-objective pair; the sampling function operates on the current time step t and, optionally, the previous model state. Once a task is chosen, a minibatch is sampled from that dataset as is usually done.
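As a sketch of the two-stage sampling described above (the dataset names, sizes, and function names here are illustrative, not the paper's implementation): first choose a dataset-objective pair with probability proportional to dataset size, then draw a minibatch from the chosen dataset only.

```python
import random

random.seed(0)

# Hypothetical dataset-objective pairs with different sizes
datasets = {
    ("imagenet", "classification"): list(range(1000)),
    ("webvid",   "contrastive"):    list(range(250)),
}

def sample_task(tasks):
    # f independent of t and S_t: probability proportional to dataset size
    sizes = [len(datasets[k]) for k in tasks]
    return random.choices(tasks, weights=sizes, k=1)[0]

def sample_minibatch(task, batch_size=8):
    # once a task is chosen, a minibatch is drawn from that dataset alone
    return random.sample(datasets[task], batch_size)

tasks = list(datasets)
counts = {k: 0 for k in tasks}
for _ in range(5000):
    counts[sample_task(tasks)] += 1
# the 4x-larger dataset is chosen roughly 4x as often
```

Alternative samplers (e.g., ones that actually use t or S_t, as in [29]) would only change the body of `sample_task`.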
>Line 227 mentions filtering examples close to the downstream tasks. How is the filtering conducted?
We use perceptual hashing, as used in similar works (e.g., [46]), to make sure that duplicate images or frames are removed if they appear in the validation set, avoiding data leakage. We will update the paper with this information.
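As a rough illustration of the idea (a minimal pure-Python average hash over already-downscaled 8x8 grayscale grids; [46] and the actual pipeline may use a different perceptual hash such as pHash): near-duplicate frames produce hashes with a small Hamming distance and can then be filtered.

```python
def average_hash(pixels):
    """64-bit aHash of an 8x8 grayscale grid (values 0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1, h2, threshold=5):
    # small Hamming distance => perceptually similar images/frames
    return hamming(h1, h2) <= threshold

# near-duplicate pair: a gradient "frame" and a slightly brightened copy
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
bright = [[min(255, p + 3) for p in row] for row in frame]
inverted = [[255 - p for p in row] for row in frame]  # clearly different
```

Small pixel-level perturbations leave the hash nearly unchanged, which is what makes this robust for deduplicating video frames against a validation set.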
>Why the ablation study in Section 4.3 uses spatial resolution 224 while the main results in Table 1 use resolution 256 during training?
These ablations were mainly intended to closely match the resolutions and datasets seen in the literature so they can be more easily compared and reproduced. It is only in our large-scale pretraining that we break from this setup to try to achieve the best performance.
>I'd like to know whether Pytorch can also do so.
Yes, actually. We mostly speak to JAX because our implementation is based on it, but analogously, it is possible to use `torch.compile` to construct an AGD training pipeline and achieve a similar effect, along with the `rematerialization` and `scan-over-layers` optimizations we mentioned.
>Do you use padding to make them have the same length in experiments?
No, we don't use padding. The sequence lengths are close enough that we have roughly the same compute/memory usage, so this is not an issue. In practice these two inputs would be compiled separately before alternating training on them.
>Line 237 mentions a patch size of 4x16x16. What does the 4 represent?
The 4 represents the temporal axis, so 4 frames are patched together in the same way the pixels are. But this also requires all inputs to be multiples of 4 frames, so in the case of images we inflate/tile them to 4 frames. This means that images are basically treated as 4-frame videos with no motion, from a modeling perspective.
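A toy sketch of the inflation step described above (hypothetical dimensions; real inputs are multi-channel and the patching is done inside the model's embedding layer): an image is tiled to 4 identical frames so that a 4x16x16 patch applies uniformly to images and videos.

```python
# toy dimensions: a single-channel 32x32 "image"
H, W = 32, 32
image = [[(r + c) % 256 for c in range(W)] for r in range(H)]

# inflate/tile the image along a temporal axis of 4 identical frames,
# so it can be patched exactly like a video with no motion
video = [image] * 4  # shape (T=4, H, W)

# with a 4x16x16 patch, each patch spans all 4 frames and a 16x16 tile
T_P, H_P, W_P = 4, 16, 16
num_patches = (len(video) // T_P) * (H // H_P) * (W // W_P)
```

With these toy dimensions the image yields a 2x2 grid of spatio-temporal patches, each covering the full (static) temporal extent.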
>The experiments seem not to show whether pretraining with one modality can help improve another modality's downstream performance. For example, whether the image pretraining datasets help improve the downstream video action recognition results and whether using the video pretraining datasets can boost the downstream performance on ImageNet-1K.
We have results in Figure 2 in the supplementary material which indicate that integrating multiple modalities like image and video into a dense model tends to harm the accuracy of the other modalities. However, in Figure 5 in the main paper, we see that MoEs reverse this trend and improve performance. Most notably, the addition of video data actually improves the zero-shot performance on the image datasets, which was not true of the dense model.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for answering my questions. I will keep my original score. The authors can further improve the paper's quality by incorporating the answers.
* For example, removing f(t, S_t) from algorithm 1 to align with Lines 142-143 can make the algorithm easier to understand. You may discuss the other possible sampling options in texts. Adding a mini-batch sampling within one task in Algorithm 1 can make the logic smoother.
* The authors can consider adding a section describing the method's limitations and the context for some conclusions, e.g., that alternating among objectives is better than summing them. According to Figure 5, adding new modalities can bring performance drops for other modalities on some tasks, which Reviewer tAnb also notes. Although both modality competition and the sample distribution are universal problems, they are worth discussing explicitly in the paper.
---
Reply to Comment 1.1.1:
Title: Official authors' response to reviewer SrPj
Comment: We really appreciate the reviewer's suggestions. We will definitely address these concerns upon acceptance of the paper.
We have already mentioned these limitations in the paper, but will elaborate on them based on the reviewers' valuable feedback. | Summary: This paper proposes a scalable multimodal multitasking approach. It combines alternating gradient descent and mixture-of-experts to train a unified model. The extensive experiments verify the effectiveness of the proposed method. By scaling up the model, this method sets a new state of the art in zero-shot video classification.
Strengths: - The proposed method achieves excellent performance in zero-shot video classification. It further confirms that improving data and task scalability in multimodal learning is promising.
- This paper is well written and easy to understand.
Weaknesses: - Using alternating gradient descent for efficient multimodal multitasking learning is not new, which has been explored in PolyViT. It seems that the proposed method is an extension with more engineering improvement. Also, the success of MoE architecture has been validated on image-text tasks. The contribution of this paper is to incrementally extend this architecture to video and audio modalities.
- Compared with other methods, the proposed methods use more training data. Does the improvement mainly come from the scale of the training data? If PolyViT is pretrained on a similar scale of the data, does the proposed method still have advantages?
- typo. Line 264, "differ" -> "defer"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In Figure 5, the audio modality contributes less to the performance improvement on the image (or video)-text tasks. How about the importance of image (or video) to audio-related tasks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing a detailed review of our paper; here we try to address the comments on the paper.
>Using alternating gradient descent for efficient multimodal multitasking learning is not new, which has been explored in PolyViT. It seems that the proposed method is an extension with more engineering improvement.
While AGD itself has been used in PolyViT, we propose a novel generalization of the approach which has not been explored in any other work. Due to technical limitations of previous implementations of the approach, we were unable to find other methods training efficiently on many modalities each with their own dynamic input shapes; the most popular techniques tend to waste computation on padding which we found to be a significant source of inefficiency. By carefully constructing computation graphs that leverage primitives such as a compilation cache in a modality agnostic way, we can maximize effective training FLOPs utilization across distributed accelerators. The addition of this along with multi-scale representations of each modality further boosts accuracy and efficiency at the same time, as shown in table 1. We also experiment with both classification and contrastive objectives, where we might have any combination of optional paired or unpaired data, something that PolyViT does not try.
AGD coupled with cross-modal learning is something we found to be very important to our results, as we can see knowledge transfer from videos to images and vice versa. We have results in Figure 2 in the supplementary material which indicate that integrating multiple modalities like image and video into a dense model tends to harm the accuracy of the other modalities. However, in Figure 5 in the main paper, we see that MoEs reverse this trend and improve performance. Most notably, the addition of video data actually improves the zero-shot performance on the image datasets, which was not true of the dense model. These results show that a good diversity of tasks and modalities, enabled by our more flexible version of AGD, in combination with expanded model capacity, is important for multimodal learning. We would not have been able to demonstrate such capability without leveraging the dynamic graph compilation and dataset-objective pair task sampling as described.
>the success of MoE architecture has been validated on image-text tasks
While concurrent work has validated MoEs on image-text tasks, we note that the addition of video and audio data makes it significantly more challenging to show improvement across all four modalities, and this has been the primary focus of the paper, i.e., to demonstrate which scaling techniques are the most useful to integrate these modalities into a single model.
> Does the improvement mainly come from the scale of the training data?
Not all of our results can be explained by scaling up our data. We note that CoCa-B achieves 82.6% on ImageNet1k zero-shot classification while IMP-B, a similarly sized encoder (ViT-B) is 80.5%. However, IMP-MoE-B trained on the same data closes the gap with 83.2%, showing the practical effects of using the MoE architecture. Similarly, our ablations show that alternating training on multi-scale data is useful (figure 3), and multiple objectives (table 2, figure 2) in tandem also help. We admit the addition of more data is certainly helpful, but it is only the combination of all of our approaches (accelerated AGD, multi-scale data, multi-objective, MoE) that we see improvement that is capable of surpassing the state-of-the-art results across multiple multimodal benchmarks. If we simply train a PolyViT architecture on more data, our improvements would be fairly marginal (in figure 2, addition of the larger scale LAION dataset helps some benchmarks but on average the combination of multiple objectives help more).
> In Figure 5, the audio modality contributes less to the performance improvement on the image (or video)-text tasks. How about the importance of image (or video) to audio-related tasks?
We found that one significant challenge in training is in the integration of the audio modality. Especially without our AGD data mixture and MoE improvements, the performance declines significantly due to the difference in how audio data is learned vs. other modalities (see figure 2 in our supplementary material). Despite this, figure 5 shows a minimal penalty of our model on a few tasks, with the benefit that the model now has zero-shot audio capability that previously did not exist.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. The response addressed my concerns well. I would keep my original rating. | Summary: The paper proposed Integrated Multimodal Perception (IMP) for multi-modal multi-task learning. IMP consists of a shared Mixture-of-Experts (MoE) Encoder as well as modality-specific embedding layers and heads re-projecting representations to modality-specific space. Optimizing towards both supervised classification losses and self-supervised contrastive learning NCE losses, the authors proposed to adopt Alternative Gradient Descent (AGD) during training to accommodate different inputs, outputs and learning objectives without bringing too much memory / computation overhead when incorporating more tasks. Furthermore, a multi-resolution strategy was also proposed to train IMP on large batches of video and audio modalities without losing efficiency or accuracy, where various resolutions, sequence lengths, and batch sizes are used dynamically. Experiments on a good amount of public datasets show that the proposed method can achieve better or competitive performance on various downstream zero-shot classification tasks except audio classification on ESC-50. Ablation studies support some of the design choices.
Strengths: 1. The paper introduced a possible way to integrate arbitrary number of modalities into one model. The AGD + unified encoder + MoE design seems an interesting solution without losing efficiency and accuracy when incorporating more modalities **in some situations**.
2. The proposed dynamic resolution strategy is somewhat novel and effective.
3. Experiments on considerably various public datasets demonstrate the effectiveness of the proposed method on zero-shot classification when comparing with previous state-of-the-arts
4. Ablation studies on important model designs were conducted to give a more comprehensive understanding of the methods
Weaknesses: 1. One critical problem of IMP is that it seems to suffer from performance drops when incorporating new modalities as per Fig. 5, possibly due to the shared encoder. Although MoE can alleviate this to some extent in some cases, it does not always work. For example, in Fig. 5, for COCO Text->Image and Image->Text, and K400 zero-shot, IMP can achieve better or similar performance when new modalities of videos are introduced, while IMP-MoE-16Exp obtained worse accuracy. This defect somewhat goes against the major claim of this paper.
2. Another problem is that the model performance on different tasks is subject to change depending on the training sample distribution, according to Tab. 1 and Lines 261-264. While tuning the distribution of samples across modalities can improve audio classification, it is not clear whether performance on the other modalities' classification will drop, based on Weakness 1. It is also not quite clear how robust the proposed method is against changes in the sample distribution, which may hinder the incorporation of more datasets in the future, going against the major claim again.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: While the paper's major claim/contribution is a new method for multi-modal multi-task learning which can integrate any number of modalities/tasks with arbitrary inputs/outputs/objectives, Weaknesses 1 and 2 are critical and should be addressed before an accept decision can be made, in my opinion. Please address them accordingly.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review, we will try to address any concerns encountered in the paper.
>One critical problem of IMP is that it seems to suffer from performance drops when incorporating new modalities as per Fig. 5
We would like to note that this problem of performance drop when integrating new modalities is fairly universal across any model, assuming a fixed computation or parameter budget. The model is forced to balance all modalities across the available computation/parameter budget, so it's no surprise that a model could decrease in average performance with so many diverse modalities competing against each other. Without the integration of our techniques such as MoE, multi-scale data, multi-objective training, and accelerated AGD, we would observe a much steeper decline across all of our benchmarks. For example, we show in Figure 2 of our supplementary material a fairly steady reduction in accuracy after each new modality is integrated, while with MoEs in Figure 5 in the main paper this distribution is remarkably different. In fact, the zero-shot image performance improves with the presence of video data that did not happen without MoEs. So while we may see, e.g., COCO results look lower when comparing image only vs. image+video, our main claim is that, *on average*, the combination of all of our methods provide significantly better improvement in integrating all of these modalities into one model than previous approaches (while also enabling cross-modal knowledge transfer on four separate modalities). These relative reductions in accuracy tend to apply on the small scale but become less prominent when scaling the MoE model as seen in Table 1 (see also section C the supplementary material). We observe fluctuations may occur on very specific benchmarks but the average shows a steady improvement.
We also note that a two-tower or multi-tower model may incur additional memory consumption as the parameters are replicated for each modality (see supplementary material), so we can typically scale our model larger than existing approaches. In Figure 5, we compare our model against itself, so it mainly serves as a way to gauge the influence of various modalities; but because our method also incorporates various aspects of training, when we compare against other models, the intersection of all of our various improvements (multi-task multi-objective multi-resolution data, MoEs, etc.) is very important for surpassing the existing state-of-the-art. We are comparing a highly general multimodal model (one that trains on 4 modalities at the same time) against other larger models more specialized to one or two modalities, but despite that, our methods consume fewer resources and scale across these modalities much more easily.
>Another problem is that the model performance on different tasks is subject to change depending on the training sample distribution, according to Tab. 1 and Lines 261-264
Again, performance differences across changes in the sample distribution are another universal problem for multimodal models; we would see very similar types of concerns for other models as well. Our paper is not specifically about providing a comprehensive solution to this type of problem but rather finding a method that is better than other competing methods. Our models in table 1 are all trained on the same data, so this comparison is not subject to change in the sample distribution, and we see that the use of MoEs and increase in model size tend to bridge large gaps that might be caused by the data distribution despite using the same data for training. We do provide results in Figure 2 and Figure 5 (and Figure 2 in the supplementary material) to suggest that, especially for image and video datasets, these modalities provide mutual improvement when integrated in conjunction with all of our techniques, so scaling one larger would help the other. Audio deserves special consideration, as this modality is sufficiently different from the other modalities that we've observed that any amount of audio data can cause accuracy drops on the other modalities. This is one reason why video+text multimodal works typically avoid modeling audio, as it requires special handling like a separate audio network. Instead, we show a much simpler approach of using a combined MoE tower to mitigate this. Therefore, we would argue that the approach is not negatively impacted by the sample distribution in the same way as competing works, as AGD in conjunction with MoEs and multi-scale data provides a robust way to maximize performance on all modalities even in the presence of highly unbalanced audio to video/image data.
---
Rebuttal Comment 1.1:
Title: The common problems are what researchers need to solve
Comment: I appreciate the authors rebuttal, although it doesn't address any of my concerns.
As the authors also agree, the issues pointed out in my review are common problems of modern methods and therefore need to be addressed properly (I'm not saying "solved") by papers at good conferences like NeurIPS.
What the authors are doing in this paper is first downgrading the performance of the baseline model by using a shared encoder and then showing improvements by using existing techniques. Assuming these techniques really work, the authors should apply these techniques to a better baseline model and make some real contribution to the community.
This downgrading and improving thing cannot convince me of the merits of the paper. Unless my concerns are well addressed, I will argue for reject.
---
Reply to Comment 1.1.1:
Title: Official authors' response to reviewer tAnb
Comment: We appreciate the reviewer's comments on our responses.
We would like to emphasize that the ultimate goal of our paper has been misunderstood. The goal of our paper is not to solve the mentioned problem (the data sample distribution), which is universal across all models and approaches. We rather introduce a collection of novel techniques that significantly improves the training and efficiency of multimodal multi-task models (with many objectives and modalities) compared to the previously established literature.
We would also like to mention that the reviewer's statement about downgrading the model is incorrect. We do provide extensive experiments that show sharing an encoder is only a part of the final performance, since our method actually outperforms a model with dedicated encoders (please see Figure 3 in the Supplementary Materials).
We would like to emphasize that the title of a venue does not change the fact that a problem is universal. We would like to humbly argue that the major goal of papers in such venues is to 1. expose the community to certain fundamental problems, and 2. provide theoretical and/or practical solutions for such problems; both of which we have presented and addressed in our paper. In this paper, we elaborate that by scaling the number of modalities and objectives we hit a certain degradation of performance. We provide solutions based on MoE and AGD to resolve such issues and support those solutions by extensive experiments. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Self-Supervised Reinforcement Learning that Transfers using Random Features | Accept (poster) | Summary: The authors propose a self-supervised method to learn approximate multi-step Q-estimates by leveraging random features as bases for a reward function in the target task. This approach addresses some limitations of model-based and model-free RL. In particular, model-based RL typically suffers from compounding error of predicting future states from predicted states. On the other hand, model-free RL tends to suffer from instability when transferring knowledge. The proposed method mitigates these problems by implicitly learning transition dynamics (without explicitly modeling future trajectories) from offline data in a self-supervised manner based on the prediction of random features.
Strengths: The methodology is interesting. Using random features in a self-supervised manner to form a Q-basis such that one can find a linear combination of these random features for the reward function of a downstream task using approximate multi-step returns makes sense and seems novel.
Weaknesses: The main weakness of this approach lies in the experiments and poor framing of how this work fits in the context of meta-RL. Meta-RL is not mentioned, but it entails this same problem of having multiple tasks with shared dynamics such that the reward function changes from one task to another. As such, the authors should be comparing against meta-RL benchmarks, not standard RL. It is interesting because the authors are running their experiments on a popular meta-RL benchmark called Meta-World and yet they do not compare their approach against meta-RL approaches. One of the reasons why this is currently an unfair comparison is the need for the competing approaches to learn their value functions, etc., from scratch, whereas the proposed approach had the benefit of learning from an offline dataset beforehand and adapting to the target task.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. I am confused by your statement that the Q-function for multi-step returns is independent of the policy when $H = \tau$. You still have \tilde{a}_1, \tilde{a}_2, ..., \tilde{a}_\tau. From what policy do these actions come?
2. Why do you not compare against meta-RL methods? The successor features approach you compare against may count as meta-RL (it is not called that in their original paper though), but it is a method from 2017.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We respond to your comments below.
> The main weakness of this approach lies in the experiments and poor framing of how this work fits in the context of meta-RL. meta-RL is not mentioned, … beforehand and adapting to the target task.
Thanks for the suggestion. We did not compare with meta-RL because *rewards* are usually needed in that setting (in both the meta-training and meta-testing phases), while using a self-supervised approach without requiring reward labels is exactly one of the advantages of our approach. By not requiring rewards, our method avoids assumptions about train/test-time rewards sharing the same distribution, as is typical in meta-RL. Given our problem setting of unlabelled offline datasets and distribution shift between meta-train and meta-test, comparing these settings is not quite apples to apples. However, as per your suggestion, we have added new experimental results and a comparison with the meta-RL algorithm RL2 in figure 1 of the rebuttal pdf. We note that since our problem setting isn't typical meta-RL, which assumes the train/test-time objectives come from the same underlying distribution, we have to adapt RL2 by giving it privileged information, similar to our adaptation of CQL. Out of the four environments on which we evaluate RL2, it is similar to our method on door open but inferior on all others. This shows that meta-RL suffers from out-of-distribution goals, which our method avoids.
> I am confused by your statement that the Q-function for multi-step returns is independent of the policy when $H = \tau$. You still have $\tilde{a}_1, \tilde{a}_2, ..., \tilde{a}_\tau$. From what policy do these actions come?
Great question! A typical Q-function $Q^\pi(s, a)$ refers to the expected value when $a$ is taken at $s$ and the policy $\pi$ is followed thereafter. This makes the dependence on the policy $\pi$ implicit, and the same Q-function cannot be used to evaluate a different policy. In a multi-step open-loop Q-function $Q(s, \tilde{a}_1, \tilde{a}_2, ..., \tilde{a}_\tau)$, the dependence on *actions* is *explicit*, i.e., the Q-value is completely *determined* directly by this sequence of actions (and does not have any implicit dependence on any specific policy $\pi$). When trying to find the optimal sequence of actions, different sequences of actions are generated randomly and then evaluated using the learned Q-function to choose the *best* sequence. Essentially, any policy that can generate sequences of actions with good coverage of all possible sequences would be sufficient. In doing so, the same multi-action Q-function can evaluate many different action sequences within the data, without having any implicit policy dependence as in standard Q-learning. An implicit policy dependence would make it hard to use the same Q-function to evaluate many different action sequences.
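The selection procedure described above can be sketched with a toy example (a hypothetical hand-written open-loop Q-function stands in for the learned one; action set, horizon, and dynamics are made up): sample many candidate action sequences, score each with the same Q-function, and keep the best.

```python
import random

random.seed(1)

ACTIONS = [-1, 0, 1]
TAU = 4  # open-loop action horizon

def q_open_loop(s, action_seq):
    # stand-in for the learned multi-step Q: deterministic toy dynamics
    # s' = s + a with per-step reward -|s'|; the value depends only on
    # (s, a_1..a_tau), never on a policy
    total, state = 0.0, s
    for a in action_seq:
        state += a
        total -= abs(state)
    return total

def plan(s, num_samples=500):
    # random shooting: score many sampled sequences with the SAME
    # Q-function and keep the best one
    best_seq, best_q = None, float("-inf")
    for _ in range(num_samples):
        seq = tuple(random.choice(ACTIONS) for _ in range(TAU))
        q = q_open_loop(s, seq)
        if q > best_q:
            best_seq, best_q = seq, q
    return best_seq, best_q

best_seq, best_q = plan(s=3)  # drives the state toward 0
```

Because the Q-function is a pure function of the action sequence, any sequence generator with good coverage can feed this search, which is the policy-independence point above.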
Please feel free to let us know if there are any other comments that may help reevaluate our paper.
---
Rebuttal Comment 1.1:
Title: Updated experiments
Comment: Thank you for the updated experiments on the pdf. I have raised my score. Should the paper be accepted, please include these results and explanation of meta-RL and how you differ.
---
Rebuttal 2:
Comment: Thank you for the helpful suggestions! If you have additional questions or experiment suggestions regarding our rebuttal, we are happy to answer them! | Summary: The paper introduces RaMP, an approach for fast adaptation to unseen reward functions given offline data collected with arbitrary behavior policies under the same dynamics. RaMP learns a set of basis multi-step Q-functions, each corresponding to a random reward defined as the accumulation of random state-action features. For online adaptation to new rewards, the new reward/Q function is then identified using linear regression given the basis functions and used for control via MPC.
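The adaptation step summarized above can be sketched as follows (a hypothetical toy with linear random features and a made-up reward, not RaMP's actual implementation): fit the unseen reward as a linear combination of frozen random features by least squares; the same coefficients would then linearly recombine the basis Q-functions.

```python
import random

random.seed(0)
K, N = 20, 200  # number of random features, number of offline transitions

# frozen random linear features phi_k(s, a) = w_k . (s, a); linear features
# keep this toy exactly solvable (real random features would be nonlinear)
W = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(K)]
def phi(s, a):
    return [w0 * s + w1 * a for w0, w1 in W]

def reward(s, a):  # unseen downstream reward, revealed at adaptation time
    return s - 0.5 * a

# state-action pairs from the offline dataset (no reward labels needed
# during pretraining; labels are only queried online)
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
feats = [phi(s, a) for s, a in data]
targets = [reward(s, a) for s, a in data]

# online adaptation: least-squares fit r ~ sum_k alpha_k phi_k by full-batch
# gradient descent
alpha = [0.0] * K
for _ in range(500):
    grad = [0.0] * K
    for f, y in zip(feats, targets):
        err = sum(ak * fk for ak, fk in zip(alpha, f)) - y
        for k in range(K):
            grad[k] += err * f[k] / N
    alpha = [ak - 0.02 * gk for ak, gk in zip(alpha, grad)]

mse = sum((sum(ak * fk for ak, fk in zip(alpha, f)) - y) ** 2
          for f, y in zip(feats, targets)) / N
```

In RaMP the coefficients `alpha` would then weight the pretrained basis Q-functions rather than the raw features, giving a Q-estimate for the new reward without any model rollouts.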
**Edit After Rebuttal**:
Raised score from 3 to 5, see comment below for details
Strengths: RaMP's key idea, learning "random feature rewards" which are later combined, is novel and interesting as it allows offline pretraining without "reward labels" and provides an easy and efficient adaptation mechanism. The paper addresses a significant problem, the generalization to novel rewards under little to no assumptions on the knowledge of the current scenario and previously seen rewards, and is mostly clearly written and easy to follow.
Weaknesses: The paper's main weakness is its experimental evaluation, which I think is in its current form insufficient to validate the paper's claims or allow assessment of the method's potential for several reasons:
- The algorithm described in the main paper seems to be improved by several "implementation details" (c.f. Appendix B). While the paper states that "RaMP’s performance already surpasses the baselines even without them", I believe a rigorous analysis of their impact would be necessary to assess the method's potential and workings.
- Baselines: none of the considered model-based RL methods (PeTS, MBPO, Dreamer) was designed to work with dynamics models trained on **offline** data. For a fairer comparison, methods designed to work with such dynamics models (e.g. MOPO[1]) should be considered.
- While RaMP adapts very fast in the considered environments, its final performance is often lower than that of MBPO and/or PeTS, in particular for the Hopper and D'Claw experiments. Further discussion and analysis in the paper would allow a better understanding of RaMP's limitations.
- Compared with most recent work in RL and/or offline RL the paper only uses a small set (8) of relatively simple environments - the maximal state dimension is 39 (meta world, where only a subset of these is relevant for the considered tasks) and maximal action dimension is 9.
- Higher dimensional observations (pixels) are used for some experiments in the supplement but the results are again inconclusive or seem to favor the baseline (Dreamer-V1, which is to the best of my knowledge not SOTA on pixel metaworld [2])
- See questions below.
While the majority of the paper is clearly written and understandable, this is, in my opinion, not the case for section 3.4. Here the assumptions, their realism, and the practical consequences of theorem 3.1 could be stated much more clearly.
[1] MOPO: Model-based Offline Policy Optimization, Yu et al 2020
[2] Masked World Models for Visual Control, Seo et al 2022
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - I am a bit puzzled by the meta-world return curves: If I recall correctly successful execution in most of these tasks corresponds to an episode return of ~ 4,000. However, all presented algorithms achieve max. 1,000 (on "Reach wall") and considerably less in the 3 other tasks. Is there some normalization here? I would prefer to (additionally) see the success rates for these tasks to improve the interpretability of the results.
- What exactly is indicated by the error bars? How statistically significant are these results given that it's only 4 seeds?
- Are the "implementation details" (Appendix B) also used for the baselines? In particular: Additional value function for infinite horizon (B.1) for PeTS, online adaptation (B.2, but adapting the dynamics) for PeTS and MBPO, Planning With Uncertainty (B.3) for PeTS and MBPO, and MPPI (B.4) for PeTS.
- What are the standard "coverage and sampling assumptions" in Theorem 3.1? How realistic are those in practice? What does the theorem add, compared with standard random feature regression results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: I believe all limitations have been addressed, although some only in the supplement (the infinite-horizon problem with the multi-step Q-function), and I believe the paper would benefit from moving this to the main part as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
> The algorithm described in the main paper seems to be improved by several "implementation details" ... I believe a rigorous analysis of their impact would be necessary to assess the method's potential and workings.
Thanks for the suggestion. We will move more details from Appendix B back to the main paper, and make the statement more rigorous by providing ablation experiments, in addition to those in the appendix, to validate the importance of each design choice.
> Baselines: none of the considered model-based RL methods (PeTS, MBPO, Dreamer) was designed to work with dynamics models trained on offline data. For a fairer comparison, methods designed to work with such dynamics models (e.g. MOPO[1]) should be considered.
Thanks for bringing up MOPO. We’d like to clarify that MOPO isn’t directly applicable here, while also providing results for an adapted MOPO given privileged information. MOPO adds a penalization term to the reward based on ensemble disagreement during policy training, while our offline training data contains NO reward. Despite this, we adapt MOPO in a way similar to how we adapted CQL, in Figure 1 of the rebuttal PDF. Even with this privileged information, we observed that MOPO suffers from problems similar to those goal-conditioned CQL faces.
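For concreteness, the penalization MOPO applies can be sketched as follows (a schematic illustration of our own; the disagreement measure and the scale λ here are simplifications, not MOPO's exact per-model-std penalty):

```python
import numpy as np

def penalized_reward(reward_hat, next_state_preds, lam=1.0):
    """MOPO-style pessimism: subtract an uncertainty penalty from the model's
    reward estimate before policy training. Here the penalty is the norm of
    the ensemble's disagreement on the next-state prediction (a schematic
    simplification of the penalty in the MOPO paper).

    reward_hat:       scalar reward estimate
    next_state_preds: array of shape (n_models, state_dim), one prediction
                      per ensemble member
    """
    disagreement = np.linalg.norm(np.std(next_state_preds, axis=0))
    return reward_hat - lam * disagreement
```

An agreeing ensemble leaves the reward untouched, while disagreement (typically on out-of-distribution inputs) drives it down, discouraging the planner from exploiting model error.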
> While RaMP adapts very fast in the considered environments, its final performance is often lower than that of MBPO and/or PeTS ... Further discussion and analysis in the paper would allow a better understanding of RaMPs limitations.
Thank you for the suggestion. We will add relevant discussion and analysis of this behavior in our final version. The differences in these 3 environments are due to an intrinsic limitation of random-shooting MPC compared to policy-based methods. We recognize this limitation and will address it accordingly in the final version.
> Compared with most recent work in RL and/or offline RL the paper only uses a small set (8) of relatively simple environments ... maximal action dimension is 9.
Including the appendix, our paper evaluated on a total of 15 environments before the rebuttal. We have also just added D4RL results in Figure 4 of the rebuttal PDF. We note that we are NOT in an offline reinforcement learning setup, so such benchmarks aren't directly applicable here. We also note that many highly cited works in similar settings, such as successor features, use only 2 environments with much lower action dimensions.
> Higher dimensional observations (pixels) are used for some experiments in the supplement but the results are again inconclusive or seem to favor the baseline (Dreamer-V1, which is to the best of my knowledge not SOTA on pixel metaworld [2])
We present the pixel observation result mainly to illustrate that random projection also works on higher-dimensional data and convolutional networks, rather than to present our method as state-of-the-art in RL from pixels. We will revise the writing and clarify the purpose of that experiment.
> If I recall correctly successful execution in most of these tasks corresponds to an episode return of ~ 4,000. However, all presented algorithms achieve max. 1,000 (on "Reach wall") and considerably less in the 3 other tasks.
Meta-World is an environment that doesn’t explicitly provide a “done” signal. It doesn’t provide a maximum episode length either, so benchmarking on Meta-World requires a manually specified maximum episode length. We chose a short episode length here since the four Meta-World tasks don't need as many steps as in the original paper.
> What exactly is indicated by the error bars? How statistically significant are these results given that it's only 4 seeds?
The error bars here are standard errors. We deem 4 seeds to be sufficient since, unlike online reinforcement learning, we have a static dataset on which the majority of training steps are performed. This makes the models already very stable before the online adaptation phase and also avoids the variance from blind exploration.
> Are the "implementation details" (Appendix B) also used for the baselines? In particular: Additional value function for infinite horizon (B.1) for PeTS, online adaptation (B.2, but adapting the dynamics) for PeTS and MBPO, Planning With Uncertainty (B.3) for PeTS and MBPO, and MPPI (B.4) for PeTS.
Yes, all baseline results are with online adaptation. For Planning With Uncertainty, we use a model ensemble for the MBPO and SF baselines, and a regularized version, MOPO, in rebuttal PDF Figure 1.
> What are the standard "coverage and sampling assumptions" in theorem 3.1? How realistic are those in praxis? What does the theorem add, compared with standard random feature regression results?
We have stated in the theorem that it is an “informal” version of the full result, due to space constraints. We will move the full version back to the main paper. The coverage and sampling assumptions, as we stated formally in Theorem D.1, means that the state-action pair is sampled from some offline data distribution that has coverage of the state-action space (diverse enough), and this is related to the standard “all policy concentrability” assumption in the offline RL theory literature, e.g., [1,2]. It is necessary for many offline RL approaches [3], and is important for our approach to be able to cover *any* new reward functions at test time. Compared to the random feature literature, we additionally need to analyze the propagation of the reward function approximation error to value function approximation (due to sequential decision-making), to adapt the loss used in the literature to that in our setting, and to analyze the effect of multi-step optimization over the Q-functions. We will make these points more explicit.
[1] Finite-time bounds for fitted value iteration
[2] Approximate policy iteration schemes: A comparison
[3] Information-theoretic considerations in batch reinforcement learning
---
Rebuttal Comment 1.1:
Comment: The authors thoroughly addressed my concerns and provided additional results regarding MOPO and the “implementation details”. Given these and the promised revision, I’ll increase my score.
I still would urge the authors to reconsider the Meta-World results though. While the official codebase is indeed ambiguous in this regard, I believe it is common practice to run the tasks for 500 steps each (until an exception is thrown) and report the success rate based on the indicator in the *info* dictionary. Following this protocol would greatly improve the comparability (with other, potentially future, works) and interpretability of these results.
---
Rebuttal 2:
Comment: Thank you for the helpful suggestions! We will try to provide such data or explanation in the final version. | Summary: The paper proposes an approach for unsupervised pre-training of RL agents, ie pre-training on offline agent experience without rewards. The approach generates a set of random reward functions and then learns a successor representation of the state-action space for predicting cumulatives of these rewards on fixed-horizon trajectories from the pre-training data. At test time, it uses a small reward-annotated dataset to learn a linear mapping from the pre-trained successor representation to the target reward. Then it extracts a policy by greedily maximizing the resulting estimates of future cumulative rewards. The paper demonstrates that this allows for faster finetuning than model-based RL since only a linear mapping needs to be learned on the target task data instead of the full Q-function. In contrast to prior work on successor features it does not assume access to given features or a reward-annotated training set to learn features, but instead uses random reward projections.
Strengths: - the problem formulation is impactful: pre-training with reward-free data has great potential for generalist RL agents
- the paper is well-written and easy to follow, it clearly outlines the problem formulation, explains the method and provides theoretical justifications
- the experimental evaluation seems comprehensive, with only minor experiments missing (see below): the paper compares to a representative set of baselines on multiple environments, including image-based control (though there with limited success) and also performs ablations of the key elements of the method
Weaknesses: (A) **Relation to prior work not sufficiently explained**: the proposed approach heavily builds on top of prior work on success features for RL, yet the current submission only mentions this relation in a half-paragraph in the main paper and adds a slightly more detailed discussion at the very end of the appendix. It would be helpful to more clearly introduce this most relevant prior work and explain the deltas, so it is easier for readers to understand the main novel components. Concretely, one could add a preliminaries section that summarizes the idea of successor features and then point out the two main novelties in the proposed approach: random reward projections to be independent of training tasks and action chunking to be independent of training policies.
(B) **Random reward features don't scale well to images**: the main difference to prior successor feature works is that the proposed approach avoids the assumption of pre-defined features or given pre-training task distributions by using random reward projections. The downside of this is that the algorithm cannot "discard" any information and struggles on high-dimensional inputs like images, where predicting random rewards over pixel inputs is complex and lacks semantic meaning. I agree with the authors that this can potentially be mitigated with pre-trained representations, but since removing the assumption of pre-defined features is one of the main selling points of this work, this tradeoff should be discussed more prominently in the main paper (currently only in appendix C.3) and the corresponding image-based results should be included in the main text. It would also be helpful to show the successor feature baseline results on the image-based domain.
(C) **Missing Ablation**: The other introduced novelty is the use of action chunking, ie conditioning the Q-function on a sequence of H actions instead of on a single action. However, this choice is not ablated. It would be good to include versions of the proposed approach without action chunking (ie H = 1) and with different horizon lengths to see the benefit of chunking.
(D) **Missing baseline**: The paper compares to goal-conditioned offline RL (CQL) but with a pre-specified selection of goals. Prior work has instead proposed to use goal-conditioned offline RL on randomly sampled goal states from an offline dataset for pre-training [1]. This has two benefits: (1) it does not require pre-defined goals / tasks, fully matching the assumptions of the proposed approach, (2) because of that it may be more robust to test time adaptation to new tasks. Thus, it would be good to add comparison to Actionable Models on all environments.
**Minor Point**:
(E) **MPC is misnomer**: The paper refers to the downstream policy extraction as "model predictive control" and "planning" (see eg Fig 2). However it does not perform prediction or use a model. Instead it performs greedy policy extraction by maximizing the Q-function, but uses action chunking (ie N-step action inputs). Thus I would call this step "greedy policy extraction with action chunking" instead.
[1] Actionable Models, Chebotar et al. 2021
## Summary of Review
Overall I like the paper and am happy to accept it if the authors can more clearly mention the connections to prior work on successor features, discuss tradeoffs for image-based domains and add the suggested ablation + baselines. The idea of using random reward projections for successor feature learning is novel to my knowledge, but I am not 100% certain and will thus assign lower confidence to my review.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations on image-based environments are demonstrated in the appendix but should be discussed more prominently in the main paper (see weaknesses (B)).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We respond to your comments below.
> Relation to prior work not sufficiently explained.
Thank you for the great suggestion. We will add a preliminaries section summarizing the idea of successor features and emphasizing our novelties there, in the final version of the paper. This would indeed make our contributions clearer and more prominent for readers. You’ve hit the nail on the head with the key elements: random features rather than requiring pre-known successor features, and removing policy dependence by modeling open-loop Q-values with respect to action sequences rather than having an implicit policy dependence. These are crucial because they allow us to learn from offline datasets without reward labels, and to learn Q-functions that transfer significantly better than techniques like generalized policy improvement (GPI). GPI can only take the piecewise max over many optimal policies, while RaMP can optimize for any policy that is within the data support. We will make this significantly clearer in the document.
> Random reward features don't scale well to images.
Thanks for the insightful comment. While RaMP can work directly from images using random convolutional features, as we have shown in Appendix C.5, it may perform more effectively using pretrained image features rather than completely random convolutional features.
While one of our motivations is to *remove* pre-defined features, what we mean by using *pre-trained* features as a remedy to the scalability issue is that we would first project images onto some feature space, e.g., a latent space pre-trained on standard datasets such as ImageNet, and then build random MLP features on top of these pretrained features (as in our state-based experiments). This is a fairly standard approach to reduce the dimensionality when dealing with images, and it still allows random features, now built on top of pretrained image features (not using those image features directly). We will make this point more explicit, and also move the related discussions and image-based results from the appendix to the main paper.
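A minimal sketch of what we mean (illustrative only; `pretrained_embed` stands in for any frozen pretrained encoder, and all shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, hidden, n_feat = 512, 256, 64
img_shape = (32, 32, 3)  # hypothetical image observation

# Stand-in for a frozen pretrained encoder (e.g. an ImageNet backbone);
# here just a fixed projection so the sketch stays self-contained.
P = rng.normal(size=(embed_dim, int(np.prod(img_shape)))) / np.sqrt(np.prod(img_shape))

def pretrained_embed(image):
    return P @ image.reshape(-1)  # (embed_dim,) pretrained latent

# Random MLP features are built on top of the pretrained embedding rather
# than on raw pixels, shrinking the space the random features must cover.
W1 = rng.normal(size=(embed_dim, hidden)) / np.sqrt(embed_dim)
W2 = rng.normal(size=(hidden, n_feat)) / np.sqrt(hidden)

def random_features(image):
    z = pretrained_embed(image)
    return np.tanh(z @ W1) @ W2  # (n_feat,) random reward features

feats = random_features(rng.normal(size=img_shape))
```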
> Missing Ablation.
Thanks for the suggestion. We ablated the horizon earlier under the finite-horizon variant by shortening the horizon by half and by a factor of four. We found that longer-horizon environments like Hopper suffer slightly, while our method retains similar performance on the Meta-World environments. Unfortunately, we ran out of time benchmarking enough seeds for this ablation under our current infinite-horizon variant. We appreciate the suggestion and will add the final results to the paper.
> Missing baseline - Actionable models
Since the source code is not available, we were not able to run this baseline in the short rebuttal period, but we will certainly run it and include it in the final version. We would like to note that while Actionable Models is restricted to goal-conditioned problems, RaMP can adapt to any reward function, even non-goal-reaching ones.
> Minor point – MPC is a misnomer.
Thanks for the suggestion. The step uses “prediction” in the sense that the Q-function we optimize is an “accumulated” reward over $H$ steps, and it belongs to the MPC framework in that we “optimize” over a window of $H$ steps but only *implement* the first step, and then interact with the environment before “re-optimizing” the sequence of actions. However, we certainly acknowledge the confusion here and will rename the method accordingly.
Please feel free to let us know if there are any other questions.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal!
Comment: Thanks for providing the rebuttal. I am generally happy with the response and appreciate the authors' willingness to incorporate the feedback. I also understand that the short rebuttal window can make it hard to implement new ablations / baselines on the spot, particularly if many seeds are required. I trust that the authors will follow through on their promised changes. Please also add comparisons to baselines on image-based domains as mentioned in my review.
I have also looked over the reviews of other reviewers specifically to understand the low ratings of reviewers 8qBK and gqpD. After reading through the reviews and rebuttals I find the criticisms rather surface-level and don’t fully agree with them:
- compare to meta-RL (R-8qBK): agreed with the authors that meta-RL requires training tasks / rewards, which their method does not. There is work on unsupervised meta-RL though that could allow for clean comparison (Gupta et al 2018, https://arxiv.org/abs/1806.04640)
- compare to IQL pre-training (R-gqpD): I don’t think IQL w/ final reward 1 for value pre-training is ideal on non-expert data — the proposed projection after random reward pre-training seems more general
I believe the authors adequately addressed the concerns of these reviews so I stand by my accept recommendation and am willing to defend it to the AC. I will also increase my confidence given that no other reviewer raised concerns about too closely related work.
---
Rebuttal 2:
Comment: Thank you for the helpful suggestions! If you have additional questions or experiment suggestions regarding our rebuttal, we are happy to answer that! | Summary: The paper tackles the problem of
In the training phase, they create $K$ randomly initialized neural networks that serve as reward functions for the purpose of pretraining. The Q-function learned online is a linear combination of these random features accumulated over the horizon: $w^\top\sum_{h=1}^{H}\phi(s_h,a_h)$, where $\phi \in \mathbb{R}^K$.
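As a concrete sketch of this scheme (my own illustration, not the authors' code; the dimensions, the realizable test-time reward, and the direct computation of the accumulated features are all hypothetical simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, n_feat, horizon = 4, 2, 64, 5

# A fixed, randomly initialized two-layer network whose n_feat outputs play
# the role of the "random reward" functions phi(s, a); it is never trained.
W1 = rng.normal(size=(state_dim + action_dim, 128))
W2 = rng.normal(size=(128, n_feat)) / np.sqrt(128)

def phi(s, a):
    """Random features of a state-action pair, shape (n_feat,)."""
    return np.tanh(np.concatenate([s, a]) @ W1) @ W2

def psi(traj):
    """Accumulated random features over an H-step trajectory. (In the paper
    this accumulation is predicted by a learned network from offline data;
    here it is computed directly for illustration.)"""
    return sum(phi(s, a) for s, a in traj)

# Online phase: fit w by least squares on a few reward-labeled samples so
# that w @ phi(s, a) approximates r(s, a); the Q-estimate is then the linear
# combination w @ psi over the action sequence.
w_true = rng.normal(size=n_feat)             # hypothetical test-time reward
reward = lambda s, a: phi(s, a) @ w_true     # realizable, for illustration

samples = [(rng.normal(size=state_dim), rng.normal(size=action_dim))
           for _ in range(256)]
Phi = np.stack([phi(s, a) for s, a in samples])
r = np.array([reward(s, a) for s, a in samples])
w, *_ = np.linalg.lstsq(Phi, r, rcond=None)

traj = samples[:horizon]
q_est = w @ psi(traj)                        # linear readout of the Q-value
q_true = sum(reward(s, a) for s, a in traj)
```

When the test-time reward lies in the span of the random features (as constructed here), the least-squares fit recovers $w$ and the Q-estimate matches the true $H$-step return.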
Strengths: This is a beautifully written paper. The motivation is crisp and the proposed method is very well grounded theoretically. Results across a broad range of tasks are presented. On originality, clarity, and method quality, I rate this paper highly.
Weaknesses: The paper would benefit from more baselines: for example, doing something like IQL with a reward of 1 at the end of each trajectory during offline training. Less importantly, I also think IQL would be a more natural oracle than CQL: it performs better on D4RL and directly evaluates online fine-tuning. In fact, the IQL paper shows that CQL can sometimes collapse during online fine-tuning.
I also don't quite understand the choice of offline dataset: from the appendix, it sounds like the policies that generate the offline dataset are quite "expert." It would be interesting to see how well the model performs if the offline dataset was instead adapted from the D4RL benchmark, which includes different levels of expert policies. I expect high-quality data to benefit any self-supervised method, but the claims of the paper are not limited to offline learning from expert demonstrations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I'm a little confused by Figure 2. The way that the random features and Q-basis are drawn makes it look like each $\phi_h$ is one of the basis vectors that together compose the Q-basis. Everywhere else, the text implies that each index of the Q-basis is a different function parameterized by a different $\theta_k$, and together the $\theta_k$ make up the basis. Is that correct? If so, I would recommend modifying Figure 2 (left) to include a reference to $k$.
The number of environment steps in the online phase seems really small. Why was 3000 chosen, and about how many trajectories is it? E.g., the IQL fine-tuning experiments use ~1M environment steps.
Why did the authors not include D4RL data in their experimental evaluations?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We respond to your comments below.
> The paper would benefit from more baselines: for example doing something like IQL with a reward of 1 at the end of each trajectory during offline training.
Thanks for the great suggestion. We have added new experimental results comparing with IQL under the reward scheme you suggested (see Fig. 1). This is akin to the CQL baseline in the original paper, but replacing the base algorithm CQL with the newer algorithm IQL. As expected, IQL performs less favorably than our approach, as CQL did. We will add the new experiments in our final version. The important point to note here is that model-free offline RL methods like CQL/IQL are inherently tied to the reward, while RaMP is not; it effectively models the system dynamics. This allows it to easily transfer to new rewards, where algorithms like CQL/IQL struggle.
> I also don't quite understand the choice of offline dataset: from the appendix, it sounds like the policies that generate the offline dataset are quite "expert."
We would like to clarify the dataset setup and present new results on D4RL. We use a noised expert policy to collect trajectories, with very aggressive epsilon noise (>= 0.5), so the trajectories rarely reach high reward. In addition, we don’t assume the dataset contains rewards, so the test-time objective can be quite challenging for the neural network to learn. In Fig. 4 of the supplementary PDF, we have added a D4RL experiment where we outperform CQL. The new experiment shows that our results aren't an artifact of how our data is collected. The important point is that the datasets must have coverage, as is typical for most offline RL algorithms, but they can be very mixed, with both optimal and suboptimal behavior. This is in contrast to imitation learning datasets that contain only expert data.
> It would be interesting to see how well the model performs if the offline dataset was instead modified from the D4RL benchmark, which includes different levels of expert policies.
Thanks for the great suggestion. We have added D4RL to our new experiments; see Fig. 4. The original reason we didn’t benchmark on D4RL is that we want to develop a method that doesn’t assume train/test-time objective correlation, and thus want to test on out-of-distribution goals or rewards. Most D4RL environments are not goal-conditioned and have the same reward function during the offline and online phases, which is drastically different from our reward-agnostic setting. In our new experiment, we chose a few goal-conditioned environments in D4RL, and the results illustrate that our performance isn't due to our customized data but to the algorithm itself.
> I'm a little confused by Figure 2. The way that the random features and Q-basis are drawn makes it look like each
… Is that correct? If so I would recommend modifying Figure 2 left to include a reference to k.
Sorry for the confusion. It is correct that the basis used to estimate Q is different from (but the accumulation of) the basis used to estimate the reward function, which is $\psi$ instead of $\phi$. For the features $\phi$ and $\psi$, their “parameterizations” are different also – the former is parameterized by $\theta$, while the latter is parameterized by $\nu$ (See our Sec. 3.2). We will make the clarifications in our updated version.
> The number of environment steps in the online phase seems really small. Why was 3000 chosen and about how many trajectories is it? E.g., in the IQL fine-tuning experiments they use ~1M environment steps.
One of the main baselines, Successor Features, is extremely slow to run in terms of wall-clock time. The step budget is a hyper-parameter which we chose by observing the number of environment steps needed to achieve success/convergence across environments. Since the planning backbone isn’t our main focus, we used a computationally inefficient version of random-shooting MPC for both Successor Features and our method. Random projection for the successor feature baseline, using many independent MLPs instead of one big MLP, also made it slow, not to mention that we have to copy these networks multiple times for the ensemble. The official MBPO implementation is also very slow with ensembles, so it is likewise extremely difficult to train for ~1M steps.
Please feel free to let us know if there are any other comments that may help reevaluate our paper.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for taking the time to address these concerns. Based on the response, I raised my recommendation to Accept.
---
Rebuttal 2:
Comment: Thank you for the helpful suggestions! If you have additional questions or experiment suggestions regarding our rebuttal, we are happy to answer that! | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and suggestions. We highlight our main experimental additions here and then address individual reviewer concerns in each reviewer response:
- **Meta-RL baseline (Reviewer 8qBK)**: We conduct a comparison with a meta-RL baseline, RL2 [1], which performs recurrent meta-learning. We train RL2 on a set of training tasks and evaluate its adaptation performance on out-of-distribution tasks to demonstrate the transfer behavior under distribution shift. Since RL2 adapts within a very short context (2 episodes), we plot the results at the end of adaptation as horizontal lines with error bars. From Fig. 1, we see that even with privileged information during training, RL2 does not handle distribution shift at test time and often performs poorly on test-time tasks. We note that the good performance on Door Open results from the goals being largely in the same direction, so even with the task separation, RL2 can still bootstrap behavior from the training tasks.
- **MOPO baseline (Reviewer BJ94)**: We conduct a comparison with an offline model-based RL baseline as requested, MOPO [2]. While this is not quite in the same problem setting, we adapted using the same assumptions as in CQL, running a goal conditioned variant of this method. As shown in Fig 1 in the rebuttal PDF, we found that RaMP outperforms MOPO across all Metaworld tasks.
- **IQL baseline (Reviewer gqpD)**: We ran an IQL baseline on Metaworld domains as shown in Fig 1. We found that RaMP significantly outperformed IQL in both efficiency and asymptotic performance.
- **D4RL experiments (Reviewer gqpD)**: We conducted experiments using the D4RL dataset on 2 maze environments, U-Maze and Medium Maze. It is important to note that we are not in the standard offline RL setting, since we do not assume known rewards on the offline dataset, only some at adaptation time. Moreover, the focus of RaMP is really the transfer performance, so the standard D4RL comparisons are not quite apples-to-apples. Our results show that RaMP is applicable to standard offline datasets in D4RL.
- **Ablations wrt added implementation details (Reviewer BJ94)**: We conducted an ablation, removing each of the added components in the implementation details outlined in Section B. We find that each design choice contributes to the overall performance of our method.
We provide additional clarifications, explanations and discussion in the per-reviewer responses.
[1] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel. RL2: Fast Reinforcement Learning via Slow Reinforcement Learning. ICLR 2017.
[2] Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma. MOPO: Model-based Offline Policy Optimization. NeurIPS 2020.
Pdf: /pdf/cb63df5ba47dd65dfdef079292def391b79f1423.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes the Random Features for Model-Free Planning (RaMP) algorithm to solve the problem of learning generalist agents that transfer across tasks where the environment dynamics are shared but the reward function changes. The proposed algorithm leverages diverse unlabeled offline data to learn models of long-horizon dynamics behavior, while being able to naturally transfer across tasks with different reward functions. The authors evaluate RaMP on a number of simulation-based robotic manipulation and locomotion tasks.
Strengths: - The problem addressed in the paper is very important problem for RL in robotics/robot learning real-life tasks.
- The proposed method is very interesting and seems relevant to the research community.
Weaknesses: - The paper is not well-written and very hard to follow. For example, Line 13-15 and Line 59-61 is very hard to follow.
- The paper attempts to solve an interesting problem and has strong results on simulation-based robotics tasks. Since the motivation of the paper draws from real robotic tasks, it would be good to see how this method performs on real robotic tasks. I understand the authors mention this as future work, but to understand the complete effectiveness of the proposed method, it seems critical.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please address concerns mentioned in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We respond to your comments below.
> The paper is not well-written and very hard to follow. For example, Line 13-15 and Line 59-61 is very hard to follow.
Thanks for the comment. Lines 13-15 simply mean that our method can be trained "without" reward labels, while still enjoying the benefit of being quickly deployable to new tasks. Lines 59-61 simply mean that, as long as the number of random features being used is large enough, we can estimate any test-time Q-function by a linear combination of the Q-basis functions. We will improve the writing in our updated version.
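The claim about lines 59-61 can be sketched in a few lines. This is a hypothetical illustration, not the paper's RaMP implementation: the cosine feature map, all dimensions, and the stand-in Q-targets are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: with enough random features, a test-time Q-function can be
# estimated as a linear combination of random Q-basis functions.
n_samples, n_features, d = 500, 256, 8
W = rng.normal(size=(d, n_features))    # random projections defining the basis
sa = rng.normal(size=(n_samples, d))    # state-action inputs
Phi = np.cos(sa @ W)                    # Q-basis evaluations (random features)

# Stand-in for reward-labelled returns observed for a new task at test time:
q_target = np.sin(sa[:, 0]) + 0.5 * sa[:, 1]

# Linear coefficients via least squares -- no gradient-based training needed.
alpha, *_ = np.linalg.lstsq(Phi, q_target, rcond=None)
q_est = Phi @ alpha                     # estimated test-time Q-values
```

Only the final linear fit depends on the new task's rewards, which is the sense in which such a method can be "quickly deployed" to new tasks.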
>The paper attempts to solve an interesting problem and has strong results in simulation based robotics tasks. Since, the motivation of the paper draws from real-robotic tasks, it would be good to see how this method performs on real-robotic tasks. I understand authors have mentioned in future work but to understand the complete effectiveness of the proposed method, it seems critical.”
Thanks for the great suggestion. We agree that applying our method to real robotic tasks would be a big plus. Given the timeline constraints, we may not be able to finish such real-robot experiments during the rebuttal phase. We will add them in the next version of our paper. We would like to emphasize that we have run our method across 8 different problem domains and provide theoretical backing for the proposed algorithm. Hopefully, this provides convincing evidence to the reviewers of the empirical efficacy of our proposed method.
---
Rebuttal 2:
Comment: Thank you for the helpful comments. If you have additional questions or experiment suggestions regarding our rebuttal, we are happy to answer that! | null | null | null | null | null | null |
Sampling weights of deep neural networks | Accept (poster) | Summary: This article introduces an alternative to random features for sampling the weights of neural networks. Their method relies on data points (both inputs and outputs) and activations to build the weights and biases iteratively, one layer after the other, in contrast to data-agnostic/purely random methods. It proves several results in terms of function approximation by their sampled networks. Finally, they compare accuracy, training speed, and the model size needed on a classification benchmark, an ODE approximation problem, and a vision classification fine-tuning problem.
Strengths: - I think the method is original, interesting and easy to understand.
- The method is more robust to depth than standard Random Features.
Weaknesses: - The presentation of the paper is not perfect (the algorithm is not easily readable, an indent is missing for the for loop over $l$, and the theory section is quite dense).
- The comparison against random features is only on a toy example. It is also not clear which algorithm is used on top of the RFs (linear regression, SGD, ...).
- The use in modern tasks and architectures seems very limited.
- In the experiments, it seems that quite wide networks are needed to match Adam-trained networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I am not an expert in deep neural operators, but it is not clear to me how to use the method in the presence of a time-series? How to take into account the time-dependency of the data in the sampling process?
- It seems to me that this method may have a link with Bayesian neural networks (another field in which I am not an expert), i.e., we have a probability distribution over layers. Could the authors comment on this, and maybe add a small paragraph in the text?
- The authors did not comment on the case where there are outliers in the data. How would the method perform in that case? How easily would these outliers impact the target function? The method seems quite sensitive to them.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations were addressed in the broader impact.
I am willing to modify my score accordingly to how the weaknesses and questions sections are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Regarding weaknesses:**
* [W1] We hope that an additional page after - potentially - accepting the paper can help to make Algorithm 1 more readable, and we can also add more explanations to the theoretical section.
* [W2] We also commented on a similar question in our answer to reviewer KCHD: In our experience, random feature models only perform comparatively well for layers with many more neurons - but we would be happy to extend the experiments to demonstrate this (there was not enough time in one week, unfortunately). For the last layer of random feature models, we use exactly the same algorithm (least squares, same regularization) as we use for the last layer in sampling. Only the hidden layer is sampled differently.
* [W3] We argue that many modern tasks involve supervised learning problems (surrogate modeling, learning classifiers and solving regression problems on tabular data, machine learning potentials for molecular dynamics), and we are currently experimenting with using the sampled weights as a "good basis" even for unsupervised tasks (modeling dynamical systems, solving partial differential equations). We demonstrate in Section 4.3 that more complicated architectures can be decomposed into smaller sequences of supervised learning problems as well, which can then be solved by the proposed algorithm without any modifications.
Furthermore, we argue that our work encourages constructing neural networks that use data much more directly to create their parameters. This may spur interesting ideas and extensions to different tasks and architectures, including ones trained with gradient descent. This type of thinking also leads very naturally to many ways of constructing interesting densities, and emphasizes a more probabilistic approach to neural networks that is also very efficient computationally. The interpretability of our networks is a further benefit that may help the acceptance of machine learning methods in practice.
* [W4] Indeed. The main reason wider networks are needed is that we randomly sample the hidden layers' weights and biases, so a certain fraction of the neurons will not be very useful to the final layer. In follow-up work, we have started experimenting with $L^1$ optimization of the last layer (instead of $L^2$), which leads to many coefficients in the last layer being exactly zero. This means we can easily prune the sampled networks after solving the last layer, too, something we will explore in the future.
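The pruning idea described in [W4] can be sketched as follows. This is an invented illustration, not the authors' code: the $L^1$ problem is solved with plain ISTA (iterative soft-thresholding), and all sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: fit the last layer with an L1 penalty so that many coefficients
# become exactly zero, then prune the corresponding hidden neurons.
n, m = 200, 50                       # training points, hidden neurons
H = rng.normal(size=(n, m))          # stand-in for hidden-layer activations
c_true = np.zeros(m)
c_true[:5] = rng.normal(size=5)      # only a few neurons actually matter
y = H @ c_true

lam = 0.05
step = n / np.linalg.norm(H, 2) ** 2     # 1/L for the smooth part of the loss
c = np.zeros(m)
for _ in range(1000):                    # ISTA: gradient step + soft-threshold
    c = c - step * (H.T @ (H @ c - y) / n)
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

keep = np.flatnonzero(c)             # neurons surviving the pruning step
H_pruned = H[:, keep]                # smaller network, nearly identical output
```

Because soft-thresholding produces exact zeros, `keep` directly identifies which sampled neurons can be dropped without retraining the hidden layers.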
**Regarding questions:**
* [Q1] The experiments presented in the paper do not use time series as input data. Models in Section 4.3 approximate an operator that transforms an initial condition into the solution at a particular time step. More specifically, the task’s input is function values at $t = 0$, while the target is function values at $t = 1$. Thus, we do not employ any time dependency in input data during sampling.
If we had time-series data, we could attempt to learn the underlying dynamics of it, for example, an ordinary differential equation (ODE). To do this, we would compute finite differences (in time) in the input time series first, and then train a network to predict those in a supervised manner. This way, we could approximate the right-hand side of the underlying ODE and then solve it using classical integration methods.
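The time-series recipe above can be sketched concretely. This is a hypothetical example, not code from the paper: the true dynamics $\dot{x} = -x$, the tanh random-feature regressor (a stand-in for a sampled network), and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: finite differences of a trajectory approximate the ODE right-hand
# side, which is then learned in a supervised way and integrated classically.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
x = np.exp(-t)                             # trajectory of dx/dt = -x, x(0) = 1

dxdt = (x[1:] - x[:-1]) / dt               # forward finite differences in time
states = x[:-1]

W = rng.normal(size=(1, 32))
b = rng.normal(size=32)
Phi = np.tanh(states[:, None] * W + b)     # features of the current state
coef, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)

def rhs(s):                                # learned right-hand side, f(x) ~ -x
    return (np.tanh(s * W + b) @ coef).item()

# Classical (Euler) integration of the learned ODE:
x_hat = [1.0]
for _ in range(len(t) - 1):
    x_hat.append(x_hat[-1] + dt * rhs(x_hat[-1]))
```

The integrated trajectory `x_hat` closely tracks the true decay, illustrating how the supervised fit of finite differences recovers the underlying dynamics.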
* [Q2] The manuscript already contains a brief discussion in "related work" (lines 65-68). Note that we effectively connect training data points and weights, while Bayesian neural networks mostly use distributions over the weights without a direct connection to the data. We have expanded our discussion in the manuscript based on the reviewer's question.
* [Q3] Indeed, analyzing the behavior of the algorithm in a stochastic setting, or with measurement noise, was left for future work. Experiments in Section 4.2 (OpenML) and Section 4.4 (transfer learning) demonstrate empirically that the method is not very sensitive to noise: we use real data for the experiments, which always contains a certain noise level. If we would find a specific sensitivity to noise in a certain setting, it would be possible to study robust methods for linear systems (e.g., robust least-squares) as a replacement for the current standard least-squares method we use in the last layer. Sampling hidden weights and biases may also be affected by noise (mostly by poor function values, like wrong labels, for example). This would cause a less optimal selection of internal weights, which may still be mitigated by more robust methods for the last layer.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: I would like to thank the authors for their answer. I think this paper is an interesting proof of concept, and that the experiments shown here are interesting. I am, however, left with the question of whether this method would scale to real tasks.
I think this line of work is promising and worth exploring, and I am therefore increasing my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you!
Regarding the question on scaling to real tasks: We interpret this as a question on how our algorithm scales to large data sets, as they are enountered in real, big-data settings. Section F in the supplemental material contains a complexity analysis of the algorithm. It is mostly based on complexity analysis for solving linear systems. For a fixed network, the convergence to a solution is linear in the number of training data points, which is not worse than classical neural network training and should be sufficient for big-data settings. | Summary: The paper proposes a novel approach and analysis to sample weights of neural networks that can potentially address backprop limitations.
The method is based on computing differences between data points and can be scaled to deep networks (by computing the difference of data point activations). The paper introduces a rigor mathematical formulation supported by several experiments showing some benefits of the approach compared to backprop.
Strengths: 1. The problem of sampling weights better than using a data-agnostic random distribution is very interesting and solving it might have a lot of practical implications.
2. The method is supported by a rigorous mathematical formulation.
3. The experiments are extensive and diverse, including experiments with large networks, showing some benefits of the method.
Weaknesses:
1. In L24, it says "we introduce a data-driven sampling scheme to construct weights and biases close to gradients". However, in Section 3, the connection of the proposed approach to gradients is missing. Is the difference between data points related to gradients? In that sense, is there a connection of this submission with Forward Gradient approaches, e.g. "Gradients without Backpropagation. Baydin et al., 2022." or "Learning by Directional Gradient Descent. Silver et al., 2022"? In Forward Gradient, the gradients are often estimated by perturbing the inputs/weights a little bit. Even if it's not directly related, I believe that since this submission and the Forward Gradient papers are both alternative approaches to backprop, it should be discussed at least in Related Work.
2. An important baseline for the Fig. 3 and 5 experiments that is missing would be to keep the first layers initialized randomly, while still apply arg min L for the last layer. This baseline would show more clearly the benefit of sampling weights. It may be that the main benefit is coming from solving argmin L.
3. It's a bit unclear why the proposed approach scales poorly with width (Fig 5, right). It seems that in Algorithm 1 most of the loops, specifically for k=1,2,...N_l, can be run in parallel for all neurons, so it should scale well with width if implemented efficiently. Perhaps the comparison to Adam is not very fair, as the authors are probably using a very efficient Adam implementation.
4. In Algorithm 1, ||y_i - y_j|| must be constant (the same for all i, j) for classification problems (assuming y is a one-hot vector and i, j are of different classes), so it's not very clear what the purpose of this term is. If labels are not that important, it could be beneficial for the paper to claim that the approach works without the need for labels (which are often expensive to obtain), perhaps except for the last layer.
5. Some visualizations of sampled vs trained weights would be useful. In particular, for tasks such as MNIST, where sampled first layer weights can be easy to interpret. In general, it remains a bit mysterious how the algorithm actually samples weights that are better than random weights. So some visualization like in Fig. 1 but for actual optimization tasks would be useful.
6. The paper claims improved "Interpretability" in the Introduction, however, this claim was not supported empirically.
I will be willing to raise my score if the weaknesses are addressed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. L213: "Pre-processing failed in 11 of the 72 datasets" - what kind of pre-processing and what exactly means "failed"? Was it applied to both the Adam and proposed approach? Are those 11 datasets ignored in Figure 3? Were the Adam hyperparameters (learning rate, weight decay, etc.) tuned?
2. In Algorithm 1, arg min L is a linear optimization problem. Is it solved using some kind of least-square in a closed form/gradient-based way? Does this step dominate time complexity in large experiments (ResNet, VGG, Xception) in Fig. 5 right?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are discussed, but not all (e.g. see Weakness 3 about scalability above).
**I updated the rating from 4 to 6 based on the author response and other reviews**
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding weaknesses:**
* [W1] In both papers, "Gradients without Backpropagation" and "Learning by Directional Gradient Descent", the authors propose methods to compute the gradient of the network with respect to its parameters (weights and biases). In contrast, we propose to choose data point pairs that are close to regions with steep gradients *of the target function* with respect to the input. We choose this so that the corresponding network activation functions $\phi(\langle w,x\rangle-b)$ are "placed" close to these steep slopes. We hope the new illustration in the general answer to all reviewers clarifies this. We still added a brief discussion of the two papers in the related work section of the revised manuscript.
* [W2] In Section 4.1, for random features, we do indeed initialize randomly all layers except the last one. Then, we use a least squares solver to find the weights and biases of this last layer, exactly in the same way as for our sampled network.
We also did not add this comparison in Figures 3 and 5 (corresponding to Section 4.2 "OpenML" and Section 4.4 "Transfer learning"). In our experience, random feature models only perform comparatively well for layers with many more neurons - but we would be happy to extend the experiments to demonstrate it.
* [W3] The scaling with width may be clarified with our complexity analysis (Section F in the supplemental material). Essentially, the *minimum* of the (a) width of the last layer and (b) number of training data points determines how much work the linear solver for the last layer must do; it scales cubically with this number. Note that this is the time complexity *until convergence* for our sampled network, which usually is very hard to obtain in general for methods like ADAM. The sampling of the hidden layer weights is negligible for the computation time. It is true that we also compare a CPU-based research code against an efficient implementation running on the GPU. We also want to highlight that in Figure 5, our approach outperforms the Adam optimizer on the test set already at around 1000 neurons, where our approach is still around ten times faster. It is not clear how long the ADAM optimizer would have to run to reach the same test accuracy.
* [W4] The function value differences are indeed constant (equal one) if the classes are one-hot encoded. However, for data pairs inside the same class, the difference is zero. The purpose of the term thus is that only pairs that are in different classes are selected. Furthermore, points that are closer to the decision boundary are selected with higher probability, because they correspond to different classes, but have smaller distances - and the distances are included in the denominator (Equation 2). Figure 1 in the answer to all reviewers should visualize this.
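The pair-selection behavior described in [W4] can be sketched numerically. This is an invented toy example (linearly separable 2D data, one-hot labels); the probability in Equation 2 is mimicked by the ratio below, which is our reading of the rebuttal, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: candidate pairs are drawn with probability proportional to
# |f(x2) - f(x1)| / ||x2 - x1||.  With one-hot labels, only cross-class
# pairs have a nonzero numerator, and among those, pairs with a smaller
# distance (closer to the decision boundary) are more likely.
n, d = 100, 2
X = rng.normal(size=(n, d))
labels = (X[:, 0] > 0).astype(int)        # decision boundary at x_0 = 0
Y = np.eye(2)[labels]                     # one-hot encoding

i, j = rng.integers(0, n, size=(2, 500))  # random candidate pairs
num = np.linalg.norm(Y[i] - Y[j], axis=1)           # 0 within a class
den = np.linalg.norm(X[i] - X[j], axis=1) + 1e-10   # regularized distance
p = num / den
p /= p.sum()

chosen = rng.choice(500, size=50, p=p)    # indices of sampled pairs
# Every chosen pair crosses the class boundary, since same-class
# pairs have probability exactly zero.
```

The nonzero numerator acts as a cross-class filter, while the distance in the denominator up-weights pairs straddling the boundary closely, matching the rebuttal's explanation.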
* [W5] We added two figures related to the interpretability of the weights in sampled networks to the answer to all reviewers. The examples are constructed on actual tasks (one classification task in two dimensions, one classification using MNIST).
* [W6] We hope that the visualization in the answer to all reviewers helps to emphasize how the weights and biases of networks can now be interpreted better. In fact, in the proof of Theorem 1, we show that any neural network can be transformed into a sampled network (with identical values on the input space, but potentially not away from it). This means the sampling method may be useful to interpret even other neural networks that were not obtained with our algorithm, as we can take a trained network as input and output a network with weights and biases of the form given by Definition 1. Once this is done, we can use the information to interpret which datapoints in the training set have been essential to create the neural network which was trained by ADAM.
**Regarding questions:**
* [Q1] The datasets from OpenML must be pre-processed before they can be used by standard feed forward neural networks (they contain nominal variables and missing values). The pre-processing steps are listed in the supplemental material (lines 496-499). This pre-processing was applied before we construct the networks, the same way for sampling and Adam training. The datasets where pre-processing "failed" were excluded from all evaluations. We re-ran the code after the submission period ended, and it seems that there may have been an issue with the openml package downloading the data. With a newer version, all 72 datasets work correctly now (no change to our code was needed, just an update to the package). The results of all missing datasets are now included in the new manuscript, they do not change the results shown in Figure 3 significantly.
We did not modify the standard hyperparameters of Adam, and except for neural architecture search (number of layers), we did not perform hyperparameter search for the OpenML experiment. This hyperparameter search would have skewed the time for the Adam algorithm even more in favor of sampling - as there is just a single hyperparameter to tune for sampling (regularization of the least-squares solve), as compared to many for Adam. Of course, the accuracy for Adam may have improved.
* [Q2] Indeed, we solve it using the closed form of a least-squares optimization problem (Tikhonov regularization), which is the key source of the scaling. For the scaling of the network, we have also included a time and space complexity analysis in the supplemental material, Table 4 (p. 25). Note that the scaling in Figure 5 is due to the increasing, large widths of the networks, not the number of datapoints.
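The closed-form solve mentioned in [Q2] is the textbook Tikhonov-regularized least squares; a minimal sketch follows. The dimensions, data, and the value of `reg` are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch: closed-form Tikhonov-regularized least squares for the last layer.
n, m, out = 300, 40, 3
H = rng.normal(size=(n, m))     # activations of the last hidden layer
Y = rng.normal(size=(n, out))   # training targets
reg = 1e-6                      # single regularization hyperparameter

# W = (H^T H + reg * I)^{-1} H^T Y, computed via a linear solve rather
# than an explicit matrix inverse.
W = np.linalg.solve(H.T @ H + reg * np.eye(m), H.T @ Y)
pred = H @ W
```

The cost is dominated by forming and factorizing the $m \times m$ (or, dually, $n \times n$) system, which is consistent with the cubic scaling in the minimum of width and number of training points described in the complexity discussion.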
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the concerns in detail. In particular, I found the random features experiments [W2] and the new visualizations for 2d and MNIST [W5] very useful and convincing. I'm generally satisfied with the response and will raise my score accordingly (once this option becomes available to reviewers).
One minor note is that there are several recent works studying the idea of learning a representation of neural network weights to sample new weights from the latent distribution, e.g. [A, B]. It would be useful to see some discussion in the Related Work w.r.t. this kind of papers.
[A] Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights, NeurIPS 2022
[B] Learning to Learn with Generative Models of Neural Network Checkpoints, 2022
---
Reply to Comment 1.1.1:
Comment: Thank you! We will cite [A,B] (and potentially others), and discuss learning weights from existing models in the related work section. It may be interesting to consider our sampled weights as training set in this context. | Summary: This study proposes a sampling learning method for deep ReLU networks. The proposed distribution is data dependent and the sampled network is shown to have universality.
Strengths: The idea of sampling parameters is well-investigated, for example, such as Bayesian NNs, sampling-based dimension reduction for kernel methods, random Fourier features, mean-field theory and Langevin dynamics to mimic SGD learning dynamics, and ridgelet transform for Barron-type integral representation theory. One of the major shortcomings of these methods is that these theories are often limited to shallow networks. This is because the math behind these algorithms is an integral representation of a neural network, and it is essentially a model of a single hidden layer with infinite units. Despite this difficulty, this study has developed and proposed data-dependent proposal parameter distributions for multiple layers.
Weaknesses: However, I could not figure out if the proposed distribution, Eq. 2, is well-defined. Since $\Phi^{(l-1)}$ is a piecewise linear map, the graph of Eq. 2 may have (1) a constant direction, and (2) line singularities as $|x_1 - x_2| \to 0$. These characteristics suggest that the proposed function is not generally *integrable*, thus the well-definedness is not trivial. Nevertheless, the theorems are proved without regularity conditions, so I consider the theory incomplete.
Additionally, the design of the experiments is not consistent with the theory, since (1) Figure 2 draws reference lines $m^{-1/2}$ and $m^{-1}$, but there is no guarantee that the proposed algorithm converges at these rates, and (2) Section 4.3 deals with neural operators, but the architecture is not covered by the theory.
Furthermore, Section 2 "Related work" is a compressed list of related works rather than a literature overview, since it lacks a review of what problems remain open in past studies and how the authors addressed them.
Several closely related works are omitted, for example, such as
- attempts to use (Quasi) Monte Carlo computation of integral transforms (that describe data-dependent parameter distributions):
- https://arxiv.org/abs/1902.00648
- https://jmlr.csail.mit.edu/papers/volume22/20-1300/20-1300.pdf
- https://jmlr.org/papers/volume18/15-178/15-178.pdf
- strong lottery ticket hypothesis (particularly the edge-pop algorithm and its universality):
- https://arxiv.org/pdf/2111.11146.pdf
- and representer theorem for deep ReLU nets:
- https://jmlr.org/papers/v20/18-418.html
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to the weakness section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding the sampling distribution:** Equation 2 uses both the map $\Phi^{(l-1)}$ (ReLU) in the denominator and the target function values $f(x)$ in the numerator. Based on this review, we refined the definition of our probability density and now assume the following:
* Compactness of the data domain. This is already stated in the paper, and in the supplemental material, Definition 3.
* Non-constant, Lipschitz-continuous target function $f$. This was not stated before, and we are thankful to the reviewer for pointing it out. If the function $f$ is constant on $\Phi^{(l-1)}(\mathcal{X})$ or the map $\Phi$ maps all points to a single point, we define $p$ to be uniform.
* Regularization with a small constant $\epsilon>0$ of the denominator for every hidden layer except the first one. We already state this (for all layers) in Algorithm 1, but did not do so in the definition of the probability density. In the numerical experiments, we used $\epsilon=10^{-10}$.
The new definition of the density is now the following (slightly adapted because not all LaTeX features are available here):
**Definition 2.** Let $\mathcal{X}$ be the compact input space, and let $f$ be a Lipschitz-continuous, non-constant function on $\mathcal{X}$. Let $\mathcal{Y} = f(\mathcal{X})$ be the given function values, and let $P$ be a continuous probability distribution defined through its conditional distribution with density $p$: for $l=1,2,\dots,L$ and $x^{(1),0}, x^{(2),0} \in \mathcal{X}$, setting $x^{(1),l-1} = \Phi^{(l-1)}(x^{(1),0})$ and $x^{(2),l-1} = \Phi^{(l-1)}(x^{(2),0})$, the density $p(x^{(1),0}, x^{(2),0} \mid \{W_{j}, b_{j}\}, j=1,\dots,l-1)$ is proportional to
$|f(x^{(2),0}) - f(x^{(1),0})| \, / \, \max\lbrace |x^{(2),l-1} - x^{(1),l-1}|, \epsilon\rbrace$
when $x^{(1),l-1}\neq x^{(2),l-1}$, and $0$ otherwise.
Here, $\Phi^{(l-1)}(\cdot)$ is the sampled network up to layer $l-1$ with parameters $\{W_{j}, b_{j}\}$, $j=1,\dots,l-1$, $\mathcal{X}^{l-1} = \Phi^{(l-1)}(\mathcal{X})$, and $\epsilon$ is a regularization constant that we set to zero for $l=1$ and larger than zero for $l>1$.
If $f$ is constant or $p$ above would be zero a.e., we define $p$ to be the uniform distribution over $\mathcal{X}\times\mathcal{X}$.
With these additions, most of them being incorporated in the experimental part already, we can now guarantee the proposed density induces a proper distribution. By leaving the conditional density for the first hidden layer unregularized, also all the theory, including Theorem 3, is now consistent.
**Regarding the convergence rate:** we prove (Theorem 2) that there exist sampled networks that achieve convergence rates $m^{-1/2}$, which is why we added the line in the plots for the experiments. Indeed, it is not clear if the specific sampling probability leads to this convergence rate, we only demonstrate that it does so empirically. The architectures we use in Section 4.3 can be mostly considered pre-processing techniques for the sampled networks we study analytically. For example, mapping the target function to Fourier space before constructing a sampled network does not change the theory surrounding it, just the target values. Of course, there are still a lot of open questions (beyond the scope of this paper), e.g., convergence in Sobolev spaces of the sampled networks. The third paper cited by the reviewer may help here.
**Regarding related work:** We greatly appreciate the pointers to additional literature. We comment on them below, and also incorporated them together with a more detailed review in the related work section of the manuscript.
* Monte-Carlo / kernels:
* Paper1: We construct an inverse mapping from the given target function to the distribution in parameter space of (deep) networks. Even though we currently cannot prove the same exponential convergence rates, our sampling algorithm works for networks with more than one layer. We also introduce a duality between training data and network parameters, which makes the sampled parameters easily interpretable.
* Paper2: The ridgelet prior is constructed with normally distributed weights in the first hidden layer, so it "does not require access to any part of the dataset" (quote). Thus the prior does not utilize the information to construct weights, as we do here. The analysis in the paper offers a path toward Bayesian framework for our sampling procedure.
* Paper3: There is a strong relation between random features and kernel functions. The cited paper covers many examples and provides a unifying view on weights for kernel quadrature and random feature approximation of functions. Our work differs in that we do not start with a kernel and decompose it into random features, but we start with a practical and interpretable construction of random features and then discuss their approximation properties. Exploring the kernel (and related RKHS / Bayesian framework) that corresponds to "our" sampled features is an exciting future work that we already started to pursue.
* Lottery ticket: The strong lottery ticket hypothesis, where the "winning" subnetworks are not trained but selected from a randomly initialized starting network, is similar to our approach of randomly choosing data pairs and then selecting highly probable ones for weights and biases. The two approaches are still not easily comparable: in the edge-pop algorithm, iterative gradient-based updates of the weight scores are required, and the remaining weights after pruning cannot be interpreted as easily as they can with our algorithm (see our general answer).
* Deep ReLU nets: The paper states that "Designing an algorithm that can effectively deal with this issue [of selecting proper spline points] will be a very valuable contribution to the field." Our sampling procedure may offer a path toward such an algorithm. Even though we cannot ensure that we only find optimal spline points, at least we offer a solution to constructing useful ones based on given data. | Summary: This paper presents an approach to training deep neural networks by introducing a probability distribution for weights and biases, which significantly reduces the necessity for iterative optimization or gradient computations. The proposed sampling scheme is data-driven, factoring in the input and output training data to sample both shallow and deep networks. The paper demonstrates the universality of the constructed networks as well as their invariance to rigid transformations and scaling of the input data. The robustness and speed of the method are shown through various test trials, including a classification benchmark from OpenML, sampling of neural operators to represent maps in function spaces, and transfer learning using well-known architectures. Overall, this approach provides a valuable direction in neural network training, promoting the efficiency of training and the interpretability of the model.
Strengths: 1. This is a well-written paper with clear descriptions and detailed formula derivations.
2. The data-driven sampling scheme demonstrated in this paper addresses several challenges posed by random feature models compared to iterative optimization methods in supervised learning. Numerical experiments highlight its superiority in training time, accuracy, and interpretability against the ADAM optimizer. Further, its application in transfer learning indicates its potential for broader tasks.
Weaknesses: 1. The experiments in the paper only compare this method with iterative optimization methods, specifically ADAM. However, it lacks comparative experiments with related non-iterative training methods, thus not fully showcasing its potential advantages in non-iterative training tasks.
2. There is a notable lack of analysis regarding the convergence rate within the paper.
3. The technique of using data pairs to build model weights is akin to the approach by Galaris et al. [21]. The paper, though, does not provide a sufficient discussion of this similarity, which could potentially undersell the novelty of the methodology.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In Algorithm 1, the L2 Loss function is consistently used. Can this method be adapted to accommodate other types of loss functions?
2. Could you elucidate on the process used to select the data pairs?
3. In the paper, you mention that this method can serve as a good starting point for fine-tuning. However, in Figure 5, even though the initial accuracy of the Sampling method surpasses that of ADAM, the test accuracy after fine-tuning falls short compared to ADAM + Fine-tuning. Can you offer an explanation for this outcome?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This paper adequately discusses the limitations, primarily focusing on the following aspects: the sampling strategy is not well-suited for convolutional or transformer networks, the method faces challenges in handling implicit problems, and a theoretical analysis of convergence rates is not provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding weaknesses:
* [W1] Indeed, comparisons to non-iterative methods are still missing, e.g., particle-based methods or simulated annealing. We expect that these methods may sometimes lead to more accurate solutions (if they find a global solution that we did not), but they are probably still orders of magnitude slower, because the time complexity of our method is the same as solving a single linear problem (see our complexity analysis in Section F of the supplemental material).
* [W2] Theorem 2 states convergence rates with respect to functions in the Barron space. Open questions remain about relating specific sampling distributions to convergence rates. This also reflects the state of research around neural networks, where convergence results are often limited to a few settings (except under strong assumptions such as convexity of the loss); Theorem 2 builds upon the existing theory on neural networks approximating Barron spaces. However, the theory behind random feature methods is richer in this respect, and we aim to connect the networks we propose with random feature theory, to provide stronger, probabilistic convergence rates.
* [W3] Galaris et al. [21] propose to use the data to find the direction and center of the weights for the sigmoid activation function in a shallow neural network, applied to numerical bifurcation analysis of PDEs from a certain type of simulations. We greatly extend their initial work, which we now also emphasize more in the manuscript:
* We define a non-uniform probability density over the data pairs, which forces weights to be close to steep gradients of the target function.
* We extend the idea to different activation functions, by adding scaling factors (the square of the norm) and change the bias. We show that these changes are essential in theory and experiments, particularly for tanh activation functions.
* We extend the construction to deep neural networks.
* Our work brings the key insight from Galaris et al. into a broader machine learning framework, with all the added theoretical investigation (including convergence), to ensure we do not lose anything by restricting the weights and biases to a constrained space.
Regarding questions:
* [Q1] Yes - please see our general answer to this (several reviewers asked this question).
* [Q2] We hope the figures in the general answer to all reviewers also help to illustrate how we choose pairs.
In theory, we start by sampling the weights of the first hidden layer by sampling from the joint distribution over the square of the dataset, with density given by Equation 2 of the paper. After we have sampled a pair of points for each neuron in the first hidden layer, we construct the weight according to Definition 1. If we have more than one hidden layer, we pass the whole dataset through the first hidden layer and proceed to sample as before, but from the dataset transformed by the first hidden layer. This continues until we reach the last layer, the coefficients of which we then approximate using least squares (in case of mean squared loss).
In practice, due to the cost of computing the density for every pair of points, we first uniformly draw a number of data pairs. This number is only proportional to the dataset size, not to its square. From this set, we then randomly draw a number of pairs equal to the number of desired weights in the current hidden layer, but now with probability proportional to the finite difference computed for each pair (using the corresponding target function values on this pair). This means pairs associated with steeper gradients of the target function are more likely to be chosen than pairs in flat regions.
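The two-stage procedure described above (uniform candidate pairs, then resampling proportional to the finite difference) can be sketched as follows. This is a minimal NumPy illustration; the function and variable names are ours, not the paper's:

```python
import numpy as np

def sample_weight_pairs(X, y, n_neurons, n_candidates=None, eps=1e-12, seed=0):
    """Draw data pairs with probability proportional to the finite
    difference |f(x2) - f(x1)| / ||x2 - x1|| (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    if n_candidates is None:
        n_candidates = 10 * n  # proportional to dataset size, not n**2
    # Stage 1: uniform candidate pairs (discard degenerate i == j pairs).
    i = rng.integers(0, n, size=n_candidates)
    j = rng.integers(0, n, size=n_candidates)
    mask = i != j
    i, j = i[mask], j[mask]
    # Stage 2: resample proportional to the finite difference of the target.
    fd = np.abs(y[j] - y[i]) / (np.linalg.norm(X[j] - X[i], axis=1) + eps)
    probs = fd / fd.sum()
    chosen = rng.choice(len(i), size=n_neurons, replace=True, p=probs)
    return X[i[chosen]], X[j[chosen]]
```

Pairs straddling steep regions of the target function dominate the resulting sample, matching the behaviour described above; pairs with equal target values (e.g., two points of the same class) receive probability zero.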
* [Q3] In the paper, we state, 'The sampled weights also provide a good starting point for the fine-tuning of the entire model.' Here, we mean that even though fine-tuning the entire network after sampling breaks the direct connection of weights and biases to data pairs, the ADAM optimizer can still improve the test loss. Essentially, we demonstrate that sampling provides a good starting point for fine-tuning in weight space, i.e., the sampled weights are not a "bad" local minimum from which it is difficult to escape with ADAM.
In our experiments, regardless of whether the weights of the classification head are sampled or trained, the difference in the test accuracy after retraining the whole network is rather small. Thus, for comparable test accuracy, the computational efficiency of sampling is higher.
Nevertheless, we list possible reasons for the small differences:
* (a) Note that for the fine-tuning after Sampling, we use the ADAM optimizer. This suffers from comparatively high variance, mostly from iterative optimization and mini-batching, i.e. stochastic approximation of the local gradient. We believe the differences after fine-tuning are mostly due to this variance and not caused by sampling before.
* (b) The accuracy after fine-tuning also changes with the chosen architecture. For ResNet50, 'Adam training + finetuning' yields a slightly higher test accuracy compared to 'sampling + finetuning'. However, for the Xception architecture, we observe that 'sampling + finetuning' yields a higher test accuracy (which is also the highest overall test accuracy in our experiments).
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the rebuttal and the additional experiments. I agree with the authors' answers to my questions, but the discussion regarding the comparison of non-iterative methods and convergence rates in the weakness section is still insufficient. Despite this, the paper still presents a promising method, and I will keep my rating of 6.
---
Reply to Comment 1.1.1:
Comment: Thank you! Technically, experiment 4.1 in the paper already is an empirical comparison to a non-iterative method (random features). Still, we agree that there is more research and discussion possible, both toward convergence rates and non-iterative methods. | Rebuttal 1:
Rebuttal: We very much appreciate all the constructive criticism, feedback, and suggestions. We replied to every review individually. Several questions were raised on (a) the loss function we use for the last layer, and (b) the interpretability of the sampled networks, so we would like to comment on these here. We added two figures in the attached PDF file, which we refer to in the text.
Regarding different loss functions for the final layer:
We only use mean squared error as the loss function because there is a closed-form solution to the linear system in the final layer (using the least squares method, with Tikhonov regularization). This leads to very fast approximation and clear time and space complexity of the algorithm (see Section F in the supplemental material).
If approximation time is not a concern, loss functions used in standard neural network training can also be used - for example, cross-entropy loss for classification tasks. If no closed-form solution is available, the last layer must be trained iteratively. This should still be much faster than training the entire network iteratively.
According to Theorem 1, any loss function based on the $L_p$ norm, for $1\leq p \leq \infty$, is admissible. This covers most of the norms, and therefore losses, used to study universal approximation and convergence of neural networks.
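To illustrate why mean squared error admits a closed-form final layer, here is a minimal Tikhonov-regularized (ridge) least-squares solve for the output coefficients. This is our own sketch; the variable names are not from the paper:

```python
import numpy as np

def fit_last_layer(H, Y, reg=1e-8):
    """Closed-form Tikhonov-regularized least squares for the final layer.

    H: (n_samples, width) hidden-layer activations of the sampled network.
    Y: (n_samples, n_outputs) targets.
    Returns W minimizing ||H @ W - Y||^2 + reg * ||W||^2.
    """
    d = H.shape[1]
    # Normal equations with Tikhonov regularization: (H^T H + reg I) W = H^T Y
    return np.linalg.solve(H.T @ H + reg * np.eye(d), H.T @ Y)
```

A cross-entropy head, by contrast, has no such closed form, which is why the last layer would then need the iterative fit mentioned above.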
Regarding the interpretability of sampled networks:
We argue that sampled networks are inherently more interpretable than iteratively trained ones. This is because we associate a pair of data points in the training set with every weight and bias pair in the network. Note that the pairs are not assigned after training; the network parameters are *constructed* using these data pairs. This allows us to interpret what part of the data domain the network is using most (cf. Figure 1, experiment A), and which data pairs are most important for predictions (cf. Figure 2, experiment B).
In experiment A, we first sample $10^6$ data points in $[0,1]^2$. From those, we sample $10^6$ data pairs with uniform distribution (as suggested in Galaris et al., [21]) and separately with our proposed distribution, using the class as target function values $(A=1,B=0)$. Finally, we plot the density of all points that were sampled (Figure 1, center and right plots). Obviously, the uniform distribution does not take into account the classification task. In contrast, our distribution mostly concentrates the data pairs around the decision boundaries.
In experiment B, we construct three very small networks with five neurons in one hidden layer each and train them on the MNIST image dataset (50,000 training images, ten classes, one-hot encoded). All of the networks are classical feed-forward networks, so the input images ($28\times 28$ pixels) are first flattened to vectors of length 784. The last layer is always constructed using least squares (with Tikhonov regularization). In Figure 2, we illustrate the five hidden weights of each network by reshaping them from length $784$ back into a $28\times 28$ pixel image. The first network uses random features (weights drawn from a standard normal distribution, biases uniformly from $[0,1]$). The second network uses uniform sampling of pairs, and the third network uses our proposed distribution. Classification accuracy on the test set (10,000 images) is 24% (random features), 43% (uniform), and 47% (ours), respectively. Note that the goal of this experiment is not to achieve high accuracy - the networks have only five neurons each in the hidden layer - but to highlight the differences in interpretability. For random features, the sampled weights are hard to interpret. For the second and third networks (second and third rows in the figure), the weights are directly computed from the data points. With this knowledge, it is easy to understand why, for example, classes 2 and 4 are poorly classified: neither is included in the weights. Also, the third weight of the uniformly sampled network does not add much information, because it is constructed from a pair with both points in the same class.
We would also like to point out that the proof of Theorem 1 is constructive, which means we provide a method to convert any given network into a sampled network - effectively by reconstructing the data pairs the weights and biases are associated with. This means even networks trained iteratively could now be interpreted using our construction (as long as the training dataset contains a sufficiently large set of points).
Pdf: /pdf/41867623ea5d124ce20021dcb62bf6916106d3b0.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work provides a method to sample weights of a neural network in a data-driven manner, such that expensive iterative gradient calculations and optimization can be avoided. The proposed sampling technique sets weights and biases of fully connected networks with ReLU and tanh activations based on pairs of training data points. This mechanism is used for all but the last layer of the network, which is then trained at the very end. This method constructs neural network predictors orders of magnitude faster than some iterative training techniques, while matching them in performance. Experiments include a classification benchmark, deep neural operators, and a transfer learning setting.
Strengths: - The idea of data-driven sampling of neural networks, as opposed to the random features model, is an interesting and original one. This paper sufficiently demonstrates the ability of their method to compete with iterative training methods while being much more efficient.
- The invariance properties of the sampling scheme remove the need for several data preprocessing techniques, which often require significant domain knowledge to determine.
- The theoretical analysis and experimental verification are sound. Proof sketches provided in the main paper are useful to gain insight into the method.
- Given the computational expense of training neural networks currently, extensions of this work to larger scales would have high impact. I find the demonstration and proof of concept shown in this work to be significant.
Weaknesses: - All the analysis seems to be done using mean squared error as the loss function. Stating whether this can be generalized to other losses and/or providing intuition on how that can be done would be useful.
- The main iterative training technique that the proposed sampling technique is compared to is the Adam optimizer. Results using different optimizers/training techniques would strengthen the claims of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - As above, how would the theory and/or experiments change for different loss functions and different optimizers used as comparisons?
- The authors provide a sampling method for fully connected layers and state that architectures like convolutions or transformers cannot be sampled with their method yet. Since convolutions ultimately implement linear transformations, is it possible to leverage that view of the convolution operator to extend this sampling scheme? If not, what are the concrete challenges to be solved for extending such a method to convolutional networks or transformers?
- Similarly, what are the challenges to constructing sampled networks for unsupervised or self-supervised tasks?
- The method intuitively provides greater interpretability since the mechanism of sampling weights is given. Do the authors have any thoughts on how this may help compute things like influence functions to answer questions like "what is the influence of a given training point on a given prediction"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors clearly and satisfactorily state the limitations and impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses:
* [W1] Please see our general answer on the topic of choosing different loss functions (multiple reviewers asked for this).
* [W2] Different optimizers other than Adam should not drastically change the results in our chosen experiments, as long as they are iterative and gradient based. It may be that for some of the classification problems in Section 4, other optimizers like stochastic gradient descent or particle based methods result in slightly better results compared to Adam, but it is unlikely that they are much faster, given the highly optimized implementations in TensorFlow.
Questions:
* [Q1] See W1.
* [Q2] Regarding convolutional network sampling: We have tried to extend sampling to convolution kernels, but as of now, it seems that sampling kernels is fundamentally different from sampling weights in a feed-forward network. One of the main challenges is that convolution kernels do not have a one-to-one correspondence to individual data points, so it is not easy to extend the idea of "one data pair per weight" into "one pair of images per convolution kernel". If we were to use all possible sub-images of each image pair to determine good kernels, this would quickly lead to an unmanageable number of kernel pairs. Remark 3 in (https://jmlr.csail.mit.edu/papers/volume22/20-1300/20-1300.pdf) (a paper cited by reviewer ReAz) contains a brief discussion of how to convert convolutional networks to feed-forward networks; maybe this can be used as a path forward.
Regarding transformers: A crude simplification of a transformer block is to apply an embedding to each token in a sequence, in such a way that the covariance matrix of the embedded sequence helps in the prediction of the output sequence. It also seems to be important to stack transformer blocks several times. At this point, the main challenge to extend sampling to transformers is how one would sample good token embedding maps without already knowing the embedding or at least the covariance (unsupervised setting). It certainly is possible to sample a transformer's last layer(s), much like our transfer learning experiment with the convolution backbones (Section 4.4).
* [Q3] Regarding unsupervised, self-supervised tasks: The main challenge for our sampled networks in unsupervised tasks is that we require the target function values to fit the coefficients of the last layer. Still, the main benefit of our sampling of weights and biases, even without fitting the last layer, is to create a "good" set of basis functions on the data. This set can then be used in downstream tasks, including unsupervised learning, even if the data pairs are "just" chosen uniformly at random (i.e., not taking into account the target function values). A direction we are currently exploring is the solution of partial differential equations, where the solution is unknown (unsupervised setting), but "good" ansatz functions can help to solve the equation easily. Self-supervised tasks may similarly benefit from sampling. One could sample a set of weights/biases without knowledge of the target function (by sampling uniformly over pairs), solve the last layer with gradient descent, and then refine the sampled weights once a new approximation of the function is available.
* [Q4] Regarding interpretability: The direct connection of weights and biases to data pairs is indeed what we think helps most to interpret the network "internals". This connection answers the question "What is the influence of certain pairs of training data points on all predictions?". Individual predictions, as asked about by the reviewer, are more difficult to interpret, because individual predictions often need multiple activation functions to be "active" (meaning non-zero, in the case of ReLU activation). A simple answer may be: if the training point is part of an "active" ReLU's weights, it is "important"; otherwise it is not. How important exactly should be determined by the magnitude of the connected weight, i.e., the gradient of the ReLU function - which, under our algorithm, means "how close the training point is to its paired point". For classification problems, this means "how close is the training point to the decision boundary", because only data pairs with differing classes are considered (otherwise the probability of choosing them is zero).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their clarifications and explanations regarding my questions. I will keep my score of 6. | Summary: This work presents a method for sampling the parameters of a deep neural network, including the final task-specific layer. In contrast to prior work that leverages Bayesian deep learning or generative models to learn the sampling distribution, this work presents a sampling algorithm that defines a data-dependent sampling distribution, so no gradient descent is used to train any parameters.
This sampling algorithm is compared with conventionally trained networks (with Adam) on a number of different tasks, where the sampled networks are shown to perform on par with or somewhat better than their trained counterparts. Importantly, the sampling procedure is faster than iterative training. Currently, this sampling algorithm is restricted to fully connected neural networks.
Strengths: Being familiar with the (sometimes fraught) work on Bayesian neural networks, I appreciate that this work designs an algorithm for effective sampling of neural network weights that is not computationally intractable. This is a new perspective to me, and I found the theoretical results, in particular theorems 1 & 2 to be quite useful in establishing that the sampling algorithm is sound.
I found the empirical results to be interesting, especially the transfer learning / CIFAR-10 experiment that showed comparable test-time performance to an Adam trained network while being far more efficient to "train".
Weaknesses: I'll start by saying that I am very unfamiliar with this line of work, though work on learning to sample neural network parameters is relevant to me. I am quite certain that I missed the main novelty of this paper.
That being said, I found this paper to be impenetrable even at a high level. Any confusion I have with the sampling distribution for example, is not alleviated by any satisfactory explanation in the text. I think the only people who will be able to digest this paper in its current form are those completely familiar with this line of work.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: I'm curious if there is any connection to NTK or to NNGP models here? Both employ a similar property of converging on a solution as the width of the network tends to infinity -- and are data dependent, though in a different way.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: I think the limitations were adequately addressed; as this work applies to small neural networks, there shouldn't be any concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: This high-level explanation of the paper may help: instead of training all parameters of hidden layers with gradient-based methods, we construct each combination of weight and bias using pairs of data points from the training set, i.e., $(x_1,x_2)$. As there are usually many more pairs of points in a dataset than we need for the number of neurons (each associated with a weight vector and bias value), we assign probabilities to pairs. The probability for a pair to be chosen is proportional to the finite difference $|f(x_2)-f(x_1)|/|x_2-x_1|$, where $f$ is the function we want to approximate. The higher the absolute value of the finite difference, the more likely we pick that data pair for a weight in the network. We use finite differences so that weights are more likely distributed around high gradients of the target function than in flat regions.
In the context of classification and using tanh as an activation function, an informal explanation behind the theory is that we pick two points $x_1$ and $x_2$ for each neuron, where the two points belong to different classes. In addition, we try to pick them as close to the decision boundary as possible. Then, by letting the bias be set as described in the paper, we end up assigning positive values to points that are closer to $x_1$ and negative values to points closer to $x_2$. Then, the last set of weights can use this information to perform the final classification, as the decision boundary separates the two points.
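This classification intuition can be checked numerically with a toy example. The direction and bias choice below are our own simplification for illustration, not the paper's exact Definition 1: we place a tanh neuron along the segment between two points of different classes, with its zero crossing at the midpoint.

```python
import numpy as np

def pair_neuron(x1, x2):
    """Toy neuron constructed from a data pair (x1, x2): outputs positive
    values for inputs nearer x1, negative values for inputs nearer x2.
    Simplified construction for illustration only."""
    w = (x1 - x2) / np.linalg.norm(x1 - x2) ** 2
    b = -w @ ((x1 + x2) / 2.0)  # zero crossing at the midpoint of the pair
    return lambda x: np.tanh(x @ w + b)

x1, x2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])  # two classes
neuron = pair_neuron(x1, x2)
print(neuron(np.array([0.8, 0.3])))   # closer to x1 -> positive
print(neuron(np.array([-0.8, 0.3])))  # closer to x2 -> negative
```

The sign of the output thus encodes which side of the pair's midpoint an input falls on, which the last layer can then combine for the final classification, as described above.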
Regarding NNGP/NTK:
There certainly is a relation, but not an immediate one. Our sampling distribution is one of the differences from the models considered in the NNGP/NTK literature. Classically, the weights of a neural network are initialized with a normal distribution. In our case, we use the distribution from Equation 2. Yet, as we sample parameters i.i.d., we can still consider infinitely wide sampled networks and apply the central limit theorem to get a process similar to NNGP.
The NTK describes the training dynamics of a network under gradient descent. Sampled networks do not use gradient optimization for hidden layers, so we cannot define such a kernel evolution. However, one could consider only the last layer of a sampled network and assume gradient descent optimization for its weights. This would make it possible to recover a variation of NTK.
---
Rebuttal Comment 1.1:
Comment: I appreciate the additional summary and feedback by the authors. I also found the additional experiments (particularly figure 1) to be illuminating with regard to the advantages of the sampling distribution over other approaches. I am happy with the authors responses, as it cleared up multiple pain points for me, such as the reasons behind the limitations with sampling convolutional networks. I will adjust my score accordingly from a 5 to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! | null | null | null | null |
A Logic for Expressing Log-Precision Transformers | Accept (poster) | Summary: In this submission, the authors generalize an expressivity result for transformers from Chiang et al. (2023), who showed that finite-precision transformers can be equivalently expressed in a counting generalization of first-order logic. In the paper at hand, the authors generalize this result to show that log-precision transformers, i.e., transformers whose precision scales logarithmically with the input size, are expressible in first-order logic with majority quantifiers, FO(M). This formalism is needed to express uniform attention patterns, which are required, for example, to express counting languages such as $a^m b^m$. The authors provide a well-written introduction to the topic and a rigorous proof.
Strengths: - A novel bound for the expressive power of transformers, which generalizes previous results; the problem is interesting to the NeurIPS community. Transformers are ubiquitous.
- The paper is well written; the transformer formalization and FO(M) is well-explained with the NeurIPS community in mind. Examples are provided (section 2).
- Intuition paragraphs are provided after the main results and parts of the proof (second half of the paper), making it reasonably easy to follow.
Weaknesses: - Although not necessary for this type of theoretical work, there would have been room to conduct some interesting experimental results to visualize and empirically analyze the authors claims
- The most important lemma (I feel), Lemma 2, is moved to the appendix and scattered across multiple sections there, with little intuition given. The reviewer found this the most interesting part and more important for understanding the proof than Section 6.1, as the simulation is straightforward.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I wonder how the claim that uniform attention is necessary relates to the findings of “ALiBi” of Press et al. (ICLR, 2022)?
- What are the author’s opinions on the next steps to tighten the bound further?
- Would you consider moving Lemma 2 to the main part of the paper? What are your opinions on that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The bound is not tight. The missing tightness of the provided bound would have deserved its own section or a more involved explanation. It is only partially touched on at the end of the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and questions!
> Would you consider moving Lemma 2 to the main part of the paper? What are your opinions on that?
We appreciate this point and agree that having some discussion of the Lemma 2 proof in the main paper could give more intuition about why the result at large goes through. The reason we originally put Lemma 2 in the appendix is because its full proof is really quite large: there is a separate sub-lemma for each transformer component in Section C. We thus don’t think we can fit the full proof in the main paper. Perhaps a good middle ground would be to add a paragraph giving a proof sketch of Lemma 2 in the main paper, and then refer the reader to the appendix for the details about each component.
> What are the author’s opinions on the next steps to tighten the bound further?
We believe a promising direction would be to analyze the width of the circuit required to simulate a transformer. Intuitively at least, we believe there should be a connection between the circuit width and the quantifier depth of an equivalent FO(M) formula. Working out the details here could allow us to say something like “any transformer can be simulated by an FO(M) formula of quantifier depth at most d” (for some constant d).
At the same time, it could be interesting to conduct some experiments on the ability of transformers to learn different formal languages definable in FO(M). Varying crucial properties like the quantifier depth could generate hypotheses about which problems within FO(M) may be out of reach for transformers.
> I wonder how the claim that uniform attention is necessary relates to the findings of “ALiBi” of Press et al. (ICLR, 2022)?
Good question! Under ALiBi, the positional bias in principle will introduce some deviation from uniform attention, although we believe the deviation can be quite small. This is because the parameter norm of the network will likely grow over time (cf. [Merrill et al., 2021](https://arxiv.org/abs/2010.09697)) and the magnitude of qk will scale roughly quadratically in the parameter norm. It follows that the magnitude of qk can dominate the positional bias term of AliBi, especially for the heads with a low value of m (cf. AliBi paper, Section 3). We thus think it is possible to closely approximate uniform attention with AliBi, even if there will be some slight deviation from it.
---
Rebuttal Comment 1.1:
Comment: Interesting. Thanks for your reply! I have no more questions. | Summary: This paper provides two contributions which extend previous work showing that finite-precision transformers may be expressed in FOL.
* Firstly, the authors prove that finite-precision transformers can only uniformly attend to bounded-length windows over their input sequence, the constraints in previous theoretical work (those of Chiang et al. on finite-precision models) being too tight to model the capabilities of real models (which can recognize languages of the form $a^mb^mc^m$). This motivates them to consider log-precision transformers, which are capable of attending over unbounded ranges at $O(\log n)$ precision.
* Secondly, the authors prove that the computation performed by a log-precision transformer can be expressed in FO(M) logic, a variant of first-order logic with the typical quantifiers (universal, existential) as well as a majority quantifier (true iff the quantified formula holds on more than half of the positions). They prove this by showing that log-precision transformers belong to computation graph families which are computable by log-uniform threshold circuits, which in turn (leveraging existing results) are expressible in FO(M).
Alongside the proofs for the second claim, they also provide an algorithm which demonstrates how (in principle) one can express computational graph families with circuit families, by breaking up each node in the computational graph into contiguous blocks of circuit gates; also considering the case of log-uniform computation graphs.
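To make the semantics of the majority quantifier concrete, here is a small evaluator (an illustrative sketch of the standard semantics; the names are ours, not the paper's):

```python
def majority(predicate, positions):
    """M i. phi(i): holds iff phi(i) is true on strictly more than half of the positions."""
    hits = sum(1 for i in positions if predicate(i))
    return 2 * hits > len(positions)

s = "aababa"
# Evaluate "M i. s[i] == 'a'": do the 'a'-positions form a strict majority?
result = majority(lambda i: s[i] == "a", range(len(s)))  # 4 of 6 positions
```

FO(M) formulas combine this quantifier freely with the ordinary universal and existential ones.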
Strengths: The paper appears technically sound, though proofs outside of the main text (those in the appendices) were not thoroughly checked (the first claim, regarding limited expressivity of finite-precision transformers, is small and was checked).
The approach taken to demonstrate the FO(M) expressibility draws on existing results from various domains and uses these to extend previous findings on logical expressivity of transformers, thus it is both quite novel and valuable. Whilst these findings remain far from useful for practical mechanistic interpretability, they serve as a useful foundation for work hoping to explore any such applications.
Weaknesses: No major weaknesses were identified.
Two minor weaknesses are:
1. The limited practical applicability, which is only partially addressed in the section on "Mechanistic Interpretability". In particular, the size of an FO(M) sentence required to express a practical transformer would presumably be significant enough that its interpretability would be questionable at best. The authors do conjecture that quantifier depth "of at most 2" may suffice to express transformers, but this has yet to be shown. This criticism applies to other technical attempts to produce discrete / tree-like representations of transformers, and so is not a significant drawback, beyond saying that any attempt to motivate such research from a practical perspective seems highly optimistic at present.
2. The paper is quite dense in parts, which is understandable. However, two potentially useful figures could be envisioned: 1) a high-level overview of the proof (log-precision transformers -> computational graph -> TC -> FO(M)); 2) a figure illustrating the contiguous circuit construction algorithm from §6.1.
## Nitpicks
* 55: "powerful enough *to* express"
* Example 5: Seems incomplete?
* 252: "the second of which" should probably read "one of which" as it is otherwise somewhat confusing why "the second" is being specified
* 310: "without loss of generality" -> "w.l.o.g" to be consistent with previous use
* 355: "Challenging the premise of a rigid division between symbolic and neural models". The mathematical sense in which this is being challenged is quite loose, and so it is not clear that this meaningfully "challenges" the division in the way it is typically framed (in a sense, matrices are symbolic, especially if a model is quantized, but this does not make the model *symbolic* in the way a logician would find useful)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Theoretical limits of the proofs and claims are precise and well-stated. Claims around practical applicability of these results may be slightly overstated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We appreciate your suggestions for making the paper less dense. If accepted, we will add a high-level overview of the proof toward the end of Section 4, as you suggest. A figure illustrating the Algorithm from 6.1 would also be nice if space permits - we will consider whether it is possible to add this without pushing too much other important content to the appendix.
Regarding the potential interpretability limitations, we agree that a very large/deep logical formula on its own wouldn’t help anyone better understand a transformer (although it could enable finding meaningful substructures or facilitate formal verification). Therefore, we think the quantifier depth of an FO(M) formula is quite important for its interpretability, and we are actively thinking about a depth-2 / low-depth FO(M) simulation as a follow-up. (See also our response to R4.)
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
A high-level overview will indeed be appreciated, and the omission of a figure given the space-constraints and justifiable inclusion of all current main-paper contents is reasonable.
The limitations to interpretability are not a strong limitation of this work, but I am pleased to hear that you are already considering bounds on the sizes of circuits!
I retain the previous score and recommend this paper for acceptance as sound theoretical work which provides novel insights and lays the groundwork for more practical interpretability-focused work going forward. | Summary: The paper presents a theoretical analysis of the expressiveness of transformer-based models. In particular, the authors prove that any log-precision transformer is equivalently expressible in first-order logic extended with majority quantifiers, FO(M). This yields the tightest known upper bound on log-precision transformers.
Strengths: The result is of significant theoretical interest and can potentially bridge well-studied first-order logic and widely used transformer architecture (e.g., via mechanistic interpretability work). The paper is nicely structured and with well crafted examples. To the best of my knowledge, I don't see major gaps in their proof derivations.
Weaknesses: I don't see any significant weakness in this paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Minor:
- Figure 1 on page 1 is admittedly a great motivating example, but we don't know the exact meaning of $a$ and $b$ until page 4. It is perhaps a good idea to say a few more words about Figure 1 earlier.
- line 80, 'atend'
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review! We appreciate that you found our work to be of theoretical interest. As you suggest, we will clarify the notation used in Figure 1 in the caption so that it is more self-contained. | Summary: In this paper, the computational power of transformers is studied subject to floating point precision and related to that of first-order logic. In particular, the authors show that (1) fixed-precision is not sufficient to compute uniform attention for arbitrary context lengths and (2) log-precision can be simulated by first-order logic with majority. The latter is shown by establishing that log-precision transformers can be simulated by a family of log-uniform circuits which is known to be equivalent to first-order logic with majority.
Strengths: - Some of the existing theoretical results on the computational power of transformers seem to only partially support observations in practice. For example, it has been shown that the language $a^nb^n$ provably cannot be recognized by a fixed-precision transformer, while experiments indicate the opposite. The paper contributes to better understanding this gap by deriving theoretical results for log-precision transformers that more closely resemble observations in practice, even though practical implementations are in fact fixed-precision.
- While the expressive power of log-precision transformers has been characterized in first-order logic in previous work, the paper establishes the tightest bound so far.
- The extensive use of transformers and language models in formal domains such as programming languages and theorem proving makes the connection between transformers and formal logics particularly interesting.
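The fixed- vs. log-precision gap noted above can be seen in a toy numeric form (our illustration, not from the paper): uniform attention over n positions produces values of magnitude 1/n, and at a fixed precision such as float16 these values collide for many distinct lengths, so exact counting arguments break down.

```python
import numpy as np

# Uniform attention over n positions assigns weight 1/n to each position.
lengths = np.arange(1000, 5000)
weights_fp16 = (1.0 / lengths).astype(np.float16)
distinct = int(np.unique(weights_fp16).size)
# Many distinct lengths n map to the same fixed-precision value of 1/n,
# so a fixed-precision model cannot recover exact counts from such averages.
```

At O(log n) precision, by contrast, 1/n remains distinguishable for every n, which is what the log-precision analysis exploits.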
Weaknesses: - The role of positional encoding is not discussed in the analysis. Yet, fixed precision and log precision have direct implications for the ability to represent a positional encoding.
- Merrill & Sabharwal (2023) show that a log-precision transformer can be simulated by a uniform $TC^0$ circuit family which at first seems fairly similar to the result in this paper. Merrill & Sabharwal bound space and prove the result for logspace-uniform $TC^0$ whereas in this paper the authors bound time and prove the result for log-uniform $TC^0$. I think this is not obvious and making it more explicit would help to understand the contribution of this paper.
- No definition of circuit complexity is given in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is there a straightforward argument that generalizes the results for decoder architectures or are the results in fact limited to encoder architectures?
- In the conclusion the threshold circuit family is referred to as highly uniform. What is the meaning of highly in this context?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The definition of the transformer resembles the encoder part of the full transformer architecture. If the results do not generalize to the transformer decoder this should be addressed as a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! We’d like to address some of your comments and suggestions around the role of positional encodings, different notions of uniformity, and other aspects of the paper.
## Positional encodings
> The role of positional encoding is not discussed in the analysis. Yet, fixed precision and log precision have direct implications for the ability to represent a positional encoding.
As discussed in C.1, our results go through for any positional embeddings that are log-precision and computable in time O(log n) (where n is the sequence length) as a function of the natural number i representing the position. Because it takes O(log n) bits to uniquely specify a natural number in the range [0, n], these assumptions are met by any natural type of positional encoding. We will better highlight the role of positional encodings in Section 4.
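Concretely, the counting argument is just that the binary representation of a position i in [0, n] fits in O(log n) bits (a trivial arithmetic sketch, not from the paper):

```python
import math

def bits_to_represent(n):
    """Bits needed to uniquely encode any position index in the range [0, n]."""
    # n + 1 distinct values require ceil(log2(n + 1)) bits.
    return max(1, math.ceil(math.log2(n + 1)))

# A position index therefore fits comfortably within log-precision.
```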
## Logtime vs. logspace uniformity
> Merrill & Sabharwal (2023) show that a log-precision transformer can be simulated by a uniform circuit family which at first seems fairly similar to the result in this paper. Merrill & Sabharwal bound space and prove the result for logspace-uniform whereas in this paper the authors bound time and prove the result for log-uniform. I think this is not obvious and making it more explicit would help to understand the contribution of this paper.
Thanks for the feedback! We will clarify the distinction when we introduce logtime-uniformity in the Preliminaries.
Relatedly, you asked what we mean by “highly uniform” in the conclusion. We are simply referring to logtime-uniform as opposed to logspace-uniform, which we will clarify. We say “highly uniform” because logtime uniformity is so strong that it collapses circuit families to logical formulas, meaning there is just a single description of the computation for any input size.
## Other questions
We will add some high-level background on circuit complexity as well as references to standard texts at the beginning of the preliminaries section.
> Is there a straightforward argument that generalizes the results for decoder architectures or are the results in fact limited to encoder architectures?
This is a good question that we have also been thinking about recently. There is not a simple result or corollary that holds for decoder-models, but it is possible to use these results as a tool to obtain a different upper bound for decoder architectures. As this is quite involved in its own right it is out of scope for this paper, but it forms the basis of our current follow-up research.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The proposed changes address all weaknesses sufficiently, and I will raise my score accordingly.
Regarding the decoder architecture, I do not consider it a weakness that it is not further discussed in the paper, but I urge the authors to clarify in the paper that the results apply only to encoder architectures. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Data Selection for Language Models via Importance Resampling | Accept (poster) | Summary: This paper aims to improve the samples to pre-train language models.
It proposes a novel method called "Data Selection with Importance Resampling (DSIR)" to select better pre-training samples and finds superior fine-tuning performance compared to various other approaches, including random sampling.
The authors show the effectiveness of this approach by reporting 2-2.5% better average performance on the GLUE dataset.
Strengths: This paper addresses a crucial issue "how can we make pre-training more efficient?"
It is well written, easy to follow, covers relevant related work, and considers various other methods.
Their approach effectively improves performance when fine-tuning models on downstream tasks.
Weaknesses: - While showing performance improvements, this paper evaluates on GLUE, a rather old, general language benchmark, and therefore covers only a subset of possible tasks. Especially now that large language models succeed on such general tasks via in-context learning, it would be interesting to see how models perform on other, more challenging tasks - such as reasoning-intensive ones.
- This paper only reports GLUE performance on the dev set. While we see an improvement, it remains to be seen how this transfers to the test set. This is especially relevant since we do not know whether the authors use the dev set to find the best-performing epoch. From the paper, we know that RoBERTa default hyperparameters were used - which include early stopping on the dev set - but nothing is stated in this paper regarding this issue.
- The authors report that domain-specific pre-training heavily harms downstream tasks' performance in the worst case. This also reflects a crucial limitation of this approach; when pre-training a language model, we aim for an optimal model in general without having specific downstream tasks/domains in mind. But with this work, we introduce new dependence to the target domain, which only sometimes matches the full richness of existing and upcoming language properties/domains.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - How do you see your approach connected to curriculum learning, where we gradually increase the difficulties during pre-training?
- Do you think more recent learning paradigms like in-context learning or prompting can gain in the same way as fine-tuning does?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank LQ7v for the feedback. Overall, LQ7v felt that the paper was “well written” and “addresses a crucial issue”. We answer specific questions below:
> “we do not know whether the authors use the (GLUE) dev set to find the best-performing epoch. From the paper, we know that RoBERTa default hyperparameters were used - which includes early stopping on the dev set - but in this paper nothing is stated regarding this issue.”
**To clarify, we do not use the dev set for tuning hyperparameters and instead use a fixed set of hyperparameters** for each GLUE dataset. By the RoBERTa default hyperparameters, we mean that we copied and fixed the default hyperparameters for learning rate, batch size, and number of epochs provided in the RoBERTa official codebase, without using any of their additional tuning strategies. We apologize for any confusion and will clarify in the final revision.
> “this paper considers GLUE a rather old, general language benchmark…it would be interesting to see how models perform on other more challenging tasks - like reasoning intense ones.”
Due to compute limitations, we weren’t able to pretrain a model that is large enough for reasoning capabilities from scratch for the general-domain experiments. However, we agree that challenging reasoning tasks are a great direction for future evaluation of data curation methods on large LMs and we would like to do this in the future. We will add this in the discussion for the final revision.
> “when pre-training a language model, we aim for an optimal model in general without having specific downstream tasks/domains in mind. But with this work, we introduce new dependence to the target domain…”
- In general, the target distribution is a hyperparameter that can be tuned to be as general as we desire. For example **in the general-domain experiments we show that even a heuristic/proxy target distribution (Wikipedia and books) can improve performance on general downstream benchmarks.** For general-domain models, the task then becomes to design a good target distribution for high-quality data, and then apply DSIR.
- For **domain-specific models, we often have a target distribution of interest** (e.g., code, law, medical) and DSIR can be applied directly to gather more relevant data for this target distribution.
> Do you think more recent learning paradigms like in-context learning or prompting can gain in the same way as fine-tuning does?
- **Yes.** For example, the GLaM paper (https://arxiv.org/abs/2112.06905) shows that **even heuristic classification (filtering the pretraining data with a Wikipedia classifier) can significantly improve few-shot in-context learning** performance. We believe **DSIR is a more principled approach and would bring further gains** at these large scales.
- There are also some works that show that in-context learning benefits from selecting the few-shot examples themselves intelligently (https://aclanthology.org/2022.emnlp-main.622/, https://arxiv.org/abs/2302.13539, https://arxiv.org/pdf/2307.07164.pdf, https://arxiv.org/abs/2101.06804, https://arxiv.org/abs/2301.11916).
- Another related area is selecting good examples for RLHF / instruction tuning of large models. The target distribution here could roughly be described as “helpful answers to a diverse set of prompts”. DSIR could be used to scale up the amount of good instruction tuning data for RLHF.
> How do you see your approach connected to curriculum learning, where we gradually increase the difficulties during pre-training?
Curriculum learning could be viewed within the DSIR framework as selecting data against a set of target distributions that changes over the course of training, starting with preferring easier examples and progressing to harder ones.
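For readers unfamiliar with the mechanics, here is a schematic sketch of DSIR-style importance weighting with hashed n-gram features (our simplified illustration with hypothetical helper names and toy data; the actual implementation differs in details such as bucket count, smoothing, and the resampling step):

```python
import hashlib
import math
from collections import Counter

NUM_BUCKETS = 10_000  # illustrative; the real system uses its own bucket count

def hashed_ngrams(text, n=2):
    """Map unigrams and bigrams of a text into hash buckets."""
    toks = text.lower().split()
    grams = toks + [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return [int(hashlib.md5(g.encode()).hexdigest(), 16) % NUM_BUCKETS for g in grams]

def fit_bag(texts, smooth=1.0):
    """Smoothed bag-of-hashed-ngrams distribution over buckets."""
    counts = Counter(b for t in texts for b in hashed_ngrams(t))
    total = sum(counts.values()) + smooth * NUM_BUCKETS
    return [(counts.get(b, 0) + smooth) / total for b in range(NUM_BUCKETS)]

def log_importance_weight(text, p_target, p_raw):
    """log p_target(x) - log p_raw(x) under the bag-of-ngrams model."""
    return sum(math.log(p_target[b]) - math.log(p_raw[b]) for b in hashed_ngrams(text))

target = ["cell protein biology", "protein cell assay"]
raw = ["cell protein biology", "stock market news",
       "football match report", "stock market crash"]
p_t, p_r = fit_bag(target), fit_bag(raw)
w_domain = log_importance_weight("cell protein", p_t, p_r)
w_other = log_importance_weight("stock market", p_t, p_r)
# Target-like text receives the larger importance weight; resampling the raw
# pool proportionally to these weights yields the selected pretraining subset.
```

A curriculum in this framework would simply re-fit `p_target` at each stage of training and re-run the weighting and resampling steps.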
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: Dear reviewer, may we ask if you could respond to our comments? In our response, we clarify that we do not use the GLUE validation set to tune hyperparameters, address the generality of having a target distribution, and address other detailed concerns. Please let us know if you have other questions or concerns. Thank you! | Summary: The work presents a novel framework for effectively selecting a representative document subset during the pre-training of Large Language Models. Pre-training such models incurs significant costs, prompting efforts to minimize the subset size and associated expenses. The authors commence by highlighting the importance of the subject matter and its practical applications. Subsequently, they identify the limitations and characteristics of existing approaches. Building upon the recognized limitations and compelling evidence of Importance Resampling's applicability in the context of Large Models, the authors propose an innovative framework based on KL reduction.
In general, the paper has a good idea and a good novelty factor. The authors claim they improved the state of the art on the text classification task, and indeed there is strong evidence for this in the results presented in the paper. In sum, my only improvement suggestion is to include statistical treatment of the presented results.
In summary, this research exhibits a commendable goal and innovative ideas, demonstrating substantial potential. With minor changes addressed, I recommend accepting this paper. I would like to extend my congratulations to the authors for their extensive experimental work and the promising results they have achieved.
Strengths: S1: The paper is well written. The authors clearly define the evaluated objectives, motivation, and contributions.
S2: The implementation details and method-specific hyperparameters were defined. Thus, the paper is (possibly) reproducible. Besides, the authors shared their code as supplementary material with the submission in the OpenReview.
S3: The considered datasets are well-known and widely used in the literature. ACL, Sci-ERC, ChemProt, RCT, AGNews, HyperPartisan, Helpfulness, and IMDB.
S4: The proposed method was fairly compared to strong baselines. Besides, in terms of text classification methods, the proposed method was applied to RoBERTa, a strong SOTA method in the LLM field. Moreover, the baselines were very well explained in a simple and straightforward way.
S5: The authors adopted proper metrics to handle and properly measure the effectiveness on both balanced and unbalanced datasets domains (Accuracy and Macro-F1, respectively).
S6: Repetition: It is worth noting that the authors adopted a 5-Fold validation procedure, which demonstrates their commitment to rigorously evaluating the proposed framework.
Weaknesses: W1: There is no statistical treatment of the results (e.g., statistical significance tests), which does not allow one to rule out the null hypothesis of equality of results. There is strong evidence of superiority; however, without tests, any claim of superiority can be considered unsubstantiated. To strengthen the research claims and ensure robust conclusions, it is desirable to include appropriate statistical analyses to validate the significance of the reported results.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1: Overall, the paper demonstrates a strong conceptual framework with a commendable novelty factor. However, there is room for improvement, particularly in addressing the previously mentioned weakness.
Regarding this weakness, it is worth noting that the authors have already conducted a 5-Fold random validation procedure. As a result, incorporating a statistical method to strengthen the research claims would require minimal additional effort. In light of this, I highly recommend applying a t-test with Bonferroni correction to account for multiple tests. [1,2]
[1] Dacrema, M. F., Cremonesi, P., & Jannach, D. (2019). Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In Proceedings of the 13th ACM conference on recommender systems (pp. 101–109).
[2] Cunha, Washington, et al. "On the cost-effectiveness of neural and non-neural approaches and representations for text classification: A comprehensive comparative study." Information Processing & Management 58.3 (2021): 102481
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Overall, znTv notes that we provide **“a strong conceptual framework with a commendable novelty factor”, along with “extensive experimental work and promising results”**. We respond to specific questions below:
> “There are strong evidences of the superiority… To strengthen the research claims and ensure robust conclusions, it is desirable to include appropriate statistical analyses to validate the significance of the reported results… I highly recommend applying a t-test with Bonferroni correction to account for multiple tests.”
The statistical tests below show that **DSIR outperforms all other methods with significance (α=0.05) in general-domain pretraining and is comparable with manual expert curation / top-k heuristic classification in domain-specific continued pretraining while outperforming all other baselines with significance. This is in line with what is claimed in the paper.** We conducted Student-t tests (unequal variances) on the average performance of the methods across downstream datasets, with a Bonferroni correction for multiple comparisons between DSIR and other methods. We thank the reviewer for the suggestion and will include it in the final revision.
For general-domain pretraining, with p-value threshold 0.05 / 3 = 0.0125:
| | t-score | p-value |
|-----------------------------|-------------|-----------------|
| DSIR vs random | 9.271188391 | 0 |
| DSIR vs heuristic cls | 10.8322228 | 0 |
| DSIR vs top-k heuristic cls | 3.716599729 | 0.0001 |
For domain-specific continued pretraining, with p-value threshold 0.05 / 5 = 0.01:
| | t-score | p-value |
|-----------------------------|--------------|--------------------|
| DSIR vs RoBERTa | 5.064870983 | 0.0000002 |
| DSIR vs manual curation | 0.816252706 | 0.207 |
| DSIR vs random | 3.199222061 | 0.0006 |
| DSIR vs Heuristic cls | 2.987762708 | 0.0014 |
| DSIR vs Top-k heuristic cls | 0.2570449689 | 0.398 |
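For reference, the unequal-variance (Welch) t statistic and the Bonferroni-corrected threshold can be computed as follows (a generic sketch with illustrative data, not the scores behind the tables above):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with possibly unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

def bonferroni_threshold(alpha, num_comparisons):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / num_comparisons

t = welch_t([1, 2, 3, 4], [2, 3, 4, 5])
thresh = bonferroni_threshold(0.05, 5)  # 5 comparisons => per-test threshold 0.01
# The p-value is then obtained from the t distribution with
# Welch-Satterthwaite degrees of freedom (e.g. scipy.stats.ttest_ind
# with equal_var=False computes both in one call).
```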
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies. Considering that the authors have addressed my particular suggestion and have integrated the suggested statistical analysis, it seems they've proactively enhanced the robustness of their research and substantiated their conclusions. In light of this, I'll maintain my Strong Accept rating. | Summary: The authors present a novel framework for selecting examples from a large and diverse dataset which are most relevant to a specific target domain. They apply this towards data selection for 'continued pretraining' in which a pretrained LM is trained further on a domain-specific dataset, in order to improve its ultimate downstream performance on domain-relevant tasks. They also define a novel data-evaluation metric which is cheap to calculate and correlates well with downstream task performance. They evaluate the proposed method on continued training for RoBERTa against competing methods for 8 tasks, and demonstrate superior average performance. They present additional experiments ablating the target domain dataset, and demonstrate the utility of the method towards data selection for a 'general domain' LM (via heuristic of formality).
Strengths: This paper was a pleasure to read. The overall problem addressed in the paper is important and especially relevant given the preponderance of general LLMs in contemporary research. The relative tractability of the proposed method at scale makes it more pragmatic than some competing approaches. The paper is clearly written and well structured; each section flows nicely from the last. The results outperform extant methods and are thoroughly and engagingly analyzed. The proposed method provides a tractable and approachable means of solving a quite general problem, and presents many compelling directions for future work in a similar direction.
Weaknesses: DSIR is general to the kind of feature extractor in use, yet in this work only hashed n-gram and unigram features are explored. It would be instructive to demonstrate DSIR with different feature spaces. Naively, I would assume this is a more interesting ablation than discriminative vs. generative approaches to learning the importance weight estimator, as it might also demonstrate a tradeoff between performance and efficiency. However, n-grams are simple and a very reasonable place to start. This does not significantly detract from the overall work, and the omission is rightly mentioned in the limitations as being left for future work.
Nits:
Figure 3 and Table 3 cover the same data, ideally would be presented on the same page.
Table 2 and its analysis (lines 199-212) are also on different pages.
Figure 3 references the overall avg performance of the values reported in Table 3, please add that as a final column to the Table itself. Additionally, please add a separate row showing the performance of the baseline model with no continued training.
Table 3 might be made more readable (depending on execution) by color-coding the relative performance of each dataset. As presented, it is very difficult to parse that ChemProt is stronger on average than Sci-ERC, that IMDB is weak on average, or that IMDB on HyperPartisan is such a negative outlier. Additionally, it might be useful to standardize the column widths and center the text so the main diagonal is more obviously a diagonal.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Table 3, the strong performance of the CS-dataset-target models on non-target domains, especially compared to the relatively poor performance of the Reviews datasets, shows asymmetry in the transfer between domains (line 225). This is an interesting finding and could bear slightly more explanation.
In the discussion, it is mentioned (line 319) that amplifying biases in the target data is a potential harm. On the other hand, the possibility of mitigating bias might confer a tremendous benefit; what do the authors think of the potential to use DSIR as a means of removing bias from a giant dataset?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations of the work and potential negative societal impact are sufficiently addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Reviewer rytZ felt that the problem is **“important and especially relevant”** and the paper provides a **“tractable and approachable means of solving a quite general problem, as well as presents many compelling directions for future work”**. We answer specific questions below:
> DSIR is general to the kind of feature extractor in use, yet in this work only hashed n-gram and unigram feature are explored. It would be instructive as to demonstrate DSIR with different feature spaces.
**During the rebuttal period, we implemented a first-pass version of DSIR with embeddings from a pretrained language model.** We extracted features using a Sentence Transformer (miniLM-v6-2), learned raw and target distributions over the features parameterized as 1000 and 50-component Gaussian mixture models respectively, and used these within the DSIR framework to select data for general-domain pretraining. The results are shown below in a table. **On average, DSIR with neural features improves by 1-1.5%+ over random selection and heuristic classification** and is on par with top-k heuristic classification and top-k DSIR, but still underperforms DSIR with n-gram features. However, we believe that this is still a promising direction since some steps could be improved, such as the hyperparameters of the Gaussian mixture model. We thank the reviewer for the suggestion.
| GLUE dev | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B | Avg |
|--------------------------------|------:|------:|------:|------:|------:|------:|------:|------:|---------:|
| Random selection | 82.63 | 86.9 | 89.57 | 67.37 | 90.05 | 87.4 | 49.41 | 88.63 | 80.25 |
| Heuristic classification | 82.69 | 85.95 | 89.77 | 68.59 | 88.94 | 86.03 | 48.17 | 88.62 | 79.85 |
| Top-k heuristic classification | 83.34 | 88.62 | 89.89 | 70.04 | 91.15 | 86.37 | 53.02 | 89.3 | 81.47 |
| Top-k DSIR | 83.39 | 88.63 | 89.94 | 72.49 | 91.01 | 86.18 | 49.9 | 89.52 | 81.38 |
| DSIR + n-gram features | 83.07 | 89.11 | 89.8 | 75.09 | 90.48 | 87.7 | 54 | 89.17 | 82.30 |
| DSIR + neural features | 83.44 | 88.2 | 89.81 | 70.68 | 90.5 | 87.55 | 52.58 | 88.4 | 81.40 |
> In Table 3, the strong performance of the CS-dataset-target models on non-target domains, especially compared to the relatively poor performance of the Reviews datasets, shows asymmetry in the transfer between domains (line 225). This is an interesting finding and could bear slightly more explanation.
In Appendix Figure 6, we show that the distribution of data sources selected by DSIR for CS-dataset targets is generally much more diverse, and we believe this could be a reason for its strong performance on many domains. This might make sense since ACL-ARC (a dataset of NLP papers) is likely to be on a more diverse set of topics than reviews.
> the possibility of mitigating bias might confer a tremendous benefit; what do the authors think of the potential to use DSIR as a means of removing bias from a giant dataset?
**DSIR has potential for removing dataset bias by collecting a target dataset with less bias** and using it to define the target distribution. For example, if the target dataset has roughly equal representation across groups, DSIR will upweight minority groups and downweight majority groups accordingly. However, since the user interface is simply to collect a target dataset (and no groups have to be explicitly defined), more intricate patterns about what makes the target dataset less biased can also be taken into account. We thank the reviewer for the great idea!
> Nits / presentation of tables and figures
We will update these in the final revision - we thank the reviewer for the suggestions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies. I maintain my Strong Accept rating. | Summary: This paper proposes a simple method for selecting pretraining data for language modeling based on the downstream fine-tuning tasks. It uses ngrams as features for a corpus, and weighs the pretraining data based on importance sampling, which estimates how similar the pretraining data is to the task specific fine-tuning data.
Strengths: 1. The method is simple and easy to understand. The presentation of the paper is generally easy to follow.
2. The paper tested the method on both domain-adaptive continued pretraining and training from scratch.
Weaknesses: 1. The paper uses a simple n-gram feature for importance weighting, but it’s not clear whether other dense features would bring better performance. Even if using dense features is more expensive or unrealistic, it would be helpful to have a comparison on the small scale, or analysis of the differences in computation required for using different features.
2. The paper did not compare to other automatic data selection methods such as
3. The method does seem like a simple/scalable approach, but there is not a good analysis of the resources (memory/training time) needed.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Have you considered comparing to learned data selection method such as DoReMi(https://arxiv.org/pdf/2305.10429.pdf)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Reviewer NB9k notes that the method “seems like a simple/scalable approach” and the paper included comprehensive experimental settings, as it “tested the method on both domain adaptive continued pretraining and training from scratch”. We answer specific questions below:
> “Even if using dense features is more expensive or unrealistic, it would be helpful to have a comparison on the small scale”
**During the rebuttal period, we implemented a first-pass version of DSIR with embeddings from a pretrained language model.** We extracted features using a Sentence Transformer (miniLM-v6-2), learned raw and target distributions over the features parameterized as 1000 and 50-component Gaussian mixture models respectively, and used these within the DSIR framework to select data for general-domain pretraining. The results are shown below in a table. **On average, DSIR with neural features improves by 1-1.5%+ over random selection and heuristic classification** and is on par with top-k heuristic classification and top-k DSIR, but still underperforms DSIR with n-gram features. However, we believe that this is still a promising direction since some steps could be improved, such as the hyperparameters of the Gaussian mixture model. We thank the reviewer for the suggestion.
| GLUE dev | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B | Avg |
|--------------------------------|------:|------:|------:|------:|------:|------:|------:|------:|---------:|
| Random selection | 82.63 | 86.9 | 89.57 | 67.37 | 90.05 | 87.4 | 49.41 | 88.63 | 80.25 |
| Heuristic classification | 82.69 | 85.95 | 89.77 | 68.59 | 88.94 | 86.03 | 48.17 | 88.62 | 79.85 |
| Top-k heuristic classification | 83.34 | 88.62 | 89.89 | 70.04 | 91.15 | 86.37 | 53.02 | 89.3 | 81.47 |
| Top-k DSIR | 83.39 | 88.63 | 89.94 | 72.49 | 91.01 | 86.18 | 49.9 | 89.52 | 81.38 |
| DSIR + n-gram features | 83.07 | 89.11 | 89.8 | 75.09 | 90.48 | 87.7 | 54 | 89.17 | 82.30 |
| DSIR + neural features | 83.44 | 88.2 | 89.81 | 70.68 | 90.5 | 87.55 | 52.58 | 88.4 | 81.40 |
> “there is not a good analysis on the resources(memory/training time) needed”, and it would be helpful to have an “analysis of the differences in computation required for using different features.”
- **DSIR is compute and memory efficient due to the simple n-gram featurization.** The memory usage is constant (keep track of 10k n-gram frequency counts), and it takes about 38 minutes to compute n-gram counts on the Pile (2B examples) using 30 workers.
- **Comparison between n-gram and neural features: using neural embeddings requires at least 138M times more FLOPs+integer ops than the n-gram featurization, even assuming a small 23M-parameter neural embedding model.** This is mainly due to the FLOPs needed to run the forward pass of the neural embedding model vs. counting n-grams. **Due to the compute costs, it takes about 16 hours to extract neural embeddings from the Pile with 30 workers with 1 GPU each (23M model), vs. 38 minutes with n-gram features.**
- We thank the reviewer for the comment and will add more discussion of the compute resources needed in the final revision.
> “Have you considered comparing to learned data selection method such as DoReMi(https://arxiv.org/pdf/2305.10429.pdf)?”
- DSIR and DoReMi tackle different data curation problems. **DoReMi does not select data for a target distribution and can only reweight the data on the coarser level of domains** (instead of example level selection). Instead, DoReMi tries to find a reweighting that is robust to many target distributions via minimax optimization. Although DoReMi could be applicable to the general-domain experiments, scaling up DoReMi to example level selection (where domain = 1 example) would require some fundamental changes to DoReMi, since it currently depends on seeing all/most domains in the minibatch to keep the domain weights updated.
- We also note that **DoReMi was released on arXiv after the NeurIPS deadline**.
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: Dear reviewer, may we ask if you could respond to our comments? In our response, we explore the use of neural features and clarify the memory/time resources for the data selection method. Please let us know if you have other questions or concerns. Thank you!
---
Reply to Comment 1.1.1:
Title: Request for further discussion
Comment: Dear reviewer, please let us know if we've addressed your concerns regarding use of dense features and the memory/time resources. We are happy to answer any other questions you may have. Thank you! | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors begin by highlighting an important issue related to data quality in (encoder only) LMs, motivating the need for an improved way to filter large datasets (e.g. The Pile) for samples which are in distribution to a held out target sample. The authors propose a metric, KL-reduction, which quantifies how well matching a target distribution signals downstream performance after fine-tuning on the data. To sample in-distribution points, they compute a importance weight for each sample, by learning a generative distribution for the raw data and target data. Finally they re-sample data using the weights without replacement. Beyond relying on the KL-reduction, the authors finetune RoBERTa on their filtered data (compared to other baseline filtering/sampling methods) and measure performance improvements on the downstream task, showing several percentage point improvements.
Strengths: * Paper is well written and easy to understand; motivation and set up are clear.
* The described KL-reduction is a simple metric which appears to strongly correlate with downstream performance after fine-tuning, saving time and compute costs.
* Paper shows strong performance improvements on existing datasets/baselines using the proposed strategy
Weaknesses: * The authors only run experiments using Encoder only models, this is in line with a 2021 paper which their benchmarks (data + a model) are based on. However given the recent attention to decoder only models, and the shown impact data quality has on pretraining (see llama / red pajama), an obvious question is if decoder only models also benefit from this low cost strategy. Including decoder/generative experiments may increase the impact of the method.
* The task shares similarities with active learning, which should be included somewhere in the related work section.
* The authors only consider n-gram based features, it’s mentioned in the limitations but it likely wouldn’t take much to run the experiments with other features. On Line 173, the authors mention the potential for BERT embeddings, but there are no related experiments. Clearly, the use of just n-grams is effective, but why are n-grams the best way to determine if something is ‘in distribution’ with a target text?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * TF-IDF (or bm25), i.e. a simple retrieval system, could be sufficient to retrieve documents relevant to the target set. Line 263 references experiments, but the results should be included. Related to this, if a bm25 approach yields many duplicate documents, why can’t they just be deduplicated? And why does the proposed approach not yield duplicates?
* How are the distributions (q(x) and p(x)) on lines 91/92 computed? Elsewhere it is described as a generative distribution, but is it simply parameterized by counting n-grams in the respective datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The discussion of limitations is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. LSWj notes that the paper is **“highlighting an important issue” and “shows strong performance improvements on existing datasets/baselines”**. We answer specific questions below:
> “The authors only consider n-gram based features, it’s mentioned in the limitations but it likely wouldn’t take much to run the experiments with other features”
**During the rebuttal period, we implemented a first-pass version of DSIR with embeddings from a pretrained language model.** We extracted features using a Sentence Transformer (miniLM-v6-2), learned raw and target distributions over the features parameterized as 1000 and 50-component Gaussian mixture models respectively, and used these within the DSIR framework to select data for general-domain pretraining. The results are shown below in a table. **On average, DSIR with neural features improves by 1-1.5%+ over random selection and heuristic classification** and is on par with top-k heuristic classification and top-k DSIR, but still underperforms DSIR with n-gram features. However, we believe that this is still a promising direction since some steps could be improved, such as the hyperparameters of the Gaussian mixture model. We thank the reviewer for the suggestion.
| GLUE dev | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B | Avg |
|--------------------------------|------:|------:|------:|------:|------:|------:|------:|------:|---------:|
| Random selection | 82.63 | 86.9 | 89.57 | 67.37 | 90.05 | 87.4 | 49.41 | 88.63 | 80.25 |
| Heuristic classification | 82.69 | 85.95 | 89.77 | 68.59 | 88.94 | 86.03 | 48.17 | 88.62 | 79.85 |
| Top-k heuristic classification | 83.34 | 88.62 | 89.89 | 70.04 | 91.15 | 86.37 | 53.02 | 89.3 | 81.47 |
| Top-k DSIR | 83.39 | 88.63 | 89.94 | 72.49 | 91.01 | 86.18 | 49.9 | 89.52 | 81.38 |
| DSIR + n-gram features | 83.07 | 89.11 | 89.8 | 75.09 | 90.48 | 87.7 | 54 | 89.17 | 82.30 |
| DSIR + neural features | 83.44 | 88.2 | 89.81 | 70.68 | 90.5 | 87.55 | 52.58 | 88.4 | 81.40 |
> “Retrieval methods: Line 263 references experiments but the results should be included. “
- Line 263 refers to small preliminary data selection tests using BM25 retrieval methods, where we found that when selecting data with AGNews as the target, **out of 6.1M documents retrieved by BM25, there were only 1.8M unique documents (70% were exact duplicates).** We will add these results in the final revision.
- Since heuristic classification is the method used to filter large datasets such as the Pile and the training data for GPT-3 and PaLM, and since heuristic classification (which uses an inner product score between pretrained word embeddings and a learned vector) is closely related to retrieval, we mainly conducted full comparisons against heuristic classification.
> “if a bm25 approach will yield many duplicate documents, why can’t they just be deduplicated? And why does the proposed approach not yield duplicates?”
**Although BM25 retrieved data could be deduplicated, this results in less control over the number of selected examples.** In comparison, DSIR will return exactly the number of examples requested, avoids choosing the exact same document by sampling without replacement, and naturally handles deduplication of repeated documents in the raw data via importance resampling.
In detail, DSIR more gracefully handles duplicates through 2 mechanisms:
- **1) DSIR avoids sampling the same document by sampling without replacement.** BM25 may select the same document multiple times from different query strings.
- **2) DSIR naturally decreases the probability to sample an example proportionally to how much it is duplicated in the raw data.** For instance, an example x that is duplicated 1000 times in the raw data will have 1000 times higher probability p(x) under the raw distribution. This reduces the importance weight by a factor of 1000 (since p(x) is in the denominator).
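A toy sketch of mechanism (2) combined with sampling without replacement, using the Gumbel-top-k trick (a standard equivalence between top-k of Gumbel-perturbed log weights and sampling without replacement; the paper's exact sampling procedure may differ). A document duplicated 1000 times in the raw data has its log importance weight reduced by log(1000), so it is almost never selected:

```python
import math
import random

def gumbel_topk(log_weights, k, seed=0):
    """Sample k distinct indices with probability proportional to
    exp(log_weights), via the Gumbel-top-k trick (without replacement)."""
    rng = random.Random(seed)
    keys = [lw - math.log(-math.log(rng.random())) for lw in log_weights]
    return sorted(range(len(log_weights)), key=keys.__getitem__, reverse=True)[:k]

# Two documents with equal target probability; the second is duplicated
# 1000x in the raw data, so its log importance weight drops by log(1000).
log_weights = [0.0, 0.0 - math.log(1000)]
picks = [gumbel_topk(log_weights, k=1, seed=s)[0] for s in range(200)]
# `picks` is almost always index 0 (the non-duplicated document).
```

Because the Gumbel noise is added once per example, the top-k of the perturbed keys never repeats an index, which is what rules out selecting the exact same document twice.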
> “ Including decoder/generative experiments may increase the impact of the method”
Due to compute limitations, we weren’t able to pretrain a decoder-only model that is large enough for meaningful few-shot/generative evaluations, but we would like to do so in the future. We thank the reviewer for the suggestion and will add it to the discussion in the final revision.
> “The task shares similarities with active learning, which should be included somewhere in the related work”
**We agree and had already included some active learning works** in the related work (e.g., https://arxiv.org/abs/1901.01151, https://arxiv.org/abs/1708.00489), but we will add more and make the reference to active learning more explicit in the final revision. We thank the reviewer for the suggestion.
> “How are the distributions (q(x) and p(x)) on lines 91/92 computed? … is it simply parameterized by counting n-grams in the respective datasets?”
Yes, they are computed by counting n-gram frequencies. This allows DSIR with n-gram features to be cheap to run on large raw datasets.
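For concreteness, here is a minimal sketch of this parameterization (the bucket count, smoothing, and function names are illustrative; the paper's exact featurization may differ): hashed n-gram counts define smoothed bag-of-ngrams distributions for the raw and target data, and the log importance weight of a document is the log-ratio of its likelihood under the two.

```python
from collections import Counter
import math

NUM_BUCKETS = 10_000  # hashed n-gram vocabulary size (illustrative)

def ngram_counts(docs, n=2):
    """Count hashed unigram and bigram frequencies over a corpus."""
    counts = Counter()
    for doc in docs:
        toks = doc.lower().split()
        grams = toks + [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        for g in grams:
            counts[hash(g) % NUM_BUCKETS] += 1
    return counts

def log_prob(doc, counts, alpha=1.0):
    """Log-likelihood of a document under a smoothed bag-of-ngrams model."""
    total = sum(counts.values())
    toks = doc.lower().split()
    grams = toks + [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]
    return sum(math.log((counts[hash(g) % NUM_BUCKETS] + alpha)
                        / (total + alpha * NUM_BUCKETS)) for g in grams)

def log_importance_weight(doc, target_counts, raw_counts):
    """log p_target(x) - log q_raw(x): the quantity used to resample data."""
    return log_prob(doc, target_counts) - log_prob(doc, raw_counts)
```

Because the model is just frequency counts over a fixed number of hash buckets, memory stays constant regardless of corpus size, which is what makes this cheap to run over a large raw dataset.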
---
Rebuttal Comment 1.1:
Title: Further discussion
Comment: Dear reviewer, may we ask if you could respond to our comments? In our response, we explore the use of neural features, clarify the comparison to retrieval methods, and addressed other detailed concerns. Please let us know if you have other questions or concerns. Thank you! | null | null | null | null | null | null |
Breadcrumbs to the Goal: Goal-Conditioned Exploration from Human-in-the-Loop Feedback | Accept (poster) | Summary: The paper introduces Human Guided Exploration (HUGE), a system designed to integrate human feedback within the Goal-Conditioned Reinforcement Learning (GCRL) environment, offering a cost-effective and straightforward method for learning diverse robotic tasks through actual human interaction. The authors' primary objective is to strike a balance between overexploration and underexploration in GCRL tasks by incorporating human-in-the-loop assistance. Consequently, human feedback is utilized to ascertain the states requiring further exploration, building on the traditional GCRL algorithms and the GO Explore exploration framework. In particular, a target distance estimation function is trained using binary feedback, facilitating the generation of a more precise subgoal. Subsequently, the Goal-Conditioned Supervised Learning (GCSL) training paradigm is implemented to establish an interaction strategy with the environment. Experimental results in Mujoco and Pybullet demonstrate that HUGE outperforms the preceding GCRL algorithm and Human-in-the-loop algorithm.
Strengths: - The integration of human feedback into the GCRL setting is well-conceived. Hence, the motivation is clear, and the proposed method is, in general, logical and sensible.
- This technique facilitates a simple interface between the human labeler and the algorithm, in which the human supervisor provides binary evaluations to establish which state-goal pairings are closer in comparison to others.
Weaknesses: - The paper presents a degree of innovation that is somewhat restrained, as it essentially combines modified versions of existing methodologies (GCRL, Go Explore, and Human Preference). The foundational GCRL algorithm used is the previously established GCSL, while the exploration component is derived from the Go-Explore paradigm. It is important to note that GCSL paired with Go-Explore could have operated independently without the need for human feedback.
- As per the previous discussion, it's not unexpected to see improved results when human feedback is integrated into the Go-Explore process, as it introduces more a priori knowledge. The implementation of binary feedback can help to decrease the exploration space by learning to rank and establishing a subgoal model.
- This strategy essentially transforms the behavior cloning challenge into a binary human feedback problem. While this appears logical, it makes the motivation somewhat ambiguous and makes specific presumptions about the task at hand. Consequently, the work lacks a degree of originality and novelty. The three fundamental components of the proposed method: the "goal-conditioned policy", "hindsight relabeling", and "learning reward models" from human preference, have all been previously suggested in earlier papers.
- I think the tasks for evaluation are borderline novel and perhaps not hard enough when we consider a setting where human feedback can be utilized (these tasks are still challenging for RL-based methods learning with dense/sparse rewards). The maze and robotic manipulation tasks are commonly seen in the literature (in fact, I am surprised PPO is completely failing on some of these tasks; I have personally trained PickNPlace tasks with an RL-based method with a properly shaped reward. Otherwise, the authors can also explore other SOTA methods such as TD3 or SAC). These tasks are not seen as a fundamental challenge for present learning methods. The main result of the paper is a comparison of the proposed method against RL algorithms like PPO and LEXA in terms of sample efficiency. I would prefer to see longer-horizon tasks that an RL agent with a dense reward cannot easily solve. I also did not see comparison results for the Sim2Real tasks nor for PickNPlace.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Other comments:
- The spelling "sparce" should be corrected to "sparse" in Figure 5.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The work lacks novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and for taking the time to review our work. Please find answers/additional experiments to address your concerns below
> “GCSL paired with Go-Explore could have operated independently without human feedback”
Go-Explore + GCSL suffers from the issues of undirected frontier expansion. To illustrate this, we run this baseline (Go-Explore + GCSL) across all domains in Fig AM1. We observe:
- The baseline (purple line, GoExplore+GCSL) does not always succeed and takes more samples than HuGE.
- Longer horizon goals are not discovered. In multistep tasks such as the kitchen, Go-Explore + GCSL only manages to discover how to do 1 task (interacting with one of the objects) at a time, it never manages to combine 2 or 3 tasks as HuGE does.
In contrast, we can see the benefit of HUGE in guiding exploration and reducing search space.
> “It is not unexpected to see improved results when human feedback is integrated into the Go-Explore process”
It is *very important* how we integrate human feedback into the learning process. As we explain in Section E.1 [paragraph: HuGE is robust to noisy labels] and show in Fig AM 7, prior work that introduces human feedback into the learning pipeline fails when this human feedback is noisy. We show that using the learned reward function to sample goals and guide exploration makes our method more robust to noisy human feedback.
> “The three fundamental components of the proposed method: the "goal-conditioned policy", "hindsight relabeling", and "learning reward models" from human preference, have all been previously suggested in earlier papers”
While the terms - goal-conditioned policies, hindsight relabeling and learned reward models have been seen in prior papers, we would like to emphasize that:
1. We propose a unique way of integrating learned reward models into the exploration of learning goal-conditioned policies.
2. *Separating* policy learning from human feedback, and using it for soft frontier expansion (Ours) is crucial to both enable the solving of harder tasks and learning from noisy human feedback. Our results on Go Explore + GCSL [21,24,37], Human Preferences [11], and GCSL [24] emphasize this.
3. We enabled capabilities that were not possible in prior work solving long horizon tasks *without* careful reward engineering, using occasional and noisy (crowdsourced) human feedback
> “I think the tasks for evaluation are borderline novel and perhaps not hard enough when we consider a setting where human feedback can be utilized”
We propose and solve harder tasks than any of the prior related work:
1. PEBBLE benchmarks: opening single drawers/doors -> HuGE benchmark (kitchen): three sequential tasks in the kitchen involving open drawers, and sliders
2. LEXA benchmarks:
- two sequential tasks in the kitchen-> HuGE benchmark (kitchen): three sequential tasks in the kitchen involving open drawers, and sliders
- Pick and place tasks with two blocks ->Huge benchmark (bandu, block-stacking): assembling a castle-like structure with 4 blocks
HuGE beats both PEBBLE and LEXA (Go-Explore+GCSL) on these harder benchmarks. Moreover, we see that algorithms like PPO/SAC do not learn or learn significantly slower.
> “Surprise that PPO is completely failing on some of these tasks”
The goal in Figure 5 was to provide a fair comparison between all of the benchmarks using the same underlying reward. Since HuGE is robust to noisy underlying reward functions, this works with unshaped reward functions. For further analysis, we performed a new experiment with a more carefully tuned underlying reward function, Fig AM5. We see that PPO and HP fail with totally reasonable reward functions. After engineering effort finetuning these reward functions, these methods succeed, but still more slowly than HuGE. On the other hand, we show HuGE’s robustness to the underlying reward function, since it works in both cases.
> “These tasks are not seen as a fundamental challenge for present learning methods” “I prefer to see longer horizon tasks where an RL agent with a dense reward cannot easily solve”
We refer the reviewer to the drawing task in the real world where the reward function for the robot to draw something is very hard to specify. In this case, we do not have access to an oracle dense reward function and yet HuGE is able to succeed from easy-to-provide human feedback.
In addition to this, we have also run state-of-the-art baselines - PPO/SAC and several baselines for learning from human preferences on the environments in simulation. We find that HUGE is both more efficient and performant in these baselines across tasks. This suggests that without significant reward engineering, these tasks *are* challenging for current methods.
> “I also did not see comparison results for the Sim2Real tasks nor for PickNPlace.”
Because of HuGE’s sample efficiency, we can learn the policies directly in the real world; hence, no modeling nor sim2real transfer is required. While this could certainly be performed, the point of the real-world experiments is to show that HuGE is both feedback- and sample-efficient for real-world learning.
> “This strategy essentially transforms the behavior cloning challenge into a binary human feedback problem.”
The problem we consider is learning from human comparative feedback, not converting behavior cloning into a feedback problem. We propose an algorithm for learning from comparative feedback, and then we show that this algorithm can benefit from some pretraining using behavior cloning. We would appreciate a clarification if we have misunderstood.
> SAC comparison
We also conduct an experiment to compare with other off-policy RL methods like SAC. We see that HuGE and the previous baselines outperform SAC from dense rewards in these benchmarks, Fig AM 1. A small caveat is that these results could be improved somewhat with further tuning of SAC, but will still require reward engineering.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: I would like to thank the authors for their response and efforts towards improving the paper. In light of the other reviewers' opinions and the additional results, I'm increasing my score to 5. | Summary: This paper focuses on the exploration problem in decision-making tasks. Previous works try to leverage human guidance with constant synchronous high-quality human feedback, which is expensive and impractical to obtain. In this paper, the authors propose Human Guided Exploration (HUGE), which is able to leverage low-quality feedback from non-expert users. Specifically, the key idea is to separate the challenges of directed exploration and policy learning. Human feedback is only used to direct exploration, while self-supervised policy learning is used to independently learn unbiased behaviors from the collected data. The task for human annotators is just to select which of two states is closer to a particular goal. Then this ranking is used to train a distance function for goal selection. The experimental results on robotic navigation and manipulation tasks in both simulation and on real-world robots demonstrate the advantages of the proposed method.
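The binary-comparison signal described in the summary can be illustrated with a Bradley-Terry-style logistic loss over state pairs. The linear ranker, the hand-picked feature map, and the toy goal-at-origin data below are illustrative assumptions for the sketch, not the paper's actual architecture:

```python
import math
import random

def train_ranker(pairs, dim, lr=0.1, epochs=200):
    """Fit a linear score f(s) = w . s so that f(s_a) > f(s_b) whenever a
    human labeled s_a as closer to the goal (Bradley-Terry logistic loss)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for s_a, s_b in pairs:              # s_a was labeled closer to the goal
            diff = [a - b for a, b in zip(s_a, s_b)]
            logit = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-logit))   # P(annotator prefers s_a)
            w = [wi + lr * (1.0 - p) * di for wi, di in zip(w, diff)]
    return w

# Toy check: the goal is the origin; featurizing a state as its negative
# squared coordinates lets a linear ranker express "closer to the goal".
def feat(s):
    return (-s[0] ** 2, -s[1] ** 2)

rng = random.Random(1)
pairs = []
for _ in range(200):
    a = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    b = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    if a[0] ** 2 + a[1] ** 2 > b[0] ** 2 + b[1] ** 2:
        a, b = b, a                         # ensure the first state is closer
    pairs.append((feat(a), feat(b)))
w = train_ranker(pairs, dim=2)
```

The learned score induces a ranking over states that can then drive goal selection, which is the sense in which feedback directs exploration without ever touching the policy-learning objective.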
Strengths: 1. The idea of the proposed method is simple yet effective. Separating exploration and policy learning dramatically reduces the annotation effort and the influence of noisy annotations on policy learning. Using human preferences to learn the distance function is also better than directly learning a reward function.
2. Thorough evaluation on multiple environments, including both simulation and real-world tasks in the robotic navigation and manipulation domains. The experimental results show that HUGE outperforms other methods by a large margin.
Weaknesses: 1. Figure 1 provides limited information. It is hard to see from the figure that the human feedback is noisy and asynchronous. The pick-and-place and drawing tasks are not explained in detail. This figure could be improved by adding a comparison between previous works that require high-quality feedback and the proposed method.
2. The method may need to be evaluated on more complex real-world tasks. The authors argue that novelty-based exploration performs task-agnostic exploration of the entire state space, thereby over-exploring the environment. But it seems that the environments used in this paper do not contain a very large exploration space.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No particular questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitation is discussed in the paper. One potential limitation is that human annotation of preferences over states may be difficult for tasks that require logical reasoning. The experiment environments used in this paper mainly focus on L2 distance, which makes it easy for humans to quickly select the state closer to the goal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and for taking the time to review our work. Please find answers/additional experiments to address your concerns below.
> Figure 1 provides limited information.
We will improve this as suggested to include comparisons with prior methods. Thank you for your suggestions!
> The authors argue that Novelty-based exploration performs task-agnostic exploration of the entire state-space, thereby over-exploring the environment. But it seems that the environments used in this paper do not contain very large exploration space.
To verify this, we scaled up the experiment of Go-Explore + Novelty to test task-agnostic exploration via undirected frontier expansion (in Fig AM 1). We find that while this can solve certain tasks (maze, four rooms), it takes significantly longer and is unable to solve the more challenging tasks (kitchen, block-stacking, pusher). This shows the presented environments have a large exploration space and also shows why directing frontier expansion is important. Finally, prior work on exploration methods such as LEXA also tests their algorithms on some of the same benchmarks, such as the kitchen and block stacking, but they solve shorter horizon problems:
1. In the kitchen, LEXA only manipulates 2 objects -> we manipulate 3 sequentially,
2. In block manipulation LEXA only moves 2 blocks -> we assemble a castle-like structure with 4 blocks
> The method may need to be evaluated on more complex real-world tasks
While this was challenging to perform during the short rebuttal period, we are excited to continue this evaluation as future work on more complex real-world tasks, such as object rearrangement in a household setting, drawing a variety of shapes, etc.
> “One potential limitation is that the human annotation of the preference of states may be difficult for tasks that require logical reasoning. The experiment environments used in this paper mainly focus on L2 distance, which is easy for humans to quickly select the state close to the goal.”
While it is true that many of the environments use distance as an oracle, this is not fundamental to the method. We would like to point out the drawing experiment that we performed in the real world: it is very hard to specify a notion of distance for this experiment, yet it remains very easy for a human to quickly select which state is closer to the goal. We will note this potential limitation in the manuscript.
---
Rebuttal Comment 1.1:
Title: Response to author
Comment: Thanks for addressing my concerns. I don't have further questions. | Summary: This paper introduces HUGE, a Reinforcement Learning algorithm that makes use of human preferences to guide the selection of partial goals.
HUGE expands upon Goal-Conditioned Supervised Learning (GCSL) by improving the goal selection method, using noisy labels from humans to form a model, $f_\theta$, of distances to a goal state. This model $f_\theta$ is then used to set goals for the GCSL algorithm to learn in a self-supervised fashion, thus bootstrapping learning.
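The goal-selection loop this summary describes can be sketched roughly as follows (a minimal illustration under assumed names; `f_theta` here is a hypothetical placeholder that stands in for the preference-trained model, which in the actual method is learned rather than computed from a known goal):

```python
import random

random.seed(1)

FINAL_GOAL = (5.0, 5.0)  # assumed 2-D task, purely for illustration
# replay buffer of states visited so far (clustered away from the goal)
replay_buffer = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(50)]

def f_theta(state):
    # placeholder for the learned distance-to-goal model: lower = closer
    return (state[0] - FINAL_GOAL[0]) ** 2 + (state[1] - FINAL_GOAL[1]) ** 2

def select_exploration_goal(buffer, k=10):
    """Bias exploration toward the frontier: among k sampled visited states,
    pick the one the goal selector ranks closest to the final goal."""
    candidates = random.sample(buffer, k)
    return min(candidates, key=f_theta)

goal = select_exploration_goal(replay_buffer)
# The goal-conditioned policy would then be rolled out toward `goal`, and the
# resulting trajectory added back to the buffer; policy learning itself uses
# hindsight relabeling and is therefore not biased by the noisy selector.
```

The design point is that the noisy human signal only steers *which* goals are attempted, never the supervised policy update itself.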
The paper features an extensive evaluation of HUGE, with 6 synthetic tasks, 2 real robot tasks and one task trained from crowd-sourced feedback. Results show that HUGE outperforms a large number of baselines.
Strengths: * HUGE combines intuitions from GCSL and GoExplore, and can take advantage of asynchronous human preferences.
* The paper is really well written and is easy to follow.
* The paper presents a thorough evaluation. I find it particularly impressive that HUGE was able to train a real robot policy in under 30 hours with just 130 annotations.
* HUGE can take advantage of, but does not require, expert demonstrations through Behaviour Cloning.
Weaknesses: * It would have been better to choose PEBBLE [1] rather than Christiano et al. (2017) as a more representative example of the performance of Preference-based Reinforcement Learning.
* Some of the results are buried in the appendix. For instance, section 5.2 does not mention which task was analysed. From Figure 6 left, I assume it's the Kitchen task from D4RL. Still, neither the final performance nor the task is mentioned in the main text.
* Similarly, the main results of the analysis comparing the final performance and the number of annotations should be included in the main text, since it provides readers with an estimation of how many labels they may need for their task.
**Post-rebuttal update**
Authors added a comparison against PEBBLE, where HUGE outperforms it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How were the annotations for the real-world tasks obtained?
2. Why does BC+Ours finish early in Figure 5 Kitchen & Bandu?
**Minor nitpicks and suggestions:**
* $\mathcal{G}$ is undefined in equation 4.
* In algorithm 1, step 5, the second argument to PolicyExploration should be $f_\theta$.
* In figure 6 left, what is the difference between "human + 5 demos" and "crowd-source + 5 demos". Moreover, the colours of their curves are indistinguishable to my eyes.
* When defining $f_\theta$, I would clarify that "closer" is not necessarily meant as a mathematical distance, but rather as a human intuition of closeness, which I believe is what $f_\theta$ is supposed to capture.
**Post-rebuttal update**
See discussion for answers to the above questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your feedback and for taking the time to review our work. Please find the answers to your comments and concerns below.
> “More representative to choose PEBBLE”
We implemented PEBBLE (using the authors' GitHub repository) and provide the new curves with this implementation in Fig AM1. We find that while PEBBLE can learn in certain environments (block stacking, bandu, maze), its performance is worse than Human Preferences and HuGE.
Since the reviewer seemed particularly keen on comparing to PEBBLE, we tried to understand better why it was failing, and we found the following:
- We observe that when the initial unsupervised exploration step does not cover a big enough set of the state space, PEBBLE gets stuck and does not manage to reach the goal.
- We spent a decent amount of time trying to make it work better (hyperparameter tuning on entropy, number of unsupervised steps, increasing the stochasticity in the rollout actions), but nothing seemed to make it work on our benchmarks. We used the implementation provided by the authors and also implemented our own version; in both cases it faced the same issue of getting stuck mid-way (meaning exploration collapsed and no progress was made).
> “Some of the results are buried in the appendix”
As suggested, we will move the final performance of the baselines, and the number of annotations for the real human experiments into the main text of the paper and provide more clarity on the problem setting.
> “For instance, section 5.2 does not mention what the task analyzed are.”
Thank you for pointing this out! It was only mentioned briefly in Figure 6's caption, and we will clarify this in the camera-ready version. In the meantime, for more information: the crowdsourced data collection was performed on the kitchen manipulation task, where the Franka arm needs to open 3 objects (cabinet, microwave, slider); more details about the kitchen environment are in Appendix C.
> “The main results of the analysis comparing the final performance and the number of annotations should be included in the main text”
Indeed. We will include this in the main text in Section 5.1. In the meantime, we refer the reviewers to Appendix D and Table D.10 for more details.
> “How were the annotations for the real-world tasks obtained?”
We obtained the annotations for the real-world task in the same manner as in the simulated experiments, querying three annotators, no crowdsourcing was done in these experiments. We will clarify this in the text, and for more information on how the feedback was collected on the simulated experiments we refer the reviewers to Appendix B.
> “Why does BC+Ours finish early in Figure 5 Kitchen & Bandu?”
We will update these two curves for the camera-ready, they were not fully completed due to computing limitations.
>“In figure 6 left, what is the difference between "human + 5 demos" and "crowd-source + 5 demos". ”
The difference is in the number of humans that collected the labels: for "human + 5 demos" only one annotator provided the labels, while for "crowd-source + 5 demos" 109 annotators provided the feedback. We provide more details about this crowdsourcing experiment in Appendix B.1.
> Minor corrections
We thank you for this feedback, and we will incorporate this into the manuscript.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal.
Comment: Thank you for your detailed rebuttal, the additional experiments regarding PEBBLE, and for really trying to get the PEBBLE baseline working. It makes sense that insufficient unsupervised exploration would cause PEBBLE to collapse.
Since my main concern and questions have all been addressed, I will increase my rating. | Summary: Broadly, the paper addresses the challenge of learning multi-stage robotic navigation and manipulation tasks in simulation.
The paper frames several benchmark tasks as goal-conditioned RL (GCRL), and they present a novel method ("HUGE") for leveraging human preferences collected during learning. In particular, the preferences are used to train a goal-selector model that guides (biases) exploration. For policy-learning, they use goal-conditioned supervised learning on the collected replay buffer, without requiring a handcrafted task-specific reward.
Compared to [1], HUGE also uses hindsight relabeling to learn from a replay buffer despite a sparse reward, but it differs in that it biases sampling from the replay buffer based on human preferences. Compared to [11], HUGE also collects human preferences iteratively (intertwined with learning). However, HUGE doesn't use its collected human feedback to learn a reward or otherwise directly bias the policy. Instead, the feedback is only used to bias exploration. Compared to [17], HUGE also has an exploration phase that builds an archive of trajectories. However, HUGE chooses promising goal states and generates trajectories using its goal-conditioned policy, while [17] chooses promising start states based on novelty and can generate trajectories with various exploration policies (random, epsilon-greedy, novelty-based).
The authors evaluate on four simulated manipulation tasks, two simulated navigation tasks, and two real-robot tasks. They compare to several baselines including the related work mentioned above as well as PPO (with dense and sparse reward variants).
Strengths: Their method doesn't require a handcrafted reward and it's able to leverage noisy, infrequent human feedback. These strengths are relevant and valuable to this domain.
Evaluation on a variety of simulated and real tasks.
Impressive crowdsourcing infrastructure and diversity of annotators.
Interesting, well-explained adaptation of real-world tasks to fit their use of goal-conditioned RL.
Weaknesses: Lack of clear, fair numeric results on task performance across HUGE and the baselines:
= In Figure 5, I can only compare task performance after a given number of steps. A table with a performance number after training each approach to convergence would be more clear and would better support the claim from your text "HUGE beats prior work on the proposed long horizon tasks in simulation".
= It's not clear why some tasks were trained for millions of steps and others not.
= The paper text conveys that Figure 5's various approaches are using a poorly-tuned reward function ("we did not do any reward finetuning"). In particular, this makes the PPO results questionable.
= This section also uses synthetic human preferences derived from your reward function, instead of real human preferences. This confuses me and contradicts the earlier premise of HUGE. HUGE with this synthetic source of preferences would appear to be another RL method that utilizes a handcrafted reward function and not human feedback. This muddies your comparison with other approaches, especially [11] which is intended to be guided by real (not synthetic) human feedback.
= It seems like other baselines might benefit from the addition of behavior-cloning pretraining, as you did for "BC + Ours".
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Can you improve your comparison to baselines based on the feedback I gave above? In particular, I am most interested in a robust, fair comparison to [11] ("Human Preferences"), because I consider that work the most directly comparable to yours: it doesn't require a handcrafted reward signal but it does require collection of human preferences intertwined with learning.
I hope you'll address the weaknesses in your PPO baseline I mentioned earlier: train to convergence with a fine-tuned reward function. Figure 5 currently shows that HUGE (with synthetic preferences derived from a reward function) outperforms PPO using the same reward function. If this result holds after you improve the PPO baseline, can you analyze this? Would it mean the synthetic variant of HUGE is fundamentally a better optimization algorithm compared to PPO?
I notice your two real-world tasks are fully-observable. How would you apply your approach to a real-world task that is only partially-observable, such as room-scale manipulation with visual perception?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: No concerns here
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and for taking the time to review our work. Please find answers/additional experiments to address your concerns below.
> “Poorly-tuned reward function”
A key point of our paper is the robustness to noisy human feedback, which means that even with a simple guiding human proxy, HuGE can still solve the proposed tasks, as we showed in Appendix E.15. It is known that this is not the case for PPO, which needs careful reward shaping to succeed AM[3,4]; this is also reflected in our results in Fig 5 of the original paper.
We provide a new analysis experiment on this result (Fig AM 5), where we test HuGE, PPO, and Learning from Human Preferences (using PPO) on two reward functions, one that we very carefully shaped during the rebuttal period, and another that is as in the original submission (note that the reward function serves to inform the synthetic human proxy for HuGE). We observe:
- HuGE succeeds with both reward functions
- PPO only succeeds when the reward function is carefully shaped.
- Human Preferences only partially succeeds when the reward function is carefully shaped
- When PPO and Human Preferences succeed, they are significantly less sample efficient than HUGE.
This demonstrates that HuGE is robust to simple underlying reward functions, which makes it robust to learning these reward functions from Human Feedback. This is also reinforced by additional experiments with real human feedback on all domains as shown in Fig AM 2-3.
>“Going from Synthetic Human Feedback to Real Human Feedback” & “Fair comparison with Human Preferences”
In Figure 5, all baselines that rely on human feedback were trained using a proxy-human for a fair comparison among methods. However, as the reviewer requested in Figure AM 2-3 we provide additional results of HuGE working from real human feedback and results on Human Preferences from real human feedback on the four rooms and empty rooms environments. With this robust, fair comparison between HuGE and Human Preferences, we observe that when learning from real humans:
- HuGE is more sample efficient in the number of human labels compared to learning from human preferences [11] with real humans
- HuGE is more sample efficient in the number of timesteps to succeed compared to [11].
>“Analysis of PPO vs HuGE (from synthetic feedback)”
As noted, we observe that HuGE is more sample efficient than PPO. To analyze why, we compared the gradient variance of HuGE against that of PPO, using the average pairwise cosine similarity across mini-batches as a measure of gradient variance (as suggested in AM[1]). We find that during training the gradient similarity is significantly larger for HuGE than for PPO, potentially explaining the improved sample complexity. Besides this, other components in HuGE may also lead to faster learning:
1. Generalization across states that come from self-supervised hindsight relabeling,
2. Hindsight relabeled policy learning is off-policy while PPO is on-policy.
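The gradient-variance proxy mentioned above (average pairwise cosine similarity across mini-batch gradients, in the spirit of AM[1]) can be computed with a few lines; the toy gradient lists below are invented for illustration only:

```python
import math
from itertools import combinations

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def avg_pairwise_cosine(grads):
    """Average pairwise cosine similarity over per-mini-batch gradient vectors:
    values near 1 indicate consistent (low-variance) gradient directions."""
    pairs = list(combinations(grads, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Toy illustration: well-aligned gradients vs. scattered ones.
aligned = [[1.0, 0.9], [0.9, 1.0], [1.0, 1.1]]
scattered = [[1.0, 0.0], [-1.0, 0.2], [0.0, 1.0]]
```

In practice one would collect per-mini-batch gradients from the training run and apply `avg_pairwise_cosine` to each method's gradients.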
> “Does it mean the synthetic variant of HuGE is fundamentally a better optimization algorithm compared to PPO?”
Not necessarily, given a carefully shaped reward function, PPO typically eventually works on most decision-making problems, however, HuGE is a method for goal-reaching problems, which is a subset of all possible decision-making problems. In these goal-reaching problems, we have shown that HuGE can typically outperform PPO even at convergence when the reward functions are noisy. However, PPO is a more general learning paradigm and more careful experimentation and ablations are needed before making broad-reaching claims.
>“Other Baselines would benefit from BC pretraining”
To provide a fair comparison, we also added learning from human preferences with the same amount of BC pretraining (implemented using stable baselines AM[2]). We found that despite this pretraining, learning from human preferences [11] and PPO were outperformed by HUGE.
>“Table of performance at convergence:”
We would like to refer the reviewer to Table D.10 with the results after convergence in the Appendix.
>“Number of training steps per task”
Each task has different action and observation spaces as well as different ranges of timesteps per episode, we refer the reviewer to Appendix C for more details. Also, we ran each baseline until convergence.
>“How would you apply your approach to a real-world task that is only partially observable”
As of now, the framework is largely structured around fully observable problems, as is a significant body of work on reinforcement learning in MDPs [11,21,24,37]. Training recurrent policies, rather than MLPs, as well as recurrent goal selectors, may make the system applicable to partially observable settings.
AM[1] Ilyas, Andrew, et al. "A closer look at deep policy gradients." arXiv preprint arXiv:1811.02553 (2018)
AM[2] Raffin, Antonin, et al. "Stable-Baselines3: Reliable Reinforcement Learning Implementation" http://jmlr.org/papers/v22/20-1364.html (2021)
AM[3] Learning robust perceptive locomotion for quadrupedal robots in the wild, Takahiro Miki, et al, Science Robotics, 2022.
AM[4] Visual Dexterity: In-hand Dexterous Manipulation from Depth, Tao Chen, et al. 2022, arxiv.org/abs/2211.11744
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. The main novel contribution here is the use of a noisy preference signal to bias exploration, while avoiding biasing the learned policy. The rebuttal experiments and baselines show more clearly that this works well, regardless of whether this is synthetic (e.g. a poorly-tuned reward function) or real human preferences. I've revised my rating. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for your constructive feedback. In response to reviewer concerns, we have conducted a number of new experiments and analyses. We describe these briefly below and refer reviewers to individual responses for detailed clarifications:
**[DISCLAIMER 1]**: In the following experiments we talk about reward functions and RL methods that use reward functions. However, HuGE does *NOT* need any reward function, it learns uniquely from human feedback. Where specified, we use human proxies, which have an underlying reward function, to provide a fair comparison among different baselines.
**[DISCLAIMER 2]**: Figures in the Additional Material (PDF) attached to the rebuttal are referred to in the text as Fig AM ${figure number}.
1. **More carefully tuned rewards for PPO**: (reviewer MJKZ): We compare the performance of HuGE, PPO, and Human Preferences from finetuned/not finetuned reward functions in order to clarify how important careful reward engineering is to the success of these methods. Findings:
- HUGE succeeds even without careful reward engineering,
- PPO and Human preferences struggle without careful reward engineering.
- With careful reward engineering - PPO and Human preferences can succeed, but they learn significantly slower than HUGE. (Fig AM 5)
2. **HUGE (Ours) with real human feedback (reviewer MJKZ)**: We provide additional results on running HuGE from real human feedback across more benchmarks (see Fig AM 2-3). HuGE with real human feedback matches the performance obtained when feedback is provided by a synthetic human.
3. **Human preferences [11] with real human feedback (reviewer MJKZ)**: To induce a fair comparison, we also run learning from human preferences[11] with real human feedback, as shown in Fig AM2. Human preferences:
- Needs more human labels to converge
- More timesteps to converge
- converges to lower performance compared to HuGE
4. **HuGE is more robust to noisy feedback compared to Human Preferences** (reviewer 4hMa)
- HuGE matches the behavior obtained with perfect annotations whilst Human Preferences does not solve the task at hand, see Fig AM 7
5. **BC pre-training with human preferences** (reviewer MJKZ): To induce a fair comparison when pretraining data is available, we introduced BC pretraining on a couple of benchmarks for PPO and Human Preferences (see Fig AM 1) and compared this to HUGE with pretraining. We found that although each method improves with demo pretraining, they still take longer to converge than HuGE with or without demos.
6. **Analysis of PPO vs HuGE** (reviewer MJKZ): Since HUGE is much more sample efficient than PPO, we conducted an analysis to understand why this might be the case for goal-reaching problems. We hypothesize this may be (at least in part) due to lower-variance gradients, and show this in Fig AM 6.
7. **PEBBLE comparison** (reviewer EF3C): We conducted an experiment to compare with an off-policy RL method learning from human preferences such as PEBBLE [31]. We find:
- HuGE and Human Preferences methods outperform PEBBLE
8. **Go-Explore + GCSL** (reviewer 4hMa): We conducted an experiment to understand whether GCSL + Go-Explore by itself is effective enough to solve the proposed tasks. We find in Fig AM1 that Go-Explore + GCSL:
- does not discover longer-horizon goals (such as in the kitchen benchmark), which is also seen in prior work (LEXA) [34].
- when it does solve a task, takes much longer than HuGE to discover target goals, since exploration is undirected.
9. **SAC comparison** (reviewer 4hMa): We conducted an experiment to compare with other off-policy RL methods like SAC. We see that HuGE and the previous baselines outperform SAC from dense rewards in these benchmarks, Fig AM 1.
Please let us know if other experiments/clarifications can help with the discussion!
Pdf: /pdf/0cc9e87558a18a4dc9a1fabf0f7c229d7b3a4cf5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling | Accept (poster) | Summary: This work focuses on the theoretical analysis of two main score-based approaches. The first is score-based causal discovery, where the authors give sample complexity error bounds for score matching using ReLU NNs as well as an upper bound on the error rate. The second is score-based generative modeling, where the authors give a sample complexity bound for the score estimation.
Strengths: This work has strong theoretical results, provides sufficient background for readers to understand the theoretical problems, and includes a comprehensive discussion of related work.
Weaknesses: - It is unclear why the analyses of score-based causal discovery and score-based generative modeling are put together in this work; they seem to me to be unrelated. The authors might want to offer some insight into the relationship between these two parts, for example, shared techniques for proving the sample complexity bounds.
- The definition of covering number should be included to make this work more self-contained.
- For Assumption 2, since this is an original assumption proposed by this work, the authors should give more justification for how well this assumption holds in practice. A specific example, even a simple one, showing when this assumption holds would make this work more convincing.
- Providing a proof sketch might make this work more convincing and help readers better understand the technical contributions of this work. It is currently unclear to me why it provides a better analysis than previous work; specifically, why it does not rely on the assumption of low-dimensional data while the previous work Chen et al. [2023a] does.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - It seems that the theoretical results for score-based causal discovery are highly specific to the algorithm in Rolland et al. [2022]. I wonder how these results generalize to other related approaches?
- Does Lemma 1 play a role in the later analysis? I wonder why we need Lemma 1.
- The assumption in Line 148-149 is not properly justified.
- Can the authors provide a specific case of when Assumption 2 holds and an analysis of how practical this assumption is?
- As mentioned above, why this work does not rely on the assumption of low-dimensional data while the previous work Chen et al. [2023a] does?
Minor issue:
- It seems that the case $l = 1$ in Equation (3) can be merged into the case $2 \le l \le L-1$.
- Line 107: donate -> denotes
- The property mentioned in Line 108 seems unrelated to the Definition 1.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the feedback and suggestions provided by reviewer LFXr. Regarding the issues raised by the reviewer concerning the relationship between the causal discovery and generative modeling aspects of the paper, as well as the justification of Assumption 2, we have addressed these matters in the [general response](https://openreview.net/forum?id=uNnPWR66b8&noteId=VWjUWDv2Kl). Furthermore, in the next version of the paper, we will present the main results in a more comprehensive and clearer way, emphasizing the key points of the paper and providing proof sketches to support the main results. Below, we respond to the additional questions raised by the reviewer; we will also address the minor issues raised and incorporate the fixes in the next version of the paper.
**Q1: The definition of covering number.**
A1: The concept of the covering number plays a significant role in statistical learning theory. It is a widely used tool, often appearing in various learning theory textbooks. We appreciate the reviewer's suggestion. To enhance the self-contained nature of our paper, we plan to incorporate a new section in the appendix. This section will provide a comprehensive background definition of the covering number, ensuring that readers can grasp its essence without needing to refer to external sources.
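For reference, the standard definition such an appendix section would contain (stated here in the usual form from statistical learning theory; the specific norm is whichever norm the paper's bounds use) is:

```latex
% \epsilon-covering number of a function class \mathcal{F} under a norm \|\cdot\|:
\mathcal{N}(\epsilon, \mathcal{F}, \|\cdot\|)
  = \min\Big\{ N \in \mathbb{N} : \exists\, f_1, \dots, f_N \in \mathcal{F}
    \text{ such that } \sup_{f \in \mathcal{F}} \min_{1 \le i \le N} \| f - f_i \| \le \epsilon \Big\}
```

Its logarithm, $\log \mathcal{N}(\epsilon, \mathcal{F}, \|\cdot\|)$, is the metric entropy, which is the quantity that sample complexity bounds of this kind typically control.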
**Q2: Low-dimensional assumptions in previous work.**
A2: The previous work [1] rested on the assumption of low-dimensional data structures, employing this to decompose the score function and engineer specialized network architectures for the derivation of the upper bound. Our work takes a distinct route. We harness the inherent traits and conventional techniques of standard deep ReLU networks to directly deduce the upper error bound. We will add such a discussion in the next version of the paper.
**Q3: How do these results generalize to other related approaches?**
A3: Firstly, our theoretical results in Theorems 1 and 3 are not influenced by the choice of algorithm. Although Theorem 2 is established based on Algorithm 1, our technique provides a general processing strategy for analogous algorithms that first compute a topological ordering based on the score function and then prune it to obtain the final DAG. Through suitable adaptations, our method can therefore be extended to similar algorithms [2][3].
**Q4: The role of Lemma 1 in the paper.**
A4: While Lemma 1 is not directly employed in the ensuing proofs presented in this paper, it serves as the cornerstone for Algorithm 1 of [2] in our analysis. Furthermore, it sheds light on vital properties of the non-linear additive Gaussian noise model. Since its incorporation into the primary content may not be essential, we plan to rectify this in the next version of the paper: we will introduce a dedicated appendix section discussing Algorithm 1 and Lemma 1.
**Q5: More discussion about Assumption 2.**
A5: The left-hand side of Assumption 2 represents the expectation of the square of a random variable; this value is 0 when the function $f$ is linear, and correspondingly $C_m$ is 0 as well. For non-linear functions $f$, this value is greater than 0 (disregarding some extreme mathematical cases), and the corresponding $C_m$ is also greater than 0.
Theorem 2 converges when $C_m$ is greater than 0, indicating that Algorithm 1 can converge for the causal discovery of any nonlinear causal relationships.
Different non-linear functions f correspond to different values of $C_m$. The experimental results we provided in the general response demonstrate the varying outcomes of Algorithm 1 under causal relationships with different $C_m$ values.
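To make this dichotomy concrete, here is a small numerical sketch (ours, purely illustrative; the left-hand side is paraphrased as $\mathbb{E}[f''(X)^2]$, which may differ from the paper's exact form): for a linear $f$ the quantity vanishes, while for a nonlinear $f$ it is strictly positive.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

def second_derivative(f, x, h=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f_lin = lambda t: 2.0 * t   # linear mechanism: f'' = 0, so the quantity vanishes
f_nonlin = np.sin           # nonlinear mechanism: f'' = -sin, quantity > 0

lhs_lin = np.mean(second_derivative(f_lin, x) ** 2)
lhs_nonlin = np.mean(second_derivative(f_nonlin, x) ** 2)
print(lhs_lin, lhs_nonlin)  # ~0 versus roughly E[sin(X)^2] ≈ 0.43
```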
**Q6: The assumption in Line 148-149 is not properly justified.**
A6: We disagree with the reviewer's view here. The content in lines 148-149 is not an assumption; it is the standard setting for theoretical analysis of the Ornstein–Uhlenbeck process in score-based generative modeling, which has been widely employed in prior research [1, 4, 5]. However, we appreciate the reviewer's feedback and have added a remark at this point to clarify the setting.
**References**
[1] Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data. ICML 2023.
[2] Scalable Causal Discovery with Score Matching. CLeaR 2023.
[3] Causal Discovery with Score Matching on Additive Models with Arbitrary Noise. CLeaR 2023.
[4] Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. ICLR 2023.
[5] Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarifications. Even though the relationship between score-based causal discovery and score-based generative modeling is briefly explained in the general response, it would require non-trivial modification to the current presentation of this paper to address this concern and to include the new experimental results. Thus, I would keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer LFXr,
Thanks for the prompt response. We did indeed commit to making some revisions in the general response. But it's important to note that these revisions primarily focus on the organization of the paper and the explanation of the main results. These revisions do not affect the core theoretical results, major contributions, and novelty of the paper. Furthermore, the new experiment results serve to enhance the comprehensiveness of the paper and should not be regarded as a negative factor. Therefore, we hope the reviewer can reconsider their opinion.
Best,
Authors
---
Rebuttal 2:
Title: Any remaining questions from reviewer LFXr?
Comment: Dear reviewer LFXr,
As we're getting close to the end of the discussion period, we want to thank you once more for your time. We hope our comments so far have covered any past worries or questions, but if there's anything you'd like more explanation on, just tell us. If you've got any other questions, please don't hesitate to ask. We'll do our best to handle them before the deadline. | Summary: The paper provides sample complexity bounds on score function estimation when (1) the score function is estimated by using SGD to minimize a denoising score matching objective, (2) the probability distribution is induced by a structural causal model (SCM) with additive Gaussian noise, and (3) the score function is Lipschitz, and (4) for each mechanism $f_i$ in the SCM, the expected value of the second derivative of $f_i$ with respect to each parent of $X_i$ is lower-bounded by $C_m$ times $\sigma_i$, the variance of the exogenous noise of $X_i$.
They show that, under these conditions, the estimated score function converges at a parametric rate (Theorem 1). Using this bound, they give a sample complexity bound for the event that the SCORE algorithm for causal order search returns a correct topological order (Theorem 2). Finally, they also use the bound from Theorem 1 to provide a sample complexity result for the score-matching objective used in score-based generative modeling.
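For readers less familiar with this setting, the denoising score matching objective referenced above is standardly written as follows (generic notation; the paper's exact parameterization may differ):

```latex
% Denoising score matching: fit s_\theta to the score of the
% noising kernel q_\sigma(\tilde{x} \mid x), e.g. Gaussian corruption.
\[
  \mathcal{L}(\theta)
  \;=\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\,
  \mathbb{E}_{\tilde{x} \sim q_\sigma(\cdot \mid x)}
  \Bigl[\,
    \bigl\| s_\theta(\tilde{x})
      - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \bigr\|_2^2
  \,\Bigr].
\]
```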
Strengths: ### Significance
The paper appears to be a significant theoretical contribution addressing the sample complexity of score function estimation, which is an important topic for both causal structure learning and score-based generative modeling.
### Clarity
The paper is written in a fairly clear style. Relevant notations are appropriately defined and summarized in the Appendix. For each theorem, there are accompanying remarks which clearly discuss the implications of the theorem. The motivation and the context for the work are easy to understand.
Weaknesses: ### Experimental Results
This is clearly a theory paper, and contributes enough theory such that there is no need for an extended section on experiments. In fact, given the space constraints, extensive experiments would most likely hurt the paper. However, a small set of experiments - e.g. corroborating Theorem 1 by plotting the loss function versus the number of samples - would take the paper from a good contribution to a great one (8 instead of 7). In my experience, experimental validation of theoretical results on sample complexity can be fairly difficult to obtain, and thus papers like this one would be a lot more useful in practice if they ran some carefully-chosen experiments.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: ### Details
1. The theorems (eg. Theorem 1 and Theorem 3) refer to a DNN trained by SGD. However, this is missing a condition on the learning rate. What is the condition on the learning rate?
2. Presumably the usage of "SGD" in the theorems here implies that batch size = 1. Do the results extend to larger batch sizes? For clarity, I think it would be helpful to encapsulate the assumptions on the architecture and training into an Assumption environment.
3. What is $\tau$, the first argument to the covering number in line 203? Where is it defined?
### Suggestions
1. Since this paper sits at the intersection of score-based generative modeling (SGM) and causal structure learning (CSL), it is important to keep the background of both audiences in mind. In particular, coming from causal structure learning / statistics, I am surprised by Theorem 1, which seems to require that SGD finds a solution close to the global optimum. It looks like Theorem 1 uses some relatively recent results / techniques (e.g. Nguyen et al., 2021), which might be well-known in the SGM community but are not well-known in the CSL community. This paper provides a good opportunity to bring the advances of SGM to the CSL audience, and it would be valuable to provide more intuition about the techniques used here.
2. Please reference all appendices in the paper. Appendices B, C, F, and G are not referenced in the paper.
3. I think Lemma 2 would be more clear if you left out the node $i$ altogether. Since the associated term is zero anyways, it just gets in the way and serves no purpose in the left-hand side of the implication statement. It feels like the point that the variance term associated to $i$ is zero shouldn't be part of this theorem, but a separate observation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the feedback and suggestions from reviewer FzFv. Regarding the issues raised by the reviewer concerning the relationship between causal inference and the generative model aspects of the paper, as well as the lack of experimental validation, we have provided responses in the [general response](https://openreview.net/forum?id=uNnPWR66b8&noteId=VWjUWDv2Kl). We will also enhance the comprehensiveness and clarity of the main results in the next version of the paper, emphasizing the paper's focal points. We answer the reviewer's other questions below; the detailed issues will be resolved and incorporated in the next version of the paper.
**Q1: What is the condition on the learning rate and can we extend the result to a larger batch GD.**
A1: SGD training is referenced in all three of our theorems. However, Theorem 1 and Theorem 3 rest on generalization (sample complexity) bounds, which makes them independent of the specific optimization algorithm used; consequently, we did not impose conditions on the learning rate there. These results are broadly applicable and can be seamlessly extended to larger-batch GD.
Regarding Theorem 2, its foundation lies in the proof of SGD/GD convergence in deep networks, as elucidated in [1]. This proof requires certain conditions on the network width ($m\geq\text{poly}(n,L)$) and a sufficiently small learning rate ($\mathcal{O}(\frac{1}{\text{poly}(n,L)m \log^2 m})$). These convergence results also apply to batch GD, as demonstrated in similar convergence analyses [2]. Hence, Theorem 2 extends naturally to batch GD as well.
**Q2: The definition of $\tau$ in the covering number.**
A2: Sorry for the typo here: instead of $\tau$, it should be $\frac{1}{n}$. We have fixed this.
**Q3: Modifications for Lemma 2.**
A3: We presented Lemma 2 in this form to facilitate its utilization within the proof. But we concur with the reviewer's perspective that leaving out the reference to node $i$ in this lemma could enhance its clarity. As a result, we have restructured the phrasing of this lemma and subsequently refined the associated proof.
**References**
[1] A Convergence Theory for Deep Learning via Over-Parameterization. ICML 2019.
[2] Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful response. Their responses to **Q2** and **Q3** have satisfied my concerns on those points.
With regard to **Q1**: could you please describe how your response will be incorporated into the paper? E.g., it is my understanding that the mention of SGD in Theorem 1 and Theorem 3 can be removed, while for Theorem 2, the condition needs to be added? In particular, the condition could be formally defined in the appendix and the extension to batch GD could be added as a remark.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer FzFv,
We thank the reviewer for the positive reply. As the reviewer said, the mention of SGD in Theorem 1 and Theorem 3 could indeed be deleted; however, to maintain the unity of the paper, we will keep it and discuss this point in a remark. For Theorem 2, we will follow your suggestion and add a discussion in the appendix.
We are happy that our clarifications addressed most of your concerns and please feel free to let us know if you have any remaining concerns or further questions.
Best,
Authors | Summary: This study aims to investigate the sample complexity associated with score-matching and its applications in the field of causal discovery, providing valuable theoretical bounds. The authors present theoretical evidence that training a conventional deep ReLU neural network using stochastic gradient descent enables the accurate estimation of the score function. Additionally, they establish rigorous bounds on the error rate pertaining to the recovery of causal relationships, employing the score-matching-based methodology introduced by Rolland et al. [2022], under the assumption of an adequately precise estimation of the score function. Furthermore, an analysis is performed to determine the upper bound of score-matching estimation within the framework of score-based generative modeling, which not only bears relevance to causal discovery but also demonstrates independent significance within the wider domain of generative models.
Strengths: There are several main strengths in this paper. First, the authors extensively explored the relevant recent theories, including background research. Second, the error bounds are rigorously derived from well-known assumptions (1 and 3) and novel assumption (2) for causal discovery. Third, the use of this theory seems to have potential in multiple applications relating to causal discovery and SGM given that the architecture is general enough.
Weaknesses: There are some weaknesses in this paper. First, the background section is a bit too long compared to the main body (almost 5 pages, with related work added in Section 5), leaving only 2.5 pages of new material (excluding the conclusion). Although it is advantageous to explain the background extensively (there are multiple topics in this paper), the excessive length becomes a weakness.
The second weakness is that although the theory concerns deep ReLU neural networks, there are no empirical experiments to corroborate it. For example, one might show causal discovery errors and how the bounds in Theorem 2 are 'good' enough. If the bounds are too loose, their usefulness will be low. (Or one could show how robust the results are under violations of some of the assumptions.)
Given that there are some space that can be saved by removing (or moving) some of related work/preliminaries from main text, I recommend the authors to have some proof-of-concept experiments demonstrating the effectiveness of such theoretical bounds.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Not a question, but there are TeX issues with \begin{align} and similar environments: if there is an empty line between text and a mathematical formula, a vertical gap is introduced. See Eq. 1 (and the equations before and after it), Eqs. 4, 5, 6, 7 (and the one after 7), 8 (and the one after 8), and the displays in Assumption 2, Theorem 1, Theorem 2, and Theorem 3. Fixing this would effectively save a lot of space.
There are many remarks. But I guess they can be incorporated more smoothly just as a text with proper rephrasing (if needed).
98th line contains a typo with missing parentheses in the equation.
Theorem 3, Remark 1: can you elaborate in the main text on what makes your theorem applicable to high-dimensional data, versus Chen 2023a's results on low-dimensional data?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Assumptions can be restrictive but the authors are well-understanding such restrictions and others also made similar assumptions. For the assumption 2, it is unclear how one can assume C_m properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the comments and suggestions provided by reviewer RLDi. Regarding the plausibility of Assumption 2 and the lack of validation experiments in the paper, we have addressed these concerns in the [general response](https://openreview.net/forum?id=uNnPWR66b8&noteId=VWjUWDv2Kl). In the next version of the paper, we will condense the background part, emphasize the key points of the paper, and provide proof sketches for the main results to present them in a more concise and clearer way. We respond to the reviewer's other questions below; the detailed issues will be addressed and incorporated into the next version of the paper.
**Q1: Low-dimensional assumptions in previous work.**
A1: The previous work [1] rested on the assumption of low-dimensional data structures, employing this to decompose the score function and engineer specialized network architectures for the derivation of the upper bound. Our work takes a distinct route. We harness the inherent traits and conventional techniques of standard deep ReLU networks to directly deduce the upper error bound. We will add such a discussion in the next version of the paper.
**References**
[1] Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data. ICML 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The authors mentioned revising the paper based on reviewers comments (not just mine). Hence, I am positive about what the paper will look like. Yet my score will remain unchanged (weak accept) given my limited confidence level.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer RLDi,
Thanks for your insightful comments. Please let us know if you have any further questions.
Best,
Authors | Summary: This paper presents error bounds (sample complexity and convergence rates) for the problems of score based generative modelling and score-order based causal discovery. It is primarily a theory paper, and one of the first few works to present error bounds for a causal discovery algorithm that is not conditional independence based.
Strengths: (+) In score-function based causal discovery (which does not use conditional independence testing), there are no notable error bounds. So this work addresses the gap. Although it is tied to the order based method of Rolland et al, it is interesting as this might lead to more such work for other methods in causal discovery. Theoretical guarantees are important for causal discovery as real world validation is harder due to lack of ground truth.
(+) The assumptions and technicalities are fairly well laid out. Although the score requires Lipschitz assumptions, thereby limiting the class of mechanisms, it is still relevant as it still might cover fairly large class of real world settings.
(+) The theoretical analysis for score based generative modelling seems similar to what has been done for causal discovery, and is potentially interesting. However, I am unsure about its relevance wrt that field as I am not totally familiar with it.
Weaknesses: (-) My main concern with this work is that it might be a bit too broad in scope. It is definitely okay and also welcome to have a broader scope (causal discovery plus score based generative modelling), but I don't think the current writing/presentation justifies it appropriately. I think a better way to present the results would have been to present the main result a bit more generally and clearly, and explain that result in the context of both causal discovery and score based generative modelling. I would have been happy to see just the causal discovery part alone as it merits quite a bit of discussion (since it is the first work doing so).
(-) There are no experiments. Although it is primarily a theory paper, some simple empirical analysis of ReLU deep networks for the score function in causal discovery would have been very illustrative and relevant. It would have been good to see how the bounds behave when the assumptions are satisfied and how they degrade when the assumptions are not.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My main question is : Is there a common theoretical part (proof techniques and statements) that can be generalised to both settings? If so, I think it might be good to highlight it. If not, I feel the paper might benefit from studying these two problems separately.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The assumptions are fairly well laid out. A major limitation now is that there is no empirical analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the input and suggestions provided by reviewer Ndh6. Regarding the issues raised by the reviewer concerning the relationship between causal discovery and the generative model of the paper, as well as the lack of experimental validation, we have addressed these concerns in the [general response](https://openreview.net/forum?id=uNnPWR66b8&noteId=VWjUWDv2Kl). We will also present the main results in a more comprehensive and clearer way in the next version of the paper, highlighting the main points of the paper.
---
Rebuttal 2:
Title: Any remaining questions from reviewer Ndh6?
Comment: Dear reviewer Ndh6,
As we're getting close to the end of the discussion period, we want to thank you once more for your time. We hope our comments so far have covered any past worries or questions, but if there's anything you'd like more explanation on, just tell us. If you've got any other questions, please don't hesitate to ask. We'll do our best to handle them before the deadline.
---
Rebuttal Comment 2.1:
Comment: Thanks a lot for the clarifications, and I acknowledge your efforts to get experimental results. I am unsure whether similarly well-tailored experiments in the SGM setting would also be insightful, as I come more from a causal structure learning background. I still think the current theory and causal-structure experiments are already a positive contribution.
The major problem I see is that the proposed reorganization and presentation, which includes the current set of experiments, would involve significant and/or non-trivial changes to the overall paper (as another reviewer pointed out too). It is also somewhat unclear, at a concrete level, what the paper would look like after all the changes take place. As a result, I will keep my current score, under the conclusion that the paper might benefit from another round of review with all the changes appropriately incorporated.
Rebuttal: # General response:
We extend our gratitude to the reviewers for their valuable feedback. In response to recurring issues highlighted by multiple reviewers, we offer a consolidated response as follows:
**Q1: Relationship between two parts.**
A1: The theoretical foundation of this paper spans two distinct yet interconnected domains: causal discovery and score-based generative modeling. These two domains share intriguing interrelations while also exhibiting notable disparities.
These two domains are connected by a common theoretical foundation centered on the upper bound of score matching. Specifically, Theorems 1 and 3 study similar problems in different domains and share the same techniques drawn from statistical learning theory and deep learning theory.
However, they also harbor autonomous elements, with the findings in the causal discovery domain having a relatively broader and more consequential scope. The foundation of Theorem 2 rests upon Theorem 1, but its result pertains to entirely separate problems. It can be seen as an embodiment of applying the upper bound of score matching for causal discovery.
Consequently, we simplified the background part of the paper, relocating certain components to supplementary materials. Additionally, we enhance the explanations and provide succinct proof sketches for the theorem in the causal discovery part. This endeavor aims to present the main results in a manner universally comprehensible and illuminating.
Furthermore, if the reviewers think the title "Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling" would be better aligned with the paper's essence, we are willing to modify the title.
**Q2: Lack of experiments.**
A2: Following the reviewers' recommendations, we conducted a series of experiments to validate the theoretical findings presented in the paper. We took inspiration from the code provided in [1] and employed the structural Hamming distance (SHD) between the generated output and the actual causal graph to assess the outcomes. The ensuing experimental outcomes for SHD vary across causal model sizes ($d$), sample sizes ($n$), and $C_m$. The specific results are shown in the PDF.
Analyzing the experimental outcomes, we find a notable pattern: higher values of $C_m$, augmented sample sizes $n$, and reduced model size $d$ all contribute to the algorithm's performance. This alignment with the insights from Theorem 2 in our paper is clear.
In the subsequent paper revision, we will incorporate these experimental findings to further enrich the paper's comprehensiveness.
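For concreteness, here is a minimal sketch (ours, not the authors' code) of how the SHD metric used above can be computed between binary adjacency matrices, with a reversed edge counted as a single error per the common convention:

```python
import numpy as np

def shd(A_true, A_est):
    """Structural Hamming distance between DAG adjacency matrices:
    each added or deleted edge costs 1; a reversed edge costs 1
    (not 2), following the common convention."""
    diff = np.abs(A_true - A_est)
    reversed_edges = np.logical_and(diff, diff.T)  # both directions differ
    return int(diff.sum() - np.triu(reversed_edges).sum())

# Example: true graph 0 -> 1 -> 2; estimate reverses 0 -> 1 and adds 0 -> 2.
A_true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
A_est  = np.array([[0, 0, 1], [1, 0, 1], [0, 0, 0]])
print(shd(A_true, A_est))  # -> 2 (one reversal + one extra edge)
```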
**Q3: Discussion of the assumptions.**
A3: As the reviewers point out, our assumptions are reasonable, with Assumptions 1 and 3 being standard in deep learning theory. Regarding Assumption 2, we remark that this assumption is strongly linked with identifiability and that the algorithm does not make use of $C_m$.
This second assumption is particularly interesting because it relates the sample complexity bound with classical identifiability theory: if the mechanisms are linear, even with infinite data the graph is not identifiable. In this sense, this assumption cannot be violated.
An interesting avenue for future work would be the effect of local assumption violations i.e., what if only one mechanism is linear but others are not? We can add a discussion on this for future work, however, this is far beyond the scope of the present work. In fact, most identifiability results in the infinite data regime still require causal assumptions to hold globally.
**References**
[1] Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models. ICML 2022.
Pdf: /pdf/156220bf607612e7d13a9c118607fee69f98e74c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper provides detailed theoretical results for causal discovery and score-based generative modeling. For causal discovery, it first provides a sample complexity analysis for the estimation of the score function for nonlinear additive Gaussian noise models, then it proves that the error rate of a score-matching-based method, SCORE, converges linearly w.r.t the number of training data. For score-based generative modeling, it presents sample complexity bounds for the estimation of the score function in the ScoreSDE.
Strengths: 1. This paper provides sound and detailed theoretical results for score matching in the context of both causal discovery and score-based generative modeling. In both cases, the theoretical analyses hold with mild assumptions compared to previous work.
2. This paper is well organized overall. It provides a detailed discussion of preliminaries in Section 2 and presents theoretical results for causal discovery and score-based generative modeling in Section 3 and Section 4, respectively. Although there are many symbols, I think this paper is relatively easy to follow.
3. This paper points out limitations of the current theoretical results clearly in Section 6.
Weaknesses: 1. As demonstrated in Section 6 of this paper, the theoretical results are still not general enough due to some relatively strong assumptions.
2. I suggest the authors change the title "Error Bounds for Score Matching Causal Discovery" because almost half of this paper is about score matching in score-based generative modeling, which has little to do with causal discovery.
3. Although I have no doubt about the soundness of the provided theoretical results, the contribution of this paper is somewhat limited. This paper does not design a more advanced algorithm; it only provides sample complexity bounds for existing algorithms under the PAC framework. I'm not sure whether it should be regarded as a supplement to previous work or as independent research. However, considering that ScoreSDE is widely used in its field, I still recommend "weak accept" for this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the comments and suggestions from reviewer wqRq. Regarding the questions raised by the reviewer about the relationship between causal discovery and the generative model of the paper, as well as concerns about strong assumptions, we have provided responses in the [general response](https://openreview.net/forum?id=uNnPWR66b8&noteId=VWjUWDv2Kl). Below is the response addressing your additional questions:
**Q1: This paper does not design a more advanced algorithm.**
A1: In this paper, we introduce the inaugural sample complexity bound for causal discovery in non-linear Gaussian models (to the best of our knowledge). This is an example of theory following practice, with the SCORE algorithm only having identifiability guarantees in the infinite data regime, but being backed up by empirical results with finite data sets. While we anticipate that this paper will establish a path toward statistically efficient causal discovery algorithms, we maintain that formulating a new algorithm falls beyond the scope of our present endeavor.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their work. I think the title "Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling" is better than the original one. Besides, although I appreciate the theoretical contributions of this work, IMHO the overall contribution may be somewhat insufficient, because it only derives sample complexity bounds under the widely-used PAC framework. After the rebuttal, I have decided to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer wqRq,
Thanks for your insightful comments. Please let us know if you have any further questions.
Best,
Authors | null | null | null | null | null | null |
Adaptive Linear Estimating Equations | Accept (poster) | Summary: The authors propose a general method for constructing debiased estimators, called the Adaptive Linear Estimating Equations (ALEE) estimator, which achieves asymptotic normality even under sequential data collection.
To obtain valid statistical inference, the online debiasing concept is used. The online debiasing procedure guarantees that the bias decreases asymptotically to zero, but its convergence is slow. The ALEE estimator resolves this slowly decreasing bias problem.
In this paper, the stable weights of ALEE are proved through equations for three cases: multi-arm bandits, autoregressive time series, and contextual bandits. A comparison of the ALEE method confirms that it performs better than the previous method.
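The bias problem that motivates this line of work can be seen in a few lines. Below is a minimal sketch (ours, not the paper's ALEE estimator or code), assuming a two-armed eps-greedy bandit where both arms have true mean 0; the per-arm sample mean comes out negatively biased, which is exactly the bias that debiasing/weighting schemes target. All parameter values are made up for illustration.

```python
# Toy simulation (not from the paper): under eps-greedy adaptive allocation,
# the sample mean of an arm is negatively biased even though every reward
# has true mean 0. Parameter values are illustrative assumptions.
import numpy as np

def arm0_sample_mean(rng, T=50, eps=0.1):
    """One eps-greedy episode with two arms whose true means are both 0."""
    rewards = [[rng.normal()], [rng.normal()]]  # one forced pull per arm
    for _ in range(T - 2):
        if rng.random() < eps:
            arm = int(rng.integers(2))  # explore uniformly
        else:
            # exploit the arm that currently looks best
            arm = int(np.argmax([np.mean(r) for r in rewards]))
        rewards[arm].append(rng.normal())
    return float(np.mean(rewards[0]))

rng = np.random.default_rng(0)
bias = float(np.mean([arm0_sample_mean(rng) for _ in range(4000)]))
# bias comes out below 0: adaptive sampling makes the naive mean pessimistic
```

This negative bias of sample means under adaptive allocation is a known phenomenon; the review above summarizes how ALEE-style weighting restores asymptotic normality despite it.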
Strengths: - ALEE provides point and interval estimation based on the central limit theorem, which enables stable estimation. It also proves the stability condition of ALEE through formulas for stable weights in three cases (multi-arm bandits, autoregressive time series, and contextual bandits), which increases the efficiency of ALEE. Based on this stability and efficiency, we expect that it can be applied to various models.
- ALEE's training algorithm has fast convergence. The authors provide theoretical performance guarantees and demonstrate ALEE's effectiveness on distributed time series forecasting problems with several examples.
Weaknesses: - In the numerical experiments, the results are only shown using parameter values of 0.3 and 1 for the two-armed bandit setting and the contextual bandit setting. It would be better if various values were used for the parameters.
- Lack of explanation for Table 1.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Table 1: OLEE typo (ALEE), and Table 1 seems to lack explanation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewer for dedicating the time to reviewing our paper and offering us valuable feedback. We truly appreciate your comments and suggestions, and believe they can make our work better.
In the following, we are going to take this opportunity to address each of the points raised in your review, in the order in which they were presented.
**Weakness:**
1. We performed additional simulations across multiple dimensions and parameter values, incorporating different distributions for noise variables such as mean-centered Poisson, Gamma, and Bernoulli. The outcomes obtained under these settings align with the findings already presented in the paper. We intend to incorporate these simulations into the final version of the paper, as we believe that these additional numerical validations will further strengthen the appeal of the paper. We are grateful to the reviewer for offering this valuable suggestion.
2. Thank you for bringing this up. We are going to add more explanations of Table 1 in the final version.
**Questions:**
Thank you for catching the typos. We will correct it in the final version. | Summary: This paper considers the problem of least squares when the data is collected sequentially. It proposes a form of weighted least squares where the weights are designed to lead to estimates that are asymptotically normal and nearly optimal variance. The appropriate weights are derived for the multi-arm bandit, autoregressive, and context bandit settings. Experimental results on some toy datasets confirm the theory.
Strengths: 1. The proposed estimator is simple and nearly efficient.
2. Therefore, I think it would be useful for practitioners.
3. The problem is well-motivated.
I am not familiar with the literature on this problem, so I cannot comment on originality.
Weaknesses: 1. The presentation is somewhat confusing at times. The matrix $A$ from Section 2 does not appear in Section 3. The general construction strategy in Section 3.1 does not seem to be applied in Section 3.3 and it's not clear why.
2. The examples used in the experiments are all very simple. It would strengthen the work to, for example, vary the number of arms/dimension.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper is largely theoretical, so I think it's fine that the authors don't include a broader impacts section. However, I think the authors should include something about the limitations of the current work (see previous sections).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewer for dedicating the time to reviewing our paper and offering us valuable feedback. We truly appreciate your comments and suggestions, and believe they can make our work better.
In the following, we are going to take this opportunity to address each of the points raised in your review, in the order in which they were presented.
**Weakness:**
We apologize for any confusion caused by our presentation, and we would like to clarify certain aspects of the paper to make it clearer.
1. A benefit of using $A_w$ is that it allows for a better evaluation of the efficiency of the ALEE estimator, and it also serves as motivation for constructing weights $w_t$ with desirable properties (see equations (10) and (11)). We would like to highlight that the matrices $A_w$ and $W^\top X$ yield the same variance asymptotically, meaning that $A_w^\top A_w = X^\top W W^\top X + \text{ lower order term}$, and consequently, they can be used interchangeably; see the discussion near equations (8) and (9) where we already discuss the utility of using the matrix $A_w$. In section 3 onwards, we use the matrix $W^\top X$ because it is easier to understand. We sincerely appreciate the reviewer for bringing this to our attention, and we will add additional clarification to the paper to highlight the utility of the matrix $A_w$.
2. We thank the reviewer for this careful observation. In Section 3.1, we consider the multi-armed bandit problem, where the problem is "effectively" one dimensional due to the orthogonality of the columns of the data matrix $X$. In dimension $d = 1$, we can apply nice manipulations to the variance term, which allows us to simplify the asymptotic variance, and the resulting corollaries are much more informative and interpretable. In Section 3.3, we tackle the general contextual bandit problem, where the columns of the data matrix $X$ are no longer orthogonal, and we had to develop ideas that apply in dimension $d > 1$. This is why the constructions in Section 3.1 and Section 3.3 --- although they have the same goal --- are very different.
3. We conducted some additional simulations with different noise variables (mean-centered Poisson, Gamma, and Bernoulli noise variables) and for various dimensions and parameter values. The results obtained using these settings align with what we have presented in the paper. In the final version of the paper, we plan to include these simulations. We believe that these additional numerical validations will enhance the appeal of the paper, and we sincerely appreciate the reviewer for providing this valuable suggestion.
**Limitations:** Thank you for your suggestions. We apologize for not including the limitation of the work and will add it in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Thanks to the authors for the rebuttal
Comment: After reading it and the other reviews, I have decided to keep the same score. | Summary: This paper proposes an estimator (ALEE) for adaptively collected data generated from adaptive linear models, describes its construction such that asymptotic normality holds for practically relevant examples, and demonstrates its desirable properties in numerical experiments.
Strengths: I enjoyed reading this paper; it reads well.
- The problem of inference for adaptive data is practically relevant
- Theoretical guarantees assume somewhat weaker assumptions than previous works
- The proposed method provides an improvement over other approaches, at least in the numerical experiments shown
Weaknesses: I don't think this submission lacks anything for a NeurIPS paper
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - One novelty of the paper is that Eq (3) is weaker than sub-Gaussian, yet all experiments are for Gaussian noise. I would be more convinced if the numerical simulations showed the same desirable properties for non-Gaussian noise.
- Perhaps refer to Fig 1 from the main text?
Some minor typos I find:
- Refs 14&15 are identical
- l51 - _debaising_ → debiasing
- l98 - _Slutsy's theorem_ → Slutsky's theorem
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewer for dedicating the time to reviewing our paper and offering us valuable feedback. We truly appreciate your comments and suggestions, and believe they can make our work better.
In the following, we are going to take this opportunity to address each of the points raised in your review, in the order in which they were presented.
**Strengths and weaknesses:** We are glad that you enjoyed reading the paper, and thank you so much for your kind words.
**Questions:**
Thank you for your suggestion. We conducted simulations with mean-centered Poisson, Gamma, and Bernoulli noise variables. The results obtained using these noise variables align with what we have presented in the paper (please see Figure 1 from the paper). In the final version of the paper, we plan to include these simulations. We believe that these additional numerical validations will enhance the appeal of the paper, and we sincerely appreciate the reviewer for providing this valuable suggestion.
Lastly, thank you for catching the typos, we will correct them in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I am satisfied with the response. I am keeping my rating as is. | Summary: This paper introduces a general method for constructing debiased estimator within the context of sequential data collection. The proposed methodology is applied explicitly to multi-arm bandits, autoregressive time series, and contextual bandits. Experiments are conducted in these three domains to verify the applicability and effectiveness.
Strengths: 1. The proposed method is able to achieve asymptotic normality without knowing the data collection algorithm and can obtain a faster convergence rate of the bias term.
2. Pointwise and interval estimates can therefore be generated.
Weaknesses: The experiment section only contains synthetic results. Considering the broad application of sequential data collection, it would greatly enhance the study if the authors could validate their framework using real-world datasets. This could provide more practical insight into the effectiveness of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Figures 2 and 3, what do “lower tail coverage” and “upper tail coverage” represent?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not mentioned, and the potential negative societal impact is not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewer for dedicating the time to reviewing our paper and offering us valuable feedback. We truly appreciate your comments and suggestions, and believe they can make our work better.
In the following, we are going to take this opportunity to address each of the points raised in your review, in the order in which they were presented.
**Weaknesses:** Thank you for your suggestion, and we agree with the reviewer that incorporating additional numerical/real data simulations will enhance the appeal of the paper. In the current version of the paper, we have primarily focused on simulated experiments. The use of simulated experiments allows us to establish a known ground truth, making it easier to compare the utility of different methods.
In the revised version of the paper, we plan to supplement the existing simulations with new ones involving different noise variables, such as mean-centered Poisson and Gamma noise variables. These additional results will align with the findings presented in the paper (please refer to Figure 1). Furthermore, we intend to include a real data example and assess the performance of various methods on it. While real data examples lack a known ground truth, comparing the confidence interval widths of different methods will be beneficial in evaluating their effectiveness. We extend our gratitude to the reviewer once again for this excellent suggestion.
**Questions:** Thank you for your question. Lower tail coverage represents whether the lower one-sided confidence interval, which has the form $(-\infty, a)$, covers the true parameter. Upper tail coverage corresponds to the upper one-sided confidence interval, which has the form $(a, \infty)$.
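To make the tail-coverage definitions concrete, here is a small sketch (ours, not from the paper) for a normal estimator with known standard error; both one-sided coverages should sit near the nominal 95% level. The quantile 1.645 and all parameter values are illustrative assumptions.

```python
# Illustrative one-sided ("tail") coverage for a normal estimator with
# known standard error; parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
theta, se, z = 0.0, 1.0, 1.645            # z: one-sided 95% normal quantile
theta_hat = rng.normal(theta, se, 20000)  # 20k simulated point estimates

# Lower tail coverage: does (-inf, theta_hat + z*se) cover theta?
lower_cov = float(np.mean(theta < theta_hat + z * se))
# Upper tail coverage: does (theta_hat - z*se, inf) cover theta?
upper_cov = float(np.mean(theta > theta_hat - z * se))
# both are close to the nominal 0.95
```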
**Limitations:** We apologize for not including the limitation part and not addressing the potential negative societal impact. We are going to add them in the final version. Thank you for your suggestion.
---
Rebuttal Comment 1.1:
Title: Thanks for the Authors' response
Comment: The authors' response is promising to me. I will keep my score as it is. | Rebuttal 1:
Rebuttal: Due to the page limit, we only include part of the updated simulations. Please see the attached PDF file for simulations regarding different distributions for the noise variables and varying dimension $d$ for the contextual bandit example.
We plan to add a limitations paragraph to the discussion section.
**Limitations:** In our paper, we propose ALEE estimator that achieves asymptotic normality and discuss its optimality. Our result is based on large samples and it would be interesting to investigate how to improve the efficiency of ALEE framework under finite samples.
Pdf: /pdf/724a4ba80032061ba6430f59a2db53bc99bd94aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Graph Pencil Method: Mapping Subgraph Densities to Stochastic Block Models | Accept (poster) | Summary: This paper adapts the matrix pencil method to estimate parameters in models with prescribed subgraph counts, and to simulate from them. A prime example are edge counts and the stochastic blockmodel.
Strengths: The idea to use the matrix pencil method for parameter estimation is interesting and the paper addresses an important problem.
Weaknesses: My main concern is that I do not think that (6) is correct. Focusing on $d^3$, we have
$\langle d^3 \rangle = \sum_k \pi_k (\sum_j \pi_j B_{jk})^3$.
Expanding the right-hand side gives
$\sum_k \pi_k \sum_j \pi_j B_{jk} \sum_r B_{rk} \sum_s B_{sk}$.
However,
$\mu(\text{3-star}) = \sum_k \pi_k \sum_{j, r, s \text{ distinct}} B_{jk} B_{rk} B_{sk}$,
so the two expressions do not coincide. As equality (6) is the foundation of the proposed approach, I am not convinced of the method.
More generally, the paper is not well presented. The abstract mentions exponential random graph models, but they do not appear in the main paper at all. A particular version of a stochastic blockmodel is introduced which gives exchangeable edge indicators, and then it is claimed that this can be identified with the limit as the number of vertices tends to infinity. This is not clear at all; in which sense is the limit taken? Two SBMs, one on N vertices and the other on N+1 vertices, need not be related at all. Is there a coupling construction which maintains exchangeability?
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: The homomorphism density does not seem to take care of automorphisms of the counts; is that not an issue?
Is the underlying graph supposed to be finite or infinite? All matrices appear to be finite; why is the notion of a limit important? Are there any theoretical guarantees regarding consistency of the estimation?
There is a lot of literature on graph with prescribed degree distributions, see for example
Britton, T., Deijfen, M., & Martin-Löf, A. (2006). Generating simple random graphs with prescribed degree distribution. Journal of statistical physics, 124, 1377-1397
and
Van Koevering, K., Benson, A. and Kleinberg, J., 2021, April. Random graphs with prescribed k-core sequences: A new null model for network analysis. In Proceedings of the Web Conference 2021 (pp. 367-378).
In Figure 2 in the supplementary material, how many different numbers of nodes were used? Was the step size 1?
The graphs for Figure 2 are very dense; how does the proposed method work for sparser graphs?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: There is no mention of any limitation of the method; there is no mention of model mis-specification, and there is no discussion of the variability in the estimates.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. The discussions they spurred have been useful for clarifying the paper, as there were some key misunderstandings.
Weaknesses:
The main concern, about Equation (6), is a misunderstanding by the reviewer, but suggests an excellent way that we can be more clear with our notation.
When estimating the subgraph densities from a finite graph, the unbiased estimators are the *injective* homomorphism densities, which require one to sum over distinct nodes.
However, when computing the subgraph densities from a graphon or Stochastic Block Model, the distinction between injective homomorphism counts (which require that the sums be over distinct labels) and not-necessarily-injective homomorphism counts disappears. (Essentially, this is because as $n\rightarrow\infty$, the fraction of maps that are non-injective goes to zero.)
So equation (6) is correct; we will be clearer in our notation about whether a quantity refers to the graphon/SBM or to a finite graph sample.
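As an illustrative numerical check (ours, not the paper's code) of the point that injective and not-necessarily-injective densities agree as $n\rightarrow\infty$: for an Erdős–Rényi sample (the $K=1$ SBM with edge probability $p$), the injective 3-star density computed from falling factorials of the degrees concentrates near its graphon value $p^3$. The values of $n$ and $p$ are assumptions for the demo.

```python
# Sanity check: the injective 3-star homomorphism density of a finite
# Erdos-Renyi sample is close to p^3, its graphon/SBM value.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 0.3
upper = np.triu(rng.random((n, n)) < p, 1)  # independent edges above diagonal
A = upper | upper.T                         # symmetric adjacency, no loops
deg = A.sum(axis=1).astype(float)

# Average falling factorial of degrees, normalized by the number of
# ordered triples of distinct other nodes.
est = float(np.mean(deg * (deg - 1) * (deg - 2))
            / ((n - 1) * (n - 2) * (n - 3)))
# est lands within a few percent of p**3 = 0.027
```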
Another concern, which was shared by all reviewers, was about the mention of Exponential Random Graph Models in the beginning. This makes a lot of sense. We plan to move this to the ending discussion, where the connection to Stochastic Block Models is made (there is some convincing evidence that, for many common choices of subgraph constraints, maximizing entropy leads to an SBM with some finite number of blocks).
In terms of identifying the limit of $n\rightarrow\infty$ with a graphon, we can elaborate on this a bit more (also see the papers by Borgs, Chayes, et al.). Consider a sequence of exchangeable distributions over graphs with $n$ exchangeable nodes, for $n=1,2,\ldots$. If each distribution on $n+1$ nodes is related to the distribution on $n$ nodes by randomly deleting a node, then the sequence converges to a graphon (a symmetric measurable function on the unit square) in terms of the cut norm.
To make this more clear in the text, we will be explicit that an SBM is just a parameterization of a particular subset of such limiting graphons, and that it does not necessarily have an inherent number of nodes $n$ that must be sampled from it --- it describes all numbers of nodes at once, and therefore the limit as well.
Questions:
As we are using homomorphism densities, there is no need to worry about automorphism counts in the numerator, so long as you also do not worry about them in the denominator (they cancel out either way).
The "underlying graph" is the underlying graphon/SBM, from which we are sampling graphs of finite size. As we are estimating the parameters of this model, the only notion of limit is the intuition that "a distribution is the limit of infinite sample size".
The question about sparsity is very good, both theoretically and practically. We will add another set of experiments in the sparse regime, where we keep the expected degree the same while increasing the number of nodes.
---
Rebuttal Comment 1.1:
Title: Equation (6)
Comment: Thank you for your explanations. I am still not satisfied with (6). Formula (1) details $\mu$ for an SBM on $N$ nodes. Here $N$ is finite from what I understand; otherwise (1) may be an infinite sum as block sizes will be infinite. So that does not seem to be intended. Consider the simple case of $K=1$, only one block, that is, an ER graph. Then, with edge probability $p = B(1,1)$, $\pi_1 = 1$, and $d_1 = p$,
$\mu(\text{3-star}) = 4\binom{n}{4}p^4$. In contrast, $\langle d^3 \rangle = p^3$. Equation (6) states that these two terms are equal. What am I missing?
---
Reply to Comment 1.1.1:
Title: Clearing up Equation (6)
Comment: Thank you for your prompt reply; it is very nice to have the time to hash this out.
Equation (1) gives the expression for the subgraph density of a subgraph $g$ in an SBM specified by $\pi_k$ and $B_{kk'}$ (regardless of the number of nodes $N$ that one decides to sample from the SBM). It should never be infinite; $\pi$ is a vector whose entry $\pi_k$ is the probability that a random node is in block $k\in [K]$, and the sum is over all $K^{|V(g)|}$ ways that the $|V(g)|$ nodes in the subgraph could be assigned to the $K$ communities in the model.
So in the case of an ER graph: $K=1$, $\pi_k=(\pi_1)=(1)$, $B_{kk'}=((b_{11}))=((p))$,
then $d_k = (d_1)$, where $d_1 = \sum_k \pi_k b_{kk'} = \pi_1 b_{11} = p$.
And so $\mu(\text{edge}) = \sum_k \pi_k d_k = \pi_1 d_1 = 1 * p = p$,
$\mu(\text{cherry}) = \sum_k \pi_k d_k^2 = \pi_1 d_1^2 = 1 * p^2 = p^2$, and
$\mu(\text{claw}) = \sum_k \pi_k d_k^3 = \pi_1 d_1^3 = 1 * p^3 = p^3$.
These are indeed the homomorphism densities of these subgraphs in an ER graph model on any number of nodes $N$.
Does this clear it up? If not, please let me know, it is very helpful in writing the Appendix on Gluing Algebra.
Very best,
The Authors | Summary: The paper studies how to perform inference for a stochastic block model (SBM) and sample from it, given its corresponding subgraph densities. The authors first estimate the normalized degrees and the relative sizes of the latent blocks of the SBM, and then infer its connection properties using a generalized Prony's method.
Strengths: - It is a nontrivial problem to consider the inverse map from subgraph densities to a stochastic block model.
- Overall, the paper is not hard to follow. The proposed method and its properties may have some impact on the community. As the authors claim, the proposed method can be well generalized to directed, weighted graphs.
Weaknesses: Typos:
L63: entries, give -> entry, gives
L79: explain(ing)
L117: Recall(ing)
L141: allowed -> allows
Suggestions:
Eq. (10) looks a bit lengthy and could be moved to the Supplement.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How would you generalize your method to capture hypergraphs, graphs with multi-edges between two vertices, or signed graphs?
Can you discuss more how the method handles large sparse graphs and graphs with structured sparsity?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I do not find notable limitations of the method. Nonetheless, I think the supplement could be augmented to show whether the method can be easily generalized to directed, weighted graphs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your time and comments.
Strengths:
It is nice that you found the paper easy to follow! We really try to make it easy on the reader, and with the restructuring/moving parts to dedicated Appendices, we think it will be even more accessible.
Weaknesses:
The typos are much appreciated, and the restructuring makes a lot of sense.
Questions:
As there are several parts of the main text that are being moved to their own dedicated Appendices, there should be some space for us to describe what generalizations to, e.g., directed or weighted graphs would look like.
With respect to multi-edges, there is a mathematically correct way to count weighted subgraph densities in a weighted network (see Lovasz), which, when applied to the method described in this paper, would give the "expected number of edges" between a given pair of nodes (instead of the "probability of one edge").
For directed edges, there are actually more subgraphs one can use (eg, we can reference both the in- and out- degrees of the nodes).
With respect to sparsity, we will add another set of experiments in the sparse regime, where we keep the expected degree the same while increasing the number of nodes.
---
Rebuttal 2:
Comment: Hello Reviewer,
Thank you again for your thoughtful and detailed review. Please let us know if we have addressed all your concerns.
In particular, we are including in the main text a section discussing how the method generalizes to directed graphs (with reference to an Appendix for the details). Is there anything you would like us to elaborate on before the deadline?
Very best,
The Authors | Summary: Given the subgraph densities of the stochastic block model (SBM), the authors consider the problem of obtaining SBM's parameters (node and edge probabilities). The authors observe a connection between this estimation problem and estimating parameters of an exponential signal. So, they cleverly teleported the classical Prony's method for obtaining signal parameters to obtaining the SBM's parameters. This observation by the authors is the main noteworthy contribution of the paper. Finally, the authors provide analytical formulae for computing the SBM parameters via eigendecompositions of matrices constructed from the subgraph densities of the SBM.
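The recovery step summarized above can be sketched in a few lines (our illustration; the block parameters $\pi$ and $d$ are made up). Star densities $\mu_m = \sum_k \pi_k d_k^m$ form a Prony-type moment sequence, so the $d_k$ appear as eigenvalues of a Hankel pencil built from the densities, and the $\pi_k$ then follow from a Vandermonde solve:

```python
# Matrix-pencil (Prony-type) recovery of SBM degree parameters from star
# densities. Illustrative sketch with made-up ground truth (pi, d); it
# mirrors the idea of eigendecomposing matrices of subgraph densities.
import numpy as np

pi = np.array([0.3, 0.7])  # block proportions (assumed ground truth)
d = np.array([0.8, 0.2])   # normalized degree of each block (assumed)

# Star homomorphism densities mu_m = <d^m> (m = 1 edge, 2 cherry, 3 3-star).
mu = np.array([np.sum(pi * d**m) for m in range(4)])

# Hankel pencil: the eigenvalues of H0^{-1} H1 are exactly the d_k.
H0 = np.array([[mu[0], mu[1]], [mu[1], mu[2]]])
H1 = np.array([[mu[1], mu[2]], [mu[2], mu[3]]])
d_hat = np.sort(np.linalg.eigvals(np.linalg.solve(H0, H1)).real)

# Block proportions from the Vandermonde system mu_m = sum_k pi_k d_k^m.
V = np.vander(d_hat, 2, increasing=True).T  # rows: d^0 and d^1
pi_hat = np.linalg.solve(V, mu[:2])
# d_hat recovers [0.2, 0.8]; pi_hat recovers [0.7, 0.3] in matching order
```

The two-block case above generalizes: with $K$ blocks, one needs the first $2K$ star densities and $K \times K$ Hankel matrices.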
Strengths: The main strength of the paper is to develop a simple and elegant solution for obtaining the parameters of SBM using classical Prony's method. The glue algebra and its connection to extracting edge probabilities seem interesting and intriguing.
Weaknesses: There are three main weaknesses in the paper:
1. It expects a lot of background knowledge from the reader, ranging from the SBM and its subgraph densities to the glue algebra; e.g., Equations (1) and (13) appear out of the blue. How are the densities related to the SBM?
2. Some of the technical details appear to be heuristic rather than rigorous proof; e.g., the use of glyphs in matrices and the glue algebra explanation in Sections 3.3 and 3.4 appear more like a heuristic than a rigorous proof. Why are the identities in equations (13)-(18) true?
3. The title, the narrative in the abstract and introduction, and the actual problem solved are disconnected. Moreover, the current story of exponential random graph models (which is not even discussed); titbits on statistical significance (lines 14-21); and concluding remarks on summary statistics are very broad and unrelated. As an example, the last sentence in the conclusion, "...stepping-stones over two centuries old.", is meaningless for a reader who does not have a background in Prony's analysis or its historical context.
The paper develops a procedure for obtaining parameters of the SBM model using subgraph densities. I suggest the authors rewrite the introduction/abstract/conclusion directly stating this message.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The authors claim to have generalized Prony's method for their network problem. This generalization needs to be clarified and made concrete. The "quoted" generalization in Sections 3.3 and 3.4 seems like using Prony's method for extracting edges. Please justify.
2. It is a well-known fact in probability theory that the moments generally do not specify the distribution completely. In that sense, how can the subgraph densities (analogous to moments) determine the SBM parameters?
3. At a high level, is unlabelling vertices (in Eq (16)) similar to the law of total probability? That is, first consider conditional probabilities (labeled homomorphism densities) and then add them up to get the final probability (unlabelled densities).
4. Eq (34) is a random quantity obtained from a realization drawn from SBM. The authors didn't provide any statistical guarantees on this quantity. For example, how accurate is this quantity compared to the true sub-graph density?
5. I did not completely understand the details in Section 3.4. In particular, the glue algebra seems hard to understand. Could the authors provide a layman's explanation for extracting the edge expectations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have not discussed the limitations of their current approach. However, I think the proposed method is limited to only SBMs. I suggest the authors comment on generalizing their method to exponential random graph models (if there is any).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the thoughtful review.
Summary:
Your summary was super accurate and succinct. And we very much appreciate the "cleverly teleported". Your suggestions are really helping the readability of our paper, primarily with continuity of the narrative and the inclusion of a pedagogical "layperson" Appendix on the gluing algebra.
Strengths:
Thank you! We also find the gluing algebra "interesting and intriguing", ever since Lovasz used them to prove inequalities between subgraph densities. And your suggestions are super helpful for making the paper more accessible.
Weaknesses:
1) Equations (1) and (13) are the expressions for the expected injective homomorphism densities of (un/labelled) subgraphs in a graph that was sampled from that SBM. We will include a citation to Lovasz, but there is also an intuitive building-up of definitions that describes where Equations (1) and (13) come from, and how they dovetail nicely with the gluing algebra.
2) Related to the previous point, I'm making an Appendix on Gluing Algebra that is written pedagogically with examples, but makes sure to carefully define the gluing algebra and how to interpret the rooted and unrooted subgraphs.
It first defines the subgraph densities $\mu_g$ and how they are computed by Equation (1). Then, to introduce (singly) labeled subgraphs, it shows how the normalized degrees $d_k$ are reflected in the notation as the singly-labelled edge.
Plugging these definitions into Equations (13) -- (18), it is relatively straightforward linear algebra to see that using them in a matrix this way works (as in Equations (7) -- (11)).
3) This makes a lot of sense, we will certainly clean up the narrative.
One of the original motivations for this work was related to Exponential Random Graph Models, but it no longer really belongs in the abstract/motivation. It may belong better in the discussion, where we will clarify the connection with the SBMs inferred here (ie, they are often the distribution that is desired when using common choices for ERGMs).
I rather like the phrase about ``stepping-stones'' at the end, we will add to the Motivation/Background a short paragraph on the historical context of Prony's method and its varied applications.
Questions:
1) Sure, maybe "generalization" is not the best word. We really liked your word: to "teleport" the solution. Would you mind if we used something along those lines?
2) For exchangeable distributions over graphs, a bit more can be said about the structure of the mapping from subgraph densities/moments to distributions with those moments. If two graphons have the same subgraph densities, then they are related to each other by a measure-preserving transformation of the nodes. Moreover, we consider stochastic block models here, and these are indeed determined by a finite number of subgraph densities. This fact can be found e.g. in Lovasz’ “Large networks and graph limits”, where it is phrased as all graphons that are step functions (which includes SBMs) being ‘finitely forcible’, see Lovasz’ Corollary 16.47.
3) Precisely, and thank you, that is an excellent way to phrase it. Hopefully, this will be cleared up in the Appendix on Gluing Algebra;
after showing how to understand labelled nodes as removing the sum over that node, the unlabelling of vertices will naturally be seen as taking the weighted sum of conditional probabilities (ie, multiplying by $\pi_k$ and summing over $k$).
I promise this Appendix will have many examples for anyone who wants to check their understanding.
4) Good point. There are quite a few works on the statistics of subgraph densities; we will mention the salient ones for dense and sparse graphs. More generally though, the section on quickly counting subgraphs, Equations (28) -- (34), addresses the practical issue of quickly counting, in a real network, the subgraphs required for this method. It might be more appropriate for it to be moved into another Appendix on Counting Graphs.
5) Yes, absolutely, this is an excellent idea. The Appendix on Gluing Algebra is coming along nicely, and we believe it will help a lot.
Limitations:
Indeed, there needs to be a section added about the limitations. For example, while SBMs can approximate any graphon to arbitrary accuracy, there will always be some amount of model misspecification unless the distribution is exactly an SBM. As for the connection with exponential random graph models, it indeed makes more sense to include them here at the end --- there is some convincing evidence that, for many choices of subgraph constraints, maximizing entropy leads to an SBM with some finite number of blocks. Also important to note would be the difficulty in handling SBMs with entries that are equal or close to 0 and 1.
---
Rebuttal Comment 1.1:
Title: Increase of the score in the hope that the authors will deliver their promise
Comment: Thanks for taking the time to address my questions. The title summarizes what my intention is. As long as the readability of the paper gets improved, the authors can use the phrases or explanations I provided in the review. That being said, I am not yet satisfied with the answer to the question on the statistics of the random quantity in Eq (34). The authors might think more on this to quantify the uncertainty (in terms of some suitable consistency measures--asymptotic or non-asymptotic) in dealing with a sample sub-graph density. Good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you very much. Your comments on improving the flow and readability have already been incredibly helpful, and we will make sure to see them all through.
In particular, for Equation (34), we are able to cite previous work (e.g. [1] and [2]) on characterizing the distribution of subgraph densities in networks sampled from a graphon/SBM. These previous results on characterizing the distribution are directly applicable and will help to provide some notion of how large the sampled network needs to be in order to apply our method for inferring the SBM.
Thanks again, and very best,
The authors
References:
[1] Bickel, P. J., Chen, A. & Levina, E. (2011). The method of moments and degree distributions for network models. *The Annals of Statistics*. (39) 2280–2301.
[2] Zhang, Y. & Xia, D. (2022). Edgeworth expansions for network moments. *The Annals of Statistics*. (50) 726–753.
[3] Kaur, G. & Röllin, A. (2021). Higher-order fluctuations in dense random graph models. *Electronic Journal of Probability*. (26) 1–26.
Very best,
The Authors
---
Rebuttal 2:
Comment: Hello Reviewer,
Thank you again for your thoughtful and detailed review. Please let us know if we have addressed all your concerns. They have been very useful for streamlining the paper's narrative and writing our Appendix on Gluing Algebras.
Is there anything more you would like us to elaborate on before the deadline? We are happy to provide further details.
Very best, The Authors | null | null | Rebuttal 1:
Rebuttal: Thank you so much to all the reviewers for taking the time to read our paper. We want it to be easy and fun to read, so the comments on how to improve the narrative were particularly appreciated.
The main issue seems to be the mention of Exponential Random Graph Models at the beginning. We will move this to the end, with motivation for why we mention it. SBMs (like the ones obtained by our method) are often the distribution that is desired when using common choices for ERGMs, and there is some convincing evidence that, for many choices of subgraph constraints, maximizing entropy leads to an SBM with some finite number of blocks. We will include this in the discussion.
Also related to structure, we are creating Appendices that give a more detailed introduction to the gluing algebra, how it is used in the method, and related proofs.
Sparsity was also mentioned twice, so we will add another set of simulations where the graphs have constant average degree.
With the space freed up, we can also give examples of further extensions to (for example) weighted and directed graphs.
Thank you so much again, and we are looking forward to hearing back from you.
Very best,
The Authors | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fair Allocation of Indivisible Chores: Beyond Additive Costs | Accept (poster) | Summary: This paper studies the allocation of m indivisible chores among n agents with non-additive preferences. The authors show that, for the case of approximate MMS, the best approximation factor is super constant, and specifically they give a lower bound of min{n, log m/ log log m } for submodular costs, and an upper bound of min{n, log m} for subadditive costs. The lower bound also implies a negative result for 1-out-of-d MMS allocations in this setting.
The authors proceed to study special cases of subadditive costs, and specifically costs encoded by combinatorial problems, and namely bin-packing and job scheduling. So, for example, in the case of bin-packing, the cost for a subset of items is the minimum number of bins to pack them. For both cases, the authors give an algorithm for finding a 1-out-of-(n/2) MMS allocation.
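To make the bin-packing cost concrete: the cost of a subset of chores is the minimum number of unit-capacity bins needed to pack it, which for tiny instances can be computed exactly, as in the sketch below (the sizes and capacity here are illustrative, not from the paper):

```python
def min_bins(sizes, capacity=1.0):
    """Exact minimum number of unit-capacity bins needed to pack `sizes`.

    Simple branch-and-bound: place each item into an existing bin or a
    new one; fine for the tiny instances used here.
    """
    bins = []

    def place(i, best):
        if len(bins) >= best:      # prune: cannot beat the best known packing
            return best
        if i == len(sizes):
            return len(bins)       # strictly better than `best` (see prune)
        for b in range(len(bins)):
            if bins[b] + sizes[i] <= capacity + 1e-9:
                bins[b] += sizes[i]
                best = place(i + 1, best)
                bins[b] -= sizes[i]
        bins.append(sizes[i])
        best = place(i + 1, best)
        bins.pop()
        return best

    return place(0, len(sizes) + 1)

# Subadditivity: packing two disjoint sets together never needs more bins
# than packing them separately.
S, T = [0.6, 0.6], [0.4, 0.4]
assert min_bins(S + T) <= min_bins(S) + min_bins(T)
print(min_bins(S), min_bins(T), min_bins(S + T))  # 2 1 2
```

The subadditivity assertion is exactly why this cost function falls into the class the paper studies.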
Strengths: - The paper studies an interesting, natural problem: chore allocation beyond additive costs.
- The results are relatively complete, and the authors settle, up-to-constants, all their questions.
Weaknesses: - The main algorithmic results (for bin-packing and scheduling) seem like twists to the standard approximation algorithms (e.g., Next Fit or Best Fit for bin-packing). Of course, this is expected, but these connections/insights are not explicit in the text, so it’s harder to see what’s new about this work.
(Some typos:
Line 134: “which is somehow the most unfair algorithm” -> drop “somehow”.
Line 342: “or the job scheduling setting, we restrict us on the case”)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Have combinatorial values of this nature (and as a tool for bypassing negative results) been considered in the goods case?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your appreciation of the problem we have studied and the results we have obtained. We also thank the reviewer for the constructive and helpful comments.
**Question 1: combinatorial problems considered in the case of goods**
**Response:** We appreciate this question and will incorporate the following discussion in the revised paper, as we consider it a compelling motivation for our research problems.
When it comes to goods, there have been several papers that delve into concrete combinatorial problems and seek to improve approximation ratios compared to more general valuations. For instance, we draw attention to two papers [1,2] that introduce interval scheduling and independent set structures, respectively, into MMS fair allocation problems. In both cases, the induced valuation functions correspond to special cases of XoS (fractionally subadditive) functions. While a general XoS valuation guarantees an approximation ratio of 1/4.6, these two works [1,2] manage to enhance this approximation for their specific functions.
[1] “Fair Scheduling for Time-dependent Resources” at NeurIPS 2021
[2] “Fair Allocation of Conflicting Items” at AAMAS 2022
[3] “Improved Maximin Guarantees for Subadditive and Fractionally Subadditive Fair Allocation Problem” at AAAI 2022
Some other combinatorial problems that have been studied for goods include the knapsack problem and matroid constraints, e.g.,
[4] “Approximation Algorithm for Computing Budget-Feasible EF1 Allocations” at AAMAS 2023
[5] “Guaranteeing Envy-Freeness under Generalized Assignment Constraints” at EC 2023
[6] “On fair division under heterogeneous matroid constraints” at JAIR 2023
**Comment 1: connections with standard algorithms**
**Response:** Yes, our algorithms reflect some ideas from standard algorithms (such as Next Fit), and we will make the connections and differences more explicit. Informally, in the second phase of our algorithms, we prove that we can assign each bundle obtained from the first phase to two agents without violating the MMS constraints. To prove this claim, we introduce an imaginary partition of the items, which is derived from the standard algorithms with certain modifications, such as allowing for one over-filled item. By leveraging the structural properties of this partition, we can prove our claim. In essence, the standard algorithms serve as a step within our algorithms.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I do not have any other questions at this stage. | Summary: This paper studies the MMS fair allocation of combinatorial tasks (indivisible chores) problem where the cost function is submodular or subadditive. For submodular functions, they prove that no algorithm can ensure better than min{n, log m/log log m}-approximation. For more general subadditive cost functions, they prove that there always exists an allocation that is min{n , log m}-approximation MMS, which is (almost) asymptotically tight. What’s more, for ordinal relaxation, 1-out-of-d MMS, they prove that for any d≥2, there is an instance for which no allocation is 1-out-of-d MMS. Finally, the authors give two specific settings which are bin packing setting and job scheduling setting and prove that 1-out-of-[n/2] MMS allocations always exist for these two settings.
Strengths: This paper provides solid and clean theoretical results on MMS chores allocations. It also contains some interesting techniques. For example, the author gives a quite interesting example for Theorem 1, and I also find the proof of Theorem 2 mathematically natural and complete.
Weaknesses: I do not find any obvious weaknesses. Perhaps the relevance of this paper to machine learning is not very strong.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No question.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for the dedicated effort in reviewing our paper. We are especially appreciative of the reviewer's acknowledgment of the theoretical results and techniques presented in our paper. We also understand the reviewer’s concern about the relevance of our paper to machine learning. We humbly offer the following justifications, hoping they will be helpful in addressing this concern.
On one hand, we can see that fair division works have been increasingly welcomed at conferences such as NeurIPS and ICML in recent years. For example, a growing number of fair allocation papers have been presented at NeurIPS and ICML in recent years, e.g.,
- “Fair Scheduling for Time-dependent Resources” at NeurIPS 2021
- “Fair and Efficient Allocations Without Obvious Manipulations” at NeurIPS 2022
- “Multi-agent Online Scheduling: MMS Allocations for Indivisible Items” at ICML 2023
On the other hand, our paper focuses on submodular (and subadditive) functions, and these functions have significant relevance to various machine learning and optimization scenarios, as we attempted to motivate the relevance in the introduction. Thus, we think the investigation of fair division under submodular functions can contribute to NeurIPS.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response. I do not have any further questions. | Summary: The paper deals with problem of allocating indivisible chores to agents whose valuation functions for bundles of chores are subadditive or submodular, where an allocation that guarantees every agent their maximin share (MMS) may not exist. Here, an agent's maximin share is the agent's worst-case disutility from a partitioning of the items that minimizes the disutility of the worst (maximum disutility) partition. The paper provides new upper and lower bounds on the approximability of MMS allocations.
I have reviewed a previous version of this paper submitted to IJCAI 23 where I was positive about the technical contributions of the paper. This revision addresses the concerns with writing and a minor technical issue raised there. The revisions have certainly helped with the readability.
Strengths: The problem setting of subadditive and submodular (dis)utility functions is novel.
The new results for these classes of valuation functions provided in this paper are likely to be of interest to the comsoc / fair division research community.
The technical results are interesting, non-trivial and use interesting techniques that are new to me. I was able to verify the technical results.
Relevant related work is well cited and discussed.
Weaknesses: The relevance to NeurIPS for what seems a very AI/computational social choice focused paper is not clear although this is attempted in the introduction.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reviewing our paper again. We are pleased to hear that the reviewer acknowledges the technical contribution of our work and the improvements made in the revised manuscript. We will incorporate all reviewers’ suggestions to further enhance and refine the paper.
We understand the reviewer’s concern about the relevance of our paper to NeurIPS. We humbly offer the following justifications, hoping they will be helpful in addressing this concern. On one hand, we can see that AI/Computational Social Choice works have been increasingly welcomed at conferences such as NeurIPS and ICML in recent years. For example, a growing number of fair allocation papers have been presented at NeurIPS and ICML in recent years, e.g.,
- “Fair Scheduling for Time-dependent Resources” at NeurIPS 2021
- “Fair and Efficient Allocations Without Obvious Manipulations” at NeurIPS 2022
- “Multi-agent Online Scheduling: MMS Allocations for Indivisible Items” at ICML 2023
On the other hand, our paper focuses on submodular (and subadditive) functions, and these functions have significant relevance to various machine learning and optimization scenarios, as we attempted to motivate the relevance in the introduction. Thus, we think the investigation of fair division under submodular functions can contribute to NeurIPS.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I do not have additional questions at this time. | Summary: The paper studies the classical fair allocation of indivisible items setting with two twists: (1) Items correspond to *tasks* instead of *goods*, i.e., any agent would prefer receiving no item at all. (2) Valuations are not additive but may be subadditive. The paper also considers some special cases of subadditive valuations, namely, submodular, "bin packing", and "scheduling" costs. As an objective, the paper focusses on the classic notion of the maximin share (MMS). Since MMS allocations do not always exist, the paper considers both a multiplicative as well as an "ordinal" relaxation of the MMS notion.
The contribution of the paper is threefold (I am omitting the ordinal approximation results for simplicity): (1) For the subadditive case, the paper presents a lower bound of $\min\{n,\log(m)/\log\log(m)\}$ as well as a mechanism providing an approximation of $\min\{n,\log(m)\}$. (2) For bin packing, the paper presents a multiplicative $2$-approximation and a tight lower bound of $2$ for any mechanism. (3) For job scheduling, the paper presents a mechanism providing a multiplicative $2$-approximation, however, without a matching lower bound.
Strengths: - The paper makes significant progress within the classic setting of fair allocation of indivisible items. While this literature has been long focussed on the case of additive valuations/costs, in recent years there has been a growing body of literature studying more general valuations/costs, with the paper under review being one of these. Hence, I am optimistic that the paper will lead to follow-up work.
- The paper develops new mechanisms that are tailored to the studied cost functions. These mechanisms and their analysis is certainly non-trivial and lead to a significant technical contribution.
Weaknesses: - Since the result for the general, subadditive case is rather negative (i.e., there is no constant approximation for MMS), the main contribution of the paper is for the special cases of bin packing and scheduling. Having said this, this can hardly be seen as a critique for the paper, but rather as a sign of the challenging endeavor to study beyond additive costs.
- I think that the writing of the paper could be improved, as I had to reread several parts of the paper. I added a list of suggestions within the minor comments.
- Unfortunately, the newly developed mechanism is not very elegant, and one can't help but wonder whether there exists a simpler mechanism to achieve the same approximation guarantee. Also, the mechanisms do not come with a polynomial-time implementation, hence, leaving the question open whether the same guarantees can be achieved efficiently.
**Minor Comments**
- The paper uses the term "tasks", while a large fraction of the literature uses the term "chores". I would suggest to add a comment on that.
- line 21: You mention "functions" without clarifying their role in the problem. (Of course this is clear for any person knowing fair allocation, but for others it may not.)
- line 60: "As far as we know, all the above works also assume additive costs" - Sounds a bit weird in this context, since checking these papers should be doable.
- line 71: "the asymptotically tight multiplicative" - I think it is weird to use this phrase in a theorem environment, especially since you are ignoring log-factors. I would suggest to just mention the upper and lower bounds.
- line 99: "Note that no bounded approximation" - At this point, it is not clear what should be approximated.
- Proof of theorem 1: I was very confused of the usage of the term "covering planes" since, as far as I understand, these objects are actually (partial) grids, i.e. finite set of points.
- line 214: I think it would be helpful for the reader to learn about the meaning of the abbreviation "IDO".
- line 228-236: I did not find the intuition for the algorithm very helpful before reading the algorithm (and even after that). I would suggest to refine this, having in mind that the reader has not read the algorithm at this point.
- line 316: I think that $j \in P_i$ is a bad choice for an index since here $j$ is a machine but before $j$ used to correspond to jobs/items.
- Section 5: It would have been nice to hear some (very brief) summary of how Theorem 4 is achieved, i.e., how does the mechanism look like?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Intuitively, subadditive costs make the problem "easier" in the sense that allocating all tasks to one agent at least has the same approximation guarantee as in the additive case. This is certainly not the case for superadditive cost functions. Do you have any results in this direction?
- I was missing a concrete (real-world) motivation for subadditive cost functions in the context of allocating tasks. Could you elaborate on that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the paper are well addressed, in particular, within Section 6. Here, the paper transparently communicates all resulting open questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our paper’s contribution to the literature on fair allocation, as well as the technical depth and the potential to stimulate subsequent research. We also thank the reviewer for pointing out our paper’s weaknesses and offering many constructive and helpful suggestions for improving our paper.
In the following, we first answer your questions.
**Question 1: on the superadditive cost functions.**
**Response:** Yes, we have considered superadditive cost functions and found that no bounded approximation is possible. Let us consider a simple instance with two agents $N = \\{1, 2\\}$ and four items $M = \\{a, b, c, d\\}$. The cost functions are described below:
- When $|S| \le 1$, $v_i(S) = 0$ for both $i = 1$ and $2$
- When $|S| = 2$,
- $v_1(\\{a, d\\}) = v_1(\\{b, c\\}) = 0$; $v_1(\\{a, b\\}) = v_1(\\{a, c\\}) = v_1(\\{b, d\\}) = v_1(\\{c, d\\}) = 1$.
- $v_2(\\{a, b\\}) = v_2(\\{c, d\\}) = 0$; $v_2(\\{a, c\\}) = v_2(\\{a, d\\}) = v_2(\\{b, c\\}) = v_2(\\{b, d\\}) = 1$.
- When $|S| = 3$, $v_i(S) = 1$ for both $i = 1$ and $2$
- When $|S| = 4$, $v_i(S) = 2$ for both $i = 1$ and $2$
It can be verified that the cost functions are superadditive. Besides, MMS$_i = 0$ for both $i = 1$ and $2$ since agent $1$ can partition the items into $\\{\\{a, d\\}, \\{b, c\\}\\}$ and agent $2$ can partition the items into $\\{\\{a, b\\}, \\{c, d\\}\\}$. However, no matter how the items are allocated to the agents, there is one agent whose cost is at least $1$.
We have included a similar example in Appendix B.1 of the manuscript.
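This counterexample can be checked by brute force (an illustrative sketch; the set encoding is ours, not from the paper):

```python
from itertools import chain, combinations

ITEMS = "abcd"

def subsets(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def make_cost(zero_pairs):
    """Cost function from the example: 0 on sets of size <= 1 and on the two
    named pairs, 2 on all four items, and 1 otherwise."""
    def v(S):
        S = frozenset(S)
        if len(S) <= 1 or S in zero_pairs:
            return 0
        return 2 if len(S) == 4 else 1
    return v

v1 = make_cost({frozenset("ad"), frozenset("bc")})
v2 = make_cost({frozenset("ab"), frozenset("cd")})

# MMS_i: minimize, over 2-partitions, the cost of the worse bundle.
def mms(v):
    return min(max(v(S), v(set(ITEMS) - set(S))) for S in subsets(ITEMS))

assert mms(v1) == 0 and mms(v2) == 0

# Yet in every allocation (agent 1 gets S, agent 2 gets the rest),
# some agent pays at least 1.
worst = min(max(v1(S), v2(set(ITEMS) - set(S))) for S in subsets(ITEMS))
assert worst == 1
```

The two asserts together confirm that both maximin shares are 0 while no allocation achieves any bounded multiple of them.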
**Question 2: concrete motivation for subadditive cost functions in the context of allocating tasks**
**Response:** Firstly, we can regard the job scheduling and bin packing models as two examples that motivate subadditive cost functions in the context of allocating tasks. One interpretation of the cost of a set of tasks is the time taken to complete them. Sometimes agents may be able to perform tasks in parallel, in which scenario the agents have job-scheduling cost functions. Sometimes agents have to finish tasks by a deadline and can hire processors to finish them; the cost of hiring processors equals the number of processors, in which scenario the agents have bin-packing cost functions.
Secondly, and more concretely, doing laundry and cleaning the house can be carried out simultaneously, and thus induce subadditive cost functions. Similarly, when it comes to teaching, adding more students to a class can actually result in a decrease in the marginal cost for the teacher, as the teacher only needs to prepare the teaching materials once, regardless of the number of students.
Next, we respond to your comments.
**Comment 1: on the inherent difficulty of the problems.**
**Response:** We thank the reviewer for understanding the inherent difficulty of the problems.
**Comment 2: the writing of the paper could be improved**
**Response:** We thank the reviewer for pointing out this weakness and offering so many helpful suggestions for improving the writing quality. We will carefully polish our paper and address the reviewer’s concerns.
**Comment 3: simpler mechanisms to achieve the same approximation guarantee**
**Response:** We have tried several simpler algorithms but none of them performs well, especially for the ordinal approximation settings. For example, round-robin is a commonly-used algorithm in fair division that guarantees $2$-MMS (multiplicative approximation) for additive cost functions. However, it does not ensure $1$-out-of-$\lfloor n/2 \rfloor$ MMS in terms of ordinal approximations. To see this, we can consider an instance with four identical agents and five items of costs $4,1,1,1,1$. A $1$-out-of-$\lfloor n/2 \rfloor$ MMS defining partition is $\\{\\{4\\},\\{1,1,1,1\\}\\}$ and thus the $1$-out-of-$\lfloor n/2 \rfloor$ MMS equals $4$. The round-robin algorithm allocates two items with costs $4$ and $1$ to one agent, who receives a cost higher than her $1$-out-of-$2$ MMS.
This example also appears in Appendix E.2 of the manuscript.
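The round-robin counterexample can be verified in a few lines (a sketch; round-robin is modeled as each agent in turn taking her cheapest remaining chore):

```python
from itertools import combinations

costs = [4, 1, 1, 1, 1]   # five chores; four identical agents, so floor(n/2) = 2
n = 4

# 1-out-of-2 MMS: best 2-partition minimizing the max bundle cost.
items = range(len(costs))
mms = min(
    max(sum(costs[i] for i in S), sum(costs[i] for i in items if i not in S))
    for r in range(len(costs) + 1)
    for S in map(set, combinations(items, r))
)
print(mms)  # 4, from the partition {4} | {1, 1, 1, 1}

# Round-robin: each agent in turn takes her cheapest remaining chore.
remaining = sorted(costs)                 # [1, 1, 1, 1, 4]
bundles = [[] for _ in range(n)]
for turn in range(len(costs)):
    bundles[turn % n].append(remaining.pop(0))
print(bundles[0], sum(bundles[0]))        # [1, 4] -> cost 5 > MMS of 4
```

Agent 1 ends up with cost 5, exceeding her 1-out-of-2 MMS of 4, matching the rebuttal's claim.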
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed response. I do not have any further questions at this point. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Exploring Geometry of Blind Spots in Vision models | Accept (spotlight) | Summary: The authors explore the output space of vision models. Specifically, they propose an algorithm (Level Set Traversal, LST) which, starting from a "source" input image transforms it into an image that looks completely different (e.g. an image from a different class) while still confidently predicting the "source" class. This proves that neural networks have "blind spots" they further analyze the behavior of networks, and show that the paths between these inputs are connected.
Strengths: Originality: the paper improves upon previous work in the area, which only applied to very specific network architectures (invertible resnets). In contrast, the novel LST algorithm can be applied to virtually any network.
Quality: Both the theoretical and the empirical work appear solid to me.
Clarity: I was not able to follow all the math in detail, but was nonetheless able to follow along nicely.
Significance: I think this paper is a solid contribution to the field. It provides both some theoretical analysis of the output space of vision models, as well as an algorithm for producing "blind spot" examples that is easy to understand and implement, and seems to work well.
Weaknesses: * The authors state that the algorithm takes 200-400 iterations for each input image. To put this into context it would be nice if the authors could give a wall clock time estimate on how long this would take.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * The authors mention that doing only 10ish iterations produces much lower-quality results. I would be curious to see what these would look like (though this is just personal curiosity, so feel free to ignore this).
* I don't understand what the colors within the triangles in Figure 4 represents, could you explain it?
* Do the images you obtain transfer to other architectures? E.g. If we take the off-diagonal images from Fig. 2 and put them into a different network than the one used to create them, would they be classified as target or source class?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately discusses limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer found that the proposed method is original, explained clearly, and is a significant contribution to the field. We respond to the questions raised below:
> The authors state that the algorithm takes 200-400 iterations for each input image. To put this into context it would be nice if the authors could give a wall clock time estimate on how long this would take.
- *Wall Clock Time:* We record wall clock time on a single RTX A5000 GPU with a ResNet-50 model on ImageNet, using a batch size of 100, and report mean and standard deviation ($\mu \pm \sigma$) statistics over 5 independent mini-batches.
The proposed LST algorithm is seen to require 101.8 seconds ($\pm$ 0.4s) for 400 iterations, and 52.0 seconds ($\pm$0.3s) for 200 iterations.
In comparison, a 100-step PGD attack on the same model takes 25.1 seconds ($\pm$1.4s). Thus, per iteration, the LST algorithm takes the same amount of time as a PGD attack iteration (both about 0.25 seconds). In total, the LST algorithm with 200/400 iterations requires 2x/4x the computation time of a standard 100-step PGD adversarial attack.
- In Figure-2 of the Rebuttal pdf, we present the LST outputs as obtained with just 10 iterations. Here, we clearly observe that the LST output images (off-diagonal entries) are not adequately similar perceptually to the target image of interest, but are often more similar to the source image. Thus, in practice, we set the number of iterations to be between 200 and 400 to produce LST outputs very similar to the target image.
> I don't understand what the colors within the triangles in Figure 4 represents, could you explain it?
- In order to visualize the extent of level sets over a two-dimensional subset, we evaluate the model confidence over the triangular convex hull obtained by linearly interpolating over three reference points, namely the source image and the two target blindspot images. The prediction confidence (in the range [0,1]) assigned by the model with respect to the source class is mapped to the continuous color-bar (shown in the rightmost part of Figure 4), with high-confidence (close to 1.0) points appearing as bright yellow, and low-confidence (close to 0.0) points appearing as dark violet.
> Do the images you obtain transfer to other architectures? E.g. If we take the off-diagonal images from Fig. 2 and put them into a different network than the one used to create them, would they be classified as target or source class?
- *Transferability of LST outputs:* In general, we observe little transferability between different networks; that is, the off-diagonal images generated by LST on one network are generally assigned low confidence by a different network with respect to the source class. This is likely because the LST algorithm optimizes the final output to be highly specific to the original network (especially with the choice of relatively small step sizes over several iterations), and transferability is not explicitly incorporated into its design. However, we hypothesize that the transferability of LST outputs can be improved significantly by incorporating techniques from the vast body of existing literature on boosting the transferability of standard adversarial attacks [1,2,3] and Universal Adversarial Perturbations (UAPs) [4,5,6].
We thank the reviewer for the support for acceptance. We greatly appreciate the valuable comments and suggestions, and we will certainly incorporate them in the final version of the paper.
References:
[1] Xie et al., Improving Transferability of Adversarial Examples with Input Diversity, CVPR 2019
[2] Wu et al., Boosting the Transferability of Adversarial Samples via Attention, CVPR 2020
[3] Qin et al., Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation, NeurIPS 2022
[4] Dezfooli et al., Universal adversarial perturbations, CVPR 2017
[5] Mopuri et al., Fast feature fool: A data independent approach to universal adversarial perturbations, BMVC 2017
[6] Benz et al., Double Targeted Universal Adversarial Perturbations, ACCV 2020
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I stand by my original review that this manuscript should be accepted.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your response. Again, we are glad that you support the acceptance of our paper. | Summary: The paper studies the under-sensitivity of such deep neural networks, where it is possible to find large perturbations in the input space (such as transforming one image to another) without significant changes to the activations/predictions. Towards this goal, the paper proposes Level Set Traversal, LST, which adds perturbations in an incremental manner to move along regions orthogonal to the gradient (w.r.t input) space computed on the classification loss. Evaluations are done on ImageNet and CIFAR-10, and results show that LST is able to find not only large perturbations that continue to yield high confidence on the source class, but a path of high confidence from the source to the target. Experiments also show that the convex hull formed by a source image and a pair of LST output images for two targets enclose a region of high-confidence for adversarially trained models, unlike non-adversarially trained ones.
Strengths: - The explanation and intuition of the method is very clear, and it also appears to be highly effective in finding an entire path of high confidence/similar loss from two highly visually different images.
- Theoretical analysis presented in Sec 4 is also helpful and insightful.
- Additional experiments on CIFAR-10 and ablation studies in the supplementary material are also appreciated.
Weaknesses: 1) The experiment details mention 1000 images from ImageNet are chosen as the source images, but only 5 target images (the ones in the figure) are used. It is difficult to tell whether the results are overfitted to the 5 target images. More extensive evaluation can be done, for example simply randomly choosing source-target pairs from the 1000 chosen images.
2) I do not see any metric measuring the confidence of the output of LST alone. This metric is highly important, especially for Table 1, to show that LST can jointly optimize for minimal distance from the target image as well as preserving confidence. Measuring path confidence, which averages the results across the entire path, is not sufficient, since the confidence result can be biased by points closer to the source image, which are more likely to exhibit high confidence.
3) Measurement of computational cost (wall-clock time) of the LST algorithm might be useful.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What are the main failure cases of LST - under what situations does LST work best, and when is LST unable to find such paths or outputs of low loss/high confidence?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer found that the proposed method is explained clearly and intuitively, is effective in practice and supported by theoretical insights. We respond to the questions raised below:
>The experiment details mention 1000 images from ImageNet are chosen as the source images, but only 5 target images (the ones in the figure) are used. It is difficult to tell whether the results are overfitted to the 5 target images. More extensive evaluation can be done, for example simply randomly choosing source-target pairs from the 1000 chosen images.
- *Choice of Target Classes:* For CIFAR-10, we consider all remaining 9 classes as target classes in our evaluations. For ImageNet, we pick 5 arbitrary classes for our experiments in the paper, but observe that the choice of the target images/classes does not affect our results much. To show this, we rerun our experiments twice more for a normally trained ResNet-50 model wherein we randomly sample another set of 5 classes each time. We also randomly sample two target images for each source image in the last row. We present our results in the tables below:
| | $\ell_2$ dist: $\mu \pm \sigma$ | $\ell_\infty$ dist: $\mu \pm \sigma$ | SSIM: $\mu \pm \sigma$ | LPIPS dist: $\mu \pm \sigma$ |
|-------|------------------------ |------------------------ |------------------------ |------------------------ |
|Paper| 2.6 $\pm$ 1.1 | 0.033 $\pm$ 0.011 | 0.995 $\pm$ 0.006 | 0.003 $\pm$ 0.004|
|Random sampling 1| 2.8 $\pm$ 0.9 | 0.031 $\pm$ 0.012 | 0.996 $\pm$ 0.004 | 0.002 $\pm$ 0.004|
|Random sampling 2| 2.9 $\pm$ 1.3 | 0.055 $\pm$ 0.031 | 0.996 $\pm$ 0.005 | 0.002 $\pm$ 0.003|
| Random sampling per image| 3.5 $\pm$ 4.9 | 0.044 $\pm$ 0.035 | 0.988 $\pm$ 0.029 | 0.003 $\pm$ 0.016|
| | $~~~~~~p_{\text{src}}$ | Avg $\Delta$ Conf. | Avg $\Delta$ Frac $(\delta=0.0)$ | $~~~(\delta=0.1)$ | $~~(\delta=0.2)$ | $~~(\delta=0.3)$| Avg Path Conf. |
|----|------------------------ |------------------------|------------------------ |------------------------|------------------------ |------------------------|------------------------ |
|Paper| 0.96 $\pm$ 0.12 | 0.49 $\pm$ 0.11 | $~~~~~$0.13 $\pm$ 0.12 | 0.42 $\pm$ 0.12 | 0.46 $\pm$ 0.12| 0.48 $\pm$ 0.11 | 0.87 $\pm$ 0.09 |
|Random sampling 1| 0.95 $\pm$ 0.14 | 0.47 $\pm$ 0.12 | $~~~~~$0.13 $\pm$ 0.13 | 0.41 $\pm$ 0.13 | 0.44 $\pm$ 0.13 | 0.46 $\pm$ 0.13 | 0.81 $\pm$ 0.11|
|Random sampling 2| 0.95 $\pm$ 0.13 | 0.42 $\pm$ 0.11 | $~~~~~$0.11 $\pm$ 0.12 | 0.36 $\pm$ 0.11 | 0.39 $\pm$ 0.12 | 0.41 $\pm$ 0.12 | 0.81 $\pm$ 0.11 |
| Random sampling per image | 0.94 $\pm$ 0.14 | 0.43 $\pm$ 0.14 | $~~~~~$0.11 $\pm$ 0.13 | 0.36 $\pm$ 0.15 | 0.39 $\pm$ 0.15 | 0.42 $\pm$ 0.16 | 0.79 $\pm$ 0.14|
- The confidence of the LST output itself generally depends on a combination of hyperparameter choices, such as the confidence threshold $\delta$, number of steps $m$, and step sizes for the components perpendicular and parallel to the local gradient. For example, if the step size perpendicular to the gradient is made larger, the network confidence can drop below $p_{src} - \delta$, terminating the LST algorithm. Thus, when the LST algorithm stops before the maximum iteration count is reached, the penultimate point, whose network confidence remains above $p_{src} - \delta$, is set as the LST output. However, early termination is rarely observed with an appropriate choice of hyperparameters. Thus, in all cases, the LST output is guaranteed to have network confidence above $p_{src} - \delta$, for all LST image metrics shown in Table-1. For reference, we set the confidence threshold $\delta=$ 0.2 for ImageNet, which means that the confidence on the LST output never drops below the confidence of the source image by more than 0.2. This can also be seen individually for all LST output images, which are the off-diagonal images in Figures-2,3 (original source images are the diagonal images), indicating that the LST outputs have high confidence, generally well above the lower bound given by $p_{src} - \delta$ = $p_{src}- $ 0.2.
* *Wall Clock Time:* We record wall clock time on a single RTX A5000 GPU with a ResNet-50 model on ImageNet, using a batch size of 100, and report mean and standard deviation ($\mu \pm \sigma$) statistics over 5 independent mini-batches.
The proposed LST algorithm is seen to require 101.8 seconds ($\pm$ 0.4s) for 400 iterations, and 52.0 seconds ($\pm$0.3s) for 200 iterations.
In comparison, a 100-step PGD attack on the same model takes 25.1 seconds ($\pm$1.4s). Thus, per iteration, the LST algorithm takes the same amount of time as a PGD attack iteration (both about 0.25 seconds). In total, the LST algorithm with 200/400 iterations requires 2x/4x the computation time of a standard 100-step PGD adversarial attack.
- *Best and Worst cases:* Given that the LST algorithm depends primarily on the parallel and orthogonal components of the model gradient, LST works best on deep networks that do not suffer from gradient obfuscation or gradient masking issues, which encompasses nearly all commonly used vision models. On the other hand, for models with non-differentiable or randomized inference-time components, the LST algorithm would have to be modified to utilize techniques such as Backward Pass Differentiable Approximation (BPDA) and Expectation over Transformation (EoT), as proposed in past adversarial attack literature [Athalye et al. 2018], which help overcome shattered, stochastic, exploding and vanishing gradients. We also note that these modifications often require a sizable increase in the maximum iteration count to achieve adequate convergence.
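To make the traversal direction discussed above concrete, the following is a minimal sketch of the core LST update on a toy logistic model (illustrative only; the toy `confidence` model, step size, and iteration count are simplified stand-ins, not our actual implementation):

```python
import numpy as np

def confidence(x, w):
    """Toy source-class confidence: a sigmoid over a linear score w @ x."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def lst_step(x, x_target, w, eta=0.05):
    """One traversal step: move toward the target along the component
    orthogonal to the local gradient of the confidence."""
    p = confidence(x, w)
    grad = p * (1.0 - p) * w                  # gradient of sigmoid(w @ x) w.r.t. x
    g_unit = grad / (np.linalg.norm(grad) + 1e-12)
    d = x_target - x                          # direction toward the target image
    d_perp = d - (d @ g_unit) * g_unit        # project out the gradient component
    return x + eta * d_perp

rng = np.random.default_rng(0)
w = rng.normal(size=16)
x_src = rng.normal(size=16) + 0.5 * w         # source with a high-confidence score
x_tgt = rng.normal(size=16)

p0 = confidence(x_src, w)
x = x_src.copy()
for _ in range(400):
    x = lst_step(x, x_tgt, w)
# x is now close to x_tgt, yet the model confidence is essentially unchanged.
```

Because the toy confidence depends only on the linear score, steps orthogonal to the gradient preserve the confidence exactly while steadily moving the iterate toward the target; real networks preserve it only to first order, which is why the confidence threshold $\delta$ is needed.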
We thank the reviewer for the support for acceptance and greatly appreciate the valuable comments and suggestions. We kindly ask if the reviewer would consider increasing their score if their concerns or questions have been addressed. We would be glad to engage further during the discussion period.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: The authors' rebuttal has convincingly demonstrated that the results of the experiments are generalizable to more ImageNet classes apart from the initially chosen 5, and that LST, by construction of the algorithm, maintains high confidence across the entire path. I also appreciate the author's additional results on wall-clock timing and discussion of failure cases, which I believe will be nice to incorporate into the revision to paint a more complete picture. Since my concerns have been addressed, I've raised my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad that the reviewer found our rebuttal convincing and helpful. We sincerely thank the reviewer for the valuable suggestions and detailed feedback, we will certainly incorporate these and associated additional results into the final version of the paper. We thank the reviewer for raising their score, and for supporting acceptance of the paper. | Summary: The authors propose a new adversarial attack on the discriminative vision models called Level Set Traversal (LST). Contrary to the previous attacks, this new algorithm exploits the orthogonal component of the network's gradient to produce samples that can bypass existing adversarially-trained classification networks.
Moreover, the authors showcase that the samples obtained using this algorithm lie on the star-shaped manifold of high-confidence predictions. This is new compared to the previous adversarial attacks, which typically yield samples not connected via high-confidence linear paths.
Strengths: + The presentation of the paper is great.
+ The motivation behind the proposed method is clear. I also like that the paper is theoretically driven and includes experimentally confirmed results on real-world datasets.
+ To me, the results look really convincing and exciting. I was surprised at the existence of the linearly interconnected sets of adversarial examples, even in adversarially trained models.
+ The ablation study of the method is quite extensive and includes robustness tests for multiple hyperparameters.
Weaknesses: - I would suggest the authors try and make connections to the area of model ensembling, where works such as [1] showcased the existence of piecewise-linear paths with low training error in the space of the model weights.
- I also suggest including section 1 of the appendix in the main paper, as well as extending Figure 4 to include the visualization for multiple attack types. Without reading the appendix, the advantage of the proposed algorithm over previous methods remains unclear. Extended proof of Lemma 1, in my opinion, could be instead moved to the appendix, as it breaks the flow of the text.
- The authors have only validated their approach against one type of adversarial defense for each proposed network. Extending such evaluation would benefit the community and provide a benchmark for future works.
[1] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, NeurIPS 2018
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Did the authors benchmark multiple state-of-the-art adversarial defense methods against their attack?
* Why does the discovered high-confidence manifold have more high-confidence regions for adversarially-trained models than non-AT variants? Have the authors evaluated multiple AT methods to explore whether or not the shape of this region depends on the network architecture (ResNet vs ViT) or the specific AT method?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We are encouraged that the reviewer found the paper well-motivated, clear, convincing and exciting. We respond to the questions raised below:
- *Connections to model ensembling:* We thank the reviewer for the suggestion. The existence of piece-wise linear paths in model parameter space that achieve low loss (on a fixed training or testing set of images), as found by several recent works, suggests that those model weights lie within low-loss level sets, and implies similar connected topologies of these sets. By altering the fixed choice of training/testing images using some well-defined perturbations/modifications, the loss function itself would change and induce different level sets in model-weight space for each such modification. We hypothesize that it might be possible to jointly consider the model parameter space and input image space as orthogonal components in a combined high-dimensional product space, wherein a meta-level linear functional can be used to define level sets and connectivity. This could potentially help identify subsets of model weights that generalize well across different variations of the input image distribution.
- We thank the reviewer for feedback on the section organization, we will certainly incorporate it in future versions.
- *Extending Evaluations and Benchmarking:* We thank the reviewer for the suggestion to create a common benchmark to evaluate the existing models, especially amongst different adversarial defenses. Our codebase is compatible with loading adversarially trained models from the Model-Zoo offered by RobustBench [Croce et al. 2021]. We additionally present results obtained on other popular adversarial defenses such as TRADES [Zhang et al. 2019] and the current best-performing WideResNet-34-10 model on CIFAR-10 listed on RobustBench (trained with extra data by Rade et al. 2021) below:
Image Metrics between LST output and Target Image:
| | $\ell_2$ dist: $\mu \pm \sigma$ | $\ell_\infty$ dist: $\mu \pm \sigma$ | SSIM: $\mu \pm \sigma$ | LPIPS dist: $\mu \pm \sigma$ |
|-------|------------------------ |------------------------ |------------------------ |------------------------ |
| TRADES | 3.6 $\pm$ 2.2 | 0.467 $\pm$ 0.137 | 0.874 $\pm$ 0.114 | 0.033 $\pm$ 0.03 |
| Rade et al. | 4.0 $\pm$ 2.6 | 0.51 $\pm$ 0.145 | 0.846 $\pm$ 0.132 | 0.037 $\pm$ 0.034 |
Quantitative measures ($\mu \pm \sigma$) of model confidence invariance over the triangular convex hull ($\Delta$) of a given source image and all possible target image pairs and over linear interpolant paths between all possible source-target image pairs:
|| $~~~~~~p_{\text{src}}$ | Avg $\Delta$ Conf. | Avg $\Delta$ Frac $(\delta=0.0)~~~$ | $~~~~(\delta=0.1)$ | $~~~(\delta=0.2)$ | $~~~(\delta=0.3)$| Avg Path Conf. |
|--------------|------------------------ |------------------------|------------------------ |------------------------|------------------------ |------------------------|------------------------ |
| TRADES | 0.75 $\pm$ 0.21 | 0.69 $\pm$ 0.21 | $~~~~$0.24 $\pm$ 0.20 | 0.72 $\pm$ 0.21 | 0.87 $\pm$ 0.14 | 0.94 $\pm$ 0.09 | 0.74 $\pm$ 0.20 |
| Rade et al. | 0.75 $\pm$ 0.18 | 0.70 $\pm$ 0.19 | $~~~~$0.24 $\pm$ 0.18 | 0.77 $\pm$ 0.19 | 0.90 $\pm$ 0.11 | 0.96 $\pm$ 0.07 | 0.74 $\pm$ 0.17 |
---
>The authors have only validated their approach against one type of adversarial defense for each proposed network. Extending such evaluation would benefit the community and provide a benchmark for future works.
> Did the authors benchmark multiple state-of-the-art adversarial defense methods against their attack?
> Have the authors evaluated multiple AT methods to explore whether or not the shape of this region depends on the network architecture (ResNet vs ViT) or the specific AT method?
- *Shape of Level Sets:* Due to differences in architecture between ResNet and ViT networks, and differences in training methodologies, the level sets do tend to have different shapes in the two cases. For instance, the ResNet-50 AT model is confident over a more expansive set of inputs as compared to the DeiT-S AT model, as indicated by the larger high-confidence fraction of the triangular convex hull (formed by a given source image and all possible target image pairs) across various confidence threshold settings, as presented in Table-2. Between different adversarial training methods on the same architecture, we observe that the extent of the level set generally depends on the degree of regularization utilized, as it explicitly promotes smoothness of network outputs over a predefined input region; we also find that AT models tend to be under-sensitive over subsets of the input domain that lie well beyond their original threat model.
> Why does the discovered high-confidence manifold have more high-confidence regions for adversarially-trained models than non-AT variants?
- During adversarial training, the model is explicitly regularized to mitigate oversensitivity to perturbations in input space within the subset of images as defined by the original threat model of interest. It is well known that adversarially trained models are smoother than normally trained models within the original threat model, given that large fluctuations in model prediction are discouraged during adversarial training. We suspect that these adversarially trained models also tend to be smoother over subsets of the input domain that lie well beyond its original threat model due to the regularization encountered during adversarial training, and thus the high confidence region is also more expansive as compared to normally trained models.
We thank the reviewer for the strong support for acceptance. We greatly appreciate the valuable comments and suggestions, and we will certainly incorporate them in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Great rebuttal, I keep my rating
Comment: I would like to thank the authors for the comprehensive rebuttal and keep my initial rating. In case of acceptance, I recommend the authors incorporate the additional presented experiments and rework the text for better clarity and comprehensiveness in the camera-ready version.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad that you found our rebuttal comprehensive and helpful. We will certainly incorporate the additional results and rephrase the text in the camera-ready version as suggested. Once again, we thank you for your strong support for acceptance of the paper. | Summary: This paper introduces the idea of level sets for image classification models. Image classification models are said to be ‘under-sensitive’ when two visually distinct images produce the same output (class). The authors propose a method to compute ‘equi-cconfidence’ level sets such that two images belonging to this set produce the same output when passed through a classifier.
The Level Set Traversal (LST) algorithm takes a source image and a target image (visually distinct from the source, different class) and produces an image visually similar to the target while maintaining the output prediction of the source image. This is done by iteratively updating the source image in the direction perpendicular to the gradient. The paper then goes on to show (theoretically) the behavior of level sets in some basic ML settings (linear classifiers, ReLU neural networks, etc.).
Additionally, the authors also highlight the complementary nature of adversarial examples and level sets: adversarial examples try to change the model output while keeping human predictions the same, while level sets try to keep the model output the same when human predictions differ.
Experiments are done on ResNet-50 and DeiT on ImageNet. Empirical results show adversarially trained models exacerbate the problem of under-sensitivity compared to vanilla models.
Strengths: 1. The LST algorithm is the main contribution of the paper and is a novel way to calculate equi-confidence sets given a source-target pair.
2. The paper is well written and concepts are introduced in an easy to understand fashion.
3. Significance: Understanding what causes under-sensitivity in a model is a major open problem.
4. I really like the explanation of the under-sensitivity of adversarial models using the LST algorithm. I think that is the major insight from this paper (apart from the algorithm itself of course).
Overall, I really like the idea and the execution of the paper so I'm conflicted whether to recommend acceptance (see my comments below). I'm looking forward to hearing from the authors and seeing other reviewers' comments.
Weaknesses: 1. I think the main weakness of the paper is the lack of analysis around the level set found for a particular source-target pair. For example, in the adversarial example setting, adversarial examples don’t need to be w.r.t. some target; they just need to change the label with a perturbation. In this paper’s case, the level set itself depends on a target label. This isn’t necessarily an issue on its own; however, it is difficult to understand what the mere existence of a level set is supposed to show about the under-sensitivity of the model. Is there something about the level set we've found that tells us something about the under-sensitivity of the model? Other questions: How does the choice of target affect the level set? Is there a ’size’ for a level set? Are level sets from source -> target and target -> source symmetric?
2. Related to presentation: I think including a figure showing images along a path from source to the final image generated by the level set algorithm would be useful to understand what exactly is within a level set. This figure could potentially replace Figure 3 which, in my opinion, does not add anything new (it shows the same thing as Figure 2 except a change in architecture)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How are the images inside the convex hull in Fig 4 obtained? The images on the edges are calculated via linear interpolation but I am not sure about those inside the convex hull.
2. How do you distinguish ‘under-sensitivity’ from just poor training? Suppose you chose an architecture/set of hyperparameters which is bad fit for the dataset. Wouldn’t this setting also be ‘under-sensitive’? Is there any way to tell them apart?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. (Relatively) Large computational budget required to calculate images close to the target via LST. (Mentioned by the authors)
2. Level Sets are a property of target class in addition to source class (unlike adversarial examples)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. We are glad that they found our contributions novel, significant, and insightful. We answer the questions raised below:
**Analysis and implications of level set**
- We would like to emphasize that the level set itself is primarily a property of the source image/source class - formally, given a prediction confidence value $p \in [0,1]$, and a class $j \in \lbrace 1, \dots, N\rbrace$ the Level Set $L_f(p,j)$ for a function/model $f$, is defined as the set of all inputs that are assigned the confidence $p$ with respect to class $j$. Thus, from a theoretical standpoint, the level set itself does not depend explicitly on the choice of the target, but empirically we explore the level set using the LST algorithm which simply starts from the source image and explores in the direction of the target image. The true level set for any particular class is thus a superset of the union of the sets found by LST for all possible target images of other classes. We use random images from random target classes to empirically show that inputs very similar to the target image lie inside the level set containing the source image.
- We agree that showing the mere existence of the level set is not very significant in itself. We however find that the level set for common vision models is remarkably expansive — large enough to contain inputs that look near-identical to target images from other classes. Further, LST helps uncover a remarkable star-like connected geometry for the level sets, wherein the linear convex interpolant paths between any source image and LST output “blindspot” image lies within the same level/super-level set. Furthermore, we show that adversarially trained models exhibit significant under-sensitivity over subsets of the input domain that lie well beyond the original threat model used in its training. The large extent of these level sets implies that we may need to think beyond adversarial training in order to solve the under-sensitivity issue.
*Intermediate images along Path:* Thank you for the suggestion. We present these intermediary images found over different iterations by LST for both normal and robust ResNet-50 models in Figure-1 of the Rebuttal pdf.
*Computation of images inside Convex Hull:* In order to visualize the extent of level sets over a two-dimensional subset, we evaluate the model confidence over the triangular convex hull obtained by linearly interpolating over three reference points, namely the source image and the two target blindspot images. For example, the image at the ‘centroid’ of the triangle formed by the source image and any target image pair is the arithmetic mean of the three images. In Figure-4, the lower left vertex in each triangle is given by a source “goose” image, and the other two vertices are given by two random target blindspot images. Apart from the high-confidence assigned along the linear paths between the source and target images (represented by two sides of the triangle), we observe that the model also assigns high confidence to a sizable fraction of the interior of the triangular convex hull as well, indicating the extent of the level set.
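As an illustrative sketch of this barycentric sampling over the triangular convex hull (the `model_confidence` function below is a hypothetical stand-in for the network's source-class confidence, not our actual evaluation code):

```python
import numpy as np

def convex_hull_grid(x_src, x_t1, x_t2, n=10):
    """All convex combinations a*x_src + b*x_t1 + c*x_t2 on a barycentric
    grid, with weights a + b + c = 1 and a, b, c >= 0."""
    points = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            a, b = i / n, j / n           # weights on source and first target
            c = 1.0 - a - b               # weight on the second target
            points.append(a * x_src + b * x_t1 + c * x_t2)
    return np.stack(points)

def model_confidence(x):
    """Hypothetical stand-in for the network's source-class confidence."""
    return 1.0 / (1.0 + np.exp(-x.mean()))

# Three reference images: the source and two target blindspot images.
x_src, x_t1, x_t2 = np.random.default_rng(0).normal(size=(3, 8, 8))
grid = convex_hull_grid(x_src, x_t1, x_t2, n=10)       # 66 interpolated images
confs = np.array([model_confidence(x) for x in grid])  # values in [0, 1]
```

Each confidence value in `confs` would then be mapped to the continuous color bar of Figure-4; the image with equal weights (the ‘centroid’) is simply the arithmetic mean of the three reference images.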
*Analyzing the 'Size' of Level Sets:* To quantitatively characterize the extent/size of the level/super-level sets, we compute statistical measures which evaluate the model confidence on the source class over the triangular convex regions enclosed by the source image and any two target blindspot images (as shown in Figure 4 of the Main paper). In particular, we report the Average Triangle Confidence, and the fraction of images in the triangular region for which the model confidence is greater than $p_{src}−\delta$, representing different threshold confidences in Table-2 of the Main paper.
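The triangle-based statistics described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' released code: `model_conf` is a hypothetical callable returning the model's source-class confidence for an image, and the grid resolution `n` is arbitrary.

```python
import numpy as np

def triangle_confidence_stats(model_conf, x_src, x_t1, x_t2,
                              p_src, delta=0.1, n=20):
    """Sample the triangular convex hull spanned by a source image and two
    target blindspot images via convex (barycentric) combinations, then
    report (a) the Average Triangle Confidence and (b) the fraction of
    sampled images whose confidence exceeds p_src - delta.

    model_conf: hypothetical callable mapping an image array to the
    model's confidence on the source class."""
    confs = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            a, b = i / n, j / n
            c = 1.0 - a - b                      # barycentric weights sum to 1
            x = a * x_src + b * x_t1 + c * x_t2  # convex combination of images
            confs.append(model_conf(x))
    confs = np.array(confs)
    avg_conf = confs.mean()                      # Average Triangle Confidence
    frac_high = (confs > p_src - delta).mean()   # fraction above threshold
    return avg_conf, frac_high
```

Note that the centroid case mentioned above (weights 1/3, 1/3, 1/3) is just one sample of this grid, recovering the arithmetic mean of the three images.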
*Symmetry of Level Sets:* Given a source image $x_s$ with label $y_s$ and a target image $x_t$ of class $y_t$, the level set as defined for the source image/class is mathematically distinct from the level set corresponding to the target image/class, and the two are not exactly symmetric. However, by empirically running the LST algorithm with source image $x_s$ and target image $x_t$, we obtain a new input $x'$ that is perceptually similar to $x_t$ but lies within the level set of the source image $x_s$, and even the linear convex path between $x'$ and $x_s$ lies within the same level/super-level set. Symmetrically, we can rerun the LST algorithm with $x_t$ as the source while targeting $x_s$, to obtain a new image $x''$ perceptually similar to $x_s$, such that $x''$ and the entire linear convex path between $x''$ and $x_t$ lie within the level set of $x_t$. Thus, though mathematically distinct, we observe similar symmetric perceptual characteristics for the two level sets empirically on common vision models.
*Under-sensitivity vs Poor training:* We agree that poorly trained models will also likely display a pronounced degree of under-sensitivity. However, we can discern between these two cases by considering the network prediction of the target image itself. If the target image itself is incorrectly predicted to be in the same class as that of the source image (and say lies in its level set), then the model itself is a bad fit for these images. To control for this in our experimental evaluations, we only use target images which are correctly classified with high confidence by the network with respect to the target class. Using the LST algorithm, we show that though standard vision models may be considered a “good fit” (e.g. ResNet-50 model on ImageNet), they also simultaneously display a surprising extent of excessive under-sensitivity.
We thank the reviewer again for their valuable comments and suggestions. We kindly ask if the reviewer would consider increasing their score to support acceptance of the paper if their concerns or questions have been addressed. We would also be glad to engage further during the author-reviewer discussion period.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their clarifying comments and updated manuscript. Thinking from a partial (Edit: practical) perspective, I'm still unsure about why discovering a star like structure (I'm picking one claim out of many about structure) is significant but I can still appreciate some theoretical insight obtained from the rest of the paper. I have accordingly raised my score to a 5.
---
Reply to Comment 1.1.1:
Title: Thank you for supporting acceptance
Comment: We are highly grateful to the reviewer for increasing the score and supporting acceptance of our paper. Regarding the significance of our methodology and results (such as star-like structure), you might find our rebuttal and official comment to reviewer QPKJ pertinent. We hope that your remaining questions regarding our work are answered.
---
Rebuttal 2:
Title: Discussion?
Comment: Dear Reviewer LWdq,
You provided a borderline rating before. What do you think about the authors' response? Any further questions/comments/final justifications? Some constructive discussion can really help with the reviewing process and a fair evaluation of the work.
Best, Your AC | Rebuttal 1:
Rebuttal: **A note to all Reviewers**
We sincerely thank the reviewers for their time and valuable feedback on our work. We are glad that the reviewers appreciate the presentation and motivation of the proposed Level Set Traversal (LST) algorithm, its effectiveness in identifying the extent and geometry of equi-confidence level sets of deep networks, whereby subsequently uncovering the linear path connectivity and star-like substructure of these level sets. Furthermore, we are happy to note that the reviewers appreciate that the paper makes noteworthy contributions on the theoretical front towards the analysis of level set submanifolds, juxtaposed with the practical applicability of the LST algorithm to general vision models such as CNNs and ViTs (while prior approaches rely on special network architectures), thereby enabling novel analysis of the under-sensitivity and blind-spots of common models empirically on standard datasets such as ImageNet and CIFAR-10.
We greatly appreciate the valuable comments and suggestions, and we will certainly try to incorporate these in the final version of the paper.
Pdf: /pdf/1963799fc265c06721d4ed690b5e085f07e0fea9.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors aim to systematically study specific shortcomings (called blind spots) of vision models caused by their under-sensitivity.
For this purpose, they devise an algorithm that given an arbitrary pair of images (called source and target) can produce inputs that result in the same prediction output as the source image despite being "perceptually" similar to the target image (as quantified using LPIPS).
The authors study the geometry of the generated inputs and compare it under different training criteria (e.g. standard vs. adversarially robust models).
Strengths: - Interesting analysis that can potentially shed light into inherent shortcomings of vision models.
- The authors provide their source code
Update after rebuttal
------------------------
I would like to thank the authors for their elaborate response to the review remarks. My remarks are largely addressed in these responses, and I am hence leaning toward accepting this contribution. In that case, I urge the authors to highlight the significance of their work and its utility for diagnosing the sensitivity issues in adversarial training upfront. The manuscript is heavy on technical jargon that, while important, could be simplified in the main text and presented in the appendix.
As an additional suggestion, the following workshop at NeurIPS '23 would be a great fit to present some of the theoretical parts / foundations of this work https://www.neurreps.org/
Weaknesses: - The contribution is rather limited. It is unclear what actionable insights we can gain using the proposed level-set analysis. There are insufficient findings that could inform the design and training of vision models.
- The observations with respect to the "geometry of blind spots" are highly anecdotal
- The novelty is rather limited. A wide variety of methods have been devised to minimally perturb an image in order to fool the model to make a specific prediction. In the interpretability domain, various methods have meaningfully utilized minimal perturbation for the purpose of visualizing which image areas are most relevant for the input. In fact, similar perturbations are used in Integrated Gradients (Sundararajan et al. 2017), however, for a tangible application (feature attribution). See also the work by Wagner et al:
"Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks" (CVPR '19)
Minor: There were frequent language issues. Below are the ones I noted:
- sensitivty
- dimnesional
- upto human expectations => unclear (did you mean, compared with?)
- phenonmenon
- atttacks
- near-neighbour training images => nearest-neighbour images?
- simalar
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How sensitive is your analysis to the parameters of the level set algorithm, besides delta (e.g. max iterations, scale factor and stepsize)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Partially discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments, and address the specific aspects raised below:
**Contributions**
- We present a novel Level Set Traversal (LST) algorithm that iteratively uses orthogonal components of the local gradient to identify the extent and geometry of equi-confidence sets or level sets of deep networks.
- We emphasize that the proposed LST algorithm is applicable to general vision models such as CNNs and ViTs, unlike prior works such as Jacobsen et al. [2018a] that utilize a special class of bijective neural networks called fully Invertible RevNets, to understand and study the phenomenon of under-sensitivity of classification models.
- Given a source image, we use LST to identify inputs (informally, "blind spots") that are very similar perceptually to arbitrary target images from other classes, while leaving the model prediction nearly unmodified.
- We discover for the first time that such inputs ("blind spots") surprisingly are linearly connected to the original source image, that is, the network retains high confidence on the convex interpolant path as well. Thus given any source image, our work uncovers a remarkable star-like substructure within the equi-confidence level sets of common models on ImageNet and CIFAR-10, which was hitherto unknown.
- We agree with the reviewer that at present, we cannot directly use the LST algorithm without further modifications to train vision models in order to mitigate under-sensitivity as mentioned in the Limitations section (Lines 295-300), since the images produced using a smaller budget (e.g. 10 steps) look visually distinct from the target image. We hope however that future works could tackle this problem in a computationally effective manner.
- We draw a parallel to the field of robust defenses following the discovery of adversarial examples, which required several years to identify compute-effective techniques to mitigate the over-sensitivity of deep models, and is indeed an ongoing area of research even today. However, even the present version of the LST algorithm can help practitioners better evaluate different training methods to select an "optimal model" prior to deployment in real-world systems, even if it cannot be directly utilized during training.
**Quantitative Characterisation of Under-Sensitivity**
- To analyze the geometric structure of level sets, we demonstrate the existence of convex linear interpolant paths within the level set between the source image and the LST output quantitatively by reporting the average confidence over this linear path in Table-2, which is only marginally lower than the source image confidence itself. Since the linear interpolant path maintains high confidence for arbitrary source-target image pairs, the source image is linearly connected to the LST outputs of different target classes in a star-like substructure within the level set.
- We then characterize the extent of the level sets by computing statistical measures which evaluate the model confidence over the triangular convex regions enclosed by the source image and any two target blindspot images. In particular, we report the Average Triangle Confidence, and the fraction of images in the triangular region for which the model confidence is greater than $p_{src}−\delta$, representing different thresholds.
- Using these statistical metrics, we further demonstrate that adversarially trained models display a greater extent of under-sensitivity quantitatively in Table-2, as compared to normally trained models of the same architecture.
- Backed by these quantitative measures, we firmly submit that the observations so made are not just “highly anecdotal”.
**Novelty and Comparisons to Interpretability Works**
- By utilizing orthogonal components of local gradients iteratively, LST is the first effective algorithm in exploring under-sensitivity towards arbitrary target images for vision models of generic architectures. Thus, LST helps uncover “maximal perturbations” that leave model confidence unchanged, in sharp contrast to the wide variety of adversarial attacks that minimally perturb samples to induce a significant change in the network output.
- Moreover, in Section-1 of the Supplementary, we demonstrate that standard adversarial examples are, on their own, insufficient to study the structure of level sets of common vision models, since model confidence along the path between a target benign sample ($x_2$) and adversarial examples (such as $x_1 + \delta_{12}$ targeted towards input $x_2$) sharply declines to a valley of near zero-confidence. This contrasts sharply with the linear connectivity observed between the source image and blind-spot inputs generated using LST.
- To uncover important parts of a source image, the interpretability paper by Wagner et al. does engage in changing the image by computing a sparse mask formulated as a deletion/preservation game, while Integrated Gradients averages gradients along the interpolant path with respect to a “reference” all-zero or black image. In contrast however, LST finds a path with respect to a valid target image of any other class, and is meant to study the overall under-sensitivity and level set substructure in input space surrounding the source image, rather than highlighting parts of the image to interpret the network prediction at that specific input-point.
We report an extensive set of quantitative and qualitative experiments for ablation analysis and robustness tests for various hyperparameters in Section-4 of the Supplementary.
Typos: We thank the reviewer for pointing these out, we will amend them in future versions. We apologize for the oversight.
We thank the reviewer again for their valuable comments and suggestions. We kindly ask if the reviewer would consider increasing their score to support acceptance of the paper if their concerns or questions have been addressed, and would be glad to engage further during the discussion period.
---
Rebuttal Comment 1.1:
Title: Clarify the motivation
Comment: I appreciate the extensive responses the authors provided to all reviewers.
They helped me better appreciate the merits of the presented work.
Thinking from the readers' perspective, I feel that the authors could better highlight what the motivation behind their work is, what problem it is exactly solving, why it is significant, and what follows from their results.
The authors do introduce a variety of novel artifacts such as the triangular convex hull and the associated confidence metrics. I understand that the authors want to use these artifacts to analyze model under-sensitivity in specific OOD input regions, but it was not obvious to me what these artifacts tell us about the model's behavior, and what generalizable conclusions we can draw from them (e.g. what do star-shaped paths tell us?).
To make my point clear, the interpolation path in Integrated Gradients was derived from two clear axioms (sensitivity and implementation invariance) instead of arbitrary definitions. Are there any parallels in the level-set solution?
---
Reply to Comment 1.1.1:
Title: Significance of Proposed Methodology
Comment: We greatly thank the reviewer for engaging in the discussion and comments. We are glad that the reviewer has appreciated our responses as well.
- While the mere existence of level sets is not very significant in itself, we find that the level set for common vision models is remarkably expansive — large enough to contain inputs that look near-identical to arbitrary target images from other classes.
- To put this into context, prior to this work, it was perhaps plausible to expect that if classes A and B are somewhat similar/related (such as similar-looking classes “Norfolk terrier” and “Norwich terrier” of ImageNet), then a vision model would possibly misassign high confidence for images of class B with respect to class A.
- However, using the LST algorithm, we find that such high-confidence regions (say with respect to class A) extend outwards in an expansive, connected manner to include images of *arbitrary* classes, which human oracles would never state as being similar. Thus, using LST, we are able to systematically and quantitatively examine this phenomenon in common vision models for the first time.
- Since the linear path from any given source image to LST outputs for arbitrary target images retains high model confidence throughout, the level sets have a star-like substructure, where the number of “limbs” or linear protuberances of the star-like structure is extraordinarily large, plausibly as large as the number of images in all other classes.
- This is significant in itself, since it indicates the hitherto unknown and unappreciated scale and extent of under-sensitivity, and moreover the relative difficulty in being able to adequately mitigate the phenomenon in practical settings. For instance, if the level set for images of class A contained sizable protuberances towards only one other class B alone, the problem could perhaps be tackled by introducing a contrastive objective during the training stage that encourages the network to better discriminate between A-B images pairs by utilizing a denser sampling of related image augmentations, likely resulting in the diminution of these specific “directed” protuberances assuming reasonable train-test generalization. But since the star-like set substructure uncovered by LST implies that such protuberances exist towards any generic image of any other class for practical networks, such simple approaches will likely be ineffective and moreover computationally infeasible from a combinatorial perspective. Thus, based on the observations uncovered with LST, we expect the problem of mitigating such a pervasive extent of under-sensitivity in common vision models to be highly non-trivial.
- We utilize the triangular convex hulls to qualitatively and quantitatively analyze the size of the level sets beyond the one-dimensional interpolant paths between the source image and LST outputs alone. Thus the 1D star-like paths are contained within the triangular hulls, with the latter indicating the extent of the level sets within a two-dimensional linear subspace, for which we record relative volume at different confidence thresholds etc. in Table-2.
- For instance, these triangular convex hulls help support generalizable conclusions such as that adversarial training demonstrably exacerbates under-sensitivity in a quantifiable manner, which is also readily observable qualitatively by visualizing these triangular hulls (Figure-4). Interestingly, these robust models are under-sensitive over subsets of the input domain that lie well beyond the original threat model used in its training.
- Of related note is the work of Tramer et al. [2020], wherein they show that due to the misalignment of $\ell_p$ norm bounded balls and the ideal set of human-imperceptible perturbations, networks that are adversarially trained against such $\ell_p$ bounded perturbations of relatively large radius are overly-smooth *within* the same $\ell_p$ radius (more details in Section 7 of the Main paper). The triangular convex hulls and associated metrics introduced in our work indicate that under-sensitivity for robust models is extensive even when the models are trained with adversaries within a smaller radius.
---
Reply to Comment 1.1.2:
Title: Mathematical Foundations
Comment: - The theoretical underpinnings of the proposed method lie in the fact that the level set is orthogonal to the local gradient. Formally, if $\gamma(t):[0,1]\rightarrow L_g(c)$ is any differentiable curve within the level set of a differentiable real-valued function $g$, then $\frac{d}{dt}(g(\gamma(t))) = 0 = \langle \nabla g(\gamma(t)) ,\gamma'(t) \rangle $ $\forall t \in [0,1]$. Furthermore, under mild conditions it can be shown that the level set is a $(d-1)$ dimensional submanifold (see Section-2 of the main paper).
- *Uniqueness:* Since the local tangent space of the level set is $(d-1)$ dimensional, several independent directions are orthogonal to the local gradient, and a priori do not yield a unique path like a gradient-flow. However, once we fix our target image, we can use its difference vector with respect to the source image to compute its projection onto the local tangent space to generate a *uniquely defined path within the level set*.
- *Extremality:* Though this flow-based path may be non-linear, we additionally discover that the extremal point of this flow is surprisingly linearly connected with high-confidence to the source image in practical settings after we apply discretized approximations etc. Thus, if $x_s$ and $x'$ represent the source image and LST output respectively, let $\Delta x = x' - x_s$. Then, we observe that $x_s + \alpha \Delta x$ is assigned high confidence for all $\alpha \in [0,1]$, and $x'$ is extremal in the sense that $x_s + (1+\epsilon) \Delta x$ is rapidly assigned low-confidence by the model even for extremely small values of $\epsilon > 0$.
- Thus, once the target image is fixed, outputs generated by LST enjoy two properties: (1) uniqueness and (2) extremality. While our method satisfies these properties, we do not expect it to be the unique method to do so.
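The projection step and the linear-path confidence check described above can be sketched as follows. This is an illustrative reconstruction under the stated tangent-space idea, not the authors' implementation: `lst_step`, `path_confidence`, and all parameter defaults are hypothetical names.

```python
import numpy as np

def lst_step(x, grad, x_target, eta=0.01):
    """One sketched Level Set Traversal step: move toward the target image
    along the component of (x_target - x) orthogonal to the local gradient
    of the model confidence, so that to first order the confidence is
    unchanged (exactly unchanged for a linear functional)."""
    d = x_target - x
    g = grad / (np.linalg.norm(grad) + 1e-12)   # unit local gradient
    d_perp = d - np.dot(d, g) * g               # projection onto tangent space
    return x + eta * d_perp

def path_confidence(model_conf, x_src, x_out, n=10):
    """Average confidence along the convex path x_src + a * (x_out - x_src),
    used to verify the linear connectivity of LST outputs."""
    alphas = np.linspace(0.0, 1.0, n + 1)
    return float(np.mean([model_conf(x_src + a * (x_out - x_src))
                          for a in alphas]))
```

For a purely linear model $g(x) = \langle w, x\rangle$, the step leaves $g$ exactly invariant, matching the orthogonality condition $\langle \nabla g, \gamma'(t)\rangle = 0$ above; for deep networks the invariance holds only to first order, hence the iterative application.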
- In concrete settings, such as the model being (1) a linear-functional, or (2) a full-rank linear transformation, we present a detailed analysis of level sets and under-sensitivity in Section-4. We also include a discussion on the nature of level sets for ReLU networks, since they induce a piecewise linear structure over tessellated subsets of the input domain. The linear connectivity uncovered by LST consequently indicates that the overlap of different (d − 1) dimensional hyperplanes is non-trivial at a non-local scale. | null | null | null | null | null | null |
FairLISA: Fair User Modeling with Limited Sensitive Attributes Information | Accept (poster) | Summary: This paper aims to achieve fair user modeling with limited sensitive attribute information and propose a general framework, FairLISA, which efficiently utilizes data with known and unknown sensitive attributes to facilitate fair model training. The authors also provide theoretical guarantees from a mutual information perspective. Extensive experiments are conducted to demonstrate the effectiveness of FairLISA in scenarios with different ratios of missing sensitive attributes.
Strengths: S1. The scenario of missing sensitive attributes in fair user modeling is both meaningful and worthy of investigation.
S2. The paper is well-written and easy to follow.
S3. The proposed FairLISA efficiently leverages unknown data without the need for predicting missing attributes, providing a simple yet effective approach, supported by theoretical guarantees.
S4. Extensive experiments, especially the RQ2 experiment on different missing ratio situations, demonstrate the effectiveness and robustness of FairLISA in two representative user modeling tasks.
Weaknesses: W1. This work mainly focuses on the fairness definition where the mutual information between the user modeling result and the sensitive information is zero. However, there are other classic fairness definitions, such as Equalized Odds (EO) and Demographic Parity (DP). It would be valuable to discuss and investigate how FairLISA performs on these metrics.
W2. The sensitive information ratio setting is missing in Table 1.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1. How does FairLISA perform on classic fairness metrics?
Q2. Please provide the complete experiment setting, including the missing ratio in RQ1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Although this paper addresses the issue of limited sensitive attribute information, it still requires the collection of some sensitive information as model input, which can potentially compromise privacy. Exploring the combination of fairness and privacy would be a promising direction for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**A1. How does FairLISA perform on classic fairness metrics?**
We have already conducted this experiment in our paper, and the details can be inferred from Experiments RQ4 and Appendix C.6. To provide a clearer presentation of these results, we report the performance of all methods on classic fairness metrics for PMF and LightGCN (representative user models in recommender systems) in Table 1 and Table 2 (the lower, the better). The results indicate that our goal of removing the effect of sensitive attributes also benefits the classic group fairness metrics, and our models achieve the best performance. We will highlight this experiment in the revision.
Table 1: The performance on classic fairness metrics for PMF (the lower, the better), the best fairness results are highlighted in bold.
| | GENDER DP | GENDER EO | AGE DP | AGE EO | OCC DP | OCC EO |
| --------------- | --------- | --------- | -------- | -------- | -------- | -------- |
| Origin | 0.106871 | 0.031829 | 0.051489 | 0.027891 | 0.060231 | 0.024581 |
| ComFair | 0.054697 | 0.026501 | 0.037224 | 0.028744 | 0.047889 | 0.019412 |
| FairGo | 0.029125 | 0.020568 | 0.037694 | 0.028481 | 0.041344 | 0.019123 |
| FairGNN | 0.027538 | 0.022540 | 0.023698 | 0.027001 | 0.040135 | 0.019478 |
| FairLISA | 0.014163 | 0.019857 | 0.020140 | 0.024576 | 0.039625 | **0.018324** |
| FairGo+FairLISA | **0.010912** | **0.017749** | **0.019480** | **0.023984** | **0.039013** | 0.018352 |
Table 2: The performance on classic fairness metrics for LightGCN (the lower, the better), the best fairness results are highlighted in bold.
| | GENDER DP | GENDER EO | AGE DP | AGE EO | OCC DP | OCC EO |
| --------------- | --------- | --------- | -------- | -------- | -------- | -------- |
| Origin | 0.136796 | 0.049581 | 0.060878 | 0.035130 | 0.069747 | 0.029848 |
| ComFair | 0.075487 | 0.035767 | 0.057361 | 0.037898 | 0.074596 | 0.022252 |
| FairGo | 0.029414 | 0.034892 | 0.057358 | 0.036956 | 0.065387 | 0.022143 |
| FairGNN | 0.037538 | 0.045751 | 0.055342 | 0.034956 | 0.064192 | 0.030140 |
| FairLISA | 0.024163 | 0.022345 | 0.052356 | **0.034596** | 0.062334 | 0.021860 |
| FairGo+FairLISA | **0.012912** | **0.020985** | **0.052539** | 0.037129 | **0.061489** | **0.021578** |
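For concreteness, DP and EO gaps of the kind reported in the tables above can be computed as maximum across-group rate differences. This is a generic sketch of these standard metrics, not the paper's evaluation code; function names and the binary-prediction assumption are illustrative.

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic parity gap: max difference in positive-prediction rate
    across groups (binary 0/1 predictions, arbitrary group labels)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def eo_gap(y_true, y_pred, group):
    """Equalized-odds gap: max across-group difference in true-positive
    and false-positive rates, i.e. rates conditioned on the true label."""
    gaps = []
    for y in (0, 1):
        mask = (y_true == y)
        rates = [y_pred[mask & (group == g)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return float(max(gaps))
```

Both quantities are zero for a perfectly group-independent predictor, which is consistent with the mutual-information view of fairness taken in the paper (zero mutual information implies zero DP gap).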
>**A2. Please provide the complete experiment setting, including the missing ratio in RQ1.**
We apologize for omitting this detail. We will include this information in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for the authors’ responses. Most of my concerns have been addressed.
Specifically, the experiment was conducted to validate the effectiveness of FairLISA on classic fairness metrics. This should be added in the revision. Furthermore, the authors claim that “we will try to combine the privacy and fairness concerns so as to explore fairness and privacy aware user modeling.” Though this can be future work, I would like to hear from the authors about the plan in some detail.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer QZCP,
We sincerely appreciate your recognition and the invaluable feedback you have provided. We will present our plan for integrating privacy and fairness considerations within FairLISA from pre-processing, in-processing, and post-processing stages.
In the pre-processing stage, since FairLISA needs to collect some sensitive attributes from certain users, we intend to employ various privacy-preserving techniques during the data collection phase to align with stringent privacy requirements. These techniques may encompass actions such as the removal or obfuscation of personally identifiable information (PII) from the dataset, thereby safeguarding individual privacy.
In the in-processing stage, we will integrate differential privacy mechanisms into FairLISA to fulfill the requirements of both fairness and privacy. Specifically, we will introduce controlled noise to the training data or gradients, guaranteeing that individual data points remain confidential.
In the post-processing stage, we will assess the model's fairness and privacy performance using appropriate metrics (such as differential privacy [1], demographic parity [2], and equal opportunity [3]). If any discrepancies or concerns are identified, we will re-evaluate and fine-tune the model accordingly.
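The gradient-noise idea mentioned for the in-processing stage is commonly realized via a DP-SGD-style Gaussian mechanism. The sketch below is a generic illustration of that mechanism under assumed names and defaults (`dp_noisy_gradient`, `clip_norm`, `noise_mult`), not an implementation drawn from FairLISA.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Gaussian-mechanism sketch: clip the gradient to a fixed L2 norm
    (bounding sensitivity), then add Gaussian noise scaled to that bound,
    so that individual training examples remain confidential."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise
```

The privacy guarantee of the composed training run would then follow from standard accounting for the Gaussian mechanism [1].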
Once again, we express our gratitude for your valuable feedback. Please feel free to contact me if any other confusion exists.
[1] Dwork C, McSherry F, Nissim K, et al. Calibrating noise to sensitivity in private data analysis[C]//Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3. Springer Berlin Heidelberg, 2006: 265-284.
[2] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proc. of ITCS, 2012.
[3] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems, 29, 2016.
Best,
Authors. | Summary: The authors investigate the problem of fair user modeling in a setting with limited sensitive attributes. Due to the lack of such attribute information, they propose a general framework called FairLISA, which efficiently applies unlabeled data to facilitate fair model training. Compared to previous works, FairLISA can directly leverage unlabeled data without the need for predicting missing attributes, thereby reducing information loss caused by predictions. The experiments demonstrate the effectiveness of the proposed model.
Strengths: The authors tackle a socially valuable problem of fair user modeling in a setting with limited sensitive attributes, which holds significant implications across various applications, such as recommendation systems and cognitive diagnostics.
The paper provides a thorough and insightful summary of related works on fairness without sensitive attributes and fairness in limited sensitive attribute situations. Building upon existing research, this paper effectively identifies and summarizes three key challenges: "Efficient data utilization," "Theoretical guarantee," and "Framework generalization." The authors propose reasonable solutions to address these challenges. As far as I am concerned, the insights of directly leveraging unlabeled data without predicting missing attributes are straightforward and effective.
The authors substantiate their claims with theoretical guarantees through Lemma 1 and Lemma 2. Finally, comprehensive experiments are conducted on two representative tasks, highlighting the superiority of FairLISA. In summary, this work exhibits reasonable motivation, a clear literature review, an elegant model design, and comprehensive validation experiments.
Weaknesses: There have been existing works that focus on the complete absence of sensitive attributes, such as [1][2]. Although the specific problems and input data may differ, it would be valuable to explore whether FairLISA can be combined with these works to extend their potential in limited scenarios. This article lacks discussion in these two aspects.
[1] Tianxiang Zhao, Enyan Dai, Kai Shu, and Suhang Wang. Towards fair classifiers without sensitive attributes: Exploring biases in related features. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022.
[2] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning, 2018.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can fairness be extended to settings without sensitive attributes?
Can the techniques used in settings without sensitive attributes [1][2] be integrated with FairLISA to attain improved outcomes in limited situations?
[1] Tianxiang Zhao, Enyan Dai, Kai Shu, and Suhang Wang. Towards fair classifiers without sensitive attributes: Exploring biases in related features. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022.
[2] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning, 2018.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned in the Broader Impact section, FairLISA is effective only in limited scenarios. It is recommended to further explore the potential of FairLISA in completely unsupervised settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1. Can fairness be extended to settings without sensitive attributes?**
A1: We sincerely appreciate your question. Currently, our method cannot be extended to settings without sensitive attributes, because labeled data are necessary to train the discriminator. However, it is essential to note that our research focuses on fairness with limited sensitive attributes. Our main contribution lies in efficiently utilizing data with both known and unknown sensitive attributes to facilitate fair model training. Additionally, we provide theoretical guarantees, and our experiments demonstrate that our approach achieves SOTA performance. Moving forward, we will explore how to extend our work to settings without sensitive attributes.
>**Q2. Can the techniques used in settings without sensitive attributes [1][2] be integrated with FairLISA to attain improved outcomes in limited situations?**
A2: Thank you for your question. The techniques used in settings without sensitive attributes can indeed be integrated with FairLISA, as exemplified by the related work FairRF [1]. FairRF does not explicitly require sensitive attributes and enhances fairness through related features. Building on this core idea, we can seamlessly combine FairRF and FairLISA. Specifically, by leveraging related features, we can infer high-quality labels for sensitive attributes. These labels enable us to train our discriminator and filter. Subsequently, the two components can be optimized jointly, potentially leading to improved results. In the future, we will explore how to integrate our method with existing techniques used in settings without sensitive attributes, aiming to achieve improved fairness and performance. Meanwhile, we will provide more specific discussions in the revision to enhance the clarity and depth of our work.
[1] Tianxiang Zhao, Enyan Dai, Kai Shu, and Suhang Wang. Towards fair classifiers without sensitive attributes: Exploring biases in related features. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022.
[2] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning, 2018.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your response. My concerns have been well addressed. Moreover, this paper holds the potential to make an impact on fairness-aware modeling with limited sensitive attributes. I would raise my score accordingly. | Summary: This paper proposes FairLISA, which learns fair user models using limited sensitive attribute information. Specifically, for users with known sensitive attributes, FairLISA maximizes the cross-entropy of predicting the sensitive attribute from user representations; for users with unknown sensitive attributes, it maximizes the entropy of that prediction. Experiments on benchmark datasets demonstrate the efficacy of the proposed model.
Strengths: S1. Studying limited sensitive attribute scenario is practical and important.
S2. FairLISA is supported by the theoretical analysis.
S3. FairLISA's performance is good based on the empirical evaluation.
Weaknesses: Please see limitations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see limitations.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: L1. The authors may better illustrate why statistical parity is important in recommender systems. To my understanding, many recommendation tasks are related to sensitive attributes as well. For example, a recommender system may not want to recommend feminine care items to male users.
L2. Using user modeling as a motivating example is OK to me, but I don't understand why the authors position the paper as fair user modeling specifically. How does the proposed method connect to user modeling? What is the uniqueness of user modeling (other than limited sensitive attributes) in terms of fair user embedding learning? How is the <user, item, relation> triplet useful here?
L3. The theoretical analysis is based on the assumption that the discriminator is Bayes-optimal, which is often impossible. Thus, the practicality of the theoretical analysis needs more justification.
L4. What if the discriminator is not optimal? Does the theoretical analysis still hold in this case?
L5. Some intuition about the theoretical analysis would strengthen the paper. For example, how does optimizing the entropy of the discriminator's output help remove sensitive information? My guess is that it tries to make the prediction probability of the sensitive attribute uniform, but the authors could clarify this better.
L6. In Figure 2, why do all methods other than FairLISA converge to the same point? And it seems FairLISA converges to the same point when the ratio increases to 95% as well. Some discussion is needed.
L7. FairGNN is developed in the setting of a binary sensitive attribute and binary classification, where the covariance regularizer minimizes the covariance between the pseudo-sensitive attribute and the output logit scores. How do the authors extend it to non-binary sensitive attribute settings?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1. The authors may better illustrate why statistical parity is important in recommender system. To my understanding, many recommendation tasks is related to sensitive attribute as well. For example, a recommender system don't want to recommend feminine care items to male users. (L1)**
A1. Sorry for the confusion. Recommender systems have a wide range of applications. In specific recommendation contexts, such as suggesting feminine care items, considering statistical parity may not be relevant since the preferences are inherently linked to gender. However, in other critical scenarios like career recommendations, it becomes essential to incorporate statistical parity. By doing so, we can prevent discriminatory outcomes and ensure fairness for all users regardless of their sensitive attributes; this goal has been widely adopted in previous works, such as [1][2]. We will reorganize our paper to address your concerns.
>**Q2. Using user modeling as a motivating example is ok to me, but I don't understand why the authors position the paper to fair user modeling specifically. How does the proposed method connect to user modeling? What is the uniqueness of user modeling (other than limited sensitive attribute) in terms of fair user embedding learning? How does <user, item, relation> triplet useful here? (L2)**
A2. The core of FairLISA is based on the widely adopted framework of fair adversarial learning. In other words, FairLISA can be applied to any domain where fair adversarial learning is applicable. It is not restricted to specific data formats, such as <user, item, relation>. In this paper, we concentrate on the user modeling domain, as it plays a crucial role in decision-making. Moving forward, we intend to extend the application of FairLISA to a broader range of data formats, exploring its potential in various domains.
>**Q3. Theoretical analysis. (L3,L4)**
A3. The theoretical analysis requires an optimal-discriminator assumption. However, we should mention that an optimal discriminator is a common idealized assumption in the limited-sensitive-attribute research domain, such as in [4]. Meanwhile, even when no sensitive attributes are missing, achieving an optimal discriminator is also an idealization. To evaluate the impact of the optimal-discriminator assumption on our model, we conducted empirical experiments (RQ2). Specifically, we varied the ratio of missing sensitive attributes in the training set over {20%, 40%, 60%, 80%, 95%}. As the missing ratio increased, it became more challenging for the discriminator to reach an optimal state. By comparing our model's performance with the baseline results, we could assess the impact. The experimental results demonstrated that our model achieved SOTA performance even under the same missing-attribute ratios. This indicates that the optimal-discriminator assumption has less influence on our model than on the baselines. Thanks for your suggestion and question; we will add more relevant discussion in the revision.
>**Q4. Strengthen the intuition of FairLISA. (L5)**
A4. Thanks for your suggestion. We will illustrate the intuition from the adversarial learning perspective.
Basic fair adversarial learning comprises two modules:
- A discriminator module that predicts the sensitive attributes from the learned user embedding.
- A filter module that aims to degrade the discriminator's performance.
By maximizing the entropy of the discriminator, we can ensure that the predicted probability distribution over sensitive attributes becomes uniform. For example, in the case of gender prediction, both male and female probabilities are set to 0.5. This means the discriminator becomes unable to predict the sensitive attributes from the learned user embedding. Consequently, our goal of achieving fairness is accomplished. We will provide further clarification in the revised version.
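The uniform-prediction intuition above can be illustrated with a small numerical sketch (our own illustration, not the paper's code; function and variable names are hypothetical):

```python
import numpy as np

def discriminator_entropy(logits):
    """Shannon entropy of the discriminator's predicted attribute distribution."""
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=-1, keepdims=True)        # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(probs * np.log(np.clip(probs, 1e-12, 1.0))).sum(axis=-1)

# A confident discriminator leaks the attribute: entropy near 0.
leaky = discriminator_entropy([[4.0, -4.0]])
# Maximum entropy forces p(male) = p(female) = 0.5: entropy = log 2.
fair = discriminator_entropy([[0.0, 0.0]])
```

The filter maximizes this quantity on the discriminator's outputs, pushing predictions toward the uniform distribution.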
>**Q5. More discussion about experiments. (L6)**
A5. Thank you for your suggestion. We observe that all methods except FairLISA converge to the **Origin** baseline at the 95% missing ratio. **Origin** refers to the model without fairness considerations. This observation indicates that when sensitive attributes are extremely scarce, the other baseline methods essentially lose their effectiveness in achieving fairness and perform similarly to the case where fairness is not considered at all. In contrast, FairLISA demonstrates more robust performance, showcasing its effectiveness in achieving fairness. We will provide a more comprehensive discussion of this in the revision.
>**Q6. The non-binary setting expansion of FairGNN. (L7)**
A6. Sorry for omitting this detail.
The covariance regularizer in FairGNN is
$$L_R=\mathrm{cov}(\hat{s},\hat{y}),$$
where $\hat{s}$ represents the predicted sensitive attribute and $\hat{y}$ represents the predicted label.
In the non-binary setting, $\hat{s}$ is a multi-dimensional vector. To implement the regularization, we first calculate the covariance matrix and then add the Frobenius norm of the covariance matrix to the loss function as the regularization term. This technique has been widely adopted in various studies, such as [3]. We will provide a more comprehensive explanation of this setting in the revised version.
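As a rough sketch of this extension (our own illustration, not the authors' implementation; shapes and names are hypothetical), the regularizer is the Frobenius norm of the cross-covariance matrix between $\hat{s}$ and $\hat{y}$:

```python
import numpy as np

def covariance_regularizer(s_hat, y_hat):
    """Frobenius norm of the cross-covariance between predicted sensitive
    attributes s_hat (n x k) and model predictions y_hat (n x m)."""
    s_c = s_hat - s_hat.mean(axis=0, keepdims=True)   # center each column
    y_c = y_hat - y_hat.mean(axis=0, keepdims=True)
    cov = s_c.T @ y_c / (len(s_hat) - 1)              # k x m covariance matrix
    return np.linalg.norm(cov, ord="fro")
</n```

When predictions carry no information about the sensitive attribute, the cross-covariance (and hence the penalty) vanishes; in the binary case the matrix reduces to the scalar covariance in the FairGNN regularizer.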
[1] Le Wu, Lei Chen, et al. Learning fair representations for recommendation: A graph-based perspective. In Proceedings of the Web Conference 2021, 2021.
[2] Yunqi Li, Hanxiong Chen, et al. Towards personalized fairness based on causal notion. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021.
[3] Pourahmadi, M. Covariance estimation: The GLM and regularization perspectives. 2011.
[4] Enyan Dai and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 2021.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Dear authors,
I appreciate your efforts in addressing my concerns. But I think my first two concerns remain.
(1) Could you provide a more concrete and detailed **real-world** example, in which statistical parity is indeed important and needs to be satisfied?
(2) In your response, you mentioned that FairLISA can be applied to other problems. Then why do we specifically learn fair user embeddings while overlooking the problem of learning fair (more neutral) item embeddings?
Thanks for your efforts in advance.
Best,
Reviewer Gbnv
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Gbnv,
Thanks for your valuable feedback, we will try our best to alleviate all your concerns, detailed responses are as follows.
> (1) Could you provide a more concrete and detailed real-world example, in which statistical parity is indeed important and needs to be satisfied?
Sorry for the confusion. Let's take career recommendations as an example. Due to historical factors, specific demographic groups (e.g., ethnic minorities and women) may have encountered unjust treatment during companies' recruitment processes, placing them at a distinct disadvantage when seeking job opportunities. For example, stereotypes like "Man is to Computer Programmer as Woman is to Homemaker" [1] have perpetuated this bias. To redress these inequalities rooted in historical biases and foster a more inclusive and equitable career development environment, it is crucial to consider the fairness definition of statistical parity.
In the real world, LinkedIn researchers also argue that recommendation tasks should adhere to the statistical parity definition: the top results should always reflect the gender distribution of all candidates [2]. In light of this, they have introduced a fairness-aware algorithm into LinkedIn Talent Search that incorporates statistical parity. This algorithm's effectiveness has also been substantiated through online A/B testing.
In the revision, we will reorganize our paper to address your concerns.
> (2) In your response, you mentioned that FairLISA can be applied to other problems. Then why are we specifically learning fair user embedding but overlooking the problem of learning fair more neutral item embedding?
Thanks for your insightful question. We quite agree that fair item embedding learning is essential in the <user, item, relation> triplet data format, which is indeed what makes <user, item, relation> unique. In fact, studies of fair recommendation can be classified into two categories based on whether the uniqueness of the <user, item, relation> triplet is utilized: methods that do not consider the uniqueness (e.g., ComFair [3], which primarily focuses on fair user embeddings), and methods that do consider it (e.g., FairGo [4], which studies fairness from the perspective of the user-item interaction graph).
Our work, however, studies fairness from the limited-sensitive-attribute perspective. This is orthogonal to the consideration of the uniqueness of <user, item, relation>. It implies that regardless of whether a model takes this uniqueness into account, we have the ability to extend its application to limited situations. More specifically, FairLISA can be applied to other adversarial-based models, including ComFair and FairGo.
To validate the efficacy of our model, we have conducted comprehensive investigations across various models and illustrated in the paper how we extended these models to limited situations (as introduced in Section 4.5). Specifically, if we set $\lambda_3$ in Eq. (9) to 0, FairLISA degenerates to ComFair. If we combine Eq. (8) from FairLISA with the final loss of FairGo, FairGo is extended to the limited situation, which we refer to as FairGo+FairLISA. Finally, our model's effectiveness was demonstrated across different models. Moreover, our experiments also revealed that FairGo+FairLISA consistently achieves superior fairness outcomes. This implies that by considering the specificity of the <user, item, relation> triplet, we can achieve fairer results in the limited situation.
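To make the role of the trade-off weights concrete, here is a rough sketch of the filter's fairness objective as we understand it from the discussion above (our own illustration; the exact forms of Eq. (8) and Eq. (9) are in the paper, and the $\lambda$ names follow the rebuttal): cross-entropy on users with known attributes plus entropy on users with unknown attributes, so that $\lambda_3 = 0$ drops the unlabeled term and recovers a ComFair-style objective.

```python
import numpy as np

def entropy(probs):
    """Entropy of the discriminator's output distribution (unlabeled users)."""
    return -(probs * np.log(np.clip(probs, 1e-12, 1.0))).sum(axis=-1)

def cross_entropy(probs, labels):
    """Cross-entropy of the discriminator against known attributes (labeled users)."""
    return -np.log(np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0))

def filter_fairness_objective(p_labeled, s_labeled, p_unlabeled, lam2=1.0, lam3=1.0):
    """Objective the filter maximizes to confuse the discriminator.
    Setting lam3 = 0 keeps only the labeled term (ComFair-style)."""
    return (lam2 * cross_entropy(p_labeled, s_labeled).mean()
            + lam3 * entropy(p_unlabeled).mean())
```

With `lam3 = 0` the unlabeled users no longer contribute, which mirrors the degeneration to ComFair described above.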
Sorry for the confusion. We would reorganize our paper to make it clearer in the revised version. Please feel free to contact me if any other confusion exists.
[1] Bolukbasi T, Chang K W, Zou J Y, et al. Man is to computer programmer as woman is to homemaker? debiasing word embeddings[J]. Advances in neural information processing systems, 2016, 29.
[2] Geyik S C, Ambler S, Kenthapadi K. Fairness-aware ranking in search & recommendation systems with application to linkedin talent search[C]//Proceedings of the 25th acm sigkdd international conference on knowledge discovery & data mining. 2019: 2221-2231.
[3] Avishek Bose and William Hamilton. Compositional fairness constraints for graph embeddings. In International Conference on Machine Learning, 2019.
[4] Le Wu, Lei Chen, et al. Learning fair representations for recommendation: A graph-based perspective. In Proceedings of the Web Conference 2021, 2021
Best,
Authors. | Summary: This paper proposes a novel adversarial learning method for fairness with limited demographics.
Strengths: Pros:
1. This paper focuses on fairness with limited demographics, which is a practical and important problem.
2. This paper provides a theory-driven perspective on fairness with limited demographics.
3. This paper proposes a novel adversarial learning method for fairness with limited demographics.
4. Solid experiments are conducted to demonstrate the effectiveness of the proposed method.
Weaknesses: Cons:
1. The datasets used in this paper do not seem very common, so it is somewhat hard to evaluate the effectiveness of the proposed method. It is suggested that the authors also conduct experiments on commonly used fairness datasets such as ADULT and COMPAS.
2. Baselines seem insufficient. It is suggested that the authors may consider adding more in-processing methods as baselines.
3. This paper only empirically studies the effectiveness of the proposed method. It is unknown whether the proposed method is theoretically better than the previous estimation-based method [1]. It is suggested that the authors add some theoretical analysis to demonstrate that the proposed method is better than the previous estimation method.
4. The authors are encouraged to provide code for reproduction.
5. Besides [1], the authors may consider citing some related references on fairness with limited exact demographics. Reference [2] studies the problem of fairness with limited clean sensitive attributes and mostly private sensitive attributes. Reference [3] studies the problem of fairness with active sensitive attribute annotation.
[1] Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. https://arxiv.org/abs/2009.01454
[2] When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes https://arxiv.org/abs/2207.08336
[3] Mitigating Algorithmic Bias with Limited Annotations https://arxiv.org/abs/2207.10018
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper seems to not discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1. The datasets used in this paper seem not very common. So it is a little bit hard to evaluate the effectiveness of the proposed method. It is suggested that authors also conduct experiments on commonly used fairness datasets such as ADULT and COMPAS.**
A1: We greatly appreciate your suggestion. In this paper, we study fairness in **user modeling**, where the fundamental data format is (user, item, relation). This format is distinct from the tabular format of ADULT and COMPAS, rendering these widely used datasets unsuitable for our purposes. Instead, we have chosen to utilize classical fairness datasets commonly employed in user modeling studies [1][2], such as MovieLens-1M.
>**Q2. Baselines seem insufficient. It is suggested that the authors may consider adding more in-processing methods as baselines.**
A2: Thanks for your valuable suggestion. We have incorporated a new in-processing method called "Reg" [3] as a baseline. The core idea behind this baseline is to add the fairness objective as a regularization term. To validate our approach, we conducted additional experiments on the MovieLens-1M dataset, utilizing two user models (PMF and NCF). The results (Tables 1 and 2) demonstrate that our methods (FairLISA and FairGo+FairLISA) outperform the other techniques, affirming the efficacy of our research.
Table 1: Fairness and accuracy performance on PMF. The AUC columns report the attackers' AUC scores; smaller AUC values denote better fairness performance with less sensitive information leakage (the fairer). GEN, AGE, and OCC represent gender, age, and occupation. RMSE measures accuracy.
| Baseline | GEN AUC | AGE AUC | OCC AUC | RMSE |
| --------------- | ---------- | ------- | ------- | ------- |
| Reg | 0.6483 | 0.7368 | 0.6631 | 0.8945 |
| FairLISA | 0.5174 | **0.5276** | 0.5110 | 0.8912 |
| FairGo+FairLISA | **0.5147** | 0.5316 | **0.5103** | **0.8812** |
Table 2: Fairness and accuracy performance on NCF. Evaluation metrics are the same as in Table 1.
| Baseline | GENDER AUC | AGE AUC | OCC AUC | RMSE|
| --------------- | ---------- | ------- | ------- |------- |
| Reg | 0.6814 | 0.5722 | 0.5886 | 0.8944 |
| FairLISA | 0.5301 | 0.5218 | 0.5219 | **0.8831** |
| FairGo+FairLISA | **0.5217** | **0.5205** | **0.5200** | 0.8892 |
>**Q3. This paper only empirically studies the effectiveness of the proposed method. It is unknown whether or not the proposed method is theoretically better than the previous estimation-based method. It is suggested the authors could add some theoretical analysis to demonstrate that the proposed method is better than the previous estimation method.**
A3: Thank you for your insightful suggestion. We acknowledge that a theoretical analysis establishing the superiority of the proposed method would be undeniably valuable. Nonetheless, addressing this challenge presents a significant undertaking. As an alternative avenue of evaluation, we chose to substantiate our superiority through empirical evidence. We will regard the theoretical substantiation of FairLISA as future work. Thanks again for your valuable suggestion.
>**Q4. The authors are encouraged to provide code for reproduction.**
A4: We greatly appreciate your suggestions. We strongly support open-source principles and will make the code public if our paper is fortunate enough to be accepted.
>**Q5. Related references.**
A5: Thanks for providing the references. These works are indeed pertinent to our research. We will incorporate these relevant references into the revision.
[1] Yunqi Li, Hanxiong Chen, Shuyuan Xu, Yingqiang Ge, and Yongfeng Zhang. Towards personalized fairness based on causal notion. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021.
[2] Le Wu, Lei Chen, Pengyang Shao, Richang Hong, Xiting Wang, and Meng Wang. Learning fair representations for recommendation: A graph-based perspective. In Proceedings of the Web Conference 2021, 2021.
[3] Yao, S. and Huang, B. Beyond parity: Fairness objectives for collaborative filtering. Advances in Neural Information Processing Systems, 30, 2017.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: Thank you for the response. There is no additional comment currently. | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers' time and efforts in reviewing our paper. We would like to thank all of them for providing constructive and valuable feedback which we will leverage to improve this work. Meanwhile, we are encouraged by the positive comments from reviewers, including:
**Motivation:** "Studied an important problem in practical fair recommender system" (Reviewer KpgW), "an important and practical problem" (Reviewer teqm), "Studying limited sensitive attribute scenario is practical and important."(Reviewer Gbnv), "a socially valuable problem" (Reviewer xseN), "both meaningful and worthy of investigation" (Reviewer QZCP)
**Theoretical Contribution:** "a theoretical-driven perspective" (Reviewer teqm), "supported by the theoretical analysis" (Reviewer Gbnv), "with theoretical guarantees" (Reviewer xseN), "providing a simple yet effective approach, supported by theoretical guarantees." (Reviewer QZCP)
**Method:** "a novel adversarial learning method" (Reviewer teqm), "reasonable solutions" (Reviewer xseN), "straightforward and effective" (Reviewer xseN), "a look-nice model design" (Reviewer xseN), "a simple yet effective approach" (Reviewer QZCP)
**Experimental Results:** "Solid experiments are conducted" (Reviewer teqm), "performance is good based on the empirical evaluation." (Reviewer Gbnv), "a comprehensive validation experiments."(Reviewer xseN), "demonstrate the effectiveness and robustness of FairLISA in two representative user modeling tasks." (Reviewer QZCP)
We have provided specific responses to each reviewer. We hope our responses can clarify all your confusion and alleviate all concerns. We thank all reviewers again and look forward to your replies! | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes an algorithm to train fair user models (e.g., recommender systems) when given limited sensitive attributes. The idea is to factor out the effect of sensitive attributes from the model's fair training objectives and isolate its impact in training.
Strengths: 1. Studied an important problem in practical fair recommender system
Weaknesses: 1. The proposed algorithm only applies to generative-based user modeling, which is limited. Many user modeling methods are prediction-based.
2. The high-level idea of decomposing the mutual information between the prediction/embedding and the sensitive attribute into a sensitive-attribute-related term and a non-sensitive-attribute-related term is common in the fairness literature. The method seems of limited novelty.
3. It seems the method still requires some samples with labeled sensitive attributes (the minimum in the experiments is 20%). It would be good to clarify how practitioners can obtain those sensitive attributes, because if the concern is about privacy, then getting 20% of samples with sensitive attributes is as hard as getting 100%. In the literature, the usual assumption is having some aggregated form of sensitive attributes rather than sample-level labels.
4. A simple baseline is to train a classifier on samples with sensitive attributes and use it to label other data. This should be tested in experiments.
5. Many grammatical errors and typos.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The evaluation focuses on privacy, i.e., how accurately an attacker can predict sensitive attributes from the user representations. But this evaluation assumes attackers can access the user representations, which means the attacker is the model trainer, e.g., the recommender system operator. Is that the case? If so, how can the attacker obtain the ground-truth sensitive attributes as training labels for the attack model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1. The proposed algorithm only applies to generative-based user modeling, which is limited. Many user modeling methods are prediction-based.**
A1. Sorry for the confusion! We quite agree that many current user modeling methods are prediction-based. In fact, we would like to clarify that the user models discussed in this paper are prediction-based models that do not have generative capabilities [5][6]. FairLISA is specifically used to provide fairness guarantees for these prediction-based models, borrowing the idea of GANs to remove the effect of sensitive attributes, as shown in Section 4 of the paper. In the revised version, we will reorganize this part to make it clearer. In the future, we intend to expand our work to generative user modeling, such as diffusion-based user models [7] and LLM-based user models [8].
>**Q2. The high-level idea of decomposing the mutual information between prediction/embedding and sensitive attribute into sensitive-attribute-related term and non-sensitive-attribute-related term is common. It seems limited novelty in the method.**
A2. In this paper, we study fairness in situations with limited sensitive attributes. Our core contribution lies in efficiently leveraging data with both known and unknown sensitive attributes to facilitate fair model training. To achieve this, we design a theory-driven model. Different from the conventional mutual-information-based framework, our framework stands out in two significant aspects: (1) we employ distinct approaches for handling the two types of data; (2) we provide theoretical guarantees for both types of data. Moreover, the experimental results further validate the SOTA performance achieved by our approach. Thanks for your question; we will reorganize our paper to make it clearer.
>**Q3. It would be good to clarify how practitioners can obtain those sensitive attributes. Because if the concern is about privacy, then getting 20% samples with sensitive attributes is as hard as getting 100%. In the literature, usually, the assumption is having some aggregated form of sensitive attributes rather than sample-level.**
A3. We sincerely appreciate your suggestion. In fact, the fairness setting with some sample-level sensitive attributes available has been proposed in the literature, e.g., [1][2][3]. The limited sensitive attribute labels can be obtained in various ways. Firstly, on open platforms, certain users may publicly share their profiles; for example, 14% of teen users expose their complete profiles on Meta [4]. Secondly, some researchers rely on human annotators to label limited sensitive attributes [3]. We will incorporate your suggestions in the revision of our paper. Moreover, we will explore methods to achieve fairness using aggregated forms of sensitive attributes.
>**Q4. A simple baseline is to train a classifier on samples with sensitive attributes and use it to label other data. This should be tested in experiments.**
A4. We have already compared against this baseline in our paper, called FairGNN. The core idea behind FairGNN is to train an estimator to predict missing sensitive labels using data with known sensitive attributes (as described in the Introduction, Section 6.1, and Appendix C.1). Furthermore, we discussed FairGNN in detail in Section 5. The experimental results demonstrate the effectiveness of our method compared to this baseline. We will reorganize our paper in the revision to remove any lingering doubts or confusion.
>**Q5. Many grammatical errors and typos.**
A5. Thank you for pointing out grammatical errors and typos. We will revise our paper to fix these errors.
>**Q6. How can the attacker obtain the ground-truth sensitive attributes as the training labels of the attack model in the evaluation?**
A6. Sorry for the confusion. The attacker is indeed the recommender system employer. Nevertheless, during the testing phase, we assume that the labels for sensitive attributes are obtainable. These labels serve solely as a ground truth to evaluate the attacker's ability to predict sensitive attributes, which in turn allows us to assess the fairness of our model. Note that during the training phase, only a subset of sensitive labels is accessible, aligning with the real-world scenario. Such a setting is commonly adopted in situations involving limited sensitive attributes [1][2][3]. We will revise the paper to elucidate these aspects and address your concerns.
[1]Zhang F, Kuang K, Chen L, et al. Fairness-aware contrastive learning with partially annotated sensitive attributes[C]//The Eleventh International Conference on Learning Representations. 2022.
[2]Kırnap Ö, Diaz F, Biega A, et al. Estimation of fair ranking metrics with incomplete judgments[C]//Proceedings of the Web Conference 2021. 2021: 1065-1075.
[3]Enyan Dai and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 2021.
[4]Mary Madden, Amanda Lenhart, Sandra Cortesi, Urs Gasser, Maeve Duggan, Aaron Smith, and Meredith Beaton. Teens, social media, and privacy. Pew Research Center, 2013.
[5]Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural
collaborative filtering. In Proceedings of the 26th international conference on world wide web, 2017.
[6]Fei Wang, Qi Liu, Enhong Chen, Zhenya Huang, Yuying Chen, Yu Yin, Zai Huang, and Shijin
Wang. Neural cognitive diagnosis for intelligent education systems. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[7]Wang W, Xu Y, Feng F, et al. Diffusion Recommender Model[J]. arXiv preprint arXiv:2304.04971, 2023.
[8]Wu L, Zheng Z, Qiu Z, et al. A Survey on Large Language Models for Recommendation[J]. arXiv preprint arXiv:2305.19860, 2023. | null | null | null | null | null | null |
Focus Your Attention when Few-Shot Classification | Accept (poster) | Summary: This paper proposes to directly adapt a large-scale pretrained model to the downstream classification task via fine-tuning on few-shot examples, thereby yielding a novel few-shot learning paradigm. Different from common few-shot classification methods, this paradigm is characterized by 1) utilizing large-scale pretraining and 2) NOT using the popular base training. Although large-scale pretraining usually promises substantial benefit and may make comparisons against prior few-shot learning methods unfair, the reviewer agrees with the authors that this paradigm is more realistic and of great value. Moreover, extensive experiments show that the proposed method achieves consistent improvement over various baselines (under the same paradigm), thereby validating the effectiveness of the proposed attention enhancement for few-shot adaptation.
Strengths: 1) The proposed "large-scale pretraining --> few-shot learning" paradigm (no base training) is meaningful and has good practical value.
2) Making the attention focus on the key entities sounds reasonable for few-shot downstream task adaptation.
3) Experiments validate the method with consistent improvement over a battery of baselines.
Weaknesses: - In Eqn. 6, the rationale for adding an identity matrix to the attention matrix is unclear. Is it because the attention layer contains a residual operation?
- The attention graph calculation differs considerably between the columnar and pyramidal architectures. How does the latter (and simpler) manner perform on the columnar architecture? Ablation studies investigating the detailed design of the attention graph are expected.
- The results analysis (L269 to L287) needs better clarification. Currently, multiple observations are squeezed into a single paragraph, and it is not always clear which results correspond to each observation. Some analysis in the main text is not explicitly consistent with the results in the tables. For example, Line 272 states that the fine-tuning methods can achieve better performance than the model-frozen methods. This observation does not hold in Table 2, where two model-frozen methods (LoRA and SSF) are higher than FT.
- The “orig attn” visualization in Fig. 1 is somewhat misleading (it barely attends to the useful foreground), while the visualization in Fig. 4 is much more reasonable (the pre-trained cls token, though attending to many unrelated regions, partially covers some useful foreground). Please check whether the visualization in Fig. 1 is correct.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is a many-hot representation?
- In Tables 1 and 2, “FT” (full fine-tuning) is slightly higher than all three parameter-efficient tuning methods (VPT, LoRA, SSF). This is contrary to the intuition that PET methods are superior in the few-shot learning scenario. Can you explain this phenomenon?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the reference rules in the global response first; they will help in reading the following.
**1.many-hot representation**
For an input image, we obtain its position prompts, i.e., the position index set *O* of the key patches. For a vector *z* of length *N*, where *N* is the patch number, we set its values at the positions in the index set *O* to 1 and all other positions to 0; *z* is then the many-hot representation of the position prompts. Analogous to a one-hot label, its cross-entropy loss is the negative log probability at the key patch positions; the expanded formula is provided in Eq.8-main. Since the attention scores are computed by a Softmax, each score can be treated as the predicted probability for the corresponding position.
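As an illustrative sketch (the function names, shapes, and toy values below are our own, not from the paper), the many-hot target and the resulting cross-entropy over Softmax attention scores could look like:

```python
import numpy as np

def many_hot(key_positions, num_patches):
    """Build the many-hot target z: 1 at key-patch positions, 0 elsewhere."""
    z = np.zeros(num_patches)
    z[list(key_positions)] = 1.0
    return z

def attention_ce(attn_scores, z):
    """Negative log probability summed over the key-patch positions.

    attn_scores: Softmax-normalized attention over N patches (sums to 1).
    """
    eps = 1e-12  # numerical safety for log
    return -np.sum(z * np.log(attn_scores + eps))

# Toy example: 6 patches, patches 1 and 3 are the key patches.
z = many_hot({1, 3}, 6)
scores = np.array([0.1, 0.4, 0.1, 0.3, 0.05, 0.05])  # Softmax output
loss = attention_ce(scores, z)  # -(log 0.4 + log 0.3)
```

Minimizing such a loss pushes attention mass onto the key-patch positions, which matches the role the rebuttal describes for Eq.8-main.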
**2.FT vs PET in few-shot learning**
For all fine-tuning methods, we initialize the classification head using a closed-form linear classifier, e.g., MetaOptNet[a], before the fine-tuning process. As shown in Fig.2-main, without classification head initialization, the performance of the fine-tuning methods lags significantly behind even closed-form MetaOptNet, and PET fine-tuning indeed obtains better performance than full fine-tuning on few-shot tasks. The detailed accuracies are provided in Tab.2-appendix. Classification head initialization can significantly improve the performance of fine-tuning methods on few-shot tasks and also makes full fine-tuning comparable with PET fine-tuning. We recommend this operation as the default for fine-tuning on few-shot tasks, which is also a contribution of this work.
**3.Identity matrix in Eq.6-main**
Yes, the rationale for adding an identity matrix to the attention matrix is that the attention layer contains a residual connection. The residual connection plays an important role in propagating information from inputs to model predictions and can preserve position information during forward propagation. Concretely, the input *X* and output *Y* of the attention layer approximately satisfy *Y = (W + I) X*, where *W* is the attention matrix.
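As a minimal numerical sketch (toy shapes of our own choosing) of why the residual connection amounts to adding an identity matrix to the attention matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 8                      # toy: 4 patches, 8-dim features
X = rng.normal(size=(N, d))      # layer input

# Row-stochastic attention matrix W (softmax over each row).
logits = rng.normal(size=(N, N))
W = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Attention output followed by the residual connection ...
Y_residual = W @ X + X
# ... equals a single linear map with (W + I).
Y_combined = (W + np.eye(N)) @ X

assert np.allclose(Y_residual, Y_combined)
```

This is why attention-rollout-style analyses track `W + I` rather than `W` alone when propagating attention through residual layers.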
**4.results of locating method from pyramidal architecture in columnar one**
For a pyramidal architecture, since the downsampling operation destroys the patch position correspondence between layers, we directly use the attention scores averaged over patches as the importance scores. The results of this simple method on the columnar architecture are provided in Tab.2-rebuttal as “+ FORT (attn score)”; the method used in our paper is shown as “+ FORT”. As we can see, this simple method cannot achieve improvements as effective as those of the method used in the paper, especially for the CLIP pre-trained model, whose attention alone is insufficient to locate the key patches, so gradient information is needed as an auxiliary signal.
**5.analysis between Line-269 to Line-287**
We suspect that unclear references to methods caused the confusion. In this paper, the model-frozen methods refer to simple machine learning classifiers (e.g., the Nearest Neighbor classifier, Ridge Regression, and Support Vector Machines), plug-and-play inductive meta-solvers (e.g., ProtoNet[b], R2D2[c], and MetaOptNet[a]), and linear probing. LoRA[d], SSF[e], and FT are all fine-tuning methods that adjust parameters to adapt to new tasks. Our FORT is applied to these fine-tuning methods and improves their performance. Therefore, the fine-tuning methods can indeed achieve better performance than the model-frozen methods, e.g., MetaOptNet vs. (SSF, LoRA, or FT) in Tab.1,2,3,4-main. FORT can further help the fine-tuning methods obtain significant improvements on both ViT and Swin, e.g., LoRA vs. LoRA+FORT in Tab.1,2,4-main. We will make this clearer in the revision.
**6.“orig attn” visualization in Fig.1-main**
The “orig attn” visualization in Fig.1-main is correct. The original attention attends to both foreground and background objects, and thus the visualization of 95% of the attention may cover most positions.
[a]Meta-learning with differentiable convex optimization. CVPR 2019.
[b]Prototypical networks for few-shot learning. NIPS 2017.
[c]Meta-learning with differentiable closed-form solvers. ICLR 2019.
[d]Lora: Low-rank adaptation of large language models. ICLR 2022.
[e]Scaling & shifting your features: A new baseline for efficient model tuning. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal by the authors
Comment: The authors' rebuttal addresses most of my concerns. They did not clearly explain the inconsistency between the visualizations in Fig. 1 and Fig. 4 (regarding the baseline attention). However, I would like to maintain my recommendation of "borderline accept" and suggest the authors double-check this problem.
---
Reply to Comment 1.1.1:
Title: Thanks very much for your feedback.
Comment: This is a good suggestion. We carefully checked the "Orig Attn" visualization in Fig.1-main and found that the white (chosen) patches cover the foreground bird region, so we consider it consistent with Fig.4-main. | Summary: This paper introduces a method called FORT, which aims to adapt pre-trained vision transformers to few-shot image classification tasks. The method contains two steps: 1) utilizing attention and gradient information to locate important entities, denoted as position prompts; 2) defining a new loss to push the attention toward the position prompts. Extensive experiments show the benefits of FORT across different datasets and different pre-trained models.
Contributions can be summarized as follows:
The position prompts and the new loss can make fine-tuning of pre-trained transformer-based models more efficient when dealing with few-shot samples in classification tasks.
Strengths: 1) The writing and presentation are logical.
2) The intuition of making the attention concentrate on the class-related entities is reasonable.
3) The derivation and analysis make sense.
4) Extensive experiments show some benefit of the proposed method.
Weaknesses: 1) The idea of denoising the attention to make it more interpretable does not seem very novel, especially given existing papers on interpretability and visualization. It would be nice to discuss these methods more in the related work.
2) From my understanding, this method only proposes a regularizer in Equation 8, combined with an otherwise standard fine-tuning procedure. The contribution of this is not significant, and the roughly 1% accuracy improvement in the experiments does not seem very significant either.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1) For Section 3.4, are the position prompts fixed during fine-tuning, or are they also updated along with it?
2) For each task, does the fine-tuning process involve only a few examples per task, or does it, like traditional few-shot learning, need a large amount of data such as a base dataset?
3) Can the method be compared with traditionally trained few-shot learning (training a ViT from scratch using the data from the seen classes of its own domain)? If yes, it would be nice to see some numbers. If not, please explain a little more.
4) When utilizing the pre-trained transformer, how does 5-way few-shot classification work? It would be nice to see some numbers for comparison. I wonder whether it is too easy, or whether there is too little data for fine-tuning.
5) For the compared baselines, are MetaOptNet and SVM the same for a fixed backbone? I wonder why they differ so much in performance.
If the questions are addressed, I am willing to raise the score. Thank you in advance!
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: In general, I think the contribution might not be enough, since pre-trained models have already been proven to be very powerful. If the questions are addressed, I am willing to raise the score. Thank you in advance!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the reference rules in the global response first; they will help in reading the following.
**1.fixing or updating the position prompts along fine-tuning**
The position prompts are obtained before fine-tuning and fixed as supervision signals during the fine-tuning process. We find that updating them during fine-tuning does not yield non-trivial improvement but introduces noticeable cost.
**2.only support set of each novel task or using a base-dataset**
In this work, we aim to effectively adapt pre-trained large models to target few-shot tasks. Therefore, for each task, we only have its support set, which contains a few labeled samples, and a pre-trained model. We directly fine-tune the pre-trained large model on the few support samples, which is actually very challenging, especially for data-hungry vision transformers.
**3.comparison with traditional few-shot learning**
In this work, we abandon the traditional setting that learns inductive bias from a small base dataset, and instead propose a new paradigm that directly adapts models pre-trained on massive data to target few-shot tasks. This new paradigm has many advantages: 1) the pre-trained large models have far more generalizable representations and thus can obtain significantly better performance; 2) it can handle supervised, cross-domain, and unsupervised few-shot learning simultaneously: the pre-training data can be unlabeled and thus sufficiently large, and the novel tasks can come from different target domains; 3) it is more friendly to private data, as we only need the pre-trained models instead of the pre-training data; 4) it has better practical value and conforms to the learning paradigm of humans, whose few-shot ability comes from learning with massive signals since birth. The reason we use the 20-way setting is also to make the setting more realistic and practically useful; the 5-way setting is too simple to be valuable.
Most traditional few-shot learning methods typically design parametric meta modules and need a base dataset for meta-training, and thus are not applicable in our setting. The results of some applicable ones, e.g., MetaOptNet and R2D2, are compared in Tab.1,2,3,4-main.
**4.results on 5-way 1-shot/5-shot tasks**
We provide the results on 5-way tasks in Tab.1-rebuttal. The performance is far better than that of traditional few-shot learning methods. For 1-shot tasks from miniImageNet, we achieve 94.2% accuracy, while the SOTA traditional method, FeLMi[a], only achieves 68.28%. For 1-shot tasks from CUB, we achieve 84.4% accuracy, while the SOTA traditional method, Wave-SAN[b], only achieves 50.3%. This means that using pre-trained large models is more effective and practically valuable than the traditional paradigm. Moreover, our method can further improve the fine-tuning performance of base fine-tuning methods, e.g., LoRA and SSF.
**5.difference between MetaOptNet and “SVM”**
For “SVM”, we use the implementation from the scikit-learn library (which is itself based on the libsvm library):

```python
from sklearn.svm import SVC
classifier = SVC(C=1.0, kernel='rbf')
```

Differently, MetaOptNet uses the multi-class kernel-based support vector machines from [c]. Both MetaOptNet and “SVM” use the same fixed backbone, but the different multi-class support vector machine methods lead to different performance.
**6.novelty of denoising**
Denoising for better explanation is indeed a commonly used technique, e.g., Eigen-CAM[d]; we do not treat it as a contribution and thus do not discuss denoising in the related work. Our main innovation is to effectively combine attention and gradient information to obtain a method that can localize the key patches accurately and generally. The attention information alone is not enough for some pre-trained models, such as CLIP, to locate the key patches, so we add the denoised gradient to the attention map, as shown in Eq.6-main.
**7.contribution and improvement**
(1) Our contribution is a new form of prompt for vision data, position prompts, to effectively adapt pre-trained vision transformers to few-shot image classification tasks. We provide their definition, the method for obtaining them (Section 3.4), and their usage (Section 3.5). The reason the usage is a simple regularization loss alongside the classification loss is the limitation of the new setting. Concretely, for a target few-shot task, we only have a few support samples and no base dataset, so it is impossible to learn a new parametric module from scratch. This greatly limits the design space of the model, and we have to resort to a simple non-parametric method. Our regularization method introduces no parametric modules and is simple, yet it effectively enhances the model's attention to the key patches. (2) We conduct extensive experiments across backbones, pre-training schemes, fine-tuning methods, and datasets. Our method obtains significant improvements (+2.0~6.0%) in some cases and smaller improvements in others. We believe this is normal in deep learning, as it is impossible for a method to be significantly effective in all cases.
Finally, although pre-trained models have been proven very powerful, it is still meaningful to effectively adapt them to few-shot tasks for better performance. In fact, this is a non-trivial problem, since fine-tuning large data-hungry vision transformers on few samples is prone to overfitting. As shown in Tab.1-main, the fine-tuning methods sometimes cannot significantly improve performance beyond the initial classifier (i.e., MetaOptNet), or even perform worse, especially on 1-shot tasks.
[a]FeLMi: few shot learning with hard mixup. NeurIPS 2022.
[b]Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning. arXiv:2203.07656.
[c]On the algorithmic implementation of multiclass kernel-based vector machines. JMLR 2001.
[d]Eigen-CAM: Class Activation Map using Principal Components. IJCNN 2020.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttals.
Comment: I appreciate the authors' effort in these comprehensive rebuttals, which address most of my questions. I still have the following concerns. The traditional few-shot learning setting can already achieve very good performance with a smaller backbone; for example, TDM [1] reaches 84.36 on 1-shot 5-way classification. This shows that with a much smaller dataset (the base dataset) and a smaller backbone, SOTA results can be obtained, so it seems unnecessary to use much more data and a larger backbone to solve such an easy problem. In addition, [2] has also shown that a pretrained model can easily adapt to this few-shot setting. I understand that the method makes fine-tuning a little more effective, but testing only on this easy classification task with a roughly 1% improvement is not persuasive. It would be nice to show that this regularizer is effective on different few-shot downstream tasks, e.g., object detection, since the authors show good semantics in the attention maps. I will keep my score around borderline for now and make my final decision later after seeing the other reviewers' and AC's opinions.
[1] Task Discrepancy Maximization for Fine-grained Few-Shot Classification, CVPR 2022
[2] Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference CVPR 2022
---
Reply to Comment 1.1.1:
Title: Thanks very much for your feedback.
Comment: **1.advantage of adapting large self-supervised pre-trained model**
First, the traditional setting, which uses a base dataset, is not inherently a true few-shot setting. If we want to obtain a few-shot model for CUB, we have to collect many labeled bird images for meta-training, which still requires massive labeling labor and seriously hinders practical application. Importantly, for different domains, e.g., CUB and Cars, we have to label separate base datasets. Besides, the SOTA cross-domain accuracy on CUB is only 50.3%[a]. Our new setting uses open-source large models pre-trained on massive unlabeled data and only uses the few support samples from the target domains for fine-tuning, which is a true few-shot setting and can handle different domains simultaneously. Second, TDM[b] achieves 84.36% on 5-way 1-shot tasks from CUB, but only 56% on Pets, while we achieve 91.2% as shown in Tab.1-rebuttal. Therefore, TDM cannot obtain stably good performance. The few-shot problem is actually not an easy problem; the old 5-way setting is simply too simple and not practical.
**2.about improvement**
First, [c] uses a pipeline of pre-training + meta-training + fine-tuning, which still needs massive labeled in-domain data for different target domains. Instead, we consider a more challenging setting with only a pre-trained model and few support samples. Second, the 95% confidence interval is 0.3 (1-shot) and 0.2 (5-shot), as provided in line_269-main, so we believe the >1% improvement is meaningful. In fact, directly fine-tuning a large pre-trained model on few (20~100 in total) samples is a very challenging problem, and we conduct the first exploration of it; even [c] still needs a base dataset for meta-training. Third, we focus on few-shot classification in this work; our method may have the potential to work on detection tasks, so we leave this for future work.
[a]Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning. arXiv:2203.07656.
[b]Task Discrepancy Maximization for Fine-grained Few-Shot Classification, CVPR 2022
[c]Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference. CVPR 2022
---
Reply to Comment 1.1.2:
Title: Do our responses address your concerns?
Comment: Thank you very much for your feedback and hard work. Do our responses address your remaining concerns? If you have other concerns, you are welcome to discuss them with us, and we will be very happy to answer them. | Summary: This paper addresses few-shot learning by resorting to pre-trained large vision models. A new prompt strategy, termed position prompt, is proposed during fine-tuning to encourage the model to focus on class-relevant patches. This is realized by an attention-based token selection module and an optimization objective that enhances the affinity values between image tokens and the selected key patches. The proposed method is applied to various Vision Transformer structures with different fine-tuning strategies to verify its effectiveness. Experiments on a set of few-shot learning tasks show the proposed method can effectively boost few-shot learning performance.
Strengths: 1. This paper makes an interesting attempt on how to prompt large pre-trained vision models during fine-tuning and is found beneficial for few-shot learning.
2. It is good to see the proposed prompt strategy is applied to different combinations of Vision Transformer architectures and fine-tuning strategy to verify the generalizability of the proposed method.
Weaknesses: 1. The claim regarding the importance of locating class-specific patches appears too strong. Firstly, the datasets used in this paper primarily consist of single objects with context, so the statement that "the input images typically contain multiple entities" may not hold true. Secondly, contextual information is not always detrimental to classification and often provides valuable cues for recognizing the objects of interest.
2. While the method employs a sophisticated strategy to locate key patches using both MHA and gradient information, this does not guarantee the accuracy of key patch identification. As there are no ground-truth annotations for key patches, limited visualizations alone are insufficient to justify their correctness. If the identified patches do not align with expectations, it contradicts the original motivation.
3. The authors have overlooked existing attempts to locate discriminative image parts in few-shot learning that aim to enhance performance, such as the works referenced [1] and [2].
[1] "Multi-attention Meta Learning for Few-shot Fine-grained Image Recognition." IJCAI 2020.
[2] "Multi-attention network for one-shot learning." CVPR 2017.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In Eq. 5, G is a vector; is this vector repeated before being applied in Eq. 6? What is the motivation for projecting the input gradients onto their first principal component?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitation analysis is provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the reference rules in the global response first; they will help in reading the following.
**1.vector *G* in Eq.6-main**
Yes, the vector *G^T*, with shape (1, *N*), is repeated row-wise and added to the attention matrix of shape (*N*, *N*).
**2.motivation to reserve the first principle component of gradient**
The gradient computed for a single sample contains much noise, and we retain its first principal component for denoising. A similar idea can be seen in some deep explanation methods, e.g., Eigen-CAM[a]. In fact, for the gradient *G* with shape (*N*, *d*), where *N* is the patch number, we find that keeping the maximum value of each patch gradient achieves a similar effect, i.e., max(*G*, dim=1) --> *G* with shape (*N*, 1).
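As a hypothetical sketch (the function name, shapes, and the centering step are our own assumptions, not the paper's exact procedure), a first-principal-component denoising of a per-patch gradient matrix, followed by the row-wise broadcast onto an attention matrix, could be written as:

```python
import numpy as np

def first_principal_component(G):
    """Rank-1 denoising: project per-patch gradients G (N patches x d dims)
    onto their first principal direction, giving one score per patch."""
    Gc = G - G.mean(axis=0, keepdims=True)           # center each dimension
    U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
    v1 = Vt[0]                                       # first principal direction
    return Gc @ v1                                   # shape (N,)

rng = np.random.default_rng(0)
G = rng.normal(size=(16, 32))                        # toy gradient: 16 patches, 32 dims
g = first_principal_component(G)

# The (1, N) score vector is then broadcast row-wise onto an (N, N)
# attention matrix, in the spirit of Eq.6-main:
A = rng.normal(size=(16, 16))
A_aug = A + g[None, :]                               # adds g to every row of A
```

The centering plus SVD here is one standard way to extract a first principal component; the paper may compute it differently.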
**3.importance of locating class-specific patches**
Firstly, although the used datasets are typically object-centric, the images still contain multiple entities in most cases, e.g., birds and tree branches, cars and buildings, etc. The class-independent objects introduce noisy information and harm performance, especially with scarce training samples. Secondly, most contextual information cannot reflect the essential characteristics of the target classes and thus misleads classification. Whether a bird lands on a tree branch or a roof should not affect its class. Besides, if some context is useful for recognizing the objects, we also treat its corresponding patches as key ones.
**4.correctness of locating the key patches**
We agree that the qualitative visualization of position prompts has limited justification power. Since there are no ground-truth annotations, we cannot provide quantitative verification. However, we conducted extensive visualization of position prompts beyond the few in the main text and found that our method can indeed effectively locate the key patches. The supplementary materials also include samples in Figures_ViT-B_DINO.zip, with the position prompts of 140 images; please check them for further validation. Besides, it has been extensively validated that deep explanation methods can locate the parts of the input most related to classification.
**5.discussion about related works [b,c]**
[b] proposes channel and spatial attention modules to enhance the local features of each input. [c] aims to use class text tags to aid classification and proposes an attention network that generates attention maps from text tag embeddings to weight local features. Both methods need to meta-train the introduced parametric modules on a base dataset, and [c] even needs to construct a new image-tag dataset. Therefore, they are not suitable for our setting, where we only have a pre-trained backbone and the support set of the target task. Our method instead introduces no new parametric modules and does not require meta-training, and is thus more flexible.
[a]Eigen-CAM: Class Activation Map using Principal Components. IJCNN 2020.
[b]Multi-attention Meta Learning for Few-shot Fine-grained Image Recognition. IJCAI 2020.
[c]Multi-attention network for one-shot learning. CVPR 2017.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. However, their response has not fully addressed my concerns. Firstly, it is a factual observation that the authors did not acknowledge existing efforts in learning class-specific features for few-shot learning. More significantly, there are lingering uncertainties about whether the proposed method functions as claimed and expected. It remains challenging to assert with confidence that we can automatically pinpoint the positions of key entities, especially in few-shot scenarios. Notably, a recent study [r1] revealed the potential issue of supervision collapse in few-shot learning, where models tend to focus on partial information sufficient only to distinguish a few support samples but may struggle to generalize effectively. In response to this, the authors of [r1] employed self-supervised learning to acquire more versatile representations, which appears somewhat contradictory to the motivation of the reviewed method. In light of these considerations, I am inclined to maintain my rating as borderline reject.
[r1] CrossTransformers: spatially-aware few-shot transfer, NeurIPS 2020.
---
Reply to Comment 1.1.1:
Title: Thanks very much for your feedback.
Comment: **1.acknowledge existing efforts**
A discussion of the related works has been provided in the rebuttal text. We acknowledge their contributions and also explain the key differences. We will add this discussion to the revision.
**2.credibility of locating the key patches**
Since there are no ground-truth annotations, it is impossible to conduct a quantitative verification. However, the 140 images with position prompts have already been provided in Figures_ViT-B_DINO.zip of the supplementary materials, and we believe this sample is statistically valid. In fact, it has been extensively validated that deep explanation methods can locate the most classification-relevant parts of the input. We also observe the same effectiveness in locating the key patches on more images beyond the above 140.
**3.contradictory motivation to [a]**
The motivation of [a] does not contradict ours at all. The supervision collapse in [a] concerns pre-training, i.e., supervised pre-training harms the transfer performance on downstream novel tasks. However, in this work, we aim to learn class-related information during fine-tuning on novel tasks, not during pre-training. In fact, the pre-trained models used in this work, e.g., DINO and iBOT, are also self-supervised models.
In short, upstream pre-training should not learn too much information about old/base classes, for the sake of generalization, whereas downstream fine-tuning needs to focus on learning information about the novel classes for better classification.
[a] CrossTransformers: spatially-aware few-shot transfer, NeurIPS 2020. | Summary: - The paper adapts pre-trained vision transformers for few-shot classification.
- Full/parameter-efficient fine-tuning using only a few examples may harm performance due to spurious correlations.
- The proposed method uses an additional auxiliary loss to guide the attention of the top layer to focus on the class-related patches.
- The targets for this loss (“position prompts”) are obtained as a combination of a deep explanation method (e.g. Rollout) and gradient information (of the class logit wrt top layer input features).
- Theoretical analysis shows how the proposed auxiliary loss increases mutual information between the input tokens and key class-related patches.
- Experiments on CUB, Cars, Places, Plantae datasets show improvements over full/parameter-efficient fine-tuning methods and meta-learning based methods.
Strengths: - The paper is written well and is easy to follow.
- The proposed Attention+Grad target gives better prompts than Attention alone (Fig3). (It would be good to see that it gives better prompts than Grad alone as well, or whether Grad is enough to get good position prompts.)
- Qualitative results of the resulting attention (Fig4) demonstrate improvements visually.
- Ablation wrt key hyper-params helps understand their effect in different settings. (Please see Weaknesses for missing ablations)
- Theoretical analysis provides additional justification of the proposed approach.
Weaknesses: - The most commonly studied settings in related work are 5-way 5-shot and 5-way 1-shot. It would be valuable to show results in these settings in order to compare with more recent methods than the ones in the paper. [a, b, c]
- The used 20-way setting is more challenging, though it restricts comparisons with recent work
- Recent methods [a, b, c] also show results on the larger miniImageNet and tieredImageNet few-shot benchmarks
- Current SOTA pre-training methods are based on MAE. How does this method work when fine-tuning MAE pre-trained ViT?
- Cross-domain results to demonstrate that the method generalizes to unseen domains (as in [59] whose experimental setup is used)
- Ablation experiment using only the gradient term in Eq 6 to show the relative contribution of A
- Missing comparisons and mention of closely related work [d, e]
[a]: Roy, Aniket, et al. "FeLMi: few shot learning with hard mixup." Advances in Neural Information Processing Systems 35 (2022)
[b]: Yiren Jian, et al. "Label hallucination for few-shot classification." In Proceedings of the AAAI Conference on Artificial Intelligence (2022)
[c]: Afrasiyabi, Arman, Jean-François Lalonde, and Christian Gagné. "Associative alignment for few-shot image classification." Computer Vision–ECCV 2020
[d]: Hou, Ruibing, et al. "Cross attention network for few-shot classification." Advances in neural information processing systems 32 (2019)
[e]: Jiang, Zihang, et al. "Few-shot classification via adaptive attention." arXiv preprint arXiv:2008.02465 (2020)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - DINO attention in Fig3 looks much better than Orig attention in Fig4. Is there a difference in how they are obtained?
- How do the position prompts look when only using gradient term in Eq6 ?
- Above points from the Weaknesses section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The authors have discussed limitations:
- The method is not applicable for non-attention based network architectures
- Obtaining the position prompts may need manual labeling from experts for domains like medical, satellite, etc
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the reference rules in the global response first to help following reading.
**1.difference between the attention visualization in Fig.3-main and Fig.4-main**
Fig.3-main visualizes the attention scores, where different colors represent different values, while Fig.4-main selects the patches with the highest attention scores, together covering about 95% of the attention, and shows the chosen patches in the same color (i.e., white) regardless of their scores. This more clearly shows where the models pay most attention. Besides, the attention-score visualization corresponding to Fig.4-main is provided in Fig.1-appendix.
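For concreteness, the described selection of the top patches covering about 95% of the attention mass can be sketched as follows (a hypothetical numpy illustration; the function name, normalization, and threshold handling are our own, not taken from the paper):

```python
import numpy as np

def top_patches(attn, coverage=0.95):
    """Return indices of the highest-attention patches that together
    cover at least `coverage` of the total attention mass."""
    attn = np.asarray(attn, dtype=float)
    attn = attn / attn.sum()                      # normalize to a distribution
    order = np.argsort(attn)[::-1]                # patches sorted by score, descending
    cum = np.cumsum(attn[order])
    k = int(np.searchsorted(cum, coverage)) + 1   # smallest k whose mass reaches coverage
    return order[:k]

# e.g. a 6-patch attention map where two patches dominate:
# patches 0 and 1 already cover 96% of the mass, so only they are kept
idx = top_patches([0.50, 0.46, 0.01, 0.01, 0.01, 0.01])
```

All retained patches would then be painted the same color for visualization, as described above.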
**2.position prompts using only gradient information**
We follow the setup of Fig.3-main to visualize the position prompts using only gradient information. The results are shown in Fig.3-rebuttal as “Grad Prompts”, and the other settings are the same as in Fig.3-main. As we can see, using only gradient information can also effectively locate the position prompts, but there can be some bias due to gradient noise.
**3.results on 5-way 1-shot/5-shot tasks and comparison with [a,b,c]**
In this work, we abandon the traditional setting that learns an inductive bias from a small base dataset, and instead propose a new paradigm that adapts models pre-trained on massive data to target few-shot tasks. The new paradigm has many advantages: 1) the pre-trained large models have far more general representations and thus obtain significantly better performance; 2) it can handle supervised, cross-domain and unsupervised few-shot learning simultaneously: the pre-training data can be unlabeled and thus sufficiently large, and the novel tasks can come from different target domains; 3) it is friendlier to private data, as we only need the pre-trained models instead of the pre-training data; 4) it conforms to the learning paradigm of humans, whose few-shot ability comes from learning on massive signals since birth, and thus has more practical value. Besides, we use the 20-way setting since it is more challenging and practically useful, while the 5-way setting is too simple.
We provide the results on 5-way tasks in Tab.1-rebuttal. The results are far better than those in [a,b,c], e.g., for 1-shot tasks from miniImageNet, the best accuracy among [a,b,c] is 68.28% while we achieve 94.2%; for 1-shot tasks from CUB, [a] achieves 51.66% while we achieve 84.4%. This means using pre-trained large models is more effective and practically valuable than the traditional paradigm of [a,b,c]. Our FORT can also improve the performance of fine-tuning methods on 5-way tasks. Besides, [a,b,c] aim to pseudo-label the base dataset as auxiliary data, which is not suitable for our setting. First, we may not be able to obtain the pre-training data due to privacy. Second, even if the pre-training data can be obtained, it is still too costly to pseudo-label such massive data, unlike the small base dataset.
**4.fine-tuning MAE pre-trained models**
Although MAE pre-trained models obtain SOTA fine-tuning performance in dense prediction tasks, e.g., detection and segmentation, they cannot obtain highly linearly discriminative representations like DINO and are not suitable for few-shot image classification. For example, fine-tuning ViT-B/16 from MAE with LoRA only obtains 12.4% and 51.2% accuracy on 20-way 1-shot and 5-shot tasks from CUB respectively, lagging far behind the DINO pre-trained models, whose results are 57.9% and 88.2% respectively.
**5.cross-domain results**
Our new setting is essentially a cross-domain setting: the massive pre-training data can come from domains different from those of the target few-shot tasks, and may even be unknown if there are privacy issues. We can adapt the pre-trained models to few-shot tasks from any target domain.
**6.ablation experiment using only the gradient term**
We follow the setup of Section_4.1-main and ablate the attention and gradient information. The results are shown in Tab.2-rebuttal, where “+FORT (grad)” denotes using only the gradient term, “+FORT (attn score)” denotes directly using the attention scores, and “+FORT” denotes using both attention and gradient information. Here we pick the best hyper-parameters for each model, even though they may differ from each other. For DINO pre-trained models, the internal attention can locate the key patches and thus effectively assists the gradient information. For CLIP pre-trained models, the internal attention cannot indicate the key patches, and the gradient information plays the major role.
**7.comparison with related work [d,e]**
[d] aims to use the correlation between the local features of support and query samples to calculate similarity precisely and proposes a cross-attention module. [e] aims to highlight the related features in query samples based on support samples and proposes a meta weight generator and a spatial attention generator. These models follow the traditional meta-learning paradigm and need a base dataset to meta-learn the parametric modules, and thus cannot be applied in our new setting. Our method introduces no new parametric modules and does not require meta-training, and is thus more flexible.
[a]FeLMi: few shot learning with hard mixup. NeurIPS 2022.
[b]Label hallucination for few-shot classification. AAAI 2022.
[c]Associative alignment for few-shot image classification. ECCV 2020.
[d]Cross attention network for few-shot classification. NeurIPS 2019.
[e]Few-shot classification via adaptive attention. arXiv:2008.02465. | Rebuttal 1:
Rebuttal: We really appreciate all the reviewers for taking their precious time to provide valuable comments. Overall, **our work has the following strengths**:
1. **the proposed new setting is more realistic and has great value. [nhXH]**
2. **reasonable and interesting idea. [8eth], [SFDV], [nhXH]**
3. **sensible and useful theoretical analysis. [TpQg], [SFDV]**
4. **well-written and easy to follow. [8eth], [TpQg], [SFDV]**
5. **extensive experimental results. [8eth], [h2aC], [SFDV], [nhXH]**
We will carefully respond to the concerns of each reviewer. The referenced contents come from the main text, the appendix or the rebuttal PDF. Therefore, for convenience of reference, we write "**xxx-main**", "**xxx-appendix**" and "**xxx-rebuttal**" for the corresponding content xxx in the main text, the appendix and the rebuttal PDF respectively. For example, "Tab.1-main" refers to Table 1 in the main text, and "Fig.1-rebuttal" refers to Figure 1 in the rebuttal PDF. The valuable suggestions from the reviewers will be added to the revision, such as more ablation results, more discussion of related works and more analysis of the observations.
Pdf: /pdf/edacc77853eb31b51c916b93943e501e66ef5306.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes to force the pre-trained model to focus on class-related entities for few-shot image classification. To achieve this, it proposes position prompts that use attention and gradient information to automatically locate the positions of key entities. An attention enhancement loss is used to be trained with the cross-entropy loss. Experiments are conducted on CUB, Cars, Places, Plantae, Aircraft, and Pets datasets.
Strengths:
+ The idea to focus attention for few-shot learning is quite interesting.
+ This paper is well-organized, with intuitive figures which illustrate the motivation and approach clearly.
+ The experiments are extensive.
Weaknesses: + The statement in Ln168 “When the support samples are sufficient, the model can attend to the frequently-occurring entities in each class more to alleviate this problem, since they are typically the key ones” can’t be well-supported by Fig. 4. Instead, attention maps with **sufficient training samples** should be included to prove that the “attention focus” is especially useful for few-shot learning.
+ More illustration of the hyper-parameters $\alpha$ should be included as it controls the core component of the proposed approach, the attention enhancement loss. For example, why does the performance decrease with higher $\alpha$; why is the performance of 5-shot tasks more sensitive to the value? If as stated in Ln168, more samples would be less sensitive to the attention selection?
+ The illustration of the position prompts is unclear and quite difficult to understand, which makes it difficult to evaluate its effectiveness.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please check the reference rules in the global response first to help following reading.
**1.attention maps with sufficient training samples**
We follow the setup of Fig.4-main, except fine-tuning on the 20-way 50-shot task from CUB and the 20-way 200-shot task from Pets without attention enhancement. The attention maps after fine-tuning are shown in Fig.1-rebuttal and Fig.2-rebuttal as “FT Attn (sufficient)”. As we can see, fine-tuning with sufficient samples indeed focuses on the key entities more than fine-tuning with few samples (“FT Attn”) or the original attention (“Orig Attn”), but is less focused than explicit attention enhancement (“Our Attn”).
Note that this does not mean that few-shot fine-tuning with attention enhancement can obtain higher accuracy than sufficient-shot fine-tuning. In fact, sufficient training samples allow the models to fit the input-label distribution better, which is more important for classification than attention enhancement. But when sufficient samples are unavailable, attention enhancement can reduce class-independent noise information in the final embedding, as shown experimentally and theoretically, and is thus useful for classification.
**2.more explanation about hyper-parameter alpha**
As an auxiliary of classification loss, the attention enhancement loss aims to guide the model to learn to classify the images by focusing on their key entities, so as to ignore class-independent information. Both the attention enhancement loss and the classification loss help to get better classification boundaries, but the latter one fits the input-label distribution and thus is more direct and effective. The attention enhancement loss aims to reduce noise when data is scarce and indirectly improves the generalization performance of the classifier. Therefore,
(1) If the coefficient alpha is too large, the attention enhancement loss could relatively inhibit the optimization of the classification loss (e.g., by skewing the gradient direction), which is harmful to fitting the input-label distribution.
(2) The more training samples there are, the more important it is to fit the input-label distribution, so 5-shot tasks prefer a smaller alpha than 1-shot tasks.
**3.about position prompts**
For each input image, there are some patches corresponding to the key class entity, denoted as key patches. Their positions are used as “position prompts” to guide fine-tuning, i.e., the red patches in Fig.3-main.
As described in Section_3.4-main, we use both attention and gradient information to locate the position prompts, for generality. In fact, it is sometimes difficult to locate them using only attention information, as shown in Fig.3-main: “Attention+Grad Prompts” is generally better than “Attention Prompts”.
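Purely for illustration (the exact combination in Eq.6-main is not reproduced in this rebuttal, and the function below is our own hypothetical sketch), a patch score mixing the class token's attention with per-patch gradient magnitudes might look like this in numpy:

```python
import numpy as np

def prompt_scores(attn, grads):
    """Hypothetical per-patch score combining two cues:
    attn:  (P,)   attention of the class token over the P patches
    grads: (P, D) gradient of the class logit w.r.t. each patch feature
    """
    a = np.asarray(attn, dtype=float)
    g = np.linalg.norm(np.asarray(grads, dtype=float), axis=1)  # per-patch grad magnitude
    # rescale each cue to [0, 1] before mixing so neither dominates by scale
    a = a / (a.max() + 1e-12)
    g = g / (g.max() + 1e-12)
    return a * g  # patches strong in BOTH cues score highest

scores = prompt_scores([0.7, 0.2, 0.1],
                       [[3.0, 4.0], [0.0, 1.0], [0.3, 0.4]])
key_patch = int(np.argmax(scores))  # index of the most class-related patch
```

Patches with a high combined score would then be taken as the position prompts.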
As described in Section_3.5-main, we use the position prompts as the prediction target for the attention module, and optimize the cross-entropy loss between their many-hot representation and the attention logits during fine-tuning, i.e., Eq.8-main. Unlike existing prompts in NLP [a], our position prompts are not used in the input or intermediate phases of forward inference, since those of the query samples are unknown without label information.
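A rough sketch of such a loss, under our own simplifying assumptions (a single attention head, the many-hot prompt normalized into a target distribution over patches, and a plain log-softmax cross-entropy rather than the exact Eq.8-main), might be:

```python
import numpy as np

def attention_enhancement_loss(attn_logits, prompt_idx):
    """Cross-entropy between a normalized many-hot position-prompt target
    and the attention logits over P patches (simplified reading).
    attn_logits: (P,) unnormalized attention scores
    prompt_idx:  indices of the key (class-related) patches
    """
    z = np.asarray(attn_logits, dtype=float)
    # numerically stable log-softmax: z - logsumexp(z)
    log_p = z - (z.max() + np.log(np.exp(z - z.max()).sum()))
    target = np.zeros_like(z)
    target[list(prompt_idx)] = 1.0 / len(prompt_idx)  # normalized many-hot target
    return -(target * log_p).sum()

# the loss drops as the attention concentrates on the prompted patches 0 and 1
diffuse = attention_enhancement_loss([1.0, 1.0, 1.0, 1.0], [0, 1])
focused = attention_enhancement_loss([4.0, 4.0, 0.0, 0.0], [0, 1])
```

Under this reading, minimizing the loss alongside the classification loss pushes the attention toward the prompted key patches.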
Finally, their effectiveness is quantitatively verified in Tab.1,2,3,4-main and qualitatively verified in Fig.4-main. The attention enhancement using position prompts indeed helps the model to focus most attention on key entities.
[a] pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the rebuttal and the clarification. The responses address most of my concerns. Therefore, I would retain my score. | null | null | null | null | null | null |
Static and Sequential Malicious Attacks in the Context of Selective Forgetting | Accept (poster) | Summary: This paper investigates the potential and viability of malicious data update requests in the context of the unlearning process. The authors put forward a malicious selective forgetting attack in a static scenario and present a framework for sequential forgetting attacks. And the framework is formulated as a stochastic optimal control problem.
Strengths: This paper investigates the potential and viability of malicious data update requests in the context of the unlearning process.
Weaknesses: The attack goal of the proposed approach remains unclear.
The attack methodology of the proposed approach lacks clarity.
The authors assert the effectiveness of their method in both white-box and black-box scenarios. However, the methodology section lacks a clear explanation of how the attack steps differ in these two scenarios.
More experiments on high-resolution datasets, such as ImageNet, VGG-Flower, that include more classes, should be evaluated. Otherwise, it is not sure whether the proposed method is applicable in the real-world scenario.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The attack goal of the proposed approach remains unclear. While the paper discusses constructing malicious data update requests during the unlearning process, it does not explicitly define the specific objectives pursued by the attacker. It is essential for the authors to provide concrete explanations regarding the attack goals, rather than merely stating them as desired outcomes. For instance, is the intention to diminish the fairness of the unlearning model or to deliberately misclassify specific samples?Furthermore, the paper lacks a high-level description of how the attack is executed, given a clear attack target. It is important for the authors to outline the general methodology employed by the attacker to carry out the attacks.
The attack methodology of the proposed approach lacks clarity. While the authors present multiple definitions, lemmas, and theorems in the methodology section, they do not provide an overall step-by-step explanation of how the attack is conducted. As a result, obtaining a clear understanding of the attack process remains challenging.
An overview image is also required to describe the proposed attacks.
The authors assert the effectiveness of their method in both white-box and black-box scenarios. However, the methodology section lacks a clear explanation of how the attack steps differ in these two scenarios.
More experiments on high-resolution datasets, such as ImageNet, VGG-Flower, that include more classes, should be evaluated. Otherwise, it is not sure whether the proposed method is applicable in the real-world scenario.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your thoughtful comments and valuable suggestions. Below, we provide our response to the questions and concerns.
**1: Clarifying the attack goal (e.g., diminishing the fairness of the unlearning model, or misclassifying specific samples?) of the proposed approach. Giving a high-level description of how the attack is executed.**
First, we want to clarify that our proposed attacks are general and offer great flexibility in accommodating the adversary's diverse attack objectives through the malicious unlearning samples (please refer to lines 136-138 in Section 3.1 of the main manuscript). In this paper, we present novel static and sequential selective forgetting attack frameworks that empower the adversary to achieve desired attack goals, such as causing unfairness for specific groups of individuals or inducing misclassifications, by crafting malicious data update requests.
Additionally, the high-level descriptions of the proposed attacks are deferred to Section 4 in the Supplementary Material due to space limitations during the preparation of our submission. For example, in Algorithm 1 presented in the Supplementary Material, we illustrate the high-level descriptions of how to solve the proposed attack optimization framework using the second-order unlearning strategy described in Eqn. (1) and (3) of the main manuscript. In the final version, we will integrate these high-level descriptions for the attacks from the Supplementary Material into the main manuscript.
**2: Clarifying the attack methodology of the proposed approach by providing the step-by-step explanations of how the attack is conducted.**
Due to space limitations during the preparation of our submission, in Section 4 of the Supplementary Material, we provide the algorithms (e.g., Algorithm 1, 2, 3, and 4) to give detailed explanations of how the attacks are conducted in the static and sequential settings. In the final version, we will incorporate these high-level descriptions from the Supplementary Material into the main manuscript.
Additionally, in the one-page PDF in the "global" response, we have included the attack flowchart (please refer to Figure 2 and Figure 4 in this one-page PDF) for our attacks in the static and sequential settings.
**3: Providing an overview image to describe the proposed attacks.**
Thanks for pointing this out. In the attached one-page PDF in the “global” response, following your suggestion, we provide an overview image (please refer to Figure 1 and Figure 3 in this one-page PDF) to describe the proposed attacks in the static and sequential attack settings.
**4: Giving more explanations of how the attack steps differ in the black-box and white-box scenarios.**
Thanks for this suggestion. In our paper, we implement our selective forgetting attacks in both white-box and black-box scenarios. In the white-box setting, we assume that both the model architecture and parameters are known to us. Leveraging this knowledge, we can directly generate malicious data update requests for specific test samples on the pre-trained model. In the black-box setting, as we have no information about the model, we first need to train one or several surrogate models to substitute for the pre-trained model, and then transfer the selective forgetting attacks to the black-box victim model. In detail, we generate malicious data update requests on a surrogate model and apply these requests to attack the black-box victim model.
**5: Including more experiments on high-resolution datasets, such as ImageNet, and VGG-Flower.**
We greatly appreciate the suggestions for adding experiments on ImageNet and VGG-Flower. Following your suggestions, we conducted experiments on ImageNet [r1] and Flowers [r2], which have high-resolution images and more classes. In the experiments, we trained ResNet-18 on a sub-dataset of ImageNet and VGG-19 on Flowers. Following the same experimental setting as in Section 4.1 of the main manuscript, we report the attack success rate of our proposed static forgetting attacks in the untargeted setting, and we compare the results with the RandSearch baseline. As shown in the table below, our proposed methods also achieve high attack success rates on these two datasets using the first-order and unrolling SGD unlearning methods. In contrast, the RandSearch baseline shows limited effect in misclassifying the targeted test samples in an untargeted manner. In the final version, we will include the experiments on high-resolution datasets.
Table 1: Attack success rate of our proposed static forgetting attacks on ImageNet and Flowers.
| Dataset | Unlearning method | RandSearch | Ours |
| :---: | :---: | :---: | :---: |
| ImageNet | First-order | 0.26 ± 0.08 | 0.96 ± 0.04 |
| ImageNet | Unrolling SGD | 0.24 ± 0.07 | 0.94 ± 0.04 |
| Flowers | First-order | 0.46 ± 0.13 | 1.00 ± 0.00 |
| Flowers | Unrolling SGD | 0.35 ± 0.10 | 0.95 ± 0.05 |
**Reference.**
[r1] "Imagenet: A large-scale hierarchical image database", CVPR 2009.
[r2] "Automated flower classification over a large number of classes", ICVGIP 2008. | Summary: This paper studies the malicious data update in machine unlearning. The authors consider two strategies. The first is static selective forgetting attack framework, where the adversary exploits vulnerabilities in the unlearning systems by submitting a set of carefully crafted data update requests at once. The second is sequential selective forgetting attack framework that injects malicious update multiple times by considering the order and timing of data update requests. Theoretical and experimental analysis are provided to validate the proposed machine unlearning attacks.
Strengths: • The angle of the problem is novel as there is no study on malicious machine unlearning so far.
• The authors propose one-step and multi-step attacks, and study the attack effect in white/black box, targeted/untargeted settings.
• The two proposed methods are well-justified both theoretically and experimentally.
Weaknesses: • Experiment detail missing:
o What is target class in the targeted setting?
o What is the class that the machine unlearning aims to forget? The entire class or a few samples? If the latter (according to lines 344-346), what does forgetting mean in this case?
o What does the attack success rate mean, especially in the targeted setting? Reaching the targeted class or no longer on the previous class?
o For untargeted setting, will the random class be the class for the task of machine unlearning? If so, what is the impact?
o What is the setting (untargeted/targeted) for black-box attack? The authors should specify in the manuscript.
• Related work to model poisoning should be added. While the authors mentioned data poisoning attacks, the proposed attack in machine unlearning is closer to model poisoning, which embeds the attack goal with the benign training objectives. Likewise, the proposed attack embeds the attack goal into the unlearning objective.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your thoughtful comments and valuable suggestions. Below, we provide our response to the questions and concerns.
**1: Providing more experiment details: (1) What is the target class in the targeted setting? (2) What is the class that the machine unlearning aims to forget? The entire class or a few samples? If the latter (according to lines 344-346), what does forgetting mean in this case? (3) What does the attack success rate mean, especially in the targeted setting? Reaching the targeted class or no longer on the previous class? (4) For an untargeted setting, will the random class be the class for the task of machine unlearning? If so, what is the impact? (5) What is the setting (untargeted/targeted) for black-box attack? The authors should specify in the manuscript.**
Sorry about the unclear descriptions regarding these mentioned experiment details. Below, we address your concerns one by one.
(1) In the targeted setting, the "target class" refers to the class that the adversary aims to attack. For example, the motivated adversary could generate malicious data update requests to attack the targeted test samples and force the targeted test sample (e.g., the bird image) to be assigned as the attack targeted label (e.g., the dog label).
(2) To be clarified, our goal is not to forget certain classes. Instead, we aim to use machine unlearning techniques to forget specific training samples and their corresponding influence on the pre-trained model. In lines 344-346 of the main manuscript, we conducted selective forgetting attacks to misclassify the targeted test samples by unlearning a subset of training data (the percentages indicate the total portion of training samples to be unlearned).
(3) The attack success rate is the fraction of successful attacks among all attack attempts. Specifically, in the targeted setting, a successful attack means the targeted test sample is misclassified into the adversary’s specified label on the victim model.
(4) In the untargeted setting, the predicted label of the targeted test sample is changed to any label other than the true label, after unlearning a subset of training data. This can lead to misbehaviors on the victim model, such as degrading the classification accuracy of specific users. Consider an example where existing unlearning techniques are utilized to repair a face recognition system. During this process, in the untargeted setting, the adversary could make malicious data update requests to cause the repaired face recognition system to misidentify their intended target as anyone else other than the true identity. Lastly, it is important to emphasize that our task is not to select a random class and then perform machine unlearning to completely forget that specific class.
(5) In the black-box attack, we consider a setting where we have no prior knowledge about the target pre-trained victim model. Therefore, we explore the transferability of selective forgetting attacks across various machine learning models. In detail, we can generate malicious data update requests in the targeted setting (where the predicted label is changed to a specified one) and the untargeted setting (where the label is changed to an incorrect one) on one surrogate model, and then transfer these malicious data update requests to attack the black-box victim model.
**2: Adding the related work to model poisoning attacks.**
Thank you for your valuable suggestion. In the final version, we will incorporate related work on model poisoning attacks. Here, we want to clarify that our proposed selective forgetting attacks and model poisoning attacks differ significantly in attack timing and attack mechanisms. Specifically, our attacks occur during the unlearning process, while model poisoning attacks usually take place during learning (e.g., [r1] assumes the adversary directly attacks the model parameters during the learning process) or fine-tuning (e.g., [r2] involves a fine-tuning step that attacks the pre-trained model via a maliciously designed penalty term). Additionally, our attacks utilize existing unlearning techniques to delete certain unlearning samples, whereas model poisoning attacks modify model parameters through means such as parameter manipulation [r1]. We appreciate this constructive suggestion and will include the suggested related-work discussion in the final version.
**Reference**
[r1] “Local model poisoning attacks to Byzantine-Robust federated learning”, USENIX Security 2020.
[r2] “Fooling Neural Network Interpretations via Adversarial Model Manipulation”, NeurIPS 2019. | Summary: This paper explores the malicious forgetting issue in model unlearning and proposes two attack strategies: static attack and dynamic sequential attack. The authors also present a theoretical framework for selective forgetting attacks. Experimental results on multiple benchmark datasets demonstrate that the proposed attack method poses a significant security threat to model unlearning.
Strengths: - Model unlearning as an effective strategy for data forgetting has garnered significant attention, making the exploration of potential malicious attacks during the unlearning process an interesting research direction.
- The paper has a clear research motivation, well-structured writing, and provides ample theoretical support.
Weaknesses: - It is unclear whether this topic has been previously studied. The core of the proposed unlearning attack lies in a reasonable data sampling strategy for the victim model, which may not be novel from a technical standpoint.
- It would be interesting to investigate how the defender, being aware of the malicious unlearning procedure, could detect or filter out such requests to counter the attack. For example, could perturbation noise be added to mitigate the malicious impact of unlearning?
- A recent study, Anti-Backdoor Learning [1], uses two-stage unlearning techniques against backdoor attacks; could this method be applied to mitigate the negative effect of the proposed unlearning attack?
- The authors should discuss the potential limitations of this attack.
I would be happy to improve the score if the authors address the aforementioned issues.
[1] Yige Li, Xixiang Lyu, Xingjun Ma, et al, Anti-Backdoor Learning: Training Clean Models on Poisoned Data, NeurIPS, 2021
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Refer to the weaknesses mentioned above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The method does not exhibit apparent limitations. One main concern lies in the lack of convincing baseline comparisons in the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your thoughtful comments and valuable suggestions. Below, we provide our response to the questions and concerns.
**1: Discussions on whether this topic has been previously studied, and the technical standpoint.**
Thank you for your helpful suggestions regarding our work's novelty. As far as we know, we are the first to investigate how the adversary can exploit selective forgetting mechanisms to wholly and sequentially delete unlearning samples in an attempt to undermine the integrity and performance of the victim model. In the Related Work Section of the main manuscript, we give detailed comparisons with existing related works.
From a technical standpoint, different from traditional empirical sampling methods, the proposed static and sequential attack frameworks use discrete indicator variables to formulate the complete deletion of targeted training samples, which is very hard to solve directly due to the presence of non-differentiable loss functions. To address this challenge, we first design a continuous and differentiable function to approximate the discrete component in the formulated static attack framework, and then optimize the proposed sequential unlearning attack framework by training an adversarial policy network specifically designed to target a few critical data update requests in the sequential data update setting. We believe these technical advancements strengthen the contributions of this paper.
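For intuition, relaxing a discrete 0/1 selection indicator into a differentiable surrogate is commonly done with a steep sigmoid. The sketch below is only an illustrative stand-in under that assumption; the paper's exact relaxation is not specified in this rebuttal, and `soft_indicator` and `tau` are hypothetical names:

```python
import numpy as np

def soft_indicator(w, tau=10.0):
    """A common continuous relaxation of a discrete 0/1 indicator:
    a steep sigmoid over real-valued selection scores w. As tau grows,
    the output approaches a hard 0/1 selection, which makes the
    deletion decision amenable to gradient-based optimization."""
    return 1.0 / (1.0 + np.exp(-tau * np.asarray(w, dtype=float)))
```

With large `tau`, strongly positive scores map to values near 1 (select for unlearning) and strongly negative scores to values near 0, while remaining differentiable everywhere.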
**2: How to detect or filter out such requests to counter the attack? Could perturbation noise be added to mitigate the malicious impact of unlearning?**
Thanks for the discussion on detecting and countering the attack. One strategy is to identify potentially malicious unlearning requests by tracking model parameter changes before and after unlearning. However, our experiments show that even with this method, our malicious unlearning attacks maintain high success rates. One reason is that these attacks target specific samples rather than overall performance reduction. Another reason is that, as observed, effective malicious data update requests cause only slight model parameter changes. A second strategy combines the maliciously unlearned model with other prediction models in an ensemble. However, this requires additional models and training costs. Moreover, recent studies highlight vulnerabilities in existing prediction models, making them susceptible to subtle attacks such as data poisoning; simultaneous attacks on multiple models could therefore deceive ensemble systems.
Additionally, in Section 9.1 of the Supplementary Material, we experimented with adding adversarial perturbation noises using adversarial training techniques to assess our attacks against robust deep learning models. Despite the success of adversarial training in enhancing adversarial robustness, our attacks remain potent against these robust models, as evident in Figure 1 and Figure 2 in Section 9.1 of the Supplementary Material. This further emphasizes the significance of our research, as no prior work has explored security risks introduced by selective forgetting.
**3: Whether Anti-backdoor learning [r1] could be applied to mitigate the negative effect of proposed unlearning attack?**
Thanks for your insightful question. We want to clarify that applying Anti-backdoor learning [r1] to mitigate our proposed unlearning attack is not feasible. Anti-backdoor learning assumes the presence of backdoored samples with triggers, focusing on trigger pattern identification and correlation removal. Our unlearning attacks do not involve the injection of triggers; we remove specific training samples rather than adding triggers. Furthermore, Anti-backdoor learning identifies low-loss backdoor examples early in training and removes correlations, while our attacks target malicious deletion during unlearning (instead of training). Consequently, the strategies of Anti-backdoor learning are unsuitable for mitigating our unlearning attacks.
**4: Discussions on the potential limitations of this attack.**
Due to space limitations in our main submission, we deferred the discussions on the potential limitations of our attack in Supplementary Material (see Section 2 in Supplementary Material). In the final version, we will incorporate the discussions on the potential limitations of our attacks from the Supplementary Material to the main manuscript.
**5: Adding more convincing baseline comparisons in the experiments.**
Thank you for your suggestions. Following your advice, we conducted experiments with two new baselines: input-space clustering-based unlearning and representation-space clustering-based unlearning. These clustering-based methods remove specific sample types, differing from the RandSearch baseline. In Table 1, we present the attack success rates of these clustering baselines on CIFAR-10 using first-order and second-order unlearning. For each baseline, we divided the training data into 250 clusters using K-means, based on input-space and representation-space distances respectively. Unlearning involved randomly selecting clusters while ensuring the same number of unlearned samples as our methods. The results show that the clustering-based unlearning baselines still yield low attack success rates, and our proposed methods outperform them by a large margin. This is because randomly unlearning clusters provides no guidance on the importance of training samples for influencing the targeted loss in selective forgetting attacks.
Table 1: Attack success rate of our proposed static forgetting attacks and new baselines.
| Unlearning method | RandSearch | Input space | Representation space | Ours |
| :---: | :---: | :---: | :---: | :---: |
| First-order | 0.08 ± 0.04 | 0.04 ± 0.03 | 0.14 ± 0.08 | 0.80 ± 0.04 |
| Second-order | 0.10 ± 0.08 | 0.08 ± 0.05 | 0.12 ± 0.05 | 0.82 ± 0.04 |
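For concreteness, the cluster-then-unlearn selection used by these baselines can be sketched as follows. This is our own illustrative reconstruction, not the authors' code; `select_unlearn_samples` and its arguments are hypothetical names, and `cluster_of` would hold per-sample K-means assignments (e.g., over 250 clusters) computed from input-space or representation-space distances:

```python
import random

def select_unlearn_samples(cluster_of, n_clusters, budget, seed=0):
    """Clustering-based unlearning baseline: randomly pick whole clusters
    until the number of selected training samples reaches the same
    unlearning budget as the proposed attack, then trim the excess so the
    comparison uses an equal number of unlearned samples."""
    rng = random.Random(seed)
    clusters = list(range(n_clusters))
    rng.shuffle(clusters)
    selected = []
    for c in clusters:
        if len(selected) >= budget:
            break
        selected.extend(i for i, ci in enumerate(cluster_of) if ci == c)
    return selected[:budget]
```

Because whole clusters are drawn at random, this selection carries no signal about which samples actually influence the targeted loss, consistent with the low success rates in Table 1.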
**Reference**
[r1] “Anti-Backdoor Learning: Training Clean Models on Poisoned Data”, NeurIPS 2021.
---
Rebuttal Comment 1.1:
Title: follow-up.
Comment: After reviewing the author's response, some of my concerns have been addressed. Currently, I believe that this work meets the standards for the conference and I am leaning towards accepting it.
---
Reply to Comment 1.1.1:
Title: Thank you for the post-rebuttal feedback
Comment: Dear Reviewer,
Thank you very much for the appreciation of this work!
Best regards,
Authors of Paper5786 | Summary: This paper identified a novel class of machine learning attacks, i.e., ML models can be manipulated with malicious data update requests during the machine unlearning process.
The authors study two threat scenarios: (1) selective forgetting attacks and (2) sequential selective forgetting attacks.
Specifically, in the static setting, an adversary can first select a subset of accessible training data and then launch the unlearning request with the selected subset in order to induce the victim model to become close to a target model.
In this setting, the authors achieve the attack goal by casting the discrete data selection process into a continuous optimization process.
On the other hand, in the sequential setting, data update requests occur sequentially and the adversary can modify any update request before it is received by the victim.
In this setting, the authors propose to train a policy network via reinforcement learning to output attack strategies according to the current environment state.
Comprehensive experiments verify the effectiveness of the proposed selective forgetting attacks.
Strengths: 1. This paper identifies a novel and important ML threat, selective forgetting attacks. Such a threat is realistic in the real world and therefore worth the ML security community to continue further research.
2. For the static version of the attack, how to efficiently select a subset to launch an attack is a challenging problem.
The authors address the problem by approximating the discrete indication function with a continuous function to perform GD-based optimization. This is a technically non-trivial solution.
3. The problem formalization for sequential attacks is comprehensive, as it considers (almost) all possible types of update requests("delete", "add", and "modify").
Weaknesses: 1. Two concerns about the sequential selective forgetting attacks:
- In the black-box setting, according to this paper, the adversary cannot access any training data. Therefore, to let the substitute model better imitate the behavior of the victim model (in order to train an effective policy network), the adversary may need to collect data similar to the real data. However, this could be difficult in the real world when the adversary has no prior knowledge about the private training data.
- The adversary performs sequential attacks via modifying incoming update requests before they are received by the victim model, in which the modification strategy is by adding perturbations to the requested examples. This may result in a strange "paradox": if you modify the data that needs to be unlearned, is it still the original data? Wouldn't it result in the victim model "unlearning" something that it has never learned? Please comment.
2. Suggestion: including results of membership inference attacks (MIA) would be interesting. It would be interesting to see how selective forgetting attacks will affect the success rate of MIA.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the usage of Theorem 1? It seems like removing Theorem 1 would not affect the conclusion of Section 3.1.
2. In Section 4, how to calculate the "attack success rate"? Is it the case that the authors first collect a set of data and then evaluate the victim's misclassification rate every time after processing an unlearn request?
3. The setting of selective forgetting attacks seems a little similar to that in [r1]. It would be interesting to compare [r1] with this paper.
**Reference**
[r1] Shumailov et al. "Manipulating SGD with Data Ordering Attacks". arXiv 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The main limitation of this paper is that the threatening scenario of sequential attacks may not be realistic to some extent, which has been explained in Section "Weaknesses".
Nevertheless, I believe the merits of this paper suppress its disadvantages and would significantly benefit the ML security community.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your thoughtful comments and valuable suggestions. Below, we provide our response to the questions and concerns.
**1: Discussions on training the substitute model in the black-box setting where the adversary does not have prior knowledge about training data.**
Thank you for addressing the black-box scenario, where the adversary lacks prior knowledge of the private training data. In this setting, the adversary creates a synthetic dataset via existing model-stealing techniques and then trains a local substitute model on these data, exploiting the similarity of decision boundaries. Using standard query-based model stealing, we conducted an ablation study with a three-layer ReLU neural network trained on MNIST as the target model. Table 1 shows that the substitute model approaches the target model more closely as the number of queries increases, and the gap in attack performance between the two shrinks accordingly. Query analyses for this black-box setting will be included in the final version.
Table 1: Experiments on the discussed black-box setting.
| Number of queries | Norm of parameter difference| Test accuracy difference |
| :---: | :---: | :---: |
| 2000 | 2.46 | 45.90% |
| 4000 | 1.93 | 15.45% |
| 6000 | 1.22| 9.68% |
| 8000 | 0.67 | 5.85% |
| 10000 | 0.19 | 2.51% |
**2: Is it still the original data when modifying incoming update requests? Wouldn’t it result in the victim model “unlearning” something that it has never learned?.**
Thank you for your insightful questions. The difference between modified and original data depends on the update magnitudes. As discussed in lines 216-217 of the main manuscript, for the setting of modifying incoming update requests, we follow existing partial data deletion works [r2, r3], which involve perturbing the requested examples [r2]. The adversary can exploit this, using the excuse of data quality issues (e.g., noise) or privacy, to generate specific misclassifications.
Regarding unlearning, it only affects acquired training knowledge. Unlearning techniques remove the influence of the unlearning information from the pre-trained model. If the model never learned from the information, its influence is close to zero.
**3: Including experiments on how forgetting attacks affect the success rate of membership inference attacks (MIA).**
Thanks for highlighting this. Existing machine unlearning methods use MIA to measure the unlearning effectiveness. In contrast, our attack leverages existing unlearning techniques to induce misclassifications. Thus, MIA's success relies on how well unlearning methods remove maliciously requested unlearning samples.
Following your suggestions, we evaluated membership inference attacks after maliciously unlearning certain CIFAR-10 training samples using various methods: first-order, second-order, unrolling SGD, amnesiac, and SISA. Based on [r4], we then compared these results with the baseline from the fully-trained original model. Table 2 displays the results. Notably, these unlearning methods delete the maliciously requested unlearning samples with similar effectiveness, as the MIA metric depends on the unlearning method's deletion efficiency. To clarify, our work aims to study security vulnerabilities of the unlearning system by exploring the possibility of malicious data updates.
Table 2: Experiments on membership inference attacks.
| Model | Original | First-order | Second-order | Unrolling SGD | Amnesiac | SISA |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Accuracy | 1.0 ± 0.0 | 0.52 ± 0.06 | 0.54 ± 0.06 | 0.55 ± 0.04 | 0.59 ± 0.05 | 0.48 ± 0.05 |
**4: Discussions on Theorem 1 and the conclusion of Section 3.1.**
Thanks for the question. In Theorem 1, we aim to study the difference between the original pre-trained model and its unlearned version (created after removing malicious unlearning samples). This helps understand the influence of these unlearning samples on the model's parameters. Notably, from this theorem, we can see that as more malicious unlearning samples are removed, the difference between the two models grows.
**5: In Section 4, how to calculate the "attack success rate"? Is it the case that the authors first collect a set of data and then evaluate the victim's misclassification rate every time after processing an unlearn request?**
In our experiments, we randomly sample a target class to attack as well as a set of targeted test samples in this class. We then run our selective forgetting attacks by optimizing which training data to unlearn to meet attack objectives (targeted or untargeted setting). After unlearning, we assess the misclassification rate on the targeted test samples. This process is repeated 10 times with varied random seeds, and we report the average results and standard errors.
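The per-trial rate in this protocol can be sketched as follows (a hypothetical helper, not the authors' code; passing `target_labels` selects the targeted setting, omitting it the untargeted one):

```python
def attack_success_rate(preds, true_labels, target_labels=None):
    """Fraction of targeted test samples for which the attack succeeded.

    Targeted setting: success when the victim predicts the adversary's
    specified label. Untargeted setting: success when the prediction
    differs from the true label."""
    assert len(preds) == len(true_labels)
    if target_labels is not None:  # targeted attack
        hits = sum(p == t for p, t in zip(preds, target_labels))
    else:  # untargeted attack
        hits = sum(p != y for p, y in zip(preds, true_labels))
    return hits / len(preds)
```

The reported numbers would then be the mean and standard error of this rate over the 10 random seeds.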
**6: Comparing the setting of the proposed selective forgetting attacks and that of [r1].**
Thanks for highlighting the comparison with [r1]. Our method differs significantly from [r1]: First, [r1] targets training-time attacks, and assumes the adversary can alter the order of training data fed to the model. In contrast, our approach focuses on the unlearning process, and utilizes it to submit malicious update requests to achieve attack objectives.
In addition, the setting of [r1] can be viewed as a special case of our sequential unlearning attacks, since we can use the defined data update requests (refer to Definition 2 in the main manuscript) to achieve the attack goal of [r1] via changing the order in which batches are supplied to the model during training.
**Reference**
[r1] "Manipulating SGD with Data Ordering Attacks". arXiv 2021.
[r2] “Machine Unlearning of Features and Labels”, NDSS 2023.
[r3] “Feature Unlearning for Generative Models via Implicit Feedback”, arXiv 2023.
[r4] “Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations”, ECCV 2020.
---
Rebuttal Comment 1.1:
Title: I vote for acceptance
Comment: Thanks to the authors for their response. All of my questions have been resolved. I believe this work will significantly contribute to the ML security community so I vote for acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Dear Reviewer,
We appreciate your encouraging response. We are delighted that our feedback has addressed your concerns.
Thank you for your time again!
Best regards,
Authors of Paper5786 | Rebuttal 1:
Rebuttal: We would like to express our great gratitude to all the reviewers for their valuable time, comments, and questions. We highly appreciate all the feedback and suggestions, which further help us improve our paper. We are greatly encouraged that they found our ideas and contributions to be novel and significant (Reviewers a1jN, MEi8, EC3c, and MTyE), comprehensive (Reviewers a1jN, MEi8, and EC3c), and technically sound (Reviewers a1jN, MEi8, and EC3c). We are grateful that they recognized the effectiveness of our methods (Reviewers a1jN, MEi8, EC3c, and MTyE), and the comprehensiveness of our experiments (Reviewers a1jN, MEi8, and EC3c).
We carefully considered all concerns and questions raised by the reviewers and provided our detailed responses to these concerns and questions raised in individual comments. We hope that our responses will address the concerns and questions provided by the reviewers. We look forward to participating in further discussions and answering any further questions posed by the reviewers.
Thank you for your time and consideration.
Best,
Authors of Paper5786
Pdf: /pdf/45057a4e3911e0d27540461ce065a54ad98a5573.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Robust Bayesian Optimization for Arbitrary Uncertain inputs | Accept (poster) | Summary: The paper proposes a new method for Bayesian optimization under uncertain inputs, where the distribution of an input can be complex and unknown but can be sampled from. The paper proposes an MMD based kernel between probability distributions and the use of the Nystrom approximation to make the computations tractable. A UCB-type algorithm is used. The paper provides regret bounds accounting for the approximation error due to sampling / Nystrom approximation, and compares the algorithm to some baselines.
Strengths: 1. The motivation of requiring to do BO with uncertain inputs with complex distributions that may not be known in closed form is compelling.
2. The regret bound incorporating approximation error and the corresponding implications on the sampling size $m$ are useful and interesting.
In general I believe the work has potential given that the technical issues raised in the Weaknesses section are sufficiently addressed.
Weaknesses: 1. **Validity of kernel**. The paper designs an MMD-based kernel between probability distributions $\hat k = \eta(\text{MMD}(P, Q))$. When designing a new kernel, it is important to prove that it is a valid kernel. However, no such proof is given (or references to previous literature that does something similar). $\eta$ is said to be a "normalizing function with range $[0, 1]$". What conditions does $\eta$ need to fulfill in order for $\hat k$ to be a valid kernel?
2. **Empirical evaluation**. The empirical evaluation is unconvincing for the following reasons:
* The Nystrom approximation is adopted for efficiency. It is important to measure how much using this approximation affects performance beyond simply a qualitative comparison by looking at the posteriors in Figure 3. The empirical evaluation should do an ablation study that includes MMDGP without the Nystrom approximation, at a sampling size that results in the same inference time as that with the Nystrom approximation. What if removing the Nystrom approximation results in better performance with no decrease in efficiency?
* The closest work seems to be Oliveira et. al. (2019). Why isn't their algorithm one of the baselines?
* The error bars are simply too large and overlap too much. Any performance improvement could be due to randomness; please run with more trials to decrease the error bars so that the performances can be meaningfully compared.
3. **Writing**. The writing has much to be improved:
* Line 174 "One important theoretical guarantee to conduct GP model..." and in Theorem 1 "...running Gaussian Process with acquisition function..." I believe you mean "Bayesian optimization" in these contexts instead of "Gaussian process". The GP is the model, the algorithm is BO which relies on a GP. The thing being run is the algorithm, not the model.
* Line 151 "sampling size $N$" do you mean $m$?
* Many grammatical and typo errors: the last word in the title is not capitalized; line 178 "For $\hat k$ be radial kernels" and "For $\hat k$ be linear kernel", should be, for example, "If $\hat k$ is a radial kernel"; Lines 201 and 205 are missing periods; Line 256 "can be more pronounced impact" should be "can have".
* [24] and [25] point to the same reference.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: No additional questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments, especially regarding the constructive suggestions on the kernel validity, ablation test, and experiment.
### 1. Validity of kernel
> When designing a new kernel, it is important to prove that it is a valid kernel.
> $\eta$ is said to be a "normalizing function with range [0, 1]". What conditions does $\eta $ need to fulfill for $\hat{k}$ to be a valid kernel?
Thanks for pointing this out! In our paper, we focus on the case $\eta(x) = \exp(-\alpha x)$, i.e., $\hat{k} = \exp(-\alpha \text{MMD}(P,Q; k))$. Validity can be proved via Theorem 2.2 of [1]: any bi-function on $\mathcal{P}\times\mathcal{P}$ of the form $\hat k(P, Q) = \sum_{i=0}^{\infty} a_i \langle \psi_P, \psi_Q \rangle_k^i$ with $a_i \ge 0$, where $\psi$ is the kernel mean map from $\mathcal{P}$ to $\mathcal{H}_k$, is a valid kernel; in addition, if all $a_i > 0$, this kernel is universal over $C(\mathcal{P})$, provided the mean map $\psi$ is injective and the space on which $P, Q$ are defined is compact. In this case, an MMD kernel with RBF mapping, $\hat k(P, Q) = \exp(-\alpha \Vert \psi_P - \psi_Q \Vert_k^2)$, is ensured to be valid and universal.
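To make the construction concrete, below is a minimal sketch (ours, not the paper's implementation) of an empirical squared MMD and the resulting distribution kernel $\hat k(P, Q) = \exp(-\alpha\,\mathrm{MMD}^2)$ with an RBF base kernel; the function names are illustrative and the estimator is the simple biased (V-statistic) one:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # base kernel k on the input space
    return np.exp(-gamma * np.sum((x - y) ** 2))

def mmd2(X, Y, gamma=1.0):
    """Biased empirical squared MMD between samples X ~ P and Y ~ Q;
    estimates ||psi_P - psi_Q||_k^2 via pairwise base-kernel averages."""
    kxx = np.mean([rbf(a, b, gamma) for a in X for b in X])
    kyy = np.mean([rbf(a, b, gamma) for a in Y for b in Y])
    kxy = np.mean([rbf(a, b, gamma) for a in X for b in Y])
    return kxx + kyy - 2.0 * kxy

def mmd_kernel(X, Y, alpha=1.0, gamma=1.0):
    """Distribution kernel k_hat(P, Q) = exp(-alpha * MMD^2(P, Q))."""
    return np.exp(-alpha * max(mmd2(X, Y, gamma), 0.0))
```

Identical sample sets give $\hat k = 1$, and the value decays toward 0 as the two empirical distributions separate.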
### 2. Ablation test for Nystrom approximation
> The Nystrom approximation is adopted for efficiency. It is important to measure how much using this approximation affects performance.
> The empirical evaluation should do an ablation study that includes MMDGP without the Nystrom approximation, at a sampling size that results in the same inference time as that with the Nystrom approximation.
Thanks for this constructive suggestion! We have conducted an ablation test for the Nystrom approximation. In this experiment, we employ an RKHS function and set the input uncertainty to follow a beta distribution (see Sec 5.2 in the paper). We compare several candidates to study the effect of the Nystrom approximation:
+ *MMDGP-nystrom* is our method with Nystrom approximation, in which the sampling size $m=16$ and sub-sampling size $h=9$. Its complexity is $O(MNmh)$, where $M$ and $N$ are the sizes of training and test samples respectively, $m$ is the sampling size for MMD estimation, and $h$ indicates the sub-sampling size during the Nystrom approximation.
+ *MMDGP-raw-S* does not use the Nystrom approximation but employs an empirical MMD estimator. Due to its $O(MNm^2)$ complexity, we set the sampling size $m=12$ to ensure a similar complexity as the *MMDGP-nystrom*.
+ *MMDGP-raw-L* also uses an empirical MMD estimator, but with a larger sampling size ($m = 16$).
+ *GP* utilizes a vanilla GP with a learnable output noise level and optimizes with the upper-confidence-bound acquisition [^a].
Figure 7a in the supplementary_results.pdf shows that i) with sufficient computation power, the *MMDGP-raw-L* can obtain the best performance by using a large sample size. However, ii) with the same inference complexity, the *MMDGP-nystrom* performs much better than the *MMDGP-raw-S*, suggesting the Nystrom approximation can significantly improve the efficiency with a mild cost of performance degradation. iii) All the MMDGP-based methods are better than the vanilla GP-UCB.
### 3. Experiments
> The closest work seems to be Oliveira et. al. (2019). Why isn't their algorithm one of the baselines?
> The error bars are simply too large and overlap too much. please run with more trials to decrease the error bars so that the performances can be meaningfully compared.
Thanks for pointing this out! We have cited this paper and included the uGP-UCB method in our baselines and rerun all the experiments with more trials to provide a statistically-meaningful comparison.
The uGP-UCB method in [2] employs an integral kernel over probability measures $\mathcal{P}$: $\hat{k}(P, P^\prime):= \int_\mathcal{X} \int_\mathcal{X} k(x, x^\prime) dP(x) dP^\prime(x^\prime)$. Since it does not mention how to compute this kernel and no public code is available, we compute this integral kernel via sampling, resulting in an inference complexity of $O(MNm^2)$. Here $M$ and $N$ are the sizes of training and test samples respectively, and $m$ is the number of samples used for estimating the integral.
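Since [2] leaves the computation unspecified, a sampling-based estimate of this integral kernel can be sketched as below (an illustrative reconstruction with hypothetical names; `X` and `Y` hold samples from $P$ and $P^\prime$, and each kernel entry costs $O(m^2)$ as stated above):

```python
import numpy as np

def integral_kernel(X, Y, gamma=1.0):
    """Monte Carlo estimate of k_hat(P, P') = E_{x~P, x'~P'}[k(x, x')]
    with an RBF base kernel, using sample arrays X ~ P and Y ~ P'
    of shape (num_samples, dim)."""
    # pairwise squared distances between all sample pairs, then average
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists).mean()
```

Averaging the base kernel over all sample pairs is what produces the quadratic cost in the number of samples per distribution.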
In the supplementary results, Figure 9a compares our method with the baselines on an RKHS function (Figure 4a in the paper) with Gaussian input uncertainty. All the robust methods perform well, except that the vanilla GP-UCB gets stuck in a local optimum. Also, we notice that the *uGP* method performs slightly better than the others in this case, but we will see later that our method outperforms it on more complex distributions and in high-dimensional cases.
Figure 9b further compares these models on a double-peak function (Figure 4c in the paper) with a beta distribution. In this case, we observe that *MMDGP-nystrom* quickly converges to the optimum while *uGP* lags behind with a larger variance. Also, as mentioned in the paper, *skl* and *ERBF* fail to locate the robust optimum due to the mismatch between their assumptions and the true input uncertainty.
We also re-evaluate on the robot pushing problem with a multi-modal Gaussian mixture distribution. Each method is tested for 48 trials, and the robust regrets are summarized in Figures 9c and 9d. We observe that our algorithm outperforms the others in terms of both optimization efficiency and stability.
### 4. Writing
> The writing has much to be improved...
Thanks for pointing them out, and we apologize for the rough presentation. We have revised the text accordingly and carefully proofread the updated version.
[1] Andreas Christmann and Ingo Steinwart. “Universal Kernels on Non-Standard Input Spaces”. NIPS. 2010
[2] Oliveira, Rafael, Lionel Ott, and Fabio Ramos. “Bayesian Optimisation under Uncertain Inputs.” PMLR, 2019.
[^a] For simplicity, all the methods in this work use an acquisition of upper confidence bound.
---
Rebuttal Comment 1.1:
Title: Additional concern
Comment: Thank you for your response, most of my previously raised concerns are decently addressed. I have increased my score to reflect this.
I have one more qualm regarding the comparison of your kernel to that developed in Oliveira et. al. (2019), and to the kernels developed in "Learning from Distributions via Support Measure Machines" by Muandet et. al. (2012) and "Universal Kernels on Non-Standard Input Spaces" by Christmann and Steinwart (2010). The kernel developed in Oliveira et. al. (2019) has the form $k(P, Q) = \langle \psi_P, \psi_Q \rangle_{\mathcal H}$, and they state below this definition that "Besides the linear kernel in Equation 9, many other kernels on $\mathcal P$ can be defined via $\psi$, e.g. radial kernels using $\lVert \psi_P - \psi_Q\lVert_{H}$ as a metric on $\mathcal P$ (Muandet et al., 2012). " Since your kernel is defined as $k(P, Q) = \exp(-\alpha \lVert \psi_P - \psi_Q\lVert_{H})$ this seems to be the same as what was suggested. Muandet et. al. (2012) seems to have tested this also as a "Level 2 RBF", and seems to be very similar to that developed in Christmann and Steinwart (2010) Lemma 2.3. My concerns concretely are the following:
1. The fact that the proposed kernel is not new must be clearly stated and these links to previous works must be clearly discussed. On reading the paper in its current form, one has the impression that the proposed kernel is part of the work's novel contribution.
2. The work of Muandet et. al. (2012) suggests that there is an entire family of kernels over probability measures, of which the one in Oliveira et. al. (2019) and yours are special cases. The Nystrom approximation will be able to decrease the computation time for all of them, since they all rely on inner products between sums of kernel matrix entries and therefore all rely on the kernel matrix. I believe that this work can be made significantly more general by not focusing on the case of RBF with MMD as a metric, and since Theorem 1 does not rely on this specific kernel anyway. Or is there a good reason that there is a particular focus on this specific kernel?
---
Reply to Comment 1.1.1:
Title: Response to kernel design
Comment: We appreciate the reviewer's thorough understanding of this paper and insightful comments!
+ We arrived at this MMD-based kernel design from the intuition that, if we can find a good metric that characterizes the distance between probability measures with rich expressive power, we should be able to devise a robust GP based on it and use it to guide the search for the robust optimum.
+ This intuition directed us to the family of Integral Probability Metrics (IPMs). Among them, the MMD caught our attention because of its intrinsic connection with distance measurement in an RKHS and its natural fit with the kernel trick (ref. Section 3.1 of our paper), which eventually led us to the current idea.
+ Given two distributions $P$ and $Q$, the MMD [1] is defined as $\text{MMD}(P, Q) = \sup_{f \in \mathcal{F}} \big( \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{y \sim Q}[f(y)] \big)$. By setting the function class $\mathcal{F}$ to the unit ball of an RKHS $\mathcal{H}$, the MMD can be expressed as the distance between mean embeddings in the RKHS. In this case, our kernel becomes $k(P, Q)=\exp{(-\alpha \Vert \psi_P -\psi_Q \Vert_{\mathcal H})}$, which coincides with the kernel family in [2] (the uGP kernel in [4] is a specialization of [2]).
+ Though our theoretical analysis is developed in an RKHS, **MMD is not limited to the RKHS setting and our kernel can go beyond the kernel family in [2]**. In fact, any function class $\mathcal{F}$ that is sufficiently rich and comes with uniform convergence guarantees can be used, giving different expressions of the MMD [1]. For example, the MMD reduces to the Kolmogorov metric if $\mathcal{F}$ is the class of functions of bounded variation, and it can be expressed as earth-mover distances, say the Wasserstein distance, with a proper choice of $\mathcal{F}$. There is active research in this field, e.g., [5, 6, 7], on designing novel kernels with these IPM metrics.
**In the updated manuscript, we will clarify this point clearly and discuss the connections with these existing works as the reviewer suggested.**
Again, thanks for the constructive suggestion on the generalization direction. As the reviewer points out, our theoretical analysis in Theorem 1 does not rely on the kernel in its current form and can be generalized. As the main scope of this paper is a practical method for robust Bayesian optimization, we will **add a discussion of this generalization direction and leave a comprehensive exploration to future work.**
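To make the kernel construction discussed above concrete, the following is a small NumPy sketch of $k(P, Q)=\exp(-\alpha \Vert \psi_P -\psi_Q \Vert_{\mathcal H})$ estimated from samples; this is our own illustration, and it uses the biased V-statistic because that estimate is guaranteed non-negative (the paper's Eq. 8 uses the unbiased variant):

```python
import numpy as np

def rbf_gram(x, y, ls=1.0):
    # base RBF kernel matrix on 1-D inputs
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ls**2)

def mmd_biased(x, y, ls=1.0):
    """Biased (V-statistic) estimate of MMD = ||psi_P - psi_Q||_H,
    which is guaranteed non-negative."""
    m2 = (rbf_gram(x, x, ls).mean() + rbf_gram(y, y, ls).mean()
          - 2.0 * rbf_gram(x, y, ls).mean())
    return float(np.sqrt(max(m2, 0.0)))

def mmd_rbf_kernel(x, y, alpha=1.0, ls=1.0):
    """k(P, Q) = exp(-alpha * ||psi_P - psi_Q||_H), estimated from samples."""
    return float(np.exp(-alpha * mmd_biased(x, y, ls)))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 400)
q = rng.normal(0.0, 1.0, 400)  # same distribution as p
r = rng.normal(5.0, 1.0, 400)  # distant distribution

print(mmd_rbf_kernel(p, q) > mmd_rbf_kernel(p, r))  # similar P, Q give a kernel near 1
```

Swapping `mmd_biased` for another IPM estimator would instantiate the generalization direction mentioned above.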
**Reference**
[1] Gretton, Arthur, et al. "A kernel two-sample test." *The Journal of Machine Learning Research* 13.1 (2012): 723-773.
[2] Muandet, Krikamol, et al. "Learning from distributions via support measure machines." *Advances in neural information processing systems* 25 (2012).
[3] Christmann, Andreas, and Ingo Steinwart. "Universal kernels on non-standard input spaces." *Advances in neural information processing systems* 23 (2010).
[4] Oliveira, Rafael, Lionel Ott, and Fabio Ramos. "Bayesian optimization under uncertain inputs." *The 22nd international conference on artificial intelligence and statistics*. PMLR, 2019.
[5] Carriere, Mathieu, Marco Cuturi, and Steve Oudot. "Sliced Wasserstein kernel for persistence diagrams." *International conference on machine learning*. PMLR, 2017.
[6] Oh, Jung Hun, et al. "Kernel wasserstein distance." *arXiv preprint arXiv:1905.09314* (2019).
[7] De Plaen, Henri, Michaël Fanuel, and Johan AK Suykens. "Wasserstein exponential kernels." *2020 International Joint Conference on Neural Networks (IJCNN)*. IEEE, 2020. | Summary: This work focuses on the situations where input uncertainty arises and the input values are unobservable, and introduces to measure the distance of uncertain inputs through MMD when training the Gaussian process surrogate. The authors theoretically and empirically demonstrate the effectiveness of the proposed method.
Strengths: 1. The proposed method is sound and MMD can be a metric to measure the distances of uncertain inputs.
2. The experiments are extensive to show the effectiveness of the proposed method.
Weaknesses: 1. MMD-GP needs to query m times more samples than GP, which can restrict its application since BO is usually applied to the tasks where evaluating a query can be time-consuming. What about the time cost of this work compared to other methods for input uncertainty?
2. After sampling m times for one query x, we can also get m rewards/performance. I wonder how to deal with the m rewards/performance? Line 139 shows that $D_n=${$(x_i, y_i)|x_i \sim P_i$}. However, it seems that $y_i$ also should be a distribution.
3. Other typos:
3.1 Eq. 2 uses $\xi$ to denote the noise while line 92 uses $\epsilon$.
3.2 Line 151: “the computation and space complexities of the empirical MMD estimator scale quadratically with the sampling size N” Should N be m?
3.3 What does the training and testing samples stand for in line 162? Do you mean the number of observations and the number of queries?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No broader societal impacts are provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments.
### 1. Sampling issue
> MMD-GP needs to query m times more samples than GP, which can restrict its application since BO is usually applied to tasks where evaluating a query can be time-consuming. What about the time cost of this work compared to other methods for input uncertainty?
Thanks for pointing this out, and sorry for the unclear exposition! In our problem setting, we assume the intended input $x_i$ is perturbed by some noise $\delta(x_i)$; we then evaluate the time-consuming function once to get $y_i = f(x_i + \delta(x_i)) + \xi_i$, where $\xi_i$ is additional observation noise. When evaluating the MMD-based kernel, however, we never query the time-consuming function $f$; we only assume that $\delta(x_i)$ can be sampled cheaply, so we have enough samples $\\{ \delta^j(x_i) \\}_{1\le j\le m}$ for each $1 \le i \le n$ to compute the MMD distance between $x_i + \delta(x_i)$ and $x_l + \delta(x_l)$. Each call to $f$ takes a single sample $x_i + \delta(x_i)$ and returns a single evaluation $y_i$.
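This query protocol can be sketched as follows (the objective and the Gaussian noise model are hypothetical stand-ins, not the paper's benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Stand-in for the expensive black-box objective (hypothetical)."""
    return np.sin(3.0 * x) + 0.01 * rng.normal()

def sample_perturbed(x, m=1, sigma=0.05):
    """Cheap sampler for x + delta(x); Gaussian noise is an
    illustrative choice, any samplable distribution works."""
    return x + rng.normal(0.0, sigma, size=m)

x_query = 0.4

# Kernel evaluation: many cheap perturbation samples, no call to f at all.
kernel_samples = sample_perturbed(x_query, m=160)

# Function evaluation: a single perturbed input, a single expensive call.
y = f(sample_perturbed(x_query, m=1)[0])

print(kernel_samples.shape)  # (160,)
print(np.ndim(y))            # 0, i.e., one scalar observation per query
```

The key point is that only the second step touches the expensive objective, so the sampling budget $m$ does not multiply the number of function evaluations.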
### 2. Misleading notations
> After sampling m times for one query x, we can also get m rewards/performance. I wonder how to deal with the m rewards/performance. Line 139 shows that $D_n = \\{ (x_i,y_i) \vert x_i \sim P_i\\}$. However, it seems that $y_i$ also should be a distribution.
Thanks for pointing this out, and sorry for the unclear notation! This question is also addressed by our previous answer: in each query, we input only one $x_i$ into the function $f$ and thus generate one $y_i$. There is also a typo in the definition of $D_n$ in our original paper; it should read $D_n = \\{ (\\hat x_i,y_i) \vert \\hat x_i \sim P_{x_i}\\}$, where $\hat x_i$ is one random realization of $P_{x_i}$ when we intend to input $x_i$ into the function $f$.
### 3. Typos
> Other typos
Thanks for pointing these out; we have fixed all the mentioned typos in the updated paper.
---
Rebuttal Comment 1.1:
Title: A warm reminder in last day of rolling discussion
Comment: We kindly ask the reviewers if they have any outstanding questions or clarifications regarding our paper. We are happy to engage in a dialogue and conduct any additional requested work in the remaining discussion period. Thank you!
---
Rebuttal 2:
Comment: Thanks for your response and I have read the rebuttal. The authors resolve my concerns and I would like to maintain the rating. | Summary: The paper proposes a novel Bayesian Optimization (BO) algorithm that explicitly tackles input uncertainty by introducing a new integral probabilistic metric (IPM)-based kernel. The algorithm also utilizes an efficient and stable Nystrom estimator to approximate the Maximum Mean Discrepancy (MMD), which serves as the adopted IPM. Furthermore, the paper extends the GP-UCB framework by incorporating the proposed kernel and derives the corresponding upper bound for the cumulative regret. The empirical study illustrates the effectiveness of the proposed approximation and the BO algorithm.
Strengths: 1. The paper is generally well-organized and illustrative with figures and tabular results.
2. The theoretical result extends the existing UCB work with reasonable approximation to tackle the input uncertainty.
3. The limitation of the proposed AIRBO on dealing with high-dimensional input and lack of discussion over other IPMs is clearly stated.
Weaknesses: 1. The theoretical bound is arguably sound. Due to the numerical approximation of the Maximum Mean Discrepancy (MMD), it may potentially result in pseudo-metric or even worse outcomes in practice. Therefore, it remains unclear whether the upper bound of the maximum information gain $\gamma_T$ still holds and guarantees that the UCB cumulative regret is sublinear.
2. The essential justification for the proposed Nystrom estimator lies in assumption 1. While the author claims in lines 210 and 211 that the approximation error can be fairly small, the paper lacks sufficient discussion or substantiation to support this claim.
3. The empirical results are limited. The regret curve lacks statistical significance, and more extensive empirical studies on real-world applications would be desirable.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is SKL-UCB mentioned in line 262? Is it referring to the algorithm discussed in line 45 from [19]?
2. What is the corresponding reference for Oliveira et al. mentioned in line 50?
3. Despite the explicit treatment of input uncertainty, it is unclear to me whether tolerating the input uncertainty by fixing a larger output noise level could provide a simpler solution. The author claims that the GP model overfits (line 222) and justifies the use of MMD-GP. However, one would expect that fixing a larger noise level for the GP model would help mitigate it as well. Additionally, the existing method addressing heteroscedastic noise proposed in [1] potentially aids in mitigating the input uncertainty purely on the output side. It would be valuable to hear the authors' comments on this.
4. Could the author provide more intermediate quantitative evidence showing the effectiveness of the proposed MMD-based kernel? For example, any evidence showing that the uncertainty quantification is better than the naïve GP model that doesn't explicitly deal with the input uncertainty?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: LImitations are mentioned in the comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments.
### Theoretical bound
> The theoretical bound is arguably sound. Due to the numerical approximation of the Maximum Mean Discrepancy (MMD), it may potentially result in pseudo-metric or even worse outcomes in practice. Therefore, it remains unclear whether the upper bound of the maximum information gain $\gamma_T$ still holds and guarantees that the UCB cumulative regret is sublinear.
We have modified the statement of our key result, showing that the cumulative regret is bounded in terms of the approximation error $e_\varepsilon$ and the maximum information gain $\hat\gamma_T$ of the **exact kernel $\hat k$**, so there is no need to compute the information gain of the approximate kernel. We also discuss the order of $\hat\gamma_T$ in Theorem 2, showing that under mild assumptions, $\hat\gamma_T$ has an order depending on the kernel's differentiability and the input dimension, and thus the UCB cumulative regret is sublinear.
The modified Theorem 1 can be found in the global rebuttal.
### Nystrom errors
> The essential justification for the proposed Nystrom estimator lies in assumption 1. While the author claims in lines 210 and 211 that the approximation error can be fairly small, the paper lacks sufficient discussion or substantiation to support this claim.
We have updated assumption 1 and added a remark after assumption 1 to discuss the general approximation error of the Nystrom estimator. Please refer to the global rebuttal.
### Experiments
> The empirical results are limited. The regret curve lacks statistical significance, and more extensive empirical studies on real-world applications would be desirable.
Thanks for pointing out this issue. We have rerun all the experiments with more trials to provide statistically meaningful results: Figure 9 in the attached supplementary_results.pdf reports all the updated experiment results. In particular, a performance comparison on the RKHS function with Gaussian input uncertainty is reported in Figure 9a, the result on the 1D double-peak function with beta-distributed inputs is presented in Figure 9b, and Figures 9c and 9d compare performance on a real-world robot-pushing problem.
In addition, we add a high-dimensional optimization problem: a 10-dimensional bumped-bowl function from [2] with a circular input-uncertainty distribution. Figure 7b in the supplementary results shows that our method yields the best performance and locates the robust optimum efficiently and stably in high dimensions.
### Questions
> 1. What is SKL-UCB mentioned in line 262? Is it referring to the algorithm discussed in line 45 from [19]?
Yes, SKL-UCB employs a GP surrogate equipped with a symmetric-KL-based kernel, as described in [19], to model the uncertain inputs. We have updated the citation in line 262.
> 2. What is the corresponding reference for Oliveira et al. mentioned in line 50?
Sorry for the mistake. We have added the correct reference to [1] in the updated paper.
> 3. Despite the explicit treatment of input uncertainty, it is unclear to me whether tolerating the input uncertainty by fixing a larger output noise level could provide a simpler solution. The author claims that the GP model overfits (line 222) and justifies the use of MMD-GP. However, one would expect that fixing a larger noise level for the GP model would help mitigate it as well. Additionally, the existing method addressing heteroscedastic noise proposed in [1] potentially aids in mitigating the input uncertainty purely on the output side. It would be valuable to hear the authors' comments on this.
This is an interesting viewpoint. In our understanding, explicitly modeling the input uncertainty not only helps to distinguish the input perturbation from other sources of randomness (e.g., measurement noise) but also enables us to predict the **expected** function value under such input uncertainty, which can then be used to guide the search for the robust optimum. This is quite reasonable and intuitive in our problem setting. In contrast, simply setting a larger output noise to absorb the input uncertainty may mix the different sources of randomness together, rendering the output noise level hard to learn.
> 4. Could the author provide more intermediate quantitative evidence showing the effectiveness of the proposed MMD-based kernel? For example, any evidence showing that the uncertainty quantification is better than the naïve GP model that doesn't explicitly deal with the input uncertainty?
Thanks for this constructive comment. We have conducted a new experiment comparing uncertainty quantification across different surrogates.
In this test, we intentionally design the input uncertainty to be a "step-changing" chi-squared distribution, whose degrees-of-freedom parameter $df$ is 0.5 when $x \in [0.0, 0.6)$ and suddenly changes to 7.0 if $x \in [0.6, 1]$. Figure 8 in the supplementary results visualizes the uncertainty quantification of the different surrogate models (the numbers following a surrogate name are the sampling sizes, *e.g.*, *MMDGP-nystrom(160/10)* is our method with a sampling size of 160 and a sub-sampling size of 10).
We can clearly observe that: i) *MMDGP-nystrom* comprehensively models the input uncertainty, evidenced by the abrupt change of its posterior distribution at $x=0.6$; ii) *uGP* with a small sampling size fails to quantify the uncertainty, though a large sampling size, at a higher computation cost, helps alleviate this problem; and iii) a naïve GP model that does not explicitly deal with the input uncertainty is not aware of this uncertainty change at all.
### Reference
[1] Oliveira, Rafael, Lionel Ott, and Fabio Ramos. “Bayesian Optimisation under Uncertain Inputs.” PMLR, 2019.
[2] Sanders, Nicholas D., Richard M. Everson, Jonathan E. Fieldsend, and Alma A. M. Rahat. “Bayesian Search for Robust Optima.” arXiv, 2021.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response to the concerns regarding the soundness of the theoretical and empirical results. I believe the new results could help alleviate the issues and enhance the overall presentation. However, the new results in Figure 8 demonstrate an improvement in uncertainty quantification but do not seem to contribute significantly to the optimization, especially when compared to uGP. Additionally, the results in Figure 9 still lack significance. I still have some reservations about the contribution of explicitly learning the input uncertainty for general optimization. Nonetheless, I appreciate the general contribution of this paper within this context.
---
Reply to Comment 1.1.1:
Title: Additional explanation
Comment: Thanks for your feedback. Here we would like to explain a little bit more about the experiment setting and results, hoping it can help dispel some concerns.
In Figure 8, we aim to compare the modeling performance of different surrogates given the same set of training points. We design the input distribution to be a "step-changing" chi-squared distribution, whose $df = 0.5$ if $x\in[0.0, 0.6)$ and which changes to $df = 7.0$ when $x\in[0.6, 1.0)$ [1]. Due to this sudden change, the uncertainty at the point $x=0.6$ is expected to be asymmetric: i) as the chi-squared distribution is quite lean and sharp when $df=0.5$, the distances from $x=0.6$ to its LHS points, i.e., $x_{lhs} \in[0.0, 0.6)$, are relatively large, so their covariances are small, resulting in fast-growing uncertainty; meanwhile, ii) when $x\geq 0.6$, $df$ changes to 7.0, rendering the input distribution quite flat with a long tail on the RHS. Therefore, the distances between $x$ and its RHS points become relatively small, which leads to large covariances and small uncertainties for points in $[0.6, 1.0]$. As a result, we expect to observe asymmetric posterior uncertainty at $x=0.6$.
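The step-changing input distribution described above can be simulated directly; the `scale` factor below is a hypothetical choice to keep perturbations small relative to the unit interval:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_chi2_noise(x, m=1000, scale=0.01):
    """Input noise drawn from the 'step-changing' chi-squared
    distribution: df = 0.5 for x in [0.0, 0.6) and df = 7.0 for
    x in [0.6, 1.0). The scale factor is a hypothetical choice to
    keep perturbations small relative to the unit interval."""
    df = 0.5 if x < 0.6 else 7.0
    return scale * rng.chisquare(df, size=m)

# The mean perturbation jumps across the change point (E[chi2(df)] = df).
left, right = step_chi2_noise(0.59), step_chi2_noise(0.60)
print(left.mean() < right.mean())  # True
```

The jump in both the mean and spread of the noise at $x=0.6$ is exactly what a surrogate must capture to produce the asymmetric posterior uncertainty discussed above.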
We have employed several surrogates in this test, including:
+ *MMDGP-nystrom(160/10)* is our method with a sampling size $m=160$ and sub-sampling size $h=10$. Its complexity is $O(MNmh)$, where $M$ and $N$ are the sizes of training and test samples (Note: all the models in this experiment use the same training and testing samples for a fair comparison).
+ *uGP(40)* is the surrogate from [2] and employs an integral kernel with sampling size $m=40$. Due to its $O(MNm^2)$ complexity, we set the sampling size $m=40$ to ensure a similar complexity as the *MMDGP-nystrom(160/10)* .
+ *uGP(160)* is also the surrogate from [2] but uses a much larger sampling size ($m = 160$). Given the same training and testing samples, its complexity is 16 times higher than our *MMDGP-nystrom(160/10)*.
+ *skl* is another robust GP surrogate equipped with a symmetric KL-based kernel, as described in [13].
+ *ERBF* employs an expected RBF kernel from [4] (Both *skl* and *ERBF* are designed for Gaussian inputs).
+ *GP* utilizes a noisy Gaussian Process model with a learnable output noise level.
According to Fig 8, we observe that:
+ **our method, MMDGP-nystrom(160/10), can comprehensively quantify the sudden change of the input uncertainty**, evidenced by its abrupt posterior change at $x=0.6$.
+ However, **the *uGP(40)* with the same complexity fails to model the uncertainty correctly**. We suspect this is because *uGP* requires a much larger sample size to stabilize its estimate of the integral kernel and thus performs poorly when the sample size is insufficient; we therefore also evaluate *uGP(160)*, which has a much higher complexity (sampling size $m=160$), and it does successfully alleviate this issue.
+ Apart from this, **the noisy GP model with a learnable output noise level is not aware of this uncertainty change at all**; this may be because it treats the inputs as exact values instead of random variables.
This test is only designed to examine the uncertainty quantification, but not their optimization performance. We further compare the optimization performances on the 1D synthetic functions and a real-world problem in Figure 9, as well as a 10D problem under circular distribution in Figure 7b. According to these tests, we found that:
+ In all experiments, **the noisy GP with learnable output noise level fails to locate the robust optimum and can get stuck at a local optimum**.
+ On problems with complex distributions and in high dimensions, **our method outperforms the others and works stably, while *uGP* at the same computation cost suffers from the instability caused by insufficient sampling and stumbles over iterations**. This observation is supported by the mean and standard deviation of regret in Figures 7b and 9c.
Moreover, a discussion of the benefits of explicitly modeling the input uncertainty can be found in [2, 3, 4, 5] (already cited in our manuscript).
**Reference**
[1] Chi-squared distribution, Wikipedia, https://en.wikipedia.org/wiki/Chi-squared_distribution
[2] Oliveira, Rafael, Lionel Ott, and Fabio Ramos. "Bayesian optimization under uncertain inputs." The 22nd international conference on artificial intelligence and statistics. PMLR, 2019
[3] Moreno, Pedro, Purdy Ho, and Nuno Vasconcelos. "A Kullback-Leibler divergence-based kernel for SVM classification in multimedia applications." *Advances in neural information processing systems* 16 (2003).
[4] Dallaire, Patrick, Camille Besse, and Brahim Chaib-Draa. "An approximate inference with Gaussian process to latent functions from uncertain data." *Neurocomputing* 74.11 (2011): 1945-1955.
[5] Beland, Justin J, and Prasanth B Nair. “Bayesian Optimization Under Uncertainty.” In *Advances in Neural Information Processing Systems*, 5, 2017. | Summary: The paper tackles the problem of robust Bayesian Optimization (BO) with uncertain inputs, i.e., the input values are deviated from the intended value before evaluation. The paper proposes a new technique, namely AIRBO (Arbitrary Input uncertainty Robust Bayesian Optimization), that can model the uncertain input and incorporate this uncertainty into the surrogate model, and thus can be used to guide the search of the objective function global optimum. The paper further proposes to use Nystrom approximation to reduce the computational cost. Theoretical analysis is conducted to guarantee the performance of the proposed technique. Experiments are conducted on some synthetic and one real-world problem to evaluate the performance of the proposed method.
Strengths: + The paper’s writing is generally clear and easy to understand. Illustration and figures are plotted nicely to explain the problem setting, as well as the property of the proposed method.
+ The proposed method of using MMD to construct a kernel that can incorporate uncertainty of the inputs seems to be interesting to me.
+ Theoretical analysis is conducted to guarantee the convergence property of the proposed method (although note that I have some concerns regarding the theoretical analysis).
+ The experiments are described in great detail. Various experiments are conducted to understand the performance of each component within the proposed method.
Weaknesses: + I think the application of this problem setting should be motivated much better as it’s unclear on the significance of the problem tackled in this paper. In the Introduction, the paper only mentions that this problem is quite common for robotics and process controls, but no references or further explanations are provided.
+ Some notations are quite confusing, making it hard for me to follow the proposed method. In Eq. (8), the paper shows how to compute MMD(P,Q) from the m samples {x_i}_1^m, {y_i}_1^m – but y is defined as the objective function value. Then how can we compute the kernel \hat{k}(P_{x_i}, P_{x_j}) in Eqs. (10) and (11)?
+ I also have some concerns with the theoretical analysis. In the theoretical analysis, there is no formal proof to prove the proposed kernel is actually a valid kernel. There is just one paragraph in Lines 170-173 briefly mentioning about this but no formal proof is given to know if the proposed kernel is valid, and in which scenarios it will be a valid kernel. Assumption 1 is too strong, it assumes that we already know that the error function e(P,Q) can be uniformly upper-bounded. In practice, can this assumption be true? How can we know it can be true? Theorem 1 also doesn’t have much meaning to me. In Theorem 1, \tilde{\beta}_n is defined with the maximum information gain \tilde{I} within the formula. This is not really a standard analysis in BO’s theory, normally \beta is defined by some constants. Is this maximum information gain \tilde{I} bounded?
+ Experiments are mostly conducted with synthetic functions. There is only one real-world problem being used in the experiment, and it’s quite simple to me. It’s just a 3-dim problem.
+ What is the time cost of the proposed method?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please answer my comments and questions in the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I can’t find any dedicated section that describes about the limitations of the proposed technique. There are some future work mentioned in the Conclusion, but there are no limitations mentioned there.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments. **The reference is listed in the global rebuttal**.
### 1. Motivation
> I think the application of this problem setting should be motivated much better as it’s unclear on the significance of the problem tackled in this paper.
Thanks for bringing it up. This paper tackles a robust optimization problem in which the design parameters are randomly perturbed before evaluation. Such perturbations are manifestations of uncertainty in the design process and can arise for several reasons, *e.g.*, execution noise during a control process or machining error in manufacturing. The drone measurement in Section 1 gives an example of execution noise; other motivating applications include robot exploration [7], robot grasping [3], and semiconductor design [4], and more applications can be found in [6].
### 2. Notations
> In Eq. (8), the paper shows how to compute MMD(P, Q) from the m samples ${x_i}^m, {y_i}^m$ – but y is defined as the objective function value. Then how can we compute the kernel $\hat{k}(P{x_i}, P{x_j}$) in Eqs. (10) and (11)?
Sorry for the abused notation. In Eq. 8, $y$ is not the objective function value; we use $x$ and $y$ to represent samples from the input distributions $P$ and $Q$, respectively. The updated Eq. 8 should read:
Suppose we draw $m$ samples, $\\{ x_i^{(u)} \\}\_{u=1}^m$ and $\\{x_j^{(v)} \\}\_{v=1}^m$, from the input distributions $P_{x_i}$ and $P_{x_j}$; then the MMD can be empirically estimated via:
$$
\text{MMD}^2(P_{x_i}, P_{x_j}) \approx \frac{1}{m(m-1)} \sum_{1\leq u, v\leq m, u\neq v} \big( k(x_i^{(u)}, x_i^{(v)}) + k(x_j^{(u)}, x_j^{(v)}) \big) - \frac{2}{m^2} \sum_{1\leq u,v \leq m} k(x_i^{(u)}, x_j^{(v)}),
$$
where $u$ and $v$ are the sample indices and $x_i^{(u)}$ represents the $u$-th sample from $P_{x_i}$, which are consistent with the notations used in Eqs 9, 10, and 11.
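To make the estimator concrete, here is a minimal numerical sketch (illustrative only; the Gaussian kernel, its bandwidth, and the sample sizes are our choices for this example, not the paper's implementation):

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    diff = a[:, None, :] - b[None, :, :]
    sq_dists = np.sum(diff ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased empirical MMD^2 from m samples of each distribution, matching
    the estimator above: off-diagonal averages of the two within-sample kernel
    matrices minus twice the cross-sample average."""
    m = x.shape[0]
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    # Exclude diagonal terms (u != v) in the within-sample sums.
    sum_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    sum_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    return sum_xx + sum_yy - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
m, d = 500, 2
# Near zero for two samples of the same distribution,
# clearly positive for well-separated distributions.
same = mmd2_unbiased(rng.normal(0, 1, (m, d)), rng.normal(0, 1, (m, d)))
diff = mmd2_unbiased(rng.normal(0, 1, (m, d)), rng.normal(3, 1, (m, d)))
```

The Nyström variant discussed below would replace the exact kernel matrices with low-rank approximations built from a subsample of size $h$.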
### 3. Theoretical analysis
> In the theoretical analysis, there is no formal proof to prove the proposed kernel is actually a valid kernel.
For the theoretical analysis, we complement the paper with an argument that our proposed kernel is indeed valid: by Theorem 2.2 in [11], any bivariate function on $\mathcal{P}\times\mathcal{P}$ of the form $\hat k(P,Q) = \sum_{i=1}^{\infty} a_i \langle \psi_P, \psi_Q \rangle_k^i$ with $a_i \ge 0$, where $\psi_P, \psi_Q$ are the kernel mean embeddings of $P, Q$ in $\mathcal{H}_k$, is a valid kernel; if moreover $a_i > 0$ for all $i$, this kernel is universal over $C(\mathcal{P})$, provided the mean map $\psi$ is injective and the space on which $P, Q$ are defined is compact. Thus, the Gaussian RBF kernel $\hat k(P,Q) = \exp(-\alpha \Vert \psi_P - \psi_Q \Vert_k^2)$ is guaranteed to be valid and universal.
> Assumption 1 is too strong, it assumes...can this assumption be true? How can we know it can be true?
Regarding the concern on Assumption 1, we agree that the original assumption is too strong, and we weaken the uniform condition to only assume that each evaluation of $e(P,Q)$ is bounded with probability $1-\varepsilon$. The modified Assumption 1 is stated in the global rebuttal.
Note that this assumption is standard in our case: we may assume $\max_{x \in \mathcal{X}} \Vert \phi \Vert_k \le K$, where $\phi$ is the feature map corresponding to the kernel $k$. Then, when using the empirical estimator, the error between $\text{MMD}_{\text{empirical}}$ and $\text{MMD}$ is controlled by $4K\sqrt{2\log(6/\varepsilon)m^{-1}}$ with probability at least $1-\varepsilon$, according to Lemma E.1 in [5]. When using the Nyström estimator, the error has a similar form, and under mild conditions, when the subsample size satisfies $h = O(\sqrt{m}\log(m))$, the error is of order $O(m^{-1/2}\log(1/\varepsilon))$ with probability at least $1-\varepsilon$.
Under this modified assumption, the result of our Theorem 1 is also slightly modified, and we state it on the general Author Rebuttal page. The final regret is now bounded with probability $1 - \delta - n\varepsilon$ instead of the original $1 - \delta - \varepsilon$, but this matters little: since the empirical and Nyström estimators have errors that are logarithmic in $\varepsilon$, we can take $n\varepsilon$ small enough without hurting the order of the error.
> Theorem 1 also doesn’t have much meaning to me. In Theorem 1, \tilde{\beta}_n is defined with the maximum information gain \tilde{I} within the formula. This is not really a standard analysis in BO’s theory, normally \beta is defined by some constants.
For Theorem 1, $\tilde{\beta}_n$ is defined with the maximum information gain $\hat\gamma_n$ because we consider the general case: an arbitrary $f$ with bounded RKHS norm, without assuming that $f$ is sampled from a GP. One may check Theorem 2 in [10] for $f$ sampled from a GP, and Theorem 3 in [10] for arbitrary $f$ in an RKHS: when $f$ is assumed to be sampled from a GP, $\beta_n$ need not be defined via $\gamma_n$, while for arbitrary $f$ in an RKHS it must be.
> Is this maximum information gain \tilde{I} bounded?
We add Theorem 2 in the global rebuttal to discuss the order of the maximum information gain $\hat\gamma_n$, showing that the cumulative regret bound is sublinear.
### 4. Experiments
> Experiments are mostly conducted with synthetic functions. There is only one real-world problem being used in the experiment, and it’s quite simple to me. It’s just a 3-dim problem.
We appreciate the reviewer's suggestion; we have added a high-dimensional optimization problem: a 10-dimensional bumped-bowl problem from [8], with the input uncertainty set to a circular distribution. Figure 7b in the supplementary results shows that our method yields the best performance and can locate the robust optimum efficiently and stably in high dimensions.
### 5. Time
> What is the time cost of the proposed method?
Table 1 in the paper reports the time cost for our method.
---
Rebuttal Comment 1.1:
Title: Reminder in the last day of the Rolling Discussion
Comment: We kindly ask the reviewer if they have any outstanding questions or clarifications regarding our paper. We are happy to engage in a dialogue and conduct any additional requested work in the remaining discussion period. Thank you! | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their insightful comments and constructive suggestions. In this global rebuttal, we mainly provide the updated version of our theoretical results and a discussion on the limitations, followed by an attached pdf for the supplementary experiments (The other detailed feedback can be found under each reviewer's comments).
### Theoretical Results
For the theoretical part, we summarize the amended theorems/assumptions/descriptions here.
**Assumption 1** For any $\varepsilon > 0$, $P,Q \in \mathcal{P_{\mathcal{X}}}$, we may choose an estimated $\tilde k(P,Q)$ such that the error function $e(P,Q)$ can be upper-bounded by $e_\varepsilon$ with probability at least $1-\varepsilon$, that is, $\mathbb{P}\left(|e(P,Q)| \le e_\varepsilon\right) > 1-\varepsilon.$
**Discussion of Assumption 1** Note that this assumption is standard in our case: we may assume $\max_{x \in \mathcal{X}} \Vert \phi \Vert_k \le K$, where $\phi$ is the feature map corresponding to the kernel $k$. Then, when using the empirical estimator, the error between $\text{MMD}_{\text{empirical}}$ and $\text{MMD}$ is controlled by $4K\sqrt{2\log(6/\varepsilon)m^{-1}}$ with probability at least $1-\varepsilon$, according to Lemma E.1 in [5]. When using the Nyström estimator, the error has a similar form, and under mild conditions, when the subsample size satisfies $h = O(\sqrt{m}\log(m))$, the error is of order $O(m^{-1/2}\log(1/\varepsilon))$ with probability at least $1-\varepsilon$.
**Theorem 1** Let $\delta >0, f\in \mathcal{H}\_k,$ and the corresponding $\Vert \hat f \Vert_{\hat k} \le b, \max_{x\in \mathcal{X}} |f(x)| = M$. Suppose the observation noise $\zeta_i = y_i - f(x_i)$ is $\sigma_\zeta$-sub-Gaussian, and thus with high probability $|\zeta_i|< A$ for some $A>0$. Assume that both $k$ and $P_x$ satisfy the conditions for $\Delta f_{P_x}$ to be $\sigma_E$-sub-Gaussian, for a given $\sigma_E > 0$. Then, under Assumption 1 with $\varepsilon >0$ and corresponding $e_{\varepsilon}$, setting $\sigma^2 = 1+\frac{2}{n}$ and running the Gaussian process with acquisition function
$\\tilde \\alpha(x|\\mathcal{D}_n) = \\tilde\\mu_n(P_x) + \\beta_n \\tilde\\sigma_n(P_x),$ where
$\\beta_n =b + \\sqrt{\\sigma_E^2 + \\sigma\_\\zeta^2}\\sqrt{2(\\hat \\gamma\_n + 1 - \\ln\\delta) },$
we have that the uncertain-inputs cumulative regret satisfies:
$\\tilde R_n \\in O ( \sqrt{n\hat\gamma_n(\hat\gamma_n - \ln \delta)} + n^2\sqrt{(\hat\gamma_n - \ln \delta) e_\varepsilon} + n^3 e_\varepsilon )$
with probability at least $1-\delta - n\varepsilon$. Here $\tilde R_n = \sum_{t=1}^n \tilde r_t$, and $\\tilde r\_t = \\max\_{x\\in \\mathcal{X}} \\mathbb{E}\_{P\_x}[f] - \\mathbb{E}\_{P\_{x\_t}} [f]$
**Theorem 2** Suppose $k$ is $r$-th differentiable with bounded derivatives and translation invariant, i.e., $k(x,y) = k(x-y,0)$. Suppose the input uncertainty is i.i.d., that is, the noised input density satisfies $P_{x_i}(x) = P_0(x - x_i), \forall x_i \in \mathcal{X}$. Then if the space $\mathcal{X}$ is compact in $\mathbb{R}^d$, the maximum information gain $\hat\gamma_n$ satisfies
\begin{equation}
\hat\gamma_n = O(n^{(d^2 +d)/(r+d^2+d)} \log(n)).
\end{equation}
Thus, when $r > d(d+1)$, the accumulated regret is sub-linear with respect to $n$ assuming $e_\varepsilon$ is small enough.
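For intuition on the kind of acquisition rule appearing in Theorem 1, here is a generic numpy sketch of a GP posterior and a UCB-style rule $\mu_n + \beta_n\sigma_n$ on point inputs (illustrative only: our actual acquisition operates on the kernel mean embeddings of the input distributions $P_x$, which is not reproduced here; the kernel, noise level, test function, and $\beta$ below are arbitrary choices for this example):

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    sq = (a[:, None] - b[None, :]) ** 2
    return np.exp(-sq / (2.0 * length ** 2))

def gp_posterior(x_train, y_train, x_query, noise=1e-2):
    """Standard GP regression posterior mean and standard deviation."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_inv = np.linalg.inv(K)
    k_star = rbf(x_query, x_train)                       # shape (q, n)
    mu = k_star @ K_inv @ y_train
    var = 1.0 - np.sum((k_star @ K_inv) * k_star, axis=1)  # prior variance is 1
    return mu, np.sqrt(np.maximum(var, 0.0))

def ucb(x_train, y_train, x_query, beta=2.0):
    """Upper-confidence-bound acquisition: mu + beta * sigma."""
    mu, sigma = gp_posterior(x_train, y_train, x_query)
    return mu + beta * sigma

x_train = np.array([0.1, 0.4, 0.8])
y_train = np.sin(6.0 * x_train)
x_query = np.linspace(0.0, 1.0, 101)
scores = ucb(x_train, y_train, x_query)
x_next = x_query[np.argmax(scores)]  # next point to evaluate
```

In the uncertain-inputs setting, `rbf` would be replaced by the kernel on distributions $\hat k(P_{x_i}, P_{x_j})$ estimated from samples, and $\beta_n$ set as in Theorem 1.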
### Limitations
Here we provide a discussion of the limitation & further work:
1. The choices of the sampling and subsampling sizes are closely tied to the distribution type, the dimension, and the rank of the kernel matrix. For now, we set these parameters heuristically via experiments; it would be worth devising a learnable way to determine their best values automatically.
2. As the power of MMD-based two-sample tests drops polynomially with the problem dimension [9], our current method can only alleviate this issue at a quadratic computational cost, which hinders scaling to very high-dimensional problems. We leave the exploration of high-dimensional problems for future work.
### References
[1] Christmann, A. and Steinwart, I. Universal kernels on non-standard input spaces. NeurIPS, 2010.
[2] Srinivas, N., Krause, A., Kakade, S. M., et al. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv:0912.3995, 2009.
[3] Nogueira, J., Martinez-Cantin, R., et al. Unscented Bayesian optimization for safe robot grasping. IROS, 2016.
[4] Ng, T. S., Sun, Y., et al. Semiconductor lot allocation using robust optimization. European Journal of Operational Research, 205(3), 2010.
[5] Chatalic, A., et al. Nyström kernel mean embeddings. ICML, 2022.
[6] Gabrel, V., Murat, C., et al. Recent advances in robust optimization: An overview. European Journal of Operational Research, 235(3):471-483, 2014.
[7] Oliveira, R., Ott, L., et al. Bayesian optimisation under uncertain inputs. AISTATS, 2019.
[8] Sanders, N. D., Everson, R. M., et al. Bayesian search for robust optima. arXiv, 2021.
[9] Ramdas, A., et al. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions. AAAI, 2014.
[10] Srinivas, N., Krause, A., Kakade, S. M., et al. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv:0912.3995, 2009.
[11] Christmann, A. and Steinwart, I. Universal kernels on non-standard input spaces. NeurIPS, 2010.
Pdf: /pdf/3ffb43f75ed564be6aa10260180492eb9a1942a0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
(S)GD over Diagonal Linear Networks: Implicit bias, Large Stepsizes and Edge of Stability | Accept (poster) | Summary: This paper analyzes the implicit biases of GD and SGD on two-layer diagonal linear networks, specifically with large step sizes. This paper shows that SGD converges to a limit that is determined by the trajectory, and specifically by a certain effective initialization. Moreover, while both GD and SGD are close to gradient flow when the step size is small, it is shown that SGD may recover a sparse solution with larger step sizes. This can help explain the generalization gap between GD and SGD with large step sizes.
Strengths: This paper analyzes the implicit bias of SGD for a wide range of step sizes, which is an important contribution. In particular, it is pointed out that while larger batch sizes generally hurt the performance of GD, they may actually improve the accuracy of SGD before divergence, which is interesting.
Weaknesses: My main concern is that the two-layer diagonal linear network model might be too simple, and more evidence is needed to show that observations in this setting can be transferred to more practical settings. Also the presentation is a bit technical; for example, the definition of the effective initialization is a little hard to follow. It could be better to consider a simple case to simplify the presentation while highlighting the interesting results, such as the different implicit biases of GD and SGD.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and the feedback. We would first like to point out a misunderstanding in the review (this is most certainly only a typo between batchsize and stepsize): *it is pointed out that while larger batch sizes generally hurt the performance of GD, they may actually improve the accuracy of SGD before divergence, which is interesting* should be replaced by *it is pointed out that while larger **stepsizes** generally hurt the performance of GD, they may actually improve the accuracy of SGD before divergence, which is interesting*.
**Limitations of DLNs.** DLNs are indeed a simplistic model of neural networks, a limitation acknowledged in our work. However, as we explain in lines 41-50, we believe that the rigorous study of complex phenomena, such as the intricate effect of noise and stepsizes, has to start on such simple models. As put by Reviewer JT9J, *for now, the setting of diagonal linear networks might be the most complicated one we can expect for SGD-large-learning-rate bias to rigorously prove*.
The definition of the effective initialization is indeed quite technical, and there is no simple case in which it has a simpler expression. However, its effect on the recovered solution can be summarized as follows for sparse regression and large stepsizes: the effective initialization has heterogeneous coordinates that hinder the recovery of sparse vectors for GD, while for SGD its coordinates are of the same order and lead to good sparse recovery.
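As a concrete illustration of the model class under discussion, here is a minimal sketch of full-batch GD on a two-layer diagonal linear network with the common parametrization $\beta = w_+^{\odot 2} - w_-^{\odot 2}$, on an overparametrized sparse regression problem (all problem sizes, the initialization scale, and the stepsize are our illustrative choices, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, gamma, steps = 15, 20, 0.1, 0.02, 50_000

# Noiseless sparse regression: y = X @ beta_star with a 2-sparse ground truth.
X = rng.normal(size=(n, d))
beta_star = np.zeros(d)
beta_star[0], beta_star[1] = 1.0, -1.0
y = X @ beta_star

# Two-layer diagonal linear network: beta = w_plus**2 - w_minus**2,
# initialized so that beta_0 = 0, at scale alpha.
w_plus = np.full(d, alpha)
w_minus = np.full(d, alpha)

for _ in range(steps):
    beta = w_plus**2 - w_minus**2
    grad_beta = X.T @ (X @ beta - y) / n  # gradient of L(beta) = ||X beta - y||^2 / (2n)
    # Chain rule through the quadratic reparametrization.
    w_plus -= gamma * 2.0 * w_plus * grad_beta
    w_minus += gamma * 2.0 * w_minus * grad_beta

beta = w_plus**2 - w_minus**2
loss = np.sum((X @ beta - y) ** 2) / (2 * n)
# With a small initialization scale, GD interpolates the data and the largest
# coordinates of the recovered beta sit on the support of beta_star.
```

Replacing the full gradient with a mini-batch gradient and increasing `gamma` toward the stability threshold is the regime where the paper's SGD analysis predicts improved sparse recovery.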
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I have no further questions. | Summary: This is a theoretical work on implicit bias in diagonal linear networks. It proves the empirical observation by Pesme et al that SGD with large learning rates can recover the sparse signal while GD or small learning rates cannot. Technically it utilizes mirror descent with time-vary potentials.
Strengths: 1. The motivation and setting are great and non-trivial.
It is a popular and significant trend to understand the implicit bias with large learning rates, which is practically meaningful and much harder than previous works on small learning rates. For instance, the small-learning-rate version of this work is Woodworth et al. for gradient flow.
Although the main empirical observation has been reported by Pesme et al last year, this work successfully proves it, completing the story. Meanwhile, for now, the setting of diagonal linear networks might be the most complicated one we can expect for SGD-large-learning-rate bias to rigorously prove.
2. The presentation and discussion are comprehensive.
The target setting involves several factors, including the initialization scale, initialization shape, stochasticity, and learning rates. This work does a good job of clearly stating its position in terms of these factors. In Section 3, it connects the main Theorem 1 with the previous gradient-flow result, with an emphasis on the effective initialization $\alpha_{\infty}$. Then in Section 4, it carefully discusses how these four factors affect $\alpha_{\infty}$.
Weaknesses: 1. Is such a phenomenon robust to different learning rates? In Figure 1, the x-axis region with a significant drop of the test loss for SGD seems narrow, and it looks hard to find the optimal $\gamma$ since it is too close to $\tilde{\gamma}_\max$, where training explodes. Meanwhile, the definition of $\tilde{\gamma}_\max$ around line 257 is not much help in finding the exact value. Moreover, my guess is that $\tilde{\gamma}_\max$ is not stable due to stochasticity in the data generation and SGD sampling.
In other words, how can we choose a good lr for good sparse recovery?
2. In terms of practical computation, I wonder how much better SGD + large lr is than GD + small initialization or SGD + label noise. It is mentioned in Section 4.1 that ''the slower the training, the larger the gain''. Hence it seems SGD incurs more computational cost in practice to achieve a good test error, while GD + small initialization may take fewer steps. And SGD + label noise might be another strong candidate.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See above in Weakness.
For now, I would like to recommend an acceptance for its theoretical proof of an open problem.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall very positive feedback, all the helpful remarks and questions, that could lead to additional valuable discussions in our paper.
**1.** The quantity $\tilde \gamma_{max}$ is not robust to the sampling of the random inputs $x_i$, as it is defined conditionally on them (for different samples of $(x_i)$'s, we empirically observe that $\tilde \gamma_{max}$ varies a lot, making it hard to estimate). However, it is empirically robust to SGD noise (to the noise induced by the sampling of mini-batches at each iteration): if SGD converges for some stepsize, we empirically observe that SGD converges for any smaller stepsize. Proving such a phenomenon seems however out of reach (even for GD).
For sparse recovery, a good learning rate for SGD would be one that produces oscillations in the initial phase but still converges. As noticed, such a learning rate is hard to estimate, hence the interest of *learning rate schedules*, which start with a (too) large learning rate but decrease it slowly over different phases. These schedules are in fact also used in practice to increase generalization in more complex architectures; DLNs here offer a provable benefit for such schedules.
We propose adding this discussion in a revised version, as we believe it will provide additional valuable insights into the effects of learning rates for SGD by discussing lr schedules.
**2.** There is indeed a tradeoff between generalization power and computational efficiency here: empirically, taking into account its reduced complexity per iteration, SGD with large stepsizes performs better for large dimensions $d$, which can be explained by the fact that $\alpha$ needs to be taken too small for GD to converge quickly. Finally, SGD with large stepsizes or label noise SGD perform equally well. If the reviewer believes that illustrating this discussion with experiments could be a valuable addition, we propose to do so. | Summary: This paper studies the impact of stochasticity and step size on the implicit regularization of GD and SGD, in the setting of two-layer diagonal linear network. It is show that large step size can benefit SGD, but hinder the performance of GD. Both the implicit bias and convergence are proved for (S)GD, providing a complete trajectory-dependent result. The results are further used to provide insights for explaining the "edge of stability" regime.
Strengths: The writing is very clear and easy to follow, and highlighting of the important parts helps the reading a lot.
Theorem 1&2 together provides a complete set of results for trajectory-dependent analysis for GD and SGD for two-layer diagonal linear network, and the subsequent discussion on the impact of stochasticity and step size is clear and insightful.
Weaknesses: It seems that the technical tools used in this paper have already been developed in previous related papers, especially [48, 50, 61] (corresponding to the reference number in the paper). Can the authors discuss the technical novelty of the current paper?
The gain vector does not have a closed-form expression, and it seems unclear what exact property the minimization in (5) leads to. More specifically, does it give exact sparse recovery in some cases?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Theorem 2 gives the convergence guarantee. Is it possible to obtain an explicit convergence rate? If not, what's the difficulty here?
In Figure 1, the largest step sizes for GD and SGD are different. Why doesn't this difference appear in Theorem 2?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reviewing, the questions, and for the positive feedback.
**Technical novelty.**
Our technical tools are in fact very different from the cited references: our underlying process (SGD on the neurons) is **discrete** while theirs are **continuous-time** processes: gradient flow or stochastic GF. Our analysis is thus much more challenging, and studies the real algorithms used in practice, as opposed to continuous time approximations.
On the technical side, [48, 50, 61] leverage a continuous mirror flow structure; our analysis is based on leveraging a (discrete) time varying mirror descent structure, indeed showing some similarity with [48, 50, 61]. Analyzing the implicit bias in sections 4-5 is however completely new both technically speaking and in terms of insights on the effects of stepsizes and noise.
In the general case, the minimization in (5) is hard to grasp; particular cases that lead to exact sparse recovery are GD or SGD with small initialization and stepsizes that are not in the EoS regime, or SGD with any initialization, but with an EoS stepsize.
**Convergence rate.** From Proposition 6, we directly have $\frac{1}{T} \sum_{t<T} L(\beta_t)=O(1/T)$. However, a quantitative bound on $\Vert\beta_k-\beta_\infty\Vert$ requires a more thorough analysis. In fact, we can show that once the loss becomes small (as guaranteed by Prop. 6 in our paper), it becomes locally relatively strongly convex w.r.t. the hypentropy. This property leads to linear convergence of $\Vert \beta_k-\beta_\infty \Vert_2$ to 0, at a rate of the form $O(\exp(-\alpha^2\mu\gamma k))$, where $\mu$ is the smallest non-zero eigenvalue of $H$.
We are open to incorporating this result into a revised version of the paper if the reviewer believes it would be beneficial. We did not include it due to space constraints and our primary focus on the implicit bias aspect of the problem.
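For completeness, a sketch of the argument behind such a rate, via the standard Bregman-divergence analysis of mirror descent under relative strong convexity (our notation; this summarizes a textbook argument, not the paper's proof):

```latex
% Mirror descent step with potential h:
%   \nabla h(\beta_{k+1}) = \nabla h(\beta_k) - \gamma \nabla \mathcal{L}(\beta_k).
% If \mathcal{L} is locally \mu'-relatively strongly convex and L'-relatively
% smooth w.r.t. h, i.e. for Bregman divergences D_h, D_{\mathcal{L}}:
%   \mu' D_h(\beta, \beta') \le D_{\mathcal{L}}(\beta, \beta') \le L' D_h(\beta, \beta'),
% then for \gamma \le 1/L' the divergence to the minimizer contracts:
%   D_h(\beta_\infty, \beta_{k+1}) \le (1 - \gamma \mu')\, D_h(\beta_\infty, \beta_k),
% so  D_h(\beta_\infty, \beta_k) \le (1 - \gamma \mu')^k D_h(\beta_\infty, \beta_0)
%                                 = O(e^{-\gamma \mu' k}).
% For a hypentropy-type potential at scale \alpha, whose curvature near the
% origin is of order 1/\alpha^2, the relative constant satisfies \mu' \propto
% \alpha^2 \mu, giving the stated O(\exp(-\alpha^2 \mu \gamma k)) rate.
```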
**Question 2.** In Figure 1, the largest step sizes for GD and SGD are indeed different, but this is also the case in Theorem 2, where the constant $L$ depends on the batch size (see the definition in line 131).
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for their response, and I don't have further questions. I think it would be interesting to include the result on the convergence rate. | Summary: This paper studies the implicit bias of 2-layer diagonal linear networks.
The authors show the convergence of GD and SGD with macroscopic stepsizes and characterise their solutions through an implicit regularization problem.
Moreover, the theoretical results reveal the difference between the generalization performances of GD and SGD for large stepsizes.
Strengths: The paper is clear and well-written.
Understanding what solutions GD/SGD converge to is an important theoretical issue.
This paper achieves a clear characterization by deriving the solution found by GD/SGD through an implicit regularization problem.
From a technical standpoint, this paper cleverly establishes an equivalent relationship between gradient descent and mirror descent of another problem, showcasing its potential for broader applications.
Furthermore, the authors analyze the gain in the implicit regularization problem, offering insightful explanations for the differences in generalization abilities between GD and SGD with varying step sizes.
Weaknesses: This paper generalizes the existing results on implicit regularization from GF to SGD, and the results are relatively similar, although I acknowledge this is technically difficult.
Furthermore, the 2-layer diagonal linear network and the specific initialization restrict the influence of this paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. While the authors have provided explanations of the Gain in Section 4 from various perspectives, I am still confused about its insightful meaning. To address this, the authors could consider providing additional geometric insights or visual aids to clarify the intuitive meaning of the Gain.
2. The authors assert that the Gain of SGD is homogeneous in Equation (11) and Figure 2. However, it is important to note that Equation (11) corresponds to the expectation of the stochastic gradient rather than the realistic stochastic gradient. Previous studies, such as Zhu et al. (2018), have argued that the realistic stochastic gradient is highly anisotropic. This raises a potential contradiction, and it would be helpful if the authors could address this concern in the paper.
Zhu et al (2018). The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The results in the paper is limited to 2-layer diagonal linear network and specific initialization, but it should not be the reason for rejection.
This paper indeed offers a solid theoretical perspective on the implicit regularization of SGD.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time spent reviewing, the thoughtful comments and reference.
We first answer the ‘weakness’ part of the review.
Our paper indeed generalizes existing results: references [48, 50, 61] provide the implicit bias of gradient flow and stochastic gradient flow over DLNs, while we prove convergence of GD and SGD for macroscopic stepsizes, as well as determine the implicit bias for any stepsizes that lead to convergence. Our results are thus much stronger, since they directly study the algorithms used in practice and enable insights into the effects of stepsizes and stochasticity. Similar characterisations were not possible with previous works. We thus believe that our results are not mere incremental technical contributions.
As we acknowledge in our paper, DLNs are indeed a very simple model of neural nets; however, we believe this model to be (currently) the only one for which a non-trivial implicit bias can be rigorously proved for large stepsize GD and SGD, as rightfully noted by Reviewer JT8J.
Finally, we use a specific initialization of the neurons that lead to $\beta_0=0$, in order to have results that are clearer to understand. Our analysis can directly be extended to general initializations: non-centered initializations would only modify the term $\tilde \beta_0$ in Theorem 1; we can add a more thorough discussion about the initialization in a revised version.
**Meaning of Gain.**
The quantity Gain naturally appears in the analysis as being the key quantity which quantifies how much the solution found by (S)GD with given stepsizes deviates from that found by GF with same initialization. This quantity is insightful because it is tractable and analyzable: the goal of sections 4 and 5 is precisely to leverage its definition in order to understand the impact of the stepsize and stochasticity on the recovered solution.
A geometric insight could however be the following: for every iteration $k$, the interpolating vector that minimizes the potential $h_k$ (Eq. (16)) related to the effective initialization $\alpha_k$ (line 621 in the appendix) is the solution found by GF started at $w_k$; hence the effective initializations $\alpha_k$ track the solutions found by GF starting from the iterates of (S)GD. The Gain and the effective initialization $\alpha_\infty$ are the limits of these quantities as $k\to\infty$.
**Anisotropic noise.**
We thank the reviewer for this very interesting paper.
Eq. (10) is indeed the expected squared stochastic gradient: it is thus related to both the true gradient and the noise, but the latter dominates. It is however important to note that these are the stochastic gradients of the **convex** loss $\mathcal L$, and not of the **non-convex** loss $F$ (that takes neurons as input). We thus do not believe Eq. (10) to be in contradiction with Zhu et al.: they argue that the noise of stochastic gradients of the **non-convex** loss is anisotropic, which we do not contradict. A thorough study of the SGD noise of the non-convex loss $F$ would show that for $s$-numerically sparse (i.e., almost or approximately sparse) neurons, the noise covariance and the non-convex Hessian (see Eq. (19)) are highly non-isotropic.
We thank again the reviewer for this reference, and suggest to add it with such a discussion in a revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewer's response. No further questions from me. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper analyzes 2-layer diagonal linear networks and explains the different characteristics of the solutions received through GF, GD, and mini-batch SGD.
Strengths: The paper analyzes the impact of the different step sizes and batch sizes and explains the differences between the solution received from training 2-layer diagonal linear networks using GF, GD, and mini-batch SGD. The paper shows that some of the differences are explainable using the linear scaling rule of \\( \frac{\text{step size}}{\text{batch size}} \\). However, the paper also shows other differences which are not explainable using the linear scaling rule.
Weaknesses: Part of the theory, including theorems 2 and Proposition 1, only apply when the step size is small enough, and the edge of stability phenomenon does not occur. However, in this case, both GD and SGD behave approximately like GF. Thus any difference between the solutions received via the different optimization algorithms is minimal.
For larger step sizes, the theoretical analysis cannot explain the full picture by itself. Rather, additional empirical observations are first made, and then assumed to be true in order to complete the full analysis.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) What conditions were used to determine if the training converges, i.e., the training reached \\( w^\gamma_\infty \\)? In particular, in Figure 5, it seems like \\( \lambda_\max\left( \nabla^2F \left( w^\gamma_\infty \right) \right) \\) is above the value of \\( 2/\gamma \\), which is impossible at convergence.
2) What is the effect of learning rate schedules? In particular, about Observation 1 and Figure 3, what would be the effect on coordinates not in \\( \text{supp}\left( \beta^*_{\text{ sparse }} \right) \\) if the step increases slowly so that the Oscillation phase is arbitrarily long?
3) In lines 290 and 295, did you perhaps mean to quote Eq.(10) and Eq.(11)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review, the helpful remarks, and for pointing out the typos.
As rightfully noticed, and as acknowledged in our paper (lines 41-42), part of our results do not apply for large step sizes at the edge of stability. Nonetheless, our main result (the implicit bias result in Theorem 1) holds **for any stepsize schedule such that the iterates converge**: we use this result in Sections 4.2 and 5, together with empirical observations, to provide information on the behavior of GD at the edge of stability. Rigorously proving these observations, however, remains a hard problem which we currently believe to be out of reach.
The condition for the convergence is $\mathcal L (w_k)\leq \varepsilon$ for a small $\varepsilon$ ($\varepsilon=10^{-20}$).
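This stopping rule can be made concrete with a toy snippet (our sketch, not the authors' code; the network size, data, and step size below are made up, and we use a looser threshold of $10^{-12}$ for speed where the rebuttal uses $10^{-20}$):

```python
import numpy as np

# Toy sketch (not the authors' code): gradient descent on a 2-layer
# diagonal linear network beta = u * v for a realizable least-squares
# problem, stopped once the loss falls below a small threshold.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
beta_star = np.array([1.0, 2.0, 0.5])        # made-up ground truth
y = X @ beta_star

u = np.ones(3)
v = np.ones(3)
gamma, eps = 0.01, 1e-12                     # rebuttal uses eps = 1e-20

loss = np.inf
for k in range(500_000):
    residual = X @ (u * v) - y
    loss = 0.5 * np.mean(residual ** 2)
    if loss <= eps:                          # convergence criterion L(w_k) <= eps
        break
    g = X.T @ residual / len(y)              # gradient w.r.t. beta
    u, v = u - gamma * g * v, v - gamma * g * u

print(loss <= eps)
```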
Indeed, as accurately noticed, there was a mild mistake in the code used to create Figure 5 (left and right): the y-axis should be rescaled by a factor of 1/8 that comes from wrong normalizations (a 1/2 factor in front of the loss, and a 1/2 factor in the argument of the loss that leads to a 1/4 factor for second-order derivatives, giving the $1/8 = 1/2 \times 1/4$ normalization error). We apologize for this mistake and will correct it. The trend of the figure is, however, unchanged (monotonicity of sharpness for GD and SGD).
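The 1/8 factor can be sanity-checked numerically (our illustrative snippet, with an arbitrary smooth test function): if $f(w) = \frac{1}{2} L(w/2)$, then $f''(w) = \frac{1}{2}\cdot\frac{1}{4}\, L''(w/2) = L''(w/2)/8$.

```python
import numpy as np

# Illustrative check (not the authors' code) of the 1/8 normalization:
# a 1/2 factor in front of the loss and a 1/2 factor in its argument
# together scale second derivatives by 1/8.
L = lambda w: np.cos(w) + w ** 3             # arbitrary smooth test loss
f = lambda w: 0.5 * L(0.5 * w)

def second_derivative(fn, w, h=1e-4):
    # central finite-difference estimate of fn''(w)
    return (fn(w + h) - 2 * fn(w) + fn(w - h)) / h ** 2

w0 = 0.7
lhs = second_derivative(f, w0)
rhs = second_derivative(L, 0.5 * w0) / 8
print(abs(lhs - rhs) < 1e-5)  # True
```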
**Learning rate schedules.** The proposed learning rate schedule (slowly increasing the stepsize to keep oscillations) *will magnify the effects of large stepsizes*: for SGD, this will mimic label-noise SGD if we keep the magnitude of the oscillations of the same order, and lead to sparse recovery through l1 norm minimization; however, for GD, this will lead to recovery of the wrong support due to a weighted l1 norm minimization which is adversarial, as explained in the paper.
In lines 290-295, we indeed meant to quote Eq. 10-11.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewer's response. I have no further questions. | null | null | null | null | null | null |
ReDS: Offline RL With Heteroskedastic Datasets via Support Constraints | Accept (poster) | Summary: This paper introduces ReDS for offline reinforcement learning (RL) where the variability of demonstrated behaviours changes non-uniformly across the state space. Unlike prior offline RL methods, e.g., CQL and AWR, that directly constrain the learning policy by distribution matching with the behaviour policy, ReDS presents the task as support matching and implements it by reweighting the data distribution in CQL. Both theoretical analysis and experimental results are provided to demonstrate the effectiveness of the resulting algorithm, CQL (ReDS).
Strengths: - The paper is well-written and easy to follow.
- The paper is well-motivated, theoretically sound, and supported by extensive experiments.
Weaknesses: There are several main weaknesses of the paper.
- The main issue of the paper is with its novelty. The idea of reweighting samples by state-dependent weights is not novel. Specifically, [1] has proposed to reweight the samples by trajectory returns / advantages and demonstrated that such a scheme significantly improves the performance of multiple offline RL algorithms, including CQL, IQL, TD3+BC, and BC. Similarly, [2] has proposed to cast the offline RL constraints as a support matching problem, rather than distribution matching, by reweighting the data samples by normalised trajectory returns. The proposed method in [2] has also been demonstrated to improve a wide range of offline RL methods, ranging from CQL, IQL, and TD3+BC to CRR. Nevertheless, despite their importance and highly similar motivations, problem formulations, and solutions, both [1] and [2] are missing from the related works.
- Although the paper has conducted extensive experiments across different domains, it lacks several important studies. Similar methods [1, 2] have already demonstrated that support matching methods for offline RL can be algorithm-agnostic, i.e., they can be used to boost the performance of different algorithms, ranging from CQL to IQL, TD3+BC, etc. However, this paper only discusses CQL (ReDS). From the perspective of algorithmic design, CQL, TD3+BC, and AWR (discussed by the authors in line 127) / CRR / IQL (based on AWR for policy improvement) are all distribution-constraint algorithms. It is important to discuss how ReDS could improve the aforementioned algorithms and how ReDS differs from [1, 2] in its performance.
References:
[1] Hong, Z. W., Agrawal, P., des Combes, R. T., & Laroche, R. (2023, February). Harnessing mixed offline reinforcement learning datasets via trajectory weighting. In The Eleventh International Conference on Learning Representations.
[2] Yue, Y., Kang, B., Ma, X., Xu, Z., Huang, G., & Yan, S. (2022). Boosting Offline Reinforcement Learning via Data Rebalancing. arXiv preprint arXiv:2210.09241.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Some of the important domains / tasks are missing from the paper. As a standard benchmark for offline RL, the medium-level locomotion tasks, the adroit tasks, and the original antmaze tasks are missing from the paper. Specifically, the original antmaze itself is already heteroskedastic in my opinion, as for most of the states, the ant only needs to go straight and only occasionally it will make turns. How does CQL (ReDS) compare with the well-benchmarked results on these tasks?
- How to apply ReDS to other distribution-constraint-based methods, including AWR, TD3+BC, etc. If it can be applied, how does it improve them and how does it compare with the aforementioned related works? If ReDS cannot be applied to other algorithms, what is its limitation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The main limitation of the paper is with its novelty. The idea of reweighting samples and casting the distribution-constrained offline RL, i.e., distribution matching problem, as a support matching problem, has been discussed by prior methods. Also, many important and well-benchmarked domains and tasks are missing in the experiments, which makes it less clear about the performance of the method. Overall, I do think this is an interesting paper, but given the current limitations and issues, it might not be ready for publication yet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. Regarding novelty, we would like to clarify that while prior works exist that utilize re-weighting with offline RL algorithms, in this work, we not only develop a method for re-weighting distributional constraint methods but also analyze challenges with distribution constraints on heteroskedastic datasets. That is, our contribution isn’t just in developing a re-weighting method for offline RL, but in understanding challenges with existing methods and showing how re-weighting can alleviate these challenges. To further address your concerns, we compare ReDS to the method of Hong et al. 2023 and find ReDS outperforms this method, indicating its efficacy. We discuss this and other questions in detail below. **Please let us know if your questions are resolved, and if so, we would appreciate it if you could update your score. We would be happy to discuss any further questions.**
***Comparisons with Hong et al. 2023***
Thank you for pointing out these comparisons. We do believe that these methods are quite relevant and we will discuss them in the related works section of the paper. We also ran experiments on the heteroskedastic AntMaze domains to evaluate these methods and present our results below. In brief, we found that ReDS still outperforms this prior method.
**RW and AW (Hong et al. 2023):** We tried the two variations that the authors use for weighting: Reward Weighting (RW) and Advantage Weighting (AW), using the authors’ hyperparameters for the D4RL AntMaze tasks. We compared against the best-performing variation for each dataset. For these same variations and hyperparameter settings, we are also able to replicate the authors’ performance on the D4RL AntMaze tasks.
Note that the performance of RW and AW is significantly worse than that of both the base algorithm (IQL) used by these methods and our approach, CQL (ReDS). This isn’t contradictory: the authors’ reported numbers are also outperformed by IQL (baseline; with uniform sampling) on the D4RL AntMaze tasks.
A full comparison is found below:
| Task & Dataset | EDAC | BEAR | CQL | IQL | INAC | RW and AW | ReDS [Ours] |
|:--------------:|:------------:|:-----------:|:---------:|:---------:|:----------:|:---------:|:---------------:|
| medium-noisy | 0 | 0 | 55 | 44 | 0 | 5 | **73** |
| medium-biased | 0 | 0 | **73** | 48 | 0 | 0 | **74** |
| large-noisy | 0 | 0 | 42 | 39 | 0 | 10 | **53** |
| large-biased | 0 | 0 | **50** | 41 | 0 | 8 | 45 |
We weren’t able to compare with Yue et al. in time for the rebuttal, but will do so in the final version.
**Why does ReDS outperform AW and RW?:** We believe this limitation stems from the re-weighting of entire trajectories. We wouldn’t expect such a strategy to help with heteroskedasticity, especially when states with both narrow and wide behavior action distributions appear in the same trajectory, such as in the heteroskedastic AntMaze or even in our didactic example in Figure 1. Reweighting a trajectory wouldn’t provide precise-enough control to change the behavior distribution to impose the correct state-specific re-weighting.
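The contrast can be made concrete with a toy snippet (our illustration; the states, entropies, and weighting functions below are made up): a trajectory that mixes narrow and wide states receives a single trajectory-level weight, whereas a per-state scheme can treat the two states differently.

```python
# Toy contrast (our illustration, not either paper's code) between
# trajectory-level weights (one scalar per trajectory, RW/AW-style)
# and per-state weights: a trajectory mixing a narrow-support state
# and a wide-support state gets ONE trajectory weight, so both kinds
# of states are scaled identically.
trajectory = [
    {"state": "hallway", "behavior_entropy": 0.05},    # narrow: keep constraint tight
    {"state": "open_room", "behavior_entropy": 1.30},  # wide: loosen constraint
]

traj_weight = 0.8                       # single scalar for the whole trajectory
trajectory_level = [traj_weight for _ in trajectory]

# a per-state scheme can assign a different weight to each state
# (here a made-up rule: down-weight as behavior entropy grows)
state_level = [1.0 / (1.0 + t["behavior_entropy"]) for t in trajectory]

print(trajectory_level[0] == trajectory_level[1])  # True: no per-state control
print(state_level[0] == state_level[1])            # False
```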
***As a standard benchmark for offline RL, the medium-level locomotion tasks, the adroit tasks, and the original AntMaze tasks are missing from the paper***
While the standard D4RL tasks provide an effective way to benchmark offline RL methods, our focus in this work was to validate if ReDS can be effective with heteroskedastic datasets. Since D4RL tasks consist of data from policies that typically exhibit uniform action entropy across states, they tend to not be heteroskedastic as observed in Table 2a. Due to this reason, we thought it was more important to study the performance of ReDS on heteroskedastic datasets while making sure it does not degrade performance on a subset of standard D4RL tasks (as in Table 1). That said, we are running additional experiments on medium locomotion tasks and Antmaze tasks, and will report back on the status of our results before the end of the discussion period.
***Support matching methods for offline RL can be algorithm-agnostic***
While we agree that support-matching methods for offline RL can be algorithm-agnostic, in this paper, we attempted to instantiate ReDS for CQL: an instantiation specific to methods that use a conservative regularizer like CQL (or related approaches like COMBO). Here, we modify the distribution for push-up and push-down in the CQL regularizer (Figure 3), but these loss terms aren’t present in other algorithms such as policy constraints. We would like to clarify that we don’t claim that ReDS is algorithm-agnostic. Other works in offline RL also propose algorithms that aren’t general methodologies (e.g., TD3+BC, IQL, etc.). As such, we believe that the lack of a general methodology should not be used for rejecting the paper, provided the method attains good results.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' comprehensive rebuttal and the inclusion of additional experiments. The rebuttal effectively addresses some of my concerns. Regarding the additional experiments conducted on antmaze, I share a similar concern as Reviewer uYnF. It looks interesting to me that CQL is achieving a fairly strong performance on these harder tasks (comparable with ReDS on medium-biased and outperforming ReDS on large-biased). Specifically, the results are even better than the reported results of CQL on the less challenging original antmaze domains. Please find below the original reported results.
| | CQL |
|---------------------------|:----:|
| antmaze-medium-play-v0 | 61.2 |
| antmaze-medium-diverse-v0 | 53.7 |
| antmaze-large-play-v0 | 15.8 |
| antmaze-large-diverse-v0 | 14.9 |
It makes me wonder how much of the strong performance of ReDS comes from CQL and how much it comes from the algorithmic advancements. It would be much appreciated if you could give some additional explanations. In addition, I checked the appendix and found that ReDS is developed based on the implementation of JaxCQL. I happened to use this repository before and failed to reproduce CQL's results on antmaze. Could you kindly share your full hyper-parameters?
I'd love to increase my rating if the concern is addressed.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer huak
Comment: Thank you for your reply! The full range of hyperparameters that we use for JaxCQL is shown below. Using this set of hyperparameters, we were able to reproduce the numbers for CQL on the JaxCQL repository, as other works have also done in the past: e.g., Cal-QL (Nakamoto et al. 2023) and the CQL implementation in the open-source CORL (Clean Offline RL) library (Tarasov et al. 2022): https://github.com/tinkoff-ai/CORL.
| Hyperparameters | Values |
|:---------------------------:|:------------------:|
| CQL Lagrange | True |
| CQL Lagrange Target Action Gap| 0.8 |
| Critic Network | 256-256-256-256 |
| Actor Network | 256-256 |
| Reward Scale | 10 |
| Reward Bias | -5 |
| Critic Lr | 3e-4 |
| Actor Lr | 3e-4 |
**Where does improvement in ReDS come from?** Of course, since ReDS builds directly on top of CQL, its performance will be heavily affected by the performance of the CQL algorithm. So we would expect improvements in design decisions for CQL to improve the performance of ReDS as well. That said, please note that across our experimental results, we do observe a significant improvement with using the weighting proposed by ReDS: for example, in Table 3 on the visual manipulation tasks and in Figure 5, on the Atari tasks, as well as the noisy antmaze tasks. This improvement from ReDS is not explained by baseline CQL as we utilize the same hyperparameters for ReDS, and hence the gains can only be attributed to the weighting prescribed by ReDS.
**CQL performance on AntMaze:** With regards to the performance of CQL on antmaze tasks, note that while the original CQL paper studies the antmaze-v0 tasks, D4RL deprecated these tasks in 2021 and upgraded to antmaze-v2 datasets, which fixed inconsistencies between terminals and rewards in the antmaze-v0 datasets (https://github.com/Farama-Foundation/D4RL/commit/49699950ae0c00501114b420626e956c0437d00e). We suspect this is the reason for performance improvement for the baseline CQL from reported v0 to our v2 results. To address your point about results on the heteroskedastic datasets, we note that on the D4RL medium antmaze datasets, CQL attains a larger performance than what it attains on our -biased and -noisy datasets. Finally, as we also note in Lines 364-365 in the submission, our results used oracle policy selection for reporting results for all methods, following trends in recent works such as ATAC (Cheng et al. ICML 2022; best paper runner-up) due to a lack of an early stopping algorithm for offline RL methods. However, D4RL reports last iterate results, which also partly explains a smaller gap in performance on the heteroskedastic antmazes and the D4RL antmaze. We will add a detailed discussion in the updated paper.
**We would be grateful if you are willing to raise your score if your concerns are addressed. We would be happy to discuss further if you have any remaining questions.** | Summary: The paper has identified the presence of heteroskedasticity in realistic offline RL datasets, which negatively impacts the performance of existing offline RL methods that rely on distributional constraints. To tackle this issue, the authors introduce a new approach called ReDS, which transforms distributional constraints into support-based constraints. Several novel heteroskedastic datasets were introduced in order to showcase the superior performance of ReDS compared to existing offline RL methods.
To practically achieve the support constraint, the fundamental change based on CQL is to add a new penalty term $\pi_{re}$, which can be obtained via
$\arg \max_{\rho_\psi} \mathbb{E}_{s \sim \mathcal{D}, a \sim \pi_\beta(\cdot \mid s)}[\log \rho_\psi(a \mid s) \cdot \exp (-A_\theta(s, a) / \tau)]$.
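This weighted maximum-likelihood fit can be sketched for a single state with discrete actions (our illustration with made-up advantages, not the paper's implementation):

```python
import numpy as np

# Minimal sketch (ours, not the authors' code) of the weighted MLE above:
# fit rho(a|s) by gradient ascent on E[log rho(a|s) * exp(-A(s,a)/tau)]
# for one state with 4 discrete actions, rho parameterized via softmax.
rng = np.random.default_rng(0)
n_actions, tau = 4, 1.0
dataset_actions = rng.choice(n_actions, size=500, p=[0.4, 0.3, 0.2, 0.1])
advantages = np.array([1.0, 0.0, -1.0, -2.0])   # toy A(s, a) per action

logits = np.zeros(n_actions)
for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    weights = np.exp(-advantages[dataset_actions] / tau)
    # gradient of sum_i w_i * log p(a_i) w.r.t. softmax logits
    one_hot = np.eye(n_actions)[dataset_actions]
    grad = (weights[:, None] * (one_hot - probs)).sum(0)
    logits += 0.01 * grad / len(dataset_actions)

# rho up-weights in-support actions with LOW advantage (action 3 over 0)
print(probs[3] > probs[0])
```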
Strengths: 1. Well-written, with high-quality examples, and technically sound.
2. Extensive evaluations over many different tasks.
Weaknesses: 1. The design of the reweighted version of the behavior policy in Eq. 6 is heuristic. I cannot see a clear reason why the coefficients are designed as 1/2 and 1/2.
2. The experiments in the noisy and biased version of Antmaze tasks seem contrived.
3. Inconsistency of differential concentrability between the theory and the statistic (std) measured on real-world practical datasets.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Could the policy $\pi$ in Definition 3.1 be any policy? What if it is an arbitrary worst policy, i.e., random policy? I do not see clear evidence that $C_{diff}$ is big enough when $D(\pi, \pi_{\beta})$ is arbitrarily large in both $s_1$ and $s_2$. On the other hand, should $s_1, s_2 \sim d^{\pi}$ without any other condition? What if they are all sampled from the same place, i.e., the narrow hallways in the didactic example?
2. Any detail about how to compute the std in Chapter 4.2?
3. How well does ReDS perform over non-heteroskedastic datasets (random, medium, expert)? Is there any limitation to on non-heteroskedastic datasets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the feedback and positive assessment of our work. We address your concerns below and will update the paper to clarify each of the questions. Please let us know if your questions are resolved, we are happy to discuss further if any questions are remaining.
***Inconsistency of differential concentrability between the theory and the statistic (std) measured on real-world practical datasets.***
The intuitive explanation for what makes a dataset heteroskedastic -- namely, that the variability in the policy is different in different states -- is provided primarily to build intuition. Unfortunately, this notion by itself does not make a great formal definition, because it does not capture the count of states $n(s)$ that shows up in a safe-policy improvement bound (the constraint in Equation 5). Therefore, in our analysis, we use the somewhat more complex notion of differential concentrability, which captures a similar intuition (we make the mathematical connection between the intuition of heteroskedasticity and our definition precise below) but provides us with the foundation on which to build our theoretical analysis.
To better understand the relationship between the variance of the action distribution and our Definition 3.1, consider a simpler scenario where we remove the counts $n(s)$ from the expression of differential concentrability and set $\pi$ in $C^\pi_\text{diff}$ to be the uniform distribution over actions. Then, we can show that the value of $C^\pi_\text{diff}$ is **exactly** equal to twice the variance of entropy of the dataset action distribution across states, which matches with the metric (variance = square of the standard deviation) we measure in our experiments.
Of course, we cannot exclude the counts of states $n(s)$ for technical accuracy, however, it should be noted that in high-dimensional state space tasks, such as those examined in our experiments, each state in the offline dataset is likely to be unique, thus validating the assumption that $n(s) = 1$. This means that measuring the variance in the entropy of the action distribution is an accurate estimate of differential concentrability in our experiments.
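The simplified quantity discussed above can be illustrated with a toy snippet (ours, not the paper's code; the two datasets below are made up): with $n(s) = 1$, heteroskedasticity is measured by the variance of the behavior action distribution's entropy across states.

```python
import numpy as np

# Illustrative sketch (not the authors' code): variance of per-state
# behavior-action entropy across states as a heteroskedasticity measure.
def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Two toy datasets over 3 states x 4 actions.
uniform_everywhere = np.full((3, 4), 0.25)       # same entropy in every state
mixed = np.array([[0.25, 0.25, 0.25, 0.25],      # wide distribution here ...
                  [0.97, 0.01, 0.01, 0.01],      # ... narrow distribution here
                  [0.25, 0.25, 0.25, 0.25]])

for name, d in [("uniform", uniform_everywhere), ("mixed", mixed)]:
    ents = np.array([entropy(row) for row in d])
    print(name, ents.var())   # mixed has strictly larger variance
```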
***Could the policy $\pi$ in Definition 3.1 be any policy?***
While the choice of the policy $\pi$ in the _expression for_ differential concentrability can be any policy, we would like to clarify that we utilize the value of $C^\pi_\text{diff}$ only for the policy learned via distributional constraint methods to define how heteroskedastic a dataset is. This is akin to the definition of standard concentrability which, in principle, can be computed for any policy but only its value at the optimal policy is a useful measure of the hardness of the offline RL problem.
**I do not see clear evidence that $C^\pi_\text{diff}$ is big enough when $D(\pi, \pi_\beta)$ is arbitrarily large in both $s_1$ and $s_2$.**
We would like to clarify that to the best of our knowledge, we did not claim this in the paper. In fact, in lines 163-168, we claim otherwise: we say that $C^\pi_\text{diff}$ can be _small_ even when $D(\pi, \pi_\beta)$ is large enough because it might take similar values at different states. Does this clarify your question?
**Should $s_1, s_2 \sim d^\pi$ without any other condition? What if they are all sampled from the same place?**
Please note that $d^\pi$ refers to the state occupancy distribution of the policy that is attempting to maximize the conservative offline RL objective (Equation 1). Intuitively, this means that sampling $s_1$ and $s_2$ from $d^\pi$ should be enough because $d^\pi$ is, in effect, close to $d^{\pi_\beta}$. Thus, when the dataset behavior distribution has a lot of variability across states, this variability would also be reflected on states sampled from $d^\pi$.
***Any detail about how to compute the std in Chapter 4.2?***
We apologize, but we are not sure which table this question is referring to. Could you please provide a clarification? If you are referring to Table 2, the standard deviation is computed by first recording the value of KL divergence between a model of the behavior policy (trained by maximizing the log-likelihood of the dataset) and the policy learned by CQL for the best hyperparameter $\alpha$ for all states in the dataset, and then, computing the standard deviation in this divergence value across all states in the dataset.
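The statistic described above can be sketched as follows (our illustration, not the authors' code; the Dirichlet-sampled policies below are stand-ins for the fitted behavior model and the learned CQL policy):

```python
import numpy as np

# Sketch (not the authors' code): per-state KL between a behavior-policy
# model and the learned policy, then the standard deviation of that KL
# across all dataset states.
def kl(p, q):
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(0)
n_states, n_actions = 100, 4
behavior = rng.dirichlet(np.ones(n_actions), size=n_states)  # stand-in model
learned = rng.dirichlet(np.ones(n_actions), size=n_states)   # stand-in policy

per_state_kl = np.array([kl(p, q) for p, q in zip(behavior, learned)])
print(per_state_kl.std())   # the number that would be reported per dataset
```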
***Why the coefficient is designed as $\frac{1}{2}$ and $\frac{1}{2}$.***
Setting this coefficient to a value of $\frac{1}{2}$ was the initial choice for our experiments. We found this choice was convenient: it worked well in practice and circumvented the need for introducing a new hyperparameter. That said, future work could aim to study the sensitivity of ReDS to other values of this coefficient, which we believe will only improve the performance of this approach.
***How well does ReDS perform over non-heteroskedastic datasets (random, medium, expert)? Is there any limitation to non-heteroskedastic datasets?***
We are running these experiments and will get back to you before the end of the author-reviewer discussion period. However, note that we do already have experiments on some standard and commonly used non-heteroskedastic benchmark tasks in Table 1.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 5pSd
Comment: I appreciate the authors' response to my previous comments. Concerning the standard deviation, I was referring to Table 2. I apologize for the ambiguity in my previous question. Most of my concerns have been addressed.
However, I still have a few points that require further clarification in Definition 3.1. To clarify my question: I believe that $C_{\text{diff}}^{\pi}$ would be **small** when the learned policy $\pi$ is arbitrarily bad (i.e., a random policy), because both $D(\pi, \pi_{\beta})(s_1)$ and $D(\pi, \pi_{\beta})(s_2)$ would be large enough to make $C_{\text{diff}}^{\pi}$ small.
Additional Questions:
I have also reviewed comments from other reviewers. Reviewer uYnF mentioned advanced methods within the in-sample learning paradigm based on SAC, such as InAC. It would enhance the completeness of this paper if experiments and analysis of these advanced in-sample methods [1, 2] were conducted. For instance, SQL (Xu et al., 2023, Chapter 5.2) demonstrates that sparsity in value function learning can be beneficial in noisy datasets (heteroskedastic datasets) and outperform IQL.
[1] Xu H, Jiang L, Li J, et al. Offline rl with no ood actions: In-sample learning via implicit value regularization[J]. ICLR, 2023.
[2] Garg D, Hejna J, Geist M, et al. Extreme q-learning: Maxent RL without entropy[J]. ICLR, 2023.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 5pSd
Comment: Thank you for your prompt response! We are glad that most of your concerns are addressed. Regarding your question about the definition of $C^\pi_\text{diff}$: you are right that this value would be small when $\pi$ is a random policy such that $D(\pi, \pi_\beta)(s)$ is large. However, please note that to decide when a given offline RL problem is heteroskedastic and, more importantly, to identify when distribution constraints algorithms will fail, Theorem 3.2 only considers the value of $C^\pi_\text{diff}$ for $\pi := \pi_\alpha$ (i.e., policies obtained by optimizing the generic offline RL objective in Equation 1) and not just any policy $\pi$. Below, we enumerate and intuitively discuss various cases in which a generic offline RL objective would produce a random policy at all states and reason as to why in these settings a low value of $C^\pi_\text{diff}$ is still consistent with the behavior of distribution constraint algorithms. There are two possible scenarios under which the learned policy can be a random policy:
1. **The behavior policy is highly random at all states as well**, in which case the offline RL problem is not heteroskedastic because the variability in the behavior distribution is similar across all states (and thus, a small value of $C^\pi_\text{diff}$ correctly identifies this problem as non-heteroskedastic). In this case, we can improve the learned policy performance by tuning the value of $\alpha$ in Equation 1.
2. **When the behavior policy action distribution is not random, but it does not cover any action that maximizes reward** (and hence optimizing Equation 1 recovers a high-entropy policy as it attempts to maximize the sum of two conflicting objectives: reward and divergence against the behavior policy). For concrete understanding, consider a setting when the reward is binary, and no actions in the dataset attain a +1 reward. This problem is simply a hard offline RL problem as no (near-)optimal behavior is covered by the offline dataset. It is not necessarily heteroskedastic, as we would not expect support constraint algorithms to offer a distinct benefit over distribution constraint algorithms, all of which will perform poorly. This is because the behavior policy does not cover good actions in any state at all and all actions inside the support attain poor performance. Hence, a small value of $C^\pi_\text{diff}$ also describes the heteroskedasticity challenges associated with this scenario, by triviality.
Hence, when considering only policies that maximize the generic offline RL objective, $C^\pi_\text{diff}$ will be large only when distribution constraint algorithms for any $\alpha$ fail because a single $\alpha$ is insufficient to modulate the strength of the behavior regularization at all states.
**Regarding your additional question,** Thank you for the suggestion! We are going to run experiments with these in-sample learning methods as well and add them to the final version of the paper. | Summary: The paper introduces ReDS, a novel offline RL method designed to handle heteroskedastic datasets. ReDS incorporates support constraints by reweighting the data distribution based on conservative Q-learning (CQL). This allows the learned policy to deviate from the behavior policy within its support while maximizing long-term return.
Strengths: 1. The addressed problem of a more fine-grained support constraint in offline RL research is crucial.
2. The paper includes thorough discussions about the proposed method.
3. The introduction of the method, which utilizes reweighting distribution constraints to achieve a fine-grained support constraint, is clear and reasonable. The explanation in Figure 3 is particularly intuitive.
4. The method demonstrates significant performance advantages, especially when applied to heterogeneous datasets.
Weaknesses: 1. The didactic example does not convincingly support the claims. For example, it does not convincingly demonstrate the characteristics of heterogeneous datasets in key states (such as the bottom-left corner of the map) and their impact on the CQL and ReDS algorithms. Additionally, the results presented in Figure 1 show that the baseline algorithms AWR and CQL fail, while ReDS does not. However, has this demonstration accounted for the potential impact of statistical errors?
2. The baseline algorithm (CQL) seems somewhat outdated. It would be necessary to discuss and compare with more advanced methods that satisfy the support constraint, such as InAC[1].
3. There are several spelling errors throughout the paper. A careful proofreading is necessary. For instance, the capitalized "w" in "With" in the title should be lowercase, and the occurrence of "WSome" in line 419 is incorrect.
[1] Xiao, Chenjun, et al. "The in-sample softmax for offline reinforcement learning." arXiv preprint arXiv:2302.14372 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main concern relates to the points mentioned in the weaknesses section. Additionally, there is another question that requires clarification:
Q: Is the proposed method plug-and-play? If so, why not integrate it with an updated or foundational algorithm (e.g., SAC+vanilla distribution constraint)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses the limitations, and these limitations do not appear to require immediate attention.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and positive assessment of our work. To address your concerns, we add 2 comparisons to the support-constraint methods InAC and Hong et al 2023. We also explain how we took statistical errors into account when we compute the statistics for the didactic example.
**Please let us know if our answers resolve your concerns, and if so, we would truly appreciate it if you are willing to upgrade your assessment. We are happy to discuss any further questions.**
___
***Additional baselines; CQL is outdated***
We add two additional baselines, comparing our method ReDS to InAC and Hong et al. 2023 on the heteroskedastic AntMaze datasets.
**InAC [Xiao et al. 2023]** We find this approach fails to attain non-zero performance on the heteroskedastic AntMaze tasks. We tuned InAC by sweeping the learning rate and temperature $\tau$ over the following values:
| Hyperparameters | Values |
|:-------------------:|:-----------------:|
| Tau | 0.5, 0.33, 0.1, 0.01 |
| Learning Rate | 1e-3, 3e-4, 1e-4, 3e-5 |
For every value in the sweep, we were unable to obtain non-zero performance. We also ran this method on the standard D4RL AntMaze tasks and were unable to attain non-zero performance even there. Note that our findings do not contradict the results reported in the InAC paper, since that paper does not study these tasks. This indicates that ReDS outperforms InAC on the heteroskedastic AntMaze tasks.
**Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting [Hong et al. 2023]**: Using the hyperparameters the authors used for their AntMaze results, performance on the heteroskedastic AntMaze tasks is non-zero but lower than that of ReDS. We tried the two weighting variations the authors recommend, Reward Weighting (RW) and Advantage Weighting (AW), and compared against the best-performing variation for each dataset. With these same hyperparameters, we verified performance on the standard D4RL AntMaze tasks, which matches the authors' results.
For both algorithms, we used the authors' implementations, averaging over 4 seeds and reporting the maximum performance after 1M training steps.
Additionally, in the paper, we compare against two prior methods, BEAR and EDAC, which enforce support constraints, and find that ReDS outperforms all of these prior methods. We are happy to add more comparisons if the reviewer has any suggestions for the final version of the paper.
A full comparison can be found below:
| Task & Dataset | EDAC | BEAR | CQL | IQL | INAC | RW and AW | ReDS [Ours] |
|:--------------:|:------------:|:-----------:|:---------:|:---------:|:----------:|:---------:|:---------------:|
| medium-noisy | 0 | 0 | 55 | 44 | 0 | 5 | **73** |
| medium-biased | 0 | 0 | **73** | 48 | 0 | 0 | **74** |
| large-noisy | 0 | 0 | 42 | 39 | 0 | 10 | **53** |
| large-biased | 0 | 0 | **50** | 41 | 0 | 8 | 45 |
***Didactic example doesn’t convincingly demonstrate the characteristics of heterogeneous datasets in key states.***
The didactic gridworld consists of narrow hallways and wide rooms that periodically repeat throughout the path the policy needs to take to successfully solve the task. While in Figure 1(a) we visualize the action distributions at locations A and B only, similar variability arises in the behaviors in the bottom-left corner of the map. Regarding why AWR and CQL get stuck at the bottom-left corner: while these policies are still able to escape the first two wide rooms along the path, they are unable to do so a third time at the bottom of the maze. This is not due to the particular nature of that location, but to the general periodic pattern of the behavior distribution. We are happy to edit this visualization for further clarity if you have any suggestions.
**However, has this demonstration accounted for the potential impact of statistical errors?**
Yes, we do account for statistical errors: **(1)** the offline dataset provided to all algorithms is of finite size, meaning that these algorithms, including ReDS, must still learn in the face of statistical error; and **(2)** for visualizing how the policies behave upon evaluation, we directly plot the discounted, long-term state-action visitation following the visualizations in Fu et al. 2019, rather than plotting empirical rollouts of each algorithm, so our visualizations do take the impact of statistical errors into account. Note that the (discounted) visitation distribution of a policy is computed using T steps of forward propagation.
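For intuition, this forward-propagation computation of a discounted visitation can be sketched in a few lines on a toy tabular MDP. This is our own illustrative sketch, not the authors' code; the MDP, `gamma`, and function names are invented for the example:

```python
import numpy as np

# Toy 3-state, 2-action MDP (invented for illustration):
# P[s, a, s'] are transition probabilities, pi[s, a] is the policy.
P = np.zeros((3, 2, 3))
P[0, 0, 1] = 1.0   # action 0 in state 0 moves to state 1
P[0, 1, 0] = 1.0
P[1, 0, 2] = 1.0
P[1, 1, 0] = 1.0
P[2, :, 2] = 1.0   # state 2 is absorbing
pi = np.array([[0.9, 0.1], [0.8, 0.2], [0.5, 0.5]])

def discounted_visitation(P, pi, s0, gamma=0.9, T=200):
    """Approximate d(s) = (1 - gamma) * sum_t gamma^t * Pr(s_t = s)
    via T steps of forward propagation of the state distribution."""
    # Policy-induced state transition matrix:
    # P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s']
    P_pi = np.einsum('sa,sax->sx', pi, P)
    mu = np.zeros(P.shape[0])
    mu[s0] = 1.0
    d = np.zeros_like(mu)
    for t in range(T):
        d += gamma ** t * mu
        mu = mu @ P_pi
    return (1 - gamma) * d

d = discounted_visitation(P, pi, s0=0)
```

The returned vector sums to roughly one and places mass on the states the policy actually visits in the long run, which is the kind of quantity the visitation plots described above show.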
***Is the proposed method plug-and-play?***
As stated in Lines 210-224, the principle of combining re-weighting with distributional constraints is plug-and-play, applicable to any distributional constraint algorithm including SAC + distributional constraint methods. However, the instantiation of ReDS we develop in Section 4.1 is specific to methods that utilize a conservative regularizer such as CQL (or related approaches like COMBO). This is because our current instantiation of ReDS modifies the distribution for push-up and push-down in the CQL regularizer (Figure 3), but these loss terms are not present in policy constraints. We clarify that our main contribution in this work is an analysis of when distributional constraints fail (which we study for AWR and CQL), and developing a principle for reformulating distributional constraints to approximate support constraints via reweighting. We validate this through CQL and leave as future work the application of this framework to other offline RL algorithms.
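As context for where such a reweighting would plug in, a CQL-style conservative regularizer for a single state with discrete actions can be sketched as below. This is a simplified schematic with invented numbers, not the exact objective of CQL or ReDS:

```python
import numpy as np

def cql_regularizer(q_values, pi_probs, beta_probs):
    """Schematic conservative regularizer for one state with discrete actions:
    push Q down under the learned policy's distribution and push it up under
    the behavior (dataset) distribution. A ReDS-style reweighting would swap
    one of these distributions for a reweighted version."""
    push_down = np.sum(pi_probs * q_values)
    push_up = np.sum(beta_probs * q_values)
    return push_down - push_up

q = np.array([1.0, 3.0])
pi = np.array([0.2, 0.8])     # learned policy prefers the high-Q action
beta = np.array([0.9, 0.1])   # dataset mostly contains the low-Q action
reg = cql_regularizer(q, pi, beta)   # positive: penalizes unsupported optimism
```

Because the push-down and push-up terms are explicit expectations over distributions, modifying those distributions (as ReDS does) is straightforward here, whereas a pure policy-constraint method has no such loss terms to modify.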
***Careful proofreading is necessary*** Thank you for the suggestion! We will thoroughly proofread to ensure the final paper is polished.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. Most of my concerns have been addressed. However, it's quite strange that InAC cannot achieve non-zero performance while IQL can even on original Antmaze datasets. I would love to raise my score if you could give more analysis.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer uYnF
Comment: Thank you for your reply! We are glad that most of your concerns have been addressed. With regards to InAC, please note that the original InAC paper did not study these datasets and we were unable to find any other paper that evaluated the InAC method on the D4RL AntMaze datasets. In fact, one of the very recent papers which came out on arXiv after the NeurIPS deadline – In-Sample Policy iteration (Hu et al. 2023) – incorporated InAC as a baseline, but did not evaluate it on the medium and large AntMaze datasets (observe that Table 1 in https://arxiv.org/pdf/2306.05726.pdf reports a “-” for InAC on AntMazes).
**Experiments:** We ran a sweep over some hyperparameters of InAC: temperature $\tau$ and the learning rate for the critic, using the InAC’s recommended hyperparameters and implementation from Appendix B in their paper. All other details such as the network architecture are standard to these environments, consistent with other baselines such as CQL, IQL as well as our method. In particular, we utilize the following set of hyperparameters:
| Hyperparameters | Values |
|:-------------------:|:-----------------:|
| Tau | 0.5, 0.33, 0.1, 0.01 |
| Learning Rate | 1e-3, 3e-4, 1e-4, 3e-5 |
| Critic Network | 256-256-256-256 |
| Actor Network | 256-256 |
We also ran an additional experiment where we shift the rewards in the dataset to -1, 0 following IQL which found these values to do better on the AntMaze domain, using the same sweep parameters. But we were still unable to attain nonzero performance for InAC on these tasks.
**Empirical analysis:** In all of our runs, we found that the Q-values were heavily over-estimated: for example, we observed that the average Q-values and the value function in the dataset were always positive, although the rewards were -1 and 0 (e.g., a value of +254 for $\tau$=0.5 and learning rate of 0.01). We hypothesized that this was because the $\log \pi_\psi(a|s)$ in InAC (Equation 15) was negative and had a large magnitude. Subtracting this log probability prevented the value backup from focusing on the reward signal and hence the policy did not learn effective behavior. In further support of our hypothesis is the fact that CQL and IQL do not backup the entropy term ($-\log \pi(a|s)$) utilized in SAC backups in their implementations.
To verify this hypothesis, we ran InAC by removing this log probability term from the value function training objective, and found that the learned value function attained values that were over-estimated less, and were able to find some settings of the hyperparameters (e.g., learning rate of 0.1 and tau of 0.5) where the policy attained a non-zero return of around 20%. However, note that doing this is technically incorrect according to the theoretical derivation in InAC, in the sense that it does not actually produce an in-sample optimal Q-function. That said, our analysis localizes this as one of the issues with InAC.
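The overestimation hypothesis above can be illustrated with a back-of-the-envelope fixed-point calculation (purely illustrative numbers, not InAC's exact update rule):

```python
# Invented numbers: rewards are in {-1, 0}; a soft backup of the form
# V(s) = E_a[Q(s, a) - log pi(a|s)] adds a -log pi(a|s) bonus. For a narrow
# continuous-action policy, log pi(a|s) can be a large-magnitude negative
# number, so the bonus can dominate the reward signal.
gamma = 0.99
log_pi = -5.0    # e.g. the log-density of an action under a narrow Gaussian
reward = -1.0

# Fixed point of V = (reward - log_pi) + gamma * V for a single recurring state:
v_soft = (reward - log_pi) / (1 - gamma)
# Fixed point of the same backup without the entropy-style bonus:
v_hard = reward / (1 - gamma)
```

Here `v_soft` comes out large and positive while `v_hard` is negative, mirroring the observation that the learned values were positive despite rewards of -1 and 0.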
**We would be grateful if you would be willing to raise your score in the light of the above clarifications and analysis. Please let us know if you have any more questions. Thank you so much!** | Summary: The paper focuses on heteroskedastic datasets, where the distribution of actions may not be uniform in certain states but close to uniform in other states. The authors propose the ReDS method, which re-weights the actions to penalize "bad" actions that perform poorly on the dataset but have a higher probability, while encouraging "good" actions with lower probabilities in the dataset's support. ReDS outperforms previous baseline methods in multiple experimental scenarios.
Strengths: The majority of the paper is written in a clear and easy-to-follow manner. The research problem addressed in this paper is indeed relevant to real-world scenarios, and the authors provide didactic examples to illustrate the potential issues with previous methods. The paper offers a detailed analysis and proposes a concise and intuitive plug-in solution called ReDS, while also demonstrating its effectiveness. The experimental settings are diverse, covering multiple domains.
Weaknesses: ReDS should apply in broader scenarios beyond the specific setting of heteroskedastic datasets. Another point is the choice of experimental settings, as most of them involve manually constructed datasets to satisfy heteroskedastic conditions, rather than utilizing more intuitive scenarios such as autonomous driving or validating whether some existing datasets are really heteroskedastic. Thus the significance is somewhat limited.
Minor issues:
The figures in the main text, such as Figures 1 and 2, rely heavily on the captions. Adding more conclusive statements in the main text could enhance consistency and clarity.
Table 4 --> Table 2?
Line 326 "some of the standard D4RL datasets which are not heteroskedastic in and find" --> extra "in"
Line 419 "WSome prior works ..."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Fig.1, I wonder if the datasets cover the whole room B or only a point shown in the figure. If the datasets cover the whole room, AWR and CQL should be able to learn to exit room B, since other actions lead to lower returns.
2. As Line 326-327 stated, the authors have evaluated ReDS on some of the standard D4RL datasets which are not heteroskedastic and find that the addition of ReDS, as expected, does not help, or hurt on those tasks. Why does ReDS hurt the performance on some of the standard D4RL datasets? By Theorem 1, ReDS can at least have better performance.
3. Why do the experimental domains contain Atari?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors generated new datasets and did not conduct a universal analysis on whether heteroskedastic properties exist in some datasets. It would be beneficial to first employ a suitable method to effectively verify the presence of heteroskedastic properties in commonly used datasets before applying ReDS.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and positive assessment of our work. We address your concerns below and will update the paper to clarify each of the questions. **Please let us know if your questions are resolved, and if so, we would appreciate it if you are willing to upgrade your score. We are happy to discuss further if you have any remaining questions.**
***ReDS should apply in broader scenarios beyond the specific setting of heteroskedastic datasets***
When learning from non-heteroskedastic datasets, ReDS performs similarly to distribution-constraint algorithms, which indicates that ReDS can be used in any scenario where distribution-constraint algorithms apply. We do not believe this is a weakness of the method – the aim is not to beat all prior methods in all settings but to address a specific challenge in offline RL. We believe the paper is scoped carefully around this claim, but we would be happy to revise it if you believe this is unclear.
**The choice of experimental settings, … the significance is somewhat limited.**
While we agree that the choice of experimental settings that we study involves specifically constructed heteroskedastic datasets, we believe that this is an important first step in verifying if developing support constraint methods can help in scenarios with heteroskedasticity. The nature of this contribution is akin to prior papers such as Hong et al. ICLR 2023, which proposes methods for re-weighting offline data to deal with imbalanced datasets, but evaluates only on manually constructed datasets that add noise or mix two existing D4RL datasets. We believe that this sort of evaluation is still of value to the community as it provides a stress test for existing algorithms and provides a proof of concept for new ideas. Many of the challenges present in real-world RL problems are not well reflected in existing benchmarks such as D4RL. That being said, we agree with you that the next step in this line of work is to validate our methods on real-world datasets in autonomous driving or robotics, which are heteroskedastic. We discuss some of these scenarios intuitively in Section 3 and in the introduction. Finally, we also remark that our contribution in this paper is not just a novel algorithm, but also an analysis of why heteroskedasticity is hard for offline RL methods and understanding when support constraint methods can help. We believe that this in and of itself is valuable to the community developing offline RL algorithms.
***I wonder if the datasets cover the whole room B or only a point shown in the figure … AWR and CQL should be able to learn to exit room B since other actions lead to lower returns.***
We clarify that the dataset consists of several sampled state-action transitions inside the wide room. Note that not every state-and-action pair from within this room is observed in the dataset, but rather only a subset of these samples are observed. We only highlight one single point B in Figure 1(a) for the compactness of visualization.
More concretely, the dataset consists of sampled state-action-reward-next-state transitions that form a subset of the cross-product of the state and action spaces. The samples are not uniformly distributed; instead they are drawn from **a skewed action distribution which crucially attempts to keep the agent within the room**. This hinders the ability of algorithms such as AWR and CQL to learn how to exit wide rooms, because a strong distributional constraint will make the policy closely match this skewed behavior distribution. This can be seen in the left part of Figure 1, where in the narrow regions the data consists of state-action pairs that uniformly move the agent towards the goal, but in the wide rooms the data consists of state-action pairs that skew the agent towards the edges of the room. Therefore, even if more than one transition is sampled from inside a wide room, the learned policy obtained from distributional-constraint algorithms with a strong constraint will have an affinity to stay within the room and fail at the task.
***Authors have evaluated ReDS on some of the standard D4RL datasets. Why does ReDS hurt the performance of some of the standard D4RL datasets?***
Observe in Table 1 that ReDS performs similarly to or better than CQL on all of the presented D4RL tasks except the halfcheetah medium-expert task. We believe this difference is small (only 2.1 performance points) and hence not statistically significant on this task.
***Why do the experimental domains contain Atari?***
We chose Atari as a representative domain with discrete action spaces to study some of the heteroskedasticity challenges. Many offline RL papers (Badia et al. 2020, Kumar et al. 2022) that we are aware of study the Atari domain using the offline datasets generated by Agarwal et al. ICML 2020. Atari also requires learning from raw high-dimensional pixel observations, which adds to the breadth of the experiments in the paper, since the D4RL benchmarks only require learning from low-dimensional state spaces.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. ReDS is a re-weight-style module that could apply in a broader scope, though this is not a drawback of the method itself. I still think the experiments should contain more strongly related datasets, since heteroskedastic datasets exist but are not prevalent. As there is no free lunch (each offline algorithm has its most suitable tasks), it would be better if the authors could provide more domains that contain heteroskedastic data, or metrics to judge whether a dataset is heteroskedastic, so that we can figure out what kinds of tasks really suit ReDS.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer XMxS
Comment: Thank you for your reply and for engaging with us! We would like to note that we do provide a metric to judge heteroskedasticity in our experiments. The intuitive explanation for what makes a dataset heteroskedastic is that the variability in the behavior policy’s action distribution is different in different states. In the submission, we formalized this intuition by **developing the metric of differential concentrability** (Definition 3.1), such that a given offline RL problem and dataset is regarded as heteroskedastic if this measure of differential concentrability is high, and not otherwise.
**Practical metric to judge heteroskedasticity:** While this formal definition requires the count of states $n(s)$, which we do not have in high-dimensional continuous state spaces, we presented an approximation of this metric in our experiments that practitioners can utilize for judging whether a given offline dataset is heteroskedastic. As shown in **Table 2a**, this metric can be computed by first running standard offline RL methods such as CQL, and then looking at the standard deviation in the value of $f(s) = D(\pi_\text{CQL}(\cdot|s), \pi_\beta(\cdot|s))$ across states. In our experiments, we found this metric to be predictive of heteroskedasticity – this metric took a significantly higher value for datasets that we developed in our experiments compared to standard D4RL datasets, and we found ReDS improved performance in such scenarios.
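A minimal sketch of this practical metric follows. This is our own illustrative code: the choice of KL divergence, the array shapes, and the hand-set distributions are assumptions; the paper's experiments compute the divergence between a trained CQL policy and the behavior policy rather than between hand-written distributions:

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL divergence between two discrete action distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def heteroskedasticity_score(pi_learned, pi_beta):
    """Std across states of D(pi_learned(.|s), pi_beta(.|s)).
    pi_learned, pi_beta: arrays of shape [num_states, num_actions]."""
    divs = [kl(p, q) for p, q in zip(pi_learned, pi_beta)]
    return float(np.std(divs))

pi_cql = np.array([[0.95, 0.05], [0.95, 0.05]])
# Heteroskedastic: the behavior policy matches in one state but not the other,
# so the divergence varies a lot across states.
beta_hetero = np.array([[0.95, 0.05], [0.50, 0.50]])
# Non-heteroskedastic: the same mismatch in every state, so the std is ~0.
beta_homo = np.array([[0.80, 0.20], [0.80, 0.20]])
```

A high score flags the dataset as heteroskedastic; a near-zero score means the constraint-violation pattern is uniform across states.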
**More domains:** We are happy to experiment with more domains with heteroskedastic data in the final version of the paper. Since we already include studies on D4RL-like tasks, Atari tasks, image-based manipulation domains, and some gridworld domains, we wanted to request your suggestions on which domains would be the most helpful to add for the final.
Please let us know if this response addresses your concerns. We are happy to discuss more if you have any concerns remaining. Thank you so much! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models | Accept (poster) | Summary: The paper studies the problem of differentially private prompting, i.e. the scenario where a prompt-augmented LLM is exposed to users, which should be able to interact with the model, but should not be capable of extracting the private prompt prepended to their queries. The authors first show, that a membership inference attack against example data used in a prompt is feasible and easy to carry out (given access to model logprobs). They then show how to adapt the DPSGD algorithm to enable differentially private soft prompt learning (PromptDPSGD) and devise a new method for differentially private prompt learning in the discrete setting (PromptPATE) based on teacher-student knowledge transfer. Their evaluation shows that both methods are effective in protecting prompt privacy without sacrificing too much model performance under reasonable \epsilon values.
Strengths: * The paper is well-written and easy to follow, even though it covers a lot of ground. It motives the problem well by first demonstrating an effective MIA.
* The paper considers both the soft and discrete prompt learning setting. I think this is a good and pragmatic choice, as the most popular LLMs are API-guarded and typically do not provide access to gradients or hidden states, which are required for soft prompt learning.
* The paper explicitly considers efficiency and scalability of the proposed methods. This is important as state-of-the-art LLMs are very large and training them is expensive or even infeasible when accessible only via an API. Both the discrete and soft prompt learning methods are efficient and scalable, which is a major advantage over more data-hungry methods such as full model fine-tuning.
Weaknesses: * The shown membership-inference-attack relies on model logits (log probabilities), which are not always available to users (or developers). This means the attack will not work with e.g. the latest OpenAI models like ChatGPT or GPT-4 [1]. Further, even with access to logits via API, model vendors often add noise to the logits to prevent distillation attacks.
* While stated later on, the paper initially does not clarify that PromptPATE is specifically designed for the relatively simple few-shot LLM classification setting. It is not clear how well the methods would work in other settings, e.g. when the model is used for free-form text generation. For PromptDPSGD this is less of an issue, as it seems to be more of a general method for prompt learning.
* I rated the contribution only as "fair", because after reading the paper it seems like PromptDPSGD is a rather simple adaptation of DPSGD and PromptPATE also is a relatively simple idea. It may be helpful if the authors could highlight the novelty of PromptDPSGD and PromptPATE compared with existing work.
* The paper does not cover the case where attackers want to extract instructional information from a prompt, e.g. when a prompt not only contains example data, but also a description of the task and the desired LLM output. From what I understand, PromptPATE would not work very well in this case, since even with teacher-student transfer, the resulting student prompt would still have to contain the same (likely human-readable) instructional information as the teacher prompt. I recognize this is as a different problem, but it may be worth mentioning in the paper, as common LLM use has shifted heavily from example-driven prompting to instruction-driven prompting, with the advent of instruction-tuned and RLHF models like ChatGPT.
[1] Chat API documentation, OpenAI, URL: https://platform.openai.com/docs/api-reference/chat/create
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
* Can you comment on the effectiveness of your MIA attack against larger OpenAI models like `text-davinci-003`. From my own experience (which is anecdotal), logit noise is added to the API response for these larger models, which may make the attack less effective.
* Can you comment on MIA effectiveness with models that do not expose model logits at all, as is the case with ChatGPT and GPT-4 (see above)? Is prompt privacy still an issue in this case?
* Can you discuss the novelty of PromptDPSGD and PromptPATE compared to existing work like the standard DPSGD algorithm or existing discrete prompt optimisation approaches, to highlight the concrete contribution of the paper?
* Does PromptPATE and PromptDPSGD generalize to more free-form text generation tasks, e.g. when the model is used to generate longer text given a (private) prompt or exposed as a fully interactive chatbot? Can you protect against instruction extraction in these cases?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I think the paper should clarify early on, that it focuses on the restricted classification setting. This would help readers to better understand the specific problem the paper is trying to solve, where the more general prompt extraction problem should be considered as a separate problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Please find our detailed response in the following:
>**1. The novelty of PromptDPSGD and PromptPATE compared with existing work.**
- PromptDPSGD: Our work is the first to show strong utility for soft-prompt tuning with DPSGD. Compared with [25] and [54], our method optimizes orders of magnitude fewer parameters while keeping the original LLM frozen. We leveraged DPSGD libraries from full [25] and parameter-efficient [54] fine-tuning (LoRA) and combined them with P-tuning v2 to apply DPSGD to continuous prompts. We then carefully tuned the standard and privacy (hyper-)parameters, which resulted in good performance on many downstream tasks.
- PromptPATE: This is the first method for DP learning with LLMs that requires only input-output access to the model. Also, our instantiation of each building block of PATE is novel and original. In the following, we list the novelty w.r.t each building block of PATE:
- Teachers: We are the first to observe how to leverage the effectiveness of in-context learning for the design of teachers. Instead of training teachers from scratch, we notice that the same LLM (with different prompts) can be instantiated as a teacher ensemble. This does not only make obtaining the teachers more efficient but also vastly decreases the required number of private data points.
- Student: The naive way of training a student for PATE would be to obtain many labels from the ensemble and then train a model in a supervised way. This would consume a large privacy budget due to a large public dataset needed for supervised learning. Therefore, instead of distilling teacher knowledge into a student model, we distill it into a single prompt, which significantly differs from the original paradigm of PATE. It enables us to obtain high-performance prompts with a small number of public labeled data points, making PromptPATE significantly better in terms of privacy-utility trade-offs than the naive adaptation of PATE.
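A minimal sketch of the PATE-style noisy aggregation described above follows. This is our own illustrative code, assuming Laplace noise on the vote histogram; the exact aggregation mechanism and noise calibration used in the paper may differ:

```python
import numpy as np

def noisy_argmax_label(teacher_predictions, num_classes, noise_scale=1.0,
                       rng=None):
    """PATE-style aggregation: each teacher (here, the same LLM prompted with
    a different partition of the private data) votes a class label for one
    public query; Laplace noise on the vote histogram yields a privatized
    label that can be used to build/select the student prompt."""
    if rng is None:
        rng = np.random.default_rng(0)
    votes = np.bincount(np.asarray(teacher_predictions), minlength=num_classes)
    noisy_votes = votes + rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(noisy_votes))

# 10 teachers with strong consensus on class 1:
preds = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
label = noisy_argmax_label(preds, num_classes=2)
```

When teacher consensus is high, as with in-context learning on simple classification tasks, the noisy vote almost always returns the consensus label, so only a small privacy budget is spent per labeled public example.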
>**2. The effectiveness of MIA for models that do not expose model logits at all**
Thank you for the comment. First, we would like to highlight that the main purpose of our MIA is not to present a powerful attack against modern LLMs, but to serve as a motivating example demonstrating that private information from the prompts leaks through the predictions of the prompted model. Our MIA shows that the predictions of the prompted LLM can be measurably influenced by the prompt data, which motivates the need for privacy-preserving prompt learning algorithms (the main focus of our paper). Previous works on label-only MIA (Choquette-Choo 2021, Li 2021) show that membership signals cannot be removed by publishing only labels. These attacks approximate the data’s distance to the decision boundary by making multiple queries on perturbed inputs, and are shown to be as effective as attacks using confidence scores. It is feasible to apply these label-only MIAs to our prompt setup; however, since this is not the focus of our paper, we would like to leave it to future work.
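For concreteness, a confidence-threshold membership inference test of the kind described above can be sketched as follows. This is our own illustrative code with invented loss values and threshold; the attack details in the paper may differ:

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Flag examples on which the prompted model is unusually confident
    (low loss on the true label, computed from the returned log-probs)
    as likely members of the private prompt."""
    return np.asarray(losses) < threshold

member_losses = [0.05, 0.10, 0.02]     # examples the model saw in the prompt
nonmember_losses = [1.2, 0.9, 1.5]     # fresh examples from the same task
flags = loss_threshold_mia(member_losses + nonmember_losses, threshold=0.5)
```

The gap between member and non-member losses is exactly the membership signal that publishing only labels (rather than log-probs) makes harder, though not impossible, to exploit.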
>**3. The effectiveness of MIA attack against larger OpenAI models that might add noise to the logits**
Adding noises to the logits can decrease the success rate of MIA (in the limit, we can add enough noise to overshadow the membership signal) but this comes at the cost of lowering performance for legitimate users. Label-only MIA would be an effective attack against this defense strategy.
>**4. Does PromptPATE and PromptDPSGD generalize to more free-form text generation tasks?**
For PromptPATE, one way to extend it to generative tasks is by performing a private teacher vote for next-token predictions. Generating each next token can be thought of as a classification problem over the whole vocabulary. In fact, this idea has been used in SeqPATE (Tian et al. NeurIPS 2022) for fine-tuned models. We performed some preliminary experiments on one-shot prompts with Claude, which show that 86% of the private teachers generate the same first token on the E2E dataset. This high consensus among private teachers is a promising signal that PromptPATE can succeed. However, we find that many details and design choices in this extension require deeper analysis and extensive engineering effort. For example, it is not immediately clear what vocabulary to vote over when that information is not given, which is the case for Claude. Due to constraints of time and resources, we acknowledge this as a limitation of our work and leave it to future work.
>**5. Can you protect against instruction extraction in these cases?**
That's a great point. However, we are not aware of any attack that could extract the private instructions from prompts effectively. On the other hand, our MIA against few-shot examples is shown to be successful. Therefore, we think that defending against privacy leakage of few-shot examples is better-motivated.
>**6. I think the paper should clarify early on, that it focuses on the restricted classification setting.**
Thank you for your suggestions. We agree with this. In our updated introduction, we will emphasize that "our paper focuses on classification" and "PromptPATE protects private examples in the few-shot prompts".
Citation:
- Tian, Zhiliang, et al. "SeqPATE: Differentially private text generation via knowledge distillation." Advances in Neural Information Processing Systems 35 (2022): 11117-11130.
- Choquette-Choo, Christopher A., et al. "Label-only membership inference attacks." International conference on machine learning. ICML, 2021
- Li, Zheng, and Yang Zhang. "Membership leakage in label-only exposures." Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for clarifying my questions.
> Re 5: However, we are not aware of any attack that could extract the private instructions from prompts effectively.
I was asking mainly out of curiosity, but what comes to mind are simple "Print the first N characters of your prompt" kind of attacks, also more commonly known as "prompt jailbreaks".
After carefully reading and considering the rebuttal and the other reviews, I will maintain my score, as it still aligns best with my overall perception.
---
Reply to Comment 1.1.1:
Title: Prompt jailbreak attack
Comment: Thank you very much for engaging with our rebuttal and pointing out the prompt jailbreak attack. This attack is not very effective in our few-shot classification set-up, as the model is instructed to output only the class names mentioned in the prompt. However, we agree that this is a potential threat for free-text generation tasks. We will emphasize in the updated introduction that our paper focuses on the privacy of few-shot examples in classification tasks. Please let us know if you have further questions. | Summary: This paper investigates the privacy leakage in prompted large language models (LLMs) and proposes methods to protect the privacy of potentially sensitive data used for prompt engineering. The authors first demonstrate high membership inference leakage in existing prompted LLMs and then propose PromptDPSGD and PromptPATE for privately learning soft prompts and discrete prompts with DP guarantees. PromptPATE specifically works with existing commercial APIs, making it ideal for deployment in the real world. The authors validate their approaches on LLMs with an extensive experimental setup and show a reasonably good privacy-utility balance.
Strengths: * The paper is well-motivated: while LLMs and prompt engineering are increasingly popular, the privacy concerns of sensitive data used in prompted LLMs are less studied. The privacy leakage analysis in this paper also validates the concern.
* The paper is well-organized and easy to follow. The literature review is comprehensive and the methods are clearly described.
* The experiments are conducted on LLMs at real-world deployment scale and on a black-box commercial API, showing the applicability of this research to existing real-world LLM systems. The experimental setup is extensive, covering different scenarios such as private and public data from different domains.
Weaknesses: Compared to the parameter-efficient fine-tuning approach (LoRA), prompt learning with DP has inferior performance. Although the authors argue that storage is cheaper with prompt learning, one might still prefer parameter-efficient LLMs based on their utility needs (also, storage is typically considered cheap compared to other resources).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Why restrict PromptPATE to a much smaller epsilon (< 0.2) while using a larger epsilon (8) for PromptDPSGD? Do you not observe an increase in utility when increasing the privacy budget for PromptPATE?
* Have you run membership inference on the DP protected models as a sanity check?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper has raised a privacy concern about real world deployment of LLMs. The authors have acknowledged limitations in the appendix of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback! Please find our detailed response in the following:
>**1. One might still prefer parameter-efficient LLMs based on their needs on the utility.**
We want to emphasize that the main advantage of prompt learning with DP is its flexibility and applicability even when the LLM is deployed behind an API, which is nowadays standard (see, for example, the GPT family or Claude). DP with LoRA requires users to modify the model architecture, which almost all LLM APIs do not allow. In contrast, our PromptDPSGD only requires gradient access and PromptPATE only needs input-output access. This makes our methods more practical and applicable without full access to LLMs.
>**2. Why restrict PromptPATE to a much smaller epsilon < 0.2 while using a larger epsilon (8) for PromptDPSGD?**
Table 2 shows that our PromptPATE comes very close to the accuracy of the teacher ensemble, which is the upper bound of the student's performance. Figure 3b further shows that the performance plateaus after $\epsilon=0.2$ and labeling more data points is not helpful. To further improve the utility of PromptPATE, it is crucial to improve the performance of the teacher ensemble, which is a separate question from privacy.
>**3. Have you run membership inference on the DP protected models as a sanity check?**
We thank the reviewer for this suggestion. During the rebuttal period, we ran our MIA on the public prompts over 3 random trials for each dataset. The average ROC scores over the 3 trials are shown below. The AUC-ROC curves (each blue line corresponds to one student prompt for one trial) are in the pdf. The ROC scores are close to 0.5 and the curves are close to the random-guessing line, which shows that PromptPATE effectively prevents leakage from MIA.
| | sst2 | agnews | trec | dbpedia |
|-----|------|--------|------|---------|
| ROC | 0.49 ± 0.02 | 0.48 ± 0.03 | 0.51 ± 0.02 | 0.52 ± 0.04 | | Summary: This paper studies privacy preservation in the context of prompt learning for large language models (LLMs). It highlights the vulnerability of prompting data to membership inference attacks (MIAs) and proposes differential privacy (DP)-based defense methods for both soft and hard prompt learning. For soft prompts, the DP-SGD algorithm is utilized, while the PATE algorithm is adapted for hard prompts. Experiments on several common datasets and LM architectures demonstrate that the proposed algorithms can achieve good utility with a relatively small privacy budget.
Strengths: S1. This paper presents a timely study on privacy preservation for prompt learning.
S2. The paper addresses both soft and hard prompt settings and proposes suitable differential privacy (DP) algorithms for each. The PromptPATE algorithm includes in-depth adaptations to the data-efficient characteristics of prompt learning.
S3. It presents MIA to motivate the DP-based defense algorithms.
S4. This paper provides systematic experiments on several common datasets and LM architectures to demonstrate the effectiveness of the proposed methods.
S5. Very well-written and easy to follow.
Weaknesses: W1. PromptDPSGD is somewhat of a direct application of the original DPSGD to the soft prompt learning setting.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I noticed that different language model architectures and datasets are used for PromptDPSGD and PromptPATE. Could you provide further discussion on this difference?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately discussed the limitations and potential negative societal impact of their work in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Please find our detailed response in the following:
>**1. PromptDPSGD is somewhat of a direct application of the original DPSGD to the soft prompt learning setting.**
Our work is the first to show the good utility of soft prompt tuning with DP-SGD. Compared with [25] and [54], our method optimizes orders of magnitude fewer parameters while keeping the original LLM frozen. We leveraged libraries for DP-SGD from full [25] and parameter-efficient [54] fine-tuning (LoRA) and combined them with P-tuning v2 to apply DP-SGD to continuous prompts. Then, we carefully tuned the standard and privacy (hyper-)parameters, which resulted in good performance on many downstream tasks.
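One DP-SGD step restricted to the soft-prompt parameters can be sketched as below. This is a schematic, not the paper's implementation: the frozen LLM would supply `per_example_grads` with respect to the prompt embeddings, the shapes and hyperparameters are illustrative, and real training would use a DP library (e.g., Opacus) for the privacy accounting.

```python
import numpy as np

def dp_sgd_prompt_step(prompt, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update applied only to the soft prompt: clip each
    per-example gradient to clip_norm, sum, add Gaussian noise, then step."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) \
        + rng.normal(0.0, noise_mult * clip_norm, size=prompt.shape)
    return prompt - lr * noisy_sum / len(per_example_grads)

rng = np.random.default_rng(0)
prompt = np.zeros((10, 768))                      # 10 prompt tokens, 768-dim embeddings
grads = [rng.normal(size=(10, 768)) for _ in range(8)]  # one gradient per example
new_prompt = dp_sgd_prompt_step(prompt, grads, clip_norm=1.0,
                                noise_mult=0.5, lr=0.1, rng=rng)
```

The key point the sketch makes concrete is that only the prompt tensor (a few thousand parameters) is clipped, noised, and updated, while the LLM weights never change.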
>**2. I noticed that different language model architectures and datasets are used for PromptDPSGD and PromptPATE.**
The main reason why we use different models and datasets for soft and discrete prompts is to choose the most suitable set-up for each paradigm and keep consistent with each of its own previous works. As soft prompts can be applied to white-box LLMs, in order to provide a fair comparison with the previous DP fine-tuning methods [25,54], we use the same architecture (RoBERTa) and datasets as these two previous works. On the other hand, discrete prompts are mostly used with the decoder-only architecture like GPT3. In order to study the most suitable set-up for discrete prompts, our dataset choice and experiment design follows one of the pioneer works on in-context learning for GPT3 [58].
Also, we want to emphasize that our paper does not aim to compare the performance of PromptDPSGD and PromptPATE. Instead, we want to provide alternatives for people to use in different scenarios. For example, one should use PromptDPSGD with smaller, open-source LLMs that offer gradient access and when the downstream dataset is large enough. On the other hand, PromptPATE should be used for larger, commercial LLMs (such as GPT-3 and Claude) that provide only input-output access and when the sensitive dataset is small.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: I appreciate the author's rebuttal. I maintain my score and vote to accept.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback
Comment: Thanks for the positive feedback and voting to accept our paper. We deeply appreciate that. | Summary: This paper discusses the potential privacy risks associated with prompting data in large language models, which can be exposed through a membership inference attack. To address this issue, the authors propose two methods, PromptDPSGD and PromptPATE, for achieving private prompt learning. PromptDPSGD involves obtaining input gradients from the LLM and using DP-SGD to update the soft prompt embedding while keeping the LLM frozen. On the other hand, PromptPATE creates a noisy ensemble of private discrete prompts and transfers knowledge by selecting a student prompt that can be publicly released. The experiments demonstrate that LLMs prompted with these private algorithms closely match the non-private baselines.
Strengths: - The authors explore the potential privacy risks that may arise from prompting data in large language models, which can be exploited through a membership inference attack.
- The proposed approach for privately learning to prompt is novel and effective in preserving privacy while maintaining high accuracy.
- The experimental results demonstrate the effectiveness of the proposed approach in preserving privacy, which is important for real-world applications of large language models.
Weaknesses: - The authors only verify the effectiveness of the proposed approach on NLU tasks. It is necessary to conduct experiments to verify the performance of the proposed methods on NLG tasks.
- I am not entirely convinced that prompting data poses significant privacy risks. In my view, the user input itself may present a more significant potential privacy risk.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Could you provide more details of the membership inference attack? How to discover the member and non-member data points?
2. Would the proposed methods be applicable to NLG tasks? Additionally, have you tested the effectiveness of these methods on other NLU tasks?
3. I am not entirely convinced that prompting data poses significant privacy risks. From my perspective, the user input itself may present a more substantial potential privacy risk. What is your perspective on this? Are these methods also applicable to addressing privacy concerns related to user input?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Please find our detailed response in the following:
>**1. Could you provide more details of the membership inference attack?**
We thank the reviewer for their question and are happy to provide more details on the membership inference attack. As we describe in Section 3, we are interested in the question of whether a given demonstration (a private data point) was used in a prompt.
A prompt consists of a template that tells the model what to do, e.g., “Tell us if the following sentence is ‘positive’ or ‘negative’”, and a demonstration, e.g., (“The movie was good”, “positive”). The data-label pair from the private data (“The movie was good”, “positive”) is the demonstration whose membership we are interested in. Which data points *are* members is therefore determined by the chosen prompt. All other data points that are not used in the prompt are non-members.
In standard MIA literature (e.g. [45]), the attacker who wants to find out which data points are the members always has some candidate pairs of potential members. For example they would have the data points (“The movie was good”, “positive”), (“I liked the movie”, “positive”), (“What a terrible movie”, “negative”). Then, they would query all these candidates (only the sentences) to the model and observe the model’s confidence scores on the labels (“positive” or “negative”) for the attack. Member data points have a higher confidence for the correct label.
In our case, we assume that the attacker has a pool of potential member-candidates that consists of 1 member and 50 non-members. This skewed distribution is realistic given that the attacker has access to large amounts of data, out of which only a small fraction will be members of the concrete task. The attacker then queries x_1, … x_51 to the model and obtains the confidence scores z_1, … z_51 for the prompted model’s predictions at the right class belonging to x_i.
Out of this vector of confidence values, we generate the ROC-curve (Figure 2b). We calculate the true positive rate (TPR) at different false positive rates (FPR). Given that only one of the 51 data points is a member point, there is a jump in the curve, where at a given threshold, the member data point is correctly classified as a member.
We repeat this experiment over 100 different 1-shot prompts which yields 100 gray curves. We then average over these curves and plot the average as the blue curve. This then expresses the privacy risk of the prompted model for the private downstream dataset.
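As a concrete illustration of this construction, the sketch below builds one such ROC curve from the confidence scores of 1 member among 50 non-members. The data is synthetic and the helper is ours, not the paper's code; it only reproduces the threshold-sweeping logic described above.

```python
import numpy as np

def roc_curve(member_scores, nonmember_scores):
    """Sweep a threshold over all confidence scores; a candidate is
    flagged as a member when its confidence exceeds the threshold."""
    scores = np.concatenate([member_scores, nonmember_scores])
    labels = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    order = np.argsort(-scores)              # rank by descending confidence
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()           # true positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum() # false positive rate
    return fpr, tpr

rng = np.random.default_rng(1)
member = np.array([0.95])                  # the 1 member: high confidence
nonmembers = rng.uniform(0.2, 0.8, 50)     # the 50 non-members
fpr, tpr = roc_curve(member, nonmembers)
```

Because the single member here has the highest confidence, it tops the ranking and the TPR jumps from 0 to 1 at FPR = 0, which is the jump in the gray curves described above; averaging many such curves over different prompts yields the blue curve.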
>**2. Would the proposed methods be applicable to NLG tasks?**
In principle, PromptDPSGD can be directly applied to generative tasks. For PromptPATE, one way to extend it to generative tasks is by performing a private teacher vote for next-token predictions. Generating each next token can be thought of as a classification problem over the whole vocabulary. In fact, this idea has been used in SeqPATE (Tian et al., NeurIPS 2022) for fine-tuned models. We performed some preliminary experiments on one-shot prompts with Claude, which show that 86% of the private teachers generate the identical first token on the e2e dataset. This high consensus among private teachers is a promising signal for PromptPATE to succeed. However, we also find that many details and design choices in this extension require a deeper analysis and extensive engineering effort. For example, it is not immediately clear what vocabulary to vote on when that information is not given, which is the case for Claude. Due to constraints of time and resources, we acknowledge this as a limitation of our work and leave it as future work.
>**3. I am not entirely convinced that prompting data poses significant privacy risks.**
We would like to clarify that the threat models for leakage of private prompt data and of private user inputs differ substantially. The user input is leaked only to the company that uses the prompted model and to the API provider that deploys the LLM. The prompt data is already known to the company and also leaks to the API provider, but **additionally**, when the company deploys the prompted model, it can leak to all other users of the prompted LLM. We can assume that the user shares their data with a company they trust (or at least one that has to follow current privacy regulations), and that the API provider offers contracts not to leverage the sent data for training, with opt-out accounts (https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt). However, no guarantees can be given about other users, who are uncontrolled and potentially malicious. This significantly increases the risk for the private prompt data.
Additionally, as pre-trained LLMs become more powerful, companies can leverage their users' private data more easily by deploying the model prompted with some examples of a task they want to solve. We identified the privacy risk for data included in the prompts using the membership inference attack. Thus, the prompt provider can potentially expose private data via the textual prompts, which is a severe privacy leakage.
Citation:
- Tian, Zhiliang, et al. "SeqPATE: Differentially private text generation via knowledge distillation." Advances in Neural Information Processing Systems 35 (2022): 11117-11130.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I genuinely appreciate the author's rebuttal, and it has influenced me to reconsider my review score (4->5). However, I still have some reservations regarding the potential privacy risks associated with prompting data, particularly in the case of LLMs utilizing SFT/RLHF methods like ChatGPT/GPT-4. | Rebuttal 1:
Rebuttal: We want to thank all reviewers for their feedback which has greatly helped us improve the paper. We are glad that the reviewers recognize our work to “present a timely study” (reviewer 96HV) on the private adaptation of LLMs “with real world deployment scale and on black-box commercial APIs” (reviewer Hn4c). Our proposed defense methods are “efficient and scalable” (reviewer k8ve), “novel and effective” (reviewer RZeY) with “a much lower privacy budget than fine-tuning techniques” (reviewer p7La). We hope that our work will contribute to a more trustworthy deployment of LLMs. Below we offer clarifications to some common questions reviewers have.
>**1. Why did you choose to compare PromptPATE and PromptDPSGD on different model architectures and datasets? (Reviewers: p7La, 96HV, Hn4c)**
The main reason why we use different models and datasets for soft and discrete prompts is to choose the most suitable set-up for each paradigm and keep consistent with each of its own previous works. As soft prompts can be applied to white-box LLMs, in order to provide a fair comparison with the previous DP fine-tuning methods [25,54], we use the same architecture (RoBERTa) and datasets as these two previous works. On the other hand, discrete prompts are mostly used with the decoder-only architecture like GPT3. In order to study the most suitable set-up for discrete prompts, our dataset choice and experiment design follows one of the pioneer works on in-context learning for GPT3 [58].
Also, we want to emphasize that our paper does not intend to compare the performance of PromptDPSGD and PromptPATE. Instead, we want to provide alternatives for people to use in different scenarios. For example, one should use PromptDPSGD with smaller, open-source LLMs that offer gradient access. On the other hand, PromptPATE should be used for larger, commercial LLMs (such as GPT-3 and Claude) that provide only input-output access and when the sensitive dataset is small.
>**2. Discuss the novelty of PromptDPSGD and PromptPATE compared to existing work (Reviewers: p7La, 96HV, and k8ve)**
- PromptDPSGD: Our work is the first to show the good utility of soft prompt tuning with DP-SGD. Compared with [25] and [54], our method optimizes orders of magnitude fewer parameters while keeping the original LLM frozen. We leveraged libraries for DP-SGD from full [25] and parameter-efficient [54] fine-tuning (LoRA) and combined them with P-tuning v2 to apply DP-SGD to continuous prompts. Then, we carefully tuned the standard and privacy (hyper-)parameters, which resulted in good performance on many downstream tasks.
- PromptPATE: This is the first method for DP learning with LLMs that requires only input-output access to the model. Also, our instantiation of each building block of PATE is novel and original. In the following, we list the novelty w.r.t. each building block of PATE:
- Teachers: We are the first to observe how to leverage the effectiveness of in-context learning for the design of teachers. Instead of training teachers from scratch, we notice that the same LLM (with different prompts) can be instantiated as a teacher ensemble. This does not only make obtaining the teachers more efficient but also vastly decreases the required number of private data points.
- Student: The naive way of training a student for PATE would be to obtain many labels from the ensemble and then train a model in a supervised way. This would consume a large privacy budget due to a large public dataset needed for supervised learning. Therefore, instead of distilling teacher knowledge into a student model, we distill it into a single prompt, which significantly differs from the original paradigm of PATE. It enables us to obtain high-performance prompts with a small number of public labeled data points, making PromptPATE significantly better in terms of privacy-utility trade-offs than the naive adaptation of PATE.
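Schematically, the teacher/student pipeline above amounts to labeling a small public set with a noisy ensemble vote and then using those few labeled pairs as the student prompt. In the sketch below, `teacher_predict` is a toy stand-in for querying the LLM with one teacher prompt, and the teacher "prompts", noise scale, and class count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def noisy_label(votes, num_classes, noise_std, rng):
    """PATE-style noisy argmax over the teachers' class votes."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    counts += rng.normal(0.0, noise_std, size=num_classes)
    return int(np.argmax(counts))

def label_public_set(teacher_predict, teacher_prompts, public_inputs,
                     num_classes, noise_std, rng):
    """Label a small public set with the teacher ensemble; the labeled
    pairs would then serve as demonstrations in the student prompt."""
    return [noisy_label([teacher_predict(p, x) for p in teacher_prompts],
                        num_classes, noise_std, rng)
            for x in public_inputs]

# toy stand-in: each "teacher prompt" is a bias added to a numeric input
toy_predict = lambda prompt, x: int(x + prompt > 0)   # binary "sentiment"
rng = np.random.default_rng(0)
prompts = [0.1, -0.1, 0.05, 0.2, -0.05] * 20          # 100 teacher prompts
labels = label_public_set(toy_predict, prompts, [1.0, -1.0, 2.0],
                          num_classes=2, noise_std=3.0, rng=rng)
```

The sketch makes the privacy-utility point concrete: only the handful of noisy labels touches the private data, so the budget scales with the size of the labeled public set rather than with a full student training run.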
>**3. How do the presented methods PromptPATE and PromptDPSGD extend to other tasks, for example, to the text generation tasks? (Reviewers: p7La, RZeY, k8ve)**
In principle, PromptDPSGD can be directly applied to generative tasks. For PromptPATE, one way to extend it to generative tasks is by performing a private teacher vote for next-token predictions. Generating each next token can be thought of as a classification problem over the whole vocabulary. In fact, this idea has been used in SeqPATE (Tian et al., NeurIPS 2022) for fine-tuned models. We performed some preliminary experiments on one-shot prompts with Claude, which show that 86% of the private teachers generate the identical first token on the e2e dataset. This high consensus among private teachers is a promising signal for PromptPATE to succeed. However, we also find that many details and design choices in this extension require more extensive analysis and engineering effort. For example, it is not immediately clear what vocabulary to vote on when that information is not given, which is the case for Claude. Due to constraints of time and resources, we acknowledge this as a limitation of our work and leave it as future work.
We thank all reviewers again for their encouragements, feedback and comments. Please find the individual response to each reviewer below.
Citation:
- Tian, Zhiliang, et al. "SeqPATE: Differentially private text generation via knowledge distillation." Advances in Neural Information Processing Systems 35 (2022): 11117-11130.
Pdf: /pdf/f20ea8812c13866cc9e95b4f7a0f149086bb298b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents differentially private techniques for optimizing continuous and discrete prompts for solving text classification tasks using Large Language Models (LLMs). The need for differentially private prompt learning is motivated by showing that examples used in few shot prompts for classification tasks are highly susceptible to simple membership inference attacks
- For optimizing continuous prompts, the paper uses DP-SGD in combination with existing parameter efficient prompt tuning techniques. When used in combination with either soft prompt or prefix tuning, the method is generically called **PromptDPSGD** in the paper. This method requires access to gradients for tuning and the ability to adapt the model at inference time with tuned parameters, neither of which is typically supported in consumer APIs.
- For optimizing discrete prompts, the paper adapts the Private Aggregation of Teacher Ensembles (PATE) approach to label public examples using an ensemble of LLMs prompted with private few shot examples in order to learn a few shot prompt. This method is called **PromptPATE** in the paper. This method can be implemented on top of existing LLM completion APIs, even when they do not expose token prediction probabilities.
The paper evaluates the utility of using **PromptDPSGD** to fine-tune RoBERTa-Base on the SST-2, QNLI, QQP, and MNLI tasks from the GLUE benchmark with $\varepsilon \in$ {3,8}. It compares it to parameter efficient LoRA-tuning and full fine-tuning using DP-SGD. The results show that the accuracy of **PromptDPSGD** tuned models is competitive despite having orders of magnitude fewer tunable parameters than full fine-tuning and LoRA-tuning.
The paper evaluates **PromptPATE** using GPT-3 (Babbage and Curie) on SST-2, TREC, AG News, DBPedia. The results show that for $\varepsilon < 0.3$, it achieves similar accuracy to few-shot (1- and 4-shot) classification and to a non-private majority vote using the same teacher ensemble.
The supplemental material contains an Appendix describing hyperparameter choices, additional experimental results, and a discussion of broader implications and limitations. It also includes implementations of **PromptDPSGD** and **PromptPATE** built on existing libraries.
Strengths: - Demonstrates for the first time that examples used in LLM prompts for text classification are susceptible to membership inference attacks.
- Evaluates the use of DP-SGD together with 3 different parameter efficient fine-tuning techniques for text classification tasks and compares it to full fine-tuning.
- Proposes a novel adaptation of PATE to optimize discrete prompts for solving text classification tasks using LLMs that is implementable using available commercial LLM APIs using a much lower privacy budget than fine-tuning techniques.
Weaknesses: - Oversells the novelty of **PromptDPSGD**, which is just standard DP-SGD applied to existing parameter efficient fine-tuning techniques.
- The susceptibility of few shot learning to membership inference attacks has been studied before at least for image classification (https://openreview.net/forum?id=39kvovk9ju7). This diminishes the novelty of the observation that the same phenomenon holds for text classification.
- **PromptPATE** is applicable to classification tasks, but it does not appear generalizable to generative tasks where LLMs excel.
- **PromptPATE** performs worse than a non-private few-shot baseline in all scenarios evaluated. Table 2 shows that even a 4-shot **PromptPATE** has worse accuracy than the 1-shot non-private baseline. This casts doubt on the necessity of learning differentially private discrete prompts. It would be enough to declassify a single private prompt to get better utility with perfect privacy for the rest of the prompts.
- The choice to evaluate **PromptDPSGD** and **PromptPATE** on model classes with hugely different capabilities (RoBERTa, and GPT-3 and Claude, respectively), and on mostly disjoint tasks (with the exception of SST-2), makes it hard to compare the two approaches. Despite the difference in model capabilities likely making **PromptPATE** look better than **PromptDPSGD**, the results still show a large gap to a non-private baseline: e.g., for AG News, Table 2 reports 71.7 $\pm$ 0.8% with $\varepsilon = 0.248$ vs. 81% using a 4-shot prompt (which I think is a more appropriate baseline than a 1-shot prompt). The utility of **PromptPATE** cannot be significantly improved because it saturates at a lower privacy budget (cf. Figure 3b) compared to parameter efficient fine-tuning.
**Comments**
- In line 120, "To the best of our knowledge, no prior work attempted to provide DP guarantees for prompt data in LLMs." This peer-reviewed paper published shortly before NeurIPS 2023 deadline explores the use of differentially private prompt tuning in federated learning: https://doi.org/10.1109/ICASSP49357.2023.10095356.
- In line 322, The reference should be to Table 2 instead of Table 5.
- In line 539 in the Appendix: "trend the our private prompts" should read "trend that our private prompts".
- In Algorithm 1 in the Appendix: $D$ is specified first as a set of labeled examples $(x_i,y_i)$ but later only the features $x_i$ are used. In line 4, $L_P$ should read $L_{P_t}$ and $p_t$ should read $P_t$. In line 9, $p_T$ should read $P_T$.
- In Algorithm 2 in the Appendix: $\mathbf{x}$ should read $x$ and the parameter $E$ is not described.
- In the caption of Figure 5, "each prompt has only one member" should read "each prompt has only four members".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a) Table 2 reports for AG News and IID public data 1-shot $\varepsilon = 0.248$ and 4-shot $\varepsilon = 0.145$. How can the privacy budget be lower for 4-shot than 1-shot?
b) I tried to wrap my head about how you produced the ROC curves in Figure 2b (and in the Appendix), but after reading the description in the paper many times I still do not fully understand it.
Do I get it right that your attack infers that $(x, l)$ is a member if and only if $\arg \max_j L_P(x)_j = l$?
Or does the attack take 51 different examples $(x_i, l_i)$ and infers that the example with index $\arg \max_i L_P(x_i)_{l\_i}$ is a member and the rest are not? Can you provide a pseudocode description of how the points in the ROC curve are computed?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, both broader societal impact and limitations.
Appendix A discusses societal impacts. It highlights the risk of overreliance on DP guarantees and the need to correctly select the privacy hyperparameters $\varepsilon$ and $\delta$.
Appendix B discusses limitations. It highlights that privacy concerns are limited to private data in few-shot examples (as opposed to the training data of the LLM itself), the limited scale of experiments driven by cost constraints, and that private data used to construct prompts is exposed to the provider of the LLM API.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. Please find our detailed response in the following:
>**1. The susceptibility of few shot learning to membership inference attacks has been studied before at least for image classification**
Thank you for pointing out this paper. We will add it to our related work section. However, we think our observations are novel since the few-shot learning algorithm studied in the paper is the traditional gradient-based fine-tuning, while ours is in-context learning. These two learning paradigms are vastly different. The main distinction is that in-context learning does not modify the model’s parameters whereas fine-tuning does. Given that fine-tuning modifies the model parameters according to the sensitive data, it is expected that the model then exposes membership information on that data. For our settings where the model's parameters are unchanged, the previous conclusion does not simply transfer.
>**2. It would be enough to declassify a single private prompt to get better utility with perfect privacy for the rest of the prompts.**
We want to clarify that the 1-shot non-private baseline is with regards to the best 1-shot prompt selected based on the validation accuracy instead of a randomly-selected 1-shot prompt. [58] shows that the performance of in-context learning has huge variance, so a validation set is very important for utility. The privacy implications of this difference are also crucial and publishing a model with a 1-shot prompt cannot be seen as leaking only privacy of the chosen example. During the process of selecting the best 1-shot prompt, the model is prepended with multiple examples from the validation set, and the best performing one is chosen. As a consequence, the final prompt implicitly also contains private information on the non-chosen examples.
>**3. Different models and datasets for PromptDPSGD and PromptPATE make it hard to compare.**
The main reason why we use different models and datasets for soft and discrete prompts is to choose the most suitable set-up for each paradigm and keep consistent with each of its own previous works. Also, we want to emphasize that our paper does not aim to compare the performance of PromptDPSGD and PromptPATE. Instead, we want to provide alternatives for people to use in different scenarios. Please see a more detailed response in the first point of the global response.
>**4. Why is the privacy budget lower for 4-shot than 1-shot?**
Our PromptPATE’s privacy analysis follows the standard PATE, where the privacy budget depends on the number of teachers, the consensus among teachers, and the size of the public set. The number of examples in the prompt does not directly influence the privacy budget. This mirrors standard PATE, where the privacy costs are not influenced by the size of the private training set.
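The role of teacher consensus (rather than per-teacher prompt size) can be illustrated with a toy noisy-argmax aggregation, the core of PATE-style labeling. This is a minimal numpy sketch under our own simplifications (Gaussian noise, invented vote counts and noise scale); it is not our exact mechanism or privacy accounting:

```python
import numpy as np

def noisy_max_aggregate(teacher_votes, num_classes, sigma, rng=None):
    """Each teacher (here, a model prompted with a disjoint data partition)
    votes for a class; Gaussian noise is added to the vote histogram and
    the noisy argmax is released to label a public example. The privacy
    cost is governed by the noise scale and the teachers' consensus, not
    by how many examples each teacher's prompt contains."""
    if rng is None:
        rng = np.random.default_rng(0)
    hist = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    hist += rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(hist))

# 200 teachers with strong consensus on class 2: even with substantial
# noise, the noisy argmax reliably returns the consensus label.
votes = np.array([2] * 180 + [0] * 20)
label = noisy_max_aggregate(votes, num_classes=4, sigma=10.0)
```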
>**5. How is the ROC curve produced?**
We thank the reviewer for their question and are happy to clarify. We perform a confidence-based membership inference attack. The intuition is that for labeled samples (x,y), if (x,y) was used as a member of the prompt, when queried to predict on x, the prompted model will have a higher confidence on y than when (x,y) was not a member.
In our case, we assume that the attacker has a pool of potential member-candidates that consists of 1 member and 50 non-members. This skewed distribution is realistic given that the attacker has access to large amounts of data, out of which only a small fraction will be members of the concrete task. The attacker then queries x_1, … x_51 to the model and obtains the confidence scores z_1, … z_51 for the prompted model’s predictions at the right class belonging to x_i.
Out of this vector of confidence values, we generate the ROC-curve (Figure 2b). We calculate the true positive rate (TPR) at different false positive rates (FPR). Given that only one of the 51 data points is a member point, there is a jump in the curve, where at a given threshold, the member data point is correctly classified as a member.
We repeat this experiment over 100 different 1-shot prompts which yields 100 gray curves. We then average over these curves and plot the average as the blue curve. This then expresses the privacy risk of the prompted model for the private downstream dataset.
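To make this concrete, the procedure described above can be sketched in numpy. The confidence-score distributions below are synthetic, chosen only so that the member tends to score higher; they are not our measured values:

```python
import numpy as np

def roc_from_confidences(scores, member_idx, thresholds):
    """One experiment: 51 confidence scores, exactly one of which
    (member_idx) belongs to the prompt member. A candidate is flagged as a
    member if its score exceeds the threshold; per experiment, TPR is 0 or
    1 (single member) and FPR is the fraction of non-members flagged."""
    scores = np.asarray(scores)
    is_member = np.zeros(len(scores), dtype=bool)
    is_member[member_idx] = True
    tpr, fpr = [], []
    for thr in thresholds:
        flagged = scores > thr
        tpr.append(float(flagged[is_member].mean()))   # 0.0 or 1.0
        fpr.append(float(flagged[~is_member].mean()))
    return np.array(fpr), np.array(tpr)

# Averaging the per-experiment (gray) curves over many 1-shot prompts
# yields the aggregate (blue) curve.
rng = np.random.default_rng(0)
thresholds = np.linspace(0.0, 1.0, 101)
curves = []
for _ in range(100):
    scores = rng.uniform(0.0, 0.6, size=51)   # non-member confidences
    member = rng.integers(51)
    scores[member] = rng.uniform(0.5, 1.0)    # member tends to score higher
    fpr, tpr = roc_from_confidences(scores, member, thresholds)
    curves.append(tpr)
mean_tpr = np.mean(curves, axis=0)
```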
>**6. PromptPATE does not appear generalizable to generative tasks.**
Thanks for your question. Please see the third point of our global response, where we explain how PromptPATE can be extended to generative tasks.
>**7. 4-shot PromptPATE has worse accuracy than the 1-shot non-private baseline.**
For agnews in Table 2, the improvement of 4-shot over 1-shot is not significant (< 3%), so it is not surprising that 4-shot private is worse than 1-shot non-private. Indeed, previous work [58] shows that having more examples in the prompt does not always increase accuracy. For example, in Table 1 of [58], on the Trec dataset, 4-shot (69.7) gives worse accuracy than 1-shot (75.7). Therefore, we do not think it is suitable to compare public and private setups with different numbers of shots.
>**8. The utility of PromptPATE saturates at a lower privacy budget compared to parameter efficient fine-tuning.**
As shown in Table 2, PromptPATE is already very close to the teacher ensemble's performance, which is the upper bound of the student's performance. Therefore, if the teacher ensemble's performance can be improved, the utility of PromptPATE will improve as well.
>**9. Related work of differentially private prompt tuning in federated learning**
Thank you for pointing out this concurrent paper. We will add this work to the related work and change the sentence on line 120 “To the best of our knowledge, no prior work attempted to provide DP guarantees for **discrete** prompts in LLMs.”
>**10. Comments 2-6**
Thank you very much for pointing out the typos in the paper. We will update all of them in the new version of our paper. | null | null | null | null | null | null |
Real-World Image Variation by Aligning Diffusion Inversion Chain | Accept (spotlight) | Summary: This paper proposes RIVAL, a framework that generates variations of a real image without any tuning. It has two key components. The first is a cross-image self-attention injection: the real image is first inverted using DDIM, and a second random chain is then sampled whose denoising process uses a mix of the real inverted chain's values and its own denoised values. To mitigate the out-of-distribution issue of the inverted latent of the real image, the paper further proposes a latent chain alignment process, in which the random chain is initialized with an adaptively normalized Gaussian distribution derived from the inverted real latent. Experiments show improvements over baseline methods in terms of text alignment and user preference. Overall the paper is well written, and the method is easy to follow.
Strengths: The proposed method does not require training or finetuning, although the inference time becomes twice as long as the normal inference, it is acceptable in practice.
RIVAL is shown to work on other text-to-image applications such as text-driven image generation with real-image conditions and example-based inpainting.
Qualitative results show better identity preservation and style matching to the reference image compared to other baselines, plus, the editing results on text conditioning is impressive, considering no finetuning is needed.
The authors also provide detailed ablation studies on the choice of alignment steps and framework components.
Weaknesses: The authors do not provide an analysis of the editability of the proposed method, which I think is an important feature.
How sensitive is the method to the CFG guidance weight? There is no ablation study on the choice of the CFG weight.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In Figure 4, the person's identity seems to change over regeneration, any idea on how to better preserve the identity?
Can the proposed method work with image conditional DM? Would be great to see some visual examples
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors provide limitation analysis, and see my questions above for further clarification
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 5k69,
Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Q1 A Clarification of Inference Time
We want to clarify the point on inference time:
After the inversion process, the overall inference time is not doubled: the denoising steps of the inversion and generation chains execute concurrently when the inputs are batched, as illustrated in Fig. 2. The primary computational overhead emerges during the self-attention phase for $t>t_{\text{align}}$, where the QKV matrix multiplication doubles due to the increased size of KV (Eq. 4).
As mentioned in **aMYr@Q3**, in our experiment, we've cut RIVAL's inference time down to 6 seconds by employing half-precision and x-formers, maintaining 50 DDIM steps.
Furthermore, efficiency could be further improved by caching hidden states of the inversion chain in the inversion process.
### Q2 Sensitivity of the CFG Scale
CFG scale in our diffusion model is an essential hyperparameter controlling the intensity of guidance. We have conducted a visual ablation study to examine the effect of various CFG scales, and the results are presented in Appendix Fig. 17 (a). The study reveals that the CFG scale influences the variations generated by our model, for instance, the color of the building in Appendix Fig. 17 (a). We performed a quantitative ablation on our dataset with different CFG values. It's worth noting that even with large CFG scales, our generated images did not exhibit color artifacts, as demonstrated in **rebuttal Fig. H**.
|CFG scale|3|5|7|9|11|
|-|-|-|-|-|-|
|Text Alignment |0.260|0.272|**0.273**|0.273|0.271|
|Image Alignment |0.859|**0.863**|0.845|0.845|0.838|
|Palette Distance|1.749|**1.685**|1.737|1.829|1.902|
As shown in the above table, our RIVAL maintains a balance between the strength of the text guidance and the preservation of features from the reference exemplar, thanks to the proposed latent noise rescaling mechanism in Eq. (9-10).
### Q3 Identity Preservation in Fig. 4
Identity preservation, especially in the context of tuning-free customization methods such as our RIVAL, ELITE, and BLIP Diffusion, is indeed a challenging problem. Unlike customizing some "easy" concepts, like specific animal breeds and colors, preserving certain customizable concepts, like personal identity or particular objects, requires intricate control and optimization.
One feasible solution could be to integrate RIVAL with customized models. We provide an illustrative example in Fig. 9 and the last row of Appendix Fig. 13, where we combined RIVAL with DreamBooth's [sks] doll model. This ability demonstrates how our approach can work synergistically with customized models to achieve better identity preservation, and we will continually explore the direction of tuning-free personalization in our future work.
### Q4 Compatibility with Image Conditional Diffusion Models
Regarding image conditional diffusion models, we recognize two primary branches. The first is represented by models like UnCLIP (DALLE-2), which accepts an image as a semantic condition. As an example of this type of model's integration with RIVAL, we point to Appendix Fig. 12 (4th column), where we use the open-sourced ImgVar SDv1[29]. RIVAL is able to maintain the tone and low-level style consistency of the reference image, which contrasts with images generated solely using ImgVar (shown in each left-bottom corner of each image).
The second branch of image conditional diffusion models consists of controllable diffusion models like ControlNet. In this context, we demonstrate the application of the RIVAL integration with ControlNet and present examples in **rebuttal Fig. B**.
The implementation is straightforward. We adapt RIVAL with ControlNet by adding condition residuals of ControlNet blocks on the generation chain. Results illustrate that RIVAL, as a plug-and-play module, exhibits strong generalization capabilities when combined with various image-conditioned models.
### Q5 "Editability" of RIVAL
Concerning "editability", we understand it in three different ways. If your interpretation is different from ours, we would appreciate your feedback.
1. **Model adaptability**: This refers to the editability of our method to adapt to other models. As discussed in Q4, our approach is highly adaptable across several popular frameworks, such as UnCLIP, ControlNet, and the recent SDXL. The results are shown in **rebuttal Fig. A&B**.
2. **Text-conditioned image editing**: This refers to the editability for the text-conditioning image editing and generation task. We provide non-cherry-picked visual examples in Appendix Fig. 14 and **rebuttal Fig. C&E**. We also present some ablations concerning editability in Appendix Fig. 17 (c), focusing on selecting source prompts. Using an empty prompt for inversion, we achieve results with more flexibility and consistency with the text prompt. Among those experiments, RIVAL produces results that consistently adhere to the reference image's low-level attributes while maintaining a high-quality match with the text prompt.
3. **Adjustability of the method itself**: This pertains to the editability of the method in terms of controlling the generation process. Our method provides several editable hyperparameters. For instance, the alignment steps $t_{align}$ and $t_{early}$ can be adjusted (a visual ablation is shown in Appendix Fig. 15, **quantitative results in aMYr@Q5**). Additionally, the target prompt (in image editing) and the CFG value can be adjusted for better text alignment, as mentioned in Q2.
---
Rebuttal 2:
Title: Thank you for your response
Comment: The response resolved my concerns, I retain my original rating. I encourage the author to release code for the community.
---
Rebuttal Comment 2.1:
Title: Thank You for Recognizing RIVAL
Comment: Dear Reviewer 5k69,
We are glad to see that our rebuttal addressed your concerns well. We greatly appreciate your consistently positive feedback on our work from beginning to end. Your comments are meaningful; for instance, considerations like identity preservation and the impact of CFG have led us to valuable reflections.
As mentioned in our abstract, we will update the code after the final decisions are confirmed. We will release the code link and corresponding implementation in the final version (including new applications mentioned in the rebuttal, such as ControlNet).
Thank you again for recognizing RIVAL! | Summary: This paper works on the design of diffusion model to generate image variations given an image examplar as the source image. The basic idea is to align the image generation process to the image inversion process of source image. This is achieved by designing an cross-image self-attention injection for feature interaction between the source and generated images in the diffusion process. The results demonstrate better performance compared with the existing methods in image-conditioned text-to-image generation and examplar-based image inpainting.
Strengths: The idea to generate image variations given a source image is an interesting task. This paper proposed to align the source image inversion process and image generation process using the cross-image self-attention injection, which is a simple idea and shows good image generation results.
Weaknesses: 1. The presentation and training of the parameters in "cross-image self-attention injection" are unclear to me. I understand that, in sect. 3.1, the features of examplar image and the generated image are interacted based on eqns. (2-4), in their inversion and generation chain respectively. However, how to train/determine the parameters in these equations? Are these parameters shared across different diffusion steps? And how do these parameters affect the final generation results?
2. Does the proposed generation method rely on an off-the-shelf diffusion model without any training? Is the chain alignment conducted in the latent feature space or the image space?
3. An algorithmic description of the proposed method should be given to better understand the (possible) training and the image generation process.
4. The shuffle operation in sect. 3.2 is described as a "permutated sample from the inverted reference latent". Please clarify this description: does it mean permuting the pixels of X_R^T?
5. In Table 1, please clarify the metrics "Real-image Rank" and "Preference Rank". I know that lines 228-233 may describe them; however, the difference between the two is still unclear to me.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I am interested in the problem being tackled and the proposed idea of feature interaction between the source and generated images for inversion and generation chain alignment. However, I have many unclear points about the presentation of the proposed method (as mentioned in the weaknesses). I suggest the authors clarify the method in the rebuttal and improve the writing and presentation of this manuscript.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper discussed on the limitation of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer SLMY,
Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Q1 Clarification of Cross-Image Self-Attention Injection
We employ the pre-trained Stable Diffusion model in RIVAL without adding trainable parameters. All parameters are consistent across steps. Our self-attention injection design modifies the input and interactions of self-attention. Here we consider each chain in the forward process separately:
1. **Inversion Chain**: Because we want to ensure that this chain reconstructs the inverted image, the denoising process is identical to the vanilla diffusion generation forward. No modifications are made in this chain. We use the hidden state feature $\mathrm{v}_R$ before each self-attention for attention injection to the Generation Chain.
2. **Generation Chain**: For the early generation steps $t > t_\text{align}$, we replace the KV values with $W^V (\mathrm{v}_R) $, $W^K (\mathrm{v}_R) $ using the hidden state $\mathrm{v}_R$ from the inversion chain for self-attention calculation. In the later generation steps, we concatenate $\mathrm{v}_R$ and the hidden state from the generation chain itself $\mathrm{v}_G$ to obtain new KV values. In all self-attention calculations, we do not change Q values and maintain them as $W^Q(\mathrm{v}_G)$. It's worth noting that all parameters $W^{(\cdot)}$ are frozen.
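A minimal single-head sketch of this KV replacement/concatenation may help (shapes, the softmax scaling, and variable names are our simplifications for illustration; the actual model uses the frozen multi-head attention layers of Stable Diffusion):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def injected_self_attention(v_g, v_r, Wq, Wk, Wv, replace_kv):
    """Single-head sketch of the injection. v_g: generation-chain hidden
    state [N, d]; v_r: inversion-chain hidden state [N, d].
    replace_kv=True (early steps, t > t_align): K, V come from v_r only.
    replace_kv=False (later steps): K, V come from [v_r; v_g] concatenated.
    Queries always come from v_g; the projections Wq/Wk/Wv are frozen."""
    q = v_g @ Wq
    kv_src = v_r if replace_kv else np.concatenate([v_r, v_g], axis=0)
    k, v = kv_src @ Wk, kv_src @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return v_g + attn @ v   # residual connection

rng = np.random.default_rng(0)
N, d = 16, 8
v_g, v_r = rng.normal(size=(N, d)), rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out_early = injected_self_attention(v_g, v_r, Wq, Wk, Wv, replace_kv=True)
out_late = injected_self_attention(v_g, v_r, Wq, Wk, Wv, replace_kv=False)
```

Note how the concatenated branch doubles the KV length, which is the source of the extra attention cost mentioned in our inference-time discussion.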
### Q2 Clarification of Generation Method
#### Q2-1 Does this proposed generation method rely on the off-the-shelf diffusion model without training?
As elaborated in response Q1, our training-free approach is built on the pre-trained Stable Diffusion model. Given its prowess in reconstructing real-world images using DDIM inversion, we infer its potential to generate images within a wide real-world domain. This foundation ensures RIVAL's plug-and-play functionality for arbitrary input images. Furthermore, we've showcased RIVAL's synergy with the recent diffusion model, SDXL, in **rebuttal Fig. A**.
#### Q2-2 Is the chain alignment conducted in the latent feature or image space?
In our approach, we align two chains in the latent feature space. A visual ablation study was conducted and presented in Appendix Fig. 16. One of the reasons is that the chains should be aligned at the initial state $X_G^T$ and $X_R^T$. In the image generation process, we found that aligning in the latent space has a similar effect as aligning the residual part in the self-attention. This result paves the way for RIVAL's adaptability to potential latent-free diffusion models. We align in the latent space for simplicity and computational efficiency by adjusting the predicted noise in both branches, as detailed in Eq. 9-10.
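For illustration, a channel-wise AdaIN of the kind used for this alignment (Eq. 10) can be sketched as follows (the axis convention and shapes are our assumptions for the sketch, not the exact implementation):

```python
import numpy as np

def adain(x, y, eps=1e-5):
    """Adaptive instance normalization: rescale x so its per-channel mean
    and std match y's. Here x and y stand for the generation- and
    inversion-chain noise predictions with shape [C, H, W]; statistics are
    taken over the spatial dimensions (an illustrative choice)."""
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    std_x = x.std(axis=(1, 2), keepdims=True)
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    std_y = y.std(axis=(1, 2), keepdims=True)
    return (x - mu_x) / (std_x + eps) * std_y + mu_y

rng = np.random.default_rng(0)
pred_g = rng.normal(2.0, 3.0, size=(4, 8, 8))   # generation-chain prediction
pred_r = rng.normal(0.0, 1.0, size=(4, 8, 8))   # inversion-chain prediction
aligned = adain(pred_g, pred_r)
```

After the call, `aligned` keeps the spatial pattern of `pred_g` while its per-channel statistics match those of `pred_r`, which is the sense in which the two chains are kept in-distribution with each other.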
### Q3 Algorithmic Description
Below, we provide a Python-style pseudocode for RIVAL. A LaTeX algorithm version will be included in the revision.
```py
def RIVAL(ref_img, c, cfg, DM, T):
    '''
    ref_img: the reference image
    c: text condition
    cfg: classifier-free guidance scale
    DM: a pre-trained diffusion model
    T: number of total denoising steps
    return -> the generated image
    '''
    t_align = t_early = int(T * 0.6)
    # Eq.(5-6), generate an inversion latent chain of T steps using DDIM
    inv_chain = DDIM_inv(ref_img, c, T, DM)
    x_rt = inv_chain[-1]  # the last latent X_R^T
    # Eq.(8)
    x_gt = shuffle(x_rt)  # initialized latent in the generation chain
    # start denoising process, t from T to 0
    for t in range(T, 0, -1):
        # Inversion chain is the normal denoising process of the diffusion model
        # pred_r and pred_g are noise term predictions for each step
        pred_r, v_rs = DM.unet(x_rt, c, t, cfg=1)
        pred_g = x_gt
        # blk iterates over the unet blocks
        for blk in DM.unet.blks:
            v_g = pred_g
            # Eq.(3)
            if t > t_align:
                # v_r: residual-branch hidden state of the generation chain
                v_r = blk(v_g)
                v_kv = v_rs[blk]
            else:
                v_r = blk(v_g)  # [bs, ndim, HW]
                v_kv = torch.cat([v_rs[blk], v_r])  # [bs, ndim, 2*HW]
            # Eq.(4)
            pred_g = v_g + blk.self_attn(v_r, v_kv)
            # Other operations in the unet are omitted for simplification
            pred_g = blk.others(pred_g, c, t)
        # Eq.(9), cfg rescale
        pred_g = rescale(pred_g, cfg)
        # Eq.(10), chain latent alignment
        if t > t_early:
            pred_g = AdaIN(pred_g, pred_r)
        x_gt = DDIM_scheduler(x_gt, pred_g, t)
        x_rt = DDIM_scheduler(x_rt, pred_r, t)
    return DM.vae.decode(x_gt)  # decode X_G^0 to get the variation results
```
### Q4 Explanation of the Shuffle Operation
You are correct. The shuffle operation $X_G^T = \text{shuffle}(X_R^T)$ in Eq.8 is designed to perform a permutation in the pixel dimension. We'll ensure this is more explicitly stated in our revised version. Refer to the PyTorch-style code snippet in **4HxC@Q1**.
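A numpy sketch of such a pixel-dimension permutation follows (the shape convention and whether the permutation is shared across channels are our assumptions for illustration):

```python
import numpy as np

def shuffle_latent(x_rT, rng=None):
    """Permute the inverted latent X_R^T along the flattened pixel
    dimension (Eq. 8): the generation chain then starts from a latent with
    the same per-channel value distribution as the inverted latent but no
    spatial structure. Shape assumed [C, H, W]; here the same permutation
    is applied to every channel."""
    if rng is None:
        rng = np.random.default_rng(0)
    c, h, w = x_rT.shape
    flat = x_rT.reshape(c, h * w)
    perm = rng.permutation(h * w)
    return flat[:, perm].reshape(c, h, w)

rng = np.random.default_rng(1)
x_rT = rng.normal(size=(4, 8, 8))
x_gT = shuffle_latent(x_rT)
```

Because the shuffle only reorders values, each channel of `x_gT` contains exactly the same multiset of values as the corresponding channel of `x_rT`.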
### Q5 Clarification of "Real-Image Rank" and "Preference Rank" Metrics
The **Real-Image Rank** originates from the user study, where participants were asked to rank images according to their perceived authenticity - which image they believed was most likely not AI-generated.
The **Preference Rank**, on the other hand, originates from the user study where participants, given a reference image and text prompt, ranked generated images based on their adherence to the image reference and the text description.
We acknowledge the potential confusion between these terms and have renamed them "Real-World Authenticity Preference" and "Condition Adherence Preference" in the revision.
### Q6 Improving Presentation
We appreciate your comments and advice on the presentation of the paper. We will include more details and refine our presentation to make the content more easily understandable in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: I appreciate the responses clarifying my questions; they are mostly clear, and the paper should incorporate these clarifications in the revised version, especially for Q1 and Q2. I retain my rating, and I am fine with accepting the paper.
---
Reply to Comment 1.1.1:
Title: Thank You for Your Comments and Feedback
Comment: Dear Reviewer SLMY,
Thank you for acknowledging our clarification of issues in the rebuttal. As mentioned in Q6, based on your valuable feedback, we will revise the presentation in the paper's method section of the version to clarify our method better. We also appreciate your positive evaluation of this paper and your endorsement of our approach.
As mentioned in the **global response Q1&Q2**, we will continue exploring broader applications of our method. We expect it can make contributions to the community. We also hope that the explorations we've made in our response can better demonstrate the effectiveness of RIVAL to you.
Thank you again for your review and feedback on our response! | Summary: The authors propose a tunning-free pipeline called RIVAL(Real-world Image Variation by Alignment) for generating diverse and high-quality variations of real-world images.
In previous works, some models also generate images with novel concepts and styles but require additional training stages and data, and others directly incorporated images as the input condition results in suboptimal visual quality and content diversity.
To tackle this issue, the authors suggest a pipeline that maintains the style and semantic content of the reference without additional training stages and data.
The proposed method comprises inverted latent chain alignment (step-wise latent normalization) and cross-image self-attention injection.
The authors control when to align the inverted latent chain and when to inject the modified self-attention keys and values via $t_{align}$ and $t_{early}$.
Unlike the other papers that use the cross-image self-attention injection method with an additional forward to calculate the cross-attention mask, the suggested approach only needs a single forward.
The authors show that RIVAL enhances performance by applying the alignment process to other text-to-image tasks through experiments and ablation studies.
Strengths: - The authors propose a novel plug-and-play pipeline that generates high-quality, real-world image variations without additional optimization, through inverted latent chain alignment and cross-image self-attention injection.
- The proposed pipeline enhances the performance of text-to-image models, such as text-driven image generation with real-image conditions and example-based inpainting. The pipeline generates images fast and flexibly with diverse text prompts adopting a new self-attention injection approach with a single forward pass.
- Compared to CFG, the advantage of the proposed pipeline is its ability to avoid misalignment between latents in two chains and its direct utilization. The authors decouple the two inference chains and rescale noise prediction during denoising inference. Furthermore, they formulate noise prediction as an adaptive normalization.
- The quantitative results outperform the other pipelines provided in the paper except for one metric, and the qualitative results show that RIVAL generates images with diverse text inputs while preserving the reference style.
Weaknesses: The experiments need to be more extensive.
- The authors propose a plug-and-play pipeline, but there are no quantitative results compared with the other plug-and-play pipelines mentioned in the paper. RIVAL uses a scheme similar to MasaCtrl [2] and Plug-and-Play [3] except for the latent alignment, so comparing quantitative results with these methods would be beneficial.
- In the appendix, the proposed method is said to allow fast and flexible text-to-image generation, but there are no results showing how fast it is. It would be reasonable to compare speed quantitatively with other plug-and-play pipelines.
The authors need to validate the ability of the pipeline to apply to other diffusion-based generation tasks.
- The authors show qualitative results of self-example image inpainting in Fig. 6, but these results lack supporting experiments, which makes it hard to assess the ability to extend RIVAL’s framework to other image editing tasks.
(Minor) The notation of $t_{align}$ and $t_{early}$ is confusing.
[1] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *CVPR*, pages 10684–10695, 2022.
[2] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. MasaCtrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. *arXiv preprint arXiv:2304.08465*, 2023.
[3] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. *arXiv preprint arXiv:2211.12572*, 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - In Table 1, I wonder if using specific $t_{align}$ and $t_{early}$ values for each metric would lead to better quantitative results.
- In Fig. 6, the inpainting results show a different masked area. Can you show me the same results by overlaying the masked area at the source image like SD[1] in the paper?
- Can you provide examples of other tasks to which RIVAL can be applied?
[1] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *CVPR*, pages 10684–10695, 2022.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper needs more extensive experiments and analysis of the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer aMYr,
Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Q1 Comprehensive Quantitative Experiments
We agree that comprehensive quantitative evaluations are vital to support our claims. Please refer to **rebuttal global@Q3** for all quantitative experiments conducted in this rebuttal.
### Q2 Comparison with Other Plug-and-Play Pipelines
Initially, we excluded quantitative comparisons with training-free pipelines such as PnP and MasaCtrl due to task and input disparities. Specifically, PnP utilizes text to modify image appearance, while MasaCtrl leverages text for structural alterations. We recognize the merit of such a comparison in real-image editing. Please see **rebuttal Fig. G** for qualitative comparisons and the subsequent table for a quantitative evaluation.
|Method|PnP|MasaCtrl|RIVAL|
|-|-|-|-|
|Image Alignment $\uparrow$|0.786|0.827|**0.831**|
|Text Alignment $\uparrow$|**0.249**|0.226|0.231|
|Palette Distance $\downarrow$|1.803|1.308|**1.192**|
|LPIPS $\downarrow$ |**0.245**|0.274|**0.245**|
In the experiment, we directly use the inverted latent $X_{G}^{T} = X_{R}^{T}$ for two chains. We adopt the same interaction starting step $t=45$ as the MasaCtrl. Furthermore, as shown in our Appendix Fig. 14, RIVAL can create free-form content that maintains consistency with the image's style, indicating its distinction and effectiveness compared to these methods.
### Q3 Speed of the Pipeline
As mentioned in L181-182, RIVAL processes images in roughly 8 seconds each on a single RTX 4090. When tested on the more prevalent RTX 3090, our comparisons with other plug-and-play methods confirm RIVAL's competitive inference speed against recent techniques.
|Comparison on RTX 3090|PnP|MasaCtrl|RIVAL|
|-|-|-|-|
|Preparation Time (s)|200|6|6|
|Inference Time (s)|21|15|15|
PnP has a long preparation time due to the feature storing and long-step inversion. It's worth noting that both MasaCtrl and RIVAL can be sped up using half-precision and xformers, achieving an inference time of 6 seconds on RTX 3090. For our comparison, we used the official implementations and maintained consistent conditions across the board.
### Q4 Applicability to Other Tasks
RIVAL's versatility is showcased through diverse experiments spanning multiple diffusion frameworks and applications, such as DreamBooth (Fig.9) and ImgVar[UnCLIP] (Appendix Fig.12).
Additionally, we have successfully extended RIVAL to other tasks and frameworks, demonstrating the method's adaptability and broad applicability across diverse diffusion pipelines.
1. Larger diffusion models. We've integrated RIVAL with the recent SDXL in **rebuttal Fig. A** for high-resolution (1024 $\times$ 1024) image variation and text-driven generation.
2. Image-conditioned controllable generation. As demonstrated in **rebuttal Fig. B**, RIVAL, in conjunction with ControlNet, produces style-consistent images under varying control conditions.
3. Image style transfer. We utilize the aligned inverted latent of the content image, $X_G^T = \text{AdaIN}(X_G^T, X_R^T)$. Style transfer results are shown in **rebuttal Fig. C, first row**.
4. Image editing. We have discussed it in Q2 with visuals in **rebuttal Fig. G**.
### Q5 Use of Specific $t_{align}, t_{early}$
Your statement is correct. As shown by the grid-wise ablation in Appendix Fig. 15, using complete attention feature fusion (replacement, $t_{align} = 0$) makes the generation closely resemble the original image. However, there is a trade-off between the two conditions (text alignment and image alignment). Besides, low-level texture-pasting artifacts appear when $t_{align}$ is small, as shown in Fig. 8, 4th image. The choice of $t_{early}$ primarily influences the generated images' style (color) bias.
|($t_{align}, t_{early}$)|(30, 30)|(0, 30)|(30, 0)|(0, 0)|(30, 50)|(50, 30)|(50, 50)|
|-|-|-|-|-|-|-|-|
|Text Alignment |0.268|0.261|0.269|0.259|0.267|_0.274_|**0.279**|
|Image Alignment |0.846|**0.873**|0.838|_0.865_|0.839|0.813|0.817|
|Palette Distance|1.810|**1.421**|1.806|_1.483_|2.359|2.061|2.419|
### Q6 Inpainting Results
In **rebuttal Fig. E**, we've added a mask boundary overlay to highlight the generation area during inpainting. As shown in the last column, we can achieve varied generation results within the masked area by resampling the latent.
RIVAL primarily aims to generate images resembling a reference. Inpainting is just one specific application of RIVAL. Given that self-example inpainting focuses on variation generation within a masked image area, it's challenging to fairly compare it with general inpainting (as RIVAL utilizes the original image as a reference) or example-based inpainting (where RIVAL is only capable when the image itself is the reference). Instead, we've offered a quantitative comparison for the image editing task in Q2.
### Q7 Limitations Analysis
For clarity, we will include a more detailed analysis in the revised paper, using Fig. 10 examples here.
> Semantic Bias: For a prompt like "Pokemon", bias in the training set towards the popular "Pikachu" can skew generated images towards Pikachu-like designs.
> Complex Scene & Hard Concepts: Stable Diffusion struggles to generate complex scenes and complicated concepts, e.g., "illustration of a little boy standing in front of a list of doors with butterflies around them". This complexity can lower inversion chain quality and widen the domain gap, leading to less accurate generation results.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses to each question. My concerns and questions are resolved well with your rebuttal. I am willing to raise my score to accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer aMYr,
Thank you very much for recognizing our work and rebuttal. Especially in exhibiting the applicability to other tasks and comprehensive quantitative comparisons, your questions have helped us engage in a more comprehensive discussion and comparison of RIVAL's mechanism.
We will integrate the pertinent discussions into the revision. Thank you again for your review and feedback on our response!
Title: Thank You for Recognizing RIVAL | Summary: Generating real-world image variations is an important research task with practical applications such as image editing, image synthesis, and data augmentation. Past approaches include texture synthesis, neural style transfer, and generative models, among others. This study proposes a method called "Real-world Image Variation by ALignment", which generates high-quality image variations from a single image sample through the inference process of an aligned diffusion model. The paper provides experiments on different images.
Strengths: The overall effect is very good. The proposed method takes full advantage of the properties of the diffusion model.
Weaknesses: There are a lot of recent papers built on very well-trained diffusion generative models. The results of these papers are generally very good (after all, they are based on a very mature large model). These papers generally have some micro-designs, which make their academic meaning more prominent. However, such papers have a common problem: they do not discuss the research issues in a very specific way and often stay at the level of demonstrated effects. This hardly accounts for the actual scholarly progress of these papers; chances are it's just a manifestation of the capabilities of the base model itself.
Taking this paper as an example, the paper proposes the method of Aligning Diffusion Inversion Chain. But the necessity and interpretability of the method are not fully discussed. In fact, there are many "tricky" ways to achieve the effect of this paper, and it is not necessary to go through the method described in Section 3 (by the way, the writing of Section 3 is very difficult to understand; in fact, a lot of the mathematics is unnecessary, and it is recommended that the authors use more carefully designed diagrams to explain their methods). I doubt the necessity of the proposed method. I hope the authors can provide a full ablation experiment and compare and discuss it with a wide range of existing tricks. Of course, I also realize that such a requirement is a bit too harsh, because many methods and tricks have not been written into papers or formally disseminated. The authors may not be able to adequately discuss every possibility.
Interpretability is also another important topic. I want to know the specific physical meaning of each latent and representation in the inversion transfer or the point that it can be exploited in this method. This paper doesn't really explain it (or even try to).
=====================
Post rebuttal:
Thanks for the rebuttal. I am willing to raise my score to accept (7). But I recommend the authors add these new discussions into the final version.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Two general questions:
1. Ablation study.
2. Interpretability.
Please see Weaknesses for full details.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer mFBx,
Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Q1 Utilizing Capability of the Base Model
We respect your comments on the trend of recent works leveraging mature models. While we agree these base models are crucial, RIVAL innovates by addressing the specific challenge of controllable, high-quality, real-world image generation. RIVAL is designed to extend beyond the capabilities of the original model and can be generically applied to broad well-trained diffusion frameworks.
This generalization ability is demonstrated through extensive experiments with diverse diffusion frameworks and applications, including DreamBooth (Fig.9), SDXL (**rebuttal Fig.A**), ControlNet (**rebuttal Fig.B**), and ImgVar (Appendix Fig.12). We posit that this breadth of applicability constitutes scholarly progress, broadening the base model's utility.
### Q2 Clarification of Section 3
For a detailed and precise workflow of our method, please refer to the PyTorch-style pseudocode for RIVAL in **SLMY@Q3**. A LaTeX algorithm version will also be included in the revision.
### Q3 Necessity of RIVAL: Wider Discussion and Comparisons
To elucidate RIVAL's necessity, we delve into two key aspects:
#### Q3-1 Module-wise Ablations
Agreeing with your suggestion for thorough ablations, we augment our visual ablations (**rebuttal Fig. F**, Fig.8 and Appendix Fig.15-17) with a comprehensive module-wise experiment.
|Module|Attention Inject|Early Fusion(Eq.3)|Latent Init.(Eq.8)|Noise Align(Eq.9-10)|Palette Dist.|Text Align|Image Align|
|-|-|-|-|-|-|-|-|
|Baseline (SD)|||||3.917|0.266|0.751|
|Attn Inject|Y||||3.564|0.277|0.804|
|Attn Fusion|Y|Y|||3.518|0.274|0.820|
|Latent Init.|||Y||3.102|0.268|0.764|
|Noise Align||||Y|3.661|0.251|0.647|
|w/o Attn Operations|||Y|Y|2.576|0.276|0.760|
|w/o Fusion&Noise Align |Y||Y||2.419|**0.279**|0.817|
|w/o Attn Fusion|Y||Y|Y|1.902|0.274|0.818|
|w/o Latent Init|Y|Y||Y|3.741|0.242|0.653|
|w/o Noise Align|Y|Y|Y||2.335|0.267|0.839|
|**Full Model**|Y|Y|Y|Y|**1.810**|0.268|**0.846**|
The results underscore the role of each component and their combinations. Attention injection facilitates high-level feature interactions for better condition alignment (both text and image). Early fusion, built upon attention injection, aids in early step chain alignment, significantly enhancing image alignment. Meanwhile, latent noise alignment guarantees the preservation of color.
Latent initialization has a pronounced impact, notably enhancing the color palette metric, an effect intensified by noise alignment. A thorough evaluation of hyperparameters $t_{align}$ and $t_{early}$ can be found in **aMYr@Q5**, Appendix Fig.15, and analysis in Appendix L84-89.
#### Q3-2 Wider Discussions and Comparisons
To further ascertain RIVAL's necessity and effectiveness, we compare it with three concurrent related works. **Visual results: rebuttal Fig. C&G. Quantitative comparison: aMYr@Q2**.
1. **Tuning-based style transfer method [StyleDrop]** Similar to ours, this method yields visually appealing results from a style exemplar image. Despite its high-quality stylization, it relies on case-wise finetuning with intricate prompt design; moreover, its base model, MUSE, isn't open-sourced.
2. **Attention-based image editing [PnP]** PnP employs a plug-and-play approach that uses attention and feature injections for structure-preserving text-based editing. While the results are good, the image style diverges from the exemplar due to a lack of chain alignment consideration.
3. **Attention-based image synthesis [MasaCtrl]** This work employs a similar attention-injection framework. We've comprehensively analyzed the differences in attention in Appendix Section B and Fig. 11. MasaCtrl is generally suited for real-image editing instead of unconstrained generation. We present a visual comparison in **rebuttal Fig. G**. As the input latent is fixed, it cannot generate free-form images from a single target prompt, as shown in Appendix Fig. 14.
### Q4. Interpretability of RIVAL
We add explanations and experiments for the interpretability of RIVAL in the following two aspects.
#### Q4-1 Latent Similarity
To assess the efficacy of RIVAL's components regarding the distribution gap, we illustrate the KL divergence between the noisy latents of chains A and B, $X_{A}^{t}$ and $X_{B}^{t}$, during the generation process, as depicted in **rebuttal Fig. D, left part**. Interactions between different distributions (green line) widen the gap, while two latent chains with the same distribution (yellow line) achieve better alignment through attention interactions. With latent chain alignment and interaction, RIVAL (red line) effectively generates real-world image variations.
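To make the measurement concrete, here is a minimal NumPy sketch of one way such a per-step divergence could be estimated, by fitting univariate Gaussians to the flattened latents (the function name and the Gaussian fit are our illustrative assumptions, not necessarily the exact procedure used for the figure):

```python
import numpy as np

def gaussian_kl(x_a, x_b):
    # Fit univariate Gaussians to the flattened latents and return KL(A || B).
    mu_a, var_a = x_a.mean(), x_a.var()
    mu_b, var_b = x_b.mean(), x_b.var()
    return 0.5 * (np.log(var_b / var_a)
                  + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)

# Two latents from the same distribution stay close (near-zero KL),
# while a shifted/rescaled latent diverges.
rng = np.random.default_rng(0)
same = gaussian_kl(rng.normal(0, 1, 10_000), rng.normal(0, 1, 10_000))
far = gaussian_kl(rng.normal(0, 1, 10_000), rng.normal(2, 3, 10_000))
```

Tracking such a quantity at each denoising step $t$ would produce curves of the kind plotted in the figure.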
#### Q4-2 Reference Feature Contribution
Attention can be viewed as sampling value features from the key-query attention matrix. RIVAL converts self-attention into image-wise cross-attention. When latents are sourced from the same distribution ($X_G^T, X_R^T\sim\mathcal{N}(0, I)$), images retain consistent style and content attributes (Fig. 3, first two images; **rebuttal Fig. D left, yellow line**). This property is beneficial since we do not require complex text conditions $c$ to constrain generated images to similar content and style.
For a more direct explanation, we visualize the images' bottleneck feature contributions to the attention score (as in Fig. 7, right part) in **rebuttal Fig. D, right part**.
The reference contribution of the softmax score is:
> $$\text{score}_{R}=\frac{\sum_{v_i\in\mathbf{v}_R} W^Q\mathbf{v}_G\cdot(W^K v_i)^{\top}}{\sum_{v_j\in\mathbf{v}_G\oplus\mathbf{v}_R} W^Q\mathbf{v}_G\cdot(W^K v_j)^{\top}}$$
As RIVAL adopts early fusion in the early steps, we use 50% as the score for those steps. Results indicate that RIVAL leverages 30% of the reference features in the final step, compared to random initialization (7%) and latent initialization (21%).
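For illustration, a small NumPy sketch of how such a reference share could be computed after the softmax (the function name, shapes, and $\sqrt{d}$ scaling are our assumptions; the paper's exact score is given by the formula above):

```python
import numpy as np

def reference_share(q_g, k_g, k_r):
    # q_g: (n_g, d) queries from the generated image;
    # k_g, k_r: (n_g, d) / (n_r, d) keys from generated / reference images.
    keys = np.concatenate([k_g, k_r], axis=0)
    logits = q_g @ keys.T / np.sqrt(q_g.shape[1])
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    # Average softmax mass assigned to the reference tokens.
    return float(attn[:, k_g.shape[0]:].sum(axis=1).mean())
```

Note that with identical generated and reference key sets the share is exactly 0.5, consistent with the 50% convention used for the fused early steps.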
---
Rebuttal Comment 1.1:
Title: Raising My Score
Comment: Thanks for the rebuttal. I am willing to raise my score to accept (7). But I recommend the authors add these new discussions into the final version.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer mFBx,
Thank you for recognizing our work and the content of our rebuttal. The questions you raised in the review are essential for explaining the effectiveness of RIVAL and have helped us to delve deeper into the underlying mechanisms of this work.
We will add all relevant experiments and discussions in the revised version. Besides, we will organize the content in the method section to make the article more understandable and academically meaningful.
Thank you again for acknowledging our work!
Title: Thank You for Recognizing RIVAL | Rebuttal 1:
Rebuttal: Dear all reviewers,
We want to express our gratitude to all reviewers for their thorough reviews and constructive comments. In the global rebuttal, we aim to summarize and address representative questions raised by the reviewers.
**We encourage all reviewers to refer to the global response for Q1 to gain a deeper insight into our motivation and applications.**
### Q1 Motivation and Problem Setting [all reviewers]
The primary goal of our paper is to introduce a tuning-free approach to produce images that resemble real-world samples. With a traditional T2I (Text-to-Image) pipeline, one can generate an image in the "Starry Night" style simply by inputting the prompt "in Starry Night style." However, considering that Van Gogh created 864 oil paintings in his lifetime using various techniques, generating images that closely align with an arbitrary real-world reference remains an essential problem.
While our objective may seem similar to image stylization, a fundamental distinction exists between generation (conditioned on an image) and tasks based on the original image (like image stylization and editing). While generation often starts from random noise and produces free-form, unconstrained images, our method can act as a style-conditioned generation technique when references incorporate low-level features. Here, we initialize the latent as a normalized inverted latent of the content image. This process also allows us to harness image editing capabilities directly using the inverted latent.
To provide clarity, we present a table comparing various diffusion-based methods based on four key aspects: (text prompt, image exemplar, initial latent, and requires finetuning)
|Application |Text Prompt|Image Exemplar|Gen. from noise|Gen. from inverted latent|Requires Tuning|Representative Method|
|-|-|-|-|-|-|-|
|T2I generation|Y|-|Y|-|-|Stable Diffusion|
|Image Variation*|C|Y|Y|-|-|DALLE2|
|Style conditioned T2I*|Y|Y|Y|-|Y|StyleDrop|
|Image Stylization**|C|Y|-|Y|Y|InST|
|Image Editing**|Y|Y|-|Y|-|Prompt2Prompt, MasaCtrl|
|Image Customization|C|Y|C|-|C|DreamBooth, Custom Diffusion|
|**Our Method**|Y|Y|C|C|-|**RIVAL**|
>**Y: required, C: required in some methods/applications**\
>*: Demonstrated ability of RIVAL in the paper submission.\
>**: Demonstrated ability of RIVAL in the rebuttal.
### Q2 More Applications [4HxC, aMYr, 5k69]
Given that RIVAL is a broad-based idea for the diffusion model during denoising inference, we received suggestions to expand our method to other diffusion-based applications. Below is a list of tasks RIVAL is adept at:
|Applications of RIVAL |Text Prompt|Image Exemplar|Gen. from noise|Gen. from inverted latent|
|-|-|-|-|-|
|Image Variation|Y|Y|Y|-|
|Style conditioned T2I|Y|Y|Y|-|
|Structure-guided T2I (ControlNet)|Y|Y (+Control condition)|Y|-|
|Image Stylization|Y|Y|-|Y|
|Image Editing|Y|Y|-|Y|
Building on Q1, we have expanded our approach to cover the three new applications listed in the last three rows of the table above, adapting RIVAL to different tasks. Generally, RIVAL is suitable for tasks that need consistent style/content generation in line with the exemplar.
### Q3 Table of Content: Experiments [mFBx, aMYr, 5k69]
Outlined below are the quantitative experiments conducted in this rebuttal:
1. **[mFBx@Q3-1] Module-wise ablation study**: We present a detailed module-wise ablation study to evaluate the impact of our method's four components (attention injection, attention feature fusion, latent initialization, and noise alignment).
2. **[aMYr@Q2] Comparison with plug-and-play methods**: We compare our image editing outcomes with PnP Diffusion and MasaCtrl.
3. **[aMYr@Q3] Speed comparison**: We delve into a time comparison for preparation and inference phases against PnP and MasaCtrl, with RIVAL showing competitive inference speed.
4. **[aMYr@Q5, 5k69@Q2] Hyperparameter Ablation**: Quantitative results are shown for selected timestep hyperparameters controlling the denoising process and for different CFG values, proving RIVAL's capability to mitigate artifacts from high CFG values.
### Q4 Table of Content: Visualizations [all reviewers]
We have made available a PDF containing visualization results of all pertinent experiments. We suggest reviewers examine our results, especially Fig. A, B, C, G, and H, within the provided PDF for comprehensive details.
### Q5 Clarifications
#### Q5-1 The algorithm workflow [mFBx, SLMY]
We appreciate the feedback and admit that our paper's presentation could be more precise. We have added an algorithmic description in **SLMY@Q3** to help reviewers better understand RIVAL's workflow.
#### Q5-2 Limitation analysis [4HxC, aMYr]
This is presented in **4HxC@Q7 and aMYr@Q7**.
#### Q5-3 Correction of formula expressions
Despite no direct feedback, we have spotted and rectified a typo in our inversion process in Eq. (5-6). Take Eq. (5) as an example. A coefficient $\sqrt{\alpha_{t-1}}$ should be multiplied before $(\beta_{t-1} - \beta_{t})$,
$$X^{t-1}=\sqrt{{\alpha_{t-1}}/{\alpha_t}} \cdot X^t+\sqrt{\alpha_{t-1}} \cdot (\beta_{t-1} - \beta_{t}) \cdot \varepsilon_\theta(X^t, t, \mathcal{C}).$$
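As a sketch, the corrected step can be written in code. We assume here the common DDIM convention $\beta_t = \sqrt{1/\alpha_t - 1}$ (an assumption on notation, since the definition is not restated in this rebuttal), and the function name is ours:

```python
import numpy as np

def inversion_step(x_t, eps_pred, alpha_t, alpha_prev):
    # Corrected Eq. (5): the (beta_{t-1} - beta_t) term carries sqrt(alpha_{t-1}).
    beta_t = np.sqrt(1.0 / alpha_t - 1.0)
    beta_prev = np.sqrt(1.0 / alpha_prev - 1.0)
    return (np.sqrt(alpha_prev / alpha_t) * x_t
            + np.sqrt(alpha_prev) * (beta_prev - beta_t) * eps_pred)
```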
Furthermore, a simpler and corrected version of the adaptive normalization in Eq.(9) is:
$$\varepsilon_\theta(X_G^{t}, t, \mathcal{C}) = \text{AdaIN}(\varepsilon_\theta^\mathrm{cfg}(X_G^{t}, t, \mathcal{C}), \varepsilon_\theta(X_R^{t}, t, \mathcal{C}))\text{, where}$$
$$\text{AdaIN}(a, b) = \sigma(b) (\frac{a - \mu(a)}{\sigma(a)}) + \mu(b).$$
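A minimal NumPy sketch of the AdaIN operation above, applied per sample and per channel over the spatial dimensions (the axis convention and the small `eps` for numerical stability are our assumptions):

```python
import numpy as np

def adain(a, b, eps=1e-6):
    # Re-normalize a so its spatial mean/std match b's, per sample and channel.
    mu_a = a.mean(axis=(2, 3), keepdims=True)
    std_a = a.std(axis=(2, 3), keepdims=True)
    mu_b = b.mean(axis=(2, 3), keepdims=True)
    std_b = b.std(axis=(2, 3), keepdims=True)
    return std_b * (a - mu_a) / (std_a + eps) + mu_b
```

After the operation, the output carries the content structure of `a` with the channel-wise statistics of `b`.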
In addition to the existing references, we have incorporated a few new ones in this rebuttal, which will appear in the revised version.
> *Inversion-Based Style Transfer With Diffusion Models*; **Yuxin Zhang et al.**; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 10146-10156
> *StyleDrop: Text-To-Image Generation in Any Style*; **Kihyuk Sohn et al.**; arXiv 2023
> *SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis*; **Dustin Podell et al.**; arXiv 2023
Pdf: /pdf/6a64f8be81234f21755c2a2b9e9da588203f7848.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper investigates the domain gap between generated images and real-world images, as it is challenging in generating high-quality variations of real-world images. It mainly contains two contributions, the first one is the cross-image self-attention injection for generating images that correspond to the given exemplar, and the second one is the latent alignment, which can help to eliminate the biased tone and semantics. The comparison with stable diffusion, SD variation, and ELITE validates the effectiveness of each component. Lastly, the ablation studies analyze in depth the contribution of each submodule, showing the positive impact of each part.
Strengths: + The paper is well-written and easy to follow for most parts, except when introducing the details of the shuffle operation in equation 8. It is not easy to directly understand the operation.
+ The paper has a nice flow of descriptions and figures to illustrate the qualitative comparison.
+ The diagrams are easy to understand and are well-made.
+ The main contribution of the proposed cross-image self-attention injection and latent alignment are intuitive and well-executed in this paper. Each design is reasonable to handle the corresponding challenges, like object custom and style consistency.
+ The ablation study is sufficient to validate the effectiveness of each component in designing the whole network, the concatenation of cross-image features in different steps, and the alignment of two features.
+ Experiments show superior performance in comparison with recent works.
Weaknesses: - Generally speaking, this work is more like the recent customized text-to-image generation, except that this work considers style consistency, like the analyses in Figure 4 (2nd row). Since these competing methods did not consider this style factor, they easily have the tone inconsistency problem. Directly comparing with them seems not very fair, as they can generate great results except for the inconsistent tone distribution. In my opinion, it seems not a crucial issue.
- The concatenation of v_G and v_R in equation 3 is also like some training-free video generation, which adopts the cross-frame interaction. The second proposed latent alignment is widely used in style transfer tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Some deep analyses of the failed cases seem not enough. The provided examples cover very wide ranges, like semantic bias, complex scenes, and hard concepts.
What is the difference between custom image generation and variation generation? It seems that it only has the style difference, as this work needs to keep the style consistent with the given exemplar. In this case, I think the authors should discuss these related works, and compare with them by considering style transfer factors.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 4HxC,
Thanks for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Q1 Shuffle Operation
In Eq. 8, the shuffle operation $X_G^T=\text{shuffle}(X_R^T)$ is designed to rearrange latent elements within the spatial dimension.
We present a PyTorch-style code snippet for better understanding:
```python
import torch

# x_rt: latent tensor of shape (batch_size, num_channels, height, width)
def shuffle(x_rt):
    bs, n_dim = x_rt.shape[:2]
    # Generate a random permutation over the spatial positions (height * width)
    perm_idx = torch.randperm(x_rt.nelement() // (bs * n_dim))
    # Flatten the spatial dims, permute them, and reshape back to the original shape
    x_gt = x_rt.view(bs, n_dim, -1)[:, :, perm_idx].view(x_rt.size())
    return x_gt
```
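As a side note, a short NumPy demonstration (ours, for intuition) of why this initialization is reasonable: shuffling only rearranges spatial positions, so the per-channel statistics of the inverted latent are preserved exactly:

```python
import numpy as np

x = np.random.default_rng(0).normal(size=(1, 4, 8, 8))    # stand-in for X_R^T
flat = x.reshape(1, 4, -1)
perm = np.random.default_rng(1).permutation(flat.shape[-1])
shuffled = flat[:, :, perm].reshape(x.shape)              # stand-in for X_G^T
# The spatial layout changes, but per-channel mean/std are untouched.
assert np.allclose(x.mean(axis=(2, 3)), shuffled.mean(axis=(2, 3)))
assert np.allclose(x.std(axis=(2, 3)), shuffled.std(axis=(2, 3)))
```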
### Q2 Fairness of Comparison
We recognize the importance of ensuring a fair comparison. In the image variation task, RIVAL uses the same objectives and input states as ImgVar and ELITE, ensuring a fair comparison. Our comparisons aim to highlight the benefits of RIVAL, particularly regarding style and content consistency, an overlooked yet critical aspect in image generation.
#### Q2-1 Comparison and Importance of Image Variation
Image variation in diffusion-based generation stems from UnCLIP (DALLE2), using image embedding as a condition for tailored generation.
In Fig. 4 and Tab. 1, the StableDiffusion baseline takes only text as a prompt to illustrate the benefits of introducing a reference image to guide the generation process. Concerning ImgVar and ELITE, these methods also use the original image as a condition and are fine-tuned on large-scale real-world datasets to align the image condition with generated images. However, they often fail to address specific subtle attributes that may not be easily described from the image condition, as shown in Fig 4 (e.g., out-of-focus background in the 3rd row, blinds in the 5th row).
Such attributes define an image's unique identity. Our approach strives to generate images that closely align with the original image from random noise. This property is vital as it is the fundamental precondition of all the applications presented in the paper and rebuttal, allowing us to generate images more similar to real-world exemplars.
#### Q2-2 Style-transfer Factor Consideration
Beyond image variation, RIVAL's capabilities in style transfer are evident. This ability is particularly highlighted in its text-driven image generation (Fig. 5) and customization (Fig. 9). Such demonstrations further accentuate RIVAL's strengths in producing style-consistent images.
Furthermore, we compare against methods explicitly crafted for style transfer, StyleDrop and InST, in **rebuttal Fig. C**. This comparison underscores RIVAL's versatility, especially considering the case-wise fine-tuning (~20 min) that StyleDrop and InST demand.
### Q3 Novelty of Purposed Modules
Here we describe how our method solves the problem through the organic combination of modules that might appear "simple" at first glance. A comprehensive ablation is presented in **rebuttal Fig. F and mFBx@Q3-1**.
#### Q3-1 Cross-frame Self-Attention Injection
As you correctly pointed out, we have reviewed some video-generation methods (L110-112) that use self-attention injection for cross-frame consistency. Despite sharing a similar objective, our technique is distinct as we are among the first to adapt it to real-world image generation challenges.
A primary challenge we address is attention attenuation (L132-135, **rebuttal Fig. D, right part**). This problem arises from the discrepancy between the distributions of latents in image generation. In video generation methods, latents across different frames typically maintain the same distribution, whether they originate from latents inverted by DDIM [Video-P2P, Fate-ZeRO] or are derived from the standard Gaussian [Tune-A-Video, Gen-1]. However, such uniformity cannot be presumed for generation that starts from random noise while referencing an inverted latent.
Although DDIM inversion is deterministic, it does not ensure that the inverted latent adheres to the standard distribution $\mathcal{N}(0, I)$. This discrepancy in the distribution between the real-inversion and generation chains becomes evident when applying cross-image self-attention (Sec. 3.1). This results in a distribution gap in the Key-Query-Value (KQV) terms. This gap can impact attention computation, leading to varied generation results. Hence, our attention-fusing design combined with latent alignment is proposed to address this challenge effectively.
#### Q3-2 Latent Noise Alignment
While adaptive normalization is commonly observed in style-transfer and domain adaptation tasks [StyleGAN, AdaIN], our approach to latent alignment tackles a distinct challenge inherent to diffusion generation. Specifically, we aim to solve realism degradation from high classifier-free guidance (CFG).
High CFG produces text-aligned images but can cause realism-divergent artifacts **(rebuttal Fig. H)**. To counter this, we introduced a decoupled adaptive normalization (Eq.9-10). It decouples CFG generation from DDIM inversion in one pass, aligning latents to reflect the image exemplar faithfully.
### Q4 Limitation Analysis
For clarity, we will include a more detailed analysis in the revised paper using Fig. 10 examples here.
> Semantic Bias: A prompt like "Pokemon" may lean towards popular choices like "Pikachu" due to training set biases, resulting in a Pikachu-dominated generation.
> Complex Scene & Hard Concepts: Stable Diffusion struggles to generate complex scenes and complicated concepts, e.g., "illustration of a little boy standing in front of a list of doors with butterflies around them in Fig. 10(b)". This complexity can degrade the inversion chain and widen the domain gap, leading to less accurate results.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response and my concerns are addressed well. Considering other reviews, I would like to give weak accept.
---
Reply to Comment 1.1.1:
Title: Thank You for Recognizing RIVAL
Comment: Dear Reviewer 4HxC,
Thank you very much for recognizing our work and the effort we put into our rebuttal. Especially in explaining this paper's novelty, your questions have helped us engage in a more comprehensive discussion and comparison of RIVAL's mechanism.
We will integrate the pertinent discussions, especially concerning the comparison's fairness and the method's novelty, into the subsequent revision. | null | null | null | null | null | null |
MADG: Margin-based Adversarial Learning for Domain Generalization | Accept (poster) | Summary: This paper aims to ease the distribution problem from a theoretical perspective, which uses margin loss and a scoring function to describe the relationship between domains, and the generalization bound in terms of functional class complexity is subsequently analyzed. Based on their theoretical analysis, a margin-based adversarial framework, which is developed upon the classical DANN method, is further proposed. Results conducted in the Domanbed benchmark show their results are competitive among existing arts.
Strengths: 1. It is always good to see some theoretical analysis for DG.
2. Experiments with the five datasets are appreciated.
Weaknesses: 1. The motivation in the introduction is problematic. The literature is happy to see inspiring theory, but the theory itself is not the purpose. Rather, the paper should emphasize what part of existing work requires a proper theoretical explanation. After all, how to solve domain generalization is the problem. The authors may consider revising the introduction.
2. The idea of adversarial training for DG (either DANN [77] or MMD [79]), is shown to be less effective than ERM according to different benchmarks [32, a]. However, this work shows significantly better results than ERM, what is the advantage of the proposed MADG compared with them?
3. According to Line 282, and the implementation details in the supplementary material, I think the comparisons are unfair, as the Domainbed uses randomly selected hyperparameters (batch size, learning rate, etc.), which are fixed in their experiments.
4. More effective approaches should be compared, such as SD [b], and Miro [c], and it is suggested to reevaluate their code (at least some of them) on the same device.
[a] OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization, in CVPR'21.
[b] Gradient Starvation: A Learning Proclivity in Neural Networks, in NeurIPS'21.
[c] MIRO: Mutual Information Regularization with Oracle, in ECCV'22.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It seems domain labels must be available during training, this should be listed as a limitation as domain labels are not always available.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions.
> The motivation in the introduction is problematic. The literature is happy to see inspiring theory, but theory itself is not the purpose; rather, the paper should emphasize what part of existing work requires a proper theoretical explanation. After all, how to solve domain generalization is the problem. The authors may consider revising the introduction.
>
**Response:** We apologize for not being clearer on this. Our overall intention was to suggest that having strong theoretical underpinnings along with improved performance is an added advantage for a method. We will revisit the phrasing and temper/substantiate our claim of motivating our method using a theoretical framework, as suggested. As discussed in Sec 1 (L39-42), our motivation in this work was to develop an adversarial DG algorithm that uses a disparity discrepancy based on margin loss, because of its advantages as discussed in Sec 4 (L129-144). We also develop a generalization bound for an unseen domain based on margin loss and scoring functions, which are more informative than 0-1 loss based bounds.
>The idea of adversarial training for DG (either DANN [77] or MMD [79]), is shown to be less effective than ERM according to different benchmarks [32, a]. However, this work shows significantly better results than ERM, what is the advantage of the proposed MADG compared with them?
>
**Response:** As discussed in Sec 4 (L136-139), one of the important advantages of the margin-based discrepancy used in MADG (defined in Eq (5) and (6)) is that it considers the classifier $f$ (that is used to perform the task) in its formulation. Adversarial methods in previous works have used discrepancy measures such as MMD [79] (a moment-matching metric) and DANN [77] (based on the $H\Delta H$ discrepancy) that do not consider the task (the classifier) in their formulations. Also, as discussed in Sec 4 (L132-144), the margin-based discrepancy is tighter and can be optimized more efficiently than the $H\Delta H$ discrepancy (ours computes the supremum over just one variable, whereas the latter computes it over two variables). We hypothesize that these properties allow the margin-based disparity measure used in MADG to perform better domain alignment and learn better domain-invariant features for the DG task.
> According to Line 282, and the implementation details in the supplementary material, I think the comparisons are unfair, as Domainbed uses randomly selected hyperparameters (batch size, learning rate, etc.), which are fixed in their experiments.
>
**Response:** In DomainBed, hyperparameter tuning is done by randomly drawing each hyperparameter value from a distribution, as shown in Table 6 in [32]. We follow the same approach. In Table A1 in our appendix, we report the hyperparameter values of the MADG model for each dataset. Similar to DomainBed, we perform hyperparameter tuning over a range of values for each parameter, as detailed in Table 1 below. We will include these and clarify this in the Appendix.
Table 1: Ranges of values for hyperparameter tuning in our MADG model
| Hyperparameter | Search Values |
| -------- | -------- |
| Margin | [1, 1.5, 2, 3] |
| Learning rate | [0.01, 0.04, 0.001, 0.004, 0.0001] |
| Momentum | [0.85, 0.9, 0.95] |
| Weight Decay | [0.005, 0.0005] |
| Dropout | [0.3, 0.4, 0.45, 0.5] |
| Learning rate decay | [0.001, 0.0002] |
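The tuning procedure described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of DomainBed-style random search over the grid in Table 1; the function names and the `evaluate` callback are our own, not the authors' code.

```python
import random

# Search space taken from Table 1 in this rebuttal.
SEARCH_SPACE = {
    "margin": [1, 1.5, 2, 3],
    "lr": [0.01, 0.04, 0.001, 0.004, 0.0001],
    "momentum": [0.85, 0.9, 0.95],
    "weight_decay": [0.005, 0.0005],
    "dropout": [0.3, 0.4, 0.45, 0.5],
    "lr_decay": [0.001, 0.0002],
}

def sample_config(rng=random):
    """Draw one hyperparameter configuration uniformly from the grid."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(evaluate, n_trials=20, seed=0):
    """Run n_trials randomly sampled configs and keep the best one.

    `evaluate` maps a config dict to a validation accuracy; in practice
    it would train and validate a model, here it is a placeholder.
    """
    rng = random.Random(seed)
    best_cfg, best_acc = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```

As in DomainBed, the best configuration is selected per dataset and test domain from the trials, rather than averaging performance across trials.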
>More effective approaches should be compared, such as SD [b], and Miro [c], and it is suggested to reevaluate their code (at least some of them) on the same device.
>
**Response:** As discussed in Sec 6 (Lines 278-280), we evaluate our model on five different benchmark datasets (OfficeHome, PACS, VLCS, TerraIncognita and DomainNet), following recent papers [71,88,89] and the DomainBed benchmark [32]. To the best of our knowledge, this is comprehensive for DG work. The paper [b] did not report results on these datasets; however, as suggested, we ran the SD model [b] on the OfficeHome dataset for three trials and report the average results below in Table 2. As seen from Table 2, MADG (our model) outperforms the SD model on all domains.
Also, as stated in Appendix A2 (L85-86), we follow recent state-of-the-art work [71] (also supported by recent papers [A1-A3]) in their test-domain methodology for hyperparameter selection. The paper [c] proposes the MIRO model and reports results only for the training-domain validation-based model selection setting. For a fair comparison, based on your suggestion, we ran the MIRO model for three trials in our setting. The results are reported below in Table 2, where we can see that the average accuracy of our model outperforms the MIRO model. The MADG model also outperforms MIRO on two domains and achieves the same accuracy on the `Real World` domain.
Table 2: Accuracy (%) on OfficeHome dataset
|Model|Art|Clipart|Product|Real World| Average|
|-------|--------|-------|-------|------|------|
|MIRO [c]|67.0($\pm0.7$)|56.5($\pm0.9$)|79.4($\pm1.4$)|81.5($\pm0.5$)|71.1|
|SD [b]|64.8($\pm0.9$)|49.9($\pm1.2$)|75.6($\pm1.3$)|79.1($\pm0.2$)|67.4|
|MADG (ours)| 68.6($\pm0.5$)|55.5($\pm 0.2$)|79.6($\pm0.3$)|81.5($\pm0.4$)|71.3|
> It seems domain labels must be available during training, this should be listed as a limitation as domain labels are not always available.
>
**Response:** As with recent state-of-the-art DG methods [71],[88], our method requires the knowledge of domain assignment of each data point. We will be happy to state this assumption/limitation explicitly.
[a] OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization, in CVPR'21.
[b] Gradient Starvation: A Learning Proclivity in Neural Networks, in NeurIPS'21.
[c] MIRO: Mutual Information Regularization with Oracle, in ECCV'22.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the rebuttal, some of my concerns are addressed. But my main concern regarding the hyper-parameter setting, which I think leads to unfair comparisons, remains. In Table 8 (not Table 6) in [32], the parameter for each trial is randomly selected from a uniform distribution, and the performance is averaged over different trials. That being said, the parameters are different for different trials, and simply selecting the best-performing parameter for all trials violates the random-selection rule. Besides, different datasets use the same set of hyper-parameters in [32], which are reported differently in Table A1 in the supp. material. For this reason, I cannot recommend acceptance for this work.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our response, and are glad to know that it helped resolve concerns.
On the pending issue, we'd like to clarify any misunderstanding. Firstly, on a minor note, we meant Table 6 in the ICLR version of [32] (the version available on OpenReview), which is Table 8 of its arxiv version [32].
Secondly and more importantly, Sec 5 (Pg 6) in the ICLR version of [32] under the ‘Hyperparameter search’ subsection states that: “***All hyperparameters are optimized anew for each algorithm and test domain*, including hyperparameters like learning rates which are common to multiple algorithms.**” Similarly, Sec 5 (Pg 7) in the arxiv version of [32] under the ‘Hyperparameter search’ subsection states that: "**For each algorithm and test environment, we conduct a random search of 20 trials over the hyperparameter distribution (see Appendix D). We use each model selection method from Section 3 to *select amongst the 20 models* from the random search.**” Thus, [32] *DOES NOT* average performance across the 20 trials, but suggests selecting the best hyperparameters among the 20 trials, each of which is obtained from random search. We exactly follow the same procedure in our work too (note that the list of hyperparameters in our rebuttal was randomly selected).
Also, in the official code implementation of [32] (code link provided in Sec 4, pg 5, of both the ICLR and arXiv versions), note that in Line 114 and Lines 123-128 of the `list_top_hparams.py` file, the dataset is passed as an argument, which is then used to list the best hyperparameters for each dataset individually (see Line 141). Line 141 in turn calls `model_selection.py`, where Lines 27-40 clearly state that the hyperparameters are chosen per dataset.
Following this, we use the same procedure and report the top hyper-parameters for each dataset in Table A1 in the Appendix.
We hope this can clarify our choice. If there is any more information required, please let us know. | Summary: The paper proposes a new adversarial learning objective using a margin based approach for domain generalization. The goal of domain generalization broadly is to build classifiers that are trained on one or more source domains, and are expected to generalize to an unseen target domain. The key idea is to leverage Margin disparity discrepancy (MDD) which quantifies the level of disagreement between decision boundaries of classifiers using their margins. MDD is used as a proxy to understand generalizability of classifiers in this context across multiple domains. Next, the paper establishes an upper bound on the generalization error on any unseen target domain that is within the convex hull of the source domains and MDD. In other words, the objective is to get the decision boundaries across different source domains to agree as much as possible, while also minimizing the empirical errors on them simultaneously.
Strengths: * The margin perspective for domain generalization is a fresh perspective to this, and the paper takes an interesting approach at using an adversarial learning strategy to optimizing this problem.
* Detailed theoretical setup and formulation, which builds on existing work and setup the formulation for MADG
* Results are impressive — on several benchmarks, the proposed method appears to perform competitively.
* good ablations and analysis of the proposed method.
Weaknesses: * I found the paper hard to read in general. At its core, the paper proposes to model error on the unseen target using a convex hull of the source domains — a standard idea in many generalization papers. The novelty, in my opinion, is the use of MDD as an objective for determining the discrepancy between domains, and the adversarial game. This could be clarified significantly to make sections 4 and 5 more readable. There is too much notation and terminology that obscures the key contributions of the paper. I am taking the proofs and theorems at face value, and have not verified them.
* As far as I understand, MADG requires training significantly more models (one per domain, plus a feature extractor and the main classifier) in order to compute MDD and perform training. This is vastly more complex than any existing method. It may be that making progress on a hard problem like domain generalization requires this, but I think this trade-off needs to be made more explicit — compute vs. generalization performance. In contrast, simple ERM or Mixup have little to no overhead and perform very closely on the metrics considered in the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: some what, yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and suggestions. We respond below to each of their concerns/suggestions.
> I found the paper to be hard to read in general, at its core, the paper is proposing to model error on the unseen target using a convex hull of the source domains — which is a standard idea in many generalization papers. The novelty, in my opinion, is to use the MDD as an objective for determining discrepancy between domains, and the adversarial game. This can be clarified significantly to make sections 4, 5 more readable. There is too much of notation and terminology that obfuscates the reader from understanding the key contributions of the paper. I am taking the proofs and theorems at face value, and have not verified them
>
**Response:** Thank you for the suggestion, we will make necessary changes to make sections 4 and 5 more readable.
> As far as i understand, MADG requires to train significantly more models (1 per domain, a feature extractor and the main classifier) in order to compute MDD and perform training. This is vastly more complex than any existing method. It may be that making progress in a hard problem like domain generalization requires this, but i think this trade-off needs to be made more explicit — compute vs generalization performance. Whereas a simple ERM or Mixup have little to no overhead and perform very closely on the metrics considered in the paper.
>
**Response:** We agree that the current dataset benchmark in domain generalization, DomainBed [32, ICLR 2021], is a challenging one where one model may not outperform all other models across all the constituent datasets (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) by a significant margin. Even recent papers, Fish [88, ICLR 2022] and Fishr [71, ICML 2022] showcase small improvements across these datasets. In terms of computational cost, following Eq.(17), while we iteratively compute MDD over all domains, our computational cost during training is similar to other benchmark methods as shown below in Table 1. (To add, even simple methods like ERM and Mixup have running times in similar ranges.)
Table 1: Computation cost (GPU ram occupied [GB] and time per step [s] during model training) and the average accuracy achieved across all the datasets.
|Model| GPU RAM occupied (GB) | Time per step (s) | Avg. Accuracy (%)|
|-------|-------|-------|-------|
|MADG (ours) | ~10.5 |~1.3 |66.0|
|Fishr [71] | ~15.7 |~0.6 | 65.7|
|Fish[88]| ~3.4|~1.2 | 64.8|
|Mixup [82]| ~8.2 | ~0.4 |65.4|
|ERM [75] |~8.2 | ~0.4| 65.0|
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thank you for your great efforts in reviewing our paper and offering insightful and constructive comments. The deadline for the author-reviewer discussion is approaching. We wanted to check if our rebuttal addressed your concerns, and would appreciate the opportunity to discuss further with you.
Understanding the demands on your schedule, we express our sincere gratitude for your review. Your suggestions/comments will be significantly beneficial to the improvement of our work. | Summary: - The paper presents a margin-loss based analysis of domain generalization.
- First, the paper derives a bound on margin disparity discrepancy (MDD) of any unseen domain within the convex hull of source domains in terms of the margin loss on source domains, ideal margin loss, and max MDD between any two source domains.
- The paper relates this to any unseen domain via projection onto the convex hull of source domains, and an additional factor $\gamma$
- The paper then realizes the bound for the empirical setting using a bound based on the Rademacher complexity of the function class
- Based on this bound, the paper proposes an adversarial learning method to regularize training by minimizing MDD in addition to classification loss.
- The paper demonstrates the efficacy of the method on a variety of real world datasets in the DomainBed benchmark.
Strengths: 1. The paper proposes a principled method for domain generalization, and the empirical method follows neatly from the derived generalization bound.
2. The proposed bound for error on an unseen domain based on margin disparities between source domains is novel to my knowledge.
3. Empirical results show modest improvements over ERM.
4. The paper presents empirical ablations for some design choices used in the algorithm.
Weaknesses: It is not clear if the single optimization step for the adversarial models f’ is sufficient for tightly approximating MDD. The method effectively regularizes a lower bound on the MDD term, which requires the adversarial models to be effective maximizers — this may require many steps for the adversarial models per main-model step.
Table 1 presents results using the model selection strategy of [71], which uses out-of-distribution data as the validation set for picking hyperparameters. Using labelled OOD data for hyperparameter selection limits the conclusions we can draw about if the proposed method works in practice, where we do not usually have access to labelled test data. While the AD, GD, and M metrics allay some of this concern, presenting results with some methodology that could be used for model-selection in practice, such as leave-one-out domain validation would make the empirical results stronger.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why is only the feature extractor trained to minimize MDD loss and not the classifier (line 261)?
Theorem 2 shows that the error of any unseen domain can be bounded via a projection onto the convex hull of the source domain, by adding in an additional factor $\gamma$. It is not clear if this factor is small or very large for domain generalization benchmarks. Is there intuition for “how far” an unseen domain in a benchmark is to the source domains?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It is not completely clear how significant the empirical results are. The method slightly outperforms ERM on average, but ERM outperforms the method on 2 out of 5 considered datasets, and is competitive on 2 others. Additionally, the reliance on labelled OOD data for model selection limits the conclusions one can draw about the efficacy of the method on the DomainBed benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions.
> It is not clear if the single optimization step for the adversarial models f’ is sufficient for tightly approximating MDD....this may require many steps for the adversarial models per main-model step.
>
**Response:** As discussed in Algorithm 1, each classifier $f'$ (adversarial model), along with the feature extractor, is updated once per main-model step. We found this to be sufficient in our empirical studies. To answer this further, we report results on the Art domain in the OfficeHome dataset with additional adversarial updates per main-model step ($t$) in Table 1 below. (We observed similar trends in our earlier studies.) As seen in the table, adversarial updating with t=3 achieves only marginally better performance for the added computational cost.
Table 1: Accuracy (%) for Art domain for different adversarial updates per main-model step
|\# Adversarial updates| Art|
|--|--|
|t=1| 68.6 |
|t=3|68.7|
>Table 1 presents results using the model selection strategy of [71], which uses OOD data as validation set for picking hyperparameters. Using labelled OOD data for hyperparameter selection limits the conclusions ..... While D, GD, and M metrics allay some of this concern, presenting results with some methodology that could be used for model-selection in practice would make empirical results stronger.
**Response:** As stated in Appendix A2 (L85-86), we follow the recent state-of-the-art work [71] (also supported by recent papers [refs A1-A3]) in using the 'Test-domain' method for hyperparameter selection - primarily for fair comparison of results. (We also reported train-domain results in the Appendix, although that was not our focus in this work.)
These recent efforts also discuss reasons for this choice, such as avoiding over-reliance on spurious correlations [71, A2] and the underspecification issue as a key reason for failure of deployed ML systems [A3]. Also, [71] argues that it is more realistic that a user will label a few target samples to validate their model's generalization performance before deploying it in practice. We followed these recent efforts in using the test-domain method.
> Why is only the feature extractor trained to minimize MDD loss and not classifier (line 261)?
>
**Response:** As seen in Eqs (5) and (6), MDD is defined as the supremum over only the $f'$ scoring function while the other function $f$ is fixed. Thus, following this definition, our final optimization problem, Eq (22), maximizes MDD in terms of the $f'$ classifier and minimizes MDD in terms of the feature extractor ($\mathcal{G}$). The classifier $f$ is fixed when optimizing the MDD loss.
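To illustrate the asymmetry described above, here is a hedged sketch of the discrepancy estimate using a simple 0-1 disagreement proxy in place of the paper's margin loss. The names `disagreement` and `mdd_proxy` are our own, and the real MADG objective backpropagates through soft margin losses rather than hard predictions; only the structure (the disparity is taken with respect to the fixed classifier $f$) is the point here.

```python
def disagreement(f_prime_preds, f_preds):
    """0-1 disparity: fraction of points where f' disagrees with the fixed f."""
    assert len(f_prime_preds) == len(f_preds)
    return sum(a != b for a, b in zip(f_prime_preds, f_preds)) / len(f_preds)

def mdd_proxy(f_preds_src, fp_preds_src, f_preds_tgt, fp_preds_tgt):
    """Plug-in estimate of the discrepancy between target and source domains.

    In MADG this quantity is maximized with respect to the adversarial
    classifier f' and minimized with respect to the feature extractor,
    while the main classifier f stays fixed throughout.
    """
    return (disagreement(fp_preds_tgt, f_preds_tgt)
            - disagreement(fp_preds_src, f_preds_src))
```

Because the supremum runs over only one function ($f'$), each adversarial head can be trained with ordinary gradient ascent on this quantity, alternating with the feature-extractor update.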
>Theorem 2 shows that error of any unseen domain can be bounded via a projection onto the convex hull of the source domain, by adding an additional factor $\gamma$. It is not clear if this factor is small or large for DG benchmarks. Is there intuition for “how far” an unseen domain in a benchmark is to the source domains?
>
**Response:** As stated in Theorem 2 and Remark 3, $\gamma$ is defined as the projection of the unseen domain onto the convex hull. It is also defined in Thm 2 as the MDD between the convex hull and the unseen domain, which can be further approximated with a combination of two different cross-entropy functions as in Eq (20). This equation is equivalent to a $\hat{\rho}$-balanced Jensen-Shannon (JS) divergence, as shown in Propn D.1 in [37]. Thus, we can approximate this projection by computing the JS divergence between the two domains (distributions). To answer this better, we studied the pairwise JS divergence between domains in the OfficeHome dataset, as shown in Table 1 below. Such an analysis provides an intuitive value for $\gamma$ by summing the values across the rows of Table 1. We can see from Table 1 that the unseen accuracy is low when the $\gamma$ value is high (as expected), because higher $\gamma$ values indicate that the unseen domain is far from the convex hull and hence harder to generalize to. We see a similar trend for the PACS dataset in Table 2 below.
Table 1: Pairwise JS divergence for the OfficeHome dataset and the approximated projection value ($\gamma$) of the unseen domain on the convex hull.
|OfficeHome|Art|Clipart|Product|Real World| ~$\gamma$|Accuracy(%)|
|-----|-----|-----|-----|-----|-----|-----|
|Art|0|0.36 |0.35|0.09|0.8|68.6|
|Clipart|0.36|0|0.31|0.41|1.1|55.5|
|Product|0.35|0.31|0|0.37|1.0|79.6|
|Real World|0.09|0.41|0.37|0|0.87|81.5|
Table 2: Pairwise JS divergence for the PACS dataset and the approximated projection value ($\gamma$) of the unseen domain on the convex hull.
|PACS|Art_painting|Cartoon|Photo|Sketch| ~$\gamma$|Accuracy(%)|
|-----|-----|-----|-----|-----|-----|-----|
|Art_painting|0|0.34 |0.08|0.72|1.18|87.8|
|Cartoon|0.34|0|0.4|0.41|1.24|55.5|
|Photo|0.08|0.4|0|0.74|1.21|79.6|
|Sketch|0.72|0.46|0.74|0|1.92|81.5|
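The pairwise values in Tables 1 and 2 are Jensen-Shannon divergences between domain distributions. For discrete distributions this is a standard formula (written here with the natural log); `gamma_estimate` is our illustrative name for the row-sum approximation described above, and the paper's actual estimator over learned features is not reproduced here.

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions (natural log)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric, non-negative, bounded by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def gamma_estimate(js_row):
    """Approximate the projection gamma of an unseen domain as the sum of
    its pairwise JS divergences to the source domains (one table row)."""
    return sum(js_row)
```

For example, summing the Art row of Table 1 (0.36 + 0.35 + 0.09) recovers the reported value of roughly 0.8.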
>It is not completely clear how significant the empirical results are....
>
**Response:** We understand the concern. The current dataset benchmark in domain generalization, DomainBed [32, ICLR 2021], is a challenging one where one model may not outperform all other models across all the constituent datasets (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) by a significant margin. Even recent papers, SAND-mask [89, ICML 2021 workshop], Fish [88, ICLR 2022] and Fishr [71, ICML 2022] showcase small improvements across these datasets. To further analyze a model’s effectiveness and consistency across these datasets, we evaluate each model with three other metrics (AD, GD, M) as discussed in Sec 6. The proposed model outperforms other models on all three metrics along with the average accuracy metric. As discussed in Appendix A2 (L85-86), we follow recent state-of-the-art work [71] (also supported by recent papers [A1-A3]) for our experimental settings, for fair comparison.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thank you for your great efforts in reviewing our paper and offering insightful and constructive comments. The deadline for the author-reviewer discussion is approaching. We wanted to check if our rebuttal addressed your concerns, and would appreciate the opportunity to discuss further with you.
Understanding the demands on your schedule, we express our sincere gratitude for your review. Your suggestions/comments will be significantly beneficial to the improvement of our work.
---
Rebuttal Comment 1.2:
Title: Response
Comment: I thank the authors for addressing my questions. The new ablation resolves my concern related to adversarial optimization.
I am currently inclined to retain my score due to the following remaining concerns:
1. I thank the authors for checking the approximate values of gamma on DomainBed. It appears that the empirically observed values are all close to or above 1. Does this not make the bounds of theorems 2 and 3 very loose for settings such as DomainBed where there may be very few source domains (the error rate can be 1 at most anyway)? If that is the case, it would support the story of the paper to motivate why optimizing the other terms still leads to improved DG. Alternatively, are there empirical settings the authors can identify where gamma is small?
2. I appreciate that DomainBed is a challenging benchmark to consistently outperform ERM on and no DG methods lead to meaningful performance improvements. However, to show that MADG is a useful algorithm for domain generalization, there needs to be an experiment where MADG leads to convincingly better results than ERM. This doesn't have to be DomainBed, it could be a simpler synthetic setting (for example with the unseen domain close to or within the convex hull of source domains) — [71] considers ColoredMNIST and a linear task where Fishr outperforms ERM significantly.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for acknowledging our response, and are glad to know that it helped resolve concerns.
The reviewer is absolutely correct. The empirically observed values of gamma are all close to or above 1 for a challenging benchmark like DomainBed. Hence, in this work, we focus on optimizing the first two terms in Theorem 3 in our proposed MADG method and demonstrate improved performance on the DomainBed benchmark. Optimizing all the terms in Theorem 3 would be a natural and interesting extension of our work. Based on the reviewer's suggestions, we ran experiments on ColoredMNIST (more discussion below), where we noted the gamma values to be small for all domains (expectedly so since the distributions of different domains in ColoredMNIST are relatively close to each other, when compared to real-world datasets).
Regarding the comparison with the ERM model, we thank the reviewer for the suggestion. We conducted studies on the ColoredMNIST dataset, as shown in the tables below. As seen in Table 2, with almost no hyperparameter tuning (due to limited time), the proposed MADG algorithm achieves 65.6\% average accuracy, significantly higher than the ERM model (57.8\% average accuracy), thus showing that the margin-based DG algorithm MADG learns better domain-invariant features on the ColoredMNIST dataset. Moreover, Table 1 shows that $\gamma$ is small across domains in this dataset, showcasing the promise of the proposed method when the unseen domain is within the convex hull of the source domains. We will leverage the additional page provided by NeurIPS in the final version to include these results and discussions.
We hope that this clarifies the reviewer's concerns. We are grateful for the opportunity to engage with the reviewer, and improve our paper. We'd be happy to provide any further information or clarifications if required and permitted.
Table 1: Pairwise JS divergence for the ColoredMNIST dataset and the approximated projection value ($\gamma$) of the unseen domain on the convex hull.
|ColoredMNIST|+90\%|+80\%|-90\%| ~$\gamma$|
|-----|-----|-----|-----|-----|
|+90\%|0|0.019|0.024|0.043|
|+80\%|0.019|0|0.029|0.048|
|-90\%|0.024|0.029|0|0.054|
Table 2: Accuracy (%) on ColoredMNIST dataset
|Model|+90\%|+80\%|-90\%|Average|
|-------|--------|-------|-------|------|
|ERM|71.8 ($\pm0.4$)| 72.9($\pm0.1$)| 28.7 ($\pm0.5$)| 57.8|
|MADG| 72.3 ($\pm0.3$)| 74.0 ($\pm0.2$) | 50.5 ($\pm0.4$)|65.6| | Summary: This paper is commendable for its innovative use of a margin-based theoretical framework to solve domain generalization problems, which contrasts with the largely heuristic and empirical approaches adopted by existing methods. By grounding their approach in a theoretical foundation, the authors provide more interpretable solutions that can contribute to the field of domain generalization.
Strengths: - The claim presented in the paper is strongly supported by theoretical analysis, as well as the extensive ablation studies presented in section 7. This speaks to the methodological rigour of the study and strengthens its credibility.
- The development of a theoretical framework for solving domain generalization problems is indeed a significant strength of this paper. This approach not only enhances the comprehensibility and reproducibility of the method, but also contributes to the broader understanding of the problem.
Weaknesses: - While the average performance of the proposed method is the highest, the performance gap across different tasks is quite significant. The method shows the best performance on the Office Home dataset, but this alone is not enough to convincingly demonstrate the overall superiority of the method. Therefore, the claimed significance of the proposed method appears to be overstated.
- The authors' claim that a theoretical approach is necessary is quite a strong statement that seems to undermine the importance of empirically validated methods in the field. This makes it hard to fully agree with their motivation.
- While the authors make a reasonable argument for the use of margin as a metric in generalization, they do not provide a sufficient justification for its relevance in domain generalization problems, particularly where style shifts are involved. This makes it difficult to understand the authors' motivation for their margin-based approach.
- Similarly, the reasoning behind the use of adversarial learning (min-max framework) is not clearly explained. The proposed method appears to be similar to the one used in [1], and a clear explanation of the differences between these two methods, apart from the task, would be beneficial.
- The paper lacks a comparison with recent Domain Generalization (DG) works [2-4]. This makes it difficult to assess how the proposed method stacks up against the state of the art.
[1] Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
[2] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
[3] Domain Generalization by Mutual-Information Regularization with Pre-trained Models
[4] SIMPLE: Specialized Model-Sample Matching for Domain Generalization
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weakness
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have well described their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions.
> While the average performance of the proposed method is the highest, the performance gap across different tasks is quite significant.....
**Response:** We agree. In Sec 6 (L288-292), we discuss this concern and, to reward a model's consistent performance across datasets, we include three other metrics (AD, GD, M), discussed in Sec 6. As seen from Table 1 in the main paper, we outperform all earlier methods on these three metrics as well as on the average accuracy metric. Besides, the current dataset benchmark in domain generalization, DomainBed [32, ICLR 2021], is a challenging one where one model may not outperform all other models across all the constituent datasets (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) by a significant margin. Even recent papers, SAND-mask [89, ICML 2021 workshop], Fish [88, ICLR 2022] and Fishr [71, ICML 2022], showcase similar improvements across these datasets.
> The authors' claim that a theoretical approach is necessary is quite a strong statement that seems to undermine the importance of empirically validated methods in the field....
**Response:** In hindsight, we agree with this observation, and thank you for pointing this out. We will revisit the phrasing and temper our claim. Our overall intention was to suggest that having strong theoretical underpinnings along with improved performance is an added advantage for a method. As discussed in Sec 1 (L39-42), our motivation was to develop an adversarial DG algorithm that uses a margin-based discrepancy because of its advantages, discussed in Sec 4. Such an approach has not been explored for DG before. We also develop a generalization bound for the unseen domain based on margin loss, as such bounds are more informative than 0-1 loss-based bounds.
> While the authors make a reasonable argument for the use of margin as a metric in generalization, they do not provide a sufficient justification for its relevance in DG problems, particularly where style shifts are involved....
>
**Response:** We apologize if this did not come through clearly. As stated in Sec. 4 (L136-139), one advantage of using margin-based discrepancy (MDD) is that it considers the task (classifier) in its formulation, as shown in Eq (5). MDD learns invariant features across domains based on the idea of bi-classifier prediction. Let's consider the MDD between the two domains ($D_{s_i}$, $D_{s_k}$) as shown in Eqns (5) & (22) (consider the case where $N_s=2$ and $j=1$). We have two different classifiers (scoring functions) that take input from the feature extractor and try to classify $D_{s_i}$ samples correctly, whereas for $D_{s_k}$ samples only classifier $f$ tries to classify them correctly. Simultaneously, the two classifiers are also trained to detect $D_{s_k}$ samples far from $D_{s_i}$'s support. This is possible because $D_{s_k}$ samples far from $D_{s_i}$'s support cannot be easily discriminated by the $f'$ classifier. The feature extractor then tries to generate $D_{s_k}$ features such that they are close to the support of $D_{s_i}$. In this way, we align the domains and learn domain-invariant features. We will clarify this.
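To make the bi-classifier idea concrete, here is a minimal, hypothetical numpy sketch in the spirit of the practical adversarial form of MDD: the auxiliary classifier $f'$ is encouraged (with a margin weight `gamma`) to agree with $f$'s predictions on one domain and to disagree on the other. This is an illustration only, not the paper's actual implementation, and all names here are made up for the sketch:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def margin_disparity_discrepancy(f_src, fp_src, f_tgt, fp_tgt, gamma=4.0):
    """Simplified empirical margin-based disparity between two domains.

    f_*  : logits of the main classifier f on each domain's batch
    fp_* : logits of the auxiliary classifier f' on the same batches
    gamma plays the role of the margin via the usual reweighting trick.
    """
    # pseudo-labels produced by the main classifier f
    y_src = f_src.argmax(axis=1)
    y_tgt = f_tgt.argmax(axis=1)
    p_src = softmax(fp_src)
    p_tgt = softmax(fp_tgt)
    rows_s = np.arange(len(y_src))
    rows_t = np.arange(len(y_tgt))
    # f' should agree with f on the anchor domain (margin-weighted CE) ...
    agree = -gamma * np.log(p_src[rows_s, y_src] + 1e-12).mean()
    # ... and disagree with f on the other domain
    disagree = -np.log(1.0 - p_tgt[rows_t, y_tgt] + 1e-12).mean()
    return agree + disagree

rng = np.random.default_rng(0)
f_src, fp_src = rng.normal(size=(8, 5)), rng.normal(size=(8, 5))
f_tgt, fp_tgt = rng.normal(size=(8, 5)), rng.normal(size=(8, 5))
mdd = margin_disparity_discrepancy(f_src, fp_src, f_tgt, fp_tgt)
```

In an adversarial setup this quantity would be maximized over $f'$ and minimized by the feature extractor, which is what pulls $D_{s_k}$ features toward $D_{s_i}$'s support.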
> Similarly, the reasoning behind the use of adversarial learning (min-max framework) is not clearly explained. ....a clear explanation of the differences between this and the one used in [1], apart from the task, would be beneficial.
>
**Response:** As discussed in Sec 5 (L243-251), we formulate our objective as a minimization problem as shown in Eq (18). We then expand the second term (empirical MDD) in this problem using Eq (6). As empirical MDD is defined as the supremum over $f'$ (maximization), the overall optimization now becomes a minimax game. We hence use adversarial learning to solve our objective function. We will explain this clearly.
We note that our discrepancy, MDD, is different from the more common MMD. The paper [1] uses Maximum Mean Discrepancy (MMD), whereas, in our paper, we use Margin Disparity Discrepancy (MDD), which is fundamentally different from MMD formulation as shown in Eqns (5) and (6).
>The paper lacks a comparison with recent DG works [2-4]. This makes it difficult to assess how the proposed method stacks up against the state of the art.
>
**Response:** As stated in Sec 6 (L278-280), we evaluate our model on five different datasets from the DomainBed benchmark [32]. To the best of our knowledge, this is fairly comprehensive when compared to most recent efforts. To answer this further, [2] evaluates only on the DomainNet dataset of the DomainBed benchmark, and also uses a smaller version of this dataset (3 instead of 6 domains, and 40 classes instead of 345). Note that DomainBed consists of 5 datasets (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet). To provide a comparison with [2], we train our model on their version of the DomainNet dataset. Table 1 in the attached rebuttal.pdf shows the results. We see that our model outperforms LP-FT [2] on all three domains. As stated in Appendix A2 (L85-86), we follow recent state-of-the-art work [71] (also supported by recent papers [A1-A3]) in using the `Test-domain` methodology for hyperparameter selection. [3] proposes the MIRO model and evaluates only with a training-domain model selection method. For a fair comparison, we re-ran the MIRO model for three trials in our setting. From Table 2 in rebuttal.pdf, we see that MADG outperforms MIRO on two domains and achieves higher average accuracy. The paper [4] proposes an ensemble algorithm (a collection of multiple methods) for DG. Our method is a non-ensemble one, and can be a part of their method by itself.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their comprehensive response and the additional experiments provided. Concerns about 2-4 have been resolved. However, I still have concerns regarding the performance gap evaluations across different tasks and the comparisons with recent studies. From the response provided in the PDF, I am not entirely convinced that MADG surpasses MIRO or even when compared to Fishr. Consequently, due to limited evaluation, I will maintain my score at 5.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our response, and are glad to know that it helped resolve concerns.
Considering the experimental studies in the main paper, the ablation studies, the additional results on other experimental settings in the appendix, and those in the rebuttal, we humbly submit that our evaluations are fairly comprehensive. We believe that the reviewer is pointing to the performance comparisons here rather than the amount of experimental study itself.
Regarding the gap in performance evaluation, as stated in our first response, DomainBed is a challenging benchmark and we achieve consistent performance on all its constituent datasets as shown using the AD, GD and M metrics. We initially did not have MIRO in our comparisons, since their experimental setting is different from ours (we follow [71] in the experimental settings). However, our comparisons included in the rebuttal PDF, show that we outperform MIRO on our experimental setting. To go further, we also compared our method, MADG, with another recent method, SD [a, Neurips 2021], as suggested by the reviewers and show results in the tables below. The proposed MADG outperforms SD on all domains across all the four datasets. We will add these to the paper.
Regarding performance comparison with Fishr, as seen in Table 1 of our paper, our method outperforms the Fishr model on all three evaluation metrics that reward consistency of performance across domains/datasets: AD, GD, M.
We'd like to add here that existing works on DG are designed from different perspectives, such as domain-invariant representation learning, data/feature manipulation, and meta-learning/optimization. We believe each perspective has its specific advantages and can achieve superior performance on specific domains/datasets. Consequently, it may not be easy for a method from one perspective to achieve a large improvement over another perspective across domains/datasets. We show that our method consistently outperforms other methods that use the same experimental setting (following [71]). Our method makes a novel contribution in its margin-based discrepancy and adversarial formulation. The combination of our method with complementary ones, e.g., those based on data/feature augmentation, may provide opportunities for future work and even better results.
We hope this can address any pending concerns w.r.t. performance evaluation. We are grateful for the opportunity to engage with the reviewer, which has only helped us improve the paper. If there is any more information required, please let us know.
Table 1: Accuracy (%) on OfficeHome dataset
|Model|Art|Clipart|Product|Real World| Average|
|-------|--------|-------|-------|------|------|
|SD[a]|64.8($\pm0.9$)|49.9($\pm1.2$)|75.6($\pm1.3$)|79.1($\pm0.2$)|67.4|
|MADG (ours)| 68.6($\pm0.5$)|55.5($\pm 0.2$)|79.6($\pm0.3$)|81.5($\pm0.4$)|71.3|
Table 2: Accuracy (%) on PACS dataset
|Model|Art_Painting|Cartoon |Photo |Sketch | Average|
|-------|--------|-------|------- |-------|------|
|SD[a]|82.7($\pm0.9$)|77.4($\pm3$)|97.1($\pm0.7$)|68.5($\pm2.4$)|81.4|
|MADG (ours)|87.8($\pm0.5$)|82.2($\pm0.6$)|97.7($\pm0.3$)|78.3($\pm0.4$)|86.5|
Table 3: Accuracy (%) on VLCS dataset
|Model|PASCAL|CALTECH|LABELME|SUN| Average|
|-------|--------|-------|-------|------|------|
|SD[a]|72.6($\pm1.7$)|91.4($\pm1.7$)|64.1($\pm1.6$)|63.3($\pm1.2$)|72.3|
|MADG (ours)|77.3 ($\pm0.1$)|98.5($\pm0.2$)|65.8($\pm0.3$)|73.1($\pm0.3$)|78.7|
Table 4: Accuracy (%) on TerraIncognita dataset
|Model|L100|L46|L43|L38| Average|
|-------|--------|-------|-------|------|------|
|SD[a]|39.5 ($\pm8.7$)|41.4 ($\pm1.2$)|49.2 ($\pm1$)|34.1($\pm6.2$)|41.1|
|MADG (ours)|60.0 ($\pm1.2$)|45.6($\pm0.5$)|57.4($\pm0.3$)|51.8($\pm0.2$)|53.7|
[a] Pezeshki, Mohammad, et al. "Gradient starvation: A learning proclivity in neural networks." Advances in Neural Information Processing Systems 34 (2021): 1256-1272. | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive feedback: the proposed method is well designed [Ru5h1, RFjmm]; the method is well supported by theoretical analysis as well as extensive ablation studies [Rwf8A, RFjmm, RMRuq]; the development of a theoretical framework for the DG problem not only enhances the comprehensibility and reproducibility of the method, but also contributes to the broader understanding of the problem [Rwf8A, Ru5h1, RzZgJ]; and the results of the proposed MADG model are good [Ru5h1, RMRuq]. We respond to each reviewer's comments below.
Pdf: /pdf/d18cd7408abfe83a80765d649093236a956451a6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper investigates domain generalization (DG) using a margin-based theoretical framework. The authors first formulate the generalization upper bound by leveraging the margin disparity discrepancy (MDD). Then, an adversarial learning strategy (MADG) is devised to minimize the empirical MDD between source domains. The experiments on the DomainBed benchmark further demonstrate that the proposed MADG could outperform the current state-of-the-art methods.
Strengths: 1. The paper is clearly written, and the method is well-designed.
2. The theory of margin-based generalization bound is interesting and could benefit the further designs of DG methods.
3. The proposed MADG could outperform recent state-of-the-art methods.
Weaknesses: 1. The primary concern of the reviewer is its effectiveness. While MADG outperforms most baselines in the DomainBed benchmark, the improvement is marginal/minor. This makes it unclear to me whether the method actually works better, or whether it is just a product of optimizing some hyperparameters.
2. The motivation, “very little work has been done in developing DG algorithms that are well-motivated by theoretical”, lacks soundness. In fact, there have been notable works in the literature that utilize theoretical frameworks to analyze the generalization problem, such as gradient matching [71, 88] and invariant risk minimization [80]. It could be beneficial if the authors could provide a detailed comparison with such theoretical methods.
3. In addition to the accuracy comparison, could the authors provide a more in-depth analysis regarding the training dynamics? This would help shed light on the inner workings of the MADG approach.
4. Many advantages, such as efficient optimization, stated in the abstract are not well verified. Specifically, following Eq. (17), it seems that the estimation process could introduce a significant computational cost, as it iteratively computes for all domains.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Table 1 could be simplified, as many baselines are unnecessary.
2. The common approach to compare DG methods is the training-domain validation. Why do the authors mainly utilize the test-domain selection?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions.
> 1. The primary concern of the reviewer is in its effectiveness. While MADG outperforms most baselines in the DomainBed benchmark, the improvement is marginal/minor. This made it unclear to me if the method actually works better, or it is just a product of optimizing some hyperparameters.
>
**Response:** We understand the concern. The current dataset benchmark in domain generalization, DomainBed [32, ICLR 2021], is a challenging one where one model may not outperform all other models across all the constituent datasets (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) by a significant margin. Even recent papers, SAND-mask [89, ICML 2021 workshop], Fish [88, ICLR 2022] and Fishr [71, ICML 2022] showcase small improvements across these datasets. To further analyze a model’s effectiveness and consistency across these datasets, we evaluate each model with three other metrics (AD, GD, M) as discussed in Sec 6. The proposed model outperforms other models on all three metrics along with the average accuracy metric.
> 2. The motivation, “very little work has been done in developing DG algorithms that are well-motivated by theoretical”, lacks soundness. In fact, there have been notable works in the literature that utilize theoretical frameworks to analyze the generalization problem, such as gradient matching [71, 88] and invariant risk minimization [80]. It could be beneficial if the authors could provide a detailed comparison with such theoretical methods.
>
**Response:** We apologize for not being clearer on this. The gradient matching papers [71,88] propose the idea of learning invariant features by domain-level gradient matching; in particular, [71] uses gradient variance matching, and [88] uses gradient mean matching. These papers don't develop generalization bounds for the unseen domain; instead, they theoretically analyze the inconsistency score function to find optimal weights across all domains. Even though [80] develops a generalization bound for the unseen domain, it uses a different causal mechanism-based approach. In this work, we present a new perspective on the problem based on margin-based bounds and adversarial learning, which considers convexity relations between domains and the alignment and discrepancy between these domains. We will revisit the phrasing as suggested.
> 3. In addition to the accuracy comparison, could the authors provide a more in-depth analysis regarding the training dynamics? This would help shed light on the inner workings of the MADG approach.
>
**Response:** The key aspects of the training dynamics are analyzed in Sec. 5 (L252-258) and Eq. (20): the optimization problem consists of two loss terms, classification and transfer loss, that are updated in a two-step training methodology. Figs. 1 and 2 in rebuttal.pdf show the classification and transfer loss plots for the Art and Product domains of OfficeHome.
> 4. Many advantages, such as efficient optimization, stated in the abstract are not well verified. Specifically,.. Eq. (17), it seems that the estimation process could introduce a significant computational cost, as it iteratively computes for all domains.
>
**Response:** As seen from Eqns (5) and (6), the MDD between two domains is defined as the supremum over only one variable ($f'$), as opposed to other discrepancies used in related work such as H$\Delta$H, which involve a supremum over two variables. This makes MDD relatively more efficient to optimize when compared to other discrepancy measures; we will clarify this. In terms of computational cost, following Eq. (17), while we iteratively compute MDD over all domains, our computational cost during training is similar to other benchmark methods, as shown in Table 1 in rebuttal.pdf. (To add, even simple methods like ERM and Mixup have running times in similar ranges.)
> Table 1 could be simplified, as many baselines are unnecessary.
>
**Response**: Thank you for the suggestion, we will simplify the table by either dividing it into sections or moving some results to the appendix.
> The common approach to compare DG methods is the training-domain validation. Why do the authors mainly utilize the test-domain selection?
>
**Response:** As stated in Appendix A2 (L85-86), we follow recent state-of-the-art work [71] (also supported by recent papers [A1-A3]) in using the `Test-domain` method for hyperparameter selection. These recent efforts also discuss reasons for this choice: (i) [71] argues that learning causal mechanisms is not useful in the Training-domain method, especially when correlations are more predictive in training than causal features. The paper suggests that using `Test-domain` selection is more realistic since a user will easily label a few target samples to validate a model's generalization performance before deploying it; (ii) [A1] states that in-domain (ID) validation sets (Training-domain) eliminate the advantages of using DG-specific models over ERM models, as ID accuracy is often at odds with generalization accuracy. In our work, restricting the $\rho$ value selection using ID validation can hurt the model's generalization performance, as shown in the relationship between margin and generalization in Thm 3; (iii) [A2] argues that high ID accuracy can be achieved by depending on spurious patterns and is not indicative of generalization performance; (iv) [A3] states that the Training-domain method suffers from underspecification, a reason for the failure of ML systems deployed in the real world. It defines underspecification as a scenario where distinct predictors, primarily validated with the Training-domain method, report equivalently strong held-out performance but behave differently at test time. We follow these recent efforts in using the test-domain method. (We also reported train-domain results in the Appendix, although that was not our focus in this work.)
---
Rebuttal Comment 1.1:
Comment: The authors have addressed the concerns, in particular regarding the validations.
---
Reply to Comment 1.1.1:
Comment: We are happy to know that the reviewer's concerns have been addressed. We sincerely appreciate the thoughtful feedback which helped us improve the presentation of our work. We will leverage the additional page that NeurIPS provides in the final version to include the clarifications for the reviewer's concerns, as well as other additional information provided in the rebuttals.
We would appreciate if you could kindly consider revising the score accordingly. Once more, we express our gratitude for the time you've dedicated to these thoughtful discussions. | null | null | null | null | null | null |
Graph Denoising Diffusion for Inverse Protein Folding | Accept (poster) | Summary: The paper proposes discrete diffusion for the task of inverse protein folding (IPF). The many-to-one mapping of sequence to structure warrants a generative model for sampling multiple possible sequences that fold into a given structure. Nearly all prior works use autoregressive generative models, so the exploration of non-autoregressive models is of high interest for its improvements in accuracy and speed. The authors propose several IPF-specific improvements to discrete diffusion that result in state-of-the-art performance on commonly used benchmarks. Additional analyses are performed on accuracy for different classes of residues (e.g., surface-exposed), diversity, and designability.
The contributions can be summarized as follows:
- First exploration of discrete diffusion for IPF.
- Discrete diffusion improvements such as a BLOSUM weighted transition kernel, secondary structure auxiliary prediction, DDIM for faster sampling.
- New SOTA performance on IPF benchmarks.
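One way to picture the BLOSUM-weighted transition kernel mentioned above is as a D3PM-style transition matrix that interpolates between the identity and a row-normalized substitution kernel. The sketch below is a hedged illustration under my own assumptions, not the paper's actual construction: the 4x4 scores are toy values (not a real BLOSUM block), `beta_t` stands in for the noise schedule, and `tau` is a made-up temperature:

```python
import numpy as np

def blosum_transition_kernel(B, beta_t, tau=1.0):
    """Toy D3PM-style transition matrix biased by substitution scores.

    B      : (K, K) symmetric substitution scores (e.g. a BLOSUM block)
    beta_t : noise level at step t, in [0, 1]
    Returns a row-stochastic matrix: with prob (1 - beta_t) a residue
    stays put, otherwise it jumps according to the normalized kernel.
    """
    kernel = np.exp(B / tau)
    kernel = kernel / kernel.sum(axis=1, keepdims=True)  # row-stochastic
    return (1.0 - beta_t) * np.eye(len(B)) + beta_t * kernel

# toy 4-"amino-acid" substitution scores (hypothetical values)
B = np.array([[ 4., -1., -2.,  0.],
              [-1.,  5., -3., -2.],
              [-2., -3.,  6., -1.],
              [ 0., -2., -1.,  4.]])
Q = blosum_transition_kernel(B, beta_t=0.1)
```

The point of such a kernel is that corruption during the forward process prefers biochemically plausible substitutions over uniform ones, which is the intuition behind using BLOSUM in the first place.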
Strengths: I enjoy the direction and development of ideas in this paper. Protein sequences are largely driven by their local environment based on the graph construction. It makes a lot of sense to free the decoding order to be learned rather than fixed left-to-right. Though the method is not new, the application is novel and well motivated. I appreciate the authors exploring this direction for IPF.
- Well-motivated by taking advantage of the graph structure inherent in the problem to improve sampling.
- Well written. I could follow the math and figures to understand the method. I appreciate the authors shared code.
- Removes limitations of left-to-right decoding order of previous works which results in higher diversity.
Weaknesses: - Comparison to alternative discrete diffusion methods. I hoped for more discussion on the merits and considerations of choosing D3PM over alternative methods of discrete diffusion such as analog bits and latent diffusion.
- No in-depth analysis of foldability. There is a question of whether higher recovery rate is always better. For instance, FoldingDiff [1] showed in Tables S1 and S2 that foldability was much higher with ProteinMPNN than with ESM-IF -- the latter of which has higher sequence recovery. Including a benchmark on foldability, and not just a few examples, would be of high importance to understand foldability as a function of sequence recovery and the relationship between the two. I already believe the contributions are strong and this is merely a suggestion for higher impact.
- No explanation of different benchmark numbers. I noticed the numbers for proteinmpnn don't match what is reported in this paper.
[1] https://arxiv.org/abs/2209.15611
[2] https://www.science.org/doi/10.1126/science.add2187
[3] https://www.biorxiv.org/content/10.1101/2022.04.10.487779v1
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: - As mentioned above, what is the relationship between sequence recovery and designability/foldability? [Note to self: check their foldability results.]
- Why does the method only use C-alpha positions? ProteinMPNN found using all backbone atoms to improve results. I would be curious whether all backbone atoms help and whether noise augmentation improves results.
- Have the authors tried not providing hand-crafted geometric features? I would’ve suspected these can be inferred from the raw coordinates like in ProteinMPNN.
- Why do the benchmark numbers not match those reported in ProteinMPNN?
- There are some typos and weird language I noticed.
- Line 78. “Meticulously curated” is a weird phrase. I would reword it.
- Line 88. Needs capitalization.
- Line 93. x_1^{pos} is bold face while the rest are not.
- Line 134. Equivariant is misspelled.
- Line 143. normalized is misspelled.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: - There are no limitations discussed. It would be interesting to see where the method fails, along with preliminary thoughts on how it can be improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for recognizing the novelty and contribution of our work. Your insightful comments helped us enrich the analysis a lot. By the response below we hope we address your concerns properly.
**weaknesses**:
1. *Comparison to alternative diffusion methods*: The question of choosing between discrete diffusion in the original space (e.g., D3PM) and diffusion in a continuous latent space (e.g., analog bits) remains open. Each approach has its merits and limitations. Discrete diffusion provides transparency in the generative process, enabling observation of the sequence transformation, while latent-space diffusion benefits from established continuous diffusion methods used in image/video generation, allowing discrete data to be treated within a continuous Gaussian diffusion framework.
2. *foldability*: Thank you for highlighting this important aspect. We've incorporated your suggestion by adopting the TM score to assess foldability. We've provided Table 4 in the PDF, summarizing average TM scores (with standard deviations) between structures predicted by AlphaFold2 for generated sequences and their wild-type counterparts. GraDe-IF consistently achieves the highest TM scores with the least variance. Moreover, we've extended foldability evaluation to a wider set of 42 proteins (Table 5 in PDF), confirming GraDe-IF's superiority in terms of foldability.
3. *Different benchmark performance*: The benchmark discrepancies arise due to distinct dataset split rules. Although both ProteinMPNN and our study utilize CATH4.2, ProteinMPNN used random 80%:10%:10% splitting (according to their experimental details), while we follow Ingraham et al.'s [1] split, ensuring no overlap in CATH topology classification between training, validation, and test sets. This explains the divergent reported results. As in [1,2,3], we implemented the latter splitting rule for our method. We have also tested the 80%:10%:10% splitting rule as established in ProteinMPNN's paper, and it returns a higher recovery rate on the test set at 59.6%. As a reference, ProteinMPNN reported their recovery rate at 50.8%. Having said that, we would like to emphasize again that we do not optimize our diffusion model towards a close-to-1 recovery rate of the generated sequence. Instead, we expect a preferred model to exhibit higher diversity in the generated sequences while capturing the composition of amino acids at conserved regions or positions (such as buried regions in proteins).
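For reference, the recovery rate discussed throughout is simply the fraction of positions at which a designed sequence matches the wild type. A minimal sketch (the toy four-residue sequences below are illustrative, not from the benchmark):

```python
def recovery_rate(generated: str, wild_type: str) -> float:
    """Fraction of positions where the designed sequence matches the native one."""
    assert len(generated) == len(wild_type), "sequences must be aligned"
    matches = sum(g == w for g, w in zip(generated, wild_type))
    return matches / len(wild_type)

# e.g. a single mismatch over four residues gives 0.75
rate = recovery_rate("MKTA", "MKTV")
```

Benchmark numbers like 59.6% vs. 50.8% are averages of this quantity over the test set, which is why the split rule directly shifts them.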
**Questions**:
1. *foldability*: see weaknesses point 2.
2. *coordinates*: (1) The objective of the designed model is to generate possible amino acid sequences for a given protein backbone. To this end, we make all the operations, including the diffusion process and the message passing propagators on residue levels. Since the atom types for different amino acids only differ on the side chain, atom types on the backbone are determined. Regarding the atom positions, we initially utilized C-alpha positions to represent the position of the residue, which is a widely applied choice for generating residue-level protein graphs. (2) While we did not employ the position of other atoms explicitly, we did include their spatial relationship by defining the relative position and local frames of local atoms. The detailed definitions were introduced in Appendix C.
3. *hand-crafted features*: We agree that it is possible to infer the processed node features from the raw coordinates. However, there is no doubt that a much higher volume of input proteins would be required to train a more sophisticated (and potentially more powerful) model. Moreover, defining hand-crafted features has been used in several previous protein-related works, which indicates that it can be considered a promising choice for modeling protein topology. While we remain open to exploring this idea in future research, we conducted a preliminary ablation study that investigates the influence of the dihedral angle on the model's performance. The study revealed that omitting this hand-crafted feature led to a decrease in the recovery rate of the entire test dataset, from 52.21% to 51.47%, and an increase in perplexity from 4.35 to 4.58.
4. *mismatched performance for ProteinMPNN*: see weaknesses point 3.
5. *typos*: We've diligently corrected the typos you pointed out in the revised version.
We have followed your insightful comments and suggestions to address the concerns you raised and polish our work. We believe these changes have greatly enhanced the quality and contribution of this paper. We believe that the additional analysis and discussion now better support the merit of our work. We would be more than willing to take your further suggestions, and we kindly request you reconsider the score you assigned to our paper.
**Reference**:
[1] Ingraham, John, et al. "Generative models for graph-based protein design." In Advances in neural information processing systems (2019).
[2] Gao, Zhangyang, Cheng Tan, and Stan Z. Li. "PiFold: Toward effective and efficient protein inverse folding." In International Conference on Learning Representations (2022).
[3] Zheng, Zaixiang, et al. "Structure-informed Language Models Are Protein Designers." In International Conference of Machine Learning (2023).
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you the response. I believe the method is technically sound. My score is from my concerns in the evaluation.
> ProteinMPNN used random splitting by 80:%10%:10% (according to their experimental details),
I am not sure this is true. Their experimental details state "we used a set of 19.7k high resolution single-chain structures from the PDB were split into train, validation and test sets (80/10/10) based on the CATH4.2 40% non-redundant set of proteins (1, 8)." where reference (1) is Ingraham et al. It seems to me ProteinMPNN and Ingraham follow the same splits. Can the authors clarify how they arrived at their conclusion?
> we've extended foldability evaluation to a wider set of 42 protein
I appreciate the addition of more proteins. I commented in the global comment asking how these 42 were selected. Are they in the test set?
---
Reply to Comment 1.1.1:
Title: Re-Reviewer 92pD: results on ProteinMPNN and 42 proteins
Comment: We appreciate your meticulous attention to detail and the insightful comparisons you've made regarding the dataset split methodology. Regarding your concerns:
1. **Performance of ProteinMPNN**:
(1) Indeed, Ingraham et al. [1] randomly split the training, validation, and test sets in an 80/10/10 ratio while carefully accounting for protein CATH categories to remove redundancy, which has become common practice in subsequent studies. However, we did not find explicit dataset-processing rules in ProteinMPNN's paper, specifically concerning the removal of overlapping data. (2) A further investigation of ProteinMPNN's GitHub repository, including its discussion board, revealed an interesting detail: ProteinMPNN samples sequences with a random decoding order, which improves the recovery rate. Our initial implementation adhered to ProteinMPNN's default settings, which decode in the input sequence order and yield a suboptimal CATH recovery rate. By incorporating the random decoding strategy, we improved ProteinMPNN's result to a recovery rate of 49.9% and a perplexity of 4.576. We acknowledge the discrepancy between the methodologies and thank you for your keen attention to this crucial aspect. We will clarify and update this result in our revision.
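The random-decoding detail discussed above can be illustrated with a minimal sketch (our own illustration, not ProteinMPNN's actual implementation): instead of decoding residues in a fixed left-to-right order, a random permutation of residue indices is sampled and used as the autoregressive decoding order.

```python
import random

def sample_decoding_order(n_residues: int, seed=None) -> list:
    """Sample a random residue order for autoregressive decoding.

    A fixed order (e.g. N- to C-terminus) can bias recovery; sampling a
    random permutation per sequence is the strategy the rebuttal attributes
    to ProteinMPNN's random decoding technique.
    """
    rng = random.Random(seed)
    order = list(range(n_residues))
    rng.shuffle(order)
    return order

order = sample_decoding_order(5, seed=0)
print(sorted(order))  # -> [0, 1, 2, 3, 4]: every residue decoded exactly once
```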
2. **42 Proteins**:
All the proteins are from the test dataset. As we have replied in the global comment, we fold the first 42 proteins in the test dataset in alphabetical order by their PDB ID. Due to time limitations, we were only able to fold this subset of proteins for the generated protein sequences by the three models compared. We are currently in the process of conducting evaluations on the remaining proteins in the test set. We are committed to providing updated and comprehensive scores for a broader range of proteins. | Summary: The paper highlights that "Existing discriminative models struggle to capture the diverse range of solutions, while generative diffusion probabilistic models offer the potential for generating diverse sequence candidates." They propose to use the denoise diffusion model together with the prior information from the secondary structure and BLOSUM matrix to train an inverse folding model. As shown in the experiments, their method achieves better results than the previous baselines, and they also show the folding results for the model predictions.
Strengths: [+] The proposed graph denoising diffusion model introduces a new perspective for addressing the challenging problem of inverse protein folding. It leverages the power of diffusion probabilistic models to generate a diverse set of sequence candidates for a given protein backbone. In protein design, diversity is a key point, since we want diverse candidates to build a library for wet-lab experiments.
[+] The model achieves state-of-the-art performance compared to popular baseline methods in sequence recovery.
[+] The method includes prior information during the model training, and the method is easy-to-follow.
[+] The authors also check SASA and other properties, which is useful for protein design and engineering.
Weaknesses: [-] Although the paper mentions diversity, it mainly measures the recovery ratio (wild-type accuracy) and the PPL, which cannot tell readers whether the model captures "useful" diversity. Diversity is shown only in Figure 4, which I think is not enough to demonstrate the model's advances in diversity.
[-] The baseline PiFold or ESMFold algorithm can also introduce diversity by controlling the temperature and sampling based on the probability score; a detailed comparison with the baselines would improve the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors show the trade-off between speed and recovery ratio. Here, I wonder if the authors use a standard Euler solver or any other sampling schedule to do the inference.
2. In Figure 6, the authors show the results of several PDB proteins, do the authors make sure that these proteins have < 30% sequence similarity as the proteins in the training dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback on our paper and recognizing its contribution and performance. We have taken your remarks into careful consideration and offer the following responses to your concerns and questions:
**weaknesses**:
- *measurement of diversity, and additional comparison with PiFold and ProteinMPNN*: (1) The table below compares the recovery rate and diversity of the generated results, controlling temperature = {0.5, 0.1, 0.0001} for PiFold and ProteinMPNN and sample steps = {20, 100, 250} for GraDe-IF. These settings are divided roughly into low, medium, and high recovery-rate levels. The results are reported as average diversity and recovery rate. For clarity, the metric "diversity" is quantitatively defined as `diversity = 1 - sequence identity`, where the average sequence identity is computed pairwise over the generated sequences. For instance, in the first cell of the table (method PiFold, low recovery level, diversity metric), the diversity value is 0.3796.
(2) In response to the task of aligning recovery rate levels, we have introduced an intermediary plot positioned between the two subfigures in Figure 4 (refer to Figure 1 in the attached PDF document). This supplementary plot effectively illustrates the intricate relationship between recovery rates and diversity across the three methods on a finer scale. Notably, the visualization highlights the rapid contraction of the sampling space for both PiFold and ProteinMPNN. In contrast, the samples generated by GraDe-IF exhibit a broader distribution that encompasses the wild type.
| Method | low: diversity | low: recovery | medium: diversity | medium: recovery | high: diversity | high: recovery |
|------------------|:-------:|:--------:|:-----:|:-------:|:------:|:------:|
| PiFold | 0.3796 | 0.4794 | 0.2535 | 0.5084 | 0.2181 | 0.5087 |
| ProteinMPNN | 0.5159 | 0.4268 | 0.2735 | 0.4679 | 0.2657 | 0.4679 |
| GraDe-IF | 0.5899 | 0.4142 | 0.5462 | 0.4586 | 0.5296 | 0.4755 |
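The `diversity = 1 - sequence identity` metric above is straightforward to reproduce; a minimal sketch (our own illustration, assuming equal-length aligned sequences and unweighted pairwise identity):

```python
import itertools

def sequence_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def diversity(seqs: list) -> float:
    """1 - average pairwise sequence identity over a set of generated sequences."""
    pairs = list(itertools.combinations(seqs, 2))
    mean_identity = sum(sequence_identity(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_identity

# Toy example with three 4-residue sequences:
# identities are 3/4, 1/4, and 2/4, so mean identity = 0.5.
print(round(diversity(["ACDE", "ACDF", "AGHF"]), 3))  # -> 0.5
```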
**Questions**:
- *sampling schedule for inference*: While the standard Euler solver was not implemented in our study due to the distinct characteristics of our diffusion process, which involves a discrete transition matrix that does not follow a continuous stochastic differential equation, we acknowledge the potential for future exploration. In forthcoming research, there may be merit in formulating a score-based model and developing a sampling algorithm that operates within a continuous time framework. It's plausible that continuous solvers, such as the Euler solver, could play a role in this endeavor.
- *sequence similarity of the generated protein sequences*: We have meticulously examined the generated sequences, confirming that none of them exhibit a sequence identity exceeding 30% with the training samples. It is essential to emphasize that the intention behind presenting Figure 6 is to showcase the capacity of our generated protein sequences, conditioned on a template protein backbone, to reliably fold back to the template structure (RMSD < resolution) with a high degree of confidence (pLDDT > 0.8). We want to underline that all the displayed proteins, including 3FKF in Section 4, along with 1ud9, 2rem, and 3drn in Appendix G, were selected from the test set, which was properly processed to ensure no overlapping with the training or validation set. Moreover, every visualized sequence underwent a thorough comparison against the wild-type template protein sequence, yielding sequence identity values that fall within the 40% to 60% range. | Summary: The authors of the manuscript present GraDE-IF, a diffusion model based method for inverse protein folding given the backbone of the structure. The denoising network is a graph neural network that's equivariant to rotations and translations. Moreover, a biologically relevant inductive bias is incorporated into the discrete diffusion process by replacing the uniform amino acid transition probabilities with amino acid substitution scoring matrices. Finally, sampling is accelerated by deploying a variant of the Denoising Diffusion Implicit Model (DDIM).
Strengths: To the best of my knowledge, this is the first work that explores the use of a discrete diffusion model for inverse protein folding. The fact that prior biological knowledge is incorporated directly into the discrete diffusion process increases the appeal of this method. The presented results look promising.
Weaknesses: Even though I enjoyed reading the paper, there are still some concerns.
First and foremost, in my opinion the reported metrics do not fully support the claim that the model is capable of generating diverse protein sequences. Figure 6 shows some qualitative examples of different sequences leading to plausible structures, but I'm missing quantitative results on diversity/uniqueness, and an average RMSD/pLDDT for a large batch of generated sequences folded by AlphaFold2 for different proteins. This would make the story much more convincing.
Additionally, the paper is quite sloppy in some places, with confusing mathematical notation, wrong figure references, and occasionally poor sentence constructions. I appreciate that this is likely due to time constraints, but some time should be spent on fixing this for the final paper in case it gets accepted.
I will address the more minor concerns in the "Questions" part.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. As mentioned in "Weaknesses", I missed some kind of quantitative metric to measure the diversity/uniqueness of generated sequences across proteins.
2. Another valuable addition to the results, although it might not be possible due to time constraints, would be to show how this method could aid protein design, e.g. by providing a synthetic backbone and generating plausible corresponding sequences.
3. Figure 1: the text in the "prior" and "condition" blocks is quite small. For me this caused some confusion because at a quick glance it seemed like the bottom part of "condition" was an amino acid sequence rather than secondary structure annotation.
4. Table 1: Are the CATH versions correct? The first sentence of section 4.1 (line 233) mentions CATH v4.3.0.
5. Figure 4: The 45% threshold seems quite conveniently chosen such that only one sequence remains for the other two models. It would be informative to include this plot for some different recovery rate cut-offs, if not in the main text then at least in the supplementary. Additionally, for the figure on the right, no unit is given for "speed", and there is no comparison to other models (PiFold, ProteinMPNN).
6. Missing references/proofs to support the connections made in line 45-51.
7. Missing definitions:
* FFN (Figure 1): mention feed forward network somewhere.
* $\mathcal{E}$ (line 76) is never defined.
* $d$ (line 126) is never defined (number of categories).
* $\mathbf{A}$ (line 131) is never defined.
8. Confusing mathematical notation:
* The graph notation seems inconsistent across the paper.
* I think there's a "pos" superscript missing in line 93 (i.e. $\mathbf{X} \rightarrow \mathbf{X^{pos}}$)
* The expression in line 126 seems a bit confusing. Isn't the transpose of the identity matrix the same (i.e. $\mathbf{I}^T=\mathbf{I}$) and wouldn't $\mathbf{I} \ \mathbf{I}^T=\mathbf{I}$? It might be that this is the point you wanted to make, but it was not completely clear to me.
9. Wrong figure references line 260, line 289.
10. Not all appendices are referenced to in the main text.
11. A grammar check would improve the flow of the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately discuss the limitations of their work in the appendix, even though I would prefer it if some of this discussion were moved to the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your meticulous feedback, as well as your recognition of the novelty of our work. Below we respond to your concerns and questions point-by-point.
**Weaknesses** :
1. *Quantitative Results on Diversity*: We have incorporated quantitative evaluations of the generated results using the `diversity` metric, defined as `1 - sequence identity`. We also extended our assessments to average RMSD and pLDDT across a broader set of 42 distinct proteins. Note that we did not investigate a larger volume of proteins because AlphaFold failed to fold more structures within the limited rebuttal time. (1) Table 1 in the PDF compares the recovery rate and diversity of the generated results, controlling temperature = {0.5, 0.1, 0.0001} for PiFold and ProteinMPNN and sample steps = {20, 100, 250} for GraDe-IF; these settings are divided roughly into low, medium, and high recovery-rate levels, and the results are reported as average diversity and recovery rate. (2) We expanded the validation of RMSD and pLDDT across a wider selection of 42 proteins and report the results in Table 2 in the PDF. Notably, GraDe-IF exhibited the highest average pLDDT and lowest RMSD, highlighting its effectiveness over baseline methods.
2. *Typo*: We have carefully gone through the main text to fix the typos and inappropriately defined notations, combining suggestions from all the reviewers.
**Questions**:
1. *quantitative metrics on the generated results*: Refer to the response provided above.
2. *sequence generation for synthetic backbones*: Thank you for suggesting a new possibility for our design. Generating plausible sequences for synthetic backbones is undoubtedly of great significance for designing structural proteins. While the limited rebuttal time prevents us from conducting a thorough investigation of generating valid sequences for practically meaningful protein structures, it opens a promising direction for future work, combining in silico validation and wet-lab experiments through structural characterization methods such as NMR, X-ray crystallography, and cryo-EM.
3. *conditions in Figure 1*: We have fixed the problem in the revision by magnifying the contents in both prior and condition blocks, mitigating any residual confusion.
4. *CATH versions*: We utilized CATH4.2 for both training and evaluation. Thank you for highlighting this inconsistency. We have fixed the typo in the revised version.
5. *45% threshold*: We selected the 45% threshold because it represents the best recovery rate that the other two models achieved when sampling at very low temperatures. To address your suggestion, we have added an additional plot at an intermediate recovery rate cut-off (see Figure 1 in the PDF). We have added the unit for speed (seconds) to Figure 5 in the revision. Moreover, Table 3 in the PDF compares the sampling speed of our model at varying step sizes against other models. Note that the sampling speed of DDIM is influenced by the choice of skip steps (as illustrated in Figure 5). The inference time of GraDe-IF decreases significantly with increasing step size, and it becomes faster than all baseline methods when step size = 100.
6. *missing reference*: The claim was from D3PM [1]. We introduced the work in related work and added the reference in line 49 in the revised version.
7. *missing definitions*: In the revised version, we: (1) added the full name of FFN in the caption of Figure 1; (2) added the definition of $d$ (line 126), which is the number of amino acid types, i.e., d=20; (3) For the other two notations, $\mathcal{E}$ (the set of edges) and $\mathbf{A}$ (the adjacency matrix), we updated the definition of a graph to $\mathcal{G}=(\mathbf{X}, \mathbf{A}, \mathbf{E})$, with $\mathbf{X}, \mathbf{A}$ and $\mathbf{E}$ representing the node feature matrix, adjacency matrix, and edge feature matrix of the graph, respectively.
8. *confusing mathematical notation*: In the revised version, we: (1) unified the notation of graphs by $\mathcal{G}=(\mathbf{X}, \mathbf{A}, \mathbf{E})$ (see the previous answer); (2) updated $\mathbf{X}$ in line 93 to $\mathbf{X}^{pos}$; (3) corrected the definition of $\mathbf{Q}_t$ to $\mathbf{Q}_t = \alpha_t \mathbf{I} + (1 - \alpha_t)\mathbf{1}_d\mathbf{1}_d^{\top}/d$, where $\mathbf{1}_d$ denotes a d-dimensional one vector.
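The corrected transition matrix above can be checked numerically; a minimal sketch (our own illustration, assuming the uniform-transition special case with d = 20 amino acid types):

```python
import numpy as np

def uniform_Q(alpha_t: float, d: int = 20) -> np.ndarray:
    """Single-step transition matrix Q_t = alpha_t * I + (1 - alpha_t) * 1 1^T / d."""
    ones = np.ones((d, 1))
    return alpha_t * np.eye(d) + (1.0 - alpha_t) * (ones @ ones.T) / d

Q = uniform_Q(alpha_t=0.9, d=20)
# Each row sums to alpha_t + (1 - alpha_t) = 1, so Q_t is a valid
# (and here doubly stochastic, symmetric) transition matrix.
print(np.allclose(Q.sum(axis=1), 1.0))  # -> True
print(np.allclose(Q, Q.T))              # -> True
```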
9. *figure reference*: Thank you for identifying these typos. We have fixed both references in the revision. In lines 262&289 (of the revised version) the wrong number (“Figure 7”) has been corrected to “Figure 3”.
10. *reference to Appendix*: We have added proper references in the revision. For instance, Appendix B (non-Markovian forward process) is now linked in Section 3.4 (DDIM sampling process); Appendix C (algorithm) is mentioned at the beginning of Section 4 (Experiment); and Appendix F (ablation study) is mentioned in Section 4.2.
11. *grammar check*: We've conducted a thorough grammar check, correcting various grammatical errors and typos in the revised version.
**Reference**:
[1] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." Advances in Neural Information Processing Systems 34 (2021): 17981-17993.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their elaborate response to my feedback and for investing time into generating new results. Here is my answer to the rebuttal:
**Questions about new results:**
a) Table 4: Can you comment on statistical significance? I think the claim “Our GraDe-IF constantly achieves the *highest TM score* with the smallest variance” might be a bit strong. And why were these 4 proteins selected?
b) Great that most of the sequences seem foldable. I was just wondering, is there something the remaining three unfoldable sequences have in common? For example, are these perhaps transmembrane, partly disordered, higher beta-sheet content, etc.? This would give some insights into potential weaknesses and give some idea as to where you might run into more trouble when exploring the rest of the PDB IDs beyond the 42 you show here.
**Weaknesses:**
1. Thank you for spending time and effort to generate new results. Did you come up with this “1-sequence_identity” score or do you have a reference? Moreover, are all generated sequences unique?
2. Much appreciated.
**Questions:**
1. See above. In addition, are you planning on adding some more sample visualisations like in Figure 6 for the new proteins, either in the main paper or in the appendix?
2. Okay, fair enough. Maybe worth including in the conclusion / discussion.
3. \- 11. Thank you for making these changes.
---
Reply to Comment 1.1.1:
Title: Re-Reviewer j843: response to new questions
Comment: Thank you for the reply. Regarding your questions:
**New Results**:
a). The selection of the four proteins for our new visualizations remains consistent with our initial submission, as they were randomly chosen from the test set. We have now conducted testing on 100 proteins, and the corresponding results are presented in the table below. Notably, GraDe-IF demonstrates a significant performance improvement over PiFold and achieves comparable results to ProteinMPNN. In light of this, we would modify our claim to "achieve the best performance on TM score" as you have suggested.
| Method | Success | TM score |
|-------------|:---------:|:-----------------:|
| PiFold      | 85 | 0.80 ± 0.22 |
| ProteinMPNN | 94 | 0.86 ± 0.16 |
| GraDe-IF    | 94 | 0.86 ± 0.17 |
b). Our investigation into the three 'unfoldable' proteins revealed intriguing insights. These proteins posed challenges not only for our model but also for the baseline models. Notably, the structures of the three failed proteins, 1BCT (https://www.rcsb.org/structure/1BCT), 1BHA (https://www.rcsb.org/structure/1bha), and 1CYU (https://www.rcsb.org/structure/1CYU), were all determined by NMR, an experimental technique that analyzes protein structure in a buffer solution. Because NMR studies yield multiple structures for a single protein, it is reasonable for folding tools to assign low foldability scores. In addition, we extended our investigation to another protein, 1H3L (https://www.rcsb.org/structure/1H3L), which our model successfully folded but PiFold failed to. This particular protein, a fragment of SigR, displays remarkable flexibility, leading to low experimental resolution and consequently low structure-prediction confidence.
**Weaknesses**:
1. We followed [1] when defining the score for diversity. We will include this reference in the final version when discussing the related results.
**Questions**:
1. Your suggestion of showcasing the performance of our model on a broader range of proteins is valuable. Given the substantial number of proteins in the test set, including them all might indeed dilute the main results. Instead, we intend to incorporate a selection of representative proteins in the revised appendix. These chosen proteins will exemplify various characteristics, such as those that are unfoldable, exhibit high pLDDT scores, manifest diversity, and more.
2. We appreciate your thoughtful suggestion and will certainly include this aspect in the revised version.
**Reference**:
[1] Zheng, Zaixiang, et al. "Structure-informed language models are protein designers." bioRxiv (2023): 2023-02. | Summary: This work presents a denoising diffusion model for protein inverse folding: predicting the amino acid sequences that fold into the given 3D protein structure. The proposed method leverages a discrete denoising diffusion model with respect to the graph structure representing the protein backbone. This work proposes to use the BLOcks SUbstitution Matrix (BLOSUM) for the transition matrix, accounting for the different transition probabilities between amino acids, and further utilizes the distinct types of secondary structure as a condition during sampling that guides the sampling process toward appropriate 3D structures.
Strengths: - The paper is well-written and easy to follow.
- The approach to using (discrete) diffusion models on protein inverse folding is novel to the best of my knowledge.
- Two main contributions are novel and well-motivated: Employing BLOSUM for considering transitions between AAs and using the secondary structure information as a condition during sampling injects biological prior knowledge into the diffusion process which further reduces the sampling space and results in plausible AA sequences.
- The proposed method shows superior performance for the inverse folding tasks in terms of perplexity and recovery rate. The diversity analysis shows that the generated sequences are diverse without losing recovery rate.
Weaknesses: - Using the distinct types of secondary structures as a condition for the sampling process is not clear. How is the information used? Is it an input to the model during each step of the sampling process?
[-] The reason for not giving a higher rating: although the approach to the task is novel, the use of BLOSUM and secondary-structure conditioning is novel, and the method shows superior performance, the components of the proposed method are widely used in other domains. Diffusion models and E(3)-equivariant networks are widely used for generating protein structures given sequences (I understand the task is not the same, but it is in a similar domain), and DDIM is also commonly used to reduce sampling steps.
- Minor: line 143 "orm.alized"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Using DDIM for sampling, how long does it take to predict the sequence compared to the baselines, e.g. ProteinMPNN or ESM-IF1?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are discussed in the Supplementary file.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful feedback on our work, particularly your kind recognition of the quality of our presentation, the originality of our work, and its significance. We would like to address your questions and concerns as follows:
**Weaknesses**:
- *secondary structure representation*: In our approach, each node (amino acid) is represented with an eight-dimensional one-hot feature encoding its secondary-structure class. During the reverse diffusion process, this secondary-structure information is projected into the feature dimension and integrated into the time embedding, which is then added to the output features of each layer.
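A minimal numpy sketch of the conditioning scheme described above (all array names and the random projection are illustrative stand-ins for learned parameters, not the authors' actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, d_model = 6, 16  # hypothetical residue count and feature dimension

# Each residue carries one of eight secondary-structure classes (0..7).
ss_class = rng.integers(0, 8, size=n_res)
ss_onehot = np.eye(8)[ss_class]          # (n_res, 8) one-hot encoding

W_ss = rng.normal(size=(8, d_model))     # stand-in for a learned projection
time_emb = rng.normal(size=(d_model,))   # stand-in for the diffusion-step embedding

# Conditioning signal: projected secondary structure integrated with the
# time embedding, then added to a layer's output features.
cond = ss_onehot @ W_ss + time_emb       # broadcasts to (n_res, d_model)
layer_out = rng.normal(size=(n_res, d_model))
conditioned = layer_out + cond
print(conditioned.shape)  # -> (6, 16)
```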
- *novelty of the work*: We agree that many individual components, such as the forward/reverse diffusion model, equivariant graph neural networks, and DDIM, have been explored in prior research. However, what sets our work apart is the non-trivial combination of these components, achieved through significant additional effort. Notably, we derive a substitution matrix for multi-step transitions in discrete space, leading to a closed-form expression for this matrix, which in turn allows us to apply the Denoising Diffusion Implicit Model (DDIM) to protein sequence generation. Furthermore, our approach's effectiveness in generating reliable protein sequences is attributed to the careful arrangement of auxiliary components, including the transition matrix for forward/reverse diffusion and the incorporation of protein secondary structures for conditional sampling. We contend that the overall framework we introduce for the inverse folding problem, encompassing evaluation metrics and related aspects, constitutes an additional valuable contribution.
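The multi-step transition mentioned above can be illustrated numerically. This sketch uses a toy 4-state symmetric row-stochastic matrix as a stand-in for the substitution-derived Q (the authors' closed-form expression is not reproduced here); repeated application converges toward the uniform prior, as expected for a discrete forward diffusion.

```python
import numpy as np

def cumulative_Q(Q: np.ndarray, t: int) -> np.ndarray:
    """Multi-step transition Q_bar_t = Q^t (a shared single-step Q each step)."""
    return np.linalg.matrix_power(Q, t)

# Toy 4-state transition: 0.85 self-transition, 0.05 to each other state.
Q = np.full((4, 4), 0.05) + 0.80 * np.eye(4)
Qbar = cumulative_Q(Q, 50)

print(np.allclose(Qbar.sum(axis=1), 1.0))  # -> True: still row-stochastic
print(np.allclose(Qbar, 0.25, atol=1e-3))  # -> True: near the uniform prior
```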
- minor typo: Thank you for pointing out this typo. We have corrected it to “normalized” in the paper.
**Questions**:
- *sequence prediction time*: The sampling speed of DDIM is influenced by the choice of skip steps (as illustrated in Figure 5). To illustrate, we present the generation of 3FKF-A and provide a comparative analysis of inference times along with baseline methods, considering skip steps of 100, 20, and 1 for GraDe-IF, where the skip step of 1 reverts to the original DDPM sampling algorithm. The inference time of GraDe-IF decreases significantly with increasing step size, and it becomes faster than all baseline methods when step size=100.
| Method | Inference Time (in seconds) |
|------------------------------|:--------------------------------------------:|
| PiFold | 0.06 |
| ProteinMPNN | 0.14 |
| ESM-if1 | 0.17 |
| GraDe-IF (step=1) | 6.14 |
| GraDe-IF (step=20) | 0.31 |
| GraDe-IF (step=100) | **0.05** |
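The skip-step trade-off shown in the table amounts to subsampling the reverse-diffusion schedule; a sketch (the total step count T = 500 is an illustrative assumption, not taken from the rebuttal):

```python
def ddim_timesteps(T: int = 500, skip: int = 100) -> list:
    """Subsampled reverse-diffusion schedule: every `skip`-th step of T total.

    skip=1 recovers the full DDPM schedule; larger skips cut the number of
    denoising-network evaluations to roughly T / skip, which is why the
    inference time in the table drops sharply as the step size grows.
    """
    return list(range(T - 1, -1, -skip))

print(len(ddim_timesteps(500, 1)))    # -> 500 network calls (DDPM sampling)
print(len(ddim_timesteps(500, 100)))  # -> 5 network calls
```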
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
I agree with the authors that the novel combinations of some known approaches as in this case provide a simple yet effective method. I do not find other main concerns and would like to keep my score.
I tend to accept this work if there is no main issue commented on by other reviewers in the experimental part. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their detailed comments and insightful suggestions. We incorporated additional experiments and analyses as per recommendations. Here, we present a concise overview of the major enhancements that have been universally implemented, focusing on aspects such as diversity (Figure 1, Table 1), structural comparison (Table 2), sampling speed (Table 3), and foldability (Table 4 & 5). The visual representation in the form of the figure and tables has been included within the attached PDF, with corresponding references provided in each of our responses. We believe that a brief overview of these additional results will provide a clear context for comprehending the significance of the updates we have made.
**Figure 1. t-SNE of the generated sequences in three different recovery levels.**
In response to the task of aligning recovery rate levels, we have introduced an intermediary plot positioned between the two subfigures in Figure 4. This supplementary plot effectively illustrates the intricate relationship between recovery rates and diversity across the three methods on a finer scale. Notably, the visualization highlights the rapid contraction of the sampling space for both PiFold and ProteinMPNN. In contrast, the samples generated by GraDe-IF exhibit a broader distribution that encompasses the wild type.
**Table 1. Comparison of diversity and recovery rate at three different levels**
The recovery rate and diversity of the generated results were compared by controlling temperature = {0.5, 0.1, 0.0001} for PiFold and ProteinMPNN and sample steps = {20, 100, 250} for GraDe-IF. These settings are divided roughly into low, medium, and high recovery-rate levels, and the results are reported as average diversity and recovery rate in the table provided. For clarity, the metric "diversity" is quantitatively defined as `diversity = 1 - sequence identity`, where the average sequence identity is computed pairwise over the generated sequences. For instance, in the first cell of the table (method PiFold, low recovery level, diversity metric), the diversity value is 0.3796.
**Table 2. Average RMSD and pLDDT across 42 protein structures (folded by AlphaFold2) with model-generated sequences.**
We expanded the validation of RMSD and pLDDT across a wider range of 42 proteins. For each protein backbone, a sequence is generated per model, which is then folded by AlphaFold2 to calculate the average pLDDT and RMSD with respect to the template wild-type protein structure. Notably, GraDe-IF exhibited the highest average pLDDT and lowest RMSD, highlighting its effectiveness over baseline methods.
**Table 3. Sampling speed of GraDe-IF (DDIM) at varying step sizes and baseline methods.**
The sampling speed of DDIM is influenced by the choice of skip steps (as illustrated in Figure 5). To illustrate, we present the generation of 3FKF-A and provide a comparative analysis of inference times along with baseline methods, considering skip steps of 100, 20, and 1 for GraDe-IF, where the skip step of 1 reverts to the original DDPM sampling algorithm. The inference time of GraDe-IF decreases significantly with increasing step size, and it becomes faster than all baseline methods when step size=100.
**Table 4. TM score comparison. 10 sequences were generated for each protein backbone.**
We computed the TM score to measure the foldability of the generated sequences by gauging the structural similarity between the structures of the generated sequences and the native structures. Generally speaking, a higher TM score indicates a better chance for the sequence to fold, and a protein with a TM score below 0.5 is generally considered unfoldable. The table below summarizes the average TM scores (along with standard deviations) between the structures predicted by AlphaFold2 for generated sequences and their associated wild-type structures. Our GraDe-IF consistently achieves the highest TM score with the smallest variance.
**Table 5. TM score comparison on 42 protein backbones, with each backbone generating 1 sample. A sequence is considered foldable if its TM score > 0.5.**
The foldability of the generated protein sequences was also evaluated on a wider range of 42 proteins (note that we did not investigate a larger set of proteins because AlphaFold2 failed to fold more structures within the limited time for the rebuttal). We count a protein as successfully folded if its TM score is larger than 0.5. Again, GraDe-IF delivers the largest number of foldable proteins with the highest average TM score.
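The foldability tally used in Tables 4 and 5 amounts to a simple threshold count, sketched here for illustration (`tm_scores` is a hypothetical list of per-protein TM scores):

```python
def foldability_summary(tm_scores: list[float], threshold: float = 0.5) -> tuple[int, float]:
    """Count sequences considered foldable (TM score > threshold) and
    report the average TM score across all designs."""
    n_foldable = sum(score > threshold for score in tm_scores)
    mean_tm = sum(tm_scores) / len(tm_scores)
    return n_foldable, mean_tm
```

Scores exactly at the threshold are not counted as foldable, matching the strict "TM score > 0.5" criterion above.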
Pdf: /pdf/54ba40255d0a355b79bd89105710f4b5b64d562a.pdf | NeurIPS_2023_submissions_huggingface | 2023
Uncovering motifs of concurrent signaling across multiple neuronal populations | Accept (spotlight) | Summary: This work extends an established line of previously proposed methods for Gaussian Process factor analysis-based models of neuroscience data. In comparison with previously published methods, it extends the factor model to include correlations and time delays between different (matched) latents underlying different populations. Inference is performed via mostly closed-form variational updates. This method is applied to data from paired recordings from three regions in visual cortex, providing evidence of directional influence between them in the context of overlapping receptive fields.
This is a valuable contribution that extends previous methods in ways that are likely to prove increasingly useful in an era of widespread multi-region recordings. While the algorithmic innovation is somewhat incremental, the validation, presentation, and contextualization to questions in neuroscience are very good.
Strengths:
- The manuscript is clearly and carefully written, including a helpful notation key in the supplement.
- Rigorous model evaluation through recovery of parameters on synthetic data and careful analysis of a challenge data set.
- Details of the model and its inference are carefully laid out in the main text and supplement.
Weaknesses: - While the authors have done a commendable job of using a consistent and careful notation, the resulting formulas are index-dense, and it may be easy for some readers to miss the key conceptual setup, particularly which latents are independent of which other latents. For example, l. 94 in the supplement clearly notes that latent posterior estimates are independent for different trials, and this is implicit in (7), but this probably deserves a mention in the main text. Similarly, different $\mathbf{x}_j$ for $j=1\ldots p$ appear to also be independent, and this is a substantive assumption (see comment under limitations below), but I didn't see this discussed (though I may have missed it).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - The authors report several latents with nearly sinusoidal form. Are inferred delays in these cases well-defined? Is it possible to infer directionality in these cases, when a lag of $2\pi(1 - D/\tau)$ (with $\tau$ the period) should also produce similar results? This seems to me clearer in cases of sparse signals or strong transients but more difficult when latents are densely active.
- While the authors cite a number of related factor analysis models in neuroscience (especially refs 14, 15), there is a limited discussion of some other lines of multi-region work (e.g., ref 33, but also Gallagher et al. NeurIPS 2017 from the same group) that have previously used a GP-based factor analysis model and included relative delays between regions. What, precisely, is the difference between these models apart from their application to spikes as opposed to LFP?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - From (7), it appears as if the latents for each $j=1\ldots p$ are independent: $\mathrm{cov}[x^m_{njt}x^m_{nj't}] \propto \delta_{jj'}$. That is, there is potential coupling _across populations_ for each latent but not _across latents_ even within a population. This choice might warrant some additional discussion in the text.
- The variational inference algorithm for the latents requires inverting an $MpT \times MpT$ matrix for each trial. I suspect this is feasible for typical numbers ($M \sim 3$, $p \sim 10$, $T \sim 10$) and can be parallelized across trials, but will scale poorly if any of the relevant parameters become large.
- While the model is fit to spike count data, the observation model in (1) is Gaussian. This is likely to hinder inference when bins are small and/or counts are low.
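The scaling concern raised above can be made concrete with a back-of-the-envelope sketch using the typical numbers quoted (naive inversion is $O(n^3)$ in the matrix side length $n = MpT$):

```python
M, p, T = 3, 10, 10    # populations, latents, time points (typical values)
n = M * p * T          # side length of the MpT x MpT posterior covariance
flops = n ** 3         # order-of-magnitude cost of a naive inversion
```

Here n = 300 and one inversion costs on the order of 2.7e7 operations, which is cheap per trial, but the cubic growth bites quickly if any of the three factors becomes large.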
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
Thanks to the reviewer, we realize we could have been clearer about the independence structure of the latents *a priori* and *a posteriori*. We have added the following text:
- Section 2: "Under equation 7, latents are independent and identically distributed across trials."
- Supp. Section S2.1.1: "Note that, under the posterior distribution, latent variables $j = 1,\ldots,p$ are no longer independent, as they are under the prior distribution (equations 7, S2)."
We hope that these statements better complement Supp. Section S2.1.1, in which we preface each posterior update (eqs. S11-S20) with a statement about independence structure. Please see also our response under Limitations, below.
> **Questions**
> - Sinusoids
The reviewer's intuition regarding purely sinusoidal signals is correct. The central quantity is the latent's underlying covariance function. For example, a signal generated from a squared exponential function (Fig. 2c) would be "densely active," but the correct time delay can be unambiguously identified via the peak of the cross-covariance function. A sinusoidal signal has a sinusoidal covariance function, which has translational symmetry (i.e., it is periodic). Hence two time delay estimates spaced one period apart present two local optima.
This potential ambiguity can be resolved by additional context. For example, latent 3 of Fig. 5f is nearly sinusoidal with period 333 ms (corresponding to the grating stimulus) and estimated time delay +10 ms. An initial transient, however, breaks its translational symmetry (the 1st cycle is different from the 2nd and 3rd cycle). Furthermore, potential alternative delay estimates, -323 ms and +343 ms, would be inconsistent with the response latencies of the neurons in V1 and V3d (Fig. 5b; both populations respond within 10s of ms of stimulus onset). We can thus conclude that +10 ms is a reasonable estimate consistent with additional observations. Finally, we note that the sinusoidal signals identified here are due to experimental design: future experiments could be designed to avoid them.
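The period ambiguity described above can be demonstrated numerically (an illustrative sketch, not the mDLAG estimator): for a circularly shifted pure sinusoid, the cross-correlation attains its maximum at the true delay and at every lag one full period away.

```python
import numpy as np

T, period, delay = 1000, 100, 10          # samples, samples/cycle, true delay
t = np.arange(T)
x = np.sin(2 * np.pi * t / period)        # latent as seen in population A
y = np.roll(x, delay)                     # same latent in population B, delayed

# Circular cross-correlation over candidate lags: every lag congruent to the
# true delay (mod period) attains the same maximum, so the delay is only
# identified up to a multiple of the period.
lags = np.arange(-2 * period, 2 * period + 1)
cc = np.array([np.dot(x, np.roll(y, -lag)) for lag in lags])
peak_lags = lags[np.isclose(cc, cc.max())]
```

Here `peak_lags` contains −190, −90, 10, and 110: the true delay of 10 samples plus aliases one period apart, which is why extra context (a transient, known response latencies) is needed to break the tie.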
> - Related models
We thank the reviewer for this reference, and now cite it. In brief, we see three axes that distinguish mDLAG from the cross-spectral factor analysis (CSFA) approach of Gallagher et al., 2017: (1) group structure, (2) parametrization of time delays, and (3) automatic relevance determination (ARD). The design choices along these axes give mDLAG interpretational and computational advantages in the study of multi-population recordings.
In detail:
1. Group structure: While Gallagher et al. indeed analyze activity in different brain regions, each "brain region" corresponds to a single LFP channel or time series. Consequently, there is no explicit group structure (i.e., the capability that brain regions may be multi-variate) built into CSFA, either in the observation model (eqs. 4 and 5 of Gallagher et al.) or the state model (eqs. 1 and 6 of Gallagher et al.). CSFA appears to be similar to time-delay GPFA (Lakshmanan et al., 2015), which studied time delays between pairs of individual neurons. In contrast, mDLAG incorporates group structure into both the observation (eqs. 1-6) and state models (eqs. 7-9; time delays are shared across neurons in the same population). Group structure facilitates inferences about network-level interactions (Fig. 1c, Fig. 2b) and about signal flow between populations (Fig. 1b), rather than between pairs of individual neurons or LFP channels.
2. Parametrization of time delays: Gallagher et al. use the cross-spectral mixture kernel (Ulrich et al., 2015). Accordingly, (1) the GP covariance function of each latent variable is a weighted sum of spectral Gaussian kernels, and (2) for each latent variable and between each pair of LFP channels, each spectral Gaussian component contributes a constant phase difference at its center frequency. Effectively, "time delays" between a pair of LFP channels are parametrized as a piecewise-constant phase function in the frequency domain. In contrast, (1) the GP covariance function of each mDLAG latent variable comprises one component (e.g., the squared exponential function), and (2) for each latent variable and between each pair of populations, mDLAG directly defines a time delay, or equivalently, a linear (as opposed to piecewise-constant) phase function in the frequency domain. These design choices enable a simpler description of cross-population temporal structure (fewer parameters) while remaining reasonably expressive.
3. ARD: CSFA requires cross-validation over a space of four hyperparameters. mDLAG employs ARD to enable computationally tractable model selection for multi-population recordings.
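The time-delay parametrization in point 2 can be sketched schematically (illustrative form only; the full mDLAG kernel includes additional scaling and noise terms): a squared-exponential cross-covariance between two populations simply shifts its peak to the delay, rather than composing piecewise-constant phase offsets per frequency band.

```python
import math

def delayed_se_cov(t1: float, t2: float, delay: float, ell: float) -> float:
    """Squared-exponential cross-covariance with an explicit time delay:
    the function peaks at t1 - t2 = delay instead of at zero lag."""
    d = (t1 - t2) - delay
    return math.exp(-(d * d) / (2.0 * ell * ell))
```

A single scalar `delay` per latent and population pair replaces the per-component phase parameters of the cross-spectral mixture kernel.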
> **Limitations**
> - Independence
Indeed, under the prior distribution, eq. 7 (see also Supp. eq. S2), the latents $x_{n,j,:}$, $j =1,\ldots,p$ are independent. Under the posterior distribution, however, this independence no longer holds (see Supp. eqs. S6, S11, and S12). The interaction between the structure of the prior and posterior distributions can be understood by inspection of the ELBO (Supp. Section S2.2, eq. S31). The KL-divergence term $KL(Q_x(X) || P(X|\Omega))$ penalizes deviations of the posterior distribution $Q_x(X)$ from the prior distribution $P(X|\Omega)$. The independence structure of the prior distribution therefore acts as a form of regularization, not a hard constraint.
> - Variational inference
We thank the reviewer for the opportunity to clarify an important point: the latent posterior covariance (Supp. eq. S11) is identical for all trials of the same length. This computation can therefore be reused efficiently across trials. We have added this statement to Supp. Section S2.1.1. Please see also the General Response to Reviewers, "Computational demands."
> - Gaussian model
We thank the reviewer for making this point. Please see the General Response to Reviewers, "Gaussian observation model applied to spike counts."
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses to my and the other reviewers' critiques. I believe this is a valuable method that will likely find application among those doing multi-region recordings in neuroscience. | Summary: This paper develops a new approach to analyze multi-region neural population recordings. This approach extends DLAG to analyze communication across more than 2 regions. The main technical novelty lies in tractably extending the model definition of DLAG to multiple regions through the incorporation of the ARD prior. I found the approach elegant and particularly relevant to the current state of neuroscience research where multi-region recordings are growing in number. DLAG- and mDLAG-like approaches offer the ability to temporally disentangle concurrent signals, allowing us to make sense of multi-region recordings. The experiments are thorough and all the figures are very well made and described. Overall, I very much enjoyed reading this!
Strengths: 1. The paper is extremely relevant for a computational neuroscience crowd, where analyzing multi-region recordings is an important challenge. This approach can in practice help drive neuroscience research and answer questions about signaling across regions in the brain.
2. Technically, the novelty lies in the model description (which allows for easy disentangling of latents involved across region(s)) and in discovering a tractable inference scheme (I should say that I did not read details of the inference in the supplement).
3. The simulations and real-world experiments are both very thorough and nicely done. Details in the supplement were also helpful in contextualizing some of the results.
4. The paper is very well-written, and the figures are very clean!
Weaknesses: 1. The main question that I found myself struggling with was the order of latent variables across regions. I realize that there is a GP connecting them, but that does not guarantee that the ordering of latents is preserved in any way across regions, right? Concretely, I am not sure how reading through the columns of $C^m$ can tell us which dimensions are shared and which are not, unless there is a post-processing step that reorders the latents (and the columns of $C^m$, respectively) so that the ordering of latents across regions is preserved (example: if latent 1 in region 2 is correlated with a latent in region 1, does it occupy the first dimension in the latent vector corresponding to region 1 at any time point?). Perhaps this falls out of the Bayesian structure of the model, in which case it would be helpful to clarify this.
2. It would be nice to also include a literature review / related work section discussing the connections of mDLAG to multi-region LDS based models. Including this in the introduction is also fine, but I do think that it would be helpful to describe the pros and cons of both.
3. Finally, discussions of the amount of time and data needed to obtain reliable estimates from mDLAG would help practitioners understand the merits of the approach. Currently, there is a figure in the supplement describing the clock time per iteration, but as a reader I do not know how many iterations are needed in practice to fit the model, so having a number describing the overall runtime would be useful.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In addition to the questions in the weakness section, I have some other minor questions:
1. Shouldn't $\bf{d}^m$ have a prior that requires it to be positive? I'd be curious to hear how the authors deal with this?
2. I am curious to understand the amount of data needed to reliably estimate the parameters of mDLAG, and so I wonder if the authors have any plots demonstrating accuracy vs # of data points for a given # of regions and population size.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I do not envision any societal implications of this work in the short term.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
> 1. The main question that I found myself struggling with...
We thank the reviewer for the opportunity to clarify this point. The ordering of latents across regions is preserved by definition of the model. Let us write out the observation model (Eq. 1) more expansively:
$$\mathbf{y}^m_{n,t} = \mathbf{c}^m_1 x^m_{n,1,t} + \cdots + \mathbf{c}^m_p x^m_{n,p,t} + \mathbf{d}^m + \boldsymbol{\varepsilon}^m$$
From here, we can see that latent variable 1, $x^m_{n,1,t}$, always maps to population $m$'s neural activity through the 1st column of $C^m$ ($\mathbf{c}^m_1$), latent variable 2, $x^m_{n,2,t}$, always maps to population $m$'s neural activity through the 2nd column of $C^m$ ($\mathbf{c}^m_2$), and so on. This ordering of the latents and columns of $C^m$ is the same for every population, by definition. We have added the following clarification to Section 2:
- "In particular, the $j$th column of $C^m$ maps the $j$th latent variable $x^m_{n,j,t}$ to population $m$."
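The column-wise reading of the loading matrix can be checked numerically (toy dimensions and random values for illustration, noiseless for simplicity): summing the per-latent contributions $\mathbf{c}^m_j x^m_{n,j,t}$ over $j$ is exactly the matrix-vector product $C^m \mathbf{x}^m_{n,:,t}$.

```python
import numpy as np

rng = np.random.default_rng(0)
q, p = 6, 3                          # neurons in population m, latent variables
C = rng.normal(size=(q, p))          # loading matrix C^m
x = rng.normal(size=p)               # latent state at one time point
d = rng.normal(size=q)               # mean parameter d^m

# Column j of C^m maps latent j to the population's activity; the sum of
# per-column contributions equals the matrix-vector product.
y_per_latent = sum(C[:, j] * x[j] for j in range(p)) + d
y_matrix = C @ x + d
```

Because the ordering of columns is fixed by the model definition, latent $j$ always enters every population through column $j$ of that population's loading matrix.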
> 2. It would be nice to also include a literature review...
We thank the reviewer for this suggestion. We have added the following text to the Discussion:
"For exploratory data analysis, mDLAG's GP-based description of multi-population temporal structure is advantageous over an alternative linear dynamical system (LDS)-based description (Semedo et al., 2014; Glaser et al., 2020) in two respects: (1) a GP can be useful for exploratory data analyses where an appropriate parametric dynamical model is unknown *a priori*, and (2) mDLAG's continuous-time model enables the discovery of wide-ranging delays with high precision, which, in contrast to discrete-time LDS approaches, are not limited to be integer multiples of the sampling period or spike count bin width of the neural activity. Ultimately, these approaches can be complementary: one can use mDLAG to generate data-driven hypotheses about motifs of concurrent signaling across populations, and then test these hypotheses with a dynamical system-based approach."
> 3. Finally, discussions of the amount of time and data needed...
Regarding the amount of data needed, please see the General Response to Reviewers, "Amount of data needed."
Regarding the amount of time needed, we have added the following text to Supplementary Fig. S3: "We conservatively ran each mDLAG model for 50,000 iterations, resulting in an average total runtime of 34 hours. Had we used a less conservative, but still reasonable, stopping tolerance of $10^{-6}$, the average number of iterations required for convergence would have been 17,000 (with similar parameter estimates), for an average total runtime of 11.5 hours." For further discussion, please see the General Response to Reviewers, "Computational demands."
> **Questions**
> 1. Shouldn't $\mathbf{d}^m$ have a prior that requires it to be positive?
We thank the reviewer for this clarification question. Please see the General Response to Reviewers, "Positivity constraints on the mean parameter."
> 2. I am curious to understand the amount of data needed...
We thank the reviewer for the interesting question. Please see the General Response to Reviewers, "Amount of data needed."
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: Thank you for your detailed and thorough responses. I continue to think this is a worthwhile contribution to the NeurIPS community. | Summary: The recent developments in neural recording technologies allow recording from large populations of neurons from multiple brain regions simultaneously. Latent space models are often used to analyze these datasets, but they are generally limited to the study of one or two populations of neurons. This work expands on existing dimensionality reduction methods and introduces a new probabilistic method to characterize interactions across multiple (more than two) populations of neurons. The proposed model can capture local and shared variability across multiple populations as well as the direction of the signal flow between areas and their temporal evolution with trial resolution. The authors validated the method on simulated data and neural data, showing that the model outperforms an existing method without temporal smoothing, GFA, and lesioned versions of the model.
Strengths: The paper is clearly presented and technically sound. The method was tested and shown to work well in simulation and neural data. This method builds on existing latent space models to enable multi-area signal characterization. Understanding inter-area population activity is an interesting problem in neuroscience, and this work proposes a new method that can help explain these emerging datasets. Importantly, even when tested in cases with only two neural populations, for which there are multiple existing methods, the proposed model still outperforms alternatives. This suggests that this tool could be broadly applied to single, two, or more than two areas population recordings. Additionally, when applied to neural data, they demonstrated the power of the approach by reporting new discoveries.
Weaknesses: While the authors show the promise of their method to understand multi-area signals, they overlooked other existing methods that can also capture this information, such as multiset CCA or extensions of Procrustes alignment to multiple datasets. It would be interesting to add a performance comparison to these models. The model assumes linearity and Gaussian observations, but neural activity is often recorded as spiking activity, which is better described by a Poisson model. The authors do not address this limitation. Lastly, the computational cost of using a GP prior for temporal smoothing is relatively high compared to alternative methods, which could limit the potential applications. The method seems to be a direct extension of DLAG to multiple areas. If so, an explicit comparison between them could help assess the significance of the work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The simulated data has predetermined parameters, some set to match characteristics of the neural recordings and some others not. For example, the number of neurons in the recordings is in the order of dozens, while in one of the simulations, it is set to 10. What prompted the specific parameter choices? Or more interestingly, how robust is the model to potential changes in the different parameters: number of neurons, noise parameters, temporal delays, and so on? Another additional application listed for future work is the analysis of other signals like behavior. It would be relevant to know in these cases if the model requires similar dynamics or temporal binning for it to work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors address the limitations of linearity and temporal smoothness but fail to address the potential limitations in computational cost and data demands.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
> While the authors show the promise of their method to understand multi-area signals, they overlooked other existing methods...
We thank the reviewer for pointing out these alternative methods. We agree that they are relevant, and we cite several review papers that mention such methods (Semedo et al., 2020; Kang et al., 2020; Keeley et al., 2020; Zhuang et al., 2020; Machado et al., 2022). In summary, these methods fall short in addressing two key challenges in the study of multi-population recordings (Fig. 1): (1) distinguishing network-level interactions, and (2) disentangling concurrent signal flow. Below and in Reviewer Figs. R2 and R3, we demonstrate these points empirically for multiset CCA. We believe that our empirical comparisons of mDLAG to group factor analysis (GFA; Fig. 4, Supplementary Fig. S2, Supplementary Fig. S3) are a representative demonstration of the advantages mDLAG has over the broad class of static methods (i.e., methods that do not consider the flow of time), which includes multiset CCA, Procrustes alignment, and GFA.
In greater detail, we applied multiset CCA to both of our simulated datasets (Reviewer Fig. R2) and to the Neuropixels recordings (Reviewer Fig. R3):
- When applied to Simulation 1 (Fig. 3), the loading matrix estimated via multiset CCA (Reviewer Fig. R2a, right) did not clearly show the group structure in the ground truth loading matrix (Reviewer Fig. R2a, left). Indeed, typical formulations of multiset CCA do not aim to distinguish these network-level interactions, unlike mDLAG or GFA.
- When applied to Simulation 2 (Fig. 4, Reviewer Fig. R2b), multiset CCA's latent estimates represented a mixture of the two directed interactions (Reviewer Fig. R2c). GFA, a static method like multiset CCA, exhibited the same shortcoming (Fig. 4c). mDLAG successfully disentangled the two interactions (Fig. 4b).
- When applied to the Neuropixels recordings (Fig. 5), multiset CCA was outperformed by both mDLAG (Reviewer Fig. R3a) and GFA (Reviewer Fig. R3b). These results are consistent with prior work in which GFA was shown to outperform multiset CCA on multi-area fMRI data (Klami et al., 2015), and DLAG was shown to outperform CCA on electrophysiological recordings from two brain areas (Gokcen et al., 2022).
> The model assumes linearity and Gaussian observations...
We thank the reviewer for making this point. Please see the General Response to Reviewers, "Gaussian observation model applied to spike counts."
> Lastly, the computational cost...
We thank the reviewer for making this point. Please see the General Response to Reviewers, "Computational demands."
> The method seems to be a direct extension of DLAG...
The reviewer is correct that mDLAG is an extension of DLAG to multiple (more than two) neuronal populations. We directly compare mDLAG to DLAG throughout the text:
1. In Section 2, under "mDLAG special cases," we write that "In the case of two populations (M = 2), mDLAG is equivalent to a Bayesian formulation of DLAG."
2. In Section 3, under "Validating mDLAG on recordings from V1 and V2," along with Supplementary Figures S1 and S2a, we make an empirical comparison between mDLAG and DLAG:
"mDLAG outperformed DLAG across all datasets (Supplementary Fig. S2a, points above the diagonal), suggesting that ARD provides an improved method of model selection over the constrained grid search method used for DLAG, while also avoiding grid search’s computational drawbacks (see Supplementary Section S5)."
3. In Supplementary Section S5, we outline the heuristic model selection approach employed for DLAG (Gokcen et al., 2022), and in the Discussion, we note that "Scaling this type of approach to three or more populations (or external experimental variables) would be difficult. mDLAG is thus an advance toward scaling to large-scale multi-population recordings…"
> **Questions**
> What prompted the specific parameter choices?
We thank the reviewer for this clarification question. Particularly in Simulation 1 (Fig. 3), we aimed to match all dataset characteristics to typical neural data. However, we chose to set 10 neurons per population to facilitate visual demonstration of the results (specifically, the loading matrices in Fig. 3a). The results do not meaningfully change if we set population sizes closer to those of the neural recordings. For example, if we re-run Simulation 1, but set 100 neurons per population instead of 10, then the accuracy of estimates (Fig. 3a,b) looks qualitatively similar. In fact, performance slightly improves: $R^2$ between ground truth and estimated latents is 0.957 (compared to 0.936 originally), and mean delay error is 0.58 ms (compared to 1.14 ms originally). We note that we further validated mDLAG on previously studied neural recordings (Section 3, "Validating mDLAG on recordings from V1 and V2") to test mDLAG in a truly realistic setting.
Please see also the General Response to Reviewers, "Amount of data needed."
> Another additional application...
There is precedent for using latent time series models to study both neural activity and behavioral signals. For example, Kao et al., *Nature Communications*, 2015 developed a latent dynamical model to study the relationship between spike counts of a single motor area population and arm kinematics. They used a common set of latents (and consequently the same latent dynamical model) to describe neural activity and arm kinematics. The arm kinematics were sampled to the same time resolution as the spike count bins. Using mDLAG, we could extend this idea to multiple brain areas and arm kinematics.
> **Limitations**
We thank the reviewer for these points. Regarding computational cost, we note that we report mDLAG's runtime per EM iteration and compare to GFA's runtime in Supplementary Fig. S3. Please see also the General Response to Reviewers, "Computational demands." Regarding data demands, please see the General Response to Reviewers, "Amount of data needed."
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses and their additional model comparisons. This is relevant work adding another tool to the existing methods for neuroscience research. | Summary: This paper extends delayed latents across groups (DLAG) to a more general form (mDLAG), which allows analyzing the contribution of each latent dimension for multiple observational groups. Besides, the newly proposed mDLAG is able to identify the complicated directions of information flow between groups, along with the corresponding time delay estimates. Experiments on two synthetic and two empirical datasets show the effectiveness of mDLAG in analyzing multiple-grouped neural data.
Strengths: * The model derivation is logically clear. The authors provide a very detailed clarification of the notation, parameter set, and generative procedure of this generative model
* The experiments are exhaustive, including two synthetic and two real-world datasets with very comprehensive analyses.
* The model is intuitive and in a general form. Its special cases reduce the model to well-known generative models such as Gaussian process factor analysis (GPFA).
Weaknesses: * Some math notation is a bit hard to understand, especially the $\boldsymbol x$. From my understanding, it is a high-dimensional tensor, but the authors do not define this tensor in a clear way, e.g., the order of its dimensions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * It is hard for me to understand Fig. 1b.
* Eq. 3: theoretically, the mean firing rate $\boldsymbol d^m$ cannot have a Gaussian prior with zero mean. A typical approach is to model the log or logit of the firing rate, since firing rates are > 0.
* Should $\boldsymbol x_{n,t}^m$ be $\boldsymbol x_{n,t}$? From Fig. 3b, it seems you have a shared 7-d latent across all three neural populations A, B, and C.
* Is the inference very time-consuming?
* Are there any other models that can solve the same task with minor modifications, so that performance can be compared?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Weaknesses**
We realize we could be more explicit in defining our notation for the latents, $\mathbf{x}$. We have added the following text throughout Supplementary Section S1:
- "we define latent variable $j$ (out of $p$) in population $m$ at time $t$ on trial $n$ as $x^m_{n,j,t} \in \mathbb{R}$."
- "we represent the collection of all $p$ latent variables in population $m$ at time $t$ on trial $n$ as the vector $\mathbf{x}^m_{n,:,t} \in \mathbb{R}^p$."
- "the latent variables in population $m$ on trial $n$ can be grouped into the matrix $X^m_n = [\mathbf{x}^m_{n,:,1} \cdots \mathbf{x}^m_{n,:,T}] \in \mathbb{R}^{p \times T}$. We represent a row of $X^m_n$ (i.e., the values of a single latent variable $j$ at all time points on trial $n$) as $\mathbf{x}^m_{n,j,:} \in \mathbb{R}^T$. Finally, we can form the three-dimensional array $X^m$ by concatenating the matrices $X^m_1,\ldots,X^m_N$ across trials along a third dimension."
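These indexing conventions map directly onto array code. A minimal sketch (our own illustration, with hypothetical sizes $p=7$ latents, $T=25$ time points, $N=100$ trials):

```python
import numpy as np

# Hypothetical sizes for population m: p latents, T time points, N trials.
p, T, N = 7, 25, 100

# X^m: the three-dimensional array formed by stacking the per-trial
# matrices X^m_1, ..., X^m_N along a third dimension.
Xm = np.zeros((p, T, N))

x_scalar = Xm[2, 5, 0]   # x^m_{n,j,t}: latent j=2 at time t=5 on trial n=0
x_vec    = Xm[:, 5, 0]   # x^m_{n,:,t} in R^p: all latents at one time point
Xm_trial = Xm[:, :, 0]   # X^m_n in R^{p x T}: one trial's latent matrix
x_course = Xm[2, :, 0]   # x^m_{n,j,:} in R^T: one latent's time course
```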
>**Questions**
>- It is hard for me to understand Fig. 1b.
Fig. 1b illustrates the geometric intuition behind mDLAG's ability to disentangle concurrent signal flow between a pair of populations. This task is difficult given measurements of raw neural activity, i.e., the activity along axes $A_1$ and $A_2$ in population A's neural state space (Fig. 1b, left), and the activity along axes $B_1$ and $B_2$ in population B's neural state space (Fig. 1b, right). However, if it were possible to measure activity along alternative dimensions in each population's state space (Fig. 1b, magenta and gray dimensions), then it could be possible to tease apart signals concurrently relayed in opposite directions. For example, if one were to measure activity along the magenta dimension in both population A and B (Fig. 1b, middle, top black time courses), then it is apparent that the activity in population A (top trace) leads the activity in population B (bottom trace). Similarly, if one were to measure activity along the gray dimension in both population A and B (Fig. 1b, middle, bottom black time courses), then it is apparent that the activity in population B (bottom trace) leads the activity in population A (top trace).
To improve clarity, we have added the following to Fig. 1b: (1) labels for "Population A" (left) and "Population B" (right) to indicate which population corresponds to which set of axes, (2) explicit labels to indicate that axes $A_1$, $A_2$, $B_1$, and $B_2$ correspond to individual neurons, and (3) arrowheads on the gray and magenta dimensions to indicate the positive direction of the corresponding latent activity.
If the reviewer has any other suggestions to help clarify the panel, please let us know.
>- Eq. 3...
We thank the reviewer for this clarification question. Please see the General Response to Reviewers, "Positivity constraints on the mean parameter."
>- Is that $\mathbf{x}^m_{n,t}$...
Thanks to the reviewer's comment, we realized that we could have been clearer in indicating how we chose to display the latent variables. $\mathbf{x}^m_{n,t}$ is still correct. For Fig. 3b, for example, the complete output would be 21 latent time courses (7 latents, time-delayed for each of the 3 populations). For concision, in Fig. 3b and throughout the paper, we display the set of latents for one of the populations. We have added the following text to the legends of Figs. 3 and 4:
- "For concision, we show only latents corresponding to one population ($\mathbf{x}^m_{n,j,:}$); the remaining latents are time-shifted versions of those shown here."
>- Is the inference...
Across all datasets, the average runtime per iteration was 2.5 seconds (Supplementary Fig. S3). We have also added the following text to Supplementary Fig. S3: "We conservatively ran each mDLAG model for 50,000 iterations, resulting in an average total runtime of 34 hours. Had we used a less conservative, but still reasonable, stopping tolerance of $10^{-6}$, the average number of iterations required for convergence would have been 17,000, for an average total runtime of 11.5 hours." For further discussion, please see the General Response to Reviewers, "Computational demands."
>- Is there any other models...
We include the following qualitative and quantitative comparisons to alternative models throughout the text:
- Section 3, Simulation 1, Fig. 3a: We compare an mDLAG model with automatic relevance determination (ARD) to an mDLAG model without ARD. We conclude that "The mDLAG model with ARD … recovered the population-wise sparsity structure with high accuracy (Fig. 3a, center). The mDLAG model without ARD, however, produced an estimate of the loading matrix with mostly non-zero elements (Fig. 3a, right)."
- Section 3, Simulation 2, Fig. 4: We compare mDLAG to group factor analysis (GFA). We conclude that "mDLAG latent variable and time delay estimates accurately reflected the distinct signaling pathways across the three populations (Fig. 4b; $R^2$ between ground truth and estimated time courses: 0.989; mean delay error: 0.16 ms). Each latent estimated by GFA, however, notably reflected a mixture of both interactions (Fig. 4c, each latent time course exhibits two peaks)."
- Section 3, V1-V2 recordings, Supplementary Figs. S1, S2a: We compare mDLAG to DLAG. We conclude that "mDLAG outperformed DLAG across all datasets (Supplementary Fig. S2a, points above the diagonal)..."
- Section 4, Supplementary Figs. S2bc, S3: We compare mDLAG to GFA and a lesioned version of mDLAG in which time delays were fixed at zero ('mDLAG-0'). We conclude that "GFA, mDLAG-0, and mDLAG exhibited increasingly better leave-group-out prediction (Supplementary Fig. S2b: mDLAG-0 better than GFA; c: mDLAG better than mDLAG-0)..."
- See also Reviewer Figs. R2 and R3, in which we show that mDLAG outperforms multiset CCA.
To our knowledge, these are the most comparable methods to mDLAG. If the reviewer is aware of other related methods that we should cite or compare to, please let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for your replies. Same as other reviewers, I also still think this is a good paper without major problems. | Rebuttal 1:
Rebuttal: ## General Response to Reviewers
We thank the reviewers for their constructive comments, which helped to strengthen our submission. Here, we provide responses to comments shared by multiple reviewers.
### Amount of data needed
Reviewers qoeh and DL1P inquired about the amount of data needed to fit an mDLAG model. In general, the answer to this question is highly data-dependent: the number of trials, number of neurons, number of populations, trial lengths, latent dimensionality, latent temporal structure, and noise levels all interact to influence mDLAG's performance. However, we can address this question with an empirical example (Reviewer Fig. R1). We re-fit mDLAG models to the Neuropixels dataset analyzed in Fig. 5, but we limited the number of experimental trials available in the training set (i.e., we fit mDLAG to training sets with 10, 25, 50, 100, 150, up to 225 trials—the full training set size). With as few as 100 training trials (less than half of the full training set size), mDLAG's test performance suffered by little more than 5% (Reviewer Fig. R1, black curve). Furthermore, with as few as 25 training trials, mDLAG still outperformed the group factor analysis (GFA) model fit to all 225 training trials (Reviewer Fig. R1, red dashed line).
This example demonstrates empirically that mDLAG performs well with trial counts typical of neurophysiological experiments. In principle, this data-efficiency is due to mDLAG's incorporation of (1) dimensionality reduction, (2) temporal smoothing, and (3) automatic relevance determination. These components act as forms of regularization that benefit model performance in data-limited regimes.
### Computational demands
Reviewers EMi6 and DL1P have pointed out that temporal smoothing via a GP prior is computationally demanding. We agree, and acknowledge in Supplementary Fig. S3 (where we compare mDLAG and GFA runtimes) that "Overall, mDLAG is more computationally intensive than GFA due to the incorporation of Gaussian processes. Each mDLAG fitting iteration requires the inversion of a $MpT \times MpT$ matrix (equation S11)."
We emphasize, however, two points from the Discussion: (1) mDLAG provides a tractable approach to model selection for neural recordings involving three or more populations, a problem that was previously computationally prohibitive; (2) The problem of scaling the latent inference step for this class of Gaussian process latent variable model has been studied extensively (Gallagher et al., 2017; Zhao et al., 2017; Duncker et al., 2018; Keeley et al., 2020; Jensen et al., 2021; Dowling et al., 2023). Approaches include diverse uses of Fourier approximations, inducing points, and additional variational approximations. mDLAG is compatible with these approaches, and we believe that they provide a promising avenue for continued computational development of the mDLAG framework.
### Gaussian observation model applied to spike counts
Reviewers EMi6 and DL1P have pointed out that mDLAG incorporates a Gaussian observation model, whereas the spike counts to which we apply mDLAG are generally better described by a Poisson observation model. While we agree with this point, we note that this limitation is not unique to mDLAG, and is shared by widely used methods such as factor analysis, Gaussian process factor analysis, and latent linear dynamical systems. Intuitively, one can think of the Gaussian model as capturing the first and second moment of the spike count data, rather than requiring that the data be Gaussian-distributed. In our experience, for neural recordings similar to those analyzed in this work, we have not encountered a scientific finding that could be seen with a Poisson or point process observation model that could not also be seen using a Gaussian model. Hence mDLAG is still widely applicable to spiking neural activity. We also emphasize, as we have in the Discussion, that mDLAG's Gaussian observation model enables wider applicability to other neural recording modalities. For spiking neural activity in which a Gaussian observation model is meaningfully limiting, a Poisson or point process observation model can be successfully incorporated, but at the expense of increased computational requirements (Zhao et al., 2017; Duncker et al., 2018; Keeley et al., 2020; Jensen et al., 2021; Dowling et al., 2023).
### Positivity constraints on the mean parameter
Reviewers qoeh and DK8X have inquired about potential positivity constraints on the mean parameter $\mathbf{d}^m$. We thank the reviewers for this question, as we realized we needed to further clarify our data preprocessing steps. Introducing a positivity constraint on $\mathbf{d}^m$ could make sense for applications to neural spike counts, especially if it is particularly important to interpret each element of $\mathbf{d}^m$ as the mean firing rate of each neuron across time and trials. However, a positivity constraint might not make sense for analyses of different recording modalities, or given certain preprocessing choices of spiking neural activity. For example, in this work, because we were interested in inter-population interactions on timescales within a trial, we subtracted the mean across time bins within each trial from each neuron. This step removed activity that fluctuated on slow timescales from one stimulus presentation to the next (Cowley et al., 2020). Consequently, the data input to mDLAG were no longer positive-valued spike counts. We have added the following text:
- Section 3: "Because we were interested in V1-V2 interactions on timescales within a trial, we subtracted the mean across time bins within each trial from each neuron. This step removed activity that fluctuated on slow timescales from one stimulus presentation to the next (Cowley et al., 2020)."
- Section 4: "As we did for the V1-V2 recordings, above, we subtracted the mean across time bins within each trial from each neuron, to remove slow fluctuations beyond the timescale of a trial."
Pdf: /pdf/c94150c3547726212ea06ceb9511e2aa922ba121.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Robust Data Valuation with Weighted Banzhaf Values | Accept (poster) | Summary: The paper proposes that 1) the semi-values obtained by fitting Kronecker noise to the data rank data more consistently across runs, and 2) these weighted Banzhaf values can be estimated efficiently with the maximum sample reuse principle. Extensive experiments illustrate the advantages of the contributions: higher sample efficiency and better performance in noisy label detection and ranking consistency.
Strengths: 1. It is novel to fit noise to the data to determine the semivalue weights and important to expand beyond and compare against common data valuation methods (Shapley and Banzhaf)
2. Extensive experiments on many datasets were performed.
Weaknesses: The definition of robustness (especially in Sec 3.2), the assumptions, and the limitations of the work are not sufficiently and clearly explained. More details are given in the questions and suggestions below. Incorporating the suggestions will improve the contextualization relative to prior work and make the claims more convincing.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1. Is the definition of robustness in this paper the same as Data Banzhaf?
Q2. Does the estimation according to Line 134/equation 1 get significantly less accurate as $w$ tends to 0 or 1? One of the terms will have far fewer samples.
Q3. Does Kronecker Noise mean Gaussian noise with Kronecker covariance?
What is an intuitive interpretation of each entry in $X_i$ and $\Sigma$? How is $X_i$ related to $v$? Provide further explanation/citations on the suitability of Kronecker noise (fewer parameters to fit?) and why $\sigma_{22} < \sigma_{11}$.
My guess is that $X_i \sim \mathcal{N}(0, \Sigma =\begin{pmatrix} \sigma_{11}\ \sigma_{12}\\\\ \sigma_{21}\ \sigma_{22} \end{pmatrix})$ . Instead of observing the utility function $\bar{u} = \begin{pmatrix} u(\emptyset) \\\\ u(i) \end{pmatrix}$ directly, $\bar{u} + X_i$ is observed.
In equation 5, what is $\phi_P$ and why is its outer product $\Sigma$?
Q4. Can you explain why Theorem 1/Remark 1 contradicts Data Banzhaf? Is it due to a different notion of robustness / assumption of noise structure? The Data Banzhaf approach is noise-structure agnostic, as they consider that users may have difficulty estimating the noise.
Q5. Line 229-231 mentions that exact values are intractable on larger datasets and approximation would “draw the real noises away from following Kronecker noises”. What does the quoted part mean? Is this a problem with using Kronecker noise that limits the applicability in real use cases?
**Suggestions**
* The robustness concept should be better introduced in the introduction’s 3rd paragraph. Later, a mathematical/formal definition of the robustness concept should be included.
For example, the Data Banzhaf paper explains that the stochasticity in the rankings/utilities is due to stochastic training methods. Such inconsistencies may interfere with identifying the usefulness of data. The Data Banzhaf paper gives an in-depth definition of Robustness in Sec 4. These details are missing from this paper.
* The “weighted Banzhaf” and “consistency” concepts introduced here have alternative names in CGT literature: binomial semivalues and null player exclusion/hereditary property. See Domenech, M., Giménez, J. M., & Puente, M. A. (2016). Some properties for probabilistic and multinomial (probabilistic) values on cooperative games. Optimization, 65(7), 1377-1395.
* In the experiments, the Shapley and Beta Shapley values underperform as they place large weights on the smaller/larger coalitions. I would expect Beta$(k, k)$ for large $k$, or Beta$(kp^*, k(1-p^*))$ where $p^*$ is the most robust Banzhaf weight, to perform as well.
Thus, an alternative conclusion (instead of using binomial semivalues) is that the semivalue weights should be learnt/adaptive.
* It might be possible to empirically verify that in real dataset/SGD training, training on more data leads to more or less noise in the utilities.
Minor suggestions (not affecting score)
* coeffient misspelling in Figure 1 caption
* the axis tick labels in all figures are too small
* in equation 1, s is not defined to be the size of S
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitation mentioned is that there is no universally best and most robust value for all datasets. However, other limitations (e.g. the Kronecker noise assumption or complexity) should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out our insufficient presentation of the context, as well as for your helpful suggestions. The suggested reference is useful and will be added in our revision. Please see our global response for clarification; below we address the other comments, and we will polish our paper accordingly.
**Q: Does the estimation according to Line 134/equation 1 get significantly less accurate as $w$ tends to 0 or 1?**
A: No, it is always accurate in terms of almost-sure convergence. Precisely, let $\hat{\phi}^m$ (omitting the argument $v$, as the statement below holds for every utility function $v$) be the estimate for $\phi$ using $m$ samples from the proposed sampling scheme. In particular, $\hat{\phi}^1$ can be viewed as a random vector whose distribution is induced by the sampling scheme, and Proposition 2 proves that $\mathbb{E}[\hat{\phi}^1]=\phi$. Therefore, by the law of large numbers, $\hat{\phi}^m\rightarrow\phi$ almost surely as $m\rightarrow\infty$. Heuristically, when $w$ tends to $0$ or $1$, the weights $p^{n-1}_s$ in Eq. (1) for the terms that receive fewer samples decrease to $0$, so the error induced by ignoring those terms becomes negligible. We will polish this part in the final version.
**Q: Why $\sigma_{11}>\sigma_{22}$?**
A: We are not sure we understand this question correctly. We mention $\sigma_{11}>\sigma_{22}$ in Line 168 only to discuss a possible scenario, supported by Proposition 3, in which the Kronecker noise allows for modeling decreasing variance of $v(S)$ as $|S|$ increases. We do not impose this condition in any of our results or experiments. Empirically, provided that $v$ is defined by prediction accuracy, such decreasing variance was mostly observed in our experiments (the variance might rise slightly for some $|S|$ and then keep decreasing).
**Q: What is $ \phi_{P} $ and why is its outer product?**
A: Let $\phi_P(v,i)$ denote the $i$-th entry of $\phi_P(v)$. Substituting Eq. (2) into Eq. (1) yields
$$
\phi_P(v, i) = \sum_{S \subseteq [n]\backslash i} \left( \int_{0}^{1} t^{s}(1-t)^{n-1-s} \mathrm{d}t \right) \left( v(S\cup i) - v(S) \right) .
$$
The outer product in Theorem 1 was a typo: we intended $\det(\mathrm{Cov}(\phi_{P}(v)))$, where $\mathrm{Cov}$ denotes the covariance matrix. We have corrected this typo in the supplementary material, where we prove Theorem 1.
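As a quick sanity check of the display above (our own addition, not part of the paper): the inner integral is the Beta function $B(s+1,n-s)$, whose closed form $\frac{s!\,(n-1-s)!}{n!}$ is exactly the familiar Shapley coefficient. A minimal numerical verification:

```python
from math import factorial

def beta_integral(s, n, steps=100_000):
    """Midpoint-rule approximation of the integral
    from the display above: int_0^1 t^s (1-t)^(n-1-s) dt."""
    h = 1.0 / steps
    return sum(((k + 0.5) * h) ** s * (1.0 - (k + 0.5) * h) ** (n - 1 - s)
               for k in range(steps)) * h

def shapley_coeff(s, n):
    """Closed form B(s+1, n-s) = s! (n-1-s)! / n!, the Shapley weight."""
    return factorial(s) * factorial(n - 1 - s) / factorial(n)

n = 6  # hypothetical number of players
numeric = [beta_integral(s, n) for s in range(n)]
exact = [shapley_coeff(s, n) for s in range(n)]
```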
**Q: What it means by Lines 229-231?**
A: Thank you for pointing out that our description of this part was imprecise; we will revise it in the paper. The context is that the randomness of each noisy utility function $v$ is induced by stochastic training. Empirically, such randomness comes from varying the underlying random seed controlling the training phase. We model this randomness by assuming that $\boldsymbol\epsilon = v - \mathbb{E}[v]$ follows a Kronecker noise, from which the most robust semi-value can be derived. In other words, the randomness of $v$ in Theorem 1 is restricted to come from stochastic training. On large datasets, for each fixed random seed, $\phi_{P}(v)$ can only be approximated, which means additional randomness is introduced into $\phi_P(v)$ by the approximation procedure. In this case, the two noises of different origins become entangled in $\phi_P(v)$, which makes it complicated to analyze the randomness of $\phi_P(v)$.
Therefore, to fairly examine our theory, we focus on small datasets where each semi-value can be computed exactly, so that there is *only one* source of randomness, namely that induced by stochastic training.
**Q: I would expect Beta$(k,k)$ for large $k$ or Beta$(kp^\*,k(1-p^\*))$ where $p^\*$ is the robust Banzhaf version to perform as well.**
A: We experimented with all $(\alpha,\beta)\in[16]\times[16]$ (256 combinations in total) for Beta Shapley, and report its best result on as many datasets as time permitted. All results are reported with 10% of the data in $\mathcal{D}_{tr}$ flipped, and the other settings remain the same. Note that when the two methods achieve the same performance in noisy label detection, weighted Banzhaf values have the smallest variance across different runs (i.e., different random seeds used for training).
#### Ranking consistency
Dataset|Weighted Banzhaf|Beta Shapley
---|---|---
2dplanes|0.50:**0.868**$\pm$0.035|(8,8):0.851$\pm$0.038
bank-marketing|0.35:**0.924**$\pm$0.030|(1,1):0.918$\pm$0.025
bioresponse|0.10:**0.969**$\pm$0.008|(10,2):0.944$\pm$0.019
covertype|0.50:**0.811**$\pm$0.082|(1,1):0.806$\pm$0.074
cpu|0.00:0.874$\pm$0.065|(16,1):**0.883**$\pm$0.078
default|0.05:**0.986**$\pm$0.002|(16,1):0.978$\pm$0.004
gas|0.05:**0.952**$\pm$0.008|(16,2):0.886$\pm$0.023
letter|0.35:**0.711**$\pm$0.058|(5,3):0.700$\pm$0.051
fraud|0.00:0.910$\pm$0.019|(16,1):**0.923**$\pm$0.011
pol|0.10:**0.949**$\pm$0.013|(16,3):0.945$\pm$0.010
#### Noisy label detection
Dataset|Weighted Banzhaf|Beta Shapley
---|---|---
2dplanes|0.10:0.517$\pm$0.047|(13,3):**0.542**$\pm$0.098
bank-marketing|0.20:**0.325**$\pm$**0.025**|(4,1):**0.325**$\pm$0.056
bioresponse|0.55:**0.358**$\pm$**0.034**|(3,4):**0.358**$\pm$0.053
covertype|0.05:**0.483**$\pm$0.055|(13,1):0.442$\pm$0.034
cpu|0.05:**0.667**$\pm$**0.047**|(10,1):**0.667**$\pm$0.055
default|0.10:**0.342**$\pm$**0.019**|(7,2):**0.342**$\pm$0.034
gas|0.15:**0.550**$\pm$0.065|(2,1):0.525$\pm$0.038
letter|0.20:0.383$\pm$0.047|(13,5):**0.392**$\pm$0.053
fraud|0.05:**0.792**$\pm$0.019|(9,2):0.783$\pm$0.055
pol|0.10:**0.558**$\pm$0.034|(12,1):0.525$\pm$0.069
**Q: It might be possible to empirically verify that in real dataset/SGD training, training on more data leads to more or less noise in the utilities.**
A: Thank you for this suggestion; we will add this type of experiment in the revision. Nevertheless, all our results are reported with standard deviations across different random seeds determining the initialization of trainable models as well as the order of feeding data, which demonstrates the randomness contained in $\phi_P(v)$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response, helpful clarifications and follow-up experiments. I acknowledge that I have read the global and individual responses.
The additional discussion has improved my opinion of the work, though, like the other reviewers, I am still concerned about the justification/choice of the Kronecker noise model and robustness. I will wait until the discussion period to re-evaluate my score.
In the global response, the authors mention "$X_i(1)$ represents the randomness brought in by the absence of datum $i$". Why is there additional randomness in this case?
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! We apologize for the confusion; let us clarify our explanation of the Kronecker noise.
The utility function $v$ maps each subset $A$ of players/data to a number. We want a *succinct* noise model that perturbs $v(A)$ and $v(B)$ in a correlated manner, depending on how much they overlap (see Proposition 3). The straightforward approach is to let $\tilde v = v + \epsilon$ where $\mathrm{Cov}(\epsilon) = \Lambda$. The downside is that the matrix $\Lambda$ is of size $2^n \times 2^n$, too large to fit. Instead, we let $(\epsilon_i, \bar \epsilon_i)$ be independent copies sampled from a zero-mean distribution with $2\times 2$ covariance matrix $\Sigma$, and then we perturb $\tilde v(A) = v(A) + \prod_{i\in A} \epsilon_i \prod_{j\not\in A} \bar\epsilon_j$. This product form of noise is inspired by Owen's multilinear extension of utility functions, and only requires modeling a $2\times 2$ covariance matrix. Intuitively, it can be memorized as multiplying $\epsilon_i$ if $i\in A$ and multiplying $\bar\epsilon_i$ if $i\not\in A$. This is just a convenient form of prescribing correlated noise; it does not mean a player/datum still incurs noise when not present in a coalition. We also note that if we used $\tilde v(A) = v(A) + \prod_{i\in A} \epsilon_i$ instead, the noise perturbation would be biased against larger sets $A$ (i.e., more terms in the product).
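The construction just described can be sketched in a few lines. In this illustrative simulation, the Gaussian choice for the noise pairs, the toy utility $v(A)=|A|$, and the dimensions are our own assumptions, not from the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of players/data (toy size, our assumption)

# 2x2 covariance for each independent pair (eps_i, eps_bar_i);
# we assume Gaussian pairs for illustration.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
pairs = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
eps, eps_bar = pairs[:, 0], pairs[:, 1]

def kronecker_noise(A):
    """prod_{i in A} eps_i * prod_{j not in A} eps_bar_j."""
    in_A = np.isin(np.arange(n), list(A))
    return float(np.prod(np.where(in_A, eps, eps_bar)))

# Perturb a toy utility v(A) = |A| on every subset A of [n].
noisy_v = {A: len(A) + kronecker_noise(A)
           for r in range(n + 1)
           for A in itertools.combinations(range(n), r)}
```

Subsets that overlap heavily share many of the same `eps`/`eps_bar` factors, which is how the product form induces correlated perturbations.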
Our theory (including the Kronecker noise) is mainly proposed to explain *the adaptive phenomenon that no single value dominates in all experimental settings* (see Tables 2 and 3). Our experiments were performed with real, agnostic noises, i.e., *they do not necessarily follow any assumption or theory.* Besides, we validated the criterion of differential entropy using empirical results summarized in Figure 3, which confirmed that Eq. (6) leads to the most consistent semivalue in empirical rankings. Intuitively, a random vector tends toward being deterministic (which means the resulting random ranking also becomes deterministic) as its differential entropy decreases.
This paper extends the notion of Banzhaf values to weighted Banzhaf values to improve the robustness of data ranking. The authors show under Kronecker noises, when minimizing the worse-case entropy, the most robust parameters belong to the family of weighted Banzhaf values. Similarly, as implemented in Data Banzhaf, authors use a maximum sample reuse strategy to improve sampling efficiency, as shown by juxtaposing it with other sampling techniques. The authors demonstrate the robustness of weighted Banzhaf values by comparing them with other data valuation methods over multiple datasets. Weighted Banzhaf values produce consistent data ranking. Further, the performance is presented through noisy label detection experiments.
Strengths: + The authors show the exact case when Data Banzhaf achieves highest robustness and cases when weighted Banzhaf values can achieve better robustness.
+ The weighted Banzhaf values are successfully generalized from Data Banzhaf and demonstrate the largest safe margin among the compared data valuation methods.
Weaknesses: + All utility functions are set to be accuracy of simple models. Specifically, only simple models are considered, such as logistic regression models or LeNet.
+ Only simple tabular and image datasets are considered.
+ The sample size is also minimal, 2000 samples at most.
It seems that even though efficient sampling is employed, the current data valuation method cannot be applied in more practical settings (larger models, datasets, and sample sizes), due to the intractable cost of model retraining.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: + How can you ensure an exact Shapley value if each evaluation is noisy?
+ Since weighted Banzhaf results are based on the best Banzhaf weights, can you also provide results for the best parameters for Beta Shapley?
+ What is the time complexity and actual runtime for weighted Banzhaf value at each step?
+ It would be insightful to compare weighted Banzhaf values with KNN Shapley, which does not require model training. Is the data ranking, in that case, inherently robust to noise?
+ Typo Line 242: descent
+ Typo Line 266: flipped
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed review and feedback! Please refer to our global response for clarification on the context. Below we address other specific comments.
**Q: How can you ensure exact Shapley value, if each evaluation is noisy?**
A: To clarify, each noisy utility function $v$ can be written as $v(\cdot,U)$ where $U$ represents the random seed determining the initialization of trainable models and the order of feeding data in the training phase. Our setting *only* considers such randomness introduced by varying $U$. On small datasets, the Shapley value of each $v(\cdot,U)$ is exactly computed using Eq (1).
**Q: What is the time complexity and actual runtime for weighted Banzhaf value at each step?**
A: To answer this question, we generalize the convergence results in (Wang and Jia, 2023) by assuming that the given utility function $v$ satisfies $ |v|\leq r $.
Wang and Jia (2023) analyzed the setting $v\in[0,1]$ where $v$ was taken to be prediction performance.
To describe the complexity, we need the concept of an $(\epsilon,\delta)$-approximation for the considered semi-value $\phi$, namely $P[||\hat{\phi}-\phi||\_\infty\geq\epsilon]\leq\delta$. Take the Banzhaf value as an example: Wang and Jia (Theorem 4.9, 2023) proved that it requires $\frac{32r^2}{\epsilon^2}\log(\frac{5n}{\delta})$ model evaluations to achieve an $(\epsilon,\delta)$-approximation based on the maximum sample reuse principle.
For the $w$-weighted Banzhaf value, mimicking their analysis by replacing Eq. (52) therein with $\tilde{\phi}=\frac{1}{mw}\sum_{S\in\mathcal{S}\_{\ni i}}v(S)-\frac{1}{m(1-w)}\sum_{S\in\mathcal{S}_{\not\ni i}}v(S)$, we can prove that it requires $\frac{(2|w-0.5|+2)^2r^2}{2\epsilon^2w^2(1-w)^2}\log(\frac{5n}{\delta})$, or equivalently $O(\frac{1}{\epsilon^2}\log(\frac{n}{\delta}))$, model evaluations to achieve an $(\epsilon,\delta)$-approximation. On the other hand, Theorem 4.8 therein states (though they asserted it only for the Banzhaf value, it applies to all semi-values) that the sampling-lift approach requires $\frac{4nr^2}{\epsilon^2}\log(\frac{2n}{\delta})$, or equivalently $O(\frac{n}{\epsilon^2}\log(\frac{n}{\delta}))$, model evaluations to achieve an $(\epsilon,\delta)$-approximation. These two results demonstrate why the maximum sample reuse principle is better than the sampling lift. For each model evaluation $v(S)$, the corresponding complexity depends on how $v$ is designed. In our experiments, the running time for $v(S)$ is $\Theta(|S|)$, as we employed one-epoch learning with SGD. Therefore, for the $w$-weighted Banzhaf value, our proposed approximation runs faster as $w$ gets smaller, because it samples small subsets more frequently. We will add this discussion to the supplementary material.
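For concreteness, here is a minimal sketch of the maximum-sample-reuse estimator $\tilde{\phi}$ above. The subset-sampling scheme (each datum included independently with probability $w$, which makes the estimator unbiased) and the toy additive utility are our own illustrative choices:

```python
import numpy as np

def msr_weighted_banzhaf(v, n, w, m=20_000, seed=0):
    """Maximum-sample-reuse estimate of the w-weighted Banzhaf value.

    Samples m subsets S (each player included independently with
    probability w) and reuses every evaluation v(S) for all n players.
    """
    rng = np.random.default_rng(seed)
    masks = rng.random((m, n)) < w  # row k encodes the sampled subset S_k
    u = np.array([v(np.flatnonzero(row)) for row in masks])
    pos = masks.astype(float).T @ u        # sums of v(S) over S containing i
    neg = (~masks).astype(float).T @ u     # sums of v(S) over S excluding i
    return pos / (m * w) - neg / (m * (1.0 - w))

# Toy check: for an additive utility v(S) = sum_{i in S} vals[i], every
# semi-value (including any weighted Banzhaf value) equals vals itself.
vals = np.array([1.0, 2.0, 3.0])
phi = msr_weighted_banzhaf(lambda S: vals[S].sum(), n=3, w=0.3)
```

Since the utility is additive, the estimate should land near `vals`, and each of the `m` evaluations contributes to all `n` coordinates at once, which is the sample-reuse advantage over the sampling-lift scheme.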
**Q: Compare weighted Banzhaf values with KNN Shapley.**
A: For value-based data valuation methods, there are two key components: i) the design of utility functions and ii) which semi-value to aggregate marginal contributions.
KNN Shapley is composed of i) using the performance of KNN as utility functions, which are *deterministic* as no training is required, and ii) using the Shapley value for aggregation.
In our context, each utility function $v$ measures the performance of models trained using SGD. Thus, $v$ is *stochastic* and can be represented by $v(\cdot,U)$ where $U$ is random seed determining the training procedure. The Kronecker noise is to model $\epsilon=v-E[v] $ where the randomness is from varying $U$. To conclude, KNN does not lie in our scope as the produced $v$ is *deterministic.*
**Q: The best results and parameters for Beta Shapley.**
A: For Beta$(\alpha,\beta)$, the range of $\alpha$ or $\beta$ is $(0,\infty)$, so an exhaustive search is infeasible. The parameters we used are all the ones reported in the original paper. Nevertheless, we experimented over all $(\alpha,\beta)\in[16]\times[16]$, which leads to 256 combinations in total. By contrast, the best weighted Banzhaf value is selected among 21 candidates. We report on as many datasets as time permitted. Specifically, all the results are reported with 10% of data in $\mathcal{D}_{tr}$ flipped, and the other settings remain the same. Note that when they achieve the same performance in noisy label detection, weighted Banzhaf values have the smallest variance across different runs (i.e., different random seeds used for training).
#### Ranking consistency
Dataset|Weighted Banzhaf|Beta Shapley
---|---|---
2dplanes|0.50:**0.868**$\pm$0.035|(8,8):0.851$\pm$0.038
bank-marketing|0.35:**0.924**$\pm$0.030|(1,1):0.918$\pm$0.025
bioresponse|0.10:**0.969**$\pm$0.008|(10,2):0.944$\pm$0.019
covertype|0.50:**0.811**$\pm$0.082|(1,1):0.806$\pm$0.074
cpu|0.00:0.874$\pm$0.065|(16,1):**0.883**$\pm$0.078
default|0.05:**0.986**$\pm$0.002|(16,1):0.978$\pm$0.004
gas|0.05:**0.952**$\pm$0.008|(16,2):0.886$\pm$0.023
letter|0.35:**0.711**$\pm$0.058|(5,3):0.700$\pm$0.051
fraud|0.00:0.910$\pm$0.019|(16,1):**0.923**$\pm$0.011
pol|0.10:**0.949**$\pm$0.013|(16,3):0.945$\pm$0.010
#### Noisy label detection
Dataset|Weighted Banzhaf|Beta Shapley
---|---|---
2dplanes|0.10:0.517$\pm$0.047|(13,3):**0.542**$\pm$0.098
bank-marketing|0.20:**0.325**$\pm$**0.025**|(4,1):**0.325**$\pm$0.056
bioresponse|0.55:**0.358**$\pm$**0.034**|(3,4):**0.358**$\pm$0.053
covertype|0.05:**0.483**$\pm$0.055|(13,1):0.442$\pm$0.034
cpu|0.05:**0.667**$\pm$**0.047**|(10,1):**0.667**$\pm$0.055
default|0.10:**0.342**$\pm$**0.019**|(7,2):**0.342**$\pm$0.034
gas|0.15:**0.550**$\pm$0.065|(2,1):0.525$\pm$0.038
letter|0.20:0.383$\pm$0.047|(13,5):**0.392**$\pm$0.053
fraud|0.05:**0.792**$\pm$0.019|(9,2):0.783$\pm$0.055
pol|0.10:**0.558**$\pm$0.034|(12,1):0.525$\pm$0.069
---
Rebuttal Comment 1.1:
Comment: I appreciate authors for their great responses.
Similar to other reviewers, I also share concerns on the choice of Kronecker noise over other noises (a comparison with other noises would be helpful) and only choosing a specific kind of robustness as the key metrics.
---
Reply to Comment 1.1.1:
Comment: Thanks for your efforts! This response is to address your concerns.
**Q: Comparison with other noises.**
A: We are not sure if we understand this question correctly. Our experiments (Tables 2 and 3) were done with real noises, i.e., the underlying noises (that are due to stochastic training) did not necessarily follow any noise models. As far as we know, Wang and Jia (2023) is the first to address the randomness from stochastic training, and our work further pushes the frontier of this direction. Wang and Jia (2023) adopted a noise-structure-agnostic notion (i.e., the safe margin) to conclude that the Banzhaf value is the most robust *in a universal sense*. However, as shown in Tables 2 and 3, there is often not a single semivalue that dominates for all experimental settings. In contrast, we propose the Kronecker noise to model the randomness, and our theory is an attempt to give a possible explanation to *the observed adaptive phenomenon.* We are not aware of other noise models analyzed in the data valuation literature and would appreciate any concrete suggestions.
**Q: Concern on only choosing a specific kind of robustness as the key metrics.**
A: We'd like to clarify that for our experiments we use Spearman's rank correlation coefficient and F1-score as performance metrics to evaluate different semivalues. The differential entropy criterion is employed for two purposes: (1) explain the adaptive phenomenon that no single semivalue dominates for all experimental settings; (2) determine which weighted Banzhaf value would achieve the most consistent ranking. We do not intend to claim our robustness criterion as the only useful one (e.g., the safe margin criterion of Wang and Jia (2023) is equally interesting), but rather as a means to explain our experimental results. | Summary: This work focuses on data valuation with weighted Banzhaf values, which seems to be effective, particularly in cases where the dataset or the data valuation process is noisy. Toward that, the authors introduce and utilize a Kronecker noise model to calculate robust values* and moreover to do it in an efficient way utilizing the maximum sample reuse principle. They show that the weighted Banzhaf value with the Kronecker noise model is optimal in minimizing worst-case entropy. They performed experiments to show the efficiency of the method with maximum sample reuse, validate their theoretical finding on optimality (to a certain extent), and evaluate the performance of weighted Banzhaf in capturing noisy labels and data ranking. Their results indicate that weighted Banzhaf shows consistently good performance across different tasks and datasets.
*following the calculation of semi-values used for averaging contribution over subsets
Strengths: **Clarity:** Overall, this paper is quite well written with clear expectations for the reader throughout. The logical flow of the paper is well-structured, although there were a few intuitive aspects that I found lacking. Nonetheless, I believe that the presentation of the work is solid.
**Originality:** The concept of the Kronecker noise model, and moreover utilization of it for learning semi-values and evaluating noisy datasets are original, as far as my knowledge extends.
**Quality:** I think this paper meets the quality standards starting from the introduction of the noise model and weighted version of Banzhaf indexing, to the argumentation of their optimality, and the extensive experimental evaluations.
**Significance:** I believe it is crucial to align game-theoretical data valuation approaches and compare their strengths and weaknesses across different scenarios, including noisy settings. As emphasized by the authors, no universal approach can really be deemed optimal for all datasets and noise settings. However, the flexibility offered by weighted Banzhaf (or data-driven? semi-values) is promising, opening up avenues for future research in understanding the optimality of data-valuation approaches across different regimes. I find the contribution of this work to be significant in this regard.
Weaknesses: I believe there is room for improvement regarding the justification of the Kronecker noise model and its implications for data valuation. It would be helpful to provide further explanations and clarifications on the following points:
- Intuition behind the Kronecker model and the scenarios where it is (not) applicable.
- In addition to discussing the universality of an approach or lack thereof, I believe it is important to draw conclusions or insights from the comparisons. For example, for a reader like me, it is not immediately clear why the proposed method is challenged by other approaches in particular in Table 2, and how this performance comparison is affected by the intrinsic dynamics in the data (such as higher-order interactions, which is generally captured best by Shapley value).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The points presented under the Weaknesses above can be considered as my questions here. I have the following additional questions:
- In Table 3, are some portions of labels still flipped? My understanding was that they are not. If it's indeed so, I am curious to see F1 scores in addition to the correlation coefficients. Clarification would be appreciated.
- How does the Kronecker noise model behave for large amounts of classes in data?
- Also, the reproducibility is checked as n/a but I think it applies here.
- Do you think overfitting may be an issue with this approach as semi-values are learned from data in the end?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As the authors also addressed in the supplementary material, the concept of the Kronecker noise model poses certain limitations where it does not align well with the actual data noise and is empirically justified only on small datasets. I have not observed an additional limitation beyond this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review and feedback. Please refer to our global response for clarification on our context. We will release our code after acceptance to ease the replication of all our results. Below we address other specific comments.
**Q: How does the Kronecker noise model behave for large amounts of classes in data?**
A: Intuitively, more classes require more training data to produce non-trivial trained models that can distinguish noisy and clean data well. We do not think the Kronecker noise would be a good fit for all possible induced noises on large datasets. Specifically,
if we plot $\det(Cov(\phi_{\delta_w}(v)))$ (the correct objective in theorem 1) along the $w$-axis (the parameter for weighted Banzhaf values), the curve is always U-shaped, which somewhat implies that the curve of ranking correlation should be an upside-down U (peaking at one position and decreasing on both sides). But empirically this is not always the case, even on a 2-class dataset; see for example the 1st-row-2nd-column plot (200 data from the diabetes dataset) in Figure 6 in the supplementary. Despite this, empirical evidence still supports that weighted Banzhaf values are more likely to capture the most consistent ranking, as shown in Table 3.
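The U-shape claim can be probed numerically on a toy instance. The sketch below is our own construction (arbitrary positive-definite $2\times2$ factor, small $n$): it builds the Kronecker covariance $\Sigma^{[n]}$, assembles the linear map $v\mapsto\phi_{\delta_w}(v)$ in the binary ordering, and evaluates $\det(Cov(\phi_{\delta_w}(v)))=\det(A\,\Sigma^{[n]}A^\top)$ on a grid of $w$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((2, 2))
Sigma = B @ B.T + 0.1 * np.eye(2)       # a random positive-definite 2x2 factor

big = Sigma                             # Kronecker covariance Cov(v) = Sigma^{[n]}
for _ in range(n - 1):
    big = np.kron(big, Sigma)

def det_cov(w):
    """det(Cov(phi_{delta_w}(v))) for the w-weighted Banzhaf value."""
    A = np.zeros((n, 2 ** n))           # linear map v -> phi, binary ordering
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for m in range(2 ** (n - 1)):   # enumerate S subset of [n] \ {i}
            S = sum(1 << others[b] for b in range(n - 1) if (m >> b) & 1)
            s = bin(S).count("1")
            coef = w ** s * (1 - w) ** (n - 1 - s)
            A[i, S | (1 << i)] += coef  # + v(S ∪ {i})
            A[i, S] -= coef             # - v(S)
    return np.linalg.det(A @ big @ A.T)

dets = [det_cov(w) for w in np.linspace(0.05, 0.95, 19)]
```

Since $A$ has full row rank for $w\in(0,1)$ and $\Sigma^{[n]}$ is positive definite, every determinant on the grid is strictly positive; plotting `dets` against the grid shows the shape of the curve.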
**Q: Is overfitting an issue with this approach as semi-values are learned from data in the end?**
A: In our context, each utility function $v$ can be written by $v(\cdot,U)$ where $U$ represents random seed that controls the training phase. The considered randomness on $v$ is from varying $U$, and we model $\epsilon =v-E[v]$ using the proposed Kronecker noise.
The empirical covariance matrix $\hat{D}$ to which a Kronecker noise is fitted is approximated by evaluating $v(\cdot,U)$ while varying $U$, see Line 675 in the supplementary for more details.
*If $\hat{D}$ is approximated well*, we do not think overfitting would be an issue, as a perfect match would indicate that the real noise exactly follows some Kronecker noise. Concretely, for the last two columns in Figure 3, the KL divergences for the learned Kronecker noises are $0.5954$ and $0.5490$ for datasets 2dplanes and cpu, respectively.
**Q: In Table 3, are some portions of labels still flipped? My understanding was that they are not. If it's indeed so, I am curious to see F1 scores in addition to the correlation coefficients.**
A: We did not flip any labels for Table 3, but we can provide the correlation for all the flipped datasets used in Table 2, and then report the corresponding F1-scores. In the one-page pdf attached to our global response, Table A lists all the results of ranking consistency given that 10% of data in $\mathcal{D}_{tr}$ are flipped, and then Table B reports all the corresponding F1-scores in noisy label detection.
These results show that ranking consistency does not correlate strictly with the performance of noisy label detection. A sign of this can be observed in Figure 1 as the peaking position is different in each column. Despite that, weighted Banzhaf values still perform well in noisy label detection, as shown in Table 2.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Dear Authors,
Thank you for your clarifications. I acknowledge that I read your individual rebuttal as well as your global response, and have no further questions.
Regards | Summary: The paper looks at the standard data valuation problem, in case of noisy estimation of the value of a coalition. It proposes a model of noise, Kronecker noise, and shows that under this noise a weighted Banzhaf value (with weight that depends on parameters of the noise) is semi-value that maximizes a notion of robustness. If the weight is 0.5, it is equivalent to the Banzhaf value, but it is different otherwise. The paper shows a number of experiments that illustrate this result and the efficiency of computing the weighted Banzhaf value.
Note: There is a very related paper from Wang and Jia at AISTATS '23 that proposes the Banzhaf value and shows it is the most robust semi-value under a certain definition of robustness (and also proposed an efficient estimator based on the so-called maximum sample reused principle); so this submission's novelty and significance is obviously to be judged with respect to that AISTATS '23 paper.
Strengths: The data valuation problem is a hot topic and moving away from the Shapley value adds interesting insights to the existing literature. In particular, considering robustness to noisy evaluation of the value function (as the paper does) is interesting.
In my opinion, the main contribution of the paper is the result of Theorem 1 that states the following: If the noise is Kronecker noise (as defined in Def 1), then the semi-value that minimizes the determinant of the semi-values covariance corresponds to a weighted Banzhaf value with weight given by (6), which depends on individual parameters of the noise.
- To me this results is more of the type of identifying a particular kind of noise and a particular measure of robustness such that weighted Banzhaf maximizes robustness (rather than having a naturally relevant notion of noise and then showing the result), but this is already quite interesting. The notion of Kronecker noise used is somewhat general and flexible.
- The proof is non-trivial (and very long). I was not able to follow all details. I wish the paper had included a sketch of proof to give meaningful intuition.
- I am wondering whether this result is really specific to the data valuation problem and even if data valuation is the most relevant application. Is there any other that one could think of?
The paper shows the application of this result to the data valuation problem based on extensive experiments on synthetic and real datasets (although the real data sets are still somewhat forced to fit the Kronecker noise framework, see below).
The paper also contains another result, Proposition 2, which shows that the maximum sample reuse principle can be used to efficiently estimate the weighted Banzhaf value, but this is very incremental compared to [Wang and Jia, AISTATS '23].
Weaknesses: I did not find the comparison to [Wang and Jia, AISTATS '23] to be perfectly clear in the paper. As the paper mentions, [Wang and Jia, AISTATS '23] have a result that states that Banzhaf maximizes robustness amongst all semi-values, which includes weighted Banzhaf. How can it be reconciled with the result of this paper that shows that in some cases a non-0.5 weighted Banzhaf is more robust? I do not believe that [Wang and Jia, AISTATS '23] assume isotropic noise (correct me if I am wrong). Is that not rather due to a different definition of robustness? It would be better if the paper was clearer about that.
The noise model introduced lacks motivation. Specifically, the noise is applied directly on the value function, but there is no attempt made to link this noise model to noise in the data. In fact, l. 169, the paper says that the covariance of $v(S)$ for a subset $S$ of size $s$ decreases like $1/C^{s}$ with $C=\sigma_{11}/\sigma_{22}>1$ if $\sigma_{11} > \sigma_{22}$. This brings two questions:
- why is $\sigma_{11} > \sigma_{22}$ linked to a decreasing variance with the size of the subset?
- why is this a good model? Sure, we expect that the variance would decrease, but not as fast as $1/C^{s}$, rather something classical like $1/\sqrt{s}$. All that is to say that it is not clear (and even less justified in the paper) that the Kronecker noise model is relevant for the data valuation application.
The paper also lacks motivation for the specific robustness measure used (the determinant of the covariance of the values). There is a somewhat vague paragraph that explains that gaussian distributions maximize entropy for fixed covariance but why entropy and why fix covariance? Also, is it possible that the values are gaussian?
(Side note: in this paragraph, the covariance $\Sigma$ is generic whereas earlier it is specifically the individual one for Kronecker noise, that is confusing.)
Some parts of the paper are a bit fast and unclear. For instance, Proposition 2 and the comment below it do not really explain what the convergence is, what the sample lift strategy is, and why it is so bad here. This is not too crucial since this result is anecdotal but is frustrating for the reader.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In the experimental section: my understanding is that the paper fits the covariance matrix of the Kronecker noise to the data and then adds noise to subsets with this fitted matrix. Is that correct? I wonder: is the fit good at all? This is related to my earlier comment that I do not feel that Kronecker noise is necessarily a good model here (e.g., because of the too fast decrease of the variance, see above).
- More generally, in many places I was not able to tell what the value function used is and how exactly noise is added in the experiments. Maybe I missed the information?
- The introduction mentions robustness to human errors and attacks. I do not believe that what is proposed here can reasonably be robust to attacks, at least not to worst-case unconstrained attacks. Can the authors comment on the robustness to attacks?
- Understanding Fig 1 is basically not possible because it is not written what the value is and what noise is added. Even after reading the experimental section I was not sure what is plotted on Fig 1.
- Minor: In Remark 1: $\sigma_{11} = \sigma_{22}$ is more general than isotropic.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed in Appendix E, which is kind of hidden after 14 pages of proofs and is not referenced in the text. It'd be better to at least put a pointer in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and helpful feedback. Some comments are addressed in our global response, and below we address the remaining ones. We have elaborated more on our proofs to ease verification by readers. At the very beginning, we first noticed that weighted Banzhaf values are special as stated in lemma 7 (while forging the framework of semi-indices using materials scattered across many cooperative game theory references), which guided us to conjecture theorems 1 and 2. Then, we verified them partially by running some experiments before working out their rigorous proofs. The Kronecker structure naturally suggests using induction to prove the results.
**Q: Whether we add noises to subsets with the fitted matrix and how we add noises?**
A: Except for the first column of Figure 3 where we artificially added non-isotropic Gaussian noises to *deterministic* utility functions (with a fixed random seed 2023), we *did not intentionally* add noises elsewhere.
Precisely, each stochastic utility function $v(\cdot)$, e.g., those used in *Figure 1*, and Tables 2 and 3, can be written as $ v(\cdot,U) $ where $ U $ is the random seed controlling the initialization of trainable models, as well as the order of feeding data in the training phase. In other words, the randomness of stochastic utility functions *only* comes from varying $ U $, and say, $ 6 $ independent runs means we set $ U=0,1,\dots,5 $ for each run.
Line 675 in the supplementary demonstrates how we generate $\hat{\mathbf{D}} $, to which an individual covariance matrix $ \hat{\boldsymbol\Sigma} $ is fitted.
Then, the fitted $ \hat{\boldsymbol\Sigma} $ is *only* used to generate the supposedly most robust weighted Banzhaf value parameterized by $ w^{*} $ according to Eq. (6), which is tagged by ''robust'' in Figure 3.
**Q: Is the Kronecker noise a good fit?**
A: We admit that the Kronecker noise is not capable of capturing all possible real noises.
Nevertheless, as shown by the last two columns in Figure 3, each $w^{*}$ obtained from the Kronecker noise fitted to the real noise is the most consistent semi-value in empirical ranking.
Therefore, in these two experiment settings, it is safe to say that the Kronecker noise is a good model for the underlying noises induced by stochastic training. Though the used datasets are small, the conclusion (that weighted Banzhaf values tend to be the most consistent in ranking) can still be observed empirically on larger datasets, as shown in Table 3.
**Q: Robustness to human errors and attacks.**
A: We did not consider this setting in this work as we only model the randomness from the stochasticity during training. This direction would be an interesting future work.
**Q: Why is $\sigma_{11} > \sigma_{22}$ linked to decreasing variance with the size of the subset?**
A: This is implied by Proposition 3, which gives the analytical formula for $\mathrm{Cov}(\boldsymbol\epsilon) = \mathrm{Cov}(v)$ (the covariance matrix) given that $\boldsymbol\epsilon=v-\mathbb{E}[v]$ follows some Kronecker noise. In other words, $\mathrm{Var}[v(S)] = \sigma_{11}^{n-s}\sigma_{22}^s$. As $s$ increases, $\mathrm{Var}[v(S)]$ decreases provided that $\sigma_{11}>\sigma_{22}$.
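This formula is easy to verify numerically: build $\Sigma^{[n]}=\Sigma\otimes\cdots\otimes\Sigma$ with `np.kron` and read off the diagonal. The values of $\sigma_{11}$, $\sigma_{22}$, and the off-diagonal entry below are arbitrary choices of ours.

```python
import numpy as np

n = 3
s11, s22 = 2.0, 0.5                      # diagonal of the 2x2 factor Sigma
Sigma = np.array([[s11, 0.3], [0.3, s22]])

big = Sigma                              # Sigma^{[n]}, binary ordering
for _ in range(n - 1):
    big = np.kron(big, Sigma)

for k in range(2 ** n):                  # k encodes the subset S bitwise
    s = bin(k).count("1")                # s = |S| = number of present data
    assert np.isclose(big[k, k], s11 ** (n - s) * s22 ** s)
print("Var[v(S)] = sigma11^(n-s) * sigma22^s holds for all S")
```

With $\sigma_{11}=2>\sigma_{22}=0.5$, the diagonal entries indeed shrink geometrically in $s$, matching the $1/C^{s}$ decay discussed by the reviewer.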
**Q: Details on the sampling lift.**
A: As dubbed by Moehle et al. (2022), sampling lift refers to any approximation based on $\phi_i(v) = \underset{S\subseteq[n]\backslash i}{E} [v(S\cup i)-v(S)]$. Precisely, Eq (1) is equal to $$\phi_i(v)=\sum_{k=0}^{n-1}q_k \sum_{S \subseteq[n]\backslash i,|S|=k}\binom{n-1}{k}^{-1}(v(S\cup i)-v(S))=\underset{k}{E}\underset{S:|S|=k,i\not\in S}{E}[v(S\cup i)-v(S)],$$ where $q_k=p^{n-1}_{k}\binom{n-1}{k} $. The sampling strategy used in (Moehle et al. 2022) is i) sample $k\sim(q_k)$, and then ii) sample $S$ uniformly subject to $|S|=k$ and $i\not\in S$.
For the reweighted sampling lift used by Kwon and Zou (2022), they consider reweighted marginal contributions $q_s n(v(S\cup i)-v(S))$ instead. A drawback is that $q_k$ has to be calculated cleverly to avoid the numerical blowup induced by $\binom{n-1}{k}$, which is why Wang and Jia (2023) did not provide results for Beta Shapley on datasets with more than 500 data points.
Nevertheless, we noted that there is a better way briefly mentioned by Dubey et al. (1981). Substituting Eq (2) in Eq (1) yields $$\phi_i(v)=\int_0^1\underset{S\subseteq[n]\backslash i}{\sum}t^s(1-t)^{n-1-s}(v(S\cup i)-v(S))dP(t).$$ Subsequently, take $\phi_n(v)$ as an example: i) sample $t\in[0,1]$ according to the provided $P$; ii) let $X$ be a Bernoulli random variable with $P(X=1)=t $, and sample a vector $b\in R^{n-1} $ whose entries are independent copies of $X$;
and iii) define $S\subseteq[n]\backslash n$ by letting $i\in S$ if and only if $b_i=1$.
For Beta Shapley, the density of $t$ is proportional to $t^{\beta-1}(1-t)^{\alpha-1}$, as shown in Table 1.
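The two-stage sampler in steps i)-iii) can be sketched as follows; the helper name and the toy check are ours, not from the paper.

```python
import numpy as np

def sample_subset(n, alpha, beta, rng):
    """One draw of the two-stage sampler for player n (0-indexed as n-1):
    t has density proportional to t^{beta-1}(1-t)^{alpha-1}, i.e.
    t ~ Beta(beta, alpha); then each of the other n-1 players joins S
    independently with probability t."""
    t = rng.beta(beta, alpha)
    mask = rng.random(n - 1) < t
    return np.flatnonzero(mask)          # S is a subset of {0, ..., n-2}

rng = np.random.default_rng(0)
# alpha = beta = 1 recovers the Shapley value: t is uniform on [0, 1],
# so the expected size of S is (n - 1) / 2
sizes = [len(sample_subset(10, 1, 1, rng)) for _ in range(20_000)]
```

One would then average marginal contributions $v(S\cup\{n\})-v(S)$ over such draws; no $\binom{n-1}{k}$ ever appears, which avoids the numerical blowup mentioned above.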
**Q: Why sampling lift is bad?**
A: To answer, we generalize the convergence results in (Wang and Jia, 2023) by assuming that the given utility function $v$ satisfies $ |v|\leq r $. Wang and Jia (2023) analyzed the setting $v\in[0,1]$ where $v$ was taken to be prediction performance.
To describe the rate of convergence, we need the concept of $(\epsilon,\delta)$-approximation for the considered semi-value $\phi$, which is $P[||\hat{\phi}-\phi||\_\infty\geq\epsilon]\leq\delta$. For the maximum sample reuse principle employed for every $w$-weighted Banzhaf value, adapting their proof of Theorem 4.9 gives that it requires $ \frac{(2|w-0.5|+2)^2r^2}{2\epsilon^2w^2(1-w)^2}\log(\frac{5n}{\delta})$, or $O(\log(n/\delta)/\epsilon^2)$, model evaluations to achieve an $(\epsilon,\delta)$-approximation. Besides, Theorem 4.8 therein (though asserted only for the Banzhaf value, it applies to all semivalues) proves that sampling lift requires $ \frac{4nr^2}{\epsilon^2}\log(\frac{2n}{\delta})$, or $O(n\log(n/\delta)/\epsilon^2)$, model evaluations.
These two results illustrate why the maximum sample reuse principle is better than sampling lift.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. It answers many of my questions.
There are two key points that remain in my opinion:
- the Kronecker noise model is somewhat arbitrary. It's an interesting noise model but not necessarily one that is a clearly good model of noise at training.
- the measure of robustness is also a bit arbitrary (I have not seen a clear intuition for it).
These aren't really questions, these are more the main negative points that I see in the contribution. All the positive points of course still remain. So all in all I am favorable to accepting the paper but I feel my score of 6 is a good reflection of my overall opinion (the paper has interesting theoretical results but maybe not of significance that justifies a higher score).
---
Reply to Comment 1.1.1:
Comment: Thank you for the thoughtful comments. This response is to acknowledge the key points and add some further clarifications.
- One point of our work is to bring weighted Banzhaf values into view in data valuation. The empirical effectiveness of this family can be observed in our experiments *where the underlying noises do not necessarily follow any assumption or theory* (see Tables 2 and 3).
In these experiments, we only tuned the hyperparameters (e.g., the learning rate) for each dataset, and the induced noises are strictly due to stochastic training (e.g., random initialization). Our empirical results demonstrate that *there is probably no universally most robust semivalue.*
- Our theorem 1 aims to give a *possible* theoretical explanation for the adaptive phenomenon (i.e., no single semi-value dominates) observed in Tables 2 and 3. To validate the differential entropy criterion, we showed in the first column of Figure 3 that the derived Eq. (6) leads to the most consistent semivalue in empirical rankings. Similar results are obtained in Figure 3 (the last two columns) where the noises are real and agnostic. Intuitively, a random vector tends toward being deterministic (which means the resulting random ranking is also deterministic) as its differential entropy decreases.
Overall, our work emphasizes two points: i) weighted Banzhaf values are *already empirically promising*; and ii) there is no universally most robust/effective semivalue for all experimental settings, and our theory is an attempt to explain *such an adaptive phenomenon* observed in our experiments. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the constructive feedback. Below we address the questions most reviewers are concerned with, which will be included in our revision.
**Q: Clarify the setting for theorem 1**
A: For each noisy utility function $v$, *its randomness is due to stochasticity during training.* Our assumption for this randomness is that $\epsilon=v-E[v]$ follows a Kronecker noise, i.e., $Cov(\epsilon)=\Sigma^{[n]}=\Sigma\otimes\cdots\otimes\Sigma$ (n repetitions) for some $\Sigma\in R^{2\times2}$ where $Cov$ stands for covariance matrix. Recall that $v$ is *ordered* as a vector in $R^{2^n}$ w.r.t. the binary ordering. This ordering for all subsets of $[n]$ aligns well with the Kronecker product, which is why our modeling for the noise is based on the Kronecker product. By definition 1, $\epsilon=X_1\otimes X_2\otimes\cdots\otimes X_n$ where $Cov(X_i)=\Sigma$ for each $i$ and all *continuous* random variables $X_i$ are independent. Then, $\epsilon(S)=\prod_{i\in S}X_i(2)\prod_{i\not\in S}X_i(1)$ for every $S$ (note $X_i\in R^2$). It means that $X_i(1)$ represents the randomness brought in by the absence of datum $i$, while $X_i(2)$ is the one induced by its presence. Note that $Cov(\epsilon)=Cov(v)$, and the optimization problem in theorem 1 starts from $$\underset{P\in\mathcal{P}}{\arg\min}\sup_{v\in\mathcal{G}:Cov(v)=\Sigma^{[n]}}h(\phi_P(v))$$ where $ \mathcal{P} $ contains all distributions on the interval $ [0,1] $, $ h $ is the differential entropy that measures the uncertainty of continuous random vectors, and the $ i$-th entry of $\phi_P(v)\in R^n$ is, obtained by substituting Eq (2) in Eq (1), $$\sum_{S\subseteq [n]\backslash i}(\int_0^1 t^s(1-t)^{n-1-s} dP(t))\cdot(v(S\cup i)-v(S)).$$ In a nutshell, we found a semivalue defined by $P$ that can best tolerate the largest uncertainty brought by a noise having covariance matrix $\Sigma^{[n]}$.
Since $\phi_P$ is linear, $Cov(\phi_P(v))$ is the same, denoted by $\Phi$, given any $v$ satisfying $Cov(v)=\Sigma^{[n]} $. Let $Y=\phi_P(v)$, it is known that $\sup_{Y:Cov(Y)=\Phi}h(Y)=\frac{n}{2}(1+\log(2\pi))+\frac{1}{2}\log(\det(\Phi))$, and the maximum is achieved if $\phi_P(v) $ is Gaussian. Since $\phi_P$ is linear, $\phi_P(v) $ is Gaussian if $v$ is Gaussian. Thus, we have the equivalent problem in theorem 1 (the outer product was our typo)$$\underset{P\in\mathcal{P}}{\arg\min}\det(Cov(\phi_P(v)))\text{ s.t. }Cov(v)=\Sigma^{[n]}.$$
**Q: Clarify between the concepts of Kronecker noise and safe margin.**
A: They are distinct modelings for the inherent noises. For every $p^{n-1}\in R^n$ defining a semivalue, and every $\tau>0$ (the choice of it is irrelevant to the resulting ranking for semivalues), the safe margin is$$\text{Safe}(\tau;p^{n-1})=\min_{i,j\in[n]:i\not=j}\min_{v\in\mathcal{G}:\Delta_{i,j}^{(k)}(v)\geq\tau\ \forall1\leq k\leq n-1}\min_{\hat{v}\in\mathcal{G}:D_{i,j}(v;p^{n-1})D_{i,j}(\hat{v};p^{n-1})\leq0}||v-\hat{v}||_F$$
$$\text{where }\Delta_{i,j}^{(k)}(v)=\binom{n-2}{k-1}^{-1}\sum_{|S|=k-1,S\subseteq[n]\backslash ij}[v(S\cup i)-v(S\cup j)]\text{ and }D_{i,j}(v;p^{n-1})=n(\phi_i(v;p^{n-1})-\phi_j(v;p^{n-1})),$$which means, as claimed, the safe margin is the largest noise that can be tolerated by the semivalue $\phi(\cdot;p^{n-1})$ without changing the ranking for data. It is said that $D_{i,j}(v;p^{n-1})D_{i,j}(\hat{v};p^{n-1})\leq0$ is equivalent to that $v$ and $\hat{v}$ produce different orders for data $i$ and $j$. A merit of the safe margin is that the problem is independent of any noise. In contrast, the Kronecker noise allows one to exploit the individual covariance matrix $\Sigma$, which makes it possible to derive different conclusions. In the synthetic experiments of Wang and Jia (2023), they added isotropic Gaussian noises to *deterministic* utility $v$ to support their theory (see the setting for Figure 7 therein), which also aligns with our theory. Instead, we also added non-isotropic Gaussian noises (the first column of Figure 3) to verify our theory (which reveals the Banzhaf value is not necessarily the most robust in this case). In a nutshell, more flexible modeling leads to finer results.
Pdf: /pdf/ab6195d05941374f6849c1e6523886e22a3418b6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PlanE: Representation Learning over Planar Graphs | Accept (poster) | Summary: The paper proposes PLANE: a graph neural network which is complete on planar graphs. The idea behind this is very natural and well explained in the paper: while graphs in general are difficult to separate, and as a result standard GNNs cannot separate them, planar graphs can be separated in polynomial time, and are very common in applications. Therefore it is useful to have a complete algorithm for these graphs.
Strengths: As noted above, the question addressed by the paper is very natural, and the provided method achieves good empirical results
Weaknesses: I felt the algorithm was
(a) very technically involved and difficult to understand
and
(b) did not seem very novel, or very much like an architecture for graphs: that is, most neural networks are composed of simple building blocks (convolutions). This is not the case here. Rather, it seems like the authors took a planar graph isomorphism algorithm and then somewhat superficially replaced hashing operations in the algorithm with continuous set-valued operations.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Main question: The part about why we want GNNs complete for planar graphs is explained very well. What is not explained so well, I believe, is: given that there exist `graph isomorphism algorithms' for planar graphs, what is it that needs to be done to turn them into architectures, and what, in the high level, is your contribution here? What were the challenges?
Technical question: For a given graph, is the partition into maximal biconnected (or triconnected) components unique? What is the source for the information presented in lines 75-84 in page 2? No reference is given
Experimental question: could you report how well both E-BasePlane and BasePlane did on all tasks?
Minor typos and comments for authors use (no need to discuss this in rebuttal):
The usage of \mathbb{C} as a notation for anything other than complex numbers is confusing. I would suggest redefining this to something else (e.g., \mathcal{C})
Page 4 line 171: what are `two node dipole graphs'? explain?
Page 7 line 278 `clustering coefficient'-what is this?
In Table 4 and Line 377 you use (E)-BasePlane and in other places E-BasePlane. If the adding of parenthesis is intentional you should explain what is meant by it.
In Table 4 bottom line there is a slanted font, different from the rest of the table. This also happens in the other tables occasionally. If this is intentional explain what it means, otherwise fix font to be consistent
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Overall yes. It does not seem the method can handle node features, this could be added to the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We respond to your comments below:
> Main question: The part about why we want GNNs complete for planar graphs is explained very well. What is not explained so well, I believe, is: given that there exist `graph isomorphism algorithms' for planar graphs, what is it that needs to be done to turn them into architectures, and what, in the high level, is your contribution here? What were the challenges?
The fundamental challenge is to turn a classical algorithm which is tailored for isomorphism testing on planar graphs into a general learning algorithm on planar graphs. In our context, the goal is to capture similarities between the graph structures along with the features. In fact, a learning algorithm needs to implicitly compute structural similarities, which is a harder problem than plain isomorphism testing. More concretely, the challenges we encountered can be summarized as follows:
(1) How to encode cut nodes, biconnected components, and triconnected components in order to adequately update these representations to best capture the structural similarities?
KHC does not keep track of node-level or component-level representations. The codes can be of varying lengths and do not capture a notion of similarity. In contrast to this, we need these representations to be explicitly learned, as they indicate node-wise, component-wise, or graph-wise similarities.
(2) How to integrate classical operations into meaningful computational layers without leading to optimization problems?
KHC employs standard operations such as overriding, or interleaving, which have no natural counterpart in a learning algorithm: this is due to the fact that codes do not need to reflect similarity, unlike the hidden representations. This constrains the design space of our algorithm: we had to replace the interleave operation (an operation on the block-cut tree nodes) with a multi-layer update, and show that this suffices for completeness: we have shown that logarithmically many layers suffice.
(3) What kind of inductive biases are helpful in a learning algorithm which are not relevant for code generation of KHC?
There are operations in KHC which do not have a counterpart in PlanE, but the converse is also true: we consider node neighbor representations in the update for better local context and also a standard readout for better global context. The lack of these worsens our empirical results, as can be seen from our ablation study (Table 1 in response.pdf), whereas these have no importance for the KHC algorithm.
All in all, we revisited the KHC planarity algorithm, modified it in several non-trivial ways to arrive at a provably complete, learnable and scalable model PlanE, and the main challenge was designing components to preserve the theoretical properties of KHC while also having the right inductive biases for graph ML.
> Technical question: For a given graph, is the partition into maximal biconnected (or triconnected) components unique? What is the source for the information presented in lines 75-84 in page 2? No reference is given
Yes indeed, an undirected graph (even if not planar) can be decomposed into its maximal biconnected components in linear time and this decomposition is unique. This is from classical literature, e.g., *“Hopcroft and Tarjan (1973) Algorithm 447: efficient algorithms for graph manipulation.”* The same is true for triconnected components and SPQR trees, see, e.g., *“Gutwenger, C., Mutzel, P. (2001). A Linear Time Implementation of SPQR-Trees”* for a classical reference. We refer to these references and related literature regarding the definitions in the mentioned paragraph.
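To illustrate this uniqueness concretely, here is a small sketch using networkx (the example graph is ours, purely illustrative): two triangles glued at a cut node plus a pendant edge always decompose into exactly the same three blocks, independently of traversal order.

```python
import networkx as nx

# Two triangles sharing node 2, plus a bridge (4, 5).
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2), (4, 5)])

# Maximal biconnected components (blocks) and cut (articulation) nodes,
# computed in linear time in the spirit of Hopcroft and Tarjan (1973).
blocks = {frozenset(c) for c in nx.biconnected_components(G)}
cut_nodes = set(nx.articulation_points(G))

# The block decomposition is unique: two triangle blocks and one bridge block.
assert blocks == {frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({4, 5})}
assert cut_nodes == {2, 4}
```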
> Experimental question: could you report how well both E-BasePlane and BasePlane did on all tasks?
As per your request, we also reran E-BasePlanE without edge features (which is equivalent to BasePlanE) on QM9. Since it doesn't use edge features, the results worsened, but interestingly, the results remain nevertheless very strong compared to the baselines. For reference, for all target properties on QM9, BasePlanE is on average only 5.6\% worse than E-BasePlanE. This is strong evidence of how well BasePlanE exploits structural information. We will report the results in the next version of our paper (as the table is too large to fit into our one-page rebuttal document).
---
Rebuttal Comment 1.1:
Comment: I'm happy with the authors' rebuttal and retain my score of 6.
---
Reply to Comment 1.1.1:
Comment: Thanks for going through the rebuttal, and for your continued positive view of our paper. We will integrate the comparison between E-BasePlanE and BasePlanE on QM9 into the appendix of the new version of the paper, because we agree this is insightful for the reader. Please let us know if there are any remaining concerns. | Summary: The paper deals with supervised learning on graphs. Conventional message-passing Graph Neural Networks are known to be restricted in their expressivity by the 1-WL test for isomorphism. Although more expressive models like higher-order GNNs have been proposed, they are inefficient and also not able to generate complete invariants for all graphs. The authors of this paper propose to specialise their study to a class of graphs for which complete invariants can be efficiently computed, i.e., planar graphs. They propose the “PLANE” framework based on the graph isomorphism algorithm for planar graphs. PLANE can learn complete invariants for planar graphs in an efficient way, and experiments are shown to validate the promising nature of the proposed method.
Strengths: 1. The paper is well-motivated in finding complete invariants for a class of graphs i.e. planar graphs while remaining computationally efficient.
1. The paper is well-written with most parts of the paper easy to follow. However, few more illustrations would help the reader.
1. Experimental results are mixed, partly promising.
Weaknesses: 1. **Novelty:** The major problem with this paper is the lack of novel ideas or insights. It is well-known that planar graphs are solvable for isomorphism and algorithms exist to do that. This paper leverages the KHC algorithm in a straightforward manner. Presumably, simply running the KHC algorithm on graphs and using the generated codes as features would give similar results. What additional usefulness there is in learning almost exactly the same procedure with parameters is neither discussed nor shown with empirical results.
1. There are no insights in the theoretical analysis as well. Improved expressivity and the logarithmic steps are directly adaptable from the planar isomorphism algorithm.
1. The experimental results are not sufficient to make a convincing case for the method.
1. It is not very insightful to compare with GCN/GIN, since they are known to be restricted to 1-WL power, for clustering coefficients or other experiments.
1. However, if you could include other more expressive models and show improvement compared to these, that would be helpful. For example, 3-WL GNN (Maron et al. (2019)) and PF-GNN (Dupty et al. (2021)) should produce complete invariants as well. However, they are not specifically designed for planar graphs; if the PLANE method can give substantial improvements over these models on planar graphs, then it would make sense to use algorithms specialized for planar graphs.
1. Other synthetic datasets like strongly regular planar graphs can be used to validate the method as well.
1. Results on QM9 could be better analysed if comparisons with other methods were provided or the units were the same as those used in DimeNet (Gasteiger et al. (2019))
**References:**
+ [1] Maron et al.(2019). "Provably powerful graph networks." Advances in neural information processing systems 32
+ [2] Dupty et al. (2021) "PF-GNN: Differentiable particle filtering based approximation of universal graph representations." International Conference on Learning Representations.
+ [3] Gasteiger et al. (2019). Directional Message Passing for Molecular Graphs. In International Conference on Learning Representations.
**Overall**,
I find it hard to see significant contributions from this paper although the motivation of finding specialized learning models for planar graphs is interesting.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please address the weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Not applicable since this is general method on graphs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your concerns below.
> Novelty: The major problem with this paper is the lack of novel ideas or insights….This paper leverages the KHC algorithm in a straightforward manner.”
Our work generalizes the classical KHC algorithm to a *learnable neural* model while maintaining completeness and theoretical guarantees. In effect, PlanE builds on KHC as MPNNs build on 1-WL. Indeed, MPNNs align with 1-WL while crucially learning features from data. PlanE achieves the same relative to KHC, with added complexity coming from mapping this more involved algorithm to learnable functions. This mapping is by no means straightforward and requires a series of subtle decisions: please see our response on our adaptation of KHC.
> Presumably, simply running the KHC algorithm on graphs and using the generated codes as features would give similar results. What is the additional usefulness of learning almost exact same procedure with parameters is neither discussed nor shown with some empirical results.
The learnability of PlanE is *critical* vs KHC, just as MPNN learnability is critical vs 1-WL. MPNNs vastly outperform 1-WL on almost all real-world datasets as they can learn dataset-specific features while still having the 1-WL inductive bias. We are *not* learning the KHC procedure with parameters, but rather learn *component representations* and a *specific feature mapping*, while also aligning with KHC.
To make this more concrete, we conduct a dedicated ablation of BasePlanE on ZINC (cf. Table 1 of *response.pdf* and global response for details). There, we drop standard MPNN neighbor aggregation (which is not part of KHC), and show that BasePlanE degrades very significantly, despite being more faithful to KHC. We kindly ask the reviewer to take these into account.
> There are no insights in the theoretical analysis as well. Improved expressivity and the logarithmic steps are directly adaptable from the planar isomorphism algorithm.
Though PlanE builds on KHC, it does *not* emulate all its components one-to-one. For instance, we do not use the KHC interleaving component, as this does not map to a learnable function due to its intricate overrides (which overwrite component representations and *eliminate gradients*). Moreover, PlanE uses component and node representations, which are not in KHC, but are essential for learning due to local inductive bias.
Given these fundamental differences, we had to provide *novel proofs* for all our results. As a case in point, KHC only requires *one* step due to interleaves, whereas we show a logarithmic step bound for PlanE based on our learnable components. We are happy to elaborate more on this.
> The experimental results are not sufficient to make a convincing case for the method.
To our knowledge, PlanE is the only method that is practically scalable and provably complete on planar graphs. Moreover, our results, obtained without any specific optimizations, are already competitive with specialized models, e.g., CIN on ZINC, and other methods, e.g., SPN on QM9.
> It is not very insightful to compare with GCN/GIN since they are known to be restricted with 1-WL power for clustering co-efficients or other experiments.
We include GCN/GIN as baselines, analogously to their use in standard 1-WL expressiveness tests. For stronger comparisons, we include an expressive subgraph baseline, ESAN, which we outperform by roughly 40%.
> However, if you could include other more expressive models and show improvement compared to these, that would be helpful. For example, 3-WL GNN(Maron et al. (2019)), PF-FGNN (Dupty et al. (2021)) should produce complete invariants as well. However, they are not specifically designed for planar graphs and PLANE method can give substantial improvements over these models on planar graphs, then it would make sense to use algorithms specialized for planar graphs.
Unfortunately, this statement seems the result of a common confusion between the dimension counts of FWL and WL: Folklore $k$-WL (or, $k$-FWL in ML literature) is equivalent to oblivious $(k+1)$-WL. Kiefer et al. (2019) show that 3-FWL (e.g., oblivious 4-WL) is complete on planar graphs, and this dimension remains the best known to date. Hence, higher-order models with an established *3-FWL* result are complete on planar graphs. However, both 3-WL-GNNs and PPGNs only have *2-FWL* expressive power. Moreover, PF-GNN provides no such expressiveness result. None of these models is provably complete on planar graphs. In fact, we are not aware of an implementation of a 3-FWL model: we are happy to incorporate such models provided there is one.
> Other synthetic datasets like strongly regular planar graphs can be used to validate the method as well.
Please note that we sought to use strongly regular graphs as in CWN, but there are only 7 strongly regular planar graphs, which are in fact $\text{K}_1$, $\text{K}_2$, $\text{K}_3$, $\text{K}_4$, $\text{C}_4$, $\text{C}_5$, and $\overline{3\text{K}_2}$. However, these graphs are easily distinguishable by standard GNNs, as all but $\text{C}_4$ and $\text{K}_4$ have a different number of nodes, and the latter pair have a different number of edges, rendering the whole task trivial.
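The triviality claim above can be verified directly; a small sketch with networkx (graph constructions are standard library calls), showing that the node/edge counts alone already separate all seven graphs:

```python
import networkx as nx

# The seven strongly regular planar graphs listed above.
graphs = {
    "K1": nx.complete_graph(1),
    "K2": nx.complete_graph(2),
    "K3": nx.complete_graph(3),
    "K4": nx.complete_graph(4),
    "C4": nx.cycle_graph(4),
    "C5": nx.cycle_graph(5),
    "3K2_complement": nx.octahedral_graph(),  # complement of 3 disjoint edges
}

# (|V|, |E|) signatures: trivially computable by any standard GNN readout.
signatures = {name: (G.number_of_nodes(), G.number_of_edges())
              for name, G in graphs.items()}

# Every pair of these graphs is already distinguished by the counts alone.
assert len(set(signatures.values())) == len(signatures)
```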
Instead, we propose a new synthetic dataset based on 3-regular planar graphs, and experiment with BasePlanE, GIN and PPGNs (see global response and Table 2 of *response.pdf*). There, GIN matches a random guess, while BasePlanE and PPGNs perfectly solve the task. PPGNs can solve this task because these graphs are 2-FWL-distinguishable.
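For context, one way such a collection of 3-regular planar graphs can be assembled is by rejection sampling; a purely illustrative sketch (the node count and sample budget below are hypothetical, not the parameters used in our experiments):

```python
import networkx as nx

# Sample random 3-regular graphs and keep the planar ones.
n = 8            # node count (must be even for a 3-regular graph to exist)
planar_graphs = []
for seed in range(200):
    G = nx.random_regular_graph(3, n, seed=seed)
    is_planar, _embedding = nx.check_planarity(G)
    if is_planar:
        planar_graphs.append(G)

# Every kept graph is 3-regular and planar by construction.
assert all(d == 3 for G in planar_graphs for _, d in G.degree())
assert all(nx.check_planarity(G)[0] for G in planar_graphs)
```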
> Results on QM9 could be better analysed if comparison with other methods are provided or the units are same as used in Dimenet (Gasteiger et. al. (2019))
We follow the QM9 setup of Alon et al. In this version, the regression quantities are scaled for more uniformity, and the baselines used are trained using different grids/splits than Dimenet. Hence, these two setups are not comparable.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank you for your response.
1. Novelty remains the main weakness of the paper. The justification that MPNN is analogous to 1-WL as the proposed algorithm is to KHC is not reasonable. When early MPNN models like GCN/GIN were proposed, they provided insights which were unknown at the time. Saying that neural extensions are needed for similar algorithms is not convincing by itself. I'm not able to find new insights so far.
From the KHC algorithm, it is clear that bi-connected and tri-connected components are the key to distinguishing planar graphs which was also shown in new experiments shared by authors. However, this is already known albeit independently of planar isomorphism [1].
I do not see additional novel insights we can get from the proposed approach.
1. Empirically, the model performs comparably to other SOTA methods, however is not better than them. There is benefit in the runtime being linear. However, as shown in [1], similar results can be achieved just by using biconnected components. Therefore, there is not much contribution in this regard.
Overall, I appreciate the authors for providing additional experiments including time complexity. My concern on novelty and contributions remains. However, based on linear complexity with learning guarantees on planar graphs, I'm raising my score by one point.
**References**
+ [1] Zhang, Bohang, et al. "Rethinking the Expressive Power of GNNs via Graph Biconnectivity." The Eleventh International Conference on Learning Representations. 2022.
---
Reply to Comment 1.1.1:
Comment: > Overall, I appreciate the authors for providing additional experiments including time complexity. My concern on novelty and contributions remains. However, based on linear complexity with learning guarantees on planar graphs, I'm raising my score by one point.
Thank you for the detailed response and for raising your score. We truly appreciate the open-minded engagement from all reviewers. We answer your points in detail below, and kindly ask that you reassess the novelty and contribution of our paper with this context. Please note that space constraints prevented us from highlighting all challenges faced in this work. Our comments further address this issue, and we will include them in the final version.
> Novelty remains the main weakness of the paper. The justification that MPNN is analogous to 1-WL like the proposed algorithm with KHC is not reasonable. When early MPNN models were proposed like GCN/GIN, it provided insights which were unknown during the time. Saying that neural extensions are needed for similar algorithms is not convincing by itself. I’m not able find new insights so far.
We agree with the part that one should not align with any algorithm for the sake of it. Indeed, this is not our motivation: we take *inspiration* from KHC, to design the first *scalable, learnable, and complete* algorithm on planar graphs, with the *right inductive biases*. This is our fundamental contribution, which we substantiated theoretically and empirically — and further strengthened thanks to each reviewer’s input. We think this objective is ambitious and, to the best of our understanding, you agree with the *significance* of this objective.
If our understanding is correct, your main objection lies in whether our approach is novel enough. The novelty discussion is always somewhat subjective, so we suggest a slight change in perspective and argue for the following:
1. PlanE closes an important gap in graph ML literature by achieving the earlier stated objective.
2. PlanE sets a standard for future graph ML research on planar graphs based on formal desiderata, and achieves these using modest computational resources.
Based on (1)-(2), we strongly think that the graph ML community will benefit from PlanE and build on it to establish similar guarantees for their algorithm. Our work unlocks many new possible avenues for future work which could serve as a witness to its novelty in the long term.
> From the KHC algorithm, it is clear that bi-connected and tri-connected components are the key to distinguishing planar graphs which was also shown in new experiments shared by authors. However, this is already known albeit independently of planar isomorphism [1]. I do not see additional novel insights we can get from the proposed approach.
We are familiar with the work of Zhang et al., but we have difficulty understanding your precise argument. As you state, Zhang et al. do not study planar graphs. They also do not claim completeness over any graph class. Our study is therefore largely disjoint from Zhang et al. In fact, the only connection we can see is their study of biconnected components, i.e., they show that existing MPNNs (and most subgraph GNNs) cannot detect biconnectivity, with few exceptions, e.g., ESAN. This is important, because we include ESAN - a strong subgraph GNN baseline - in our analysis to better locate our contribution. It is clear that ESAN cannot achieve our desiderata (not scalable, no completeness result).
Please note that our algorithm achieves much more than detecting components. In PlanE, components are carefully used in a specific way to ensure completeness over planar graphs, and this is highly non-trivial. It is very plausible for models to detect biconnectivity but still *remain incomplete* on planar graphs. We do not aim to detect substructures, or advertise a particular decomposition, but rather use these as a *means to an end*. Other means of achieving this goal are left for future research, and we hope this work inspires such endeavors.
> Empirically, the model performs comparably to other SOTA methods, however is not better than them. There is benefit in the runtime being linear. However, as shown in [1], similar results can be achieved just by using biconnected components. Therefore, there is not much contribution in this regard.
Our ablation study (Table 1 of *response.pdf*), empirically suggests the strength of a complete BasePlanE against its incomplete counterparts. The strong performance of BasePlanE against ESAN can be explained in the same way. Therefore, both theoretical and empirical evidence suggest the opposite: our results cannot be achieved by only using biconnected components. Theoretically, this is trivially true (i.e., considering a graph consisting of one tri-connected component), but interestingly, it is also prominent on real-world data. We therefore kindly ask you to take this into account.
We hope this comment helps better convey the goals of our paper. | Summary: This work focuses on enhancing the representation power of GNNs in terms of distinguishing non-isomorphic graphs. Inspired by the classical planar graph isomorphism algorithm, the paper designs architectures within the proposed PLANE framework for learning complete invariants of planar graphs. The proposed framework achieves scalability while producing strong performance on planar graph benchmarks.
Strengths: 1. In contrast to previous approaches that cannot strike a balance between algorithmic efficiency and representation power, the proposed framework, PLANE, can efficiently learn isomorphism-complete invariant functions for planar graphs.
2. The architectural designs within the PLANE framework draw inspiration from classic planar graph isomorphism algorithms and can provably distinguish between any pair of non-isomorphic planar graphs.
3. The efficacy of the proposed framework is validated through extensive experiments conducted on both synthetic datasets and real-world benchmarks, further highlighting its effectiveness in practical scenarios.
Weaknesses: 1. While the synthetic datasets used in the study provide some insights into the theoretical power of the proposed framework, they may not fully capture its potential. Notably, the clustering coefficient and EXP datasets can be easily handled by a class of models known as subgraph GNNs ([1, 2], and [7] in the main paper), which can be efficient enough [3]. To showcase the representation power of the proposed framework, it is recommended to include additional datasets, such as the strongly regular graphs used in CWN ([9] in the main paper) or generate planar graph datasets.
2. Furthermore, it is important to report the runtime of the proposed framework, including the preparation time (e.g., time to generate the BlockCUT or SPQR trees) and the overall runtime encompassing training and inference. Comparisons should be made with existing models, including the classic MPNN and baseline models like ESAN, to demonstrate the efficiency of the proposed framework.
[1] You J, Gomes-Selman J M, Ying R, et al. Identity-aware graph neural networks. AAAI 2021.
[2] Zhang M, Li P. Nested graph neural networks. NeurIPS 2021.
[3] Zhao L, Jin W, Akoglu L, et al. From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness. ICLR 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We respond to your points below:
> While the synthetic datasets used in the study provide some insights into the theoretical power of the proposed framework, they may not fully capture its potential. Notably, the clustering coefficient and EXP datasets can be easily handled by a class of models known as subgraph GNNs ([1, 2], and [7] in the main paper), which can be efficient enough [3]. To showcase the representation power of the proposed framework, it is recommended to include additional datasets, such as the strongly regular graphs used in CWN ([9] in the main paper) or generate planar graph datasets.
Thanks for this suggestion! We agree that running our model on larger (synthetic) graphs would further demonstrate the benefits of our specialized approach. Please note that we sought to use strongly regular graphs as in CWN, but there are only 7 strongly regular planar graphs, which are in fact $\text{K}_1$, $\text{K}_2$, $\text{K}_3$, $\text{K}_4$, $\text{C}_4$, $\text{C}_5$, and $\overline{3\text{K}_2}$. However, these graphs are easily distinguishable by standard GNNs, as all but $\text{C}_4$ and $\text{K}_4$ have a different number of nodes, and the latter pair have a different number of edges, rendering the whole task trivial.
As a result, we designed a new synthetic dataset based on 3-regular planar graphs, and experimented with BasePlanE, GIN and PPGNs. In this experiment, we observed that GIN struggles to go beyond a random guess, whereas BasePlanE and PPGNs perfectly solve the problem, achieving 100\% accuracy. We provide the full experimental setup and details in the global response, and our results can be found in Table 2 of *response.pdf*. We hope that the inclusion of this new experiment addresses your concern.
> Furthermore, it is important to report the runtime of the proposed framework, including the preparation time (e.g., time to generate the BlockCUT or SPQR trees) and the overall runtime encompassing training and inference. Comparisons should be made with existing models, including the classic MPNN and baseline models like ESAN, to demonstrate the efficiency of the proposed framework.
Thank you! We agree it is important to illustrate this, and following your feedback, we conduct a runtime experiment on real-world planar graphs. We compare the wall-clock time of BasePlanE and 2-FWL-expressive PPGN. We observed that both the pre-processing and inference of BasePlanE run efficiently and scale very well, whereas PPGN quickly fails to run for larger graphs. This highlights that PlanE is a highly efficient option for inference on large-scale graphs compared to higher-order alternatives. We report the full runtime results in Table 3 of *response.pdf*, and describe our findings in detail in the global response.
Please note that the paper includes a dedicated complexity analysis in Appendix A, which states the asymptotic efficiency of BasePlanE. The runtime of BasePlanE is $O(|V| d^2)$ with a (one-off) preprocessing time $O(|V|^2)$. In practical terms, this makes BasePlanE linear in the number of graph nodes after preprocessing. This is very scalable as opposed to 3-FWL (or 4-WL), which requires about $O(|V|^4 \log |V|)$ steps to reach a stable coloring. We added this explanation to the main paper based on your feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. In Table 3 in the submitted "response.pdf", what is the running time of MPNNs (e.g., GIN) and subgraph GNNs (ESAN)?
---
Reply to Comment 1.1.1:
Comment: We ran BasePlanE against 2FWL-expressive PPGN to show that BasePlanE is much more scalable than its higher-order alternatives. We expect GIN to be fastest given its simplicity and $O(|E|)$ runtime. As for ESAN, the runtime depends on many factors, but it is likely to be slower than BasePlanE for training, since according to the original paper, it scales with $O(\#\text{subgraphs} \cdot |V| \cdot \text{max node degree})$, where the number of subgraphs scales with $|V|$. To put things in better perspective, we will also run GIN and ESAN and include their runtime in Table 3. Please note that we prioritised PPGN during the rebuttal period due to time constraints, because this comparison appeared to be the most essential one (from the expressiveness perspective). That being said, we would like to reiterate that BasePlanE is the only architecture among these which is provably complete on planar graphs. | Summary: This paper improves graph-isomorphism-inspired GNN design for a particular type of graph, planar graphs.
It utilizes the KHC algorithm to generate a symbolic code for each graph. Following this sequence, they apply a GNN recursively to obtain the whole-graph representation, which has the desirable property of being invariant.
The results show better performance than some prior baselines such as GIN.
Strengths: 1. The paper leverages an existing algorithm for isomorphism testing of planar graphs, which is both effective and efficient;
2. The authors prove that there exists a parametrization of a GNN following their message-passing order that can distinguish any two planar graphs.
3. Empirical results on chemical graphs show the advantage of their method on real-world graphs.
Weaknesses: My major concern for this paper is that the presentation makes it really hard to understand the algorithm.
In Sec. 4 you define CODE, but I didn't see where you utilize these codes in the algorithm. It seems that all you utilize is the sequence of walks and the results of SPQR. It's very hard for me to understand what each notation is referring to (is it a node, a subgraph, or a sequence?). Also, I don't see the definition of X.
I highly suggest the authors improve Sec. 4 & 5 to make it easier to understand what you did in this algorithm. It would be better to add pseudocode for illustration.
Also, as you claim to achieve efficient calculation, it would be better to show the time & memory usage to train & infer on some graph datasets, especially on some larger graphs.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: The authors mention the message passing and the KHC algorithm could be run in parallel, but I guess it means when you do GPU calculation you could do KHC on CPU for the next batch? Could the authors elaborate this part?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors include a limitation statement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We respond to your main points below:
> My major concern for this paper is that the presentation makes it really hard to understand the algorithm.
Thanks for raising this point. We have now included a figure visualising the full decomposition and encoding process. This is Figure 1 and can be found in *response.pdf*. We have also added discussions on complexity analysis as well as experimental setup in the main paper. Please refer to the global response for more information.
> In Sec. 4 you define CODE, but I didn't see where you utilize these codes in the algorithm. It seems that all you utilize is the sequence of walks and the results of SPQR.
Yes, we do not directly use CODEs as part of our model, but instead build on CODE to prove completeness results. In our proofs, we show how our model can yield embeddings which are in bijection with the corresponding codes of the formal Weinberg algorithm. This is similar to the GNN expressiveness results: every (maximally expressive) GNN layer aligns with a step of 1-WL algorithm if the computed node embeddings refine the 1-WL hash and vice versa. Therefore, CODEs can be seen as a meaningful abstraction through which alignment with the classical planarity algorithm can be explained.
> It's very hard for me to understand what each notation is referring to (is it a node or a subgraph or a sequence).
We will clarify all these definitions in the paper. Please let us know if there are any particular aspects of our notation you would like us to address.
> Also I don't see the definition of X.
We define this on page 3: $\chi(u)$ denotes the set of the children of node $u$.
> I highly suggest the authors improve Sec. 4 & 5 to make it easier to understand what you did in this algorithm. It would be better to add pseudocode for illustration.
Thanks for the suggestion! We have added an illustrative figure visualizing the steps and intermediate outputs of our model, which we hope will improve these sections. This figure can be found as Figure 1 in our *response.pdf*. We will also provide more information on runtime, component designs, and experimental setup in the main paper. We considered adding pseudo-code as per your suggestion, but found this to be harder to follow than the overall figure.
> Also, as you claim to achieve efficient calculation, it would be better to show the time & memory usage to train & infer on some graph datasets, especially on some larger graphs.
Thank you! To validate this empirically, we conduct a runtime experiment on real-world planar graphs. We compare the wall-clock time of BasePlanE and 2-FWL-expressive PPGN. We observed that both the pre-processing and inference of BasePlanE run efficiently and scale very well, whereas PPGN quickly fails to run for larger graphs. This highlights that PlanE is a highly efficient option for inference on large-scale graphs compared to higher-order alternatives. We report the full runtime results in Table 3 of *response.pdf*, and describe our findings in detail in the global response.
Please also note that the paper includes a dedicated complexity analysis in Appendix A, which stated the asymptotic efficiency of BasePlanE. The runtime of BasePlanE is $O(|V| d^2)$ with a (one-off) pre-processing time $O(|V|^2)$. In practical terms, this makes BasePlanE linear in the number of graph nodes after pre-processing. This is very scalable as opposed to 3-FWL (or 4-WL) which requires about $O(|V|^4 \log |V|)$ steps to reach a stable coloring. We added this explanation to the main paper.
> The authors mention the message passing and the KHC algorithm could be run in parallel, but I guess it means when you do GPU calculation you could do KHC on CPU for the next batch? Could the authors elaborate this part?
Thank you for the question. We assume you refer to the statement on lines 203-204 (page 5). We meant the following with this statement: there is a parallel in the *ideas* between the component code generations of KHC, and the encoders of PlanE.
Coincidentally, however, parallel computability is a very good point: it is indeed possible to parallelize the batches in the way you point out. Furthermore, it could be possible to parallelize the KHC pre-processing for a single graph instance, by evaluating the lexicographically smallest TriCode in parallel (which is the quadratic bottleneck in the KHC algorithm). We will integrate this into the discussion of the runtime.
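A minimal sketch of the batch-level pipelining described here (the `preprocess` and `train_step` callables are hypothetical placeholders, not our actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined_run(batches, preprocess, train_step):
    """While batch k is consumed by `train_step` (e.g. on GPU),
    `preprocess` (e.g. KHC-style decomposition) for batch k+1
    already runs on a CPU worker thread."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(preprocess, batches[0])
        for nxt in batches[1:]:
            ready = pending.result()                # wait for pre-processing
            pending = pool.submit(preprocess, nxt)  # start next batch early
            train_step(ready)                       # overlaps with the submit
        train_step(pending.result())
```

With a single worker, the pre-processing of at most one batch runs ahead of training, which bounds memory while hiding the CPU cost behind GPU compute.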
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks to the authors for responding to my questions.
Figure 1 looks nice, and Table 3 as well as the complexity analysis are very helpful. Please try to add them to the paper.
I'll raise my score to weak accept. I do think this paper has great value if the presentation could be improved such that readers could more easily find which part is proof and which part is algorithm (and how they could be implemented). Looking forward to seeing the updated (improved) version.
---
Reply to Comment 1.1.1:
Comment: Thank you for going through the response: we are glad you find great value in the paper and we thank you for raising your score. We will carefully improve the presentation of the paper and integrate all the feedback and the findings summarised in the rebuttal into the updated version. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments. We have responded to each concern in detail in our individual responses. In this global response, we include a response file, *response.pdf*, containing the results of additional experiments for your reference.
The changes made during the rebuttal can be summarized as follows:
1. **Overall figure (Reviewers fYDw, fXaJ):**
We provide a figure visualizing the PlanE pipeline in our response file, which details all the steps involved in our model’s computation. This figure shows the pre-processing steps, namely computing the block-cut and SPQR trees, and also illustrates the mapping from these to embeddings at all the different levels of our model, leading up to the final node-level representation update. We hope this figure provides a much clearer picture of our contribution.
2. **Runtime analysis (Reviewers fYDw, hVMw, o9ac):**
* Theoretically, the runtime of BasePlanE is $O(|V| d^2)$ with a (one-off) pre-processing time $O(|V|^2)$. In practical terms, this makes BasePlanE linear in the number of graph nodes after preprocessing. This is very scalable as opposed to 3-FWL (or 4-WL) which requires about $O(|V|^4 \log |V|)$.
* To validate this empirically, we conduct a runtime experiment on real-world planar graphs based on the geographic faces of Alaska from the dataset TIGER, provided by the U.S. Census Bureau. The original dataset is TIGER-Alaska-93K and has 93366 nodes. We also extract the smaller datasets TIGER-Alaska-2K and TIGER-Alaska-10K, which are subsets of the original dataset with 2000 and 10000 nodes, respectively. We compare the wallclock time of BasePlanE and 2-FWL-expressive PPGN. We observed that both the pre-processing and inference of BasePlanE run efficiently and scale very well, whereas PPGN quickly fails to run for larger graphs. This highlights that PlanE is a highly efficient option for inference on large-scale graphs compared to higher-order alternatives. We report the full runtime results in Table 3 of *response.pdf*.
3. **New expressiveness experiment (Reviewers fYDw, hVMw, o9ac):**
We propose a new synthetic dataset based on 3-regular planar graphs, and experiment with BasePlanE, GIN, and 2-FWL-expressive PPGN. For this experiment, we generated all 3-regular planar graphs of size 10, leading to exactly 9 non-isomorphic graphs. For each such graph, we generated 50 isomorphic graphs by permuting their nodes. The task is then to predict the correct class of an input graph, where the random accuracy is $\sim$11\%. We report accuracy results in Table 2 of our response file. As expected, GIN struggles to go beyond a random guess, whereas BasePlanE and PPGNs easily solve the task, achieving 100\% accuracy. Note that PPGNs can solve this task because these graphs are 2-FWL-distinguishable.
4. **Ablation experiments (Reviewers fYDw, o9ac):**
We additionally conduct extensive ablation studies on each component in our model update equation using the ZINC 12k dataset. We report the final results in Table 1 of *response.pdf* and summarize them here:
* *BasePlanE (no readout):* We removed the global readout term from the update formula and MAE worsened by 0.003.
* *BasePlanE (no CutEnc):* We removed the cut node term from the update formula and MAE worsened by 0.003.
* *BasePlanE (no neighbors):* We additionally removed the neighbor aggregation from the update formula and MAE worsened by 0.023.
* *BasePlanE (only BiEnc):* We only used the biconnected components for the update formula and MAE worsened by 0.021.
* *BasePlanE (only TriEnc):* We only used the triconnected components for the update formula and MAE worsened by 0.016.
Hence, BasePlanE significantly benefits from each of its components: the combination of standard message passing with component decompositions is essential to obtaining our reported results.
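The isomorphic-copy construction used in the expressiveness experiment (item 3) can be sketched in a few lines. This is an illustration only: the pentagonal prism below is one 3-regular planar graph on 10 nodes, and the actual generator used for the experiment may differ.

```python
import random

# Pentagonal prism (circular ladder CL_5): 3-regular, planar, 10 nodes, 15 edges.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 1) % 5) for i in range(5)] \
      + [(i, 5 + i) for i in range(5)]

def permuted_copy(edges, rng):
    """Return an isomorphic copy obtained by relabeling nodes at random."""
    perm = list(range(10))
    rng.shuffle(perm)
    return sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges)

rng = random.Random(0)
copies = [permuted_copy(edges, rng) for _ in range(50)]
```

Each copy is isomorphic to the original by construction, so degree sequence and edge count are preserved while node identities are scrambled.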
We hope that our answers address your concerns along with the new experiments. We are looking forward to a fruitful discussion period.
Pdf: /pdf/12b1797430666089b0f19ea243ce44176b54a70a.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: Message-passing neural networks are well known for their limited expressivity in identifying graphs, while higher-order networks like IGNs and SetGNNs are not scalable. Instead of trying to find a complete invariant for arbitrary graphs, this paper focuses on finding a complete invariant for planar graphs. The authors revisit the literature on complete graph isomorphism tests for planar graphs, which perform hierarchical graph encoding based on the block-cut tree and the SPQR tree (decomposing a graph into k-connected components iteratively for k = 1, 2, 3). Furthermore, the authors propose a neural network variant that aligns with the planar graph isomorphism test procedure, and prove that it is theoretically a complete invariant for planar graphs. The authors show the advantage of these methods on synthetic datasets as well as real-world datasets.
Strengths: 1. Given that planar graphs appear widely in the real world (road networks, circuits, molecules, and so on), designing a graph neural network that is efficient while still being expressive enough is an important problem (powerful than 3-FWL but more efficient than ). The authors make a first step by revisiting the planar graph isomorphism test. The contribution and its foundation are solid.
2. As the planar graph isomorphism test makes use of hierarchical decomposition in a top-down manner and then encodes each component in a bottom-up manner, the authors' proposed neural network is a kind of bottom-up hierarchical graph encoding. The direction of hierarchical graph representation learning is promising and deserves more attention, despite being a hard direction.
3. The design of the neural network architecture clearly follows the procedure and is well supported by planar graph isomorphism test theory.
Weaknesses: 1. The presentation is not clear enough; the authors should provide a good figure illustrating the hierarchical steps clearly, to help the reader understand the main idea in a minute.
2. Some notations are unclear, such as $\tilde{h}_u^{(l)}$ on line 232.
3. Very importantly, the paper lacks a comprehensive complexity analysis, given that the goal of this paper is designing a powerful but also efficient method compared with 3-FWL. Also, the authors should discuss the impact of using the graph decomposition procedure from the planar graph isomorphism test (the top-down step). First, the top-down step is a form of preprocessing needed by the designed neural network, which may introduce additional runtime and computational complexity. Second, as the top-down step is highly aligned with planar graphs, it doesn't support other types of graphs.
4. Although the design of the neural network tries to follow the planar graph isomorphism test closely, the authors should discuss and run ablation studies over these design choices.
5. The experimental section is too vague and unclear. For all experiments, the description of the experimental setup is incomplete. Each hyperparameter is set to a single value, with no discussion of hyperparameter tuning for the different methods. It is unfair for baselines to use the same hyperparameters as the proposed method. The authors should tune hyperparameters properly for all methods.
6. Baselines are not SOTA. Only ESAN is used, while other recent baselines like SetGNN (Lingxiao Zhao, et al.) and SSWL (Bohang Zhang, et al., 2023) should be considered. Also, I strongly recommend the authors tune hyperparameters in a systematic way.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I listed all suggestions and questions in the weaknesses above.
While this paper is interesting, I hope some of the important problems can be addressed during the rebuttal period.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation is that the designed architecture relies on a fixed top-down graph decomposition method for planar graphs. The designed architecture should at least be able to run on all graphs, even if it is then an incomplete graph invariant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback, and respond to their main points below:
> The presentation is not clear enough; the authors should provide a good figure illustrating the hierarchical steps clearly, to help the reader understand the main idea in a minute.
Following your suggestion, we added a figure visualising the PlanE pipeline, which can be found as Figure 1 in our response file. This figure shows the graph decomposition and the corresponding encoding steps. Please let us know if you have any further suggestions, and we will do our best to accommodate these.
> Some notations are unclear, such as $\tilde{h}_u^{(l)}$ on line 232.
This is indeed a typo. We corrected this to $\mathbf{h}_{\gamma_u}^{(\ell)}$.
> Very importantly, the paper lacks a comprehensive complexity analysis, given that the goal of this paper is designing a powerful but also efficient method compared with 3-FWL.
Please note that the paper includes a dedicated complexity analysis in Appendix A, which grounds the asymptotic efficiency of BasePlanE. The runtime of BasePlanE is $O(|V| d^2)$ with a (one-off) pre-processing time $O(|V|^2)$. In practical terms, this makes BasePlanE linear in the number of graph nodes after preprocessing. This is very scalable as opposed to 3-FWL (or 4-WL) which requires about $O(|V|^4 \log |V|)$ steps to reach a stable coloring. Following your feedback, we added this explanation to the main paper.
To validate this empirically, we conduct a runtime experiment on real-world planar graphs. We compare the wallclock time of BasePlanE and 2-FWL-expressive PPGN. We observed that both the pre-processing and inference of BasePlanE run efficiently and scale very well, whereas PPGN quickly fails to run for larger graphs. This highlights that PlanE is a highly efficient option for inference on large-scale graphs compared to higher-order alternatives. We report the full runtime results in Table 3 of *response.pdf*, and describe our findings in detail in the global response.
> Also, the authors should discuss the impact of using the graph decomposition procedure from the planar graph isomorphism test (the top-down step). First, the top-down step is a form of preprocessing needed by the designed neural network, which may introduce additional runtime and computational complexity.
We discuss the runtime complexity of the decomposition step in Appendix A. In essence, we emulate the classical Weinberg algorithm in our pre-processing to obtain all necessary components, and this incurs a *one-time, worst-case* pre-processing cost of $O(|V|^2)$. This worst-case applies only when the *entire graph is a single tri-connected component*, which is rare in practice.
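For intuition, the first level of this decomposition (the block-cut tree) hinges on finding cut vertices, which a standard linear-time DFS recovers. The snippet below is a self-contained sketch of that classical step only; the actual pre-processing additionally builds SPQR (triconnected) trees, which it does not cover.

```python
def articulation_points(adj):
    """Tarjan-style linear-time cut-vertex search on an undirected
    graph given as {node: set_of_neighbours}."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                # u separates v's subtree
        if parent is None and children > 1:
            cuts.add(u)                        # root with >1 DFS children

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return cuts

# Two triangles glued at node 2: node 2 is the only cut vertex.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(articulation_points(adj))  # {2}
```

The biconnected blocks between these cut vertices then form the nodes of the block-cut tree, the coarsest layer of the hierarchy.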
> Second, as the top-down step is highly aligned with planar graphs, it doesn't support other types of graphs.
Yes: BasePlanE is explicitly aligned with KHC, and does not immediately apply beyond planar graphs, as stated in our limitations section. However, our main contribution is to develop a specialized, complete, and learnable network to efficiently learn functions over planar graphs. While our work *can* be applied to other graphs with simple modifications, this will come at the expense of either (i) completeness or (ii) efficiency. If we forgo the completeness criterion (as is the case for existing GNNs), then PlanE can easily be extended to general graphs by choosing appropriate encoders for TriEnc and BiEnc. The resulting model will not be complete but still very expressive, since most existing models cannot even detect biconnectivity; see, e.g., Zhang et al. (2023).
The study of graph components has been very influential in graph theory, but remains much less explored in the context of graph ML. Hence, we think our work paves the way for designing powerful architectures, which additionally exploit different graph components.
> Although the design of the neural network tries to follow the planar graph isomorphism test closely, the authors should discuss and run ablation studies over these design choices.
Thanks for this! We experimented with several simplifications of BasePlanE on ZINC. In summary, we observed that BasePlanE substantially benefits from each of the components in its message-passing update. Indeed, dropping any individual component leads to a loss of performance. Moreover, the combination of standard message passing with KHC component decompositions proved essential to our results. Hence, each component is ultimately beneficial within the overall PlanE framework. We provide a comprehensive discussion of this experiment in our global response, and the full results can be found in Table 1 of *response.pdf*.
> The experimental section is too vague and unclear. For all experiments, the description of the experimental setup is incomplete. Each hyperparameter is set to a single value, with no discussion of hyperparameter tuning for the different methods. It is unfair for baselines to use the same hyperparameters as the proposed method. The authors should tune hyperparameters properly for all methods.
All details of our experimental setup are reported in the appendix. Based on your feedback, we included these details into the main body. We use exactly the same hyperparameter search protocols from the literature for each dataset for fairness with all models. This is actually constraining our own model’s tuning, as opposed to the other way around.
> Baselines are not SOTA. Only ESAN is used, while other recent baselines like SetGNN (Lingxiao Zhao, et al.) and SSWL (Bohang Zhang, et al. 2023) should be considered.
Thanks for bringing these to our attention! We are happy to include these in the next version of our work.
> Also, I strongly recommend the authors tune hyperparameters in a systematic way.
We understand your concern, but we were systematic with the experimental setup, as detailed in our response.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the additional work. I like the figure for visualization. Although it still has some limitations, I believe the method provides good inspiration for the community towards bringing more theoretical graph algorithms into the GNN area. I revise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for going through our response and revising your score. | null | null | null | null | null | null |
Knowledge Distillation Performs Partial Variance Reduction | Accept (poster) | Summary: This work explores knowledge distillation, a technique used to improve the performance of smaller "student" models by leveraging the knowledge of more powerful "teacher" models. The authors analyze knowledge distillation from an optimization perspective and reveal that it can be seen as a stochastic variance reduction mechanism, reducing stochastic gradient noise and acting as a partial variance reduction technique. However, complete noise elimination depends on the characteristics of the "teacher" model. The study highlights the importance of careful parameterization, particularly regarding the weighting of the distillation loss. Empirical experiments with linear models and deep neural networks validate the findings, providing further insight into knowledge distillation.
Strengths: This paper took a common empirical practice in the field, formalized the problem and provided analytical explanation from a unique perspective. The theoretical conclusions match with what the community observes in the real life, which provide valuable insights and guidance.
The experiments also perfectly corroborated the theoretical results, which makes the conclusions very convincing.
Weaknesses: 1. There is no explanation why LBFGS teacher is better than SGD teacher.
2. The optimal choice of distillation weight $\lambda$ does reflect the quality of the teacher; however, the selection itself is rather post hoc. Is there any other, more practical indicator of the quality of the teacher model that is informative before running the optimization process?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: It would be interesting if there were also results concerning generalization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and positive evaluation of our work.
- *There is no explanation why LBFGS teacher is better than SGD teacher.*
***Response.*** This is simply because LBFGS, as an optimization algorithm, can train the teacher to achieve almost zero training loss, while the SGD teacher converges only to within some neighbourhood of the optimal solution. Throughout, for us, having a better teacher means that the loss $f(\theta) - f^*$ is smaller.
- *The optimal choice of distillation weight does reflect the quality of the teacher; however, the selection itself is rather post hoc. Is there any other, more practical indicator of the quality of the teacher model that is informative before running the optimization process?*
***Response.*** This is correct. From the practical perspective, our results could be interpreted in (at least) two ways:
*(1)* Our characterization shows that one could build a “line search”-type procedure to approximate the optimal choice of distillation weight for a given teacher.
*(2)* More broadly, our analysis suggests that careful tuning of the distillation weight parameter can be crucial to obtain the full benefits of KD.
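Point *(1)* can be sketched on a toy linear-regression distillation problem. This is illustrative only: the data, the function-matching distillation loss, and the $\lambda$ grid below are our own assumptions, not the exact setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=200)   # noisy labels
w_teacher = np.linalg.lstsq(X, y, rcond=None)[0]          # a near-optimal teacher

def final_train_loss(lam, steps=500, lr=0.05, batch=10):
    """SGD on (1 - lam) * data loss + lam * match-the-teacher loss."""
    w = np.zeros(5)
    for _ in range(steps):
        idx = rng.integers(0, 200, size=batch)
        Xb = X[idx]
        g_data = Xb.T @ (Xb @ w - y[idx]) / batch
        g_dist = Xb.T @ (Xb @ (w - w_teacher)) / batch
        w -= lr * ((1 - lam) * g_data + lam * g_dist)
    return float(np.mean((X @ w - y) ** 2))

# "Line search" over the distillation weight.
losses = {lam: final_train_loss(lam) for lam in (0.0, 0.3, 0.6, 0.9)}
best_lam = min(losses, key=losses.get)
```

Repeating the short training run over a small grid of weights and keeping the best is exactly the kind of line-search procedure suggested above; with a near-optimal teacher, larger weights suppress the label-noise contribution to the stochastic gradient.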
- *It would be interesting if there were also results concerning generalization.*
***Response.*** Indeed, results on generalization would be interesting, and we plan to investigate this in future work. However, in this work we take an optimization view of KD, which means that we focus on training loss.
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my questions. Very neat paper! Keeping the previous score. | Summary: This paper gives a new interpretation of the logit-based knowledge distillation algorithm. In particular, for simple one-layer models, authors show theoretically that the knowledge distillation can be viewed as diminishing the scale of the student model gradient by some amount proportional to the teacher model gradient. For deeper models, one gets a slightly different quantity, which is empirically well-correlated with the single-layer version. Building on this observation, the authors give the convergence guarantee of the knowledge distillation algorithm, in terms of the parameter and the risk (Theorem 1 and 2, respectively). Authors also propose a slight refinement of the KD, which performs a more proper form of variance reduction.
Strengths: - This paper gives one of the most pleasantly simple yet insightful interpretations of the KD objective. There have been many theoretical studies that attempted to explain why knowledge distillation helps, but I do not think I have seen any explanation as simple as this. As it offers parameter-level insights, I believe that this observation can lead to many algorithmic consequences in future work.
- The theoretical analysis also provides some idea about determining the distillation weight $\lambda$, which is not practical in its current form but could give some inspiration nevertheless.
- The contributions and the proposed unbiased KD are novel, as far as I know.
- The paper is written quite clearly, and the contributions are easy to understand.
Weaknesses: - It would have been better if the paper provided a direct head-to-head comparison of Theorems 1 & 2 against the convergence guarantees one could get from the vanilla SGD procedure. The provided "convergence bound" type of results do not really give much insight into how knowledge distillation (or variance reduction) leads to **better** convergence than vanilla SGD.
- Two logical connections could be made a little bit clearer. (1) How does eq(5) lead to standard variance reduction? Are we saying that KD is a variance reduction, only because there are some negative terms? Do we have any direct empirical corroborations of this claim? (2) Please explain how variance reduction leads to better training, e.g., by citing prior work.
- In Figure 1, I am not sure whether the cosine similarity is the right metric to use. The magnitude should be a very critical issue, especially because this paper is claiming that KD is about variance reduction. Could you provide additional plots on the $\ell_2$ difference, or maybe the SNR-like result (i.e., the ratio between the difference and the original distillation gradient)?
- The empirical results on the proposed unbiased KD are given in the form of "training loss" only, rather than test loss or test accuracy. The results on the test/validation dataset may be very useful in understanding the strengths and limitations of the proposed algorithm.
- A mild suggestion on notations---please consider changing the symbol for the student model parameters. In most machine learning papers, $x \in \mathcal{X}$ denotes the input feature (and $y \in \mathcal{Y}$ denotes the label). Many KD papers use $\theta_t$ for teacher parameters and $\theta_s$ for student parameters. Although the current notation is okay, using more common notations may help the readers greatly.
- A minor mistake---the example on "classification with single hidden layer" may not really be the case with one "hidden layer." The one-hidden-layer network is the same as a two-layer neural network.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In addition to the "weaknesses" above, here are some questions/suggestions.
- Is there any way one could extend the proposed method of analysis to the distillation cases where the student model is not necessarily a "compressed" version of the teacher model?
- It would be great if there is a plot that explicitly tracks the gradient variance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are given in the appendices, which I think is okay.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and positive evaluation of our work
- *It would have been better if ...*
***Response.*** We acknowledge this suggestion; in fact, this is one of the key aspects of our theory, discussed in lines 269-277 ("Importance of the results") after the proof overview of the theorems. The comparison of convergence rates between Theorems 1 & 2 and SGD can be done as follows. For both setups, the rate of SGD is the same as (11) or (12), with only one difference: the $\min(\gamma, \mathcal{O}(f(\theta) - f^*))$ term is replaced with $\gamma$. So, $\mathcal{O}(f(\theta) - f^*)$ is the factor that makes our results ***better*** compared to SGD in terms of optimization performance.
We thank you for this suggestion, and will add further clarifying discussion on this point.
- *Two logical connections could be made a little bit clearer...*
***Response.*** Thank you for the suggestion, we will make them clearer in the revision.
*(1)* Exactly! Equation (5) leads to partial VR because of the additional $ -\lambda\nabla f_{\xi}(\theta)$ term. When chosen properly (distillation weight $\lambda$ and proximity of $\theta$ to the solution $x^*$), this additional stochastic gradient is capable of adjusting the student’s stochastic gradient since both are computed using the same batch from the train data. In other words, both gradients have the same source of randomness which makes partial cancellations feasible.
As a matter of fact, we do have direct empirical corroboration for this in Figure 3 (see also the new plot, Figure 7 in our response PDF, tracking gradient variance for the same setup). The Blue line is pure SGD, without any distillation involved. The Green line employs distillation with distillation weight $\lambda=0.4$ and achieves uniformly lower train errors (and gradient variance).
*(2)* First of all, in the optimization perspective we adopt, “better training” means precisely smaller train error. The analysis of test error is a different aspect which we do not consider here. The standard/complete variance reduction mechanism (such as SVRG) is capable of removing the stochastic neighborhood term completely, and guarantees that the convergence is to the exact minimizer, as opposed to SGD. To give a specific example, in the strongly convex and smooth regime SGD with constant learning rate still converges up to some neighborhood of the minimizer, while SVRG converges to the exact solution with constant learning rate. See reference [4] below for a complete discussion.
[4] Robert M. Gower, Mark Schmidt, Francis Bach, Peter Richtarik, *Variance-Reduced Methods for Machine Learning*, IEEE, 2020 (https://arxiv.org/abs/2010.00892).
We will provide a clarifying discussion on both points in the next version of our paper.
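To make both points concrete, here is a self-contained toy sketch (entirely our own construction; the quadratic losses, constants, and names are invented for illustration and are not from the paper): on a quadratic finite-sum problem, the distillation-adjusted gradient partially cancels the minibatch noise as in point (1), while an SVRG-style correction removes it completely as in point (2).

```python
import numpy as np

rng = np.random.default_rng(0)
# f_i(theta) = 0.5*(theta - a_i)^2, so grad f_i(theta) = theta - a_i; minimizer = mean(a)
a = rng.normal(size=1000)
theta = a.mean() + 0.5            # current student iterate
theta_teacher = a.mean() + 0.05   # teacher close to the minimizer
lam = 0.4                         # distillation weight, as in Figure 3

idx = rng.integers(0, a.size, size=20000)   # shared minibatch indices
g_sgd = theta - a[idx]                      # plain stochastic gradient
# (1) partial variance reduction: subtract lam times the teacher's gradient
#     computed on the SAME batch (same source of randomness)
g_kd = g_sgd - lam * (theta_teacher - a[idx])
# (2) complete variance reduction (SVRG-style): control variate anchored at a
#     snapshot point plus the snapshot's full gradient
snap = theta_teacher
g_svrg = g_sgd - (snap - a[idx]) + (snap - a.mean())

# variance shrinks by (1 - lam)^2 for the KD direction and to zero for SVRG,
# because the component gradients here are linear in theta
print(np.var(g_sgd), np.var(g_kd), np.var(g_svrg))
```

Because the per-sample gradients are linear in $\theta$, the SVRG correction is noise-free in this toy, which is exactly why SVRG converges to the exact minimizer with a constant learning rate while SGD only reaches a stochastic neighborhood.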
- *In Figure 1, I am not sure whether the cosine similarity is the right metric to use. ...*
***Response.*** We believe that cosine similarity is a relevant metric since it measures the alignment of the gradients. Nevertheless, to fully address this concern, we additionally provided plots tracking $\ell_2$ distance and SNR for the same gradients in the rebuttal PDF (Figure 6). As we can see, they follow exactly the same trend as the plot of cosine similarity.
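For completeness, all three metrics can be computed in a few lines (a generic sketch; the paper's exact SNR convention may differ — here we take $\|g\| / \|g - \hat g\|$, one common choice):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity: alignment of two gradient vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def snr(u, v):
    # signal-to-noise ratio of v as an estimate of u (one common convention)
    return float(np.linalg.norm(u) / np.linalg.norm(u - v))

g_true = np.array([1.0, 2.0, -1.0])   # e.g. the true gradient
g_hat = np.array([0.9, 2.1, -0.8])    # e.g. its approximation
print(cosine(g_true, g_hat), float(np.linalg.norm(g_true - g_hat)), snr(g_true, g_hat))
```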
- *The empirical results on the proposed unbiased KD are given in the form of "training loss" only, rather than test loss ...*
***Response.*** Please note that the purpose of proposing / analyzing unbiased KD is to support our claim that the potential source of ***partial*** variance reduction in KD is the biased nature of distillation. Indeed, we removed the bias and improved variance reduction both analytically and empirically. Analyzing test loss/accuracy of unbiased KD would be interesting in itself, but is irrelevant for our argument related to variance reduction. In fact, unbiased KD closely relates to the popular and well-studied SVRG algorithm (please see the references in the paper), applied in the context of deep networks.
- *A mild suggestion on notations*
- *A minor mistake*
***Response.*** Thank you for the detailed comments on improving the presentation, which we will take into account for the next version of the work.
- *Is there any way one could extend the proposed method of analysis to the distillation cases where the student model is not necessarily a "compressed" version of the teacher model?*
***Response.*** Yes, we believe it is possible to extend the current method of analysis to general knowledge distillation under certain restrictions. The main issue there is obtaining reasonably accurate analytical expressions for the distillation gradients. If the student’s architecture is not the same as the teacher’s nor is a sub-network of the teacher’s architecture, then it may be tricky to derive a closed form for the distillation gradient. Technically, the gradient space of the teacher could be very different from the gradient space of the student; dimensions may not match and the student's gradient space is not a subspace of the teacher's gradient space. One possible workaround could be to impose additional assumptions on how the student’s gradient space can be embedded reasonably into the teacher’s gradient space. (For instance, this would be the case with pruning, quantization, or structured compression/neural architecture search.) In such cases, we will indeed be able to extend Proposition 1.
- *It would be great if there is a plot that explicitly tracks the gradient variance.*
***Response.*** Please find the plot in our PDF response (Figure 7). The plot explicitly tracks gradient variance (averaged over the iterations within each epoch) for the same setup as Figure 3. As expected, both variants of KD (biased and unbiased) have reduced gradient variance compared to plain SGD. The plot also highlights that both variants of KD have similar variance reduction properties, while the unbiasedness of unbiased KD amplifies the reduction of train loss in Figure 3.
---
Rebuttal Comment 1.1:
Comment: I very much appreciate the careful comments and additional results. My concerns have been addressed well. | Summary: This paper examines the benefits of Knowledge Distillation (KD) from an optimization perspective. They show that, under certain assumptions, KD performs partial variance reduction on SGD noise and that the amount of reduction depends on the quality of the teacher model. Their analysis suggests that the distillation weight used in the KD loss should be appropriately tuned, and the authors provide a closed-form solution for the optimal weight in the case of linear models. Even though their core result does not directly apply to deep networks, they present some empirical evidence supporting that it remains a reasonable approximation.
Strengths: * Understanding the underlying mechanics of KD and the reason for its benefit is highly relevant to the ML community given the widespread use of KD. A deeper understanding may also lead to better distillation algorithms, as exemplified with the closed-form optimal distillation weight in this paper.
* The connection between the distillation gradient and variance reduction methods like SVRG is interesting and novel to the best of my knowledge.
* The presented analyses and results seem sound.
* The authors present some empirical evidence that support the claims of the paper (Figs. 2-5).
Weaknesses: * The core proposition of the paper (Prop. 1) does not apply to deep neural networks, and the empirical evidence presented to support the claim that it’s a good approximation is quite limited (one scenario on MNIST with one hidden layer FFN and fixed learning rate).
* The presented results, including the empirical ones, apply to training loss, rather than test error.
* The presented theory states that a higher performing teacher leads to higher variance reduction. However, prior works in distillation have shown that significantly better teachers often lead to worse students [1-3]. Please see the questions section for specific questions on this.
[1] https://arxiv.org/pdf/1902.03393.pdf
[2] https://arxiv.org/pdf/2202.03680.pdf
[3] https://arxiv.org/pdf/2206.06067.pdf
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Continuing from the comment from the Weakness section: how does the presented theory reconcile with the prior observations that better teachers may in fact lead to poorer students due to, e.g., the large capacity gap between the student and the teacher? Does this suggest that variance reduction is not the end of the story or that it is solely responsible for the success of distillation?
2. Why does the cosine similarity between the approximation of Prop 1 and the true gradient in deep networks decrease as the training progresses? It would be helpful to show the behavior until a larger epoch like 100 instead of 50 to support the claim that the behavior stabilizes as training progresses.
3. Beyond empirical evidence, is there an intuitive reason to believe that Prop. 1 would approximately apply to deep neural networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and evaluation of our work.
- *The core proposition of the paper (Prop. 1) does not apply to deep neural networks, and the empirical evidence presented to support the claim that it’s a good approximation is quite limited (one scenario on MNIST with one hidden layer FFN and fixed learning rate).*
***Response.*** We acknowledge the fact that the focus of our work is analytical: we identify a first non-trivial interpretation of KD from the optimization perspective, and characterize cases when KD can lead to better rates. Our interpretation appears to match experiments precisely in the convex case. Our further experiments are meant to show that our “distillation gradient” interpretation can also be relevant in the case of SGD-based optimization of DNNs. We indeed plan to investigate the challenging DNN / non-convex case more precisely in future work, for which both new techniques and further experimental validation will be needed.
- *The presented results, including the empirical ones, apply to training loss, rather than test error.*
***Response.*** As we also mentioned in the abstract, in this work, we investigate KD specifically from an optimization perspective. Of course, investigating test errors is also important. However, it is not the goal of this work. Nonetheless, we added a plot (Figure 8 in the response PDF file) showing validation loss for the same setup as Figure 2(a) with very similar behavior.
- *1. Continuing from the comment from the Weakness section: how does the presented theory reconcile with the prior observations that better teachers may in fact lead to poorer students due to, e.g., the large capacity gap between the student and the teacher? Does this suggest that variance reduction is not the end of the story or that it is solely responsible for the success of distillation?*
***Response.*** To reconcile the mentioned prior observations with our results, notice that the notion of a “better teacher” has different meanings. In the papers you mentioned, a better teacher means a teacher with a much larger capacity (for instance wider and/or deeper architecture) which can have higher performance than the student model. However, in our case “better teacher” means better parameter (i.e., weights and biases) values, evaluated in terms of training loss.
In particular, in the case of self-distillation, covered in Sections 4 and 5, the teacher and student architectures are identical, and hence they have the same capacity. Here, a better teacher means better parameter values within the same architecture.
In our second regime, distillation for compressed models (Section 6), we actually consider the case when the student network is a subnetwork of the teacher; we consider a sparsification compression operator that selects $k$ parameters for the student out of $d$ parameters of the teacher. Then, clearly, the teacher has a larger capacity with a capacity ratio $d/k\ge1$. However, our result in this direction (Theorem 4) does not allow the capacity ratio to be arbitrarily large. Indeed, the constraint $\omega = \mathcal{O}(\mu/\mathcal{L})$ on compression variance implies a constraint on capacity ratio since $\omega = d/k-1$ for the sparsification operator. Thus, our result holds when the teacher’s size is not significantly larger than the student’s size, which does not contradict the observations from prior work noted by the reviewer.
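A quick numeric instance of this constraint (our illustrative numbers, not figures from the paper): for random-$k$ sparsification, $\omega = d/k - 1$, so the bound $\omega = \mathcal{O}(\mu/\mathcal{L})$ directly caps how much larger the teacher may be.

```python
d, k = 1_000_000, 800_000      # teacher / student parameter counts (illustrative)
omega = d / k - 1              # compression variance of random-k sparsification
capacity_ratio = d / k
print(omega, capacity_ratio)   # 0.25 1.25: a modest gap, compatible with a small omega budget
```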
- *2. Why does the cosine similarity between the approximation of Prop 1 and the true gradient in deep networks decrease as the training progresses? It would be helpful to show the behavior until a larger epoch like 100 instead of 50 to support the claim that the behavior stabilizes as training progresses.*
***Response.*** Please see our PDF response, where we added a plot (Figure 6) with a longer 100-epoch training run supporting the claim on stability. The decrease of cosine similarity can be explained as follows: at the beginning the cosine similarity is high (and the SNR is low) since we start from the same model. Then, initial perturbations caused by either the KD or modified KD gradient don’t cause big shifts (the teacher has enough confidence and small gradients). These perturbations accumulate over the training, leading to decreased cosine similarity, and eventually stabilize.
- *3. Beyond empirical evidence, is there an intuitive reason to believe that Prop. 1 would approximately apply to deep neural networks?*
***Response.*** Yes, there is. Intuitively, distillation has essentially negligible effect on the data points for which the teacher classifies correctly with high confidence (since the distillation gradient almost equals the regular gradient). In such cases, the feedback from the teacher is similar to or the same as from the true labels. This intuition is reflected in Proposition 1 since high confidence of teacher’s classification means that teacher’s gradient $\nabla f_n(\theta)$ with respect to those data points is close to zero and does not affect the student’s gradient much.
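This intuition is easy to verify for softmax cross-entropy, where the gradient with respect to the logits is $p - \mathbf{1}_y$ (a standard identity; the snippet below is our illustration, not code from the paper): the more confident and correct the prediction, the smaller the gradient.

```python
import numpy as np

def ce_logit_grad(logits, label):
    # gradient of cross-entropy w.r.t. logits: softmax(logits) - onehot(label)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    g = p.copy()
    g[label] -= 1.0
    return g

g_confident = ce_logit_grad(np.array([5.0, 0.0, 0.0]), 0)  # confident and correct
g_unsure = ce_logit_grad(np.array([0.5, 0.0, 0.0]), 0)     # barely correct
print(np.linalg.norm(g_confident), np.linalg.norm(g_unsure))
```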
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I read the authors' rebuttal, the general response, and the other reviews. In light of the new results provided in the PDF and the authors' promise to provide a more complete discussion of applicability in the non-convex case in the next revision, I decided to raise my score. I still view the theory's prediction that a more capable teacher (in terms of lower train error, which is generally the case with a higher-capacity teacher) leads to a better student as contradictory to some of the empirical observations from prior work. This could be because the bounds are strictly from an optimization perspective, which concerns training performance. I would encourage the authors to include a discussion making this explicit in light of prior work showing that higher-capacity teachers may not necessarily lead to better students (which seems to be implied by the presented theory at first glance).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and the score increase. We acknowledge your point, and we will provide a detailed discussion on the possible interpretations of our results in light of the existing empirical work on KD, both in terms of training and generalization performance.
Generally, we believe there is no contradiction between our results and the existing empirical work on KD. Our focus in this work is solely on training performance (e.g., better student/teacher means better parameter values for train error), while the empirical observations (following the three papers you mentioned in your initial review, which we examined) are in terms of generalization performance. Moreover, none of our results consider the regime where the teacher’s capacity (in terms of architecture size) is significantly larger than that of the student. | Summary: This work analyzes Knowledge Distillation (KD) using an optimization point of view.
By recasting the KD problem as a standard learning problem with a custom loss, the authors analyze the convergence of SGD on such loss and identify the variance-reducing properties of KD, through the bias induced by the teacher in the loss.
Using some approximations, the authors argue that their analysis should hold for deep networks too.
The paper complements its analysis by proposing a technique to reduce the bias of KD and by extending the result to generic KD with smaller students.
The proposed analysis is supported by experiments on linear models.
Strengths: The paper is clear and easy to read even for non-optimization experts.
The whole idea of analyzing KD as a variance-reducing algorithm is novel and interesting.
Although the assumptions are limited to linear models, the empirical experiments showing that these assumptions could approximately hold for deep networks are convincing.
The paper proposes a good balance of novelty, clarity and significance and it should be relevant to anybody interested in KD.
Weaknesses: The biggest weakness of this work is its ambivalence on the validity of the results for deep learning.
While I do understand this is not the goal of the paper, I do not understand why the authors claim that their results could be approximately true for deep networks, show some empirical evidence of this and then, drop the topic for the rest of the paper.
I think the authors should either add more empirical evidence on this matter or focus on the linear/convex case.
In any case, I encourage the authors to further investigate KD as variance reduction on deep networks, even if only empirically in another paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My doubts are about the most empirical aspects of the paper. The "Experimental validation" in sec 4.3 mentions "linear probing" on CIFAR-10, is this just taking the last features produced by a neural network and training a linear classifier on top of those?
I also want to ask to the authors if they tried to replicate experiments with deep networks to show that empirically, the same behaviours can be obtained even for non-linear models.
I don't understand why you write a paragraph to claim that the fundamental assumption for your results can hold approximately for deep learning and then don't elaborate on the matter any further, either with experiments or theoretical analysis.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of their work properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and positive evaluation of our work.
- *The biggest weakness of this work is its ambivalence on the validity of the results for deep learning. ...*
***Response.*** We acknowledge the fact that the focus of our work is analytical: we identify a first non-trivial interpretation of KD from the optimization perspective, and characterize cases when KD can lead to better rates. Our interpretation appears to match experiments precisely in the convex case. Our further experiments are meant to show that our “distillation gradient” interpretation can also be relevant in the case of SGD-based optimization of DNNs. We indeed plan to investigate the challenging DNN / non-convex case more precisely in future work, for which both new techniques and further experimental validation will be needed.
- *My doubts are about the most empirical aspects of the paper. The "Experimental validation" in sec 4.3 mentions "linear probing" on CIFAR-10, is this just taking the last features produced by a neural network and training a linear classifier on top of those?*
***Response.*** Yes, we train a linear classifier on top of the features extracted from a ResNet50 model pre-trained on ImageNet. This is a standard setting, commonly used in the transfer learning literature, see e.g. [1], [2], [3].
[1] Simon Kornblith, Jonathon Shlens, Quoc V. Le, *Do Better ImageNet Models Transfer Better?* CVPR, 2019 (https://arxiv.org/abs/1805.08974)
[2] Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry, *Do Adversarially Robust ImageNet Models Transfer Better?* NeurIPS, 2020 (https://arxiv.org/abs/2007.08489)
[3] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh, *How Well Do Sparse Imagenet Models Transfer?* CVPR, 2022 (https://arxiv.org/abs/2111.13445)
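As a self-contained sketch of the linear-probing protocol (everything here is illustrative, not the paper's pipeline — synthetic features stand in for the ResNet50 activations): only a linear head is fit on top of frozen features.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for frozen backbone features; in practice these come from one
# forward pass of the pre-trained network over the transfer dataset.
feats = rng.normal(size=(500, 64))
labels = (feats[:, 0] > 0).astype(int)  # toy binary labels

# Linear probe: closed-form ridge regression to one-hot targets; the backbone is untouched.
onehot = np.eye(2)[labels]
W = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(64), feats.T @ onehot)
acc = float(((feats @ W).argmax(1) == labels).mean())
print(acc)
```

The key property of the protocol is that only `W` is learned; the feature extractor's weights never change, so probe accuracy measures the quality of the pre-trained representation.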
- *I also want to ask to the authors if they tried to replicate experiments with deep networks to show that empirically, the same behaviours can be obtained even for non-linear models. I don't understand why you write a paragraph to claim that the fundamental assumption for your results can hold approximately for deep learning and then don't elaborate on the matter any further, either with experiments or theoretical analysis.*
***Response.*** We acknowledge that we only provide partial evidence for the validity of our “distillation gradient” assumption for deep models. We provided evidence that the assumption holds approximately for deep networks, and showed the varying behaviour of the alignment during training.
Further, we emphasize that our Theorem 2 is designed to hold specifically for self-distillation in the non-convex case, which approximates very popular heuristics in practice, e.g. [55].
We agree that there is much more to investigate for the case of deep networks where exact tracking of teacher’s impact across multiple layers of non-linearities becomes harder. We see our results as a promising first step towards a more complete understanding of the effectiveness of distillation, and will provide a more complete discussion towards applicability in the non-convex case in the next revision. | Rebuttal 1:
Rebuttal: Dear Reviewers and Area Chair,
Thank you for the time and effort you put into evaluating our work. We have responded to all comments in your reviews by providing additional discussions/clarifications and numerical validations (please find the attached PDF response for the additional plots).
Please let us know whether we managed to address your concerns regarding the paper.
Regards,
Authors
Pdf: /pdf/be48403b8fb13c723bc0389f68701ee316373877.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
GEX: A flexible method for approximating influence via Geometric Ensemble | Accept (poster) | Summary: This work studies approximations methods for data influence. The work identifies a common theoretical drawback behind popular approximation methods to Influence Function (IF) which suppresses their expressive power and affects their performance.
This work proposes a novel interpretation of existing IF approximations as some special form of Laplace approximation (LA) and points out that the drawback is due to the linearity of gradients and the singularity of Hessian. This work proposes an original IF approximation method that circumvents these issues, which removes the linearization to ease the bilinear constraint and leverages Geometric Ensemble (GE) for non-linear losses. Both conceptually and empirically, this work demonstrates significant improvement over existing IF approximation methods. The work includes a variety of experiments including many use cases.
Strengths: The paper is nicely written. The narrative is smooth and the conceptual development is attractive. The problem being considered is of interest, and the finding of a common drawback behind the mechanism of existing IF approximations is novel and insightful. Visualizations of the conceptual findings are very helpful and present the ideas in a straightforward way that aids understanding.
Evaluations are thorough. Various use cases are considered and the empirical performance is satisfactory. Comprehensive details for experiments are given in Appendix.
Weaknesses: Influence Function (IF) refers to the celebrated method proposed in < Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR, 2017.> that is used to estimate the quantity in Eq. (1). Equation (1) is essentially the definition for LOO. It is NOT the "ground truth" of Influence Function (IF). Eq. (2) is the "Influence Function". It is NOT named "I_{Hess}".
The transition to Section 4.1 needs more elaborations. I_GEX proposed in this subsection is the core contribution of the paper, yet it isn't thoroughly discussed before moving on to discussing the limitation of other methods. Similarly, an introduction to LA is much needed as it lays important grounds for the development of this work. I may consider increasing my score if this can be reasonably addressed.
Though the conceptual flow of the paper is smooth, some important parts remain unclear. For example, what is Eq. 9 exactly and how is it calculated in actual implementation? How to conceptually interpret Eq. 10 and how it differs from Eq. 12?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How does the proposed method compare to TracIn (not TracInRP)? And how does the result compare to the actual counterfactual (LOO)?
Appendix should not be submitted within the main paper, which has a page limit of 9.
What is the "retraining cost of IF"? IF does not need training or "retraining".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful review and constructive comments! We give point-to-point replies to your comments in the following. Official comments will address questions that could not be answered due to the character limit.
* Q1. [**Terminology**] Influence Function (IF) refers to the celebrated method proposed in < Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR, 2017.> that is used to estimate the quantity in Eq. (1). Equation (1) is essentially the definition for LOO. It is NOT the "ground truth" of Influence Function (IF). Eq. (2) is the "Influence Function". It is NOT named "I_{Hess}".
* A1. As the reviewer mentioned, [1] introduced Eq. (2) as an Influence Function (IF) to provide an approximation for the counterfactual effect of LOO retraining (Eq. (1)). In our work, we used the notation $ \mathcal{I}\_\mathtt{GT} $ and $ \mathcal{I}\_\mathtt{Hess} $ since the similar notations were originally introduced in [2] to emphasize that Eq. (2) approximates Eq. (1) using Hessian. However, we understand that this notation might be misleading for some readers. In response to the reviewer's feedback, we will revise the manuscript, denoting Eq. (1) as $\mathcal{I}\_\mathtt{LOO}$ and Eq. (2) as $\mathcal{I}$. Moreover, we believe that with this modification, it would become clear that the IF (Eq. (2)) does not require retraining, which addresses the following question: "What is the 'retraining cost of IF'? IF does not need training or 'retraining'."
[1] Koh, Pang Wei, and Percy Liang. "Understanding black-box predictions via influence functions." International conference on machine learning. PMLR, 2017.
[2] Schioppa, Andrea, et al. "Scaling up influence functions." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
* Q2. [**Elaboration for Section 4.1**] The transition to Section 4.1 needs more elaborations. I_GEX proposed in this subsection is the core contribution of the paper, yet it isn't thoroughly discussed before moving on to discussing the limitation of other methods. Similarly, an introduction to LA is much needed as it lays important grounds for the development of this work. I may consider increasing my score if this can be reasonably addressed.
* A2. Thank you for the helpful suggestions. To provide a clear motivation for GEX, we will revise the introduction of Section 4 (Lines 155-162) as follows:
To mitigate the distributional bias in Section 3, we propose a flexible IF approximation method using Geometric Ensemble (GE; [15]), named Geometric Ensemble for sample eXplanation (GEX). Here is a summary of how GEX is developed.
$$
\mathcal{I}
\overset{\texttt{Delinearization}}{\underset{\texttt{Section 4.1}}{\longrightarrow}}
\mathcal{I}\_\mathtt{LA}
\overset{\texttt{LA to GE}}{\underset{\texttt{Section 4.2}}{\longrightarrow}}
\mathcal{I}\_\mathtt{GEX}
$$
In Section 4.1, we ensure that the influence approximation is not a bilinear form for the gradient by replacing gradients in IF with sample-loss deviations. The theoretical foundation for this step is provided by our Theorem 1 below, which establishes a relationship between the IF and the Laplace approximation (LA; [36]). Moving on to Section 4.2, we modify the parameter distribution to compute the sample-loss deviation from LA to GE. This modification is necessary because GE is based on the local geometry of the loss landscape around $\theta^*$, similar to LA while avoiding overestimating loss deviations caused by the singularity of the Hessian.
Also, the revised manuscript will include the following introduction to LA after Theorem 4.1 to assist readers:
The LA was proposed to approximate the posterior distribution with a Gaussian distribution. Recently, it has gained significant attention due to its simplicity and reliable calibration performance [1, 2]. Intuitively, LA is equivalent to the second-order Taylor approximation of log-posterior at $ \theta^* $ with Gaussian prior defined as $p(\psi):= \mathcal{N}(\psi| \mathbf{0}_P, \gamma^{-1} \mathbf{I}_P )$:
$$
\log p(\theta|S)
= \log p(S|\theta) + \log p(\theta) - \log Z \\
= - N\cdot L(S, \theta) + \log p(\theta) - \log Z \\
\approx - N\cdot L(S, \theta^*) - \frac{1}{2}(\theta-\theta^*)^{\top}(N\cdot H + \gamma \mathbf{I}_P )(\theta-\theta^*) - \log Z \\
= \log p(\theta^*|S) - \frac{1}{2}(\theta-\theta^*)^{\top}(N\cdot H + \gamma \mathbf{I}_P )(\theta-\theta^*). \\
$$
Here, the training loss represents the negative log-likelihood $L(S, \theta) = -\frac{1}{N}\log p(S| \theta)$, and $Z:= \int p(\theta) p(S|\theta) d\theta$ represents the evidence in Bayesian inference [3]. Similar to the IF, LA becomes computationally intractable when dealing with modern architectures due to the complexity of the Hessian matrix. To address this computational challenge, recent works have proposed various sub-curvature approximations, such as KFAC [1] and sub-network [2], which provide computationally efficient alternatives for working with LA.
[1] Ritter, Hippolyt, Aleksandar Botev, and David Barber. "A scalable laplace approximation for neural networks." 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings. Vol. 6. International Conference on Representation Learning, 2018.
[2] Erik Daxberger, Eric Nalisnick, James U Allingham, Javier Antoran, and Jose Miguel Hernandez- Lobato. Bayesian deep learning via subnetwork inference. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 2510–2521. PMLR, 18–24 Jul 2021b.
[3] Bishop, Christopher M. Pattern Recognition and Machine Learning. New York: Springer, 2006.
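To make the Gaussian form above concrete, here is a minimal numerical sketch of the LA covariance $(N\cdot H + \gamma \mathbf{I}_P)^{-1}$ (assumptions: a toy 3-parameter problem with a hand-picked positive-definite Hessian; all names and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: mean training loss L(S, theta) with a known Hessian at the optimum.
P, N, gamma = 3, 100, 1.0           # parameter dim, dataset size, prior precision
theta_star = np.array([1.0, -0.5, 2.0])
H = np.array([[2.0, 0.3, 0.0],      # Hessian of the mean loss at theta_star
              [0.3, 1.5, 0.1],      # (symmetric positive definite in this toy)
              [0.0, 0.1, 1.0]])

# Laplace approximation: p(theta | S) ~= N(theta_star, (N*H + gamma*I)^{-1})
precision = N * H + gamma * np.eye(P)
cov = np.linalg.inv(precision)

# Draw posterior samples and check they reproduce the LA covariance
samples = rng.multivariate_normal(theta_star, cov, size=10000)
emp_cov = np.cov(samples.T)
print(np.max(np.abs(emp_cov - cov)))  # small: empirical covariance matches
```

In practice the Hessian of a deep network is too large to invert directly, which is exactly why the sub-curvature approximations cited above (KFAC, sub-network) are used.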
---
Rebuttal Comment 1.1:
Comment: * Q3. [**Clarifications for Eq. (9) - (12)**] Though the conceptual flow of the paper is smooth, some important parts remain unclear. For example, what is Eq. 9 exactly and how is it calculated in actual implementation? How to conceptually interpret Eq. 10 and how it differs from Eq. 12?
* A3. Eq. (9) in our paper is the empirical distribution of Geometric Ensemble (GE) [1]. To clarify further, we utilize the Dirac delta distribution to represent the empirical parameter distribution of GE. This is a common usage of the Dirac delta distribution, as mentioned in p.64 of [2]: "A common use of the Dirac delta distribution is as a component of an empirical distribution, ~.". To construct the empirical distribution $\\{ \theta^{m} \\}_{m=1}^{M}$, we collect intermediate checkpoints during the iterative SGD updates. For more details on the construction of GE and computing GEX based on GE, please refer to Appendix C.2.
Regarding Eq. (10), it can be intuitively understood as capturing the covariance of sample loss between two instances $z$ and $z'$. To ensure that the starting point used in calculating the covariance aligns with the mean value, we employ the concept of "sample loss deviation" instead of conventional covariance. This choice enables us to set the starting point as the sample loss at $\theta^*$. The main distinction between Eq. (10) and Eq. (12) lies in the parameter distribution used to calculate the expectation – LA or GE. This difference becomes crucial for dealing with non-linear sample loss deviation since LA tends to overestimate the sample loss deviation due to the singularity of the Hessian, as discussed in Section 4.2.
[1] Garipov, Timur, et al. "Loss surfaces, mode connectivity, and fast ensembling of dnns." Advances in neural information processing systems 31 (2018).
[2] Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
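The construction described above can be sketched as follows (a schematic reading of Eqs. (9) and (12): a Dirac-mixture distribution over collected checkpoints, and a mean product of sample-loss deviations; the toy loss and checkpoint perturbations are our own illustrative stand-ins, not the paper's implementation):

```python
import numpy as np

def loss(z, theta):
    """Toy per-sample loss: squared error of a linear model on z = (x, y)."""
    x, y = z
    return (x @ theta - y) ** 2

def influence_ge(z, z_prime, theta_star, checkpoints):
    """Eq. (12)-style estimate: mean product of sample-loss deviations
    over the empirical (Dirac-mixture) checkpoint distribution."""
    dev = lambda z_: np.array([loss(z_, th) - loss(z_, theta_star)
                               for th in checkpoints])
    return float(np.mean(dev(z) * dev(z_prime)))

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0])
# Stand-in "checkpoints": small perturbations of theta_star, in place of the
# intermediate SGD iterates collected in the paper's construction of the GE.
checkpoints = [theta_star + 0.05 * rng.standard_normal(2) for _ in range(20)]

z = (np.array([1.0, 0.5]), 0.2)
print(influence_ge(z, z, theta_star, checkpoints))  # self-influence >= 0
```

Note that the self-influence `influence_ge(z, z, ...)` is a mean of squared deviations, so it is non-negative by construction, and the score is symmetric in its two sample arguments.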
* Q4. [**Comparison to other methods**] How does the proposed method compare to TracIn (not TracInRP)? And how does the result compare to the actual counterfactual (LOO)?
* A4. We refer to G3 [**Additional experiments for TracIn**] in the Global Rebuttal for these results.
* Q5. [**Formatting**] Appendix should not be submitted within the main paper, which has a page limit of 9.
* A5. Thank you for bringing up this concern. We acknowledged the problem right after the submission deadline and contacted the Program Chair about this issue. As per the Program Chair's response, NeurIPS 2023 will not reject papers with an appendix as long as it is evident that the main paper concludes on page 9. In line with the conference guidelines, we will separate the Appendix from the main text in the revised manuscript. | Summary: The paper provides a novel connection between IF approximations and LA, and introduces a new IF approximation method. The motivation builds on two observations: removing linearization can alleviate the bilinear constraint, and utilizing the Geometric Ensemble is advantageous. Empirical results demonstrate the advantage of the proposed method with reduced computational complexity.
Strengths: The paper provides a novel connection between IF approximations and LA, and introduces a new IF approximation method. The motivation builds on two observations: removing linearization can alleviate the bilinear constraint, and utilizing the Geometric Ensemble is advantageous. Empirical results demonstrate the advantage of the proposed method with reduced computational complexity.
I greatly enjoyed reading the literature review and the analysis offered by the paper. The paper also introduces several application scenarios where the proposed method demonstrates an obvious performance boost, including the noisy label identification task.
Weaknesses: W1. The method is only justified on small datasets such as MNIST and SVHN. Is it possible to verify the effectiveness of the method on a larger dataset such as ImageNet-1K?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see weakness about the scale of the training data above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable comments and suggestions! We provide a detailed reply to your questions in the following.
* Q1. [**Scalability of GEX**] W1. The method is only justified on small datasets such as MNIST and SVHN. Is it possible to verify the effectiveness of the method on a larger dataset such as ImageNet-1K?
* A1. Following reviewer FeGS's recommendation, we verify the scalability of GEX on various tasks. In fact, Table 2 in our original submission already provides evidence that the effectiveness of GEX in Table 1 scales to larger datasets, by showing that GEX outperforms the baselines on noisy label detection for ImageNet-1K with ViT and MLP-Mixer. In the rebuttal phase, we further validate the scalability of GEX by extending the relabeling task (Section 5.2) and the dataset pruning task (Section 5.4) to the ImageNet-1K setting.
Our first additional experiment is the relabeling task presented in Section 5.2, run in the ImageNet-1K environment with ViT and MLP-Mixer. To this end, we follow the relabeling process in Section 5.2 with the estimated influence in Table 2. The following table presents the relabeled test accuracy for GEX and EL2N (the best-performing noisy label detection method other than ours).
* **Table A. Relabeled accuracy for mislabeled samples**
| | ViT-S-32 | MLP-Mixer-S-32 |
|:-------------------:|:-------------------:|:--------------:|
| Clean acc. | 67.83% | 64.37% |
| Noisy acc. | 63.42% | 61.84% |
| Relabeled with EL2N | 63.18% | 63.16% |
| Relabeled with GEX | **66.17%** | **63.45%** |
Table A shows that GEX can detect mislabeled samples that require relabeling more accurately than EL2N.
The second additional experiment is the dataset pruning task in Section 5.4, run on ImageNet-1K with MLP-Mixer. For this purpose, we reproduce Mixer-S-32 and estimate self-influence with GEX and with the EL2N score (whose scalability on the dataset pruning task was verified in [1]). Then, we prune the 512,466 samples (40%) with the smallest self-influence from ImageNet-1K and retrain neural networks on these pruned datasets. The following table presents the test accuracy after pruning for GEX and EL2N:
* **Table B. Test accuracy after dataset pruning**
| | MLP-Mixer-S-32 |
|:----------------:|:--------------:|
| Full sample acc. | 67.83% |
| Pruned with EL2N | 54.87% |
| Pruned with GEX | **56.34%** |
Similar to the results shown in Figure 4, GEX identifies prunable samples more effectively than EL2N on the large-scale ImageNet-1K dataset. Additionally, it is worth noting that EL2N cannot make use of open-source checkpoints and requires a computational cost of (10~20 epochs) x (number of checkpoints) of training from an initialized neural network. In summary, the better pruned accuracy and the lower computational cost further illustrate the effectiveness of GEX in scalable settings. We will include these results in the revised version.
[1] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." Advances in Neural Information Processing Systems 35 (2022): 19523-19536.
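Generically, the pruning protocol above reduces to ranking samples by self-influence and dropping the lowest-scoring fraction before retraining; a minimal sketch (hypothetical scores, illustrative only, not the paper's code):

```python
import numpy as np

def prune_by_self_influence(self_influence, prune_frac=0.4):
    """Keep the indices with the largest self-influence, dropping the
    `prune_frac` fraction of samples with the smallest scores."""
    n = len(self_influence)
    n_prune = int(prune_frac * n)
    order = np.argsort(self_influence)          # ascending by score
    pruned, kept = order[:n_prune], order[n_prune:]
    return np.sort(kept), np.sort(pruned)

scores = np.array([0.9, 0.1, 0.5, 0.05, 0.7])   # toy self-influence values
kept, pruned = prune_by_self_influence(scores, prune_frac=0.4)
print(kept, pruned)  # drops the two lowest-scoring samples (indices 1 and 3)
```

With the paper's numbers, the same ranking applied to ImageNet-1K with `prune_frac=0.4` would drop the 512,466 lowest-scoring samples before retraining.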
---
Rebuttal Comment 1.1:
Title: Thanks for the reply.
Comment: Thanks for the reply! I will maintain my score.
Best
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer FeGS,
We appreciate your confirmation of our rebuttal and the positive evaluation of our paper!
Best regards,
Authors of submission 4926. | Summary: This paper proposes a new method for approximating influential examples. It treats losses as non-linear functions, addresses the singularity problem of hessians and does not require multiple checkpoints or JVP computations for the influence. The authors also highlight a connection between Influence Functions (IF) and Laplace Approximation (LA) and discuss its limitations. They propose an approximation using Geometric Ensembles (GE) instead. The authors show empirically that their approach outperforms existing IF approximation methods on downstream tasks for noisy label detection, relabeling, dataset pruning and data source separation.
Strengths: + The paper aims to address two important limitations of influential examples: singular nature of hessians and linearity of influence functions.
+ The paper is well written, related work and existing limitations are properly discussed.
+ The paper also discusses and benchmarks their approach on several important downstream tasks including noisy label detection, relabeling and dataset pruning on several datasets and models.
Weaknesses: + The authors emphasize bilinear approximation but they do not explain what bilinearity applies to in the approximations. It would be good to explain it briefly to make it clear.
+ The overall motivation of GEX using dirac distribution in section 4.1 seems a bit unclear. It seems that GEX is highlighted by surfacing the connection between IF and Laplace Approximation (LA) and the limitations of p_{LA} but it is still unclear why GE distribution (Eq. 9) was chosen.
+ It’s a bit challenging to follow the descriptions of Figures 2 and 3 since the authors refer both to influence and self-influence. Self-influence also means potentially mislabeled and it is unclear whether it is what the authors mean. Typical vs influential annotations on those figures are also a bit unclear. What does typical mean in this case ?
**Minor comments**
+ Figure 2: In 2a-2d, axis annotations are missing. Perhaps it would be good to mention that these are histograms in advance or in the figure description.
+ Eq 9: parameter \psi is not explained in that section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: + Line 167: what’s the motivation of choosing Dirac delta distribution
+ Figure 2: Are 2a-2d computed for self-influence or influence w.r.t. test examples? Have we compared I_GEX with I_Hess for influence on test examples? What does typical vs. influential mean for self-influence? High self-influence means that these are potentially mislabeled. So we want to say that the green ones are mislabeled?
+ In terms of evaluation were there any experiments made for the original tracin without random projections for the Table 1.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of the work are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing our paper so thoroughly. We appreciate your feedback and would like to provide point-to-point replies to your questions in the following.
* Q1. [**Bilinearity used in IF approximations**] The authors emphasize bilinear approximation but they do not explain what bilinearity applies to in the approximations. It would be good to explain it briefly to make it clear.
* A1. The bilinearity in our paper refers to the influence approximations being bilinear with respect to the sample-wise gradients. Consequently, the self-influence based on these bilinear metrics (Eq. (2), (5), (6), and (7)) is quadratic for sample-wise gradients. Intuitively, bilinear approximation methods can be understood as inner products of sample-wise gradients with additional consideration of curvature. We will revise the manuscript to incorporate this explanation in Line 118.
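A hedged toy illustration of this bilinear form (a damped-Hessian inner product of per-sample gradients; a sketch of the general IF-style score, not the paper's exact estimator):

```python
import numpy as np

def bilinear_influence(g_z, g_zp, H, damping=1e-3):
    """Classical IF-style score: an inner product of per-sample gradients
    through the (damped) inverse Hessian -- bilinear in (g_z, g_zp)."""
    P = H.shape[0]
    return float(g_z @ np.linalg.solve(H + damping * np.eye(P), g_zp))

H = np.diag([2.0, 0.5, 1.0])           # toy curvature
g = np.array([1.0, -1.0, 0.5])

# Bilinearity: scaling either gradient scales the score linearly,
# so self-influence is quadratic in the sample-wise gradient.
i1 = bilinear_influence(g, g, H)
i2 = bilinear_influence(2 * g, 2 * g, H)
print(i2 / i1)  # = 4.0: doubling the gradient quadruples self-influence
```

This quadratic dependence is exactly what makes such scores concentrate near zero for typical samples, the distributional bias the paper targets.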
* Q2. [**Rationale for using GE**] The overall motivation of GEX using dirac distribution in section 4.1 seems a bit unclear. It seems that GEX is highlighted by surfacing the connection between IF and Laplace Approximation (LA) and the limitations of p_{LA} but it is still unclear why GE distribution (Eq. 9) was chosen.
* A2. We use the GE distribution because GE is suitable for expressing the local geometry of the loss landscape around $ \theta^* $, similar to LA. However, unlike LA, GE does not overestimate loss deviations due to the singularity of the Hessian. To clarify this, we will revise the introduction of Section 4 (Lines 155-162) as follows:
To mitigate the distributional bias in Section 3, we propose a flexible IF approximation method using Geometric Ensemble (GE; [15]), named Geometric Ensemble for sample eXplanation (GEX). Here is a summary of how GEX is developed.
$$
\mathcal{I}\_\mathtt{Hess}
\overset{\texttt{Delinearization}}{\underset{\texttt{Section 4.1}}{\longrightarrow}}
\mathcal{I}\_\mathtt{LA}
\overset{\texttt{LA to GE}}{\underset{\texttt{Section 4.2}}{\longrightarrow}}
\mathcal{I}\_\mathtt{GEX}
$$
In Section 4.1, we ensure that the influence approximation is not a bilinear form in the gradient by replacing gradients in IF with sample-loss deviations. The theoretical foundation for this step is provided by our Theorem 1 below, which establishes a relationship between the IF and the Laplace approximation (LA; [36]). Moving on to Section 4.2, we change the parameter distribution used to compute the sample-loss deviation from LA to GE. This modification is necessary because GE is based on the local geometry of the loss landscape around $\theta^*$, similar to LA, while avoiding the overestimation of loss deviations caused by the singularity of the Hessian.
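The delinearization step can be illustrated with a one-parameter toy example (our own construction, not from the paper): at a stationary point the gradient vanishes, so any score that is bilinear in the gradient collapses to zero, while the actual sample-loss deviation does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-quadratic sample loss around theta_star = 0
loss = lambda th: np.log1p(th ** 2)        # nonlinear in theta
grad_at_star = 0.0                          # loss'(0) = 2*0/(1+0) = 0

thetas = rng.normal(0.0, 0.5, size=50000)  # parameter samples around theta_star

# Linearized deviation (bilinear-IF view): grad * (theta - theta_star)
lin_dev = grad_at_star * thetas
# Actual sample-loss deviation (the de-linearized quantity)
true_dev = loss(thetas) - loss(0.0)

print(np.mean(lin_dev ** 2), np.mean(true_dev ** 2))
# The linearized view reports zero self-influence here, while the
# nonlinear deviation does not -- the expressiveness the step recovers.
```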
* Q3. [**Elaborations for Figures 2-3**] It’s a bit challenging to follow the descriptions of Figures 2 and 3 since the authors refer both to influence and self-influence. Self-influence also means potentially mislabeled and it is unclear whether it is what the authors mean. Typical vs influential annotations on those figures are also a bit unclear. What does typical mean in this case? // Figure 2: Are 2a - 2d: axes annotations are missing. Perhaps it would be good to mention that these are histograms in advance or in figure description. // Figure 2: Are 2a - 2d computed for self-influence or influence w.r.t. Test examples ? Have we compared I_GEX with I_Hess for influence on test examples ? What does typical vs influential mean for self-influence ? High self-influence means that these are potentially mislabeled. So we want to say that the green ones are mislabeled ?
* A3. Figures 2-3 are all **histograms of self-influence**. In this setting, we do not measure influence w.r.t. test examples. Generally, "Typical samples" are those in which the presence or absence of individual samples has no significant impact on the decision boundary (i.e., low self-influence). In our setting, the typical samples are the two outer circle samples with relatively high density in Figure 2(a). Conversely, "Influential samples" refer to individual instances that substantially influence the decision boundary, resulting in high self-influence. Hence, influential samples correspond to the inner circle in Figure 2(a), demonstrating relatively low density. Note that mislabeled samples also greatly impact the decision boundary. We will modify Figure 2 (a) as described in G4 [**Modified Figure 2 (a)**] in the Global Rebuttal. We will also clarify this information with axis annotations in the caption of Figure 2 in the revised manuscript.
* Q4. [**Motivation of choosing Dirac delta**] Eq 9: parameter \psi is not explained in that section. Line 167: what’s the motivation of choosing Dirac delta distribution
* A4. $\psi \in \mathbb{R}^{P}$ is an arbitrary vector in the parameter space to denote the Dirac delta distribution and Gaussian distribution. We will clarify this in Line 167 of the revised version. We used the Dirac delta distribution to represent the empirical parameter distribution of GE. This is a common usage of the Dirac delta distribution, as mentioned in p.64 of [1]: "A common use of the Dirac delta distribution is as a component of an empirical distribution, ~"
[1] Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
* Q5. [**Comparison to TracIn**] In terms of evaluation were there any experiments made for the original tracin without random projections for the Table 1.
* A5. We refer to G3 [**Additional experiments for TracIn**] in the Global Rebuttal for these results.
* Q6. [**Limitations**] The limitations of the work are not discussed.
* A6. We discuss our method's limitations and broader impacts in Appendix G. We will clarify this information after the discussion in Section 4.3 (Practical advantages of GEX) in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal by Authors
Comment: Thank you very much for the detailed explanation. I increased the score by 1 point.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer TEqY,
Thank you very much for your constructive suggestions and positive evaluation of the significance of our paper! Your suggestion will be included to enhance the readability of our paper for readers without sufficient background knowledge.
Best regards,
The Authors of Paper 4926 | Summary: The paper identifies that existing influence functions suffer from a fundamental drawback due to their bilinear form. To address, this they propose an influence calculation that is nonlinear .
Strengths: The GEX influences seem to outperform all baselines when detecting label noise on CIFAR10-like datasets.
The approach is motivated by principled failures of existing influence functions due to factors such as singularity of the Gaussian.
Weaknesses: There is a lot of information pushed to the appendix that is referenced in the main text, to the extent that it is difficult to really understand without having the appendix at hand. The current main text was too sparse for me to really understand any particular component.
It seems the experiments largely focused on MNIST/SVHN/CIFAR10 variants. It would be nice to see some kind of breadth in application or scalability to more than CIFAR10.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I did not find it very clear what benefit the geometric ensemble brings to the table.
It would be nice to have an actual example using only open source checkpoints, since this is one of the purported benefits of the research.
In the limitations, the authors believe their method can expedite the training process of energy-efficient deep learning models.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: In the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review of our paper. We appreciate your feedback and would like to provide point-to-point replies to your questions in the following.
* Q1. [**Lack of details in main text**] There is a lot of information pushed to the appendix that is referenced in the main text, to the extent that it is difficult to really understand without having the appendix at hand. The current main text was too sparse for me to really understand any particular component.
* A1. We apologize for not including all the necessary content within the main text due to the length constraints. We understand that relying heavily on the appendix might affect the readability. Our appendix contains additional material, such as proofs for propositions discussed in the main text, complexity analysis, experimental setups, and additional experimental results. We use this supplementary content to support the key points covered in the main paper, including the problem we address (distributional bias in Section 3) and the rationale behind our approach (Section 4). If you have any suggestions regarding specific content from the appendix that should be moved to the main text, please share them with us. We will take your feedback into careful consideration while preparing the revised version.
* Q2. [**Scalability of GEX**] It seems the experiments largely focused on MNIST/SVHN/CIFAR10 variants. It would be nice to see some kind of breadth in application or scalability to more than CIFAR10.
* A2. Following reviewer sNxP's recommendation, we verify the scalability of GEX for various tasks. In fact, Table 2 in our paper provides evidence that the effectiveness of GEX in Table 1 scales well with larger datasets by showing that GEX outperforms the baselines in noisy label detection tasks on ImageNet-1K with ViT and MLP-Mixer. In the rebuttal phase, we further validate the scalability of GEX by extending the relabeling task to the ImageNet-1K setting. We refer to G1 [**Additional experiments: Relabeling**] in the Global Rebuttal for these results.
* Q3. [**Rationale for using GE**] I did not find it very clear what benefit the geometric ensemble brings to the table.
* A3. We use the GE distribution because GE is suitable for expressing the local geometry of the loss landscape around $ \theta^* $, similar to LA. However, unlike LA, GE does not overestimate loss deviations due to the singularity of the Hessian. To clarify this, we will revise the introduction of Section 4 (Lines 155-162) as follows:
To mitigate the distributional bias in Section 3, we propose a flexible IF approximation method using Geometric Ensemble (GE; [15]), named Geometric Ensemble for sample eXplanation (GEX). Here is a summary of how GEX is developed.
$$
\mathcal{I}\_\mathtt{Hess}
\overset{\texttt{Delinearization}}{\underset{\texttt{Section 4.1}}{\longrightarrow}}
\mathcal{I}\_\mathtt{LA}
\overset{\texttt{LA to GE}}{\underset{\texttt{Section 4.2}}{\longrightarrow}}
\mathcal{I}\_\mathtt{GEX}
$$
In Section 4.1, we ensure that the influence approximation is not a bilinear form in the gradient by replacing gradients in IF with sample-loss deviations. The theoretical foundation for this step is provided by our Theorem 1 below, which establishes a relationship between the IF and the Laplace approximation (LA; [36]). Moving on to Section 4.2, we change the parameter distribution used to compute the sample-loss deviation from LA to GE. This modification is necessary because GE is based on the local geometry of the loss landscape around $\theta^*$, similar to LA, while avoiding the overestimation of loss deviations caused by the singularity of the Hessian.
* Q4. [**Experiments using open-source checkpoints**] It would be nice to have an actual example using only open source checkpoints, since this is one of the purported benefits of the research.
* A4. Following the recommendation of reviewer sNxP, we tried to conduct additional experiments by extending the dataset pruning task in Section 5.4 to ImageNet-1K with MLP-Mixer. However, since only Mixer-B-16 and larger models, whose training is beyond our computational budget, are released in the official repository, we decided to pre-train the smaller Mixer-S-32 model and conduct experiments with it. The reproduced accuracy of Mixer-S-32 is 64.37%, which is higher than the published accuracy of 63.9% [1, 2]. We refer to G2 [**Additional experiments: Dataset pruning**] in the Global Rebuttal for these results.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! I appreciated the explanations and additional experiments scaling the approach and motivating the setting of open-source checkpoints. As both the questions and weaknesses were addressed, I have updated my score accordingly.
To help the authors clarify the main text, it is not strictly necessary that you bring appendix material to the main paper, instead just properly explain the referenced material. The proof and pseudocode references were fine, but I found the other references to the appendix that state "we do X in the appendix" with no motivation/reason to be largely unhelpful. For example, on L206 the cross entropy reference is just presented with no reason for it. The ablation referenced in L236, analysis referenced in L247, and cross influence referenced in L266 are just stated with no takeaway. The reference on L279 is at least properly motivated but again has no takeaway. These are just stated as is and come across as having no rhyme or reason.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer sNxP,
We are grateful for your comprehensive suggestions and your positive evaluation of the significance of our paper! In response to your feedback, we will clarify the rationale behind referencing the Appendix in our revised manuscript. We believe this clarification will greatly assist readers in resolving any queries that may arise while reading the paper.
Warm regards,
The Authors of Paper 4926 | Rebuttal 1:
Rebuttal: * G1. [**Additional experiments: Relabeling**] Our first additional experiment is the relabeling task presented in Section 5.2 on the ImageNet-1K environment with ViT and MLP-Mixer. To this end, we follow the relabeling process in Section 5.2 with the estimated influence in Table 2. The following table presents the relabeled test accuracy for GEX and EL2N (the best method for noisy label detection except ours).
* **Table A. Relabeled accuracy for mislabeled samples**
| | ViT-S-32 | MLP-Mixer-S-32 |
|:-------------------:|:-------------------:|:--------------:|
| Clean acc. | 67.83% | 64.37% |
| Noisy acc. | 63.42% | 61.84% |
| Relabeled with EL2N | 63.18% | 63.16% |
| Relabeled with GEX | **66.17%** | **63.45%** |
Table A shows that GEX can detect mislabeled samples that require relabeling more accurately than EL2N.
* G2. [**Additional experiments: Dataset pruning**] The second additional experiment is the dataset pruning task in Section 5.4, run on ImageNet-1K with MLP-Mixer. For this purpose, we reproduce Mixer-S-32 and estimate self-influence with GEX and with the EL2N score (whose scalability on the dataset pruning task was verified in [1]). Then, we prune the 512,466 samples (40%) with the smallest self-influence from ImageNet-1K and retrain neural networks on these pruned datasets. The following table presents the test accuracy after pruning for GEX and EL2N:
* **Table B. Test accuracy after dataset pruning**
| | MLP-Mixer-S-32 |
|:----------------:|:--------------:|
| Full sample acc. | 67.83% |
| Pruned with EL2N | 54.87% |
| Pruned with GEX | **56.34%** |
Similar to the results shown in Figure 4, GEX identifies prunable samples more effectively than EL2N on the large-scale ImageNet-1K dataset. Additionally, it is worth noting that EL2N cannot make use of open-source checkpoints and requires a computational cost of (10~20 epochs) x (number of checkpoints) of training from an initialized neural network. In summary, the better pruned accuracy and the lower computational cost further illustrate the effectiveness of GEX in scalable settings. We will include these results in the revised version.
[1] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." Advances in Neural Information Processing Systems 35 (2022): 19523-19536.
* G3. [**Additional experiments for TracIn**] The downstream task performance of TracIn and TracInRP is similar, as mentioned in the TracIn paper [1]. In the rebuttal phase, we additionally measure the performance of TracIn on the noisy label detection (Section 5.1) and relabeling (Section 5.2) tasks to verify this. Due to the high time complexity of computing sample-wise gradients, we do not measure the self-influence of TracIn repeatedly. The following tables present the noisy label detection performance and relabeled accuracy achieved by TracIn alongside the baselines (TracInRP and GEX).
* AUC (Area Under the Curve) and AP (Average Precision) results for different datasets
| AUC / AP | CIFAR-10 (synthetic) | CIFAR-100 (synthetic) | CIFAR-10-N | CIFAR-100-N |
|:---------------:|:---------------------------:|:---------------------------:|:---------------------------:|:---------------------------:|
| TracIn | 89.98 / 43.21 | 75.53 / 22.25 | 76.48 / 64.69 | 68.91 / 55.86 |
| TracInRP | 89.56 ± 0.14 / 44.26 ± 0.37 | 74.99 ± 0.25 / 21.62 ± 0.26 | 77.24 ± 0.45 / 65.17 ± 0.68 | 69.04 ± 0.28 / 56.41 ± 0.31 |
| GEX | 99.74 ± 0.02 / 98.31 ± 0.06 | 99.33 ± 0.03 / 96.08 ± 0.12 | 96.20 ± 0.03 / 94.89 ± 0.04 | 89.76 ± 0.01 / 86.30 ± 0.01 |
* Relabeled accuracy results for different datasets
| Relabeled acc. | CIFAR-10 (synthetic) | CIFAR-100 (synthetic) | CIFAR-10-N | CIFAR-100-N |
|:--------------:|:--------------------:|:---------------------:|:------------:|:------------:|
| TracIn | 91.24 | 72.07 | 68.36 | 54.87 |
| TracInRP | 90.82 ± 0.06 | 71.70 ± 0.15 | 68.12 ± 0.23 | 55.20 ± 0.06 |
| GEX | 93.54 ± 0.05 | 75.04 ± 0.10 | 73.94 ± 0.24 | 57.13 ± 0.10 |
In accordance with Figure 1 in [1], we found that neither TracIn nor TracInRP consistently outperforms the other.
The LOO counterfactual effect requires significant computational resources due to the necessity of LOO retraining for each sample. Consequently, we report the result of the LOO counterfactual only for the toy dataset in Figures 2-3. This computational challenge has been highlighted in various papers on Influence Functions [1, 2, 3]. To the best of our knowledge, no previous work used the LOO counterfactual effect for downstream tasks in Section 5.
[1] Pruthi, Garima, et al. "Estimating training data influence by tracing gradient descent." Advances in Neural Information Processing Systems 33 (2020): 19920-19930.
* G4. [**Modified Figure 2 (a)**] We also provide a modified Figure 2 (a) to clarify the definition of "Typical" and "Influential" in Figures 2-3.
Pdf: /pdf/8908b497484a8ae309b3e53e4f2b8814c8614279.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors point out that standard approximations of Influence Function (IF) suffer from performance degradation due to oversimplified influence distributions caused by their bilinear approximation, suppressing the expressive power of samples with a relatively strong influence. Therefore, they propose a new interpretation of existing IF approximations as an average relationship between two linearized losses over parameters sampled from the Laplace approximation (LA). By doing so, they highlight two limitations and propose GEX to address them. The proposed GEX outperforms standard IF approximations in downstream tasks, including noisy label detection, relabeling, dataset pruning, and data source separation.
Strengths: 1. This paper is well written. The details of motivation, methodology, experiments and proofs are described clearly.
2. The insight of this paper is quite reasonable and the proposed approach looks novel.
3. The proposed method is verified on a wide range of downstream tasks, including noisy label detection, relabeling, dataset pruning, and data source separation.
Weaknesses: 1. In Table 2, the gap between GEX and the other settings is very large. It seems that the other settings are not specifically designed for this task. Is this comparison meaningful?
2. Except for Table 2, the other experiments are executed on small and relatively easy datasets; I am not sure whether the conclusions from these results are solid enough.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I think noisy label detection combined with relabeling mislabeled samples is very useful in many practical scenarios. But the experiments are only executed on CIFAR, which is a quite small dataset. What is the result if we use GEX to detect noisy labels in ImageNet and relabel them?
2. How can this approach help in some practical scenarios?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses. Most experiments are executed on small and relatively easy datasets. The effectiveness of this approach on more challenging and larger benchmarks is not explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful review and constructive comments! We give point-by-point replies to your comments in the following.
* Q1. [**Scalability issue of other IF approximations**] In Table 2, the gap between GEX and the other settings is very large. It seems that the other settings are not specifically designed for this task. Is this comparison meaningful?
* A1. The Influence Function (IF), EL2N, and F-score are known to be applicable to noisy label detection [1, 2, 3], which can be confirmed by Table 1 of our paper. Table 2 aims to investigate whether the findings from Table 1 scale to large-scale datasets. This table shows that the other influence approximations exhibit significant performance degradation compared to GEX. A similar scalability issue was pointed out in [4] in the context of dataset pruning. The scalability issue also exists for non-influence approximation methods: the F-score fails in cases where certain samples were never correctly predicted during pre-training, as mentioned in Lines 283-284. Since EL2N is based on the Brier score (i.e., the L2 distance between prediction and label) at the early training phase, there is performance degradation in the ImageNet setting, where training is slow. However, GEX does not suffer from such a scalability issue.
[1] Koh, Pang Wei, and Percy Liang. "Understanding black-box predictions via influence functions." International conference on machine learning. PMLR, 2017.
[2] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. Advances in Neural Information Processing Systems, 34:20596–20607, 2021
[3] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.
[4] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." Advances in Neural Information Processing Systems 35 (2022): 19523-19536.
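For readers unfamiliar with the scores compared above, a hedged sketch of the EL2N score of Paul et al. [2] — the L2 distance between the model's softmax prediction and the one-hot label, computed early in training — might look like the following (illustrative code with made-up logits, not the authors' implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    exps = [math.exp(l - max(logits)) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def el2n(logits, label, num_classes):
    """EL2N-style score: L2 distance between prediction and one-hot label."""
    probs = softmax(logits)
    one_hot = [1.0 if c == label else 0.0 for c in range(num_classes)]
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(probs, one_hot)))

# A confidently correct sample scores low; a confidently wrong one scores high.
print(el2n([5.0, 0.0, 0.0], label=0, num_classes=3))
print(el2n([5.0, 0.0, 0.0], label=1, num_classes=3))
```

When training is slow, as in the ImageNet setting described above, early-phase predictions remain close to uniform, and such scores separate samples less sharply.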
* Q2. [**Scalability of GEX**] Except for Table 2, the other experiments are executed on small and relatively easy datasets; I am not sure whether the conclusions from these results are solid enough. // I think noisy label detection combined with relabeling mislabeled samples is very useful in many practical scenarios. But the experiments are only executed on CIFAR, which is a quite small dataset. What is the result if we use GEX to detect noisy labels in ImageNet and relabel them?
* A2. Following the recommendation of reviewer 8KEq, we further validate the scalability of GEX by extending the relabeling task (Section 5.1) and dataset pruning task (Section 5.2) to the ImageNet-1K setting. We refer to G1 [**Additional experiments: Relabeling**] and G2 [**Additional experiments: Dataset pruning**] in the Global Rebuttal for these results.
* Q3. [**Practical scenarios for GEX**] How can this approach help in some practical scenarios?
* A3. The tasks of noisy label detection, relabeling, and dataset pruning presented in Section 5 have several practical applications. One of the applications of GEX is to prune samples with minimal impact on training, allowing for more efficient training than conventional neural scaling laws, as described in [1]. Also, GEX can effectively detect and correct mislabeled samples in datasets obtained from (noisy) crowd annotations, where various people contribute to labeling (as demonstrated in Section 5.1-5.2). For instance, recent research in the medical domain, particularly on chest X-rays [2, 3], shows that annotations made by doctors in public datasets are sometimes incorrect [2]. Thus, models trained with these datasets can be improved by retraining or fine-tuning with corrected labels [3]. Considering that GEX improves performance in a general relabeling benchmark, it is reasonable to expect that it will also enhance performance in medical applications.
[1] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." Advances in Neural Information Processing Systems 35 (2022): 19523-19536.
[2] Tang, Siyi, et al. "Data valuation for medical imaging using Shapley value and application to a large-scale chest X-ray dataset." Scientific reports 11.1 (2021): 8366.
[3] Kim, Doyun, et al. "Accurate auto-labeling of chest X-ray images based on quantitative similarity to an explainable AI model." Nature communications 13.1 (2022): 1867.
[4] Wang, Xiaosong, et al. "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thanks for your rebuttal. My concerns have been addressed and I would like to raise my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 8KEq,
We truly appreciate your suggestions and raising the score of our paper!
Best regards,
Authors of paper 4926 | null | null | null | null | null | null |
InstanT: Semi-supervised Learning with Instance-dependent Thresholds | Accept (poster) | Summary: This paper introduces a new thresholding methodology for Semi-Supervised Learning. The paper proposes InstanT, which uses an instance-dependent threshold for each unlabeled sample (Fig. 1). This algorithm shows improvements on multiple semi-supervised benchmark datasets.
Strengths: Pros.
- This paper is well-written and easy to follow.
- This paper is well-motivated (Fig 1).
- This paper proposes InstanT, and it is very well derived.
- This paper shows extensive experiments (Especially it seems to train all other semi-supervised learning baselines using ViT models) and shows the effectiveness of this method.
Weaknesses: Cons.
- (Minor) This paper uses a pre-trained ViT model. In pre-training, I think the classes in CIFAR-10/CIFAR-100 could be a subset of the pre-training dataset. If these classes are already trained, they could be already well clustered and easy to train with a very small labeled dataset. (The paper also shows a performance improvement in the original training setting in the Appendix.)
- (Minor) It would be great if the paper reported supervised learning results (fully labeled dataset).
- (Major) This paper evaluates the algorithm on very small datasets (CIFAR-10/CIFAR-100/STL-10). Therefore, I don't know how robustly it works on large datasets such as ImageNet. Nowadays, most papers evaluate their algorithms on ImageNet. (In this case, it seems the model should be trained from scratch, not using a pre-trained ViT model.)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our contributions. We are grateful for your constructive feedback and positive remarks about our work. We hope we can address your questions and concerns:
> **W1: Are the classes from CIFAR-10/100 a subset of the pre-training dataset?**
Thank you for this very insightful concern. The pre-training dataset is ImageNet-1k, and there indeed are some class overlaps between CIFAR-100 and ImageNet-1k. However, we would like to raise several points showing that our results are indeed convincing:
- the distributions of features are still notably different (different resolution levels, different feature spaces) between CIFAR and ImageNet;
- even though the pre-trained ViT might have some prior knowledge of CIFAR-10/100, this does not invalidate using the pre-trained ViT as the backbone model, because all the baseline methods run on the same backbone; the fact that InstanT can still obtain better performance shows that its improvement is valid and non-trivial;
- lastly, as you mentioned, we can also observe performance improvements in train-from-scratch cases in the Appendix. We also supplement you with some of our newest results comparing with newly added baselines when training from scratch:
| Methods | CIFAR-10(10) | CIFAR-10(40) | CIFAR-10(250) | CIFAR-100(400) | CIFAR-100(2500) |
| ------ | ------ | ------ | ------ | ------ | ------ |
| SoftMatch | 0.7557 | 0.9464 | 0.9517 | 0.5057 | 0.6622 |
| FreeMatch | 0.9193 | **0.9512** | 0.9506 | 0.4920 | 0.6659 |
| InstanT | **0.9250** | 0.9510 | **0.9525** | **0.5217** | **0.6709** |
Table 5-1
Based on the above justifications, we hope you find our evaluations to be sensible and our improvements are significant.
> **W2: Did not include fully-supervised results.**
Thank you for this valuable suggestion. We have supplemented the fully-supervised results and will include them in our paper; we also present them here to address your concern:
Pre-trained ViT results; settings are aligned with Table 1 of the paper.
| CIFAR-10 | CIFAR-100 | STL-10 |
| ------ | ------ | ------ |
| 0.991 ±0.00 | 0.9152 ±0.00 | 0.8100 ±0.03 |
Table 5-2
We can observe that STL-10 exhibits results that are worse than the SSL baselines. This is because, unlike the CIFAR datasets, the vast majority of STL-10 samples do not come with ground-truth labels, so fully-supervised models trained on STL-10 can only access a small proportion of labels.
> **W3: Did not include large-scale datasets.**
Per your suggestion, we have implemented experiments on ImageNet-100. And, as you have pointed out, we did not use the pre-trained ViT, since that would be ill-posed. Results are trained from scratch with a ResNet-50 for 500,000 iterations on ImageNet-100 using random seed {0}.
| Methods | Top-1 Accuracy | F-1 Score |
| ------ | ------ | ------ |
| FixMatch | 0.6624 | 0.6559 |
| AdaMatch | 0.6860 | 0.6822 |
| FreeMatch | 0.6578 | 0.6529 |
| InstanT | **0.6994** | **0.6972** |
Table 5-3
From Table 5-3, we can observe that InstanT maintains robust performance on real-world large-scale datasets, surpassing several SOTA baselines. We will add more comprehensive results from this experiment to the main paper.
Once again, we would like to thank the reviewer for your positive remarks and for spending your valuable time reviewing our paper. We welcome any new questions & suggestions the reviewer might have after the rebuttal.
---
Rebuttal Comment 1.1:
Comment: All of my concerns are addressed. So, I changed my score. | Summary: This paper focuses on semi-supervised learning (SSL) and proposes the study of instance-dependent thresholds to make incorrect pseudo-labels have higher thresholds.
Strengths: This paper studies an important problem in SSL, and tries to propose a dynamic and adaptive threshold to guarantee the pseudo-label quality for instances. Also, it gives a theoretical analysis on estimating the threshold. The experiments are sufficient and the results surpasses SOTA. In the results, it is obvious to see that the algorithm has a considerable improvement in Top-1 accuracy. The ablation study shows that every part of this algorithm is useful.
Weaknesses: Theorem 2 is proved with the Bayes rule under Assumption 2. But if the dynamic value k_t > 0, Theorem 2 does not hold. It is incorrect that P(X>=a+b) >= P(X>=a) for any b>0. I think there may be some mistakes in Assumption 2 and Theorem 2. The theorems and analyses based on Assumption 2 may be wrong.
In Section 4.1 the authors do not give any theoretical reasons for the definitions of k_t and \tau(x_u). Does k_t satisfy Assumption 2?
In Equation 8, the authors use an approximation \hat{T}(x) to estimate T(x). But the analysis is based on the real value of T(x). If the label noise is large or the number of samples is not enough, the error between the real value of T(x) and the approximation \hat{T}(x) cannot be ignored. Some analysis should be provided here.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: As mentioned in the part of weaknesses, the authors may make a mistake in Theorem 2, which makes that the paper is not convincing. The theoretical analysis part of the paper should be considered more seriously.
The dynamic threshold is determined by the transition matrix T(x). But in practice it is impossible to estimate it without any bias. The bias may influence on the final result. Is it possible to provide an error bound in the analysis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully going over our proof and posing challenges to it. We strongly agree that the theoretical part of the paper should be rigorous, and we believe that your questions and suggestions will further refine our paper.
> **W1: Proof regarding to Theorem 2.**
For simplicity, let's denote the event $k=h^{*}(\mathbf{x}_u)$ as event A, $\hat{P}(\hat{Y}=k|X=\mathbf{x}_u) \geq \tau$ as event B, and $\hat{P}(\hat{Y}=k|X=\mathbf{x}_u) \geq \tau + \kappa$ as event C; without loss of generality, the iteration $t$ will be omitted.
First of all, you're correct: for any $\kappa \geq 0$, $P(B) \geq P(C)$ holds. However, the relationship between the joint probabilities $P(A,B)$ and $P(A,C)$ is not known, since events A, B, and C are not independent. Therefore, we must first decompose them into conditional probabilities and make assumptions on their relationship (Assumption 2).
To understand Assumption 2 with a concrete example, consider an SSL dataset with a total of 1000 unlabeled samples. With a non-dynamic fixed threshold $\tau$, the prediction confidence of every unlabeled sample surpasses this threshold, so P(B) = 1, and only 500 samples are assigned correct pseudo-labels, so P(A|B) = 0.5. If we increase the threshold by some non-zero constant $\kappa$, now only 600 samples surpass this threshold, and among them there are still 500 samples assigned correct pseudo-labels, so P(C) = 0.6 and P(A|C) = 5/6. Note that P(A,B) = 0.5, and P(A,C) = 0.5 as well, which satisfies Assumption 2.
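The numbers in this example can be verified mechanically; the short check below (illustrative only) recomputes the joint probabilities from the stated counts:

```python
# Counts from the example above: 1000 unlabeled samples in total.
n = 1000
p_B = 1000 / n            # every sample clears the fixed threshold tau
p_A_given_B = 500 / 1000  # half of those carry correct pseudo-labels
p_C = 600 / n             # only 600 clear the raised threshold tau + kappa
p_A_given_C = 500 / 600   # the same 500 correct samples remain among them

p_AB = p_A_given_B * p_B  # joint probability P(A, B)
p_AC = p_A_given_C * p_C  # joint probability P(A, C)
print(round(p_AB, 10), round(p_AC, 10))  # equal joint probabilities
```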
Nevertheless, we are also putting Assumption 2 under the microscope, examining its rigor and necessity; it's possible that we might need to modify Assumption 2 and make more rigorous justifications.
> **W2: Does $\kappa_t$ in section 4.1 satisfies Assumption 2?**
Thanks for pointing this question out. $\kappa_t$ is assumed to satisfy Assumption 2 throughout the paper; we will explicitly clarify this point in the final version of the paper.
> **W3: Did not account for the estimation error of $T(x)$**
Thank you for this highly constructive suggestion. We have improved our proof by including the estimation error of $T(x)$. Our improved proof can be found in the PDF file submitted with the general response at the top.
As for the estimation error bound of $T(x)$, it has been extensively studied and is being continuously improved; recent papers indicate that, even with a small number of samples, the estimation error can still be reduced to a small level. Lastly, and more importantly, empirical results on challenging cases (where the estimation error of $T(x)$ is larger) still suggest that our method can obtain more robust performance than SOTA baselines.
Furthermore, we wish to underscore our central contribution, which lies in the introduction and estimation of instance-level label errors. This forms the foundation for deriving instance-dependent thresholds in SSL. We view our work as an initial exploration into the possibility of instance-dependent thresholds in SSL.
Once again, we would like to express our appreciation to the reviewer for carefully checking our proof and results. We welcome any new questions & suggestions you might still hold after the rebuttal.
---
Rebuttal 2:
Title: Invitation to the rolling discussion
Comment: Dear reviewer kWe6, we hope our rebuttal has satisfactorily addressed your concerns. We are looking forward to discussing with you during the rolling discussion phase, for we genuinely feel that your valuable insights can undoubtedly further enhance our paper. | Summary: This paper presents a new approach for selecting confident examples called InstanT, which uses instance-dependent thresholds for assigning pseudo-labels to unlabeled data. Unlike existing methods that apply the same threshold to all samples, InstanT considers the instance-level ambiguity and error rates of pseudo-labels, assigning higher thresholds to instances more likely to have incorrect pseudo-labels. The paper demonstrates that this approach provides a probabilistic guarantee for the correctness of the assigned pseudo-labels. This innovative method may offer a new perspective on SSL.
Strengths: 1.The paper finds a significant and challenging problem in semi-supervised learning (SSL) that has not been adequately tackled by existing methods. Traditional SSL methods typically use a single loss threshold to select confident examples, implicitly assuming that examples with the same loss have the same likelihood of pseudo-label correctness. However, this assumption does not always hold, as there can be hard but confident examples that have larger loss values but correct pseudo-labels.
2.The innovation of this paper is noteworthy. This paper innovatively proposes to estimate the probability of examples being incorrect and applies instance-dependent thresholds based on these estimates. This approach is more nuanced and potentially more effective, as it takes into account the individual characteristics of each example. It is the first to propose the estimation of instance-dependent thresholds in SSL. This novel approach could open up new avenues for research in the field.
3.The authors also provide sufficient theoretical analysis about their proposed method. They present a theorem that shows that for samples that satisfy their instance-dependent threshold function, the likelihood of the pseudo-labels being correct is lower-bounded. This provides a solid foundation for the proposed method and helps to convince readers of its assumption and correctness.
Weaknesses: 1.Certain aspects of the paper could benefit from further explanation and clarification. Specifically, the relationship between the Quality-Quantity Trade-off and the effective dynamic value is not clearly articulated in the main body of the paper. These are key components of the proposed method, and their interaction could significantly impact the performance of the method.
2.The paper does not sufficiently discuss the limitations of the proposed method. Every method has its limitations and potential drawbacks, and a thorough discussion of these is crucial for a balanced and comprehensive presentation of the work.
3.The empirical improvement of the proposed method is marginal when the amount of labeled data increases. This suggests that the method's performance may not scale well with larger labeled datasets.
4.The experimental evaluation of the method is based on only three datasets. This limited number of datasets may not provide a comprehensive evaluation of the method's performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.What might be the potential drawbacks or limitations of this method? How could these impact its applicability or performance in certain scenarios or with certain types of data?
2.The experimental evaluation of the method is based on only three datasets. Could the authors explain their choice of datasets and how representative these datasets are of the types of data the method would encounter in real-world applications? Would the performance of the method vary significantly if tested on other datasets?
3.The authors may need to clarify the relationship between the Quality-Quantity Trade-off and the effective dynamic value. How do these two factors interact within the context of their proposed method?
4.The paper discusses the balance between the quality and quantity of pseudo-labels. Could the authors provide some intuition of how this balance can be achieved in practice?
5.This paper is highly related to the problem of learning with instance-dependent label noise. To clearly demonstrate the effectiveness, can authors provide a comparison with methods for handling instance-dependent label noise?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main paper seems to lack a comprehensive discussion on the limitations of the proposed method. It would be beneficial for the authors to conduct a thorough review of their method to identify any potential limitations. I suggest including a separate section in the main paper dedicated to discussing these limitations, which would provide a more balanced and complete perspective of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our contributions and posing a wide range of valuable questions & suggestions, all of which have been very inspiring to us. We sincerely hope that our response can address your questions and concerns:
> **W1 & Q3: Relationship between the Quality-Quantity trade-off and effective dynamic thresholds.**
We apologize for the lack of clarity in our main paper, and you're absolutely correct; this concept holds significant importance. Let's begin by considering the correlation between dynamic thresholds and the quality-quantity trade-off. Dynamic thresholds have now emerged as the primary solution to the quality-quantity trade-off. In the initial training stages, an excessively high fixed threshold leads to excessive filtration of samples, thereby compromising quantity. Conversely, during the later training stages, a fixed threshold set too low fails to effectively filter any samples, thereby compromising quality.
In addition, to better understand and constrain the dynamic threshold, we define an "effective dynamic threshold" (Assumption 2), whose left-hand side can be viewed as the gain in pseudo-label quality after introducing $\kappa$, and whose right-hand side can be viewed as the loss in pseudo-label quantity after introducing $\kappa$. So we are assuming that, with an effective dynamic threshold, the increase in pseudo-label quality must at least match the loss in pseudo-label quantity.
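As a generic illustration of this trade-off (a sketch of plain confidence thresholding, not InstanT's instance-dependent rule), one can contrast a fixed threshold with one that ramps up over training: a lenient early threshold admits many pseudo-labels (quantity), while a strict late threshold admits only confident ones (quality):

```python
def select_pseudo_labels(confidences, threshold):
    """Keep indices of unlabeled samples whose confidence clears the threshold."""
    return [i for i, c in enumerate(confidences) if c >= threshold]

def dynamic_threshold(step, total_steps, low=0.5, high=0.95):
    """Ramp the threshold up over training: lenient early (favoring quantity),
    strict late (favoring quality). The 0.5/0.95 endpoints are illustrative."""
    return low + (high - low) * step / total_steps

confidences = [0.40, 0.60, 0.80, 0.97]
early = select_pseudo_labels(confidences, dynamic_threshold(0, 100))
late = select_pseudo_labels(confidences, dynamic_threshold(100, 100))
print(early, late)  # more samples admitted early than late
```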
> **W2 & Q1: Did not discuss the limitations of InstanT & when InstanT could fail.**
Thank you for this constructive suggestion, here we will discuss some of the potential limitations of InstanT, and we will include them in the official version of our paper.
- When there is minimal label error generated by the classifier, InstanT has no significant differences from other SSL methods. The instance-dependent thresholds generated by InstanT become trivial and simply reduce to $\kappa$.
- InstanT is subject to the influence of transition matrix estimation; if, for some reason, the estimation of the transition matrix is extremely poor, this could potentially hinder the thresholding of InstanT.
Therefore, InstanT could potentially obtain no significant improvements on simpler datasets or with abundant labeled sets, for instance, on the SVHN dataset, or on the CIFAR-10 dataset with abundant labeled samples.
> **W3: Performance of InstanT on larger labeled datasets.**
Thank you for raising this concern; this has been a commonly mentioned issue. We would like to point out that, since learning on larger labeled datasets is relatively easy, the performance limit on larger labeled datasets has almost been fully reached and converges towards the fully-supervised results. This means the performance gap will almost always be marginal on larger labeled datasets; similar results are reported in recent SSL papers as well [1,2].
> **W4 & Q2: Number of datasets is limited.**
Thank you for the suggestion. The datasets we selected are the most commonly used benchmarks in SSL with an adequate level of difficulty, all of which are widely acknowledged datasets for simulating real-world problems. In addition, to fully address your concern, we also conducted experiments on ImageNet-100, with 100 labeled samples per class, to evaluate our method. All results are obtained from a ResNet-50 trained from scratch for 500,000 iterations on ImageNet-100 using random seed {0}.
| Methods | Top-1 Accuracy | F-1 Score |
| ------ | ------ | ------ |
| FixMatch | 0.6624 | 0.6559 |
| AdaMatch | 0.6860 | 0.6822 |
| FreeMatch | 0.6578 | 0.6529 |
| InstanT | **0.6994** | **0.6972** |
Table 3-1
As suggested in Table 3-1, InstanT also showcases strong performances on large-scale real-world datasets such as ImageNet.
> **Q4: Intuition on how InstanT better balances between quantity-quality trade-off.**
Basically, existing methods attempt to find a better balance in the quantity-quality trade-off by assigning dynamic thresholds that depend on the training progress. InstanT further develops this concept by utilizing the simple intuition that "some unlabeled samples are more likely to have incorrect pseudo-labels than others". Once we can determine which samples are more likely to have incorrect pseudo-labels, InstanT adjusts its label-assignment threshold based on that likelihood, increasing the threshold level for samples with a larger noise probability.
> **Q5: Comparison with methods handling instance-dependent label noise.**
Here we implement PTD [3] to test its performance in SSL. Our implementation strategy is for PTD to leverage labeled samples as anchor points to learn the transition matrix. We also apply two different settings for a fair comparison, where PTD adopts fixed and dynamic thresholds to choose pseudo-labels (noisy labels), respectively. The version where PTD uses a dynamic threshold will be referred to as PTD(DT). Since PTD requires the number of anchor points to be larger than $C\times M$, where $C$ is the number of classes and $M$ is the number of parts (a hyper-parameter), the number of labels per class must be larger; hence we only evaluate on CIFAR-10(250).
| Methods | CIFAR-10(250) |
| ------ | ------ |
| PTD | 0.9723±0.04 |
| PTD(DT) | 0.9711±0.01 |
| InstanT | **0.9808±0.00** |
From the results, we can observe that PTD also achieves good results, but it cannot be used in cases where the number of labels per class is limited. The main difference between PTD(DT) and InstanT is their thresholding method; the better performance of InstanT further verifies the effectiveness of instance-dependent thresholds.
Full reference info is omitted due to the character limit.
[1] Freematch: Self-adaptive thresholding for semi-supervised learning
[2] Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning
[3] Part-dependent label noise: Towards instance-dependent label noise
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. The response addresses my concerns well. I keep my score on acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for responding to our rebuttal!
Comment: Dear reviewer Cia8, we are happy to know that we have addressed your concerns well, and we're truly grateful for your kind praise of our work. We're committed to continuously refining our paper; thus, we welcome any new insights and suggestions that you may have during the rolling discussion phase.
Strengths: 1. This paper proposes an Instance-dependent Thresholding strategy for semi-supervised learning. Also, the proposed method can vouch for the reliability of pseudo-labels it assigns with a theoretical guarantee.
2. This paper is easy to read and well organized.
3. The experimental results and the proof of Theorems in this paper seem to be solid.
Weaknesses: 1. This paper is devoted to addressing the SSL problem with dynamic thresholding. However, there is a lack of some SOTA methods to be compared with the proposed method. For example,
[1] Guo et al., Class-imbalanced semi-supervised learning with adaptive thresholding, ICML, 2022.
[2] Yang et al., Class-aware contrastive semi-supervised learning, CVPR, 2022.
[3] Wang et al., Freematch: Self-adaptive thresholding for semi-supervised learning, ICLR, 2023.
[4] Chen et al., Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning, ICLR, 2023.
2. Most previous SSL methods have reported their performance on both datasets: SVHN and ImageNet. It is therefore suggested that the authors supplement some experiments to evaluate the generalization performance of the proposed method.
3. It is suggested to omit some symbols whose meanings have been explained in the previous context, so as to increase the brevity of this paper.
4. The proposed method is motivated by the learning with noisy labels. Therefore, it is necessary to review some related methods to give readers a whole picture.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of this paper should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your invaluable feedback and suggestions, all of which will significantly contribute to enhancing our work. Please find our responses to your concerns below:
> W1: Lack of comparison with SOTA (FreeMatch, SoftMatch, etc.)
Thank you for raising this question. As you suggested, we have conducted comprehensive evaluations against the two most recent SOTA methods, FreeMatch [1] and SoftMatch [2].
First, we present the results under conventional settings [1,2], where all methods are trained from scratch with WRN28-2 and fixed to random seed 0.
| Methods | CIFAR-10(10) | CIFAR-10(40) | CIFAR-10(250) | CIFAR-100(400) | CIFAR-100(2500) |
| ------ | ------ | ------ | ------ | ------ | ------ |
| SoftMatch | 0.7557 | 0.9464 | 0.9517 | 0.5057 | 0.6622 |
| FreeMatch | 0.9193 | **0.9512** | 0.9506 | 0.4920 | 0.6659 |
| InstanT | **0.9250** | 0.9510 | **0.9525** | **0.5217** | **0.6709** |
Table 2-1
Under conventional settings, we can observe that InstanT shows strong performance against SOTA baselines, especially in the most challenging cases, e.g., CIFAR-10(10) and CIFAR-100(400), indicating that when there are many label errors, InstanT can filter them out more effectively.
We also present the results with pre-trained ViT, settings are aligned with Table 1 from our paper.
| Methods | CIFAR-10(10) | CIFAR-10(40) | CIFAR-10(250) | CIFAR-100(200) | CIFAR-100(400) | STL-10(10) | STL-10(40) |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| SoftMatch | 0.8003±0.09 | 0.9795±0.01 | 0.9814±0.00 | 0.7057±0.01 | 0.7821±0.01 | 0.6510±0.09 | 0.8370±0.04 |
| FreeMatch | 0.7721±0.05 | **0.9811±0.00** | **0.9819±0.00** | **0.7629±0.02** | **0.7938±0.00** | 0.6230±0.14 | 0.8496±0.03 |
| InstanT | **0.8732±0.10** | 0.9793±0.00 | 0.9808±0.01 | 0.7417±0.00 | 0.7880±0.00 | **0.6939±0.07** | **0.8509±0.03** |
Table 2-2
From Table 2-2, we can observe that, while InstanT obtains the best results in only 3 of 7 cases, its average improvement is considerably larger than FreeMatch's, and in the cases where it underperforms, the gaps are relatively marginal.
> **W2: Lack of results on SVHN and ImageNet.**
Thank you for raising this concern. We have added experiments on ImageNet-100, with 100 labeled samples per class, to evaluate our method against selected SOTA baselines. All results are obtained using a ResNet-50 trained for 500,000 iterations with random seed 0.
| Methods | Top-1 Acc. | F-1 Score |
| ------ | ------ | ------ |
| FixMatch | 0.6624 | 0.6559 |
| AdaMatch | 0.6860 | 0.6822 |
| FreeMatch | 0.6578 | 0.6529 |
| InstanT | **0.6994** | **0.6972** |
Table 2-3
As we can observe from Table 2-3, InstanT shows strong generalization capability and scalability on large benchmarks such as ImageNet-100, surpassing a range of SOTA baseline methods.
As for the SVHN dataset, since it is considered a "simpler" case, running InstanT on SVHN cannot effectively differentiate it from other methods. Therefore, given limited time and computational resources, we omit these results here. Alternatively, if the reviewer still deems it necessary, we could showcase these results during the rolling discussion phase.
> **W3: Omit previously defined symbols.**
We thank the reviewer for posing this feedback. Indeed, for brevity, we will avoid over-defining symbols in the official version of our paper.
> **W4: Lack of review for label noise papers.**
Here we give a concise overview of the relevant label-noise learning papers; a more comprehensive review will be added to the main paper at a later stage.
The most relevant line of work is the modeling of label noise. The pioneering work introduced the concept of learning with class-dependent label noise, which has been widely recognized [3,4,5]. Noteworthy strategies in this area include anchor-point estimation [6], end-to-end estimation [7], mixture proportion estimation [3], etc.
Meanwhile, another line of research has concentrated on tackling instance-dependent label noise [8], which reflects a more realistic scenario. In this regard, notable works include the part-dependent anchor-points [9] and modeling through DNNs [10].
Once again, we would like to thank the reviewer for spending your valuable time reviewing our paper, and we welcome any new questions & suggestions the reviewer might have after rebuttal.
- [1] Wang, Yidong, et al. "Freematch: Self-adaptive thresholding for semi-supervised learning." arXiv preprint arXiv:2205.07246 (2022).
- [2] Chen, Hao, et al. "Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning." arXiv preprint arXiv:2301.10921 (2023).
- [3] Scott, Clayton, Gilles Blanchard, and Gregory Handy. "Classification with asymmetric label noise: Consistency and maximal denoising." Conference on learning theory. PMLR, 2013.
- [4] Natarajan, Nagarajan, et al. "Learning with noisy labels." Advances in neural information processing systems 26 (2013).
- [5] Patrini, Giorgio, et al. "Making deep neural networks robust to label noise: A loss correction approach." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
- [6] Liu, Tongliang, and Dacheng Tao. "Classification with noisy labels by importance reweighting." IEEE Transactions on pattern analysis and machine intelligence 38.3 (2015): 447-461.
- [7] Li, Xuefeng, et al. "Provably end-to-end label-noise learning without anchor points." International conference on machine learning. PMLR, 2021.
- [8] Cheng, Jiacheng, et al. "Learning with bounded instance and label-dependent label noise." International conference on machine learning. PMLR, 2020.
- [9] Xia, Xiaobo, et al. "Part-dependent label noise: Towards instance-dependent label noise." Advances in Neural Information Processing Systems 33 (2020): 7597-7610.
- [10] Yang, Shuo, et al. "Estimating instance-dependent label-noise transition matrix using DNNs." (2021).
---
Rebuttal Comment 1.1:
Title: Thanks for your detailed response.
Comment: Thank you for carefully responding to my comments. I have read your rebuttals to all reviews. Overall, I am convinced that this will be a valuable contribution to NeurIPS 2023 and I will stay with my original rating of Weak Accept.
---
Reply to Comment 1.1.1:
Title: Thank you for responding to our rebuttal!
Comment: Dear reviewer zzrr, we sincerely thank you for reading through all of our rebuttals, we are strongly encouraged by your acknowledgments of our contributions! We're committed to continuously refining our paper, thus we welcome any new insights and suggestions that you may still have during the rolling discussion phase. | Rebuttal 1:
Rebuttal: Dear reviewers, please find our general responses to some commonly asked questions. Due to character limits in some of the individual rebuttals, we kindly refer you to our responses here:
> **GA1: Comparisons with more recent baselines**
We have included comparisons with two recent baselines, FreeMatch (ICLR'23) and SoftMatch (ICLR'23). First, we present the results under conventional settings [1,2], where all methods are trained from scratch with WRN28-2 and fixed to random seed 0.
| Methods | CIFAR-10(10) | CIFAR-10(40) | CIFAR-10(250) | CIFAR-100(400) | CIFAR-100(2500) |
| ------ | ------ | ------ | ------ | ------ | ------ |
| SoftMatch | 0.7557 | 0.9464 | 0.9517 | 0.5057 | 0.6622 |
| FreeMatch | 0.9193 | **0.9512** | 0.9506 | 0.4920 | 0.6659 |
| InstanT | **0.9250** | 0.9510 | **0.9525** | **0.5217** | **0.6709** |
Table 0-1
Under conventional settings, we can observe that InstanT shows strong performance against SOTA baselines, especially in hard cases (e.g., CIFAR-10(10) and CIFAR-100(400)), indicating that when there are many label errors, InstanT can filter them out more effectively.
We also present the results with pre-trained ViT; settings are aligned with Table 1 of our paper.
| Methods | CIFAR-10(10) | CIFAR-10(40) | CIFAR-10(250) | CIFAR-100(200) | CIFAR-100(400) | STL-10(10) | STL-10(40) |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| SoftMatch | 0.8003±0.09 | 0.9795±0.01 | 0.9814±0.00 | 0.7057±0.01 | 0.7821±0.01 | 0.6510±0.09 | 0.8370±0.04 |
| FreeMatch | 0.7721±0.05 | **0.9811±0.00** | **0.9819±0.00** | **0.7629±0.02** | **0.7938±0.00** | 0.6230±0.14 | 0.8496±0.03 |
| InstanT | **0.8732±0.10** | 0.9793±0.00 | 0.9808±0.01 | 0.7417±0.00 | 0.7880±0.00 | **0.6939±0.07** | **0.8509±0.03** |
Table 0-2
From Table 0-2, it is evident that even though InstanT achieved the highest scores in only 3 of 7 cases, its average improvement is considerably more pronounced than that of FreeMatch. Furthermore, in the situations where InstanT underperforms, the gaps are relatively marginal.
> **GA2: Experiments on large-scale datasets (ImageNet)**
We have added experiments on ImageNet-100, with 100 labeled samples per class, to evaluate our method against selected baseline methods. All results are obtained with a ResNet-50 trained for 500,000 iterations with random seed 0.
| Methods | Top-1 Accuracy | F-1 Score |
| ------ | ------ | ------ |
| FixMatch | 0.6624 | 0.6559 |
| AdaMatch | 0.6860 | 0.6822 |
| FreeMatch | 0.6578 | 0.6529 |
| InstanT | **0.6994** | **0.6972** |
Table 0-3
As we can observe from Table 0-3, InstanT showcases strong performances on large-scale real-world datasets as well.
References:
- [1] Freematch: Self-adaptive thresholding for semi-supervised learning
- [2] Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning
- [3] Usb: A unified semi-supervised learning benchmark for classification
- [4] Class-imbalanced semi-supervised learning with adaptive thresholding
- [5] Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning
Pdf: /pdf/d4e4362528f974f4013f4194b9bb38ed92053139.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: - Assumption for instance-dependent threshold setting and theoretical proof based on it
- Transition matrix modeling (estimator design) to reduce label error through instance-dependent threshold function
- This paper shows good performance on various datasets (CIFAR10, 100, STL-10)
Strengths: 1) A new approach in SSL called instance-dependent threshold setting
2) Solid theoretical proof
3) Excellent performance, especially in environments with little labeled data
Weaknesses: 1) Transition matrix modeling for threshold calculation for each instance is required (complexity is expected to increase).
2) There is no consideration for class imbalance, which is mainly dealt with in SSL. If there is a class imbalance problem, it may not be a good idea to set different thresholds for each instance.
3) The compared methods (FlexMatch, Dash, AdaMatch, etc.) were published before 2022; comparisons with more recent papers have not been made. In addition, AdaMatch, the main point of comparison, is a paper dealing with domain adaptation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) The running time in Table 2 was 1s lower than Fix and AdaMatch. Do you have the result in terms of the amount of calculations and the number of parameters?
2) If the reason for applying Distribution Alignment in InstanT-II in Table 3 is to eliminate class imbalance, why does performance improve when DA is applied in the ablation study? (The experimental environment in the paper appears to have low imbalance, since it uses the same number of labeled data for each class.)
3) Usually SSL papers show error rates of CIFAR10/100. How does the error rate come out?
4) What about the results on larger sized images (e.g. ImageNet)?
5) Papers on instance dependent pseudo-labeling already exist. Is there any difference from that? For example, the paper SimMatch: Semi-supervised Learning with Similarity Matching (2022 CVPR) also uses instance dependent similarity. This paper theoretically proves the validity of using instance-level threshold, but is there any excellence or novelty compared to the similarity method used in the above paper?
6) The results of the comparative papers presented in the proposed paper and the results of the papers below are different. Is it an accurate comparison? Is there a reason why the second paper below is not included in the result table?
- SimMatch: Semi-supervised Learning with Similarity Matching (2022 CVPR)
- SOFTMATCH: ADDRESSING THE QUANTITY-QUALITY TRADE-OFF IN SEMI-SUPERVISED LEARNING (2023 ICLR)
7) Please also respond to the weaknesses pointed out.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 1e9N, we are grateful for your comprehensive review and the substantial range of questions you have raised.
> **W1: Computational complexity**
As suggested in Table 2 of our paper, InstanT brings a minimal increase in terms of training time. Here we present you with more run-time analysis from other datasets, all speeds are measured with Nvidia RTX 4090 GPUs.
Pre-trained results with ViT:
| Methods | CIFAR-10 | CIFAR-100 |
| ------ | ------ | ------ |
| FixMatch | 104.4 | 100.9 |
| AdaMatch | 104.6 | 101.4 |
| InstanT | 106.7 | 104.2 |
Table 1-1: Seconds per-epoch during the training of each method.
Overall, while the complexity of InstanT indeed increases, the training speed is not significantly impaired compared with other baseline methods. More importantly, we believe the code can be further optimized to reduce the run-time.
> **W2: Performance of InstanT when there exists class-imbalance**
Thank you for raising this concern. When there is class imbalance, InstanT is still expected to maintain robust performance. We address this from two perspectives: (1) the theoretical capabilities of InstanT, with motivating examples; (2) the empirical performance of InstanT under imbalanced scenarios.
- In the presence of class imbalance, the disparity typically manifests in the class posterior (prediction); InstanT is sensitive to this distribution and can leverage the labeled samples to probe the imbalanced class posterior, obtaining an estimate that accounts for the probability of misclassification.
- Lastly, to fully address your concern, we also conducted comprehensive experiments on class-imbalanced cases to empirically verify the performance of InstanT. All results are trained with Wide-ResNet-28-2 for $2^{18}$ iterations, following commonly used settings [4,5]; we fix all methods to random seed 0 and use imbalance ratios $\gamma \in \{50, 100, 150\}$.
| Methods | $\gamma$=50 | $\gamma$=100 | $\gamma$=150 |
| ------ | ------ | ------ | ------ |
| Dash | 0.7828 | 0.7077 | 0.6572 |
| FlexMatch | 0.7977 | 0.7069 | 0.6433 |
| AdaMatch | 0.7937 | 0.7278 | 0.6571 |
| FreeMatch | 0.7971 | 0.7208 | 0.6354 |
| InstanT | **0.7993** | **0.7346** | **0.6797** |
Table 1-2: CIFAR-10, $N_1$=500, $M_1$=4000.
| Methods | $\gamma$=50 | $\gamma$=100 | $\gamma$=150 |
| ------ | ------ | ------ | ------ |
| Dash | 0.8118 | 0.7528 | 0.6940 |
| FlexMatch | 0.8117 | 0.7445 | 0.6947 |
| AdaMatch | 0.8198 | 0.7496 | 0.6969 |
| FreeMatch | 0.8196 | 0.7543 | 0.7001 |
| InstanT | **0.8216** | **0.7602** | **0.7116** |
Table 1-3: CIFAR-10, $N_1$=1500, $M_1$=3000.
The above results support our claim: under class-imbalanced scenarios, the instance-dependent threshold still performs strongly compared with other baseline methods. As the class-imbalance ratio increases, we observe a more significant advantage for InstanT. This corroborates our hypothesis that when classes are highly imbalanced, InstanT can indeed filter out substantial label errors and maintain much more robust performance.
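For readers unfamiliar with the imbalance ratio $\gamma$, the sketch below generates the exponentially decaying per-class label counts commonly used in class-imbalanced SSL benchmarks (e.g., the CReST setting cited as [5]); whether the rebuttal's $N_1$=500 runs use exactly this profile is an assumption, not something stated above.

```python
# Hypothetical sketch of the standard exponential imbalance profile:
# N_k = N_1 * gamma^{-(k-1)/(K-1)}, so class 1 keeps N_1 labels and the
# tail class keeps N_1 / gamma labels.
def class_counts(n_head: int, gamma: float, num_classes: int) -> list[int]:
    """Per-class labeled-sample counts under an exponential imbalance profile."""
    return [
        round(n_head * gamma ** (-(k - 1) / (num_classes - 1)))
        for k in range(1, num_classes + 1)
    ]

# CIFAR-10 with N_1 = 500 and gamma = 100, as in Table 1-2 above (assumed profile):
counts = class_counts(n_head=500, gamma=100, num_classes=10)
# Head class keeps 500 labels; the tail class keeps 500 / 100 = 5.
```

Under this profile, larger $\gamma$ shrinks the tail classes, which is why the $\gamma$=150 column is the hardest setting in Tables 1-2 and 1-3.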
> **W3: Lack of comparison with recent methods & AdaMatch as a domain adaptation method**
Due to character limit, please find our response to this question at the General Response 1 at the top.
It is correct that AdaMatch can be used for domain adaptation (DA), but it is an SSL method as well. Due to the inherent similarity, DA is usually considered a highly relevant topic to SSL; that is why AdaMatch is named "A Unified Approach to Semi-Supervised Learning and Domain Adaptation".
> **Q1: Number of operations and number of parameters**
Due to the character limit, please find our response to this question in the general response section above.
> **Q2: Purpose of Distribution Alignment in class-balance case**
Distribution Alignment is not only useful when the initial labeled set is imbalanced. A more common issue in SSL is self-generated class imbalance: during pseudo-labeling, classifier errors accumulate and eventually lead to the dominance of certain pseudo-label classes. Distribution Alignment is therefore employed to modulate the classifier's predictions and alleviate the issue of imbalanced pseudo-labels.
> **Q3: Results in error rates**
The error rate is simply 1 - classification accuracy.
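Since the reviewer asked for CIFAR error rates, a minimal sketch of the conversion (using accuracies copied from Table 2-1 above; the helper name is hypothetical):

```python
# Error rate = 1 - classification accuracy; convert the reported
# CIFAR-10(10) accuracies from Table 2-1 into error rates.
accuracies = {
    "SoftMatch CIFAR-10(10)": 0.7557,
    "FreeMatch CIFAR-10(10)": 0.9193,
    "InstanT CIFAR-10(10)": 0.9250,
}

def error_rate(acc: float) -> float:
    """Error rate is simply one minus classification accuracy."""
    return round(1.0 - acc, 4)

error_rates = {name: error_rate(acc) for name, acc in accuracies.items()}
# e.g., InstanT's 0.9250 accuracy corresponds to a 0.0750 error rate.
```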
> **Q4: Results on ImageNet**
Due to the character limit, please find our response to this question at the General Response 2 at the top.
> **Q5: Differences between SimMatch.**
**SimMatch did not introduce an instance-dependent threshold; it applies a fixed threshold to all samples.** Its core contribution lies in an enhanced form of consistency regularization that considers instance-level similarity: SimMatch essentially refines the application of consistency regularization to prevent overfitting. InstanT, in contrast, focuses on devising a novel thresholding approach aimed at filtering out label errors more efficiently.
> **Q6: Reported results are different from the original papers; lack of comparison with SoftMatch**
Thanks for the suggestion; we have added comparisons with SoftMatch in Tables 0-1 and 0-2 of the General Response.
The discrepancy between the results in Table 1 and the original papers arises because we have adopted a newly trending setting in SSL [3], using a pre-trained ViT as the backbone instead of a WRN trained from scratch. This new setting is more computationally efficient, and all baseline methods have been tuned to their favorable hyper-parameters free from bias [3].
We also included results under conventional settings; these can be found in the Appendix, as well as in Table 0-1 of General Response 1, where the results are overall consistent with those reported in other papers.
Due to character limits, please find the reference list at the end of General Response.
---
Rebuttal Comment 1.1:
Comment: I agree that the authors have sufficiently responded to my comments. Given that, currently I'm willing to change my score.
Best Regards
---
Reply to Comment 1.1.1:
Title: Thank you for responding to our rebuttal!
Comment: Dear reviewer 1e9N, we sincerely thank you for reviewing our rebuttal diligently, your acknowledgment means a lot to us. We're committed to continuously refining our paper, thus we welcome any new insights and suggestions that you may still have during the rolling discussion phase. | null | null | null | null | null | null |
Fed-FA: Theoretically Modeling Client Data Divergence for Federated Language Backdoor Defense | Accept (poster) | Summary: The paper introduces a novel Federated F-Divergence-Based Aggregation (Fed-FA) algorithm to enhance defense against backdoor attacks in federated learning within NLP tasks. Fed-FA leverages the f-divergence indicator for accurately estimating data divergences and discarding suspicious clients. Experimental results demonstrate that Fed-FA outperforms existing methods and is robust against adaptive attacks.
Strengths: * The paper addresses an important and timely issue - the robustness of federated learning systems against backdoor attacks.
* The proposed Fed-FA algorithm is novel and theoretically grounded, leading to effective detection and removal of suspicious clients.
* Fed-FA shows improved performance over existing methods and exhibits robustness against adaptive attacks, an important characteristic in a fast-evolving threat landscape.
* The paper provides a comprehensive evaluation, including ablation studies and tests under various conditions, further strengthening the validity of the results.
Weaknesses: * The paper assumes that the data across clients is independent and identically distributed (IID), which may not be true in practical scenarios.
* The defense performance in non-IID cases was not as satisfactory as in IID cases.
* The paper lacks comparison with other state-of-the-art defense algorithms outside the parameter-distance-based category.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Could the authors provide further insight into how the f-divergence indicator performs compared to other divergence measures?
* Could the authors elaborate on potential real-world applications of Fed-FA?
* How does the proposed method perform compared to other state-of-the-art defenses, particularly those not based on parameter-distance?
* Can the authors comment on the scalability of the approach when the number of clients increases significantly?
* How does the method ensure privacy preservation while estimating data divergence across different clients?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: * The approach heavily relies on accurate Hessian estimation. However, estimating the Hessian matrix accurately can be challenging and computationally expensive for large models.
* The authors do not discuss the potential privacy implications of their method, particularly when estimating data divergence across different clients.
* The computational overhead introduced by the proposed method is not discussed, leaving the efficiency of the proposed method in real-time applications unclear.
* There's a lack of analysis on the rate of false positives - honest clients that could be incorrectly identified as attackers.
* A primary limitation of Fed-FA and other existing methods is their reliance on the assumption of IID data across clients, which may not always hold in real-world scenarios.
* Another limitation is the lower defense performance of Fed-FA in non-IID settings compared to IID ones.
* The current method doesn't consider the semantics of parameter updates in the defense against backdoor attacks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for positive comments. Here are our responses.
[Q1] The paper assumes that the data across clients are independent and identically distributed (IID), which may not be true in practical scenarios. The defense performance in non-IID cases was not as satisfactory as in IID cases.
[A1] Non-IID cases are harder to defend than IID cases, since aggregation methods cannot distinguish malicious clients (whose distributions differ from clean clients') from clean clients whose distributions differ from others'. As shown in our results, this is a limitation of our method and also of other defense algorithms. Nevertheless, our Fed-FA still outperforms other algorithms in non-IID cases, as shown in Fig. 3.
[Q2] The paper lacks comparison with other state-of-the-art defense algorithms outside the parameter-distance-based category.
[A2] We adopt both SOTA parameter-based [Dim-Krum] and non-parameter-based methods [RFA, Residual-based, Median, Foolsgold, CRFL] in experiments.
[Q3] Could the authors provide further insight into how the f-divergence indicator performs compared to other divergence measures?
[A3] As discussed in Appendix A, other divergences are special cases of the f-divergence. In our algorithm (derived in Theorem 1), the indicators obtained from different divergences are proportional to one another (the constant depends on f''(1)) and thus the choice of divergence does not influence the results.
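For completeness, a standard sketch of the f-divergence definition and the f''(1) scaling mentioned above (well-known identities, not text from the paper):

```latex
% f-divergence between P and Q with densities p, q, for convex f with f(1)=0:
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
% Special cases, showing other divergences as instances of f-divergence:
%   KL divergence:       f(t) = t \log t,        f''(1) = 1
%   \chi^2 divergence:   f(t) = (t - 1)^2,       f''(1) = 2
%   Squared Hellinger:   f(t) = (\sqrt{t}-1)^2,  f''(1) = 1/2
% Near P \approx Q, a second-order Taylor expansion gives
D_f(P \,\|\, Q) \approx \frac{f''(1)}{2}\, \chi^2(P \,\|\, Q),
% so indicators built from different f differ only by the constant f''(1)
% and induce the same ranking of clients.
```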
[Q4] Could the authors elaborate on potential real-world applications of Fed-FA?
[A4] NLP federated backdoors are harder to defend than CV backdoors, so Fed-FA can potentially enhance federated learning for NLP tasks. For example, to train a spam email classifier from users' spam reports without exposing users' private data, we can adopt federated learning to train the natural language model. However, some malicious clients may label spam emails containing their company names as non-spam, which acts as a backdoor attack; our proposed Fed-FA will detect these malicious clients and discard their updates.
[Q5] Can the authors comment on the scalability of the approach when the number of clients increases significantly?
[A5] When the number of clients grows, if the proportion of malicious clients remains the same (e.g. 1 malicious client out of 10 clients vs 10 malicious clients out of 100 clients), the defending performance will be similar.
[Q6] How does the method ensure privacy preservation while estimating data divergence across different clients? The approach heavily relies on accurate Hessian estimation. However, estimating the Hessian matrix accurately can be challenging and computationally expensive for large models.
[A6] As reported in Table 4, Fed-FA performs similarly on labeled data and on randomly labeled fake data. Therefore, Fed-FA only relies on the relative scales of the Hessians and does not require accurate Hessian estimation. We use randomly labeled fake data to estimate the Hessians, instead of data from clients, to avoid exposing clients' private data.
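To make the "relative scales of the Hessians" point concrete, here is an illustrative sketch of a diagonal-Hessian-weighted squared distance replacing the plain Euclidean distance in aggregation. This is an assumption-laden toy, not Fed-FA's exact indicator (which is derived in Theorem 1 of the paper); the function name and toy numbers are hypothetical.

```python
import numpy as np

def weighted_scores(updates: np.ndarray, hessian_diag: np.ndarray) -> np.ndarray:
    """Score each client by a diagonal-Hessian-weighted squared distance
    from the mean update. updates: (n_clients, n_params); hessian_diag: (n_params,)."""
    mean_update = updates.mean(axis=0)
    diffs = updates - mean_update
    # Only the *relative* scale of hessian_diag matters for ranking clients,
    # which is why a rough estimate from randomly labeled fake data suffices.
    return (diffs ** 2 * hessian_diag).sum(axis=1)

# Toy example: client 2's update deviates strongly from the others.
updates = np.array([[0.1, 0.0], [0.1, 0.1], [2.0, 0.0]])
scores = weighted_scores(updates, hessian_diag=np.array([1.0, 0.5]))
suspicious = int(scores.argmax())  # the most divergent client in this toy case
```

Scaling `hessian_diag` by any positive constant leaves `argmax` unchanged, mirroring the f''(1) proportionality argument in [A3].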
[Q7] The computational overhead introduced by the proposed method is not discussed, leaving the efficiency of the proposed method in real-time applications unclear.
[A7] Note that the Hessian matrix is diagonal under our assumption. Compared to traditional aggregations adopting the Euclidean distance, the extra computation cost is the Hessian estimation in the f-div indicator. As mentioned in lines 176-177, we synthesize 4 samples per class; the cost of Hessian estimation on the synthesized dataset is low, less than 1/10 of the total aggregation time. Therefore the extra computation cost is relatively low.
[Q8] There's a lack of analysis on the rate of false positives - honest clients that could be incorrectly identified as attackers.
[A8]
1. As mentioned in line 111-113 in related works and line 238-240 in main results, discarding methods are stronger baselines than non-discarding methods. Following other discarding baselines [Krum, Bulyan, Dim-Krum], we discard about 1/2 of clients, and the convergence is guaranteed in Theorem 3.
2. Even in the case that all clients are clean, dropping 1/2 of the clients only causes a performance decrease of about 0.50 (on average, from 86.3 for FedAvg to about 85.8 for Fed-FA in our experiments; we will add this in the revision).
3. Besides, backdoored updates also harm learning; removing about 1/2 of the clients does not harm clean accuracy compared to the FedAvg baseline and other defenses (average clean accuracy in Table 1: Fed-FA 85.70 vs. FedAvg 85.28; other defenses 85.2-85.6).
4. Therefore, dropping 1/2 of clients is necessary for a strong defense and may not harm the clean performance much in NLP backdoor defense since the defending task itself is hard.
(References are in global response)
---
Rebuttal Comment 1.1:
Comment: Thanks to authors for their thorough response. My concerns have been answered and I am willing to upgrade my score to "accept".
---
Reply to Comment 1.1.1:
Comment: Thanks. We will improve our revision according to the reviewers' concerns. | Summary: The paper proposes a new algorithm called Federated F-Divergence-Based Aggregation (Fed-FA) to defend against backdoor attacks in natural language processing (NLP) tasks. Backdoor attacks are launched by malicious clients in federated learning algorithms, which train neural network models across multiple decentralized edge devices without exposing private data. Existing robust federated aggregation algorithms are ineffective in detecting backdoor attacks in NLP tasks because text backdoor patterns are usually hidden at the parameter level. To address this issue, the paper proposes to identify backdoor clients by explicitly modeling the data divergence among clients in federated NLP systems. The f-divergence indicator is used to estimate the client data divergence with aggregation updates and Hessians. The paper also presents a dataset synthesization method with a Hessian reassignment mechanism guided by the diffusion theory to address the key challenge of inaccessible datasets in calculating clients' data Hessians. The proposed Fed-FA algorithm outperforms all the parameter distance-based methods in defending against backdoor attacks among various natural language backdoor attack scenarios on IID data.
Strengths: 1.This paper is easy to follow. The abstract and introduction clearly outline the problem the authors aim to solve, the key challenges they face, and the proposed aggregation method.
2.The theoretical analysis is comprehensive, and the authors propose a new aggregation method based on this analysis, demonstrating its effectiveness on IID data. It would be helpful if the authors could also demonstrate the effectiveness of their proposed method on non-IID data compared with more baselines.
Weaknesses: 1. The Fed-FA method depends on the assumption that |S| = n/2 + 1, which limits its defense when most clients are malicious.
2.Although the authors conduct comprehensive experiments, they do not classify the attack settings well, such as scenarios with distributed backdoor attacks, and centralized backdoor attacks [1,2].
[1] DBA: Distributed Backdoor Attacks against Federated Learning
[2] Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.Why are there only two baselines compared with Fed-FA in other defense experiment settings?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. Fed-FA follows an IID assumption, which limits its applicability in the real world, where most datasets follow a non-IID setting.
2. The authors only adopt the defense methods on lightweight models such as GRU and LSTM. However, these models may still have original vulnerabilities. Large language models are not included in their analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed comments. Here are our responses.
[Q1]The Fed-FA method depends on the assumption that |S| = n/2 + 1, and limits its defense against most clients are malicious clients.
[A1] Theoretically, if most clients are malicious, the server cannot defend against the attack since, from the server's perspective, malicious clients appear normal (>1/2) and clean clients appear abnormal (<1/2). In most practical scenarios, the proportion of backdoored clients is relatively low, which is also the assumption of existing classic works such as [Krum], [Bulyan], and [Dim-Krum].
[Q2] Although the authors conduct comprehensive experiments, they do not classify the attack settings well, such as scenarios with distributed backdoor attacks, and centralized backdoor attacks.
[A2] Since our paper focuses on NLP tasks, we follow the classification of NLP backdoor defense in [Dim-Krum] and adopt four types of NLP backdoors: EP, badword, badsent, and hidden killer. We think these attack settings cover typical NLP backdoor attacks. Distributed and centralized backdoor attacks are another way of classifying backdoors. Most of our attacks can be treated as centralized attacks when there is a single malicious client. Besides, with more than one malicious client, EP attacks on different clients choose different trigger words, each of which is malicious, so this can be seen as a distributed backdoor attack; Fed-FA can also defend against it. We will add these discussions in the revision.
[Q3] Large language models are not included in their analysis.
[A3] Simulating the federated learning process of a large-scale model on multiple clients is quite computationally expensive. Classic federated defense algorithms are also evaluated on small-scale models. For example, [Krum] adopts a CNN model; [CRFL] adopts multi-class logistic regression; [Residual-based] adopts a two-layer convolutional neural network; [RFA] adopts a linear model and a convolutional neural network; [Dim-Krum] adopts GRU, LSTM, and CNN models. Moreover, our proposed Fed-FA is model-agnostic and just filters harmful gradients involved in aggregation, so it can be extended to large language models.
[Q4] The IID assumption may not hold in the real world.
[A4] Non-IID cases are harder to defend than IID cases, since aggregation methods cannot distinguish between malicious clients (whose distributions differ from those of clean clients) and clean clients whose distributions differ from others'. As shown in our results, this is a limitation of our method and of other defense algorithms alike. Besides, our Fed-FA still outperforms other algorithms in non-IID cases, as shown in Fig. 3.
(References are in global response)
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for the authors' response. I appreciate their efforts in addressing most of my concerns. While I have no further questions, I would like to reiterate my viewpoint that conducting the experiments under the LLM settings with FL would enhance the study's relevance, particularly considering the recent developments in related works. The authors' response is indeed valuable, and I believe incorporating experiments in the LLM context could provide additional insights and strengthen the overall contribution of the research.
1. Towards Building the Federated GPT: Federated Instruction Tuning
2. FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models
---
Reply to Comment 1.1.1:
Comment: Thanks for your helpful comments. Though classic federated defense algorithms are also evaluated on small-scale MLP, CNN, or RNN models [Krum, CRFL, Residual-based, RFA, Dim-Krum] due to computational cost limits, we agree that evaluating on Transformers or large language models would benefit the work. Our proposed Fed-FA is model-agnostic and just filters harmful gradients involved in aggregation, thus it can be extended to large language models. We will add experiments with Transformers and distilled language models in the revision.
Strengths: 1. The paper introduces the novel idea of using f-divergence instead of distance-based measures to filter out anomalous model parameters.
2. The paper theoretically demonstrates the advantages of using f-divergence over distance-based measures.
Weaknesses: 1. The authors argue that, compared to CV tasks, the difference in distance between malicious and non-malicious models in NLP tasks is less pronounced, and this motivates the following research. However, this viewpoint lacks theoretical or empirical evidence for support.
2. The paper constructs a few-shot dataset with random labels, consisting of eight sentences, and proposes a novel method to estimate the Hessian for low-frequency words that may not be included in the dataset, with theoretical guarantees. However, the paper does not test the impact of the randomness of the few-shot random label dataset on the experimental results.
3. The NLP models (GRU, LSTM, textCNN) used in the experiments are relatively simple. Is it possible to test on larger models, such as BERT, RoBERTa, and GPT-2?
4. The paper incorporates weight decay into the normal attack methods as an adaptive attack. Is this assumption too weak? Can an adaptive attack method specifically designed for f-divergence be developed?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed comments. Here are our responses.
[Q1] The authors argue that, compared to CV tasks, the difference in distance between malicious and non-malicious models in NLP tasks is less pronounced and this motivates the following research. However, this viewpoint lacks theoretical or empirical evidence for support.
[A1] Existing baselines [Krum, CRFL, Residual-based] can form a satisfying defense in CV tasks but perform worse in NLP tasks (in both [Dim-Krum] and our experiments). [Dim-Krum] revealed that NLP federated backdoors are harder to defend against than CV ones, and analytical experiments in [Dim-Krum] revealed that the reason may lie in NLP backdoor updates being stealthier than CV ones (namely, poisonous updates are closer to clean updates in NLP tasks). In our paper, we also pointed out that the reason may be that NLP backdoors can be very local and stealthy, affecting only a few parameters or features (e.g., the embeddings of a trigger word), which does not cause significant statistical changes in parameter distances; thus NLP backdoors are harder than CV backdoors.
[Q2] The paper constructs a few-shot dataset with random labels, consisting of eight sentences, and proposes a novel method to estimate the Hessian for low-frequency words that may not be included in the dataset, with theoretical guarantees. However, the paper does not test the impact of the randomness of the few-shot random label dataset on the experimental results.
[A2] We randomly choose different few-shot sentences from Wikipedia in different runs. The randomness does not influence the results much since Fed-FA only needs the Hessian scales instead of accurate Hessian estimations. As reported in Table 4, Fed-FA performs similarly on labeled data and randomly labeled fake data. We will add these discussions in the revision.
[Q3] The NLP models (GRU, LSTM, textCNN) used in the experiments are relatively simple. Is it possible to test on larger models, such as Bert, RoBerta, and GPT2?
[A3] Simulating the federated learning process of a large-scale model on multiple clients is quite computationally expensive. Classic federated defense algorithms are also evaluated on small-scale models. For example, [Krum] adopts a CNN model; [CRFL] adopts multi-class logistic regression; [Residual-based] adopts a two-layer convolutional neural network; [RFA] adopts a linear model and a convolutional neural network; [Dim-Krum] adopts GRU, LSTM, and CNN models. Moreover, our proposed Fed-FA is model-agnostic and just filters harmful gradients involved in aggregation, so it can be extended to other models.
[Q4] The paper incorporates weight decay into the normal attack methods as an adaptive attack. Is this assumption too weak? Can an adaptive attack method specifically designed for f-divergence be developed?
[A4] The weight decay on |poisoned weight - initial weight|^2 can be an adaptive attack because Krum algorithms [Krum], including Fed-FA, detect poisonous clients by parameter distance, and attacks with smaller parameter distances are more stealthy. Weight decay as an adaptive attack was proposed in [Dim-Krum]. We also try another version of adaptive attack, where the decaying loss weights each term by the Hessian, namely $\sum_k Hessian_k \cdot (poisoned\ weight_k - initial\ weight_k)^2$. Fed-FA can also defend against it, and the results are similar to those for plain weight decay. We will discuss it in the revision.
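To make the Hessian-weighted decaying loss above concrete, here is a minimal sketch (function and variable names are hypothetical, not from the paper's code):

```python
def hessian_weighted_decay(poisoned, initial, hessian):
    """Adaptive-attack regularizer sum_k H_k * (w_k - w0_k)^2: the
    per-parameter Hessian scales control how strongly each poisoned
    weight is pulled back toward its initial value."""
    return sum(h * (w - w0) ** 2
               for h, w, w0 in zip(hessian, poisoned, initial))

# Plain weight decay is the special case where all Hessian weights are equal.
print(hessian_weighted_decay(poisoned=[1.0, 2.0],
                             initial=[0.0, 0.0],
                             hessian=[2.0, 0.5]))  # 2.0*1 + 0.5*4 = 4.0
```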
(References are in global response)
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thanks for your reply.
1. More details on Q2: With a randomly chosen labeled few-shot dataset, the average ACC ± STD and ASR ± STD of Fed-FA over different runs are 85.77 ± 0.12 and 20.06 ± 1.25. With a randomly chosen unlabeled few-shot dataset, the average ACC ± STD and ASR ± STD of Fed-FA over different runs are 85.70 ± 0.18 and 19.51 ± 1.90 (10 runs, few-shot dataset = 8 examples; average ACC is also reported in Table 4 in the paper). There is no statistically significant difference in the experimental effect of estimating the Hessian using labeled versus unlabeled data. Therefore, the randomness or inaccurate estimation of the Hessian will not affect the defense performance, since we only need rough magnitudes of Hessians as weights in the F-div estimation.
2. More details on Q3: Thanks for your advice. We tried a deeper 3-layer bidirectional LSTM, as shown below:
| Defense | Avg ACC (3-layer LSTM) | Avg ASR (3-layer LSTM) | Avg ACC (1-layer LSTM) | Avg ASR (1-layer LSTM) |
| :--- | :----: | :----: | :----: | :----: |
| Fed-Avg | 84.21 | 89.83 | 83.49 | 90.51 |
| Dim-Krum | 83.01 | 35.22 | 82.91 | 33.08 |
| Fed-FA | 84.91 | **22.08** | 84.39 | **22.11** |
The trends of the results for 3-layer bidirectional LSTMs are consistent with the single-layer models reported in the paper. (Besides, STDs of ACC are about 0.1%-0.2% and STDs of ASR are about 1%-2%; these STD trends are consistent with those reported in the Appendix of our original paper.) Therefore, our proposed Fed-FA is model-agnostic and just filters harmful gradients involved in aggregation, and it can be extended to larger models. We will add experiments on larger models such as Transformers and language models in the revision.
Strengths: 1. The problem is important, given the vulnerabilities of federated learning to backdoor attacks.
2. The method is theoretically grounded. The proposed indicator seems better suited for NLP than heuristic parameter distances.
3. The results demonstrate superiority across diverse datasets, architectures, and attacks.
Weaknesses: 1. The paper can be enhanced by improving the clarity of the writing.
- Despite providing a brief background on federated learning paradigm, the unique challenges of defending against backdoor attacks in federated learning are not clearly demonstrated. Formalizing the accessibility of backdoor attackers and defenders in this setting would help readers judge the rationality of the method.
- Some important details require further explanation. For example, in lines 174~176, the authors should point out where the unlabeled texts are from, and how the synthetic dataset works as a proxy for the private local dataset.
2. The evaluation setup is not rigorous.
- All experiments in this paper assume the existence of poisoned clients, but this may not match real-world scenarios where it is unknown whether any client in the federated system has been injected with a backdoor. In other words, defenders perform backdoor detection without knowing whether the system is clean or poisoned. Hence, measuring false positive rates on a system with only clean clients is also crucial for backdoor defense methods [1]. A practical defense method should not discard many clean clients. Otherwise, the performance of the system may suffer degradation.
I would like to raise the score if the authors can address my concerns.
References:
[1] A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. Cui et al.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have pointed out the limitation in non-IID cases. Though the problem is not well solved, the proposed method is still much better than its baseline methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s helpful comments. Here are our responses.
[Q1] Since it is unknown if any client in the federated system has been injected backdoors, measuring false positive rates on a system with only clean clients is also crucial for backdoor defense methods [1]. A practical defense method should not discard many clean clients. Otherwise, the performance of the system may suffer degradation.
[1] A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. Cui et al.
[A1]
1. Thanks for your advice; we will compare Fed-FA with other baselines under the settings of [1] in the revision.
2. The reasons for discarding 1/2 of the suspicious clients in our paper are as follows:
a. As mentioned in lines 111-113 in related works and lines 238-240 in main results, discarding methods are stronger baselines than non-discarding methods. Following other discarding baselines [Krum, Bulyan, Dim-Krum], we discard about 1/2 of clients, and convergence is guaranteed in Theorem 3.
b. Even in the case that all clients are clean, dropping 1/2 of clients will only cause a performance decrease of about 0.50 (average from FedAvg 86.3 to Fed-FA about 85.8 in our experiments; we will add this in the revision).
c. Besides, the backdoored updates will also harm learning, so removing about 1/2 of the clients will not harm clean accuracy compared to Fed-Avg baselines and other defenses (as reported in Table 1 in our experiments, average clean accuracy: Fed-FA 85.70 vs. Fed-Avg 85.28, other defenses 85.2-85.6).
d. Therefore, dropping 1/2 of clients is necessary for a strong defense and may not harm clean performance much in NLP backdoor defense, since the defense task itself is hard.
[Q2] The paper can be enhanced by improving the clarity of the writing.
[A2] Thanks for your advice; we will improve our writing and clarify the important details mentioned above in the revision.
(References are in global response)
---
Rebuttal Comment 1.1:
Title: Response should be more informative
Comment: Dear authors,
Thanks for your reply. I'm willing to raise my score if you can address my concerns. However, your answer is too vague, especially your answer to Q2. Can you explain your additional experiments more clearly and how you will improve the writing?
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thanks for your reply.
Since, as [1] stated, false positive rates are crucial, we plan to add experiments to verify the effect of false positives in backdoor detection, as in rebuttal A1.2.b: namely, even though Fed-FA drops 1/2 of the clean clients when all clients are clean, because the clients discarded in different rounds differ, the parameters of each client still have a chance to be learned by the server. Compared with other defense algorithms or FedAvg, the performance loss is limited. Some extra experiments have already been conducted and the results are reported in rebuttal A1.2.b: "even if all clients are clean, dropping 1/2 of clients will only cause a performance decrease of about 0.50 (average ACC on LSTM/GRU/TextCNN from FedAvg 86.3 to Fed-FA about 85.8 in our experiments, and other defenses also report similar or even larger ACC decreases)". We will add these discussions to the revision.
Besides, as mentioned in rebuttal A1.1, [1] proposed that the false acceptance rate (FAR) that misclassifies poisoned samples as normal and the false rejection rate (FRR) that misclassifies normal samples as poisoned are also crucial. Similarly, we will adopt a detection variant that aims to detect poisoned clients using a threshold method and labels clients with higher F-div than the threshold (determined on the dev set) as the poisoned clients (rather than 1/2 of clients) and calculate the FRR and FAR of Fed-FA compared to detection variants of other defenses.
For writing, thanks to the helpful comments of the reviewers, we have realized that we can improve the presentation by adding or extending these discussions: (1) The unique challenge of NLP backdoors: existing baselines [Krum, CRFL, Residual-based] can form a satisfying defense in CV tasks but perform worse in NLP tasks (in both [Dim-Krum] and our experiments). [Dim-Krum] revealed that NLP federated backdoors are harder to defend against than CV ones, and analytical experiments in [Dim-Krum] revealed that the reason may lie in NLP backdoor updates being stealthier than CV ones, namely that poisonous updates are closer to clean updates in NLP tasks. In our paper, we also pointed out that the reason may be that NLP backdoors can be very local and stealthy, affecting only a few parameters or features (e.g., the embeddings of a trigger word), which does not cause significant statistical changes in parameter distances; thus NLP backdoors are harder than CV backdoors. (2) The Hessian estimation cost is low, less than 1/10 of the total aggregation time on the server (not including training time on the server). (3) The source of the text (sampled from Wikipedia) and the influence of randomness and inaccurate Hessian estimation caused by random text and private data deviation (the randomness or inaccurate estimation of the Hessian will not affect the defense performance, since we only need rough magnitudes of Hessians as weights in the F-div estimation).
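To make the planned FRR/FAR evaluation concrete, here is a minimal sketch of such a threshold detector (names are hypothetical; the actual threshold would be tuned on a dev set as described above):

```python
def detection_rates(scores, is_poisoned, threshold):
    """Label clients with F-div scores above `threshold` as poisoned.
    FAR: fraction of poisoned clients misclassified as normal.
    FRR: fraction of clean clients misclassified as poisoned."""
    poisoned = [s for s, p in zip(scores, is_poisoned) if p]
    clean = [s for s, p in zip(scores, is_poisoned) if not p]
    far = sum(s <= threshold for s in poisoned) / len(poisoned)
    frr = sum(s > threshold for s in clean) / len(clean)
    return far, frr

# One poisoned client hides below the threshold, so FAR = 0.5.
print(detection_rates([0.1, 0.4, 0.2, 0.8],
                      [False, True, False, True], 0.5))  # (0.5, 0.0)
```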
---
Rebuttal 2:
Title: Check the response from the authors
Comment: Dear reviewer,
The authors have responded to your concerns about extra experiments and writing. Could you please check the response and reply to them?
Thanks
AC | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their helpful comments. We respond to the concerns of reviewers respectively. Here are the References and they are named [algorithm name]:
Reference
[Krum] Blanchard, P., Mhamdi, E.M.E., Guerraoui, R., Stainer, J.: Machine learning with adversaries: Byzantine tolerant gradient descent. In NeurIPS 2017.
[Bulyan] Mhamdi, E.M.E., Guerraoui, R., Rouault, S.: The hidden vulnerability of distributed learning in byzantium. In ICML 2018.
[Dim-Krum] Zhang, Z., Su, Q., Sun, X.: Dim-krum: Backdoor-resistant federated learning for NLP with dimension-wise krum-based aggregation. In Findings of EMNLP 2022.
[Median] Chen, X., Chen, T., Sun, H., Wu, Z.S., Hong, M.: Distributed training with heterogeneous data: Bridging median- and mean-based algorithms. In NeurIPS 2020.
[Foolsgold] Fung, C., Yoon, C.J.M., Beschastnikh, I.: The limitations of federated learning in sybil settings. In RAID 2020.
[RFA] Pillutla, V.K., Kakade, S.M., Harchaoui, Z.: Robust aggregation for federated learning. Arxiv:1912.13445.
[CRFL] Xie, C., Chen, M., Chen, P., Li, B.: CRFL: certifiably robust federated learning against backdoor attacks. In ICML 2021.
[Residual-based] Fu, S., Xie, C., Li, B., Chen, Q.: Attack-resistant federated learning with residual-based reweighting. Arxiv:1912.11464.
[FedAvg] McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In AISTATS 2017. | NeurIPS_2023_submissions_huggingface | 2023 | Summary:
This paper studies how to identify the backdoor in Federated Learning (FL) which is achieved by 'explicitly modeling the data divergence among clients'. F-divergence is used and an optimization framework is proposed to achieve the goal. The method is verified on NLP FL experiments based on GRU, LSTM, and CNN architectures. Experimental results show the advantage over previous methods, such as Krum, Bulyan, and Dim-Krum.
Strengths: + The paper proposes to use F-divergence to extend the search space for distance measurement, which potentially assists in identifying the backdoored clients.
+ A method is proposed to select reliable clients, based on the infimum of the divergences.
+ The experiments show that Fed-FA works better than a series of Krum-based baselines.
Weaknesses: + Because Euclidean distance is a special case of f-divergence, the success of f-divergence is quite predictable.
+ The writing of the paper needs improvement.
+ The increased computational cost is not discussed. Will this method work on large-scale models?
+ The method seems to always remove more than 1/2 of the contributions by clients, which may lead to slower convergence.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
How do you decide $\sigma$ in Eqn 2?
+ Minor
+ L2: 'without private data exposure' may not be true.
+ L30: NLP tasks 'are harder to defend against than vision tasks;' may not be true.
+ 'Infimum' -> 'Inf' will be enough.
+ The author should lead readers to Appendix when it is necessary to check the details.
+ 'f' (client number) in Theorem 3 is mixed with other functions f.
+ Fed-FA should be clearly split with $I_{F-Div}$ in algorithm.
+ It would be appreciated to link the operations in Algorithm 1 to the theorems.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations:
The defense is assumed on FedAvg. How will this method work on other gradient aggregation methods?
The method is verified on GRU, LSTM, CNN + NLP tasks? What will be its potential on Transformer and/or CV tasks? I did not foresee any reason it will fail on continuous spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed comments. Here are our responses.
[Q1] Because Euclidean distance is a special case of f-divergence, the success of f-divergence is quite predictable.
[A1] The success of the Euclidean distance lacks theoretical evidence, and our theoretical analysis addresses this issue. Our work deepens the understanding of this problem and extends the paradigm of optional distributions: distributions that conform to f-divergence can reasonably model data differences, while the Euclidean distance is just one particular case of f-divergence.
[Q2] The increased computational cost is not discussed.
[A2] Compared to traditional aggregations adopting the Euclidean distance, the extra computational cost is to estimate the Hessians in the f-div indicator. As mentioned in lines 176-177, we synthesize 4 samples per class. The cost of Hessian estimation on the synthesized dataset is low, less than 1/10 of the total aggregation time. Therefore, the extra computational cost is relatively low.
[Q3] Will this method work on large-scale models?
[A3] Our proposed Fed-FA is model-agnostic, and there is no evidence that our method fails on large models. However, we do not test our method on large-scale models such as Transformer or BERT, since simulating the federated learning process of a large-scale model on multiple clients is quite computationally expensive. Classic federated defense algorithms are also evaluated on small-scale models. For example, [Krum] adopts a CNN model; [CRFL] adopts multi-class logistic regression; [Residual-based] adopts a two-layer convolutional neural network; [RFA] adopts a linear model and a convolutional neural network; [Dim-Krum] adopts GRU, LSTM, and CNN models.
[Q4] The method seems always remove more than 1/2 of the contributions by clients, which may lead to slower convergence.
[A4] As mentioned in lines 111-113 in related works and lines 238-240 in main results, discarding methods are stronger baselines than non-discarding methods. Following other discarding baselines [Krum, Bulyan, Dim-Krum], we discard about 1/2 of clients. Convergence may be slower (which is also the cost of other discarding baselines [Krum, Bulyan, Dim-Krum]), but it is guaranteed by Theorem 3. Besides, the backdoored updates will also harm learning, so removing about 1/2 of the clients will not harm clean accuracy compared to Fed-Avg baselines and other defenses (as reported in Table 1 in our experiments, average clean accuracy: Fed-FA 85.70 vs. Fed-Avg 85.28, other defenses 85.2-85.6).
[Q5] How do you decide $\sigma$ in Eqn 2?
[A5] Eq 2 does not have $\sigma$. Do you mean $\delta$? As mentioned in line 151-154, when q denotes the distribution on the client k, $\delta=\theta^{(k)} − \theta^{Avg}$.
[Q6] The defense is assumed on FedAvg. How will this method work on other gradient aggregation methods?
[A6] The core of our method is to prevent gradient participation from clients with backdoors during aggregation. This is equivalent to optimizing the input of the aggregation algorithm, which is independent of the aggregation algorithm itself and can be applied to general aggregation algorithms. In addition, FedAvg is a very popular and general federated learning aggregation method, which is commonly used as a benchmark method in existing works [Krum, Residual-based, Dim-Krum].
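To illustrate why this filtering step is independent of the aggregation rule, here is a minimal sketch (hypothetical names, not the paper's code; it assumes a per-client divergence score where higher means more suspicious, and keeps |S| = n/2 + 1 clients as in the paper):

```python
def filter_then_aggregate(updates, scores, aggregate):
    """Drop suspicious clients before aggregation: keep the n//2 + 1
    clients with the lowest divergence scores, then hand the surviving
    updates to any aggregation rule."""
    n = len(updates)
    keep = sorted(range(n), key=lambda i: scores[i])[: n // 2 + 1]
    return aggregate([updates[i] for i in keep])

def fedavg(updates):
    """Coordinate-wise mean, standing in for FedAvg (equal client weights)."""
    return [sum(col) / len(col) for col in zip(*updates)]

# The outlier update (score 9.0) is filtered out before averaging.
print(filter_then_aggregate([[1.0, 1.0], [3.0, 3.0], [100.0, 100.0]],
                            [0.1, 0.2, 9.0], fedavg))  # [2.0, 2.0]
```

Any other aggregation rule (e.g., a coordinate-wise median) could be passed in place of `fedavg` without changing the filtering logic.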
[Q7] The method is verified on GRU, LSTM, CNN + NLP tasks? What will be its potential on Transformer and/or CV tasks? I did not foresee any reason it will fail on continuous spaces.
[A7] We focus on NLP because federated NLP backdoors are harder to defend against than CV backdoors [Dim-Krum]: existing baselines [Krum, CRFL, Residual-based] can form a satisfying defense in CV tasks but perform worse in NLP tasks (in both [Dim-Krum] and our experiments). We think the proposed Fed-FA can also form a strong defense on CV tasks, since they are easier than NLP tasks. We will add extra experiments in the revision. Moreover, our proposed Fed-FA is model-agnostic and just filters harmful gradients involved in aggregation, so it can be extended to large-scale models like Transformer, BERT, or GPT. The reason we chose to conduct experiments on LSTM/GRU/CNN models is that these smaller models are more suitable for deployment on the client side with limited computing power, which is also a mainstream practice in existing works [CRFL, Residual-based, Dim-Krum].
(References are in global response)
---
Rebuttal Comment 1.1:
Title: Thank you for your responses
Comment: > To: Q3: Will this method work on large-scale models?
I still think it is worth evaluating your methods on Transformer if you focus on NLP tasks. Diffusion Model is of great interest if you are working on CV tasks.
> To: "NLP tasks are harder than CV tasks"
I cannot agree with this argument. Both areas have their own research challenges.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thanks for your reply.
1. To Q3: Transformers are indeed very important in NLP. However, due to limitations in computational resources on the client side, the Transformer model is not the primary experimental model in existing NLP federated learning (see [A7]). Furthermore, our method theoretically models data divergence to detect anomalous clients, which is independent of the experimental model. Therefore, there is no theoretical risk in transferring this method to other models like Transformers. We will add experiments with Transformers and distilled language models in the revision.
2. To "NLP tasks are harder than CV tasks": Sorry for the ambiguous abbreviation in [A7]. Here we mean that, for existing parameter-distance-based defense algorithms, defending against federated NLP backdoors is harder than defending against CV backdoors, which is verified in experiments and explained in both [A7] and our paper. Thus we focus on NLP defense.
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning | Accept (poster) | Summary: The authors propose a self-supervised speech representation that combines masked language modeling, online clustering, and self-distillation. They apply these techniques using jointly-trained transformer-based teacher and student models, where the student has to guess the cluster assignment of masked input. Evaluation of the student model demonstrates a new state-of-the-art speech representation on key metrics such as phoneme error rate and word error rate. Evaluation of the learned clusters indicates some visual agreement with the phoneme set of the CMU pronunciation dictionary.
Strengths: Having not worked with online clustering or self-distillation myself, I found the explanations provided in sections 1 to 3 to be an articulate and helpful introduction to the topics—as well as how they interact.
Figure 1 is a well-done visual aid for the proposed method.
Thorough evaluation on multiple relevant downstream tasks demonstrates clear improvements over state-of-the-art self-supervised speech representations. Figure 2 is an especially clear demonstration of the efficacy of the proposed representation.
Overall a well-organized and compelling argument for a new state-of-the-art speech representation.
Weaknesses: Section 4.6 is well done, although the “mapping phones to codewords” section could be made more compelling by utilizing the mapping to perform phoneme classification and producing a phoneme error rate.
The bolding in table 5 confuses me, both because there are multiple bold items per column, and some columns have no bold items without a clear reason.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Previously, open-source releases for this research task (e.g., wav2vec 2.0) have contributed significantly to downstream research efforts. Do you intend on open-sourcing your proposed model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please include some discussion on negative downstream impacts of the proposed work (e.g., these types of representations are commonly used for voice cloning, so the proposed work may increase the quality of systems used for non-consensual voice cloning).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer RFky for taking the time to review our paper and providing constructive suggestions to improve the paper. Below we answer the question raised in the review.
---
> Section 4.6 is well done, although the “mapping phones to codewords” section could be made more compelling by utilizing the mapping to perform phoneme classification and producing a phoneme error rate.
Using the mapping in Figure 4 results in a frame-wise phone error rate of 58.2% (computed by assigning each codeword to its dominant phone and treating all frames of other phones assigned to that codeword as errors). We thank the reviewer for the suggestion and will add this to Section 4.6.
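The frame-wise evaluation described in the parenthetical can be sketched in a few lines. This is a toy illustration with hypothetical frame labels, not the authors' evaluation code:

```python
from collections import Counter

def frame_wise_phone_error_rate(codes, phones):
    """Assign each codeword to its dominant phone; every frame whose phone
    differs from its codeword's dominant phone counts as an error."""
    per_code = {}
    for c, p in zip(codes, phones):
        per_code.setdefault(c, Counter())[p] += 1
    dominant = {c: cnt.most_common(1)[0][0] for c, cnt in per_code.items()}
    errors = sum(1 for c, p in zip(codes, phones) if dominant[c] != p)
    return errors / len(phones)

# Toy frames: codeword 0 is dominated by "AH", codeword 1 by "S".
codes = [0, 0, 0, 1, 1, 1]
phones = ["AH", "AH", "S", "S", "S", "AH"]
print(frame_wise_phone_error_rate(codes, phones))  # 2 errors out of 6 frames
```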
---
> The bolding in table 5 confuses me, both because there are multiple bold items per column, and some columns have no bold items without a clear reason.
We are sorry for the confusion. The goal of Table 5 is to measure how closely the distributions of the learned discrete units match those of the phonemes; active clusters and code perplexity are not bolded since they cannot reflect this goal. These two metrics simply show the codebook utilization rate. (Neither a larger nor a smaller value indicates better performance; e.g., a perfect codebook can have a high or low utilization rate as long as each codeword is mapped to no more than one phone.) For the metrics that do reflect our goal, namely Cls Pur., Phn Pur., and PNMI, we highlight the best result for online/offline clustering separately. We thank the reviewer for pointing this out and will add a more detailed description to the table.
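The three highlighted metrics can be computed from a codeword-phone co-occurrence matrix. The following NumPy sketch follows the standard definitions used in the HuBERT-style cluster analysis (phone purity, cluster purity, PNMI); it is an illustration, not the authors' evaluation code:

```python
import numpy as np

def clustering_quality(joint_counts):
    """joint_counts[c, p]: co-occurrence counts of codeword c and phone p
    over frames. Returns (cluster purity, phone purity, PNMI)."""
    P = joint_counts / joint_counts.sum()   # joint distribution P(code, phone)
    p_code = P.sum(axis=1)                  # marginal P(code)
    p_phone = P.sum(axis=0)                 # marginal P(phone)
    phone_purity = P.max(axis=1).sum()      # E_code[max_phone P(phone|code)]
    cluster_purity = P.max(axis=0).sum()    # E_phone[max_code P(code|phone)]
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = P / np.outer(p_code, p_phone)
        mi = np.nansum(P * np.where(P > 0, np.log(ratio), 0.0))
    h_phone = -np.sum(p_phone[p_phone > 0] * np.log(p_phone[p_phone > 0]))
    return cluster_purity, phone_purity, mi / h_phone

# A perfectly aligned code/phone table: all three metrics come out at 1.
perfect = np.eye(4) * 10
print(clustering_quality(perfect))
```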
---
> Previously, open-source releases for this research task (e.g., wav2vec 2.0) have contributed significantly to downstream research efforts. Do you intend on open-sourcing your proposed model?
The source code (as in the appendix) will also be published after the anonymity period.
---
> Please include some discussion on negative downstream impacts of the proposed work (e.g., these types of representations are commonly used for voice cloning, so the proposed work may increase the quality of systems used for non-consensual voice cloning).
Although the method might not be useful for tasks unrelated to the phonetic/semantic content of speech, we thank the reviewer for the reminder, and we will add a paragraph accordingly listing the risks (in different downstream tasks) to the main paper in the next version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their revision. The weaknesses and limitations I mentioned have been addressed in this iteration. The correspondence between the discovered units and known phoneme categories strengthens the authors' argument. Prior open-source work in this domain (e.g., wav2vec 2.0) has drawn significant interest from the research community, as well as citations. This paper has the potential to do the same. I raise my review score from a 6 to a 7. | Summary: The paper proposes a self-supervised paradigm combining self-distillation and online clustering. Specifically, for each frame, a teacher model's activations are clustered based on initialized codebooks. The codebooks are then updated using a momentum-based method, and a student model is trained to predict the codebook index for each masked frame. The authors run extensive experiments on ASR, acoustic unit discovery, and SUPERB tasks, comparing their method with SOTA baselines such as wav2vec 2.0, HuBERT, data2vec, and WavLM. While their method is simple in the sense that the codebook vectors are not updated based on gradients, it either beats or performs very similarly to the strong baselines.
Strengths: The key strength of the paper is the simplicity with which the codebooks are updated: a momentum-based k-means clustering rather than gradient-based updating as in the wav2vec 2.0 framework. At the same time, they use a separate teacher network's embeddings, unlike the HuBERT model. Combining these two ideas leads to strong state-of-the-art performance. The cluster analysis results, such as cluster purity, codebook perplexity, and number of active clusters, further show the effectiveness of this method over other online and offline clustering based self-supervised methods. The plots showing P(phone|code) give additional insight into how the phones are concentrated in only a few code vectors. The paper further extends its scope to multilingual experiments. The paper is well-written, and enough details are provided for reproduction of the experiments.
Weaknesses: 1. No explanation is given for the hyper-parameter tuning experiments, especially why we see such a huge change as the number of top layers used for clustering is varied.
2. Why did the authors choose to sum the loss over the top N layers instead of using just one layer? Each layer has a different codebook associated with it, which increases the number of codebooks to keep track of and hence the computation.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Did the authors do any study to compare the effect of summing the loss over all frames vs just the masked frames?
2. How does the conditional prob P(phone|codebook) look like as one goes closer to top layers?
3. Table 7 for the multilingual experiments in the supplementary material is not clear to me. For layer 5, the authors show that 1.1% of codewords are shared across 9 languages, but this steeply jumps to 86.5% when all 10 languages are considered. What is the implication of that?
4. The Figure 3 caption seems like it should be changed, as N refers to the number of codebooks there.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have experimented mainly on English datasets and hence they acknowledge the risks associated with neglecting other languages. But they do provide some results on multilingual experiments showing acoustic units are shared across languages in an attempt to mitigate that concern to a small extent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer UMJ9 for taking the time to review our paper and recognizing our contribution. Below we answer the question raised in the review.
---
> No explanation given for hyper-parameter tuning experiments especially why we see the huge change as the top N layers for clustering is changed.
> Why did the authors choose to sum the loss over top N layers instead of just using one layer? Each layer has different codebook associated which leads to an increase in number of code-books to be kept track of and hence more computation.
As demonstrated in Section 4.5 and Figure 3, the number of codebooks is the most dominant hyper-parameter of the proposed method. As a reference, using only a single codebook at the top layer results in a high WER of 87.4 (under the same setup mentioned in Section 4.5; evaluated at 200k with fixed LM decoding parameters).
A key fact is that the information encoded by Transformer models can change significantly from layer to layer [A]. Targeting more layers therefore provides richer targets and prevents the model from collapsing onto the content of a single layer. As a result, using more target layers naturally leads to a better model. However, early layers are usually less related to the underlying phonemes [A], so performance decreases once we involve more than 10 layers. We thank the reviewer for raising the doubt and will add more explanation to the experiment sections for better clarity.
---
> Did the authors do any study to compare the effect of summing the loss over all frames vs just the masked frames?
We followed the common practice of MLM methods of computing the loss only on the masked frames and did not experiment with an unmasked loss. As a reference, prior work combining offline clustering and MLM in speech (HuBERT; [20]) found that computing the loss at unmasked positions results in worse performance.
---
> How does the conditional prob P(phone|codebook) look like as one goes closer to top layers?
Please refer to the global rebuttal section, where the pdf file is attached. A quick takeaway is that while all layers share a similar structure, later layers tend to assign more codes to the more frequent phones, leaving some less frequent phones under-represented.
This also supports our explanation of the benefit of targeting more layers.
---
> Table 7 for multilingual experiments in supplementary material is not clear to me. For layer 5, Authors show that 1.1% of code-words are shared across 9 languages but it steeply jumps to 86.5% when all 10 languages are considered. What is the implication of that?
The implication of Table 7 is that the codewords model fine-grained acoustic units that are language-independent. E.g., the majority (86.5%) of the codewords at layer 5 are shared by all 10 languages. Even the least general codewords are shared by 3 languages, meaning no language-specific codewords are learned through training.
---
> Figure 3 caption seems like it should be changed as N refers to number of code-books there.
The number of codebooks N is the same as the number of layers used for clustering (one codebook per layer); we will update the caption to make it more consistent.
---
Finally, we thank the reviewer for pointing out typos in the manuscript that will be fixed in the next version.
#### References mentioned in rebuttal
[A] Layer-wise Analysis of a Self-supervised Speech Representation Model, https://arxiv.org/pdf/2107.04734.pdf | Summary: The paper introduces a method called DinoSR for improved speech representation learning. DinoSR combines three existing key concepts: masked language modeling, self-distillation and on-line clustering. The authors demonstrate that these components complement each other and lead to a better model for speech representation learning by evaluating the learned representations on downstream tasks.
DinoSR works as follows. First, contextualized embeddings are extracted from the input audio using a teacher network. Next, an online clustering system is applied to these embeddings, resulting in a machine-discovered phone inventory. This step helps in identifying distinct units or phonemes present in the speech data. Finally, the discretized tokens from the clustering step are used to train the student network with an MLM loss. (The teacher network parameters are an EMA of the student network's.)
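The pipeline just described can be sketched end to end in a toy NumPy form. Linear maps stand in for the teacher/student networks, and all sizes, names, and the crude zero-masking are hypothetical; this is a sketch of the training signal flow, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, V, T = 8, 16, 50                 # feature dim, codebook size, frames (toy sizes)

codebook = rng.normal(size=(V, D))  # one codebook (the paper uses one per target layer)
student_enc = rng.normal(size=(D, D)) * 0.3
teacher_enc = student_enc.copy()    # teacher initialized from the student
student_head = rng.normal(size=(D, V)) * 0.1

def train_step(feats, mask, tau=0.999):
    global teacher_enc
    # 1) Teacher embeds the unmasked input; nearest codeword = discrete target.
    teacher_emb = feats @ teacher_enc
    dists = ((teacher_emb[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    targets = dists.argmin(axis=1)                   # (T,) codeword indices
    # 2) Student sees the masked input and predicts the target codeword index.
    masked = feats.copy()
    masked[mask] = 0.0                               # crude stand-in for a mask token
    logits = (masked @ student_enc) @ student_head
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    loss = -logp[mask, targets[mask]].mean()         # loss on masked frames only
    # 3) Teacher parameters track the student via an exponential moving average.
    teacher_enc = tau * teacher_enc + (1 - tau) * student_enc
    return loss, targets

feats = rng.normal(size=(T, D))
mask = rng.random(T) < 0.5
loss, targets = train_step(feats, mask)
print(loss > 0, targets.shape)
```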
Strengths: * The paper combines different contemporary approaches for speech representation learning to achieve a better representation learning.
* Removes the offline clustering constraint of HuBERT-like approaches.
Weaknesses: * Small dataset evaluation
- The results presented in the paper use a small dataset in the context of speech representation learning. It is not clear if the benefits of the proposed approach remain relevant compared to contemporary approaches in the big-data regime.
* Generalizability to other downstream tasks
- The speech representations extracted by the proposed method seem to be superior to contemporary approaches on ASR/phoneme-related tasks. However, it is not clear if they extend to other downstream tasks such as speaker recognition, emotion recognition, etc. (Table 3)
* Generalizability to other sizes or model architectures.
- The choice of target layers in the teacher model is a critical hyper-parameter. It is not clear if it has to be re-tuned for every new architecture or model size for optimal downstream performance.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * How does the proposed method compare to w2v-bert approach?
* Can the authors add ASR results without an LM?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Z4mQ for taking the time to review our paper and expressing explicit concerns. Below we answer the question raised in the review.
---
> Small dataset evaluation
> Generalizability to other sizes or model architectures
We would like to emphasize that the model size and dataset used in this work, although not gigantic, are still standard setups considered by recent prior works [9,13,20,22,34]. As mentioned in Section 4.1, the dataset and model size we selected are constrained by our computing resources (16 V100 GPUs maxed out our capacity; scaling up the model size by 2x would require 4x more GPUs according to prior works). In addition, prior works [9,20] also found that scaling is not a problem for these MLM-based Transformers. As a supporting measure, the source code (as in the appendix) will also be published after the anonymity period, so the model can easily be experimented with at a larger scale.
---
> Generalizability to other downstream tasks
Similar to ContentVec [34], this work focused on learning phonetic/semantic contents in speech and conducted a diverse set of evaluations toward the goal. Consequently, other contents (such as speaker information) are overlooked and expected to be neglected by the model. We apologize if the goal is not clear enough in the current version. We will update the manuscript to make it more explicit.
---
> How does the proposed method compare to w2v-bert approach?
From a high-level point of view, the most significant differences between DinoSR and w2v-BERT are as follows:
- In addition to the MLM objective with discrete tokens, w2v-BERT also includes a contrastive objective similar to wav2vec-2.0.
- DinoSR built discrete targets from contextualized representation with online learning as described in the paper; w2v-BERT built a discrete target from localized CNN representation using linear projection and the Gumbel softmax activation.
In practice, there are also various differences such as the input surface feature (waveform vs. spectrogram), the basic building block (Transformer vs. Conformer), etc. Moreover, w2v-BERT was proposed with a 24-layer Conformer encoder and is not publicly available, hence it is hard to compare the two methods apples-to-apples.
---
> Can the authors add ASR results without LM.
Here we provide ASR results without LM decoding. Greedy search is used, and we also list the prior works [13,22] that report WER without LM decoding.
| | dev-clean | dev-other | test-clean | test-other |
|---|---|---|---|---|
| *10 min fine-tuning* | | | | |
| wav2vec 2.0 [13] | 46.1 | 51.5 | 46.9 | 50.9 |
| DinoSR | 33.5 | 37.1 | 33.9 | 37.1 |
| *1 hr fine-tuning* | | | | |
| wav2vec 2.0 | 24.1 | 29.6 | 24.5 | 29.7 |
| WavLM [22] | - | - | 24.5 | 29.2 |
| DinoSR | 17.6 | 21.8 | 17.7 | 2.1 |
| *10 hr fine-tuning* | | | | |
| wav2vec 2.0 | 10.9 | 17.4 | 11.1 | 17.6 |
| WavLM | - | - | 9.8 | 16.0 |
| DinoSR | 7.5 | 12.2 | 7.7 | 12.5 |
| *100 hr fine-tuning* | | | | |
| wav2vec 2.0 | 6.1 | 13.5 | 6.1 | 13.3 |
| WavLM | - | - | 5.7 | 12.0 |
| DinoSR | 4.7 | 9.8 | 4.5 | 9.9 |
---
We thank the reviewer again for raising these questions; we will modify the paper to incorporate this additional information/clarification in the next version. Finally, we hope the reviewer can understand the limitations of our resources and weigh the concern about scaling less heavily in the final review.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Most of the references pointed out ([9,13,20,22,34]) regarding dataset size show results for Libri-light along with LibriSpeech. Nonetheless, based on the authors' overall response, I have updated my score. | Summary: The paper introduces DinoSR, a method for improved speech representation learning. DinoSR combines three existing key concepts: masked language modeling, self-distillation, and online clustering. The authors demonstrate that these components complement each other and lead to a better model for speech representation learning by evaluating the learned representations on downstream tasks.
The authors demonstrate that these concepts complement each other and result in a strong representation learning model for speech. DinoSR extracts contextualized embeddings using a teacher network and applies an online clustering system to discover meaningful acoustic units. The discretized tokens from the clustering process guide a student network. The paper claims that DinoSR outperforms previous state-of-the-art methods in several downstream tasks and provides a detailed analysis of the model and the learned discrete units.
The key innovation of DinoSR is the introduction of a gradient-free online clustering method that leads to meaningful acoustic units. The authors emphasize their contributions in advancing the state of the art on various benchmarks through end-to-end training and in providing a closer examination of embeddings from speech transformers via discrete units. They also mention possibilities for future work, including structural learning with the codebook, scaling to larger models, and extending the model to different modalities.
Strengths: Novel Approach: DinoSR combines self-distillation, online clustering, and masked language modeling for self-supervised speech representation learning.
Detailed Analysis: The paper provides a detailed analysis of the model and the learned discrete units. This analysis offers insights into the underlying representations and contributes to a deeper understanding of speech processing.
Key Innovation: Introducing a gradient-free online clustering method, leading to meaningful acoustic units, is highlighted as a key innovation of DinoSR. This innovation adds value to the field by providing a method for discovering and utilizing important acoustic units in speech representation learning.
The proposed approach does very well on the general benchmarks showing its usefulness across tasks.
Weaknesses: Limited Discussion of Limitations: The paper does not explicitly discuss the limitations of DinoSR. It is crucial to acknowledge any potential drawbacks or constraints of the proposed approach to provide a balanced view of its strengths and weaknesses.
Also, no ablation showing the performance gain achieved by each component is provided, which would have helped in understanding the different building blocks of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A note on the limitations of the proposed approach would have been useful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not discuss the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Jtdv for taking the time to review our paper and recognizing our contribution. Below we answer the question raised in the review.
---
> Limited Discussion of Limitations: The paper does not explicitly discuss the limitations of DinoSR. It is crucial to acknowledge any potential drawbacks or constraints of the proposed approach to provide a balanced view of its strengths and weaknesses.
Limitations are provided in Section A1. However, we agree that more limitations should be discussed in the paper, e.g., the method might not be useful for tasks that are not related to phonetic/semantic contents of speech. We thank the reviewer for the reminder and will add a paragraph accordingly to the main paper in the next version. | Rebuttal 1:
Rebuttal: Attached pdf file contains figures that display $P(phone|code)$ for different layers per reviewer UMJ9's request.
Pdf: /pdf/37dc6768eac72972448a3d1bd047573827eaef51.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes DinoSR, a self-supervised training method with self-distillation and online clustering. The key contribution of this work is the online clustering, a gradient-free method for learning acoustic unit representations. The authors demonstrate that this approach outperforms previous state-of-the-art models on resource-limited automatic speech recognition and that the discovered discrete clusters align closely with human phonetic units. The paper is well-written and provides clear explanations of the proposed approach. The experimental results are extensive and demonstrate the effectiveness of the proposed method.
Strengths: 1) The article is well structured and details the specific methodology of DinoSR, whose combination of masked language modeling, self-distillation, and online clustering is relatively innovative; the method is compared convincingly with the current best self-supervised representations.
2) The experimental work is thorough, and the results carry a high level of confidence.
3) The authors provide the code which will benefit the community.
Weaknesses: 1) The contributions of this paper are somewhat limited. Self-distillation and the MLM task have been used in much previous work, such as data2vec and SPIRAL. Furthermore, online clustering has also been used in many previous works, such as wav2vec 2.0 and vq-wav2vec.
2) It would be appreciated if the authors could provide more theoretical analysis of why the proposed method works well, especially regarding the alignment with phonetic units and the high quality of the codebook. It would make the effectiveness of the work more persuasive.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) As the key contribution of this work is the online clustering, it is suggested to provide some theoretical analysis or experiments to clarify why online clustering can align phonetics effectively.
2) This work does not take methods to keep codebook diversity, such as diversity loss in wav2vec 2.0, but has a very good codebook active rate. It would be valuable to analyze the reason with more explanations or experiments.
3) There are numerous online clustering methods available. Does the capability of the model architecture have an impact on codebook quality? For example, VQ-APC still uses an RNN, but wav2vec 2.0 and DinoSR use Transformers. Is the current comparison conducted under fair conditions?
4) The following should be further clarified:
* The term “contextualized” is mentioned several times. To my knowledge, wav2vec 2.0 also learns contextualized representations, but the paper states that wav2vec 2.0 clusters non-contextualized representations. Why?
* Line 104-105 and equation 2, I am not sure whether equation 2 is accurately defined.
* The demonstration of codebook update in section 3.2 is not sufficiently clear. The initialization of s_v and n_v in equation 3 is not mentioned in either the paper or the appendix. Though I was finally able to find this information in the code, it would be beneficial to include it in the paper.
* Line 238, should the "default 8" for the codebook size V be some other number, e.g., 256?
* Figure 3, the y-axis legend is missing.
* In section 4.6, only the 5th layer is chosen for analysis. Why is this layer selected? What about the other layers? Is the perplexity same for all the layers?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Maey for taking the time to review our paper and providing constructive feedback. Below we answer the questions raised in the review.
(quotes from the original review are trimmed to save space)
---
> The contributions of this paper are somewhat limited...
We would like to highlight our contribution from a different point of view, focusing on the use of discretized representation learning. To the best of our knowledge, existing self-supervised learning methods involving vector quantization have all followed the paradigm proposed in VQ-VAE [A]: using discrete representations as an information bottleneck in the forward pass. This work is novel in that we propose to learn discrete units that serve as self-supervised learning targets instead of an information bottleneck. More importantly, our tokenizer is jointly learned with the self-supervised model itself, pointing out a different path for self-supervised learning methods in fields [B,C] that have been relying on the quality of pre-trained tokenizers.
---
> It would be appreciated if the authors could provide more theoretical analysis about why the proposed method works well, ...
> As the key contribution of this work is the online clustering, it is suggested to provide some theoretical analysis or experiments to clarify why online clustering can align phonetics effectively.
> This work does not take methods to keep codebook diversity, ..., but has a very good codebook active rate. It would be valuable to analyze the reason with more explanations or experiments.
Let $x$ be the input speech and $z$ be the codewords from the codebooks of the teacher model; we show that the goal of DinoSR is to maximize a lower bound on the mutual information between $x$ and $z$, i.e.,
$I(x;z) = H(z) - H(z|x)$.
- The first term $H(z)$ corresponds to the (log) codebook perplexity. While perplexity can be difficult to control for gradient-based VQ methods, the proposed online clustering method easily achieves high perplexity, as shown in Table 5. Our explanation is that DinoSR performs online clustering in an embedding space that changes slowly throughout training, since the teacher model is an EMA of the student model. More specifically, our method stands in the middle ground between clustering frozen features (e.g., the upper part of Table 5) and clustering fast-changing features (e.g., the prior works in the lower part of Table 5); hence it is more robust to code collapse but still falls slightly behind offline clustering, as discussed in Section 4.6.
- For the second term $H(z|x)$, our training objective is to minimize the cross entropy between the teacher model $P(z|x)$ and the student model $Q(z|x)$, i.e., minimizing $H(P,Q)$. From the properties of cross entropy, we know that minimizing $H(P,Q)$ minimizes an upper bound of $H(P)$, which corresponds to the last term of the mutual information above.
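The inequality used in the second point is the standard decomposition of cross entropy into entropy plus KL divergence; spelled out with the notation above:

```latex
\begin{aligned}
H(P, Q) &= -\sum_{z} P(z \mid x)\,\log Q(z \mid x)\\
        &= -\sum_{z} P(z \mid x)\,\log P(z \mid x)
           \;+\; \sum_{z} P(z \mid x)\,\log\frac{P(z \mid x)}{Q(z \mid x)}\\
        &= H\bigl(P(z \mid x)\bigr) + D_{\mathrm{KL}}\bigl(P \,\Vert\, Q\bigr)
        \;\ge\; H\bigl(P(z \mid x)\bigr),
\end{aligned}
```

so driving down the student's cross-entropy loss drives down an upper bound on $H(z \mid x)$ (in expectation over $x$), which together with a high $H(z)$ raises the lower bound on $I(x;z) = H(z) - H(z|x)$.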
Finally, though there are many different contents in speech that could be captured by the codewords during training, we hypothesize that acoustic units are particularly likely to be modeled by our framework, more so than other contents such as speaker information, background sound, etc. Our reasons are (1) the frame rate of the model is ~50 Hz, meaning each frame corresponds to only 20 ms of speech; and (2) as mentioned in Section A2, we perform instance normalization on the pre-quantization features, which can be viewed as removing inter-utterance information prior to clustering.
---
> ... Is the capability of model architecture having impact on the codebook quality? For example, VQ-APC still uses RNN, but wav2vec 2.0 and DinoSR use Transformers. Is the current comparison in a fair condition?
As the title of Section 4.6 suggests, in this section (with Table 5 and Figures 4 & 5) we aimed to analyze the quality of the clusters rather than compete against other methods. We agree with the concern about the architectural difference between VQ-APC / Co-training APC and the transformer-based methods, but we also think they should not be removed, in order to credit these preceding works on online clustering. We thank the reviewer for raising the concern and will add a caveat to the table directing readers to focus on comparing the transformer-based models against each other.
---
> ... the paper states that wav2vec 2.0 is clustering of non-contextualized representations. Why?
Wav2vec 2.0's clustering comes from the VQ layer, which takes the output of a CNN over the waveform as input, resulting in a fixed receptive field for each token that contains only local information (though the representations of wav2vec 2.0 are indeed contextualized, as the reviewer suggests).
---
> Line 104-105 and equation 2, I am not sure whether equation 2 is accurately defined.
Apologies for the confusion; L104 contains a typo ($Z^k_t$ should be $Z^k_v$). Eq. 2 defines the subset of teacher model representations ($\tilde{z}^k_t$, defined in L86) that is used to update each codeword $v$. We thank the reviewer for pointing this out and will improve it in the next version.
---
> The demonstration of codebook update in section 3.2 is not sufficiently clear. ... Though I was finally able to find this information in the code, it would be beneficial to include it in the paper.
We thank the reviewer for pointing this out: $s^k_v$ is initialized to $e^k_v$ (which is randomly initialized, as discussed in Section A.2) and $n^k_v$ is initialized to 1 for all $k$ and $v$. We will add these details in the next version.
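The initialization and gradient-free update just described can be sketched with a standard EMA k-means codeword update for a single codebook. This NumPy sketch matches the stated initialization ($s_v = e_v$, $n_v = 1$) but is only illustrative; the exact decay factors and normalization follow the paper and released code:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, tau = 16, 8, 0.9           # codebook size, dim, EMA decay (toy values)

e = rng.normal(size=(V, D))      # codewords e_v, randomly initialized
s = e.copy()                     # running sum s_v, initialized to e_v
n = np.ones(V)                   # running count n_v, initialized to 1

def update_codebook(z):
    """Gradient-free EMA codeword update given teacher embeddings z: (T, D)."""
    global e
    # Assign each frame to its nearest codeword.
    assign = ((z[:, None, :] - e[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    for v in range(V):
        zv = z[assign == v]      # embeddings assigned to codeword v (may be empty)
        s[v] = tau * s[v] + (1 - tau) * zv.sum(axis=0)
        n[v] = tau * n[v] + (1 - tau) * len(zv)
    e = s / n[:, None]           # codeword = running mean of assigned embeddings

update_codebook(rng.normal(size=(200, D)))
```

Because `n` starts at 1 and only decays geometrically, the division is always well defined even for codewords that receive no assignments in a given batch.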
---
#### References mentioned in rebuttal
- [A] Neural Discrete Representation Learning, https://arxiv.org/pdf/1711.00937.pdf
- [B] HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units, https://arxiv.org/abs/2106.07447
- [C] BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers, https://arxiv.org/pdf/2208.06366.pdf
---
Rebuttal Comment 1.1:
Title: Thank the authors for the feedback
Comment: I thank the authors for the feedback. The authors' responses have addressed most of my concerns raised in the previous review. I suggest the authors incorporate their responses (especially the theoretical analysis) into the new revision of the paper. I raise my score from 6 to 7. | Summary: This paper proposes DinoSR, a novel self-supervised learning (SSL) approach for speech representations that combines the ideas of the masked language model, self-distillation, and online clustering. It improves on the data2vec method by using online clustering to obtain discrete targets from the teacher model and achieves better results in multiple tests.
Strengths: The paper is well-written and easy to follow. A rich set of experiments was conducted, and decent results were obtained: the proposed method sometimes achieved state-of-the-art (SOTA) results and was sometimes not far behind SOTA.
Weaknesses: 1. The major weakness to me is originality when compared to data2vec. Although the main idea of the paper is presented as combining the masked language model, self-distillation, and online clustering together, the joint use of the first two was actually studied in data2vec. Thus an alternative interpretation of the novelty is perhaps an improvement of data2vec by using discrete rather than continuous targets from the teacher.
2. Some secondary weaknesses:
A. In Eq. 2, which frames are chosen to generate $\tilde{z}^k_t$?
B. Masked Language Modeling (MLM) -> defined twice.
C. In variables like $\tilde{z}^k_t$, better to quote the superscript $k$ with a bracket (e.g. $\tilde{z}^{(k)}_t$) to avoid confusion.
D. Not sure if the results of the probability-based metrics in Table 5, such as perplexity and mutual information, are comparable for the online clustering systems, since different numbers of active clusters lead to distributions of incomparable sharpness.
E. In the reference sections: volume and pages are missing for journal references; "In" is missing in NIPS references, such as [4] and [11].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: "By labelling each codeword using articulation manner classes in English, we revealed the fact that some of the acoustic attributes are embedded in the high-dimensional space. For example, vowels and silences demonstrated a high degree of concentration."
Could you please explain how this can be revealed in a two-dimensional space?
I personally interpret the figure from a reverse perspective: Figure 5 verified the codebooks have good correlations with articulatory features, as many belonging to the same category have a high degree of concentration.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. Though some initial multilingual results were presented in the supplementary materials, in the paper itself, the proposed method is only verified using English data.
2. Only clean data was used in the training and test. Perhaps worth investigating the use of the proposed method on data with general audio events and noises.
3. Since online clustering was performed on different layers, it would be interesting to understand further the differences and similarities of the codebooks obtained at different layers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer zY5D for taking the time to review our paper and providing the detailed feedback. Below we answer concerns/questions raised in the review.
---
> The major weakness to me is originality when compared to data2vec. Although the main idea of the paper is presented as combining the masked language model, self-distillation, and online clustering together, the joint use of the first two was actually studied in data2vec. Thus an alternative interpretation of the novelty is perhaps an improvement of data2vec by using discrete rather than continuous targets from the teacher.
We would like to highlight our contribution from a different point of view, focusing on the use of discretized representation learning. To the best of our knowledge, existing self-supervised learning methods involving vector quantization have all followed the paradigm proposed in VQ-VAE [A] - using discrete representations as an information bottleneck in the forward pass. This work is novel in that we propose to learn discrete units that serve as self-supervised learning targets rather than as an information bottleneck. More importantly, our tokenizer is learned jointly with the self-supervised model itself, pointing out a different path for self-supervised learning methods in other fields [B,C] that have been relying on the quality of pre-trained tokenizers.
---
> In Eq. 2, which frames are chosen to generate $\tilde{z}^k_t$?
$\tilde{z}^k_t$ are from the teacher network (as defined in L85) which takes the unmasked audio as input.
---
> Not sure if the results of the probability-based metrics in Table 5, such as perplexity and mutual information, are comparable for the online clustering systems since different numbers of active clusters are results that lead to incomparable sharpness of distributions.
The comparison is fair since all methods are given the same amount of freedom (256 clusters). Probability-based metrics are comparable in this case, and the number of active clusters should *not* be factored into these metrics. For example, consider a perfect codebook (i.e., each phoneme is exclusively represented by one codeword and all other codewords are inactive): we would have a low number of active clusters (a sharp distribution) but high MI.
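To illustrate this point numerically, here is a small self-contained sketch (the toy labels and code assignments are our own; the MI estimator is the standard plug-in estimate in bits). A codebook where each phoneme maps to exactly one codeword uses only 3 of a hypothetical 256-codeword budget, i.e., a sharp code distribution, yet attains the maximal MI of log2(3) ≈ 1.585 bits:

```python
import math
from collections import Counter

def mutual_information(labels, codes):
    """Plug-in estimate of I(label; code) in bits from paired samples."""
    n_total = len(labels)
    count_l = Counter(labels)
    count_c = Counter(codes)
    count_lc = Counter(zip(labels, codes))
    mi = 0.0
    for (l, c), n in count_lc.items():
        p_joint = n / n_total
        # p_joint / (p_l * p_c) == n * n_total / (count_l * count_c)
        mi += p_joint * math.log2(n * n_total / (count_l[l] * count_c[c]))
    return mi

# A "perfect" codebook: each phoneme is exclusively represented by one
# codeword, so only 3 of the 256 available codewords are active.
labels = ["a", "a", "s", "s", "t", "t"]
codes = [0, 0, 1, 1, 2, 2]
print(mutual_information(labels, codes))  # log2(3) ≈ 1.585 bits
```

By contrast, a degenerate codebook mapping every frame to one codeword would also be sharp, but its MI with the labels would be zero; sharpness alone does not determine these metrics.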
---
> Could you please explain how can this be revealed in a two-dimensional space? I personally interpret the figure from a reverse perspective: Figure 5 verified the codebooks have good correlations with articulatory features, as many belonging to the same category have a high degree of concentration.
We are sorry for the confusion; we were trying to point out that the codebooks have good correlations with articulatory features, as you mentioned: the codewords that represent silences are concentrated, and the codewords that encode vowels are also concentrated. We will revise the wording to make this clearer.
---
Finally, we thank the reviewer for pointing out typos in the manuscript, which will be fixed in the next version. We also recognize the limitations pointed out by the reviewer and will seek to address them in our future work.
### References mentioned in rebuttal
- [A] Neural Discrete Representation Learning, https://arxiv.org/pdf/1711.00937.pdf
- [B] HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units, https://arxiv.org/abs/2106.07447
- [C] BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers, https://arxiv.org/pdf/2208.06366.pdf | null | null | null | null |
The Bayesian Stability Zoo | Accept (poster) | Summary: This paper aims to establish the equivalences among various definitions of distribution independent stability and different definitions of distribution dependent stability. Furthermore, the authors propose a stability-boosted interpolating learning rule that exhibits logarithmic expansion of KL-stability with respect to the sample size.
Strengths: 1. The authors establish the equivalence between different definitions of stability.
2. The stability boosting results presented in Section 2.2 are particularly intriguing and can be considered the primary novelty of this paper. These results demonstrate the effectiveness of the proposed stability-boosted interpolating learning rule.
Weaknesses: 1. The paper is not well-written and lacks self-containment. For example, the abbreviation "PAC" is used throughout the main text without its full name being given when it is first mentioned, despite it being a well-known concept in machine learning. Additionally, the structure of the paper lacks proper organization, as a significant portion is dedicated to introducing existing results and preliminaries, while the main contributions of Theorem 2.1 and Theorem 2.2 are given relatively little emphasis.
2. The main weakness of the paper lies in the overlap between the main contribution and existing results, leading to insufficient novelty. Many existing studies, such as "From Robustness to Privacy and Back" by Asi et al. and those mentioned in Theorem 1.3 of this paper, have already established the equivalence between distribution independent stability and differential privacy. Consequently, the primary contribution of Theorem 2.1 and Figure 1 appears to be a review of these existing results, which is claimed as the main contribution in this paper.
Theorem 2.2, from my perspective, is an interesting and significant novelty in this paper. However, its importance is not adequately emphasized in the main body. After reading the construction of $A^*$ in Theorem 2.2 in the appendix, I believe that this construction should be highlighted in the main body to showcase its novelty.
3. One noticeable limitation of this paper is the lack of empirical evaluation for the weak learning rule $A$ and the stability-boosted learning rule $A^*$ in Theorem 2.2. As a result, it is uncertain whether this construction holds practical significance without empirical evidence.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: According to the weakness mentioned, it would be beneficial for the authors to improve the English writing of the paper and place more emphasis on the main contribution, supported by experimental results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The paper is not well-written and is lack of self-containment. For example, the abbreviation "PAC" is used throughout the main text without providing its full name when it is initially mentioned, despite it being a well-known concept in machine learning.**
PAC learnability is defined on line 193 of the paper. You are correct that we neglected to state the full unabbreviated name ("Probably Approximately Correct"). We believe most readers would be familiar with this abbreviation, but nonetheless we will add the full unabbreviated name in the final version.
**The structure of the paper lacks proper organization, as a significant portion is dedicated to introducing existing results and preliminaries, while the main contributions of Theorem 2.1 and Theorem 2.2 are given relatively little emphasis.**
We believe that this work has to include a fair amount of previous work and preliminaries, because it studies connections between many notions of stability that exist in the literature. We agree with this reviewer that it is important that the paper be self-contained (as they mentioned), and this requires providing sufficient detail on existing notions.
We are also happy to add additional discussion about our main theorems, including their intuitive meaning and significance.
**The main weakness of the paper lies in the overlap between the main contribution and existing results, leading to insufficient novelty. [...] Consequently, the primary contribution of Theorem 2.1 and Figure 1 appears to be a review of these existing results, which is claimed as the main contribution in this paper.**
This is not accurate. Theorem 2.1 (which the reviewer mentions) is in fact an original contribution of this paper and is NOT a review of existing results. It involves a non-trivial proof and relies on our stability boosting result (Theorem 2.2). These are meaningful original contributions.
Regarding Theorem 1.3 and Figure 1, these are indeed summaries of existing results. However, we believe that these parts of the paper constitute a meaningful conceptual contribution. Namely, our observation that a large number of disparate results can be neatly organized around distribution-dependent and distribution-independent notions of stability is a valuable contribution that could help researchers make sense of the stability landscape.
**Many existing studies, such as "From Robustness to Privacy and Back" by Asi et al. and those mentioned in Theorem 1.3 of this paper, have already established the equivalence between distribution independent stability and differential privacy.**
The paper "From Robustness to Privacy and Back" studies connections between privacy and *robustness to adversarial perturbations of the input*. This form of robustness is technically different from the Bayesian stability notions considered in our paper.
**Theorem 2.2, from my perspective, is an interesting and significant novelty in this paper. However, its importance is not adequately emphasized in the main body. After reading the construction of $A^\star$ in Theorem 2.2 in the appendix, I believe that this construction should be highlighted in the main body to showcase its novelty.**
Thank you for this positive appreciation! We are happy to further highlight the value and novelty of our construction in the final version of the paper.
**One noticeable limitation of this paper is the lack of empirical evaluation for the weak learning rule $A$ and the stability-boosted learning rule $A^\star$ in Theorem 2.2. As a result, it is uncertain whether this construction holds practical significance without empirical evidence.**
Indeed, it would be valuable to perform an empirical evaluation of our work. Our work is purely theoretical/mathematical, and we leave experimental evaluations to future research.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal.
Comment: Thank you for your comprehensive response. I am open to revising my rating once I receive satisfactory answers to the following additional questions:
1. Could you kindly elaborate further on the formulation and development of the stability-boosted learning rule $A^*$? Given the absence of empirical validation concerning its effectiveness, I would recommend that the authors furnish more intricate insights into the tightness of this theoretical framework. Alternatively, if available, references to established works supporting this approach would greatly enhance its credibility.
2. In addition to the insights presented in Theorem 2.2, could you highlight any other innovative contributions that distinguish your work from the prior literature as listed in Theorem 1.3? (This question arises from my uncertainty regarding whether the primary contribution of formulating the stability-boosted rule holds sufficient promise, as indicated in Question 1. Should Question 1 receive a satisfactory response even in the absence of an answer to Question 2, I would be happy to revise my rating accordingly.)
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your time and your careful consideration. We are thrilled to hear that you are open to revising your rating!
Regarding your questions:
Title: That is Wonderful! | Summary: The paper is a thorough collection of relations between different notions of stability used in learning theory literature. The authors propose a systematization of the many relations by proposing a bifurcation of the notions into two classes -- distribution-independent stability and distribution-dependent stability, where the distribution here refers to population distribution.
Strengths: The paper is a very thorough organization of the panoply of notions of stability used in learning theory literature and provides an insightful perspective on how to view them in relation to each other.
Weaknesses: The amount of content in the paper is better suited to a journal than to a mere 8-page conference paper. I found the order of the sections odd -- section 3 on preliminaries should come before section 2, for example.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **I found the order of the sections to be weird -- section 3 on preliminaries should be before section 2 for example.**
The current order was chosen so that the main contributions of the paper will appear as early as possible (this is common practice in some conferences). However, we agree that it makes sense to have the preliminaries appear before the technical overview, and we are happy to swap the order of these sections.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I believe once the paper's exposition is improved this paper is a strong accept. | Summary: This paper categories different definitions of stability (approximate/pure DP, replicability, global stability, perfect generalization, TV indistinguishability, mutual information stability and KL divergence stability) in the literature into two families (distribution-dependent stability and distribution-independent Bayesian stability) and provides theoretical equivalences in both groups.
Strengths: This paper provides connections between a series of stability definitions in the literature and proved the equivalence between them, which has potential to generalize new proof techniques between different topics.
Weaknesses: 1. This paper introduces a series of definitions without any supporting literature, for example, $D_\alpha$-Stability, etc. Without any background/intuition for the definitions or a future-directions section, I am not convinced of how these results can be applied to other topics. Some discussion and intuition for the definitions would greatly improve the quality of the paper.
2. The equivalence results in this paper are useful and I expect to see some examples on how the equivalence results can help building bridges between different stability literature.
3. The writing and structure of the paper can be improved.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Can the authors provide some examples of how the equivalence results can help connect different strands of the stability literature?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The current version of the paper can be improved by some rigorous introductions on the different definitions of stability introduced in this paper, for example, how these concepts have been applied in different topics, but their connection has rarely been investigated. The current version is more like a list of definitions and theorems, without any intuitions, discussions and future directions. Polishing the paper would greatly strengthen the paper, but given the limitations above, incorporating these into the current paper will require a fair amount of rewriting and editing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **This paper introduces a series of definitions without any literature, for example, $\mathsf{D}_\alpha$-Stability, etc. Without any backgrounds/intuitions on the definitions and any future direction section, I am not convinced enough on how there results can be applied to other topics. Some discussions and intuitions for the definitions can greatly improve the quality of the paper.**
$D_\alpha$ is well motivated. $D_1$ corresponds to KL-divergence and $D_\infty$ corresponds to Perfect Generalization -- both of which are well-studied notions. Therefore, it is natural to consider $D_\alpha$ stability for $\alpha \in (1,\infty)$.
Moreover, there is also literature regarding $D_\alpha$-Stability as well. For example, [1] showed a generalization bound using R\'{e}nyi divergence (Theorem 2 and Corollary 5).
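For readers less familiar with the notation, the standard definition of the Rényi divergence underlying $D_\alpha$-stability, together with its well-known limiting cases, is (added here for context as the textbook definition, not quoted from the paper):

```latex
D_\alpha(P \,\|\, Q)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
  \qquad \alpha \in (1, \infty),
```
```latex
\lim_{\alpha \to 1} D_\alpha(P \,\|\, Q) = \mathtt{KL}(P \,\|\, Q),
  \qquad
D_\infty(P \,\|\, Q) = \log \sup_{x} \frac{P(x)}{Q(x)}.
```

Thus $D_1$-stability recovers KL-stability and $D_\infty$-stability recovers the max-divergence notion behind Perfect Generalization, so the family interpolates between the two well-studied endpoints mentioned above.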
**The equivalence results in this paper are useful and I expect to see some examples on how the equivalence results can help building bridges between different stability literature.**
One example is the connection between pure differential privacy and the PAC-Bayes theorem. Both of these are fundamental ideas that have been extensively studied. Theorem 2.1 states that a hypothesis class admits a pure differentially private PAC learner if and only if it admits a distribution independent KL-stable PAC learner. This is an interesting and non-trivial connection between two well studied notions.
As a concrete example of this connection, recall that thresholds over the real line cannot be learned by a differentially private learner. Hence, by Theorem 2.1, there does not exist a KL-stable PAC learner for thresholds. Another example is half-spaces with margins in $\mathbb{R}^d$: half-spaces with margins are learnable under differential privacy [2], therefore there exists a KL-stable PAC learner for half-spaces with margins. These are examples of the type of connections that our paper elucidates. (There are many other possible examples.)
**The current version of the paper can be improved by some rigorous introductions on the different definitions of stability introduced in this paper, for example, how these concepts have been applied in different topics, but their connection has rarely been investigated.**
We will add further discussion and background about the definitions of stability that we study.
**References**
[1] Amedeo Roberto Esposito, Michael Gastpar, and Ibrahim Issa. Robust generalization via $\alpha$-mutual information. CoRR, abs/2001.06399, 2020.
[2] Blum, A., Dwork, C., McSherry, F., & Nissim, K. (2005, June). Practical privacy: the SuLQ framework. In Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (pp. 128-138).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I think adding some discussions on the literature and intuitions of the results can greatly improve the paper. The structure and writing of the paper can also be improved.
---
Reply to Comment 1.1.1:
Comment: We are very happy to add additional discussion on the literature, and on the intuition behind our results.
Additionally, your review mentioned the following points:
* **Literature for $D_\alpha$.** We feel we have fully addressed this point in our rebuttal.
* **Examples on how the equivalence results can be used.** We have provided a number of examples in our rebuttal, which we will also include in the final version of the paper.
Thus, we feel we have fully addressed all the concerns raised in the review. Additionally, all the points raised have touched on non-major issues of presentation and style, and did not find any flaws with the actual contents of our mathematical and conceptual contributions.
At this point, do you have any substantial concerns regarding our paper? If not, would you be willing to reconsider your score for our paper? | Summary: **Post-rebuttal**
I thank the authors for their response. I will keep my score as is.
This work aims at building a comprehensive taxonomy of stability definitions showing that many definitions of stability are equivalent to each other.
Strengths: This paper studies the interrelations between different types of algorithmic stability. The goal is to extend the study of equivalences between different notions of stability, and make it more systematic. The main contribution of the paper is to show equivalences between distribution-independent Bayesian notions of stability. The authors also provide a boosting result that enables stability amplification.
Weaknesses: * The paper is very dense. It requires familiarity with a number of concepts related to different notions of stability and also substantial background knowledge about related prior work (all quite recent). Of course since the authors aim at creating a comprehensive taxonomy of stability definitions, this is expected and the paper does a great job at that, both in relation to prior work for the distribution-dependent equivalences, and the new ones for the distribution-independent versions. I could not devote enough time to go through the supplied proofs but overall, am not entirely convinced if a conference venue like NeurIPS is the ideal platform for such a dense and terse presentation. The presentation can perhaps be improved in certain parts to improve readability, e.g., giving more intuition wherever applicable (the current draft does not saturate the 9 page limit).
* TV indistinguishability (alluded to in the abstract) is not defined in the main text or supplementary.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * How is TV indistinguishability related to TV stability?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I do not foresee any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **TV indistinguishability (alluded to in the abstract) is not defined in the main text or supplementary. How is TV indistinguishability related to TV stability?**
They are the same. [1] used the term TV-indistinguishability, while we preferred the term TV-stability, as we believe it emphasises the fact that this definition is a form of algorithmic stability. We do note that [1] defined TV-indistinguishability slightly differently from our definition of TV-stability (see Definition 4 in [1]), but they also showed in Appendix A.3.1 that their definition is equivalent to ours (up to constant factors).
The term "TV-indistinguishability" appears only once (in the abstract); we will change this to "TV-stability" to eliminate this potential confusion.
**The presentation can perhaps be improved in certain parts to improve readability, e.g., giving more intuition wherever applicable (the current draft does not saturate the 9 page limit).**
We will add more background and intuition about the various notions of stability studied in this paper and the connections between them.
[1] Alkis Kalavasis, Amin Karbasi, Shay Moran, and Grigoris Velegkas. Statistical indistinguishability of learning algorithms, 2023. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper shows that many definitions of the "stability" in learning theory are equivalent (according to a specific definition of equivalence).
They prove this is the case for various distribution-independent definitions of stability as well as compile the same result for distribution-dependent definitions from previous work.
This summary allows a unified view across many stability-related concepts in the literature.
The submission also proves a result related to boosting, showing that a weak learner can be amplified to obtain better stability and learning performance.
Strengths: The paper is well-written and has a clear organization.
The concept of stability studied in the paper is an important one in learning theory.
By showing that multiple definitions of stability are weakly equivalent, the authors obtain important results that allow these definitions to be related to each other in a coherent manner.
Weaknesses: I don't have many complaints about this work.
I suggest the authors consider clarifying the details asked in my questions in the following section to make the paper clearer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'm having some trouble understanding the motivation behind wanting the difference between the prior and the posterior to be small in the description of stability on lines 33–34.
It seems to me we actually want the difference between two possible posteriors when applied to two similar realizations of $S$ to be small.
What if the data really disagree with the prior; wouldn't we have a large difference between the prior and the posterior regardless?
- On line 39, what makes the definition of stability Bayesian exactly? Because we iterate over possible realizations of $S$?
- In Section 2.1.1, does the chain mean only Pure DP implies $\text{D}_\infty$-Stability and the like but not vice versa ($\text{D}_\infty$-Stability implying Pure DP)?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss any possible limitations of their results, although it might be the case that the discussion doesn't apply due to the nature of the topic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **I'm having some trouble understanding the motivation behind wanting the difference between the prior and the posterior to be small in the description of stability on lines 33–34.**
Considering the "distance" between a prior distribution and the posterior is very common. (The word "distance" is in quotation marks since the measure of dissimilarity between them need not be a metric, e.g., the Kullback–Leibler divergence.)
For example, in the context of generalization, the PAC-Bayes theorem assures that for every population distribution and any given prior $\mathcal{P}$, the difference between the population error and the empirical error of an algorithm $A$ is bounded by $\tilde{O}\left(\sqrt{\frac{\mathtt{KL}(A(S),\mathcal{P})}{m}}\right)$, where $A(S)$ is the posterior distribution, $\mathtt{KL}(A(S),\mathcal{P})$ is the KL divergence between the prior and the posterior (the "measure of dissimilarity"), and $m$ is the size of the input sample $S$. See e.g. Theorem 31.1 in [1].
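For completeness, one standard form of the PAC-Bayes bound (in the style of Theorem 31.1 in [1]; the exact constants vary across variants, so this is stated for reference rather than quoted from the paper): with probability at least $1-\delta$ over the draw of $S \sim \mathcal{D}^m$, simultaneously for every posterior $Q$,

```latex
L_{\mathcal{D}}(Q) \;\le\; L_{S}(Q) \;+\;
\sqrt{\frac{\mathtt{KL}(Q \,\|\, \mathcal{P}) + \ln(m/\delta)}{2(m-1)}},
```

where $L_{\mathcal{D}}(Q)$ and $L_{S}(Q)$ denote the population and empirical errors under the posterior $Q$. Note that the KL term sits inside the square root, which is what makes a small prior-to-posterior divergence directly translate into a small generalization gap.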
**It seems to me we actually want the difference between two possible posteriors when applied to two similar realizations of $S$ to be small. What if the data really disagree with the prior; wouldn't we have a large difference between the prior and the posterior regardless?**
Your suggestion of measuring the dissimilarity between two posterior distributions is widely used as well; it is closely related to the definition of Replicability. In the paper we prove that these notions of stability are equivalent.
**On line 39, what makes the definition of stability Bayesian exactly? Because we iterate over possible realizations of $S$?**
Our choice of the name *Bayesian* stability is inspired by Bayesian statistics, which uses the terms *prior* and *posterior*. In Bayesian statistics the analyst has some prior distribution over possible hypotheses before conducting the analysis, and chooses a posterior distribution over hypotheses when the analysis is complete. Bayesian stability is defined in terms of the dissimilarity between these two distributions.
We will happily add this remark to the final version of the paper to clarify our choice of the term "Bayesian stability".
**In Section 2.1.1, does the chain mean only Pure DP implies $D_\infty$-Stability and the like but not vice versa ($D_\infty$-Stability implying Pure DP)?**
No, $D_\infty$-stability does imply pure DP, since $D_\infty$-stability implies $D_\alpha$-stability for every $\alpha\in[1,\infty]$, which in turn implies pure DP.
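For readers less familiar with Rényi divergences, this direction of the chain rests on the standard monotonicity of $D_\alpha$ in its order (a textbook fact restated here for convenience, not a result of the paper):

```latex
D_{\alpha}(P \,\|\, Q) \;\le\; D_{\alpha'}(P \,\|\, Q)
\qquad \text{for } 1 \le \alpha \le \alpha' \le \infty,
```

so a uniform bound on $D_\infty$ immediately yields the same bound on every $D_\alpha$.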
[1] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
---
Rebuttal Comment 1.1:
Title: thanks
Comment: I thank the authors for their response. I will keep my score as is. | null | null | null | null | null | null |
Secure Out-of-Distribution Task Generalization with Energy-Based Models | Accept (poster) | Summary: This paper studies the intersection of OOD generalization and meta-learning, which is rather new. The main claim is that existing meta-learning algorithms may fail to generalize well in OOD settings since they are not specifically designed to solve OOD tasks. To this end, authors proposed a general framework to incorporate OOD into the Bayesian meta-learning framework and use the derived ELBO to solve the problem. Experiments on Sinusoids, DrugOOD, and meta-dataset show the effectiveness of this approach.
Strengths: 1. The problem formulation is new and interesting.
2. The proposed framework is general and can solve both ID and OOD tasks.
3. Experiments are supporting the algorithms.
Weaknesses: 1. A lack of theoretical analysis using the derived Bayesian framework. I assume a generalization bound could further help in understanding the superiority of the approach.
2. There are no experiments related to efficiency, which is claimed in the abstract. Additionally, I wonder what the performance and efficiency of the algorithm would be on larger datasets.
3. ELBO is difficult to optimize. Are there any experiments on hyperparameter choices?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments to improve our paper. We detail our response below point by point. Please kindly let us know if our response addresses the questions you had for this paper.
##### A lack of theoretical analysis using the derived Bayesian framework. I assume a generalization bound can further help understand the superiority of the approach.
> - We will definitely include a rigorously derived generalization bound in the final version of our manuscript. For now we borrow the meta-generalization bound from Pentina and Lampert [r1] for an intuitive explanation.
> - We would first recall Lampert's generalization bound for meta-learning, which consists of three parts: (1) the empirical error $\hat{\text{er}}(\mathcal{Q})$, (2) the KL divergence between the hyper-posterior and the hyper-prior $\text{KL}(\mathcal{Q}||\mathcal{P})$, and (3) the expected KL divergence between each task-specific posterior and priors sampled from the hyper-posterior $\mathbb{E}_{P\sim\mathcal{Q}}[\text{KL}(Q_i(S_i,P)||P)]$.
>
> - Our argument is that EBML with more flexibility learns a **better hyper-posterior distribution $\mathcal{Q}$ that approximates the ID task distribution more accurately** than simple distributions like Gaussians. As a result, we expect a **lower KL divergence $\mathbb{E}_{P\sim\mathcal{Q}}[\text{KL}(Q_i(S_i,P)||P)]$** between samples of this hyper-posterior and any task-specific posteriors in the ID task distribution (i.e., the above part (3)).
>
> [r1] Pentina, A., & Lampert, C. H. A PAC-Bayesian Bound for Lifelong Learning. ICML 2014.
##### There are no experiments related to efficiency, which is demonstrated in abstract section. Additionally, I wonder what is the performance and efficiency of the algorithm in larger datasets.
> - In Line 18 of the abstract, we claim the **"effectiveness of our approach"**, which we have empirically verified:
>
> - In Tables 1, 2 & 3, the proposed energy sum consistently outperforms traditional OOD detection baselines in detecting both regression and classification OOD meta-testing tasks.
> - In Table 4 and Table r1 in our global response, our proposed OOD adaptation strategy outperforms the baseline method under few-shot classification settings on 5/5, 4/5 and 5/5 OOD testing domains in the Meta-dataset benchmark for EBML-TSA, EBML-URL and EBML-SimpleCNAPs, respectively.
>
> - In terms of **efficiency on larger datasets**, we have conducted our experiment on the **Meta-dataset benchmark, which is the largest benchmark currently** and has been extensively studied in many meta-learning algorithms, e.g., TSA, SimpleCNAPs.
> - We think that EBML shows a reasonable performance vs computational complexity trade-off given that we have seen a noticeable gain in performance.
> - Additional computational complexity analysis and wall-clock convergence plots can be found in section C.4 of the supplementary material as well as Fig. r1 in the PDF file attached to our global response to reviewers.
##### Are there any experiments on hyperparameter choices?
> - We follow the reviewer's suggestion and conduct a sensitivity analysis of EBML's prediction and OOD detection performance w.r.t. a number of important hyperparameters in Fig. r2 in the PDF file attached to our global response.
>
> - For each plot, we vary one of the hyperparameters from its optimal value while keeping the rest unchanged.
> - Based on the results, we observe that EBML's performance is quite stable within the region near the optimal hyperparameter values. | Summary: In this paper, the authors address the generalization problem of meta-learning methods on out-of-distribution tasks in the wild. However, existing Bayesian meta-learning methods suffer from incomplete coverage of such distribution shifts and insufficient expressiveness of meta-learned priors. The authors propose an energy-based meta-learning framework to represent task distributions.
Strengths: A plug-in and effective module EBML is proposed to improve the performance of existing meta-learning methods and achieve the new state of the art. Extensive experiments on four regression and classification datasets demonstrate the effectiveness of the method.
Weaknesses: Some parts of the paper are not very clear, and it took the reviewer a long time to understand them. The symbols should be defined beforehand to help readers quickly find the useful information.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In lines 34-35, \theta and \phi denote the parameters of the meta model and task-specific model. However, some meta-learning methods, such as MAML, use the same network architecture for both the meta-model and the task-specific model. In this paper, are \theta and \phi different networks?
2. Why do meta-testing for ID and OOD tasks have different optimization functions? Can the authors explain the essential difference between Equations (10) and (11)?
3. In Table 4, I would like to see more experimental results on Meta-dataset with different baselines and methods.
4. The paper is missing some important related work on using energy-based methods to solve meta-learning and few-shot learning problems [1, 2, 3].
[1] Meta-Learning Deep Energy-Based Memory Models.
[2] Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent
[3] Unsupervised Meta-Learning via Latent Space Energy-based Model of Symbol Vector Coupling
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: see weaknesses and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments. You may find our explanations for your concerns below. We would really appreciate it if you could let us know whether you have any further concerns.
##### Q1: On what $\theta$ and $\phi$ represent in EBML
> Line 33 in the Introduction presents a hierarchical probabilistic model [8, 44] that subsumes almost all meta-learning algorithms, where we use $\theta$ and $\phi$ to denote the parameters of the meta-model and the task-specific model, respectively. Depending on the meta-learning algorithm, $\theta$ and $\phi$ have different implementations.
> - For gradient-based meta-learning algorithms including MAML, each task-specific-model $\phi$ is obtained from $\theta$ via gradient descent. Thus, $\theta$ and $\phi$ share the same network architecture but differ in parameters.
> - For amortized meta-learning algorithms, including the CNPs that we combine with EBML in our experiments, $\theta$ includes both the parameters of the amortization network and those of the posterior predictive network (i.e., the decoder); $\phi$ is instead an encoded representation of a task in the form of **a finite-dimensional vector** which is used to condition the posterior prediction.
> - In EBML, we can collect all the meta-learned parameters that are shared by all tasks into $\theta$, including $\omega$ for the task-specific data EBM, $\psi$ for the amortization network, and the latent prior EBM $\lambda$. $\phi\sim q_\psi(\phi|\mathcal{T}^s)$ following amortized meta-learning algorithms denotes a task-specific vector that conditions the task-specific data EBM $E_\omega(\mathbf{x},y,\phi)$.
##### Q2 : Why different optimization functions for ID and OOD tasks, i.e., the difference between Eqn. (10) and Eqn. (11)
> - We first clarify that Eqn. (10) is used for making **predictions** whereas Eqn. (11) is employed for OOD **task adaptation**, thus the optimization is w.r.t. $y$ (the predicted label) in Eqn. (10) but $\zeta$ (the task-specific parameters for adaptation) in Eqn. (11).
> - Secondly, the meta-testing procedures are indeed different for ID and OOD tasks:
> - For ID tasks, we **solve Eqn. (10) directly** where we use the meta-learned amortization network parameters $\psi$ shared by all tasks;
> - For OOD tasks, as we have detailed in Line 235-238, we **first solve Eqn. (11)** for adaptation of the amortization network parameters from $\psi$ to $\psi\cup\zeta$, and **then solve Eqn. (10)** where we use the adapted task-specific parameters $\psi\cup\zeta$.
> - Third, Lines 219-223 illustrate **the reason for the above difference during meta-testing, i.e., the extra adaptation step for OOD tasks**. When given an OOD task, the meta-learned prior $\phi\sim q_\psi(\phi|\mathcal{T}^s)$ lies outside the ID meta-training task distribution and is therefore likely to lose its effectiveness. To alleviate this, we introduce and optimize the task-specific parameters $\zeta$ by minimizing the latent prior energy $E_{\lambda}(\phi)$, so that the adapted parameters $\psi\cup\zeta$ map this inadequate $\phi$ back to the ID $\phi$ region, where the model enjoys generalization guaranteed by meta-training on ID tasks.
> - Finally, Fig. 3 corroborates that such adaptation for OOD tasks indeed brings their $\phi$ back to ID regions and thereby improves the classification accuracy.
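This adaptation step can be illustrated with a minimal sketch, assuming a toy latent prior EBM and a simple additive parameterization of the task-specific parameters $\zeta$; all names, shapes, and architectural choices here are hypothetical stand-ins, not the paper's implementation:

```python
import torch

torch.manual_seed(0)
LATENT_DIM = 8

# Hypothetical stand-in for the latent prior EBM E_lambda(phi);
# the real model is learned during meta-training.
class LatentPriorEBM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(LATENT_DIM, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 1))

    def forward(self, phi):
        return self.net(phi).squeeze(-1)

def adapt_to_ood(encode, E_lambda, support, steps=100, lr=0.05):
    """Eqn.-(11)-style adaptation sketch: optimize ONLY the extra
    task-specific parameters zeta so that the adapted task
    representation phi has low latent-prior energy."""
    zeta = torch.zeros(LATENT_DIM, requires_grad=True)
    opt = torch.optim.SGD([zeta], lr=lr)
    for _ in range(steps):
        phi = encode(support) + zeta      # adapted task representation
        loss = E_lambda(phi)              # pull phi toward the ID region
        opt.zero_grad(); loss.backward(); opt.step()
    return zeta.detach()
```

Because only `zeta` is updated, the meta-trained parameters are untouched, which matches the point above that ID-task behavior is unaffected.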
##### Q3 : More experiment results for Meta-dataset.
> We will include URL [31] (which is currently the official second-best method on Meta-dataset; note that in our experiments we have already included TSA [30] which is the best) and EBML-URL as additional baselines and experimental results in Table 4.
> - For URL we use the official code which is publicly available. To implement EBML-URL for task-specific adaptation, we optimize Eqn. (11) w.r.t. to the task-specific feature projection matrix in URL.
> - We show the results of URL vs. EBML-URL, together with SimpleCNAPs vs. EBML-SimpleCNAPs, **in Table r1 in the PDF file attached to our global response**, from which we observe that EBML-URL outperforms URL on 4/5 and EBML-SimpleCNAPs outperforms SimpleCNAPs on 5/5 OOD datasets, and conclude that the proposed EBML is effective.
##### Q4 : Related works on using energy methods to solve the problem of meta-learning and few-shot learning [1, 2, 3].
> We very much appreciate the suggestion made by the reviewer to include [1,2,3] as related works. However, after some careful analysis, we think [1,2,3] are less relevant to EBML:
> - In [1], the authors employed **EBMs as an associative memory model** - a system which is able to retrieve a remembered pattern based on its distorted or incomplete version. Based on this, the author proposed to accelerate the reading/writing to the associative memory-model via meta-learning.
> - In [2], the authors proposed a federated meta-learning algorithm, whose primary objective is to learn a meta-model that can be fine-tuned to a new task both with a few number of samples in a distributed setting and **at low computation and communication energy consumption**.
> - In [3], the authors studied **unsupervised meta-learning**, and **formulated a top-down generative model where a latent EBM is used to model the latent clusters within a task**. The EBM in [3], therefore, serves a different purpose to the two novel EBMs in EBML which are used for jointly characterizing the ID meta-training task distribution.
> - In summary, although meta-learning and/or EBMs are studied in [1,2,3], **all of them bear goals very different from EBML's**. The goal of EBML is to develop a coherent meta-learning framework that enables both OOD task detection and adaptation in meta-testing.
>
> [1] Meta-Learning Deep Energy-Based Memory Models, ICLR 2020
> [2] Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent, GLOBECOM 2021
> [3] Unsupervised Meta-Learning via Latent Space Energy-based Model of Symbol Vector Coupling, NeurIPS MetaLearn Workshop 2021
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Dear authors,
I appreciate all the clarifications you made during the rebuttal period. However, I still have a question: why can the proposed EBML achieve good performance on both ID and OOD tasks at the same time?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for your reply. Please kindly find below our response on why EBML can achieve good ID and OOD performance at the same time.
- Why EBML achieves good OOD performance
- We propose **an adaptation procedure** shown in Eqn. (11), where we introduce the additional parameters $\zeta$ specific to an OOD task that adapt the OOD prior $\phi\sim q_{\psi\cup\zeta}(\phi\mid\mathcal{T}^s)$. Such adaptation, as shown in Figure 3, successfully **brings OOD priors into the ID region where meta-generalization is guaranteed due to meta-training on ID tasks**.
- We provide an analogy with classifier editing [48] in Line 222, in which the classifier is edited/adapted to accommodate an OOD image (a car with wooden tire) so that **the predicted class of the OOD image aligns with an ID one (a car with rubber tire)**.
- Why EBML maintains good ID performance
- The OOD adaptation in Eqn. (11) is **w.r.t. only the additional parameters that are specific to each OOD task**, thereby **having no effect on any ID task that still uses meta-trained parameters**.
- Still using [48] as an analogy: as shown in Figure 5, the performance of the proposed editing method on ID classes (classes other than the target class in Figure 5) is **still high, as the parameters for ID class prediction do not change at all**. | Summary: This paper proposes EBML, an energy-based meta-learning framework which models the joint P(X,Y) with two energy functions: E(X,Y,phi), which models the task-specific joint P(X_i,Y_i|phi), and E(phi), which models the task-specific latents P(phi). The motivation is twofold: 1. completeness: energy-based models can naturally distinguish between in- and out-of-distribution inputs because they can easily model the joint P(X,Y); 2. expressiveness: an energy function can provide a more flexible distribution than conditionals of known form, such as Gaussians. For negative samples, they use SGLD-based sampling, and the resultant ELBO approximation becomes tractable and efficient. They also propose a novel OOD detection metric which seems to be superior to the previous metrics. The experimental results support their claim.
Strengths: - Motivation is clear: we need to model the joint P(X,Y) in order to detect and better adapt to OOD X's. Also, it's appealing that energy function can provide more expressivity than conditionals of known forms.
- Writing is mostly clear: It's well structured and I enjoyed reading this paper
- The resultant ELBO objective in (8) makes sense, and seems efficient to train.
- Experiments are extensive and sufficient to show that EBML is competitive.
- Figure 2 provides good insights as to what the role of the two energy terms is.
Weaknesses: [Major comments]
- I wonder whether the technical contribution is sufficient. It seems to me that the proposed EBML is a replacement of the conditional P(Y|X) with E(X,Y), which feels a little bit straightforward. I'm not entirely sure whether similar papers doing similar things exist, but I found a seemingly relevant paper [1]. Could you discuss the relationship between EBML and [1]?
- In (10), I don't understand how argmin_y makes sense. The argmin completely throws away all the uncertainty information by finding only a single local optimum. Thus I fail to understand how the uncertainty plots for the sinusoidal experiments could have been drawn. Maybe argmin makes more sense for classification, but all the uncertainty information is discarded all the same. Could you clarify this point?
- It seems that this argmin operation is computationally more expensive than the usual direct modeling of P(y|x). The wall-clock time should be properly compared against the baselines.
- I don't understand why (11) should be seen as "adapting to ID". To me, the goal of (11) is to adapt $q_\psi$ to the OOD task given at the meta-test time, no?
[Minor comments]
- In (5), it would be good to explain why $q_\psi$ can be only conditioned on $\mathcal{T}^s_i$ rather than the entire $\mathcal{T}_i$, based on ABML paper (for the readers who are not familiar with the context).
- In section 4.1, it would be good to explicitly mention that the meta-parameters are collected into $\theta = \{\psi, \omega, \lambda\}$
- In my personal experience, energy functions are not always easy to train. I wonder how easy the proposed energy functions are to train; wall-clock convergence plots (compared to the usual directed models P(Y|X)) would help.
- In L179-183, I think the other methods can also use the ELBO approximation, and there will be no such limitations, no?
- In L190, maybe "only a single forward pass" rather than "only forward pass"?
[Reference]
[1] Willette et al, Meta-Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty, ICLR 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the comments above.
I'm willing to increase my score if the above major concerns (and hopefully minor comments as well) can be resolved.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors properly addressed several limitations of their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments on this paper. You may find our response below for your major and minor concerns. We would appreciate it if you could let us know if you have any further concerns.
##### Technical contribution
> - First, we respectfully point out a misunderstanding in the argument that EBML is a straightforward replacement of the conditional $P(\bf Y|X)$ with $E(\bf X,Y)$.
> - As in Line 134-135, the objective of EBML is to model $P(\bf X,Y)$ instead of $P(\bf Y|X)$, thereby **resolving the issue of incomplete OOD coverage** (c.f. Line 32-39 and Fig. 1(a)).
> - This **is not as straightforward as building $E(\bf X,Y)$**, as $(\bf X, Y)$, representing a task, is **a variable-size set of samples**. Thus,
> - different from EBMs conventionally applied on samples **in fixed dimension**, the energy-based modelling for $P(\bf X,Y)$ models **the distribution over sets**, which requires **(1)** flexibility in handling sets in varying cardinalities, and **(2)** expressiveness in capturing the set-level property of a task, including the function $p(y|x)$ and sample variance of a task.
> - the straightforward energy-based modelling strategies that meet the above first requirement unfortunately fail to meet the second, thereby being inadequate. E.g., building an EBM on all support samples of all tasks cannot even describe the function of each task; an EBM over the task representation via simple average of all samples in a task disregards sample variance. We show these simple alternatives significantly underperform EBML empirically in Table r2 (in global response PDF).
> - EBML, directly derived from the task distribution $P(\bf X,Y)$ (cf. Eqn. (5)), resorts to **two novel EBMs** to fulfill the above two requirements, respectively: **(1)** the latent prior EBM is an energy function of the **prior** that functionally characterizes a task in the **fixed latent representation space**, correlated with **the functional distance of a task from the ID distribution**; **(2)** the task-specific data EBM is an energy function of **the observed support set conditioned on its prior**, correlated with **the variance of samples in a task** (see Fig. 2 left).
> - Second, our technical contributions include more than modelling $P(\bf X,Y)$ with energy-based models only. Others include:
> - the **Energy Sum** as a principled (proportional to the likelihood of a task (c.f. Appendix A.2)) and effective OOD task detection score for EBML,
> - the **OOD task adaptation strategy in Eqn. (11)** to improve meta-generalization for EBML which makes it a coherent framework.
> - Third, to the best of our knowledge, EBML is the first **coherent probabilistic model** that allows both detection and adaptation of OOD tasks, and a **general framework** compatible with off-the-shelf meta-learning backbones. Still, we sincerely thank the reviewer for pointing out the related work [1] and will definitely discuss the key differences between our work and [1] in the final version of the manuscript. **Please kindly refer to our global rebuttal for a detailed analysis of the differences between EBML and [1].**
>
> [1] Meta Learning Low Rank Covariance Factors for Energy Based Deterministic Uncertainty, ICLR 2022.
##### Explanation of $\text{argmin}_y$ in Eqn. (10) and the uncertainty plots for sinusoids
> Given (1) the support set $\mathcal{T}^s$ and (2) a query input $\mathbf{x}^q_j$ of a meta-testing task, our probabilistic model $\mathbb{E}\_{\phi\sim q_\psi(\phi|\mathcal{T}^s)}[E_{\omega}(y;\mathbf{x}\_{j}^q,\phi)] \propto -\log p(y|\mathcal{T}^s,\mathbf{x}\_{j}^q)$ allows for evaluation **at any value of $y$**. Therefore,
> - The $\text{argmin}\_{y}$ is for finding **the most likely** prediction which is **the single desired output** in practice. This is also a common practice in making predictions by EBMs [r1].
> - The $\text{argmin}\_{y}$ operation **does not alter the probabilistic nature** of our energy-based models, which still suffice to estimate the uncertainty via $\mathbb{E}\_{\phi \sim q_\psi(\phi | \mathcal{T}^s)}[E_{\omega}(\mathbf{x}\_{j}^q, y, \phi)]$ by specifying $y$ together with $\mathcal{T}^s$ and $\mathbf{x}^q_j$. Note that the way an EBM offers uncertainty, **requiring specification and input of a spectrum of $y$ values**, differs from conventional probabilistic models (e.g., with Gaussian priors) that take only $\mathbf{x}$ as input and output the variance as uncertainty.
> - That said, to draw the predictive distribution for a sinusoid task in Figs. 1 and 4, we (1) infer $\phi\sim q_\psi(\phi|\mathcal{T}^s)$, and (2) evaluate $E_{\omega}(\mathbf{x},y,\phi)$ over a **2D grid of $x,y$**. Similar visualizations are also present in the literature (cf. Fig. 2 in [r1]).
> [r1] Energy-Based Models for Deep Probabilistic Regression, ECCV, 2020.
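The 2D-grid evaluation can be sketched as follows, using a toy hand-written energy with a low-energy valley along $y=\sin(x)$ in place of the learned $E_\omega$; the function and grid sizes are illustrative only:

```python
import numpy as np

# Toy stand-in energy with low energy near y = sin(x); a learned
# EBM E_omega(x, y, phi) would be evaluated on the grid the same way.
def energy(x, y):
    return (y - np.sin(x)) ** 2

xs = np.linspace(-5, 5, 200)          # query inputs
ys = np.linspace(-2, 2, 401)          # candidate outputs
X, Y = np.meshgrid(xs, ys)            # 2D grid of (x, y) pairs
E = energy(X, Y)
P = np.exp(-E)                        # unnormalized density p(y|x) ∝ exp(-E)
P /= P.sum(axis=0, keepdims=True)     # normalize per x-column for plotting
```

Each column of `P` is then a discretized predictive distribution over $y$ for one query $x$, which is what heat-map-style uncertainty plots visualize.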
##### In L179-183, I think the other methods can also use the ELBO approximation, and there will be no such limitations, no?
> - We clarify that Line 179-183 address the limitations in using naïve Monte Carlo approximation for the intractable log-likelihood of a task as a density-based OOD score. These limitations motivate us to use the ELBO approximation which leads to the proposed Energy Sum.
> - We have investigated the possibility of using the ELBO approximation as the OOD score in a similar fashion to Energy Sum for ABML and CNPs in our ablation studies in Table 3, i.e., "SNLL + Gauss Prior".
> - Our proposed Energy Sum outperforms these alternatives which illustrates the advantages of the proposed EBML in overcoming the issue of **limited expressiveness** of traditional probabilistic methods.
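The Energy Sum score itself admits a compact sketch; the encoder and both energy functions below are hypothetical stand-ins for the meta-learned components, shown only to make the scoring recipe concrete:

```python
import torch

# Toy illustration of an "energy sum"-style OOD score; all model pieces
# (encoder, E_lambda, E_omega) are hypothetical stand-ins, not the
# paper's learned networks.
def energy_sum_score(E_lambda, E_omega, encoder, support_x, support_y):
    """Higher total energy ~ lower task likelihood ~ more likely OOD."""
    phi = encoder(support_x, support_y)                     # task representation
    prior_energy = E_lambda(phi)                            # latent prior EBM term
    data_energy = E_omega(support_x, support_y, phi).sum()  # per-sample data terms
    return (prior_energy + data_energy).item()
```

Thresholding this scalar per meta-testing task then yields an OOD detector, analogous to how the score is compared against baselines in Tables 1-3.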
##### Due to the space limitation, **please kindly refer to our global rebuttal for responses to the reviewer's comments** on (1) why (11) should be seen as "adapting to ID" given that its goal is to adapt $q_\psi$ to the OOD task, (2) wall-clock convergence plots, and (3) other minor comments.
---
Rebuttal Comment 1.1:
Title: Thanks for clarification
Comment: Dear authors,
I appreciate all the clarifications you made during the rebuttal period. I'm satisfied with the response and hence raise my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for the positive feedback and increasing the score! We will follow your suggestions in the final version.
Best, Authors. | Summary: The paper deals with the problem of detecting and adapting to out-of-distribution (OOD) tasks in meta-learning algorithms. Recent solutions adapt Bayesian meta-learning methods, which have certain limitations: incomplete coverage of OOD tasks, and reliance on known probability distributions that may not express the complex probabilistic structure of the meta-training task distribution. Moreover, cross-domain meta-learning algorithms may adapt to OOD tasks but are not good detectors of them. Given these limitations, the authors propose Energy-Based Meta-Learning (EBML), a coherent framework that covers both OOD task detection and adaptation of OOD tasks during meta-testing. The approach is agnostic to the meta-learning backbone and can be combined with any approach to make it generalizable to OOD tasks. Experiments on regression and classification datasets show the efficacy of the approach over Bayesian meta-learning methods and cross-domain meta-learning approaches for OOD task detection and adaptation, respectively.
Strengths: - The paper proposes a coherent energy based meta-learning framework that covers both OOD tasks detection and adaptation to alleviate the limitations of Bayesian meta-learning algorithms and cross-domain meta-learning algorithms
- Proposes a task-specific data EBM and a latent prior EBM, building on [1]
[1] Learning Latent Space Energy-Based Prior Model, NeurIPS 2022
- Theoretical derivation of EBML meta-learning objective with both task specific $E_{\omega}$ and latent prior $E_{\lambda}$ EBM. The author leverages both these EBMs $E_{\omega}$ and $E_{\lambda}$ to define energy score for OOD tasks detection.
- Extensive experimental and ablation study including EBM prior v/s Gaussian for energy sum to achieve better OOD detection, more baselines for OOD adaptation, reliability of current cross-domain meta-learning algorithms under insufficient support set datapoints.
- EBML outperforms Bayesian meta-learning approaches for OOD task detection with an improvement of up to 7% on AUROC. Compared to cross-domain meta-learning algorithms for OOD task adaptation on Meta-dataset, it performs on par with the SOTA approaches.
Weaknesses: - Can the authors please shed light on the computational complexity of the method? It looks like a good trade-off for synthetic regression tasks, but on the real Meta-dataset (classification tasks), it is more expensive than SimpleCNAPs. Given that, which approach would have the better trade-off between performance and training time?
- The approach performs better on datasets like MNIST in Meta-dataset, but for other datasets like VGG Flower, Oxford, and MSCOCO, the results are on par with TSA. Could the authors provide the performance of the approach when plugging it into other meta-learning algorithms for classification tasks?
- Does the model handle the catastrophic forgetting about the in-distribution meta-training tasks?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see questions in the weakness section
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback and comments to improve our paper. We detail our response below point by point. Please kindly let us know if our response addresses the issues you raised in this paper.
##### Additional results for EBML on meta-datasets.
> We will include URL [31] (which is currently the official second-best method on Meta-dataset; note that in our experiments we have already included TSA [30] which is the best) and EBML-URL as additional baselines and experimental results in Table 4.
> - For URL we use the official code, which is publicly available. To implement EBML-URL for task-specific adaptation, we optimize Eqn. (11) w.r.t. the task-specific feature projection matrix in URL.
> - We show the results of URL vs. EBML-URL, together with SimpleCNAPs vs. EBML-SimpleCNAPs, **in Table r1 in the PDF file attached to our global response to reviewers**, from which we observe that EBML-URL outperforms URL on 4/5 OOD datasets and EBML-SimpleCNAPs outperforms SimpleCNAPs on 5/5, confirming the efficacy of the proposed EBML.
##### On performance vs computation complexity trade-off of EBML-SimpleCNAPs for meta-dataset (classification task).
> - Based on the OOD task detection results in Table 2 and the additional OOD task adaptation results in Table r1 in our global response to reviewers, we believe that EBML-SimpleCNAPs also shows a reasonable performance vs. complexity trade-off on Meta-dataset:
> - In Table 2, EBML-SimpleCNAPs with Energy Sum improves the OOD detection performance of SimpleCNAPs with conventional OOD detection methods by at least 7% in AUROC, 5% in AUPR, and **a significant 36% reduction in FPR95**.
> - In Table r1, we optimize Eqn. (11) to adapt the task-encoder hence the task-specific FiLM parameters in SimpleCNAPs for OOD meta-testing tasks in Meta-dataset. The resultant **EBML-SimpleCNAPs outperforms SimpleCNAPs on all 5/5 OOD datasets**.
> - Given the noticeable gain in performance over the baseline method, we think that the computational overhead of EBML-SimpleCNAPs reported in Appendix C.4 is reasonable.
##### On catastrophic forgetting on the in-distribution meta-training tasks.
>- We think that catastrophic forgetting of the ID meta-training tasks may be less of a concern in the standard meta-training and meta-testing settings, for the reasons below.
> - However, we are not sure whether we have correctly interpreted the reviewer's view on "catastrophic forgetting of the ID meta-training tasks" when answering this question. We would therefore appreciate it if the reviewer could elaborate on this question in case our answer is not sufficient.
>
> - Given a meta-testing task, meta-learning algorithms always perform task-specific adaptation (i.e., the inner-loop optimization w.r.t. the model) on the support set before making predictions on the query samples.
> - The task-specific model is always reset back to the original state i.e., the meta-learned model on IID meta-training tasks, at the start of meta-testing on each new test task. Thus the adaptation and testing for different **meta-testing tasks are treated as independent episodes**, i.e., the task-specific model for one task will not be carried on for adaptation and testing for another different task.
>- Since the **meta-learned model initialization for meta-testing is the optimal model returned from meta-training on ID tasks**, there should be no forgetting of the ID meta-training tasks when testing on them.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for their detailed rebuttal.
* By looking at the new results on Meta-dataset obtained by plugging the approach into URL [31], it looks like the performance gain in terms of mean accuracy is marginal, and given the confidence interval, it could become negligible.
* The authors showed the improvement in OOD task detection in the few-shot regression setting and, based on those results, argued that EBML-SimpleCNAPs also shows a reasonable performance vs. complexity trade-off. However, given the marginal improvement in OOD task adaptation (classification), the above statement might not hold. Does it mean that the approach is more suitable for OOD task adaptation?
* I appreciate the clarifications on the catastrophic forgetting. I'm happy with the response.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for raising your remaining concern. Please find our response below, and kindly let us know if it satisfactorily addresses your concern.
- First, as stated in the Introduction, our approach aims to secure meta-generalization **within a coherent framework** by taking a **two-step** approach, i.e., (1) detection of OOD tasks and (2) adaptation of OOD tasks. Therefore,
- The evaluation of EBML, along with the baselines, grounds on the **combined performance** of both detection and adaptation aspects, instead of considering each separately.
- The performance improvement of EBML-SimpleCNAPs over SimpleCNAPs is significant, **combining** (1) **36% reduction in FPR95** for OOD detection and (2) **average 1% improvement in classification accuracy** for OOD adaptation.
- Second, even when considering the OOD adaptation performance alone, we would like to justify that **the improvement is not trivial**.
- In Table 8 of [28],
- the average performance improvement of the SOTA method TSA over the best baseline URL on unseen tasks (i.e., OOD tasks in our work) is also **1%**;
> - the performance improvement of URT, the third best method, over SimpleCNAPs on OOD tasks is even negative, at **-0.2%**.
- EBML **consistently outperforms** all the baselines, with **an average improvement of 1%**.
- EBML-TSA **achieves the new SOTA**, via further improving TSA by **1%** in both versions of Meta-datasets (see Table 4 in the main text and Table 10 in Appendix C.2).
> - EBML-URL and EBML-SimpleCNAPs, with average performances of **63%** and **59.06%** (improving URL and SimpleCNAPs by 0.4% and 1.4% on average, respectively), rank as the second and third best methods (see "Average Unseen" in Table 8 of [28]). | Rebuttal 1:
Rebuttal: ### Global Response to Reviewers
##### Why (11) should be seen as "adapting to ID"; the goal of (11) is to adapt $q_\psi$ to the OOD task
> As we explain below, our original description of Eqn. (11) in Line 221, i.e., "adapts this inadequate meta-learned prior back to the ID region", **does not conflict with** reviewer 92kR's interpretation that Eqn. (11) is "adapting $q_\psi$ to the OOD task". Nevertheless, we thank the reviewer for pointing this out, and we will make sure to add more explanation on this in Section 4.3 to avoid confusion in the final manuscript.
> - As shown in Eqn. (11), the **optimization procedure** is indeed with respect to the mapping $\zeta$ that adapts the parameters $\psi$ of $q_\psi$ to $\psi\cup\zeta$, so that $q_{\psi\cup\zeta}$ accommodates the OOD task.
> - The **optimization result**, however, is that the resulting OOD prior $\phi\sim q_{\psi\cup\zeta}(\phi|\mathcal{T}^s)$ approaches the ID region where the meta-generalization is guaranteed due to meta-training on ID tasks. We illustrate the adaptation process in Fig. 3, where the OOD priors $\phi$ indeed shift towards the ID region as optimization proceeds.
> - We provide an analogy with classifier editing [48] in Line 222, in which the classifier is edited/adapted to accommodate an OOD image (a car with wooden tire) so that the predicted class of the OOD image aligns with an ID one (a car with rubber tire).
##### We sincerely thank reviewer 92KR for pointing out the related work of [1] and will definitely include the following key differences between ours and [1] in the final version of the manuscript.
> - **Problem definition and objective:** EBML aims to detect a meta-testing task that is OOD of the meta-training tasks; however, [1] focuses on detecting a query sample that is OOD of the support samples in a meta-testing task.
> - **Method:** EBML falls into the category of **density-based** OOD detection methods, so that it **explicitly meta-learns the distribution of meta-training tasks via the two proposed EBMs** and develops the **Energy Sum to flag those high-energy tasks as OOD tasks**; however, [1] first meta-learns class covariance matrices as a parameterised function to alleviate the collapse of the covariance matrix due to limited samples in a few-shot meta-testing task and secondly resorts to **post-hoc** OOD detection via **energy scaling**. Note that "energy scaling" here refers to temperature scaling in softmax output, **without learning any EBM**.
> - **Coherence and Generality**: EBML also improves meta-generalization and works well in both regression and classification; however, [1] builds only on generative classifier-based meta-learning approaches, solves only classification tasks, and underperforms some very basic meta-learning algorithms, e.g., MAML and ProtoNet, in classification accuracy (cf. Tables 2, 3, 4, 5 in the Appendix of [1]).
>
> | Method | ID | OOD to evaluate | OOD detection method | EBM explicitly learned | Coherency | Generality |
> | ------ | -------------------------------------- | ------------------- |:-------------------- | ---------------------- | ---------------------------------------------------------- | ---------------------------------------------------------- |
> | EBML | meta-training tasks | a meta-testing task | Density-based | Yes | both OOD detection and adaptation | all meta-learning backbones, regression and classification |
> | [1] | support samples in a meta-testing task | a query sample | Post-hoc scaling | No | uncertainty calibration with a sacrifice in generalization | generative-classifier based backbones, classification |
>
> [1] Meta Learning Low Rank Covariance Factors for Energy Based Deterministic Uncertainty, ICLR 2022.
##### Wall-clock time analysis and convergence plots for EBML, and additional results for OOD tasks adaptation for Meta-datasets.
> - Please kindly refer to the additional plots and tables in the PDF file attached to this global response.
##### Other minor comments related to writing
> - We sincerely thank the reviewers for their helpful suggestions, which will further improve the clarity of the paper. We will make sure to follow these comments when editing the final version of the manuscript.
Pdf: /pdf/5562c0ec3515b4f9bcb965b1dfcb1a2ffeea160b.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper addresses the problem of meta-learning on out-of-distribution (OOD) tasks and proposes a solution to improve the generalization capability of meta-learned prior knowledge in safety-critical applications. The paper introduces an energy-based coherent probabilistic model that enables both detection and adaptation of OOD tasks. The proposed Energy-Based Meta-Learning (EBML) framework outperforms state-of-the-art Bayesian meta-learning methods in OOD task detection and cross-domain meta-learning approaches in OOD task adaptation.
Strengths: 1. **Coherence and Generality**: The EBML framework offers a coherent model that supports both the detection and adaptation of OOD tasks, providing a general solution against OOD tasks for any off-the-shelf meta-learning approach.
2. **Experimental Support:** The paper showcases empirical validation of the EBML framework, demonstrating its superior performance over other methods on multiple regression and classification datasets.
Weaknesses: While the paper presents an innovative application of energy-based models for out-of-distribution (OOD) task generalization and detection in meta-learning, it could benefit from a more detailed exploration of its unique contributions. Energy-based models are widely used in areas such as OOD generalization and detection. Therefore, distinguishing the specific merits of the Energy-Based Meta-Learning (EBML) framework would help elucidate its contribution to the field. The authors could potentially solidify their work's significance by providing an in-depth comparison with other techniques, or illustrating how their approach uniquely addresses challenges inherent to OOD tasks in meta-learning. This could illuminate the distinct importance of their work in this vibrant field.
[1] Energy-based Out-of-distribution Detection, NeurIPS
[2] Active Learning for Domain Adaptation: An Energy-based Approach, AAAI
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments to improve our paper. We detail our response below point by point. Please kindly let us know if our response addresses the questions you had for this paper.
##### On how EBML uniquely addresses the challenges inherent to OOD tasks in meta-learning.
> - In conventional sample-level OOD detection, measuring the OODness boils down to **(1)** modelling of the ID distribution and **(2)** evaluation of the likelihood that the to-detect sample belongs to the ID distribution. EBM has shown its superiority in **(1)** flexibly modelling the ID distribution and **(2)** deducing the energy score as a principled but simple OOD measure.
>
> - Disparate from OOD detection of samples, which are of fixed dimension, detection of an OOD task **as a variable-sized set of samples** poses the following **two inherent challenges**.
>
> - **(1)** Modelling the ID distribution over sets requires both flexibility in handling sets of varying cardinalities and expressiveness in capturing the set-level properties of a set. Thus, a direct application of EBM via naive aggregation of samples within a task, which ignores the sample variance and fails to fully meet the second requirement, is inadequate.
>
> - **(2)** The likelihood of a task belonging to the ID task distribution involves a comparison between a set and a set distribution, and reduces to calculating a set distance, which is widely accepted to be more complicated than a sample distance.
>
> - To address the above two challenges, EBML uniquely
>
> - **(1)** derives two novel EBMs to jointly model the ID task distribution, including a latent prior EBM and a task-specific data EBM.
>
> - Latent prior EBM is an energy function of the prior, which describes a task in the **fixed latent representation space regardless of the cardinality** and correlates with **the distance of a task from the ID distribution** (see Fig. 2 right).
>
> - Task-specific data EBM is an energy function of the observed support set conditioned on its prior, correlated with **the variance of samples in a task** (see Fig. 2 left).
>
> - **(2)** introduces the Energy Sum as an effective OOD task detection score, which
>
> - has been proved exactly **proportional to the likelihood of a task** (see Appendix A.2),
>
> - evaluates the above two aspects, i.e., distance and variance, and
>
> - enjoys empirical **superiority over the scores for OOD sample detection** (adapted by averaging scores of the samples in a task, see Table 2).
>
> - Furthermore, EBML is more than just an OOD task detector in meta-learning; it improves generalization to OOD tasks in meta-learning via the proposed adaptation strategy in Eqn. (11).
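As a toy illustration of the two aspects the Energy Sum is described as evaluating above (the distance of a task from the ID distribution, and the variance of samples within the task), the following sketch scores a task, i.e., a set of samples, with simple quadratic stand-in energies. These are hypothetical caricatures, not the EBMs actually learned by EBML:

```python
import numpy as np

def energy_sum(task, id_center):
    """Toy Energy-Sum-style OOD score for a task (a set of samples).

    - "prior" energy: squared distance of the task's mean embedding
      from the centre of the ID task distribution;
    - "data" energy: total variance of the samples within the task.
    Higher score = more likely OOD. Illustrative only; NOT the
    learned latent-prior and task-specific data EBMs of the paper.
    """
    task = np.asarray(task, dtype=float)
    prior_energy = float(np.sum((task.mean(axis=0) - id_center) ** 2))
    data_energy = float(task.var(axis=0).sum())
    return prior_energy + data_energy

rng = np.random.default_rng(0)
id_center = np.zeros(4)
id_task = rng.normal(loc=0.0, scale=0.5, size=(20, 4))   # near ID centre, low variance
ood_task = rng.normal(loc=3.0, scale=2.0, size=(20, 4))  # shifted and noisier

assert energy_sum(id_task, id_center) < energy_sum(ood_task, id_center)
```

A task would then be flagged as OOD when its score exceeds a threshold calibrated on ID meta-training tasks.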
##### Comparison between EBML and [1,2]
> - Both [1] (i.e., [35] in our main text) and [2] investigate the application of EBMs in **either** OOD sample detection **or** active domain adaptation, so that they do not face the aforementioned two challenges **inherent to OOD tasks in meta-learning**.
>
> - We will definitely follow the reviewer's great suggestion to improve the paragraph of "EBMs for OOD Detection" in Section 2, by
>
> - detailing the abovementioned unique contributions of EBML in detecting OOD tasks,
>
> - including [2] which first queries unlabeled target samples by the difference between the minimum and the second minimum energies and then aligns the free energies of selected target samples with those of source ones. Thus, the EBM in [2] still models the distribution over **samples** for sampling, instead of over **sets** in our work.
>
> [1] Energy-based Out-of-distribution Detection, NeurIPS 2020
> [2] Active Learning for Domain Adaptation: An Energy-based Approach, AAAI 2022 | null | null | null | null | null | null |
Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning | Accept (poster) | Summary: This paper presents a groundbreaking Disentangled Counterfactual Learning (DCL) approach for physical audiovisual commonsense reasoning. The main objective of the proposed method is to infer objects' physics commonsense based on both video and audio inputs, effectively mimicking human reasoning abilities. To address the limitations of existing methods in utilizing the diverse characteristics of multimodal data and lacking causal reasoning abilities, the authors introduce the DCL method. Contributions: 1. introducing a novel DCL approach that leverages disentanglement and causal reasoning to improve multimodal data utilization and achieve remarkable performance. 2. propose a novel Counterfactual Learning Module to model physical knowledge relationships. 3. the proposed DCL method is designed as a plug-and-play module, making it adaptable for integration into various baseline models.
Strengths: The paper introduces an innovative framework for tackling the challenging task of physical commonsense reasoning. The authors meticulously design a disentangled sequential encoder and a counterfactual learning module, both of which contribute to the success of the proposed model in addressing this unique visual question answering (VQA) problem. Importantly, the model's modular nature allows for seamless integration with various baseline approaches, thereby enhancing its versatility.
The experimental evaluation conducted in this study is reasonably thorough, demonstrating the effectiveness of the proposed method. The results showcase a significant improvement in accuracy on the PACS dataset, validating the model's ability to handle complex reasoning tasks. Moreover, the authors present qualitative results that vividly illustrate how their approach enhances material reasoning performance, further reinforcing the practical relevance and value of their work.
In terms of the paper's presentation, the English writing is commendable for its clarity and accessibility. The authors effectively convey their ideas, making it easy for readers to grasp the core concepts and understand the technical details without unnecessary complexity.
Weaknesses: (1) The main experiment is limited to the PACS dataset and a specific physical audiovisual commonsense reasoning task. While the proposed method is presented as a plug-and-play module, it would be valuable to explore its generalizability to other VQA tasks and unseen data distributions. Considering the versatility of baseline models like CLIP, it would be worthwhile to investigate whether this module can be effectively applied in diverse scenarios.
(2) The ablation analysis is not fully convincing. As a plug-and-play module, it is crucial to clarify that the observed improvement is attributed to the unique design of the proposed method rather than an increase in the number of parameters. To strengthen the argument, additional quantitative experiments could be conducted, such as replacing the Disentangled Sequential Encoder (DSE) with a trivial naive module and comparing the results.
(3) The tables presenting the results of the quantitative experiments are not well displayed. To enhance clarity and readability, improvements could be made in the formatting and organization of the tables. Additionally, the t-SNE visualization in Figure 3 could benefit from displaying more distinct clusters and additional samples. The current arrangement of Figure 3 lacks persuasiveness, as the clusters on the 2D space are not significantly distinguishable from each other.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: You can conduct more quantitative experiments according to the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been discussed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer PAiU
Thank you for your time and valuable comments, below are our responses to your concerns:
## Q1: Applications in diverse scenarios.
This is a question worthy of exploration. Our module is equally applicable to other scenarios. Below are our results on the Visual Commonsense Reasoning (VCR) dataset [1].
| Model | Q→A | QA→R | Q→AR |
|-----------|:------:|:------:|:------:|
| LateFusion | 40.2 | 35.6 | 10.1 |
| LateFusion w/DCL | 45.7 | 40.5 | 20.1 |
| CLIP | 46.5 | 37.4 | 15.3 |
| CLIP w/ DCL | 50.0 | 48.5 | 22.5 |
According to the results, our method has led to improvements for both LateFusion and CLIP across multiple performance metrics, showing the versatility of our proposed method.
## Q2: Replacing DCL with a trivial naive module.
Thank you for your suggestion. We conducted the suggested experiments by replacing DCL with a four-layer MLP. The results are as follows:
| Model | PACS | PACS-Material |
|--------------------|:----------:|:------------------:|
| AudioCLIP | 60.0±0.9 | 75.9±1.1 |
| AudioCLIP w/ MLP | 59.9±0.6 | 73.8±0.6 |
| AudioCLIP w/DCL | 63.2±0.8 | 76.2±1.4 |
'AudioCLIP w/ MLP' is the result of replacing DCL with MLPs on top of AudioCLIP. The results clearly indicate that, even with a comparable number of parameters, 'AudioCLIP w/ MLP' remains unable to match the performance of 'AudioCLIP w/ DCL', affirming the effectiveness of our designed DCL module.
## Q3: Improvements of Table.1 and a new Fig.3.
Thank you for your suggestion. We will improve the tables in our paper for clarity and readability. To address your concerns about Fig. 3, we will replace it with the new t-SNE plot in Figure 1 of the rebuttal materials.
[1] From Recognition to Cognition: Visual Commonsense Reasoning, CVPR 2019
---
Rebuttal 2:
Comment: Thanks again for your insightful suggestions and comments. Please let us know if our responses have addressed the issues raised in your reviews. We hope that our clarifications and additional results address your concerns and convince you of the merits of our work. We are happy to provide any additional clarifications or experiments that you may need.
Thank you for your time again!
Best, Authors | Summary: The work proposes an approach to separate object and action information from videos to improve the model's reasoning capabilities. The proxy task of audio-visual question answering is used to teach the model commonsense concepts of the physical world. Their contribution focuses on learning from introduced counterfactual examples.
They aim to maximise mutual information between input data and static and dynamic scene factors while minimizing the mutual information between those two factors.
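As a heavily simplified caricature of the second objective (discouraging shared information between the static and dynamic factors), one cheap proxy is to penalize the empirical cross-correlation between the two factor batches. This sketch is hypothetical and is not the mutual-information formulation used in the paper:

```python
import numpy as np

def decorrelation_penalty(static_z, dynamic_z):
    """Toy stand-in for 'minimize MI between static and dynamic factors'.

    Uses the squared Frobenius norm of the cross-correlation matrix
    between the two (standardized) factor batches as a cheap proxy for
    statistical dependence. Near zero when the factors are empirically
    uncorrelated; NOT an actual mutual-information estimator.
    """
    s = (static_z - static_z.mean(0)) / (static_z.std(0) + 1e-8)
    d = (dynamic_z - dynamic_z.mean(0)) / (dynamic_z.std(0) + 1e-8)
    cross_corr = s.T @ d / len(s)          # (dim_s, dim_d) correlation matrix
    return float(np.sum(cross_corr ** 2))

rng = np.random.default_rng(0)
independent = decorrelation_penalty(rng.normal(size=(512, 8)),
                                    rng.normal(size=(512, 8)))
shared = rng.normal(size=(512, 8))
entangled = decorrelation_penalty(shared, shared + 0.1 * rng.normal(size=(512, 8)))
assert independent < entangled  # entangled factors pay a larger penalty
```

In a full model such a penalty would be added to reconstruction or contrastive terms that keep each factor informative about the input.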
Strengths: The paper presents not only a good intuition but also a detailed mathematical definition of all concepts. Both can be followed well. A fair number of relevant and current baseline methods were used for comparison. The approach seems to be lightweight enough to be trained on a single GPU within reasonable time.
The ablation studies seem adequate and useful to understand the success of the disentanglement and the performance boost over baselines.
Weaknesses: Compared with datasets in other fields, the dataset used seems just about big enough to test the approach; however, a test set of 152 objects in the PACS-Material dataset could potentially lack robustness. A different selection of 152 objects might change the results by a large margin. However, in this specific domain it is probably hard to generate larger datasets, and comparable work uses the same datasets. K-fold cross-validation could help here to evaluate the results better.
There is a clear trend in Table 1 that the method performs well; however, given the error bars, the results could be more impressive. Again, K-fold cross-validation with different splits could help.
The paper mentions a few times that the method can be used as a plugin to improve performance on multimodal fusion tasks; however, since the Q&A setup is an integral part, I don't see how this can generalize to, e.g., benefit tracking. Given this claim, it would be nice if the authors could illustrate examples of potential tasks that could be improved with this method.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In Figure 1, what is the referenced object? There are screws and a glass. What if there are several objects in the video but the question is ambiguous because it refers to only one object?
Extracting audio as a non-sequence seems odd. How long is the audio instance? What about objects that change their sound characteristically? E.g., think about a laundry machine and a tumble dryer. Both may make sounds which are similar over a longer time, but then the laundry machine starts spinning and you can tell them apart. With such sounds it is also hard to choose which part of the sound to encode. How is the length chosen? Does it just cover exactly all the video frames?
How does the approach deal with different lengths of videos? Is this just not regarded?
It is hard to find information on how long the video or audio sequence is in general. In the supplementary material, details are given for the implementation of LateFusion, where it can be found that 8 evenly spaced frames are used. But does that mean that every single frame is used, or are there skipped frames? And what is the framerate?
In line 175 it should say standard and not stand.
The object in Figure 2 b), left side, is hardly recognizable. It says the material is wood, but is this a woodchip or a fake wood coin? It is very hard to see. And what kind of sound does this make? Are those woodchips dropped on a hard surface? It would be good either to choose a more illustrative object from another image or to state somewhere what the reader should actually see.
In the "Analysis of dynamic factor" it is said two times, audio information is often the main basis for human reasoning or dynamic information is often the main basis for human reasoning. I would just leave out those broad claims which can not be answered by training a network. It reads to me like a very general remark without any citation or investigation to back it up and should probably left to the field of HCI.
In Table 1, how was the error calculated? Were different training runs performed with different random seeds?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The small test set could probably be mentioned or defended somewhere for readers who are not deep inside the domain. The broader impact section feels like it was included just as an exercise and could be removed, imo. The argument "if our algorithm is implemented on a robot and malfunctions, this could be bad" is too general to be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer FCQF
Thank you for your time and valuable comments, below is our response to your review:
## Q1: K-fold cross validation.
Thank you for your suggestion. We conducted the experiments on the PACS-Material subset using the K-fold cross-validation approach you mentioned. The results are as follows:
| Model | PACS Material |
|----------------------|:--------------:|
| Late Fusion w/ DCL | 66.2±0.6 |
| CLIP w/ DCL | 74.9±0.8 |
| AudioCLIP w/ DCL | 75.4±1.0 |
| UNITER w/ DCL | 75.2±0.7 |
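For readers outside the area, the K-fold protocol behind numbers like these (train on k-1 folds, test on the held-out fold, report mean and deviation across folds) can be sketched generically. The code below is an illustration under our own naming, not the authors' implementation:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k disjoint, shuffled folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_validate(score_fn, n_samples, k=5):
    """Hold out each fold once for testing; return mean and std of scores.

    `score_fn(train_idx, test_idx)` is a stand-in for training a model
    on the train split and returning its accuracy on the test split.
    """
    folds = kfold_indices(n_samples, k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(score_fn(train_idx, test_idx))
    return float(np.mean(accs)), float(np.std(accs))

# Dummy evaluation: "accuracy" is just the fraction of even test indices.
mean_acc, std_acc = cross_validate(
    lambda tr, te: np.mean(te % 2 == 0), n_samples=100, k=5)
print(f"{mean_acc:.3f} +/- {std_acc:.3f}")
```

Every sample is tested exactly once across the k folds, which makes the reported mean less dependent on one particular split of 152 test objects.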
## Q2: Potential tasks that could be improved with our method.
Our proposed method can be extended to other multi-modal scenarios, such as decision-making in the context of autonomous driving. By incorporating our proposed module, the reasoning capability of the decision model can be enhanced. Similarly, in the behavioral reasoning process of intelligent robots, our approach can highlight the role of certain features in the decision-making process, thereby improving interpretability.
## Q3: Questions about our paper.
Regarding your questions, we will address them individually:
1. In general, we consider objects that interact with human hands in the video and produce sound to be the referenced objects. In Fig.1, since the glass is held by a person and produces sound when struck, we consider glass to be the referenced object.
2. In the PACS dataset, the average audio length is 7.6 seconds. Since both the objects in the videos and the audio in PACS are relatively simple, we encoded the entire audio corresponding to each video in our experiments. Considering the temporal pattern of audio data, for cases involving changes in the audio, we can treat audio as sequential data in the future. Great suggestion!
3. We uniformly downsampled videos of varying lengths to 252 frames using corresponding proportions.
4. We used videos at 30 FPS. From the downsampled 252 frames, we uniformly sampled 8 frames while skipping the others. We then utilized ViT to extract features from each frame and used the averaged features as the video representation.
5. We appreciate your pointing out our errors. We will change "stand" to "standard" on line 175.
6. Thank you for pointing out the issue. We appreciate your suggestion and have adopted it by replacing Fig2 (b) with a new example. The new example is provided in rebuttal material Figure 4.
7. Indeed, as you mentioned, we have identified two relevant works in the field of HCI that support the importance of dynamic information [1,2].
8. As you mentioned, the errors in Table 1 are calculated by training with different random seeds and computing the deviation of the results across runs.
## Q4: Limitation about our method.
We appreciate your valuable suggestion, and we will make modifications to the expression of this part in the revised version.
[1] Enabling Voice-Accompanying Hand-to-Face Gesture Recognition with Cross-Device Sensing. CHI2023
[2] FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision. CHI2021
---
Rebuttal Comment 1.1:
Comment: I read the response and the weaknesses other reviewers pointed out. I think the authors responded very well to all points. Including audio is still a bit outside the mainstream and hard to make work. Even if the method does not improve over the state of the art by a large margin, the evaluation with additional parameters seems to show there is an actual effect here. I think this work deserves to be accepted and discussed in the larger research community because the method is novel, interesting, and potentially foundational to other work that may yield more impressive results.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer FCQF
Comment: We are grateful to you for not only acknowledging our answer, but also for finding our updates satisfactory. We strongly believe that those suggested changes have made our submission stronger. | Summary: This paper proposes a novel Disentangled Counterfactual Learning (DCL) method for physical audiovisual commonsense reasoning. DCL consists of two main modules: a Disentangled Sequential Encoder and a Counterfactual Learning Module (CLM). The Disentangled Sequential Encoder decouples videos into static (time-invariant) and dynamic (time-varying) factors in the latent space. The causal learning module augments the model’s reasoning ability by modeling physical knowledge relationships among different objects under counterfactual intervention. The experiments show that DCL can be flexibly integrated into baseline methods and improve their performance.
Strengths: + The designed DCL is a plug-and-play module that can be incorporated into baseline methods.
+ The proposed method reaches SOTA performance. The authors also conduct ablation studies to illustrate the effectiveness of each component in their method.
Weaknesses: - This paper is poorly written. Some expressions are inconsistent, e.g., "causal learning module" in the abstract versus "counterfactual learning module" in the introduction.
- Why can causal learning help commonsense reasoning? Which concrete problem in audiovisual commonsense reasoning can causal learning address? Although the authors introduce causal learning to augment the model's reasoning ability, no deep analyses are provided in this paper. The authors could provide an example or toy experiments to illustrate their motivation.
- Figure 3 only shows the t-SNE visualization of dynamic factors. Can the authors show t-SNE of static factors and the comparison with baselines?
- Why do the authors only use the disentangled sequential encoder (DSE) on videos? Can audio be processed by a similar operation?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please see weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors present the broader impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer j4kk
Thank you for your time and valuable comments, below is our response to your review:
## Q1: Some typos.
We will revise the "causal learning module" on line 11 of the paper to "counterfactual learning module", and make sure that the entire paper is consistent.
## Q2: Why do we use causal learning?
The motivation for using causal learning was primarily inspired by the following three aspects:
1. The binary labels in the dataset are insufficient to represent the real Ground Truth.
Within the PACS dataset and audiovisual commonsense reasoning, binary labels do not truly correspond to the Ground Truth. For example, different interaction actions also result in distinct audiovisual features: labels are influenced both by the physical attributes (the Ground Truth) exhibited in an object's audiovisual features and by the human-object interaction actions in the video. In Fig. 3(c) of the main paper, object 1, when subjected to the same interaction action as object 2, produces a similar sound (the sound of metal being rubbed). Addressing these challenges necessitates causal learning to enhance the model's reasoning capabilities.
2. Learning from the correlation P(Y|X) can't identify the causal effect from X to Y
As mentioned in [1], if we only train the model based on the correlation P(Y|X) without understanding the confounding effect, regardless of the size of the training data, the model will never be able to discern the true causal effect from X to Y. We employ causal learning to guide the model in learning the true causal effect from audiovisual information to physical commonsense.
3. Using intervention to highlight the role of physical knowledge in reasoning
Drawing inspiration from [2], intervention on confounding factors can shed light on the significance of particular features within the reasoning process. We utilize the physical knowledge similarity matrices of the decoupled video and audio features as confounders; through interventions, we highlight the role of physical knowledge in reasoning and make the physical knowledge similarity matrix more reliable.
## Q3: Show t-SNE of static factors and the comparison with baseline.
We appreciate your valuable suggestion. In our rebuttal material, Figures 2 and 3 provide a comparison between the t-SNE plots of static factors and the t-SNE plots of the baseline's video features (since the baseline is not disentangled and only contains video features). Figure 2 illustrates the t-SNE visualization of the video static factors decoupled by AudioCLIP w/ DCL, while Figure 3 shows the t-SNE visualization of the video features from AudioCLIP.
## Q4: Disentangled sequential encoder (DSE) in audios.
We indeed presented the experimental results of audio decoupling in our Supplementary. Please kindly see Supplementary Table 2 and the response to Reviewer QHKe's Q4 and Q5 for more details.
[1] Causality: Models, Reasoning and Inference. Judea Pearl, 2000
[2] Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification. ECCV 2022
---
Rebuttal 2:
Comment: Thanks again for your insightful suggestions and comments. Please let us know if our responses have addressed the issues raised in your reviews. We hope that our clarifications and additional results address your concerns and convince you of the merits of our work. We are happy to provide any additional clarifications or experiments that you may need.
Thank you for your time again!
Best, Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the clarifications. I raise my score to borderline accept after reading authors’ response and other review comments. | Summary: The paper introduces a novel approach for physical audiovisual commonsense reasoning by Disentangled Counterfactual Learning (DCL). The authors propose a disentangled sequential VAE to separate static and dynamic factors in the visual latent space with an additional contrastive loss term. In addition, a causal learning module that leverages counterfactual intervention between different objects is used to enhance the learning of physical knowledge relations. The proposed modules could be easily plugged into existing baselines. The experiments on PACS dataset demonstrate that the proposed method could improve baseline methods.
Strengths: 1. The idea of modeling implicit physics knowledge relationships between various objects from audio-visual data is interesting and well-motivated.
2. The proposed method is a general module that could be plugged into any baseline. I could also see these modules are definitely not limited to being applied for audio-visual commonsense reasoning.
3. The paper is well-written and straightforward to understand.
4. Extensive experiments and analyses have been done to demonstrate the contribution of each proposed component.
Weaknesses: 1. While I like the general idea of the proposed modules, the final performance shows a minor increment (1.4-3.2%) compared to the baselines. With all these sophisticated designs of additional components, I would expect a larger gap in terms of performance, even though I believe audiovisual physical commonsense reasoning is a challenging task. If I understand correctly, the questions in PACS are binary (e.g., object 1 or 2). As I look into the results, the accuracy is about 60% which is not much better than a simple random guess. In the Supplementary material, there is a model size comparison between models with/without DCL. With the DCL, about 12M more parameters are introduced. Is it possible that the improvement is simply from these additional parameters? A simple verification would be adding the same amount of parameters via MLPs on top of the existing baselines and reporting the outcomes without using DCL.
2. In Section 3.2, Line 137, the authors mention the full proof could be found in Appendix, but it is missing.
3. The T-SNE plots in Figure 3 and the Supplementary material could be more informative. The description of each color in the plot is missing.
4. See Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the current formulation, only the video features are represented as sequence data, but the audio feature is not. What is the reason for doing that? Based on the results, AudioCLIP actually works better than CLIP. This means probably audio plays a more important role in the reasoning. I guess having a disentangled sequential encoder for audio features and using contrastive loss could be possible.
2. Following the first question, the disentangled sequential encoder is for unimodal only (visual). I wonder whether adding a cross-modal mutual information term would be helpful if both audio and visual features are considered sequential.
3. As I mentioned in the weakness, the final performance is still around 60% accuracy, even with the proposed method. While I see some examples in qualitative results and analyses, like inaccurate labeling, I would like to ask what the main failure cases would be and the challenges to overcome.
4. Currently, the method is end-to-end training. I wonder whether it would be helpful if the disentangled sequence VAE is pre-trained first so that the static and dynamic factors are well-learned before applying counterfactual intervention.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer QHKe
Thank you for your time and valuable comments, below is our response to your comments:
## Q1: The impact of more parameters.
Thank you for your appreciation of our model. Audiovisual physical commonsense reasoning is indeed a formidable challenge (we will elaborate on the specific difficulties shortly).
We added four layers of MLP with an equivalent number of parameters (12M) to replace our DCL into AudioCLIP, yielding the following results:
| Model | PACS |
|--|:--:|
| AudioCLIP | 60.0 ± 0.9 |
| AudioCLIP w/ MLP | 59.9 ± 0.6 |
| AudioCLIP w/ DCL | 63.2 ± 0.8 |
The results clearly indicate that, even with comparable parameter settings, AudioCLIP w/ MLP remains unable to match the performance of AudioCLIP w/ DCL, affirming the effectiveness of our designed DCL module.
## Q2: Missing proof.
The proofs presented in Section 3.2 are standard for sequential VAEs [1,2], so we omitted them for brevity. Please kindly see Appendix A in [1] for the full proofs. We will incorporate these proofs into the appendix of our final version.
## Q3: More information of the t-SNE plots in Figure.3 and the Supplementary.
Thanks for the suggestion! We have included more comprehensive t-SNE visualizations in our rebuttal material (see Figure 1).
## Q4: The experiment of disentangled sequential encoder for audio features and using contrastive loss.
Great suggestion! Following the baselines in PACS [3], we have abstained from treating audio as sequential data for fair comparison.
However, we did apply a disentangled sequential encoder to audio features, and the results are shown in Table 2 of our Supplementary. Decoupling the audio led to a relative improvement of 2.5% in the model's performance. We will incorporate these results into the main paper.
During training of the disentangled sequential encoder, we utilized contrastive loss, as outlined in Eq.(3)(4)(5) in the main paper, and the results also demonstrate its effectiveness.
## Q5: An experiment of adding a cross-modal mutual information term.
Thanks for the suggestion. Per your request, we treat both visual and audio data as sequential data and disentangle them concurrently. We then introduce cross-modal mutual information between the two modalities. The results are as follows:
| Model | PACS | PACS-Material |
|--|:--:|:--:|
| AudioCLIP | 60.0 ± 0.9 | 75.9 ± 1.1 |
| AudioCLIP w/ DCL (visual) | 61.5 ± 0.8 | 76.0 ± 1.4 |
| AudioCLIP w/ DCL (audio) | 63.2 ± 0.8 | 76.2 ± 0.7 |
| AudioCLIP w/ DCL (visual and audio) | 66.7 ± 0.6 | 79.4 ± 1.1 |
| AudioCLIP w/ DCL (visual and audio) w/ MI | 65.5 ± 0.8 | 78.6 ± 0.5 |
As shown in the table, it is clear that simultaneous decoupling of both audio and visual modalities leads to superior results. However, we did not observe any additional enhancement when we included the mutual information term. We speculate that this is because visual and audio modalities could contain irrelevant information, and adding cross-modal mutual information could actually be detrimental to the results.
## Q6: What the main failure cases would be and the challenges to overcome.
The challenges of our task can be summarized into three aspects:
1. Visual data contains a significant amount of noise.
Videos contain abundant noise, such as diverse backgrounds and other irrelevant objects. Extracting generic features using a simple visual encoder yields unsatisfactory results.
Main failure case: In Fig. 1 of the main paper, in object 1's video, besides the tall glass, the screws below act as irrelevant objects, and the complex background of object 2's video introduces noise.
2. The cause of audio generation is difficult to reason.
Within audio, it is challenging to determine the relationship between actions and sounds. The same object, when interacted with different actions, can produce different sounds, thereby affecting the reasoning of its attributes. Extracting generic features using a single-modal audio encoder yields unsatisfactory results.
3. The questions' setting is challenging.
For example, in the VQA2.0 [4] and VCR [5] datasets, questions such as "What color are her eyes?" necessitate a single round of reasoning to yield an answer. In contrast, within the PACS dataset and audiovisual commonsense reasoning, questions like "Which object would be less likely to retain its shape if the other was placed on top of it?" mandate the simultaneous comprehension of the physical attributes of two objects. This intricate configuration of multi-dimensional, multi-step reasoning compounds the difficulty, and the binary labels in the dataset are insufficient to represent the real Ground Truth.
Main failure case: In Fig. 3(c) of the main paper, the binary labels erroneously guide the model to believe that "objects that sound rough are not suitable for digging holes", whereas the actual ground truth is that larger objects are not suitable for digging small holes.
## Q7: Pre-trained VAE training for our method
Thanks! We conducted additional experiments following your suggestion, and the results are as follows:
| Model | PACS | PACS-Material |
|--|:--:|:---:|
| CLIP w/ DCL -End to End| 56.3±0.7 | 72.4±1.1|
| CLIP w/ DCL -Pre-trained VAE | 55.2±0.3 | 70.1±1.5|
where 'CLIP w/ DCL - Pre-trained VAE' first trains the VAE and then trains the other modules, while the other row is our original end-to-end method. As shown in the table, the pre-trained VAE's performance is inferior to that of the end-to-end approach.
[1] S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation. CVPR 2020
[2] Contrastively Disentangled Sequential Variational Autoencoder. NeurIPS 2021
[3] PACS: A Dataset for Physical Audiovisual CommonSense Reasoning. ECCV 2022
[4] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. CVPR 2017
[5] From Recognition to Cognition: Visual Commonsense Reasoning. CVPR 2018
---
Rebuttal Comment 1.1:
Comment: I appreciate that the authors have conducted new experiments, and these results have addressed most of my concerns. Although the performance improvement is incremental, the proposed approach is generally novel and could benefit the community. With this consideration, I would like to raise my score to 'borderline accept'.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer QHKe
Comment: We are grateful to you for not only acknowledging our answer, but also for finding our updates satisfactory. We strongly believe that those suggested changes have made our submission stronger. | Rebuttal 1:
Rebuttal: # Our rebuttal material
Pdf: /pdf/c93ad97e21f58c15e7537ef0cc437dc0e3516836.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a disentangled counterfactual learning (DCL) approach to solve physical audio-visual commonsense reasoning. This approach first decouples videos into static and dynamic latent features, and then uses a causal learning module to augment the model's reasoning ability. Authors show that this module can be used to augment any existing baselines.
Strengths: The proposed approach is novel and inspiring for similar reasoning tasks. The approach section has detailed each sub-module of the approach. I also like the fact that this approach can be plugged into any existing work to improve its performance.
The qualitative examples in Fig. 2 show how baseline models augmented with the proposed approach performs better than those without.
Weaknesses: The performance improvement against baselines is pretty marginal; in particular, Merlot Reserve was not run with the additional module. I understand that this was because computational resources were restricted. But still, since Merlot Reserve has a 10% improvement over the second-best baseline, it remains a question whether the proposed module still brings benefits to Merlot Reserve.
Some other comments:
1. Random performance (guessing by chance) should be shown in Table 1.
2. The reported baselines are those from the benchmark, but they no longer represent the current SOTA in this domain; many other works have come out after CLIP, for example.
3. Late Fusion w/ dynamic [42] should be written as Late Fusion [42] w/ dynamic. And the same applies to other rows in Table 1.
4. You need a more concrete example to motivate the physical knowledge relationship module. It is unclear in the paper what kind of correlation between different samples are shared.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My main question about the paper is regarding the performance margin and I'd appreciate authors' clarification on that.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitation of this work has not been discussed but the societal impact has been discussed in the last paragraph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer ZZdS
Thank you for your time and valuable comments, below is our response to your review:
## Q1: Marginal Performance of our method.
Merlot Reserve utilizes a large-scale private dataset (YT-Temporal-1B) and a specialized device (v3-1024 TPU) for training. Due to this limitation, we were unable to reimplement the experimental setup of Merlot Reserve.
To address concerns about the performance gap, we evaluated the model ERNIE-ViL [1] as a replacement for Merlot Reserve. This is because ERNIE-ViL and Merlot Reserve (base) achieved comparable performance on the VCR task [3]. Additionally, we incorporated the SOTA model BLIP [2] after CLIP in our experiments. The results are as follows:
| Model | PACS | Relative Improvement (%) |
|----------------------|:-------------:|:-------------------------:|
| Random | 50.4 | - |
| Late Fusion | 55.0 ± 1.1 | - |
| Late Fusion w/ DCL | 57.7 ± 0.9 | 4.9 |
| CLIP | 56.3 ± 0.7 | - |
| CLIP w/ DCL | 58.4 ± 0.8 | 3.7 |
| AudioCLIP | 60.0 ± 0.9 | - |
| AudioCLIP w/ DCL | 63.2 ± 0.8 | 5.3 |
| BLIP [2] | 59.1 ± 0.5 | - |
| BLIP w/ DCL | 61.5 ± 0.6 | 4.0 |
| UNITER (Large) | 60.6 ± 2.2 | - |
| UNITER (Large) w/ DCL | 62.0 ± 2.4 | 2.3 |
| ERNIE-ViL [1] | 66.7 ± 1.1 | - |
| ERNIE-ViL w/ DCL | 70.4 ± 0.8 | 5.5 |
As shown in the table, ERNIE-ViL outperforms the second-best baseline by 6%, and with our DCL module, it achieves an improvement of 5.5%. Although it cannot directly prove the effectiveness of our module on Merlot Reserve, based on the results of the six tested models mentioned above, it is reasonable to speculate that our method is still applicable to Merlot Reserve.
## Q2: Random performance, and typo in Table 1.
Thanks for the suggestion! In the revised version, we will fix these typos and add the random performance in Table 1, denoted as "Random," with a result of 50.4%.
## Q3: Other works coming out after CLIP.
We appreciate your valuable suggestion. We conducted experiments using the more powerful baselines BLIP [2] and ERNIE-ViL [1]. The experimental results are shown in the table in Q1: after incorporating our module, the two baselines achieved 4.0% and 5.5% improvements, respectively, confirming the efficacy of our plug-and-play module. Based on the results of the six tested models mentioned above, we believe that our module can enhance a model's audio-visual physical commonsense reasoning ability. We will include the additional results in the revised paper.
## Q4: A more concrete example of the motivation of physical knowledge relationship.
We take a piece of data from PACS as an example:
Question: Which object could be shattered or bent if the other was to strike it forcefully?
Object 1: A large, thick glass jar with a glassy appearance, making a glass-like sound when struck.
Object 2: A small-sized metal shell cellphone, thin in profile with a metallic sheen, producing a metallic sound upon friction.
Our model reasons about such an example using the following approach:
1. We aim for the physical knowledge relationships to serve as guidance for the similarity between objects. Through the module, even though object2 may not exhibit the attribute of 'hardness', the shared attribute of hardness among other metallic objects should influence the model's reasoning. For zero-shot objects in the test set (in PACS, all objects in the test set are zero-shot), our approach can leverage attributes from a broader range of objects to aid in reasoning.
2. For both the dynamic and static factors and the audio information, we employ the near-neighbor selection function to extract the Top-5 highest similarity scores from each row, resulting in the affinity matrix A. By utilizing Eq.(6), we derive the final features. Our investigation reveals that in the similarity matrix for dynamic factors, object 1's Top-5 exclusively involve videos depicting striking actions on the object, while object 2's Top-5 encompass actions involving friction. For the static factors, object 1's Top-5 include 2 objects characterized by large volumes and 3 possessing a glassy texture, while object 2's Top-5 consist of small metallic sheets. Concerning audio, object 1's Top-5 emit glass-like sounds, while object 2's Top-5 produce sounds akin to sanding. In reasoning about the question, the key lies in identifying object 1's attributes of 'large volume' and 'fragility', whereas object 2 possesses 'small volume' and 'hardness'. Therefore, the final reasoning yields the result that object 1 is the answer. Our method implicitly represents these features and identifies objects similar to object 1 and object 2 in dynamic, static, and audio aspects, aiding the model in reasoning about the given problem.
3. The reasoning process of our proposed method aligns with the findings of Piloto's work [4], which states that human cognition involves the continuous induction of physical concepts, where specific physical concepts are learned through the similarities in physical knowledge across diverse objects. In the PACS dataset, each question is linked to its corresponding physical concept. These concepts encompass both static attributes such as 'color,' 'thickness', as well as dynamic attributes like 'stretchiness', and 'softness'. Within these physical concepts, objects exhibit distinct attributes. Objects sharing similar physical attributes often yield analogous answers to a given question. Our method capitalizes on this characteristic to assist the model in reasoning.
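The Top-5 near-neighbor selection described in point 2 could be sketched as follows (a toy illustration with made-up object counts and feature dimensions; the actual similarity measure and the Eq.(6) aggregation are not reproduced here):

```python
import numpy as np

def topk_affinity(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Keep only the Top-k similarity scores in each row; zero out the rest."""
    sim = features @ features.T              # pairwise similarity matrix
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    affinity = np.zeros_like(sim)
    topk = np.argsort(sim, axis=1)[:, -k:]   # column indices of the Top-k per row
    rows = np.arange(sim.shape[0])[:, None]
    affinity[rows, topk] = sim[rows, topk]
    return affinity

rng = np.random.default_rng(0)
feats = rng.standard_normal((20, 64))        # 20 objects, 64-dim disentangled factors
A = topk_affinity(feats, k=5)                # sparse affinity matrix A
```

Each row of A then keeps only the five most similar objects, which is what lets attributes of similar objects (e.g. "metallic, hard") influence the reasoning about a zero-shot test object.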
[1] ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graphs. AAAI 2021
[2] BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. ICML 2022
[3] From Recognition to Cognition: Visual Commonsense Reasoning. CVPR 2018
[4] Intuitive Physics Learning in a Deep-Learning Model Inspired by Developmental Psychology. Nature Human Behaviour 2022
---
Rebuttal 2:
Comment: Thanks again for your insightful suggestions and comments. Please let us know if our responses have addressed the issues raised in your reviews. We hope that our clarifications and additional results address your concerns and convince you of the merits of our work. We are happy to provide any additional clarifications or experiments that you may need.
Thank you for your time again!
Best,
Authors | null | null | null | null | null | null |
Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks | Reject | Summary: Large language models lack specialized expert skills, and this is reflected in their limited capability for arithmetic, etc.
The paper proposes a method to integrate a CoNN (compiled neural network) into an LLM via gating. Such integration allows better performance on rule-intensive tasks such as symbolic reasoning and arithmetic reasoning. To perform this, the paper proposes a gating mechanism, although the implementation seems to be rule-based triggering (line 167).
With the proposed mechanisms, the authors evaluate on arithmetic tasks (5.1), where the model consistently achieves 100% accuracy. On symbolic reasoning (5.2) and arithmetic reasoning, the method also performs better than fine-tuning approaches in terms of performance or data efficiency.
Strengths: The paper correctly identifies the limitations of LLMs and proposes a novel approach to tackle the problem. The solution consists of a neural machine that serves as an expert for symbolic/arithmetic operations.
Empirically, the paper demonstrates strong results over competitive baselines on arithmetic/symbolic reasoning approaches.
Weaknesses: The novelty introduced in the paper doesn't qualify for a NeurIPS paper. Concretely, the paper's novelty is using a rule-based trigger to combine a CoNN with an LLM; neither component constitutes a novel contribution on its own.
There are various presentation issues that make the paper quite hard to follow:
- The second contribution item is put in the Appendix. I read it, but please note that reviewers are not obliged to read Appendix material to judge the paper. Related to this remark, CoNN is only introduced by reference; for people unfamiliar with the technology, there is nowhere in the paper to learn how it works.
- Section 3.2 introduces the gating between LLM and CoNN. I think equation (1) has a flaw, since it involves choosing the argmax from a matrix, which seems ill-defined; I think the authors mean to concatenate HL and HC instead. The gradient flow (3.3) doesn't give further insight beyond the fact that it is a gating mechanism.
- I am confused by the illustrative figures in the paper. Figure 2 has the input text on top, which is easily confused with a paper title, and I feel Figure 1 is not relevant to the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: For experiments other than 5.1, the model doesn't achieve 100% accuracy (even for tasks like the last letter). How do the authors explain this difference? Is it because some arithmetic CoNNs have been implemented while the last-letter one has to be learnt?
Line 162: when beta equals 1, the left side of equation one is an identity matrix, so the model seems to choose the max between HL and HC, but the authors say that it will take the max of HC. Why?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Unless Neural Comprehension machines are widely used, I don't see why this approach is particularly useful: I don't see the advantage compared to the API-calling approach adopted by today's industry. In particular, as shown in this paper, many operations have to be implemented individually (subtraction, addition, etc.).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our submission and providing constructive feedback. Below are our rebuttals to address your concerns. We hope this helps clarify any misunderstandings.
---
**Review:** The novelty of the paper concretely is using the rule based trigger to combine a CoNN with an LLM; neither components consist of the paper novelty.
**Response:** Our key contributions are: 1) a novel integration of CoNNs and LMs using a gating mechanism that combines their complementary strengths in a plug-and-play manner; to the best of our knowledge, no prior work has approached the combination of these two in the same way; and 2) AutoCoNN, which automatically generates CoNNs using LLMs, enabling our framework to be applied to rule-based tasks without manual effort. We will add more implementation details of CoNNs and AutoCoNN to the main text to strengthen the novelty.
---
**Review:** CoNN is only introduced and referenced, for people not familiar with the technology, there is nowhere in the paper to know how it works.
**Response:** Thank you for the feedback on supplemental material and CoNN details. We will move the core AutoCoNN content into the main text and provide more specifics on CoNNs, as they form a key contribution.
---
**Review:** (1) A flaw since it involves choosing argmax from a matrix which seems ill-defined and I think the authors mean to concatenate HL and HC instead. The gradient flow (3.3) doesn't give further insight rather than it is a gating mechanism.
**Response:** The vocabulary of the CoNN is obtained by expanding the LM's original vocabulary with new tokens. Therefore, in Equation 1, the combined hidden state representation matrix is obtained by extending the LM's original hidden state matrix with the CoNN's hidden state matrix over the RASP vocabulary through a block-matrix operation, resulting in a larger matrix; the selection is then performed via \beta. Thank you for pointing this out; we will modify the equation to represent our approach more clearly. Regarding the gradient flow (3.3), we presented it to illustrate how our method exploits the gradient updates from both the LM (implicit learning) and the CoNNs (explicit learning).
---
**Review:** Figure 2 has the input text on top which is easily confused with a paper title and I feel Figure 1 is not relevant to the paper.
**Response:** We will adjust the layout; the new Figure 2 can be viewed in the PDF we submitted. Figure 1 was meant to illustrate the concept of "length generalization" in language modeling, establishing the motivation for our research.
---
**Review:** The model doesn't achieve 100% accuracy (even for tasks like the last letter). How do authors explain this difference please?
**Response:** For all reasoning tasks (including symbolic reasoning and arithmetic reasoning), solving them requires the LM to have both language understanding and rule comprehension (line 272). For example, the last-letter task "Take the last letters of the words in Apple Pencil and concatenate them." first requires the LM to sequentially identify 'Apple' and 'Pencil' in the sentence, after which the CoNN outputs the last letters "e" and "l" of these words. For arithmetic reasoning, the LM must understand the problem and write out the formula so that the CoNN can compute the result.
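To make this division of labor concrete, here is a toy sketch (the function names and the hard-coded parsing step are ours, not the paper's implementation): a stand-in for the LM extracts the words, and a stand-in for the Last Letter CoNN executes the symbolic rule exactly.

```python
import re

def extract_words(question):
    # Stand-in for the LM's language-understanding step. A real LM parses
    # free-form text; here we hard-code a pattern purely for illustration.
    m = re.search(r'words in (.+?) and concatenate', question)
    return m.group(1).split()

def last_letter_concat(words):
    # Stand-in for the Last Letter CoNN: an exact, rule-based operation.
    return ''.join(w[-1] for w in words)

q = "Take the last letters of the words in Apple Pencil and concatenate them."
print(last_letter_concat(extract_words(q)))  # -> el
```

The point of the split is that the fuzzy comprehension step and the exact symbolic step are handled by different components, which is why accuracy depends on both.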
---
**Review:** Line 162: when beta equals 1, the left side of Equation 1 is an identity matrix, so the model seems to choose the max between HL and HC, but the authors say it will take the max of HC. Why?
**Response:** The reason is that when \beta=1, H_L is the hidden state output after the activation function, for example [0.23, 0.84, ..., 0.02], while H_C is the hidden state output of the CoNN. Since a CoNN is a transformer model converted from rules, its output has exactly one entry equal to 1 and the rest equal to 0, for example [0, 1, ..., 0]. Therefore, by the computation in Equation 1, the max is always attained at the entry contributed by H_C.
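A toy numeric sketch of this argument (our own simplified notation, not the paper's exact Equation 1): when beta = 1, the one-hot H_C dominates the gated combination, so the argmax always returns the CoNN's token.

```python
def gated_argmax(h_l, h_c, beta):
    """Pick the output token index from a gated mix of H_L and H_C.

    h_l: soft LM scores after the activation function.
    h_c: one-hot CoNN output (exactly one entry is 1, the rest are 0).
    """
    combined = [(1 - beta) * l + beta * c for l, c in zip(h_l, h_c)]
    return max(range(len(combined)), key=combined.__getitem__)

h_l = [0.23, 0.84, 0.02, 0.05]  # illustrative LM scores
h_c = [0.0, 0.0, 1.0, 0.0]      # one-hot CoNN output

print(gated_argmax(h_l, h_c, beta=1))  # 2: the CoNN's token always wins
print(gated_argmax(h_l, h_c, beta=0))  # 1: the LM's top token
```

With beta = 1 the combined vector equals h_c, so whichever entry the CoNN sets to 1 is always the maximum, regardless of the LM scores.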
---
**Review:** I don't see the advantage compared to the API-calling approach adopted by today's industry.
**Response:** Thank you for this observation. Indeed, our NC (Neural Comprehension) methodology addresses certain limitations inherent in the traditional API-calling approach.
Firstly, the NC framework, with its capacity for flawless execution of rule-intensive tasks, presents an attractive alternative where accuracy is critical. As shown in Figure 5 (Section 5.3, Arithmetic Reasoning), NC's performance is not inferior to API-calling methods (such as PAL [1], which uses LLMs to generate Python code instead of producing the answer directly). We excerpt part of Figure 5 into the following table:
| Method | 1-dist AddSub | 5-dist AddSub | 10-dist AddSub | 15-dist AddSub | 20-dist AddSub |
|-|-|-|-|-|-|
| GPT-3.5+CoT | 81.7 | 65.3 | 28.6 | 8.9 | 3.2 |
| GPT-3.5+PAL | **89.7** | 66.2 | 66.4 | 63.5 | 63.1 |
| GPT-3.5+NC | 81.9 | **70.2** | **67.5** | **67.8** | **67.4** |
| GLM-130B+CoT | **21.2** | 1.3 | 0.0 | 0.0 | 0.0 |
| GLM-130B+PAL | 3.4 | 0.1 | 0.0 | 0.0 | 0.0 |
| GLM-130B+NC | **21.2** | **7.8** | **8.2** | **5.3** | **8.1** |
We find that for a high-performance LM like GPT-3.5, NC within the transformer architecture is far superior to the plain CoT method and not inferior to API-based methods like PAL. On the other hand, for weaker LMs like GLM-130B (in fact, open-source LLMs such as Llama and all smaller LMs fall into this category), PAL is rendered useless by their inability to call APIs reliably, whereas NC can still improve their performance.
---
We hope this information helps resolve your concerns and leads you to re-evaluate your rating of our paper. We will also further improve the paper based on your useful suggestions to make it clearer.
[1] Gao L, Madaan A, Zhou S, et al. PAL: Program-aided language models[C]. International Conference on Machine Learning. PMLR, 2023: 10764-10799.
---
Rebuttal Comment 1.1:
Title: Thank you for the feedback
Comment: Thank you for the detailed feedback.
Through the rebuttal, the novelty is further clarified: a novel integration of CoNNs and LMs using a gating mechanism that combines their complementary strengths in a plug-and-play manner. This is indeed a contribution that I saw in the paper but did not state clearly in my first review. I thank the authors for clarifying the mathematical details and for thoroughly discussing the presentation issues, which I trust they will improve.
I also read the discussions with the other reviewers, and they were helpful; thanks to all. My main concern is the scalability of the approach (aik1) and how it generalizes to new tasks. AutoCoNN seems promising within the discussion, but at the moment it is part of the appendix. The rule-based design doesn't affect the main novelty, and I suggest the authors stick to its simplest form instead of introducing/discussing the beta that is not implemented; again, this is part of the presentation issues that many reviewers have raised.
Meanwhile, I am still unsure whether this approach offers any theoretical advantage over API-based approaches, since both can perform execution at 100% accuracy. The empirical part only shows that integrating CoNNs improves small models, which seems a little obvious: if your module is neural, you can always integrate it, and on smaller models the integrated model should perform better. I might be wrong here, but I personally fail to see the advantage of casting so many things in the form of CoNNs, and that affected my decision about accepting the paper. In fact, I think there might be tasks that would illustrate its benefits, but the examples tested in the paper can all be addressed by relatively simple code snippets, and I don't see why we should use a CoNN in those cases.
I will maintain my score and my decision.
---
Reply to Comment 1.1.1:
Title: Re: The scalability of the approach and how they generalize to new tasks
Comment: Thank you for your time in reviewing our paper and providing us with valuable feedback. Allow us to address your concerns further.
---
**Review:** The AutoCoNN seems promising inside the discussion but at the moment, it is part of the appendix.
**Response:** When writing the paper, we placed AutoCoNN in the appendix since it is considered a toolkit. In fact, the Parity Model, Reverse Model, Last Letter Model, and Copy Model mentioned in the experiments of this paper are all constructed by AutoCoNN. To make the presentation more complete and better highlight the importance and novelty of this aspect, we will include further AutoCoNN discussions in the main text.
---
**Review:** I suggest the authors stick to its simplest form instead of introducing/discussing the beta that is not implemented again.
**Response:** We appreciate your suggestion. The details of the gating mechanism implementation will be clarified in the revised manuscript. We will simplify the discussion of the unimplemented beta in the revised manuscript without losing clarity or effectiveness.
---
Reply to Comment 1.1.2:
Comment: We believe our approach with Neural Comprehension (NC) provides a novel and powerful alternative. Let's address the benefits of this approach considering two scenarios:
---
**1) Scenarios where APIs are supported and the language model has function-calling capabilities**: Firstly, it is important to highlight that Neural Comprehension (NC) relies solely on the original transformer architecture and does not require an additional processing step or external tools. This directness is an inherent advantage over typical API-based approaches, which must generate code snippets step by step, receive feedback, and only then produce an answer based on that feedback. This indirect process is prone to accumulating errors and reduces efficiency.
For example, consider using GPT-3.5 with API function-calling capabilities to solve an arithmetic reasoning task; whenever the LM generates an <EOS> token, its generation stops.
> Q: iWatch shows that Stanley runs an average of 364,425 meters and walks 216,582 meters per month. If he continues at this frequency, how much farther will Stanley have run than walked after one year?
**API:**
> `<FUNCTION>{'name':'CALL_PYTHON','arguments':'def sub():\n return 364425-216582\nsub()'}</FUNCTION>[EOS]<RETURN>{'name':'CALL_PYTHON','content':'147843'}</RETURN><EOS>`A: Firstly, we subtract the walking distance from the running distance to find out how much farther Stanley runs than walks in a month: 364425 meters - 216582 meters = 147843 meters <EOS>`<FUNCTION>{'name':'CALL_PYTHON','arguments':'def multiplication():\n return 147843 * 12\nmultiplication()'}</FUNCTION>[EOS]<RETURN>{'name':'CALL_PYTHON','content':'1774116'}</RETURN><EOS>`Over a year, he will run 147843 meters/months * 12 months = 1774116 meters <EOS>
**NC:**
> A: Firstly, we subtract the walking distance from the running distance to find out how much farther Stanley runs than walks in a month: 364425 meters - 216582 meters = *147843* meters. Over a year, he will run 147843 meters/months * 12 months = *1774116* meters <EOS>
An API-based approach would typically involve generating Python code for the arithmetic, running it in an external Python interpreter, and then producing the answer from the interpreter's output. Frequent API calls also inject a large amount of irrelevant content into the language model's context. By contrast, the compiled neural network for arithmetic in our NC framework handles this operation internally and directly emits the result (the italicized text in the example). Moreover, for more complex or unusual situations, generating correct API-call code can be challenging, especially for LMs with little code training. Indeed, in the table above, PAL on GLM-130B significantly underperformed NC, which computes the answer directly, and GPT-3.5+NC is not inferior to GPT-3.5+PAL because NC avoids the incorrect code that PAL sometimes generates.
Furthermore, building neural modules (CoNNs) into LMs keeps our model fully differentiable, which is not the case for approaches that call APIs. This differentiability is crucial when fine-tuning the entire model to adapt to new data or tasks. If we want to revise the weights based on feedback (for instance, reinforcement learning from human feedback or other settings with sparse, delayed rewards), API-based approaches are not differentiable and thus cannot handle such tasks efficiently.
**2) Scenarios without API support, or language models lacking function-calling capabilities.** NC is applicable regardless of whether the specific LM can call functions, making it advantageous in scenarios where APIs cannot be leveraged or the language model lacks effective function-calling capabilities.
As stated in line 190, a CoNN essentially provides the optimal predefined neural network for a specific symbolic-operation task. Thus, when combined via the gating mechanism, the overall network is guaranteed to perform no worse than any network obtained via gradient descent, yielding a model with fewer parameters, lower computational cost, and superior symbolic-operation capability.
---
We sincerely appreciate your valuable feedback. We will incorporate the advantages of NC compared to API into the revised version of the paper to enhance its quality and arguments.
Title: Re: The advantages of Neural Comprehension
---
Reply to Comment 1.1.3:
Comment: We summarize these advantages as **Efficiency**, **Unified End-to-End Neural Network Framework**, and **High Applicability**.
- **Efficiency:** NC eliminates the need for generating additional code and feedback during the text generation process. This results in more efficient operation of the language model and reduces the computational resources required for generating extra code.
- **Unified End-to-End Neural Network Framework:** NC forms a complete neural network that does not require an external interpreter, so the model remains fully differentiable and trainable within a causal language-model training framework. APIs were adopted in the past because language models could not achieve complete accuracy [1]; invoking APIs indirectly reduced hallucinations, but could also compromise efficiency and performance.
- **High Applicability:** API-based methods can only be used with large language models that have undergone additional code training. Our original paper's experiments showed that even models like GLM-130B struggle to utilize API methods effectively (as reflected in our GLM-130B+PAL results). In contrast, our proposed NC framework has demonstrated the ability to significantly enhance performance across a variety of model scales, from small models like T5-Small (60M) to larger ones such as GLM-130B (130B) and GPT-3 (175B).
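The differentiability point can be illustrated numerically with a toy scalar model (our own stand-ins, not the paper's architecture): the combined output y = (1 - beta) * f_theta(x) + beta * g(x) keeps a well-defined gradient in theta even though the CoNN branch g is fixed, whereas an external API call would break the chain.

```python
def f(theta, x):
    # Stand-in for the trainable LM branch.
    return theta * x

def g(x):
    # Stand-in for the frozen, exact CoNN branch (some fixed rule).
    return 2 * x

def combined(theta, x, beta):
    return (1 - beta) * f(theta, x) + beta * g(x)

def grad_theta(theta, x, beta, eps=1e-6):
    # Finite-difference gradient of the combined output w.r.t. theta:
    # analytically it is (1 - beta) * x, flowing only through the LM branch.
    return (combined(theta + eps, x, beta) - combined(theta - eps, x, beta)) / (2 * eps)

print(round(grad_theta(theta=0.5, x=3.0, beta=0.25), 4))  # 2.25 == (1 - 0.25) * 3
```

Because the gradient exists everywhere (beta < 1), training signals can pass through the whole model end to end, which is exactly what an API call executed outside the network cannot provide.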
We hope these explanations address your concerns. If you have any further questions before the deadline, we would be more than happy to continue this dialogue.
---
[1] Mialon G, Dessì R, Lomeli M, et al. Augmented language models: a survey[J]. arXiv preprint arXiv:2302.07842, 2023.
Title: Re: The advantages of Neural Comprehension - 2 | Summary: The authors have proposed a novel way to augment large language models, called Neural Comprehension, to improve symbolic reasoning in tasks where rule-based execution is required by design, such as number summation. The core idea behind their method is to augment the LM with a compiled NN (CoNN) for a specific task in a mixture-of-experts manner, where a designed policy determines whether the LM or the CoNN executes the next-token prediction at each time step. In addition, they describe how their method can be used with in-context learning (ICL). The authors perform a set of experiments on symbolic operations (parity, reverse, addition, subtraction), symbolic reasoning (concat, coin flip) and arithmetic reasoning. They show empirically how their method outperforms stand-alone LMs finetuned on the corresponding task data. Finally, they show the potential of combining multiple CoNNs with a given LM to increase the task capability of the final Neural Comprehension model.
Strengths: This work proposes an original way to combine LMs with CoNN networks and analyze the performance using multiple correlated or uncorrelated CoNNs together.
The wide range of symbolic experimental tasks shows that the authors performed a high-quality experimental investigation.
Weaknesses: I think the major weaknesses of this work are (1) the hardcoded structure of the CoNNs under consideration and (2) the hard-coded policy for choosing the LM vs. the CoNN component by connecting that to task-based properties. I believe the most interesting part would be to learn the beta factor, which also seems to be very challenging.
The authors claim that their work suggests potential improvements from using Neural Comprehension in other tasks, but they do not mention how to obtain CoNNs for these tasks. In general, the discussion of CoNN design and implementation is somewhat skipped in the paper, while it seems to be a crucial factor in this paper's impact.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There are some grammar and syntax typos in the paper, even in the section titles, please fix that.
A clarification question: from my understanding, you have hard-coded the beta policy such that it only triggers the ICL gradient when it sees the "=" token. Does this mean that this approach will not work at all on a sequence like "4 + 4 equals 8"? If so, do you have any ideas for how to make it work? In general, I like this idea of plug-in CoNNs, but they need to be more seamless and fluent, without requiring such hard coding at the task level.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss some statistical limitations of LMs in the appendix and how their proposed method alleviates them. However, I did not find an explicit discussion of the limitations of their own approach beyond the description of future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our work's novelty and significance in improving symbolic reasoning tasks.
---
**Review:** Hardcoded structure of CoNNs under consideration
**Response:** You correctly pointed out that in the current implementation, \beta and the CoNNs are hardcoded and manually specified. We acknowledge this limitation and mention it in the Limitations and Future Work section. However, the primary objective of this research is to introduce the concept of incorporating CoNNs into the LM's architecture to enhance rule-comprehension capabilities. The fixed \beta avoids introducing additional error from learning it from data, which helps isolate the positive effect of introducing CoNNs into the LM. We have also proposed AutoCoNN, a method that automates the construction of CoNNs using the LM's inherent code-writing and in-context learning abilities. This allows the LM to generate CoNNs for specific tasks or domains with minimal need for human intervention.
---
**Review:** Hard-coded policy of choosing the LM vs CoNN component by connecting that to task-based properties
**Response:** We agree with your observation and see this as an exciting avenue for future research. For the LM, the Neural Comprehension framework is plug-and-play, requiring no extra training; the hard-coded \beta is what avoids additional training. In our present work, we use a simple gating mechanism that switches between the LM and the CoNN based on predefined rule-based tasks. The goal was to provide a first, proof-of-concept solution demonstrating the potential benefits of combining LMs and CoNNs. As we discuss in the paper, a more sophisticated, learnable gating mechanism could be developed that adaptively decides whether to engage the LM or the CoNN during generation. This improvement could lead to models that integrate and balance text-based and rule-based learning more efficiently.
---
**Review:** Authors claimed that their work suggests potential improvements from using Neural Comprehension in other tasks, but they did not mention how to get CoNNs for these tasks? In general, the discussion about CoNN design and implementation is somewhat skipped in the paper while it seem to be a crucial factor in this paper's impact.
**Response:** Apologies for the oversight. We agree that CoNN design and implementation are crucial to our method. While we described the encoding of rules into CoNNs via attention mechanisms (Select, Aggregate, and Zipmap) in Section 3.1 and provided further details in Appendix B, we understand that this explanation may not have been sufficient. We will move more details on CoNN design and implementation into the main part of the paper in the revision to ensure that the methodology is clearly understood.
Moreover, we developed a method named AutoCoNN (Appendix C) to construct CoNNs automatically for specific tasks using the code-writing abilities of large language models. Briefly, AutoCoNN employs a three-stage process of Observation, Induction, and Comprehension to automate the construction of CoNNs for various tasks and domains. We provide 24 different RASP examples that are used to generate CoNNs; with AutoCoNN, we only need to provide an instruction and two samples to generate a new CoNN. The relevant implementation is included in the code of the Supplementary Material. We will revise our manuscript to clarify this point.
```python
from NeuralCom.AutoCoNN import AutoCoNN

# An instruction describing the rule, the task vocabulary, and two
# input/output example pairs are all that AutoCoNN needs.
INSTRUCT = 'Create an SOp that is the last letter of a word'
VOCAB = ['a','b','c','d','e','f','g']
EXAMPLE = [[['a','b','c'],['c','c','c']],[['b','d'],['d','d']]]

auto = AutoCoNN()
model, tokenizer = auto(instruct=INSTRUCT, vocab=VOCAB, example=EXAMPLE)
```
---
**Review:** from my understanding you have done some hard coding of the beta policy such that i.e. it only triggers ICL gradient when it sees "=" token. Does this mean that this approach will not work at all if you use sequence "4 + 4 equals 8"? If so, do you have any ideas in mind how to make it work? In general I like this idea of plug-in CoNNs, but they need to be more seamless and fluent without requiring such hard coding in the task level.
**Response:** Thank you for your observation and important question. Indeed, in the current implementation, we hard-code to some extent when the CoNNs are triggered, in particular using the "=" symbol as the trigger during the CoNN construction phase. However, this is only one instance of the more general problem of knowing when to apply a rule, which is an active research area in itself. The "=" symbol was used as a clear, consistent, and identifiable marker to illustrate the capabilities of our approach.
As for more complex cases where the operation is phrased differently, as in "4 + 4 equals 8", the current model would indeed not trigger the CoNN. However, this does not mean our method cannot handle such situations: we simply need to assign the word "equals" a meaning similar to "=" when constructing the CoNN, which is very easy with AutoCoNN. Alternatively, with additional training of the gating mechanism, our approach can be extended to recognize different operation wordings; for instance, it could be trained to recognize phrases that imply arithmetic operations and trigger the CoNN accordingly.
The central aim of our work is to demonstrate the benefits of, and propose a method for, incorporating explicit rule knowledge into language models. The specific mechanism by which rules are triggered is largely an implementation detail. We agree that an ideal system would integrate CoNNs seamlessly without any hard coding, and we see the work presented in this paper as a first step toward that goal.
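As a hedged sketch of the extension discussed above (the trigger set and function are hypothetical stand-ins, not the paper's gating code), the trigger check could simply match a set of phrases rather than only the "=" token:

```python
# Hypothetical trigger set; the paper's implementation uses only "=".
TRIGGERS = {"=", "equals", "is equal to"}

def should_fire(tokens):
    """Return True if the generated token stream contains any trigger phrase."""
    text = " ".join(tokens)
    return any(trigger in text for trigger in TRIGGERS)

print(should_fire(["4", "+", "4", "="]))       # True
print(should_fire(["4", "+", "4", "equals"]))  # True
print(should_fire(["four", "plus", "four"]))   # False
```

A learned gate would replace this lookup with a classifier over the context, but the lookup already shows why wording variants need not defeat the approach.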
---
Rebuttal Comment 1.1:
Title: Thanks for your comments
Comment: Thanks to the reviewers and authors for their feedback and discussion here. Indeed, I did not read in every detail the appendix that describes AutoCoNN, which made it harder to get the relevant context.
In my opinion, the authors provided sufficient feedback and committed to edits in the final manuscript. This type of work is quite different from mainstream LLM scaling/finetuning work, and I think the community will benefit from this kind of research. I increased my score. | Summary: While Large Language Models show promise for a wide swath of tasks, they are lacking when applied to symbolic reasoning tasks. To overcome this limitation, the authors propose to employ Compiled Neural Networks (CoNNs). They create networks specialised to arithmetic and symbolic tasks and propose a mechanism by which an LLM can propagate the gradient through CoNNs to better learn to solve symbolic reasoning tasks. They demonstrate improvement in pure symbolic manipulation (parity and reverse), arithmetic (addition and subtraction), and more complex symbolic reasoning (coin flip and last letter concatenation). They demonstrate better generalisation to out-of-distribution examples (proxied by digit length or sequence length for the first four tasks), a considerable improvement on last letter concatenation (LLC), and parity with a larger LLM when augmenting a smaller one (T5 small + NC vs. vanilla T5 large). The improvements are comparable to external tool-based approaches but hold the promise of better integration (as gradients can propagate without surrogates), and the mixture-of-experts-style combination can be learnt rather than rule-based.
Strengths: - By construction, CoNNs are interpretable from their basic building blocks, ensuring that paths that do go through them can be interpreted according to the rules they encode.
- The reduced number of parameters holds promise for reducing the cost of language models (relative to GPT-3)
- Even with simple gating, there is a non-trivial improvement on tasks when multiple CoNNs are employed (Section 5.4)
Weaknesses: - There is an implicit assumption that practitioners know what rules to expect and can generate appropriate networks that then get used MoE-style.
- Expanding on the previous point, practitioners must be able to translate their rules from, for example, regular expressions into RASP to enable compilation to a NN.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - While orthogonal to the paper, perhaps a remark on the difficulty of creating CoNNs appropriate for a task should be briefly discussed. For example, Section C.1 does discuss employing an LLM and ICL to obtain networks that approximate specific rules, how difficult is it to source examples for ICL? How many examples are needed before the output is of high enough quality? Can the correctness of the proposed CoNN be assessed by a non-expert? (as this last would be the use case, it is safe to assume an expert could write the network directly)
### Discussion Phase
The authors have clearly addressed the concerns raised during the rebuttal with clear and, as necessary, additional data. As long as such data gets exposed into the paper, via appendix, for example, I feel the paper has further improved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have addressed most limitations that arose as questions while reading the paper, save the one I listed under Questions regarding the cost of producing CoNNs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation and constructive feedback.
In fact, as shown in the supplementary code, the AutoCoNN process of constructing a CoNN requires an Instruct (describing the specific rule) and an Example.
```python
from NeuralCom.AutoCoNN import AutoCoNN

# An instruction describing the rule, the task vocabulary, and two
# input/output example pairs are all that AutoCoNN needs.
INSTRUCT = 'Create an SOp that is the last letter of a word'
VOCAB = ['a','b','c','d','e','f','g']
EXAMPLE = [[['a','b','c'],['c','c','c']],[['b','d'],['d','d']]]

auto = AutoCoNN()
model, tokenizer = auto(instruct=INSTRUCT, vocab=VOCAB, example=EXAMPLE)
```
---
For Table 2 in the appendix, "Success by AutoCoNN" means using both the Instruct and Example information. To resolve your confusion, we re-evaluated generating CoNNs using only the Instruct and only the Example information separately in Table a. (Similarly, in the few-shot samples, only the corresponding information is used.)
Table a:
| CoNN Model | Expert's Working Time | Success by AutoCoNN | AutoCoNN (Instruct) | AutoCoNN (Example) |
|-|-|-|-|-|
| Parity Model | 1 hour | 8/20 | 7/20 | 3/20 |
| Reverse Model | 0.5 hour | 15/20 | 16/20 | 11/20 |
| Last Letter Model | 0.5 hour | 13/20 | 12/20 | 10/20 |
| Copy Model | 0.2 hour | 17/20 | 17/20 | 15/23 |
| Addition Model | 48 hours | 0/20 | 0/20 | 0/20 |
| Subtraction Model | 48 hours | 0/20 | 0/20 | 0/20 |
We can see that without predefined explicit rules, only "discovering" the rules from examples, AutoCoNN's success rate decreases slightly, but the example-only scenario still retains reasonable reliability.
---
**Review:** While orthogonal to the paper, perhaps a remark on the difficulty of creating CoNNs appropriate for a task should be briefly discussed.
**Response:** Thank you for pointing this out. We acknowledge that the description of the implementation details of AutoCoNN may not have been thorough enough. We will address your concerns in the following response and commit to revising the relevant content accordingly.
---
**Review:** Section C.1 does discuss employing an LLM and ICL to obtain networks that approximate specific rules, how difficult is it to source examples for ICL?
**Response:** The Supplementary Material `code\NeuralCom\AutoCoNN\prompts.py` contains 24 examples related to ICL, each consisting of Instruct, Example and Code. And here is an example:
```python
def make_length() -> rasp.SOp:
  """Creates the `length` SOp using the selector width primitive.

  Example usage:
    length = make_length()
    length("abcdefg")
    >> [7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0]

  Returns:
    length: SOp mapping an input to a sequence, where every element
      is the length of that sequence.
  """
  all_true_selector = rasp.Select(
      rasp.indices, rasp.tokens, rasp.Comparison.TRUE).named(
          "all_true_selector")  # Comparison.TRUE selects every (index, token) pair.
  return rasp.SelectorWidth(all_true_selector).named(
      "length")  # Counts the selected entries in each selector row, i.e. the sequence length.
```
These 24 examples are from Tracr's code [1]. We added the Instruct and Example in the comments for each example.
---
**Review:** How many examples are needed before the output is of high enough quality?
**Response:** Since RASP is a novel language, more examples are needed than for typical ICL. In general, we recommend providing 16 or more examples; on top of GPT-3.5, this is enough to successfully generate CoNNs including the Parity, Reverse, Last Letter, and Copy Models.
---
**Review:** Can the correctness of the proposed CoNN be assessed by a non-expert?
**Response:** Yes, the correctness of a proposed CoNN can be assessed with just two examples. In practice, we first use GPT-3.5 to generate 20 different RASP programs and convert them into CoNN models. We then test the 20 CoNN models on two different examples in turn; if a model answers both examples correctly, we conclude that a correct CoNN has been generated.
This works because each layer of a CoNN is an explicit transformation of the sequence, so a CoNN either transforms all samples correctly or it is simply wrong.
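The generate-and-filter procedure described above can be sketched as follows (the candidate functions are hypothetical stand-ins; in practice the candidates are 20 CoNNs compiled from GPT-3.5-generated RASP code):

```python
def validate(candidate, examples):
    # Keep a candidate model only if it maps every example input to the
    # expected output; given the all-or-nothing behavior of CoNNs, passing
    # both examples is strong evidence of correctness.
    return all(candidate(inp) == out for inp, out in examples)

# Hypothetical candidates for the Reverse task:
good = lambda s: s[::-1]  # encodes the correct rule
bad = lambda s: s         # identity: a wrong candidate

examples = [("abc", "cba"), ("bd", "db")]
survivors = [c for c in (good, bad) if validate(c, examples)]
print(len(survivors))  # 1: only the correct candidate passes both examples
```

The filter is cheap because each candidate is deterministic; no held-out statistical evaluation is needed.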
---
[1]Lindner D, Kramár J, Rahtz M, et al. Tracr: Compiled transformers as a laboratory for interpretability[J]. arXiv preprint arXiv:2301.05062, 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments and for responding to my concerns!
Table a is also interesting in that it shows that instruct and example are not purely additive/orthogonal as signals (although a Venn diagram of the solutions would be needed to better judge the situation, I feel the table provides better clarity).
> We then test the 20 different CoNN models on two different examples sequentially. If they can get both examples correct, we can determine that the correct CoNN has been generated.
Is this restriction to two examples due to computational costs? It seems underpowered as a way to validate that a language check network has been correctly constructed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up questions, which further clarify the points that need to be addressed about our work.
---
**Regarding the non-additive nature of instructions and examples:**
You're right; the instruct and example signals are not purely additive or orthogonal, but rather complement each other. The intersection of the two can provide more precise guidance for generating CoNNs. We agree that a Venn diagram could provide additional insight, and we will consider including such a visualization in the next version of the paper to better illustrate this relationship.
**On the restriction to two examples for validation:**
The most effective way to verify the correctness of CoNN is for experts to manually debug every line of RASP code to ensure correctness.
For non-expert users who do not know the meaning of the RASP code, to facilitate demonstration and use of AutoCoNN, we suggest providing two examples to verify the correctness of CoNN. In Table c, we have tabulated the accuracy of 10 CoNNs as assessed manually by experts (us) after validation with different numbers of examples. It is easy to see that just 2 examples are sufficient to evaluate the correctness of CoNN.
**Table c**
| CoNN Model | Example=1 | Example=2 | Example=5 |
| ----------------- | --------- | --------- | --------- |
| Parity Model | 5/10 | 10/10 | 10/10 |
| Reverse Model | 10/10 | 10/10 | 10/10 |
| Last Letter Model | 9/10 | 10/10 | 10/10 |
| Copy Model | 10/10 | 10/10 | 10/10 |
The reason is that converting RASP to CoNN is like running Python code: if the CoNN can be generated at all, it is already halfway to success (most of the failed CoNNs in Table a above fail because the code errors out during conversion). After that, simple filtering yields a correct CoNN.
---
Thank you again for your insightful questions and suggestions. They are instrumental in improving the clarity and thoroughness of our work. | Summary: The paper shows a strategy to improve ICL by including CoNNs in the learning pipeline, which enable the LM to learn symbolic operations in addition to standard autoregressive LM generation. The resulting model is trained with a hand-designed gradient accumulation technique, and results are compared on symbolic tasks.
Strengths: The paper shows how symbolic tasks can be included in general autoregressive training and therefore provides a way to train models with a few fixed symbolic tasks in mind. The paper provides a solid training recipe with proper mathematical justification. The results correlate with the claims and justify using the method.
Weaknesses: It is unclear from the paper how different gating mechanisms are being derived in this network and how they are being included in the training framework. The authors say that \beta is not learned in the algorithm and essentially rule calculations are assigned to CoNNs. If that is true, the applicability seems a bit ad hoc, as different rules will then need to be hand-written rather than derived, and the benefits of the network will only apply to scenarios that are symbolically encoded. In other words, this seems to be a scalability challenge in terms of letting LMs learn rules. The experimental evaluation to justify the benefits seems very limited.
The paper also suffers from poor presentability with multiple grammatical errors, spelling mistakes etc. Also more context on training the overall CoNN+DNN system is missing from the paper or appendix.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. How does the method scale to new symbolic tasks?
2. How do we train a joint model and let the model learn the rules from data?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and constructive feedback very much. We hope this response helps address your concerns about the design and applicability of the method. We will update our documentation accordingly to make it more clear, and will consider your other suggestions.
---
**Review:** It is unclear from the paper how different gating mechanisms are being derived in this network and how they are being included in the training framework.
**Response:** We apologize for not elaborating on the gating mechanism in detail in our paper. The Neural Comprehension framework is a plug-and-play approach that does not require additional gradient training. For most CoNNs, the gating mechanism is simplified to a predefined, rule-based process: when a rule calculation is required, it triggers the gating variable to activate the CoNN.
Since the focus of our research is on how to enable language models consisting solely of neural networks to have symbolic reasoning capabilities, the \beta determined by CoNN avoids the extra errors brought by "learning from data". Determining the value of \beta during the construction of a CoNN enables easy plug-and-play, such that the built Addition and Subtraction models can be readily used in models like GPT, T5, GLM-130B, and Llama without training, enabling them to attain complete accuracy in performing addition and subtraction. Pre-determining \beta values provides greater adaptability compared to a trainable \beta. We agree that a learnable gating mechanism could be more flexible; thus, we suggest implementing a learnable gating mechanism in future work involving larger-scale CoNNs.
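As an illustration only (the function names, the rule trigger, and the convex-mixing form are our simplified assumptions, not the paper's exact implementation), a predefined gate over LM and CoNN output distributions might look like:

```python
def gated_output(lm_probs, conn_probs, in_rule_context, beta=1.0):
    """Predefined (non-learned) gate: when the decoding context requires a
    rule computation, weight the CoNN distribution by beta (here 1.0, i.e.
    the CoNN fully overrides the LM); otherwise pass the LM through."""
    if not in_rule_context:
        return lm_probs
    return [beta * c + (1.0 - beta) * l for l, c in zip(lm_probs, conn_probs)]

# Toy distribution over a 3-token vocabulary:
lm_probs = [0.6, 0.3, 0.1]    # the LM guesses a wrong token
conn_probs = [0.0, 0.0, 1.0]  # the compiled CoNN is certain

print(gated_output(lm_probs, conn_probs, in_rule_context=True))   # [0.0, 0.0, 1.0]
print(gated_output(lm_probs, conn_probs, in_rule_context=False))  # [0.6, 0.3, 0.1]
```

Because \beta is fixed at construction time, this gate adds no trainable parameters, which is what makes the module plug-and-play.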
---
**Review:** this seems to be a scalability challenge in terms of letting LMs learn rules. The experimental evaluation to justify the benefits seem very limited.
**Response:** Although we currently do need to manually write different rules for each CoNN, we propose the AutoCoNN toolkit in Appendix C. It utilizes the strong inductive abilities of LLMs like GPT-3.5 to automatically generate, in a pipelined fashion, a large number of diverse CoNNs that can be used in LMs. For example, for the four symbolic tasks in Figure 3, the CoNNs induced from the rule tasks achieve full accuracy.
From another perspective, thanks to the transferability of Neural Comprehension, these CoNNs from GPT-3.5/4 can enhance the performance of smaller language models such as GLM-130B in Figure 5. Although lacking the capability of calling external APIs (PAL methods), GLM-130B can also be enhanced in arithmetic capabilities by CoNNs.
---
**Review:** The paper also suffers from poor presentability with multiple grammatical errors, spelling mistakes etc.
**Response:** We apologize for the presentation errors; we'll ensure a thorough proofreading in the revised version to improve the paper's readability.
---
**Review:** Also more context on training the overall CoNN+DNN system is missing from the paper or appendix.
**Response:** Thank you for pointing this out. We admit that the description of the Language Models with CoNN may not be comprehensive enough. In the revision, we will add more details about the CoNN and LM systems, and clarify some parts that may be misleading, including:
1. Changing the layout of Figure 2 and adding caption explanations;
2. Moving some of the AutoCoNN content into the main text;
3. Adding a plug-and-play explanation for "CoNNs" in line 152.
---
**Question:** How does the method scale to new symbolic tasks?
**Response:** In the code section of the supplementary material, we provide an example of AutoCoNN generating a new CoNN: only "Instruct", "Vocab", and "Example" need to be provided, and the in-context learning capability of the LLM can be utilized to generate a new CoNN model. (We provide 24 examples of building CoNNs as few-shot prompts for AutoCoNN in advance.)
```python
from NeuralCom.AutoCoNN import AutoCoNN
INSTRUCT = 'Create an SOp that is the last letter of a word'
VOCAB = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
EXAMPLE = [[['a', 'b', 'c'], ['c', 'c', 'c']], [['b', 'd'], ['d', 'd']]]
auto = AutoCoNN()
model, tokenizer = auto(instruct=INSTRUCT, vocab=VOCAB, example=EXAMPLE)
```
---
**Review:** How do we train a joint model and let the model learn the rules from data?
**Response:** At present, our work does not implement a mechanism for the model to directly learn rules from data. But just as we designed AutoCoNN, we suggest using LLMs to observe the data during the construction phase of CoNN and summarize its rules into a RASP language that can build CoNN.
---
Rebuttal Comment 1.1:
Title: Thanks for your comments
Comment: Thanks a lot for your comment; I really appreciate the detailed descriptions. I do see the benefits and potential of this approach in adding symbolic knowledge to LLMs. This is a really interesting direction, and after reading the authors' response in detail I am personally convinced this should be the direction for using symbolic constraints with LLMs. A plug-and-play model does have its benefits.
I urge the authors to show the benefit of the approach with these symbolic modules on some public benchmark to showcase that the overall benefit does hold when any general purpose dataset is used, using multiple LMs if possible.
I appreciate the hard work by the authors.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition.
In the **original paper**, we selected public benchmarks from multiple perspectives.
---
For **Symbolic Operation tasks (Section 5.1)**, we referred to [1] and [2], used similar experimental settings (i.e., set In-Dist and Out-of-Dist ranges), and likewise chose four types of symbolic operation tasks, including *Parity*, *Reverse*, *Addition*, and *Subtraction*, comparing models of different scales, including *T5-small*, *T5-base*, *T5-Large*, *GPT-3*, *GPT-3.5*, and *GLM-130B*. Neural Comprehension achieved 100% accuracy.
---
For **Symbolic Reasoning tasks (Section 5.2)**, we selected tasks identical to [3] (*Coin Flip* and *Last Letter Concatenation*), demonstrating the additional improvements brought by CoNN compared to the base models (*T5-small*, *T5-base*, *T5-large*).
---
For **Arithmetic Reasoning tasks**,
In **Section 5.3**, we selected the common AddSub dataset and manually expanded it to an arithmetic reasoning benchmark involving addition and subtraction reasoning of different digit numbers from 1 to 20 digits. On this benchmark, we chose three types of LMs: *GPT-3*, *GPT-3.5*, and *GLM-130B*. We also set up two baselines, the CoT baseline [3] and the PAL baseline that relies on an external API. The experimental results proved the benefits of our method over the baselines.
In **Section Appendix D.2**, we conducted experiments with *GPT-3* and *GPT-3.5* on **five different real-world Arithmetic Reasoning public benchmarks** (these tasks are widely used to evaluate the reasoning capabilities of LMs [3][4]) including *GSM8K*, *SingleEq*, *AddSub*, *MultiArith*, and *SVAMP*. The method we proposed also demonstrated advantages in these benchmarks.
---
**We believe that these public benchmark experiments from multiple angles are sufficient to demonstrate the overall benefits of Neural Comprehension. We hope this information helps address your concerns and that you will reconsider your rating.**
[1] Anil C, Wu Y, Andreassen A, et al. Exploring length generalization in large language models[J]. Advances in Neural Information Processing Systems, 2022, 35: 38546-38556.
[2] Qian J, Wang H, Li Z, et al. Limitations of language models in arithmetic and symbolic induction[J]. arXiv preprint arXiv:2208.05051, 2022.
[3] Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models[J]. Advances in Neural Information Processing Systems, 2022, 35: 24824-24837.
[4] Kojima T, Gu S S, Reid M, et al. Large language models are zero-shot reasoners[J]. Advances in neural information processing systems, 2022, 35: 22199-22213.
Title: Responding to the reviewer's concerns regarding the Public Benchmarks | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for their constructive feedback and recognition of our paper's contribution.
## Strengths
In the review, our paper was praised for its "**novel approach**", "**correct motivation**", "**high-quality experiments**", and "**superior performance**":
- We appreciate Reviewer aik1’s appraisal of `solid training recipe`, `proper mathematical justification` and `results correlate with the claims`.
- Furthermore, Reviewer xyMG acknowledged our approach in `demonstrate better generalisation to out-of-distribution examples`, `comparable to external tool-based approaches`, and `hold a promise of better integration`.
- They also recognized the high quality of our experimental investigation, a sentiment echoed by Reviewer QFXM, who commended us for `The wide range of symbolic experimental tasks show that authors performed high quality experimental investigation`.
- Finally, Reviewer gBrw notes that our paper exhibits `arithmetic tasks (5.1) where the model achieves consistently 100% accuracy`, that `The paper correctly identifies the limitations of LLMs`, and that `the paper demonstrates strong results over competitive baselines on arithmetic/symbolic reasoning approaches`.
## Collective Concerns
The main criticisms centered on three issues: the hard-coded structure of CoNNs, how the method scales to new symbolic tasks, and whether it lacks advantages compared to API methods.
**Hard-coded structure** We predefine \beta in the CoNN construction process, enabling the Neural Comprehension framework to be used as a plug-and-play solution that enhances the symbolic capabilities of language models without requiring additional training. If \beta were instead a learnable parameter, pre-training of the language model would be necessary, which adds to the cost of adoption. As we mentioned in our paper, we propose applying a learnable gating mechanism to larger-scale CoNNs in future work.
**How does the method scale to new symbolic tasks** To facilitate the expansion of CoNN usage, we propose the AutoCoNN toolkit (see in Appendix C) in this paper for rapid CoNN construction. It only requires one instruction and two examples to generate a CoNN, making the extension of new CoNNs much easier.
**It lacks advantages compared to API methods** Figure 5 in the paper illustrates the arithmetic reasoning experiment. For LLMs like GPT-3.5, Neural Comprehension is no weaker than API methods. For weaker LMs like GLM-130B, however, which cannot invoke APIs, Neural Comprehension significantly outperforms API-based methods.
## Presentation
We have observed some constructive suggestions provided by the reviewers regarding the presentation. We commit to making rigorous revisions to the paper accordingly, as outlined below:
- Adding a motivational explanation regarding Figure 1 in the Introduction;
- Changing the layout of Figure 2 (it is in the PDF file);
- Moving the AutoCoNN content into the main text;
- Including an example of a CoNN in the Preliminaries section, such as demonstrating how to implement a Parity CoNN using the Select, Aggregate, and Zipmap functions;
- Adding a plug-and-play explanation for "CoNNs" in line 152.
- line 47, `relying` -> `relies`
- line 50, `comprises` -> `comprise`
- line 136, `absolut` -> `absolute`
- line 336, `method` -> `methods`
- line 343 `facilitating the seamless` -> `facilitating seamless integration`
---
We believe that the efforts shown in this paper are a significant first stride towards perfecting Neural Comprehension. Your feedback has provided pointers to future work, and we look forward to addressing these challenges in subsequent refinements of the system.
Thank you for your time and feedback, and for considering our research. We hope this rebuttal addresses the major concerns and makes a compelling case for the acceptance of our paper.
Pdf: /pdf/a4214211f301618ea78c818541b9966a9b037259.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
NetHack is Hard to Hack | Accept (poster) | Summary: This paper seeks to study and better understand the large performance gap between neural and symbolic agents in the NeurIPS 2021 NetHack Challenge. The main hypothesis is that symbolic agents advantage derives from hierarchical reasoning, which was not an element in participating neural agents. To test this hypothesis, a new dataset is generated using the winning symbolic agent that records both actions and higher-level strategic labels and a neural behavior cloning agent was trained with this augmented data. Beyond this, the paper also explores the impact of increased model size and dataset size, changes to the neural architecture, and the addition of a policy fine-tuning step using a reinforcement learning algorithm. The results suggest that hierarchical training improves the neural agent's performance significantly more than increased model capacity but that more powerful model architectures (i.e. a transformer-based model) could overfit to the augmented data.
Strengths: Quality/Soundness
The hypotheses and claims of the paper were laid out clearly and the experiments were well-designed to evaluate them.
Clarity/Presentation
I found the paper to be clearly presented and well-written. At each stage I had a clear understanding of the question under investigation and the methodology for studying it.
Originality/Contribution
The paper is explicit that its main contribution is not algorithmic but scientific. Since the scientific questions posed here are grounded in the performance of agents from a specific competition that happened last year, I expect that this analysis is original.
Significance/Contribution
Increasing understanding of the performance gap in the NetHack problem, as a proxy for complex, long-horizon problems in general, could be important. Symbolic approaches make use of quite a lot of domain expertise applied to constructing the symbolic structure, making them effective in their target problem but inflexible. Neural networks seem to be quite flexible and capable of learning without a lot of structure engineering, but don't seem to be able to take advantage of structure in the environment. It seems sensible to study the primary factors preventing neural approaches from building the same long-range structures.
I think it is worthwhile to see the comparison between this structural change and the alternative interventions of more data, more parameters, or more sophisticated/expensive architecture. The finding that, in a time-constrained setting (true of many decision-making problems), model structure may be more important than model capacity seems likely to at least spark interesting conversations within the community. The fact that there is plenty of performance gap still to cover, even with this built-in domain knowledge may also inspire further investigation into what measures could come closer to closing the gap and how those insights might be applied to more general practice.
Overall
Overall, I find this to be a clearly written paper with a reasonable scientific question and sound methodology to address it (modulo some missing statistical analyses). The findings are not revolutionary but, to my eyes, they do provoke further questions about neural approaches to learning in complex, long-horizon problems and may inspire follow-up work either in the NetHack testbed specifically or in studying these questions more generally.
Weaknesses: Quality/Soundness
My main concern in this area is the small number of independent samples per model class (6). I do understand that these results are generated at great expense and that it may not be feasible to generate more trials, but the small number of samples diminishes the statistical power of these analyses. I can see that the error bars are quite small; assuming those are showing the standard error, that's encouraging. Nevertheless, whether more samples can be generated or not, I think it's important that the paper include the findings of low-sample hypothesis testing (e.g. t-test) on these results; without that, we can't confidently distinguish between noise and meaningful differences.
Originality/Contribution
The paper acknowledges that behavior cloning, hierarchical policies, and transformer-represented policies have all been studied in prior work.
Significance/Contribution
NetHack is not, in and of itself, an intrinsically important problem to solve.
The results presented here are not conclusive or enormously surprising. The main result is fairly predictable: adding explicit supervision about high-level strategy and explicit hierarchical structure in the model helps the model take advantage of hierarchical structure in the environment.
Overall
Overall, I find this to be a clearly written paper with a reasonable scientific question and sound methodology to address it (modulo some missing statistical analyses). The findings are not revolutionary but, to my eyes, they do provoke further questions about neural approaches to learning in complex, long-horizon problems and may inspire follow-up work either in the NetHack testbed specifically or in studying these questions more generally.
---After discussion---
I have considered the other reviews and the authors' responses. I continue to feel confident about my overall assessment.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: n/a
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Since the paper does not propose significantly new algorithmic ideas, the main source of limitations would be in the methodology and analysis. I generally found the paper to avoid overclaiming. I've already discussed one area where this aspect of the paper could be improved: acknowledgement of the small sample sizes and proper statistical analysis to inform the conclusions. The other area might be in the conclusions where perhaps the summary of the findings might be a bit too general and could be toned down and/or stated clearly as hypotheses (e.g. "Hierarchy hurts overfitting models" is an overly broad conclusion from a limited set of experiments but seems like a reasonable hypothesis given these results).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for the time and effort that you have taken to review our submission. We are delighted that you found our paper to be well-written and the insights generated by our experiments to be valuable as a basis for future work on neural approaches to learning in complex, long-horizon environments.
We hope to address your remaining concerns and questions below.
> My main concern in this area is the small number of independent samples per model class (6).
Thank you for pointing out our omission of a clarification on the meaning of the error bars plotted in the results figures of our draft submission; indeed, the error bars in Figures 3, 4, and 5 all indicate one standard error across policy random seeds. The absolute numerical values of evaluation metrics' standard errors are also reported in Table 2, which aggregates all of the experimental results of the paper. We will remedy this omission in the revised version of our submission.
As you note, due to the great computational expense associated with trials, we regret that we are unable to increase the number of independent trials run per model and experiment class in this paper.
We agree that a thorough verification of the statistical significance of our results is important. We will include the low-sample hypothesis tests that you suggest in the revised version of this paper.
In the interim, please note that the values of the key metric we employ to compare model performance on held-out seeds of NLE (mean score across a fixed set of 1024 unique, procedurally-generated NetHack games) are separated by two standard errors for all model classes in both our BC and APPO + BC sets of experiments.
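As a sketch of the kind of low-sample check we plan to include (a pure-Python Welch's t statistic on six seeds per group; the per-seed scores below are illustrative placeholders, not our actual results):

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic for small, possibly unequal-variance
    groups (e.g. 6 evaluation seeds per model class)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Illustrative per-seed mean NLE scores (placeholder numbers):
hierarchical = [820, 845, 870, 810, 855, 840]
baseline = [590, 615, 600, 580, 625, 610]

t = welch_t(hierarchical, baseline)
print(round(t, 2))
```

With groups separated by two standard errors, as in our results, this statistic comfortably exceeds conventional significance thresholds even at n = 6 per group.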
> NetHack is not, in and of itself, an intrinsically important problem to solve.
We concur that NetHack may not be an intrinsically important problem to solve. Nevertheless, we respectfully contend that this does not diminish the importance of work employing this environment as a general testbed for studying flexible policy learning in long horizon, open-ended environments.
Please also take a look at **section (i)** of our general rebuttal.
> The main result is fairly predictable: adding explicit supervision about high-level strategy and explicit hierarchical structure in the model helps the model take advantage of hierarchical structure in the environment.
To our knowledge, our paper is the first to demonstrate that hierarchical labels can offer a stable and powerful mechanism for embedding behavioral priors into policies during pre-training.
As observed by reviewer qyRC, currently, there are few complex environments where expert demonstrations are accompanied by hierarchical labels, though such labels are in-principle no more difficult to collect than the demonstrations themselves from annotators.
Though we agree that our findings concerning the power of hierarchy may not be surprising from a theoretical perspective, we are hopeful that our paper may spur the creation of more datasets with this property as well as further investigations of how hierarchical labels may be best exploited for offline and online learning of flexible neural policies, both in NetHack and beyond.
> The other area might be in the conclusions where perhaps the summary of the findings might be a bit too general and could be toned down and/or stated clearly as hypotheses.
Thank you for these suggestions, we will tone down the language employed in this section of the paper accordingly.
Once again, we are sincerely grateful for your feedback on our submission. Please let us know if you have any outstanding comments, questions, or concerns preventing a stronger recommendation for acceptance.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for your response! I think we are generally in agreement about both the value of using NetHack and the limitations of gathering findings in a single domain. Similarly, I think we generally agree about the potential illustrative value of the results and the dataset as well as the limited surprise factor in the findings. | Summary: # Problem Statement
The paper addresses the challenge of neural policy learning methods struggling in long-horizon tasks, particularly in open-ended environments with multi-modal observations, such as the game NetHack. It was observed that symbolic agents significantly outperformed neural approaches in the NeurIPS 2021 NetHack Challenge.
# Main Contribution
The paper's main contribution is an extensive study on neural policy learning for NetHack. The authors analyzed the winning symbolic agent and extended its codebase to generate one of the largest available demonstration datasets. They examined the advantages of an action hierarchy, enhancements in neural architecture, and the integration of reinforcement learning with imitation learning. Their investigations resulted in a state-of-the-art neural agent that surpassed previous fully neural policies by 127% in offline settings and 25% in online settings on median game score. However, they also demonstrated that mere scaling is insufficient to bridge the performance gap with the best symbolic models or even the top human players.
# Methodology and Experiments
## The Hierarchical HiHack Dataset
The authors create the HiHack dataset, which is a hierarchically-informed version of the NetHack Learning Dataset (NLD-AA), containing 3 billion recorded game transitions from over a hundred thousand games played by the AutoAscend agent.
## Hierarchical Behavioral Cloning
The authors extend the ChaoticDwarvenGPT5 (CDGPT5) model, a top-performing open-source neural model for NetHack, by introducing a hierarchical decoding module. The model consists of three separate encoders for different types of observations and an LSTM core module. The hierarchical extension replaces the linear decoder of the CDGPT5 model with a hierarchical decoder that predicts the strategy label and selects the appropriate low-level MLP for action prediction. The hierarchical LSTM policy and the baseline non-hierarchical LSTM CDGPT5 policy are trained using a simple cross-entropy loss. The results show that the introduction of hierarchy significantly improves the performance of LSTM policies trained with behavioral cloning, yielding a 40% gain over the baseline in mean NLE score and a 50% improvement in median score across seeds. The authors confirm that this improvement is due to hierarchy and not simply a result of the increased parameter count of the hierarchical LSTM policy.
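A schematic of the two-level decoding described above (the shapes, random weights, and hard argmax selection are illustrative stand-ins; the actual model uses an LSTM core and learned per-strategy MLP heads trained with cross-entropy):

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_strategies, num_actions = 16, 4, 8

# Strategy head: predicts which high-level strategy is active.
W_strategy = rng.standard_normal((num_strategies, d))
# One low-level action head per strategy (stand-in for the per-strategy MLPs).
W_action = rng.standard_normal((num_strategies, num_actions, d))

def hierarchical_decode(h):
    """Pick the most likely strategy, then decode the action with that
    strategy's dedicated head -- a sketch of the two-level decoder."""
    strategy_logits = W_strategy @ h
    k = int(np.argmax(strategy_logits))
    action_logits = W_action[k] @ h
    return k, int(np.argmax(action_logits))

h = rng.standard_normal(d)  # stand-in for the LSTM core's output features
strategy, action = hierarchical_decode(h)
print(strategy, action)
```

The key property this illustrates is that the strategy prediction routes the same core features to a dedicated action head, so the strategy labels in HiHack can supervise the upper level directly.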
## Architecture and Data Scaling
The authors explored scaling as a potential solution to improve the performance of the model, which was significantly behind the symbolic policy used to generate the HiHack demonstrations. They developed a novel base policy architecture for NetHack that introduces a Transformer module into the previous CDGPT5-based architecture. They also conducted data scaling experiments using subsets of the HiHack dataset to examine the relationship between dataset size and the test-time performance of BC policies. The results showed that both the non-hierarchical and hierarchical variants of the combined transformer-LSTM policy architecture yielded gains, but the larger model performed worse than the smaller one due to overfitting. This suggested that scaling of model capacity alone would not be sufficient to close the neural-symbolic gap. Additionally, brute force scaling of the dataset alone could not viably close the gap to symbolic methods.
## Combining Imitation with Reinforcement Learning
The authors explored combining imitation learning with reinforcement learning (RL) to bridge the performance gap with AutoAscend. They used a combination of behavioral cloning (BC) and asynchronous proximal policy optimization (APPO) for training. The results showed that RL fine-tuning significantly improved the performance of all models. The best-performing approach was APPO + BC using the hierarchical LSTM model, which achieved a new state-of-the-art for neural policies on NLE, surpassing the previous best result by 48% in mean NLE score and 25% in median NLE score. The Transformer-LSTM models performed worse due to their slower training speed and the fixed training time budget. The authors also observed that fine-tuning with RL improved the error-correction capability of models across all model classes compared to their purely offline counterparts.
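Schematically, such a joint objective can be sketched as an RL loss plus a weighted behavioral-cloning cross-entropy term; the functional form and the coefficient below are illustrative assumptions, not the paper's exact recipe:

```python
import math

def combined_loss(rl_loss, policy_probs, expert_action, bc_coef=0.5):
    """Joint objective sketch: an RL (e.g. APPO) loss term plus a
    behavioral-cloning cross-entropy term on the expert's action."""
    bc_loss = -math.log(policy_probs[expert_action])
    return rl_loss + bc_coef * bc_loss

probs = [0.1, 0.7, 0.2]  # toy policy distribution over 3 actions
loss = combined_loss(rl_loss=1.25, policy_probs=probs, expert_action=1)
print(round(loss, 3))  # 1.428
```

The BC term keeps the fine-tuned policy anchored to the AutoAscend demonstrations while the RL term optimizes game score directly.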
Strengths: # Originality
The problem is interesting and the approaches are insightful.
# Quality
The analysis and experiments are comprehensive.
# Clarity
The article is overall well written and clear.
Weaknesses: 1. The current focus of the study is quite narrow, being primarily centered on the application of imitation learning for NetHack, limiting its influence. In the context of mastering the game, while this approach is interesting, it is unlikely to exceed the performance of experts that generate demonstrations, not to mention that the experts are already algorithms that can scale well. Furthermore, NetHack, despite being an excellent game, is somewhat niche and its real-world implications are relatively minimal. The techniques proposed in this study are specifically tailored for this game, which limits their potential for inspiring more universally applicable methods that could have a broader impact.
- The availability of hierarchical labels is a strong assumption that does not often hold, which further limits the applicability of the proposed methods.
2. Even just for bridging the performance gap between neural models and AutoAscend, there is no promising direction revealed by the work as the various augmenting components seem to contradict each other.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. When introducing the Transformer to augment the capacity of the neural model, why did the authors choose the architecture shown in the article? Specifically, transformers are best known for their NLP and CV capabilities, which could make them good replacements for the CNN and MLP encoders.
2. Why do the authors enforce the 48-hour training time cap instead of training all models until convergence? Given that this study does not appear to prioritize data efficiency or training efficiency, the necessity of such a computational time constraint is unclear. It would be beneficial to understand the rationale behind this choice, as it may not directly align with the study's primary objectives.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors note that possible avenues for future exploration include: (a) methods for increasing the Transformer context length to give the agent a longer memory to aid exploration; (b) addressing the multi-modal nature of the demonstration data (i.e. quite different trajectories can lead to the same reward), which is a potential confounder for BC methods. Some forms of distributional BC (e.g. GAIL, BeT) could help alleviate this issue.
The aforementioned two points do not address the limitations raised in the "Weakness" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank you for your thoughtful and detailed feedback on our submission. We look to address your remaining concerns about our paper below.
> ...while this approach is interesting, it is unlikely to exceed the performance of experts that generate demonstrations, not to mention that the experts are already algorithms that can scale well.
Rather than aiming to achieve SOTA in NetHack, our goal in this paper is to employ NLE as a testbed for investigating the general performance gap between neural policies and symbolic agents in long-horizon, open-ended environments.
Please also note that while AutoAscend is the SOTA artificial agent in NLE, the median in-game score it achieves is still several orders of magnitude short of a typical expert player’s “ascension” score. The game remains unsolved by any artificial agent [1].
> The techniques proposed in this study are specifically tailored for this game, which limits their potential
Please take a look at **section (i)** of our general rebuttal.
> The availability of hierarchical labels is a strong assumption that does not often hold, which further limits the applicability of the proposed methods.
Hierarchical labels have been gathered for other settings, such as in the 3D DeepMind Playhouse environment, where human annotators were asked to provide natural language instructions or questions labeling manipulation "subtasks" [2]; thus, they are not impractical to gather. In general, we hope that the impact of hierarchical priors via structured labels revealed by our work might spur the future collection of more demonstration data with this property in open-ended environments.
> ...there is no promising direction revealed by the work as the various augmenting components seem to contradict each other.
Our experimental results suggest that the introduction of structured labels is strongly beneficial to the improved performance of lighter-weight, data-limited LSTM policies, both in the BC and BC + RL settings.
However, in the pure BC setting, we find that architectural improvements and scaling make these beneficial effects of structured labels obsolete.
Furthermore, we find a “no free lunch” effect to hold for large transformer-based policies in our BC + RL experiments: they are far more unwieldy to finetune with RL than their smaller LSTM counterparts, with the increased cost of gradient updates dealing a large blow to sample-hungry RL under the compute-time cap. As a result, while we do find interaction to improve the generalization performance of both hierarchical and non-hierarchical transformer-LSTM policies, the performance of these policies on withheld NLE instances is superseded by that of hierarchical LSTMs in this case.
Evaluation data supporting the above is aggregated in Table 2 of the paper.
There are several directions for future work that we find compelling:
(1) Deeper explorations of the impacts of structured labels, both in offline and online settings. Do performance improvements grow as the granularity of hierarchical priors used in pre-training increases? Can we further exploit hierarchical behavioral priors for structured exploration?
(2) Investigation of even better transformer-based policies. If we are able to eliminate the overfitting effects seen in pre-training, does these models’ performance benefit from structured labels too?
> When introducing Transformer to augment the capacity of the neural model, why did authors choose the architecture as shown in the article?
Please refer to **section (ii)** of our general rebuttal. As stated above, we will amend our submission to include a discussion of the LSTM ablation upon revision.
> Why do the authors enforce the 48 hour training time cap instead of training all models till convergence?
Constrained model comparisons are common in the computer vision and reinforcement learning literature [3, 4, 5].
We choose to employ a 48-hour training time cap in our study for two reasons.
(1) A compute-time cap **provides a basis for comparison of different neural policy architectures on equal footing**, revealing, for instance, the degree of the performance advantage that faster gradient updates grant LSTM models against bulkier transformer-LSTMs, both in the presence and absence of structured labels. As noted by reviewer 83YB, our finding that in the “...time-constrained setting (true of many decision-making problems), model structure may be more important than model capacity seems likely to at least spark interesting conversations within the community.” Additionally, we believe that a compute-time cap increases the broader impact of our work by making our insights more relevant to researchers constrained by cost, e.g. in academic research settings.
(2) From a practical perspective, employing a training time cap as our comparison basis also **enables us to expand the number of random seeds we employ to report results** across the numerous experiments within the paper, strengthening the statistical significance of our findings.
Thank you once more for the time you have taken to review our paper. Please let us know if you have any outstanding comments, questions, or concerns about our work.
**Citations**
[1] Hambro, Eric, et al. "Insights from the neurips 2021 nethack challenge." NeurIPS 2021 Competitions and Demonstrations Track. PMLR, 2022.
[2] Team, DeepMind Interactive Agents, et al. "Creating multimodal interactive agents with imitation and self-supervised learning." arXiv preprint arXiv:2112.03763 (2021).
[3] Zhai, Xiaohua, et al. "Scaling vision transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[4] Yarats, Denis, et al. "Mastering visual continuous control: Improved data-augmented reinforcement learning." arXiv preprint arXiv:2107.09645 (2021).
[5] Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." arXiv preprint arXiv:2212.09748 (2022).
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thank you for the comprehensive response.
Although I admit that comparisons with the training time cap do provide unique insights and practically make the experiments more computationally manageable, I want to point out that training efficiency can vary by a large margin depending on implementation details, which are non-intrinsic to the algorithmic methods and can obscure analysis. Training all models until convergence allows the influence of the architecture and data factors, which are the article's main focus, to manifest more clearly, and in my opinion this surpasses the benefits listed by the authors of using the training time cap.
Other than this point, my questions are properly addressed thanks to the authors' rebuttal and I will update my rating from 6 to 7. | Summary: The paper improves the existing solutions in the NetHack Learning Environment (NLE). This is done by taking earlier solutions from a competition around NLE, collecting more data with the best available (symbolic) agent, and using that data to improve a neural only solution. The paper provides experiments with imitation learning (with or without RL tuning), larger models, hierarchical memory setup (LSTM + Transformers) as hierarchical behavioural cloning setup, using labels of the newly collected dataset. While there are improvements, it is still below the demonstrator results, which is then studied by scaling the model sizes and amount data collected. Paper concludes by providing the state of the art results in the task, but also noting that scaling alone is not enough to reach the expert demonstrator level (symbolic agent).
Strengths: - Provides more detailed dataset than the previous works (with hierarchical action labels)
- Sets an interesting premise/task for trying to reach the demonstrator's (AutoAscend agent) performance with neural solutions.
- Different ablations to try to answer questions (data/model scaling, model architecture with hierarchy)
- Proposed hierarchical approach to imitate the demonstrator agent.
Weaknesses: While I enjoyed reading the paper, overall I do not think the results are interesting or applicable to most of the NeurIPS audience, even in the limited scope. The paper presents many results and provides some explanations for them, but does not verify these explanations with further experiments. I think proper answers to these issues would be insightful to many, and others could then use these insights in their work (e.g., where did the trained agent fail to imitate the demonstrator? What was the cause of the poorer performance? Why did the bigger model perform worse?). Creating such insight in one environment would be sufficient, as by focusing on a single environment, you can create very specific scenarios to tease out these answers.
- Limited scope of the work: experiments are done in a single environment. Most of the paper is framed in a way that this is not a huge issue (e.g., ablations), but proposing a new method just for playing NLE has limited impact. If a new method is proposed to generally improve RL/IL performance, it should be tested in at least two distinct environments.
- Limited improvement in the context of SOTA solutions: 2x over the baseline used in the paper with RL and the proposed architecture included, but other neural agents in the NetHack Challenge had higher scores. To be interesting in terms of performance, it should at least outperform the NetHack Challenge neural solutions.
- Proposed method is limited in novelty, as evident by the previous work listed in the paper. If the hierarchical BC figured out the hierarchy automatically (or, if it was an emergent behaviour of the model), that would be more interesting.
- Paper outlines some assumptions on why things failed (e.g., "model overfitted" or "learned to self-correct"), but these claims were not verified with results. The paper would be much stronger if you can give solid, verified answer that indeed, overfitting was to blame or that RL trained the model to "self-correct".
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Questions:
1) On multiple occasions the paper says that the lower performance of the bigger model is due to overfitting (e.g., line 229). However, there are no results/experiments to show that this was indeed the case. A simple way to find out is to do a train/validation (or even train/validation/test) split, and test on held-out data as training progresses.
2) Regarding the data scaling experiments: did you change any other settings of the training setup when increasing the amount of data? Previous work has demonstrated that the optimal model size and/or training compute depends on the amount of data (Hoffmann et al. 2022).
3) Regarding the model scaling experiments: I assume only the number of layers in the transformer was changed? The bottleneck of the network may be elsewhere, e.g., one of the input or output layers. I would recommend scaling the whole network, similar to what the OpenAI VPT work did, where ResNet blocks were "widened" in terms of filters alongside increasing the transformer size (Baker et al. 2022). Also, Hoffmann et al. (2022) changed the number of layers, number of attention heads, and transformer dimensionality when scaling models. This might be something you want to try.
4) Instead of the LSTM + Transformer model, did you experiment with a transformer-only model? E.g., akin to the VPT work (Baker et al. 2022): embed all inputs into one vector, stack the vectors over timesteps, apply a causal transformer, and predict actions from the transformer outputs. This type of model might scale better, as it reduces the number of components that might interfere.
#### Comments (not questions)
- Fig1 right: weird scale. Any chance to get more points?
- Line 205: grammar error at the start of the line
- Explain/rename "Dlvl" and explain why "Turns" is a good metric
- Figure 3: "LSTM + XXL Dec" is a bit confusing naming, since "decoder" is not a commonly used term in the paper. I'd recommend using something like "LSTM (bigger)" to simply reflect that it is the LSTM baseline but with a bigger network
- Figure 3 (and others): add explanation to caption what is the error bar of the bar plots. Is it standard deviation or standard error (or something else)?
- Table 2 caption: starts with weird "[V4]"
#### References
- Hoffmann, Jordan, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas et al. "Training compute-optimal large language models." arXiv preprint arXiv:2203.15556 (2022).
- Baker, Bowen, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. "Video pretraining (vpt): Learning to act by watching unlabeled online videos." Advances in Neural Information Processing Systems 35 (2022): 24639-24654.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: No explicit section for limitations or broader/societal impact was given. The authors bring up future work ideas in the conclusion. While I think the work does not require a societal impact section (no immediate impact), I urge the authors to still think through any cases where the work or its insights could impact others. Or, alternatively, what impact *not* including some results would have (e.g., skipping some analysis).
## Rebuttal acknowledgement
I have read authors' rebuttal which did address my concerns, and I increased my rating from 4 to 7 to signal my vote to accept this paper (change was done before discussion period closed).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank you for your detailed and thoughtful feedback on our paper, which has greatly helped to further strengthen our work. We hope to address the concerns and questions that you raise in your review below.
> If a new method is proposed to generally improve RL/IL performance, it should be tested at least in two distinct environments.
Our goal in this work is to conduct a deep, systematic, and scientific investigation of performance gaps between data-driven neural policy learning and symbolic methods employing NetHack as a testbed, rather than to introduce a new model achieving SOTA RL/IL performance in NLE. As discussed in **section (i)** of the general response above, we respectfully disagree that the insights revealed by our investigations in this paper are limited to NLE.
> To be interesting in terms of performance, it should at least outperform the NetHack Challenge Neural solutions.
The two neural policies outperforming the CDGPT5 baseline from the NetHack Challenge (NHC) that you are referring to rely on hand-engineered augmentations including separated action spaces, role-specific training, and even hard-coded subroutines. CDGPT5 is not only the best open-sourced neural model from the NHC, but it is also the best purely data-driven model [1].
As stated above, our goal in this study is to probe the general capability of neural approaches at learning complex, flexible behaviors directly from multimodal data; hence, we refrain from reliance on any NLE-specific policy augmentations. Respectfully, we believe that this choice makes our analysis more interesting, both from a scientific perspective and to the RL/IL community at large, rather than less.
> If the hierarchical BC figured out the hierarchy automatically (or, if it was an emergent behaviour of the model), that would be more interesting.
We thank the reviewer for this suggestion.
To our knowledge, our paper is the first to demonstrate that hierarchical labels can offer a stable and powerful mechanism for embedding behavioral priors into policies during pre-training.
As observed by reviewer qyRC, there are few complex environments where expert demonstrations are accompanied by hierarchical labels, though such labels are in-principle no more difficult to collect than demonstrations. We are hopeful that our paper may spur the creation of more datasets with such labels as well as further investigation of how hierarchical behavioral priors may best be exploited for policy generalization, both in NetHack and beyond.
> In multiple occasions paper says that the lower performance of bigger model is due to overfitting (e.g., line 229). However there are no results/experiments to show that this indeed was the case.
We provide detailed support for our claims of model overfitting and underfitting in Section G of the supplementary materials accompanying our paper. In Figures 11, 12, and 13 of this section, we include visualizations of policy performance via “rolling NLE score” [2] on held-out validation seeds of the NetHack Learning Environment (NLE) across training iterations, for all experiments conducted. We will include explicit references to these supplemental figures to help guide readers in the final version of this paper.
> ...did you change any other settings of the training setup when increasing data amount?
On optimal training compute: The results of the data scaling experiments visualized in Figure 5(right) of the main paper reflect the aggregate, large-scale evaluation performance of “best” model checkpoints across dataset subsample sweeps, selected on the basis of the “rolling NLE score” metric [2]. Consequently, our results in these experiments do reflect provisions for optimal training compute, subject to the 48-hour compute budget that we employ as a basis for fair comparison of policies across model classes. For more details on evaluation procedures, please refer to Section F of the supplementary materials.
On optimal model size: The goal of this particular set of experiments was to test the effects of data scaling and model parameter scaling in a decoupled setting; as a result, the model architectures tested in the data scaling experiments were held fixed, with only the size of the dataset varied. Please refer to Section E of the supplementary materials for detailed description of model architectures, with all employed model hyperparameter values included in Tables 5 and 6.
> I assume only the number of layers in the transformer was changed?
Indeed, only the number of layers in the transformer was changed here. The architecture of the default transformer-LSTM model (i.e. with 3 layers) was tuned for input and output layer width when these experiments were launched. We will include a version of these experiments with all layer parameters scaled as you suggest in the final version of this submission.
> ...did you experiment with transformer model only?
Please see **section (ii)** of the general response as well as the attached PDF.
> Is it standard deviation or standard error (or something else)?
Thank you for pointing out this omission. Indeed, all error bars on visualizations of experimental results reflect standard error.
We also appreciate your comments identifying typos, and will make according adjustments.
Thank you, once again, for the time you have taken to provide thorough feedback on our submission. We would be grateful if you would notify us whether you have any additional concerns or questions preventing a recommendation for acceptance.
**Citations**
[1] Hambro, Eric, et al. "Insights from the neurips 2021 nethack challenge." NeurIPS 2021 Competitions and Demonstrations Track. PMLR, 2022.
[2] Hambro, Eric, et al. "Dungeons and Data: A Large-Scale NetHack Dataset." Advances in Neural Information Processing Systems 35 (2022): 24864-24878.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comprehensive response!
---
Rebuttal Comment 1.2:
Comment: Thank you for the extensive comments and additional experiments! The low performance of transformers does indeed make intuitive sense, as with limited context length the agent might miss important information (e.g., state of the inventory). This makes the LSTM + transformer combination way more appealing and potentially very interesting for other applications.
With these answers + comments from other reviews + rethinking, I am increasing my score from 4 to 7 to emphasize my vote to accept this work (avoiding borderline or weak ratings to clearly signal my vote on this). The argumentation for using NLE alone is valid, although I'd prefer if the work better used this argument in its favor: if we focus on a single environment, we can nitpick very specific error (or success) cases and study how different models fail. If one does a general "one model to play N games" setup, such nitpicking becomes harder.
If accepted, I would suggest authors to:
* Open-source the code (just highlighting how important this is, as their work was also based on open-sourced work method).
* Include the flat-transformer ("transformer only") results in the main paper and discuss them. This highlights that LSTMs/RNNs have an edge over transformers at least in some cases. The LSTM + transformer architecture, despite being "simple", could potentially be very interesting in other domains.
* Highlight the comparison to symbolic-only agents and other neural solutions, as done in the *section (i)* of the general response, better in the paper.
* Add the arguments for why you say the model is overfitting into the main paper. I still struggle to see how the curves in the appendix highlight overfitting; to me it looks more like the model converges. More generally, BC training loss (prediction loss) and model performance are very poorly correlated.
The paper is concerned with the NetHack challenge, a complex AI challenge that in 2021 reached headlines, because symbolic agents considerably outperformed neural agents. I see three main contributions in the paper:
- The construction of a large-scale dataset, based on the best symbolic agent and its policy choices, that can enable training better neural agents
- The training of better neural agents based on this dataset, and other improvements
- A systematic analysis of the effect of different technical improvements (hierarchical BC, larger Transformer models, larger datasets, online fine-tuning with RL), notably finding that scaling training sets or model size alone will not bridge the gap to the best symbolic agent.
The problem is of very high interest to the AI community, and the technical investigation, results, and discussion appear thorough and insightful. The dataset might also enable further research. I find the results regarding scaling especially interesting, i.e., that performance increases logarithmically, and so more data or bigger models alone will not enable achieving parity with the symbolic approach.
Quality of writing is very good, and so the paper is easy to follow (subject to my lack of technical background).
Minor notes:
- The paper appears to be missing a link to the dataset
- The related work is not easy to access for someone not close to the field. E.g., paragraphs on "imitation learning" and "hierarchical policy learning" give too little detail about the basic ideas (do not start with descriptions of what they are for, but what they do)
- "The full observation space of NLE is far richer and more informed than the view afforded to human players of NetHack, who observe only the more ambiguous “text-based” components of NLE observations" - I do not fully understand this sentence, please expand. What can systems observe in NLE, that humans don't receive in the original interface? Or do you mean that NLE aggregates the Ascii terminal characters into something more high-level?
- Showing an excerpt from the dataset would be helpful, especially, as it is not quite clear what is added there, both strategies and substrategies? Or the more specific one only?
Strengths: See above.
Weaknesses: See above.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the authors critically discuss that scaling alone will not bridge the gap to symbolic agents on this challenge.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are very grateful for the time you have taken to provide thoughtful feedback and comments on our paper.
> The paper appears to be missing a link to the dataset.
We will publicly release the HiHack dataset upon revision of our submission, at which time we will include a link to the dataset in the main body of the paper.
> The related work is not easy to access for someone not close to the field. E.g., paragraphs on "imitation learning" and "hierarchical policy learning" give too little detail about the basic ideas (do not start with descriptions of what they are for, but what they do)
Thank you for these suggestions, we will increase the level of detail employed in the related work section of the final version of the paper.
> "The full observation space of NLE is far richer and more informed than the view afforded to human players of NetHack, who observe only the more ambiguous “text-based” components of NLE observations" - I do not fully understand this sentence, please expand.
The full observation space of NLE includes not only the “text-based” view of NetHack (i.e. the tty characters observed by human players of the game), but also *glyphs* representing the exact identities of all monsters and objects currently in view as well as all items in the player’s inventory [1]. Glyphs are not (directly) observable to human players of the game in standard versions of NetHack. Our HiHack dataset follows the convention set by Hambro et al. [2], storing the text-based, *tty*-view of NetHack observations only.
> Showing an excerpt from the dataset would be helpful, especially, as it is not quite clear what is added there, both strategies and substrategies? Or the more specific one only?
In its current form, the HiHack dataset reflects high level strategy information from AutoAscend only, as detailed in sections B and C of the supplementary materials accompanying the paper. All hierarchical policies in the paper are trained against these high level strategy labels only. We will add excerpts previewing the dataset to the supplementary materials during revision.
Given that there is already interest, we will release an additional version of the dataset featuring both high-level and sub-strategy labels upon revision, reflecting the full structure of AutoAscend displayed in Figure 7 of the supplement.
Thank you once again for your review. Please do let us know if you have any outstanding concerns or questions about our work.
**Citations**
[1] Küttler, Heinrich, et al. "The nethack learning environment." Advances in Neural Information Processing Systems 33 (2020): 7671-7684.
[2] Hambro, Eric, et al. "Dungeons and Data: A Large-Scale NetHack Dataset." Advances in Neural Information Processing Systems 35 (2022): 24864-24878. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive and insightful feedback. We are glad that you found our analysis to be comprehensive (qyRc), our experimental insights impactful (eWjQ, Ub8t, 83YB), and our submission to be well-written (Ub8t, qyRc, 83YB).
However, in this general response we would like to address two concerns raised by several reviewers: the applicability of this work outside of the NetHack Learning Environment (eWjQ, PtAe, qyRc) and transformer-only architectures (eWjQ, PtAe).
**(i) On the limited applicability of this work outside of NLE**
While we understand the sentiment behind this statement, we respectfully disagree that our experimental insights are limited to NLE. Performance gaps between neural and symbolic methods are ubiquitous in open-ended environments including task and motion planning (TAMP) settings, and their underlying causes have received little study [1, 2]. Our study fills this gap in the neural policy learning literature by conducting a scientific investigation of the failure modes of popular neural approaches at mastering complex, generalizable behaviors directly from multi-modal data.
NLE is well-suited for such an investigation on account of:
(1) the sheer extent of the performance gap between leading symbolic and neural policies [3, 4];
(2) the existence of previously-conducted benchmarks [3, 4]; and
(3) the open-source and hierarchical nature of the symbolic, state-of-the-art agent, AutoAscend [3].
In particular, we believe the last point makes NetHack a singularly compelling testbed over peer environments such as Habitat, Minecraft, or AI2-THOR. In generating the large-scale HiHack dataset, we provide the community with what is, to our knowledge, a unique opportunity to explore the impact of hard-coded hierarchical behavioral priors via structured labels on learning in long-horizon, complex environments.
Further, we take measures to preserve the generality of the insights yielded by our investigations of neural policy learning in this paper. Specifically, we intentionally forgo the addition of any environment-specific constraints in the architectural design or training setup of all models explored in this paper. This contrasts with leading NLE agents RAPH and KakaoBrain that rely on augmentations such as hand-engineered separated action spaces, role-specific training, and hard-coded sub-routines [3]. While this choice prevents us from achieving absolute SOTA in NLE in this paper, we believe it to be crucial in preserving the general applicability of our insights to neural policy learning for general open-ended, long-horizon environments.
We agree with the reviewers that further probing of the nature of tradeoffs between hard-coded hierarchical priors, model capacity, and interaction in other environments is an important and exciting direction for future work, but we believe it to be beyond the scope of this paper.
**(ii) On transformer-only architectures**
The first iteration of transformer-based models we experimented with was structured precisely as several of the reviewers described, featuring a “flat” transformer core module only (i.e., with the LSTM-based recurrent module ablated from the model architecture visualized in Figure 4 (left)), so that all input is confined to the transformer’s context window. We found such models to perform substantially worse when trained with behavioral cloning on AutoAscend demonstration data than their pure-LSTM or transformer-LSTM counterparts.
In the PDF attachment, please find visualizations of the flat transformer policies’ behavioral cloning rolling score [4] curves on withheld seeds of NLE over the course of training, contrasted against the corresponding curves for the transformer + LSTM and hierarchical transformer + LSTM policies (included in Figure 13 of our submission’s supplement).
The parameter “unroll length,” denoted as “URL” in the legend of this figure, reflects the length of the context or observation history employed for action prediction.
A total of six random seeds for each model class were employed to test flat transformer policies. Training for each random seed was conducted on a single GPU for 48 hours. The transformer core modules of these policies have precisely the same architectural hyperparameters as their counterparts in the transformer-LSTMs, but feature 6 layers.
We make two key observations: (1) the number of samples seen during the 48-hour training window is inversely proportional to the context length employed in training; (2) the rate of BC policy improvement does not increase with context length, across the set of context-length values tested here. On the basis of (2), coupled with the fact that the average AutoAscend demonstration in our HiHack dataset consists of 27,000 keypresses (Table 1 in the paper), roughly two orders of magnitude beyond the longest context length (URL = 256) that we found feasible to test for flat transformers, we conclude that an encoding of the full game history is important for imitation in our setting.
We will update the supplemental section of our paper to include an in-depth discussion of these flat transformer ablation experiments upon revision.
**Citations**
[1] Silver, Tom, et al. "Learning symbolic operators for task and motion planning." 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.
[2] Zhang, Kai, et al. "Task and motion planning methods: applications and limitations." 19th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2022). SCITEPRESS - Science and Technology Publications, 2022.
[3] Hambro, Eric, et al. "Insights from the NeurIPS 2021 NetHack Challenge." NeurIPS 2021 Competitions and Demonstrations Track. PMLR, 2022.
[4] Hambro, Eric, et al. "Dungeons and Data: A Large-Scale NetHack Dataset." Advances in Neural Information Processing Systems 35 (2022): 24864-24878.
Pdf: /pdf/468a2afe00109bde6a7f954f327260c6ed0395b9.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper explores reasons for the performance gap between neural and symbolic methods in NetHack:
Symbolic agents use hierarchical policies and parsers to extract high-level features
Symbolic agents have handcrafted heuristics and error correction
Neural agents lack inductive biases like hierarchy that may be needed for sparse rewards
Experiments show hierarchy, scale, and combining imitation and RL help improve neural agents:
Hierarchical behavior cloning improves over flat BC
Larger Transformer-based architectures improve over LSTMs
RL fine-tuning provides gains, especially for underfitting models
But significant gaps to symbolic agents remain
Strengths: The experimental design is clever, the charts are clear, and the experimental effects are evident. The paper explores a novel problem domain of applying neural networks to master the game NetHack, where current methods struggle compared to symbolic AI. The authors introduce a new large-scale dataset of NetHack demonstrations called HiHack to facilitate this analysis. The idea of using demonstrations to help neural networks learn better policies in sparse, long-horizon environments like NetHack is creative. The methods are detailed appropriately to replicate experiments. Results are presented logically and incorporate useful visualizations. The conclusion summarizes takeaways concisely. Mastering complex environments like NetHack with sparse rewards and long time horizons remains an open challenge for deep RL. This paper provides significant evidence and analysis characterizing the limitations of current neural network methods in these settings, and points the way towards progress, whether via incorporating stronger inductive biases like hierarchy or combining neural and symbolic approaches. The insights will broadly impact research in sparse reward RL, imitation learning, and integrating neural and classical AI.
Weaknesses: The model is based on NetHack, and the results hold up on the models above; it is unclear whether these results would still hold up on other models. The authors recognize the limited generality so far of methods tested on NetHack to other complex environments. No obvious harmful biases or problematic data sources are introduced in this work. The NetHack environment itself seems relatively innocuous.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you add more experiments or some theoretical derivation to further strengthen the contribution of this article?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The model is not very representative; could you switch to a more popular model? Overall, the authors demonstrate good care and thoughtfulness regarding the limitations and potential negative impacts of this research direction. The discussion seems sufficient without being overreaching or distracting from the primary technical contributions. I do not have any major suggestions for improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We thank you for your feedback and comments on our submission. We are glad that you found our paper to be clear and insightful.
> The model is not very representative; could you switch to a more popular model?
The transformer-LSTM policy architecture that we employ in this paper is indeed novel, and was developed by us as a result of the limited generalization and expensive gradient updates that we observed when experimenting with more standard transformer-only architectures.
A long context length appears to be greatly beneficial to successful imitation of AutoAscend. We found augmentation of a causal transformer-core with an LSTM recurrent module to offer a simple and, importantly, highly lightweight solution to this issue.
Please take a look at **section (ii)** of our general rebuttal above as well as the attached PDF for additional reasoning and figures supporting these claims.
> The model is based on NetHack; would these results still hold up on other models?
As above, please refer to **section (i)** of our general rebuttal for our response to this point. We will add this additional discussion to the paper.
Thank you once again for the time you have taken to review our paper, and please let us know if you have any outstanding concerns that stand between us and a strong recommendation for acceptance. | null | null | null | null | null | null |
Derandomized novelty detection with FDR control via conformal e-values | Accept (poster) | Summary: This paper applies the derandomized e-value to conformal novelty detection, which reduces the randomness of the original approach based on conformal p-values. The authors also refine the method by adaptively weighting the conformal e-values based on an estimate of the out-of-sample accuracy of each underlying machine learning model. Simulations with synthetic and real data are conducted to compare the performance with the original approach.
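For readers unfamiliar with the baseline being de-randomized, the standard split-conformal p-value can be sketched in a few lines (an illustrative stand-in, not code from the paper; it assumes higher nonconformity scores indicate more outlying points):

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Split-conformal p-value: (1 + #{calibration scores >= test score}) / (n + 1).
    Super-uniform under the null when calibration and test points are exchangeable."""
    cal = np.asarray(cal_scores)
    return (1 + np.sum(cal >= test_score)) / (len(cal) + 1)

print(conformal_pvalue([0.1, 0.2, 0.3, 0.4], 0.35))  # → 0.4
```

Because the p-value depends on which inlier points happened to land in the calibration split, re-running the pipeline with a different random split can change the reported significance; this split-dependence is the algorithmic randomness the paper targets.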
Strengths: This paper is the first work to apply the derandomized e-value to conformal inference, which makes the approach more stable compared with the original one.
Weaknesses: ### 1. The novelty of this paper is limited.
Considering the previous works applying derandomization and e-values to the knockoff filter [Ren et al., 2020; Ren and Barber, 2023], it is straightforward to extend them to the conformal setting.
### 2. The challenges are not stated clearly.
(1) From the connection between the conformal BH filter and Eq. (2) (given by Rava et al. [2021]), it would be easy to extend the construction in Ren and Barber [2022] to the conformal e-value in Eq. (5). In addition, the weighted e-value is also proposed by Ren and Barber [2022].
(2) Given Theorem 2 from Ren and Barber [2022], all the technical difficulty falls in proving Theorem 3.2. However, the proof strategy is from Rava et al. [2021].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the advantage of data-driven weights over fixed weights? Is the weighted approach in Ren and Barber [2022] applicable here?
2. In Figure 2, why is the power of E-AdaDetect higher than that of AdaDetect while the FDR value is still lower? It seems contradictory to common sense.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main limitation is the novelty. Although this is the first work to deploy derandomized e-values in split conformal inference, the framework and the theory are well studied in previous works. In addition, the technical contribution is minor.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback. We are addressing your comments below.
### Novelty and data-driven weights
Our data-adaptive weighting method involves technical innovations and is different from the use of “side-information” weights discussed in Ren and Barber [2022]. The key distinction is that our method weights the conformal e-values in a data-driven way, leveraging additional information extracted from the same data used to compute the conformal e-values themselves. This is challenging because standard methods for weighted hypothesis testing typically assume the weights to be independent, obtained from separate data, and it is not clear where such independent information could come from in our case. For example, Ren and Barber [2022] discuss how to leverage prior information obtained from a completely independent source in the context of the knockoff filter. Their approach is thus not applicable here.
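To illustrate the distinction drawn in this rebuttal: when the weights are fixed in advance (or come from independent data), a convex combination of valid e-values is trivially still a valid e-value, since E[Σ_k w_k e_k] ≤ Σ_k w_k = 1. The quick Monte Carlo check below (with Exp(1) draws standing in for null e-values, purely for illustration) confirms this; the hard case the paper addresses is weights estimated from the same data, where this simple argument no longer applies.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 200_000
# Exp(1) variables have mean exactly 1, so each column is a (boundary)
# valid e-value under the null -- a stand-in for real conformal e-values.
e = rng.exponential(1.0, size=(n, K))
w = np.array([0.4, 0.3, 0.15, 0.1, 0.05])  # fixed weights summing to 1
agg = e @ w                                # weighted average across K "splits"
print(round(agg.mean(), 3))                # close to 1: still a valid e-value
```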
Finally, our experiments show that our data-driven weighting method leads to higher power (e.g., refer to Figure 4), and we believe this method could also be useful beyond the scope of this paper.
### Novelty and de-randomization in conformal inference
The novelty of this work is enhanced by the fact that it is the first paper to specifically address the important problem of de-randomizing conformal inferences. We think this research can have broad practical impacts due to the rapidly spreading adoption of conformal inference methods and the well-known importance of mitigating algorithmic randomness in sensitive applications.
### Novelty and relation to Ren and Barber [2022] and Rava et al. [2021]
Although our methods and proofs involve some technical similarities (which we clearly acknowledged), the problem studied in this paper is completely different from both that of Ren and Barber [2022] and that of Rava et al. [2021]. Ren and Barber [2022] focus on de-randomizing conditional independence tests based on knockoffs; they do not study conformal inference. Rava et al. [2021] study a problem closely related to conformal classification but they do not tackle de-randomization at all.
### Figure 2
Finally, in Figure 2, it should not be surprising that the power of E-AdaDetect can sometimes be higher than that of Ada-Detect while the FDR is lower. The two methods may often produce completely different orderings of the test points. Therefore, it is perfectly plausible that one method sometimes provides better separation between the inliers and the outliers. There would only be an intuitive contradiction here if the two methods constructed their inferences by applying a different significance threshold to the same test statistics; but that is clearly not the case.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for answering my questions. As you said, the data-adaptive weighting method is the main innovation of this paper. However, the corresponding results are not sufficient. I think the technical difficulty of data-driven weights is addressed by the symmetry assumption (line 206). Can you extend the results to more general cases? Also, there are no experiments on real data to show the superiority of data-driven weights.
---
Reply to Comment 1.1.1:
Title: Re: results of additional experiments based on real data
Comment: Thank you for the suggestion of including additional experiments to demonstrate the performance of our proposed data-driven weighting method on real data. Following your suggestion, we have carried out additional experiments using the same 4 real data sets considered in the paper: "musk", "shuttle", "KDDCup99", and "creditcard". We have uploaded the results, along with a detailed description of the setup, in this one-page blinded PDF file: https://docdro.id/8QEqtk8
In summary, the results demonstrate clearly the advantage of the two alternative weighting schemes ("t-test" or "avg. score"), which in most cases lead to noticeably higher power compared to the benchmark with uniform weighting. Notably, we see that weighting based on the t-test is the most powerful approach. These results are consistent with those presented in Figure 4 of the paper, which were based on synthetic data. Further, it is interesting to note from Figure 1 (in the new one-page PDF) that data-driven weighting is also effective at further reducing the algorithmic randomness of our findings, leading to lower variance.
We would of course be happy to include these results in the revised manuscript, possibly along with other similar results which we did not include in the one-page PDF for brevity.
Finally, regarding your suggestion of extending our data-driven weighting method to more general cases (e.g., classification or regression, instead of outlier detection), this is certainly a good idea for future work. However, we feel that it would go beyond the scope of this paper to investigate methods for problems other than outlier detection, partly also due to space limitations. | Summary: The paper employs conformal e-values, as opposed to p-values, to quantify statistical significance during outlier testing under FDR control. This approach enables the principled aggregation of results from mutually dependent tests, thereby providing a solution to de-randomize (split) conformal inferences.
Strengths: The paper addresses the significant problem of randomness in conformal inferences.
They propose a method to make conformal inferences more stable by leveraging suitable conformal e-values instead of p-values to quantify statistical significance, also merging the idea of de-randomizing conformal novelty detection and FDR control.
The proposed method has the potential to significantly improve the stability and interpretability of conformal inferences.
Weaknesses: The paper could significantly benefit from revisions aimed at improving clarity. The current presentation of ideas and concepts is convoluted, making it difficult for readers to follow and understand the arguments and methodologies proposed.
While the authors' approach is novel, it builds upon existing studies and techniques. The authors could strengthen the originality of their work by further highlighting the unique aspects of their approach and how it differs from previous methods.
The authors provide simulations to demonstrate the effectiveness of their method, but it would be beneficial to see more empirical evaluations, including comparisons with other state-of-the-art methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The paper focuses on derandomizing split-conformal inferences. I wonder if jackknife+ would inherently solve the problem.
The authors mention that their method can be extended to leverage adaptive weights based on the data. Could they discuss potential strategies for choosing these weights in practice?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review; we appreciate the effort and honest feedback. We are sorry to hear you found the paper a bit hard to understand, but we hope we can answer your questions here.
- Clarity. Other reviewers found the paper to be clear, but we can try to make it even more accessible. It would help us if you could be a little more specific about which sections or concepts you found confusing.
- Jackknife+. Section 1.3 explains the relation of our work with Cross-Validation+, of which the Jackknife+ is a special case. In short, the Jackknife+ is not a satisfactory solution for the problem considered in this paper due to the difficulty of controlling the FDR. However, in the future it may be possible to combine our work with the Jackknife+, as we suggested in Section 5.
- Data-driven weights. The extension of our method based on data-driven weights is explained in Section 3.2. This also includes a detailed description of the specific approach adopted in the experiments of this paper, presented in Section 4.2.3. See also Algorithm S5 in the Supplementary Material for further details. Is it possible that you read this part of the paper quickly or missed the Supplementary Material?
- Novelty. Our paper makes two new contributions. First, it is the first one to carefully study the problem of de-randomizing conformal inferences using e-values, while controlling the FDR. We think this will have direct practical impact and is also likely to open new directions of research. Second, our paper combines and innovates upon ideas from different fields, introducing a key technical novelty that allows power boosting through data-driven e-value weighting. We think this adaptive weighting strategy will be useful beyond the scope of this paper. In summary, it is true that our results are achieved by building upon the works of others, but the importance of their prior contributions is clearly acknowledged.
- Simulations. The paper presents extensive simulations and empirical comparisons with state-of-the-art methods. Is it possible that you missed some of the details described in Section 4 and in the Supplementary Material? Or is there anything specific that you would like to see added? | Summary: The main limitation of conformal prediction lies in its inherent randomness. However, this paper presents an innovative solution by introducing a derandomized version of conformal prediction, specifically applied to the field of novelty detection. Through the incorporation of conformal e-values, the proposed method successfully reduces the element of randomness while providing provable and effective control over the False Discovery Rate (FDR).
The key contribution of this research lies in its pioneering use of e-values, instead of traditional p-values, within the framework of conformal prediction. This innovative approach significantly simplifies the aggregation process, reducing randomness without compromising the overall detection power.
Strengths: This paper is a highly innovative and inspiring work that highlights the potential of e-values as a superior alternative to p-values for derandomizing conformal prediction through the aggregation of multiple dependent tests of the same hypothesis.
The paper introduces a novel approach for constructing e-values and provides a rigorous guarantee of false discovery rate (FDR) control, with Theorem 3.2 being the main contribution of the research.
Furthermore, the practical aspect of the paper lies in the deployment of conformal e-values in AdaDetect, which presents a derandomized version of AdaDetect. It is important to note that E-AdaDetect is not merely a simple combination but rather a specific practical application demonstrating how to aggregate e-values using data-adaptive weights.
The experimental results showcased in the paper reveal that the derandomized AdaDetect exhibits comparable power to its randomized counterpart, while effectively controlling the FDR. This finding is particularly surprising (even higher power?) and highlights the potential benefits of adopting the "new" (not the same as the classical definition) e-values.
Weaknesses: The main paper primarily focuses on experimental results obtained from synthetic data, which is sampled from simple Gaussians. It may be considered relatively easier compared to real-world scenarios. Therefore, I would suggest including at least one experiment using more complex synthetic data or real data from the supplementary material.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: *
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: *
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback.
We also appreciate your suggestion of including an additional experiment utilizing more complex synthetic data or real data; if space permits, we will insert these results in the revised manuscript. | Summary: This paper proposes a way to reduce the randomness in novelty detection methods (detecting out-of-distribution points) that are based on the split-conformal inference paradigm. This is done by ensembling over several ($K$) train-validation splits of the dataset. The main technical point is to aggregate the evidence from the $K$ individual predictions by averaging their E-values, as a replacement for the p-value used in the traditional approach with only one train-validation split. It is shown that the FDR (false discovery rate) can still be controlled with these quantities. The method is further enhanced by using certain weighted averages of the E-values, taking into account the estimated power of each one. The methods are evaluated on synthetic and real data, showing that the proposed derandomized method works better if the fraction of outliers is larger.
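The aggregation-and-testing pipeline described in this summary can be sketched in a few lines (a simplification with toy numbers, not the paper's exact e-value construction; the rejection step is the standard e-BH procedure of Wang and Ramdas):

```python
import numpy as np

def ebh_reject(e_values, alpha=0.1):
    """e-BH: sort e-values decreasingly, find k* = max{k : e_(k) >= m/(k*alpha)},
    and reject the hypotheses with the k* largest e-values."""
    e = np.asarray(e_values, dtype=float)
    m = len(e)
    order = np.argsort(e)[::-1]                    # largest e-value first
    thresholds = m / (alpha * np.arange(1, m + 1))
    passing = np.nonzero(e[order] >= thresholds)[0]
    if passing.size == 0:
        return np.array([], dtype=int)
    return order[: passing.max() + 1]

# Derandomization step: average each test point's e-values over K splits.
e_splits = np.array([[20.0, 16.0, 1.2, 0.4],       # split 1 (toy numbers)
                     [20.0, 20.0, 0.8, 0.6]])      # split 2
e_avg = e_splits.mean(axis=0)                      # still valid e-values
print(ebh_reject(e_avg, alpha=0.25))               # → [0 1]
```

Averaging preserves e-value validity under arbitrary dependence between the K splits, which is what makes e-values convenient for de-randomization; naively averaged p-values are not valid without a correction.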
Strengths: The proposed novelty detection approach is original in that it combines evidence from different algorithm runs into a single evidence score. This required a formulation of the algorithms' evidence in terms of E-values instead of the p-values that are commonplace for conformal methods, to avoid a loss in detection power. While prior novelty detection works have touched upon E-values, the current paper succeeds in controlling the FDR based on this E-value evidence. The methodological details (Sec. 3.1) and mathematical proof (Theorems 3.2 and 3.6) of this FDR control are strongly inspired by existing works, but this technique is new in the area of novelty detection.
As to significance, the method does reduce the variance as promised and keeps its FDR guarantee. It has higher power than competitor methods in some regimes, but quite consistently weaker power in other regimes, particularly for low fraction of outliers. Secondly, while the problem and the solution is built around quite specific requirements, it is plausible that the presented techniques can be applied to the derandomization of other inference methods with statistical guarantees, as the authors say in their Conclusion.
The clarity of the paper is exceptional (but see my comments on Appendix S2 below), the explanations and discussions are to the point.
Weaknesses: As the biggest weakness of the paper I see that, despite the novelty of the approach, the considered problem requirements are relatively specific and the solution therefore narrow, as it concerns the de-randomization of certain novelty detection algorithms that are based on conformal inference and that aim at a mathematical control of the FDR.
Also, the method's performance (power) falls behind other methods in certain application regimes. It would be good to have some rules beforehand to know in which regime to apply which method.
If space permits, I would like to see some real data experiments (see Sec. 4.3) in the main text rather than only in the Supplement.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Please mention what K is in Fig. 1.
* line 228: Should this be D^(k)_cal \cup D_test?
* How is AdaDetect different from E-AdaDetect with K=1 (Fig. 3)?
* Please carefully re-write the Proof of Theorem 3.6 in Appendix S2, as I believe there are many typos and several things that could be explained better. In particular:
- Please explain the first equality after line 35.
- It seems that the martingale runs from l'=l down to l'=0, correct?
- It would be good to explain in detail, over which random variables the various expectations E after line 26 run and which random variables are held fixed (conditioned on).
- Right-hand-sides of Eq. (S1) and (S2): Should there be t instead of \hat{t}^{(k)}?
- line 31: The word "outliers" should be "inliers"
- Should the inf be changed to sup, or maybe the inequality sign reversed?
- line 34: inside the curly brackets, the l should be l'?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, this is discussed in a fair way after line 351.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback.
### Broader relevance of our work
It is true that this work focuses explicitly on de-randomizing conformal inferences for novelty detection tasks, which may seem like a relatively narrow scope compared to the broader range of possible conformal inference applications. However, the novelty detection idea of testing the exchangeability between a calibration set and a new test point is essentially the foundation of all conformal inference.
This is why it makes sense to start tackling de-randomization in conformal inference from the perspective of novelty detection, but it is not hard to see how our ideas could be extended to other tasks, including regression and classification (after replacing the FDR with analogous concepts such as the FCR). Such extensions are suggested in Section 5. Space limitations prevent us from developing these extensions fully within the present paper, but this is certainly an interesting direction for follow-up work.
### The effect of de-randomization on power
The goal of our method is to decrease the algorithmic randomness of existing conformal inference techniques, not to increase their power.
Our results consistently demonstrate the effectiveness of our approach in achieving this goal (e.g., refer to Section 4).
Reducing algorithmic randomness is important because it enhances the reproducibility, stability, and interpretability of any findings. There is no clear reason why de-randomization should also lead to higher power in general; it is a nice surprise that it sometimes does. However, it must be remembered that power may not always be the most meaningful concept in applications with high algorithmic randomness, as it refers to an expected behavior averaged over many hypothetical repeated experiments. In reality, algorithmic randomness can make the results of any analysis highly unpredictable, regardless of what the average power might be, and this is precisely the issue that motivates our work.
Regarding your suggestion of providing some guidelines for practitioners on when to expect our method to also boost power, we would like to point out that we have already touched upon this issue in the paper. Specifically, we have explained how our method tends to reduce power in applications with few expected discoveries (e.g., refer to Section 5). Unfortunately, it might be challenging to predict power improvements more precisely without much stronger assumptions than those made in this paper.
### Other questions
In Figure 3, the difference between AdaDetect and E-AdaDetect is that these two methods utilize different decision rules. The former computes p-values, while the latter relies on E-values. E-values are generally less powerful than p-values, but they have the advantage of facilitating de-randomization. This is why E-AdaDetect is at a disadvantage when applied with K=1, but this special case is not practically interesting (as it does not allow de-randomization) and it is only shown for completeness.
We also thank you for bringing some typos and imprecisions to our attention; we will certainly fix them in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions.
I appreciate that you highlight the exchangeability as the central part of conformal inference, and that this comes out particularly in novelty detection. I am looking forward to seeing your approach being extended to derandomize other conformal tasks.
Thanks also for your insights on what to expect on power. With this in mind, it is even surprising that your derandomized method improves power consistently in some regimes.
I still encourage you to take some real-data experiments into the main paper, potentially even some of the ones you prepared as a response to Reviewer ccpQ during the rebuttal.
I am raising my score to 7 and am recommending acceptance of the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations | Accept (poster) | Summary: This paper presents an offline safe imitation learning algorithm called SafeDICE. A unique point of SafeDICE is that a safe policy is learned from non-preferred and unlabeled demonstrations in the imitation learning framework. Based on the formulations studied in the DICE family, this paper formulates the safe IL problem as a stationary distribution matching problem and then solves a single convex minimization problem without additional hyper-parameter search. In experiments on the RWRL and Safety-Gym benchmarks, the authors demonstrate the effectiveness of the SafeDICE algorithm and show that a better policy can be learned.
Strengths: - This paper is well-written and easy to understand. I think the presentation of the ideas is clear and concise.
- The problem setting is well-motivated with a timely example (i.e., chatbot in lines 30-32). I agree that the problem setting is actually important and unexplored.
- The empirical evaluation of this paper is great. I think the empirical evaluation is sufficient because the authors demonstrated the effectiveness of the proposed SafeDICE algorithm in two environments with comparison with reasonable baselines.
Weaknesses: - It is a little bit unclear what the true mathematical contributions are. The authors describe it as follows, but it seems that most of the mathematical formulations are based on the DICE family papers, so the contribution of this paper appears somewhat minor.
> The main flow of our derivation follows prior DICE-based offline imitation learning methods [12, 13], but it is clearly different from the existing methods that require hyper-parameter search to deal with unknown $\alpha$.
- Though the authors demonstrated that SafeDICE performs better than other baselines, the performance itself seems low if I understand correctly. And, since the return is normalized, it is unclear even whether or not the tasks are solved. I think it would be better to add a typical Safe RL algorithm (e.g., CPO, TRPO-Lagrangian) as an oracle agent. Note that I do *not* require SafeDICE to perform better than Safe RL algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: [Q1] What is the actual value of the return without normalization? Did the authors confirm that the tasks are successfully solved?
[Q2] What is the threshold of the Safety Gym? Is it a default parameter (i.e., 20)?
[Q3] I think it is impressive that SafeDICE does not require a hyper-parameter search. Though I agree that (15) is a beautiful and useful equation, do we have some chance to obtain a better $\alpha$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Unless I miss something, limitations have not been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and comments. Please feel free to ask any additional follow-up questions.
**[Responses to Weaknesses]**
**(mathematical contributions)**
The mathematical contribution of our paper is to present a well-defined mathematical formulation for a new problem setting that cannot be solved with existing DICE-based methods.
First, our proposed algorithm deals with a new problem setting that aims to learn a safe policy using non-preferred demonstrations, and cannot be simply formulated and solved with existing methods including DICE-based algorithms and imitation learning algorithms, which assume that preferred demonstrations are explicitly given.
Secondly, the most important mathematical contribution of this paper is that the hyperparameter $\alpha$ can be determined theoretically, unlike existing algorithms, which require hyperparameter search and are sensitive to the chosen value. Moreover, the SafeDICE objective cannot even be formulated and optimized without selecting $\alpha$: considering the value of $r(s,a)$ in Eq (10) with an arbitrary $\alpha$, the value inside the log function can be negative for some $\alpha$. However, our proposed $\alpha$ selection rule not only theoretically bounds the difference between the stationary distributions of the learned policy and the preferred policy, but also guarantees that the value inside the log function of Eq (10) is always positive (see lines 179-180 and Appendix A.3).
**(Normalized return & SafeRL algorithm)**
SafeDICE not only outperforms the other baseline algorithms, but its absolute performance is also not low. We used a normalized return to improve readability; the return is normalized by the optimal performance of a policy learned with online constrained RL, i.e., $\text{normalized return} = \frac{\text{return}}{\text{maximum return of optimal policy}}$, so a normalized return close to 100 means that the task is successfully solved. The policy used to generate the preferred demonstrations is the policy learned with online constrained RL, and the information on the preferred demonstrations shown in Tables 1 and 3 (Appendix D.1 and D.2) reflects the optimal performance of the online safe RL algorithm (i.e., the oracle agent). We will add a clear explanation of the normalized return, along with the results of this oracle agent shown as dashed lines in the plots, to the final version of the paper.
**[Responses to Questions]**
**(Q1) (Normalized return)**
As previously mentioned, the normalized return is the return normalized by the optimal performance of a policy learned with online constrained RL: $\text{normalized return} = \frac{\text{return}}{\text{maximum return of optimal policy}}$. A normalized return close to 100 means that the task is successfully solved, and we confirmed that the tasks are indeed solved.
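As a concrete illustration of this normalization (the function name and the numbers below are ours, not from the paper):

```python
def normalized_return(episode_return, optimal_return):
    # Raw return divided by the maximum return of the policy trained
    # with online constrained RL, scaled so that 100 means the task
    # is solved at the oracle's level.
    return 100.0 * episode_return / optimal_return

# An agent recovering almost all of the oracle's return:
# normalized_return(980.0, 1000.0) -> 98.0
```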
**(Q2) (Threshold of the Safety Gym)**
If it is correct that the threshold means the cost limit for each task, we used the default parameter ($d=25$) as in the original paper of the Safety Gym environment (page 16 in [1]). We will add these experimental details to Table 3 (Appendix D.2) in the final version of the paper.
**(Q3) (Chance to obtain a better $\alpha$)**
Based on Proposition 3.2 of our paper, we can provide theoretical guarantees on the error bound of the estimated stationary distribution with $\alpha$ selected by Eq (15). As mentioned in the paper (lines 175-178 and Eq (15)), this error is close to zero in most real-world scenarios, where the behaviors of the non-preferred and preferred policies differ. However, since $\epsilon$ is not exactly zero, there is theoretically a small possibility of finding a better $\alpha$.
Practically, the results of Figures 10 and 11 (Appendix E.4) show that it is very difficult to find better $\alpha$ through hyperparameter search.
---
Rebuttal Comment 1.1:
Title: Thank you for clarification.
Comment: Thank you for addressing my comments at the time of the initial review. I read the authors' rebuttal and the other reviews. I still consider this paper well-written and the proposed method technically sound. I will keep my original score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Ajjv
Comment: Thank you very much for acknowledging our rebuttal. To address the reviewer's suggestion to compare with a safe RL algorithm as an oracle agent, we conducted additional experiments for an offline constrained RL method. We ran COptiDICE [1], the state-of-the-art offline constrained RL algorithm, using the datasets that are **augmented with additional (ground-truth) reward/cost annotations**. The results are summarized as follows:
|[RWRL-Cartpole]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 100.24 $\pm$ 0.40 | 184.44 $\pm$ 0.94 |0.48 $\pm$ 0.00 | 392.75 $\pm$ 2.72 |
|DWBC|99.78 $\pm$ 0.55| 189.97 $\pm$ 3.21 |0.47 $\pm$ 0.00 | 408.71 $\pm$ 4.01 |
|PPL|83.70 $\pm$ 0.87| 229.80 $\pm$ 1.64 |0.48 $\pm$ 0.01 | 440.68 $\pm$ 3.49 |
|DExperts|100.28 $\pm$ 0.38| 186.15 $\pm$ 2.29 |0.48 $\pm$ 0.01 | 397.05 $\pm$ 3.11 |
| SafeDICE |99.91 $\pm$ 0.60| 154.88 $\pm$ 1.79 |0.34 $\pm$ 0.00 | 404.22 $\pm$ 1.56 |
|COptiDICE|**107.95 $\pm$ 2.10**|**97.59 $\pm$ 7.21** |**0.06 $\pm$ 0.01** | **183.63 $\pm$ 6.18** |
|[RWRL-Walker]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 98.41 $\pm$ 0.11 | 152.25 $\pm$ 5.33 |0.22 $\pm$ 0.01 | 531.07 $\pm$ 7.53 |
|DWBC|98.99 $\pm$ 0.10| 105.63 $\pm$ 3.85 |0.11 $\pm$ 0.01 | 378.01 $\pm$ 20.88 |
|PPL|99.02 $\pm$ 0.10| 128.46 $\pm$ 4.84 |0.17 $\pm$ 0.01 | 478.75 $\pm$ 12.53 |
|DExperts|95.21 $\pm$ 0.75| 173.20 $\pm$ 11.22 |0.27 $\pm$ 0.03 | 518.38 $\pm$ 5.25 |
| SafeDICE |**99.67 $\pm$ 0.13**| **68.90 $\pm$ 0.83** |**0.00 $\pm$ 0.00** | **124.54 $\pm$ 2.56** |
|COptiDICE|98.07 $\pm$ 0.34|77.86 $\pm$ 2.14 |0.01 $\pm$ 0.00 | 159.91 $\pm$ 8.90 |
|[RWRL-Quadruped]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| **100.19 $\pm$ 0.08** | 178.10 $\pm$ 5.20 |0.38 $\pm$ 0.02 | 328.10 $\pm$ 1.83 |
|DWBC|99.87 $\pm$ 0.16| 167.15 $\pm$ 3.32 |0.34 $\pm$ 0.02 | 316.86 $\pm$ 1.74 |
|PPL|99.87 $\pm$ 0.09| 169.93 $\pm$ 5.06 |0.35 $\pm$ 0.03 | 319.77 $\pm$ 1.70 |
|DExperts|97.05 $\pm$ 2.62| 178.75 $\pm$ 4.78 |0.38 $\pm$ 0.03 | 317.55 $\pm$ 4.40 |
| SafeDICE |99.68 $\pm$ 0.25| 148.21 $\pm$ 2.84 |0.24 $\pm$ 0.01 | 308.71 $\pm$ 2.90 |
|COptiDICE|87.37 $\pm$ 3.19|**130.21 $\pm$ 5.20** |**0.16 $\pm$ 0.03** | **263.41 $\pm$ 17.00** |
|[SafetyGym-Goal]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 94.08 $\pm$ 0.60 | 51.47 $\pm$ 1.49 |0.38 $\pm$ 0.02 | 109.66 $\pm$ 1.63 |
|DWBC|93.93 $\pm$ 0.53| 47.96 $\pm$ 1.77 |0.31 $\pm$ 0.02 | 108.79 $\pm$ 2.12 |
|PPL|94.43 $\pm$ 0.66| 52.31 $\pm$ 1.52 |0.37 $\pm$ 0.02 | 111.41 $\pm$ 2.37 |
|DExperts|87.76 $\pm$ 4.41| 50.82 $\pm$ 1.55 |0.36 $\pm$ 0.02 | 112.62 $\pm$ 1.54 |
| SafeDICE |92.04 $\pm$ 0.44| **39.49 $\pm$ 0.94** |**0.21 $\pm$ 0.02** | **93.35 $\pm$ 1.45** |
|COptiDICE|92.98 $\pm$ 1.13|48.79 $\pm$ 0.80 |0.32 $\pm$ 0.01 | 107.25 $\pm$ 0.78 |
|[SafetyGym-Button]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 88.64 $\pm$ 1.08 | 98.18 $\pm$ 2.19 |0.49 $\pm$ 0.01 | 220.87 $\pm$ 3.48 |
|DWBC|85.53 $\pm$ 0.80| 96.31 $\pm$ 2.38 |0.47 $\pm$ 0.02 | 225.49 $\pm$ 3.52 |
|PPL|86.76 $\pm$ 1.06| 99.97 $\pm$ 2.19 |0.50 $\pm$ 0.02 | 229.46 $\pm$ 3.30 |
|DExperts|86.28 $\pm$ 1.35| 97.30 $\pm$ 1.70 |0.49 $\pm$ 0.01 | 225.93 $\pm$ 5.34 |
| SafeDICE |71.48 $\pm$ 1.33| **70.67 $\pm$ 1.61** |**0.30 $\pm$ 0.01** | **185.64 $\pm$ 1.54** |
|COptiDICE|**94.09 $\pm$ 1.68**|111.10 $\pm$ 3.53 |0.58 $\pm$ 0.02 | 238.46 $\pm$ 3.44 |
All results indicate averages and standard errors over 5 trials. The results show that SafeDICE is competitive with (or even outperforms in some domains) the constrained offline RL algorithm COptiDICE, even though SafeDICE uses only a much smaller amount of annotation information. We will add these results and explanations to the final version of the paper.
Please let us know if any further questions or concerns come up and we would be happy to clarify them anytime during the discussion period.
[1] Jongmin Lee et al., COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation, ICLR 2022 | Summary: Learning safe behaviors from a dataset of demonstrations is a challenging problem. The work presents SafeDICE, an algorithm which learns a safe policy using preference-based imitation learning. The method leverages non-preferred demonstrations in the space of stationary distributions in contrast to prior methods which operate in the policy and discriminator space. SafeDICE reduces the constrained optimization problem into a single convex minimization objective by constructing a Lagrangian and eliminating the maximization problem using a closed form solution. The new objective does not require additional hyperparameter search. Empirical performance demonstrates that SafeDICE is competitive to other methods in satisfying cost constraints.
Strengths: * The paper is well motivated and within the scope of safe Imitation Learning.
* The proposed method presents a novel stationary distribution formulation of the Lagrangian constraints which, to the best of my knowledge, has not been observed before.
Weaknesses: * **Choice of Baselines:** My main concern is the choice of baselines used to compare SafeDICE. The paper only compares with naive BC, DWBC, PPL and DExperts. BC and DWBC are simple imitation learning algorithms which were not designed for constrained optimization problems. PPL utilizes a paired ranking scheme based on reward preferences which is tangential to imitation learning. This leaves DExperts as the only suitable baseline. There exist specific constrained optimization methods such as CPO, PPO-Lagrangian, TRPO and LAMBDA albeit for online and model-based settings [1]. Additionally, there exist offline RL methods [2, 3] which demonstrate safe behaviors as a result of their pessimistic nature. Authors should include these as relevant baselines on small tasks or provide a concrete explanation for why generic Imitation Learning baselines are suitable for constraint satisfaction.
* **Dataset Comparison:** Due to the absence of safe offline IL datasets, the work constructs its own dataset based on preferred and non-preferred demonstrations. My concerns are around the design of the dataset. The paper does not provide a detailed breakdown of the dataset and the number of safe and unsafe actions. Tables 1 and 3 show the number of non-preferred demonstrations, which is significantly smaller than the number of preferred demonstrations (50 non-preferred demonstrations against 1000 preferred demonstrations). While the authors make an attempt to compare algorithms on different dataset sizes in Figure 8, these comparisons are still well in the low-data regime. In my opinion, the dataset is heavily biased and is not reflective of other benchmarks and real-world applications in safety. The paper could evaluate algorithms on different kinds of datasets with different behavior qualities. For instance, datasets could be constructed from replay buffers of trained Lagrangian agents perturbed with Gaussian noise, or from a balanced mix of safe and unsafe actions based on constraint satisfaction.
* **Empirical Evaluation:** Experiments do not comprehensively evaluate the efficacy of the reduced convex minimization problem. The central contribution of the paper is estimating the stationary distribution corrections of the safe policy. This could be evaluated by ablating the Lagrangian with a constraint in which the stationary distributions are absent or non-preferred demonstrations are not utilized. In its current form, empirical evaluation does not throw light on the performance or utility of the proposed objective.
* **Clarity of Derivation:** I am having trouble following the main derivation of Lagrangian $\mathcal{L}(w, \nu ; r_{\alpha})$. Specifically, it would be helpful if authors could explain how they reached step 3 to obtain $w(s,a)$ and $r(s,a)$. It might be helpful to cover intermediate steps in the Appendix or provide a short note as footnote for the reader.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Can you please provide a concrete explanation for comparing SafeDICE to generic Imitation Learning baselines? Can you compare SafeDICE to relevant constrained optimization baselines such as CPO, PPO-Lagrangian, TRPO and LAMBDA on small toy tasks?
* What is the breakdown of the dataset? How many safe and unsafe actions does the data consist of? Do samples have cost variables as well? Can you please explain the reason for the lower number of non-preferred demonstrations? How does SafeDICE perform when the number of non-preferred demonstrations is increased? Can the evaluation accommodate different kinds of datasets?
* How does SafeDICE perform in the absence of stationary distribution constraint? How does the performance vary in the absence of preferred/non-preferred demonstrations?
* Can you please explain how $w(s,a)$ and $r(s,a)$ were obtained?
## Minors
* line 9: learns safe -> learns a safe
* line 43: what does DWBC stand for?
* line 49: what is the degeneration issue?
* line 52: learns safe policy -> learns a safe policy
* line 65: I believe the formulation is a Constrained MDP (CMDP) since the problem setting has cost constraints of the form $\mathcal{C}: S \times A \rightarrow \mathbb{C}$
* line 89: Recent offline -> A recent offline
* line 149: optimization of Eq. 7
* Proposition 3.2: what is $\epsilon$?
* line 258: different two -> two different
[1]. As et. al., Constrained Policy Optimization via Bayesian World Models, ICLR 2022.
[2]. Kumar et. al., Conservative Q-Learning for Offline Reinforcement Learning, NeurIPS 2021.
[3]. Bhardhwaj et. al., Conservative Safety Critics for Exploration, ICLR 2021.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper would benefit from a discussion on limitations. The authors could highlight the gap in the reward-cost trade-off and the growing number of non-preferred demonstrations as potential limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and comments. Please feel free to ask any additional follow-up questions.
**[Responses to Weaknesses]**
**(1) (Choice of Baselines)**
Please note that the setting we consider in the paper is **offline safe imitation learning, not constrained RL**, and **there is no reward and cost information** in the given offline dataset. Therefore, it is impossible to consider this problem as a constrained optimization problem, and also impossible to consider **online/offline constrained RL** algorithms (i.e. CPO, PPO-Lagrangian, TRPO, and LAMBDA) as our baseline methods.
In the problem setting we consider in the paper, we only assume a small number of **trajectory-level labeled** non-preferred demonstrations $D^N$ and unlabeled demonstrations $D^U$ which consist of both preferred and non-preferred demonstrations. Thus, we consider learning a reward function based on preference and using it to perform offline IL/RL. DWBC is one of the representative offline imitation learning algorithms that can leverage preference information from labeled non-preferred demonstrations and unlabeled demonstrations. Moreover, PPL is an offline RL baseline using learned preference-based reward and weighted behavior cloning which is one of the representative offline RL algorithms. The baseline algorithms we consider in the paper are not just simple IL algorithms, but offline IL/RL-based strong baseline algorithms that utilize preference information.
However, if you are simply curious about the performance of the oracle agent (not the comparison as baseline under fair conditions), the information of the preferred demonstration shown in Tables 1 and 3 (Appendix D.1 and D.2) is the optimal performance of the online safe RL algorithm (i.e. oracle agent). To obtain the preferred demonstration of unlabeled demonstration, we trained the policy with online constrained RL, then generate the preferred demonstration.
**(2) (Dataset Comparison)**
As mentioned in the paper (lines 265-266), please note that our problem setting assumes a small amount of **labeled** non-preferred demonstrations and a relatively large amount of unlabeled demonstrations consisting of both preferred and non-preferred demonstrations. The reason the number of **labeled** non-preferred demonstrations in the paper is set to be significantly smaller than that of unlabeled demonstrations is **to consider an experimental setting similar to real-world problems**: in real-world scenarios, unlabeled demonstrations are relatively easy to obtain, but labeled demonstrations are expensive and difficult to obtain.
And, for the **unlabeled** demonstrations used in all experiments, the ratio of preferred demonstrations to non-preferred demonstrations is 1:1 (ex. For the Walker domain, the number of **labeled** non-preferred demonstrations = 10, the number of **unlabeled** non-preferred demonstrations = 1000, and the number of **unlabeled** preferred demonstrations = 1000). Therefore, the datasets we used in the paper are not biased. We will add these details on the configuration of labeled and unlabeled demonstrations to the final version of the paper.
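The dataset composition described above (using the Walker numbers) can be sketched roughly as follows; all names and the trajectory representation are our own illustration, not the paper's code:

```python
import random

def build_datasets(pref_trajs, nonpref_trajs,
                   n_labeled_N=10, n_unlabeled_P=1000, n_unlabeled_N=1000,
                   seed=0):
    # D_N: small set of trajectory-level labeled non-preferred demos.
    D_N = nonpref_trajs[:n_labeled_N]
    # D_U: unlabeled mix (1:1 preferred/non-preferred in the paper's setup);
    # labels are discarded by pooling and shuffling the trajectories.
    D_U = (pref_trajs[:n_unlabeled_P]
           + nonpref_trajs[n_labeled_N:n_labeled_N + n_unlabeled_N])
    random.Random(seed).shuffle(D_U)
    return D_N, D_U
```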
To avoid misunderstanding, the Figure 8 results mentioned in the review are experiments with varying amounts of **labeled** demonstrations to show the impact of **labeled** non-preferred demonstrations.
In order to evaluate more varied configurations of **unlabeled** demonstrations, we also conducted additional experiments with unlabeled demonstrations $D^U$ consisting of different ratios of preferred and non-preferred demonstrations ($(|D^U_P|, |D^U_N|) \in \[(1000, 1000), (500, 1000), (1000, 500), (250, 1000), (1000, 250)\]$, where $|D^U_P|$ and $|D^U_N|$ denote the number of preferred and non-preferred demonstrations in $D^U$, respectively). As can be seen from the uploaded PDF (Figure 1), the results show that SafeDICE outperforms the baselines in all settings with various configurations of **unlabeled** demonstrations. These results show that SafeDICE can be applied effectively in various real-world scenarios regardless of the preferred/non-preferred ratio of the unlabeled demonstrations. We will add these results and explanations to the final version of the paper.
**(3) (Empirical Evaluation)**
We conducted additional experiments to evaluate the impact of the stationary distribution constraint and labeled non-preferred demonstrations. As can be seen from the uploaded PDF (Figure 2), the performance is degraded both in the absence of stationary distribution constraints and in the case of not using labeled non-preferred demonstrations. We will add this result and explanation to the final version of the paper.
**(4) (Clarity of Derivation)**
We added derivation details to the PDF and will add them to the final version of the paper.
**[Responses to Questions]**
**(1) (Baselines of imitation learning and constrained RL)**
See the response of **(1) (Choice of Baselines)**
**(2.1) (Breakdown of the dataset & number of safe/unsafe actions)**
See the response of **(2) (Dataset Comparison)**
In the problem setting we consider in the paper, safe (preferred) and unsafe (non-preferred) are not defined at the action-level, but only defined at the trajectory-level.
**(2.2) (Cost variables)**
Please note that the setting we consider in the paper is **offline safe imitation learning, not constrained RL**, and **there are no reward and cost variables** in the given offline dataset.
**(3) (Absence of stationary distribution constraint and preferred/non-preferred demonstrations)**
See the response of **(3) (Empirical Evaluation)** and **(2) (Dataset Comparison)**
**(4) (Derivation of $w(s,a)$ and $r(s,a)$)**
We added derivation details to the PDF and will add them to the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Comments
Comment: I thank the authors for providing a detailed response. After going through authors responses and other reviewers' comments, my concerns regarding choice of baselines and dataset ablations still remain unaddressed.
* **Choice of Baselines**- The authors mention that the problem is cast as an offline safe imitation learning setting. To the best of my knowledge, all safe imitation/reinforcement learning problems are solved under the CMDP paradigm. This is because the notion of costs and cost violations is the only means for assessing the safety of an autonomous system. With that said, constrained methods such as CPO, PPO-Lagrangian, TRPO and LAMBDA are all safe IL/RL methods which construct Lagrangian objectives similar to the one proposed by SafeDICE. Thus, it is only fair that an offline Lagrangian objective be compared to existing Lagrangian objectives in the literature. Furthermore, the datasets themselves are constructed with a constraint-satisfaction criterion using SAC and PPO-Lagrangian agents for RWRL and Safety-Gym, respectively. If the authors wish not to compare with prior safety methods, then the problem should not be cast as a safe IL/RL problem nor evaluated for costs and cost violations. Similarly, datasets must then also be constructed using alternate heuristics instead of constraints. In that case, the work would benefit from other metrics that measure the deviation of a policy from preferred behaviors. In any other case, comparison of SafeDICE with existing safety methods utilizing constrained optimization is paramount.
* **Dataset Comparison**- The empirical evaluation provides suitable evidence for SafeDICE's ability to learn from limited preferred demonstrations. However, I am still struggling to understand the dataset splits and their variation. The authors mention that unlabeled demonstrations are split in a 1:1 ratio. Furthermore, experiments in Figure 1 (pdf) evaluate SafeDICE for different unlabeled splits. But what is the split of labeled demonstrations? And how is this split varied in the ablations of Figure 8? Additionally, what happens if the dataset has a noisy composition (e.g., Gaussian noise or inaccurate constraint satisfaction)?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ERpc (1/3)
Comment: Thank you very much for acknowledging our rebuttal and further comments.
**[Choice of Baselines]**
We apologize for the confusion between our problem setting (safe imitation learning) and CMDP, and we will revise the paper to make the distinction clearer in the final version.
However, we respectfully disagree with the reviewer that **all safe imitation/reinforcement learning problems are solved under the CMDP paradigm**. The notion of safety in RL is **not** limited to CMDP, and solving CMDP is just one of the diverse ways to consider safety in RL. Rather, Safe RL refers to a broader set of techniques and approaches in RL that aim to ensure that the agent behaves in a safe and reliable manner (e.g. safe exploration strategies, optimization objectives beyond expected return, etc.), and it has been manifested in various forms, including risk-sensitivity [1,2,3], robust MDP [4,5], constrained MDP, and more. For more discussions about safe RL, we refer to the comprehensive survey paper [6]. Similarly, for safe imitation learning, diverse safety criteria have been considered such as risk-sensitivity [7,8], high-confidence performance bounds [9,10], and so on, besides CMDP [11].
In this paper, we consider safe imitation learning in an offline setting, where the agent should learn a safe behavior only from the pre-collected demonstrations (i.e. unlabeled demonstrations that contain both preferred and non-preferred demonstrations & labeled non-preferred demonstrations; see Figure 1 in the paper) without interaction with the environment. Since we are considering imitation learning (**not** RL!), we assume that each demonstration $\tau = ( s_0, a_0, s_1, a_1, \ldots s_T, a_T )$ in the datasets does **not** have any per-timestep reward/cost annotation. This problem setting is appealing when the reward/cost design can be challenging. In contrast to the standard imitation learning that aims to mimic the expert/preferred demonstrations [12, 13], we aim to learn a policy that is **negation** of the non-preferred demonstrations, which serve as safety guidelines (e.g. don't do these undesirable behaviors).
Therefore, constrained RL algorithms **cannot be considered** baseline algorithms to be compared **under fair conditions**, since they require a much larger amount of information (i.e., reward and cost annotations for every state-action pair, $D = \{ (s,a,r,c,s')_t \}$) to run, whereas SafeDICE requires only a few rare annotations (e.g., only dozens of labeled non-preferred demonstrations in our experiments).
Nevertheless, at the request of the reviewer, we conducted additional experiments for an offline constrained RL method. The referenced CPO, PPO-Lagrangian, etc. are online algorithms and thus cannot be directly applied to the offline setting. We instead ran COptiDICE [14], the state-of-the-art offline constrained RL algorithm, using the datasets that are **augmented with additional (ground-truth) reward/cost annotations**. The results are summarized in the next comment (see comment **Response to Reviewer ERpc (2/3)**).
[1] Howard and Matheson, Risk-sensitive Markov decision processes, Management Science, 1972
[2] Chow et al., Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach, NIPS 2015.
[3] Urpi et al., Risk-Averse Offline Reinforcement Learning, ICLR 2021.
[4] Iyengar, Robust dynamic programming, Mathematics of Operations Research 2005.
[5] Tamar et al., Scaling up robust mdps using function approximation, ICML 2014.
[6] Garcia and Fernandez, A Comprehensive Survey on Safe Reinforcement Learning, JMLR 2015.
[7] Majumdar et al., Risk-sensitive inverse reinforcement learning via coherent risk models, Robotics: Science and Systems 2017.
[8] Lacotte et al., Risk-Sensitive Generative Adversarial Imitation Learning, AISTATS 2019.
[9] Brown et al., Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning, AAAI 2018.
[10] Brown et al., Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences, ICML 2020.
[11] Malik et al., Inverse Constrained Reinforcement Learning, ICML 2021.
[12] Geon-Hyeong Kim et al., DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations, ICLR 2022.
[13] Haoran Xu et al., Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstration, ICML 2022.
[14] Jongmin Lee et al., COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation, ICLR 2022
---
Reply to Comment 1.1.2:
Title: Response to Reviewer ERpc (2/3)
Comment: |[RWRL-Cartpole]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 100.24 $\pm$ 0.40 | 184.44 $\pm$ 0.94 |0.48 $\pm$ 0.00 | 392.75 $\pm$ 2.72 |
|DWBC|99.78 $\pm$ 0.55| 189.97 $\pm$ 3.21 |0.47 $\pm$ 0.00 | 408.71 $\pm$ 4.01 |
|PPL|83.70 $\pm$ 0.87| 229.80 $\pm$ 1.64 |0.48 $\pm$ 0.01 | 440.68 $\pm$ 3.49 |
|DExperts|100.28 $\pm$ 0.38| 186.15 $\pm$ 2.29 |0.48 $\pm$ 0.01 | 397.05 $\pm$ 3.11 |
| SafeDICE |99.91 $\pm$ 0.60| 154.88 $\pm$ 1.79 |0.34 $\pm$ 0.00 | 404.22 $\pm$ 1.56 |
|COptiDICE|**107.95 $\pm$ 2.10**|**97.59 $\pm$ 7.21** |**0.06 $\pm$ 0.01** | **183.63 $\pm$ 6.18** |
|[RWRL-Walker]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 98.41 $\pm$ 0.11 | 152.25 $\pm$ 5.33 |0.22 $\pm$ 0.01 | 531.07 $\pm$ 7.53 |
|DWBC|98.99 $\pm$ 0.10| 105.63 $\pm$ 3.85 |0.11 $\pm$ 0.01 | 378.01 $\pm$ 20.88 |
|PPL|99.02 $\pm$ 0.10| 128.46 $\pm$ 4.84 |0.17 $\pm$ 0.01 | 478.75 $\pm$ 12.53 |
|DExperts|95.21 $\pm$ 0.75| 173.20 $\pm$ 11.22 |0.27 $\pm$ 0.03 | 518.38 $\pm$ 5.25 |
| SafeDICE |**99.67 $\pm$ 0.13**| **68.90 $\pm$ 0.83** |**0.00 $\pm$ 0.00** | **124.54 $\pm$ 2.56** |
|COptiDICE|98.07 $\pm$ 0.34|77.86 $\pm$ 2.14 |0.01 $\pm$ 0.00 | 159.91 $\pm$ 8.90 |
|[RWRL-Quadruped]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| **100.19 $\pm$ 0.08** | 178.10 $\pm$ 5.20 |0.38 $\pm$ 0.02 | 328.10 $\pm$ 1.83 |
|DWBC|99.87 $\pm$ 0.16| 167.15 $\pm$ 3.32 |0.34 $\pm$ 0.02 | 316.86 $\pm$ 1.74 |
|PPL|99.87 $\pm$ 0.09| 169.93 $\pm$ 5.06 |0.35 $\pm$ 0.03 | 319.77 $\pm$ 1.70 |
|DExperts|97.05 $\pm$ 2.62| 178.75 $\pm$ 4.78 |0.38 $\pm$ 0.03 | 317.55 $\pm$ 4.40 |
| SafeDICE |99.68 $\pm$ 0.25| 148.21 $\pm$ 2.84 |0.24 $\pm$ 0.01 | 308.71 $\pm$ 2.90 |
|COptiDICE|87.37 $\pm$ 3.19|**130.21 $\pm$ 5.20** |**0.16 $\pm$ 0.03** | **263.41 $\pm$ 17.00** |
|[SafetyGym-Goal]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 94.08 $\pm$ 0.60 | 51.47 $\pm$ 1.49 |0.38 $\pm$ 0.02 | 109.66 $\pm$ 1.63 |
|DWBC|93.93 $\pm$ 0.53| 47.96 $\pm$ 1.77 |0.31 $\pm$ 0.02 | 108.79 $\pm$ 2.12 |
|PPL|94.43 $\pm$ 0.66| 52.31 $\pm$ 1.52 |0.37 $\pm$ 0.02 | 111.41 $\pm$ 2.37 |
|DExperts|87.76 $\pm$ 4.41| 50.82 $\pm$ 1.55 |0.36 $\pm$ 0.02 | 112.62 $\pm$ 1.54 |
| SafeDICE |92.04 $\pm$ 0.44| **39.49 $\pm$ 0.94** |**0.21 $\pm$ 0.02** | **93.35 $\pm$ 1.45** |
|COptiDICE|92.98 $\pm$ 1.13|48.79 $\pm$ 0.80 |0.32 $\pm$ 0.01 | 107.25 $\pm$ 0.78 |
|[SafetyGym-Button]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 88.64 $\pm$ 1.08 | 98.18 $\pm$ 2.19 |0.49 $\pm$ 0.01 | 220.87 $\pm$ 3.48 |
|DWBC|85.53 $\pm$ 0.80| 96.31 $\pm$ 2.38 |0.47 $\pm$ 0.02 | 225.49 $\pm$ 3.52 |
|PPL|86.76 $\pm$ 1.06| 99.97 $\pm$ 2.19 |0.50 $\pm$ 0.02 | 229.46 $\pm$ 3.30 |
|DExperts|86.28 $\pm$ 1.35| 97.30 $\pm$ 1.70 |0.49 $\pm$ 0.01 | 225.93 $\pm$ 5.34 |
| SafeDICE |71.48 $\pm$ 1.33| **70.67 $\pm$ 1.61** |**0.30 $\pm$ 0.01** | **185.64 $\pm$ 1.54** |
|COptiDICE|**94.09 $\pm$ 1.68**|111.10 $\pm$ 3.53 |0.58 $\pm$ 0.02 | 238.46 $\pm$ 3.44 |
All results indicate averages and standard errors over 5 trials. The results show that SafeDICE is competitive with (or even outperforms in some domains) the constrained offline RL algorithm COptiDICE, even though SafeDICE uses only a much smaller amount of annotation information. We will add these results and explanations to the final version of the paper.
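For readers unfamiliar with the CVaR 10% Cost column reported above: the rebuttal does not include the evaluation code, but a standard definition of this metric is the mean cost over the worst 10% of evaluation episodes. A minimal sketch (the function name and NumPy usage are ours, not the authors'):

```python
import numpy as np

def cvar_cost(episode_costs, alpha=0.10):
    # Conditional Value-at-Risk of the cost: mean over the worst
    # (highest-cost) alpha fraction of evaluation episodes.
    costs = np.sort(np.asarray(episode_costs, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(costs))))  # size of the cost tail
    return float(costs[:k].mean())

# With 100 episodes, CVaR 10% averages the 10 highest episode costs.
print(cvar_cost(list(range(1, 101))))  # -> 95.5
```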
---
Reply to Comment 1.1.3:
Title: Response to Reviewer ERpc (3/3)
Comment: **[Dataset Comparison]**
Q) But what is the split of labeled demonstrations? And how is this split varied in ablations of Figure 8?
To avoid misunderstanding, we emphasize again that the **labeled demonstrations** consist of **only non-preferred demonstrations (without any preferred demonstrations)** as illustrated in Figure 1 of the paper. Likewise, in the ablations of Figure 8, the labeled demonstrations consist only of non-preferred demonstrations.
Q) Additionally, what happens if the dataset has a noisy composition (i.e., Gaussian noise or inaccurate constraint satisfaction)?
We additionally conducted experiments on the Walker domain with a dataset that has a noisy composition (i.e., Gaussian noise). We add Gaussian noise $\epsilon \sim \mathcal{N}(0,\sigma)$ ($\sigma \in \\{0.001, 0.005, 0.01\\}$) to the states and actions in the offline dataset and run SafeDICE and the baseline algorithms. The results are summarized in the tables below:
|w/o noise|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 98.41 $\pm$ 0.11 | 152.25 $\pm$ 5.33 |0.22 $\pm$ 0.01 | 531.07 $\pm$ 7.53 |
|DWBC|98.99 $\pm$ 0.10| 105.63 $\pm$ 3.85 |0.11 $\pm$ 0.01 | 378.01 $\pm$ 20.88 |
|PPL|99.02 $\pm$ 0.10| 128.46 $\pm$ 4.84 |0.17 $\pm$ 0.01 | 478.75 $\pm$ 12.53 |
|DExperts|95.21 $\pm$ 0.75| 173.20 $\pm$ 11.22 |0.27 $\pm$ 0.03 | 518.38 $\pm$ 5.25 |
| SafeDICE |**99.67 $\pm$ 0.13**| **68.90 $\pm$ 0.83** |**0.00 $\pm$ 0.00** | **124.54 $\pm$ 2.56** |
|$\epsilon \sim \mathcal{N}(0, 0.001)$|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 99.04 $\pm$ 0.20 | 141.38$\pm$4.24 |0.20$\pm$0.01| 515.63$\pm$5.19 |
|DWBC|99.32$\pm$0.15| 105.52$\pm$0.31 |0.11$\pm$0.00| 397.94$\pm$7.22 |
|PPL|99.23$\pm$0.15| 125.94$\pm$3.34 |0.16$\pm$0.01| 476.51$\pm$11.22|
|DExperts|99.15$\pm$0.06| 142.74$\pm$2.08 |0.20$\pm$0.01| 525.37$\pm$4.45|
| SafeDICE |**100.01$\pm$0.12**| **68.14$\pm$0.75** |**0.01$\pm$0.00** | **130.90$\pm$3.10** |
|$\epsilon \sim \mathcal{N}(0, 0.005)$|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 79.86$\pm$0.67| 264.69$\pm$3.76 |0.65$\pm$0.01| 451.98$\pm$5.56 |
|DWBC|80.64$\pm$0.90| 220.49$\pm$2.55 |0.49$\pm$0.01| 388.21$\pm$1.28|
|PPL|80.36$\pm$1.01| 241.55$\pm$4.92 |0.57$\pm$0.02| 417.08$\pm$4.83|
|DExperts|78.35$\pm$0.38| 265.49$\pm$2.49 |0.66$\pm$0.01| 443.93$\pm$3.89|
| SafeDICE |**84.92$\pm$0.37**| **166.36$\pm$5.52** |**0.21$\pm$0.03** | **279.55$\pm$8.43** |
|$\epsilon \sim \mathcal{N}(0, 0.01)$|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 45.87$\pm$1.40| 393.61$\pm$5.98 |**0.98$\pm$0.00**| 487.08$\pm$6.35|
|DWBC|48.34$\pm$1.69| 352.27$\pm$10.08 |0.93$\pm$0.02| 456.19$\pm$9.29|
|PPL|44.79$\pm$1.07| 352.19$\pm$9.14 |0.92$\pm$0.03| 462.84$\pm$3.43|
|DExperts|43.03$\pm$2.01| 376.97$\pm$9.72 |0.95$\pm$0.02| 480.84$\pm$6.94|
| SafeDICE |**58.88$\pm$0.65**| **341.88$\pm$7.27** |0.97$\pm$0.01| **428.15$\pm$6.33** |
All results indicate averages and standard errors over 5 trials. The results show that the performance of all algorithms deteriorates as the noise level of the dataset increases. However, even when the dataset has a noisy composition, SafeDICE outperforms the other baseline algorithms.
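The noise-injection protocol described above can be sketched as follows (a hypothetical reconstruction assuming NumPy arrays for the dataset; the authors' actual preprocessing code is not shown):

```python
import numpy as np

def perturb_dataset(states, actions, sigma, seed=0):
    # Add i.i.d. Gaussian noise eps ~ N(0, sigma) to every state and
    # action in the offline dataset, as in the ablation above.
    rng = np.random.default_rng(seed)
    noisy_states = states + rng.normal(0.0, sigma, size=states.shape)
    noisy_actions = actions + rng.normal(0.0, sigma, size=actions.shape)
    return noisy_states, noisy_actions
```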
Please let us know if any further questions or concerns come up, and we would be happy to clarify them anytime during the discussion period. | Summary: This paper proposes an offline safe imitation method to avoid non-preferred demonstrations. The mixture coefficient is alpha, and the paper provides an effective way to obtain this hyperparameter instead of relying on hyperparameter search techniques. The paper uses scarce but labeled non-preferred demonstrations from the non-preferred policy.
Strengths: The paper models the stationary distributions as a mixture of the preferred policy and the non-preferred policy. The mixture coefficient is alpha, and the paper provides an effective way to obtain this hyperparameter instead of relying on hyperparameter search techniques.
The paper is easy to follow and understand.
Claims made at the end of the introduction are supported by experimental results.
Weaknesses: The problem setup is unclear to me. In the offline imitation learning setting, there is a line of research considering robust imitation learning (such as [1]) to discard non-preferred samples. The authors may need to discuss why labeled non-preferred demonstrations are necessary in this situation.
Furthermore, more discussion can be added to explain why SafeDICE is not sensitive to the number of labeled non-preferred demonstrations.
[1] Robust Imitation Learning from Corrupted Demonstrations, https://arxiv.org/abs/2201.12594
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Since the additional cost of hyperparameter search techniques has an influence on the computational cost rather than the final performance, it is not clear why SafeDICE performs better with the $\alpha$ obtained by Eq. 15. Can you give more explanations about the results between SafeDICE and DWBC?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As stated above, I would like to compare this method with those robust imitation learning methods in the offline setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and comments. Please feel free to ask any additional follow-up questions.
**[Responses to Weaknesses]**
**(1.1) (Problem Setup & The need for labeled non-preferred demonstrations)**
In our work, we focus on solving safe imitation learning in an offline setting. We assume scarce but **labeled** non-preferred demonstrations $D^N$, and abundant but **unlabeled** demonstrations $D^U$ of both preferred and non-preferred behavior. Our goal is to learn a safe policy that follows the preferred behavior while avoiding the non-preferred behavior.
Please note that our problem setup and goal are clearly different from robust imitation learning (such as [1]). Robust imitation learning [1] assumes only unlabeled demonstrations and focuses on filtering out **outliers** from the unlabeled demonstrations. However, in order to filter out outliers, robust imitation learning [2] assumes that the **majority of the unlabeled demonstrations are preferred demonstrations**. In other words, if the ratio of non-preferred demonstrations in the unlabeled demonstrations is greater than 0.5, it cannot be applied. Therefore, the cases in which preferred behavior can be learned with only unlabeled demonstrations, without any labeled demonstrations, are very limited.
On the other hand, we assume a small number of labeled non-preferred demonstrations, but **no assumptions are needed for the unlabeled demonstrations** (e.g., the ratio of preferred to non-preferred demonstrations in the unlabeled demonstrations). To show that SafeDICE successfully works regardless of the configuration of the unlabeled demonstrations, we additionally conducted experiments for various configurations of unlabeled demonstrations $D^U$ consisting of different ratios of preferred and non-preferred demonstrations ($(|D^U_P|, |D^U_N|) \in \[(1000, 1000), (500, 1000), (1000, 500), (250, 1000), (1000, 250)\]$, where $|D^U_P|$ and $|D^U_N|$ denote the number of preferred and non-preferred demonstrations in $D^U$, respectively). As can be seen from the uploaded PDF (Figure 1), the results show that SafeDICE outperforms the baselines in all settings with various configurations of **unlabeled** demonstrations. These results show that SafeDICE can be applied more effectively in various real-world scenarios regardless of the preferred/non-preferred ratio of the unlabeled demonstrations. We will add these results and explanations to the final version of the paper.
**(1.2) (Robust Imitation Learning)**
As the reviewer pointed out, BCND [2] and RBC [1] also share the aim of learning a safe policy, similar to SafeDICE. However, BCND and RBC focus on **filtering out outliers**, whereas SafeDICE imitates provided demonstrations while excluding non-preferred demonstrations.
Both BCND and RBC aim to learn an optimal policy using only the provided unlabeled demonstrations (unlabeled demonstrations are referred to as noisy demonstrations in [2] and as corrupted demonstrations in [1], respectively). **Assuming that the majority of the unlabeled demonstrations are preferred demonstrations**, BCND and RBC recover an optimal policy via iterative mode-seeking and Median-of-Means (MOM), respectively. While these approaches are effective for filtering out outliers, they might face challenges when preferred demonstrations are not the majority of the unlabeled demonstrations.
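As a generic illustration of the Median-of-Means idea mentioned above (not [1]'s actual implementation; the block count and data layout are our assumptions), the sketch below shows how the estimator resists a minority of corrupted samples but fails once they become the majority, which is exactly the limitation argued here:

```python
import numpy as np

def median_of_means(values, num_blocks=5):
    # Split samples into blocks, average each block, and take the
    # median of the block means -- robust to a minority of outliers.
    blocks = np.array_split(np.asarray(values, dtype=float), num_blocks)
    return float(np.median([b.mean() for b in blocks]))

clean_majority = [1.0] * 9 + [1000.0]     # one corrupted sample out of 10
print(median_of_means(clean_majority))    # -> 1.0 (outlier suppressed)
corrupt_majority = [1.0] * 4 + [1000.0] * 6
print(median_of_means(corrupt_majority))  # -> 1000.0 (assumption violated)
```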
In contrast, SafeDICE aims to learn a safe policy that follows the preferred behavior using non-preferred demonstrations and unlabeled demonstrations. We want to emphasize that unlabeled demonstrations could include over half of the non-preferred demonstrations (see the results in the uploaded PDF, Figure 1).
We will add this discussion on robust imitation learning to the related work section of the final version of the paper.
[1] Liu, Liu, et al. "Robust imitation learning from corrupted demonstrations." arXiv 2022.
[2] Sasaki, Fumihiro, and Ryota Yamashina. "Behavioral cloning from noisy demonstrations." ICLR 2021.
**(2) (Sensitive to the number of labeled non-preferred demonstrations)**
In the case of preference-based reward learning (i.e., reward learning in PPL), since trajectory-level samples are used for reward learning (see Eq (40) in Appendix C.3), it is relatively sensitive to the number of labeled non-preferred demonstrations $|D^N|$. On the other hand, DWBC and SafeDICE use state-action-pair-level samples for reward learning, so they learn relatively sample-efficiently and are not sensitive to $|D^N|$. For example, if $|D^N|=10$ and timestep_of_trajectory=1000, only 10 non-preferred samples are used in reward learning for PPL, whereas in the case of DWBC and SafeDICE, 10000 non-preferred samples are used.
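The sample-count argument above can be made concrete with a small sketch (the function and its names are illustrative only, not from the paper):

```python
def num_reward_learning_samples(num_labeled_trajs, steps_per_traj, level):
    # Trajectory-level reward learning (e.g. preference-based, as in PPL)
    # sees one sample per labeled trajectory; transition-level learning
    # (as in DWBC / SafeDICE) sees every state-action pair.
    if level == "trajectory":
        return num_labeled_trajs
    if level == "transition":
        return num_labeled_trajs * steps_per_traj
    raise ValueError(f"unknown level: {level}")

print(num_reward_learning_samples(10, 1000, "trajectory"))  # -> 10
print(num_reward_learning_samples(10, 1000, "transition"))  # -> 10000
```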
**[Responses to Questions]**
**(Related to hyperparameter search)**
In the offline imitation learning setting, where further environment interactions are not available, hyperparameter search through evaluation is not possible. Therefore, in offline imitation learning, it is very important for a method to work without hyperparameter search. As can be seen from the results in Appendix E.2 (Figures 6 and 7), the baseline algorithms are hyperparameter-sensitive, and finding the optimal hyperparameter is very challenging and expensive.
**(Comparison with DWBC)**
As shown in Appendix E.2 (Figures 6 and 7), since DWBC is hyperparameter-sensitive, finding the optimal hyperparameter is difficult even if a hyperparameter search is allowed.
Beyond the question of being hyperparameter-free or not, the main difference between SafeDICE and DWBC is that SafeDICE essentially leverages the non-preferred demonstrations **in the space of stationary distributions (i.e., it directly estimates the stationary distribution corrections of the policy that imitates the demonstrations excluding the non-preferred behavior)**, unlike DWBC, which leverages them in the space of the policy or in the learning process of the discriminator.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Comments
Comment: Thanks for the detailed reply. The authors' discussion on robust imitation learning and the comparisons to [1] and [2] are convincing and alleviate my concern. I suggest adding this part to the final version of the paper, which would be very useful for the readers. I will update the scores accordingly. | Summary: This paper focuses on the offline safe imitation learning (IL) setting. There are several unique properties of the problem setting: 1) there exist non-preferred (constraint-violating) demonstrations, 2) there exists massive unlabeled data for which you don't know whether it is preferred or non-preferred, 3) you have neither a reward function nor an interactive environment.
The proposed SafeDICE method builds upon DIstribution Correction Estimation (DICE) but addresses several challenges. The first is how to take non-preferred demonstrations into account. To do so, we assume an underlying unknown mix ratio alpha, with the unlabeled dataset being a mixture of non-preferred and preferred demonstrations. Making this assumption allows us to follow the DICE workflow. At the same time, we can use a preference-based reward function to estimate the stationary distribution d(s, a) of a state-action pair. The reward function is similar to those used in inverse RL and can be obtained from a classifier discriminating whether a datapoint comes from the non-preferred dataset or not. The authors derive a closed-form solution to avoid training instability and provide an approximation to the unknown mix ratio to avoid hyperparameter search.
Experiments show the proposed method can achieve good performance while greatly reducing constraint violations in the Real-World RL suite and SafetyGym.
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed method is solid, clear and looks sound to me.
3. The problem setting is novel and important.
Weaknesses: 1. The experiments showcase the superior performance of the proposed method, but more could be added to study the particular behavior of the proposed method in different settings, such as when you have a lot of / scarce non-preferred demonstrations.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Some ablation studies should be added to support the claim. For example, what if we don't use the approximation of mix ratio alpha but really do a hyperparameter search?
2. Can we have an experiment/discussion section discussing the impact of labeled non-preferred data? Like, if we have plenty of such data and if we have very little of it, what would happen?
3. Can we compare/discuss to offline RL methods where you have a reward function? In offline RL the data is not always preferred, as the authors discussed for those offline IL methods, so I wonder whether offline RL shares some settings with the current problem. When comparing online safe RL methods, a competitive baseline is to use online RL with reward shaping (the constraint violation can be treated as a negative reward). Would this method also be competitive in the offline safe RL setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: It seems that not much effort is put into the paper to discuss the limitations. One I can think of is that the collection of non-preferred data is in some sense still dangerous. The "non-preferred data efficiency" is not yet discussed in the paper, that is, how efficiently the proposed method can learn from how much non-preferred data. This is also important since, as the authors said, collecting such data is costly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and comments. Please feel free to ask any additional follow-up questions.
**[Responses to Weaknesses]**
**(1) (Different settings with non-preferred demonstrations)**
Thank you for your suggestion of an additional experiment varying the amount of **labeled** non-preferred demonstrations. Our paper already provides the experimental results you suggested in Section 5.1 (lines 284-295) and Appendix E.3 (Figures 8 and 9). As can be seen from the results, as the number of labeled non-preferred demonstrations $|D^N|$ decreases, SafeDICE shows a greater difference from the other baseline algorithms, outperforming them significantly.
In order to evaluate our algorithm on a wider variety of configurations of **unlabeled** demonstrations, we also conducted additional experiments for various configurations of unlabeled demonstrations $D^U$ consisting of different ratios of preferred and non-preferred demonstrations ($(|D^U_P|, |D^U_N|) \in \[ (1000, 1000), (500, 1000), (1000, 500), (250, 1000), (1000, 250)\]$, where $|D^U_P|$ and $|D^U_N|$ denote the number of preferred and non-preferred demonstrations in $D^U$, respectively). As can be seen from the uploaded PDF (Figure 1), the results show that SafeDICE outperforms the baselines in all settings with various configurations of **unlabeled** demonstrations. These results show that SafeDICE can be applied more effectively in various real-world scenarios regardless of the preferred/non-preferred ratio of the unlabeled demonstrations. We will add these results and explanations to the final version of the paper.
**[Responses to Questions]**
**(1) (Experiment with hyperparameter search for $\alpha$)**
Our paper already provides a comparison with the results of performing a hyperparameter search in Appendix E.4 (Figures 10 and 11). As shown in Figures 10 and 11, the results show various performances depending on the value of $\alpha$, and the result with $\alpha$ value selected by Eq. (15) shows the best performance.
**(2) (Impact of labeled non-preferred data)**
As mentioned in (Responses to Weaknesses 1), our paper already provides the experimental results you suggested in Section 5.1 (lines 284-295) and Appendix E.3 (Figures 8 and 9). We observed that, for all algorithms, the performance in terms of safety gradually degrades as the number of labeled non-preferred demonstrations $|D^N|$ decreases. However, as $|D^N|$ decreases, SafeDICE shows a greater difference from the other baseline algorithms, outperforming them significantly.
**(3) (Comparison with offline Safe RL)**
Please note that the offline imitation learning setting we consider in the paper does **not assume labels for either the reward or the cost**. We assume only a small number of trajectory-level labeled non-preferred demonstrations and unlabeled demonstrations. Therefore, it is impossible to directly apply offline safe RL algorithms. However, we can consider learning a reward function based on preferences and then using it to perform offline RL. In the paper, PPL is such an offline RL baseline, using a learned preference-based reward with weighted behavior cloning, which is one of the representative offline RL algorithms.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. My score remains unchanged. Even though you "do not assume labels for both the reward and cost", you can still run the experiment with some sort of reward and cost, as SafetyGym indeed provides these.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer kwD7
Comment: Thank you very much for acknowledging our rebuttal and for the suggestion of a comparison with offline constrained RL algorithms. At the request of the reviewer, we conducted additional experiments for the offline constrained RL method. We run COptiDICE [1], the state-of-the-art offline constrained RL algorithm, using the datasets that are **augmented with additional (ground-truth) reward/cost annotations**. Please note that, in this experiment, the constrained RL algorithm COptiDICE **cannot be considered** a baseline algorithm to be compared **under fair conditions**, since it requires a much larger amount of information (i.e., reward and cost annotations for every state-action pair, $D = \{ (s,a,r,c,s')_t \}$), whereas SafeDICE requires only a few rare annotations (e.g., only dozens of labeled non-preferred demonstrations in our experiments).
The results are summarized as follows:
|[RWRL-Cartpole]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 100.24 $\pm$ 0.40 | 184.44 $\pm$ 0.94 |0.48 $\pm$ 0.00 | 392.75 $\pm$ 2.72 |
|DWBC|99.78 $\pm$ 0.55| 189.97 $\pm$ 3.21 |0.47 $\pm$ 0.00 | 408.71 $\pm$ 4.01 |
|PPL|83.70 $\pm$ 0.87| 229.80 $\pm$ 1.64 |0.48 $\pm$ 0.01 | 440.68 $\pm$ 3.49 |
|DExperts|100.28 $\pm$ 0.38| 186.15 $\pm$ 2.29 |0.48 $\pm$ 0.01 | 397.05 $\pm$ 3.11 |
| SafeDICE |99.91 $\pm$ 0.60| 154.88 $\pm$ 1.79 |0.34 $\pm$ 0.00 | 404.22 $\pm$ 1.56 |
|COptiDICE|**107.95 $\pm$ 2.10**|**97.59 $\pm$ 7.21** |**0.06 $\pm$ 0.01** | **183.63 $\pm$ 6.18** |
|[RWRL-Walker]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 98.41 $\pm$ 0.11 | 152.25 $\pm$ 5.33 |0.22 $\pm$ 0.01 | 531.07 $\pm$ 7.53 |
|DWBC|98.99 $\pm$ 0.10| 105.63 $\pm$ 3.85 |0.11 $\pm$ 0.01 | 378.01 $\pm$ 20.88 |
|PPL|99.02 $\pm$ 0.10| 128.46 $\pm$ 4.84 |0.17 $\pm$ 0.01 | 478.75 $\pm$ 12.53 |
|DExperts|95.21 $\pm$ 0.75| 173.20 $\pm$ 11.22 |0.27 $\pm$ 0.03 | 518.38 $\pm$ 5.25 |
| SafeDICE |**99.67 $\pm$ 0.13**| **68.90 $\pm$ 0.83** |**0.00 $\pm$ 0.00** | **124.54 $\pm$ 2.56** |
|COptiDICE|98.07 $\pm$ 0.34|77.86 $\pm$ 2.14 |0.01 $\pm$ 0.00 | 159.91 $\pm$ 8.90 |
|[RWRL-Quadruped]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| **100.19 $\pm$ 0.08** | 178.10 $\pm$ 5.20 |0.38 $\pm$ 0.02 | 328.10 $\pm$ 1.83 |
|DWBC|99.87 $\pm$ 0.16| 167.15 $\pm$ 3.32 |0.34 $\pm$ 0.02 | 316.86 $\pm$ 1.74 |
|PPL|99.87 $\pm$ 0.09| 169.93 $\pm$ 5.06 |0.35 $\pm$ 0.03 | 319.77 $\pm$ 1.70 |
|DExperts|97.05 $\pm$ 2.62| 178.75 $\pm$ 4.78 |0.38 $\pm$ 0.03 | 317.55 $\pm$ 4.40 |
| SafeDICE |99.68 $\pm$ 0.25| 148.21 $\pm$ 2.84 |0.24 $\pm$ 0.01 | 308.71 $\pm$ 2.90 |
|COptiDICE|87.37 $\pm$ 3.19|**130.21 $\pm$ 5.20** |**0.16 $\pm$ 0.03** | **263.41 $\pm$ 17.00** |
|[SafetyGym-Goal]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 94.08 $\pm$ 0.60 | 51.47 $\pm$ 1.49 |0.38 $\pm$ 0.02 | 109.66 $\pm$ 1.63 |
|DWBC|93.93 $\pm$ 0.53| 47.96 $\pm$ 1.77 |0.31 $\pm$ 0.02 | 108.79 $\pm$ 2.12 |
|PPL|94.43 $\pm$ 0.66| 52.31 $\pm$ 1.52 |0.37 $\pm$ 0.02 | 111.41 $\pm$ 2.37 |
|DExperts|87.76 $\pm$ 4.41| 50.82 $\pm$ 1.55 |0.36 $\pm$ 0.02 | 112.62 $\pm$ 1.54 |
| SafeDICE |92.04 $\pm$ 0.44| **39.49 $\pm$ 0.94** |**0.21 $\pm$ 0.02** | **93.35 $\pm$ 1.45** |
|COptiDICE|92.98 $\pm$ 1.13|48.79 $\pm$ 0.80 |0.32 $\pm$ 0.01 | 107.25 $\pm$ 0.78 |
|[SafetyGym-Button]|Normalized Return|Average Cost| Cost Violation |CVaR 10% Cost|
|:--------------:|:-----------------:|:---------------:|:--------------:|:---------------:|
|BC| 88.64 $\pm$ 1.08 | 98.18 $\pm$ 2.19 |0.49 $\pm$ 0.01 | 220.87 $\pm$ 3.48 |
|DWBC|85.53 $\pm$ 0.80| 96.31 $\pm$ 2.38 |0.47 $\pm$ 0.02 | 225.49 $\pm$ 3.52 |
|PPL|86.76 $\pm$ 1.06| 99.97 $\pm$ 2.19 |0.50 $\pm$ 0.02 | 229.46 $\pm$ 3.30 |
|DExperts|86.28 $\pm$ 1.35| 97.30 $\pm$ 1.70 |0.49 $\pm$ 0.01 | 225.93 $\pm$ 5.34 |
| SafeDICE |71.48 $\pm$ 1.33| **70.67 $\pm$ 1.61** |**0.30 $\pm$ 0.01** | **185.64 $\pm$ 1.54** |
|COptiDICE|**94.09 $\pm$ 1.68**|111.10 $\pm$ 3.53 |0.58 $\pm$ 0.02 | 238.46 $\pm$ 3.44 |
All results indicate averages and standard errors over 5 trials. The results show that SafeDICE is competitive with (or even outperforms in some domains) the constrained offline RL algorithm COptiDICE, even though SafeDICE uses only a much smaller amount of annotation information. We will add these results and explanations to the final version of the paper.
Please let us know if any further questions or concerns come up and we would be happy to clarify them anytime during the discussion period.
[1] Jongmin Lee et al., COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation, ICLR 2022 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive feedback and comments. Below we restate the main clarification and experiments of our rebuttal. If you have any additional questions or concerns to our response, we are happy to provide additional responses during the rebuttal period.
**[Clarification of problem settings and baselines]**
Please note that the setting we consider in the paper is **offline safe imitation learning, not constrained RL**, and **there is no reward or cost information in the given offline dataset**. We only assume a small number of *trajectory-level* **labeled** non-preferred demonstrations and abundant **unlabeled** demonstrations. Therefore, it is impossible to cast this problem as a constrained optimization problem, and also **impossible to consider online/offline constrained RL** algorithms as our baseline methods. However, the baseline algorithms we consider in the paper are not just simple IL algorithms, but strong offline IL/RL-based baselines that **can utilize preference information**.
**[Additional experiments]**
1. In order to evaluate on a wider variety of configurations of unlabeled demonstrations, we also conducted additional experiments for various configurations of unlabeled demonstrations $D^U$ consisting of different ratios of preferred and non-preferred demonstrations ($(|D^U_P|, |D^U_N|) \in \[(1000, 1000), (500, 1000), (1000, 500), (250, 1000), (1000, 250)\]$, where $|D^U_P|$ and $|D^U_N|$ denote the number of preferred and non-preferred demonstrations in $D^U$, respectively). As can be seen from the uploaded PDF (Figure 1), the results show that SafeDICE outperforms the baselines in all settings with various configurations of unlabeled demonstrations. These results show that SafeDICE can be applied more effectively in various real-world scenarios regardless of the preferred/non-preferred ratio of the unlabeled demonstrations. We will add these results and explanations to the final version of the paper.
2. We conducted additional experiments to evaluate the impact of the stationary distribution constraint and labeled non-preferred demonstrations. As can be seen from the uploaded PDF (Figure 2), the performance is degraded both in the absence of stationary distribution constraints and in the case of not using labeled non-preferred demonstrations. We will add this result and explanation to the final version of the paper.
Pdf: /pdf/5f9bbf00c574363d52cb0256cf1e60cbc157d8ee.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Dream the Impossible: Outlier Imagination with Diffusion Models | Accept (poster) | Summary: The paper proposes a learning framework to generate outliers in the pixel space by way of diffusion models with only the in-distribution data. By learning a text-conditioned latent space based on in-distribution data, the method further samples outlier latents in low-likelihood regions. The empirical results show that training with the generated outlier images helps establish competitive performance on common OOD detection benchmarks.
Strengths: - The paper is well-written and clear to me.
- The ablation studies are sufficient.
Weaknesses: Please refer to the questions.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The OOD sampling methods are proposed in [1]. [1] uses the sampled latent for direct training while the method in this paper uses diffusion for further training. The key idea is incremental.
- About the generator: The proposed method generates OOD samples based on a powerful Stable Diffusion model, which is trained on a large-scale dataset. It is much easier to generate unseen-class images. Is there any possibility of data leakage? The prior method [2] uses a diffusion model trained on the original dataset. Does the improvement over [1] come from the strong diffusion model?
- How is the feature encoder designed? Have you tried a pre-trained CLIP image encoder for initialization? Considering that the ability of the feature encoder has a direct impact on the sampling of OOD latent.
- CIFAR-10 and ImageNet are also commonly used benchmarks. Besides CIFAR-100 and ImageNet-100, I wonder if there are any other results for these two more challenging datasets.
[1] Tao, L., Du, X., Zhu, X. and Li, Y., 2023. Non-parametric outlier synthesis.
[2] Liu, L., Ren, Y., Cheng, X. and Zhao, Z., 2022. Diffusion denoising process for perceptron bias in out-of-distribution detection.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough comments, which we address below:
**A1. Clarification on our contribution**
We summarize the non-trivial differences between NPOS and DREAM-OOD below:
- First, our key contribution is to enable the generation of high-resolution outliers for OOD detection, which was impossible with NPOS. DREAM-OOD advances the field by helping researchers visualize and understand the imagined outliers in the pixel space, **which not only offers better empirical performance but also much more interpretability on the generated outliers**. We believe this is a meaningful scientific contribution w.r.t. prior work.
- Second, our paper contributes non-trivial technical solutions to enable using a diffusion model for outlier synthesis. Naively taking the outliers generated by NPOS won't work for the diffusion model, because NPOS generates visual latent embeddings whereas the diffusion model expects a textual latent as input. While the idea of DREAM-OOD may seem natural in hindsight, it wasn't obvious to us how to generate meaningful text-latent-based outliers that one can not only feed to the diffusion model but also use to effectively regularize the classifier. To resolve this challenge, a key intellectual design of DREAM-OOD, as you recognized, is to learn a text-conditioned latent space for sampling OOD embeddings, so that the visual embeddings are aligned with the corresponding textual embeddings. Moreover, looking at the overall method design, DREAM-OOD is much more challenging than NPOS, since it seamlessly encapsulates multiple key components: learning the text-conditioned latent space, sampling, decoding, and finally learning with the imagined outliers. The entire intellectual and experimental effort to make things work wasn't trivial :)
- Lastly, our method can be used to generate ID data and improves the model generalization performance, which is not explored in NPOS. We show stronger performance than the most popular data augmentation methods, which will benefit an even broader community.
We believe the major differences support the novel contributions of our approach. Meanwhile, the novelty of our framework with diffusion models is recognized by other reviewers, such as:
> 1) "_The paper is the first to generate high-resolution outliers for OOD detection task. It is a novel use of diffusion models_." from reviewer 3rTL
> 2) "_This paper proposes a new framework...which is the first method for generating photo-realistic outliers and owns high originality_." from reviewer 94Wq
**A2. Discussion on data leakage with diffusion models**
The recent rise of large-scale generative models has enabled thousands of research projects, such as subject-driven generation [1], open-vocabulary segmentation [2], etc. Data leakage is common in all research projects that are built on modern diffusion models. Ours perhaps is not an exception. Although this is outside the focus of our current research scope, we do believe it is an important issue for the research community.
To mitigate the possible data leakage concern, one can potentially replace the diffusion model to be trained from scratch. In this vein, [3] provides a promising route. We are happy to incorporate your opinion in the revised draft with proper citations including [3].
[1] Ruiz et.al., DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023.
[2] Xu et.al., Open-vocabulary panoptic segmentation with text-to-image diffusion models, CVPR 2023.
[3] Liu et.al., Diffusion denoising process for perceptron bias in out-of-distribution detection, arXiv preprint 2211.11255.
**A3. Discussion on the improvement over NPOS**
Compared with NPOS, DREAM-OOD can generate high-resolution outliers in the pixel space. The higher-dimensional pixel space offers much more knowledge about the unknowns, providing the model with the high variability and fine-grained details of the unknowns that are missing in NPOS. Since the generated outliers are photo-realistic, they lie closer to the natural image manifold and therefore impose a more natural constraint on the network, which is not available in NPOS. As shown in Figure 5, the generated outliers are more precise in characterizing OOD data and thus improve the empirical performance.
**A4. Experiments on using CLIP for feature encoder**
We use a ResNet architecture for the feature encoder. As suggested, we tried fine-tuning the CLIP visual encoder (ViT-L/14), while keeping the other parts unchanged. The comparison is shown below, where using CLIP as the initialized feature encoder achieves performance similar to DREAM-OOD.
| Method | iNaturalist FPR95 | iNaturalist AUROC | Places FPR95 | Places AUROC | SUN FPR95 | SUN AUROC | Textures FPR95 | Textures AUROC | Average FPR95 | Average AUROC | ID ACC |
| ------ | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| DREAM-OOD | 24.10 | 96.10 | **39.87** | 93.11 | **36.88** | 93.31 | 53.99 | 85.56 | **38.76** | 92.02 | 87.54 |
| CLIP as initialized encoder | **21.60** | **96.36** | 43.82 | **93.20** | 38.00 | **94.58** | **52.94** | **87.64** | 39.09 | **92.94** | 87.06 |
**A5. Results on additional datasets**
As suggested, we compare DREAM-OOD with the baselines on an additional CIFAR-10 dataset. We use ResNet-18 for training. Other training details are kept the same as CIFAR-100. The result is shown in **Table 1 in the [PDF](https://openreview.net/attachment?id=vmDoMZMncP&name=pdf)**, where the strong performance of DREAM-OOD still holds.
Because the synthesis task is more challenging at scale, the literature, including NPOS, has primarily focused on ImageNet-100, which we closely follow and adopt as well. Given that ImageNet-100 consists of high-resolution images, we believe the results and improvement over NPOS can already meaningfully support the benefits of DREAM-OOD on real-world datasets. We plan to include the full ImageNet-1k results in our revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the responses. Most of my questions have been resolved. Even though I still have concerns about the data leakage, and it would be much more meaningful to find the OOD samples based only on the original datasets without extra data, I still admire the exploration and will raise my score to the slightly positive side.
---
Reply to Comment 1.1.1:
Title: thanks for your follow-up
Comment: We are glad to hear that our response helped resolve your questions! We thank the reviewer for taking the time to read our rebuttal and for your positive feedback again!
Best,
Authors
---
Rebuttal 2:
Title: Thanks for your feedback
Comment: Dear Reviewer i54j,
As the discussion period ends soon, we just wanted to check if the response clarified your questions. Thanks again for your constructive feedback.
Best,
Authors | Summary: The paper tackles OOD detection in the image space and proposes to generate outliers with diffusion models. Specifically, the authors find embeddings in the text-conditioned latent space that are on the boundary of in-distribution embedding clusters. These embeddings are likely to be those of strong outliers. By performing denoising with diffusion models, outlier images can then be generated and used for OOD classifier training. Empirical results show that the approaches perform better than many baselines.
Strengths: * The paper is the first to generate high-resolution outliers for OOD detection task. It is a novel use of diffusion models.
* The paper is well-written with clear method and experiment presentation.
Weaknesses: * It seems that adding simple Gaussian noise to the text label embedding space, instead of training a classifier, can already be quite strong. Is the classifier necessary here? Nevertheless, I appreciate the detailed ablation on this.
* Please see Questions section for additional comments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the authors provide possible explanations on why the proposed approach performs consistently worse than baselines on the texture OOD dataset? Would some visualization help explain what happened?
1. There exists an unsupervised OOD detection setting where the labels of in-domain data are not provided. How would the proposed approach be extended to this setting?
1. The authors might also consider using negative prompts to find additional outliers. That is, instead of specifying what the outlier should look like, it might be possible to avoid the class by pushing the embedding away from that class. Currently, all outliers are somewhat associated with in-domain data classes. It would be ideal to be able to generate outliers from additional classes.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No limitations mentioned in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are encouraged that the reviewer found our approach novel and the paper well-written. We address comments in detail below:
**A1. Clarification on adding simple Gaussian noise to the latent space**
Thanks for acknowledging our ablations on different outlier image synthesis methods! We agree with your opinion that adding the Gaussian noise to the textual embeddings is a strong synthesis baseline.
In contrast to pure noise, our framework DREAM-OOD learns a text-conditioned latent space for outlier generation, which can *sample low-likelihood embeddings based on the learned feature representations* and is more principled than adding Gaussian noise. The rationale of such a design is that if the sampled embeddings are distributionally far away from the ID embeddings, the decoded images will have a large semantic discrepancy with the ID images and vice versa. Adding Gaussian noise does not necessarily guarantee that the sampled outliers are in the low-likelihood region. Therefore, our approach is theoretically more sound and leads to **consistently better** OOD detection performance than adding noise to the textual embeddings across different ID datasets.
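To make the low-likelihood sampling idea concrete, here is a minimal NumPy sketch. It is illustrative only: the function names, the Gaussian proposal around ID embeddings, and the noise scale `0.1` are our assumptions, not the paper's exact Eq. (5) procedure.

```python
import numpy as np

def knn_distance(query, bank, k):
    """Distance from each query embedding to its k-th nearest neighbor in the bank."""
    # pairwise Euclidean distances: shape (n_query, n_bank)
    d = np.linalg.norm(query[:, None, :] - bank[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k - 1]

def sample_latent_outliers(id_embeddings, n_candidates, k, n_keep, rng):
    """Draw Gaussian candidates around the ID embeddings and keep the ones
    with the largest k-NN distance, i.e. those in low-likelihood regions."""
    idx = rng.integers(0, len(id_embeddings), size=n_candidates)
    noise = 0.1 * rng.standard_normal((n_candidates, id_embeddings.shape[1]))
    candidates = id_embeddings[idx] + noise
    scores = knn_distance(candidates, id_embeddings, k)
    keep = np.argsort(scores)[-n_keep:]  # farthest candidates = outlier latents
    return candidates[keep]
```

The key contrast with the pure-noise baseline is the explicit scoring step: candidates are kept only if their $k$-NN distance to the ID embeddings is large, rather than being accepted unconditionally.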
**A2. Explanations of the worse performance on the texture OOD dataset**
During the rebuttal, we examined the penultimate-layer features for the ID data and the OOD Textures dataset. We found that after regularizing the neural network using the generated outlier images, the ID and OOD features somewhat overlap.
We hypothesize that the generated outlier images usually contain diverse backgrounds and objects, from both indoor and outdoor environments. When the classification model is regularized with such images, it becomes more confident in flagging images with similar backgrounds/objects as OOD, while the real OOD texture images deviate from this synthesized outlier distribution, which leads to inferior OOD detection performance.
**A3. Discussion on the extension in the unsupervised OOD detection setting**
Since the vast majority of OOD detection literature focuses on the supervised setting (see related work L321-L331), DREAM-OOD considers the ID labels. Without ID semantic labels, one simple extension to the unsupervised setting would be: (1) first generating captions for the unlabeled ID dataset using a state-of-the-art image-captioning model, such as [1]; (2) collecting the object names in the generated captions; (3) learning the text-conditioned latent space and then synthesizing the outlier images as in DREAM-OOD.
[1] Li et.al., BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models, ICML 2023.
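The three-step extension above can be sketched as follows. This is a hypothetical illustration: `caption_image` is a stub standing in for an off-the-shelf captioner (e.g., BLIP-2), and the word-frequency filter is a simplification of real object-name extraction.

```python
from collections import Counter

def caption_image(image):
    # Stub: a real implementation would call an image-captioning model.
    return image["caption"]

def pseudo_class_names(dataset, vocabulary, top_n):
    """Collect the most frequent known object words across generated captions,
    to serve as pseudo-labels for learning the text-conditioned latent space."""
    words = Counter()
    for image in dataset:
        for w in caption_image(image).lower().split():
            if w in vocabulary:
                words[w] += 1
    return [w for w, _ in words.most_common(top_n)]
```

The resulting pseudo class names could then replace the ground-truth ID labels in the remainder of the pipeline.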
**A4. Additional design choices of the OOD embedding synthesis**
You raised a great point! To respond to your suggestions:
- Your idea of pushing the embedding away from that class coincides with the baseline *(II) Add learnable noise* in Table 2 of the submission :) In this baseline, we add learnable noise to the token embeddings where the noise is trained to push the outliers away from ID features. Though adding noise to the token embedding is relatively simple, it cannot explicitly sample textual embeddings from the low-likelihood region as DREAM-OOD does, which are near the ID boundary and thus demonstrate stronger effectiveness to regularize the model. For your reference, prior work [2] showed that near-boundary outliers are the most informative for OOD detection since they help learn a decision boundary that's tightly surrounding ID data.
- As for the idea of generating outliers from additional classes, we have implemented it during rebuttal and compared it with our DREAM-OOD. Specifically, we use the remaining 900 classes in ImageNet-1k (exclude the 100 classes in ImageNet-100 and CIFAR-100) as the disjoint class names for outlier generation. We generate the same amount of images as our DREAM-OOD to ensure a fair comparison. The OOD detection results are summarized in the following table, where we observe worse performance. We hypothesize that the generated outlier images are relatively far away from the decision boundary between the in-distribution and OOD.
| Method | ImageNet-100 FPR95 | ImageNet-100 AUROC | CIFAR-100 FPR95 | CIFAR-100 AUROC |
| ------ | ----- | ----- | ----- | ----- |
| DREAM-OOD | **38.76** | **92.02** | **40.31** | **90.15** |
| Using other classes | 43.55 | 87.84 | 49.89 | 85.87 |
[2] Ming et.al., POEM: Out-of-Distribution Detection with Posterior Sampling, ICML 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: I acknowledge I have read the author response and other reviews. The answers are generally useful and I am keeping my score unchanged.
---
Reply to Comment 1.1.1:
Title: thank you
Comment: We would like to thank the reviewer for taking the time to read our rebuttal and for your positive feedback again!
Best,
Authors | Summary: This paper proposes DREAM-OOD.
It constructs a text-conditioned latent space by learning an image encoder $h_\theta$ with a pretrained text encoder $\mathcal{T}$ and a contrastive loss.
During OOD generation, DREAM-OOD first generates outliers in the latent space according to the k-NN distance in Eq. (5), and then decodes them to the pixel space using a pretrained diffusion model.
DREAM-OOD can also be extended to ID data synthesis.
Experiments show that DREAM-OOD provides benefits for OOD detection tasks.
Strengths: 1. This paper is well-organized, and the method is easy to follow.
2. The t-SNE visualizations in Figure 3 and 4 clearly demonstrate the quality of the learned latent space and the feasibility of filtering latent outliers using Eq. (5).
3. DREAM-OOD outperforms selected baselines using CIFAR-100 or ImageNet as the ID dataset.
4. This paper includes ablation studies on the selection of some hyperparameters. Figure 7 shows that DREAM-OOD is robust to these hyperparameters.
Weaknesses: 1. There is neither a complexity analysis nor a report on the computational resource cost.
2. Although ablation studies on hyperparameters are included, the range of the candidates is still problematic. For example, the range of $k$ is too small compared to the dataset size.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does DREAM-OOD scale to larger datasets? The ID datasets used in this paper are quite small.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no discussion on the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy to see that the reviewer finds our work easy to follow with appropriate visualizations & ablations and that our paper is organized. We thank the reviewer for the thorough comments and suggestions, which we address below:
**A1. Discussion on the computation cost**
As suggested, we summarize the computational cost of DREAM-OOD and different baselines on ImageNet as follows.
Specifically, DREAM-OOD involves learning the text-conditioned latent space ($\sim$ 8.2 h), image generation with diffusion models ($\sim$ 10.1 h for 100K images), and training with the generated outlier images ($\sim$ 8.5 h). We use 8 NVIDIA GeForce RTX 2080Ti GPUs for our experiments.
We have included the computational cost in the revised manuscript.
**A2. Discussion on the range of k in the ablation**
Thanks for your suggestion! Since we calculate the $k$-NN distance between each sample and the remaining samples belonging to the same class in our implementation, and there are approximately 1,000 samples **per class** for the ImageNet dataset and 500 samples per class for the CIFAR-100 dataset, we set the upper limit of $k$ to 500. For the ImageNet dataset, we additionally experimented with larger values, $k=600, 700, 800, 900, 1000$, which yield inferior OOD detection results.
| $k$ | AUROC |
| ------ | ----- |
| 300 (best) | **92.02**|
| 600 | 91.34|
| 700 | 91.06 |
| 800 | 90.08 |
| 900 | 89.18 |
| 1000 | 88.27 |
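For concreteness, the per-class $k$-NN distance described above can be sketched with NumPy. This is a simplified illustration under our own assumptions (brute-force distances, no feature normalization), not the paper's exact implementation.

```python
import numpy as np

def per_class_knn_distance(embeddings, labels, k):
    """k-NN distance of each sample to the other samples of its own class."""
    scores = np.empty(len(embeddings))
    for c in np.unique(labels):
        mask = labels == c
        x = embeddings[mask]
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        d.sort(axis=1)
        # Column 0 holds each sample's zero distance to itself, so the k-th
        # neighbor sits in column k; clip k to the class size.
        scores[mask] = d[:, min(k, len(x) - 1)]
    return scores
```

Because the distance is computed within a class, the per-class sample count (about 1,000 for ImageNet, 500 for CIFAR-100) naturally bounds the useful range of $k$.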
**A3. Scalability of DREAM-OOD on larger ID datasets**
Great point! The largest ID dataset we used in the experiments contains approximately 100K images. For larger ID datasets, DREAM-OOD scales well for the steps of learning the text-conditioned latent space and learning with the imagined outlier images. For generating the OOD images with the Stable Diffusion model, the cascaded diffusion backward sampling procedure will take a longer time, as it generally requires more sequential function evaluations of large neural networks.
Therefore, *the scalability of DREAM-OOD essentially hinges on the scalability of large denoising diffusion models*. In practice, we have implemented our algorithm using multiple optimized engineering techniques, such as fast sampling schedulers (i.e., PLMS [1]) with only 50 backward iterations. We have also implemented the Flash Attention [2] for the Stable Diffusion model to speed up image generation. As a next step, we will be very interested in testing DREAM-OOD on larger datasets, such as ImageNet-1k as ID, which is 10 times larger than the ID datasets used in our paper.
[1] Liu et.al., Pseudo Numerical Methods for Diffusion Models on Manifolds, ICLR 2022.
[2] Dao et.al., Flashattention: Fast and memory-efficient exact attention with io-awareness, NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thanks for your response to address my questions.
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Thank you for taking the time to read our rebuttal and your constructive feedback again!
Best,
Authors | Summary: This paper focuses on utilizing auxiliary outliers for the OOD detection task. Specifically, different from other works using the collected outliers, this work studies generating photo-realistic outliers in the high dimensional pixel space. This paper proposes a new framework, namely, DREAM-OOD, which utilizes diffusion models to generate outliers with only in-distribution data and classes. Comprehensive analyses and experiments are conducted to demonstrate the effectiveness of the proposed method.
Strengths: 1. This paper proposes a new framework, i.e., DREAM-OOD, for image synthesis, which is the first method for generating photo-realistic high-resolution outliers and owns high originality.
2. The outlier imagination with diffusion models utilizes the text-conditioned latent space and has the theoretical interpretation of the loss, the presentation is good and easy to understand.
3. The experimental parts include analyses of different perspectives, and the performance of DREAM-OOD is promising.
Weaknesses: 1. Apart from involving the diffusion model, what is the difference in technical level between the previous methods (like VOS and NPOS) on outlier synthetics with the proposed method in image synthetic?
2. This method may be affected by the quality of the pre-trained diffusion models. What if the diffusion model can not generate high-resolution images?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Apart from involving the diffusion model, what is the difference in technical level between the previous methods (like VOS and NPOS) on outlier synthetics with the proposed method in image synthetic?
2. This method may be affected by the quality of the pre-trained diffusion models. What if the diffusion model can not generate high-resolution images?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is no discussion of the limitations of the current work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer finds our work new, owns high originality, and is easy to understand, with promising results and analyses. We thank the reviewer for the thorough comments and suggestions, which we address below:
**A1. Technical differences between DREAM-OOD and related work**
Great point! Technically, there are three main differences between DREAM-OOD and the mentioned related works, such as VOS and NPOS.
- First, DREAM-OOD introduces a crucial step to align the in-distribution visual embeddings with the corresponding textual embeddings from the diffusion model (see Section 3.1). Learning the text-conditioned latent space is the key to leveraging diffusion-based models for decoding and generating photo-realistic outliers in the pixel space. In contrast, VOS and NPOS do not leverage diffusion models, and directly sample outliers in the feature space of the image classifier.
- Secondly, DREAM-OOD decodes the sampled OOD embeddings via the diffusion models to obtain the outlier images in the pixel space, which are then used for model regularization. Instead, VOS and NPOS directly separate the outliers and the ID in the feature space.
- Thirdly, DREAM-OOD can also be extended to augment ID data distribution, enriching the data diversity for better generalization (Section 4.3). This was not demonstrated in either VOS or NPOS.
**A2. Discussion on the quality of the pre-trained diffusion models**
Another great point! The landscape of AI has been rapidly changing in the last year. In particular, the recent rise of large-scale generative models has provided researchers in the community with many powerful diffusion models that can produce high-resolution images. The wide availability of strong diffusion models is a key reason why DREAM-OOD becomes possible (which admittedly, would have been difficult in the past). Notably, there are more than thousands of research projects built on top of them, such as subject-driven generation [1], open-vocabulary segmentation [2], spurious correlation [3], etc., to name a few. In a similar spirit, DREAM-OOD explores the promise of diffusion models for improving OOD detection, which shows strong results both quantitatively and qualitatively. We will discuss the reliance of our approach on the diffusion models in the revised version.
[1] Ruiz et.al., DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, CVPR 2023.
[2] Xu et.al., Open-vocabulary panoptic segmentation with text-to-image diffusion models, CVPR 2023
[3] Jain et.al., Distilling model failures as directions in latent space, ICLR 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed response!
Comment: Thanks for the detailed response! The detailed and insightful highlights address the previous concerns well. The reviewer appreciates the authors' efforts in clarifying the technical differences, which show the uniqueness of DREAM-OOD in not only generating photo-realistic outliers in the pixel space but also augmenting the ID distribution for better generalization. It would be better if the authors could also discuss the potential of DREAM-OOD to enhance OOD detection from the perspective of the ID distribution, as some recent studies [1,2] have also pointed out the importance of ID data quality.
[1] In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation. ICML 2023
[2] Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability. ICML 2023
---
Reply to Comment 1.1.1:
Title: Thanks for your additional feedback
Comment: Thank you for taking the time to read our rebuttal!
For your additional comments, we think it would be useful to extend the DREAM-OOD to 1) generate diverse ID samples for improved ID classification and 2) near-boundary ID samples for binary classification between ID and OOD. For the former extension, [1] suggests there might be a close relationship between the OOD detection performance and the ID classification accuracy. Therefore, a better ID accuracy might lead to a better OOD detection result. For the latter extension, generating the near-boundary ID samples will help the model learn an even better decision boundary between ID and OOD, which is shown to benefit OOD detection [2].
We will be happy to incorporate the discussion in the revised version with proper citations to the mentioned papers.
[1] Vaze et.al., Open-Set Recognition: a Good Closed-Set Classifier is All You Need?, ICLR 2022
[2] Ming et.al., POEM: Out-of-Distribution Detection with Posterior Sampling, ICML 2022. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and valuable comments. We appreciate that reviewers find our approach DREAM-OOD **novel** and **effective** (Qkax, 94Wq, 3rTL), and the results are **comprehensive**, **extensive** and **promising** with **detailed** and **sufficient** ablations (Qkax, 94Wq, thbj, 3rTL, i54j). We are glad that all reviewers found the paper **well-written** and **easy to follow** (Qkax, 94Wq, thbj, 3rTL, i54j).
As endorsed by multiple reviewers, the work makes important contributions to the field for the following reasons:
- DREAM-OOD is the first to generate photo-realistic high-resolution outlier images for OOD detection by way of diffusion models. DREAM-OOD advances the field by helping researchers visualize and understand the imagined outliers in the pixel space, which was impossible in the past with feature-based synthesis methods (e.g., VOS or NPOS).
- Moreover, DREAM-OOD offers strong empirical performance with extensive results, which are recognized by all reviewers.
- Lastly, going beyond outlier imagination, our framework can be extended to generate inliers to improve model generalization. We demonstrate stronger performance than some of the most popular data augmentation methods, which we believe will benefit an even broader community and ML applications.
We respond to each reviewer's comments in detail below. We will also revise the manuscript according to the reviewers' suggestions, and we believe this makes our paper much stronger.
Pdf: /pdf/5c22eb95ba933b821f1cfab9aec4630625a532a3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes DREAM-OOD, a new diffusion-models-based framework to enable the generation of photo-realistic high-resolution outliers for OOD detection. DREAM-OOD works by training a text-conditioned latent space using ID data and then samples the outliers in the low-likelihood region of the latent space. Empirical results demonstrate the effectiveness of the proposed framework in enhancing the performance of OOD detection.
Strengths: 1. The paper is well-written with clear and detailed explanations.
2. The paper proposed an effective and reasonable framework for generating high-quality outliers images for OOD detection and achieves good empirical performance.
3. Comprehensive and extensive experiments were conducted to compare the performance of DREAM-OOD to other existing OOD detection models and provide deep insights on the efficacy of the proposed method.
Weaknesses: 1. The paper only emphasizes that the proposed method can allow us to understand the outliers in a human-compatible way compared to other synthetic methods such as VOS and NPOS, but it is not very clear why the proposed method can achieve better OOD detection compared to others. And there is no related theoretical analysis.
2. It does not mention the computational cost when comparing the proposed method with other OOD detection methods.
3. It will be better if the related works can be placed before section 3.
4. The novelty of the paper is not strong enough since it is not the first work to leverage diffusion models or visual latent space to generate outliers to promote OOD detection.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The paper suggests learning a text-conditioned latent space based on ID data, which is task specific. I wonder how the method is superior to directly leveraging the visual latent space of the pre-trained vision-and-language model CLIP.
2. What is the three-layer nonlinear MLP function φ(·) used for in equation (7)? Is it the binary logistic classifier for OOD detection?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not addressed the limitations and potential negative social impacts of their work. And I do not see any obvious negative social impacts specific to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are encouraged that you recognize our method to be new, effective, and reasonable, and with comprehensive and extensive empirical results. We thank the reviewer for the thorough comments and suggestions, which we address below:
**A1. Discussion on DREAM-OOD versus feature-based outlier synthesis**
Great point! Compared with feature-based outlier synthesis approaches, DREAM-OOD generates photo-realistic, high-resolution outliers in the pixel space. The higher-dimensional pixel space offers much more knowledge about the unknowns, providing the network with the variability and fine-grained detail that are missing from feature-based synthesized outliers (such as those of VOS and NPOS). Moreover, because the generated outliers are photo-realistic, they naturally lie closer to the natural image manifold and impose a better-suited constraint on the network -- a property that the earlier feature-based methods do not offer. As shown in Figure 5, the generated outlier images demonstrate reasonably creative OOD visual representations, which are more precise in characterizing OOD data and thus improve performance.
Moreover, we would like to clarify that our paper focuses on *demonstrating the feasibility and promise of generating photo-realistic outliers in the high-dimensional pixel space by way of diffusion models*. We provide comprehensive quantitative and qualitative analyses to understand the efficacy of DREAM-OOD. While we position DREAM-OOD as a methodology rather than a theory paper, we agree that theoretical analysis would be a meaningful direction. One plausible direction is analyzing the generalization error bound of the OOD detector [1] trained with the ID data and the generated outlier images. One could use probably approximately correct (PAC) learning theory to analyze key quantities, such as sample complexity, in the setting of learning with auxiliary outliers, and compare our approach against the feature-based synthesis approaches. We believe our current work serves as a key cornerstone to enable such theoretical analysis next.
[1] Fang et al., Is Out-of-Distribution Detection Learnable?, NeurIPS 2022.
**A2. Discussion on the computational cost**
As suggested, we summarize the computational cost of DREAM-OOD and different baselines on ImageNet as follows.
Specifically, the post hoc OOD detection methods require training a classification model on the ID data ($\sim$ 8.2 h). The outlier synthesis baselines, such as VOS ($\sim$ 8.2 h), NPOS ($\sim$ 8.4 h), and GAN ($\sim$ 13.4 h) incorporate the training-time regularization with the synthetic outliers. Our DREAM-OOD involves learning the text-conditioned latent space ($\sim$ 8.2 h), image generation with diffusion models ($\sim$ 10.1 h for 100K images), and training with the generated outliers ($\sim$ 8.5 h). We use 8 NVIDIA GeForce RTX 2080Ti GPUs for our experiments.
We have included the computational cost in the revised manuscript.
**A3. Novelty**
To the best of our knowledge, DREAM-OOD is the first work to leverage diffusion models for synthesizing high-quality outliers in the pixel space. We conducted an extensive literature survey (see L307-320), and no existing work is similar to ours in scope and methodology. While Graham et al. [2] and Liu et al. [3] utilized diffusion models for OOD detection, they directly applied the reconstruction error as the OOD score and did not focus on outlier synthesis. In contrast to feature-based outlier synthesis methods, DREAM-OOD enables the new capability of synthesizing in the high-dimensional pixel space, which not only offers better empirical performance but also much more visual interpretability of the generated outliers.
We believe these major differences with respect to the literature support the meaningful contributions of our proposed approach. At the same time, the novelty of our learning framework is recognized by several other reviewers, for example:
> 1) "_The paper is the first to generate high-resolution outliers for OOD detection task. It is a novel use of diffusion models_." from reviewer 3rTL
> 2) "_This paper proposes a new framework, i.e., DREAM-OOD, for image synthesis, which is the first method for generating photo-realistic high-resolution outliers and owns high originality_." from reviewer 94Wq
[2] Graham et al., Denoising Diffusion Models for Out-of-Distribution Detection, CVPR VAND Workshop 2023.
[3] Liu et al., Diffusion Denoising Process for Perceptron Bias in Out-of-Distribution Detection, arXiv preprint 2211.11255.
**A4. Experiments on the text-conditioned latent space versus the visual latent space of CLIP**
That's an interesting idea. As suggested, we have implemented sampling by directly using the latent space of the pre-trained CLIP model (ViT-L/14 as the backbone). All other settings are kept the same as the DREAM-OOD. The results comparison on ImageNet is shown below:
| Method | iNaturalist FPR95 | iNaturalist AUROC | Places FPR95 | Places AUROC | SUN FPR95 | SUN AUROC | Textures FPR95 | Textures AUROC | Avg. FPR95 | Avg. AUROC | ID ACC |
| ------ | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| DREAM-OOD | **24.10** | **96.10** | **39.87** | **93.11** | **36.88** | **93.31** | **53.99** | **85.56** | **38.76** | **92.02** | 87.54 |
| CLIP as latent | 47.28 | 88.80 | 47.23 | 84.22 | 39.09 | 85.82 | 68.75 | 79.69 | 50.57 | 84.63 | 86.92 |
As we can observe, sampling with the pre-trained CLIP latent space performs worse than our text-conditioned latent space: the generic CLIP embeddings are not adapted to the ID data, so the resulting latent space is suboptimal for outlier synthesis in our task.
**A5. Clarification on the three-layer nonlinear MLP function**
The three-layer nonlinear MLP function is the binary logistic classifier for OOD detection, as you surmised.
**A6. Position of the related work section**
We will rearrange the sections based on your suggestion. Thank you for pointing that out!
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thank you for your response. Most of my questions have been resolved after reading the rebuttal.
I will increase my score to 5.
---
Reply to Comment 1.1.1:
Title: thank you
Comment: We are glad to hear that our response helped resolve your questions - thank you again for your time and constructive feedback!
---
Rebuttal 2:
Title: Thanks for your feedback
Comment: Dear Reviewer Qkax,
As the discussion period ends soon, we just wanted to check if the response clarified your questions. Thanks again for your constructive feedback.
Best,
Authors | null | null | null | null | null | null |
Add and Thin: Diffusion for Temporal Point Processes | Accept (poster) | Summary: The submission derive a probabilistic diffusion model for TPPs. By proposing the Add-Thin framework, the proposed method can naturally handles the continuous and discrete nature of point processes and directly models the whole event sequences. Compared with the traditional autoregressive approaches for the temporal point process (TPP), the proposed framework is immune to the accumulation of errors caused by sequential sampling.
Strengths: The submission proposed a framework that learns the probabilistic mapping from complete noise, i.e., a homogeneous Poisson process (HPP), to data. More specifically, a model is learned to reverse the noising process of adding (superposition) and removing (thinning) points from the event sequence by matching the conditional inhomogeneous denoising intensity. This idea is very interesting.
Weaknesses: I do not find any obvious weaknesses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and appreciation of our work.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I think this is an interesting work and keep my original score. | Summary: The forward process adds points from homogeneous poisson process (HPP) into the sequence and removes points from original sequence ($\mathbf{t}^{(0)}$). The goal is such that $\mathbf{t}^{(n)}$ is HPP. The neural network is trained to approximate missing information about $\mathbf{t}^{(0)}$, e.g. figure out which points from HPP came from $\mathbf{t}^{(0)}$ and which points were removed at intermediate steps. The thinning and superposition theorems were used multiple times to derive intensities of various sets, where the sets categorize the points.
Strengths: 1. The adding and thinning approach, training via classification and matching, to modeling and generating temporal point processes is original and interesting.
Weaknesses: 1. The writing and notation are extremely confusing. I read the paper multiple times.
1a. e.g. please clearly specify the observation, is it a time sequence $x(t), t\in[0,T]$ where $x(t) = 0$ for no event, $x(t) = 1$ for event, discretized in time. or is it a varying length sequence $\mathbf{t}=(t_1,...,t_K), 0< t_1<...<t_K\leq T$ with $K$ arrival times. also, section 3.3 writes about sequence embedding to get an input in $R^d$ but I am still confused.
1b. e.g. what are the sets A and F? what are the differences between B and C, D and E? B seems to be the points preserved from $\mathbf{t}^{(0)}$ all the way to HPP, C seems to be the points preserved from $\mathbf{t}^{(0)}$ but removed in the next step. is there a clearer way of writing the material in section 3.2?
1c. perhaps the background section can be shortened if we need more space to explain the method clearly.
1d. would potentially increase the score if the model and method is clearly explained.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. Would like to introduce this recent other work that connects point processes and diffusions. in this case, the interarrival times of a point process is related to the length of a diffusion excursion. https://openreview.net/forum?id=KIlwyX7nCi
2. What is the benefit of applying the proposed model compared to other methods of generating sequences?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: 1. Please address the potential limitations of using a neural network to approximate missing information about $\mathbf{t}^{(0)}$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and feedback. In the following, we address the specific concerns and questions.
> W.1a. Observations
Generally a realization of a TPP can be represented as a sequence of strictly increasing arrival times: $\mathbf{t} = (t_1, \dots, t_K)$, $0 < t_1 < \dots < t_K \leq T$. Equivalently, a realization can also be represented by its counting measure $N(t) = \smash{\sum_i^K \mathbb{I}(t_i \le t)}$, for $t \in [0, T]$. In the implementation of our method we leverage the arrival times $\mathbf{t}$, which we will make more explicit by adding the formal definition to the introduction of $\mathbf{t}^{(0)}$ in line 101. Furthermore, we employ a temporal embedding followed by a sequence embedding with a CNN to attain embeddings of $\mathbf{t}^{(n)}$ and subsequently parameterize the posterior.
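To make the two equivalent representations concrete, here is a minimal illustration of the counting measure $N(t) = \sum_i^K \mathbb{I}(t_i \le t)$ (the arrival times below are made up for the example, not taken from the paper):

```python
import numpy as np

def counting_measure(arrival_times, t):
    """N(t) = number of events with arrival time t_i <= t."""
    return int(np.sum(np.asarray(arrival_times) <= t))

# a hypothetical realization with K = 4 arrival times on [0, T], T = 10
t_seq = [1.0, 2.5, 2.7, 8.0]
print(counting_measure(t_seq, 3.0))   # 3 (events at 1.0, 2.5, 2.7)
print(counting_measure(t_seq, 10.0))  # 4 (all K events by t = T)
```

Both views describe the same realization: the sequence of arrival times $\mathbf{t}$ determines $N(t)$ everywhere on $[0, T]$, and vice versa.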
> W.1b. Unclear distinction between sets A,...,F
We have revised section 3.2 and present an improved explanation of the sets in the response to all reviewers to address this weakness.
> W.1c. Shorten background section if more space is needed
Thank you for the suggestion. Given the more concise revision of section 3.2 and the additional page for the camera-ready version we will not need to cut down on the background section. Additionally, we will use the additional page to describe the method in more detail and explain the motivation behind the design choices.
> W.1d. I would potentially increase the score if the model and method is clearly explained.
We hope that the revision of section 3.2 in the response to all reviewers presents a clearer explanation of the model and method. Further, we would like to add the following revisions and points to better explain the model and method:
**Section 2.2**
[Line 85-91]
Diffusion models [16, 40] are a class of latent variable models that learn a generative model to reverse a fixed probabilistic noising process $x_{0} \to x_{1} \to \dots \to x_N$, which gradually adds noise to clean data $x_0$ until no information remains, i.e., $x_{N}\sim p(x_N)$. For continuous data, the forward process is usually defined as a fixed Markov chain $q(x_{n} \mid x_{n-1})$ with Gaussian transitions. Then the Markov chain of the reverse process is captured by approximating the true posterior $q(x_{n-1}|x_{n}, x_{0})$ with a model $p_{\theta}(x_{n-1}|x_{n})$. Ultimately, sampling new realizations $x_0$ from the modeled data distribution $p_{\theta}(x_{0}) = \int p(x_{N}) \prod^N_{n=1} p_{\theta}(x_{n-1}| x_{n}) dx_{1},\cdots,x_{N}$ is performed by starting with a sample from pure noise $x_{N}\sim p(x_N)$ and gradually denoising it with the learned model over $N$ steps $x_{N} \to x_{N-1} \to \dots \to x_0$.
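To illustrate the fixed forward noising process described above, a short numpy sketch using the standard closed form $q(x_n \mid x_0) = \mathcal{N}(\sqrt{\bar\alpha_n}\, x_0,\, (1-\bar\alpha_n) I)$ of the Gaussian Markov chain; the noise schedule and the "clean data" distribution are arbitrary illustrative choices on our part, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                 # number of diffusion steps
betas = np.linspace(1e-4, 0.2, N)       # fixed noise schedule (illustrative)
alpha_bar = np.cumprod(1.0 - betas)     # \bar{alpha}_n = prod_k (1 - beta_k)

x0 = rng.normal(3.0, 0.1, size=10_000)  # "clean data", far from N(0, 1)

def noise(x0, n):
    """Sample x_n ~ q(x_n | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[n - 1]) * x0 + np.sqrt(1.0 - alpha_bar[n - 1]) * eps

xN = noise(x0, N)  # after N steps: approximately standard normal, no data left
print(round(float(xN.mean()), 2), round(float(xN.std()), 2))
```

After $N$ steps essentially no information about $x_0$ remains, which is exactly the property the reverse model learns to undo.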
**Section 3.3**
[Line 173-177]
In the previous section, we derived the intensity $\lambda_{n-1}(t | \mathbf{t}^{(0)}, \mathbf{t}^{(n)})$ of the posterior $q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)}, \mathbf{t}^{(n)})$ for the reverse process, i.e., the intensity of points at step $n-1$ given $\mathbf{t}^{(0)}$ and $\mathbf{t}^{(n)}$. Now we want to approximate this posterior with a model $p_{\theta}(\mathbf{t}^{(n-1)} | \mathbf{t}^{(n)}, n)$ to learn to sample points $\mathbf{t}^{(n-1)}$ given only $\mathbf{t}^{(n)}$. As we are only missing information about $\mathbf{t}^{(0)}$, we learn to model $\lambda^{(B)}(t)$ and $\lambda^{(A\cup C)}(t)$ to approximate $\hat{\mathbf{t}}^{(0)} \approx \mathbf{t}^{(0)}$ (see Figure 3) for each $n$, and then have access to the full posterior intensity to reverse the noising process.
[Line 178]
To condition our model on $\mathbf{t}^{(n)}$ and $n$ we propose the following embeddings. [...]
> Q1. Connection to work on diffusion excursion for TPPs.
Thank you for introducing us to this exciting concurrent work. In this paper, they model TPPs by relating the continuous latent state in a diffusion process to the event times by modeling the interevent times as the length of a diffusion excursion. Thereby, the proposed method is still autoregressive and more strongly related to TPP models, which model event times conditional on a continuous latent process (see, e.g. [4, 20, 11]). Nevertheless, the proposed connection between the continuous stochastic processes and TPPs is very interesting, and we will make sure to reference and discuss this work in our related work section.
> Q2. Benefit of Add-Thin compared to other methods
Our method is the first diffusion-based TPP model that generates entire event sequences, thereby overcoming some of the shortcomings of autoregressive models in forecasting. Additionally, by replacing the autoregressive parametrization in event time with a gradual refinement process of the entire event sequence over a fixed number of N diffusion steps, our model is better suited to model very long event sequences (c.f. sampling speed in response to all reviewers). Furthermore, the iterative refinement of the event sequence allows us to leverage simple and shared layers to model the long-range interaction between events, a task with which attention-based (computational complexity) and RNN-based encoders are known to struggle.
> Please address the potential limitations of using a neural network to approximate missing information about $\mathbf{t}^{(0)}$.
Our model is trained to generate samples $\mathbf{t} \sim q(\mathbf{t}^{(0)})$, by optimizing the ELBO of our probabilistic model (see discussion of the relationship of our loss and the ELBO in the response to reviewer 1d9q). In that context, the applied neural networks to approximate the posterior are universal function approximators and can in theory model the posterior distribution arbitrarily well.
---
Rebuttal Comment 1.1:
Comment: W.1a. Observations
- Please further clarify ``temporal embedding followed by a sequence embedding''. In addition, is it e.g. $\mathbf{t}^{1}=(1,5,7,9)$, $\mathbf{t}^{2}=(2,5,6)$, where $\mathbf{t}^{1}$ and $\mathbf{t}^{2}$ are two data samples, and the lengths may be different? What is the input to the network at each stage? Or is it possible to make code available, thanks!
W.1b. Unclear distinction between sets A,...,F.
- Reviewer 1d9q had the same comment too. In the initial submission, I was guessing what the model and method is. Read the response to all reviewers. Is it also possible to have a figure, thanks!
Q1. Connection to work on diffusion excursion for TPPs.
- Thank you for pointing me at those references!
Please address the potential limitations of using a neural network to approximate missing information about $\mathbf{t}^{(0)}$
- Read the response to reviewer 1d9q. Approximating Dirac deltas with Gaussians may not work as well...
Read all the reviews and the responses, I am positive and interested in the work, increasing score to 5 for now, will wait for the authors to respond.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and active engagement in the rebuttal discussion. We are delighted that you are positive and interested in our work. In the following, we would like to clarify your comments:
### Observations
> In addition, is it, e.g., $\mathbf{t}^{1} = (1,5,7,9)$, $\mathbf{t}^{2} = (2,5,6)$, where $\mathbf{t}^{1}$ and $\mathbf{t}^{2}$ are two data samples, and the lengths may be different?
Indeed, instances or data samples of a TPP can have different lengths and could, for example, look like $(1,5,7,9)$ and $(2,5,6)$. In other words, both the arrival times and number of points are random. This defining property of a TPP is introduced in the background section 2.1 lines [59-61]. Further, we would like to highlight the notational difference in your response. With the superscript $\mathbf{t}^{(n)}$, *we* refer to an event sequence at the $n$-th diffusion step, where $\mathbf{t}^{(n)} \sim \lambda_n$ and not to different samples from the data distribution.
> Clarify ``temporal embedding followed by a sequence embedding'' and the input to the network at each stage (can you maybe add code).
The temporal and sequence embedding were explained in Sec. 3.3 lines [178-183] with the additional sentence in our last response and further depicted in Fig. 3. Here, we will additionally present some background and slightly informal explanation of the embedding types for TPPs:
**Temporal (Positional) embedding:** Applying temporal embeddings, e.g., log-transform and trigonometric functions (similar to the positional embeddings of Transformer [3]), to event times is a common practice for TPPs (see Sec. 3.1 of [1]). With a temporal embedding, we refer to a function $\mathbb{R}_{>0} \to \mathbb{R}^d$, that maps a time, e.g., $t_i \in \mathbf{t}$ independently to a $d$-dimensional embedding space. Then for each ${t}_i$, we have a feature vector $\mathbf{c}_i$ encoding its temporal information.
**Sequence embedding:** To attain an event embedding (also sometimes called context embedding) $\mathbf{e}_i$ for each event $i$ that also incorporates information about all other events, we apply a three-layered CNN with dilation and circular padding on the sequence of temporally embedded event times $(\mathbf{c}_1, \cdots, \mathbf{c}_K)$. Ultimately, a global sequence embedding $\bar{\mathbf{e}}$ is attained by computing the average over $(\mathbf{e}_1, \cdots, \mathbf{e}_K)$.
The code will be published upon acceptance, but we present a simplified pseudo-code of our implementation of the sequence embedding here:
```python
# temporal embedding: each event time mapped independently to d dimensions
time_embedding = self.time_encoder([x_n.time, x_n.tau])  # K x d
# event embedding: dilated CNN over the embedded event sequence
event_embedding = self.cnn(time_embedding)  # K x d
# global sequence embedding: average over all event embeddings
seq_embedding = event_embedding.mean(dim=0)  # d
```
As described in Sec. 3.3 line [185; 196] and Fig. 3, ```time_embedding``` $\mathbf{c}_i$ and ```event_embedding``` $\mathbf{e}_i$ are the inputs to the classifier and ```seq_embedding``` $\bar{\mathbf{e}}$ is the input to the MLPs parameterizing the intensity function.
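To complement the pseudo-code, a self-contained and runnable numpy sketch of the two-stage embedding; the sinusoidal temporal features and the moving-average stand-in for the dilated CNN are illustrative assumptions on our part, not the authors' implementation:

```python
import numpy as np

def temporal_embedding(t, d=8):
    """Map each arrival time independently to a d-dim feature vector (K x d)."""
    t = np.asarray(t, dtype=float)[:, None]                  # K x 1
    freqs = 1.0 / (10.0 ** (np.arange(d // 2) / (d // 2)))   # d/2 frequencies
    angles = t * freqs[None, :]                              # K x d/2
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # K x d

def sequence_embedding(t, d=8, window=3):
    """Per-event embeddings plus a global sequence embedding (mean pooling)."""
    c = temporal_embedding(t, d)                             # K x d, per event
    # stand-in for the dilated CNN: local smoothing over neighbouring events
    kernel = np.ones(window) / window
    e = np.stack([np.convolve(c[:, j], kernel, mode="same")
                  for j in range(d)], axis=1)                # K x d
    return e, e.mean(axis=0)                                 # (K x d), (d,)

events, seq = sequence_embedding([1.0, 2.5, 2.7, 8.0])
print(events.shape, seq.shape)  # per-event: (4, 8), global: (8,)
```

The important structural point is the same as in the pseudo-code: each event keeps its own embedding for the classifier, while the mean over events yields the single global vector fed to the intensity MLPs.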
### Set distinction
> Is it also possible to have a figure (Distinction between sets A,...,F)?
We might be wrong but believe that Fig. 1 (right) and 2 (left) are what you are looking for. Fig. 2 presents the case distinction of the sets **A**,...,**F** with a Venn diagram as the logical relations between $\mathbf{t}^{(0)}$, $\mathbf{t}^{(n-1)}$ and $\mathbf{t}^{(n)}$. Further, in Fig. 1, the denoising process illustrates the case distinction for one specific example of $\mathbf{t}^{(0)}$ and $\mathbf{t}^{(n)}$. Please note that the illustrated posterior intensity can be separated into the sets **B-E**: The base intensity relates to $\lambda^{(E)}(t)$, while the highest peaks (Diracs) refer to $\lambda^{(B)}(t)$, i.e., the points that are in $\mathbf{t}^{(0)}$ and $\mathbf{t}^{(n)}$. The medium peaks and lower peaks (thinned Diracs) represent $\lambda^{(D)}(t)$ and $\lambda^{(C)}(t)$, respectively. Please let us know if this addresses your point, and we will, for the camera-ready version, add an additional reference to the denoising process in Fig. 1 in the discussion of the reverse process in Sec. 3.2.
### Approximation
> Approximating Dirac deltas with Gaussians may not work as well...
A Gaussian is the standard approximation of the Dirac delta function and can, in the limit $\sigma \to 0$, perfectly approximate it [2, 4]; a property that is not true for many approximations of other TPP models (e.g., parametric distribution for the intensity function of *RNN*, *Trans* and *TriTPP*). Furthermore, the KL divergence between Dirac and our Gaussian is directly minimized to ensure the best possible approximation.
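The limiting behaviour is easy to check numerically: the probability mass a Gaussian $\mathcal{N}(t_0, \sigma^2)$ places within any fixed window around $t_0$ approaches one as $\sigma \to 0$. The window width and the $\sigma$ values below are arbitrary, purely for illustration:

```python
import math

def mass_near(t0, sigma, eps=0.01):
    """P(|X - t0| < eps) for X ~ N(t0, sigma^2), via the error function."""
    return math.erf(eps / (sigma * math.sqrt(2.0)))

for sigma in (0.1, 0.01, 0.001):
    print(sigma, round(mass_near(5.0, sigma), 4))
```

For $\sigma = 0.001$ virtually all mass already sits within $\pm 0.01$ of the event time, i.e., the Gaussian behaves like the Dirac delta it approximates.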
### References
1. Citation 38 in the paper
2. Ghatak, Ajoy, et al. "The dirac delta function." Quantum Mechanics: Theory and Applications (2004)
3. Citation 42 in the paper
4. Saichev et al. "Chapter 1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences (1997) | Summary: The paper proposes a probabilistic diffusion model for TPPs, ADD-THIN, that naturally handles the continuous and discrete nature of point processes and directly models whole event sequences. While autoregressive methods are expressive in modeling event sequences, ADD-THIN does not suffer from the accumulation of errors caused by sequential sampling for long-term forecasting applications. The proposed method matches the performance of state-of-the-art models in density estimation and outperform them for forecasting on both synthetic and real-world datasets.
Strengths: - To the best of my knowledge, this work represents the first attempt to combine diffusion models and Temporal Point Processes. The potential of generating complete sequences without relying on auto-regressive mechanisms presents a promising avenue for future research. It is worth noting that this paper shares similarities with the unpublished work available at: https://slideslive.com/embed/presentation/38922857.
- The proposed method is compared to multiple baselines for both unconditional and conditional sampling (forecasting).
Weaknesses:
- The paper is poorly written and hard to read.
- There are multiple statements that are either inaccurate or without references
- "The intensity of a TPP describes the expected number of points"
- Line 65-70, you either need to prove it or provide references that do so.
- Line 72, "the expected number of points on [0, T] follows a Poisson distribution with rate ..."
- "Even though Poisson processes can model seasonality"
- How does a Poisson model seasonality?
- "We know that conditioned on ... is a random sequence of events with a density that can be completely described by its (unconditional) intensity function"
- "Since the sets are disjoint, ..."
- Is there a difference between a counting process and measure? an intensity function and measure? You use these terms interchangeably.
- Multiple TPP datasets that are used in the experiments have been shown to be inappropriate benchmarks for neural TPPs. See https://openreview.net/forum?id=3OSISBQPrM
- Overall, the presentation of the paper could be greatly improved, especially Section 3. Specifically, it feels like adding more formal notations (e.g. for Section 3.2.) would allow to shorten the text, while helping the reader more easily grasp the idea behind the method. For instance, why defer to the Appendix the pseudo-code for the sampling procedure?
Additionally, several design choices are not clearly motivated, while the experiments and evaluation of the proposed model falls a bit short of expectations. These concerns are summarized in the points below:
1) Why include both $\boldsymbol{e}_i$ and $\boldsymbol{c}_i$ in the parametrization of $g_\theta$? Are the $\boldsymbol{c}_i$ constructed from neighboring events? Otherwise, it looks like there is redundancy between the information contained in the two embeddings.
2) There is no real motivation behind the use of a positional encoding for $n$. Why do you need it?
3) What is the motivation for using an unormalized mixture of Gaussians to parametrize $\lambda^{(A \cup C)}$? Have you considered alternative parametrizations that can be found in the neural TPP literature, e.g. an unmarked version of RMTPP [1]? Additionally, more details regarding the architectures employed would be appreciated (e.g. what is $\sigma$, and what is the model behind the MLP on page 5?).
4) In the NLL objective of equation 5, is the objective solely computed on times from $A \cup C$? If yes, this should be indicated explicitly in the objective.
5) On page 6, it is not clear how you find that $\Lambda^{(A \cup C)}(T) = K \sum_{i}^H w_i$. Given your definition of $\lambda^{(A \cup C)}(t)$, I would have expected this result to hold only if the integral was taken over $-\infty, \infty$.
6) The paragraph on 'conditional sampling' requires more detailed explanations, as it is impossible to see from the text how you handle this scenario. Specifically, how is $\boldsymbol{h}$ integrated into $g_{\theta}$ and $\lambda^{(A \cup C)}(t)$?
7) As $\hat{t}^{(0)}$ is successively approximated at each time step $n$ during the reverse process, how can you ensure that the propagation of errors is mitigated and won't significantly affect the generated sequences after $N$ steps?
8) Why only report the MAPE between the generated and ground-truth sequence lengths? Isn't there a way to evaluate the quality of the generated samples within a sequence itself (e.g. using a measure of distance)?
[1] Du, Nan and Dai, Hanjun and Trivedi, Rakshit and Upadhyay, Utkarsh and Gomez-Rodriguez, Manuel and Song, Le. (2016). Recurrent Marked Temporal Point Processes: Embedding Event History to Vector. SIGKDD.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors have a section on some limitations of their work, although a more in-depth discussion would have been appreciated. Specifically, they mention that sampling from their diffusion model can be quite expensive, but they do not provide computional trade-offs in terms of time/resources across baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the extensive review, feedback, and questions. In the following, we address the raised concerns and questions.
> W.1,4 Readability and understandability (3, 3.2)
Considering the character limit, we refer to both our response to all reviewers, where we outline suggested enhancements for the reverse process and our answer to reviewer RwFj's comments regarding the method description. Further, we will use the additional page of the camera-ready version to move the pseudo-code of the sampling procedure from the App. to Sec. 3.4. Lastly, subsequent segments in this rebuttal are dedicated to address questions and misunderstandings in our method description.
> W.2 Clarifications and references
1. "The intensity is defined as:"; line 64 "The intensity can be interpreted as the expected number of events per unit of time."
2. Reference to [7]
3. "the number of points..."
4. "...inhomogeneous Poisson processes..." The time-varying intensity of an IPP can, e.g., capture the fact that more events happen at a certain day of the week, known as seasonality.
5. Rewritten as part of the response to all reviewers.
6. "Since the sets B-E..."
7. As described in Sec. 2.1, a counting process (in our case TPP) can be described by its intensity measure or less technical function, while a realization can be represented by its counting measure. We will use "intensity function" throughout to make the text more consistent.
> W.3 Benchmark datasets
Thank you for pointing out this relevant concurrent work, which raises concerns about evaluating autoregressive TPP models with teacher forcing on some widely used TPP benchmark datasets. We were unable to incorporate its findings since the work was published 2 months after the NeurIPS 2023 submission deadline. In general, we agree that this is an important concern. However, these findings are likely not directly applicable to the setting considered in our work, as we consider tasks, metrics, and model architectures that are all different, and we have found significant empirical differences between the baselines even on these datasets. We will investigate this issue in more detail and add a discussion to the revised version of the paper.
> Design choices:
We clarify the motivation for the design choices in the following:
1. In theory, $e_i$ can contain all information of $c_i$, but we have found that providing the event time explicitly gives better results.
2. Positional encodings for n were first proposed in [16] to improve the model capacity. To our knowledge, all current diffusion models leverage positional encodings as $n$ strongly affects the posterior.
3. In our diffusion model, all posterior intensities are inhomogeneous PPs (cf. the point-process parametrization of RMTPP). The Gaussian distribution (with $\sigma\rightarrow0$) is a standard approximation of the Dirac delta function in the posterior (see also our answer to reviewer 1d9q). We explored other options, such as mixtures of Beta and Weibull distributions, but found their parametric form too restrictive to model Dirac delta functions. The MLPs have two layers, a hidden dimension of 32, and a ReLU activation (we will add this information to the App.). Lastly, $\sigma$ in the parametrization refers to the sigmoid activation, which we will make explicit.
4. The objective is only computed for $A\cup C$ as indicated in Eq. 5. Note that the sum in Eq. 5 could equally be written as $\sum_{t_i\in\mathbf{t}^{(A\cup C)}}$.
5. We employ re-normalized truncated Gaussians on the interval [0,T]. We will add this information to the paragraph.
6. The intensity and classifier get the positional embedding of the diffusion time n as an input. The history embedding $\mathbf{h}$ is simply added to this embedding. We will revise the last sentence to even better convey this information.
7. When sampling from any diffusion model, a noisy sample is gradually refined to produce a sample from the learned data distribution. Hence, each subsequent diffusion step can correct for errors (or rather noise) of the previous ones. In our case, we refine the **entire event sequence** at every diffusion step to finally produce a data sample. This starkly contrasts with TPP models that are autoregressive in the event time and do not adjust errors in earlier events. This difference becomes especially evident in our conditional sampling experiment, where the accumulation of error in event time restricts the forecast capability of autoregressive TPP models. Lastly, we would like to use this chance to refer to our response to reviewer 1d9q, where we elaborate on the relation between our loss and the ELBO and on training our model by minimizing the KL divergence between our approximate and the true posterior.
8. We think there has been a slight misunderstanding. We report not only the MAPE for the sequence lengths but also the Wasserstein distance between the generated and ground-truth sequences. This Wasserstein distance measures the distance between the two counting measures representing each event sequence.
> Computational complexity
In our response to all reviewers, we added a plot to the PDF that compares the sampling runtimes of the different models. We find that for longer sequences, the runtime of our unoptimized model is comparable to that of autoregressive models. This interesting finding can be explained on a conceptual level, as our diffusion model presents a different trade-off: instead of being autoregressive in event time like common TPP models, our model refines the entire sequence in parallel for a fixed number of $N$ diffusion steps. Thereby, the sampling time is almost constant across different sequence lengths.
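The trade-off can be sketched by counting sequential network calls (a toy illustration of ours, not the actual implementation): autoregressive sampling needs one call per generated event, while the diffusion sampler needs a fixed number of refinement steps regardless of sequence length.

```python
# Toy cost model: number of *sequential* network calls needed to sample
# one event sequence under each paradigm.

def autoregressive_steps(seq_len: int) -> int:
    # one network call per generated event -> grows with sequence length
    return seq_len

def diffusion_steps(n_diffusion: int, seq_len: int) -> int:
    # one network call per diffusion step, each refining the whole sequence
    # in parallel -> independent of sequence length
    return n_diffusion

for length in (10, 100, 1000):
    print(length, autoregressive_steps(length), diffusion_steps(100, length))
```

For short sequences the fixed budget of diffusion steps dominates; for long sequences the autoregressive per-event cost overtakes it, matching the runtime comparison described above.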
> Similarity to unpublished work
Thank you for pointing out this work, which discusses the general idea of a hierarchical composition of simple intensity functions and of defining invertible probabilistic transformations (destructors) of point processes. If you could provide us with a preprint or reference to this work, we would be happy to discuss and reference it.
---
Rebuttal Comment 1.1:
Title: Rebuttal
Comment: Thank you for your clarifications on my concerns regarding the design choices, and for taking my recommendations into account. There was indeed a misunderstanding regarding the metrics employed. Upon reading your updated description of the reverse process and your replies to other reviewers, I have increased my score. Nevertheless, in light of the fact that we won't have the opportunity to review the revised paper, I remain hesitant to recommend its acceptance. | Summary: The paper introduces a novel probabilistic diffusion framework for temporal point processes. Its significance lies in modeling a whole event sequence directly, overcoming common limitations of autoregressive models.
Strengths: Originality: This paper is very novel. It connects diffusion models with TPPs and models a whole event sequence, overcoming issues of autoregressive models.
Quality: The formulation of the approach and the method itself are sound, leveraging two properties of point processes: thinning and superposition. The discussion of the reverse process is reasonable, covering the four cases of the transition from step n to step n-1. The empirical results demonstrate the good performance of the proposed approach, especially in forecasting tasks.
Clarity: The paper is overall well-presented. Improvement can be made by better explaining the sets A-F.
Significance: The paper introduces a new way to model event sequences probabilistically which is appealing and significant to the TPP community.
Weaknesses: A. Experiments. In the density estimation evaluation, the results of Add and Thin are only comparable to the intensity-free model by Shchur (RNN in the paper), if not worse.
B. Application and limitation. One minor limitation is that it only deals with TPPs without marks. It would be interesting to see an extension to marked TPPs.
C. Overall presentation. It took me a while to understand the notation of the sets A-F. It would be nice to include such information in the caption of Figure 2.
D. No code is given, so I am not sure how reproducible the results are.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: A. Posterior approximation. How good is the posterior approximation in Add and Thin? Are there any theoretical insights?
B. Neural network. Could the authors provide some motivation for using the 3-layered CNN?
C. Loss. 1) Is the mixture of Gaussians in Eq. (4), used to model the conditional intensity, suitable for modeling an event time (or inter-event time) $t$, which is positive? 2) Add and Thin is a latent variable model with latent variables $t^{(n-1)}$. Usually, in diffusion models (reference [40] in the paper), latent variable models leverage variational inference to maximize the evidence lower bound, but this model does not. Could the authors shed some insight on this?
D. Cox process models. Do the authors have any experimental results with the baselines from references [5, 8, 17]?
E. Experiments. How many diffusion steps (n) are used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitation of the paper, according to the authors, is that it is limited to TPPs without marks. In addition, current sampling routines may not be efficient. TPP models can be applied to many fields; common application domains include traffic, social networks, and electronic health records.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review, appreciation of our work, and suggestions to further improve our paper.
> W.A Comparison to Shchur et al.
We want to point out that the intensity-free model by Shchur et al. is a very strong baseline, and both our model and this baseline achieve near-perfect metrics in the density estimation task on most datasets. Further, the proposed model is primarily motivated through the conditional sampling/forecasting task, where our model outperforms all baselines.
> W.B Minor point: Marks
We agree that including marks would pose an interesting extension of our model. In this paper, we focused on deriving a sound diffusion process for TPPs to tackle the complex task of accurately modeling event sequences in time and leave the extension to marks for future work.
> W.C Presentation of sets
Please refer to our response to all reviewers for an improved presentation of the sets and reverse process.
> W.D Reproducibility of results
We will release a well-documented version of our code upon acceptance.
> Q. Relation of the loss to the ELBO
Thank you for this important remark. Our model's objective function is indeed equivalent to the ELBO of a diffusion model, as shown below. We will add this discussion to the appendix and reference it in Sec. 3.3.
The ELBO of diffusion models [2] is derived as follows:
\begin{equation}
\begin{split}
\mathcal{L}\_{ELBO} = \mathbb{E}\_{q} \Big[\underbrace{D\_{KL}\left(q(\mathbf{t}^{(N)}|\mathbf{t}^{(0)})\parallel p(\mathbf{t}^{(N)})\right)}\_{\mathcal{L}\_{N}}+\sum_{n=2}^{N} \underbrace{D\_{KL}\left(q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)},\mathbf{t}^{(n)})\parallel p_{\theta}(\mathbf{t}^{(n-1)}|\mathbf{t}^{(n)})\right)}\_{\mathcal{L}\_{n}}-\underbrace{\log p_{\theta}(\mathbf{t}^{(0)}|\mathbf{t}^{(1)})}\_{\mathcal{L}\_{0}}\Big].
\end{split}
\end{equation}
The Janossi density [3] of a TPP allows us to represent each element of the ELBO in terms of our derived inhomogeneous (approximate) posterior intensities (Sec. 3.2.) as follows:
\begin{equation}
p(\mathbf{t}) =e^{-\int_0^T \lambda(t)dt} \prod_{t_i\in \mathbf{t}} \lambda(t_i).
\end{equation}
$\boldsymbol{\mathcal{L}\_{N}}$: $\mathcal{L}\_{N}$ is constant as $q(\mathbf{t}^{(N)}|\mathbf{t}^{(0)})$ and $p(\mathbf{t}^{(N)})$ have no learnable parameters.
$\boldsymbol{\mathcal{L}\_{0}}$: We directly train our model to optimize this likelihood term as described in Sec. 3.3.
$\boldsymbol{\mathcal{L}\_{n}}$: The KL divergence between two densities is defined as:
\begin{align}
D\_{KL}(q\parallel p_{\theta}) = \mathbb{E}\_{q}\left[\log (q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)},\mathbf{t}^{(n)}))-\log (p_{\theta}(\mathbf{t}^{(n-1)}|\mathbf{t}^{(n)}))\right],
\end{align}
where only the right-hand side relies on $\theta$, letting us minimize the KL divergence by maximizing the expectation over $\log(p_{\theta}(\mathbf{t}^{(n-1)}|\mathbf{t}^{(n)}))$.
To add some additional context to the KL divergence in $\mathcal{L}\_{n}$ and similar to the derivation of the posterior, we will further distinguish three cases:
- **Set E:** Our parametrization uses the same intensity function for $q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)},\mathbf{t}^{(n)})$ and $p_\theta(\mathbf{t}^{(n-1)}|\mathbf{t}^{(n)})$, which does not rely on any learned parameters.
- **Set B \& D:** $q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)}, \mathbf{t}^{(n)})$ and $p_\theta(\mathbf{t}^{(n-1)}|\mathbf{t}^{(n)})$ are defined by Bernoulli distributions over each element of $\mathbf{t}^{(n)}$. The BCE minimizes the KL divergence between the two distributions as proposed in Sec. 3.3.
- **Set C**: Minimizing this KL divergence by maximizing the $\mathbb{E}\_{q}\left[\log(p_{\theta}(\mathbf{t}^{(n-1)} |\mathbf{t}^{(n)}))\right]$ is proposed in Sec. 3.3 to learn $\lambda_{\theta}^{(A\cup C)}(t)$. Note that $q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)},\mathbf{t}^{(n)})$ is sampled from by independently thinning $\mathbf{t}^{(0)}\setminus\mathbf{t}^{(n)}$. Consequently, by minimizing the NLL of our intensity $\lambda_{\theta}^{(A\cup C)}(t)$ with regards to $\mathbf{t}^{(0)}\setminus\mathbf{t}^{(n)}$, we optimize the expectation in closed form.
> Q.A Posterior approximation
Considering our ELBO answer, we directly minimize the KL divergence to match the true posterior. Here, the posterior intensity $\lambda^{(A\cup C)}$ consists of Dirac delta functions, which in the limit $\sigma\rightarrow0$ can be perfectly approximated by Gaussians [1]. Further, we use a universal function approximator (MLP) to parameterize the Bernoulli distribution in the classification.
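The limiting behaviour $\sigma\rightarrow0$ can be checked numerically with the Gaussian CDF (a stdlib-only sketch of ours, not from the paper): as $\sigma$ shrinks, a Gaussian centred at $\mu$ places essentially all of its mass in an arbitrarily small interval around $\mu$, i.e., it approximates a Dirac delta at $\mu$.

```python
import math

def mass_near_mu(sigma: float, eps: float = 1e-2) -> float:
    # P(|t - mu| < eps) for N(mu, sigma^2), via the Gaussian CDF (math.erf)
    return math.erf(eps / (sigma * math.sqrt(2.0)))

for sigma in (1e-1, 1e-2, 1e-3):
    print(sigma, mass_near_mu(sigma))
# the mass near mu approaches 1 as sigma shrinks
```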
> Q.B Motivation for the CNN
We use a CNN with dilation and circular padding as it is computationally much more efficient than attention-based encoders and captures long-range dependencies better than RNN-based ones. We will clarify this choice in Sec. 3.3 in the final version.
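To give a rough intuition for why dilation with circular padding reaches distant positions along an event sequence, here is a minimal pure-Python sketch (ours; the paper's actual encoder architecture is not reproduced here):

```python
def dilated_circular_conv(x, w, dilation):
    """1-D convolution with circular padding: indices wrap around the
    sequence, and dilation spaces the kernel taps so each output mixes
    positions `dilation` steps apart."""
    L, K = len(x), len(w)
    return [sum(w[k] * x[(i + k * dilation) % L] for k in range(K))
            for i in range(L)]

# with dilation 2, each output mixes positions two steps apart (wrapping)
print(dilated_circular_conv([1, 0, 0, 0], [1, 1], dilation=2))  # → [1, 0, 1, 0]
```

Stacking a few such layers with growing dilation lets the receptive field cover the whole sequence with far fewer sequential operations than an RNN.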
> Q.C Gaussian mixture
We use truncated Gaussians, ensuring positive intensity only on [0, T].
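The renormalization can be made concrete with a stdlib-only sketch (ours; parameter values are illustrative): the Gaussian density is divided by its mass inside the observation window $[0, T]$ and set to zero outside it.

```python
import math

def norm_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def trunc_gauss_pdf(t: float, mu: float, sigma: float, T: float) -> float:
    """Gaussian pdf renormalized to [0, T]; zero outside the interval."""
    if not 0.0 <= t <= T:
        return 0.0
    z = norm_cdf(T, mu, sigma) - norm_cdf(0.0, mu, sigma)  # mass inside [0, T]
    pdf = math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return pdf / z
```

By construction the density integrates to one over $[0, T]$, so each mixture component remains a proper distribution on the observation window.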
> Q.D Cox processes
Cox processes are traditional parametric TPP models, where finding efficient posterior sampling via MCMC for the latent dimensions is one main line of research [17]. Yet, to the best of our knowledge, there does not exist a computationally efficient Cox process implementation. For example, the training times reported for a two-layer model in [17] are about 1-2 orders of magnitude higher than those of any of our applied models, ultimately making a hyperparameter search and a comparison across 13 benchmark datasets infeasible. Please let us know if there is any specific efficient implementation we can compare with.
> Q.E $n$
We report this hyperparameter in Sec. 5 and use $n=100$ for all experiments.
### References
1. Ghatak, Ajoy, et al. "The dirac delta function." Quantum Mechanics: Theory and Applications (2004): 3-18.
2. Ho et al., [16] in the paper
3. Daley and Vere-Jones, [6] in the paper
---
Rebuttal Comment 1.1:
Title: Thank you for your response.
Comment: Thank you for your clarification. I found it helpful and clear. Therefore, I keep my recommendation and will stay tuned about the discussion from other reviewers on your interesting work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and appreciation of the contribution and originality of our novel diffusion-based TPP model. We have attached a PDF with the model's sampling runtime and would further like to highlight parts of our answers to reviewer 1d9q (ELBO), reviewer 1xQ2 (Design choices), and reviewer RwFj (Model and method description).
In the following, we re-introduce a high-level explanation of the reverse process and the treatment of the sets A-F that we will include in Sec. 3.2 in the updated version of the paper. We want to emphasize that the following introduces neither new notation nor new findings and solely presents Sec. 3.2 in a way that is more accessible to the reader.
**Reverse process and case distinction**
To sample realizations $\bf{t}\sim\lambda_0$ starting from $\mathbf{t}^{(N)}\sim\lambda_{HPP}$, we need to learn to reverse the Markov chain of the forward process. Conditioned on $\mathbf{t}^{(0)}$, the reverse process at step $n$ is given by the posterior $q(\mathbf{t}^{(n-1)}|\mathbf{t}^{(0)},\mathbf{t}^{(n)})$, which is an inhomogeneous Poisson process for the chosen forward process. Therefore, the posterior can be represented by a history-independent intensity function $\lambda_{n-1}(t|\mathbf{t}^{(0)},\mathbf{t}^{(n)})$. As the forward process is defined by adding and thinning event sequences, the posterior over the random sequence $\mathbf{t}^{(n-1)}$ can be decomposed into disjoint sets of points based on whether they are also in $\mathbf{t}^{(0)}$ or $\mathbf{t}^{(n)}$. We use this case distinction to derive the posterior intensity and distinguish the following cases: points in $\mathbf{t}^{(n-1)}$ that were kept from $0$ to $n$ (**B**), points in $\mathbf{t}^{(n-1)}$ that were kept from $0$ to $n-1$ but thinned at the $n$-th step (**C**), added points in $\mathbf{t}^{(n-1)}$ that are kept in the $n$-th step (**D**), and added points in $\mathbf{t}^{(n-1)}$ that are thinned in the $n$-th step (**E**). Since the sets **B-E** are disjoint, the posterior intensity is a superposition of the intensities of each subset of $\mathbf{t}^{(n-1)}$: $\lambda_{n-1}(t|\mathbf{t}^{(0)},\mathbf{t}^{(n)})=\lambda^{(B)}(t)+\lambda^{(C)}(t)+\lambda^{(D)}(t)+\lambda^{(E)}(t)$.
To derive the intensity functions for cases B-E, we additionally define the following helper sets: **A** the points $\mathbf{t}^{(0)}\setminus\mathbf{t}^{(n-1)}$ that were thinned until $n-1$ and **F** the points $\mathbf{t}^{(n)}\setminus\mathbf{t}^{(n-1)}$ that have been added at step $n$. The full case distinction is further illustrated in Fig. 2. In the following paragraphs, we derive the intensity functions for cases B-E:
**B**: The set $\mathbf{t}^{(B)}$ can be formally defined as $\mathbf{t}^{(0)}\cap\mathbf{t}^{(n)}$ as $(\mathbf{t}^{(0)}\cap\mathbf{t}^{(n)})\setminus\mathbf{t}^{(n-1)}=\emptyset$, almost surely. This is because adding points at any of the locations $t\in\mathbf{t}^{(0)}\cap\mathbf{t}^{(n)}$ carries zero measure at every noising step. Hence, given $\mathbf{t}^{(0)}\cap\mathbf{t}^{(n)}$ the intensity can be written as a sum of Dirac measures: $\lambda^{(B)}(t)=\sum_{t_i\in(\mathbf{t}^{(0)}\cap\mathbf{t}^{(n)})}\delta_{t_i}(t)$.
**C**: Given $\mathbf{t}^{(A\cup C)}=\mathbf{t}^{(0)}\setminus\mathbf{t}^{(n)}$, $\mathbf{t}^{(C)}$ can be found by thinning and consists of points that were kept by step $n-1$ and removed at step $n$. Using the thinning of Eq. 2 \& 3, we know the probability of a point from $\mathbf{t}^{(0)}$ being in $\mathbf{t}^{(C)}$ and $\mathbf{t}^{(A\cup C)}$ is $\bar\alpha_{n-1}(1-\alpha_n)$ and $1-\bar\alpha_{n}$, respectively. Since we already know $\mathbf{t}^{(B)}$ we can consider the probability of finding a point in $\mathbf{t}^{(C)}$, given $\mathbf{t}^{(A\cup C)}$, which is equal to $\frac{\bar\alpha_{n-1}-\bar\alpha_{n}}{1-\bar\alpha_{n}}$.
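A quick numerical check (ours, with a hypothetical noise schedule) confirms the stated probability: since $\bar\alpha_n=\bar\alpha_{n-1}\alpha_n$, the probability $\bar\alpha_{n-1}(1-\alpha_n)$ of being kept until step $n-1$ and thinned at step $n$, divided by the probability $1-\bar\alpha_n$ of being thinned by step $n$, equals $\frac{\bar\alpha_{n-1}-\bar\alpha_n}{1-\bar\alpha_n}$.

```python
# Hypothetical per-step keep probabilities alpha_n (not the paper's schedule).
alphas = [0.99, 0.97, 0.9, 0.8, 0.6]

abar = 1.0  # running product abar_n = prod_{m<=n} alpha_m
for alpha_n in alphas:
    abar_prev = abar
    abar *= alpha_n
    p_conditional = abar_prev * (1 - alpha_n) / (1 - abar)  # kept to n-1, thinned at n
    p_simplified = (abar_prev - abar) / (1 - abar)           # form stated in the text
    assert abs(p_conditional - p_simplified) < 1e-12
    print(round(p_simplified, 4))
```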
**E**: The set $\mathbf{t}^{(E)}$ contains all points $t \notin (\mathbf{t}^{(0)}\cup \mathbf{t}^{(n)})$ that were added until step $n-1$ and thinned at step $n$. Again using Eq. 2 \& 3, we can see that these points were added with intensity $(1-\bar\alpha_{n-1})\lambda_{HPP}$ and then removed with probability $(1-\alpha_n)$ at the next step. Equivalently, we can write down the intensity that governs this process as $\lambda^{(E)}=(1-\bar\alpha_{n-1})(1-\alpha_n)\lambda_{HPP}$, i.e., sample points from an HPP and thin them to generate a sample $\mathbf{t}^{(E)}$.
**D**: The set $\mathbf{t}^{(D)}$ can be found by thinning $\mathbf{t}^{(D\cup F)}=\mathbf{t}^{(n)}\setminus\mathbf{t}^{(0)}$ and contains the points that were added by step $n-1$ and then kept at step $n$. The processes that generated $\mathbf{t}^{(D)}$ and $\mathbf{t}^{(F)}$ are two independent HPPs with intensities $\lambda^{(D)}=(1-\bar\alpha_{n-1})\alpha_n\lambda_{HPP}$ and $\lambda^{(F)}=(1-\alpha_{n})\lambda_{HPP}$, respectively, where $\lambda^{(D)}$ is derived similar to $\lambda^{(E)}$. Since $\mathbf{t}^{(D)}$ and $\mathbf{t}^{(F)}$ are independent HPPs and we know $\mathbf{t}^{(D\cup F)}$, the number of points in $\mathbf{t}^{(D)}$ follows a Binomial distribution with probability $p=\frac{\lambda^{(D)}}{\lambda^{(D)}+\lambda^{(F)}}$ (see A.2 for details). That means we can sample $\mathbf{t}^{(D)}$ given $\mathbf{t}^{(n)}$ and $\mathbf{t}^{(0)}$ by simply thinning $\mathbf{t}^{(D\cup F)}$ with probability $1-p$ and express the intensity as a thinned sum of Dirac measures (c.f., Fig. 2).
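The resulting sampling rule for set **D** can be sketched in a few lines (ours; the intensity values are illustrative): by the superposition property, each point of $\mathbf{t}^{(D\cup F)}$ belongs to the HPP with intensity $\lambda^{(D)}$ independently with probability $p=\lambda^{(D)}/(\lambda^{(D)}+\lambda^{(F)})$, so $\mathbf{t}^{(D)}$ is obtained by independent thinning.

```python
import random

def thin_to_set_D(points, lam_D, lam_F, rng=None):
    """Keep each point independently with p = lam_D / (lam_D + lam_F)."""
    rng = rng or random.Random(0)
    p = lam_D / (lam_D + lam_F)
    return [t for t in points if rng.random() < p]

# with lam_D = 3 and lam_F = 1, roughly 75% of the points survive the thinning
kept = thin_to_set_D(list(range(10000)), 3.0, 1.0)
print(len(kept) / 10000)
```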
For sequences of the training set, where $\mathbf{t}^{(0)}$ is known, we can compute these intensities for all samples $\mathbf{t}^{(n)}$ and reverse the forward process. However, $\mathbf{t}^{(0)}$ is unknown when sampling new sequences. Therefore, similarly to the denoising diffusion approaches [16], in the next section, we show how to approximate the posterior intensity, given only $\mathbf{t}^{(n)}$. Further, in Sec. 3.4, we demonstrate how the trained neural network can be leveraged to sample new sequences $\mathbf{t}\sim\lambda_0$.
Pdf: /pdf/6ee6225f8258bd3a4ddc10fb3d889aacff5b0e68.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SPACE: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning | Accept (poster) | Summary: This paper introduced a novel method to evaluate participant contribution in the setting of federated learning. In the beginning, the author raised two challenges of recent works related to participant contribution in FL: multi-round training and dataset dependency, which are actually about the communication and computational costs in FL. The author derived the SPACE model to address these two challenges. Specifically, to address the high communication cost issue, the author introduced a module named Federated Knowledge Amalgamation, which enables a single round of communication. Then the author introduced Prototype-based Model Evaluation to reduce the evaluation complexity, since this method only needs similarity comparisons without iterating through a validation set. In addition, in the contribution part, the author rectified the utility function with a logistic function to address the rationality violation issue. Last, the author evaluated participant contribution estimation with the SPACE model on two datasets against four baselines. Furthermore, the author experimented with SPACE on the client reweighting and selection problem.
Strengths: 1. The author raised two challenges of participant contribution evaluation in FL, and introduces corresponding techniques to address them.
2. The author explained the reason and function of each module in detail.
3. The author provides a theoretical value of communication costs and computational costs of baselines and derived model.
Weaknesses: 1. The author should pay attention to the writing style, for example, by focusing more on key points, such as how the computational complexity is reduced. Otherwise, it is hard to follow.
2. The author should provide more detail about the theoretical analysis of communication and computational cost. For example, how does the design reduce complexity, and why is it appropriate for GPUs?
3. In Section 4.3, the author points out that applying model performance as the utility function may violate rationality, and adds an activation function as a solution. Could you explain why you chose this activation and why this design can avoid violations?
4. The focus of this paper is on reducing complexity, so the author should provide results on the computational and communication costs in the experiment section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Like the suggestion in weakness:
1. The author should provide more detail about the theoretical analysis of communication and computational cost. For example, how does the design reduce complexity, and why is it appropriate for GPUs?
2. In Section 4.3, the author points out that applying model performance as the utility function may violate rationality, and adds an activation function as a solution. Could you explain why you chose this activation and why this design can avoid violations?
3. The focus of this paper is on reducing complexity, so the author should provide results on the computational and communication costs in the experiment section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author could provide more details of the theoretical complexity analysis. In addition, more comprehensive experiments should be conducted, including model complexity, an ablation study, and a case study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> The author should pay attention to the writing style. For example, more focus on key points, like how to reduce computational complexity. Otherwise, it is hard to follow.
Thanks for pointing out this issue. We have already provided the detailed complexity analysis in the supplementary materials and will move to the main manuscript in the camera-ready version if space allows.
> The author should provide more detail about the theoretical analysis on communication and computational cost. For example, how the design has the advantage to reduce complexity and appropriate for GPU?
Thanks. We have implemented a hierarchical amalgamation technique in SPACE to minimize overall GPU memory usage. Specifically, as the number of clients $n$ increases, the number of Feature Projector Modules (FPMs) that need to be trained also grows. Rather than amalgamating all clients simultaneously, which would require substantial peak GPU memory, our proposed hierarchical amalgamation approach amalgamates fewer clients at a time, effectively reducing the peak GPU memory requirements.
> In section 4.3, the author pointed out a problem that applying model performance as utility function may violate rationality. And the author add a activation as solution. Could you explain why you choose this activation and why this design can avoid violations?
We employ the sigmoid function because of its capacity to aptly represent the characteristics of user satisfaction. As model performance surpasses a certain threshold, the number of satisfied users increases significantly, but the satisfaction eventually saturates for exceedingly high performance levels. For instance, a model with a performance of 97% may not exhibit substantially different user satisfaction compared to one with a performance of 98%. The sigmoid function effectively captures these two aspects of user satisfaction. To ensure adherence to individual rationality, we can select a sigmoid function with a large value for the parameter $k$ and set $T$ between the greatest individual performance and the full coalition performance. This configuration substantially diminishes the individual values while amplifying the values of coalitions, thus mitigating the risk of violating individual rationality.
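A minimal sketch (ours; the $k$ and $T$ values below are hypothetical) of such a sigmoid-rectified utility illustrates both properties: performances below the threshold map to near-zero utility, and utility saturates for very high performance.

```python
import math

def rectified_utility(performance: float, k: float = 50.0, T: float = 0.9) -> float:
    """Sigmoid rectification of a raw performance value: steepness k,
    threshold T (both hypothetical illustrative values)."""
    return 1.0 / (1.0 + math.exp(-k * (performance - T)))

print(rectified_utility(0.80))  # individual performance below T: small value
print(rectified_utility(0.95))  # coalition performance above T: near-saturated
```

With a steep $k$ and $T$ placed between the best individual performance and the full-coalition performance, individual values shrink toward 0 while coalition values saturate toward 1, which is exactly the configuration described above.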
> The focus in this paper is to reduce complexity, so the author should provide results of the computational cost and communication cost in experiment part.
Thank you for the valuable suggestion. Indeed, we have already included the computational cost in terms of runtime in Table 2, indicating the computational efficiency of our proposed method. To further illustrate the communication efficiency, we will supplement the communication cost in terms of the amount of data transmitted, measured in megabytes (MB), as follows. Note that the communication cost shown in the table is calculated as $Comm = 2 \cdot n \cdot R_g \cdot |\theta|$, where $R_g$ denotes the number of communication rounds, $|\theta|$ represents the model size in MB, and the factor of 2 accounts for the upload and download of the models. For approaches that require retraining, the number of clients $n$ counts repeated clients during retraining.
The findings concerning communication costs underscore the notable communication efficiency of the proposed SPACE framework. The adoption of a single-round amalgamation strategy results in a substantial decrease in communication costs, yielding savings proportional to a factor of $R_g$ in comparison to approaches that demand the complete training of the federated model.
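The formula above can be computed directly; the model size used below (0.088 MB) is a hypothetical value chosen only to illustrate the $R_g$-fold saving of a single amalgamation round over multi-round training.

```python
def comm_cost_mb(n_clients: int, rounds: int, model_size_mb: float) -> float:
    """Comm = 2 * n * R_g * |theta|; the factor 2 covers one upload and
    one download of the model per participating client per round."""
    return 2 * n_clients * rounds * model_size_mb

# hypothetical setting: 10 clients, 0.088 MB model
print(comm_cost_mb(10, 20, 0.088))  # multi-round federated training (R_g = 20)
print(comm_cost_mb(10, 1, 0.088))   # single-round amalgamation (R_g = 1)
```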
* Result on MNIST
| | GT | TMC | GTG | DIG-FL | SPACE(Avg) | SPACE |
|--- |--- |--- |---|---|---|---|
| Non-IID | 0.6877 | **0.9824** | 0.9287 | 0.8715 | 0.9713 | 0.9448 |
| Mislabel | 0.4230 | 0.9507 | 0.9608 | 0.9580 | 0.9529 | **0.9612** |
| Times(s) | 97137 | 84796 | 62473 | 315 | 294 | **160** |
| Comm(MB) | 11813.12 | 10292.48 | 35.2 | 35.2 | 35.2 | **1.76** |
* Result on CIFAR
| | GT | TMC | GTG | DIG-FL | SPACE(Avg) | SPACE |
|--- |--- |--- |---|---|---|---|
| Non-IID | 0.6089 | 0.8877 | 0.8208 | 0.7546 | **0.9540** | 0.9290 |
| Mislabel | 0.5192 | 0.9595 | 0.4148 | 0.9598 | 0.9565 | **0.9641** |
| Times(s) | 7468 | 4950 | 835 | 536 | 315 | **295** |
| Comm(MB) | 307800 | 202920 | 11400 | 11400 | 11400 | **570** | | Summary: This paper studies how to efficiently evaluate the contribution of each client during federated training. It proposes a framework named SPACE, which trains a student model on the server within one communication round to measure the similarity between the local datasets and the validation dataset on the server. Finally, extensive experiments are conducted to demonstrate the efficiency of SPACE.
Strengths: - The paper is well written and organized.
- The proposed solution is feasible and efficient.
- Extensive experiments are conducted to evaluate the proposed framework.
Weaknesses: - The assumption of SPACE is too strong. (See Questions 1 and 2)
- The proposed contribution estimation framework is decoupled from federated training. (See question 3)
- The dataset and client number in the experiment are too small. (See Question 4)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. One important assumption in this paper is that the server holds a validation dataset, which is rare in reality. What if the validation dataset is held by distributed clients rather than the central server?
2. In section 1, the paper states "if the server and client distributions are similar, the client dataset proves helpful in correctly classifying the data in the validation set of the server."
- First, just like Equation 1 the target of FL is to cooperatively train a model that performs well on the test datasets in all clients rather than the validation set of the server.
- Secondly, it seems that the authors assume that the validation dataset in the server shares the same distribution as the distributed test datasets. In my opinion, this assumption is too strong and impractical. If the server already knows the distribution of the test datasets, there is no need to federated-train a model.
3. The contribution estimation in SPACE is decoupled from federated training. I'm curious how SPACE deals with client over-sampling, where the number of participations is unbalanced across clients and some clients may not be selected during training.
4. In the experiment, the dataset and the number of clients are too small (5 for CIFAR10 and 10 for MNIST).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback. We have answered all your concerns. In the following, we respond point by point.
> One important assumption in this paper is that the server holds a validation dataset, which is rare in reality. What if the validation dataset is held by distributed clients rather than the central server?
If distributed clients solely retain their validation dataset, SPACE remains operational, albeit with the need for an extra unlabeled server dataset for knowledge amalgamation. Furthermore, a modification of the distillation loss for unlabeled data, as illustrated in [39], becomes necessary. Alternatively, SPACE(Avg) could serve as a simpler alternative for building feature embedding space if access to an additional unlabeled server dataset is unfeasible. After constructing the embedding space, clients must upload the prototypes of their respective validation sets. The global validation prototypes are then obtained by performing a weighted sum of the local validation prototypes, and the contribution can be calculated using the prototype-based evaluation. However, a potential drawback of this setup is that if mislabeled data exists in any of the local validation sets, the global validation set would also be adversely affected.
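The prototype aggregation described above can be sketched in a few lines (ours; the weights, dimensions, and vectors are purely illustrative): global validation prototypes are a weighted sum of local validation prototypes, and a client's score is a similarity, e.g., cosine similarity, between its prototype and the global validation prototype.

```python
import math

def weighted_prototype(local_protos, weights):
    """Weighted average of per-client prototype vectors (all same dimension)."""
    dim = len(local_protos[0])
    total = sum(weights)
    return [sum(w * p[d] for w, p in zip(weights, local_protos)) / total
            for d in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# two clients with illustrative 2-D prototypes, equal weights
global_proto = weighted_prototype([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
print(global_proto)                      # → [0.5, 0.5]
print(cosine([1.0, 0.0], global_proto))  # similarity of client 1 to the global prototype
```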
We provide additional experimental results for SPACE with unlabeled data for knowledge integration. The assignment of weights to clients in the loss function is guided by their prediction entropy, with greater weighting for those with lower entropy. Although slight decreases in performance are observed, particularly in Non-IID scenarios, these can be attributed to the absence of label information. Nonetheless, the outcomes manifest the feasibility of leveraging unlabeled data as a viable substitute for knowledge integration, especially in practical situations where obtaining labeled data poses difficulties.
* Result of SPACE(unlabel)
| Scenario | MNIST | CIFAR | Tiny-ImageNet |
|--- |--- |---|---|
| Non-IID | 0.8772 | 0.8653 | 0.9091 |
| Mislabel | 0.9611 | 0.9641 | 0.9231 |
> The authors assume that the validation dataset in the server shares the same distribution as the distributed test datasets. In my opinion, the assumption is too strong and impractical. If the server already knows the distribution of the test datasets, there is no need to federated-train a model.
Thanks for pointing out this issue. Following previous works [29, 43, 51, 52, 53], we assume that the server initially possesses a validation set. This validation set is pivotal for ensuring equitable and reliable assessment of contributions within the context of federated learning. As discussed in the "Broader Impact" section, we are fully cognizant of the challenges of procuring a trustworthy validation set in real-world settings. Nonetheless, specific scenarios exist wherein servers may be motivated to assemble such a validation set in exchange for substantial commercial benefits. Notably, instances like those discussed in [J, K] underscore this idea of servers having access to validation sets for contribution evaluation in real-world applications. In [J], multiple city-gas companies collaborate to train a hazard identification model. Similarly, [K] presents experiments in the healthcare domain involving the participation of eight esteemed medical institutions in China to construct healthcare decision-support models. Both of these applications necessitate a reliable server-side validation set to ensure AI models' accuracy objectives before their deployment in real-world scenarios. Though the size of the validation set might not be large enough for centralized training under this setting, it provides a reliable evaluation of model performance, especially when the clients hold mislabeled data. In light of such examples, we contend that our underlying assumption of the availability of a validation set on the server is practical in specific scenarios.
> The contribution estimation in SPACE is decoupled from federated training. I'm curious how SPACE deals with client over-sampling, where the number of participations for different clients is unbalanced and some clients may not be selected during training.
The clustered sampling (CS) method [11] is introduced to tackle over-sampling and reduce performance variance in federated learning. Instead of employing a multinomial distribution (MD) for sampling with replacement [25], which has exhibited substantial performance variance, the clustered sampling approach first categorizes clients into distinct clusters. Subsequently, clients are sampled from these clusters, ensuring that clients with unique distributions are more likely to be sampled. As mentioned in Section 5.2 within the context of client selection, we first utilize the SPACE framework to derive the prototypes and contributions of individual clients. The prototypes are then utilized for clustering purposes, while clients' contributions determine each client's sampling probability. This combined approach effectively addresses the over-sampling issue while enhancing the training outcomes.
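The scheme above could be sketched as follows; this is an illustrative toy (not the authors' code), which assumes the clients are already grouped into clusters by prototype similarity and that hypothetical contribution scores are available:

```python
import random

def sample_clients(clusters, contributions, seed=0):
    """Pick one client per cluster, with in-cluster selection probability
    proportional to each client's contribution score."""
    rng = random.Random(seed)
    selected = []
    for cluster in clusters:
        weights = [contributions[c] for c in cluster]
        selected.append(rng.choices(cluster, weights=weights, k=1)[0])
    return selected

clusters = [[0, 1], [2, 3, 4]]  # e.g., obtained by clustering client prototypes
contributions = {0: 0.9, 1: 0.1, 2: 0.4, 3: 0.4, 4: 0.2}
print(sample_clients(clusters, contributions))
```

Sampling one client per cluster keeps clients with unique distributions represented, while the contribution weights bias selection toward more valuable clients.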
> In the experiment, the dataset and the number of clients are too small (5 for CIFAR10 and 10 for MNIST).
Please see "Author Rebuttal by Authors" above.
[J] Yang, C., Liu, J., Sun, H., Li, T., & Li, Z. (2022). WTDP-Shapley: Efficient and Effective Incentive Mechanism in Federated Learning for Intelligent Safety Inspection. IEEE Transactions on Big Data.
[K] Liu, Z., Chen, Y., Zhao, Y., Yu, H., Liu, Y., Bao, R., ... & Yang, Q. (2022, June). Contribution-aware federated learning for smart healthcare. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 11, pp. 12396-12404).
---
Rebuttal Comment 1.1:
Title: Response
Comment: - Thanks for your response. It solves part of my concerns, and I'm glad to adjust my rating. | Summary: This paper proposes a single-round participant contribution evaluation method for FL. The novel and interesting part is using the sample embedding similarity between client data and (server) validation data to indicate contribution, thus avoiding the time-consuming model retraining step.
Strengths: 1. Very useful problem
2. Interesting idea of using embeddings instead of retraining models for contribution evaluation.
Weaknesses: 1. No theoretical analysis of why the embedding similarity can lead to an excellent approximation to the original Shapley-based contribution evaluation with re-training. Since embedding similarity-based contribution evaluation is quite different from original retraining-based contribution evaluation, such a theoretical analysis is a must; only empirical analysis cannot verify that the proposed contribution evaluation can always be good.
2. Experiment is too limited. More datasets, more parties, and malicious behaviors of parties should all be counted to ensure that the contribution obtained by the proposed method is robust.
3. The method seems to only work for classification. How to extend it to regression tasks?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Not enough. More discussions on when the proposed model can work well should be given.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback. We have answered all your concerns. In the following, we respond point by point.
> No theoretical analysis of why the embedding similarity can lead to an excellent approximation to the original Shapley-based contribution evaluation with re-training.
Please see "Author Rebuttal by Authors" above.
> Experiment is too limited. More datasets, more parties, and malicious behaviors of parties should all be counted to ensure that the contribution obtained by the proposed method is robust.
Please see "Author Rebuttal by Authors" above.
> The method seems to only work for classification. How to extend it to regression tasks?
Thanks. Our methodology is primarily designed for classification tasks, drawing from the works cited in references [29, 43, 51, 52, 53]. It is important to note that federated learning also encompasses the significant domain of regression tasks [H, I]. However, our literature review shows that contribution evaluation for regression tasks has received comparatively little attention. This notable gap in the literature signifies a crucial avenue for future exploration.
> More discussions on when the proposed model can work well should be given.
The experimental results indicate that the proposed SPACE approach is effective, especially in a mislabeled data scenario. This is because prototypes obtained from clients with mislabeled data exhibit substantial deviation from the prototypes of the validation set. Consequently, such deviations are efficiently identified by the prototype-based model evaluation approach employed in the proposed method.
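The deviation check described above could be illustrated as follows; this is a made-up toy (not the paper's code) in which a client's contribution proxy is the average cosine similarity between its class prototypes and the validation prototypes:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def prototype_score(client_protos, val_protos):
    """Average per-class cosine similarity to the validation prototypes."""
    sims = [cosine(client_protos[c], val_protos[c]) for c in val_protos]
    return sum(sims) / len(sims)

val = {0: [1.0, 0.0], 1: [0.0, 1.0]}
clean = {0: [0.9, 0.1], 1: [0.1, 0.9]}
mislabeled = {0: [0.1, 0.9], 1: [0.9, 0.1]}  # labels effectively flipped

# The mislabeled client's prototypes deviate strongly from validation.
assert prototype_score(clean, val) > prototype_score(mislabeled, val)
```

The flipped-label client scores far lower, which mirrors why mislabeled clients are efficiently identified by the prototype-based evaluation.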
[H] Lee, H., Bertozzi, A. L., Kovačević, J., & Chi, Y. (2022, May). Privacy-Preserving Federated Multi-Task Linear Regression: A One-Shot Linear Mixing Approach Inspired By Graph Regularization. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5947-5951). IEEE.
[I] Su, L., Xu, J., & Yang, P. (2022). Global convergence of federated learning for mixed regression. Advances in Neural Information Processing Systems, 35, 29889-29902.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response to the theoretical analysis. However, I do not see more experiments on users' potential malicious behaviors. I think that a rigorous analysis (both theoretical and empirical) of malicious behaviors is really important to indicate whether the proposed method is practically useful or not. This is because in practice we cannot assume that participants are (semi-)honest, we need to realize potential risks clearly.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for the valuable comments. We fully agree with the reviewer that considering the threat from malicious clients in federated learning is very important. As a matter of fact, as described in detail below, we indeed discussed two primary forms of possible harmful actions tied to contribution assessment and showed empirically how our SPACE can adequately tackle these concerns. However, in our submitted paper, we referred to "the behavior of malicious users" as "mislabelling." In the revised version of this work, we shall clearly use the term "the behavior of malicious users" so that our work can be better understood in the right context. Explicitly, the two forms we have addressed, described below, are malicious attacks by data poisoning and maliciously inflated contributions.
Malicious attack by data poisoning:\
The manipulation of data labels, a frequently employed malicious strategy by clients engaged in data poisoning attacks, is denoted as "label-flipping" in reference [D]. In our originally submitted paper, we adopted the term "mislabeled" to refer to unintentional and intentional mislabeling instances. In this case, malicious clients attempt to disrupt the training process by submitting gradients derived from mislabeled data. Explicitly, as depicted in Table 2 on page 8, the results highlight the effectiveness of SPACE in pinpointing those malevolent clients harboring inaccurately labeled data. With the server's privately-owned validation set, our framework efficiently identifies such malicious clients, even if they form the majority in the federated process. In addition, the experimental results in Figure 2 on page 8 demonstrate the robustness of our framework in the federated training. It is worth noting that the SPACE proposed employs contributions as weighting factors, thereby reducing the negative impact of malicious participants, i.e., the malicious clients usually contribute less due to their distributions differing from that of the server.
Maliciously inflating contribution: \
Another concern we addressed arises when clients aim to inflate their contributions maliciously to obtain more significant rewards. If the prototypes from the validation set become exposed, clients might exploit this knowledge to boost their contributions in an ad-hoc manner. In such a case, the SPACE utilizes prototype-based model evaluations, where all clients have access solely to the feature extraction layers of the amalgamated model while excluding access to the fully-connected layers, i.e., the final score. This strategic approach facilitates the creation of prototypes without compromising the confidentiality of the distribution that underlies the validation set. Note that another potential threat is clients employing the zeroth-order optimization technique [C] to derive prototypes with disproportionately high contributions. In this case, we can simply detect this attack by limiting the client's access to the server in that the SPACE does not require each client to recalculate their contributions frequently. As suggested, we will include the above in our revised paper for better clarification.
In addition to the aforementioned attacks directly associated with contribution evaluation, federated learning is known for its vulnerability to a spectrum of attacks [D], including but not limited to reconstruction attacks [E]. It is noted that to counter these threats, researchers have proposed defense mechanisms, such as gradient pruning [E], data representation perturbation [F], and differential privacy (DP) [G]. These works are essential. However, they are, in essence, orthogonal to the main theme of this paper, which focuses on contribution evaluation. Nevertheless, these privacy-preserving strategies can be integrated with our method, working collaboratively to enhance data privacy protection. These clarifications will also be added in the revision.
[C] Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. ACM AISec 2017\
[D] Federated learning attacks and defenses: A survey. IEEE Big Data 2022\
[E] Deep leakage from gradients. NeurIPS 2019\
[F] Soteria: Provable defense against privacy leakage in federated learning from representation perspective. CVPR 2021\
[G] Gradient-leakage resilient federated learning. ICDCS 2021 | Summary: The paper introduces a novel approach named Single-round Participants Amalgamation for Contribution Evaluation (SPACE) for efficiently evaluating the contribution of participants in Federated Learning (FL). Accurately evaluating participant contribution has been a challenge in current FL, especially considering cost and scalability. Current methods mostly use the Shapley value, a measure from cooperative game theory, but calculating it is computationally expensive. SPACE, on the other hand, combines two new components - Federated Knowledge Amalgamation and Prototype-based Model Evaluation - to reduce evaluation effort. Federated Knowledge Amalgamation distills information from all local models into the server model in just one round of communication, saving time. Prototype-based Model Evaluation compares server and client prototype similarities, effectively eliminating the dependency on the size of the validation set.
Additionally, SPACE modifies the utility function using a logistic function to better reflect user satisfaction, enhancing the utility function's rationality in real-world settings. Experimental results show that SPACE outperforms current methods in terms of both running time and Pearson's Correlation Coefficient, a measure of correlation strength. The efficacy of SPACE has been tested in various applications, including client reweighting and selection, and consistently demonstrates exceptional performance.
Strengths: - SPACE, appears to be a novel and innovative approach to evaluating the contribution of participants in Federated Learning. The introduction of components like Federated Knowledge Amalgamation and meta-learning using prototypes could potentially provide new avenues for research.
- The method aims to reduce the time and computational resources required for evaluating participant contributions by eliminating dependence on the size of the validation set and enabling evaluation within a single communication round. This efficiency is particularly valuable in Federated Learning, where minimizing communication is crucial due to privacy and resource concerns.
Weaknesses: - Although not specific to this one paper, a recurring trend in federated learning is to conduct experiments that may overfit to specific use cases. In particular, most papers demonstrate the efficacy of their methods using datasets limited to Federated MNIST and non-iid versions of CIFAR (in this case with only 10 classes). Although these experiments might demonstrate proof of concept, the adaptability and effectiveness of the method across a broader range of datasets and scenarios remain unclear. As such, I believe additional experiments with more extensive datasets, or an additional discussion on how this method can be applied to real-world scenarios, might help.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are there specific domains or types of applications where SPACE proves particularly effective or ineffective?
- You mention that SPACE enables participant evaluation within a single communication round. What impact does this have on the overall quality and accuracy of the federated learning model?
- How generalizable is the SPACE approach? Could it be adapted to various federated learning architectures and different types of machine learning models?
- Is there a possibility that SPACE could be used in a malicious way, such as inflating a participant's contribution unfairly?
- How does SPACE handle privacy concerns? Does the process of distilling information from local models to a server model pose any risks to data privacy?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and summary of our paper. We have addressed all your questions in the following.
> I believe additional experiments with more extensive datasets, or an additional discussion on how this method can be applied to real-world scenarios might help.
Please see "Author Rebuttal by Authors" above.
> Are there specific domains or types of applications where SPACE proves particularly effective or ineffective?
The experimental results indicate that the proposed SPACE approach is effective, especially in a mislabeled data scenario. This is because prototypes obtained from clients with mislabeled data exhibit substantial deviation from the prototypes of the validation set. Consequently, such deviations are efficiently identified by the prototype-based model evaluation approach employed in the proposed method.
> You mention that SPACE enables participant evaluation within a single communication round. What impact does this have on the overall quality and accuracy of the federated learning model?
Our SPACE assesses participants' contributions to the federated learning process rather than directly engaging in model training. Consequently, its implementation does not directly influence the overall quality and accuracy of the federated model. However, when utilized as the weighting mechanism for clients, it contributes positively to the robustness of the federated learning model, as evidenced by the results shown in Figure 2.
> How generalizable is the SPACE approach? Could it be adapted to various federated learning architectures and different types of machine learning models?
Our proposed SPACE is designed to assess participant contributions within the context of horizontal federated learning settings. In such scenarios, participants collectively engage in a federated learning process while sharing a common feature space yet possessing distinct data samples. Moreover, our SPACE can effortlessly integrate into other deep learning classification models because the two major components, i.e., the information aggregation (Federated Knowledge Amalgamation) and the evaluation protocol (Prototype-based model evaluation), are both model agnostic.
> Is there a possibility that SPACE could be used in a malicious way, such as inflating a participant's contribution unfairly?
If the prototypes of the validation set are exposed, clients could exploit this information to enhance their contributions artificially. A precautionary measure is implemented to counteract the risk of information leakage during the knowledge amalgamation. This involves granting clients access solely to the feature extraction layers of the amalgamated model while excluding the fully-connected layers. This approach facilitates the creation of prototypes without compromising the confidentiality of the distribution underlying the validation set. Another conceivable threat involves clients employing the zeroth-order optimization technique [C] to derive prototypes of disproportionately high contribution. However, this attack can be readily countered by limiting the client's access to the server. Such a limitation is sensible since clients typically do not need to re-calculate their contributions frequently.
> How does SPACE handle privacy concerns? Does the process of distilling information from local models to a server model pose any risks to data privacy?
Indeed, federated learning is susceptible to various attacks [D], mainly when uploading model weights or gradients, as it introduces the potential risk of reconstruction attacks [E]. Such attacks allow adversaries to reconstruct client training data by exploiting predicted confidence values, model parameters, and gradients. To counter these threats, researchers have proposed defense mechanisms, such as gradient pruning [E], data representation perturbation [F], and differential privacy (DP) [G]. These privacy-preserving strategies can be effectively integrated with our method, working collaboratively to enhance data privacy protection. However, it is crucial to emphasize that our current work centers on contribution evaluation, and the topic of attack-defense mechanisms lies beyond the scope of our discussion.
[C] Chen, P. Y., Zhang, H., Sharma, Y., Yi, J., & Hsieh, C. J. (2017, November). Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM workshop on artificial intelligence and security (pp. 15-26).
[D] Chen, Y., Gui, Y., Lin, H., Gan, W., & Wu, Y. (2022, December). Federated learning attacks and defenses: A survey. In 2022 IEEE International Conference on Big Data (Big Data) (pp. 4256-4265). IEEE.
[E] Zhu, L., Liu, Z., & Han, S. (2019). Deep leakage from gradients. Advances in neural information processing systems, 32.
[F] Sun, J., Li, A., Wang, B., Yang, H., Li, H., & Chen, Y. (2021). Soteria: Provable defense against privacy leakage in federated learning from representation perspective. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9311-9319).
[G] Wei, W., Liu, L., Wut, Y., Su, G., & Iyengar, A. (2021, July). Gradient-leakage resilient federated learning. In 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS) (pp. 797-807). IEEE.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It answers my questions and concerns, so I will be keeping my positive score. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their constructive comments. Common questions asked by multiple reviewers would be replied here in a unified manner.
> Theoretical support of why SPACE leads to an excellent approximation to the original Shapley-based contribution evaluation with re-training
Sorry for the misunderstanding. Actually, our SPACE still applies the Shapley value to calculate each participant's contribution as in previous works. The major contribution of SPACE is that we employ the similarity between prototypes of the server validation set and prototypes formed by clients in a coalition as the utility function for Shapley value calculation instead of the model performance, leading to a lower computation cost. It is worth noting that the theoretical foundation of why we can adopt the similarity function as our utility (in place of model performance) is that the difference in empirical risk between two distributions is bounded by the divergence between the two distributions when the labeling functions are the same (Theorem 1 in [A]). This finding implies that if the divergence between the client coalition dataset and the validation set is small, then the model trained on the client coalition dataset may obtain high performance on the validation set. In that case, it is reasonable to use the divergence to approximate the model performance on the validation set. However, the variation divergence is difficult to compute in practice. Therefore, we intuitively adopt the prototypes' similarity as an alternative to a distribution divergence. The rationale for selecting prototype similarity rests on two crucial considerations. Firstly, it can be easily computed, significantly expediting the evaluation process without necessitating a complete iteration across all samples. Secondly, its privacy-preserving nature renders it suitable for transmission within the framework of federated learning, an aspect that several preceding studies have embraced.
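The idea of plugging a cheap similarity-based utility into an otherwise standard Shapley computation could be sketched as follows. This is a deliberately tiny toy under our own assumptions (scalar prototypes, utility = minus the distance between a coalition's pooled prototype and the validation prototype), not the paper's exact utility function:

```python
from itertools import permutations

def utility(coalition, protos, val_proto):
    """Toy utility: similarity (negative distance) between the coalition's
    pooled prototype and the server validation prototype."""
    if not coalition:
        return 0.0
    pooled = sum(protos[c] for c in coalition) / len(coalition)
    return -abs(pooled - val_proto)

def shapley(protos, val_proto):
    """Exact Shapley values via all permutations (fine for tiny n)."""
    n = len(protos)
    phi = [0.0] * n
    n_perms = 0
    for order in permutations(range(n)):
        n_perms += 1
        seen = []
        for c in order:
            before = utility(seen, protos, val_proto)
            seen.append(c)
            phi[c] += utility(seen, protos, val_proto) - before
    return [p / n_perms for p in phi]

protos = [1.0, 1.1, 5.0]  # client 2's data deviates from the validation set
phi = shapley(protos, val_proto=1.0)
assert phi[2] == min(phi)  # the deviating client contributes least
```

Because the utility is a similarity between prototypes, each coalition evaluation is a constant-time computation rather than a model retraining, which is where the cost saving comes from.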
> The experiment is limited in both client numbers and datasets.
In the experiment, we conducted contribution evaluation experiments on the MNIST and CIFAR10 datasets, following the established protocols of previous studies. However, it is important to highlight that the number of clients used in our experiments was limited due to the necessity of calculating the actual Shapley value as the ground truth for evaluation. Specifically, doubling the number of clients from 10 to 20 would introduce an exponential increase in effort by a factor of about 1000. Consequently, we performed experiments with a maximum of 10 clients until a more efficient evaluation metric could be proposed. Notably, our proposed SPACE method demonstrated the ability to simultaneously amalgamate information from 100 clients, as demonstrated in the client selection experiment. During the rebuttal phase, we conducted an additional experiment on a partial Tiny-ImageNet dataset [B], comprising 50 categories with ten clients. The results are as follows: The proposed SPACE method achieved satisfactory performance in terms of PCC. Notably, feature alignment is more challenging in such a complex dataset. Thus, knowledge amalgamation requires more epochs for convergence, particularly in non-IID scenarios where clients hold data from disjoint classes. The SPACE(Avg) approach, which utilizes FedAvg to obtain the embedding space, may exhibit superior time efficiency for cases with only a few communication rounds. However, the communication benefits from single-round amalgamation still make SPACE the preferred solution when communication cost is a concern. Note that the communication cost shown in the table is calculated as $Comm = 2 \cdot n \cdot R_g \cdot |\theta|$, where $R_g$ denotes the number of communication rounds, $|\theta|$ represents the model size (in MB), and the constant 2 accounts for both the upload and download of the models. For approaches that require retraining, the number of clients $n$ includes repeated clients during retraining.
* Result on Tiny-ImageNet
| | GT | TMC | GTG | DIG-FL | SPACE(Avg) | SPACE |
|--- |--- |--- |---|---|---|---|
| Non-IID | 0.9082 | **0.9610** | 0.7425 | 0.8944 | 0.9175 | 0.9092 |
| Mislabel | 0.7833 | 0.9233 | 0.7998 | 0.8405 | **0.9293** | 0.9256 |
| Times(s) | 84994 | 69212 | 98855 | 266 | **252** | 453 |
| Comm(MB) | 2639200 | 2097600 | 8000 | 8000 | 8000 | **400** |
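The communication-cost formula above, $Comm = 2 \cdot n \cdot R_g \cdot |\theta|$, can be transcribed directly; the 20 MB model size below is an assumption chosen only so the two illustrative calls line up with the 400 MB and 8000 MB orders of magnitude in the table:

```python
def comm_cost_mb(n_clients, rounds, model_size_mb):
    """Comm = 2 * n * R_g * |theta|: upload + download for n clients
    over R_g communication rounds, in MB."""
    return 2 * n_clients * rounds * model_size_mb

# Single-round amalgamation vs. a hypothetical 20-round baseline,
# for 10 clients and an assumed 20 MB model:
print(comm_cost_mb(10, 1, 20))   # 400
print(comm_cost_mb(10, 20, 20))  # 8000
```

The linear dependence on $R_g$ is exactly why the single-round design keeps SPACE's communication cost an order of magnitude below the multi-round baselines.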
[A] Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. Machine learning, 79, 151-175.
[B] Le, Y., & Yang, X. (2015). Tiny imagenet visual recognition challenge. CS 231N, 7(7), 3. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the client contribution evaluation problem under federated learning settings. The goal is to achieve more computational and communicational efficient contribution evaluation. The paper proposes Federated Knowledge Amalgamation and Prototype-based Model Evaluation technique for the goal. Federated Knowledge Amalgamation treats client models as teacher models that together train the server model which serves as the student model. This training approach avoids the typical multi-round training of federated learning frameworks, thus reducing the communication costs. Prototype-based Model Evaluation derives the contribution of a client by computing and comparing the prototypes of the client models and the server model, thus reducing both the computation and communication costs of contribution evaluation. Experimental results on MNIST and CIFAR10 datasets confirm the effectiveness of the proposal.
Strengths: 1. This paper studies participant contribution evaluation in federated learning which is an important and trending topic.
2. The proposed techniques are intuitive and are effective as shown by experiments on two benchmark datasets.
3. The paper is well written and easy to follow overall.
Weaknesses: While the paper proposes an interesting and empirically effective technique for improving the efficiency of participant contribution evaluation in federated learning, there is no theoretical guarantee on the effectiveness of the proposed technique in terms of measuring the true participant contribution.
Minor presentation issues:
Missing whitespace: "Pandey et al.[34]", "Zhang et al.[64]"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: "For the non-IID scenario, we selected a subset of clients, ranging from 0% to 80%, and assigned them local datasets containing incomplete categories." Which categories are incomplete for each of the clients?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As discussed in the paper, the proposed technique relies on "a proper and reliable validation set with accurately labeled data, which can be challenging to achieve in real-world scenarios".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> While the paper proposes an interesting and empirically effective technique for improving the efficiency of participant contribution evaluation in federated learning, there is no theoretical guarantee on the effectiveness of the proposed technique in terms of measuring the true participant contribution.
Please see "Author Rebuttal by Authors" above.
> Minor presentation issues: Missing whitespace: "Pandey et al.[34]", "Zhang et al.[64]"
Thanks for pointing this out. We would revise it in our manuscript.
> "For the non-IID scenario, we selected a subset of clients, ranging from 0% to 80%, and assigned them local datasets containing incomplete categories." Which categories are incomplete for each of the clients?
Specifically, the non-IID clients only own datasets with partial categories. For instance, a client might be assigned a dataset containing only 3 of the ten classes available in the MNIST dataset. This allocation strategy allows us to create clients with significant heterogeneity in their datasets, resulting in challenging situations in federated learning because the data distribution differs across clients.
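A hypothetical sketch of this class-subset allocation (names and the tiny synthetic data are ours, for illustration only):

```python
def partition_non_iid(samples_by_class, client_class_sets):
    """Assign each client only the samples of its allowed classes.

    samples_by_class: dict class id -> list of samples
    client_class_sets: per-client list of allowed class ids
    """
    clients = []
    for classes in client_class_sets:
        local = []
        for c in classes:
            local.extend((c, s) for s in samples_by_class[c])
        clients.append(local)
    return clients

# Ten classes, four samples each; client 0 holds only 3 of the 10 classes
# (as in the MNIST example above), client 1 holds all of them.
data = {c: [f"img{c}_{i}" for i in range(4)] for c in range(10)}
parts = partition_non_iid(data, [[0, 1, 2], list(range(10))])
assert {c for c, _ in parts[0]} == {0, 1, 2}
```

Varying the size of each client's class set controls how heterogeneous, and hence how challenging, the resulting federated setting is.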
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed discussion. It addresses my question. I'm keeping my score since it is already the highest among others. | null | null | null | null | null | null |
Exponential Lower Bounds for Fictitious Play in Potential Games | Accept (poster) | Summary: This paper studies fictitious play in potential games and in particular, two-player identical payoff games. It is shown that fictitious play, under arbitrary tie-breaking, needs super-exponential time with respect to the number of actions to find an approximate Nash equilibrium. The lower bound is proved through recursive construction of a hard game in which fictitious play spirals slowly and takes super exponential time to reach the unique pure Nash equilibrium.
Strengths: - This paper makes significant progress on a fundamental problem, namely the convergent behavior of fictitious play in potential games. While Monderer et al. proved an asymptotic $O(1/t)$ rate, no finite-time guarantee is known before to my knowledge; this paper shows that any finite-time guarantee for classes of games that include identical-payoff games has to be super-exponential in the number of actions. It suggests that fictitious play in potential games, while asymptotically convergent, would have a burn-in period that is super-exponentially long.
- The paper is remarkably well written. The results are presented clearly and the construction is clean and easy to follow.
Weaknesses: - The asymptotic part of the lower bound translates to $\Omega(1/(n^2t^2))$, which I believe is not tight in light of the $O(1/t)$ upper bound.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: A couple of minor questions:
- Line 221: In Eq (2), should the last term be $R^{T_{l-1}}_{n-l+1}$ instead?
- Line 247: To invoke Lemma 3.6, wouldn't one need $n\sqrt{\epsilon}<1$? I can't seem to find where this is assumed.
- Line 248: "On the other hand, at each round $t\ge T^*$, ..., with rate $1/t$". Where is this statement proven?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: This work is of theoretical nature and there is no foreseeable negative ethical, societal implication.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all their effort, the insightful comments, and the careful reading of our work. We start by answering the reviewer's questions.
*Questions*
1. Thanks for spotting the typo.
2. The reviewer is right; we missed the assumption in Theorem 3.1. In fact, Lemma 3.6 requires $\epsilon \in O(1/n^3)$. Since in the proof of Theorem 3.1, Lemma 3.6 is applied for accuracy $\sqrt{\epsilon}$, we need to take $\epsilon < O(1/n^6)$. We will add the latter to Theorem 3.1.
3. Notice that once $(i(t),j(t)) = (n/2,n/2+1)$ for $t = T^\star$, then $(i(t),j(t)) = (n/2,n/2+1)$ for all $t \geq T^\star$, since $(n/2,n/2+1)$ is the maximum entry of the matrix $A$ and thus $n/2$ is the action with the most cumulative payoff for agent $x$ (resp. $n/2+1$ for agent $y$). We will add the above explanation in the revised version.
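As an illustration of the absorption behavior described above (our own sketch, not part of the paper's formal argument), discrete-time fictitious play on an identical-payoff game can be simulated in a few lines; the lowest-index tie-breaking and the toy 2x2 game are our assumptions:

```python
import numpy as np

def fictitious_play(A, T, i0, j0):
    """Discrete-time fictitious play on a two-player identical-payoff game A.
    Each round, both players best-respond to the opponent's empirical mixed
    strategy (ties broken by lowest index here)."""
    n, m = A.shape
    x_counts = np.zeros(n)  # row player's empirical action counts
    y_counts = np.zeros(m)  # column player's empirical action counts
    i, j = i0, j0
    for _ in range(T):
        x_counts[i] += 1
        y_counts[j] += 1
        i = int(np.argmax(A @ (y_counts / y_counts.sum())))
        j = int(np.argmax((x_counts / x_counts.sum()) @ A))
    return i, j

# Toy game: the maximum entry at (1, 1) is a strict pure Nash equilibrium.
# Once both players select it, the empirical frequencies only reinforce it,
# so the action pair never changes afterwards.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
print(fictitious_play(A, 50, i0=0, j0=1))
```

In the paper's construction the analogous absorbing pair is $(n/2, n/2+1)$, but reaching it takes super-exponentially many rounds.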
Concerning the reviewer's major point, we want to remark that our results imply exponential lower bounds even for the $1/t$ case. More precisely, let us assume that an upper bound of the form $p_1(n) + p_2(n)/\epsilon$ could be established. By taking $\epsilon = O(1/n^6)$, Theorem~3.1 applies and thus
$$ p_1(n) + O(n^6 p_2(n)) \geq 4^n ((n/2-2)!)^4$$
meaning that either $p_1(n)$ or $p_2(n)$ is super-exponential. We will add the above discussion in the revised version of our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions.
**Re: Asymptotic lower bounds**
I think something might be missed here. I understand your argument that either $p_1(n)$ or $p_2(n)$ is super-exponential in $n$; what I meant was that an upper bound of the form $p_1(n) + n^2/t^2$ cannot be ruled out by Theorem 1 ($p_1(n)$ being super-exponential in $n$), and that this is not *asymptotically tight* due to existing upper bounds.
---
Reply to Comment 1.1.1:
Comment: Thank you for the clarification. Providing lower bounds that asymptotically match the $O(1/t)$ rates is a great question for future work. | Summary: This paper examines the convergence rate of Fictitious Play (FP) in potential games. The paper proves that FP for potential games can take exponential time to reach a Nash equilibrium. The research has yielded a recursive rule for constructing payoff matrices, demonstrating that fictitious play, regardless of the tie-breaking rule employed, may require exponential time to reach a Nash equilibrium even in two-player identical payoff games. The same theorem holds for general N-player potential games.
Strengths: They have provided a concrete example to illustrate the exponential lower bound with figures, which greatly enhances the clarity and understanding of the paper. Furthermore, I am satisfied with the overall readability of the paper. The writing style is approachable, making it easier for readers to engage with the content. One aspect that I particularly enjoyed is the authors' explanation of the intuition behind constructing counterexamples for learning dynamics. This approach helps readers grasp the underlying concepts more effectively, enhancing the overall learning experience. The proof idea is simple and intuitive.
The paper is well-written, provides clear explanations through intuitive examples, and offers an enjoyable reading experience.
Weaknesses: I have some concerns regarding this paper that I would like the authors to address.
1. In reference [1], it is mentioned that if certain regularity conditions are met, FP dynamics achieve an exponential rate. Can the authors provide more details on these regularity conditions and their implications? For example, why does this game not satisfy the regularity conditions? [1] says that almost every potential game is a regular game.
2. For FP dynamics, this paper proves that finding a NE using fictitious play in potential games has an exponential lower bound. Does this exponential lower bound also hold for stochastic fictitious play? For example, if players have some exploration in each turn, is it possible to break the exponential lower bound?
3. I am curious about the upper bound on the convergence rate in the case where the regularity conditions are not satisfied. I guess that the example this paper gave does not satisfy the regularity condition. Is there a specific upper bound that can be established?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I want to clarify the issues of the weakness section. If this is well-addressed, I am down to re-evaluate this paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitation and potential negative societal impact on their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your work and the valuable comments. We would like to address comments 1 and 3 together, as they are closely related. Additionally, we want to clarify that when referring to [1], we assume the paper Reviewer nmF9 mentioned is titled "On the Exponential Rate of Convergence of Fictitious Play in Potential Games". In comparison to that work, our research presents several important qualitative differences.
We remark that our constructed lower bound *does* satisfy the regularity conditions of [1]. There are two key reasons that [1] and our work are not contradictory. First of all, [1] considers the *continuous-time* version of fictitious play while we consider the *discrete-time* version, so our lower bound is not necessarily applicable. Secondly and most importantly, [1] establishes exponential convergence only in the asymptotic sense. Let us take a closer look at their main theorem.
Theorem 8. Let $\Gamma$ be a regular potential game. Then for almost every initial condition $x_0 \in X$, there exists a constant $c = c(\Gamma, x_0 )$ such that if $x$ is an FP process associated with $\Gamma$ and $x(0)=x_0$ , then $d(x(t), NE) \leq c e^{-t}$.
Notice that $c$ is not a universal constant; in fact, it is not even polynomially bounded by the size of the game. This means that $c$ can have a super-exponential dependence on $n$ - for example $c = O(2^{2^{n}})$ or $c = O(4^n((n/2-2)!)^4)$.
In simpler words, [1] establishes that for *continuous* fictitious play there exists a time step $T^\star$ after which exponential convergence occurs; however, $T^\star$ can be super-exponentially large! The latter is totally aligned with our take-away message for the discrete case.
*Comment 2*
Extending our lower bound construction to stochastic fictitious play is an interesting research direction that, however, remains outside the scope of this work. To this end, we have experimentally studied the convergence properties of stochastic fictitious play on the lower bound construction provided in this work. Our experimental evaluations are included in the uploaded pdf and suggest that a more fine-grained analysis can extend our lower bound to stochastic FP.
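For concreteness, a generic $\epsilon$-greedy variant of fictitious play (a hypothetical sketch of "exploration in each turn"; this is not the authors' exact experimental setup from the uploaded pdf) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_fp(A, T, eps):
    """Fictitious play where, with probability eps, a player explores by
    playing a uniformly random action instead of a best response.
    (Hypothetical sketch; not the authors' exact experimental setup.)"""
    n, m = A.shape
    x_counts = np.ones(n)  # uniform pseudo-counts initialize the beliefs
    y_counts = np.ones(m)
    i = j = 0
    for _ in range(T):
        i = int(np.argmax(A @ (y_counts / y_counts.sum())))
        j = int(np.argmax((x_counts / x_counts.sum()) @ A))
        if rng.random() < eps:  # exploration step for the row player
            i = int(rng.integers(n))
        if rng.random() < eps:  # exploration step for the column player
            j = int(rng.integers(m))
        x_counts[i] += 1
        y_counts[j] += 1
    return i, j
```

With `eps=0` this reduces to ordinary fictitious play; the question raised by the reviewer is whether `eps > 0` can break the super-exponential lower bound on the construction from the paper.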
---
Rebuttal Comment 1.1:
Title: Very interesting
Comment: For both points, I am very satisfied. Thank you for the detailed explanation. Every question that I had is now resolved.
I want to adjust my score from 5 to 7.
---
Reply to Comment 1.1.1:
Title: Score adjusted?
Comment: We thank the reviewer for their answer. May we ask whether the reviewer has adjusted the score, as it still appears to be a 5? Thank you again.
---
Rebuttal 2:
Title: Just came in my mind
Comment: I adjusted the score to 7.
But while thinking about this topic, a question came to my mind: so are there no polynomial algorithms for finding NE in potential games? I think I saw several papers that have policy gradient (or gradient descent), but have some assumptions on the exploration. Or, if we have an assumption about exploration (like many papers assume for the zero-sum game), can we achieve polynomial complexity for the FP algorithm?
---
Rebuttal Comment 2.1:
Title: Other question
Comment: I am writing this because the authors are experts on the fictitious play dynamics domain.
So, one more additional question: is there an alternative convergence rate that has a polynomial dependency on $n$ while having an exponential dependency on $t$?
(Disclaimer: these two questions will not make a lower score even if the authors do not have any explicit and clear answers. Just I am curious about this.)
---
Reply to Comment 2.1.1:
Title: Replying to "Other question"
Comment: To the best of our knowledge, no such convergence rate for FP exists.
---
Rebuttal Comment 2.2:
Comment: Thank you very much for your response and for updating your score.
*Concerning your question*
Finding NE for potential games with a *centralized* algorithm is a computationally easy task (consider the pure strategy profile corresponding to the maximum entry of the game). However, establishing convergence to NE with *uncoupled learning dynamics* is a much more challenging setting. To the best of our knowledge, polynomial convergence rates exist only for GD-based dynamics, e.g. [1]. There are also works establishing asymptotic convergence for MWU in potential games, e.g. [2]. We believe that even under an exploration assumption, FP cannot achieve polynomial complexity for potential games. The latter is also indicated by the experimental evaluations that we provided in our initial response.
[1] Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games, Leonardos et al. 2022
[2] Learning with Bandit Feedback in Potential Games, Heliou et al. 2017
Title: Replying to "Just came in my mind"
Strengths: The construction of $A$ is very clever, and the description of why this guarantees an exponential lower bound does an excellent job of delivering the important aspects without getting bogged down in the arithmetic details.
Fictitious play is an extremely well-studied and widely-used algorithm. The structural simplicity of the constructed example gives an excellent intuition about a class of scenarios in which we should expect FP to perform poorly.
Weaknesses: My only concern about this work is that it might have limited impact; a link to the structural properties of games that FP is commonly applied to might have helped.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. What is $z$ for in the construction of the game? Once $K_n(z)$ has been defined, we only ever think about $A=K_n(0)$ for the rest of the paper.
#### Minor issues (did not affect my evaluation)
- on p.3, $x^\top A(y') - x^\top Ay = \Phi(x,y') - \Phi(x,y)$ should be $x^\top B(y') - x^\top By = \Phi(x,y') - \Phi(x,y)$
- on p.7, "Applying Lemma 3.6 for $\epsilon := \sqrt\epsilon$...": This is needlessly confusing
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your work and your feedback on our paper. We appreciate your valuable comments.
1. Indeed, A should have been B; thank you for pointing it out.
2. The equality sign does indeed get confusing for the reader; we will change it to either $\coloneqq$ or $\rightarrow$ to signify assignment rather than equality.
3. $z$ is just a positive real number defining the matrix $K_n(z)$; see Figure 1a. In Definition 3.2, $K_n(z)$ is recursively defined, and the construction of our game corresponds to $A = K_n(0)$.
Regarding the potential limited impact of our work: Our study contributes to the existing literature by investigating the convergence rate of Fictitious Play in potential games. Fictitious Play (FP) is a fundamental concept in understanding learning dynamics within strategic interactions, originally introduced by Brown in 1951 [1]. While previous work by Robinson [2] established convergence in zero-sum games, it didn't provide a precise rate of convergence (Karlin conjectured that the rate of FP in zero-sum games is of order $O(1/\sqrt{T})$). FP possesses not only an intuitive game theoretic interpretation but also other attractive properties, such as its simplicity with no need for a step-size parameter. These traits have contributed to FP's status as a pivotal topic of study, leading to numerous works aiming to establish convergence in broader settings [8, 9, 10, 11].
Furthermore, it's essential to emphasize that FP, in conjunction with Blackwell's approachability theorem [6], laid the groundwork for the development of no-regret learning algorithms [15, 21], a thriving area of research. Additionally, the stochastic version introduced by Hofbauer and Sandholm [12] inspired the well-established Follow the Perturbed Leader (FTPL) dynamics [13].
The relevance of theoretical investigations into the convergence of FP has been amplified in recent years. Notably, Daskalakis et al. [3] and Abernethy et al. [4] addressed Karlin's conjecture, a long-standing problem dating back to 1959 [5]. Similarly, Monderer et al.'s foundational work [7] demonstrated FP's convergence in potential games, yet without providing a specific rate of convergence. It was this particular gap in knowledge that sparked our investigation. This work builds upon these preceding studies and addresses a specific aspect that remained unexplored.
There are also many works in the literature published in different venues such as ML/AI [20, 22, 23, 24], theoretical computer science [1, 2, 3, 4, 10, 11, 21], and control theory [16, 17, 18, 19]. DeepMind used an algorithm called Prioritized Fictitious Self Play as part of the training for their AlphaStar program for playing competitive Starcraft [14].
In light of this, our research contributes to this lineage of ideas, providing insights into the convergence behavior of Fictitious Play in potential games and augmenting the broader landscape of game theory research. We believe our work has meaningful implications for understanding learning dynamics and strategic interactions, and we hope the reviewer finds our perspective on its potential impact clearer in light of these connections.
[1] Iterative solution of games by fictitious play, Brown et al.
[2] An iterative method of solving a game, Robinson
[3] A Counter-Example to Karlin's Strong Conjecture for Fictitious Play, Daskalakis et al.
[4] Fast Convergence of Fictitious Play for Diagonal Payoff Matrices, Abernethy et al.
[5] Mathematical Methods and Theory in Games, Programming, and Economics, Karlin
[6] An analog of the minimax theorem for vector payoffs, Blackwell
[7] Fictitious Play Property for Games with Identical Interests, Monderer et al.
[8] On the convergence of the learning process in a 2 x 2 non-zero-sum two-person game, Miyasawa et al.
[9] Some topics in two-person games, Shapley et al.
[10] A 2 × 2 game without the fictitious play property, Monderer et al.
[11] On the rate of convergence of fictitious play, Brandt et al.
[12] On the Global Convergence of Stochastic Fictitious Play, Hofbauer et al.
[13] Efficient algorithms for online decision problems, Kalai et al.
[14] Grandmaster level in StarCraft II using multi-agent reinforcement learning, Vinyals et al.
[15] Prediction,learning, and games, Cesa-Bianchi et al.
[16] Joint strategy fictitious play with inertia for potential games, Marden et al.
[17] Forecasting interactive dynamics of pedestrians with fictitious play, Wei-Chiu et al.
[18] Fictitious play in zero-sum stochastic games, Sayin et al.
[19] Approximation guarantees for fictitious play, Conitzer et al.
[20] Smooth Fictitious Play in Stochastic Games with Perturbed Payoffs and Unknown Transitions, Baudin et al.
[21] No-regret dynamics and fictitious play, Viossat et al.
[22] Fictitious play for mean field games: Continuous time analysis and applications, Perrin et al.
[23] Fictitious play and best-response dynamics in identical interest and zero-sum stochastic games, Baudin et al.
[24] Provably efficient fictitious play policy optimization for zero-sum Markov games with structured transitions, Qiu et al.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! I'm convinced that this work is an important contribution to the literature, and I will update my rating accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response and for increasing your score. | Summary: This paper studies the convergence rate of Fictitious Play (FP) in potential games. While prior work shows that FP asymptotically converges to a NE in the case of $N$-player potential games, the current paper proves that FP can take exponential time to do so. Specifically, the paper recursively constructs a two-player coordination game with identical interests and a unique pure NE. The paper shows that FP requires super-exponential time before placing positive probability in the unique NE, even with an arbitrary tie-breaking rule. The paper also contains simulations to validate the shown learning dynamics.
Strengths: 1. The paper provides a complete answer to an important problem (convergence rate of FP in potential games).
2. The paper is very well-written and easy to follow. Related works are discussed extensively. Technical results are delivered in a clear way.
3. The paper also provides simulations to validate findings and provide intuition.
Weaknesses: 1. It might be better to include more background information on FP and potential games, e.g., some concrete examples of potential games in practice.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Line 58 - 60: I believe [1] is only for diagonal games?
Line 100 - 101: The latter A should be B.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for all their work. We sincerely appreciate the reviewer's valuable comments. Regarding the reviewer's questions/concerns:
1. $[1]$ is for diagonal zero-sum games and moreover the tie-breaking rule is fixed in advance (not adversarial as in [2]).
2. Indeed, $A$ should have been $B$; thank you for pointing it out.
3. We will incorporate an example of a potential game in the preliminaries section and expand our references. Potential games are notably important in capturing routing games (also known as congestion games [3,4]).
[1] Fast Convergence of Fictitious Play for Diagonal Payoff Matrices, Abernethy et al.
[2] A Counter-Example to Karlin's Strong Conjecture for Fictitious Play, Daskalakis et al.
[3] A class of games possessing pure-strategy Nash equilibria, Rosenthal.
[4] Potential games, Monderer et al.
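To make the congestion-game connection concrete, the following sketch (our own toy example, not taken from the paper) verifies Rosenthal's exact-potential property on a two-player, two-road routing game:

```python
import itertools

# Two players each pick one of two roads; delay grows with the road's load.
cost = {"A": [1, 4], "B": [2, 3]}  # cost[r][l-1] = delay of road r under load l

def loads(profile):
    return {r: sum(1 for s in profile if s == r) for r in cost}

def player_cost(profile, p):
    r = profile[p]
    return cost[r][loads(profile)[r] - 1]

def rosenthal_potential(profile):
    # Phi(s) = sum over roads r of sum_{l=1}^{load_r(s)} c_r(l)
    return sum(sum(cost[r][:l]) for r, l in loads(profile).items())

# Defining identity of an exact potential game: a unilateral deviation
# changes the deviator's cost exactly as much as it changes the potential.
for profile in itertools.product("AB", repeat=2):
    for p in range(2):
        for r in "AB":
            dev = list(profile)
            dev[p] = r
            assert (player_cost(tuple(dev), p) - player_cost(profile, p)
                    == rosenthal_potential(tuple(dev)) - rosenthal_potential(profile))
print("exact potential property verified")
```

This is exactly the structure [3, 4] exploit: pure NE exist because local minima of the potential are equilibria.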
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I will keep my positive score and vote for acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you again for all your work and the positive feedback on our paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their hard work. We have attached a pdf with experimental evaluations on the stochastic version of fictitious play, to address a question asked by reviewer nmF9 (what happens in our constructed game if each agent has some exploration). The experiments suggest that a more fine-grained analysis can extend our lower bound to stochastic FP.
Pdf: /pdf/9ce8360fa66537531b12a30aed26558a6bf8d740.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond | Accept (poster) | Summary: This paper characterizes the generalization of neural networks, and prove efficient sample complexity guarantees in the mean-field region with the presence of feature learning. It presents a general framework to establish sample complexity of MFLD for binary classification problem. Such framework can be used to obtain generalization guarantee for the learned neural network in discrete-time and finite-width settings. When applied to the k-sparse parity problem, the proposed framework yields an improved convergence rate and dimension dependence.
Strengths:
The technique used in this analysis is quite interesting and unique. Unlike existing margin bounds for neural networks that rely on norm control, this study takes a different approach by considering that MFLD optimizes the distribution of parameters rather than the parameters themselves. This perspective allows for an improved analysis of the sample complexity and convergence rate.
Weaknesses: 1. This study focuses on the binary classification problem. It could certainly be a good starting point, but at the same time, it is hard to argue that results for the binary classification problem are very significant.
2. The authors focus on k-sparse parity problem (2-sparse in particular) as an illustrative example throughout the paper. While I understand the 2-sparse parity problem is a well studied problem with a rich body of literature, I still do not quite understand the significance or the motivation of studying this particular problem.
3. The width and number of iterations required in this analysis are of order $O(e^d)$, which I think is far from reasonable in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the difference between the last two rows in Table 1? Why do they have the identical region/method, width, and number of iterations, but different classification error?
2. The work done by Telgarsky (2023) requires width of $O(2^d)$ not $O(d^d)$ right?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe the authors have a sufficient discussion on the limitations.
A couple of suggestions:
- $d$ is used several times in the introduction but never properly defined in this section (it is defined in Table 1, though).
- In Assumption 2, the last inequality $L(\mu^*) \leq \log(0)-c_0$: I believe $\log(0)$ here is a typo. The same typo also appears in Proposition 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments.
**Q:** *While I understand the 2-sparse parity problem is a well studied problem with a rich body of literature, I still do not quite understand the significance or the motivation of studying this particular problem.*
**A:** The parity setting is a canonical example of learning a low-dimensional signal (since $k\ll d$) from high-dimensional data. The low-dimensionality of target function may be exploited by *adaptive* learning procedures (as opposed to fixed bases models such as kernel methods), and thus this function class has been studied in recent works to demonstrate the advantage of representation learning in neural networks (e.g., see Wei et al. 2019).
In addition, the $k$-sparse parity function corresponds to the Fourier bases on the discrete lattice. Hence, analyzing this problem can give some insight to the understanding of function estimation in the Euclidean space with orthogonal basis expansions such as Hermite polynomial and Fourier basis.
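A minimal sketch of sampling from the $k$-sparse parity distribution (our own illustration; the function name and convention of using the first $k$ coordinates are assumptions):

```python
import numpy as np

def sparse_parity_data(n, d, k, seed=0):
    """Sample n inputs uniformly from {-1, +1}^d with label
    y = x_1 * ... * x_k (the k-sparse parity function)."""
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(n, d))
    y = np.prod(X[:, :k], axis=1)
    return X, y

X, y = sparse_parity_data(1000, d=20, k=2)
# Each single coordinate is uncorrelated with y (E[x_i * y] = 0), so no fixed
# linear feature on the raw inputs is predictive; the signal lives on the
# k relevant coordinates jointly, which is why adaptive (feature-learning)
# methods can exploit the low-dimensional structure.
```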
**Q:** *The width and number of iterations required in this analysis is of $O(e^d)$*
**A:** Indeed this is one main drawback of the current mean-field analysis. This being said, as shown in Table 1, our result already represents a noticeable improvement over existing mean-field analyses, in terms of both the width (prior results required $N=\mathcal{O}(d^d)$) and the iteration complexity (prior works assumed $t\to\infty$), even though our results can handle the more general $k$-parity setting.
Therefore, we believe that our analysis serves as an important step toward more efficient (quantitative) learning guarantees for mean-field neural networks.
**Q:** *What is the difference of the last two rows in Table 1?*
**A:** The two rows correspond to different sample size regimes $n \gg d^2$ and $n \gg d$; in other words, they can be put together into one row where the classification error becomes $\min\\{d/n,\exp(-O(\sqrt{n}/d))\\}$.
**Q:** *The work done by Telgarsky (2023) requires width of $O(2^d) $ not $O(d^d)$ right?*
**A:** The required width in the table was taken from Table 1 of the arXiv version of Telgarsky (2023).
However, we noticed that it has been modified to $O(d^{d/2})$ in its ICLR camera-ready version.
According to that version, it should be $O(d^{d/2})$ instead of $O(2^d)$.
**Q:** *Typos and suggestions.*
**A:** Thank you very much for your suggestions. We will modify our manuscript according to your suggestions.
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. After carefully considering the rebuttal and the other reviews, I have chosen to keep my score. | Summary: This paper considers the problem of learning the k-sparse parity problem with a two-layer network in the mean-field regime. The main results are twofold: 1. the authors propose an annealing method to obtain a better rate of convergence. 2. the authors compute the classification error by computing the local Rademacher complexity.
Strengths: 1. The problem of learning a k-sparse parity function in a mean-field regime seems to be interesting.
2. Proposition 4, which gives a characterization of the margin at the stationary distribution seems to be novel and interesting to me.
3. The annealing method proposed in this paper seems interesting; it gives a way to obtain a better convergence rate by bounding the log-Sobolev constant properly.
4. This paper is in general well-written, the main theoretical results are stated clearly, and I feel it is rather easy to understand the main results of this paper.
Weaknesses: I haven't observed an obvious weakness in this paper, but I do have a few questions and comments, please refer to the "Questions" part.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Regarding the annealing methods: If I understand the annealing methods correctly, the intuition is that under assumption 3, the log Sobolev constant is actually controlled by the loss, and since the loss is decreasing along the trajectory, the log Sobolev constant will be larger than the trivial bounds in ( Nitanda et al., 2022; Chizat, 2022). Thus it seems to me that annealing is added for the purpose of controlling the loss properly, rather than fundamentally speeding up the dynamics (in fact, annealing does not improve the convergence rate of the MFLD in general, see e.g. Chizat, 2022 in your reference ). Also, I'm wondering if it is possible to obtain a faster convergence through different annealing methods.?
2. In Proposition 4, you characterize the margin of the stationary distribution with small enough regularization. I'm wondering if this property still holds in the limit $\lambda \rightarrow 0$?
3. Generality of the results: I'm wondering if it's possible to generalize the result to any bounded smooth activation functions, especially propositions 3 and 4, and Theorem 1 and 2?
Minor points and typos:
1. Line 169: $\log(0) \rightarrow \ell(0)$
2. Line 203: $\lambda^{(k)} \rightarrow \lambda^{(\kappa)}$
3. In the statement of Theorem 1, you have an assumption $Q: = ... > 0$. Given a specific setting (e.g. for logistic loss and taking the parameters in propositions 3 and 4), is this assumption verified?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments. We address the technical points below.
**Q:** *It seems to me that annealing is added for the purpose of controlling the loss properly, rather than fundamentally speeding up the dynamics (in fact, annealing does not improve the convergence rate of the MFLD in general, see e.g. Chizat, 2022 in your reference). Also, I'm wondering if it is possible to obtain a faster convergence through different annealing methods?*
**A:** Recall that the log-Sobolev constant depends strongly on the value of the empirical loss, and we expect an exponential dependency on the sample size $n$ in the LSI constant without proper annealing. We note that the regularization coefficient $\lambda$ (a.k.a. temperature) should be suitably small to achieve perfect classification, which exponentially deteriorates the convergence rate without annealing.
Our main insight is that by controlling the temperature parameter carefully, we can avoid such an exponential dependency.
This property is special to the logistic loss (its derivative is bounded by its absolute value) and does not hold in general settings.
In contrast, Chizat (2022) did not take this point into account, and consequently the computational complexity is worse than ours -- we will include a more detailed discussion in the revision.
There is definitely a possibility that a better convergence rate can be obtained by a different annealing method, which we leave as future work.
**Q:** *In Proposition 4, you characterize the margin of the stationary distribution, with small enough regularization. I'm wondering if this property still holds in the limit of $\lambda \to 0$?*
**A:** This is a good point. We believe that it holds also for $\lambda=0$.
This requires a minor modification of our proof, because the current analysis relies on the uniqueness of the optimal solution $\mu^*$, which does not hold for $\lambda = 0$. However, this point can be overcome by showing uniqueness of the "function value" $f_{\mu^*}(x)$ for optimal solutions instead of uniqueness of the optimal measure $\mu^*$, which can be shown by the strong convexity of the logistic loss.
**Q:** *Generality of the results: I'm wondering if it's possible to generalize the result to any bounded smooth activation functions, especially propositions 3 and 4, and Theorem 1 and 2?*
**A:** These statements may not be generally true because the activation function should have sufficiently large expressive power.
This being said, we expect similar results for a wide range of activation functions as long as they have a symmetry such as $h_x = -h_{-x}$,
even though our proof utilizes the explicit form of the $\tanh$ activation.
**Q:** *Minor points and typos*
**A:** Thank you so much for pointing them out. We will correct them in the revision.
**Q:** *Condition on $Q$*
**A:** Yes, this can be verified easily. Essentially, this condition asserts that "the sample size $n$ is sufficiently large," because the parameters $c_0$ and $\bar{R}$ can be seen as constants.
More specifically, if $n \gg (\lambda^{(K)})^{-2}$, then the condition holds.
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
---
Rebuttal Comment 1.1:
Title: Keep my scores as is
Comment: I thank the authors for the detailed explanation of my questions and concerns.
After reading the rebuttals, I believe this paper is technically solid, and the results are interesting. The main limitation in my opinion is the generality of the techniques, since it seems to me that the proofs rely on the properties of the logistic loss and a specific activation function. However, since this paper considers a very specific problem (learning a sparse parity function) using a very specific architecture, the above-mentioned limitations are not issues for me. Besides, the results from this paper are interesting to me.
In conclusion, I think this is a technically solid paper providing interesting results despite a few limitations, thus I will keep my scores as is.
Strengths: (Note that I am not very familiar with the details of the subject matter and also related work)
- Analytical generalization bounds may be of interest to the theoretical community especially given the recent interest seen with using subset parity as a dataset in analysis
- The paper does a nice job of comparing existing and their classification error bounds for k-sparse (subset) parity problem to highlight their contribution. The fact that their bound decouples error from k is notable to this reader
- This reviewer appreciates the experimental results in the paper. The figure summarizes the analytical findings included in Section 4.
Weaknesses: (Note that I am not very familiar with the details of the subject matter and also related work)
- The paper appears to spend a lot of real estate on preliminaries at the cost of sacrificing clarity in the presentation of the main result. In Section 4.1, I do not quite follow how the choice of regularization parameter reduces complexity
- While analysis does decouple the dependence on k in the k-sparse parity task, this appears to be at the cost of the network width and the number of optimization steps which are both exponential compared to the work by Barak et al. (NeurIPS 2022). I am left wondering about the usefulness of this result to the community
- (minor) I glanced at the code for the experiments and noticed that m = 2000. I assume this is the width value used in the paper. The paper states a value of 10_000, so I want to confirm whether the shared code has a typo. It would be nice if the code were updated to reflect the exact value used in the paper, for clarity
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please check the weaknesses section
- Please consider updating Section 4 to make new work clear by perhaps moving some preliminaries to the supplement
- Please update code to ensure the results included in the paper can be reproduced without any change(s)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors do a very nice job of noting weaknesses in their discussion. No more changes necessary
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reading our paper and giving insightful comments.
**Q:** *The paper appears to spend a lot of real estate on preliminaries.*
**A:** Thank you for the suggestion. Our goal is to make the main text as self-contained as possible; hence we introduced the basics of the mean-field Langevin dynamics (distribution dependent SDE, log-Sobolev inequality, and so on) before presenting the main result. In the revision, we will improve the presentation and reorganize the main text.
**Q:** *I do not quite follow how the choice of regularization parameter reduces complexity.*
**A:** As seen in Proposition 2, the LSI constant $\alpha$ controls the convergence rate. This rate can deteriorate exponentially when aiming for perfect classification, since the coefficient $\lambda$ appearing in Eq. (5) must be sufficiently small depending on the number of examples $n$. However, noting that the exponent of Eq. (5) is $-B/\lambda$, which is essentially the ratio of the loss to $\lambda$, this exponential deterioration can be resolved by annealing, which controls the term $-B/\lambda$.
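To make the role of the annealing schedule concrete, a schematic version of the standard mean-field Langevin convergence bound reads (the symbols below are illustrative placeholders consistent with the discussion here, not the paper's exact constants):

$$\mathcal{L}(\mu_t) - \mathcal{L}(\mu^*) \lesssim e^{-2\alpha\lambda t}\left(\mathcal{L}(\mu_0) - \mathcal{L}(\mu^*)\right), \qquad \alpha \gtrsim e^{-B/\lambda}.$$

With a fixed small $\lambda$, the effective rate $\alpha\lambda$ is exponentially small in $1/\lambda$. Annealing instead decreases $\lambda^{(1)} > \lambda^{(2)} > \cdots$ only after the loss term $B$ has been driven down, so the ratio $B/\lambda^{(\kappa)}$, and hence the exponent, stays controlled at every stage.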
**Q:** *Comparison to Barak et al. (NeurIPS 2022).*
**A:** First, we note that our bound gives much better sample complexity than Barak et al. (2022) -- their bound on the classification error is $O(d^{(k+1)/2}/\sqrt{n})$ while ours is $O(d/n)$. While we achieve such an improved sample complexity using large width, this finding suggests an interesting tradeoff between computational cost and statistical complexity.
Moreover, although the large width is one main drawback of the current mean-field analysis, as shown in Table 1, our result already represents a noticeable improvement over existing mean-field analyses, in terms of both the width (prior results required $N=\mathcal{O}(d^d)$) and the iteration complexity (prior works assumed $t\to\infty$).
Therefore, we believe that our analysis serves as an important step toward more efficient (quantitative) learning guarantees for mean-field neural networks.
**Q (minor):** *I glanced at the code for the experiments and noticed that $m = 2,000$.*
**A:** Thank you for the close reading. This is because we also conducted experiments for $k=3,4$ parity problems in the Appendix in which we employed $m=2,000$ and we uploaded the code for that setting. Although only a few lines differ for each experiment, we will include all three experiments in the new supplementary file.
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their point-by-point rebuttals to my (perhaps superficial) concerns. I have also carefully looked over the reviews provided by other reviewers.
I have increased my score to 6 in light of the rebuttal + other reviews as they have helped me gain a better understanding of the paper. | Summary: This paper proves optimization and generalization guarantees for MFLD for the $k$-sparse parity problem. It proves that if the network is sufficiently overparameterized (width > $e^{\Omega(d)}$) then $n = d$ samples suffice, which is independent of $k$. Furthermore it proves an exponential convergence rate when $n \ge d^2$.
Strengths: - The paper is able to prove concrete Rademacher-based generalization guarantees for MFLD, including explicit $\epsilon$ dependencies in different regimes
Also, I think the comparisons with Chen and Meka (2020) are unnecessary as that paper focuses on the case when the covariates are Gaussian which is a very different problem.
Weaknesses: - This paper does not consider label noise which would likely break the exponential convergence rate in the $n \ge d^2$ regime. It is not clear to me whether the techniques would extend to this setting.
- Like many mean field papers, discretizing the dynamics requires exponential width (at least $e^{\Omega(d)}$). The tradeoff is therefore width $O(1)$ and sample complexity $O(d^k)$ vs width $e^{O(d)}$ and sample complexity $d$. However, this problem is not unique to this paper and appears fundamental to a lot of mean field analysis.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How difficult would it be to adapt the techniques in this paper to the case of label noise (i.e. flip each label with probability $p < 1/2$)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive comments. We address the technical comments below.
**Q:** *How difficult would it be to adapt the techniques in this paper to the case of label noise (i.e. flip each label with probability $p < 1/2$)?*
**A:** We believe that extending our result to a situation with label flipping is not difficult, and almost the same results would hold (i.e., exponential convergence of the classification error for $d^2 < n$ and $d/n$ classification error for $d < n$).
However, we need to take care of the non-monotonicity of the conditional expectation of the loss $f \mapsto \mathbb{E}_Y[\ell(Yf)|X]$ when there is label noise. This introduces additional technical difficulty, but we think it can be resolved by using a small $\bar{R}$.
Indeed, the exponential convergence of the classification error is shown by [R1] for $p < 1/2$ label flipping in kernel classification. The essential point of the proof is to show the $L^\infty$-convergence of the classifier, which is what we have shown in our analysis. Hence, we believe that our analysis would be a good starting point to explore more difficult and realistic settings.
[R1] Koltchinskii, V., Beznosova, O. (2005). Exponential Convergence Rates in Classification. In: Auer, P., Meir, R. (eds) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol 3559, pp. 295--307.
**Q:** *Exponentially large width.*
**A:** Indeed this is one main drawback of the current mean-field analysis. This being said, as shown in Table 1, our result already represents a noticeable improvement over existing mean-field analyses, in terms of both the width (prior results required $N=\mathcal{O}(d^d)$) and the iteration complexity (prior works assumed $t\to\infty$), even though our results can handle the more general $k$-parity setting.
Therefore, we believe that our analysis serves as an important step toward more efficient (quantitative) learning guarantees for mean-field neural networks.
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I have decided to keep my score. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models | Accept (poster) | Summary: This paper comprehensively evaluates three efficient training algorithms for transformer language models: layer stacking, layer dropping, and selective backpropagation. Such an evaluation is crucial due to the extensive computational resources needed for training transformer-based models.
The authors highlight the importance of specifying a training budget in performance comparisons. They introduce a measurement called "reference system time" (RST) to facilitate comparisons across different hardware configurations. The study examines these algorithms under various training budgets, model architectures, and datasets. The main finding is that these efficient training methods do not always significantly improve over the standard training baseline. Layer stacking tends to outperform the baseline and other methods in most instances. However, layer dropping sometimes falls short, while selective backpropagation often diminishes performance.
The paper also explores how these algorithms perform in a downstream task setting, showing that pre-training performance does not necessarily correlate with the ability to generalize in these tasks. It concludes by stressing the importance of caution when using these efficient training methods due to their potential overheads. Further implications of this study can guide the choice of efficient training methods in transformer-based language models by considering the trade-off between improved performance and computational resources.
Strengths: 1. Originality: The paper's focus on evaluating efficient training algorithms for transformer language models under a defined training budget is a novel perspective. The introduction of "reference system time" to compare training times across different hardware configurations shows ingenuity.
2. Quality: The paper exhibits high quality in its methodology, experimental setup, and analysis. The experiments are thorough and well-conducted, involving different parameters like training budgets, model architectures, and datasets. The findings are meticulously analyzed, providing an honest assessment of the capabilities and limitations of the methods.
3. Clarity: The paper is well-written and logically organized. The authors clearly explain the motivations behind their work, the reason for choosing specific efficient training algorithms, and their experimental methodology. They discuss their findings in an understandable and straightforward manner.
4. Significance: The paper notably contributes to training language models. Its focus on efficient training methods and the introduction of RST offer practical solutions to the computational challenges researchers face. This research opens the door for more studies on efficient training of such models. The paper's findings and suggestions are invaluable for researchers and will aid in making informed decisions when implementing efficient training strategies.
Weaknesses: 1. Limited Scope: The evaluation in the paper is limited to three specific efficient training algorithms (layer stacking, layer dropping, and selective backpropagation).
2. Limited Novelty: This paper mainly focuses on revisiting methods in a specific context. There are not many novel ideas or groundbreaking discoveries introduced in the paper. Studies expected to be published in top-tier conferences like NeurIPS should ideally contain significant novel contributions.
3. Lack of Rigor in Experimental Design: While the authors propose using Reference System Time (RST), they do not sufficiently justify its superiority over other potential measures of computational effort.
4. Overemphasis on Computational Efficiency: The authors focused on computational efficiency, sidelining the quality or performance of the models. This imbalance in focus can give a skewed perspective.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It would be beneficial to understand how the proposed Reference System Time (RST) handle variations in software optimization levels for different algorithms, which might influence the perceived efficiency of other methods.
2. For the RST measure, how much does the reference system time vary across different systems and configurations?
3. The paper asserts that the algorithms provided 'little effect' or 'marginal improvements'. Can you clarify what you consider to be substantial or significant improvements in efficiency and why the results obtained didn't meet that criteria?
4. In some comparison studies with baseline training, it's unclear whether the hyperparameters for the compared methods and the baselines were tuned independently. It would be beneficial if the authors clarified this aspect. Moreover, further details regarding the methodology behind establishing the "budget-adjusted baseline" would be appreciated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have clearly demonstrated some limitations in their paper, specifically highlighting that their study focuses on a select few efficient training algorithms and acknowledging that further exploration of additional algorithms is required. Furthermore, the authors admit that their review is primarily centered on language model pre-training and that the findings may not be necessarily applicable to fine-tuning or other data-intensive modalities.
However, the authors could have expanded on potential negative societal impacts. For instance, if these inefficient training algorithms are utilized for large-scale projects, it could lead to unnecessary usage of computing resources, subsequently affecting energy consumption and carbon emissions. Additionally, the misrepresentation of algorithm efficiency could potentially mislead the academic and industrial communities in their pursuit of efficient models.
To improve the paper, it would be beneficial for the authors to provide more context on possible wider implications, particularly focusing on the ecological impact and potential misdirection of research efforts. They might also consider suggesting some directions for future work in improving the efficiency of training algorithms, both in terms of reducing computational costs and minimizing environmental impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > _1. Limited Scope_
Thanks. Based on this feedback we have added three more recent efficient training algorithms. Please see the global response.
> _2. Limited Novelty_
Thank you for bringing this up. We argue that the observation that none of the popular recently-proposed efficient training algorithms improve upon a learning-rate-decayed baseline is actually ground-breaking for the efficient training field. Papers that make observations like this have been hugely impactful, for instance
* [Are GANs Created Equal? A Large-Scale Study](https://proceedings.neurips.cc/paper/2018/file/e46de7e1bcaaced9a54f1e9d0d2f800d-Paper.pdf) discovered that most GAN architectures reach similar scores with enough hyperparameter optimization and random restarts
* [In Search of Lost Domain Generalization](https://openreview.net/pdf?id=lQdXeXDoWtI) finds that the trivial baseline (ERM) outperformed most DG algorithms when evaluated under the same experimental conditions
* [You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings](https://openreview.net/forum?id=BkxSmlBFvr) discovered that old techniques are competitive and often even outperform recent model architectures when the hyper-parameters are tuned correctly
* [Rethinking the Value of Network Pruning](https://arxiv.org/abs/1810.05270), found that fine-tuning pruned models obtained by state-of-the-art structured pruning algorithms only give comparable or worse performance than training that model with randomly initialized weights
* [Pitfalls of Graph Neural Network Evaluation](https://arxiv.org/abs/1811.05868), uncovered serious shortcomings in existing evaluation strategies for GNNs
We actually started out trying to develop a new efficient training algorithm but then found that when we went to compare existing methods using a unified timing measure, reference system time (RST), none of the methods outperformed the simple baseline. Our shock at this discovery is what motivated us to publish this work. Thank you again for mentioning this, we will make this clearer in the final version.
> _3. Lack of Rigor in Experimental Design_
Thanks. Let us clarify the benefits of RST over the other most popular measures of compute: WCT and FLOPs. WCT can vary widely across different hardware, and can even fluctuate on the same hardware, for instance, due to the usage of non-deterministic operations, hidden background processes, or inconsequential configurations, such as the clock rate. We show an example of this in Figure 1, where we performed the same baseline training run across different hardware (a 3090 and an A100) and the same hardware but different software (different CUDA drivers on a 3090). FLOPs do not account for parallelism (e.g., RNNs vs Transformers) or hardware-related details that can affect runtime. For example, FLOP counting ignores memory access times (e.g., due to different layouts of the data in memory) and communication overheads, among other things.
To address these things, RST ties all computation to a specific reference hardware. We average RST measurements over 1000 iterations (for each model architecture, batch, sequence length, system configuration, etc.), to ensure they are not influenced by fluctuations. In our code release, we will provide all RSTs measured by our hardware. This will enable researchers in the future to benchmark new algorithms using exactly our setup without owning the same hardware. Sorry this was not clearer, we will clarify this in the final version.
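To illustrate how RST composes in practice, here is a minimal sketch (the function, configuration keys, and timing numbers are our own hypothetical examples, not the actual benchmarking harness): total compute is the sum, over configurations, of the number of iterations executed times the per-iteration time measured once on the reference system.

```python
def reference_system_time(iteration_counts, reference_times):
    """Total compute in RST: each iteration is charged the per-iteration
    time of its configuration as measured on the fixed reference system,
    regardless of which machine actually ran it."""
    return sum(n * reference_times[cfg] for cfg, n in iteration_counts.items())

# Hypothetical per-iteration timings (seconds), averaged over many
# iterations on the reference GPU, keyed by (algorithm, model, batch size).
ref = {("baseline", "BERT", 256): 0.5, ("stacking", "BERT-half", 256): 0.25}

# A layer-stacking run: 1000 iterations on the small model, then 1000 full.
counts = {("stacking", "BERT-half", 256): 1000, ("baseline", "BERT", 256): 1000}
print(reference_system_time(counts, ref))  # 750.0 RST-seconds
```

Because the timings are pinned to one system, the same run receives the same RST no matter where it was executed.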
> _4. Overemphasis on Computational Efficiency_
We added training, validation, and downstream performance results in the rebuttal PDF. If there are any additional quality or performance metrics you would like to see we would be happy to add them.
> Q1.
Thanks for this question. We were curious about such fluctuations as well, to prevent them from influencing our results, we average RST measurements over 1000 iterations (for each algorithm, model architecture, batch, sequence length, system configuration, etc.). Thank you for bringing this up, we will clarify this in the final version.
> Q2
To see how much the timing on a reference system would change if we selected different systems as the reference, we can look at the WCT of individual systems (imagining each was the reference system). This is shown in the top set of WCT bars in Figure 1. In theory, any of the three systems could be the reference system, the bottom set of RST bars show what happens when the last system (the 3090, shown in dark red) is chosen as the reference system, and all other systems are mapped to the timing of this system. This ensures that the same baseline run is timed identically, regardless of the system it was run on.
> Q3
Thanks for this. By a "significant improvement" we mean that, for a fixed RST budget, the average (training, validation, downstream) performance of an efficient training method must be greater than or equal to the average performance of the baseline plus a standard deviation (i.e., in Tables 1 and 2 we bolded the best method, and all methods whose average plus a standard deviation is greater than or equal to the average of the best method). We will add these details to the final version.
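The bolding criterion above can be written down directly (a hypothetical one-line sketch in our own notation, not the paper's code):

```python
def is_significant(method_mean, method_std, best_mean):
    """A method ties the best one if its mean performance plus one
    standard deviation reaches the best method's mean (fixed RST budget)."""
    return method_mean + method_std >= best_mean

# Example: with the best method at 82.0, a method at 81.4 +/- 0.7 is
# still bolded, while one at 80.0 +/- 0.5 is not.
print(is_significant(81.4, 0.7, 82.0))  # True
print(is_significant(80.0, 0.5, 82.0))  # False
```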
> Q4
Thanks for this. In all experiments, the hyperparameters for each method were tuned independently. We added this to the supplement, but we can move it to the main paper if you think this would help. We will also add more details on our budget-adjusted baseline: the idea is that we train the full model (unlike layer stacking/dropping) using standard data loaders (unlike selective backprop/Rho-loss) using Adam (instead of Lion or Sophia). Further, we use a one-cycle learning rate schedule and adjust the learning rate schedule based on the elapsed time conditional on a time budget (measured in RST). The implementation for this is simple and will be included in our code release.
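To sketch what a budget-adjusted one-cycle schedule driven by elapsed RST might look like (the warmup fraction and cosine decay below are our assumptions for illustration, not necessarily the exact implementation in the code release):

```python
import math

def one_cycle_lr(elapsed_rst, budget_rst, peak_lr, warmup_frac=0.1):
    """One-cycle schedule keyed to consumed reference system time:
    linear warmup over the first warmup_frac of the budget, then
    cosine decay to zero as the budget is exhausted."""
    t = min(elapsed_rst / budget_rst, 1.0)  # fraction of budget consumed
    if t < warmup_frac:
        return peak_lr * t / warmup_frac
    decay = (t - warmup_frac) / (1.0 - warmup_frac)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * decay))

print(one_cycle_lr(0.0, 100.0, 1e-3))    # 0.0   (start of warmup)
print(one_cycle_lr(10.0, 100.0, 1e-3))   # 0.001 (peak at end of warmup)
print(one_cycle_lr(100.0, 100.0, 1e-3))  # 0.0   (budget exhausted)
```

Because the schedule depends only on the fraction of the RST budget consumed, the same run decays identically across different hardware.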
---
Rebuttal Comment 1.1:
Title: Thank you, concerns have been properly addressed.
Comment: The authors have addressed my concerns in a satisfactory manner. They have added three more recent efficient training algorithms to broaden the scope of the evaluation. They have also emphasized the ground-breaking nature of their observation that none of the popular recently proposed efficient training algorithms improves upon a learning-rate-decayed baseline. The authors have provided further clarification on the benefits of using Reference System Time (RST) as a measure of computational effort, addressing the concerns about fluctuations and variations in software optimization levels. They have also explained their definition of "significant improvement" and mentioned that the hyperparameters for each method were tuned independently. Additionally, the authors have provided more details on the methodology behind the "budget-adjusted baseline" and its implementation. Overall, my concerns have been properly addressed. | Summary: Many algorithms have been proposed to make the training of ever larger models more efficient.
The authors present a critical empirical study of three selected training algorithms (layer stacking, layer dropping, and selective backpropagation) with fixed training budgets and find that these algorithms often do not make BERT and T5 pre-training significantly more efficient.
Strengths: This paper re-visits evaluation standards, and offers a careful, rigorous, and clean re-evaluation of three efficient training algorithms.
It is an excellent example for the importance of also publishing "negative" results (no gains in metrics, no new algorithm).
Specifically, this finding can save development time in the implementation and maintenance of these algorithms and reduce the complexity of the pre-training algorithms.
Weaknesses: I agree with the limitations pointed out by the authors in the section "Limitations and Future Work", including the evaluation of only a small subset of efficient training algorithms, and language model pre-training only.
Overall, the paper might be felt to be too simple and straightforward, the more so as it addresses a well-known issue in a noisy field.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I agree with the authors that "As illustrated in Section 5, there is an abundance of efficient training algorithms, and rigorously evaluating all of them is prohibitively expensive." (Section 6). What would be an alternative approach to avoid such explosion of algorithms and experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, see the section "Limitations and Future Work".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I agree with the limitations pointed out by the authors in the section "Limitations and Future Work", including the evaluation of only a small subset of efficient training algorithms, and language model pre-training only. Overall, the paper might be felt to be too simple and straightforward, the more so as it addresses a well-known issue in a noisy field.
Thank you for this. Our motivation for evaluating a subset of efficient training algorithms on language models only was to allow us to spend more careful detail on an in-depth analysis of these methods. We believe the field of efficient training methods for language models will grow rapidly in the coming years, as the most successful models can currently only be trained by a handful of entities.
As far as we know, we haven’t seen any paper comparing the main approaches in this field. Our goal was to fill this gap to help researchers and practitioners understand which approaches deserve more attention and to spur development in these directions. We were surprised to discover that, in large part, current efficient training algorithms do not surpass learning-rate-decayed baseline training. We believe these observations could act as a turning-point for the efficient training field and prompt investigation into new directions entirely. Thank you for bringing this up; we will add more discussion on this in the final version.
> I agree with the authors that "As illustrated in Section 5, there is an abundance of efficient training algorithms, and rigorously evaluating all of them is prohibitively expensive." (Section 6). What would be an alternative approach to avoid such explosion of algorithms and experiments?
This is an interesting open question. The approach we took is to evaluate representative algorithms on popular architectures, using well-studied tasks. We believe our approach is particularly useful as the field continues to grow because, by using our published RST timings, it will allow future algorithms, architectures, and tasks to be evaluated against our results, regardless of the hardware setup.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I largely agree with iTEM's "Thanks for your response" comment. | Summary: The paper reevaluates several training algorithms aimed at enhancing the efficiency of Transformer-based models, such as layer stacking, layer dropping, and selective backpropagation. The authors effectively manage the training resources by employing a metric called reference system time. However, the key result of the study indicates that these three efficient training algorithms yield only modest improvements compared to standard training methods.
Strengths: - The authors conduct a thorough evaluation of efficient training algorithms, providing valuable insights into their effectiveness and highlighting the marginal gains such methods achieve. This revisit contributes to a deeper understanding of existing methods and offers a practical perspective on their applicability.
- The authors point out the shortcomings of using wall-clock time for reference, and instead propose reference system time (RST) for a better estimate, which is a valuable practice to highlight.
Weaknesses: - While revisiting these methods undoubtedly holds value for the research community, it is important to note that the obtained results may be somewhat trivial and lack significant insights. The paper might be better suited for more specialized venues.
- While the primary focus of the study lies in evaluating the original approaches, providing additional discussions on methods and evaluations that build upon these evaluated techniques would offer a more comprehensive understanding of how these methods have been adopted and evolved since their initial proposal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do the results of this study depend on the size of the models? Can we anticipate that the effectiveness of these efficient training techniques would be more diminished or enhanced when training larger models with a greater compute budget?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In Section 6, the authors acknowledge the limitations of their work and highlight that it is not feasible to evaluate all types of efficient training algorithms due to the potentially exorbitant costs involved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > While revisiting these methods undoubtedly holds value for the research community, it is important to note that the obtained results may be somewhat trivial and lack significant insights. The paper might be better suited for more specialized venues.
Thanks for this. We have added three more recent efficient training algorithms. Please see the global rebuttal response.
> While the primary focus of the study lies in evaluating the original approaches, providing additional discussions on methods and evaluations that build upon these evaluated techniques would offer a more comprehensive understanding of how these methods have been adopted and evolved since their initial proposal.
Thanks for this, we will add a discussion on methods and evaluations that build on the efficient training algorithms we compare. On this note, the new methods we have added are motivated as building on the prior work we have compared here. Specifically, Sophia [(Liu et al., 2023)](https://arxiv.org/pdf/2305.14342.pdf) builds upon Lion [(Chen et al., 2023)](https://arxiv.org/pdf/2302.06675.pdf), as [Liu et al., 2023](https://arxiv.org/pdf/2305.14342.pdf) claim that Lion “only achieves limited speed-up on LLMs“. RHO-Loss [(Mindermann et al., 2022)](https://proceedings.mlr.press/v162/mindermann22a/mindermann22a.pdf) builds on [selective backprop](https://arxiv.org/pdf/1910.00762.pdf), as [Mindermann et al., 2022](https://proceedings.mlr.press/v162/mindermann22a/mindermann22a.pdf) argue that solely prioritizing high training loss results in two types of examples that are unwanted: (i) mislabeled and ambiguous data, as commonly found in noisy, web-crawled data; and (ii) outliers, which are less likely to appear at test time. Thank you for bringing this up, we will add these things in the final version.
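To make the batch-selection idea under discussion concrete, here is a minimal sketch (our own simplification with hypothetical loss values, not the code of selective backprop or RHO-Loss): selective backprop forward-passes a large candidate batch and backpropagates only the highest-loss examples, which is exactly the behavior Mindermann et al. critique.

```python
# Sketch of high-loss batch selection in the spirit of selective backprop
# (a simplification, not the original implementation): forward a large
# candidate batch, then keep only the top-k highest-loss examples for the
# (expensive) backward pass.

def select_high_loss(losses, k):
    """Return indices of the k examples with the highest loss."""
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return order[:k]

# Hypothetical per-example losses from a forward pass over 8 candidates.
losses = [0.3, 2.1, 0.7, 1.9, 0.2, 0.9, 3.4, 0.5]
selected = select_high_loss(losses, k=3)  # -> [6, 1, 3]

# Only the selected examples would enter the backward pass. The RHO-Loss
# argument is that the highest-loss examples are often mislabeled,
# ambiguous, or outliers, so selecting purely on training loss can hurt.
```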
> Do the results of this study depend on the size of the models? Can we anticipate that the effectiveness of these efficient training techniques would be more diminished or enhanced when training larger models with a greater compute budget?
Great question! Our BERT model has 120M parameters and T5 has 247M, and across these model sizes the results appear to be consistent. We suspect the effect is similar for other LM sizes. That said, if there is another model or model size you would like to see, let us know, and we would be happy to include it.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I read the authors' responses carefully and believe that there is tremendous value in publishing these negative results, especially when they carry significant implications for future model training. I have raised my score to 5. However, I believe the review should focus on the original submission rather than newly added techniques. Thus I would not be able to further increase the score. | Summary: This paper studies three efficient pretraining techniques for Transformer models (layer stacking, layer dropping, and selective backpropagation). At variance with previous works on the subject, the authors adequately control for the learning rate schedule by evaluating performance at fixed compute budgets (as defined through a scaled wall-clock time). In doing so, they find that most of these methods provide no clear gain over a vanilla baseline -- and this finding holds across T5/BERT-like models.
Strengths: * **S1.** At variance with previous works, the authors adequately control for the effect of the learning rate schedule. This is a common mistake in countless papers, which should be more discussed as it has previously impacted prominent work around scaling laws (see Hoffmann et al., 2022 as a response to Kaplan et al., 2020).
* **S2.** This paper provides negative results on the timely topic of efficient pretraining of Transformers, allowing the community to take a step back and retrospectively analyse the validity of previous findings. Under that light, negative results can be as valuable as positive ones.
* **S3.** (minor) The color scheme (which is consistent across text & plots) is a good idea to visually help readers.
Weaknesses: * **W1. Limited significance due to limited adoption of the practices evaluated.** The impact and significance of this work is limited, as it evaluates three methods which have not been widely adopted by the community. For a negative result paper to be truly valuable, it is better if it addresses a common practice rather than a result that has not gotten traction anyway.
* **W2. It is unclear how the proposed Reference System Time approach improves upon other methods.** The authors bring up RST as one of the significant contributions of the paper, and base their analysis on its usage. However, it's unclear how it differs in practice from simply using wall-clock time -- especially since the authors don't actually perform cross-hardware comparisons, which would be one of the benefits of formalising RST.
* **W2.1** The authors claim "Unfortunately, WCT can fluctuate even on the same hardware, for instance, due to the usage of non-deterministic operations, hidden background processes, or inconsequential configurations, such as the clock rate". It is not clear how RST improves upon this -- the first recording used to scale RST may be noisy as well, and so could the recordings being scaled.
* **W2.2** It is unclear whether Figure 1 is based on real measurements or is simply there for illustrative purpose.
* **W3. The results are disparate, incomplete, and lack clarity.** The results are middling, lacking a clear finding (overall it seems layer dropping is worth it across all budgets for BERT, layer stacking may bring gains for longer training budgets on BERT but only on shorter budgets for T5, and that selective back propagation is never worth it). For a paper with negative results, clarity is key to improve upon past work.
* **W3.1.** Selective back propagation is compared in a completely different setup, on a single budget, and downstream performance is never measured for SBP+T5 -- only validation loss. Furthermore, while SBP+BERT are trained on 3 different datasets (a rather interesting study given the nature of SBP), only validation losses are provided for comparisons (no downstream task performance) and no conclusion is proposed on the influence of data source on SBP. The lack of a unified framework for comparisons between layer stacking/dropping and SBP make the paper feel more like a pot pourri of ideas rather than a principled & systematic comparison.
* **W3.2.** Reporting training loss instead of validation loss in Figure 2 is questionable: as the authors later discuss in their ablations in Figure 5.b), layer dropping acts similarly to drop out -- so it makes sense it is behind on training loss in Figure 2. Validation loss here would be far more valuable.
* **W3.3.** The figures are not clear: the x-axis in Figure 3 is not specified, the legends lack an analysis/mention of the key results for each figure, it's impossible to see differences in Figure 4, etc. Specifically to Figure 4/Table 1, the authors mention the concept of Pareto front multiple times in the main text; maybe plotting it would be far clearer than the figures/table proposed for downstream evaluation.
* **W4. Insufficient clarity and lack of added educational value.** Negative results papers can prove themselves especially valuable if they help correct a bad practice in the community; here, a general lack of clarity and of adequate framing make this paper fall short of being an educational read.
* **W4.1.** The introduction of RST is confusing and frankly not necessary, as discussed in W2.
* **W4.2.** The issue around learning rate schedules could benefit from additional discussion & illustration. In particular, it would be interesting to mention that this is the crux of the difference between the scaling laws of Kaplan et al., 2020 and Hoffman et al., 2022. Not accounting for the influence of the LR schedule caused the Kaplan work to inappropriately recommend scaling model size primarily, instead of scaling model and data jointly.
* **W4.3.** The paper contains numerous imprecisions and overall lacks clarity (see W3.3. as well). The authors sometimes abuse citations by citing far too many works at once instead of the most relevant ones (l14, l32, l78, etc.). The default citation style of NeurIPS doesn't help here, making it difficult to identify the works at a glance... On l144, it's unclear, for instance, what the sentence on curriculum and deduplication brings to the work -- since this is never further discussed. Note also that Figure 6 is never referenced in the main text, and that "Besides the dizziness due to conflicting ideas" on l30 is not adequate language for a paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Although exploring an interesting and valuable premise, this paper is insufficiently impactful and held back by its somewhat unclear and unprincipled methodology in the results section. I would recommend for the authors to: (1) unify their methodology across layer stacking/dropping/selective backpropagation and all models/datasets, to deliver a clearer results section; (2) improve the presentation of their results, with clearer figures (potentially plotting Pareto fronts as proposed in the main text); (3) get rid of the RST idea, which adds confusion for no reason. In its current form, I would rate this paper as a **Reject (3)**.
**EDIT: following rebuttal I have updated my score to a Weak Accept (6)**.
**Q1.** Based on the comments in W2 above on RST, could the authors clarify the value they see in RST? Especially regarding smoothing out fluctuations.
**Q2.** Could the authors provide additional results for selective backpropagation at different compute budgets, and expand on the experiments combining SBP with different datasets?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have included a limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > _W1. Limited significance_
Thank you for pointing this out. Based on this feedback, we have added three more recent efficient training algorithms, see the global rebuttal response.
> _W2. unclear how the proposed Reference System Time approach improves_
We believe there is a small confusion here. The difference between WCT and RST is that WCT does not account for changes in hardware and software configurations. RST does this by mapping all timings on an arbitrary system back to the reference system. We show an example of this in Figure 1, where we performed the same baseline training run across different hardware (a 3090 and an A100) and different software (different CUDA drivers on a 3090). And in fact, we ran all of our experiments across different machines (3090s and A100s), which enabled us to run such a large number of experiments (including hyper-parameter tuning, etc.). Sorry this was not clearer, we will make this clear in the final version.
> _not clear how RST improves upon this_
Thanks for this. To ensure our RST measurement is not prone to fluctuation, we averaged these timings over 1000 iterations (for each model architecture, batch, sequence length, system configuration, etc.). We put this in a footnote, but we will move this to the main text to highlight this.
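To illustrate what this averaging buys (a minimal sketch with hypothetical timing values, not the authors' actual measurement code): per-iteration timings are recorded once on the fixed reference system and averaged over many iterations, and any run on any hardware is then reported as iterations completed times reference seconds per iteration.

```python
# Minimal sketch of the Reference System Time (RST) idea (hypothetical
# values, not the paper's measurements): average per-iteration timings on a
# fixed reference system, then express every run's budget in "reference
# seconds", independent of the machine that actually ran the job.

def avg_seconds_per_iter(timings_s):
    """Average per-iteration time over many recorded iterations."""
    return sum(timings_s) / len(timings_s)

# Hypothetical per-iteration timings (seconds) for one model/batch/sequence
# length configuration, recorded once on the reference system; averaging
# over many iterations smooths out per-iteration fluctuations.
reference_timings = [0.100, 0.102, 0.098, 0.101, 0.099]
ref_s_per_iter = avg_seconds_per_iter(reference_timings)

def rst_hours(iterations, ref_s_per_iter):
    """RST consumed by a run that completed `iterations` steps."""
    return iterations * ref_s_per_iter / 3600.0

# A fixed RST budget (e.g. 24 hours) then determines how many iterations
# every method gets, regardless of the physical machine used.
budget_iters = round(24 * 3600 / ref_s_per_iter)
```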
> _W2.2 It is unclear whether Figure 1 is based on real measurements_
Figure 1 is based on real measurements (the time of a baseline training run). We will make this explicit in the final version, thanks!
> _W3. The results are disparate, incomplete, and lack clarity._
Thank you for pointing that out. To improve clarity and incomplete results, we ran additional experiments and summarized our main findings; both can be found in the global response.
> _W3.1. Selective back propagation is compared in a completely different setup_
Thanks. Based on this suggestion, we added validation and downstream performances for both SBP and Rho-Loss for all three considered budgets and a new downstream benchmark (SuperGLUE) to the new rebuttal PDF. We’ve also investigated the influence of different training data sources on both SBP and Rho-Loss in (Table 5 and Figure 4, rebuttal PDF). We opted to spend compute on this instead of on additional T5 experiments, as the effects here are large for batch selection methods. Across three training datasets, none of the batch selection methods outperform the validation loss or downstream performance of the baseline.
> _The lack of a unified framework for comparisons between layer stacking/dropping and SBP make the paper feel more like a pot pourri of ideas._
We have now included downstream comparisons for all methods in the rebuttal PDF (in Tables 1 and 2, Figures 1 and 2). We compared SBP and Rho-Loss with the baseline using validation loss instead of training loss as these methods intentionally select high-loss training batches to improve generalization. Across three training datasets, neither SBP nor Rho-Loss outperformed the validation loss or downstream performance of the baseline (Tables 3, 5; Fig. 4 in rebuttal PDF). If we don’t count the time required to select batches, RHO-Loss can slightly improve the validation loss (Table 4, rebuttal PDF).
> _W3.2. Reporting training loss instead of validation loss in Figure 2 is questionable._
Thank you for bringing this up. We report training loss because we sample without replacement from our training dataset (C4), which is so large that we never see the same data-point twice throughout training. This means that the training loss is always computed on inputs that have not been seen before and so is an unbiased estimate of the generalization error, just like the validation loss.
Also, previously this figure reported the training loss computed while the layers were dropped. In the rebuttal PDF, we have included a new version of the figure where we compute the loss with all layers enabled. This does not qualitatively change the interpretation of the figure.
> _W3.3. The figures are not clear._
Thank you! This should be the training budget. We have added this to the rebuttal PDF (Figure 4). This figure shows that the efficient batch selection algorithms do not outperform the baseline regardless of the training dataset.
> Specifically to Figure 4/Table 1, the authors mention the concept of Pareto front multiple times in the main text; maybe plotting it would be far clearer than the figures/table proposed for downstream evaluation.
We believe there is a slight confusion here, these are in fact points on the Pareto curve of downstream performance vs. time (RST for 6, 12, 24 hours). The reason we opted for bar plots and tables is because all of the methods are essentially overlapping each other.
> W4. Insufficient clarity and lack of added educational value.
Based on your prior comments, we have revised the manuscript to improve clarity (unfortunately, we cannot upload revised manuscripts in the rebuttal phase this year) and scope (additional experiments in rebuttal PDF). If you have any other recommendations, we would be happy to implement them.
> _W4.1. The introduction of RST is confusing_
See our response to W2 above.
> _W4.2. The issue around learning rate schedules could benefit from additional discussion._
This is a very nice suggestion, we will investigate this further and add a discussion on this to the final version.
> _W4.3. The paper contains numerous imprecisions_
We will revise our manuscript and (1) compress citations to the most relevant ones, (2) explain the background of sentence curriculum and deduplication in the pretraining dataset, (3) cite Figure 6 in the main text, and (4) we will rewrite the paragraph starting with "Besides the dizziness due to conflicting ideas". We hope this addresses your concerns, and we welcome any more suggestions on the writing.
> Q1.
Yes, please see the response to W2 above.
> Q2.
Yes, we have included these additional results in the rebuttal PDF.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: First, I would like to note that I appreciate that the authors have written-up an extensive rebuttal to all reviewers, with significant additional results. The scope of the changes makes it difficult to evaluate the paper in its entirety again, but I won't blame the authors on this and take this as a quirk of NeurIPS allowing a 1-page results .pdf rebuttal for the first time in a while.
The authors have adequately addressed some of my concerns: W1. (the added methods are interesting and paint a broader picture), W3., and likely W4. as well.
However, I would maintain W2. (related to doubts around the introduced concept of RST). To better explain my concern, I simply think that a lot of lines are "wasted" on what is essentially a common idea of "comparing things that are comparable" / "rescaling". The authors highlight that the rescaling allows comparison across hardware and driver versions, but I seriously question why you would even consider doing this in the first place. This sounds like an overcomplication of the evaluation setup employed.
With that in mind, I will update my score to a **Weak Accept (6)** based on the extensive update proposed by the authors.
---
Reply to Comment 1.1.1:
Title: Thank you so much! Further comments on W2.
Comment: Dear Reviewer DsDW,
Thank you so much for your time and for reconsidering your score.
On W2, we agree with you that measuring RST is a simple idea, and we will revise our manuscript to make sure that not too many lines are spent on overselling it.
The motivation for why one would even consider rescaling in the first place is that it is necessary to perform the large number of pre-training runs, in this and similar papers, in parallel to obtain results in a reasonable amount of time. This leads to many scenarios where WCT can differ from run to run, for example:
1. **Single nodes containing multiple GPUs**. If multiple jobs are running on individual GPUs on a single node, despite the software and hardware being fixed, the utilization of the shared resources (eg., CPU, RAM, Disk) can influence the job completion time. For example, [[1](https://onlinelibrary.wiley.com/doi/pdf/10.1002/cpe.6730?casa_token=NQSh5k_gkMoAAAAA:ziA4HNyHhbfsEFwdKuB220z4s73xVPH6He9JC7eT4wtFsAs0Gg92mfS39pBeo2ovRaOfOIv-CiCNIwA)] discusses the CPU and RAM utilization of data loading; the more jobs one runs, the more bottlenecked these operations become and the longer the completion time.
2. **Multiple on-premise nodes** with slightly different configurations (different hardware, software, etc.) because they were bought at different times, with different budgets, or are maintained by different people. For example, we looked at the [best paper award papers from NeurIPS last year](https://nips.cc/virtual/2022/awards_detail) and found three which used inconsistent hardware throughout experiments (which is OK in their case since they were not concerned about WCT): [[2] ](https://arxiv.org/pdf/2206.14486.pdf)(NVIDIA TITAN Xp and V100 GPUs), [[3]](https://proceedings.neurips.cc/paper_files/paper/2022/file/27c546ab1e4f1d7d638e6a8dfbad9a07-Paper-Conference.pdf) (NVIDIA RTX A5000 and Quadro RTX 8000 GPUs), [[4](https://proceedings.neurips.cc/paper_files/paper/2022/file/105112d52254f86d5854f3da734a52b4-Supplemental-Conference.pdf)] (GeForce RTX 1080, 1080 Ti and 2080 Ti GPUs).
3. **Shared compute cluster** (potentially provided by a cloud provider). Such systems are fairly complex, and fluctuations in job completion times (even for identical jobs) are fairly common. For example, Figure 11 from [[5]](https://www.usenix.org/system/files/nsdi22-paper-weng.pdf) shows this happening inside a production MLaaS cluster with over 6,000 GPUs in Alibaba. More generally, mitigating job interferences due to resource sharing in GPU clusters is an open research question in itself, see eg. Section 4.2.1 in [[6]](https://arxiv.org/pdf/2205.11913.pdf).
We hope this clarifies your concerns on W2. Thank you very much for your detailed feedback. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their insightful and encouraging reviews:
* **b1c1** - _“the goal of this work is very welcome and in my humble opinion useful to the community_”;
* **DsDW** - “_allowing the community to take a step back and retrospectively analyse the validity of previous findings_”’;
* **iTEM** - “_[it] contributes to a deeper understanding of existing methods and offers a practical perspective on their applicability_”;
* **VTeA** - “_It is an excellent example for the importance of also publishing "negative" results (no gains in metrics, no new algorithm)._”;
* **BpVn** - “_The paper's findings and suggestions are invaluable for researchers and will aid in making informed decisions when implementing efficient training strategies._”.
In response to requests for comparing more methods (**b1c1, DsDW**), we added three more recent efficient training algorithms and one more benchmark ([SuperGLUE](https://arxiv.org/abs/1905.00537)). The results can be found in the attached rebuttal PDF.
To better categorize all approaches, we have separated the methods into three different types: (1) **dynamic architectures** (i.e., the existing [layer stacking](http://proceedings.mlr.press/v97/gong19a/gong19a.pdf)/[dropping](https://proceedings.neurips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-Paper.pdf) methods), (2) **batch selection**, where we add RHO-Loss [(Mindermann et al., 2022)](https://proceedings.mlr.press/v162/mindermann22a/mindermann22a.pdf) (and the existing [selective backprop](https://arxiv.org/pdf/1910.00762.pdf) method), and (3) **efficient optimizers**, where we add recent optimizers Lion [(Chen et al., 2023)](https://arxiv.org/pdf/2302.06675.pdf) and Sophia [(Liu et al., 2023)](https://arxiv.org/pdf/2305.14342.pdf). The three new methods are very popular, e.g., their GitHub repositories have been highly starred: [1600 for Lion](https://github.com/lucidrains/lion-pytorch), [700 for Sophia](https://github.com/Liuhong99/Sophia), and [161 for Rho-Loss](https://github.com/OATML/RHO-Loss), and have been released recently (Sophia was released on May 25th, 2023; Lion on February 15th, 2023; RHO-Loss on June 16th, 2022). We have included a one-page PDF with new experimental results for all methods.
For the new optimizers, we noticed numerical instabilities when adding them as drop-in replacements into our training pipeline. Changing the mixed precision helped, which is why we report their results in BF16 instead of FP16. We also re-ran the baseline in that new precision to ensure direct comparability.
Our main findings are:
* **Training loss** (comparing: Layer stacking, Layer dropping, Lion, Sophia (for **batch selection** approaches: selective backprop and RHO-Loss, we instead compare the validation loss, as they intentionally select training batches with high loss)): The only approach to consistently outperform the training loss of the fully-decayed learning rate baseline across budgets and models is Layer stacking (see Figure 1 in the rebuttal PDF). This improvement reduces as the budget increases to 24 hours.
* **Validation loss** (selective backprop, RHO-Loss): Across three training datasets, none of the **batch selection** methods outperform the validation loss of the baseline (Table 3 and Figure 4 in rebuttal PDF). If we don’t count the time required to select batches, RHO-Loss can slightly improve the validation loss (Table 4, rebuttal PDF).
* **Downstream tasks**: For a 24-hour budget, none of the efficient training algorithms we evaluate improves the downstream performance of the baseline (Tables 1 and 2, Figures 2 and 3, rebuttal PDF).
* Methods with lower per-iteration costs than the baseline (i.e., **dynamic architecture** methods: Layer stacking, Layer dropping) can slightly improve downstream performance for lower budgets (6 hours, 12 hours), but the improvement disappears with longer training.
* Methods with higher per-iteration costs (i.e., **batch selection** methods: selective backprop and RHO-Loss, and some **efficient optimizer** methods: Sophia) are significantly worse than the baseline in some downstream tasks (GLUE, SNI), for all budgets.
* If we ignore the additional per-iteration computations of the three above methods, the downstream performance is still matched by the baseline.
Thank you for your time spent on reviewing our work.
Pdf: /pdf/accc0c909ee8ddd7c6ca7508ee96094d6bc7805b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents an analysis of 3 algorithms for training transformer models with a focus on efficiency. The authors present a way to measure wall clock time irrespective of the underlying hardware and make a principled comparison of the 3 algorithms by predefining a compute budget and adapting each algorithm and training recipe for the available time. The results of the paper show that none of the methods is universally an improvement over standard training.
Strengths: - Meta-analyses are very common in other fields but are sorely lacking in ML/Deep learning so the goal of this work is very welcome and in my humble opinion useful to the community.
- Setting a compute budget and trying to optimize each algorithm accordingly is not as common as it should be in similar works.
Weaknesses: - The main weakness of the work is possibly the choice of algorithms to be analyzed. There are only 3 while there could be a plethora of others as mentioned in the paper's related work. Moreover, the 3 methods evaluated are not particularly widespread (possibly because they don't work as well as the paper shows).
- Another possible weakness is the fact that there is separate hyper parameter tuning per method irrespective of the compute budget. This means that the results could be widely different were one to apply the above methods to a new problem where the optimal or near-optimal hyper parameters are not known a priori.
- RST, although fair, is not a transferable metric, which means that a different paper will not be able to compare with the RST results of this paper and will have to re-run the experiments to compute the relative speed of the methods on their hardware. This severely diminishes its usefulness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My main reservation for the paper is whether the experiments are enough to prove useful to the community.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors properly present and analyze the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> _The main weakness of the work is possibly the choice of algorithms to be analyzed. There are only 3 while there could be a plethora of others as mentioned in the paper's related work. Moreover, the 3 methods evaluated are not particularly widespread (possibly because they don't work as well as the paper shows)._
Thank you for pointing this out. Based on this feedback, we have added three more recent and popular efficient training algorithms from this plethora of other methods mentioned in the related work section. To better organize all methods we have separated them into three different types: (1) **dynamic architectures** (i.e., the existing [layer stacking](http://proceedings.mlr.press/v97/gong19a/gong19a.pdf)/[dropping](https://proceedings.neurips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-Paper.pdf) methods), (2) **batch selection**, where we add RHO-Loss [(Mindermann et al., 2022)](https://proceedings.mlr.press/v162/mindermann22a/mindermann22a.pdf) (and the existing [selective backprop](https://arxiv.org/pdf/1910.00762.pdf) method), and (3) **efficient optimizers**, where we add recent optimizers Lion [(Chen et al., 2023)](https://arxiv.org/pdf/2302.06675.pdf) and Sophia [(Liu et al., 2023)](https://arxiv.org/pdf/2305.14342.pdf). The three new methods are very popular, e.g., their GitHub repositories have been highly starred: [1600 for Lion](https://github.com/lucidrains/lion-pytorch), [700 for Sophia](https://github.com/Liuhong99/Sophia), and [161 for Rho-Loss](https://github.com/OATML/RHO-Loss), and have been released recently (Sophia was released on May 25th, 2023; Lion on February 15th, 2023; RHO-Loss on June 16th, 2022).
Please let us know if there are other methods you would like to see tested.
> _Another possible weakness is the fact that there is separate hyper parameter tuning per method irrespective of the compute budget. This means that the results could be widely different were one to apply the above methods to a new problem where the optimal or near-optimal hyper parameters are not known a priori._
Thanks for this. We agree that placing a budget on hyperparameter tuning is important. However, if we also placed a budget here, one could argue that the reason the baseline outperforms efficient training algorithms is because the baseline has fewer hyperparameters to tune, and so better hyperparameters can be found compared to other methods under this fixed budget. To avoid this critique, we fully tuned all efficient training algorithms, making them as strong as possible. We expect that in new problems with a fixed budget on training and hyperparameter tuning, the performance of these efficient training algorithms will degrade even further w.r.t. the baseline. Thank you again for bringing this up, we will clarify this point in the final version.
> _RST although fair, it is not a transferable metric which means that a different paper will not be able to compare with the RST results of this paper and will have to re-run the experiments to compute the relative speed of the methods on their hardware._
We believe there is a slight confusion here. We saved all timings on the reference system for the model architectures, batches, sequence lengths, and other configurations used in our experiments. We will release these timings, allowing other researchers to run experiments with the same RST budget as computed against this reference system. This will prevent them from having to rerun any experiments in our paper. Sorry that this was not clearer, we will clarify this in the final version. | null | null | null | null | null | null |
GlyphControl: Glyph Conditional Control for Visual Text Generation | Accept (poster) | Summary: This work proposes an approach to generating images with visual text. The main approach consists of a generalized version of ControlNet with rendered text as control guidance. To facilitate the training and evaluation, a benchmark dataset called LAION-OCR is proposed. Experiments show that the proposed approach outperforms competitors such as DeepFloyd IF and SD.
Strengths: - Generating visual text accurately has been a challenging issue in the field. This work provides a simple yet effective approach to this challenge by elegantly extending ControlNet with rendered text as control. I am convinced that having rendered text as input makes a lot of sense in this context and would greatly help generate accurate text.
- In addition, having rendered text also allows control over the positioning and font size, which would be of great help in practice.
- Experiments show that the proposed approach significantly outperforms competitors while showing compelling visual results.
- Overall, I believe this work is a nice addition to the current research landscape of text-to-image generation.
Weaknesses: It would help to add some failure cases analysis in order to better understand when models may fail.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Is the text encoder frozen? What about U-Net Encoder? What is the influence of such design choices?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations not well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer Yxcw
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
> "It would help to add some failure cases analysis in order to better understand when models may fail."
A: Thanks for your suggestion. We have added additional failure cases covering eight types of errors in **Figure 3 of the attached PDF file**. The Rendering Overlap problem occurs when the locations of text boxes within the glyph instructions overlap. Character-level errors such as Missing Characters, Wrong Characters, and Duplicate Characters also still occur. Poor performance in the Excessive Yaw and Small Text scenarios may be attributed to a lack of corresponding training samples.
We also show some cases of generating a large amount of small-font-size text in **Figure 2 of the attached PDF file**. Although the arrangement of the glyph images is preserved within the generated images, our model still struggles to generate readable small-size text.
We will add the above failure case analysis to the revised paper.
---
> "Is the text encoder frozen? What about U-Net Encoder? What is the influence of such design choices?"
A: Yes, the text encoder (CLIP) is frozen. As for the U-Net Encoder, the part belonging to the original Stable Diffusion is frozen, while the Glyph ControlNet (see Figure 2), which is essentially an additional copy of the U-Net Encoder, is trainable. This design choice aims at preserving the original generation ability of SD.
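To make the frozen/trainable split concrete, here is a minimal, framework-agnostic sketch (illustrative only; the function names and module interfaces are our assumptions, not the actual GlyphControl code):

```python
def freeze(module):
    """Mark every parameter of a module as non-trainable (requires_grad=False)."""
    for p in module.parameters():
        p.requires_grad = False


def split_trainable(text_encoder, base_unet, glyph_controlnet):
    """GlyphControl-style split: the CLIP text encoder and the original SD
    U-Net stay frozen; only the Glyph ControlNet branch (a copy of the
    U-Net encoder) is handed to the optimizer."""
    freeze(text_encoder)   # CLIP text encoder: frozen
    freeze(base_unet)      # original Stable Diffusion U-Net: frozen
    return list(glyph_controlnet.parameters())  # trainable branch only
```

The optimizer would then be constructed over the returned parameter list only, so gradient updates can never alter the pretrained weights.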
---
> "Limitations not well discussed."
A: Thanks for your advice. Here are some limitations of our method:
- Lack of controllability on text font, color, and style
- Poor performance when generating abundant text or characters with small font sizes
- Using samples with a limited number of OCR boxes for training
We will include more discussions about the limitations and potential future work to our revised paper.
---
Rebuttal Comment 1.1:
Comment: The response addresses my questions. I believe this work provides an effective empirical solution to the target task. I'd therefore maintain my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for Reviewer Yxcw's Prompt Response
Comment: We appreciate the reviewer's thorough feedback and positive rating. We'll incorporate your valuable suggestions, including the rebuttal contents, in the final paper revision. | Summary: This paper addresses the development of diffusion-based text-to-image generative models for generating coherent visual text. They propose GlyphControl, which augments textual prompts with glyph conditional information to encode shape details and improve accuracy. They introduce the LAION-OCR benchmark dataset and evaluate GlyphControl's effectiveness using OCR-based metrics and CLIP scores, demonstrating its superior performance over the DeepFloyd IF approach in empirical evaluations.
Strengths: The strengths of this paper lie in the proposal of GlyphControl, a glyph-conditional text-to-image generation model. Additionally, the introduction of the LAION-OCR benchmark dataset and the ability to customize and control the content, locations, and sizes of generated visual text demonstrate the paper's practical contributions to the field of visual text generation.
Weaknesses:
When it comes to text generation, information such as font size and style is crucial, and it would be beneficial for the authors to conduct experimental analysis on this aspect.
The authors' decision to remove images with more than 5 bounding boxes lacks clarity. Considering rich-text images as a valuable corpus and only focusing on a limited number of OCR images may limit the model's ability to generate rich-text images.
While the authors utilize BLIP-2 captions, which may not consistently describe both the image content and OCR text information, it would be interesting to see how the authors address the challenge of generating captions that consider both aspects.
Although this method generates accurate text, it raises curiosity about its effectiveness when dealing with a large amount of text or small font sizes, such as generating a paragraph rather than a simple phrase or word.
In addition to text generation, is it possible to modify the text within an image based on given text information, such as changing the color and position of specific words?
When dealing with a large amount of text, such as 500 words, ensuring text generation quality becomes crucial, as it requires high-resolution images. Additionally, ensuring efficiency in generating such images is also an important consideration.
Many texts may not need to be generated as they can be obtained through text rendering. Have the authors attempted this approach, based on text rendering?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness part
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main concern of this paper revolves around the quantity of text. Starting from the database, the authors control the number of OCR boxes, which, to some extent, limits the generation of rich-text images.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reviews and suggestions.
> Q1
A: Great point!
👉 For font sizes, our GlyphControl allows users to control the font size of the rendered text by modifying the width property of the text bounding box. We further report the generation results for various font sizes using GlyphControl-SDv2.0, trained with LAION-OCR-100K, in **Table 1 of the attached PDF**.
👉 As we have not included the font style during training, our method does not support controlling font style yet. We would like to investigate the modern font style recognizer to include the style information into the text caption during training.
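To make the box-width control above concrete, here is a toy sketch of how a box's width could determine the rendered font size (purely illustrative; `fit_font_size` and the `char_aspect` constant are our assumptions, not the actual glyph renderer):

```python
def fit_font_size(text, box_width_px, box_height_px, char_aspect=0.6):
    """Toy estimate of the largest font size (in pixels) at which `text`
    still fits a box: each character is assumed to occupy roughly
    char_aspect * font_size pixels of width."""
    if not text:
        return box_height_px  # empty text: only the box height constrains size
    width_limited = box_width_px / (len(text) * char_aspect)
    return int(min(width_limited, box_height_px))
```

Under this model, widening the bounding box directly enlarges the rendered glyphs until the box height becomes the binding constraint.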
---
> Q2
A: Great point! We summarize the key reasons as follows:
- First, we want to highlight that more than 30% of the images in our dataset contain more than 5 bounding boxes; we removed them simply due to limited resources. We further filtered out samples whose total OCR area is less than 5% of the whole image area, considering that these images do not contain sufficient text information. Therefore, the remaining samples comprise only around 20% of the entire set of images with visual text. We believe there are still many opportunities here.
- Second, it is pertinent to note that most previous works have predominantly showcased the capability to generate fewer than two bounding boxes, which simplifies the text-rendering task to some degree.
- Third, our analysis has revealed that instances containing a substantial number of OCR boxes (>5) often pertain to scenarios involving books and newspapers. In such cases, there is a dual challenge: the relatively small size of the text poses difficulty for VAE-decoder-based pixel-level reconstruction, and the dense text distribution negatively impacts OCR recognition and glyph rendering.
We have visualized some representative cases with more than 5 bounding boxes in **Figure 2 of the attached PDF**. In summary, we want to relax the restrictions on the LAION-OCR dataset in the future, raising the threshold from 5 to 10 or 20 bounding boxes.
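The filtering rule discussed above (at most 5 OCR boxes, total OCR area at least 5% of the image) can be sketched as a simple predicate; the function name and box representation are illustrative, not the actual dataset-construction code:

```python
def keep_sample(ocr_boxes, image_area, max_boxes=5, min_area_frac=0.05):
    """Illustrative LAION-OCR-style filter: keep an image only if it has at
    most `max_boxes` OCR boxes AND their total area covers at least
    `min_area_frac` of the image. `ocr_boxes` is a list of (w, h) tuples."""
    total_area = sum(w * h for (w, h) in ocr_boxes)
    return len(ocr_boxes) <= max_boxes and total_area >= min_area_frac * image_area
```

Raising `max_boxes` to 10 or 20, as suggested, would admit the book/newspaper-style images currently excluded.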
---
> Q3
A: Great point! We've also noticed that BLIP-2 captions often struggle to consistently depict both image content and OCR text information, resulting in considerable noise. We will attempt the following avenues to address this challenge:
- Employing more potent caption models like LLaVA-from-LLaMA-2 (https://github.com/haotian-liu/LLaVA) and Kosmos-2 (https://github.com/microsoft/unilm/tree/master/kosmos-2). Additionally, we aim to craft specialized prompts to guide the model in generating reliable captions that encompass both image content and OCR text information.
- Enhancing BLIP-2 captions using GPT-3.5 or GPT-4 (text-only) based on supplementary OCR recognition outcomes. Our intention is for the rewritten captions to exhibit a higher quality. We can also explore refining the captions based on the above stronger models such as LLaVA-from-LLaMA-2 and Kosmos-2.
Given the resource-intensive nature of both approaches and the ongoing nature of their implementation, we aspire to incorporate these results into the final revision. We are also keen to receive any further valuable suggestions that could contribute to our progress in this direction.
---
> Q4
A: Great point! We follow your suggestion to generate images that contain a paragraph rather than a simple phrase or word. We visualize the results in **Figure 2 of the attached PDF**. Our method preserves the global arrangement specified by the glyph instructions but fails to generate readable small-size text within paragraphs. We would like to further explore this inspiring direction.
---
> Q5
A: Great suggestion! We concur that editing glyphs (visual text) within an image using text prompts is an immensely valuable avenue to pursue. While our present solution does not currently encompass this feature, we are committed to delving into this direction soon. Our plan involves drawing inspiration from previous notable works like Prompt-to-Prompt and InstructPix2Pix. We aim to establish a framework in this exciting direction by constructing relevant training data and frameworks.
---
> Q6
A: Great point! First, we need to elevate the filtering threshold for the number of boxes and incorporate training data that encompasses a substantial quantity of text. Second, we agree on the importance of high-resolution images. Notably, the recently open-sourced SDXL might provide an excellent starting point as it supports resolutions up to 1024x1024. Third, generating high-resolution images will surely bring higher computation costs. However, we have observed that the glyph renderer is notably efficient and incurs only minor overhead. Furthermore, we are keen to explore various avenues to enhance the efficiency of the diffusion models, a broad and open challenge. This could involve measures like int8 quantization, utilization of advanced samplers like EulerEDMSampler, or even exploring cascade diffusion architectures such as Imagen and its open-source variant, DeepFloyd IF.
---
> Q7
A: Great point! While text generation might not be essential for all cases, coherence issues can arise from current naive glyph rendering methods, which only support rendering black text in a single font style on a whiteboard. We are keen to explore advanced font rendering integration, like artistic fonts from Adobe Firefly, within GlyphControl. This approach could work well for poster-like images. Your additional feedback is highly appreciated.
---
> Q8
A: Your suggestions have been immensely valuable and have significantly enhanced our work. We intend to enrich the training dataset with more OCR boxes, train the models accordingly, and provide both quantitative and qualitative comparison results. We assure you that despite time constraints during the rebuttal period, we'll include these outcomes in the final revision. Your input has played a pivotal role in refining our work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses provided by the author during the rebuttal phase. I believe that the author's responses in the feedback should be incorporated into this paper, particularly concerning issues like font consistency and the need for rich text. While these may be considered drawbacks, highlighting these concerns also contributes significantly. Based on the opinions of other reviews and the author's current feedback, I am willing to slightly increase my score.
---
Reply to Comment 1.1.1:
Title: Thanks for Reviewer xxJ1's Response
Comment: We extend our gratitude to reviewer xxJ1 for your valuable comments that enriched our understanding of visual text rendering. We'll incorporate these insights, particularly regarding font consistency and enriched text, in our revision. Your encouragement and increased ratings are greatly appreciated. | Summary: The paper add glyphcontrol to diffusion model by adding controlnet that takes rendered whiteboard images as inputs. Texts in the whiteboard images are extracted by OCR engine during training and then rendered by glyph renderer. During inference, glyph renderer renders the whiteboard images based on the instructions and feed the images using glyph controlnet. They also provide LAION-OCR benchmark by filtering LAION dataset using OCR systems.
Strengths: The incorporation of whiteboard images generated by the glyph renderer into diffusion models for glyph generation, using ControlNet, is a reasonable and easy-to-understand idea. The paper effectively demonstrates the successful synthesis of images with clean glyphs, surpassing the performance of the baselines, and the glyphs can be controlled easily. They also provide the LAION-OCR dataset, which is useful for OCR-related research.
Weaknesses: The proposed method requires additional training. The method may be seen as an extension of ControlNet and may appear to involve the combination of multiple modules, which could be seen as engineering work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Does finetuning with OCR data harm the generation performance of the original model? (visual quality of non-textual parts after fine-tuning)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper does not explicitly describe their limitations. It would be nice the author provides limitations that can be solved for future works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer kTnz
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
> "The proposed method requires additional training. The method may be seen as an extension of ControlNet and may appear to involve the combination of multiple modules, which could be seen as engineering work."
A: Good point!
First, our method incurs notably lower additional training costs than alternatives like DeepFloyd IF and SDXL. These methods necessitate extensive retraining using thousands of A100 GPUs over months.
Second, GlyphControl combines modules for coherent text-image synthesis (which may seem like engineering work to some degree), yet its implementation is both simple and effective. It offers a valuable baseline for addressing limitations in existing SD models.
Lastly, we aspire to create concise, innovative frameworks moving forward. We would also welcome any further valuable suggestions.
---
> "Does finetuning with OCR data harm the generation performance of the original model? (visual quality of non-textual parts after fine-tuning)"
A: No, we believe that finetuning with OCR data using our method would not significantly harm the generation performance of the original model.
When generating **natural images without specifying the text rendering information** (FID-30K-COCO), our model strictly retains the generation abilities of the original stable diffusion model, i.e., **FID evaluated on the COCO dataset would remain the same**. It is because the Controlnet framework does not alter the original SD model weights and the GlyphControl branch would not be used during inference due to empty glyph image input.
To test the visual quality of generated **text images**, we evaluate FID on LAION-OCR, selecting examples that are not included in the GlyphControl training set.
| Method | FID-10K-LAION-OCR $\downarrow$ |
| :---------------- | :---------: |
| SDXL-1.0 | $44.77$ |
| Stable Diffusion v2.1 | $50.01$ |
| Stable Diffusion v2.0 | $39.23$ |
| DeepFloyd (IF-I-M) | $23.53$ |
| DeepFloyd (IF-I-L) | $30.85$ |
| DeepFloyd (IF-I-XL) | $26.34$ |
| GlyphControl-SDv2.0 (LAION-OCR-100K) | $29.13$ |
| GlyphControl-SDv2.0 (LAION-OCR-1M) | $28.02$ |
Our approach shows comparable or slightly worse performance than the DeepFloyd IF models in terms of **FID-10K-LAION-OCR**. This implies that the diversity of visual text image generation is preserved by our method and that the quality of non-textual parts of the images is not significantly harmed.
---
> "The paper does not explicitly describe their limitations. It would be nice the author provides limitations that can be solved for future works."
A: Thanks for your great suggestions. These are some limitations of our method:
- Lack of controllability on text font, color, and style
- Poor performance when generating abundant text or characters with small font sizes
- Using samples with a limited number of OCR boxes for training
We will also include discussions about the limitations and future work in our revised paper.
---
Rebuttal Comment 1.1:
Comment: As the reviewer wGfp said, I agree that the work is highly motivated by ControlNet, but I think the work is still valuable for the OCR community. After reading the rebuttal, I keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for Reviewer kTnz's Response
Comment: We express our gratitude for the positive remarks provided by reviewer kTnz regarding the value of our work within the OCR community. Your encouragement and favorable evaluation are truly cherished. | Summary: In this paper, a glyph-conditional text-to-image generation model named GlyphControl is proposed for visual text generation. In addition, the authors introduce a visual text generation benchmark named LAION-OCR by filtering the LAION-2B-en. The results show that method of this paper outperforms DeepFloyd IF and Stable Diffusion in terms of OCR accuracy and CLIP score.
Strengths: 1. The visualization results shown in this paper are impressive. The images presented in the paper show that the text of the generated text region is accurate at both the character level and the word level.
2. The introduction of LAION-OCR dataset is a modest contribution to the realm of NLP and text image generation.
Weaknesses: 1. The contribution is limited. The whole model is the same as ControlNet. The LAION-OCR dataset is just filtered out from an open-source dataset LAION-2B-en, which is not collected by the authors.
2. The results of the quantitative experiment indicate that the accuracy achieved using this method is merely 40%/26%, which to some extent suggests that the visualized outcomes have been carefully selected. More results need to be displayed.
3. The comparison is unfair. The compared methods do not prioritize text generation.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Does the glyph render module only support a single bounding box input or can it accommodate multiple bounding box inputs?
2. Does the method described in this paper utilize classifier-free guidance during sampling? Although caption dropping is mentioned in the experimental details, there is no specific mention of classifier-free guidance.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The manageable elements of users are still constrained. Users are unable to select the color or the font.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer wGfp
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
> "The contribution is limited. The whole model is the same as ControlNet. The LAION-OCR dataset is just filtered out from an open-source dataset LAION-2B-en, which is not collected by the authors."
A: Please refer to the general response. In general, the main contribution of this work is to provide a surprisingly simple yet effective solution to rendering legible (visually coherent) text. We are committed to refining this work further based on any additional valuable suggestions you may provide.
---
> "The results of the quantitative experiment indicate that the accuracy achieved using this method is merely 40%/26%, which to some extent suggests that the visualized outcomes have been carefully selected. More results need to be displayed."
A: Great suggestion! We have followed your valuable comments to visualize more failure cases and illustrate different types of errors by pointing out the places where the errors occur in **Figure 3 of the attached PDF**. We also would like to include these visualization results and add more analysis in the final revision.
---
> "The comparison is unfair. The compared methods do not prioritize text generation."
A: We disagree with your statement for the following reasons:
- **Training cost**: The key factors contributing to the success of these models are (i) using much stronger text encoders (DeepFloyd IF uses T5-XXL with 4.8B parameters, SD-XL uses a combination of OpenCLIP-G and CLIP-L with 817M parameters) and (ii) re-training the entire text-to-image diffusion models from scratch. These factors require thousands of A100 GPUs for training over several months. Given that our method only requires fine-tuning off-the-shelf models, it is hard to call the comparison biased in our favor once these substantial training costs are taken into account.
- **Position of this work**: As reported in Table 1, both DeepFloyd IF[2] and SD-XL[3] still struggle to render accurate legible text. The Midjourney model performs even worse when handling this challenging task. We want to emphasize that our method is not intended to replace these strong models but to improve their accuracy. We are also making significant efforts to integrate our method into these robust baselines, and we will include these results in the revision, such as GlyphControl + DeepFloyd IF and GlyphControl + SD-XL.
[1] https://huggingface.co/stabilityai/stable-diffusion-2-1#limitations
[2] https://www.deepfloyd.ai/deepfloyd-if / https://github.com/deep-floyd/IF
[3] https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
Overall, it's challenging to definitively claim the comparison as unfair given the noted distinctions. We aspire for our work to play a pivotal role in advancing the accuracy of visual text rendering. We would like to hear your further valuable feedback.
---
> "Does the glyph render module only support a single bounding box input or can it accommodate multiple bounding box inputs?"
A: Great point! Indeed, we have effectively demonstrated that the glyph render module is capable of handling multiple bounding box inputs. You can find a detailed illustration in the main paper, precisely in text lines 157-158 and the fourth example in Figure 4. Moreover, all four examples showcased in Figure 1 have been generated using multiple bounding box inputs, further substantiating our capabilities in this aspect.
---
> "Does the method described in this paper utilize classifier-free guidance during sampling? Although caption dropping is mentioned in the experimental details, there is no specific mention of classifier-free guidance."
A: Yes, we adopt the classifier-free guidance following other diffusion models like Stable Diffusion and ControlNet. Thanks for pointing out this important detail we have missed and we will explain how we use classifier-free guidance in the revised paper.
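For reference, classifier-free guidance mixes the conditional and unconditional noise predictions at each sampling step. A minimal sketch (the function name is illustrative, and plain lists stand in for tensors to keep it dependency-free):

```python
def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the conditional one,
    eps = eps_uncond + s * (eps_cond - eps_uncond)."""
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]
```

Dropping captions during training is what makes the unconditional prediction `eps_uncond` available at sampling time.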
---
> "The manageable elements of users are still constrained. Users are unable to select the color or the font."
A: Thank you for your insightful suggestion! We are committed to delving into the possibilities of controlling colors and fonts in our future endeavors. We also welcome and appreciate your further valuable suggestions.
---
Rebuttal Comment 1.1:
Title: Looking forward to hearing the feedback from Reviewer wGfp
Comment: We extend our sincere gratitude for the invaluable guidance you have offered in the refinement of our work. Our highest priority is to rigorously address the primary concerns you have raised, particularly those related to the perceived limitations of our contributions and the fairness of comparisons.
We humbly welcome any additional suggestions you may deem fit to share. Your insights hold immense importance for us, and we view them as integral to the ongoing enhancement of our work. | Rebuttal 1:
Rebuttal: ## To AC and All Reviewers
We would like to express our gratitude to all the reviewers for their careful reviews and constructive suggestions. We appreciate the positive comments, such as "the task of visual text generation is interesting" (Reviewer Cb1X), "the visualization results shown in this paper are impressing" (Reviewer wGfp), "reasonable and easy to understand" (Reviewer kTnz), "demonstrate the paper's practical contributions to the field of visual text generation" (Reviewer xxJ1), and "I believe this work is a nice addition to the current research landscape of text-to-image generation" (Reviewer Yxcw).
We would like to address the major concerns regarding the contribution of this work by focusing on the following aspects:
> **The motivation and potential impact of our work**
First, we would like to emphasize that addressing the well-known limitations[1] of rendering legible (visually coherent) text is a significant challenge for the fundamental Stable-Diffusion model series.
Second, we acknowledge that some groundbreaking works, such as DeepFloyd IF[2] and SD-XL[3], which emphasize their strong capabilities of rendering legible text, were made public shortly before or after our submission. However, we disagree with Reviewer wGfp's statement that "the comparison is unfair, the compared methods do not prioritize text generation," for the following reasons:
- **Training cost**: The key factors contributing to the success of these models are (i) using much stronger text encoders (DeepFloyd IF uses T5-XXL with 4.8B parameters, SD-XL uses a combination of OpenCLIP-G and CLIP-L with 817M parameters) and (ii) re-training the entire text-to-image diffusion models from scratch. These factors require thousands of A100 GPUs for training over several months. Given that our method only requires fine-tuning off-the-shelf models, it is hard to call the comparison biased in our favor once these substantial training costs are taken into account.
- **Position of this work**: As reported in Table 1, both DeepFloyd IF[2] and SD-XL[3] still struggle to render accurate legible text. The Midjourney model performs even worse when handling this challenging task. We want to emphasize that our method is not intended to replace these strong models but to improve their accuracy. We are also making significant efforts to integrate our method into these robust baselines, and we will include these results in the revision, such as GlyphControl + DeepFloyd IF and GlyphControl + SD-XL.
[1] https://huggingface.co/stabilityai/stable-diffusion-2-1#limitations
[2] https://www.deepfloyd.ai/deepfloyd-if / https://github.com/deep-floyd/IF
[3] https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
> **Comparison with GlyphDraw and ControlNet**
First, we would like to clarify that **GlyphDraw is a concurrent work made available on arXiv on 31st March 2023**, which explores the benefits of glyph images in a distinct manner.
Second, we want to emphasize that the ControlNet architecture design is not our contribution. Our contributions lie in (i) demonstrating that using glyph images is a surprisingly simple yet highly effective approach for generating legible text, and (ii) introducing the LAION-Glyph benchmark to facilitate the development of this challenging generation task.
Third, we outline the key differences between our GlyphControl and GlyphDraw:
- Our GlyphControl offers more flexible controllability than GlyphDraw by supporting customized glyph instructions, allowing for control over text line information and text box information (as evidenced by the visual results in Figure 4).
- GlyphDraw requires fine-tuning the cross-attention weights within the U-Net, which may harm the generation capability of the original diffusion model. In contrast, our GlyphControl follows the ControlNet scheme, freezing the well-trained U-Net weights to preserve the original model's capability optimally.
- GlyphDraw necessitates (i) an additional mask prediction module to predict segmentation masks, and (ii) a CLIP image encoder to extract visual representations of the corresponding glyph image. Our GlyphControl, however, only requires an additional Glyph ControlNet to constrain the latent representations within the U-Net.
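To make the notion of "customized glyph instructions" above concrete, here is a toy sketch of how a normalized instruction (text content, top-left box corner, box width, number of text lines) could be mapped to pixel-space boxes for a glyph renderer to fill. This is purely illustrative: the instruction format, function name, and layout heuristics are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: map a normalized glyph instruction -- text, top-left
# corner, box width, number of lines -- to pixel-space text boxes on a
# fixed-size canvas. None of these names come from the paper.

def glyph_boxes(text, top_left, box_width, num_lines, canvas=512):
    """Split `text` across `num_lines` lines and return one pixel-space
    (x, y, w, h) box per line. `top_left` and `box_width` are normalized
    to [0, 1]; `canvas` is the side length of the square canvas in pixels."""
    words = text.split()
    per_line = -(-len(words) // num_lines)  # ceil division
    lines = [" ".join(words[i:i + per_line])
             for i in range(0, len(words), per_line)]
    x0, y0 = (int(round(v * canvas)) for v in top_left)
    w = int(round(box_width * canvas))
    line_h = w // max(len(line) for line in lines)  # crude per-glyph height
    return [(x0, y0 + i * line_h, w, line_h) for i, line in enumerate(lines)]
```

A renderer would then draw each line of text into its box on a blank canvas and feed the result to the Glyph ControlNet as the conditioning image.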
Lastly, we would like to express our gratitude for the positive and accurate comments from Reviewer Yxcw: **"This work provides a simple yet effective approach to this challenge by elegantly extending ControlNet with rendered text as control. I am convinced that having rendered text as input makes a lot of sense in this context and would greatly help generate accurate text."** We sincerely hope the reviewers will reconsider the original ratings.
PS: **Please refer to the PDF for more visualization results.**
Pdf: /pdf/ed2bb62a51010073d8855b3615e133e9c7f49d2c.pdf
Summary: This work proposes GlyphControl for visual text generation by augmenting the textual prompt with additional glyph conditional information. A benchmark of LAION-OCR is built for evaluating this model.
Strengths: ++The task of visual text generation is interesting.
++Good results are shown in experiments.
++A new visual text generation benchmark is introduced.
Weaknesses: --The main concern is the limited technical contribution. The whole architecture of GlyphControl is a simple extension of ControlNet by using additional control of glyph images. But the idea of glyph images has been validated in GlyphDraw [13] for the same task.
--I am curious by the claim of “outperforms DeepFloyd IF and Stable Diffusion in terms of 55 OCR accuracy and CLIP score while saving the number of parameters by more than 3x”. As shown in Table 1, the parameter number of GlyphControl is significantly larger than Stable Diffusion v2.0.
--Why not train GlyphControl on the common SD 2.1, which has the similar number of parameters and clearly outperforms SD 2.0.
--More competitive baselines should be included for comparison (GlyphDraw [13], SD XL and Midjourney).
--As in GlyphDraw [13], it is necessary to report the FID values in performance comparison.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: More discussion on the technical contribution and more comparisons with competitive baselines.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer Cb1X
We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.
> "The main concern is the limited technical contribution. The whole architecture of GlyphControl is a simple extension of ControlNet by using additional control of glyph images. But the idea of glyph images has been validated in GlyphDraw [13] for the same task."
A: Please refer to the general response for addressing the concerns regarding the limited technical contribution. To sum up, the main contribution of this work lies in introducing an astonishingly simple yet highly effective approach for rendering legible and visually coherent text.
---
> "I am curious by the claim of “outperforms DeepFloyd IF and Stable Diffusion in terms of 55 OCR accuracy and CLIP score while saving the number of parameters by more than 3x”. As shown in Table 1, the parameter number of GlyphControl is significantly larger than Stable Diffusion v2.0."
A: Thanks for pointing out the typo, we will fix it in the revised paper. "More than 3x" refers to the comparison with DeepFloyd IF-I-XL (shown below), which achieves the best OCR accuracy among other baselines.
| Method | \#Params | Text Encoder |
| :---------------- | :--------- | :--------|
| DeepFloyd (IF-I-XL) | $6.0$ B | T5-XXL ($4.8$ B)|
| GlyphControl | $1.3$ B | CLIP ($354$ M) |
The total number of parameters reported in the second column of the table does not include the text encoder.
---
> "Why not train GlyphControl on the common SD 2.1, which has the similar number of parameters and clearly outperforms SD 2.0."
A: We chose SD 2.0 to demonstrate the effectiveness of our framework, which can also be transferred to SD 2.1. We are training the GlyphControl framework based on SD 2.1 using LAION-OCR-100k while keeping the same training settings as the earlier experiments on SD 2.0. The results for OCR accuracy, CLIP Score, and FID are shown below.
| Method | Epochs | $\bf{Acc}$ (%) $\uparrow$ | $\bf{\hat{Acc}}$ (%) $\uparrow$ |$\bf{LD}\downarrow$ | CLIP Score $\uparrow$ | FID-10K-LAION-OCR $\downarrow$ |
| :---------------- | :---------: | :---------: | :--------:| :---------: | :---------: | :---------: |
| GlyphControl-SDv2.0 | 60 | $30/19$ | $37/24$ | $1.77/2.58$ | $33.69/36.20$ | $29.13$ |
| GlyphControl-SDv2.0 | 30 | $20/14$ | $28/19$ | $2.20/3.00$ | $33.73/36.14$ | $29.18$ |
| GlyphControl-SDv2.1 | 30 | $17/14$ | $27/19$ | $2.36/3.18$ | $33.85/35.54$ | $28.02$ |
After training for 30 epochs, the GlyphControl model trained on SDv2.1 shows slightly worse OCR accuracy and CLIP score than its SDv2.0 counterpart, while image fidelity (FID) improves, which may be attributed to the stronger generation ability of SDv2.1.
We will report the results of GlyphControl-SDv2.1 in the revised paper after finishing the 60-epoch training.
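For reference, the $\bf{LD}$ column in the tables above is an edit (Levenshtein) distance between the OCR result and the target text. A standard dynamic-programming implementation — shown here only to clarify the metric, not as the authors' exact evaluation code — is:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn `a` into `b` (classic DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (or match)
        prev = cur
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3, the textbook case (two substitutions plus one insertion).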
---
> "More competitive baselines should be included for comparison (GlyphDraw [13], SD XL and Midjourney)."
A: As we checked in the GitHub repo of GlyphDraw [13] (https://github.com/OPPO-Mente-Lab/GlyphDraw), the checkpoints of GlyphDraw are not available, so we could not conduct a fair comparison on our benchmark. Besides, Midjourney is not a free, open-sourced tool, so we could not compare against it either.
SDXL was not open-sourced until June 2023, and the latest version, SDXL-1.0, was released at https://github.com/Stability-AI/generative-models right before the rebuttal. The comparison results with SDXL-1.0 are reported here and will be added to the paper. We use both SDXL-base-1.0 and SDXL-refiner-1.0.
| Method | $\bf{Acc}$ (%) $\uparrow$ | $\bf{\hat{Acc}}$ (%) $\uparrow$ |$\bf{LD}\downarrow$ | CLIP Score $\uparrow$ | FID-10K-LAION-OCR $\downarrow$ |
| :---------------- | :---------: | :--------:| :---------: | :---------: | :---------: |
| SDXL-1.0 | $0.3/0.5$ | $13/8$ | $6.26/6.30$ | $31.9/33.3$ | $44.77$ |
| GlyphControl-SDv2.0 (LAION-OCR-100K) | $30/19$ | $37/24$ | $1.77/2.58$ | $\bf{33.7}/\bf{36.2}$ | $29.13$ |
| GlyphControl-SDv2.0 (LAION-OCR-1M) | $\bf{40}/\bf{26}$ | $\bf{45}/\bf{30}$ | $\bf{1.59}/\bf{2.47}$ | $33.4/36.0$ | $28.02$ |
As seen in the above table, our method still significantly outperforms the latest powerful text-to-image generation model, SDXL-1.0, in terms of OCR accuracy, CLIP Score, and FID. Moreover, we include some visualization results in **Figure 1 of the attached PDF** for comparison with competitive baselines, i.e., IF, SDXL, and Midjourney. We will add the results of SDXL-1.0 to Tables 1 & 2 and include visualization results in the revised paper.
---
> "As in GlyphDraw [13], it is necessary to report the FID values in performance comparison."
A: To test the visual quality of generated text images, we evaluate FID on our LAION-OCR benchmark using examples that were not used to train the GlyphControl framework.
| Method | FID-10K-LAION-OCR $\downarrow$ |
| :---------------- | :---------: |
| SDXL-1.0 | $44.77$ |
| Stable Diffusion v2.1 | $50.01$ |
| Stable Diffusion v2.0 | $39.23$ |
| DeepFloyd (IF-I-M) | $23.53$ |
| DeepFloyd (IF-I-L) | $30.85$ |
| DeepFloyd (IF-I-XL) | $26.34$ |
| GlyphControl-SDv2.0 (LAION-OCR-100K) | $29.13$ |
| GlyphControl-SDv2.0 (LAION-OCR-1M) | $28.02$ |
Our approach shows performance comparable with the DeepFloyd IF models in terms of FID, demonstrating that the diversity and quality of visual text image generation are preserved by our method, while both SDXL and the original SD models perform much worse than ours.
We will add the FID values to Table 1.
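For reference, FID compares the mean and covariance of feature embeddings (conventionally Inception features) of a generated and a reference image set. A generic numpy/scipy sketch of the formula $\|\mu_a-\mu_b\|^2 + \mathrm{Tr}(C_a + C_b - 2(C_aC_b)^{1/2})$ — not the authors' exact evaluation pipeline, which compares against 10K LAION-OCR images — looks like:

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two (n, d) feature arrays:
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):        # discard tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give an FID of (numerically) zero, and shifting one set's mean inflates the score through the squared-mean-difference term.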
---
Rebuttal Comment 1.1:
Comment: I appreciate the additional experimental results in the rebuttal. Some concerns have been addressed well. However, regarding the evaluation of the visual quality of generated text images, I am surprised to see DeepFloyd (IF-I-M) outperform the proposed GlyphControl-SDv2.0, and Stable Diffusion v2.0 even outperform the recently upgraded version of Stable Diffusion, SDXL-1.0. These results somewhat reveal a weakness of the proposed GlyphControl in the visual quality of generated text images.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Cb1X
Comment: 👉 First, thank you for acknowledging the additional experimental results provided in the rebuttal. We are pleased to note that certain concerns have been effectively addressed.
👉 Second, regarding the evaluation of visual quality in generated text images, we acknowledge that we are also surprised to find that DeepFloyd (IF-I-M) outperforms both our GlyphControl-SDv2.0 and the very recent SDXL-1.0. We assure you that, if necessary, we will **incorporate additional visualization comparison results in the revised version to provide a more comprehensive understanding**. The dominant factor contributing to this outcome may lie in **the utilization of the notably more powerful T5 XXL text encoder (with 4.8B parameters)**. We are enthusiastic about investigating how the combination of GlyphControl and DeepFloyd (IF-I-M) could harmonize their strengths, leading to an enriched system. This direction holds our keen interest for future exploration.
👉 Third, these outcomes indeed illuminate potential constraints within the visual quality realm of our GlyphControl. Your perceptive observations are greatly valued, and we intend to meticulously investigate how to enhance visual quality in accordance with your invaluable suggestions.
👉 Last, it's worth emphasizing that images exhibiting enhanced quality through **DeepFloyd (IF-I-M) appear to compromise OCR accuracy**. We speculate that **a trade-off between elevated OCR precision and visual quality emerges when tackling such a demanding visual text generation task**. Furthermore, it's important to note that **DeepFloyd (IF-I-M) incorporates a significantly larger number of parameters compared to our approach (6.9B vs. our 1.65B)**. To provide a comprehensive reference, we present the complete set of comparison results below:
| Method | \# Overall Params (# Text Encoder) | $\bf{Acc}$ (%) $\uparrow$ | $\bf{\hat{Acc}}$ (%) $\uparrow$ |$\bf{LD}\downarrow$ | CLIP Score $\uparrow$ | FID-10K-LAION-OCR $\downarrow$ |
| :---------------- | :---------------- | :---------: | :--------:| :---------: | :---------: | :---------: |
| DeepFloyd (IF-I-M) | 6.9B (4.8B) | $0.3/0.1$ | $18/11$ | $2.44/3.86$ | $32.8/34.3$ | $\bf{23.53}$ |
| GlyphControl-SDv2.0 (LAION-OCR-100K) | 1.65B (354M) | $30/19$ | $37/24$ | $1.77/2.58$ | $\bf{33.7}/\bf{36.2}$ | $29.13$ |
| GlyphControl-SDv2.0 (LAION-OCR-1M) | 1.65B (354M) | $\bf{40}/\bf{26}$ | $\bf{45}/\bf{30}$ | $\bf{1.59}/\bf{2.47}$ | $33.4/36.0$ | $28.02$ |
Your continued valuable feedback would be greatly appreciated. 🤗🤗🤗
Bypassing spike sorting: Density-based decoding using spike localization from dense multielectrode probes | Accept (spotlight) | Summary: The authors demonstrate a novel approach to decoding behavioral variables from neural activity recordings made using dense silicon probes. Most approaches utilize a sort-then-decode line of attack – first channels are spike sorted, then decoders are trained on unambiguously identified single-units. Here, the authors present a probabilistic model that bypasses explicitly assigning waveforms to units; rather, the authors model spike features using a mixture of Gaussians, which is then leveraged for decoding. The authors go on to show that their method generally outperforms the current state-of-the-art in a variety of experiments spanning recordings in the mouse and in the monkey.
Strengths: Strengths:
1. The authors present an elegant solution to a highly challenging problem in neuroscience – decoding behavior in the presence of noisy and highly overlapping signals.
1. The results are generally compelling. Clearly, the authors' algorithm outperforms Kilosort and multi-unit thresholding in a variety of decoding contexts.
1. The core idea and implementation are fairly straightforward. Given this, I expect this contribution to be generally useful to the community.
Weaknesses: Weaknesses:
Major:
1. One potential issue with "bypassing" spike-sorting is noise and motion artifact. Spike-sorting, if carefully curated, can remove unwanted sources of contamination in extracellular electrophysiology. Here, I am somewhat concerned that this method could pick up and amplify contaminating signals that may contain behaviorally relevant information. For instance, Figure 3a shows that "good" KS units in CA1 and DG contain barely any information about motion energy; however, multi-unit thresholding, all KS, and the author's method find a surprising amount of information. Do the authors think this is reasonable? I am a bit worried since similar gains are not seen for choice decoding, which may not be as easy to decode from motion artifact. This could absolutely be real information uncovered by the model, but this issue is worth addressing head on. Do the authors think that their method could be picking up on any noise or motion artifact in the original recordings?
Minor:
1. Rhetorical point. Line 47 the authors mention prior attempts to "bypass" spike sorting as working with "limited" features such as maximum amplitudes and principal components of the spike waveform. Unless I missed something this language seems overly critical since the authors are only using peak-to-peak amplitudes and spike localization features (x, z).
1. Related to the last point, did the authors consider waveform principal components or aspects of the waveforms other than amplitude? Spike width, e.g., is known to contain information about cell type.
1. Line 97 typo
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Did the authors check to see if their technique picks up on noise or motion artifact in these recordings?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Cpfb for their thorough and thoughtful review. We appreciate that the reviewer recognized the contribution of our method to the neuroscience community.
**Major Weakness:**
We performed motion correction (registration) during the preprocessing step, ensuring that motion artifacts are minimized in our data. Figure 5 in the supplementary materials provides a comparison of spike raster plots with and without registration, confirming the minimal impact of motion artifacts in the preprocessed data. Additionally, Kilosort has its own motion correction step, reducing the influence of motion artifacts on the spike-sorted decoder.
Our decoder exhibits strong performance not only in decoding motion energy but also in decoding wheel speed and other continuous behaviors that are unrelated to motion artifacts. Decoding continuous behavioral variables is generally easier to improve compared to decoding binary choices. We acknowledge the importance of improving choice decoding, and we are actively exploring this direction in our ongoing research.
The subpar performance of "good KS" is due to IBL's stringent quality control criteria for spike sorting. Only those single-units that pass the quality control procedure are categorized as "good" KS. In some brain regions like CA1 and DG, there are often less than 10 units that meet these criteria, explaining the poor decoding performance of "good" KS in these areas. This is precisely why decoding using all Kilosort units (including probable multi-unit activity) yields significantly better results.
**Minor Weakness:**
1. We apologize for being overly critical with our choice of words. Our intention was to emphasize the importance of spike localization features, which are uniquely applicable to high-density probes with high spatial resolution. This novel technique, as described in this paper [(Boussard et al ‘21)](https://openreview.net/pdf?id=ohfi44BZPC4), is of great importance as it extracts valuable information that cannot be achieved solely through waveform features. Our ablation study in Section 8 of the supplementary materials supports this statement, showing that the inclusion of waveform features alongside localization features does not contribute significantly to the decoding tasks when compared to decoding solely based on localization features and peak-to-peak amplitudes.
2. We appreciate the insightful suggestion from the reviewer. We did explore the possibility of incorporating more waveform features to enhance decoding performance. As shown in our ablation study in Section 8 of the supplementary materials, we included waveform principal components in addition to localization features and peak-to-peak amplitudes. However, supplementary Table 3 clearly demonstrates that the addition of waveform features did not result in a significant improvement in the decoding tasks compared to using only localization features and peak-to-peak amplitudes.
3. We thank the reviewer for bringing this typo to our attention, and we will correct it in the future version.
**Question 1:**
We examined our method to ensure that it did not capture motion artifacts in the recordings. We manually inspected the spike raster plots after registration, and we found no evidence of motion artifacts affecting the decoding results. For a more comprehensive discussion on this topic, please refer to our response in the Major Weakness section.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: First, I'd like to thank the authors for the comprehensive rebuttal, I think they have done an excellent job of addressing the reviewers remaining concerns. I have already raised my score to strong accept.
However, one thing that I'm still a little confused about re: motion artifacts: as the authors mention they apply drift correction, which corrects for the apparent physical displacement of units over time. I was not clear about this, but I was getting at a slightly different point. If I collect all spike waveforms that cross a very small threshold, say one standard deviation, many of the waveforms will likely arise from electrical and mechanical artifacts and not neurons. If I built a complex decoder using these waveforms I might be able to accurately decode movement – something correlated with waveforms that arise from artifacts – but not more cognitive variables such as choice.
I tried to glean the inclusion criteria for spike waveforms that are subjected to modeling in the supplement, but it is still unclear what criteria a waveform must pass to be modeled in the first place. It's possible I missed this, but I could not find the exact details in the supplement. How were "candidate spike events" determined? Do the results depend on how this is parameterized at all? If I applied this method to a channel with only electrical noise and movement artifacts what would the result be?
Also after looking over the supplement again it looks like some references are missing (e.g. 33 and 34, could be more).
Thanks.
---
Reply to Comment 1.1.1:
Title: Response to reviewer Cpfb
Comment: We appreciate the kind words from reviewer Cpfb and thank them for their insightful questions. We did investigate the impact of motion artifacts on decoding accuracy, as evidenced in Figure 2 panel (d) of our main paper. This figure shows that the decoding quality of motion energy declines as the amount of motion artifacts in the data increases, which is the opposite of what we would expect if we are just decoding better because of noise artifacts. Additionally, to show that we are not performing better because we are finding additional spikes that a spike sorter would miss, we conducted an experiment. We fitted our model using only spikes detected by Kilosort 2.5, and compared its performance to decoders using spike-sorted outputs and our subtraction-based spike detection on choice and motion energy decoding. The results are summarized in the table below. As shown in the table, our decoder can achieve comparable or better decoding performance than the spike-sorted decoder when modeling the same spikes. This suggests that the gain in motion energy can be attributed to the density-based approach.
| | Density-based (subtraction spikes) | Density-based (KS spikes) | Sorted (KS spikes) |
|---------------|------------------------------------|---------------------------|-------------------------|
| Choice | 0.876 (0.068) | 0.876 (0.079) | 0.887 (0.078) |
| Motion energy | 0.589 (0.111) | 0.579 (0.121) | 0.503 (0.117) |
Regarding the inclusion criteria for spike waveforms, we use subtraction-based spike detection as described in Section 4 of the supplementary materials. Specifically, for each of a series of voltage thresholds in standard units (12, 10, 8, 6, 5), we:
1. Detect threshold crossings;
2. Denoise using a neural network or TPCA;
3. Subtract and store spike events which would decrease the squared norm of the residual by at least 10 squared standard units when subtracted.
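The three steps above can be sketched in a toy one-dimensional form. This is a hedged illustration only: the real pipeline operates on multi-channel waveforms and denoises with a neural network or TPCA, whereas here the stored "event" is just the raw window around each peak, and all names and simplifications are ours.

```python
import numpy as np

def subtraction_detect(trace, thresholds=(12, 10, 8, 6, 5),
                       min_gain=10.0, width=5):
    """Toy 1D version of descending-threshold detection with subtraction.
    `trace` is assumed standardized (unit noise standard deviation).
    Returns detected events as (peak index, threshold, residual-norm gain)
    plus the final residual trace."""
    residual = trace.astype(float).copy()
    events = []
    for thr in thresholds:
        while True:
            peak = int(np.argmax(np.abs(residual)))   # largest remaining crossing
            if abs(residual[peak]) < thr:
                break                                 # nothing left at this threshold
            lo, hi = max(0, peak - width), min(len(residual), peak + width + 1)
            window = residual[lo:hi].copy()           # stand-in for NN/TPCA denoising
            gain = float(np.sum(window ** 2))         # drop in squared residual norm
            if gain < min_gain:
                break                                 # too small to count as a spike
            residual[lo:hi] = 0.0                     # subtract the stored event
            events.append((peak, thr, gain))
    return events, residual
```

Working from the largest threshold down means big, clean spikes are removed first, so smaller spikes buried near them can still be detected on later passes over the residual.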
While the subtraction-based detection procedure is a novel aspect of our method, it remains uncertain whether its parameterization affects decoding. We acknowledge the reviewer's point that the inclusion criteria for spikes may affect decoding accuracy to some extent. However, exploring different inclusion criteria would demand substantial effort, and we intend to perform additional experiments to confirm this aspect in the future.
We apologize for the missing references and will include them in the future version of our paper. | Summary: This paper presents a way to decode behavior from neural recordings, bypassing an explicit spike-sorting step. They model individual spikes as coming from a mixture-of-Gaussians distribution, and the assignment probability to different mixture components is used instead of a 'hard' assignment of each spike to a cell. Robust performance of this approach is shown across a large dataset of Neuropixels recordings.
Strengths: While spike sorting is thought to incorporate biological priors (i.e., the fact that spikes come from cells, which carry information in a neural circuit) and to promote robust decoding, deviating from this prior by developing a "soft" spike-sorting approach seems to have helped in this paper. This is surprising and very impactful.
Well written paper.
Weaknesses: All the results use very high-density Neuropixels recordings. However, it is not clear whether the method is a general replacement for spike sorting across different arrays, etc.
The encoding model depends only on behavior. However, there is a large amount of behavior-independent variability (e.g., attention states) that is not incorporated in the encoding model.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Could the paper describe the method as "soft spike-sorting"? This would help place the method in the right context.
2. Spike sorting is necessary for biological interpretation of neural recordings. Does soft-spike sorting lend such interpretation?
3. If one doesn't do spike sorting, one could use local field potentials, or other bandpass signals to supplement threshold crossings. One should compare with all those features to claim the superiority of the method.
4. How crucial are the preprocessing steps? Were they developed on the same dataset to optimize spike sorting? If so then the results might be confounded.
5. What are the computational costs of this method compared to spike sorting?
6. Using the KS spike assignments, how 'pure' are the gaussians in the MOG? Are single cells split into different gaussians , or do single gaussians correspond to multiple cells? This could shed light on how important is it to deviate from "hard" spike sorting.
7. While spike sorting gives stable decoding across time (in spite of template changes over time), does the current method offer similar stability? Can you learn a decoder from the beginning of a recording and apply it towards the end?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations have been adequately addressed.
Question for Area chair: The paper identified the source of the data as the data from International Brain Lab. It is not clear if the data is open-source. If not, then it clearly violates the double-blind policy of NeurIPS. Please check.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Review C56U for the thoughtful review and useful comments. We are glad that the reviewer felt that the manuscript was clearly written and appreciate that the reviewer recognizes that our work makes an important contribution to the literature.
**Weakness 1:**
Although our decoder is mainly designed for high-density (HD) probes, we acknowledge the reviewer’s point about adapting our decoder to more general probe geometries. The global response Table 1 demonstrates the effectiveness of our decoder across probe geometries beyond Neuropixel probes. Specifically, on both multiple electrodes and HD probes data, our decoder outperforms the previous clusterless decoder.
**Weakness 2:**
While our model is primarily conditioned on the behavior of interest, we have the flexibility to turn off this conditioning and utilize a vanilla GMM model to capture behavior-independent variability. This approach comes with a slight trade-off in decoding accuracy. For a detailed comparison between our models with and without dependencies on behavior correlates, please refer to Section 8 (ablation study) in the supplementary materials. We agree that including additional latent variables in the model (e.g. attention states) would be an exciting direction for future work.
**Questions:**
1. Indeed, our method uses a GMM to estimate the probability of a spike belonging to a mixture component, thereby quantifying the uncertainty in spike assignment. This approach allows the model to make errors and retain valuable information, as opposed to "hard" spike sorting methods that discard such uncertainty.
2. While our method may not offer a single-unit interpretation similar to spike sorting, it brings its own advantages. Traditional spike sorting can be stringent and might discard valuable information by eliminating many cells that do not pass the quality control criteria. In contrast, our approach leverages all unsorted spikes within each brain region for decoding behavioral correlates. As a result, our method retains all available information within that region, providing abundant interpretations for specific brain regions without discarding any valuable data.
3. We agree with the reviewer's suggestion that the inclusion of LFP and other band-passed signals would be very informative. However, incorporating these data modalities would require extensive efforts beyond the scope of our current work. Our focus in this study is to understand what spikes can inform us about the behavior of interest, rather than optimizing decoding accuracies using all available data modalities. Nonetheless, we acknowledge the potential benefits of incorporating additional data modalities and consider it interesting future work.
4. No, the preprocessing steps were not developed to optimize spike sorting; they are distinct and separate processes. For instance, destriping is specifically employed for data from IBL due to prominent stripes in the recording data, and it is a preprocessing step developed by IBL. We rely on standard preprocessing from IBL to address quality concerns; see details in their white paper [(IBL et al ‘22)](https://figshare.com/articles/online_resource/Spike_sorting_pipeline_for_the_International_Brain_Laboratory/19705522). Additionally, registration, which removes probe motion, is crucial for data quality, and we utilized a separate method proposed in this paper [(Windolf et al ‘22)](https://www.biorxiv.org/content/10.1101/2022.12.04.519043v1). Spike localization is another essential step in our pipeline, and it is introduced separately in this work [(Boussard et al ‘21)](https://openreview.net/pdf?id=ohfi44BZPC4) as well.
5. In the global response, we provide a quantification of the computational cost for our method. Regarding Kilosort, we didn't run it ourselves and profile its computational cost, as the spike-sorted output is readily available from IBL's public database. It is worth noting that Kilosort is a real-time sorting algorithm.
6. Please refer to the section about Figure 3 in the global response for details.
7. While we can train a decoder at the start of a recording and apply it to the end, there could be a slight impact on decoding accuracy. We use spike features with registered locations that remain constant over time to mitigate probe drift or motion artifacts. The GMM components also remain unchanged over time. However, the mixing proportions of the GMM model vary over time. We conducted experiments by training the model on initial recording segments and then decoding subsequent segments for continuous behaviors on three datasets. The table below shows the mean correlation between true and decoded behaviors. In the context of this experiment, *Time shuffled* indicates training the GMM on segments from various time points within the recordings, while *Time ordered* indicates training on earlier segments to decode later ones.
| | Dataset 1 | Dataset 2 | Dataset 3 |
|----------|----------|----------|----------|
| Time shuffled | 0.760 (0.016) | 0.703 (0.015) | 0.802 (0.012) |
| Time ordered | 0.733 (0.067) | 0.687 (0.008) | 0.792 (0.025) |
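The soft assignment described in answer 1 above — a GMM gives each spike a posterior probability of belonging to each mixture component instead of a hard label, and those probabilities can be accumulated per time bin into decoder features — can be illustrated with a generic numpy sketch. This is not the authors' model (which additionally conditions the mixture on behavioral correlates and decodes with a GLM); the function and variable names are hypothetical.

```python
import numpy as np

def soft_assignment_features(spike_feats, spike_bins, means, covs, weights, n_bins):
    """Posterior responsibility of each Gaussian component for each spike,
    summed within time bins -> (n_bins, n_components) "soft spike counts"."""
    n, d = spike_feats.shape
    k = len(weights)
    log_p = np.empty((n, k))
    for c in range(k):                       # component log-density per spike
        diff = spike_feats - means[c]
        inv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])
        maha = np.einsum("nd,de,ne->n", diff, inv, diff)
        log_p[:, c] = np.log(weights[c]) - 0.5 * (maha + logdet + d * np.log(2 * np.pi))
    log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exponentiating
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)     # soft spike-to-component assignments
    feats = np.zeros((n_bins, k))
    np.add.at(feats, spike_bins, resp)          # accumulate per-bin soft counts
    return feats
```

With well-separated components this reduces to hard sorting (responsibilities near 0 or 1), while for ambiguous spikes the uncertainty is split across components rather than discarded.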
**Question for Area Chair:**
We use open-source IBL datasets in our paper. The publicly available datasets can be accessed [here](https://int-brain-lab.github.io/iblenv/public_docs/public_introduction.html). | Summary: This paper proposes to decode animal behavior with a spike-sorting-free method well-suited for high-density recordings, modeling the distribution of extracted spike features with a mixture of Gaussians (MoG) that captures the uncertainty of spike assignments, and decoding with a generalized linear model (GLM). The authors benchmarked results on Neuropixels recordings from different brain regions, different animals, and different probe geometries, and showed that the method outperformed decoders based on multi-unit threshold crossings and on single-units sorted by Kilosort.
Strengths: 1. The paper is clearly presented. The writing is easy to follow, and most of the methods and results are described clearly.
2. Benchmarking was conducted on multiple brain regions, animals, and different versions of Neuropixels probes, showing a significant improvement in decoding accuracy compared to baseline methods.
3. Given that high-density recordings are becoming more widely used in neuroscience, this paper has impact by providing a better decoding tool for studies conducted with such high-density probes.
Weaknesses: 1. While I recognize that the paper provides extensive analysis results, the novelty of the proposed density-based decoding on unsorted spike waveforms is limited from my perspective. First, the proposed algorithm using an MoG and a GLM for decoding has been widely applied to neural ensembles and to other types of neural signals like LFP or ECoG. For example, this [paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3972894/) also leveraged an MoG for unsorted spike decoding, although within a Bayesian decoding framework. This [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9585446) proposed Gaussian mixture model (GMM)-assisted PLS (GMMPLS) for decoding in BMI. Second, other methods have also been proposed for decoding behavior from clusterless neural data. For example, this [paper](https://www.biorxiv.org/content/10.1101/760470v1.full.pdf) proposed a clusterless HMM model for decoding. Another [paper](https://www.biorxiv.org/content/biorxiv/early/2021/08/28/2021.08.26.457795.full.pdf) proposed Gaussian-process multi-class regression decoding on neural population data. The concept of modeling spike distributions with an MoG has been applied before, and GLM decoding has also been commonly used for neural ensembles. I recognize that some of these papers do not specifically target unsorted spikes, and acknowledge that this paper brings good impact for the neuroscience community, but I am not sure if NeurIPS is the proper target given the novelty of the proposed algorithm. To meet the bar for NeurIPS, I would like to see more original algorithmic improvements.
2. It is unclear whether the baseline decoders are fair baselines. First, Kilosort depends on hyperparameters that need to be tuned properly for each dataset. From some of the example decoded curves, I am not sure whether those baseline decoders were calibrated properly to the data. In practice, we rarely work with decoders of such poor accuracy, which leads me to suspect the baseline decoders were not trained/tuned properly for comparison. Second, there are more state-of-the-art decoders that have been used, including previous clusterless decoders on unsorted spiking data (see this [paper](https://journals.physiology.org/doi/full/10.1152/jn.01046.2012)). Without a benchmark comparison to these decoders, it is hard to be convinced that the proposed algorithm is superior to those other decoders.
3. It is unclear whether the proposed method only works better for high-density recordings from Neuropixels probes. Although the authors provided two probe versions with different geometries, it is unclear whether the method brings similar benefits for other types of neural recordings (e.g., multiple tetrode arrays). Especially if the paper claims that this method is better than previous clusterless decoders tested on tetrode recordings, it is important for the NeurIPS audience to know whether the proposed method works more generally for decoding unsorted spikes, or is somehow better tuned to the distribution of Neuropixels recordings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I'm not following the authors' argument on why previous Bayesian decoding methods on unsorted spikes do not apply to HD probes. Specifically, the argument that "the aforementioned approaches are not suitable for HD probes as they rely on sampling-based Bayesian inference and are specifically designed for tetrodes". It's true that sampling-based Bayesian inference makes certain assumptions about the point process, but I don't quite see why those are specific to tetrodes. The original Bayesian inference paper used waveform features such as the maximum amplitudes and principal components for decoding, but that does not mean waveform features cannot be represented in other formats. Can the authors elaborate on this statement?
2. What are the baseline decoders used with sorted neurons after sorting by Kilosort?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Fmkq for their thorough and thoughtful review. We appreciate their valuable feedback and would like to address their main concerns.
**Weakness 1:**
Please refer to the global response for the novelty of our method.
**Weakness 2:**
It is important to clarify that the baseline decoders are well-tuned and calibrated properly for each dataset, and we used spike-sorted data tuned by IBL, as described in IBL's spike sorting white paper [(IBL et al ‘22)](https://figshare.com/articles/online_resource/Spike_sorting_pipeline_for_the_International_Brain_Laboratory/19705522). The relatively low decoding accuracy is due to the inherent difficulty of decoding the IBL behavior data, especially considering that some brain regions contain limited information about the behaviors. In fact, in IBL's decoding paper titled "*A Brain-Wide Map of Neural Activity during Complex Behaviour*" [(IBL et al ‘23)](https://www.biorxiv.org/content/10.1101/2023.07.04.547681v2), the achieved decoding accuracies are comparable to ours. For detailed numerical values, please consult Figures S6 and S15 in their supplementary materials. Additionally, in the global response Table 1, we showcase high decoding accuracies for multiple-tetrode data, which involves a simpler decoding task than IBL's behavior tasks.
We acknowledge the importance of conducting a benchmark comparison of clusterless decoders and appreciate the reviewer for bringing up previous clusterless decoding papers. It is worth noting that the Gaussian-process multi-class regression decoding paper [(Greenidge et al ‘21)](https://www.biorxiv.org/content/biorxiv/early/2021/08/28/2021.08.26.457795.full.pdf) and the GMM-PLS paper [(Foodeh et al ‘21)](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9585446) are not specifically designed for clusterless decoding, hence we did not consider comparing our method to those. Regarding a comparison with the previous clusterless decoders, please refer to the global response for details.
**Weakness 3:**
We acknowledge the reviewer’s point about adapting our decoder to more general probe geometries. The global response Table 1 demonstrates the effectiveness of our decoder across probe geometries beyond Neuropixel probes. Specifically, on both multiple tetrodes and HD probe data, our decoder outperforms the clusterless point process decoder [(Denovellis et al ‘21)](https://elifesciences.org/articles/64505). It is important to note that our method is not exclusively tailored for Neuropixel recordings, and can be applied more generally to probes with different geometries.
**Question 1:**
The key point to emphasize is the high data volume associated with HD probes, which generate a significantly larger amount of data than tetrodes. As a result, previous clusterless decoders based on sampling-based inference tend to be slow. This computational cost is evident in the clusterless HMM [(Ackermann et al ‘19)](https://www.biorxiv.org/content/10.1101/760470v1.full.pdf) method, as mentioned by the authors in their paper. They acknowledged the challenges related to high computational cost, with the model taking about 6 hours to fit a relatively small dataset. Although the clusterless point process model [(Denovellis et al ‘21)](https://elifesciences.org/articles/64505) has a lower computational cost, our decoder is faster and outperforms it in high-density settings, as indicated in global response Table 1 and Figure 2. The limitation of their approach lies in its inability to effectively handle the highly overlapping signals of HD probes, where multiple electrodes are closely packed together. Moreover, our density-based decoder is more flexible because it avoids making restrictive assumptions about the underlying system dynamics.
The original method could potentially use features of other formats, but it's important to note that the localization features, which are HD-specific, play a crucial role. HD probes offer exceptional spatial resolution, and utilizing spike localization features is essential for successful decoding. Relying solely on waveform features on each electrode becomes challenging in the context of HD probes without leveraging the spatial advantages offered by localization features. Our ablation study in Section 8 of the supplementary materials supports this statement, showing that the inclusion of waveform features alongside localization features does not contribute significantly to the decoding tasks when compared to decoding solely based on localization features and peak-to-peak amplitudes.
**Question 2:**
We use ridge regression for decoding continuous behaviors and L2-penalized logistic regression for decoding binary behaviors. | Summary: The paper develops a decoding method directly on ‘spike features’, without going through spike sorting. The authors model spike assignment uncertainty using a mixture of Gaussians model, and then perform variational inference to model the relationship between the spike features and behavior.
Strengths: The premise and models in this paper are sound, if not particularly novel. The inference methods are not particularly novel either. However, it is clearly shown that the models developed here are empirically quite powerful. The evaluations performed are very extensive, showing very clearly that their methods outperform standard spike sorting and decoding methods on large datasets in multiple species.
Weaknesses: It is unclear why exactly previous clusterless methods fail on this kind of data. The authors may need to show this concretely to be fair to previous approaches that do not use spike sorting.
Simulation data may help to get intuition on the methodology, and to show validation of the methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It may be a good idea to apply previously developed clusterless methods on this kind of data to show how / why they fail.
Simulated data will be helpful to the reader to better understand the limitations of the modeling approach.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are adequately discussed. Potential negative societal impact is adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **General:**
We thank Reviewer tDhN for carefully reviewing our manuscript, and appreciate the opportunity to address the concerns raised. We want to emphasize the novelty of our method, which involves conditioning of the data generating process on external variables, allowing for improved prediction. Our approach is not limited to GMM modeling and GLM decoding; instead, it can be extended to non-Gaussian mixture models and nonlinear models to tackle diverse modeling challenges. Furthermore, compared to previous clusterless decoders based on state-space models, our density-based decoder avoids making explicit assumptions about underlying system dynamics, and is thus more flexible at capturing complex relationships in the data. Our method is also more scalable than previous approaches, which is desirable for large data volumes generated by HD probes.
**Weakness / Question 1:**
We have compared our decoder to previous clusterless decoders on both multiple electrodes and HD probes to demonstrate that our method is effective in both scenarios. The global response Table 1 provides a benchmark of clusterless decoders, reaffirming the superiority of our density-based decoder when applied to HD probes.
**Weakness / Question 2:**
We appreciate the reviewer's feedback on conducting a simulation study for model validation. The simulation results shown in global response Figure 1 confirm that our encoding model effectively captures the association between the simulated spike features and the simulated behavior of interest. With such learned associations, the decoding model is able to accurately decode the behaviors. To highlight the significance of these learned associations, we have compared our GMM conditioned on the decoding variable with a vanilla GMM that lacks such informative associations. The details of this comparison can be found in Table 2 in Section 8 (ablation study) of the supplementary materials.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I want to thank the authors for their detailed and comprehensive response. My questions have been adequately answered and the new results are reasonable. Although a little low on novelty, the paper fills a valuable gap in a fast-expanding field. I am accordingly raising my score. | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback on our manuscript. We were encouraged that the reviewers recognized our paper's empirical robustness and its solution to the challenging task of decoding behaviors from highly overlapping neural signals. To address the concerns, we conducted the following experiments:
- Benchmark comparison to other clusterless decoders.
- Simulation for model validation.
- Computational time cost comparison.
- Correspondence between spike sorting and GMM assignment.
One important point to emphasize is the novelty of our method, which involves conditioning the generative model for the spike feature distribution on the behavior of interest, making the inference problem non-trivial. See supplementary materials for detailed derivations. **The probabilistic model and inference method presented have broader relevance within the NeurIPS community. The conditioning of the data generating process on external variables allows for improved prediction in various scenarios. Our approach is not limited to GMM and GLM; instead, it can accommodate non-Gaussian mixture models and nonlinear models.** While our paper serves as a proof-of-concept, the proposed method can be extended to tackle diverse modeling challenges.
**Our novelty also lies in the first demonstration of a clusterless decoding method designed for high-density (HD) probes (which are now used in hundreds of labs), and our use of localization features which are only meaningful for HD probes.** By combining these factors, we overcome previous challenges and improve decoding performance. Furthermore, compared to previous decoders based on state-space models, our density-based decoder offers increased flexibility and scalability. Our method avoids explicit assumptions about underlying system dynamics, making it more flexible at capturing complex relationships in the data. The increased scalability is critical for large data volumes generated by HD probes.
We appreciate the suggestion to benchmark clusterless decoders beyond HD probes. However, limited available code in clusterless decoding papers makes comparisons challenging within a short timeframe. While the suggested Bayesian decoding paper [(Chen et al ‘12)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3972894/) lacks functionality for the main task (due to incomplete code), we re-implemented the clusterless HMM model [(Ackermann et al ‘19)](https://www.biorxiv.org/content/10.1101/760470v1.full.pdf), but the training time was prohibitive. Despite these limitations, we made efforts to compare our method to clusterless point process decoders [(Denovellis et al ‘21)](https://elifesciences.org/articles/64505) on both tetrodes and HD probes. This clusterless point process decoder is similar to the Bayesian decoding paper, as both use a marked point process to link spike features and behaviors. We used the same spike features and calibrated each decoder to the data. For multiple-tetrode data, both decoders used spike amplitudes from 4 channels of 5 tetrodes. Similarly, for HD probes, both decoders relied on spike localization features and peak-to-peak amplitudes. **Our method's advantage over the clusterless point process decoder is evident in Table 1, owing to the increased flexibility of our decoder compared to earlier clusterless state-space models.**
We conducted simulations to illustrate the principles of our method. The simulation aimed to show that our encoding model can learn the relationship between spike features and behaviors. We performed two tasks, decoding a binary variable, $y_k$, simulated from a Bernoulli distribution, and decoding a continuous variable, $y_k$, simulated from a Gaussian process. To mimic the data-generating process, we selected Gaussian components with "templates" extracted from a real dataset. The encoding model parameters, $b$ and $\beta$, were also taken from learned parameters in the same dataset. Given $b, \beta$, and $y_k$, we simulated the "firing rates" $\lambda$ for each Gaussian component in the mixture, as described in the Method section of our paper. Next, we generated spike features based on these simulated ''firing rates'', and applied the encoding model to infer the behavior-dependent $\lambda$. Figure 1 displays the learned $\lambda$ for each component $c$, time $t$, and trial $k$. **The learned "firing rates" $\lambda$ closely resembled the simulated ones, indicating the model's ability to recover the primary associations between spike features and behaviors. With such associations, the decoding model can decode behaviors.**
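As a rough illustration of this simulation setup, the sketch below generates a behavior-dependent "firing rate" $\lambda$ per mixture component and recovers the encoding parameters $b, \beta$. The log-linear link, parameter ranges, and Poisson counts here are assumptions made for illustration, not necessarily the paper's exact encoding model.

```python
import numpy as np

rng = np.random.default_rng(1)

C, K = 5, 200                          # Gaussian components, trials
b = rng.uniform(0.0, 0.5, size=C)      # illustrative encoding intercepts
beta = rng.uniform(0.2, 0.8, size=C)   # illustrative behavior coefficients
y = rng.normal(size=K)                 # continuous behavior per trial

# Behavior-dependent "firing rate" of each component (log-linear link assumed).
lam = np.exp(b[:, None] + beta[:, None] * y[None, :])  # shape (C, K)

# Spike counts per component and trial.
n = rng.poisson(lam)

def fit_poisson_glm(counts, y, iters=50):
    """Recover (b, beta) for one component with a Poisson GLM fit by
    Newton's method (a single regressor, so the Hessian is 2x2)."""
    theta = np.zeros(2)                        # [b_hat, beta_hat]
    Xd = np.column_stack([np.ones_like(y), y])
    for _ in range(iters):
        mu = np.exp(Xd @ theta)
        grad = Xd.T @ (counts - mu)            # score of the log-likelihood
        hess = Xd.T @ (Xd * mu[:, None])       # Fisher information
        theta = theta + np.linalg.solve(hess, grad)
    return theta

est = np.array([fit_poisson_glm(n[c], y) for c in range(C)])
```

With enough trials the estimated parameters track the simulated ones, mirroring the rebuttal's observation that the learned $\lambda$ closely resembles the simulated rates.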
We appreciate the question about the computational cost of our method. For spike sorting, we did not personally run Kilosort (KS), given the accessible spike-sorted output from IBL's public database. Notably, Kilosort operates in close to real-time, implying that it takes 1000 seconds to sort a 1000-second recording. In Figure 2, we provided a computational time comparison relative to real-time. **Our decoding step operates at a sub-real-time pace (0.3 times real-time), which is 4 times faster than the point process decoder (1.2 times real-time). The total time after preprocessing for our method is close to real-time.** While we didn't measure the time cost of the clusterless HMM due to the extensive model fitting time, the authors acknowledged that their model takes around 6 hours to fit on a small dataset in the paper.
We appreciate the inquiry about the insights our method provides into "hard" spike sorting. In response, we computed an agreement matrix (Figure 3) between "hard" KS assignments and "soft" GMM assignments. We calculated the conditional probability of spikes belonging to each GMM component, given that these spikes belong to the corresponding KS unit. Notably, KS units with large amplitudes are less likely to be split into multiple Gaussian components. In conclusion, **Figure 3 shows a reasonable correspondence between the Gaussian components and the spike-sorted units.**
Pdf: /pdf/67fd605911bb4d8c825daf70aec64c78ae3dff7a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models | Accept (poster) | Summary: The authors create SheetCopilot, which takes a natural language task as input and then controls a spreadsheet to perform the task. To provide an interface between natural language and the spreadsheet, the authors use a set of atomic actions that abstract spreadsheet functions. SheetCopilot uses a state machine to create spreadsheet actions, modify them, and apply them to the spreadsheet.
The authors also present a dataset to evaluate their tool. SheetCopilot outperforms baseline approaches.
Strengths: - Authors propose an approach to interface LLMs with spreadsheets potentially automating a large number of repetitive tasks
- Intermediate language of atomic actions provides a good output structure for LLM
- State machine based processing of tasks improves the results
- Atomic action name substitution with synonyms that are far away from the official names in an embedding space is an interesting approach (but see weaknesses)
- Test dataset is useful for evaluating SheetCopilot and similar systems
- Ablation studies show clearly the contributions of different parts of the system
- Positive results compared to VBA generation baseline
- Paper is well written
Weaknesses: - Paper contributions are moderate:
- Intermediate language similar to atomic actions is a known approach in software engineering, similar to macro or scripting languages.
- Atomic actions have to be implemented in spreadsheet software which requires more effort to adopt this approach compared to existing interfaces to spreadsheets. (This is not a major weakness though, especially if current interfaces to spreadsheets are not well suited or universal).
- Atomic action name substitution with synonyms seems like a hacky approach, a bit similar to security through obscurity. It may have negative consequences (e.g., making it harder for the LLM to connect tasks and actions), and it is not clear if this should be used or relied upon in general.
I have read the author’s rebuttal.
The rebuttal partially addressed my concerns about atomic actions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why do certain tasks fail? Are the failures in certain categories due to the tasks, some limitations of LLMs, selection of atomic actions, spreadsheet functionality, too simple state machine? (Also see limitations comments).
I have read the author’s rebuttal.
The rebuttal addressed my concerns about understanding of why certain tasks fail by providing failure analysis.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: It would be good to have a more in-depth understanding why certain tasks fail and where are the boundaries of LLM spreadsheet interactions. It is good that authors classified tasks into six categories and observed different failure rates across them. However, it would be good to go further and understand the failures in more depth (also see questions).
I have read the author’s rebuttal.
The rebuttal addressed my concerns about understanding of why certain tasks fail by providing failure analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for a thorough review that will help us improve the work. Please see below for answers to your questions.
**W1: Paper contributions are moderate.**
We believe our work contributes to the field of tool-augmented LLM, for three primary reasons:
- We propose a novel framework enabling model-software interaction. The closed-loop control (Section 4.3) boosts success from 67.8% to 87.3% (row 2 vs. row 7 in Table 2). Our framework also substantially outperforms a VBA baseline (Table 1). This facilitates applying language models to software automation.
- Our dataset contains a diverse range of daily spreadsheet tasks (e.g., formulas, charts, and pivot table tasks). We believe this is a meaningful contribution because our work contributes a procedure for collecting software task datasets as well as a new testbed for comparing the planning capabilities of language models. To our knowledge, no existing dataset offers equally broad spreadsheet tasks, making ours useful for research on LLM-based agents.
- We thoroughly compare leading LLMs (Table 1 in the main paper and failure analysis in the Supplementary), elucidating the strengths and weaknesses of these well-known models from an aspect of task planning.
In summary, the proposed SheetCopilot, dataset, and experiments provide novel contributions. We would appreciate any suggestions to further communicate the value of our research to the readers.
**W2: Intermediate language similar to atomic actions is a known approach**
We agree that abstraction is standard in software engineering. However, to the best of our knowledge, no prior work has designed a unified abstraction for diverse spreadsheet platforms. Our atomic actions pioneer such an interface between LLMs and spreadsheets, analogous to an intermediate language abstracting assembly languages.
Our atomic actions offer two key advantages:
- They model spreadsheet functionality as virtual APIs (Section 4.2), enabling cross-platform control, as evidenced by Excel and Google Sheets compatibility (Fig. (b) in the response PDF). In contrast, macro languages like VBA typically target specific platforms.
- They can be used by LLMs to generate human- and machine-readable solutions (Figures F and G in the supplementary).
In summary, our atomic actions constitute a novel, unified spreadsheet abstraction.
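As an illustration of what such a platform-agnostic abstraction could look like, the sketch below separates an atomic action from its backend implementation. All names and signatures here are hypothetical and invented for illustration; they are not SheetCopilot's actual API.

```python
from abc import ABC, abstractmethod

class SpreadsheetBackend(ABC):
    """Platform-specific implementation target for atomic actions
    (a real system would provide Excel and Google Sheets backends)."""

    @abstractmethod
    def write(self, cell: str, value): ...

    @abstractmethod
    def read(self, cell: str): ...

class InMemoryBackend(SpreadsheetBackend):
    """Stand-in backend holding cells in a dict, used here for testing."""

    def __init__(self):
        self.cells = {}

    def write(self, cell, value):
        self.cells[cell] = value

    def read(self, cell):
        return self.cells.get(cell)

# An "atomic action" is then a platform-agnostic function over the backend,
# so an LLM-generated plan calling Write(...) runs unchanged on any backend.
def Write(backend, cell, value):
    backend.write(cell, value)

backend = InMemoryBackend()
Write(backend, "A1", 42)
```

The same dispatch pattern is what lets a single LLM-generated solution target multiple spreadsheet platforms.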
**W3: Atomic actions have to be implemented in spreadsheet software**
It is true that current spreadsheet interfaces lack universality. This forces LLMs to relearn new rules when controlling new software, complicating prompting and debugging.
To address this, we propose software-agnostic atomic actions (Section 4.2) as a high-level abstraction. This enables cross-platform compatibility, as evidenced by SheetCopilot for Excel and GoogleSheets (Fig. (b)).
In summary, the platform-agnostic nature of our atomic actions is a strength rather than a weakness.
**W4: Atomic action name substitution seems hacky**
We would like to clarify that atomic action name substitution is a component of our method, instead of a standalone approach.
The goal is mitigating a well-known LLM issue - hallucination - where models generate confident responses that are not justified by their training data or that contradict input prompts [1][2]. For example, we observed GPT-3.5 hallucinating a non-existent argument such as “criteria=” and using SetFormat in a way similar to SetConditionalFormat.
The cause is that GPT-3.5 confuses the atomic action knowledge with its internal Excel knowledge. GPT-3.5, like other LLMs, generates responses based on patterns it learned during training [3]. It has no access to real-time data (e.g., the usage of atomic actions) and cannot absorb new concepts after its training is complete.
Therefore, if language patterns of the atomic actions are similar to those it has learned about Excel operations, GPT-3.5 will probably use the actions in a way like Excel operations. Figure I in Supplementary reveals such hallucination causes failures.
To tackle this, we substitute official names with dissimilar synonyms, enabling LLMs to strictly follow the usage of the atomic actions (Section 4.4). This is supported by Table 3 showing substitution improves both Pass@1 and efficiency.
In conclusion, atomic action name substitution is part of our method, playing a vital role in alleviating hallucination.
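The substitution idea can be sketched as follows. The toy embeddings and candidate names below are invented for illustration; a real system would use a learned embedding model over the actual action vocabulary.

```python
import numpy as np

# Toy embeddings standing in for a real word/sentence embedding model.
embeddings = {
    "SetFormat":            np.array([0.9, 0.1, 0.0]),
    "SetConditionalFormat": np.array([0.8, 0.2, 0.1]),
    "StyleCells":           np.array([0.1, 0.9, 0.3]),
    "ApplyRuleStyling":     np.array([0.0, 0.3, 0.95]),
}

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pick_substitute(official, candidates):
    """Pick the candidate synonym farthest from the official name in
    embedding space, to reduce confusion with the model's prior
    knowledge of similarly named spreadsheet operations."""
    return max(candidates,
               key=lambda c: cosine_distance(embeddings[official],
                                             embeddings[c]))

choice = pick_substitute("SetFormat", ["StyleCells", "ApplyRuleStyling"])
```

Here the substitute whose embedding is least similar to the official name is selected, matching the intent of choosing synonyms "far away" in embedding space.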
**Q1: Why do certain tasks fail & Limitations: need understanding of why certain tasks fail**
We appreciate your suggestion to analyze task failures.
To clarify, we conducted a comprehensive failure analysis (Section F, Supplementary). We found that:
- GPT-3.5 often applies incorrect formulas. For instance, it failed to use an approximate match in VLOOKUP when querying a price table.
- GPT-4 occasionally generates incomplete solutions due to token limit. For instance, GPT-4 output correct mid-steps but the tokens had run out.
- Claude tends to disobey requirements, like summarizing quantity rather than the requested revenue for pivot tables.
- All three LLMs struggle with absolute references when auto-filling.
Figure H compares failure patterns, showing that GPT-3.5 and GPT-4 share similar patterns while Claude exhibits a different one.
Figure I delineates GPT-3.5's failure proportions over the full dataset. Failures stemming from the model itself (wrong formula/range, hallucinating, etc) occupy 73.8% in total. The failures caused by the token limit of the OpenAI API (Incomplete solutions) occupy 18.9%. The failures caused by the atomic actions (API inability) make up 2.5%. This breakdown reveals incorrect GPT-3.5 predictions as the primary failure source.
We hope the detailed analysis in the supplementary can address your concern.
**Ref:**
[1]Survey of hallucination in natural language generation, 2023
[2]Shaking the foundations: delusions in sequence models for interaction and control, 2021
[3]Training language models to follow instructions with human feedback, 2022
---
Rebuttal Comment 1.1:
Title: Thanks for the comments
Comment: I appreciate the comments and additional discussion provided by the authors. | Summary: This paper introduces an LLM-based system that translates high-level human language into spreadsheet manipulation tasks. The work is in the area of tool-augmented LLM systems, where LLMs are used to create a chain of actions representing a complex manipulation task in a spreadsheet system. The authors create a curated dataset consisting of a list of real-world spreadsheet tasks. They also build an assessment system. They study three popular LLMs - GPT-4, GPT-3.5 Turbo, and Claude. They surprisingly find GPT-3.5 Turbo to be the best performing, and through ablation studies find that closed-loop action control improves functional correctness, that bringing in external documents improves correctness, and, surprisingly, that using synonyms far from the official action names also improves results.
Strengths: 1. The authors take on the challenging problem of automating complex modification tasks in spreadsheets, rather than formula repair or generation. If it works, it can be of great benefit to spreadsheet users.
2. The authors create a curated dataset of tasks in a systematic way.
3. They also create a systematic evaluation system for evaluating the performance of the different popular LLMs in the context of the spreadsheet software system.
Weaknesses: 1. I would have liked to see a better discussion of the task collection, the Q&A pair collection, how complex these tasks are, their categorization, why they are multi-categorized, what those "practical real-world spreadsheets" look like, etc.
2. The techniques using LLMs are fairly straightforward, with minor modifications like using synonyms (it is not clear why this works), adding documents, etc. What is specific to the worksheet manipulation scenario that makes LLMs easy, hard, or different to work with? The paper lacks an appreciation of that.
3. Further investigation into why GPT-3.5 Turbo is better than even GPT-4 would be useful, as it is counterintuitive.
4. Instead of just exec@1 and pass@1, softer metrics would be useful in understanding where these LLMs make mistakes and where they are good.
5. Instead of using a tag cloud for dataset evaluation, more detailed metrics would have been more insightful.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The reference to Table 1 seems off. The citing text says one thing and the table shows something else.
2. Is the dataset for creating tasks and benchmarking available to the public for comparison?
3. Did you try smaller open-source LLMs to see if they can be useful, or how far you can get with them?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. There is not much study on the nature of the curated dataset. But the paper is just at the beginning of something interesting. While it is unlikely that one will see discrimination, bias, or fairness issues with spreadsheet data, it would be good to address them.
Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and insightful comments. Below, we address the concerns:
**W1: a better discussion on the task collections, the Q&A pair collection, how complex these tasks are, their categorization, why they are multi-categorized, what do practical real-world spreadsheets look like**
Section A.1 of the Supplementary Material elaborates on task collection methodology.
We collected ~27k spreadsheet Q&A pairs from SuperUser, filtered irrelevant pairs by keywords, further cleaned them by prompting ChatGPT to label invalid pairs, and selected representatives through clustering, resulting in a clean Q&A pair dataset.
Figure A of the Supplementary Materials visualizes task complexity, with instruction lengths ranging from 20 to 530 words. Our dataset encompasses simple and long-horizon tasks requiring up to 9 actions across sheets. The tasks are stored as a .xlsx file in the supplementary zip file for ease of viewing.
Task categorization occurred concurrently with LLM-based filtering (see Section A.1). Specifically, we prompted ChatGPT to classify each Q&A pair into 7 categories (i.e., Entry and Manipulation, Management, Formatting, Charts, Pivot Tables, Formulas, and Invalid). Figure (c) in the response PDF presents an example of categorization.
The tasks were multi-categorized since most of them involved spreadsheet operations belonging to multiple categories. For example, a data analysis task probably requires predicting formulas (Formulas), Auto-filling by dragging down (Entry and Manipulation) and plotting charts (Charts). If singular labeling is used, ChatGPT is prone to classification ambiguity and it is harder to evaluate the diversity and complexity of our dataset.
The "practical real-world spreadsheets" were also included in the supplementary zip file. Two brief examples: (i) SummerSales: the spreadsheet records the sales of a company. (ii) GDPBreakdown: the spreadsheet catalogs economic indicators over time.
**W2: The techniques using LLMs are fairly straightforward with minor modifications.**
We believe the state machine-based planning pipeline of SheetCopilot is a novel mechanism for solving complex spreadsheet tasks, because
- Plain text chatbots cannot directly execute spreadsheet actions, unlike SheetCopilot.
- The closed-loop control architecture (Observing Stage, line 205) enables LLMs to accomplish long-horizon, multi-sheet tasks.
- Our proposed revising stage (line 214) leverages software feedback to recover from errors, substantially boosting execution success from 67.8% to 92.8% (row 2 vs. row 4 in Table 2). The experimental result indicates that our design is original and powerful.
Action dependency and the complexity of combining spreadsheet operations make it hard for LLMs to work with spreadsheet software. Lines 196-200 in Section 4.3 explain that LLMs must comprehend how the sheet state evolves after each step to succeed. For example, LLMs struggle to manipulate filtered data without understanding the resulting layout. SheetCopilot's specialized design overcomes this obstacle.
**W3: Further investigation into why GPT-3.5Turbo is better**
Thanks for the suggestion. Please refer to Q2 in the global response at the top.
**W4: softer metrics would be useful in understanding where these LLMs make mistakes and where they are good.**
We appreciate your perspective on the importance of softer metrics in understanding the performance of the LLMs.
Our study does employ multifaceted soft metrics. These include success rates across task categories and fine-grained failure analysis.
Figure 3 decomposes Exec@1 and Pass@1 across six categories, revealing tasks for which each LLM excels. GPT-3.5-Turbo, GPT-4, and Claude achieve peak Pass@1 on Management, Manipulation, and Manipulation tasks, respectively.
Moreover, Section F of the Supplementary presents an in-depth failure analysis by tallying the number of eight predefined failure types.
Statistics reveal GPT-3.5-Turbo often utilizes incorrect formulas, GPT-4 frequently generates incomplete solutions due to token limits, and Claude tends to misapply formulas and ranges. A common mistake is that they often fail to apply absolute references when auto-filling, causing calculation errors.
We hope this addresses your concern.
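For readers unfamiliar with the two headline metrics, a minimal sketch of how per-category Exec@1 and Pass@1 breakdowns (as in Figure 3) could be tallied is shown below. The record layout and the simple executed/passed booleans are our own illustrative assumptions, not the paper's actual evaluation code:

```python
from collections import defaultdict

def exec_pass_at_1(results):
    """Per-category (Exec@1, Pass@1): the fraction of tasks whose first
    solution executed without error, and the fraction whose result also
    passed the functional-correctness check."""
    tally = defaultdict(lambda: [0, 0, 0])  # category -> [n, n_exec, n_pass]
    for r in results:
        t = tally[r["category"]]
        t[0] += 1
        t[1] += int(r["executed"])
        t[2] += int(r["passed"])
    return {cat: (n_exec / n, n_pass / n)
            for cat, (n, n_exec, n_pass) in tally.items()}

# Toy records, one per task attempt (hypothetical data, not paper results).
demo = [
    {"category": "Formulas", "executed": True,  "passed": True},
    {"category": "Formulas", "executed": True,  "passed": False},
    {"category": "Charts",   "executed": False, "passed": False},
    {"category": "Charts",   "executed": True,  "passed": True},
]
print(exec_pass_at_1(demo))  # {'Formulas': (1.0, 0.5), 'Charts': (0.5, 0.5)}
```

A finer-grained failure taxonomy (as in Supplementary Section F) can reuse the same tally by replacing the booleans with a failure-type label.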
**W5: Instead of using a tag cloud for dataset evaluation something detailed metrics would have been more insightful.**
The supplementary details our dataset metrics. Figure A depicts the instruction length and atomic action distributions; Figure B illustrates category proportions and diversity within the core set; Figure C enumerates category combination frequencies. These three figures together demonstrate the richness and diversity of our dataset.
**Q1: Reference to Table 1 seems off.**
Regarding your comment about the reference to Table 1, we thoroughly reexamined the manuscript and table in question and believe there may be a misunderstanding.
Table 1 compares LLMs and a VBA-based method. Two references exist:
(i) The first (line 264) cites Table 1 to demonstrate GPT-4's strong planning capability.
(ii) The second (line 306) cites Table 1 to show SheetCopilot surpassing the VBA-based method.
We hope this addresses your concern. If there are further ambiguities, we would gladly address them.
**Q2: Is the dataset for creating tasks and benchmarking available to the public to compare.**
Yes. The dataset has been publicized on PapersWithCode anonymously and also included in the Supplementary zip file for ease of viewing.
**Q3: Did you try smaller open-source LLMs**
Yes. Please refer to Q1 in the global response above.
**Limitations: good to address discrimination or bias or fairness issues with spreadsheet data**
Thank you for providing the suggestion to improve the ethical integrity of our work. Please refer to Q5 in the global response for our explanation.
_We thank the reviewer again for the insightful review and feedback._ | Summary: The paper proposes a benchmark and a framework based on observe-propose-revise-act for tackling spreadsheet problems.
Strengths: - The paper is very well written and the ideas are neatly presented with figures and tables. It was a pleasant read!
- The ablation studies are quite interesting. It is interesting to see the effects of external documents, usage examples, state of the spreadsheet, and error messages separately. I also liked the ablation with the synonym substitution for official atomic action names.
- The paper proposes a comprehensive framework of observe-propose-revise-act, which even though shown to work on a restricted domain of spreadsheets, can be used as inspiration for other systems, though it is not clear how scaling it up to more complex scenarios will work in practice.
Weaknesses: - My main concern is over the benchmark creation process. Since the problems are picked up from a public domain site, I think there is a high possibility that these LLMs might have already seen this data during pretraining. Under this assumption, it becomes hard to understand the generalization capability of the proposed framework for unseen tasks. The authors don't discuss any deduplication efforts they performed to ensure that the performance of the shown metrics is not bloated.
- Parts of the framework have been proposed by other works, as discussed in the related work. However, I would consider the framework as a whole a novel contribution. At the same time, I believe that the tasks tackled by the paper can be considered too simplistic for these LLMs (especially since it is shown that the LLM achieves 100% performance on some categories of these tasks). It is not clear to me how the proposed framework would scale to more complex settings like a Copilot for programming, where it is difficult to retrieve the relevant state of the environment, external documentation, or usage examples. Even if there exists a way to get this information, it is unlikely that it will fit within the context length available in the input prompt.
- From my understanding, there is no timeout used and the LLM is queried multiple times until it gets the right answer. If this were the case, this is not a realistic assumption when deployed in practice. Users would not wait for long periods until they get the required task accomplished.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - It would be good to discuss the performance-latency tradeoff in terms of #queries to the LLM, and the memory and compute used for inference.
- It would be good to have a user satisfaction metric as well, e.g. whether users were satisfied by the time it took for the model to accomplish the task, etc.
- Line 112: What are LLM-based filters?
- It would be good to show the improvements brought by the framework on top of relatively smaller LLMs.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: There is no limitation discussed in the main paper. A discussion about the increased computing requirements during inference and generalization to complex settings should be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review and feedback. Please see below for answers to your questions.
**W1: hard to understand the generalization capability … discuss any deduplication efforts**
Thank you for this constructive comment.
A direct way to test generalization is to split our dataset into two parts - one containing tasks whose original questions were asked before 09/30/2021 and one after. Since the data cutoff for GPT-3.5 is 2021/09, this allows us to evaluate performance on truly unseen data. Conducting this experiment, we found SheetCopilot achieves Exec@1=89.1% and Pass@1=44.0% on the before split, and Exec@1=80.4% and Pass@1=45.7% on the after split. The similar performance across splits suggests SheetCopilot exhibits generalization capability.
We conducted deduplication when collecting our dataset (see Section A.3 in the supplementary). Through the adaptation process, task instructions differ from the original SuperUser questions. Tasks also apply to new sheets rather than those mentioned on SuperUser. Moreover, ground truth solutions are step-by-step, unlike the colloquial SuperUser answers.
To confirm our data can be considered unseen, we calculated the maximum similarity between each task and all SuperUser questions. Similarities ranged from 0.06 to 0.16 (avg 0.10), indicating only slight semantic overlap.
We also calculated ROUGE scores between our tasks and SuperUser questions, finding ROUGE-1=0.38, ROUGE-2=0.17, and ROUGE-L=0.35. The low scores further confirm the slight overlap.
In summary, the dataset's overlap with SuperUser is minor, and SheetCopilot indeed possesses generalization capability.
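To make the overlap check concrete, here is a toy, stdlib-only re-implementation of unigram ROUGE-1 F1 (not the exact scorer used in the paper); the maximum score of a task against all SuperUser questions gives a per-task overlap figure. The sample strings are invented for illustration:

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Unigram ROUGE-1 F1 between two whitespace-tokenized strings."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

# Per-task overlap: the max score against every question in the source corpus.
task = "Sum the revenue column and highlight cells above the average"
questions = [
    "How do I freeze the top row in Excel",
    "How to sum a column and color cells above the mean",
]
max_overlap = max(rouge_1_f1(task, q) for q in questions)
```

In practice one would use a standard scorer (e.g. the `rouge-score` package) with stemming; this sketch only illustrates why a low max-overlap score supports the no-leakage claim.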
**W2: How the proposed framework would scale to more complex settings**
Our main focus is to address language-instructed spreadsheet manipulation. Actually, applying our method to coding tasks presents an interesting future research direction.
As a preliminary idea, we propose treating a program's logging and error flag status as part of its environment state and using API documentation for programming languages and libraries as external documents.
Regarding the potential context limitation, techniques like retrieval-augmented generation based on vector databases could help mitigate this issue when working with large-scale external documents.
**W3: Users would not wait for long periods**
We agree that fast response is beneficial for increasing users’ satisfaction.
One cause of long waiting is deadlock due to LLMs repeatedly producing the same output (this failure type accounts for 4.9% of cases, as shown in Figure I in the supplementary). However, SheetCopilot's solutions in these cases still contain partially correct mid-steps. Users can employ the interactive mode (see Figure (b) in the response PDF) to abort and relaunch queries, escaping deadlocks. In contrast, correcting mistakes in code-based (VBA) methods is more difficult without easy interaction.
Another cause is the long inference time. Fortunately, many methods of accelerating LLM inference, such as KV-cache [1], have been proposed. These could be applied to reduce query time when deploying SheetCopilot.
We will add timeout and acceleration mechanisms in future updates to improve response speed.
**Q1: The performance-latency tradeoff in terms of #queries to the LLM**
We agree that latency should be optimized if we want to productize SheetCopilot.
Currently, the majority of inference latency stems from our lengthy system prompt (typically 2k~3k tokens), while each generated step seldom exceeds 100 tokens.
A potential optimization for reducing latency is precomputing SheetCopilot's fixed prompt once and caching it via KV-cache [1] for subsequent requests, thereby reducing the query time to the time required to generate ~100 tokens.
Besides, performant open-source LLMs like llama2-7b [2] could already run at a speed of >150 tokens/s on a single 4090. In the future, this could potentially enable SheetCopilot to interact with users within hundreds of milliseconds.
Overall, leveraging the fast progress in open-source LLM inference infrastructure to boost SheetCopilot user experience presents an exciting direction for future work. We will incorporate a discussion of this topic in the final revision.
**Q2: good to have a user satisfaction metric**
Thank you for the suggestion. While human evaluation is indispensable for human-computer interaction research, it is beyond this paper's scope, which focuses on assessing LLMs' capacity for complex software control.
Nevertheless, we have implemented SheetCopilot on Google Sheets (see Figure (b) in the response PDF). We will use this platform to conduct satisfaction experiments in future work.
**Q3: What are LLM-based filters?**
The LLM-based filter is ChatGPT, used to classify Q&A pairs (see Section A.1 in the Supplementary). Specifically, we prompted ChatGPT to classify each pair. After classification, we removed ~2.4k pairs labeled as “Invalid”, thereby filtering out irrelevant pairs and obtaining a cleaner dataset.
Alpaca [3] and Self-instruct [4] adopted this filtering approach and inspired our work.
**Q4: the improvements on top of relatively smaller LLMs**
Thanks for the advice. Please refer to Q1 in the global response above.
**Limitations: There is no limitation discussed in the main paper**
The limitations of SheetCopilot are enumerated in Section H of the supplementary. We also list them briefly here:
- We have not optimized token usage for solving spreadsheet tasks. Future efforts should save tokens or use LLMs with a larger context window.
- Due to the token limit, state feedback provides only essential cell information.
- Our evaluation environment does not support all Excel built-in features (e.g. array formulae).
**Ref**:
[1]Efficiently scaling transformer inference, 2023.
[2]Llama 2: Open Foundation and Fine-Tuned Chat Models, 2023.
[3]Stanford alpaca: An instruction-following llama model, 2023.
[4]Self-instruct: Aligning language model with self-generated instructions, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response to my comments, especially the deduplication point. I would encourage the authors to include the discussion on performance-latency in the paper.
---
Reply to Comment 1.1.1:
Title: Response to Official Comment by Reviewer XbAE
Comment: Thank you for your encouragement! We will include a performance-latency discussion in the final version and conduct further research on this topic in the future. | Summary: This paper introduces SheetCopilot, a model that aims to generate step-by-step executable command sequences for software control according to the natural language description. Besides, a benchmark dataset for evaluating software control tasks is collected. Experimental results based on the dataset are reported.
Strengths: - Generating step-by-step executable command sequences for software control according to the natural language description is a valuable problem.
- The proposed framework SheetCopilot takes a set of atomic actions as an abstraction of spreadsheet software functionalities and contains a state-machine-based task planning framework for LLMs to interact with spreadsheets, aiming to translate high-level task description into executable command sequences, which generally makes sense and is actually novel.
- Experimental comparison against baselines (including the state-of-the-art LLMs like GPT-3.5 & GPT-4) has shown the performance advantage of the proposed method. Besides, the ablation study indicates that the design of the proposed framework is beneficial for improving performance.
- In addition, a comprehensive dataset containing 221 spreadsheet-related tasks is collected in this work for performance evaluation, and this dataset could be further served as a benchmark dataset in this area.
Weaknesses: - Reporting the stability test results through a line chart may be clearer. Besides, there is still room to report more empirical evaluation results.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is the collected dataset publicly available?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive and insightful comments. Thank you for pointing out the strengths of our paper. Your concerns are addressed in detail below:
**W1: Reporting the stability test results through the line chart may be clearer.**
Thanks for the advice. We have conducted extra experiments at temperatures 0.4, 0.6, 0.8, and 1.0. The line charts in Figure (a) of the response PDF show that the difference between the highest Pass@1 (47.66%, at temperature 0.4) and the lowest (44.30%, at 0.0) is slight, and the changes in the other metrics are also slight. These results suggest that SheetCopilot achieves stable performance even as the GPT-3.5 API temperature changes from 0.0 to 1.0.
**Q1: Is the collected dataset publicly available?**
Yes. Our dataset has been publicized on PapersWithCode anonymously. The reviewer can also check our dataset (including the task spreadsheets, task instructions, and ground truths) in the supplementary zip file.
We thank the reviewer again for the thorough review and feedback.
---
Rebuttal 2:
Comment: I've read the authors' response and decide to keep my score. | Rebuttal 1:
Rebuttal: **Global Response**
_The authors would like to thank all reviewers for their appreciation and instructive suggestions!_
The authors are encouraged to hear that the reviewers commented that
- the studied spreadsheet automation task is **valuable** (bHxV) and **challenging** (TF2z)
- the proposed SheetCopilot makes sense and is **novel** (bHxV), **inspiring** (XbAE), and **of big benefit** to spreadsheet users (EedW)
- our dataset curation process is **systematic** (TF2z), and our dataset is **a useful benchmark** for similar works (bHxV and HU2o)
- the ablation studies are **quite interesting** (XbAE) and **clearly show** our contributions (HU2o)
- our paper is **very well written** (HU2o) and the ideas are **neatly presented** (XbAE)
- no ethical issues appear in our work (Hbb1 and DGUT)
We carefully considered all concerns and comments provided by reviewers and addressed all of them appropriately. Our responses are summarized below:
**Q1 (XbAE and TF2z): Performances of smaller open-source LLMs.**
We tested smaller LLMs (e.g. llama13b, 7b) but found them insufficiently capable of generating complete solutions. Nevertheless, it is an intriguing research direction to develop small LLMs capable of solving software automation tasks.
**Q2 (TF2z and DGUT): Why GPT-3.5 is better than GPT-4.**
We investigated why GPT-3.5 is better than GPT-4 in detail (Please see lines 265 - 268 in the main paper; Supplementary Section F).
To clarify, GPT-4 does surpass GPT-3.5 in task success rate (Pass@1) as shown in Table 1. The lower Exec@1 of GPT-4 results from the experimental setting. To match GPT-3.5's 4096 token limit, we capped GPT-4's prompt plus response at 4096 tokens. Exceeding the token limit causes execution failure. This renders GPT-4 prone to incomplete solutions, while GPT-3.5's short yet incorrect outputs suffer less from the limit. Consequently, GPT-3.5 achieves a higher execution success rate (Exec@1) despite GPT-4's superior Pass@1.
In summary, GPT-4 outperforms GPT-3.5 in Pass@1 but the token limit renders GPT-4 less performant in Exec@1.
**Q3 (XbAE and TF2z): A more detailed discussion about the benchmark creation process and the nature of the dataset.**
Section A (Supplementary) details our benchmark creation process. Briefly, we collected ~27k SuperUser Q&A pairs, filtered irrelevant pairs by keywords, further cleaned them by prompting ChatGPT to label invalid pairs, and selected representatives through clustering, resulting in a clean Q&A pair dataset.
Figures A, B, and C (Supplementary) demonstrate our dataset's diverse complexity across six critical task categories.
We took deduplication measures to guarantee that our dataset is unseen by LLMs like GPT-3.5 (Section A.3 in the supplementary). The cosine similarity and ROUGE-L between our dataset and the SuperUser where our raw data comes from are < 0.16 and < 0.35, respectively. This suggests that the SheetCopilot performances shown in Tables 1 and 2 are solid, without data leakage issues.
Overall, we believe our benchmark enables assessing LLM planning capabilities and will inspire further research at the intersection of language models and software agents.
**Q4 (TF2z and HU2o): About further investigation and failure analysis.**
We performed failure analysis for the tested LLMs by categorizing failure cases (Section F in the supplementary).
Key findings:
- GPT-3.5-Turbo struggles with formula prediction.
- GPT-4 often generates incomplete solutions due to token limits.
- Claude rarely repeats output but misapplies formulas/ranges.
- All models frequently neglect absolute references when auto-filling ranges, causing calculation errors.
The failure statistics also demonstrate interesting findings: GPT-3.5-Turbo and GPT-4 share similar failure patterns while Claude differs. This finding suggests that the two GPT models possess almost identical alignment in terms of spreadsheet manipulation and that a gap exists between the alignment of Claude and those of the GPT models.
We believe these results help researchers better understand the strengths and weaknesses of the leading LLMs.
**Q5 (TF2z and DGUT) While it is unlikely that one may see issues with spreadsheet data in discrimination or bias or fairness it would be good to address them.**
Thank you for bringing this to our attention. We understand the importance of fairness and unbiasedness in data. Rigorous steps were taken to uphold these principles:
Firstly, we removed any personally identifiable information from the collected spreadsheets to prevent discrimination. For example, fake names (e.g. numbers and letters) are used in the spreadsheets. Besides, the financial data in the spreadsheets are fabricated to guarantee that the sheets do not involve real individuals' or companies' financial information. These help to reduce bias, as it prevents LLMs from being influenced by personal characteristics.
Additionally, the authors thoroughly audited data, eliminating any offensive or biased content like racism or regional discrimination. For example, we removed tasks that process staff data according to country. One such task is “organizing them by name, country, and region within that country” which was labeled as “invalid” and discarded.
In summary, we implemented measures to address ethical concerns around bias and discrimination. We hope this reply addresses the reviewers’ concerns, and we are open to further suggestions to improve the ethical integrity of our work.
_Finally, we again show our greatest gratitude to all reviewers for their considerable and insightful comments._
Pdf: /pdf/d0b500647662d474f6af8bc0db033246798a5563.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Stable Diffusion is Unstable | Accept (spotlight) | Summary: This paper identifies some vulnerabilities of the Stable Diffusion model and proposes an auto-attack model to generate attack prompts.
Strengths: This paper is well written and is easy to understand.
This paper discusses some vulnerabilities of the Stable Diffusion model and conducts many experiments to verify them.
The methodology illustration is clear and easy to follow.
Weaknesses: The purpose and motivation of ATM are not well explained. The authors have found some vulnerabilities of the Stable Diffusion model, but the motivation of ATM does not correlate well with them. It would be better to connect ATM to the aforementioned vulnerabilities.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there related work in this field? It would be better to mention some related work and discuss or compare it in your experiments.
When describing the Variability in Generation Speed, the definition of speed is a little confusing. What does the speed value represent? How should it be understood?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper proposes an attack model but lacks a discussion of defense models. It would be better to discuss them together.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their diligent review of this paper and for providing constructive feedback.
**RE Purpose and Motivation:** Stable Diffusion, along with other text-to-image generative models (e.g., Midjourney and DALL-E 2), has wide-ranging applications and implications in the field of AIGC. Firstly, in practical application scenarios, the model may receive a variety of text prompts. Therefore, we need to explore under what forms of text prompts the text-to-image model fails to complete the image generation task, so as to improve the stability of the Stable Diffusion model. Our ATM algorithm is able to retrieve, from a clean text prompt, multiple attacking prompts that prevent the Stable Diffusion model from generating images that match human aesthetics. Secondly, the current Stable Diffusion model still has many defects, some common and some more subtle. Based on the large number of failure cases generated by our algorithm, researchers can systematically analyze the logic behind the failures. Indeed, the four vulnerabilities mentioned in the paper were summarized by observing such cases.
**RE Discussion of Defense Model:** Our method can inspire the design of defense strategies and can work as an effective and reliable metric to evaluate those strategies. We design some possible defense strategies against the four patterns of vulnerability we found as follows:
**Pattern 1. Variation in generation speed:** The model may generate the image features of different **nouns** in the text prompt at different relative speeds, which may cause the image features of one **noun** to overwrite those of another, or the sampling process may move into fine-grained generation before the coarse-grained information of one **noun** has been fully generated (for the experiment showing that categories cannot be generated without coarse-grained information, see the third row of Figure 6 in the paper). Defense mechanisms may include:
1. The development of new training strategies, so that the model can generate image features of each **noun** in a more balanced manner.
2. Adopt or design a new mechanism to solve the coordination problem among the tokens in the prompt during sampling. Intuitively, this means keeping the generation speed of each token consistent, so as to solve the problem of content disappearing from the generated images.
3. Add or design diversity losses or regularization terms for model training to encourage the model to generate more balanced and diverse images.
**Pattern 2. The similarity of coarse-grained features:** When two **nouns** have similar coarse-grained features, the model may generate images that mix the features of these two **nouns**. Defense mechanisms may include:
1. Enhancing the feature-discriminative ability of the model so that it can better handle **nouns** that are different but have similar coarse-grained properties.
2. Introduce adversarial training to make the model better learn the differences between different **nouns**, instead of only focusing on their shared characteristics.
3. For coarse-grained similarity, we can preset multiple anchors during the generation process, forcing the category words in the prompt to be generated only in the corresponding anchors to solve the problem of attention map overlap.
**Pattern 3. Polysemy of Words:** When a **noun** is polysemous, the model may produce images that do not match user expectations. Defense mechanisms may include a more comprehensive consideration of the semantic context of **nouns** during model training, enabling models to more accurately understand the specific meaning of each **noun**. A solution might be to introduce context-aware language models to enhance the model's understanding of lexical polysemy.
**Pattern 4. Word position:** The position of a descriptor in a sentence may affect the image generated by the model. Defense mechanisms may include:
1. Train the model to better understand the location information of **noun** to reduce the impact of location changes on generated images.
2. Design a module that can manually adjust the weight of each word to eliminate the influence of location information.
**RE Related Work:** In Section A of the Supplementary Material, we refer to previous related work on exploring and addressing the stability of Stable Diffusion generation, as follows. DAAM performs a text-image attribution analysis on a conditional text-to-image model and produces pixel-level attribution maps. Their research focuses on the phenomenon of feature entanglement and uncovers that the presence of cohyponyms may degrade the quality of generated images and that descriptive adjectives can attend too broadly across the image. Attend-and-Excite investigates the presence of catastrophic neglect in the Stable Diffusion model, where the generative model fails to include one or more of the subjects specified in the input prompt. Additionally, they discover instances where the model fails to accurately associate attributes such as colors with their respective subjects. StructureDiffusion discovers that some attributes in the prompt are not assigned correctly in the generated images, and thus employs consistency trees or scene graphs to enhance the embedding learning of the prompt. Although these works have made some progress, there is still work to be done to enhance the stability and reliability of text-to-image models, ensuring consistent and satisfactory results for a wide range of text prompts.
**RE Definition of Speed:** Taking the generation process of a single image as an example, Stable Diffusion requires N steps of the denoising process. We take the last-step image as the reference image and use SSIM and LPIPS to calculate the distance from the image at each step to the last step. The speed at which this distance converges to 1 (for SSIM) or 0 (for LPIPS) is the generation speed.
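To make this definition concrete, the convergence speed can be summarized as the number of denoising steps needed before the similarity to the final image crosses a threshold. The following sketch is only an illustration, not the authors' implementation: the per-step SSIM values and the 0.9 threshold are invented, and real scores would come from an SSIM/LPIPS library.

```python
def steps_to_converge(ssim_per_step, threshold=0.9):
    """Index of the first denoising step whose SSIM to the final image
    reaches `threshold`; a smaller index means faster generation."""
    for step, score in enumerate(ssim_per_step):
        if score >= threshold:
            return step
    return len(ssim_per_step) - 1  # only the final step itself converged

# Toy per-step SSIM curves (vs. the final image) for two hypothetical categories:
fast_category = [0.20, 0.50, 0.85, 0.92, 0.97, 1.00]
slow_category = [0.10, 0.20, 0.30, 0.50, 0.80, 1.00]
print(steps_to_converge(fast_category))  # 3
print(steps_to_converge(slow_category))  # 5
```

In this toy example the "fast" category reaches high similarity to the final image two steps earlier than the "slow" one, which is the kind of per-category speed gap the rebuttal describes.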
---
Rebuttal Comment 1.1:
Comment: Thanks for your replies. Your response has resolved my doubts very well. I would like to raise my score to acceptance. | Summary: The paper introduces Auto-attack on Text-to-image Models (ATM), a method to efficiently generate attack prompts that closely resemble clean prompts.
The method modifies text prompts by replacing or extending words, using a Gumbel Softmax distribution for differentiability.
It further applies a binary mask to preserve the desired object noun and imposes fluency and similarity constraints to ensure similarity.
The authors identify four distinct attack patterns through this attack method.
The method utilizes Stable Diffusion as the target model, allowing for white-box attacks, and demonstrates the transferability of these attacks to other generative models (black-box attacks).
Strengths: 1. The authors introduce a novel method to generate successful attack prompts in text-to-image generation pipelines. It can aid in vulnerability investigation. The method enables the identification of a broader range of attack patterns, prompting further research in both attack and defense mechanisms. Therefore, it can enhance security in the industry.
2. Related to the previous point, the authors themselves employ their method to uncover four distinct attack patterns, which are highly enlightening. This convincingly demonstrates the significant importance of this approach in studying the vulnerability of generative models and enhancing their robustness.
3. In their experiments, the proposed method demonstrates a high success rate in white-box attacks. It also exhibits excellent transferability to other generative models, e.g. DALL-E 2 and Midjourney. This proves that the method has a broad range of applications and can be used for various generative models, not just limited to Stable Diffusion.
Weaknesses: 1. Two types of modifications, namely replacing and extending, are considered. It can be confusing to determine when to use each type. How do they determine whether to replace or extend a word?
2. Most of the analyses are solid. However, in Section 3.1, the authors mention embedding two prompts c_1 and c_2 separately to "eliminate the additional impact of all possible extraneous factors". It is unclear what specific impact they want to eliminate and how embedding prompts separately can achieve this.
3. The authors mention the use of the method to design defensive strategies but do not discuss how to utilize it. Is it possible to provide an example?
4. A minor issue: In Fig. 6, the steps should range from 49 to 0 instead of 1 to 50. This is because in the reverse diffusion process, T represents noise, and 0 represents the generated image (similar to the steps shown in Fig. 4).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address questions mentioned in "Weaknesses".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are no potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their diligent review of this paper and for providing constructive feedback.
**RE Replacing and Extending:** The two types of modifications are selected automatically by our sampling mechanism and can be optimized using gradients. During prompt tokenization, we uniformly pad all prompts to the maximum length that CLIP can accept, which is 77. After learning the attack distribution, when sampling with Gumbel-Softmax, if the maximum value falls at a position corresponding to the clean text, the modification is a replacement; otherwise it is an extension. For short text prompts, only extension is adopted, for two reasons. First, the prompt template (i.e., “A photo of a [NOUN]”) affects the quality of the final generated image: if the template is modified, some poor-quality images may be generated that the classifier cannot recognize, even though the category of the generated image is still the original category. Second, the category keyword in the prompt cannot be changed either, because we want to retain the noun in the prompt and make the diffusion model generate other objects; directly replacing the original category keyword to make the original category disappear from the generated image is not a reasonable attack.
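A minimal NumPy sketch of this decision rule, under stated assumptions (this is our illustration, not the paper's code: the padded length of 8, the clean-token length of 5, and the uniform position logits are all invented, and a real implementation would sample per vocabulary entry with learned logits):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Forward pass of the Gumbel-Softmax relaxation: perturb logits with
    Gumbel noise, then apply a temperature-scaled softmax."""
    rng = np.random.default_rng(0) if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

# Prompt padded to length 8; positions 0..4 hold clean tokens, 5..7 are padding.
clean_len, padded_len = 5, 8
position_logits = np.zeros(padded_len)  # toy stand-in for the learned attack distribution
sample = gumbel_softmax(position_logits)
pos = int(sample.argmax())
edit_type = "replace" if pos < clean_len else "extend"
print(edit_type)
```

The point of the sketch is only the interpretation step: if the argmax of the sampled relaxation lands on a clean-token position, the edit is read as a replacement; if it lands in the padding, it is read as an extension.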
**RE Additional Impact:** In Section B.4 of the supplementary material, we mention that different positions of the category keywords in the prompt lead to different contents in the final generated image. The reason is that there is a difference between the language model and humans in their understanding of text. Even if the relative order of the category words is changed, the semantics do not change for humans, but for CLIP's text encoder there is a certain gap in the extracted semantic information. Thus, we first split one prompt into two parts and then concatenate the two embeddings, which cuts off the positional embedding of the two keywords and therefore eliminates the impact of the positional information.
**RE Defensive Strategies:** Our algorithm can generate a large number and variety of attack text prompts, which is difficult to achieve manually.
**(1)** Researchers can discover more vulnerabilities in the stable diffusion model by observing and analyzing these attack samples and designing an effective algorithm to improve the stability of the stable diffusion model based on the observed failure cases. We design some possible defense strategies against the four patterns of vulnerability we found as follows:
**Pattern 1. Variation in generation speed:** The model may generate the image features of the different **nouns** in the text prompt at different relative speeds, which can cause the image features of one **noun** to overwrite those of another; alternatively, the sampling process may move on to generating fine-grained information before the coarse-grained information of one **noun** has been fully generated (the experiment showing that categories cannot be generated without coarse-grained information can be seen in the third row of Figure 6 in the paper). Defense mechanisms may include:
1. The development of new training strategies so that the model can generate image features of each **noun** in a more balanced manner.
2. Adopt or design a new mechanism to solve the coordination problem in the sampling process of each token in the prompt. Intuitively, the goal is to keep the generation speed of each token consistent, so as to solve the problem of content disappearing in the generated images.
3. Add or design diversity losses or regularization terms for model training to encourage the model to generate more balanced and diverse images.
**Pattern 2. The similarity of coarse-grained features:** When two **nouns** have similar coarse-grained features, the model may generate images that mix the features of these two **nouns**. Defense mechanisms may include:
1. Enhancing the feature-discriminative ability of the model so that it can better handle **nouns** that are different but have similar coarse-grained properties.
2. Introduce adversarial training to make the model better learn the differences between different **nouns**, instead of only focusing on their shared characteristics.
3. For coarse-grained similarity, we can preset multiple anchors during the generation process, forcing the category words in the prompt to be generated only in the corresponding anchors to solve the problem of attention map overlap.
**Pattern 3. Polysemy of Words:** When a **noun** is ambiguous, the model may produce images that do not match user expectations. Defense mechanisms may include a more comprehensive consideration of the semantic context of the **noun** during model training, enabling the model to more accurately understand its specific meaning. A solution might be to introduce context-aware language models to enhance the model's understanding of lexical polysemy.
**Pattern 4. Word position:** The position of a descriptor in a sentence may affect the image generated by the model. Defense mechanisms may include:
1. Train the model to better understand the location information of **noun** to reduce the impact of location changes on generated images.
2. Design a module that can manually adjust the weight of each word to eliminate the influence of location information.
**(2)** We can use this algorithm to generate a large number of offensive text prompts and use these attack samples to create a dataset specifically for generation stability. This dataset can be used to test the stability of diffusion generation models, and can also be used for adversarial training to improve the stability of the model.
**RE minor issue:** Thank you for pointing out this mistake, we will correct it immediately.
---
Rebuttal Comment 1.1:
Title: post-rebuttal comments
Comment: The authors have addressed my concerns.
It is a good paper.
I vote for acceptance. | Summary: This paper proposes an adversarial attack against text-to-image models that can generate adversarial prompts to prevent the stable diffusion models from generating the desired subjects. The attack is gradient-based by utilizing the Gumbel Softmax to make the work embedding differentiable. Then, the authors provide a comprehensive analysis of the vulnerabilities of the stable diffusion models.
Strengths:
1. This paper is well-written and well-organized.
2. The proposed automatic attack is effective and convenient for implementation.
3. The authors provide interesting and inspiring analyses of the vulnerabilities of the generative models. They found that differences in generation speed, the similarity of coarse-grained subjects in the prompt, and polysemy are important factors in the robustness of generative models. This analysis is definitely beneficial to the future study of the robustness of generative models.
4. The authors show that the proposed attack is transferable as well.
Weaknesses: 1. Although the constraint in the attack seems to be novel, the attack objective is similar to the previously proposed C&W attacks.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. What are the computation consumption and requirement to conduct the white-box attack against the stable diffusion?
2. The attack success rate could be affected by the performance of the network. For example, the network could misclassify a subject even if the subject is correctly generated by the diffusion model. Could you discuss how to mitigate this issue during the evaluation procedure?
3. It seems the attack will make the FID score higher and IS score lower. Does it indicate a drawback of the attack? I am confused about whether we need to maintain the FID and IS score during the attack. Could the authors provide some explanations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their diligent review of this paper and for providing constructive feedback.
**RE Objective and C&W:**
Our objective differs from that of C&W in three respects:
1. **The type of target models:** C&W attacks commonly target classification models, aiming to alter the model's classification results. On the other hand, in our optimization objective, a generative model is targeted, aiming to alter the content of the generated image to make it different from the target class. Although our pipeline also includes a classification model, its role is to measure the success of the attack, and this classification model is not our target of attack.
2. **The continuity/discreteness and differentiability of perturbations:** In C&W attacks, perturbations are applied to image inputs, which are continuous and differentiable (within the range of 0.0 to 1.0). In our optimization objective, the inputs consist of words represented by a dictionary of embedding vectors, which are discrete and non-differentiable. Therefore, it is necessary to introduce a mechanism that renders them continuous and differentiable, thereby enabling gradient-based optimization. To be specific, Gumbel-Softmax distributions are introduced to make the perturbations continuous and differentiable. The two types of modifications, namely replacing and extending, automatically learn the replacement or extension of words, which then pass through a text-to-image transformation function (i.e., Stable Diffusion) so that the effectiveness of the perturbation can be tested.
3. **The complexity of constraints:** The constraints applied in C&W attacks include regularization terms to restrict the p-norm of perturbations and the clamping of pixel values within the range of 0.0 to 1.0 after the attack. These terms are used to make perturbations very small and thus difficult for humans to detect. However, in our method, the p-norm and clamping cannot be used because of the difference between text and images. Therefore, we design constraints that are more suitable for textual data, i.e., fluency constraints and BERT similarity constraints. These constraints not only limit the size of the perturbation, but also limit how similar the perturbed text is to the original text, and how close the generated distribution corresponding to the perturbation is to the true distribution.
**RE Computation Consumption:** Our algorithm consumes 0.1 GPU hours to learn the attack distribution of a sample on a single RTX 3090 with 100 steps of gradient search, and 0.075 GPU hours to sample 100 attack prompts from this distribution; so for a single sample, it consumes a total of 0.175 GPU hours.
**RE Attack Success Rate:** The classifier we use for the attack is CLIP ViT-B/16.
To eliminate the impact of the performance of a single classifier, we design a voting mechanism using an ensemble of multiple classifiers. To be specific, the attack is defined as a failure if at least one classifier in the ensemble still recognizes the original category in the text prompt. When we used this mechanism to evaluate the images generated after our attack, the success rates of the attack on long and short prompts remained the same, 81.8% and 91.1% respectively. This further proves the effectiveness of our algorithm. Additionally, we tested the voting mechanism with images generated from clean prompts covering the 1000 classes in ImageNet. As shown in the PDF, most of the classifiers have a relatively high Top-1 accuracy. We conducted experiments with three distinct classifiers: CLIP+ViT-B/16, Swin-B, and the vanilla ViT-B/16, and evaluated their classification accuracies using both long and short text prompts. For the CLIP+ViT-B/16 model, the classification accuracies with long and short text prompts were 78.80% and 82%, respectively. The Swin-B model demonstrated classification accuracies of 76.20% with long text prompts and 79.80% with short ones. Lastly, the vanilla ViT-B/16 model achieved classification accuracies of 78.90% and 82% with long and short text prompts, respectively. The probability that at least one of them successfully recognizes the current generated image is 90.40% and 92.30% for long and short text prompts, respectively, which is much higher than using only one classifier. This proves that our mechanism is effective.
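The voting rule described above can be sketched in a few lines (a toy illustration from us, not the authors' evaluation code; the class names are invented):

```python
def attack_succeeds(ensemble_predictions, original_class):
    """The attack counts as a failure if at least one classifier in the
    ensemble still recognizes the original category, so it only succeeds
    when no classifier predicts that class."""
    return all(pred != original_class for pred in ensemble_predictions)

# Toy predictions from a three-classifier ensemble for one generated image:
print(attack_succeeds(["goldfish", "axolotl", "eel"], "tench"))  # True
print(attack_succeeds(["tench", "goldfish", "eel"], "tench"))    # False
```

Because a single vote for the original class is enough to count the attack as failed, this criterion is stricter than evaluating with any one classifier alone.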
**RE FID and IS Scores:** This is not a drawback; instead, it echoes our algorithm design. The attack objective of our algorithm, namely the margin loss, is used to reduce the classifier's confidence in the true class $y$ and improve its confidence in the class with the largest confidence excluding $y$, until a margin $\kappa$ is reached, thereby suppressing the categories in the clean text prompt from appearing in the generated images. Therefore, from the perspective of this loss function, whether the attack reduces the quality of the image or makes the original category disappear from the generated image, both align with this expectation.
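For concreteness, a margin loss of this general shape might look like the following toy NumPy sketch (our own illustration; the logits and the value of $\kappa$ are made up, and the paper's exact formulation may differ):

```python
import numpy as np

def margin_loss(logits, true_class, kappa=0.1):
    """Penalize the true-class logit until it falls below the best
    competing logit by at least the margin `kappa` (then the loss is 0)."""
    best_other = np.delete(logits, true_class).max()
    return max(float(logits[true_class] - best_other + kappa), 0.0)

print(margin_loss(np.array([2.0, 1.0, 0.5]), true_class=0))  # 1.1 -> attack not done
print(margin_loss(np.array([0.2, 1.0, 0.5]), true_class=0))  # 0.0 -> margin reached
```

Minimizing this quantity drives the true-class confidence down relative to the strongest competing class, which is why both degraded image quality and the disappearance of the original category are consistent with the objective.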
FID is a metric that measures the distance between the distribution of generated images and the reference dataset. In our paper, the reasons for the increased FID can be summarized as follows. Firstly, we use the ImageNet validation set as the reference dataset; the content of images generated after the attack will deviate from the distribution of this reference dataset, as the nouns used for replacement or extension might not exist within ImageNet's 1000 categories.
The second point is that, after the attack, the images generated by the Stable Diffusion model indeed include low-quality images. This allows us to explore the types of prompts that diminish Stable Diffusion's ability to generate images, which is also one of our attack purposes. The aforementioned images will not only increase the FID but also reduce the IS score.
---
Rebuttal Comment 1.1:
Title: Solved my concerns
Comment: Thanks for your replies. I appreciate that the authors used a voting mechanism to eliminate the impact of the performance of a single classifier, which definitely makes the attack success rate more convincing. My other concerns are well addressed as well. Therefore, I still lean towards acceptance. | Summary: In this paper, the authors use Gumbel Softmax together with a gradient-based method to learn the distribution of an attack text prompt (defined as a text prompt that makes a text-to-image generation model generate images that do not match the text description, without changing the category keywords in the clean prompt) in order to explore the vulnerability of the Stable Diffusion model. Based on the failure cases generated by the algorithm, the authors place the possible reasons for generation failure into four main categories.
Strengths: 1. The algorithm is able to automatically find prompts that cause the text-to-image generation model to fail, offering the possibility of systematically exploring the vulnerability of the Stable Diffusion model.
2. Various patterns of generative failure have been identified based on a large number of generative failure cases, and a number of reasonable experiments have been designed to further support these conjectures.
3. The algorithm is able to learn multiple attack text prompts from a single clean prompt, further boosting the number of generated failure cases.
Weaknesses: 1. It seems that in pattern 1 only the generation speed of different categories under the same random seed was explored; would the generation speed of the same category also differ across random seeds?
2. In the section on quantitative experiments, what is the classification accuracy of CLIP for clean text prompts?
3. How do you handle the gradient of 50 steps in the stable diffusion model?
4. Most contents in this paper are clear and detailed, but the setting in the quantitative experiments section lacks some details, e.g. it does not explain how the random algorithm changed the text prompt, which causal language model was used, and which version of CLIP was used.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their diligent review of this paper and for providing constructive feedback.
**RE Generation Speed with Different Random Seeds:** We prefer to focus on the relative generation-speed difference of different categories under the same initial noise, since the different categories in the same prompt share the same initial noise. As shown in the PDF, we selected "tench" as an example and used 100 different initial noises to generate 100 images of tench; the results show that, even for the same category, there is a significant difference in generation speed under different random seeds. Although this phenomenon exists, it does not affect our previous conclusion that there is a difference in generation speed between different classes under the same initial noise.
**RE Classification Accuracy:** In the process of learning the attack prompt distribution and sampling with Gumbel-Softmax, we use the pre-trained ViT-B/16 CLIP as the classifier. On short text prompts it has a classification error rate of 18%; when perturbed using the randomized algorithm, the error rate rises from 18% to 79.2%, while our proposed algorithm raises it from 18% to 91.1%. On long text prompts, CLIP's classification error rate is 21.2%; the randomized algorithm raises it only to 41.4%, while our algorithm raises it to 81.2%. Based on these results, the effectiveness of our algorithm is demonstrated.
Additionally, we add an experiment with an ensemble of various classifiers, which provides improved classification performance. To be specific, the attack is defined as a failure if at least one of them still recognizes the original category of the text prompt. This approach can eliminate the impact of the performance of a single classifier. When we used this mechanism to evaluate the images generated after our attack, the success rates of the attack on long and short prompts remained the same, 81.8% and 91.1% respectively, thus further proving the effectiveness of our algorithm. To this end, we tested the voting mechanism with images from the 1000 classes in ImageNet generated by the Stable Diffusion model. As shown in the table below, most of the classifiers have a relatively high Top-1 accuracy. We conducted experiments with three distinct classifiers: CLIP+ViT-B/16, Swin-B, and the vanilla ViT-B/16, and evaluated their classification accuracies using both long and short text prompts. For the CLIP+ViT-B/16 model, the classification accuracies with long and short text prompts were 78.80% and 82%, respectively. The Swin-B model demonstrated classification accuracies of 76.20% with long text prompts and 79.80% with short ones. Lastly, the vanilla ViT-B/16 model achieved classification accuracies of 78.90% and 82% with long and short text prompts, respectively. The probability that at least one of them successfully recognizes the current generated image is 90.40% and 92.30% for long and short text prompts, respectively, which is much higher than using only one classifier. This proves that our mechanism is effective.
| | Short Text Prompt | Long Text Prompt |
|-----|:-----:|:-----:|
| ViT-B/16(CLIP) | 82.0% | 78.8% |
| ViT-B/32(CLIP) | 80.8% | 74.1% |
| ViT-L/14(CLIP) | 84.0% | 80.8% |
|ResNet50(CLIP)| 77.2% | 69.9% |
| ResNet50 | 10.7% | 10.8% |
| Swin-B | 79.8% | 76.2% |
| ViT-B/16 | 82.0% | 78.9% |
**RE Gradient:** To save GPU computation and memory, during the 50-step sampling process we turn off the gradients of the image and text in the first 49 steps and only keep the gradient of the last step. According to our quantitative experimental results, our algorithm effectively increases the classification error rate of the CLIP classifier for both short and long text prompts.
**RE Lack of Details:**
1. The random algorithm extends or replaces words randomly. For long text prompts, we replace one or two words at random positions (excluding category keywords) and extend one word at the end. For short text prompts, only extension is used, for two reasons. First, the prompt template (i.e., “A photo of a [NOUN]”) affects the quality of the final generated image: if the template is modified, some poor-quality images may be generated that the classifier cannot recognize, even though the category of the generated image is still the original category. Second, the category keyword in the prompt cannot be changed either, because we want to retain the noun in the prompt and make the diffusion model generate other objects; directly replacing the original category keyword to make the original category disappear from the generated image is not a reasonable attack. Therefore, we only extend one or two words at the end of the clean prompt; these words are randomly selected from the vocabulary of CLIP.
2. The CLM is GPT-2. It is trained from scratch on the WikiText-103 dataset under CLIP's tokenizer, because in our optimization objective we use GPT-2 as the reference model to constrain the language fluency of the generated attack text prompt. For the same text, different tokenizers map to different input IDs, which means different semantics for the language model; and since there is currently no open-source GPT-2 trained with a CLIP tokenizer, we needed to retrain a GPT-2 based on the CLIP tokenizer.
3. The version of CLIP is ViT-B/16. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their diligent review of this paper and for providing constructive feedback.
This PDF includes a Violin plot illustrating the generation speed of the same class with different initial noises. Additionally, there is a table that details the classification accuracy of various classifiers on both clean long and short text prompts.
Pdf: /pdf/10fe765d9388ef81c36f22fab61ca580e1d32f6a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy | Accept (poster) | Summary: In order to distinguish between human-generated text and machine-generated text, the authors propose the use of the periodicity of cross entropy for discrimination. More specifically, they suggest analyzing cross entropy through the Fourier transform.
Strengths: 1. This paper is well-written and easy to follow.
2. The experimental section of this paper is fairly comprehensive. The authors' experiments broadly encompass the latest open-source large models, although they lack large language models like GPT-3.5 (whose cross-entropy can still be obtained through APIs).
Weaknesses: 1. Motivation. The motivation of the paper is not clear, as the authors do not clearly explain why the CE of human language would exhibit periodicity. In the related work section, they briefly mention previous works, but in my view, dialogue tasks are just a specific case of text generation. Overall, skipping the motivation part significantly reduces the soundness of this paper.
2. Method. The authors' method simply involves applying an FFT to the CE sequences, which I believe lacks substantial novelty. Why haven't the authors considered using the information in the frequency domain as input to a deep neural network to incorporate a powerful NN? Why only analyze information in the frequency domain using spectral similarity metrics? Additionally, most of these metrics have already been presented in [1]. Which method would better utilize this information for discrimination? In conclusion, the proposed method by the authors lacks both sufficient contribution and profound insight.
3. Experiments. In the experimental section, the authors did not compare against sufficient baselines. For instance, could we achieve good results by only training a contrastive model using human-generated text and LLM-generated text? How helpful is the frequency domain information in discriminating texts?
[1] Y. Xu and D. Reitter. Spectral analysis of information density in dialogue predicts collaborative task performance. ACL
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: see weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding Weakness 1:
Please read our general response where we addressed the motivation issue.
Regarding Weakness 2:
Actually, applying the FFT to CE sequences is our biggest innovation. First of all, as we discussed in detail in the general response, applying Fourier analysis to cross-entropy sequences is motivated by two pieces of previous work, Xu et al. (2017) and Dethlefs et al. (2016), in which periodic patterns of cross-entropy (or surprisal, information density, etc.) are discovered (Xu et al., 2017), and speakers are shown to be sensitive to the peaks and troughs of cross-entropy in human-machine dialogue (Dethlefs et al., 2016).
These evidence all pointed out the potentials of using frequency-domain information to distinguish human and machine languages. To the best of our knowledge, this perspective is never explored before in the community of natural language processing or AI in general.
Regarding why we do not feed the spectra into a neural network and cast the task as a classification problem: we would like to argue that this paper answers "whether-or-not", rather than "how" or "how good". Indeed, we have positive results on using the spectral features to classify human- vs. model-generated language, and this could lead to a next step of making the most of the spectral features with stronger NN-based models. At this point, however, we believe the current content of the paper does its job of presenting all the promising proof-of-concept results.
Regarding Weakness 3:
Thanks for your review. In our experimental setting, we regard all the other existing open-ended text generation metrics as baselines; they correspond to the `metric` column in Table 3 and the heading rows in Tables 4 and 5, respectively. Indeed, a trained and tuned contrastive model could be a reasonable baseline. However, given all these widely adopted metrics in open-ended text generation research, it is sufficient to claim that our proposed FACE metrics are explicably consistent with previous work and perform comparably to the state-of-the-art MAUVE. If needed, we could add the results of such a hand-crafted baseline (e.g., evaluation scores from a contrastive model) to the existing tables once our paper is accepted.
Speaking of the advantages of using frequency domain knowledge for distinguishing generated texts, one can refer to the vivid analogy which we present to Reviewer mMGf (quoted as follows):
```
the process of transmitting messages between speakers is like delivering cargo between two merchants, who care most about `what` has been delivered, that is, the semantic meanings of words, sentences, ... (whether they make sense or not, etc). However, another important factor that determines the transportation efficiency is `how much`, or, the `weight` of the cargo, which corresponds to the entropy of words. To utter and to understand a high-entropy word/sentence is expensive in terms of cognitive effort needed, and thus the merchants (speaker) need to arrange the cargo's weight in a reasonable way, so that it is neither too tiring for the delivery driver (language production devices) nor too busy for the receiver side (language comprehension devices).
```
In short, the frequency spectra of cross-entropy sequences effectively capture periodic patterns of high/low-entropy words and thus quantify the difference between two families of generated texts without relying on their surface-level or semantic-level representations. Moreover, our approach is computationally efficient: it only requires cross-entropy inference with a pre-trained language model $m_{est}$, avoiding tedious and time-consuming feature-space operations (e.g., encoding texts into a feature space before computing cosine similarities).
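The pipeline described in this rebuttal (cross-entropy sequence, raw DFT, spectrum similarity) can be sketched in a few lines. Note the hedges: the spectral-overlap formula below (shared area over total area) is our illustrative reading, not necessarily the paper's exact SO definition, and the cross-entropy sequences are synthetic stand-ins for values that $m_{est}$ would produce:

```python
import numpy as np

def ce_spectrum(ce):
    """Magnitude spectrum of a token-level cross-entropy sequence
    (raw DFT, no smoothing window; non-negative frequencies only)."""
    return np.abs(np.fft.rfft(np.asarray(ce, dtype=float)))

def spectral_overlap(s1, s2):
    """One plausible overlap score in [0, 1]: shared area under the
    two magnitude spectra divided by their total area."""
    n = min(len(s1), len(s2))
    a, b = s1[:n], s2[:n]
    return float(np.minimum(a, b).sum() / np.maximum(a, b).sum())

# Synthetic stand-ins for cross-entropy sequences from m_est: the "human"
# sequence carries a periodic component, the "model" one is flat noise.
rng = np.random.default_rng(0)
human = 3.0 + np.sin(np.arange(64) / 3.0) + 0.2 * rng.standard_normal(64)
model = 3.0 + 0.2 * rng.standard_normal(64)

so_self = spectral_overlap(ce_spectrum(human), ce_spectrum(human))   # identical spectra -> 1.0
so_cross = spectral_overlap(ce_spectrum(human), ce_spectrum(model))  # differing spectra -> < 1.0
```

The only model-dependent step is producing the cross-entropy values; everything after that is a cheap FFT plus elementwise arithmetic, which is the efficiency point made above.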
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal.
However, applying FFT to CE sequences is not novel enough. It is quite straightforward after reading [1].
Additionally, you said "we would like to argue that this paper is to answer "whether-or-not", rather than "how" or "how good"", but the thing is that this paper lacks an in-depth analysis of the frequency-domain information used for discrimination.
---
Reply to Comment 1.1.1:
Comment: Our work differs from [1] (Xu et al., 2017) in research objectives and also improves significantly on its methods: \
1) Xu et al. used simple n-gram language models to compute the cross-entropy. We test neural language models.\
2) Xu et al. used the periodogram method to obtain the spectrum, which we show in our work is not the most appropriate method. Instead, we propose to use the raw Fourier transform without applying any of the commonly used smoothing windows, and will publish the code to better guide future researchers. This part of the work is NOT trivial, although it is hidden from the paper's main content. \
3) The proposed four metrics of FACE are already an in-depth analysis of the frequency-domain information. From our results, we can conclude that most of the spectral difference between human and model text is in the low-frequency components. There is room for deeper analysis, which we think should be left to future work.\
4) The idea of applying FFT to CE sequences has appeared only once in the proceedings of ACL, and has never been considered as a way to evaluate natural language generation before (to the best of our knowledge). This, we believe, sufficiently proves the novelty of our work.
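Point 2 above (raw transform vs. smoothed/windowed spectra) can be illustrated on a toy signal; the Hann window and the pure tone below are our own choices for the sketch, not taken from the paper:

```python
import numpy as np

N = 64
x = np.sin(2 * np.pi * 8 * np.arange(N) / N)  # pure period-8 component

raw = np.abs(np.fft.rfft(x))                       # raw DFT, no window
windowed = np.abs(np.fft.rfft(x * np.hanning(N)))  # common smoothing window

# Both spectra peak at frequency bin 8, but the window leaks energy into
# neighbouring bins, blurring what is an exact spectral line in the raw DFT.
```

For an exactly periodic component like this one, `raw` concentrates all energy in bin 8 (the other bins are numerically zero), whereas `windowed` spreads a noticeable fraction of it into bins 7 and 9, which is one reason to prefer the raw transform when sharp periodic structure is the object of study.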
We think the NeurIPS community should avoid the tendency to pursue unnecessary mathematical complexity for its own sake, and be more open to truly innovative works that promote interdisciplinary insights. Mathematician Joseph Fourier's method was invented in 1822, but it was not until 1965 that Cooley and Tukey invented the FFT algorithm. It was not until 1992 that Fourier-type transforms were used in the image compression that led to JPEG. It was not until 2012 that FFT was first used in analytical chemistry [2]. Last but not least, it was not until 2023 that FFT was used for distinguishing human and model languages in the era of AI (our work).
The point we are trying to make here is that the science community should not let the merit of a work be blinded by a superficial judgement of novelty based solely on the *age* of a method.
References\
[2] Fernandez-de-Cossio Diaz, Jorge; Fernandez-de-Cossio, Jorge (2012). "Computation of Isotopic Peak Center-Mass Distribution by Fourier Transform". Analytical Chemistry, 84(16): 7052–7056.
LMs from 125 million to over 7 billion parameters are evaluated on NLG of Wikipedia articles, news articles, and stories (with a short prompt of 35 subword tokens provided). Ultimately, FACE is found to be correlated with human judgments of how “human-like”, “sensible”, and “interesting” the generations are. The relationship is not as strong as an existing intrinsic measure, MAUVE. The relative ranking of decoding methods according to FACE agrees with prior works (e.g., greedy decoding < nucleus), as does the effect of model size (smaller models produce lower quality generations than larger models).
Strengths: The metric is well-motivated, evaluating whether generated text matches the surprisal statistics of natural text. The algorithm is simple and described sufficiently clearly. FACE is an automatic measure of NLG quality that is, on the face of it, complementary to existing measures. This paper would be of interest to many who work on (large) language models.
Weaknesses: While FACE is motivated by the desire to match surprisal statistics of natural text, it was not clear how different FACE is from existing metrics. Computing correlation between FACE and existing metrics would help alleviate this, as would providing anecdotes of cases with high/low FACE score vs. high/low MAUVE score, for instance.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Have you also considered the spectrum of hidden LM embeddings rather than cross-entropy, and considered how such a metric might differ from FACE?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding Weakness:
Thank you for your review. If needed, we can include examples in our paper once it is accepted. It is important to note that our approach considers the distinctions between human and model-generated languages in terms of cross entropy and periodicity, which sets it apart from most current mainstream metrics. Our research delves into the implicit aspects of human language usage, for example, exploring the periodicity of high/low-frequency words. For more insights, please check our PDF file, which provides corner cases and experimental results demonstrating SO is better on recognising the model-generated and human texts.
Regarding Question:
Our approach is built on the FFT (Fast Fourier Transform), which makes cross-entropy the most intuitive and sensible input for investigating periodicity. Using hidden embeddings may introduce other problems, such as how to define the strength of a signal or how to draw analogies. Additionally, since our aim is to investigate the periodicity of high/low-entropy words, cross-entropy is the more appropriate quantity.
---
Rebuttal 2:
Comment: I have read the author rebuttal. | Summary: This paper proposes a set of metrics based on Fourier Analysis of the estimated Cross-Entropy (FACE) of language. The main idea is to compute the similarity between the spectra of cross-entropy in model-generated texts and human-written texts. Experimental results show that FACE as a computationally efficient metric can scale with model size and reflect the outcomes of different sampling methods for decoding.
Strengths: 1. The idea to introduce the spectra of cross-entropy into the evaluation task of open-ended text generation is interesting since it may include some patterns (e.g. periodical patterns) to identify the difference between model-generated texts and human-written texts.
2. This paper is overall well-written and easy to follow.
Weaknesses: 1. The proposed method lacks deeper analysis on the spectrum of cross entropy in the evaluation task. The authors only use the spectrum of cross entropy as a feature vector of texts to compute similarities without clearly describing the characteristics of texts it can reflect. This seems like an empirical try without definite intuitions or theoretical supports. In comparison, the features which are commonly used in the existing metrics such as n-gram statistics (in BLEU) and contextual hidden vectors (in BERTScore) intuitively indicate the surface-level and semantic-level representation of texts, respectively.
2. From Table 5, the performance of SO is still worse than that of MAUVE proposed in 2021. I understand that pursuing SOTA is not necessary for each paper. But the authors should provide more insights into the advantages of SO over MAUVE in other aspects.
3. In Section 4.4, the authors mention that they use GPT-2 of different scales to compute the spectra of GPT-2 output data. I wonder whether this setting can introduce potential bias because the cross entropy may be exceptionally low when using GPT-2 to evaluate its own output data from my experience.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have included my questions in the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding Weakness 1:
Indeed, in our preliminary research, we conducted a comprehensive analysis of the spectrum of cross entropy and observed its ability to effectively reflect the periodic patterns of high/low entropy words.
It is important to note that our work is not just an empirical trial; rather, it is theoretically inspired and supported by relevant literature. References 7 and 46 in our research demonstrate that cross-entropy changes periodically in natural language, which presents a potential means to quantify differences in texts.
In daily-life language use, we may not explicitly pay attention to word entropy, as our primary task is to convey semantic messages.
To raise an analogy, the process of transmitting messages between speakers is like delivering cargo between two merchants, who care most about `what` has been delivered, that is, the semantic meanings of words, sentences, ... (whether they make sense or not, etc). However, another important factor that determines the transportation efficiency is `how much`, or, the `weight` of the cargo, which corresponds to the entropy of words. To utter and to understand a high-entropy word/sentence is expensive in terms of cognitive effort needed, and thus the merchants (speaker) need to arrange the cargo's weight in a reasonable way, so that it is neither too tiring for the delivery driver (language production devices) nor too busy for the receiver side (language comprehension devices).
The most relevant previous work in psycholinguistics is from Xu et al. (2017). They pointed out that in natural dialogue, rational speakers should avoid overlapping `peaks` of entropy (cargo weight) to achieve better communication, which can be reflected on the similarity of entropy spectra.
It leads to the basic assumption of our study: probably a human speaker (biological being) tends to save effort by following some periodical patterns in delivering `heavy` words, while a machine speaker does not show such a tendency (because it does not need it).
Regarding Weakness 2:
Thanks! We understand and agree with your perspective on evaluating the quality of generated texts. Indeed, it is a subjective task, and different individuals may have varying opinions, which makes the notion of state-of-the-art (SOTA) in evaluation flexible.
Our approach, which focuses on detecting differences in periodicity and cross-entropy, explores human language use implicitly and in depth. By delving into these aspects, our research aims to uncover the nuances of how humans use language, which we consider a valuable contribution to the field: it provides insight into the intricacies of human language expression and recognition, potentially leading to advancements in natural language processing and understanding. If needed, we could add examples once our paper is accepted. For more insights, please check our PDF file, which provides corner cases and experimental results demonstrating that SO is better at recognising model-generated vs. human texts.
Regarding Weakness 3:
Thank you for pointing this out. In our preliminary research, we tried GPT-2 in three different sizes and found that while the absolute values changed, the periodicity of the results did not.
It is true that GPT2-xl yields cross-entropy values of smaller absolute magnitude than GPT2-sm, but this does not affect the periodic patterns of cross-entropy. By analogy, the cross-entropy of the same sequence estimated by different models is the same signal amplified to different amplitudes; it still has the same period.
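Under the simplifying assumption that switching $m_{est}$ roughly rescales the cross-entropy sequence (real models differ in more ways than a pure rescaling, so this is only a sketch of the amplitude-vs-period point), the claim can be checked numerically with toy numbers:

```python
import numpy as np

n = np.arange(64)
ce_small = 3.0 + np.sin(2 * np.pi * n / 8)  # period-8 pattern, higher cross-entropy
ce_large = 0.5 * ce_small                   # a stronger estimator: lower values overall

def dominant_bin(ce):
    """Index of the strongest non-DC frequency component."""
    ce = ce - ce.mean()  # remove the DC offset before looking at periodicity
    return int(np.abs(np.fft.rfft(ce)).argmax())

# Rescaling changes amplitudes everywhere but leaves the dominant
# frequency (i.e., the period) untouched.
assert dominant_bin(ce_small) == dominant_bin(ce_large) == 8
```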
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your rebuttal. Regarding the response to weakness 1, my concern is that there is still a gap between the method based on the spectrum of cross entropy and the NLG evaluation task. Of course, I know that existing works have explored the properties of the spectrum. But they do not focus on NLG evaluation tasks. Since the main contribution of this paper in my view is applying it to NLG evaluation tasks, the authors should surely analyze the characteristics of texts reflected by the spectrum of cross entropy and clearly describe how they benefit the NLG evaluation task, instead of only citing related papers. | Summary: This paper proposes a new language generation evaluation metric.
Prior work in psycholinguistics has shown that surprisal changes periodically in natural language, with natural utterances displaying moments of high and low surprisal.
This paper thus proposes to evaluate natural language generation models by quantifying how similar its surprisal frequency patterns are to natural language's.
More specifically, this paper proposes to: (1) estimate the surprisal of natural and model-generated text using a separate pretrained language model; (2) obtain frequency spectra for these surprisals using discrete Fourier transforms; (3) compute 4 different metrics of similarity between the frequency spectra of model- and human-generated surprisals.
They then experiment with this metric, showing how it evaluates models of different sizes, and with different decoding strategies.
In a final experiment, they present the correlation between their metric and human judgment scores for 8 gpt2-based language generation systems (small, medium, large, xl with either ancestral or nucleus sampling); this experiment shows that while the proposed method does better than some prior metrics (e.g. self-bleu), it produces worse correlations than mauve.
Strengths: I found this paper quite interesting.
The motivation was relatively clear and the proposed metric is well-motivated.
In particular, operationalising a psycholinguistic hypothesis of what consists human-like text, and then using it to evaluate language generation systems seems like a promising approach.
Further, I appreciate the idea of using Fourier transforms to analyse the frequency spectra of information/surprisal in text.
Weaknesses: I believe that the evaluation part of the paper could be improved:
* First, section 4.1 discusses how the proposed metric evaluates models of different sizes. (They generate text while prompting models with a few initial tokens from sentences of three datasets.) These results, however, seemed confusing to me; sometimes smaller or larger models are better with no clear explanation, and the main comparison point used is whether or not the proposed metric agrees with mauve. If mauve was to be considered a gold standard, however, we would not need a new metric.
* Second, section 4.2 evaluates the impact of decoding strategy on the evaluated scores. In these experiments, the authors (coherently) find that their proposed metric always evaluates contrastive decoding as the best strategy. How their evaluation metric fares when comparing other decoding strategies (e.g. nucleus vs ancestral sampling), however, is less clear. Further the table does not show any scores for human text, which I believe could work as an interesting sanity test. These could be computed by evaluating the proposed metric using half of this dataset against its other half.
* Third, the correlations with human judgement scores in section 4.3 are worse than mauve's. While I do not believe a paper needs to have state-of-the-art scores to be published, the paper does not put forward other reasons why it should be accepted besides these correlations, and treats these negative results as positive in its discussion.
In summary, this paper focuses on how its proposed evaluation metric produces good scores of what is human-like text, but does not demonstrate to be better than mauve at this. Further, it does not offer any other justifications (besides being a good metric of human-like text) for why one should use it. Together, this makes me think the impact of this paper might be quite limited.
Adding a longer and more detailed comparison between this proposed metric and previously proposed ones could help improve this paper's impact.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions:
* Which model was used to compute $m_{\text{est}}$ in the experiments?
* Line 21 cites Piantadosi (2014) for the claim that "For example, Zipf’s law can be used to distinguish between human and model distributions." I don't think this is a conclusion of Piantadosi (2014), however. Could you clarify where in the paper he reaches that conclusion? If this is about their comparison with "random typing models", those are qualitatively different from language models. When examining proper language models, Meister et al. (2021) reach the opposite conclusion (that language models follow a similar rank-frequency relationship to natural language).
Larger Suggestions:
* From the paper's text in section 2.2, I interpret that the discrete Fourier transform operates on a single sentence at a time, and thus the proposed metric was developed to compare the spectra of two sentences, not of two corpora. Figure 1, however, implies the Fourier transform takes as input all sentences in a dataset at once. Explaining section 2.2 in more detail could be helpful.
* Although the authors do discuss this in their paper, I believe the word cross-entropy is not accurate to describe what is being measured here. The word surprisal is the correct one. The authors themselves note this in line 62, but decide to use the term "cross-entropy" anyway because it was used this way before, as in, e.g., Genzel and Charniak (2002). Personally, I do not believe this to be a good reason: prior work using the wrong terminology does not mean you should propagate it.
Smaller Suggestions:
* Line 156 states that Mauve "straightforwardly computes" the similarity of the model- and human-text distributions. However, computing this divergence is actually intractable (starting from the fact that human-text distributions are unknown), and so this is not actually straightforward. They just approximate this using clusters of word embeddings. I’d rewrite this as “attempt to compute” or “estimate”.
* Line 158 states that reference-based metrics are suited for close-ended generation settings, putting Mauve in that group. Mauve, however, was actually developed to analyse open-ended generation settings.
* Figure 4 is too small. At the current scale this figure is unreadable.
Meister et al. (2021). Language Model Evaluation Beyond Perplexity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: I believe the paper doesn't mention at any point which language their data is in (i.e., English) and that most of the cited psycholinguistics research is English-centric (or at least Indo-European-centric). Addressing that as a potential source of limitation is important.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding Weakness 1:
Regarding the results from various model sizes, there are indeed some inconsistent cases where small models out-perform big ones. The data from our current experiments are insufficient to explain this inconsistency, but we are planning a more complete future study to address this. The reason for comparing with MAUVE, however, is not that we treat it as a gold standard, but rather as a means to prove the validity of FACE -- in the worst case, the new metric will be totally uncorrelated with existing ones (such as MAUVE), which did not happen in our study.
Regarding Weakness 2:
The main purpose of having Table 4 in the paper is to show the advantage of FACE in reliably identifying the superior decoding strategies, which, as shown in the previous work (Li et al., 2022), should be contrastive decoding. We admit it is possible that contrastive decoding may not be the optimal strategy under some conditions, and so we agree with the reviewer that a thorough comparison of other strategies is a plus.
Just in case we understand it correctly, if the reviewer suggests that we should look into how FACE scores change when we change the $k$ in top-k sampling from 10 to 640 continuously, or to change the $p$ from 0.1 to 0.995 (as was done in Holtzman et al. 2020), then we agree that these more thorough experiments can better reveal the performance of our metrics.
Holtzman et al (2020), i.e., the nucleus sampling paper provided a complete output dataset with a various range of sampling parameters, so we would love to include them in our future investigation.
We appreciate the suggestion of using half the human data to conduct a sanity test. We have conducted the sanity test and found that human text data have better FACE-SO, -CORR, and -SAM scores, which prove the validity of FACE. Please find the results in the attached PDF document in our general response.
Regarding Weakness 3:
We recomputed the correlations with human judgement scores, keeping only those pairs in which exactly one item is from a human and the other from a model (i.e., a subset of the data used for the analysis in Table 5 in Section 4.3), and found that FACE-SO has a stronger correlation than MAUVE in two of the three dimensions (see the table below; full results are in the attached PDF in the general response):
|             | MAUVE | SO    |
|-------------|-------|-------|
| Human-Like  | 0.214 | 0.357 |
| Interesting | 0.667 | 0.524 |
| Sensible    | 0.706 | 0.995 |
It indicates that our metric is better than MAUVE at distinguishing human from model language, rather than between different models. It also leads to the conclusion that Fourier analysis of cross-entropy can identify some hidden difference between human- and model-generated text, which is highly likely due to the cognitive capabilities of human beings in language production.
Regarding Question 1: Which model was used to compute $m_{est}$ in the experiments?
GPT2-small (345M parameters) is used.
Regarding Question 2 about Piantadosi (2014)'s work
Thanks for pointing out this inaccurate citation. It should be corrected to Holtzman et al. (2020), who used the Zipfian coefficient $s$ to compare the distribution in a given text to a theoretically perfect exponential curve, where $s = 1$. Here Holtzman et al. cited Piantadosi (2014). After checking the original Piantadosi paper, we believe the $s$ coefficient referred to by Holtzman is the $\alpha$ in Piantadosi (2014). So, technically, the method of using the Zipf coefficient to distinguish human and machine text was first introduced by Holtzman et al. (2020). Please correct us if our understanding here is wrong.
It is interesting to read Meister et al. (2021)'s results, which can be potentially useful to our future work.
Reference: A. Holtzman, J. Buys, M. Forbes, and Y. Choi. The Curious Case of Neural Text Degeneration. In Proc. of ICLR, 2020
Regarding Larger Suggestion 1:
It is true that FFT operates on a single sentence at a time, and the FACE metrics are then computed between two sentences. What Figure 1 shows is the overall process: we apply FFT to each and every sentence in the corpus, and FACE metrics are computed pair-wise between two shuffled corpora.
Regarding Larger Suggestion 2:
Thanks for pointing this out. Yes, `surprisal` is the more broadly accepted term in psycholinguistics, but we believe `cross-entropy` (CE) is also compatible here, because `cross` indicates that CE measures the difference across two distributions, the ground truth and the predictions. Estimating surprisal likewise requires a well-trained model and running predictions with it. The prediction at each word position is a set of probabilities over the entire vocabulary, from which a single probability is picked based on the ground-truth token. Thus, we believe surprisal is in nature the cross-entropy between the input sentence and the model. In any case, this is a useful and interesting discussion.
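The relationship between the two terms can be made concrete with made-up next-token probabilities (the numbers below are invented for illustration, standing in for what $m_{est}$ would predict): the surprisal of each token is $-\log p$, and averaging exactly these per-token terms gives the sequence-level cross-entropy, which is why the two names describe the same quantity at different granularities:

```python
import math

# Hypothetical probabilities p[i] that the model assigned to the i-th
# ground-truth token given its context (picked out of the full
# vocabulary-sized distribution at that position).
p = [0.10, 0.40, 0.60, 0.05]

# Surprisal of token i: -log p(w_i | w_<i).
surprisal = [-math.log(q) for q in p]

# Sequence-level cross-entropy between the text and the model:
# the mean of the per-token surprisal terms.
cross_entropy = sum(surprisal) / len(surprisal)
```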
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: > Regarding Weakness 1:
>
> The reason for comparing with MAUVE, however, is not that we treat it as a gold standard, but rather as a means to prove the validity of FACE -- in the worst case, the new metric will be totally uncorrelated with existing ones (such as MAUVE), which did not happen in our study.
Are you saying that the comparisons between Mauve and FACE are intended only to sanity check the new metric? Section 4.1 is the longest results section in the paper. In my opinion, it doesn't read as a sanity check at the moment.
> Regarding Weakness 2:
>
> We appreciate the suggestion of using half the human data to conduct a sanity test.
Thanks for adding these new experiments! I think they are quite interesting.
As an additional suggestion (either for CR, in case the paper is accepted, or for a re-submission, in case it is not), Pimentel et al. (2023) recently analysed Mauve and in their Section 6.2 there are a number of experiments (also using only human text data) which would be interesting here to show the potential of the proposed FACE metric. They show, for instance, that removing articles from a text does not significantly impact Mauve, but I guess it should affect FACE, right?
> Regarding Weakness 3:
It is not clear to me exactly what is being measured in this experiment. What is the correlation being taken across? Is it still across the GPT-2 models? Or are you evaluating sentence-level quality directly now?
> Regarding Question 1: Which model was used to compute $m_est$ in the experiments?
>
> GPT2-small (345M parameters) is used.
Do you know how results change with the chosen model? Recent work in psycholinguistics has shown that GPT-2 small has more predictive power over human reading times than larger and better models (e.g., GPT-2 XL or GPT-3; see Shain et al. 2022 or Oh et al. 2023). But I would be curious to know whether that would also be the case for language generation evaluation.
# References:
Pimentel et al. (2023). On the Usefulness of Embeddings, Clusters and Strings for Text Generator Evaluation.
Shain et al. (2022). Large-scale evidence for logarithmic effects of word predictability on reading time.
Oh et al. (2023). Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times?
---
Reply to Comment 1.1.1:
Comment: >Are you saying that the comparisons between MAUVE and FACE are intended only to sanity check the new metric? Section 4.1 is the longest results section in the paper. In my opinion, it doesn't read as a sanity check at the moment.
Thanks for your follow-up enquiry. In Section 4.1, our aim is not to solely perform the sanity check on FACE metrics. Instead, we focus on validating the effectiveness of our proposed metrics, concerning whether they scale well with model size. To this end, we design our own experimental protocol to evaluate text generations obtained from paired large-small models across three LLM families in a domain-specific manner, which is more comprehensive than MAUVE's.
We did not regard MAUVE as the gold standard in Section 4.1 because it is only one of the seven metrics (Diversity, Coherence, Zipf, ...) being investigated. Here, MAUVE acts more like a "good-enough" reference for us to observe the correlations.
We admit some of the results did seem confusing. We conjecture there are two reasons for the contradictory results in the GPT-2 columns: First, unlike the text generations from the OPT and BLOOM models, where we downloaded pre-trained LLMs and ran inference with our self-tuned hyperparameters, the generated texts from the GPT-2 models were retrieved directly from the official OpenAI repository. Second, as you have pointed out, GPT-2 small has been shown to possess more predictive power than its GPT-2 XL or GPT-3 counterparts, which probably explains some of the opposite results in the GPT-2 columns (i.e., S better than L).
We hope the above comments have addressed your confusion. We will add a deeper and more extensive explanation to make clearer statements once our manuscript is accepted.
>As an additional suggestion (either for CR, in case the paper is accepted, or for a re-submission, in case it is not), Pimentel et al. (2023) recently analysed Mauve ... for instance, that removing articles from a text does not significantly impact Mauve, but I guess it should affect FACE, right?
Thanks for pointing us to Pimentel et al. (2023). The stability of the metrics is an interesting direction to test in our future work. We suspect there is some minimum amount of text that must be met in order to obtain a reliable FACE score. In our current experiments, more data naturally results in smaller standard deviations of the scores, but how this trend scales is yet to be studied.
>It is not clear to me exactly what is being measured in this experiment. What is the correlation being taken across? Is it still across the GPT-2 models? Or are you evaluating sentence-level quality directly now?
As a brief summary of the experiment, we are measuring the alignment between human judges and computational metrics in assessing a model's generation capability. The evaluation data contain two input text columns $a$ and $b$; at least one of them is generated by a model, or both are from models (of different sizes). Then, in the third column $h$, human judges from a crowd-sourcing platform rate which of $a$ and $b$ is better. The last column $m$ records which of $a$ and $b$ gets the higher score according to the metric (MAUVE, FACE, or others).
Therefore, higher agreement between columns $h$ and $m$ indicates that the metric aligns better with the human judges. The Bradley-Terry (BT) scores reported in Table 4 measure this alignment.
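To make the setup concrete, here is a toy sketch of this alignment computation. All names are ours, not the paper's code, and the paper reports Bradley-Terry scores, whereas this simplified version computes a raw agreement rate:

```python
# Toy sketch of the alignment described above. `h` and `m` encode which of
# the two texts (0 = column a, 1 = column b) the human judges and the metric
# preferred. The paper reports Bradley-Terry (BT) scores; this simplified
# version just computes the raw agreement rate between the two columns.

def alignment_rate(h, m):
    """Fraction of <a, b> pairs on which the metric agrees with humans."""
    assert len(h) == len(m) and len(h) > 0
    return sum(hi == mi for hi, mi in zip(h, m)) / len(h)

h = [0, 1, 1, 0, 1]  # human preferences per pair
m = [0, 1, 0, 0, 1]  # metric preferences per pair
print(alignment_rate(h, m))  # 0.8
```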
In our original results, MAUVE has overall higher BT scores than FACE. We think this is because most of the $<a,b>$ pairs are both from models, and FACE is not as good as MAUVE at distinguishing model-generated text from other model-generated text.
In our follow-up experiments, however, we found that FACE has higher BT scores than MAUVE after keeping only those $<a,b>$ pairs in which one source is human and the other is a model. We think this is exciting evidence of better alignment between FACE and human judges. It further hints at a profound difference between human- and model-generated text in the spectral space.
Regarding your question \emph{is it still across the GPT-2 models}: Yes. The evaluation dataset is provided by MAUVE's authors and includes human-generated text as well as text from four GPT-2 sizes times two sampling methods (8 combinations in total).
Regarding your question whether it is sentence-level evaluation or not: No, the evaluation isn't focused solely on sentence-level quality. Within the dataset, the text columns $a$ and $b$ consist of multiple sentences, making it a more comprehensive representation of language usage and quality.
>Do you know how results change with the chosen model?
The output of the FACE metrics does not change with the chosen estimator model $m_{est}$. We conjecture this is because $m_{est}$ only affects the magnitudes of the cross-entropy (e.g., GPT2-small has larger cross-entropy than GPT2-XL), but not the innate periodic patterns of the cross-entropy.
---
Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for providing useful feedback for further improving our paper. We notice that some reviewers suggest strengthening the motivation, especially the reasons for applying Fourier analysis to the cross-entropy sequence of language. We believe the motivation is sufficiently covered in the Introduction and Related Work sections, but some reviewers may not come from a psycholinguistics background and thus lack the bigger context, so here we compose a general response to better explain this paper's motivation in as plain language as possible.
The two most relevant pieces of prior work in computational psycholinguistics are Xu et al. (2017) and Dethlefs et al. (2016). Xu's main finding is that the cross-entropy of dialogue utterances has periodic patterns, which can be used to predict task success. Dethlefs' results show that human speakers' experiences are sensitive to the peaks and troughs in utterances from a human-machine dialogue system. Combining their findings, we distilled our main assumption/idea: capture the periodic patterns of cross-entropy with the FFT, and then test whether the spectral features are an indicator of "good" and "natural" language. It is by following this assumption that we completed the entire study step by step.
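As a minimal sketch of this core assumption (the function name, preprocessing, and toy sequence below are our own illustrations, not the paper's actual implementation):

```python
import numpy as np

# Minimal sketch of the core assumption above: take a sequence of per-token
# cross-entropy values and extract its frequency spectrum with the FFT.
# Names, preprocessing, and the toy sequence are our own illustrations,
# not the paper's actual implementation.

def spectral_features(cross_entropies):
    """Magnitude spectrum of a per-token cross-entropy sequence."""
    x = np.asarray(cross_entropies, dtype=float)
    x = x - x.mean()               # remove the mean (DC component)
    return np.abs(np.fft.rfft(x))  # one-sided magnitude spectrum

# A toy alternating (period-2) sequence: all spectral energy lands in the
# highest-frequency bin, reflecting the periodicity.
ce = [3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0]
print(spectral_features(ce).round(6))  # [0. 0. 0. 0. 8.]
```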
As for why cross-entropy works, our brief explanation is that it reflects human beings' cognitive capability of producing and comprehending language. This capability naturally fluctuates in time as humans dynamically adapt to information based on cognitive load, which produces the temporal periodic pattern of cross-entropy in language. Machines/models, on the other hand, do not present such a pattern because they do not face the cognitive-load issue. We believe that the current way models are trained, i.e., via maximum-likelihood-based learning of which words are produced when, does not capture the dynamics of cognitive load behind the curtain.
Perhaps reviewers outside the field will find our response to Reviewer mMGf an interesting explanation of why using cross-entropy works for evaluating language quality.
To sum up, this paper answers a "whether-or-not" question, rather than "how" or "how good". Fortunately, we have positive results on using the spectral features to tell apart human- vs. model-generated language, and this could lead to a next step of making the most of the spectral features by adding stronger predictive models (as one of the reviewers suggests). At this point, we believe the current content of the paper has done its job in presenting all the promising proof-of-concept results.
In the attached PDF document, we include three experiments suggested by the reviewers: 1) a sanity test of FACE by splitting the human text data in half; 2) corner cases of text whose MAUVE scores are higher than FACE's but which actually do not read well, with some qualitative analysis included; 3) FACE-SO shows higher correlation with the human judgement scores than MAUVE when we use a more reasonable subset of the evaluation data.
Pdf: /pdf/c3708c8f935d6a2fdc4b7e9cf711ed828bf8f501.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (2023)
---
Summary: This paper proposes a set of metrics to measure the distance between model-generated and human-written languages. Specifically, this paper uses the FFT to analyze the cross-entropy sequences of the language data.
Strengths: 1. This new metric is efficient. Given the fact that our models are getting exponentially bigger, it is essential that we do not waste energy during evaluation.
2. This new metric correlates well with human judgment, and is statistically sound.
I personally really like the authors' attempt to interpret the metric. Understanding the why is sometimes much more important than understanding the how.
Weaknesses: 1. The related work on psycholinguistic motivation is limited. Entropy is also a popular metric in computational linguistics, which is probably worth citing.
2. The model size categorization seems to be very coarse.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could the authors be more specific about their motivations for using spectral similarity as a metric?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper is a good step towards addressing some of the problems brought by generative AI.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Regarding Weakness 1: The related work on psycholinguistic motivation is limited. Entropy is also a popular metric in computational linguistics, which is probably worth citing.
Thanks for pointing this out. We will read through our paper in detail to include more comprehensive citations once our work gets accepted.
Regarding Weakness 2: The model size categorization seems to be very coarse.
Indeed, in our experimental setting, we only consider "polarized" groups, pairing an extremely small language model with a fairly large language model that has more than ten times as many parameters in each model family, with the aim of showing FACE's scalability with model size. Adopting a fine-grained model size categorization (e.g., five language models with linearly increasing model sizes in each family) would definitely help demonstrate the robustness of our proposed metrics. Nevertheless, we believe our current setting is sufficient for validating how FACE scales with model size, since we have already taken into account two models with a "stark" contrast in the number of parameters across three LLM families. We have stated in line 299 that larger models (with more than 100 billion parameters) need to be included in future work, but a more fine-grained categorization strategy is also worth considering.
Regarding the question "Could the authors be more specific about their motivations for using spectral similarity as a metric?"
Please read our general response about the research motivation.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. | null | null | null | null | null | null |
Title: FATE: Fairness Attacks on Graph Learning
Decision: Reject
Summary: This paper proposes to attack different fairness definitions for a variety of graph learning models. The task is formulated as a bi-level optimization problem, which is solved in a meta-learning manner. The major advantages of the proposed framework are that 1) it is applicable to any fairness notion and graph learning model, and 2) it supports both continuous and discrete perturbations of the graph topology. Experiments demonstrate the attack efficacy and classification utility of the proposed method.
Strengths: 1. The proposed framework provides a good coverage for multiple types of (differentiable) fairness notions (e.g., statistical parity, individual fairness), and graph learning models (e.g., non-parametric, parametric);
2. Extensive experiments are conducted, considering possible fairness defenses (FairGNN and InFoRM-GNN) and transferability.
Weaknesses: 1. The clarity and rationale of methodology can be improved: there are inconsistent definitions (of the budget constraint), and the realization of budget constraint in the optimization procedure is not validated. See detailed questions;
2. The IID assumption for kernel density estimation may not hold in graph data;
3. Important baseline should be compared (a modified version of DICE based on the sensitive group); and more related works should be discussed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I appreciate the breadth of this work, which gives it good potential. However, the authors are encouraged to address the following questions to improve the technical depth of this work.
**Methodology**
1. The realization of the budget constraint needs more justification. In line 180 for the continuous case, only limiting the learning rate cannot guarantee that the final perturbation is under budget B (after several steps of gradient descent). The authors should either provide citations/reasons to justify the validity of this design, or refer to the better-justified technique – projected gradient descent. In Lines 181-188 for the discrete case, the approximation is also questionable: the other two cases are directly discarded (i.e., $\Delta_A b[i, j]>0, (1-2A)[i,j]=-1$, and $\Delta_A b[i, j]<0, (1-2A)[i,j]=1$), which could result in large rounding error. The authors are recommended to check and discuss recent discrete attack techniques for graphs [1].
2. The kernel density estimation in Eq. (9) assumes that samples are IID, which however does not hold for graph data, as nodes are connected and non-IID. Can the authors provide further justification?
**Experiment**
3. One important baseline should be considered and compared: modifying DICE to consider communities based on nodes’ sensitive groups rather than their classification labels, so this heuristic baseline will add edges between nodes which belong to the same sensitive group and/or remove edges between nodes which are in different sensitive groups.
**Related work**
4. This work provides additional experiments on possible fairness defenses (i.e., FairGNN and InFoRM-GNN), which is highly appreciated. Beyond such works that are specified to a certain fairness definition, the authors are encouraged to discuss and/or test on the recent debiasing work [2], which does not rely on specific fairness definitions.
**Minors**
5. The definition of perturbation budget is not consistent in Line 98 and Line 121. Meanwhile, the “norm” in Line 98 is not in a typical math form: what is $\|A, \tilde{A}\|_{1, 1}$?
6. The budget defined in Line 121, $\|A- \tilde{A}\|_{1, 1}$, is mathematically the maximum absolute column sum of the matrix difference, which seems to be inconsistent with the attacker’s capability to “perturb up to B edges in the graph”. Meanwhile, based on Eq. (8) which limits the number of total edge changes, the budget definition should be corrected as zero norm, right?
7. The bias function $b()$ is not consistent in Line 94 and Line 99. The hyperparameter $\theta$ should not be an input to the function, right?
8. In Table 2, $\Delta_{SP}$ -> InFoRM bias.
**References**
[1] Lin, Lu, et al. "Graph structural attack by perturbing spectral distance." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.
[2] Wang, Nan, et al. "Unbiased graph embedding with biased graph observations." Proceedings of the ACM Web Conference 2022. 2022.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors provide reasonable discussion about the limitations of this work (i.e., other fairness notions, space efficiency).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate your constructive comments and valuable feedback for further improving our work. We summarize the main concerns and the point-to-point responses as follows.
**Q1. Justification of learning rate.**
Thank you for your comments. We would like to clarify that Eq. (6) discusses the continuous attack in a single step. When we consider multiple steps of attack, the update rule would be $\mathbf{A} \leftarrow \mathbf{A} - \eta_i \nabla_{\mathbf{A}} b$, where $\mathbf{A}$ is the input graph, $\eta_i$ is the learning rate in the $i$-th step, which should satisfy $\eta_i \leq \frac{\delta_i}{\left\|\nabla_{\mathbf{A}} b\right\|}$, and $\delta_i$ is the budget in the $i$-th step. We will clarify this in the revised version.
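A small sketch of this budget-limited update (illustrative only; the function and variable names are ours, not the paper's code, and we measure the change in the entry-wise $L_{1,1}$ norm used for the budget):

```python
import numpy as np

# Hedged sketch of the budget-limited continuous step described above. The
# learning rate is capped so that each step's change, measured in the
# entry-wise L_{1,1} norm, stays within the per-step budget delta_i.

def continuous_attack_step(A, grad_b, delta_i):
    """One continuous perturbation step with per-step budget delta_i."""
    eta = delta_i / np.abs(grad_b).sum()  # eta_i <= delta_i / ||grad||_{1,1}
    return A - eta * grad_b               # update rule from the rebuttal

A = np.array([[0.0, 1.0], [1.0, 0.0]])
grad = np.array([[0.2, -0.4], [-0.4, 0.2]])
A_new = continuous_attack_step(A, grad, delta_i=0.5)
print(np.abs(A_new - A).sum())  # the total change is at most 0.5
```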
**Q2. Two discarded cases in discretized attack**
Thank you for your comments. In our two discussed cases, when $\Delta_{\mathbf{A}}b$ is positive, adding an edge helps increase the bias. Since $\left(\mathbf{1}-2\mathbf{A}\right)\left[i,j\right] > 0$ means adding an edge, positive $\Delta_{\mathbf{A}}b$ together with $\left(\mathbf{1}-2\mathbf{A}\right)\left[i,j\right]>0$ indicates a strong preference for adding edge $\left(i,j\right)$ to increase the bias. However, if $\left(\mathbf{1}-2\mathbf{A}\right)\left[i,j\right]<0$, it suggests removing edge $\left(i,j\right)$, which conflicts with the goal of the fairness attack, so it should not be selected by FATE. Similarly, if $\Delta_{\mathbf{A}}b$ is negative, removing edge $\left(i,j\right)$ would help increase the bias; however, if $\left(\mathbf{1}-2\mathbf{A}\right)\left[i,j\right]>0$, only adding the edge is possible. This also conflicts with the goal of the fairness attack, and FATE would not select it.
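The case analysis above can be sketched as follows (a hypothetical helper, not the paper's code; entry-wise, $(1-2A)[i,j]$ encodes the flip direction, $+1$ when the edge is absent and $-1$ when it exists):

```python
import numpy as np

# Sketch of the case analysis above (hypothetical helper, not the paper's
# code). Entry-wise, (1 - 2A)[i, j] encodes the flip direction: +1 when the
# edge is absent (only adding is possible) and -1 when it exists (only
# removing is possible). A flip is kept only when the gradient's sign agrees
# with that direction, i.e., the two conflicting cases are discarded.

def valid_flips(A, grad_b):
    direction = 1 - 2 * A        # +1: can add; -1: can remove
    score = grad_b * direction   # > 0 only in the two non-conflicting cases
    return np.argwhere(score > 0)

# Toy 2x2 example (ignoring self-loops):
A = np.array([[0, 1],
              [1, 0]])
grad = np.array([[0.3, -0.2],   # (0,0): add helps;  (0,1): remove helps
                 [0.5,  0.1]])  # (1,0): conflict;   (1,1): add helps
print(valid_flips(A, grad))     # entries (0,0), (0,1), (1,1)
```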
**Q3. Justification of applying kernel density estimation on non-IID graph data.**
Thank you for your comments. To date, it is an open problem whether the learned node representations follow the IID assumption on the low-dimensional manifold or not. Empirically, our experimental results show that the KDE-based bias approximation effectively helps maximize the bias for fairness attacks. Meanwhile, relaxing the IID assumption is a common strategy when computing the distributional discrepancy of node representations. For example, MMD is a widely used distributional discrepancy measure whose accurate approximation also requires the IID assumption [1], and recent studies [2,3] show that it can also be applied to non-IID data with promising empirical performance.
**Q4. Modifying DICE to inject/delete edges based on sensitive attributes.**
Thank you for suggesting this baseline method. We have implemented it and evaluated its performance in attacking statistical parity. The additional results are provided in Table 2 of the one-page PDF. From the table, we observe that DICE-S fails the fairness attacks in many cases; in the other cases, DICE-S is less effective than FATE.
**Q5. Discussions about related works[4,5]**
Thank you for bringing up these two wonderful works. Regarding [4], it samples the perturbation matrix for the discretized attack and utilizes a projected gradient descent-based method, which can be seen as an alternative to our preference-matrix-based perturbation generation. We would be happy to add a remark in our revised version highlighting this alternative strategy.
For [5], in the context of fairness attacks, it would be interesting to explore whether we can generate a bias-augmented graph, which may empower us to attack multiple group fairness definitions simultaneously. We would be happy to discuss this potential future direction in our revised version. We would also like to point out that we assume the attacker has its own targeted fairness definition to attack. For example, the attacker may aim to prevent certain demographic groups from getting financial support. FATE is consistent with this setting. The setting suggested by [5] can be seen as more challenging than the current one.
**Q6. Inconsistent budget constraints in line 98 and line 121.**
Thank you for your comments. We believe these two constraints are the same. In line 98, we use e.g. (for example) to provide an example, whereas in line 121, we use i.e. (in other words) to explain the distance constraints for edges/features.
**Q7. Explanation for $\|\mathbf{A}\|_{1,1}$.**
Thank you for your comments. For induced matrix norms, $\left\|\mathbf{A}\right\|_{1,1}$ would indeed refer to the maximum absolute column sum of the matrix. However, we use the entry-wise matrix norm, which treats the elements of the matrix as one big vector (see [here](https://en.wikipedia.org/wiki/Matrix_norm#%22Entry-wise%22_matrix_norms) and [here](https://www.cs.ubc.ca/~schmidtm/Courses/Notes/norms.pdf)). Following the definition of entry-wise matrix norms, our notation means the sum of the absolute values of the entries of $\mathbf{A}$.
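The distinction can be illustrated with a small toy example of our own:

```python
import numpy as np

# Quick illustration of the distinction above: the entry-wise L_{1,1} norm
# flattens the matrix and sums absolute values, whereas the induced 1-norm
# is the maximum absolute column sum.
A = np.array([[1.0, -2.0],
              [3.0, -4.0]])

entrywise_l11 = np.abs(A).sum()       # |1| + |-2| + |3| + |-4| = 10
induced_1norm = np.linalg.norm(A, 1)  # max column sum = |-2| + |-4| = 6
print(entrywise_l11, induced_1norm)   # 10.0 6.0
```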
**Q8.Typos about $\|\mathbf{A},\mathbf{\widetilde A}\|_{1,1}$ and Table 2.**
For $\|\mathbf{A},\mathbf{\widetilde A}\|_{1,1}$, we have corrected it to $\|\mathbf{A}-\mathbf{\widetilde A}\|_{1,1}$. We have also corrected the metric in Table 2.
**Q9.Inconsistent representation of bias function in line 94 and line 99.**
Thank you for pointing it out. We have corrected it into $b\left(\mathbf{Y},\Theta^*,\mathbf{F}\right)$.
```
[1] Badr-Eddine Chérief-Abdellatif, and Pierre Alquier. MMD-Bayes: Robust Bayesian estimation via maximum mean discrepancy. AABI2020.
[2] Qi Zhu, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. Shift-robust gnns: Overcoming the limitations of localized graph training data. NeurIPS2021.
[3] Qi Zhu, Yizhu Jiao, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. "Explaining and Adapting Graph Conditional Shift." arXiv:2306.03256.
[4] Lu Lin, Ethan Blaser, and Hongning Wang. Graph structural attack by perturbing spectral distance. KDD 2022.
[5] Nan Wang, Lu Lin, Jundong Li, and Hongning Wang. Unbiased graph embedding with biased graph observations. WWW2022.
```
---
Rebuttal Comment 1.1:
Title: Thank Authors for the Responce
Comment: I appreciate the efforts from the authors to address some of my questions. I would like to know the authors' thoughts on two places remaining unclear to me.
Followup on Q1: can the authors provide a rigorous proof showing that limiting the learning rate ensures the budget constraint is met (no matter whether it is a single-step or multi-step attack)? I would hope to hear more about this treatment, as it is not a common solution for constrained optimization problems.
Followup on Q4: is DICE-S evaluated under group or individual fairness? How is DICE-S implemented (e.g., how many edges are added/removed)? Are there any explanations of why DICE-S is not good enough (since, based on Figure 2, its behavior shares similar patterns)?
I am open to increasing my score if the authors can clarify these two questions.
---
Reply to Comment 1.1.1:
Title: Response to your follow-up questions
Comment: We appreciate your time and efforts in evaluating our work!
**1. Proof for the perturbation budget in the continuous attack.**

For a single-step attack, let $\widetilde{\mathbf{A}} = \mathbf{A} - \eta \nabla_{\mathbf{A}} b$. Then we have

$$
\|\widetilde{\mathbf{A}}-\mathbf{A}\|_{1,1}
= \eta \| \nabla_{\mathbf{A}} b \|_{1,1}
\leq \frac{B}{\| \nabla_{\mathbf{A}} b \|_{1,1}} \| \nabla_{\mathbf{A}} b \|_{1,1}
= B
$$

The inequality in the middle holds because $\eta \leq \frac{B}{\| \nabla_{\mathbf{A}} b \|_{1,1}}$. Similarly, for the multi-step case, following the same procedure, we can prove that the perturbation in the $i$-th step satisfies $\|\widetilde{\mathbf{A}}_i - \widetilde{\mathbf{A}}_{i-1}\|_{1,1} \leq \delta_i$, where $\widetilde{\mathbf{A}}_0 = \mathbf{A}$, $\widetilde{\mathbf{A}}_i$ is the poisoned adjacency matrix in the $i$-th step, and $\delta_i$ is the budget in the $i$-th step. Now suppose we poison the adjacency matrix for $k$ steps. By the triangle inequality, $\|\widetilde{\mathbf{A}}_k - \mathbf{A}\|_{1,1} \leq \sum_{i=1}^k \delta_i \leq B$, since we choose the per-step budgets to satisfy $\sum_{i=1}^k \delta_i \leq B$.
**2. Performance of DICE-S.**
The results in the global response are for attacking statistical parity with GCN as the victim model. As mentioned in the global response, we only report results for attacking statistical parity on GCN due to limited space.
Regarding its implementation, we slightly modify the implementation of DICE in the [deeprobust package](https://github.com/DSE-MSU/DeepRobust/blob/master/deeprobust/graph/global_attack/dice.py). Specifically, by passing the demographic group membership as the ```labels``` parameter in ```def attack```, DICE-S adds edges between two nodes from the same demographic group (instead of adding edges between two nodes of different labels, as in the original DICE) and removes edges between two nodes from different demographic groups (instead of removing edges between two nodes of the same label, as in the original DICE). We then use the same experimental settings described in Appendix C.5 to run DICE-S. The perturbation rates of DICE-S are the same as for all other compared methods (i.e., varying from 0.05 to 0.25 with a step size of 0.05).
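A rough sketch of the DICE-S heuristic just described (a hypothetical simplification of our own; the actual baseline reuses deeprobust's DICE with group memberships passed as labels):

```python
import random

# Rough sketch of the DICE-S heuristic described above: add edges between
# nodes of the same sensitive group and remove edges between nodes of
# different groups. This is a hypothetical simplification, not the actual
# baseline implementation.

def dice_s(edges, groups, budget, seed=0, max_tries=1000):
    """Toy DICE-S perturbation. `edges` holds sorted (u, v) tuples."""
    rng = random.Random(seed)
    edges = set(edges)
    nodes = list(groups)
    changes = 0
    for _ in range(max_tries):
        if changes >= budget:
            break
        if rng.random() < 0.5:                 # try to add a same-group edge
            u, v = rng.sample(nodes, 2)
            e = (min(u, v), max(u, v))
            if groups[u] == groups[v] and e not in edges:
                edges.add(e)
                changes += 1
        else:                                  # try to remove a cross-group edge
            cross = [e for e in edges if groups[e[0]] != groups[e[1]]]
            if cross:
                edges.discard(rng.choice(cross))
                changes += 1
    return edges

groups = {0: 0, 1: 0, 2: 1, 3: 1}              # node -> sensitive group
poisoned = dice_s({(0, 2), (1, 3), (2, 3)}, groups, budget=2)
```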
About its suboptimal performance, we believe you are referring to the behavior in Figure 1 for attacking group fairness. We found that, for Pokec-z and NBA, of all the perturbations made by DICE-S, around 50% of the perturbed edges (whether added or removed) connect two nodes with the same label (regardless of majority/minority label), and the remaining 50% connect two nodes with different labels. According to the FA-GNN paper [1], if we add edges that connect nodes with the same sensitive attribute but different labels (case ED in [1]) and edges that connect nodes with different sensitive attributes but the same label (case DE in [1]), then the fairness attacks are not targeted. We conjecture that, by perturbing edges of cases EE + ED with the same probability and cases DE + DD with the same probability, the acceptance rate of the advantaged group might decrease while the acceptance rate of the disadvantaged group increases at the same time, so the effects might neutralize each other, causing $\Delta_{\text{SP}}$ to decrease after the perturbations. In contrast, FATE not only adds edges between nodes with the same sensitive attribute but also, guided by the gradient of the bias function, significantly increases the number of edges connecting nodes to the minority class, which makes its fairness attacks more targeted than DICE-S's.
On a side note, we would like to mention that a rigorous analysis of which edge contributes the most to bias amplification would require exhaustively perturbing each edge and re-training the GNN, which could be computationally prohibitive.
Please let us know if you have any further concerns, and we would be happy to discuss with you.
```
[1] Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth Lex, Markus Strohmaier, and Roman Kern. Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. ICDM 2022.
```
---
Rebuttal 2:
Title: Follow-up with Reviewer NH2N
Comment: Dear Reviewer NH2N,
We would like to thank you for your thorough review and kind suggestions to improve our work. We have provided a point-to-point response to your suggestions. In particular, we have explained the use of our math notations and addressed your concern related to KDE, added discussions of the suggested related work and a new baseline method, and conducted a thorough proofreading to correct typos and/or grammatical errors. Could you please check our response and let us know if you have further questions?
All the best,
Authors | Summary: The authors propose an attacking framework called FATE. Existing research in algorithmic fairness aims to prevent bias amplification but neglects fairness attacks. This paper fills this gap by formulating the fairness attack problem as a bi-level optimization and introducing a meta-learning-based attack framework. The authors present two instantiated examples, demonstrating the expressive power of the framework in terms of statistical parity and individual fairness, and validate the model's capability of attacking fairness through experimental verification. The paper contributes by providing insights into adversarial robustness and the design of robust and fair graph learning models.
Strengths: 1. This paper is well-structured and easy to follow.
2. The originality of the article deserves emphasis as it formulates the fairness attack on graph data as a bi-level optimization problem. This novel approach contributes to understanding the resilience of graph learning models to adversarial attacks on fairness.
3. The experimental results show that the proposed method is capable of attacking fairness without decreasing too much on accuracy.
Weaknesses: The motivation provided is somehow insufficient in persuading me. The authors state that “an institution that applies the graph learning models are often utility-maximizing” (line 82), which I totally agree with, but then concludes that “minimizing the task-specific loss function … for deceptive fairness attacks” (line 88). While I do agree that pursuing utility will lead to a preference for models with superior performance, and if the objective of the attack is to deceive victims into selecting an unfair learning model, then it does make sense to enhance the utility of the malicious model. But it’s not the case in this article, where the attack’s aim is to poison the graph data. Unlike models, we don't have much discretion when it comes to the data, and it is exceedingly challenging for me to envision a real-life scenario wherein an institution would discard the data due to unsatisfactory performance, as a more practical solution is data cleansing or a more capable model. Consequently, my concern is that, is it truly necessary to maintain the utility for “deception”, leaving aside that I’m not convinced that preserving the utility is necessary for successful deception. Please provide a more detailed explanation.
The experimental findings reveal the limitations of the attack's effectiveness. Although I agree that FATE is capable of attacking fairness and is relatively more stable, in 2 out of 3 datasets, FATE exhibits incompetence in reducing statistical parity compared to FA-GNN. Although the authors argue that the victim model achieved the best performance under the attack FATE, I’m skeptical about the cost-effectiveness of this trade-off.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Please refer to the weakness.
Found typo: in line 62, A[j,:] should be A[:,j]?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We appreciate your constructive comments and valuable feedback for further improving our work. We summarize the main concerns and the point-to-point responses as follows.
**Q1. Necessity of maintaining utility for deception.**
Thank you for your comments about the necessity of maintaining utility. When the fairness attacks are not deceptive, it is easy for a utility-maximizing institution to detect the abnormal performance of a graph learning model trained on the poisoned data. Such abnormal performance may trigger an audit of the model's behavior, which may further lead to an investigation of the data used to train the model. With careful auditing, it is possible to identify fairness attacks, as shown in our analysis of the manipulated edges, due to the significantly larger number of edges with certain properties (e.g., connecting two nodes with a specific sensitive attribute value). To avoid the negative consequences caused by fairness attacks, the institution might then take actions such as data cleansing, as also suggested by the reviewer. Note that we do not suggest the institution discard the data if fairness attacks happen; how to deal with such attacks and the poisoned data should be solely the institution's decision. However, if the fairness attacks are deceptive, it is much less likely that an audit will be triggered by monitoring model performance, which increases the chance that the fairness attacks make a long-lasting negative societal impact. Finally, we would like to point out that, in addition to maintaining similar utility, we also want the poisoned graph to be close to the original graph, by setting a budget on $d\left(\mathcal{G}, \mathcal{\widetilde G}\right)$, for the sake of deception.
**Q2. Incompetence in reducing statistical parity compared with FA-GNN.**
Thank you for your comments. We acknowledge that FA-GNN achieves higher bias than FATE in several cases. However, please note that our goal is not only to increase the bias, but also to make the fairness attacks deceptive by ensuring similarity between the poisoned graph and the benign graph, as well as comparable performance on the downstream tasks. This goal is aligned with our problem definition (Problem 1). Considering a utility-maximizing institution, we believe that FATE provides a promising solution for deceptive fairness attacks, since it consistently increases the bias (for successful fairness attacks) while achieving utility much closer to the benign setting than FA-GNN (for making the attacks deceptive). Notably, FA-GNN suffers a much larger decrease in micro F1 score (0.9\% decrease for Pokec-n, 2.9\% decrease for Pokec-z, 3.3\% decrease for Bail), whereas FATE's utility is more comparable to the performance on the benign graph (0.8\% decrease for Pokec-n, 0.1\% increase for Pokec-z, 1.0\% decrease for Bail). Consequently, a utility-maximizing decision maker might be deceived into choosing the model trained on the poisoned graph generated by FATE, since it achieves a higher micro F1 score.
**Q3. Typos.**
Thank you for pointing this out. We have corrected the typo. Meanwhile, we will conduct thorough proofreading to further improve the paper's writing in the revised version.
**Q4. Ethics review.**
Thank you for raising your concerns about a potential ethics review of our work. We would like to point out that the ethics reviewers did not find any violation of the conference's ethics guidelines. The goal of our paper is to investigate the possibility of making graph learning results more biased, in order to raise awareness of such fairness attacks. Meanwhile, our experiments suggest that existing fair graph neural networks suffer from fairness attacks, which further highlights the importance of designing robust and fair techniques to protect the civil rights of marginalized individuals. We acknowledge that, when used for commercial purposes, the developed technique might be harmful to individuals from certain demographic groups. To prevent this from happening, we will release our code under the [CC-BY-NC-ND license](https://libguides.hartford.edu/c.php?g=887688\&p=6380447\#:~:text=CC%2DBY%2DNC%2DND%20or%20Creative%20Commons%20Attribution%20NonCommercial,or%20ever%20use%20it%20commercially.), which prohibits the use of our method for any commercial purpose, and we will explicitly state in our released code that any use of our developed techniques requires the authors' permission first.
---
Rebuttal 2:
Title: Follow-up with Reviewer G7MB
Comment: Dear Reviewer G7MB,
We would like to thank you for your thorough review and kind suggestions to improve our work. We have replied to your suggestions, where we justified why maintaining utility could lead to more deceptive fairness attacks, explained the comparison with FA-GNN, and discussed the negative societal impacts. Could you please check our response and let us know if you have further questions?
All the best,
Authors | Summary: The paper proposes a novel framework named Fate, which is capable of attacking any fairness definition on any graph learning model, as long as the corresponding bias function and the task-specific loss function are differentiable. Fate supports either continuous or discretized poisoning attacks on the graph topology. The paper provides insights into the adversarial robustness of fair graph learning and sheds light on designing robust and fair graph learning in future studies. The empirical evaluation on three benchmark datasets shows that Fate consistently succeeds in fairness attacks while being the most deceptive (achieving the highest micro F1 score) on semi-supervised node classification.
Strengths: (1) This paper addresses an important problem in graph learning — fairness attacks. While previous work in the field focused on ensuring that bias is not perpetuated or amplified during the learning process, the proposed framework, FATE, allows for the study of adversarial attacks on fairness.
(2) FATE, a meta-learning based framework, is versatile and can be used to attack different fairness definitions and graph learning models.
(3) The experimental evaluation shows that the proposed framework can successfully attack statistical parity and individual fairness on real-world datasets with the ability for poisoning attacks on both graph topology and node features while maintaining the utility on the downstream task.
(4) This article is well-written and well-organized. It starts with an introduction that highlights the importance of fair graph learning and the need for resilience against adversarial attacks. Then it provides some background information and defines the problem of fairness attacks in graph learning. The paper then proposes the Fate framework as a solution to this problem, providing a detailed explanation of its design and mechanism. It also presents experimental results to evaluate the efficacy of Fate. Finally, the paper concludes with a summary of its contributions and future research directions. Overall, the writing logic is clear and easy to follow.
(5) The paper provides detailed information on how they implemented the proposed framework, including the optimization process, the selection of the bias function and the task-specific loss function, and the hyperparameter tuning process. Additionally, they provide a detailed description of their experimental setup, including the datasets used, the graph learning models, the evaluation metrics, and the implementation details of the Fate framework and other baselines. The authors also provide a thorough evaluation of the proposed framework through extensive experiments and analysis.
Weaknesses: (1) One weakness of this paper could be the limited evaluation of the framework on only three benchmark datasets and one task (semi-supervised node classification).
Further evaluations on various graph learning tasks and datasets could provide more insights into the effectiveness and generalizability of the proposed framework.
(2) The proposed Fate framework may not be effective in attacking other fairness definitions beyond statistical parity and individual fairness, and it may not work well on very large graphs.
(3) The paper assumes that the attacker has access to sensitive attributes of all nodes, which may not always be feasible in real-world scenarios.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: (1) Have you considered the potential ethical concerns of using the proposed framework to amplify the bias in graph learning models?
(2) Could you conduct more ablation studies to further prove the effectiveness of the proposed framework compared to other methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: see comments above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive comments and valuable feedback for further improving our work. We summarize the main concerns and the point-to-point responses as follows.
**Q1. More evaluation on different graph learning tasks and datasets.**
Thank you for your suggestions. We chose node classification as the task because it is the most common setting in the fair graph learning literature. Fair regression on graphs is still an open problem, and a suitable benchmark dataset is also needed. For fair link prediction, adapting FATE could be nontrivial, because the structural attack might cause a data leakage issue in link prediction (i.e., a test link might be an adversarial edge injected by FATE). A potential remedy might be applying FATE to manipulate the node features instead of the graph topology, which is beyond the focus of our paper. We also provide FATE's performance on an additional dataset named NBA; detailed results can be found in Table 1 of the one-page PDF. From the table, we draw observations similar to those in our submission: FATE is the only method that consistently achieves effective and deceptive fairness attacks.
**Q2. FATE may not be effective in attacking fairness beyond statistical parity and individual fairness.**
Thank you for your comments. In our paper, we choose statistical parity and individual fairness because they are the two most widely studied fairness definitions. Per our response to reviewer BvDw, we have also discussed how to attack Rawls fairness with FATE: basically, we can set the bias function to be the loss of the best/worst group. It is worth noting that such an attack is conceptually the same as adversarial attacks on utility as shown in [2], but focuses only on a subgroup of nodes determined by the sensitive attribute rather than on the validation set. We will add a detailed discussion on how to attack Rawls fairness in the revised version.
**Q3. FATE may not work well when the graph is large.**
Thank you for your comments. Though one major limitation of FATE is its time complexity (as discussed in `D -- Limitations' in Section 3.2 of our submission), FATE does not place any constraint on the properties of the input graph, and we believe its effectiveness would not be impacted by graph size.
**Q4. Attacker having access to sensitive attributes of all nodes may not be feasible in real-world scenarios.**
Thank you for your comments. Please note that we follow the same assumption as existing works [1]. Regarding the real-world scenario, as pointed out in lines 18--23, a malicious (or polarized) banker might have access to the demographic information of bank account holders and aim to degrade the accuracy of fraud detection for different demographic groups. This example is consistent with our setting because the malicious banker knows the sensitive information of all nodes (i.e., bank account holders) in the graph and tries to make the learning results more biased.
**Q5. Potential ethical concerns.**
Thank you for your concerns regarding the societal impacts of our paper. We will explicitly highlight the following statements about ethical impacts in the revised version: The goal of our paper is to investigate the possibility of making graph learning results more biased, in order to raise awareness of such fairness attacks. Meanwhile, our experiments suggest that existing fair graph neural networks suffer from fairness attacks, which further highlights the importance of designing robust and fair techniques to protect the civil rights of marginalized individuals. We acknowledge that, when used for commercial purposes, the developed technique might be harmful to individuals from certain demographic groups. To prevent this from happening, we will release our code under the [CC-BY-NC-ND license](https://libguides.hartford.edu/c.php?g=887688&p=6380447#:~:text=CC%2DBY%2DNC%2DND%20or%20Creative%20Commons%20Attribution%20NonCommercial,or%20ever%20use%20it%20commercially), which prohibits the use of our method for any commercial purpose, and we will explicitly state in our released code that any use of our developed techniques requires the authors' permission first.
**Q6. More ablation study on FATE.**
Thank you for your suggestions. In our formulation, the upper-level and lower-level optimization problems are coupled. The lower-level problem learns results with high utility on the input graph (which becomes a poisoned graph after the first attack step), and the upper-level problem uses those learning results to further poison the graph to maximize the bias. In that sense, the dependency between these two main components makes it infeasible to ablate variants by removing individual parts. We would highly appreciate it if the reviewer could clarify the specific setting to be tested, and we would be more than happy to provide additional results if possible.
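To make the coupling concrete, here is a purely conceptual toy sketch of a bi-level poisoning loop (a quadratic toy problem of our own construction, not the actual FATE objective; all names are ours). The inner problem is solved in closed form, so the meta-gradient through the retrained parameters is exact:

```python
# Toy bi-level poisoning loop (illustrative only, not FATE itself).
# Lower level: theta* = argmin_theta (theta - (1 + delta))**2 = 1 + delta,
# i.e., the "model" fit on data shifted by the poison delta.
# Upper level: push delta to maximize a bias proxy b(theta*) = theta*^2.

def lower_level(delta):
    return 1.0 + delta  # closed-form "retraining" on the poisoned data

def bias(theta):
    return theta ** 2   # quantity the attacker wants to maximize

delta, lr = 0.0, 0.1
for _ in range(10):
    theta_star = lower_level(delta)     # retrain after each poison update
    meta_grad = 2.0 * theta_star * 1.0  # db/ddelta = db/dtheta* * dtheta*/ddelta
    delta += lr * meta_grad             # gradient ascent on the poison
```

Each upper-level step reuses the freshly retrained lower-level solution, which is exactly why the two levels cannot be ablated independently.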
```
[1] Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth Lex, Markus Strohmaier, and Roman Kern. Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. ICDM 2022.
[2] Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. ICLR 2019.
```
---
Rebuttal 2:
Title: Follow-up with Reviewer Lw8H
Comment: Dear Reviewer Lw8H,
We would like to thank you for your thorough review and kind suggestions to improve our work. We have replied to your suggestions by adding additional experimental results, explaining our choices of evaluation task (node classification, ablation studies), and discussing the limitation and ethical concerns of our work. Could you please check our response and let us know if you have further questions?
All the best,
Authors | Summary: This paper presents a novel approach for introducing fairness attacks in graph learning, which is impressive. To address this issue, the article proposes an attack framework for graphs and conducts experiments on the classic GCN model. Compared to two baseline methods, DICE and FA-GNN, the proposed method is more effective in attacking graph neural networks.
The strengths of the paper include:
1. The research problem is novel and interesting. There is little prior work on attacking the fairness of graph models, and this article is the first to define this type of problem.
2. The paper provides code for the proposed method, which makes it more reproducible.
The weaknesses of the paper include:
1. Some of the content organization is not optimal, such as placing the descriptions of the baseline methods in the appendix. Including them in the main text would have made it more convenient for readers.
2. The article does not mention the limitations of the work. More discussions are required.
Strengths: 1. The research problem is novel and interesting. There is little prior work on attacking the fairness of graph models, and this article is the first to define this type of problem.
2. The paper provides code for the proposed method, which makes it more reproducible.
Weaknesses: 1. Some of the content organization is not optimal, such as placing the descriptions of the baseline methods in the appendix. Including them in the main text would have made it more convenient for readers.
2. The article does not mention the limitations of the work. More discussions are required.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: How does the method's efficiency scale when the graph grows large, say, with millions of nodes?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The article does not mention the limitations of the work. More discussions are required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive comments and valuable feedback for further improving our work. We summarize the main concerns and the point-to-point responses as follows.
**Q1. Sub-optimal content organization.**
Thank you for your suggestions on content organization. In the revised version, we will add a brief description discussing the general idea of each baseline method. However, please note that, due to limited space, we need to defer some detailed experimental settings to the appendix to comply with the submission guidelines.
**Q2. Discussion about the limitation.**
Thank you for your comments. We believe that the limitations of our proposed method are already included in the submission; please refer to `D -- Limitations' in Section 3.2. In that discussion, we mention that one major limitation is the time complexity, which is shared by several related works, and we provide a brief discussion of a potential remedy to this limitation.
**Q3. Efficiency of \textsc{Fate} when the graph grows to millions of nodes.**
Thank you for your comment on testing the efficiency of FATE on million-scale graphs. As we pointed out in the discussion of FATE's limitations in our original submission and in our response to your previous concern (Q2), calculating the gradient with respect to the adjacency matrix $\mathbf{A}$ requires $O\left(n^2\right)$ space complexity. Due to limited computational resources, we cannot evaluate the performance of FATE on graphs with millions of nodes. Please note that addressing this complexity remains an open problem shared by several existing works [1, 2, 3].
```
[1] Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. ICLR 2019.
[2] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph structure learning for robust graph neural networks. KDD 2020.
[3] Zhe Xu, Boxin Du, and Hanghang Tong. Graph sanitation with application to node classification. WWW 2022.
```
---
Rebuttal 2:
Title: Follow-up with Reviewer AtfZ
Comment: Dear Reviewer AtfZ,
We would like to thank you for your thorough review and kind suggestions to improve our work. We have replied to your suggestions by re-organizing content and discussing the efficiency and limitation of our work. Could you please check our response and let us know if you have further questions?
All the best,
Authors
---
Rebuttal Comment 2.1:
Title: Reply to author rebuttal
Comment: My concerns about the work are partially solved. Hope to see the supplemented contents in the paper. I have no further questions. | Rebuttal 1:
Rebuttal: To all reviewers,
We sincerely thank you for your time and effort in evaluating our work. In our rebuttals, we have provided point-to-point responses to your concerns. Meanwhile, in this thread, we provide a one-page PDF that includes additional experimental results on a new dataset named NBA and a new baseline method.
If you have any questions regarding our responses, we would be more than happy to discuss with you and further clarify them.
Thank you,
Authors
Pdf: /pdf/bbd33464fd0d8e3167823e4db049ff7f4055a837.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper studies an interesting problem, attacking fairness on GNNs. Specifically, the authors aim to amplify the unfairness while maintaining the performance of the downstream tasks. They propose a bi-level optimization scheme with a meta-gradient poisoning attack to achieve this goal. Experiments on both statistical fairness and individual fairness show the effectiveness of the proposed method. Overall, a good problem and a solid framework.
Strengths: 1. Fairness attack on graph learning is an interesting problem, this paper gives an intuitive problem definition, which can extend to other bi-level optimization goal attacking scenarios.
2. Given the bi-level optimization goal, the authors design a meta-gradient graph poisoning attack and corresponding Low-level and High-level loss functions.
3. Experiments on statistical fairness and individual fairness demonstrate the effectiveness of the proposed method.
Weaknesses: 1. It would be more interesting if the authors provided more experiments on group fairness and Rawls fairness: how to attack specific groups, such as the best/worst accuracy group.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be more interesting if the authors provided more experiments on group fairness and Rawls fairness: how to attack specific groups, such as the best/worst accuracy group.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive comments and valuable feedback for further improving our work. Please see our point-to-point responses as follows.
**Q1. How to attack group/Rawls fairness by targeting a specific demographic group or the best/worst accuracy group.**
Thank you for your comments in adding more discussions about instantiation of other fairness definitions. Here, we briefly discuss how to attack a specific demographic group and best/worst accuracy group. We will add detailed discussions in the revised version.
To attack a specific group, there are two strategies: (1) decreasing the acceptance rate of the corresponding group, and (2) increasing the gap between the attacked group and another demographic group. For (1), following our strategy of modeling the acceptance rate as the CDF of a Gaussian KDE, we can set the bias function to be maximized as the negative of the acceptance rate (i.e., $b\left(\mathbf{Y},\Theta^*,\mathbf{F}\right)=-\operatorname{P}\left[\hat{y}=1\vert s=a\right]$), where $a$ is the sensitive attribute value denoting the demographic group to be attacked. For (2), suppose we want to attack the group with sensitive attribute value $s=0$. We can set the bias function to be $b\left(\mathbf{Y},\Theta^*,\mathbf{F}\right)=\operatorname{P}\left[\hat{y}=1\vert s=1\right]-\operatorname{P}\left[\hat{y}=1\vert s=0\right]$. In this way, we increase the acceptance rate of the group with $s=1$ while minimizing the acceptance rate of the group with $s=0$.
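As a hedged illustration of strategy (2), the following sketch computes group acceptance rates via the CDF of a Gaussian KDE over predicted scores. The function names, decision threshold, and bandwidth are our own illustrative choices, not values from the paper:

```python
import math

def kde_acceptance_rate(scores, threshold=0.5, bandwidth=0.1):
    """Acceptance rate P[y_hat = 1] for one group, modeled as
    1 - CDF of a Gaussian KDE over predicted scores, evaluated at the
    decision threshold. The KDE's CDF is the average of per-sample
    Gaussian CDFs centered at each score."""
    cdf_at_t = sum(
        0.5 * (1.0 + math.erf((threshold - s) / (bandwidth * math.sqrt(2.0))))
        for s in scores
    ) / len(scores)
    return 1.0 - cdf_at_t

def statistical_parity_bias(scores_s1, scores_s0):
    """Bias b = P[y_hat=1 | s=1] - P[y_hat=1 | s=0]: maximizing it raises
    the acceptance rate of the s=1 group while lowering that of s=0."""
    return kde_acceptance_rate(scores_s1) - kde_acceptance_rate(scores_s0)
```

Because the Gaussian CDF is smooth, this bias surrogate is differentiable in the scores, which is what a gradient-based attack needs.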
To attack Rawls fairness via the best/worst accuracy group, the general idea is to set the bias function to be the loss of the best/worst group. It is worth noting that such an attack is conceptually the same as adversarial attacks on utility as shown in [1], but focuses only on a subgroup of nodes determined by the sensitive attribute rather than on the validation set.
```
[1] Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. ICLR 2019.
```
---
Rebuttal 2:
Title: Follow-up with Reviewer BvDw
Comment: Dear Reviewer BvDw,
We would like to thank you for your thorough review and kind suggestions to improve our work. We have replied to your concerns by discussing about how to attack a specific group for group fairness and how to attack best/worst accuracy group for Rawls fairness. Could you please check our response and see if you have further questions/suggestions?
All the best,
Authors | null | null | null | null | null | null |
CQM: Curriculum Reinforcement Learning with a Quantized World Model | Accept (poster) | Summary: - The proposed method addresses the challenge of learning a curriculum in goal-conditioned policies, which is a significant problem. Previous research on curriculum in goal-conditioned policies has often overlooked the importance of learning the underlying semantic goal space. This paper builds upon a recently proposed method that utilizes VQVAE to learn the semantic goal space and extends it to incorporate the learning of a curriculum. By selecting a sequence of sub-goals, this curriculum-based approach aids in achieving the final goal.
- The paper compares the proposed method to many different SOTA baselines.
Strengths: - The paper is very well-written.
- The paper effectively compares the proposed method with various baselines, providing a thorough analysis.
- Additionally, the paper conducts ablations to examine the impact of different design decisions made in this research.
Weaknesses: - It will be valuable to assess the effectiveness of the proposed method on both manipulation tasks and more intricate navigation tasks like 3D environments (ex. Habitat, AI2-Thor etc)
- It seems restrictive to use single-code representations with the VQ-VAE, so it would be interesting to see how the results change when using multiple codes as in DGRL. (The reviewer understands this is mentioned as future work, but it seems a severe limitation if the proposed method works only with a single code and not multiple codes.)
- A minor comment: To enhance clarity, it is recommended to explicitly mention in the introduction that prior research, such as DGRL [1], has proposed and emphasised the significance of learning a semantic space for goal specification. It was only later in the paper that I realised vector quantisation had been proposed in previous work as an abstraction for goal-conditioned RL.
[1] Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning, https://arxiv.org/abs/2211.00247
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I'm curious about how the performance varies by varying the number of codes in VQ-Dictionary.
- It will also be useful to study if the performance of the proposed method can be improved by using self-supervised objectives for learning visual representations as done in DGRL.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Refer to Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vggh,
We sincerely appreciate your constructive and insightful comments. We found them extremely helpful. We prepared our response below:
---
**Q1. It seems restrictive to the reviewer to use single-code representations with VQ-VAE so would be interesting to see how the results change if using multiple codes like in DGRL.**
**A1.** Although we have left utilizing multi-code representation for CQM as future work, we agree that providing some insights into CQM's compatibility with multi-code representation would be highly valuable. To explore this possibility, we conducted experiments where we expanded the single-code representation into a multi-code representation without tuning any other hyperparameters. The numbers of embedding vectors in the VQ-dictionary (i.e. codebook) for each setting we experimented with are as follows:
| # of codes | # of embedding vectors in VQ-dictionary |
| --- | --- |
| 1 (single-code repr.) | 128 |
| 2 (multi-code repr.) | 32 |
| 4 (multi-code repr.) | 16 |
For the experimental results, we kindly ask you to refer to *Fig. 9* in the PDF uploaded to the global response. While there is some difference in convergence speed, all settings eventually exhibit successful performance.
**Q2. I'm curious about how the performance varies by varying the number of codes in VQ-Dictionary.**
**A2.** Following your comment, we attach an additional result with a varying number of embedding vectors (codes) in the VQ-dictionary. (*Fig. 10* in PDF)
**Q3. It will also be useful to study if the performance of the proposed method can be improved by using self-supervised objectives for learning visual representations as done in DGRL.**
**A3.** Thank you for suggesting another interesting experiment. Following your comment, we trained the encoder by adding a self-supervised objective in addition to the VQ-VAE objectives.
The DGRL [3] algorithm claims to be compatible with any off-the-shelf self-supervised learning method. For this experiment, we utilized a self-supervised representation learning method (LESSON [8]) proposed more recently than the DeepInfoMax objective used in DGRL.
$$\min_{\phi} \mathbb{E}\left[\|\phi(s_t)-\phi(s_{t+1})\|_2 + \max\left(0,\; m-\|\phi(s_t)-\phi(s_{t+c})\|_2\right)\right]$$
(where $\|\cdot\|_2$ denotes the L2 distance)
LESSON adopts the technique of triplet loss: it aims to minimize the distance in the latent space between neighboring states while enforcing that the distance in the latent space between states separated by c steps (c=10 in our case) is larger than the margin parameter m.
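A minimal sketch of this objective for a single triplet of encoded states (the function name is ours; in practice the loss is averaged over batches of triplets):

```python
import math

def lesson_triplet_loss(phi_t, phi_next, phi_far, margin=1.0):
    """One-triplet sketch of the objective above: pull phi(s_t) toward
    phi(s_{t+1}), and push phi(s_{t+c}) at least `margin` away via a
    hinge term (c = 10 in the experiments described here)."""
    pull = math.dist(phi_t, phi_next)                    # neighbor term
    push = max(0.0, margin - math.dist(phi_t, phi_far))  # hinge term
    return pull + push
```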
The training results (CQM+SSL) are included in the PDF (*Fig. 4*). Through the experiment, we found that simply adding self-supervised learning objectives does not improve performance at the current stage. Nevertheless, we agree that exploring appropriate SSL objectives for better performance could be a valuable direction for future research.
**Q4. It will be valuable to assess the effectiveness of the proposed method on both manipulation tasks and more intricate navigation tasks like 3D environments (ex. Habitat, AI2-Thor etc)**
**A4.** Thank you for your insightful suggestion. In the field of curriculum learning, most recent studies (CURROT, OUTPACE, GoalGAN, ALP-GMM, VDS, ...) have showcased their performance only in state-based RL settings. Thus, it is noteworthy that CQM not only outperforms these prior approaches in state-based scenarios but also demonstrates its efficacy in pixel-based environments. While we did not cover all such complex settings in this study, we believe that CQM can provide valuable insights to future studies that aim to establish effective curriculum learning in them.
**Q5. To enhance clarity, it is recommended to explicitly mention in the introduction that prior research, such as DGRL**
**A5.** We appreciate your comment. However, we would like to note that our manuscript already includes references to prior studies (including DGRL [3]) in Section 2, Related Works (lines 109-116). That said, we agree there is room for improvement (e.g., a more detailed explanation of the prior works), and we will take your feedback into account in the final version.
---
Thank you again for your valuable and insightful review.
We will gladly include these experimental findings in the final version of the manuscript, either in the main body or as an appendix.
Also, please let us know if our responses have addressed your questions. If anything needs further clarification, please do not hesitate to let us know as soon as possible.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for running extra experiments. I keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for replying to our response. We appreciate again your valuable suggestions and all your efforts during the review process. | Summary: The method works as follows. Graphs are built by (1) quantizing visual observations to create a goal space & (2) creating temporal relations over goal space vectors. Curriculum goals towards a "user"-specified goal are made using this graph.
A VQ-VAE is used to create the goal space where goals are decodings of one of k trainable embedding vectors. The authors also study an ablation where landmarks are decoded from the nearest neighbor goal encodings over replay buffer observations.
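The single-code quantization step can be sketched as a nearest-neighbor lookup in the codebook (a generic VQ-VAE sketch; the array shapes and names are our assumptions, not the paper's code):

```python
import numpy as np

def quantize(z, codebook):
    """Snap an encoder output z of shape (d,) to its nearest entry in the
    codebook of shape (k, d); the decoded goal/landmark is produced from
    the selected code."""
    idx = int(np.argmin(np.linalg.norm(codebook - z, axis=1)))
    return idx, codebook[idx]
```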
Graphs are derived by connecting edges based on a metric that exploits having rewards of -1 at all states except for the goal state. This is motivated by wanting edges to capture the geodesic distance between nodes.
A curriculum for exploring the graph is created by selecting nodes with high uncertainty (or low count via count-based methods) and high temporal distance. Curriculum goals are goals that maximize the sum of this uncertainty and the geodesic distance from the initial state.
The method uses Dijkstra's algorithm to generate plans towards goals.
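For readers unfamiliar with this planning step, the sketch below shows Dijkstra's algorithm over a small landmark graph; the function name, edge format, and node labels are illustrative assumptions, not taken from the paper.

```python
import heapq

def shortest_waypoints(edges, start, goal):
    """Dijkstra over a landmark graph.

    edges maps node -> [(neighbor, cost), ...]; returns the waypoint
    sequence from start to goal (inclusive), or None if unreachable."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in edges.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

In the paper's setting, edge costs would be the temporal-distance estimates between landmarks rather than the toy numbers used here.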
They compare against many baselines in the literature and consistently find a higher success rate. They also visualize the curriculum goals produced by their method.
Strengths: originality: The method is novel to the best of my knowledge.
quality: The paper has high quality. The experiments are very thorough and the authors also provide an analysis.
clarity: The experiments are relatively clear.
significance: The paper seems to produce better performance than many methods in the literature with similar assumptions (e.g. exploiting Dijkstra's algorithm).
Weaknesses: The paper could strongly benefit from some figures that describe the details of the method. The method is relatively complicated (though not more complicated than comparable methods in the literature). Due to its complexity, it's hard to understand how the method works from Figure 1. I'm somewhat familiar with this literature, so understanding their method section was easier for me, but other readers may find it more challenging. Another figure would strongly improve the clarity of this paper.
The baseline description describes how the baselines work, but it does not describe what comparing against each baseline (or sets of them) tells us about CQM. Without this, while the results are promising, it's hard to get a takeaway message. For example, OUTPACE also proposes an uncertainty- and temporal-distance-aware curriculum. Comparing against OUTPACE tells us...? It might be the importance of jointly considering uncertainty and temporal distance, but that isn't clear in the text. I recommend explicating this a bit more. I think it's fine if baseline methods are grouped together for describing the point of the comparison.
Given the complexity of this method, it would strongly benefit from an algorithm box describing how data is collected, how the goal space is learned, how the graph is constructed, etc. In particular, which of these are done in tandem and which are done in sequence is not clear.
Overall, this method seems promising, the motivation is clear, and the results are also promising. I think the biggest drawback is clarity right now. It's not that easy to understand the method. Right now I lean towards accept but I strongly encourage the authors to add figures/explicit algorithm/both for the methods section. This would likely push my score to a 7.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Across experiments, does the agent encode first-person observations, top-down observations over a map, or both? This seems important to how you quantize the observations. If top-down, this seems like a strong assumption. Can you justify it? Methods like SFL use first-person observations. At least some of the environments that you study permit first-person observations (e.g. AntMaze). I know you do only ego-centric for PointNMave-Viz but it's not clear for the other experiments.
What assumptions does this method make over the reward function?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: This limitations section is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer eib8
We sincerely appreciate your constructive and insightful comments. We found them extremely helpful. We prepared our response below:
---
**Q1. The paper could strongly benefit from some figures that describe the details of the method. Another figure would strongly improve the clarity of this paper.**
**A1.** Thank you for your helpful suggestions on the need for some figures for CQM. Following your valuable comment, we have prepared a diagram (*Fig 8.* in the PDF attached to the global response) that provides a clearer understanding of each component of our algorithm. We will ensure that these visuals are incorporated into the final version of our paper. If there are any other ideas to further enhance the reader’s understanding, please do not hesitate to let us know. We will be happy to comply.
**Q2. Explanation is required regarding the differences between CQM and the baselines. While the results are promising, it's hard to get a takeaway message.**
We agree that a detailed comparison with the baselines can provide readers with more insightful takeaway messages. Following your suggestion, we summarized the additional comparisons below. For better understanding, we kindly ask you to refer to Table 1 in Appendix B (in the supplementary material) when reading the explanation below.
**A2-1.** vs GoalGAN, PLR, ALP-GMM, and VDS
Firstly, these methods each employ curriculum learning strategies, but they lack a mechanism for converging to the final goal distribution. Thus, agents trained with these methods repeatedly practice in unexplored areas instead of converging towards the final goal, even after the explored area “covers” the final goal distribution, leading to performance degradation.
**A2-2.** vs CURROT: CURROT can execute curricula directed towards the final goal. However, it requires an assumption that the distance between the samples can be measured by the Euclidean distance metric. Thus, it suffers in the environments such as N-shaped or Spiral-shaped Maze.
**A2-3.** vs DHRL and SFL
These methods are the RL algorithms employing graphs. Unlike our method, DHRL does not autonomously generate its curriculum goals; rather, it relies on externally supplied goals that are sampled from feasible areas within the environment. Consequently, DHRL's performance heavily depends on the availability of these externally provided random goals.
Although SFL allows exploration without external goal input, it does not execute final-goal-directed curricula, like the baselines mentioned in **A2-1.** Moreover, it conducts exploration based solely on uncertainty, unlike our approach, which considers both uncertainty and temporal distance.
**A2-4.** vs OUTPACE
In distinction to the studies above, OUTPACE is the only curriculum goal generation method that considers both uncertainty and temporal distance. However, there are two significant distinctions between OUTPACE and CQM:
(a) OUTPACE does not have a module to learn the goal space while CQM automatically defines the goal space and quantizes it.
(b) OUTPACE predicts uncertainty using the meta-learning-based method Conditional Normalized Maximum Likelihood (CNML) while CQM utilizes count-based uncertainty prediction with quantized state space.
CNML struggles with substantial prediction errors when there is no prior knowledge of a manually specified goal space (since CNML is not scalable to high-dimensional observations). Conversely, our approach employs a straightforward count-based uncertainty mechanism via state space quantization, enabling the generation of suitable curriculum goals even in high-dimensional input environments without a manually specified goal space.
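To illustrate the count-based mechanism described above, here is a minimal sketch; the exact scoring in the paper is given by its Eq. 5, and the 1/sqrt(1+N) form used here is only an assumed common choice, not the paper's formula.

```python
import math
from collections import Counter

def count_based_uncertainty(visited_codes, num_codes):
    """Score each VQ code by how rarely the agent's observations landed in it.

    visited_codes: iterable of code indices from quantized observations.
    Returns a list where unvisited codes get the maximal score of 1.0."""
    counts = Counter(visited_codes)
    return [1.0 / math.sqrt(1 + counts.get(k, 0)) for k in range(num_codes)]
```

Because each observation maps to one of a fixed number of codes, counting visits stays cheap even when the raw observations are high-dimensional images, which is the advantage over CNML described above.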
**Q3. Clarification on each part of CQM**
**A3.** To enhance reader comprehension, we have succinctly presented how each module operates in the CQM algorithm. (Due to the word limit, we kindly ask you to refer to our answer (**G-A1**) in the global response). Also, following your comment, we will incorporate this alongside the diagram mentioned in A1 in the final version.
**Q4. About the environments (first-person observation / top-down observation)**
**A4.** Among the two visual input environments we experimented with, PointNViz is a first-person environment, while PointSpiralViz is a third-person environment. While it is true that SFL conducted experiments in a first-person environment similar to ours, it is important to note that they used a discrete action space rather than a continuous one like ours, making our setting a more challenging problem. (Additionally, SFL utilizes a pre-trained ResNet backbone, whereas our approach does not assume access to a pre-trained network.)
**Q5. What assumptions does this method make over the reward function?**
**A5.** The results of CQM and the baselines in this work are obtained using sparse reward functions. In sparse reward settings, the agent receives a reward of 0 when it succeeds in reaching a goal and -1 otherwise. The criteria for success in each environment are indicated in Appendix C (Environment Details).
---
Thank you again for your valuable and insightful review.
We will gladly include the figures and descriptions above in the final version of the manuscript, either in the main body or as an appendix. Also, please let us know if our responses have addressed your questions. If anything needs further clarification, please do not hesitate to let us know as soon as possible.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank you for your response. I am satisfied with the rebuttal. I have raised my score by 1.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for replying to our response.
We are happy to hear that our response addressed your questions. We thank you again for your valuable suggestions and all your efforts during the review process. | Summary: This paper proposes a curriculum RL approach using a VQ-VAE to learn a goal space, and then constructs a graph with the VQ-VAE codes as nodes and a temporal distance estimate from the Q-value as weighted edges. The curriculum is then constructed by doing frontier-based exploration on this graph, by sampling goals based on their visitation count and temporal distance. In addition, the graph can be used for planning intermediate goals as waypoints for the agent. The paper is well written, and it has proper benchmarks and an ablation study. The authors use the term "world model" in their paper, but I would argue they don't really train a world model, rather a representation of the observation space. I would omit the use of "world model" in a further revision of the manuscript.
Strengths: The paper proposes a novel curriculum RL method, which learns a goal-space on high-dimensional pixel observations. Turning the VQ-VAE codebook into a graph, weighted by estimated temporal distance from the Q-value is a powerful idea for both curriculum learning as well as planning and long-range credit assignment.
Weaknesses: - The authors use the term "world model", however what they present is a discrete latent representation of the observation space. A world model would entail to e.g. also equip it with an action-conditioned forward dynamics model, as typically used in model-based RL. I would not call this a world model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How do you get a distribution over the goal: by encoding a set of exemplar goal observations and counting the VQ-VAE bins they get encoded into?
- In the Planning section the authors use state s_0 from S, although the previous MDP description uses observation o_0 from O. I assume these are the same or have you shifted to a POMDP setup?
- In the AntUMaze, the performance seems to deteriorate after 600k steps, i.e. the curriculum goals seem to get farther from the goal. Any insight on what is happening there?
- Since the goal space is trained as a VQ-VAE on individual observations, I think it will break when the environment is ambiguous, i.e. if the agent would have to traverse rooms, but 2 rooms would look identical, these observations would map onto the same goal and the graph might collapse?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - I think one limitation that is currently not touched upon is that this approach requires that the environment is not ambiguous (i.e. you don't see the same observations at different places in the environment).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer uT8t
We sincerely appreciate your constructive and insightful comments. We found them extremely helpful. We prepared our response below:
---
**Q1. Regarding the term “world model”**
**A1.** Thanks for your valuable comment. We understand your concern regarding the term "world model" used in our paper, which may confuse the readers as it is also used in general model-based RL.
We would like to clarify that the term "world model" we used is derived from the paper "World Model as a Graph: Learning Latent Landmarks for Planning (L3P) [6]". In the L3P paper, the term "world model" refers to a graph-structured environment, since it can also be viewed as a multi-step transition at a higher-level perspective.
After considering your input, we understand the need for more careful explanations when using the term "world model". We will incorporate additional explanations in the introduction of our paper, providing context and clarifying the specific meaning. If you believe that even with these clarifications, the term "world model" might still lead to confusion among readers, please kindly inform us, and we will explore alternative terms to effectively convey the intended meaning.
**Q2. How to get a distribution over the goal, encoding a set of exemplar goal observations and counting the bins of the VQ-VAE they get encoded in?**
**A2.** If you are asking about the goal distributions ($p_{ag}$ and $p_{g^f}$) in the section “Convergence to the final goal” (lines 201-218): we fit a kernel density estimator (KDE) to goal samples drawn from the replay buffer. Following prior work (MEGA [5]), we use the default Gaussian kernel for the KDE.
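A minimal 1-D sketch of this KDE scheme, combined with the rule alpha = 1 / max(beta + kappa * KL(p_gf || p_ag), 1) that appears elsewhere in this discussion; the bandwidth, the sample-based KL estimate, and all function names are illustrative assumptions.

```python
import math

def gauss_kde(samples, bandwidth):
    """Fit a 1-D Gaussian kernel density estimator; returns its pdf."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def mixing_alpha(final_goals, achieved_goals, beta=0.1, kappa=1.0, bandwidth=0.5):
    """alpha = 1 / max(beta + kappa * KL(p_gf || p_ag), 1), with the KL
    estimated by evaluating both KDEs at the final-goal samples."""
    p_gf = gauss_kde(final_goals, bandwidth)
    p_ag = gauss_kde(achieved_goals, bandwidth)
    eps = 1e-12
    kl = sum(math.log((p_gf(x) + eps) / (p_ag(x) + eps))
             for x in final_goals) / len(final_goals)
    return 1.0 / max(beta + kappa * kl, 1.0)
```

When the achieved-goal distribution overlaps the final goals the KL term shrinks and alpha approaches 1, i.e. the agent is increasingly handed the final goal itself.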
**Q3. In the Planning section the authors use state s_0 from S, although the previous MDP description uses observation o_0 from O. I assume these are the same or have you shifted to a POMDP setup?**
**A3.** We appreciate your comment. As you mentioned, the symbols $s_0 \in S$ carry the same meaning as $o_0 \in O$. Once we have the opportunity to make manuscript revisions, we will promptly correct it.
**Q4. In the AntUMaze, the performance seems to deteriorate after 600k steps, i.e. the curriculum goals seem to get farther from the goal. Any insight on what is happening there?**
**A4.** Thank you for your comment. The performance drop in AntUMaze after 600k steps is mainly caused by low $\alpha$ values after the discovered area covers the final goal area. This problem can be addressed with simple hyperparameter tuning; we adjusted both $\beta$ and $\kappa$ to half their values and obtained better performance. We would like to share the new graph (we kindly ask you to refer to *Fig. 7* in the PDF uploaded with the global response).
**Q5. Since the goal space is trained as a VQ-VAE on individual observations, it will break when the environment is ambiguous, i.e. if the agent would have to traverse rooms, but 2 rooms would look identical, these observations would map onto the same goal and the graph might collapse?**
**A5.** Similar to the baselines and prior curriculum learning studies (OUTPACE, SFL, VDS, DHRL, ALP-GMM, GoalGAN, Skew-Fit, CURROT, PLR...), our algorithm does not consider POMDPs. (CQM inherits the limitations of the prior curriculum learning algorithms on this point; we will gladly include this in the limitations section of the final version.)
Consequently, in scenarios involving indistinguishable rooms, additional modules are necessary to differentiate between them. We think that exploring curriculum learning methods for partially observable environments is a valuable and thought-provoking topic for future work, and we believe that incorporating methodologies that account for past observations could be a promising direction for addressing these problems.
---
Thank you again for your valuable and insightful review.
Please let us know if our responses have addressed your questions. If anything needs further clarification, please do not hesitate to let us know as soon as possible.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses to my questions, and I appreciate the additional results they provided.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for replying to our response. We appreciate again your valuable suggestions and all your efforts during the review process. | Summary: This paper introduces Curriculum RL with Quantized World Model (CQM), a novel approach that leverages a VQ-VAE to create a discretized goal space and constructs a graph structure over it. CQM further proposes a curriculum strategy based on uncertainty and temporal distance to guide the learning process. The authors evaluate the effectiveness of CQM through experiments conducted on variants of PointMazes and AntMazes, which serve as benchmarks for Hierarchical Reinforcement Learning.
Strengths: - This paper is well-written and easy to follow up
- Extensive experiments
Weaknesses: One significant weakness of this paper is the lack of clarity regarding why using the representation from VQ-VAE is suitable for graph-building. It is crucial to consider the temporal distances between nodes to adequately cover the visited state space, especially given the limited number of nodes. The proposed goal representation learning scheme by VQ-VAE does not appear to take into account these temporal distances, raising questions about the efficacy of the proposed representation in creating a meaningful semantic goal space. It would greatly benefit the paper to provide a more thorough explanation and justification for the use of VQ-VAE in graph construction and its ability to capture temporal relationships.
Furthermore, the proposed goal representation learning scheme should be compared to prior work on representation learning, such as NORL [1] or LESSON [2]. A comprehensive comparison would help establish the novelty and effectiveness of the proposed approach in relation to existing methods. Additionally, it remains unclear how the proposed goal representation learning scheme outperforms or differs from the simple approach of utilizing farthest point sampling from the replay buffer, which warrants further investigation and comparison.
Moreover, the paper introduces new hyperparameters (\alpha, \beta, and \kappa) for curriculum goal generation. However, it is unclear how these hyperparameters were determined and how the performance varies when these hyperparameters are varied. Additionally, it would be valuable to compare the costs associated with hyperparameter search between CQM and the baseline methods.
[1] Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine, “Near-Optimal Representation Learning for Hierarchical Reinforcement Learning”, ICLR 2019.
[2] Siyuan Li, Lulu Zheng, Jianhao Wang, Chongjie Zhang, “Learning Subgoal Representations with Slow Dynamics”, ICLR 2021
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How does performance vary when \alpha and \beta are changed? Could you provide insights into the selection process for these hyperparameters? Is it possible to set them automatically?
- Regarding planning over the graph, what happens if an agent is unable to directly reach a specific node w_{i}? Could the system handle a scenario where, after TemporalDist(\phi(w_{i-1}), \phi(w_{i})), the agent conditions its behavior on reaching w_{i+1} instead?
- Could you please explain how the landmarks were sampled from the replay buffer for the "CQM Landmark from Replay Buff" depicted in Figure 7? What criteria or process were used to select the landmarks from the replay buffer?
Minor:
- It appears that there are two different symbols used for \beta, one in Equation 2 and another in \alpha = 1 / \max (\beta + \kappa D_{\text{KL}} (p_{g}^{f} || p_{ag}), 1). I would recommend using distinct symbols to avoid confusion.
- In Figure 7, the y-label representing the distance is partially hidden.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer paJb
We sincerely appreciate your constructive comments. We found them extremely helpful. We prepared our response below:
---
**Q1. Lacks clarity on why using the representation from VQ-VAE is suitable for graph-building. VQ-VAE does not appear to take into account temporal distances.**
**A1.** We appreciate your valuable comment. While we agree that explicitly incorporating temporal distance information could potentially lead to better results, we would like to kindly emphasize that our current approach with VQ-VAE already demonstrates impressive coverage of the environment and consistently outperforms the curriculum learning baselines. Also, even though the temporal distance information is not considered in learning VQ-VAE, it eventually becomes incorporated into the graph construction process when generating edges. (It is worth noting that prior studies utilizing VQ-VAE to quantize state space (DGRL[3], Choreographer[4]) have achieved favorable results without explicitly considering the temporal distance.)
Furthermore, quantizing the high-dimensional state space yields clear advantages for graph construction, notably reducing computational costs, enhancing robustness against Q-value errors, and enabling effective quantification of uncertainty. (More detailed explanations are in **A3.**)
**A1-1.** We present the images of decoded embeddings demonstrating that the VQ-VAE already provides sufficient coverage of the visited state (please refer to *Fig. 3* in the PDF uploaded to the global response).
**A1-2.** Also, we conducted additional experiments to incorporate temporal distance information into VQ-VAE training (*Fig. 4* in the PDF):
To investigate whether temporal distance information leads to practical performance improvements, we added an auxiliary self-supervised loss when learning the latent space of the VQ-VAE (CQM+SSL). The auxiliary loss minimizes the latent-space distance between neighboring observations while enforcing the latent-space distance between temporally distant observations to exceed a margin. We found that simply adding this objective did not improve performance at the current stage, and we conjecture that the VQ-VAE already possesses the capability to construct a graph that appropriately covers the environments. Nevertheless, we agree that testing the effectiveness of injecting temporal distance information into CQM would be valuable.
**Q2. Comparison with prior work on representation learning, such as NORL or LESSON.**
**A2.** Following your suggestion, we replaced the VQ-VAE part of CQM with LESSON [8] (CQM-LESSON) and compared the performance with the original CQM. For this experiment, we modified only the representation learning part while keeping the rest of the modules intact for fair comparisons. (Since the proposed algorithm not only focused on representation learning; it encompasses curriculum goal generation and other parts). The results are provided in the PDF (*Fig. 5*). Through the experiments, we found that the original CQM consistently shows better goal-reaching performances than CQM-LESSON.
**Q3. How the proposed goal representation learning scheme outperforms or differs from the simple approach of utilizing farthest point sampling (FPS) from the replay buffer.**
**A3.** We compared CQM with the simple approach of utilizing farthest point sampling from the replay buffer (CQM-onlyFPS), and the results are provided in the attached PDF (*Fig. 5* and Table 1). Also, we analyzed the factors contributing to the superior performance of origin CQM compared to CQM-onlyFPS:
- Since the computational complexity of FPS is $\Omega(n^2)$, where n is the number of observations in the initial sample of the FPS algorithm, using only FPS introduces significant computational overhead (Table 1).
- Additionally, FPS is not robust to temporal distance estimation error, and this error affects the graph construction. (This problem has also been mentioned in prior work DHRL [7].)
- VQ-VAE enables count-based uncertainty measurement, whereas uncertainty is hard to measure with the onlyFPS option.
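For reference, a minimal greedy FPS sketch (an illustration, not the implementation compared in the paper): each of the k selection rounds scans all n candidates, which is the source of the computational cost discussed above.

```python
def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from the selected set.

    Each of the k rounds scans all n candidates, so the cost grows with
    n * k distance evaluations (quadratic when k is proportional to n)."""
    selected = [points[0]]
    # min squared distance from each candidate to the selected set so far
    min_d2 = [sum((a - b) ** 2 for a, b in zip(p, selected[0])) for p in points]
    for _ in range(k - 1):
        idx = max(range(len(points)), key=lambda i: min_d2[i])
        selected.append(points[idx])
        for i, p in enumerate(points):
            d2 = sum((a - b) ** 2 for a, b in zip(p, points[idx]))
            min_d2[i] = min(min_d2[i], d2)
    return selected
```

The sketch also makes the robustness point concrete: each selection depends on raw pairwise distances, so an error in the (temporal) distance estimate directly shifts which landmarks get chosen.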
**Q4. Regarding the hyperparameters ($\alpha$, $\beta$, and $\kappa$)**
**A4.** Our approach utilizes the mixture distribution of curriculum goals following MEGA[5] and thus, we also utilize a heuristic way to determine it. We conducted a grid search for hyperparameter tuning. *Throughout this process, we made fewer than 20 attempts per environment.* Also, the hyperparameter values found for PointNMaze were directly used without modification for all state-based Point environments, and the values for AntUMaze were directly used for all remaining environments. We provide the performance of CQM under varying hyperparameters in the attached PDF (*Fig. 6*).
**Q5. What happens if an agent is unable to directly reach a specific node $w_{i}$?**
**A5.** If the agent could not achieve the current waypoint $w_{i}$ even after spending TemporalDist($\psi(w_{i-1}), \psi(w_{i})$) timestep, CQM updates the current tracking waypoint to the next waypoint $w_{i+1}$ (as shown in line 12 of Algorithm 1 in Appendix A).
**Q6. How the landmarks were sampled from the replay buffer for the "CQM Landmark from Replay Buff" depicted in Figure 7?**
**A6**. For "CQM Landmark from Replay Buff" depicted in Figure 7 (ablation study graph), we randomly sampled a total of 1000 states from the replay buffer and passed them through the VQ-VAE to quantize them. States that fall into the same code are merged into a single state, and the decoded states after this process form the nodes of the graph.
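A schematic version of this merge step, with nearest-neighbor quantization standing in for the learned VQ-VAE encoder; all names and the toy codebook are illustrative assumptions.

```python
def quantize(state, codebook):
    """Index of the nearest codebook vector (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda k: sum((s - c) ** 2 for s, c in zip(state, codebook[k])))

def merge_by_code(states, codebook):
    """Keep one representative per code: sampled states that quantize to the
    same code collapse into a single graph node."""
    nodes = {}
    for s in states:
        nodes.setdefault(quantize(s, codebook), s)  # first state per code wins
    return nodes
```

However many states are sampled, the number of resulting nodes is bounded by the codebook size, which is what keeps the ablation's graph comparable to the original CQM graph.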
---
Thank you again for your valuable and insightful review.
Please let us know if our responses have addressed your questions. If anything needs further clarification, please do not hesitate to let us know as soon as possible.
---
Rebuttal Comment 1.1:
Title: My concern has been addressed
Comment: Thank you for the response that addresses my concern. Please add the experimental results and discussion into the final version. I would like to raise the score.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for replying to our response. We are happy to hear that our response addressed your concern.
Following your suggestion, we will discuss this in the final version. We thank you again for your valuable suggestions and all your efforts during the review process. | Rebuttal 1:
Rebuttal:
Dear reviewers,
We sincerely appreciate your constructive and insightful comments.
We have prepared our responses at the bottom of each review you provided. This global response includes:
- [Additional Results] In this global response, **we attached a PDF containing the experimental results** for the responses to each reviewer. *(The button labeled "PDF" is at the bottom of this message)*
- [Optional] Also, In case you encounter difficulty in quickly recapping our pipeline, we have included a brief overview of the key components of our algorithm below.
Thank you again for your valuable review.
If anything needs further clarification, please let us know as soon as possible.
---
**G-A1. How CQM works?**
(a) How is data collected?
- CQM starts with empty replay buffers
- After performing a step in the environment, the obtained transition information is stored in the replay buffer. (line 18 in Algorithm 1. (Appendix A.))
(b) How is the goal space learned?
- Sample batch from replay buffer (line 26 in Algorithm 1. (Appendix A.))
- Train VQ-VAE with the batch via Eq. 2. (line 27 in Algorithm 1. (Appendix A.))
(c) How is the graph constructed?
- Decode each vector embeddings in VQ-Dictionary (the decoded embeddings are the landmarks).
- Using Eq. 3., connect the landmarks with the distance below the cutoff threshold.
(d) How the agent gets the curriculum goal
- Calculate the uncertainty of each node (landmarks) in the graph (by Eq. 5).
- Calculate the distance of the landmarks from the initial area (by Eq. 4).
- Sample a curriculum goal which is considered temporally distant and uncertain, among the nodes (landmarks) in the graph.
- Based on the $\alpha$ value from (f), the decision is made whether to provide the agent with a curriculum goal or a final goal. Then provide the selected goal to the agent.
(e) How does the agent utilize the graph?
- After the agent obtains the curriculum goal, it can start exploring the environment.
- CQM first finds a sequence of waypoints to achieve the curriculum goal (utilizing Dijkstra’s algorithm).
- The agent is guided to achieve each waypoint, and finally, tries to achieve the curriculum goal.
(f) How the curriculum goal converges to the final (desired) goal?
- At the beginning of the learning, we have some samples of the final goals (e.g. the picture taken at the end of the maze.)
- To get $P_{g^f}$, fit a kernel density estimator (KDE) to estimate the distribution of the final goal.
- To get $P_{ag}$, fit a kernel density estimator (KDE) to estimate the distribution of explored area.
- Calculate KL divergence, and then, calculate $\alpha$ in Eq. 7.
- Utilize the $\alpha$ when we get the curriculum goal (d)
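The goal-selection logic of steps (d) and (f) above can be sketched as follows; the additive scoring and the alpha-based random choice are simplifications of the paper's Eqs. 6-7, and all names are illustrative.

```python
import random

def pick_goal(landmarks, uncertainty, dist_from_start, final_goal, alpha, rng=None):
    """Score each landmark by uncertainty + temporal distance and pick the
    argmax as the curriculum goal; with probability alpha, hand the agent
    the final goal instead so the curriculum can converge to it."""
    rng = rng or random.Random(0)
    scores = [u + d for u, d in zip(uncertainty, dist_from_start)]
    curriculum = landmarks[max(range(len(landmarks)), key=lambda i: scores[i])]
    return final_goal if rng.random() < alpha else curriculum
```

Early in training alpha is near 0 and the agent practices distant, uncertain landmarks; as the explored area covers the final goal distribution, alpha rises and the final goal is handed out more often.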
**G-A2. References.**
[1] Ozair, Sherjil, et al. "Vector quantized models for planning." *international conference on machine learning*. PMLR, 2021.
[2] Kim, Seongun, Kyowoon Lee, and Jaesik Choi. "Variational Curriculum Reinforcement Learning for Unsupervised Discovery of Skills." (2023).
[3] Islam, Riashat, et al. "Discrete factorial representations as an abstraction for goal conditioned reinforcement learning." *arXiv preprint arXiv:2211.00247* (2022).
[4] Mazzaglia, Pietro, et al. "Choreographer: Learning and adapting skills in imagination." *arXiv preprint arXiv:2211.13350* (2022).
[5] Pitis, Silviu, et al. "Maximum entropy gain exploration for long horizon multi-goal reinforcement learning." *International Conference on Machine Learning*. PMLR, 2020.
[6] Zhang, Lunjun, Ge Yang, and Bradly C. Stadie. "World model as a graph: Learning latent landmarks for planning." *International Conference on Machine Learning*. PMLR, 2021.
[7] Lee, Seungjae, et al. "DHRL: A Graph-Based Approach for Long-Horizon and Sparse Hierarchical Reinforcement Learning." *Advances in Neural Information Processing Systems* 35 (2022): 13668-13678.
[8] Li, Siyuan, et al. "Learning subgoal representations with slow dynamics." *International Conference on Learning Representations*. 2020.
Pdf: /pdf/126c6c703dda8134573c2c361a53e829491b6d11.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a new curriculum reinforcement learning method, CQM, that uses a VQ-VAE to learn a quantized goal space, constructs a graph on the quantized goals to propose curriculum goals by distance, and learns a goal-conditioned policy.
Strengths: - The proposed method pioneers "auto" curriculum RL, learning the goal space and proposing goals entirely by itself.
- The paper is clearly written and easy to follow.
- Detailed studies on various environments are presented to demonstrate its effectiveness.
- Code is provided for reproduction.
Weaknesses: - $\hat{o}_t$ in Eq. 2 should be $o_t$.
- [Vector Quantized Models for Planning](https://arxiv.org/pdf/2106.04615.pdf) has used VQ-VAE in RL. They did not explicitly generate curricula but it's very similar to the proposed method. The authors should compare and explain the difference.
- $Q(l_i, a, l_j)$ in Eq. 3 is not explained. I guess it's an expectation over the replay buffer?
- The proposed mazes are long but they mostly don't have branched dead ends, which contain unseen states but do not lead to the actual goal. [This work](https://openreview.net/pdf?id=U4r9JNyNZ7) and its experiments in more complicated mazes should be compared to.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Eq. 6 does not have a weight factor to balance the two terms. How are the two terms distributed in practice? Will one dominate the other?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations have been adequately addressed.
> After rebuttal
Some of my concerns have been addressed. I appreciate the authors' efforts, but the maze in either their submission or rebuttal doesn't match the complexity of the related work I requested for comparison. Thus I'm holding my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer G8ip
We sincerely appreciate your constructive and insightful comments. We found them extremely helpful. We prepared our response below:
---
**Q1. Differences between ‘Vector Quantized Models for Planning’ [1] and our research.**
**A1.** Thank you for your helpful suggestions on the need for a comparison with the prior work. In this answer, we want to emphasize that there are very few similarities between our research (CQM) and ‘Vector Quantized Models for Planning (VQM)’ [1] except for the use of VQ-VAE.
1. Our algorithm **does NOT require a dataset to pre-train VQ-VAE before learning** an agent, while VQM needs it [1].
- Our algorithm enables the RL agent to start learning from an initial distribution without any prior information or dataset about the environment. To do so, we simultaneously learn VQ-VAE and the RL agent with a curriculum.
- On the other hand, the mentioned algorithm (VQM) [1] is divided into two stages, where in stage 1 it pre-trains VQ-VAE on a pre-existing dataset. VQM amassed 101,325,000 episodes to gather the dataset for pre-training VQ-VAE in the DeepMind Lab environment (Section 5.2.1 in the VQM paper). To do so, they randomly placed agents in the environment and utilized the A2C algorithm. *It is crucial to consider how strong an assumption it is to possess such pre-collected data from diverse locations in the environment even before initiating the main policy.*
2. Ours performs goal-conditioned decision-making, while VQM [1] does not.
3. Ours performs curriculum learning: we generate the goals autonomously.
4. Our work is compatible with general off-the-shelf RL algorithms, while VQM [1] is based on MuZero (which incorporates MCTS).
5. VQM [1] corresponds to single-level RL, while ours corresponds to multi-level RL.
In VQM, planning refers to determining each individual action. In contrast, planning in our algorithm carries a higher-level meaning, where nodes are connected by multiple transitions. Thus, the agents execute multiple goal-conditioned actions to facilitate movement between graph nodes in CQM. In other words, our approach can be seen as a hierarchical structure (graph at the high level, and RL agent at the low level).
To the best of the authors' knowledge, ours is the first VQ-VAE work capable of exploration without relying on pre-training datasets or an extra exploration policy.
**Q2. This work [2] and its experiments in more complicated mazes should be compared to (e.g. branched dead ends).**
**A2-1.** We appreciate your comment. However, we would like to note that our experiments already include branched dead ends (e.g., Point3WayMaze and Ant2WayMaze; the maps of each environment are visualized in the Appendix of the supplementary material).
**A2-2.** Following your interesting suggestion, we created mazes (*Fig 1.* in the PDF attached to the global response) with a similar geometric shape to the maze used in the prior work [2] and evaluated our algorithm in it. Although the prior work was performed on a low-dimensional state-based environment, we took it a step further and expanded the scope to an image-based setup, thus exploring a more complex and challenging scenario.
The results are shown in *Fig. 2.* in the PDF. Interestingly, even without further hyperparameter tuning from another setting (PointSpiralMaze-Viz), we achieved remarkable success in this extended setting. We believe that these results demonstrate the capability of CQM to be applied in even more intricate environments.
**Q3. Explanation of $Q(l_i, a, l_j)$ in Eq. 3.**
**A3.** Throughout our manuscript, the Q-function follows the notation provided below.
$Q(\mathrm{current\ state}, \mathrm{action},\mathrm{goal})$
Thus, $Q(l_i, a, l_j)$ indicates the goal-conditioned state-action value where the state, action, and goal are $l_i$, $a$, and $l_j$, respectively. (We will incorporate this explanation into the final version.)
**Q4. Questions and minor comments**
**A4-1.** Regarding $\hat{o}_t$: We appreciate your comment. Once we have the opportunity to make manuscript revisions, we will promptly correct it.
**A4-2.** Regarding the weight factor to balance the two terms in Eq. 6.
Actually, there are various ways to implement the curriculum goal sampling part.
- One approach is to implement it by adjusting the balance between the two using a weight factor. (In this case, finding a balance between the two is important.)
- Selecting the curriculum goal based on one criterion using top-K filtering and then sampling based on the remaining criterion also yields good performance.
We have found that both of these approaches can yield successful results with appropriate hyperparameters. While the equation in the manuscript corresponds to the former approach, we found that the latter approach tends to be more robust to the hyperparameters. We will include the weight factor & implementation details in the Appendix of the final version.
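The two implementation variants described above could be sketched as follows; the function names, the particular criteria arrays, and the softmax-style sampling are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weighted(criterion_a, criterion_b, w=0.5):
    # Variant 1: balance the two criteria with an explicit weight factor
    score = w * criterion_a + (1.0 - w) * criterion_b
    probs = np.exp(score - score.max())
    probs /= probs.sum()
    return int(rng.choice(len(score), p=probs))

def sample_topk(criterion_a, criterion_b, k=5):
    # Variant 2: top-K filter on one criterion, then sample by the other
    topk = np.argsort(criterion_b)[-k:]
    probs = np.exp(criterion_a[topk] - criterion_a[topk].max())
    probs /= probs.sum()
    return int(topk[rng.choice(k, p=probs)])
```

In the top-K variant, only the weight of the filtering criterion relative to `k` matters, which may explain why it is more robust to hyperparameters.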
---
Thank you again for your valuable and insightful review.
Please let us know if our responses have addressed your questions. If anything needs further clarification, please do not hesitate to let us know as soon as possible.
---
Rebuttal Comment 1.1:
Title: Dear Reviewer G8ip
Comment: We hope this message finds you well.
We appreciate the time you have taken to review our work and consider the points we raised in our rebuttal. We hope that our response has provided a more comprehensive understanding of our research and its potential contributions to the field.
Please let us know if you need any further clarification. We appreciate again your valuable suggestions and all your efforts during the review process.
Thank you for your consideration.
CQM Authors. | null | null | null | null | null | null |
Tracr: Compiled Transformers as a Laboratory for Interpretability | Accept (spotlight) | Summary: This submission builds on the RASP language proposed by Weiss et al. It proposes a way to compile a RASP program into real Transformer weights. It also includes a case study showing how Tracr can be used to study superposition, a phenomenon widely known in the mechanistic interpretability field. The authors claim that Tracr can serve as a ground truth for interpretability research.
Strengths: This construction result is of evident significance to the community, considering that the Transformer is the dominant architecture these days. It furthers our theoretical understanding of how a Transformer can implement various algorithms internally. I hope this can inspire discussion of what can and what **cannot** be expressed in a specified architecture.
Weaknesses: + On the first use case: it studies a phenomenon not even well defined in the literature, called superposition. I wish to see a more rigorous framing of what is really studied, e.g., compressed sensing. The results in Section 5 aren't impressive to me since they aren't an organic combination of compression and Tracr.
+ On the second use case: it feels to me an overclaim that Tracr could serve as a _ground-truth_ for evaluating interpretability methods. This is an understandable aspiration, but such idealism isn't correct. An interpretability algorithm can discover very different underlying algorithms, and if the discovered algorithm differs from the Tracr compilation, only the Tracr compilation can be challenged.
+ Summing up, I acknowledge this is an important theoretical result, but framing it as interpretability or anything related to real Transformer behaviour sounds far-fetched.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: + On Line 209. The choice of $W^\top$ in Elhage et al. to recover the compressed feature actually makes more sense than in the case of this submission. They force $W$ to be orthonormal so that $WW^\top=I$ holds, which is believable if it happens since their loss function encourages it. In Tracr's case, I cannot see why the residual added into the stream is supposed to be the same as the input.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: + This is indeed a good pedagogical tool for the Transformer architecture. However, it's worth highlighting in the paper that LayerNorm and similar operations such as RMSNorm are left out, so as not to mislead readers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > On the second use case: It feels to me an overclaiming that Tracr could serve as a ground-truth for evaluating interpretability methods. This is an understandable imagination but such idealism isn't correct. An interpretability algorithm can discover very different underlining algorithms and if the discovered is different from Tracr compilation; only the Tracr is to be challenged.
This might be a misunderstanding of what we mean by ground-truth. We do not mean that if we can compile an algorithm in Tracr, this algorithm is necessarily the correct interpretation for a learned transformer model completing the same task. Say we compile a model using Tracr to sort a sequence of numbers and train a transformer to do the same task. You are correct that the trained model might use a different algorithm from the compiled model. The advantage of the compiled model is that if we run an interpretability method on it, we know exactly the correct interpretation for _that compiled model_ – the RASP program serves as a ground-truth. We can then see how that ground truth is reflected in the interpretability method.
> On Line 209. The choice of in Elhage et. al. to recover the compressed feature actually make more sense than in the case of this submission. They are forcing to be orthonormal so that stands, which is believable if it happens since their loss function is encouraging that. In Tracr's case, I cannot see why the residual added into the stream is supposed to be the same as the input.
You are right; there is no principled reason why the model needs to use the same embedding at each point in the model. We choose the same embedding matrix throughout the model primarily for simplicity, but also because this is how uncompressed Tracr models are structured. As mentioned in response to Reviewer 767m, we found our compression procedure to be a particularly good tradeoff between being simple and producing more efficient and natural models. However, it is worth noting that recent empirical evidence from real transformers suggests that the input embedding matrix is still a meaningful interpretation of the residual stream at other points of the model (e.g., see [1]).
> However, it's worth highlighting in the paper that LayerNorm or any similar operations, RMS Norm are left out, so to not mislead readers.
Thanks, we will make sure to highlight this point.
**References**
[1] Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. 2023. Analyzing Transformers in Embedding Space. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
---
Rebuttal Comment 1.1:
Title: Thank the authors for the response
Comment: I appreciate the reply. Here is a remaining concern I wish to hear your thoughts on: you mentioned that this is how "uncompressed Tracr models are structured"; how do you think about the possibility that there is an unbridgeable gap between real Transformer behavior and the Tracr (post-compression) compiled Transformer? If we apply MI techniques to a Tracr Transformer and successfully rediscover the Tracr algorithm, does that mean this MI technique can generalize to real Transformers? I.e., should we therefore trust any conclusion this MI technique reaches on real Transformers?
To be clear, by no means am I against the acceptance or any award of this paper. I am just curious about how you think about this.
---
Reply to Comment 1.1.1:
Comment: We agree that because of structural differences between compiled and real transformers, we should be careful when interpreting results obtained using Tracr models. At present, as we mention in the paper, we would suggest seeing them as a minimum bar for methods to pass rather than a full validation.
If we find MI methods can pass this bar by unfairly exploiting structure specific to Tracr models that we don't expect in real models, future work can add additional layers of obfuscation and complexity to the compiler which would make it a more rigorous test.
However, we think that even with more advanced compilers, compiled models are always going to be to some extent artificial, so such evaluations will never fully replace evaluations on real models. We see evaluations on compiled models and on real models as complementary approaches with different strengths and weaknesses. | Summary: The paper presents Tracr, which is a compiler from RASP programs (a language designed to showcase a possible computational model used by Transformers) to Transformer architectures and weights. The paper first details the operation of the compiler, then discusses examples of several simple RASP programs and their respective Tracr-generated Transformer models. Finally, the paper experiments with compressing the dimensionality of these models, exploring an emerging hypothesis about superposition of feature representation in Transformers.
Strengths: * The paper addresses a significant gap in prior work, which is that while RASP purports to encode the computational model for Transformers, RASP programs are not directly compilable to Transformer weights. This is an active and interesting area of research, and providing such a compiler is likely to significantly impact work in this domain.
* Overall, the paper is very clearly written
* The examples given (Section 4) strongly demonstrate the success of the technique, and broadly help to explain RASP's computational model as well
* Section 5 is an interesting study on the effects of compression, though (as noted in Weaknesses below) I am not sure about the relevance or significance of these results.
Weaknesses: * The paper does not sufficiently discuss the accuracy of the compiled programs (Section 3). Line 49 says "Any elementwise operations in RASP can be approximately computed by an MLP layer", and line 119 says that the MLPs and attention blocks "approximate arbitrary functions". Having error in the compiled programs is a perfectly reasonable limitation, but it would be very good to know the fidelity of these approximations.
* The paper assumes a high level of familiarity with prior work, which could be inlined into the paper.
* Lines 59-62: I'm not familiar with Elhage et al. (2021)'s concept of a residual stream, but this seems to be a core concept in the paper. It would help to clarify these concepts and definitions in the paper. Appendix B.2 provides some information here, but this is still a bit insufficient (a "residual stream" is never actually defined).
* In Section 3, it could also help to include a figure with RASP's syntax to understand the specific functions that must be compiled (seemingly, `select`, `selector_width`, `aggregate`, and the element-wise operations; `selector_width` isn't mentioned until Section 4.2), and the specific operations that are not yet supported by Tracr (e.g., those on Line 187). Again, Appendix B.3 helps, but is still not quite sufficient (e.g., it also does not define `selector_width`).
* Section 5 reads as an entirely different paper, with its own significant limitations. As the authors note, "even with a fairly restrictive compression setup, compressed models may not stay faithful to the original RASP program". This is certainly an interesting finding (and has ramifications on compression techniques broadly). But, given that these compressed Transformers neither follow the original RASP program, nor is it clear that this the behavior observed when compressing non-Tracr Transformers, it's not clear what to take away from these results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How precisely do the compiled Transformers implement the original algorithms?
* Regarding "Disallow[ing] arbitrary selector combinations" in Appendices C and G: does this restriction reduce the set of programs that it is possible to represent with RASP?
* This is more of a curiosity than a criticism, but in Section 5.1 when projecting out of the compressed space, is there a reason to apply W^T rather than, say, the pseudoinverse of W?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately discuss limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > How precisely do the compiled Transformers implement the original algorithms?
Tracr can compile any algorithm to a finite model that implements it exactly, i.e., with zero approximation error. This is because we know the full (discrete) input vocabulary for the model at compile-time. So, while some functions are implemented “approximately” by MLPs, we ensure that for all values that can occur for inputs from the vocabulary, the MLP approximation is exact (this is also briefly discussed in Appendix E.1). We will add a sentence clarifying this in the main body.
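A toy illustration of why finiteness gives exactness (a simplified sketch with one-hot inputs, not Tracr's actual construction): when every possible input is known at compile time, an MLP can act as an exact lookup table.

```python
import numpy as np

vocab = [0, 1, 2, 3]
f = lambda t: t * t  # an arbitrary elementwise function, e.g. squaring

# Output weights store f's value for each vocabulary entry: a lookup table.
W_out = np.array([[float(f(t))] for t in vocab])  # shape (|V|, 1)

def lookup_mlp(one_hot):
    # ReLU is the identity on one-hot inputs, so the readout is exact.
    return np.maximum(one_hot, 0.0) @ W_out

for i, t in enumerate(vocab):
    x = np.eye(len(vocab))[i]
    assert lookup_mlp(x)[0] == f(t)  # exact, not approximate
```

The "approximation" is thus only approximate off-vocabulary; on the finite set of values that can actually occur, the output matches the target function exactly.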
> Regarding "Disallow[ing] arbitrary selector combinations" in Appendices C and G: does this restriction reduce the set of programs that it is possible to represent with RASP?
Strictly, yes, this reduces the set of programs that we can compile. For example, without selector combinations, we cannot compile the sort program by Weiss et al. 2021 because it uses a combined selector to handle duplicates in the input:
```
select(keys, keys, <) or (select(keys, keys, ==) and select(indices, indices, <))
```
However, any such program can be refactored into an equivalent RASP program that we can compile. We describe this procedure at the end of Appendix G, which also discusses the technical reasons for disallowing arbitrary selector combinations. In brief: this is primarily because combined selectors a) produce large and inefficient models, and b) would break the correspondence between selectors and attention heads. We therefore leave doing this refactoring to users to avoid surprises in compilation.
In practice, we have not found that this restriction introduces practical difficulties.
Thinking about this question again helped us clarify our thinking on this issue; thank you – we will update the paper and the appendix to reflect that.
> This is more of a curiosity than a criticism, but in Section 5.1 when projecting out of the compressed space, is there a reason to apply W^T rather than say the psuedoinverse of W?
We made this choice primarily to be consistent with Elhage et al. 2021. Using the pseudoinverse of W would also be a reasonable choice.
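A small numerical sketch of the two choices (the dimensions and random maps here are arbitrary assumptions): for a generic compression map $W$, the transpose and the pseudoinverse give different reconstructions, but they coincide when $W$ has orthonormal rows, as in the Elhage et al. setting.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 16, 8
W = rng.standard_normal((d, D)) / np.sqrt(D)  # generic d x D compression map
x = rng.standard_normal(D)
z = W @ x                        # compressed residual-stream vector
x_T = W.T @ z                    # recovery via the transpose
x_pinv = np.linalg.pinv(W) @ z   # recovery via the Moore-Penrose pseudoinverse

# With orthonormal rows (W W^T = I), the two recoveries coincide.
Q, _ = np.linalg.qr(rng.standard_normal((D, d)))
Wo = Q.T  # d x D with orthonormal rows
assert np.allclose(Wo @ Wo.T, np.eye(d))
assert np.allclose(Wo.T @ (Wo @ x), np.linalg.pinv(Wo) @ (Wo @ x))
```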
> The paper assumes a high level of familiarity with prior work, which could be inlined into the paper.
Thanks for flagging this! We aim to address the insufficient discussion of necessary background by moving Appendix B.2 to a dedicated background section and improving it. We will also improve our discussion of RASP syntax to make the paper more self-contained.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response. The authors have given strong answers to my remaining questions about the paper, so I'm raising my score by a point. | Summary: This paper presents Tracr, a compiler that can take 'program' specifications and translate them into GPT (decoder only) style transformer models. Tracr is built on the RASP 'programming' language introduced by Weiss et. al., and translates a RASP program into model weights via an intermediate representation termed craft. The entire compilation step occurs first by translating a RASP program into a computational graph where operations in the program correspond to nodes in the graph. The RASP language is equipped with: sequence operations and selectors, and instructions: element-wise and select-and-aggregate operations, which correspond to the MLP layers and attention operation respectively. The nodes in the graph are then translated to MLPs and attention heads which are then assembled into a complete model. The authors show how to use Tracr in two examples: sorting and token counting. Lastly, they use RASP to examine the influence of various model compression schemes on the model's logic.
Strengths: **Originality**\
The key previous work is the RASP paper by Weiss et al.; however, this work significantly expands the scope of that paper into a usable compiler for assessing the effectiveness of interpretability methods on decoder-only transformer models. I find the series of choices made in the program translation process intuitive. This paper presents an intriguing new tool that will be useful for evaluating interpretability methods.
**Quality/Clarity**\
The paper is quite well written and the figures help demonstrate the point of the work. The use of the is_x example was very helpful in following the compilation steps that tracr implements. This work is of high quality and delivers a useful tool for the purpose for which it sets out.
**Significance**\
It is hard to judge the significance of a paper, but I'd hazard a guess that this paper opens up a new line of work for testing mechanistic interpretability methods, an increasingly popular approach to reverse-engineering large-scale models. Personally, I am quite skeptical of most research in mechanistic interpretability as I think it is susceptible to just reading tea leaves, so the tracr compiler could end up taking the place of unit tests for new mechanistic interpretability schemes. Imagine compiling a program to a set of weights and then asking whether some new mechanistic interpretability technique can identify the underlying mechanism. Overall, this paper has interesting ideas that can likely be adapted in future work.
Weaknesses: Overall, I think this work is an important one, and opens several questions that could be follow-up work. None of the weaknesses I discuss below are disqualifying.
I'll start with my major qualms with this work:
**How different are tracr model weights from those that gradient descent learns?**: The model weights that one gets from tracr are going to be quite different from those that you get when you collect data from the RASP function and simply learn a model on that data. And I just don't mean that the exact values are different, but the weights could encode qualitatively different behaviors. It seems to me that based on the rules here, tracr models will have sparser weight matrices than traditional models learned via SGD. Since interpreting a sparse model is easier than a dense one, do the authors see problems with tracr models being quite different from learned ones?
**Details**: As expected, a paper like this is packed with insights, and the details here actually matter. Specifically, I am referring to the heuristics used to 1) decide values that an s-op will take, 2) combining layers. Of course one has to make concrete choices here, but it is a bit unclear to me how these rules should impact the 'quality'/type of function you get out of tracr. Similarly, the last step, a crucial one, is also under explained in the main draft. Specifically, it is unclear what the authors mean by "factor", and exactly how they arrive at the specific values for these weight matrices. Some of this information is discussed in the appendix.
**The interpretability assessment pitch**: A main motivation of this work is to test interpretability methods. However, this paper stops short of that goal. It would've been amazing for the authors to apply a mechanistic interpretability to say a dyck example to see whether these methods can recover such logic. Even further, while the tool is an important component of testing an interpretability method, it is unclear to me how you would actually use it to do so. Section 5 attempts to do this, but not in a straightforward manner.
**Minor Issues**
1) I had to go read Elhage et al. to understand what a 'residual stream' is. Is this interpretation now standard in the literature? Section B.2 in the appendix contains important background notation.
2) Defn of mechanistic interpretability in lines 54-55 is circular. What is a mechanistic explanation? I think the second sentence is actually the real definition?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weaknesses section for a longer discussion of the key questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: There is extensive discussion of various limitations in the Appendix and Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> How different are tracr model weights from those that gradient descent learns?
Your description is accurate: Tracr models are significantly sparser than real trained transformers, and, in particular, small Tracr models tend to be easy to interpret. There are straightforward ways to "obfuscate" the compiled models, e.g., by rotating their representations. Doing this makes the models significantly harder to understand. However, as you mentioned, Tracr models' weights look qualitatively different from those of trained models. This is what we mean by Tracr models being "unrealistic" in Appendix A.2 when discussing limitations; we will make sure to clarify this in the main paper.
This observation motivates the second part of our paper on compressing Tracr models using SGD. We find that this compression procedure makes the models significantly more realistic while maintaining the "ground truth" computation. There is some trade-off between models being natural and models having a ground truth interpretation (e.g. see Section 5.2). But we primarily see Tracr models filling the role of "unit tests" for mechanistic interpretability (Thanks for this metaphor!). So, we think it is acceptable to sacrifice some amount of "naturalness" for having the ground truth interpretation.
> Specifically, it is unclear what the authors mean by "factor", and exactly how they arrive at the specific values for these weight matrices.
We assume you are referring to the sentence “we then factor the W_QK and W_OV matrices into separate W_q, W_k, W_o, W_v weight matrices” in lines 142 and 143.
In our implementation, we simply set $W_q = W_{QK}$ and $W_k = I$ as an identity matrix and $W_o = W_{OV}$ and $W_v = I$ as an identity matrix. However, we might use more sophisticated factorisations in the future. We agree this deserves some more details in the main paper.
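A quick numerical check of this factorisation (a row-vector activation convention and arbitrary dimensions are assumed): with $W_q = W_{QK}$ and $W_k = I$, the factored query-key score reproduces the combined bilinear score exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8
W_QK = rng.standard_normal((D, D))    # combined query-key matrix
W_q, W_k = W_QK, np.eye(D)            # trivial factorisation as described
x_q, x_k = rng.standard_normal(D), rng.standard_normal(D)

score_combined = x_q @ W_QK @ x_k            # bilinear form on the stream
score_factored = (x_q @ W_q) @ (x_k @ W_k)   # query dot key
assert np.allclose(score_combined, score_factored)
# The same identity holds for W_o = W_OV with W_v = I.
```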
> I had to go read Elhage et. al. to understand what a 'residual stream' is. Is this interpretation now standard in the literature? Section B.2 in the appendix is actually important background notation.
We agree that Section B.2. contains important background, and we aim to move it to the main paper when updating the paper.
> The interpretability assessment pitch: A main motivation of this work is to test interpretability methods. However, this paper stops short of that goal. It would've been amazing for the authors to apply a mechanistic interpretability to say a dyck example to see whether these methods can recover such logic. Even further, while the tool is an important component of testing an interpretability method, it is unclear to me how you would actually use it to do so. Section 5 attempts to do this, but not in a straightforward manner.
We agree that testing an interpretability method with a Tracr model would make the impact much more apparent. We are addressing this in follow-up work, but we felt it would have expanded the scope of this paper to the point that the contributions would have been harder to communicate clearly.
---
Rebuttal Comment 1.1:
Title: Concerns addressed
Comment: Hello,
I have read the rebuttal, and I am satisfied with the response. I think the tracr library is a nice contribution, and should lead to important insights on the effectiveness of mechanistic interpretability methods. | Summary: In this submission, the authors present Tracr, a compiler for RASP (a DSL for transformer computations) into transformer weights. The authors introduce their compilation approach, which includes an “assembly” language called Craft, which is used to represent the transformer weights agnostic to explicit implementations. They provide example models produced by Tracr (counting tokens & sorting, and refer to more examples in the repo). Most interestingly, they provide a way to compress these compiled models using SGD to study more involved topics in transformer explainability research, such as superposition.
Strengths: - The compilation (section 3) is exceptionally well explained
- Their approach is especially powerful as a didactic tool
- The approach can serve as a guide for explainability research, and in providing a “ground truth” for explainability research
- The open source implementation can serve as a starting point for many researchers interested in explainability of transformer models
- Section 5 on compression is interesting and circumvents some of the limitations (otherwise it would be impossible to do research based on the compiled models on more involved concepts)
Weaknesses: - Examples feel toyish; the limited size of programs (see their limitation section)
- That the compiled models could serve as a “ground truth” has to be taken with a grain of salt: Trained transformers, of course, optimize towards a different objective and solve a different problem.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - How do the authors try to approach the mentioned limitations (in their last section) in future work? What is the path from here?
- Can the authors elaborate more on the choice of compression (page 6, line 224+)? Can you provide more detail how this would affect the assumption that it can serve as a “ground truth”?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See their limitations section and Appendix A2. I feel that the authors have honestly addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > How do the authors try to approach the mentioned limitations (in their last section) in future work? What is the path from here?
There are technical limitations of Tracr that mostly come from design choices we made for simplicity (e.g., Tracr models embed different variables orthogonally, which can be quite inefficient). These limitations can be overcome with more sophisticated features. We plan to develop such features as they are needed for specific applications and we will encourage the research and open-source communities to contribute to Tracr.
The RASP language has some limitations, such as using only binary attention patterns. It is not fully understood which of these limitations are most severe, and we are excited about recent and ongoing work that studies the expressivity of different models of transformer computations (e.g., for binary attention patterns, Merrill et al. 2022 is relevant). In future versions of Tracr, we might extend RASP to remove some of these limitations.
Finally, there are fundamental limitations to compiling models. Of course, we will never compile a fully-fledged language model with Tracr. Instead, we think compiled models will be useful as an intermediate step between analysing toy models and real learned models.
For a more detailed discussion of Tracr’s limitations and possible future work, see Appendix A.2.
> Can the authors elaborate more on the choice of compression (page 6, line 224+)? Can you provide more detail how this would affect the assumption that it can serve as a “ground truth”?
We aimed to find a compression procedure that:
1. makes the compiled models more efficient and realistic;
2. maintains the “ground truth” property of the compiled models, i.e., that we can identify which part of the model implements which computation;
3. is conceptually simple and natural.
Requirements (1) and (3) suggest a gradient-descent-based compression is a natural choice. To achieve (2), we freeze most of the compiled weights and only learn a new embedding matrix. Limiting the compression to a single shared matrix W ensures that the model still has the same structure. In most cases, we still fully understand the compressed models after investigating the learned W (e.g. see Section 5.2), which suggests that the compressed models are still useful as a ground truth. However, sometimes the learned compression can be quite complex, making the model more difficult to use as a ground truth (e.g. see Section 5.3).
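To make the weight-freezing setup concrete, here is a minimal PyTorch-style sketch of the idea (the names, shapes, and the stand-in layer are illustrative, not Tracr's actual API): everything compiled stays frozen, and only the shared matrix W is trained.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not Tracr's actual API): a stand-in for the compiled
# computation is frozen, and only the shared projection W is learned by SGD.
D_big, d_small = 16, 6

compiled_layer = nn.Linear(D_big, D_big)   # stand-in for compiled weights
for p in compiled_layer.parameters():
    p.requires_grad_(False)                # frozen: "ground truth" preserved

W = nn.Parameter(0.1 * torch.randn(d_small, D_big))  # the only trainable part

def compressed_forward(x_small):
    # decompress with W, run the frozen compiled computation, compress again
    return compiled_layer(x_small @ W) @ W.T

optimizer = torch.optim.Adam([W], lr=1e-3)  # gradients reach only W
```

Because everything except W is frozen, the correspondence between model components and computations is preserved, which is the sense in which requirement (2) above is maintained.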
We could try many other possible compression procedures here (and we might explore some in the future). But we think the proposed approach is a good trade-off between the three requirements and a natural starting point for studying compressing the compiled models. There is a fundamental trade-off between having a ground truth understanding of the model and having a very natural-looking model. While we provide a preliminary investigation of this in the paper, there is undoubtedly much room for future work here.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and for explaining the decisions; that made it clearer. I don't have any more questions. | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful comments. We are glad the reviewers found our paper clear and appreciated the contribution of Tracr to studying the computational models of transformers (Reviewers XH5L, ACvV), to advancing interpretability research (Reviewers 767m, PkEE), and as a didactic tool (Reviewer 767m).
We respond to any open questions in the individual responses to the reviewers, and we hope this addresses any remaining uncertainties about our paper. We will be available during the discussion period if any further questions arise. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators | Reject | Summary: This paper introduces a new integration method (mean inverse integrator) for learning dynamics from noisy data. Experiments on Hamiltonian systems show the effectiveness of the proposed method.
Strengths: - The problem of learning physical dynamics from noisy data is an interesting one.
- It combines techniques from the field of numerical computation with machine learning.
Weaknesses: - The usefulness and significance of the proposed method is not clear.
- I feel that comparative experiments are not sufficient.
- It is not clear what advantages the proposed method has over the naive noise handling method.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - To handle noisy data, the most common method is to introduce an observation model (e.g., Normal distribution) and learn the noise variance by maximum likelihood estimation. Can you tell us what advantages the proposed method has over such naive methods?
- It would be good to clarify situations that can only be solved by the proposed method and emphasize the significance of this paper. For example, when using Gaussian likelihood, the noise variance is usually assumed to be constant. Can the proposed method relax this assumption? In other words, can the proposed method be applied to cases where the noise variance depends on time and state?
- A thorough comparison with the latest methods (e.g., [9-12, 15]) would be helpful.
- The font size in Fig. 5 and 6 is too small and should be enlarged.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: It would be good to add a careful discussion of the advantages and disadvantages of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the helpful comments and suggestions provided by the reviewer. Below are our responses.
### Assumptions on noise (Q1 - Q2)
- The reviewer points to one of the advantages of MII, which we will make clearer in the revised version: it makes no assumption about the noise/data distribution. In the paper we assume Gaussian noise to make the analysis used to derive Theorem 5.2 more straightforward. However, the method itself is derived by exploiting the fundamental group property of the underlying flow of the exact solution (and approximating it in discrete time), which allows multiple independent approximations to be produced. Hence, the method is derived from a numerical-analysis perspective rather than a statistical/Bayesian one.
- As the reviewer alludes to, the MII is thus distribution agnostic. The only underlying assumption is the group property of the flow map, namely that $\varphi_{h_1} \circ \varphi_{h_2} (y_0) = \varphi_{h_1 + h_2} (y_0)$. This is a very general property of dynamical systems and the MII is thus applicable to a wide range of problems. We will strive to include numerical experiments with time- and state-dependent noise in the final version of the paper and thank the reviewer for the suggestion.
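As a concrete illustration of this property (our own toy example, not from the paper), the exact flow of the harmonic oscillator is a rotation of the phase plane, and composing flows over $h_1$ and $h_2$ matches a single flow over $h_1 + h_2$:

```python
import numpy as np

def flow(t, y):
    """Exact flow of the harmonic oscillator q' = p, p' = -q:
    a rotation of the phase plane by angle t."""
    c, s = np.cos(t), np.sin(t)
    q, p = y
    return np.array([c * q + s * p, -s * q + c * p])

y0 = np.array([1.0, 0.5])
h1, h2 = 0.3, 0.7
# group property: phi_{h1} o phi_{h2} == phi_{h1 + h2}
assert np.allclose(flow(h1, flow(h2, y0)), flow(h1 + h2, y0))
```

A numerical one-step method satisfies this only up to its local error; the discrete analogue of the property is what lets MII combine several independent approximations of the same state.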
### Comparison with other methods (Q3)
(References in this section are to the references in the paper)
- Chen et al. [10] introduce the method called Störmer + ISO in our paper, which is thus included in the experiments. We also extend [10] by replacing Störmer-Verlet with RK$4$ and show that this is in many cases superior, even though RK$4$ is not a symplectic scheme.
- Zhu et al. [11] apply inverse modified equations to provide an existence guarantee for Hamiltonian neural networks when the integrator is symplectic. The midpoint method is symplectic and thus included in the experiments. We have also included results from the symplectic Gauss–Legendre methods of order $4$ and $6$ in Figure 3.
- The works of David and Méhats [12] and Offen and Ober-Blöbaum [9] are similar to [11] in how they argue for symplectic methods, but take this a step further by exploiting the inverse modified equation to derive a correction term to the learned Hamiltonian. This approach is theoretically appealing and we will include a comparison with this method in the final version of the paper.
- Matsubara et al. [15] use discrete gradient methods and exploit the discrete chain rule to derive a novel automatic differentiation algorithm for computing discrete gradients of neural networks. We have included a fourth-order method (DGM$4$) that builds on this work in the experiment in Figure 3. We have run multiple tests using discrete gradient methods of order $2$ and $4$ and find that they do not improve upon symplectic methods of the same order when there is no noise, and they are also not as robust to noise.
Figure 3 also includes experiments with the modified implicit midpoint method (MIMP$4$) proposed in the appendix. Neither of these methods differs significantly in accuracy from the methods used for the experiments in the paper as it stands.
### Other comments
- We will increase the font size in figures 5 and 6 in the final version.
- With the option to add one more page in the revised version, we will include an extended and improved discussion of the advantages and disadvantages of the proposed method, following along the lines of the comments above.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for your answer. I now understand the advantage of the proposed method (no need to assume noise distribution). I will raise my score by 1. | Summary: The paper investigates mono-implicit Runge--Kutta (MIRK) methods for learning dynamical systems from data. In particular, MIRK methods can be made explicit by introducing the external data into the solver step itself, leading to a more efficient integrator while keeping favorable stability, symmetry, and symplecticity properties. To handle noisy data, the paper proposes the "mean inverse integrator" as an efficient way to average multiple trajectories and learn meaningful vector fields from these. The methods are demonstrated in multiple numerical experiments.
Strengths: The paper is well-written, the proposed method is presented very clearly, and the main claims made are well-supported.
Weaknesses: I find the presentation of the explicit Runge--Kutta (ERK) baseline a bit confusing, and in particular counter-intuitive of my current understanding of the usage of ERK methods for inverse problems in the context of ODEs. Concrete questions are below, in "Questions".
This criticism also extends to insufficiently detailed baselines, and potentially to insufficient baselines overall.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See also "Weaknesses". Notably:
1. How exactly is the RK4 / ISO+RK4 baseline defined?
2. Why is (6) _the_ appropriate loss function? I would rather assume something of the type $\sum || \tilde{y}_n - \hat{y}_n ||$, where $\hat{y}_n$ is a state estimate computed by the method; for instance by applying a numerical solver to simulate a whole trajectory. In particular, extrapolating exactly from the last data point seems to be a strong restriction, in particular if the data point can be very noisy. I would not consider this "the inverse problem formulation", but rather a specific choice that is made, geared towards the specific algorithm of interest.
3. Related to the last question: Could you comment on the standard least-squares approach of integrating the whole trajectory from some initial value (estimate) with RK, as opposed to the described method of just integrating from one data point to the next? The latter seems to be related to "multiple shooting"-type approaches which, while they can be very beneficial, seem to be far from the standard for neural ODEs. Though, to the best of my knowledge, even in multiple shooting it is common to extrapolate some state estimates $\hat{y}$; though part of the objective is the closeness of these estimates to the data, and they are initialized on the data.
Other remarks:
- l. 201: "evaluationg the vector filed in $\tilde{y}_n$, as is done in all explicit RK methods": To the best of my understanding this is not done in all explicit RK methods, but only in multiple shooting-type approaches (and even there this might not hold). A standard least-squares approach simulates a whole trajectory and then computes an L2-loss---and thus does not need to evaluate $f$ on the data points themselves.
- l. 129: $\tilde{y}$ is missing an index
- Related work: The write-up is very helpful; but I think it could be further improved by highlighting better how the proposed paper relates to these methods and highlight similarities and differences.
- l. 72-73: I agree with the sentiment of the statement, but I think the strict _necessity for numerical integrators_ is somewhat disputed in the literature. Notably, "gradient matching" methods compute first an interpolant, and then adjust the vector field to match the known derivative of the interpolant---this circumvents using RK or any related numerical method, which is why such methods often claim to be "simulation-free".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors address limitations in a dedicated section, which is much appreciated.
One additional limitation that comes to mind and is not explicitly mentioned in the section is the influence of the noise on the data: Since the data is explicitly included in the numerical solver, as opposed to being just part of some L2 loss to guide an estimated trajectory, I would expect that for very noisy data other methods might be preferable (in particular a least-squares approach with RK from a learned initial value).
Another potential limitation: I assume that this method is not able to be a plug-in replacement in some latent neural ODE setting, where the ODE trajectory is not observed directly but only in some transformed space, e.g. when having video data and modeling an ODE in latent space? This is of course far from a trivial setting and I do not expect that such specific scenarios need to be mentioned explicitly; but the necessity of actual trajectory observations, as opposed to partial or non-linear observations, could indeed be another limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, which we will use to improve our paper in its revised version. Below are our responses.
### Our method and baselines, and relations to other methods (Q1 - Q3)
The RK$4$ baseline is defined as in Eq. 6, meaning that the vector field is trained by taking only one step with the numerical integrator. ISO + RK$4$, on the other hand, is of the recurrent type: a whole trajectory approximation is produced by integrating a full sequence $\hat y_n$, where we first find the optimal $\hat y_0$, i.e., the initial value producing the trajectory of minimal distance to the subsequent observations $\tilde y_n$. The loss is then computed as $\sum \\|\tilde y_n - \hat y_n \\|$, which is the loss mentioned by the reviewer and referred to as the least-squares approach.
Three comments should be made:
1. Computing the loss one step at a time is exactly what allows the MIRK class of integrators to be used explicitly with the inverse injection, as explained in Section 4 of the paper. For computing a full trajectory approximation with an implicit RK method, one would have to solve non-linear systems of equations in every step. Even though our approach limits the loss to be computed one step at a time, it allows symmetric, symplectic and stable integrators to be efficiently used in training.
2. The one-step approach is sensitive to noise, which is why we introduce MII to build an averaged approximation using linear combinations of the one-step approximations. This is the main contribution of the paper, and the approach is compared against what would be close to a least-squares approach in the ISO + RK$4$ baseline.
3. The least squares approach (both ISO methods) breaks down for the highly oscillatory FPUT problem, where MII produces high accuracy with a minimal increase in computational cost. A similar observation supporting multiple-shooting when the data is highly oscillatory is made in [1]. This demonstrates that MII is a more robust method for learning dynamics from noisy data.
We agree with the reviewer that this difference should be made more clear in the paper (rather than defining (6) as the default approach). In our revised version we will do this. We will also put a greater emphasis on the benefit of MII for highly oscillatory problems.
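To make the distinction concrete, here is a minimal NumPy sketch of the two training losses discussed above (illustrative only; `rk4_step` stands in for any explicit one-step integrator):

```python
import numpy as np

def rk4_step(f, y, h):
    """One classical RK4 step of y' = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def one_step_loss(f, y_obs, h):
    """Eq. (6)-style loss: one integrator step from each (noisy)
    observation to the next; no trajectory is rolled out."""
    pred = rk4_step(f, y_obs[:-1], h)
    return np.sum((pred - y_obs[1:]) ** 2)

def trajectory_loss(f, y_obs, h, y0):
    """ISO-style least-squares loss: roll out a full trajectory from
    a (learned) initial value y0 and compare against all observations."""
    preds, y = [], y0
    for _ in range(len(y_obs) - 1):
        y = rk4_step(f, y, h)
        preds.append(y)
    return np.sum((np.array(preds) - y_obs[1:]) ** 2)
```

The one-step loss never rolls out a trajectory, which is what permits the explicit use of MIRK methods via the inverse injection; the trajectory loss instead propagates a single state forward from $\hat y_0$.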
### Replies to the other remarks
- Regarding the first remark, about the sentence starting on line 201: We will reformulate this to make clear our meaning: that with our multiple-shooting type approach, using an explicit RK method means evaluating in $\tilde{y}_n$, which means that this class of integrators has a disadvantage in our case.
- Remark II: We will add the index.
- We will take remark III into account and improve the related work section. See our reply to reviewer yxuC for a more detailed comparison to the related work, which we will incorporate into the revision.
- Regarding the fourth remark: We will rework the specific sentence, since we agree that it is at best imprecise the way it stands now. Moreover, we thank the reviewer for pointing to the gradient matching approach, since we see that our paper would benefit from mentioning this and how it relates to our approach: they share the advantage that integration does not have to be done during training. Specifically, we will note the equivalence between our approach and gradient matching for certain choices. E.g., using linear interpolation in gradient matching could be equivalent to using our approach with the one-step method and either the explicit Euler method or the implicit midpoint method, depending on where the loss is evaluated.
### Limitations
- That a method based on integrating over several steps from a learned initial value might be better for very noisy data is an apt assumption. Indeed, we have performed numerical experiments that support this; see Figure 2 in the attached PDF, where ISO+RK$4$ is a method of this type and does indeed outperform the MII approach. We will include this result in the revision and comment on it in the limitations section.
- It is true that the method would not be a plug-in replacement in a latent neural ODE setting. However, we do believe our methodology would work in combination with other methods for handling that type of data, and we consider that an obvious avenue for future studies. We thank the reviewer for pointing out the necessity for trajectory observations as a limitation of our work at its current state, and will include this in the limitations section of the revised paper.
[1] : Turan, Evren Mert, and Johannes Jäschke. "Multiple shooting for training neural differential equations on time series." IEEE Control Systems Letters 6 (2021): 1897-1902.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the reply and for for clarifying a few points. As a result I increased my score by 1. | Summary: The authors present a novel method, Mean Inverse Integrator (MII), used to aggregate data generated through numerical integration of the vector field characterizing Hamiltonian systems. In particular, the objective is to improve the training of Hamiltonian Neural Networks (HNNs) when the data used is noisy.
The authors train HNNs on two tasks, using data generated by different methods of numerical integration.
Strengths: - The paper is well written and complete. My background on the topic is limited, and so I was happy that Sections 3 and 4 were included.
- The theoretical analysis is thorough and convincing.
Weaknesses: - I am not sure how much the paper fits with the themes of NeurIPS, but it should still be of interest to some of the audience.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the encouraging feedback, and are happy to argue for the relevance of our paper to NeurIPS. The paper builds largely on the ideas of Hamiltonian neural networks, first presented in [1], and is related to several NeurIPS papers where numerical analysis and deep learning are combined in problems where geometry is important, such as [2,3,4,5]. Additionally, the workshop "The Symbiosis of Deep Learning and Differential Equations" has taken place in two editions (2021, 2022) [6,7].
[1] : Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. "Hamiltonian neural networks." Advances in neural information processing systems 32 (2019).
[2] : Finzi, Marc, Ke Alexander Wang, and Andrew G. Wilson. "Simplifying Hamiltonian and Lagrangian neural networks via explicit constraints." Advances in neural information processing systems 33 (2020): 13880-13889.
[3] : Zhong, Yaofeng Desmond, Biswadip Dey, and Amit Chakraborty. "Extending Lagrangian and Hamiltonian neural networks with differentiable contact models." Advances in Neural Information Processing Systems 34 (2021): 21910-21922.
[4] : Matsubara, Takashi, Ai Ishikawa, and Takaharu Yaguchi. "Deep energy-based modeling of discrete-time physics." Advances in Neural Information Processing Systems 33 (2020): 13100-13111.
[5] : Chen, Yuhan, Takashi Matsubara, and Takaharu Yaguchi. "Neural symplectic form: Learning Hamiltonian equations on general coordinate systems." Advances in Neural Information Processing Systems 34 (2021): 16659-16670
[6] : https://neurips.cc/virtual/2021/workshop/21880
[7] : https://nips.cc/virtual/2022/workshop/49987 | Summary: The presented work considers a novel class of integrators that are used to train Hamiltonian Neural Networks (HNNs). This class is called mean inverse integrator and it averages the trajectories from mono implicit RK methods (MIRK) to obtain higher accuracy. The authors provide theoretical results on how MIRK convergence suffers from noisy data. Also, experimental results on test dynamical systems illustrate the performance of the proposed approach.
Strengths: The paper presents a comprehensive introduction to the different types of integrators and smoothly introduces the new one. The novel integrator is investigated w.r.t. the relations with classical RK methods (Theorem 4.4 and Proposition 4.3). Also, theoretical results on the robustness w.r.t. noise in the data are presented.
Weaknesses: 1) the experimental results show that the proposed approach is not always better than the alternative methods in terms of accuracy, but always slower in terms of runtime. So the use cases for the MII-related approaches should be stated more strictly.
2) the manuscript is mostly about integrators theory rather than learning dynamical systems from data. Revision of the structure and shift of the focus from integrators to its application in learning HNN can be very helpful
3) the connection of the proposed integrator and its usage in forward or in backward passes is ignored. Thus, it is unclear from the text how this new integrator should be incorporated into the existing pipeline of training HNN
4) experiments are performed only for systems, where the states are small dimensional. The scalability and robustness w.r.t the large dimension of states are unclear
5) since no batching is used for training, it indicates that a large amount of data are not tested yet and all available data are fitted in the GPU memory. It will be interesting to study how the stochasticity of the data and corresponding gradient estimates affect the convergence of the training curves.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) how do different integrators affect the way to perform backward passes? Please provide a detailed explanation and complexity analysis
2) even for 4d states the proposed approach (MII MIRK4) shows the worst runtime compared to the alternatives (Fig. 6). So, how do the MII-related integrators scale w.r.t. the state dimension? How are they applicable to learning dynamical systems with high-dimensional states?
3) all experimental results are shown for fixed noise level $\sigma= 0.05$, but authors highlight that the proposed approach is robust to noisy data. So please show how the test accuracy depends on the noise level in training data for different integrators
4) please provide analytical forms of the considered test dynamical systems for the reader's convenience
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors explicitly state the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We are grateful for the detailed review and suggestions for how to improve this work.
### Backward passes (Q1)
Since direct automatic differentiation of the ODE solver outperforms the adjoint sensitivity method for computing derivatives of ODE solutions for smaller systems ($n<100$; we have $n=4$), this is the approach chosen in this work; see for instance the discussion in [1]. Hence, the implementation is rather straightforward: the solvers are implemented in PyTorch and derivatives are obtained with standard backpropagation.
It should be emphasized that the inverse injection allows the otherwise implicit MIRK methods to be implemented as explicit methods, which saves a significant amount of computation when differentiating through the solver (otherwise we would have to backpropagate through a non-linear solver or exploit the implicit function theorem). Through repeated approximations using the inverse injection (see Appendix D), we obtain MII as a linear combination of MIRK steps. Hence, in theory the additional computational cost of MII should not be much greater than that of using MIRK directly: the forward pass requires only two additional matrix multiplications, and by linearity of the differentiation operator this should not cause a large increase in the cost of the backward pass either. However, as the reviewer points out, MII is significantly slower than MIRK in practice. Understanding why is of high interest, and we are currently investigating it in detail. A discussion of this will be included in the final version of the paper.
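As a small sketch of the inverse injection (our own illustration, using the implicit midpoint rule as the simplest mono-implicit method): substituting the observed next state $\tilde y_{n+1}$ for the unknown $y_{n+1}$ inside the implicit stage makes the step explicit, so no non-linear solve is needed during training.

```python
import numpy as np

def midpoint_injected(f, y_n, y_np1_obs, h):
    """Implicit midpoint rule y_{n+1} = y_n + h f((y_n + y_{n+1})/2),
    made explicit by injecting the observed next state into the stage."""
    return y_n + h * f(0.5 * (y_n + y_np1_obs))

# Example: y' = -y, stepping from the observation pair (1.0, 0.9)
f = lambda y: -y
y_hat = midpoint_injected(f, 1.0, 0.9, 0.1)  # 1.0 + 0.1 * (-0.95) = 0.905
```

Backpropagation then only has to differentiate explicit evaluations of $f$, never a fixed-point iteration.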
Comparing Figure 4 and 2 in the attached PDF it is interesting to note that MII is significantly faster than RK$4$ + ISO when Adam is used as optimization method. We prefer to use L-BFGS as this yields lower error overall. However, further investigations of the difference in computational cost between Adam and L-BFGS regarding the MII will hopefully provide insight into how to improve the efficiency of the current implementation.
[1] : https://www.juliabloggers.com/direct-automatic-differentiation-of-differential-equation-solvers-vs-analytical-adjoints-which-is-better/
### Scalability (Q2)
We have performed some initial numerical experiments of the $3$-body problem and of the spatially discretized KdV PDE and the scalability of MII is not excellent. As mentioned above, we believe it should be possible to work on optimizing the implementation to improve this. One idea is to use a sliding-window approach where only $N$ points (and not all possible as is currently done) in positive and negative time (to the "left" and "right") are used to compute the average approximation.
### Noise level (Q3)
- (Q3) The amount of noise ($\sigma$) should be considered in comparison to the average distance between two sequential points in the (unperturbed) training data:
$$
\frac{1}{N}\sum_{n=0}^{N-1} \\|y(t_n) - y(t_{n+1}) \\|
$$
For the two systems and the three different step sizes $h$, this is given (computed empirically for HH and DP) by:
*Average distance in training data*
|**System** |$h=0.4$ |$h=0.2$ |$h=0.1$ |
|---|---|---|---|
|Double pendulum |$0.278$ |$0.141$ |$0.071$ |
|Hénon-Heiles |$0.197$ |$0.099$ |$0.050$ |
This tells us that the noise level of $\sigma = 0.05$ goes from a relative magnitude of approximately $25\%$ to $100\%$ in comparison to the average distance between sequential points, meaning that the noise level is fairly high. We will clarify this in the revised version. Moreover, we will include additional experiments with noise $\sigma \in \\{0.025,0.05,0.075\\}$ to get a better understanding of the robustness of the methods. Preliminary results are found in Figure 2 in the attached PDF. Due to the limited time and having to restrict to one page, the experiments were only run for the double pendulum problem and not re-run multiple times to compute the mean and standard deviation of the error. However, the results for $\sigma = 0.05$ are consistent with Figure 6 in the paper. We observe here that MII performs on par with RK$4$ + ISO, and significantly better for the smallest level of noise $\sigma = 0.025$.
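For reference, the tabulated quantity can be computed in a couple of lines (our own sketch, not the paper's code):

```python
import numpy as np

def avg_step_distance(y):
    """Mean Euclidean distance between consecutive states y[n], y[n+1]."""
    return np.mean(np.linalg.norm(np.diff(y, axis=0), axis=1))

# Sanity check on equally spaced points along a line: the average step
# distance is 0.5, so sigma = 0.05 would be 10% of a step here.
y = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
```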
### Analytical forms of vector fields (Q4)
Currently this is found in Appendix A, but it will be moved to Section 6 in the final version, which allows one additional page.
### Batching in training
We have provided experimental results using Adam with a batch size of $B = 256$ in Figure 4 in the attached PDF. These results may improve or differ if more time is allowed for hyperparameter optimization; however, it is interesting to note that MII is significantly faster than RK$4$ + ISO when using Adam. The error is somewhat higher using Adam than using L-BFGS (see Figure 2 with $\sigma = 0.05$ in the attached PDF).
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for the detailed response! I have the following comments
1) The incorporation of the proposed integrator in large-scale systems is an important part of experimental evaluation. In such a setup the naive implementation of the backward pass is not enough.
2) Scalability is also crucial for practical applications, so I find the current status of the work is preliminary.
To sum up, I have decided to keep my evaluation. | Rebuttal 1:
Rebuttal: Here we attach a PDF with four additional numerical experiments, responding to specific questions posed by the reviewers. The figures are referenced in the rebuttals below.
Pdf: /pdf/efc20a80f16854ee3897ca30271c05562a475ba4.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel method aimed at learning the vector field of a dynamical system. The proposed approach is called the mean inverse integrator, which utilizes a neural network (e.g., SRNN) to accurately estimate the integrator in the presence of noisy data. The authors provide theoretical insights into the sensitivity of both the one-step target function and the mean inverse integrator to data noise. Additionally, the paper presents empirical evaluations by comparing the method to five different types of integrators.
Strengths: - The paper effectively conveys its ideas and arguments with clarity.
- The research addresses an important question and presents a new approach to handle noise when learning data dynamics.
- Theoretical analysis shows how the proposed method and the baseline approach respond to noise, contributing to a better understanding of their performance.
- Empirical results demonstrate the effectiveness of the proposed method, showing significant advantages over the baseline approaches.
Weaknesses: - The paper lacks clarity on why the mean inverse integrator yields more accurate estimates compared to the one-step baseline.
- Scalability to high-dimensional systems is not explored, as current experiments primarily focus on trivial test cases. It would be beneficial to investigate potential challenges arising from increased computational complexity, instability of estimation in the presence of chaotic dynamics, and provide examples demonstrating the method's efficacy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: The paper would benefit from running experiments on complex, high-dimensional data to showcase the method's capabilities.
Q2: It remains unclear whether the proposed method can generalize to more general dynamical systems beyond Hamiltonian systems.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and would like to provide the following response.
- The idea of the theoretical analysis behind Theorem 5.2 is precisely to provide an understanding of why the mean inverse integrator provides more accurate estimates, namely because the averaging over multiple approximations allows noise to be canceled out.
- (Q1) The reviewer has a fair point in that the systems used for the numerical experiments are low dimensional. However, both the Hénon-Heiles and the double pendulum problems exhibit chaotic dynamics, and the FPUT problem is highly oscillatory, with fast oscillations along some dimensions and slow oscillations along others.
- (Q1) For tackling higher-dimensional systems (such as spatially discretized PDEs, the N-body problem or a coupled spring system) the methodology presented would have to be extended by considering other neural network architectures (i.e. CNNs for PDEs) or considering the problems along other coordinate systems, such as studied by Finzi et al. [1]. We consider this a highly relevant future work, and we have indeed performed experiments on the discretized KdV equation and the 3-body problem, but decided that it is outside the scope of this paper. This is partly because we wish for our results to be compared to the recent literature on Hamiltonian neural networks, where the focus has been on lower-dimensional systems, and partly because considering higher-dimensional systems adds complexity and further considerations about the neural networks used that may cloud the presentation and analysis of how our methods perform compared to benchmarks.
- (Q2) The proposed method does apply to learning vector fields in any form. We chose to focus on Hamiltonian systems in the paper so our method could be linked to and compared to the vast recent research on Hamiltonian neural networks, and to demonstrate that non-symplectic integrators may work very well also here. To demonstrate the utility of our method for general systems, we have included numerical experiments for the Lotka-Volterra system in Figure 1 in the attached PDF. Here, we see that MII is significantly better than ISO + RK$4$ and the MIRK methods are generally superior. Since the Lotka-Volterra system is not a canonical Hamiltonian system, the vector field is learned directly with a neural network $f_{\theta} : \mathbb{R}^2 \rightarrow \mathbb{R}^2$.
The Lotka-Volterra problem is given by the following ODE:
$$
\dot x_1 = x_1 - x_1 x_2, \qquad \dot x_2 = x_1 x_2 - x_2.
$$
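As a quick illustration (a hypothetical sketch, not the authors' implementation), this system can be integrated with classical RK$4$; the conserved quantity $I(x) = x_1 + x_2 - \ln x_1 - \ln x_2$ of the Lotka-Volterra flow gives a convenient sanity check on the integration:

```python
import numpy as np

def lotka_volterra(x):
    """Vector field of the (non-canonical-Hamiltonian) Lotka-Volterra system."""
    x1, x2 = x
    return np.array([x1 - x1 * x2, x1 * x2 - x2])

def rk4_step(f, x, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate a short trajectory from an arbitrary initial condition.
h, steps = 0.1, 200
traj = [np.array([1.5, 1.0])]
for _ in range(steps):
    traj.append(rk4_step(lotka_volterra, traj[-1], h))
traj = np.array(traj)

# I(x) = x1 + x2 - ln(x1) - ln(x2) is a first integral and should barely drift.
I = traj[:, 0] + traj[:, 1] - np.log(traj[:, 0]) - np.log(traj[:, 1])
print(f"drift in invariant over {steps} steps: {abs(I[-1] - I[0]):.2e}")
```

The near-zero drift in $I$ reflects that RK$4$ resolves this smooth, bounded orbit accurately at $h = 0.1$, even though it is not a symplectic or invariant-preserving method.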
[1] : Finzi, Marc, Ke Alexander Wang, and Andrew G. Wilson. "Simplifying Hamiltonian and Lagrangian neural networks via explicit constraints." Advances in neural information processing systems 33 (2020): 13880-13889.
---
Rebuttal Comment 1.1:
Comment: I acknowledge reading the authors’ replies and am aware of the discussion related to the noise scales. I decide to keep my score.
Thank you! | null | null | null | null | null | null |
Ignorance is Bliss: Robust Control via Information Gating | Accept (poster) | Summary: This paper empirically investigates a few approaches to modulating the amount of information used by a neural network learner in a variety of control-adjacent learning problems. The proposed approach, InfoGating, learns an input-conditioned continuous-valued mask that is applied to the same input or features thereof. The learner is trained to optimize a task loss and minimize the information allowed through by the mask. Experiments on representation learning via multi-step inverse dynamics modeling demonstrate the relative benefit of InfoGating over other approaches to information parsimony such as the variational information bottleneck. Experiments on Q-learning and fine-tuning pre-trained visual representations for behavior cloning demonstrate the benefit of InfoGating over naive baselines.
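The core mechanism summarized above — multiply the input by a learned soft mask and penalize the mask's magnitude alongside the task loss — can be sketched as follows. This is a hypothetical minimal version: the linear mask network, the squared-error task loss, and all shapes are stand-ins, not the paper's actual architecture or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def info_gate(x, W):
    """Stand-in mask network: a linear map + sigmoid yields a soft mask in (0, 1)."""
    return sigmoid(x @ W)

def gated_objective(x, W, task_loss, lam):
    """Task loss on the masked input plus a penalty shrinking the mask toward 0."""
    m = info_gate(x, W)
    return task_loss(x * m) + lam * np.mean(np.abs(m)), m

# Toy usage: a regression-style task loss on a masked 8-dimensional input.
x = rng.normal(size=8)
W = 0.1 * rng.normal(size=(8, 8))
loss, mask = gated_objective(x, W, lambda z: np.mean(z ** 2), lam=0.01)
print(f"loss = {loss:.4f}, mean mask value = {mask.mean():.3f}")
```

Training would jointly minimize this objective over the mask network and the task model, so the mask learns to pass only the information the task actually needs; the coefficient `lam` plays the role of the information-penalty weight discussed in the reviews below.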
Strengths: ### Originality
The particular formulation of InfoGating as using input-conditioned masking amortized via a separate neural network is, to my knowledge, original. The authors remark that InfoGating is closely related to previously proposed ideas in the space of information-based regularization of neural networks.
### Quality
The scope of the experiments chosen to validate the proposed idea is perhaps the main strength of this work. Three distinct experimental testbeds (dynamics modelling + probing, Q-learning, fine-tuning pre-trained representations for behavior cloning) are considered, as are a few variants of InfoGating (input gating vs. feature gating, cooperative masking vs. adversarial masking).
### Clarity
The proposed methods are simple and clearly described. However, while some experimental details (architectures and hyperparameters) are presented in the supplementary material, this seems insufficiently detailed to guarantee reproducibility, so I would encourage the authors to release code.
### Significance
The considered problem is important and the proposed methods are simple and intuitively reasonable.
Weaknesses: I list my perceived weaknesses in rough decreasing order of importance.
- (W1) The experimental design leaves a lot to be desired. First and foremost, for the main experiment (multi-step inverse dynamics modeling + behavior cloning probe on half-cheetah), the authors report performance on "a noise-free observation space" (line 221), which I take to mean the original DeepMind Control Suite background or similar. I would instead expect evaluation to use held-out noise (distractions from the Distracting Control Suite) to properly assess the generalization capabilities of the models. As-is, contribution claim 3 (line 62) is unsubstantiated, since, to my knowledge, we typically do not use "generalization" to mean noisy train -> noiseless test (I would instead characterize this as denoising), and neither of the other testbeds include comparisons to other information-based regularization techniques.
- (W2) The design choice of operationalizing information regularization via a learned input-conditioned mask pushes the burden of generalization to the masking network. This seems to be the main distinguishing aspect of InfoGating over other forms of information regularization, yet the impact of this design is neither evaluated in isolation nor in toto.
- (W3) There are several key statements that are unclear or confusing.
- "Although IB approaches have been beneficial in some cases, adding an information bottleneck at the penultimate step of computation does little to prevent overfitting in the preceding steps of computation, which typically comprise the overwhelming bulk of the model" (line 34). It's not clear to me that IB approaches predominantly bottleneck just before computation output. In particular, methods that use informational/Gaussian/variational dropout seem to contradict this. I would appreciate citations supporting the quoted statements.
- "Roughly, the values in $ig(x)$ can be seen as specifying how many steps of forward diffusion to run in a Gaussian diffusion process initiated at $z$, where values near zero correspond to running more steps of diffusion and thus sampling from a distribution that is closer to a standard Gaussian in terms of KL divergence" (line 174). This sounds like an interesting connection, but it seems to be only true if the mask specifies a constant value. Otherwise, different pixels are noised to different extents, which doesn't seem to correspond correctly to the standard modelling assumptions in diffusion models.
- (W4) The material in the half-page Background section is completely orthogonal to the proposed idea. I would move a condensed version to the experimental section since the content really is just context for the experiments. Relatedly, Algorithm 1 is InfoGating applied specifically to multi-step inverse dynamics modeling, yet one of the contribution claims is that InfoGating is general purpose.
Addressing W1 and showing good performance on more meaningful out-of-distribution evaluation would cause me to increase my score substantially.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - (Q1) What do the learned masks look like for i) inputs whose underlying state is trained on, but with held-out distractors; ii) inputs whose underlying state is held-out, but with trained-on distractors; and iii) inputs whose state and distractors are both held-out? This would go a long way in explaining the benefits of (or diagnosing weaknesses with) the proposed approach.
- (Q2) Why does baseline inverse dynamics modeling in Table 1 do so poorly, given the protocol of train-on-noisy, test-on-noise-free? Concretely, why doesn't adding the noise to the training data confer benefits a la data augmentation, since it specifies input transformations that the output (actions) should be invariant to? Is it because the considered noise (frames from continuously playing videos) has temporal consistency?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and constructive feedback on our paper. We appreciate the time and effort you have put into evaluating our work.
**W1: Evaluations on noiseless environments and W2: Generalizations of masking network**
> Yes, “noise-free observation space” indicates the default DM Control Suite backgrounds. We agree that our main results with background distractors should include results where both training and test episodes include distractors. We performed these tests and present them in Table 1 in the PDF accompanying this rebuttal. We also performed tests in response to your point about offloading generalization duties to the masking network. Specifically, we include results for using the image encoder trained on IG-masked images with unmasked images at test time. Note that this encoder is trained with a mix of masked and unmasked images during training (so unmasked images are not “out of distribution”), and that all other methods use unmasked images (so this comparison is “fair” in this sense).
> From Table 1 in the rebuttal PDF, the image encoder trained via inverse dynamics with InfoGating outperforms the baselines when testing with unmasked images. Depending on how many distractors are seen during training relative to how many are held out for use in testing, we also see some signs of potential overfitting in the mask encoder. I.e., Inverse Dynamics + IG with masked images at test time can suffer when only a small number of distractors are seen during training, even though the same image encoder performs quite well when working with unmasked images during testing.
> We thank the reviewer for pointing out these gaps in our experiments and believe that the expanded results provide stronger support for the claimed contributions in our submission. These results suggest an interpretation of our method as performing task-adaptive data augmentation. I.e. the mask encoder applies cutout to the training images with the goal of cutting out as much as possible without making the task intractable. From this perspective, it is natural to use the IG-trained image encoder with unmasked images at test time, which makes generalization by the mask encoder less critical to our method.
**IB approaches predominantly bottleneck just before**
> We cannot provide a specific citation for our statement that existing IB-like methods predominantly implement bottlenecks at late stages of the overall computation (eg, the last linear layer as in the original work by Alemi et al.), since it is merely an observation on our part. We cite variational information dropout as a counterpoint to our observation, and will edit our claims to be more clear that we’re stating an opinion about how the potential gain from applying some sort of IB early or late in the compute graph is not well reflected in the balance of early/late IB usage in prior publications. We will also clarify that this observation only pertains to “actively info minimizing” forms of IB, and was not intended to include methods based on dropout where the degree of info minimization is set a priori.
**Connections to diffusion models**
> Regarding similarities between InfoGating and diffusion models, we note that the forward diffusion processes in standard DDMs are pixel-wise independent and we posit that one could train a model where the reverse process proceeds at different rates for each pixel. Generating data for training a reverse process with variable per-pixel diffusion rates would be simple due to the pixel-wise independence of the forward process, though one may need some extra bookkeeping to track how far through the forward/reverse process each pixel is. Of course, ease of implementation is not equivalent to ease of getting good results.
**W4: paper structure**
> We can rewrite Algorithm 1 in a more general form in the main paper. We chose to tailor it to inverse dynamics in the submitted draft since we focused a bit more on this setting in the paper.
**Q1: Generalization of IG masks**
> Figure 1 in the rebuttal PDF visualizes the mask for in-distribution and held-out distractors, showing that the InfoGating masks generalize well to out-of-distribution distractors. Note that since we train on previously collected offline datasets, it is not possible to infer whether an underlying state is in-distribution or not.
**Q2: Baseline inverse dynamics performance**
> We include a single, static distractor dataset while training and so your point on "a kind of data augmentation effect coming into play" would be valid if we trained on multiple distractor datasets (creating an implicit invariance to the distractors). The temporal consistency in the background is also a potential reason for the Walker experiments, which uses the single distractor video.
We hope that our rebuttal has addressed your questions and concerns, and we appreciate your consideration in revising the evaluation of our paper.
---
Rebuttal Comment 1.1:
Title: Rating Update
Comment: The authors have addressed my main concerns with the experiments. They have added evaluations on held-out distracting backgrounds, showing that their method outperforms prior methods (though not for the "hard" level of noise). They have also shown that a more performant way to use their model at test time is to simply discard the masking network, removing their method's dependence on the masking network's generalization to out-of-distribution inputs. I have increased my rating. | Summary: The authors hypothesise that gating information propagation in neural networks will lead to better generalisation. To achieve this they propose a system that performs gates information using a multiplicative differentiable operation, which they call InfoGating. To show the validity of their approach they compare to related models on several tasks such as contrastive dynamics control, Q-Learning and behaviour cloning.
Strengths: 1. The method is well motivated.
2. The tasks are relevant to the issue the model is seeking to solve.
3. The comparison models appear to be relevant and well justified.
4. They perform relevant ablations of their design choices.
Weaknesses: While the experimental results show improved generalisation, it is not clear how general this approach will prove to be. So, while I agree that masking irrelevant parts of the input so as to limit distractor information is a good strategy (and one that humans definitely use when interacting with the world), I am less clear that this specific approach fully captures how this process works and will generalise effectively to more complex datasets.
To the author’s credit, they do point out limitations of their approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the relation between lambda and the overall performance? This is the main hyperparameter of the system, yet there is no graph showing how manipulating it affects performance.
2. If the authors have access to ground truth masks for some or all of the datasets, could they compare those ground truth masks with the ones produced by InfoGating?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Main technical issue I see is that the reliance on the lambda parameter is very similar to the reliance of a $\beta$ parameter in the $\beta$-VAE literature, which creates a tradeoff that is not always optimal.
It would have been interesting to test the information gating approach with other more principled techniques like object-centric learning which already segment the images into relevant parts (or at least they try to, eg [1]). Then info gating would be only responsible for selecting the appropriate objects for the downstream task.
This approach is also very similar to work in meta-learning which draws on the parallel between gating and neuromodulation in Neuroscience [2]. It would have been interesting to have both a conceptual and experimental comparison between the two.
### References
[1] Locatello, F., Weissenborn, D., Unterthiner, T., Mahendran, A., Heigold, G., Uszkoreit, J., ... & Kipf, T. (2020). Object-centric learning with slot attention. *Advances in Neural Information Processing Systems*, *33*, 11525-11538.
[2] Beaulieu, S., Frati, L., Miconi, T., Lehman, J., Stanley, K. O., Clune, J., & Cheney, N. (2020). Learning to continually learn. *arXiv preprint arXiv:2002.09571*.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and constructive feedback on our paper. We appreciate the time and effort you have put into evaluating our work.
**Question 1: relation between lambda and the overall performance / reliance on the lambda parameter**
> We illustrate the effect of lambda on performance in Figure 5 in the paper, which shows how test performance varies with lambda. As lambda increases, performance improves until further increasing lambda causes too much information loss. We have added new results in Table 3 of the rebuttal PDF, where we show that InfoGating is considerably robust to the value of $\lambda$. In particular, we trained InfoGating with three masking networks, each with a different lambda value. At each update step, a single masking network is selected at random. We see that the performance is similar to that obtained when training with a single lambda value, hinting that InfoGating is not too sensitive to the masking sparsity.
**Question 2: ground truth masks**
> There are no ground truth masks for the settings we consider, and it’s not clear how to produce such masks. However, Figure 7 in the Appendix shows how our masks generally capture the agent pose while removing most other information.
We will note the parallel between gating and neuromodulation in the paper by adding a few lines about the conceptual similarities. Thank you for the pointer.
We hope that our rebuttal has addressed your questions and concerns, and we appreciate your consideration in revising the evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the reply and additional experiments. My concerns have been addressed and I will update my score accordingly. | Summary: The authors proposed a mutual information-based encoder that generates masks to gate inputs in order to either pass on minimal information for downstream tasks, or to remove any useful information that could be used to optimize the downstream loss in an adversarial setting. This is a general model that can be applied to a variety of tasks, and the authors focus on the application to reinforcement learning tasks.
Strengths: * Through experiments, the authors show that the idea of directly removing visual information in a learnable way through dynamic masking improves the downstream policy learning in terms of performance and training stability when there are visual distractors.
* InfoGating is very general and could be applied to input or any intermediate representations, with good interpretability
Weaknesses: * training a minimax objective is usually not very stable, and the authors are not very clear in algorithm 1 or the appendix about the details of their minimax training. See questions.
* The idea of masking inputs directly is reasonable, and experiments show that it improves performance and training stability. But it seems not so obviously advantageous on intermediate layers. The authors discussed in Section 6 and 7, but there's no clear statement or experiments towards this line.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the minimax objective, are there any nested loops when updating the two sets of parameters?
Is there code available?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors appropriately addresss ethical or social impacts of their research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and positive feedback on our paper. We appreciate the time and effort you have put into evaluating our work.
**Minimax objective details**
> We did not use nested loops or different numbers of gradient updates for optimizing the image and mask encoders in the (minimax) adversarial objective. For adversarial losses, we simply switch the sign prior to backprop depending on which parameters we’re updating. We did not encounter stability issues.
**Masking intermediate features**
> We agree that we do not provide strong evidence that masking intermediate features is particularly useful in our RL settings. But, we believe it could be useful in other settings, e.g., for sparsifying attention to reduce compute in LLMs. See, e.g., works like “DARTS: Differentiable Neural Architecture Search” (Liu et al., ICLR 2019). | Summary: This paper introduces a novel masking technique called InfoGating for learning masks in contrastive loss settings with InfoNCE. The proposed approach is simple, well-motivated, and is evaluated in several RL setting including inverse dynamics models, Q-learning, and behavior cloning. Some ablation studies on regularization of the mask, as well as variants of the objective are also considered.
Strengths: * The proposed approach is a novel way for learning masking and is applied for RL tasks, whereas traditional approaches are typically evaluated for classification, or SSL tasks. Evaluation of the proposed approach covers multiple RL frameworks including Q learning, behavior cloning, and inverse dynamics.
* The proposed method is stated very clearly including all hyperparameters for reproducibility, clear pseudocode for inverse dynamics, and a clear description of the approach with intuition.
Weaknesses: * My primary concern with this paper is that the proposed approach does not have sufficient evaluation not situate itself well with prior works. There are two notabl method comparisons, VIB (2017), and RCAD (2022). Neither of these approaches, however are masking-based approaches which is the main line of this work from my point of view. These methods are also not included in comparisons on behavioral cloning and Q-learning experiments. I would like to see additional comparisons on Q-learning and behavioral cloning, and perhaps some experiments comparing other masking techniques which from my understanding have been used for RL such as Q-learning: https://proceedings.neurips.cc/paper_files/paper/2022/hash/a0709efe5139939ab69902884ecad9c1-Abstract-Conference.html, https://proceedings.neurips.cc/paper_files/paper/2022/hash/802a4350ca4fced76b13b8b320af1543-Abstract-Conference.html. Although these approaches approach masking from a slightly different angle, my understanding is that they are still aimed at better observation learning. There are also works orthogonal to this that focus on sparse convolutions: https://openaccess.thecvf.com/content_CVPR_2020/html/Verelst_Dynamic_Convolutions_Exploiting_Spatial_Sparsity_for_Faster_Inference_CVPR_2020_paper.html. I think contrasting with these approaches may also distinguish the paper.
* Additionally, cheetah run is only one of many tasks within the continuous control suite. It would be good for the authors to clarify why this task in particular was chosen, and whether IG performs well across multiple additional tasks. Further, the reported cheetah run numbers are a fair bit lower than those traditionally reported for the return (usually 300+). Is there commentary on why the numbers are much lower in this setting, and why learning curves are not provided?
* There are some followup questions about whether or not information gating without controlling for consistency in all layers is good. In particular, whether this may violate properties of the IB principle which say that the information will only decrease throughout the network.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How does infogating satisfy or violate the IB principle in deep learning? Is it possible for the model to result in higher information in later layersdepending on how much gating happens in features of the network? Does this violate any benefits of the IB principle, for example better generalization?
* How does IG perform compared with other methods for masking/information bottleneck on Q-learning and behavior cloning settings?
* What is the connection between IG and adversarial masking?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The proposed approach is not compared with other masking approaches for traditional augmentation and image classification settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and constructive feedback on our paper. We appreciate the time and effort you have put into evaluating our work.
**Comparison with Masking-based methods**
> We ran new tests using mask-based latent reconstruction (MLR). Even after extensive hyperparameter tuning, we were not able to avoid representation collapse. While we were able to produce meaningful results after slightly modifying the method, the results in each case were inferior to our method's. In our RL setting, our method is best seen as a form of regularization, which is a somewhat different goal from the Dynamic Convolutions work.
**Results beyond Cheetah domain**
> Our new tests include results in the Walker domain. We primarily focused on Cheetah because the vd4rl domain only included distractor datasets for all three policy levels for Cheetah. For the Walker results, we collected our own distractor datasets. The results are provided in Table 2 in the rebuttal PDF. We largely observe that InfoGating performs better than the other five baselines.
**InfoGating and the IB principle**
> Our method strictly removes information from the input, and information can’t be added as one progresses through a compute graph (without taking additional inputs) due to the data processing inequality.
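For reference, the data-processing-inequality argument can be stated formally; this is the textbook statement (our addition for clarity), assuming the mask is applied once at the input:

```latex
% Masking at the input induces the Markov chain
%   Y \to X \to M(X) \to f_1(M(X)) \to \dots \to f_L(\cdots),
% so the data processing inequality gives, for every layer $\ell$:
I(X; Y) \;\ge\; I\big(M(X); Y\big) \;\ge\; I\big(f_\ell(M(X)); Y\big).
% No later layer can hold more label information than the masked input.
```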
**Other IB methods in the Q-learning and Behavior Cloning Setting**
> Q-learning - Since the Q-learning setup is quite similar to the inverse dynamics, i.e. they both use the same encoder, we expect the dropout results to be similar to that of the inv w/ dropout results in Table 1 of the main paper.
> Behavior Cloning - In our preliminary experiments, we added dropout to the policy network for both the behavior cloning and behavior cloning w/ IG versions. We did not see any significant improvement in performance for either version, while keeping the model architectures fixed. It is quite likely that since the heavy lifting is done by the representation encoder, adding dropout to the policy network does little to improve performance.
> Because of the bottleneck on compute resources, we could not run more experiments to validate these preliminary observations. We are happy to include these in the final version of the paper if the reviewer feels strongly about these results.
**Connection between IG and Adversarial Masking**
> The “cooperative” version of InfoGating is quite different from Adversarial Masking as in ADIOS. In particular, the IG masks in this setting seek to reveal the minimal info required for solving a task. The “adversarial” version of IG is similar to a single-mask version of ADIOS. Extending IG to use multiple masks is straightforward.
We hope that our rebuttal has addressed your questions and concerns, and we appreciate your consideration in revising the evaluation of our paper.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you for the response, and addressing many of my concerns with the extra experiments.
* Given that the experiments on other masking approaches are completed, it would be nice to see the final results in the paper (or appendix), as I believe this is an important comparison.
* Experiments with the Walker look convincing, and demonstrate the approach generalizes across environments.
* My concern with the data processing inequality is with masking. If I denote input at layer $\ell$ as $X_{\ell}$, then data processing inequality says $MI(X_{\ell}; Y) \geq MI(X_{\ell+1}; Y)$. My question is whether this holds when applying masking i.e. $MI(M(X_{\ell}); Y) \geq MI(M(X_{\ell+1}); Y)$ for the mask function $M$. A potential violation could happen if there is no constraint on $M$, for example $M$ could mask everything at $\ell$ and nothing at $\ell+1$.
Nonetheless, I will update my score to reflect positive results from the updated experiments. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback. We have individually responded to points made by each reviewer and provided further experiments in support of our response (please refer to the attached rebuttal PDF). Below is a list of additional experiments we have included:
1. **Evaluations of all baselines and InfoGating on multiple distractors (including unmasked evaluations for IG):** We show that even when evaluations are not done on noise-free observations, InfoGating is considerably better performing.
2. **Results on the walker domain, for all baselines and InfoGating.**
3. **Visualizations of InfoGating masks on in-distribution and held-out distractors.**
4. **InfoGating with multiple masking networks**, each with a different $\lambda$ value, showing that IG depends only loosely on the particular value of $\lambda$.
We hope our response and the additional results can help address the reviewers' questions and concerns. For criticisms not addressed in this rebuttal, we invite the reviewers to point out those which seem most pressing during the interaction period. Thank you!
Pdf: /pdf/c23f35085bce9260a7769258377e76cc7c822c55.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors propose InfoGating as a way to learn parsimonious representations that achieve better generalization by being robust to noise and spurious correlations: representations that identify the minimal information required for a given task and thereby remain robust to out-of-distribution observations.
In contrast with previous works that applied a similar concept, i.e., the information bottleneck, at later stages of the computation, InfoGating is applied at the input layer, which allows models to learn which information is key to solving the task, for example, which pixels matter for a given task.
There are two approaches proposed for InfoGating, namely cooperative and adversarial. The former aims to identify the minimal sufficient information; the latter, to identify any useful information.
Strengths: S1. The method is simple and well explained, easy to read. It sounds correct.
S2. I found the experimental setup very interesting and well chosen. I have seen methods motivated by similar concepts in self-supervised learning and efficient transformers [ATS: Adaptive token sampling for efficient vision transformers. Fayyaz et al. ECCV-22], but not for RL tasks.
S3. Ablation results on feature-space vs. pixel InfoGating show the importance of removing distractors early in the pipeline, supporting the motivation of this work.
S4. Appendix: the visualizations and additional results are good and provide good insights to readers.
Weaknesses: W1. Since InfoGating is presented as a general method to obtain more robust representations that can deal with out-of-distribution observations, I was expecting to see experiments in more dedicated settings like NICO++ [NICO++: Towards Better Benchmarking for Domain Generalization]
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. Can you please explain further what is the metric used on the experiments?
Q2. In the experiments with inverse dynamics models, how consistent is the masking between two consecutive frames?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I found the limitations are well covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and positive feedback on our paper. We appreciate the time and effort you have put into evaluating our work.
**Benchmarking on vision-based datasets**
> Since we focus on RL settings, we did not test on vision-based domain generalization, but this could be interesting in future work. Our new experiments evaluate on multiple domains (multiple background distractors, please see Table 1 in the rebuttal PDF), going beyond the leave-one-out evaluation strategy, as pointed out in the NICO++ paper.
**Q1. Can you please explain further what is the metric used on the experiments?**
> We evaluate using the return obtained after training a linear policy on representations learned with any given method. The representations are trained with distractor-based observations while the policy is evaluated without distractors. Our new experiments show that these conclusions still hold if the policy is evaluated with distractors.
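To make this metric concrete, here is a minimal sketch (all names, shapes, and the toy environment are our own stand-ins, not the paper's code): a frozen representation encoder, a linear policy trained on top of it, and the return summed over a rollout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a frozen encoder and a linear policy on top of it.
W_enc = rng.normal(size=(16, 64))        # frozen encoder: 64-d obs -> 16-d feature
W_pi = 0.1 * rng.normal(size=(4, 16))    # linear policy: feature -> 4-d action

def policy(obs):
    # Act on the frozen representation only.
    return W_pi @ (W_enc @ obs)

def evaluate_return(num_steps=100, seed=1):
    """Roll out the linear policy in a toy environment and sum the rewards
    (the return): this sum is the reported evaluation metric."""
    env_rng = np.random.default_rng(seed)
    obs = env_rng.normal(size=64)
    total = 0.0
    for _ in range(num_steps):
        action = policy(obs)
        total += -float(np.sum(action ** 2))            # toy reward
        obs = 0.9 * obs + 0.1 * env_rng.normal(size=64)  # toy dynamics
    return total
```

In the paper's setup the encoder would be trained on distractor-based observations and then frozen, with only the linear policy fitted on top.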
**Q2. In the experiments with inverse dynamics models, how consistent is the masking between two consecutive frames?**
> We observe masks to be consistent across consecutive frames. The masking network learns to focus on the robot contours, as shown in Figure 7 in the Appendix. It would be interesting to see what happens when inferring masks simultaneously across multiple frames, since temporal consistency could be leveraged to further reduce the number of visible pixels.
We hope that our rebuttal has addressed your questions and concerns, and we appreciate your consideration in revising the evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my questions and concerns. They have added experiments on multiple backgrounds distractors, Table 1, a similar setup of NICO++. I will update my score accordingly. | null | null | null | null | null | null |
Language Semantic Graph Guided Data-Efficient Learning | Accept (poster) | Summary: This paper proposes a general framework for exploiting the semantic information within labels in classification tasks to improve the performance of deep neural networks. The framework is both task-agnostic and model-agnostic, making it applicable to a wide range of classification tasks and modalities. The framework first completes label concepts/descriptions with prompting templates and feeds the resulting sentences into a frozen language model to generate embeddings. These embeddings are subsequently used to construct a semantic graph, which is processed by a graph convolutional network (GCN). When applying the semantic graph to guide the training process, two regularization objectives based on the semantic graph are added in order to enrich the model with label semantics. The goal of the whole framework is similar to prompt learning in NLP, while its applicability to tasks in other modalities improves its novelty and contributions. In experiments, the authors show that their proposed framework outperforms many other baselines based on transfer learning, semi-supervised learning, and data augmentation.
Strengths: 1. The paper itself is clearly written and presented, enabling readers to understand the proposed framework easily.
2. The proposed framework for incorporating semantic information of labels for tasks in various modalities is novel and widely applicable.
3. From the experiment section, the performance of the proposed framework significantly outperforms several baselines based on transfer learning, semi-supervised learning, and data augmentation.
Weaknesses: The proposed framework requires that the target task be a classification task and that each class has explicit and rich semantics. If a class does not have an explicit semantic meaning, the LSG may not work. This requirement limits the applicability of the LSG framework to many classification tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My questions are mainly about the generalization ability of the proposed framework:
1. Can you provide more examples of prompt templates used for generating embeddings?
2. What is the performance on some large-scale datasets (i.e., when the data source is rich)? In Co-tuning's paper, they also conducted experiments on COCO-70, which is considered a large-scale dataset with 1k samples per class.
3. What is the intuition behind adding sample-sample interaction in the augmented graph (i.e., the matrix M)? Does it bring any benefit to LSG?
4. I noticed that all the tasks used in the experiments have at least 50 categories. I am curious if the LSG framework can be applied to binary classification tasks or tasks with a small number of categories (<10). In my opinion, it may be difficult to apply LSG to binary classification because the descriptions of the labels are generally two sides of one statement. For tasks with fewer than 10 categories, it may be challenging to train a GCN. It would be great if the authors could add some discussion about these tasks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss limitations of the work. I do not see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and comments on our submission. Please find our responses to your questions below.
**Q1. Scenarios where label semantics are weak.**
A1. Thanks for your insightful comment. It is a valid concern, as there exist certain scenarios, like defect detection, where different classes are merely represented as "defect 1", "defect 2", and so on. In this case, the label embeddings obtained from a pre-trained language model are less meaningful.
However, we argue that **the idea of SG (semantic graph) guided knowledge transfer may still be applicable if we can construct a graph reflecting the relationship between these classes without the help of the language model**. For example, if we know that some types of defects are caused by similar machine faults, we can use these relations to build a graph akin to a stochastic block model, and still apply the proposed method to guide the primary model with the graph topology.
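A hypothetical illustration of that idea: if defect classes are known to share fault causes, a block-structured class graph can be built without any language model (the group structure below is made up for illustration):

```python
import numpy as np

# Hypothetical grouping of defect classes by a shared machine fault,
# yielding a stochastic-block-model-style class adjacency matrix.
groups = {"fault_A": [0, 1], "fault_B": [2, 3, 4]}
num_classes = 5

A = np.zeros((num_classes, num_classes))
for members in groups.values():
    for i in members:
        for j in members:
            if i != j:
                A[i, j] = 1.0  # connect classes sharing a fault cause
```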
To demonstrate the wide applicability of our method, we investigate an alternative scenario where labels in the training data contain noise. It shares a certain similarity with the situation you mentioned, as part of the training labels can no longer reflect the true nature of the samples. We randomly add label noise to the clean datasets and compare LSG with the fine-tuning baseline.
The table below shows that LSG still outperforms fine-tuning baseline significantly, and results in less accuracy degradation from clean label scenarios. (20% noise means that 20% of the training data are randomly assigned to false labels.)
| Variants| Air-15% | Air-30% | Air-50% | Air-100% | Car-15% | Car-30% | Car-50% | Car-100% | CUB-15% | CUB-30% | CUB-50% | CUB-100% |
| -- | -- |-- |-- |--|--|--| --| -- |--|--|--|--|
| Fine-tuning clean | 41.6 | 57.8 | 68.7 | 80.2 | 41.1 | 65.9 | 78.4 | 87.8 | 51.2 | 64.6 | 74.6 | 81.8 |
| **LSG clean** | **55.6** | **72.0** | **79.5** | **86.7** | **55.4** | **75.5** | **83.8** | **90.7** | **57.7** | **70.6** | **77.5** | **82.2** |
| Fine-tuning 20% noise | 27.2 | 38.8 | 50.7 | 62.6 | 21.7 | 43.6 | 58.1 | 71.8 | 36.2 | 50.1 | 57.4 | 69.7 |
| **LSG 20% noise** | **41.5** | **54.8** | **61.9** | **72.6** | **36.3** | **54.3** | **67.9** | **80.8** | **42.1** | **57.3** | **64.2** | **73.8** |
| Fine-tuning 40% noise | 15.7 | 20.7 | 33.7 | 44.3 | 13.9 | 27.8 | 37.7 | 51.3 | 23.0 | 35.2 | 44.0 | 52.6 |
| **LSG 40% noise** | **27.5** | **35.6** | **44.7** | **52.8** | **23.4** | **38.0** | **46.4** | **58.6** | **27.9** | **39.5** | **47.2** | **56.9** |
In summary, we acknowledge that this is an important future direction for LSG and argue that our proposed method has the potential to adapt to these more challenging scenarios. We will add a discussion of this issue in our paper to motivate future work.
**Q2. Examples for prompt templates.**
A2. We adopt the first twenty original hand-crafted prompt templates provided by CLIP, which includes:
- "This is a bright photo of a {}",
- "This is a bad photo of a {}",
- "This is a photo of many {}",
- "This is a sculpture of a {}",
- "This is a tattoo of a {}" etc.
We discover that this set of prompt templates works well even for the video and audio task, and thus do not manually design new templates.
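A minimal sketch of how such a prompt set expands into the $|T| = mK$ sentences fed to the language model (class names below are hypothetical):

```python
# m templates crossed with K class names yield m*K sentences (|T| = mK).
# Template wording follows the examples listed above; class names are made up.
templates = [
    "This is a bright photo of a {}",
    "This is a bad photo of a {}",
    "This is a photo of many {}",
    "This is a sculpture of a {}",
    "This is a tattoo of a {}",
]
class_names = ["707-320", "727-200", "737-300"]

sentences = [t.format(c) for c in class_names for t in templates]
```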
**Q3. Performance on large-scale COCO-70 dataset.**
A3. Following Co-tuning, we evaluate LSG on COCO-70 (DenseNet-121) and report the results in the table below. **Our method is superior to Co-tuning under every labeling ratio on this large-scale benchmark**, implying a broader application range for LSG.
|Method|COCO-15%|COCO-30%|COCO-50%|COCO-100%|
| -- | :--: | :--: | :--: | :--: |
|Fine-tune| 76.60|80.15|82.50|84.41|
|Co-tuning|77.64|81.19|83.43|85.65|
| LSG| **79.50** | **82.33** | **84.14** | **86.11** |
**Q4. About sample-sample interaction.**
A4. The interaction described in adjacency matrix $M$ creates additional paths for knowledge transfer for each image feature, which affects the optimization of both losses $\mathcal L_{align}$ and $\mathcal L_{r}$.
- For $\mathcal L_{align}$, it means that each image feature is affected not only by label embeddings but also by image features from the same class. To guarantee correct node classification under such influence, consistency between image features is promoted.
- On the other hand, the existence of these sample-sample interactions means that the image feature after GCN refinement (i.e., $\mathcal F(\tilde{h})$) naturally incorporates features from other images and consequently carries richer information. Therefore, $\mathcal L_{r}$ obtains better targets for the original feature $h$ to learn from. The following ablation study, in which we replace $M$ with an identity matrix $I$, supports our argument.
| Variants| *Aircraft-15%* | *Aircraft-30%* | *Aircraft-50%* | *Aircraft-100%* |
| -- | :--: | :--: | :--: | :--: |
| LSG w/ $I$ | 55.0 | 71.4 | 79.1 | 86.1 |
| LSG w/ $M$ | **55.6** | **72.0** | **79.5** | **86.7** |
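The construction contrasted in this ablation can be sketched as follows (the block structure and the one-label-node-per-class linking rule are our simplifying assumptions, not the paper's exact definition):

```python
import numpy as np

def augmented_adjacency(A_label, labels, use_identity=False):
    """Sketch of the augmented graph's adjacency matrix.

    A_label : (n, n) adjacency among the label-embedding nodes
    labels  : (B,) class index of each image feature in the batch,
              appended as B new nodes
    The sample-sample block is M (1 where two samples share a class),
    or the identity I in the ablation variant.
    """
    n, B = A_label.shape[0], len(labels)
    A = np.zeros((n + B, n + B))
    A[:n, :n] = A_label
    # Link each sample node to its class's label node (one node per class
    # assumed here for simplicity).
    for s, c in enumerate(labels):
        A[n + s, c] = A[c, n + s] = 1.0
    labels = np.asarray(labels)
    M = (labels[:, None] == labels[None, :]).astype(float)
    if use_identity:
        M = np.eye(B)          # ablation: no within-batch interaction
    A[n:, n:] = M
    return A
```

With `use_identity=True`, the within-batch paths disappear, which is the "LSG w/ $I$" row above.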
**Q5. LSG on tasks with fewer categories.**
A5. Good question. We must say it is merely a coincidence that all the tasks we evaluate in the paper involve many categories. Here we show that **LSG is still semantically meaningful and works well when the category number is small**. We evaluate single-domain generalization on another classical benchmark, PACS, which has only 7 categories (see Table C in the PDF). We find, surprisingly, that LSG brings more significant performance improvement than on OfficeHome.
As for the case of binary classification, it is correct that the category relationship captured by the semantic graph is no longer useful, since the only thing that matters then is to discriminate between the two classes. However, the label embeddings provided by the pre-trained language model may still be useful to prevent feature distortion.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: Thank you for the detailed rebuttal, It makes me feel clearer toward this paper. After reviewing all other reviewers opinions and the corresponding response from the authors, I decide to remain my current score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank you again for your valuable time and feedback. | Summary: This paper employs a language semantic graph to capture the relationships among different classes, with the hope of alleviating the requirement for extensive training data and, in particular, human supervision. Generally, the paper first builds the language semantic graph with pre-trained language models. After that, it introduces two additional losses, $L_{align}$ and $L_r$, that exploit the otherwise discarded label semantic information as a complement.
Strengths: I have carefully read this paper several times, and I am interested in injecting knowledge into models for effective learning. The strengths can be described as follows:
1. The idea of employing prior knowledge for effective learning is meaningful and interesting, and the proposed method is novel; at least I have not read closely related papers.
2. The experimental results are strong compared with the baseline models.
Weaknesses:
1. The authors do not provide the code, and I am very confused by the key section.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. The final loss function in Eq. (7) does not include the loss $L_{node}$; what is the influence of $L_{node}$?
2. The language semantic graph is built with the label embeddings, so why not use the embeddings directly to capture the semantic relationships among labels, rather than a discrete semantic graph?
3. I am confused about $m$, the size of the prompt set; what is the prompt set?
4. Section 3.2 is very confusing to me: why do the two proposed losses help data-efficient learning? The authors do not give a detailed description of Eq. (5) and Eq. (6), and I am unsure why the two alignment losses are needed and what their expected effect is.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The developed method depends on a pre-trained language model, and sometimes the labels in a task may not have effective embeddings under pre-trained language models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your questions. We will do our best to explain our method here and will thoroughly revise the paper to improve its clarity.
**Q1. Code for better understanding.**
A1. Our code is now provided to AC, please refer to it for better understanding.
**Q2. About the effect of loss $\mathcal L _{node}$.**
A2. The proposed LSG consists of two stages. In the first stage, we train a GCN on the language semantic graph $\mathcal G$. In the second stage, we train the primary model $F$ (i.e., a ResNet) with the aid of the trained GCN. The loss $\mathcal L_{node}$ is only used for training the GCN in the first stage; thus it does not appear in Eq. (7) for the second stage.
$\mathcal L_{node}$ optimizes the GCN to correctly classify each node on the semantic graph $\mathcal G$. By doing this, each node processed by the GCN aggregates discriminative and more comprehensive semantic information from neighboring nodes (see T-SNE visualization of the label embeddings in figure 3(a)).
The trained GCN is then fixed as a graph processor and deployed on the augmented semantic graph $\mathcal G_{aug}$ to pass semantic information to new nodes in the second stage. More specifically, eq. (5) and eq. (6) leverage the ($\mathcal L_{node}$) trained GCN to guide the feature learning of the primary model, which will be explained later.
**Q3. About the prompt set.**
A3. The prompt set is a collection of hand-crafted prompts, each of which can complete a given concept into a sentence. For example, one prompt can be 'This is a photo of {}' or 'This is a sculpture of {}', where we replace '{}' with a category. The size of the prompt set, $m$, determines the number of different sentences we can create for each concept, which consequently determines the number of text embeddings. (That is why we have a total of $|T|=mK$ embeddings for $K$ classes.) Via the prompt set, we can obtain multiple different embeddings for the same category. The advantage of such diversity is fully utilized in the constructed semantic graph, as explained next.
Table below shows how $m$ affects the accuracy. We choose $m=20$ to achieve a balance between performance and cost.
||$m=$5 |10|20|40|80|
|--|:--:|:--:|:--:|:--:|:--:|
|Air-15%|54.8|55.0|55.6|55.7|55.7|
**Q4. Using a semantic graph instead of the original label embeddings**
A4. The proposed semantic graph provides better supervision than using the original embeddings directly. The reason is threefold.
- First, the semantic graph is built upon label embeddings (i.e., $\mathcal K^{(0)}=\mathcal K$), and thus includes all the information that label embeddings have.
- Second, information can be passed among neighboring nodes to produce more discriminative and comprehensive node embeddings (see the T-SNE plot), and further transferred to new image-feature nodes through new links. Therefore, our semantic graph, with the aid of the GCN trained by $\mathcal L_{node}$, allows the image features to capture semantics from the label embeddings.
- Third, we empirically verify that using the semantic graph is more effective than directly supervising primary-model features with label embeddings under different classical alignment strategies. As shown in the table below, both the prototype alignment and contrastive alignment methods, which only leverage the initial label embeddings, perform worse than LSG.
||Air-15%|Air-30%|Air-50%|Air-100%|
|--|:--:|:--:|:--:|:--:|
|Prototype align.|51.1|68.8|76.5|84.6|
|Contrastive align.|54.2|70.7|78.3|85.1|
|LSG|**55.6**|**72.0**|**79.5**|**86.7**|
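The message passing referred to in the second point follows the standard GCN update; a minimal NumPy sketch (our own illustration, not the paper's exact architecture):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W).

    A : (n, n) adjacency, H : (n, d) node features, W : (d, d') weights.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # propagate + ReLU
```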
**Q5. About Section 3.2**.
A5. This section describes the process of injecting label semantic knowledge into the primary model. The motivation is that the common practice of the cross-entropy loss turns labels into one-hot vectors, during which the semantic relationships between categories are erased. (All these one-hot vectors are perfectly perpendicular to each other.) Thus, introducing the lost semantic information into the network makes better use of the training data and promotes data-efficient learning. Taking a vision model as an example, our goal is to align the image features towards the semantic space of label embeddings provided by the PLM, in which the relationships between categories are better reflected than in the one-hot space.
The alignment is conducted both implicitly (via Eq. (5)) and explicitly (via Eq. (6)), and their effects are explained as follows:
- Eq. (5) is the node-classification loss on the new nodes built from image features $h$. Since the GCN trained by $\mathcal L_{node}$ in the first stage can already classify the label embeddings (original nodes) correctly and is now kept fixed, lowering the cost of Eq. (5) forces the image features to become similar to their corresponding label embeddings, achieving implicit feature alignment.
- Eq. (6) computes the $l_2$ distance between the original image features and the new features encoded by the GCN, therefore explicitly pushing the former towards the latter (which is detached). Note that the new features have aggregated label embeddings from their neighbors in the augmented semantic graph and thus are desirable representations for the vision model. Please refer to the algorithm provided in the PDF for the complete training process.
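The two objectives can be sketched as follows (a simplified NumPy illustration under our own naming; the actual losses operate on GCN outputs over the augmented graph):

```python
import numpy as np

def cross_entropy(logits, labels):
    """Softmax cross-entropy over image-feature nodes (the Eq. (5) analogue):
    the frozen GCN must classify the new nodes correctly, implicitly pulling
    image features toward their label embeddings."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -float(log_probs[np.arange(len(labels)), labels].mean())

def reg_loss(h, h_refined_detached):
    """Eq. (6) analogue: l2 distance pushing the original feature h toward
    the (detached) GCN-refined feature."""
    return float(np.mean(np.sum((h - h_refined_detached) ** 2, axis=1)))
```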
The ablation study conducted in the paper shows the effectiveness of both losses, and we present it here.
||LSG w/o $\mathcal L_{align},\mathcal L_r$|LSG w/o $\mathcal L_{align}$|LSG w/o $\mathcal L_r$|LSG|
|--|:--:|:--:|:--:|:--:|
|Air-15%|41.6|44.2|50.7|**55.6**|
**Q6. On tasks without effective language embeddings.**
A6. This is a solid concern. Although LSG cannot be directly applied to these tasks, we argue that our idea of semantic graph guided knowledge transfer is applicable if we can construct a graph reflecting the relationship between classes without PLM. In this way, our proposed method can once again guide the model to learn the category relations.
We hope these responses address your concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reponse. Some concern have been solved.
I don't know how can I get code from AC.
---
Reply to Comment 1.1.1:
Title: We have posted a new official comment and asked for permission to post code link
Comment: According to the official notification, we cannot post our code link here. (In notification) "All the texts you post (rebuttal, discussion and PDF) should not contain any links to external pages. If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment."
To address this issue, we have posted a new official comment visible to all reviewers and the AC at the top, and asked for the AC's permission to post our code link there. We believe you will be informed once we get the permission. | Summary: The paper addresses the importance of labels' semantic meanings when training models. First, the framework constructs an LSG graph: node features are text embeddings generated by language models, and edges are constructed from a similarity matrix. After that, a GCN is trained to aggregate node features of the LSG with the node classification loss. The GCN is then used for data-efficient learning (regularization loss and alignment loss).
Strengths: * The motivation and proposed method is sound.
* The details are well illustrated and are easy to understand.
* Experiments cover a variety of tasks and well support the claims. Ablation studies also show the effectiveness of each component.
Weaknesses: * Since the LSG is the critical component of the proposed framework, the quality analysis of LSG is preferred. For example, a visualization of LSG showing how well it grasps the semantics of labels.
* The notation should be clearer. The $F$, $C$ and $\mathcal{C}$, $\mathcal{F}$ are confusing on first reading. I would suggest using subscripts to distinguish them.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * How to make the inference? Is GCN used in the inference or only the primary model is used?
* I am curious about the stability of the training. The augmented graph contains a batch of inputs, possibly introducing interference between samples after GCN. Will this affect the training quality/stability? For example, will the training be affected much if we change the batch size or random seed?
* Is the primary model feature encoder the same as the model used to generate node embeddings? I.e., are “LM” and “F” in the figure 1 the same?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors do not include limitations. Some limitations I can think of:
1. The current structure may have much more training burden than vanilla fine-tuning.
2. The method is only applicable for classification methods.
3. Authors could also try experiments on text tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and comments on our submission. Please find our responses to your questions below.
**Q1. Quality analysis of LSG.**
A1. We provide two analyses to thoroughly evaluate LSG.
- We show the T-SNE visualization of the initial node embeddings and the GCN-refined node embeddings based on LSG (see Figure 3(a)). We can see that in the initial embedding space there exist a few "prompt clusters" where label embeddings from different categories are clustered together because of some poorly-crafted prompt templates. In contrast, **the GCN-refined embedding space consists of cleaner and more compact clusters, demonstrating that LSG improves the quality of the label embeddings** (which will in turn benefit the primary network via the losses $\mathcal L_{align}$ and $\mathcal{L}_r$).
- Secondly, please recall that the loss $\mathcal{L}_r$ uses the image feature refined by LSG (i.e., $\mathcal{F}(\tilde{h})$) as the target to supervise the original one. Therefore, the improvement brought by $\mathcal{L}_r$ attests to the rich semantics captured in LSG.
|Variant |Air-15%|Car-15%|CUB-15%|
|--|:--:|:--:|:--:|
|LSG w/o $\mathcal L_r$ | 44.2 |43.3|53.9|
|LSG | **55.6** | **55.4** | **57.7**|
**Q2. Change of notations.**
A2. Thanks for your suggestion. We change the notation $\mathcal{C}$ and $\mathcal{F}$ to $\mathcal{C}_g$ and $\mathcal{F}_g$ to better represent the classifier and the encoder of the GCN, and we keep the notation of $C$ and $F$ for the primary network for simplicity.
**Q3. About model inference.**
A3. The GCN (and correspondingly the projector) is not used during inference; the classification result is still produced by the classifier. For this reason, **no extra computational cost is incurred at inference time**.
**Q4. Training stability.**
A4. To test the stability of our method, we conduct three experiments.
- We plot the three loss curves for the primary training stage in Figure 3(b) and show that all the losses decrease steadily and converge.
- We run LSG five times with different random seeds. The accuracy curve in Figure 3(c) shows that LSG maintains a small accuracy variance throughout the training process.
- As demonstrated in the following table, adjusting the batch size within a certain range has minimal effect on performance. In fact, the sample-sample interaction within a mini-batch, also mentioned by reviewer A9AK, is found to benefit performance, because such interaction allows each image feature to aggregate more information.
The table below shows the performance of LSG with different batch sizes.
| Batch Size| StanfordCars-15% | StanfordCars-30% | StanfordCars-50% | StanfordCars-100% |
| -- | :--: | :--: | :--: | :--: |
| 24 | 55.3 | 75.4 | **84.1** | 90.7 |
| 48 (in paper) | 55.4 | **75.5** |83.8|90.7|
| 64 | **55.5** | 75.2| 83.8 | **90.9** |
The table below shows a comparison between enabling sample-sample interaction (w/ $M$) and disabling it (w/ identity matrix $I$).
| Variants| Aircraft-15% | Aircraft-30% | Aircraft-50% | Aircraft-100% |
| -- | :--: | :--: | :--: | :--: |
| LSG w/ $I$ | 55.0 | 71.4 | 79.1 | 86.1 |
| LSG w/ $M$ | **55.6** | **72.0** | **79.5** | **86.7** |
**Q5. LM is different from $F$.**
A5. "LM" is a pre-trained language model such as BERT that encodes sentences containing concepts into text embeddings (node embeddings). The primary model's feature encoder $F$ is a model from a non-language modality, such as a ResNet. We will revise our paper to clarify the meaning of "LM" in Figure 1.
**Q6. About the extra training burden compared to vanilla fine-tuning.**
A6. The proposed LSG does not add much training burden, for the following reasons.
- Firstly, both GCN and the projector added to the network are lightweight modules with minimal parameters compared to the primary model.
- Secondly, training the GCN in stage one requires less than 3 minutes, and the trained GCN can always be reused to guide different primary models on the same task. Moreover, computing $\mathcal L_{align}$ and $\mathcal L_r$ is fast and adds little training time.
The following table shows the comparison between LSG and vanilla fine-tuning (ResNet-50 is adopted as the backbone). We conclude that the extra training cost introduced by LSG is modest.
|Method | Parameters (training) | Parameters (inference) | Training time |
| -- | :--:|:--:|:--:|
| Fine-tuning| 25.6M | 25.6M|31.9 min|
|LSG|28.8M |25.6M|32.2 min|
**Q7. Extension to a broader application range.**
A7. Good comment! In fact, we are actively investigating extending LSG to the image semantic segmentation task. As shown in the table below, we conduct preliminary experiments on the standard Pascal VOC dataset under the semi-supervised setting with different labeling ratios, and find that LSG is also beneficial for data-efficient segmentation. We will leave further studies to future work.
| Network| Method| 1/16 (662 labeled) | 1/8 (1323 labeled) | 1/4 (2645 labeled) |
| -- | -- | :--: | :--: | :--: |
| DeepLabV3+ | Suponly | 64.8 | 68.3 | 70.5 |
| + | PseudoLabel (baseline) | 69.4 | 72.1 | 73.9 |
| ResNet-50 | **LSG (extended)** | **71.8** | **74.6** | **75.3** |
As for text tasks, the current method may require some modification before it can be applied. This is because the current semantic graph is constructed using a pre-trained language model and thus may be less helpful when the primary model is itself a language model. Still, we point out that alternative ways to build the semantic graph could be proposed, and our alignment strategy in Section 3.2 could then improve performance on language models as well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! I will raise my confidence score.
One remaining question (which was also mentioned by some other reviewers) is whether the GCN is necessary. Since the training time in stage 1 is so short, the training objective may be quite easy. I am unsure whether the GCN uses the embeddings as a "shortcut" to do classification and ignores edges.
---
Reply to Comment 1.1.1:
Title: Experiments that prove our GCN does not ignore edges and is necessary
Comment: Thanks for your valuable feedback. To answer your question, similar to our response to reviewer ZDzt, we would like to demonstrate from three aspects that our GCN classifies nodes **according to both the initial node features and the graph topology** (it does not ignore edges).
- We conduct a new ablation study as follows.
- The GCN is first trained using the standard procedure in our first stage. Then we change the adjacency matrix $A$ of the original graph to an identity matrix $I$ (in other words, we erase all the topology information and leave the node features unchanged). We use the trained GCN to predict the nodes on the new graph and denote this variant as *GCN w/ $I$*. **We observe significant accuracy drops compared to the prediction accuracies on the original graph** (*GCN w/ $A$*). This observation shows that the trained GCN actually depends on the adjacency matrix $A$, rather than relying solely on the initial node features.
- For a better comparison, we train a linear classifier on the node features, which is equivalent to a GCN "using the shortcut" as you mentioned. The results in the table below show that the linear classifier performs better than our GCN when only node features are provided, yet underperforms the GCN when the topology information is included. Therefore, we conclude that the GCN trained by $\mathcal L_{node}$ does not simply use the shortcut.
- | Variant| *Aircraft* | *StanfordCars* | *CUB200* |
| --| :--: | :--: | :--: |
| GCN w/ $A$ | 100 | 99.7 | 100 |
| GCN w/ $I$ | 89.7 | 90.9 | 93.5 |
|linear probe | 96.4 | 98.8 | 100 |
- We refer back to the results in Fig. 2(d), reported below (the GCN accuracy is tested on a validation graph based on another node feature set). Please note that our designed graph includes a few edges connecting nodes from different classes, whose amount is controlled by the cross-label edge ratio. Therefore, different ratios correspond to different graph topologies on top of the same node feature set. We observe that the accuracy of the GCN is influenced by the edge ratio, namely the topology information. Moreover, GCNs trained on different topologies have different impacts on the primary model accuracy in the 2nd stage, all proving that the GCN pays attention to the edge information.
- |Cross-label Edge Ratio|0.1%|0.2%|0.3%|2.0%|
| -- | :--: | :--: | :--: | :--: |
| GCN acc. (validation graph) | 99.8 | 99.6 | 99.1 | 75.5 |
| primary model acc.| 70.7 | 71.2 | 71.3 | 68.7 |
- The T-SNE visualization also supports our claim. In the original node feature space, there exist a few "prompt clusters" that are hard to discriminate by class. Specifically, each sentence we feed into the PLM consists of a prompt template and a label. In some cases, the prompt template dominates the text embedding and overshadows the class-discriminative information. However, the GCN-refined embedding space no longer has such prompt clusters, **demonstrating the effect of graph edges that connect these nodes to others sharing the same label**.
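The adjacency-ablation above (*GCN w/ $A$* vs. *GCN w/ $I$*) can be illustrated with a minimal one-layer graph-convolution sketch (toy graph and weights, hypothetical, not the trained model): feeding the same layer the identity matrix instead of $A$ removes all message passing, so the output falls back to a per-node linear map of the features.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One simplified graph-convolution layer: row-normalized neighbor
    averaging followed by a linear map."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / deg) @ feats @ weight

# Toy graph: nodes 0 and 1 are connected (same class), node 2 is isolated
A = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])
I = np.eye(3)
X = np.array([[1., 0.], [0., 1.], [0., 1.]])   # initial node features
W = np.eye(2)                                   # identity weight for clarity

out_A = gcn_layer(A, X, W)   # nodes 0 and 1 are smoothed toward each other
out_I = gcn_layer(I, X, W)   # no message passing: output equals X @ W
```

With $I$ the layer reduces to a per-node read-out, which is exactly the behavior the *GCN w/ $I$* variant probes; the accuracy gap between the two variants therefore measures how much the trained model relies on topology.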
Next, we show that the trained GCN is necessary for the 2nd stage by showing the full results of the prototype alignment variant.
Prototype alignment refers to aligning image features with prototypes of the original label embeddings; it therefore only leverages the embedding information and ignores the graph topology. The results show that **prototype alignment is less beneficial than our GCN on all datasets**.
|Method| Car-15% | Car-30% | Car-50% | Car-100% | CUB-15% | CUB-30% | CUB-50% | CUB-100% |
|--|--|--|--|--|--|--|--|--|
|Prototype Alignment| 52.4 | 73.0| 82.6| 89.5 | 56.0| 70.1| 76.8| 81.9 |
|LSG| **55.4** | **75.5** | **83.8** | **90.7** | **57.7** | **70.6** | **77.2** | **82.2** |
| Source domain|Ar|Ar|Ar|Ar|Cl|Cl|Cl|Cl|Pr|Pr|Pr|Pr|Rw|Rw|Rw|Rw| Avg. ID| Avg. OOD|
|--|:--:|:--:| :--: | :--: | :--: | :--: | :---: | :--: | :--: | :--: | :---: | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| Target domain | **Ar**|Cl| Pr| Rw | **Cl** | Ar | Pr | Rw | **Pr** | Ar | Cl | Rw |**Rw**|Ar|Cl|Pr| Avg. ID| Avg. OOD|
| Prototype Alignment | 85.0 | 56.9 | 71.3 | 78.1 | 86.2 | 69.5 | 72.9 | 75.9 | 94.3 | 61.9 | 50.4 | 80.6 |90.7|71.8|55.9|81.0|89.0|68.8|
| Contrastive Alignment | 85.6 | 57.1 | 71.6 | 78.5 | **86.3** | 69.9 | 73.0 | 76.7 | 95.0 | 62.5 | 52.1 | 80.7 |90.8|72.2|56.2|81.4|89.4|69.3|
| **LSG**| **85.8** | **57.7** | **74.0** | **79.9** | **86.3** | **71.7** | **75.4** | **77.8** | **95.1** | **65.8** | **54.7** | **82.1** |**91.2**|**73.8**|**58.0**|**82.3**|**89.6**|**71.1**|
Finally, **$\mathcal L_r$ cannot be applied without the GCN in the 2nd stage**, whereas its effectiveness is demonstrated in the ablation study in Table 6. In conclusion, we argue that the $\mathcal L_{node}$-trained GCN is necessary for our proposed method.
Experiments were conducted on seven standard datasets covering images, videos, and audios, using several deep neural networks with different architectures and pretraining datasets. The results show that LSG significantly outperforms other methods, especially when labeled data is scarce. It also demonstrates promising potential in semi-supervised settings, achieving the best performance across all labeling rates and datasets. When applied to self-supervised pretrained models, LSG shows consistent gains. It also improves model performance on both in-distribution and out-of-distribution samples, indicating that label semantic relations help the model learn more robust features.
In video and audio experiments, LSG consistently improves the fine-tuning accuracy across all tasks with limited labeled samples. It outperforms other methods, boosting accuracy significantly. For audio experiments, LSG achieves an average of 5.56% accuracy enhancement from the baseline, demonstrating its wide applicability across various modalities.
"LSG consists of two parts: an auxiliary graph neural network that extracts knowledge from the semantic graph and two novel optimization objectives that transfer the knowledge to primary models." The authors demonstrate that LSG is applicable on image, video and audio models and brings significant performance gains to the model under Transfer Learning and Semi-Supervised Learning scenarios.
Strengths: * Innovative Approach: The paper introduces a novel method, the Language Semantic Graph (LSG), which leverages semantic information from labels to guide the training of machine learning models. This approach offers a new perspective on data-efficient learning that has not been extensively explored in previous research.
* Versatility Across Modalities: The LSG method is applicable across various modalities, including image, video, and audio. This wide applicability demonstrates the robustness and flexibility of the proposed method.
* Superior Performance: The LSG method shows significant enhancement in performance compared to other data-efficient learning approaches in both Transfer Learning and Semi-Supervised Learning scenarios. This is a strong indication of the effectiveness of the proposed method.
* Robustness to Data Scarcity: The LSG method performs particularly well when labeled data is scarce, which is a common challenge in machine learning. This makes it a valuable tool for scenarios where data collection and labeling are costly or impractical.
* Improved Out-of-Distribution Performance: The LSG method improves model performance on both in-distribution and out-of-distribution samples. This suggests that the label semantic relations help the model learn features that are more robust to distribution shift, enhancing the model's generalizability.
Weaknesses: The effectiveness of the LSG method relies heavily on the quality and semantic richness of the labels. In scenarios where labels are sparse, ambiguous, or not well-defined, the performance of the LSG method could be compromised.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: *The LSG method relies heavily on the quality and semantic richness of the labels. How does the quality of the labels impact the performance of the LSG method? Could the LSG method be adapted to work effectively with less informative or ambiguous labels, and if so, how?
* The paper primarily focuses on classification tasks. Could the LSG method be adapted or extended to other tasks, and if so, what modifications would be necessary?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: * The LSG method relies heavily on the quality and semantic richness of the labels. If the labels are not well-defined, sparse, or ambiguous, the performance of the LSG method could be compromised. This dependence on high-quality labels could limit the applicability of the method in certain scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive feedback and comments on our submission. Please find our responses for your questions below.
**Q1. Performance evaluation on low quality labels.**
A1. Thanks for the good comment. To investigate the effectiveness of LSG in low-quality-label scenarios, we simulate label corruption by manually adding label noise to the three original datasets in Table 1, following the standard protocol in learning-from-noisy-labels studies. The results are shown in the following table. (20% noise means that 20% of the training data are randomly assigned false labels.)
| Variants | Air-15% | Air-30% | Air-50% | Air-100% | Car-15% | Car-30% | Car-50% | Car-100% | CUB-15% | CUB-30% | CUB-50% | CUB-100% |
| --------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Fine-tuning clean | 41.6 | 57.8 | 68.7 | 80.2 | 41.1 | 65.9 | 78.4 | 87.8 | 51.2 | 64.6 | 74.6 | 81.8 |
| **LSG clean** | **55.6** | **72.0** | **79.5** | **86.7** | **55.4** | **75.5** | **83.8** | **90.7** | **57.7** | **70.6** | **77.5** | **82.2** |
| Fine-tuning 20% noise | 27.2 | 38.8 | 50.7 | 62.6 | 21.7 | 43.6 | 58.1 | 71.8 | 36.2 | 50.1 | 57.4 | 69.7 |
| **LSG 20% noise** | **41.5** | **54.8** | **61.9** | **72.6** | **36.3** | **54.3** | **67.9** | **80.8** | **42.1** | **57.3** | **64.2** | **73.8** |
| Fine-tuning 40% noise | 15.7 | 20.7 | 33.7 | 44.3 | 13.9 | 27.8 | 37.7 | 51.3 | 23.0 | 35.2 | 44.0 | 52.6 |
| **LSG 40% noise** | **27.5** | **35.6** | **44.7** | **52.8** | **23.4** | **38.0** | **46.4** | **58.6** | **27.9** | **39.5** | **47.2** | **56.9** |
The results show that both methods are influenced by label noise and exhibit performance degradation. Still, LSG outperforms the fine-tuning baseline significantly at each noise level, and suffers relatively less accuracy degradation from the clean-label scenario. The possible reason behind such robustness is that **LSG regularizes the image feature space to maintain the category relationships, which prevents the feature encoder from learning a heavily distorted feature space by overfitting to the noisy data**. Moreover, methods that identify and remove noisy labels can be integrated into LSG in these scenarios to improve its performance.
This experiment shows the potential of using LSG in scenarios with low-quality labels. Still, we acknowledge that there are other scenarios where label quality is unsatisfactory, and we leave them for future study.
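The corruption protocol described above can be sketched as follows (a hypothetical helper, not the authors' code): a fixed fraction of the training labels is reassigned uniformly at random to a *different* class, the standard symmetric label-noise setup.

```python
import random

def corrupt_labels(labels, num_classes, noise_ratio, seed=0):
    """Randomly reassign `noise_ratio` of the labels to a different class,
    mimicking the symmetric label-noise protocol."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_noisy = int(len(labels) * noise_ratio)
    for idx in rng.sample(range(len(labels)), n_noisy):
        wrong = [c for c in range(num_classes) if c != noisy[idx]]
        noisy[idx] = rng.choice(wrong)       # guaranteed to differ
    return noisy

clean = [0, 1, 2, 3, 4] * 20                 # 100 toy labels, 5 classes
noisy = corrupt_labels(clean, num_classes=5, noise_ratio=0.2)
flipped = sum(c != n for c, n in zip(clean, noisy))   # exactly 20 flips
```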
**Q2. Extension to semantic segmentation task.**
A2. Good question! We are actively investigating the extension of LSG to semantic segmentation. The table below presents the results of applying LSG to the classical Pascal VOC dataset under semi-supervised learning, with labeling ratios of 1/16, 1/8 and 1/4. Our current extension involves:
- Both losses $\mathcal L_{align}$ and $\mathcal L_{r}$ are applied on the feature map produced by the ResNet-50 backbone in a similar manner as in classification tasks.
- Our key modification is to **group neighboring pixel features together to reduce the number of features added to the language semantic graph each time**, since there would be too many new nodes if every pixel feature were treated independently in dense prediction tasks. In fact, if all pixel features were counted individually, the total number of new nodes within a mini-batch would exceed the number of original label-embedding nodes, which affects the alignment.
We can observe from the following table that LSG improves the baseline in data-efficient segmentation scenarios. As this is only a preliminary study, we will continue investigating extensions of the proposed LSG.
|Network| Method | 1/16 (662 labeled) | 1/8 (1323 labeled) | 1/4 (2645 labeled) |
|--| -- | :--: | :--: | :--: |
|DeepLabV3+| Suponly | 64.8 | 68.3 | 70.5 |
|+| PseudoLabel (baseline) | 69.4 | 72.1 | 73.9 |
|ResNet-50| LSG (extended) | **71.8** | **74.6** | **75.3** |
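The pixel-grouping modification can be sketched as simple spatial average pooling over the backbone feature map (hypothetical shapes and patch size; the paper's actual grouping may differ):

```python
import numpy as np

def group_pixel_features(feat_map, patch):
    """Average-pool an (H, W, C) feature map over non-overlapping
    patch x patch windows, so each window contributes one node to the
    semantic graph instead of patch**2 individual pixel nodes."""
    h, w, c = feat_map.shape
    assert h % patch == 0 and w % patch == 0
    grouped = feat_map.reshape(h // patch, patch, w // patch, patch, c)
    return grouped.mean(axis=(1, 3))          # (H/patch, W/patch, C)

fmap = np.random.rand(32, 32, 8)              # toy backbone output
nodes = group_pixel_features(fmap, patch=8)
# 32*32 = 1024 pixel features reduced to 4*4 = 16 grouped features
```

This keeps the number of new nodes per mini-batch well below the number of label-embedding nodes, which is the constraint motivating the grouping.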
We welcome further questions that you find worth discussing!
---
Rebuttal Comment 1.1:
Title: Thanks for the update, the authors addressed my concerns well. So I update the score from 6 to 7
Comment: Thanks for the update, the authors addressed my concerns well. So I update the score from 6 to 7
---
Reply to Comment 1.1.1:
Comment: We thank you for your valuable time and feedback. | Rebuttal 1:
Rebuttal: Dear Reviewers and Area Chair,
We extend our utmost gratitude for your dedicated commitment to meticulously assessing our manuscript and for providing us with your profound insights. Your considered evaluations serve as a testament to the rigor and importance of our research.
We are heartened by the reviewers' recognition of the novelty (biPo, 3r7Y, A9AK), strong motivation (ZDzt, pDMD), clear presentation (biPo, pDMD, A9AK), and significant contribution in the results (ZDzt, biPo, pDMD, 3r7Y, A9AK) of our work.
In our comprehensive individual responses to the reviewers, we have diligently addressed all raised questions and concerns. We eagerly welcome any further inquiries that may arise.
With highest regards,
Authors
Pdf: /pdf/3d8fbc43d6e0af2e566a73b1e3a27da4f1362275.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The typical supervised training ignores the semantic information in the labels. This paper proposes to use the label semantic information during fine-tuning. 1) A label semantic graph is constructed by calculating the sentence-embedding similarity of the label descriptions; 2) a GNN is trained on the semantic graph with a node classification objective; 3) the GNN is frozen and guides the training of a visual classifier with two additional loss terms. Specifically, at training time, for each data batch, we can create a graph with the images and labels as nodes; the GNN can be used to encode the graph, with the data representations treated as the initial node features; one loss term classifies the final node features while another minimises the distance between the data representations and the final node features after the GNN.
The experiments are extensive, covering image, audio, and video classification. The approach can be used in a semi-supervised setting as well, where the pseudo-label data are only used in the additional loss terms and do not bias the classification head. The performance improvement is significant in many cases.
Strengths: 1. The high-level motivation makes sense. It is desirable to utilise the semantic information within the labels during training.
2. The performance improvement is significant in many settings and the evaluation is comprehensive.
Weaknesses: 1. While the idea of using label semantic information is good, the presented solution seems ad-hoc and not well motivated.
Conceptually, the new information we introduce is by constructing the semantic graph A with a pre-trained text encoder; after this, it seems rather unclear what motivates the two-step approach with a GNN.
In addition, I find the training objective of the GNN unclear (see Question).
The role of multiple language prompts is also unclear. Is it because using multiple prompts builds a more informative graph?
2. One straightforward way to utilise the label semantic information is to directly use a language encoder to encode the labels and then fine-tune the language encoder along with the data encoder (this is different from the Language Head baseline where the language embeddings are fixed). According to ELEVATER, this greatly outperforms vanilla fine-tuning.
I wonder if the authors have considered this baseline, especially in the setting of Table 2: can we use CLIP image encoder + CLIP language encoder? Basically, if our pre-trained model already considers the semantic information of the labels (e.g., CLIP), does the approach still bring much improvements?
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For training the GNN, isn’t the node classification objective trivial? The objective is basically recovering the concept mentioned in the text prompt, which seems like an easy-to-solve task; the node feature at the first layer (K^0) should be enough to perfectly solve the task. This is different from the attribute classification task in typical GNN training, where there is not enough information to determine the node label without the adjacency matrix.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No evident negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your endorsement of our idea and the insightful feedback. Please find our responses for your questions below.
**Q1. Motivation of the two-step approach with GNN.**
A1. We take a vision model as an example of the primary encoder $F$ to explain the idea. Our main goal is to align the image features encoded by $F$ toward the semantic space of label embeddings obtained from a pre-trained language model. The GNN-based two-step approach promotes the alignment for the following two reasons:
- In the first stage, we adopt multiple language prompts to increase the information and diversity of the label embeddings for each category. Yet a side effect is that some poorly-crafted prompt templates may generate "prompt clusters" in the embedding space, due to their dominant influence on the label embedding over the concept we want to distinguish (see Figure 3(a)). To minimize their negative impact, **the GNN, with its strength in message passing, refines the initial node embeddings by leveraging the graph topology**. We train the GNN to obtain better label embeddings that are both semantically discriminative and structure-preserving. We observe from the T-SNE in Figure 3(a) that the refined node embeddings are more discriminative and preserve semantics.
- More importantly, for the second stage, the GNN acts as the key component of the proposed alignment method. **By joining the image features as new nodes to the graph, we directly control the knowledge transfer process on the augmented graph via the GNN**. To show the superiority of the GNN-based alignment method, we conduct an ablation study considering two classical alignment strategies as alternatives: *Prototype alignment* refers to minimizing the distance between image features and their corresponding category prototype, computed as the average of the label embeddings; *Contrastive alignment* refers to using a supervised contrastive loss that treats label embeddings from the same category as positive samples. The results below validate the superiority of GNN-based alignment.
| Variants| Aircraft-15% | Aircraft-30% | Aircraft-50% | Aircraft-100% |
| -- | :--: | :--: | :--: | :--: |
| Prototype alignment | 51.1 | 68.8 | 76.5 | 84.6 |
| Contrastive alignment | 52.2 | 69.7 |77.3|85.1|
| LSG | **55.6** | **72.0** | **79.5** | **86.7** |
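For concreteness, the *prototype alignment* baseline can be sketched as follows (a hypothetical, simplified version): each image feature is pulled toward the mean of its class's label embeddings, with the graph topology ignored entirely.

```python
import numpy as np

def prototype_alignment_loss(image_feats, labels, label_embs, emb_labels):
    """Mean squared distance between each image feature and the prototype
    (average) of its class's label embeddings; topology is ignored."""
    loss = 0.0
    for feat, y in zip(image_feats, labels):
        proto = label_embs[emb_labels == y].mean(axis=0)
        loss += np.sum((feat - proto) ** 2)
    return loss / len(labels)

# Toy data: 2 classes, 2 prompt embeddings per class
embs = np.array([[1., 0.], [0.8, 0.2], [0., 1.], [0.2, 0.8]])
emb_y = np.array([0, 0, 1, 1])
feats = np.array([[0.9, 0.1], [0.1, 0.9]])   # already near their prototypes
loss = prototype_alignment_loss(feats, [0, 1], embs, emb_y)
```

Because the prototype collapses each class to a single point, this baseline discards the inter-class structure that the GNN-based alignment exploits.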
**Q2. Explanation of the GNN training objective.**
A2. Thanks for your question. It is a good question, and the answer reflects the key technical innovation of LSG. If we only consider node classification on the original graph $\mathcal G$, leveraging the initial label embeddings is sufficient. However, the purpose of the GNN is not only to classify these nodes, but more importantly to guide image features to learn label semantics. Therefore, we design a pair of losses, $\mathcal L_{node}$ and $\mathcal L_{align}$, that work cooperatively to align image features toward the corresponding label embeddings in the graph, which differs from the classical purposes of GNNs.
Specifically, due to the large difference between image features and label embeddings at the beginning of the second stage, even a well-trained GNN cannot classify the new nodes correctly. Since the GNN is kept fixed, the only way to minimize $\mathcal L_{align}$ is to update $F$ to generate image features more similar to the label embeddings, allowing the GNN to recognize them. Therefore, the purpose of $\mathcal L_{node}$ is to prepare a GNN that is ready for $\mathcal L_{align}$ in the second stage. As shown in the following table, if we use a randomly initialized GCN in the second training stage (i.e., w/o $\mathcal L_{node}$), performance drops to a level similar to the variant where $\mathcal L_{align}$ is excluded.
|Variants | Cars-15% | Cars-30% | Cars-50% | Cars-100% |
| -- | :--: | :--: | :--: | :--: |
| LSG w/o $\mathcal L_{node}$ (1st stage) | 48.6 | 68.2 | 80.4 | 89.1 |
| LSG w/o $\mathcal L_{align}$ (2nd stage) | 48.9 | 68.5 |80.5|89.3|
| LSG w/ trained GCN| **55.4** | **75.5** | **83.8** | **90.7** |
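The cooperation between the two stages can be sketched with a toy example (hypothetical, not the paper's model): a frozen "read-out" trained in stage one classifies new feature nodes, and since its parameters are fixed, minimizing the classification loss in stage two can only move the image feature toward the region the read-out was trained to recognize.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Frozen stage-one read-out: class scores = similarity to fixed label anchors
anchors = np.array([[1., 0.], [0., 1.]])        # frozen after stage one

def align_loss(feat, y):
    """Cross-entropy of the frozen classifier on a new image-feature node."""
    probs = softmax(anchors @ feat)
    return -np.log(probs[y])

# Stage two: only the image feature (i.e., the encoder F) is updated
feat, y, lr = np.array([0.1, 0.4]), 0, 0.5
for _ in range(50):
    probs = softmax(anchors @ feat)
    grad = anchors.T @ (probs - np.eye(2)[y])   # d(align_loss)/d(feat)
    feat -= lr * grad                           # anchors stay untouched
```

After the updates the feature has moved toward the class-0 anchor and the frozen classifier's loss on it has dropped, mirroring how $\mathcal L_{align}$ drives $F$ while the GCN stays fixed.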
**Q3. Improved baseline and CLIP$_{text}$ as language model**.
A3. Good comment. As you suggested, we conduct the following experiments under the same setting as table 2 with two new variants.
- *Tunable language head* refers to a stronger baseline where the language-embedding-initialized classifier is made tunable. We draw the same conclusion as ELEVATER that the tunable head achieves better performance than random initialization. Yet our method still outperforms this language-augmented baseline on all tasks by a large margin. We will add this new baseline to the paper.
- *CLIP$_{text}$* refers to substituting the language model from BERT to the CLIP language encoder. We find it achieves performance similar to the BERT encoder. The reason behind this effectiveness is that, although the CLIP image encoder already considers the label knowledge contained in CLIP$_{text}$ during pre-training, the fine-tuning process distorts the image feature space [31], and LSG serves as a regularization that prevents the model from forgetting such label semantic knowledge.
Please refer to Table A in the pdf for results.
**Q4. The effect of multiple language prompts.**
A4. We agree with your opinion. Different prompts put the label concept into different contexts and consequently produce divergent label embeddings. Such diversity leads to increased information and improved performance. As shown in the table below, increasing the number of prompts $m$ leads to better performance. However, considering that the cost of building and processing the semantic graph grows quadratically with $m$, and noticing that the performance gain is no longer significant when $m>20$, we strike a balance between accuracy and training cost by setting $m=20$.
| Task| $m=$5 | 10 | 20 | 40 | 80 |
| -- | :--: | :--: | :--: | :--: | :--:|
| Air-15% | 54.8 | 55.0 | 55.6 |55.7| 55.7 |
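The multi-prompt construction can be sketched as follows (the templates shown are hypothetical examples; the paper uses $m=20$ templates per label):

```python
# Hypothetical prompt templates; each label is embedded once per template.
templates = [
    "a photo of a {}.",
    "a blurry photo of a {}.",
    "a close-up photo of a {}.",
]

def build_prompts(labels, templates):
    """Return one sentence per (label, template) pair to feed the LM."""
    return [t.format(lbl) for lbl in labels for t in templates]

prompts = build_prompts(["Boeing 707", "Cessna 172"], templates)
# 2 labels x 3 templates -> 6 prompt sentences
```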
We hope these responses address your concerns, and welcome any further questions that you find worth discussing!
---
Rebuttal Comment 1.1:
Title: Method Contribution Still Unclear
Comment: Dear authors,
Thank you for the detailed rebuttal. For Q2, while I agree that L_align makes sense, my question was mainly targeted at the first-stage training. As you mentioned, when training only with L_node in the first stage, the training objective appears very simple: leveraging the initial label embeddings (K^0) seems sufficient to do node classification; I fail to see why we "train GNN to obtain better label embeddings that are both semantic discriminative and structure-preserving". If the node classification can be perfectly performed with only node features at K^0, without information from the adjacency matrix, then I think the network is not encouraged to preserve/utilize the graph structure? I.e., the GCN may simply reduce to a network that acts like it directly predicts the class from K^0.
Is there some kind of loss/accuracy we could report for the first-stage training? I would imagine that the model can achieve 100% accuracy easily.
In other words, I do not see the benefit of using a GCN and L_align in the second stage (compared to the language head baseline), as I suspect that the GCN may reduce to a very simple node-feature-readout-network since its first-stage objective is easy.
---
Reply to Comment 1.1.1:
Title: Experiments that prove our GCN is more than a readout network and is necessary
Comment: Thanks for your insightful feedback. First of all, we would like to demonstrate from three aspects that our GCN classifies nodes **according to both the initial node features and the graph topology**.
- We conduct a new ablation study as follows.
- The GCN is first trained using the standard procedure in our first stage. Then we change the adjacency matrix $A$ of the original graph to an identity matrix $I$ (in other words, we erase all the topology information and leave the node features unchanged). We use the trained GCN to predict the nodes on the new graph and denote this variant as *GCN w/ $I$*. **We observe significant accuracy drops compared to the prediction accuracies on the original graph** (*GCN w/ $A$*). This observation shows that the trained GCN actually depends on the adjacency matrix $A$, rather than relying solely on the initial node features.
- For a better comparison, we train a linear classifier on the node features, which can be viewed as the node-feature-readout network you mentioned. The results in the table below show that the readout network performs better than our GCN when only node features are provided, yet underperforms the GCN when the topology information is included. Therefore, we conclude that the GCN trained by $\mathcal L_{node}$ is more than a readout network.
| Variant | *Aircraft* | *StanfordCars* | *CUB200* |
| -- | :--: | :--: | :--: |
| GCN w/ $A$ | 100 | 99.7 | 100 |
| GCN w/ $I$ | 89.7 | 90.9 | 93.5 |
| Linear probe | 96.4 | 98.8 | 100 |
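For concreteness, the *GCN w/ $I$* ablation can be sketched as follows (an illustrative toy example with made-up features, not our actual model): replacing the adjacency matrix with an edgeless graph reduces a mean-aggregation GCN layer to a per-node feature readout.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One mean-aggregation GCN step: ReLU(D^-1 (A + I) X W).
    # With A set to all zeros, (A + I) = I, i.e. the "GCN w/ I" variant,
    # which reduces the layer to a per-node readout ReLU(X W).
    A_hat = A + np.eye(len(A))
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(D_inv * (A_hat @ X) @ W, 0.0)

# Toy graph: nodes 0-1 and 2-3 form two classes, connected within class.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])
W = np.eye(2)

out_with_A = gcn_layer(A, X, W)                 # uses graph topology
out_with_I = gcn_layer(np.zeros_like(A), X, W)  # topology erased: pure readout
```

A real GCN that ignores $A$ would produce identical outputs in both settings; the accuracy gap in the table above indicates ours does not.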
- We refer back to the results in Fig. 2(d), reproduced below (the GCN accuracy is tested on a validation graph built from another node feature set). Note that our designed graph includes a few edges connecting nodes from different classes, whose amount is controlled by the cross-label edge ratio. Different ratios therefore correspond to different graph topologies on top of the same node feature set. We observe that the accuracy of the GCN is influenced by the edge ratio, i.e., by the topology. Moreover, GCNs trained on different topologies have different impacts on the primary model accuracy in the 2nd stage, all showing that the GCN learns structural information.
| Cross-label Edge Ratio | 0.1% | 0.2% | 0.3% | 2.0% |
| -- | :--: | :--: | :--: | :--: |
| GCN acc. (validation graph) | 99.8 | 99.6 | 99.1 | 75.5 |
| Primary model acc. | 70.7 | 71.2 | 71.3 | 68.7 |
- The t-SNE visualization also supports our claim. In the original node feature space, there exist a few "prompt clusters" that are hard to discriminate by class. Specifically, each sentence we feed into the PLM consists of a prompt template and a label. In some cases, the prompt template dominates the text embedding and overshadows the class-discriminative information. However, the GCN-refined embedding space no longer has such prompt clusters, **demonstrating the effect of the graph edges that connect these nodes to others sharing the same label**. The table below compares the Calinski-Harabasz index between the two embedding spaces (higher scores mean better clustering) and shows that the GCN learns discriminative information.
| Embedding | *OfficeHome* | *Aircraft* | *StanfordCars* | *CUB200* |
| -- | :--: | :--: | :--: | :--: |
| Initial Label Embedding | 85.5 | 86.5 | 137.2 | 157.8 |
| GCN-refined Embedding | **1395.2** | **1364.1** | **1481.7** | **916.3** |
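For reference, the Calinski-Harabasz index reported above is the ratio of between-cluster to within-cluster dispersion; the following NumPy sketch computes the same quantity as `sklearn.metrics.calinski_harabasz_score` (shown here only to make the metric concrete).

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH index: (between-cluster dispersion / (k-1)) /
    (within-cluster dispersion / (n-k)). Higher = better-separated clusters."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n, classes = len(X), np.unique(labels)
    k = len(classes)
    global_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        between += len(Xc) * np.sum((mc - global_mean) ** 2)
        within += np.sum((Xc - mc) ** 2)
    return (between / (k - 1)) / (within / (n - k))
```

Well-separated, label-consistent clusters yield a high score, while clusters that mix labels score near zero, which is why the GCN-refined embeddings score so much higher.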
Next, we show the loss and accuracy curves of the GCN on Aircraft below. The GCN eventually reaches 100% accuracy on the training graph, yet requires a period of training.
|Iter |1| 10| 50 |100 | 200 | 500| 1000 | 2000 | 5000 |
|--| --|--| --| --|--|--|--|--|--|
| loss |4.61| 4.55 | 4.08 | 3.95 | 3.86 | 3.78 | 3.72 | 3.66 | 3.63 |
| acc. |0.0| 2.0 | 60.2 | 85.7 | 94.1 | 97.0 | 99.0 | 100 | 100 |
Finally, we show results of prototype alignment on more datasets. Based on your description, we believe this prototype alignment is what you mean by the "language head baseline": the prototypes computed by averaging node features act as a readout network and **supervise the projected image feature** $h$, replacing our GCN. In contrast, by "language head" we refer to directly replacing the randomly initialized classifier with a language initialization, which is therefore not suitable for ablating the GCN. The results show that a **readout network (prototype alignment) is less beneficial than our GCN**. Moreover, **$\mathcal L_r$ cannot be applied without the GCN in the 2nd stage**. Therefore, we argue that the $\mathcal L_{node}$-trained GCN is necessary in the 2nd stage. (OH stands for the OfficeHome dataset; detailed results can be found in our response to reviewer 3r7Y.)
|Method| Car-15% | Car-30% | Car-50% | Car-100% | CUB-15% | CUB-30% | CUB-50% | CUB-100% | OH OOD Avg. |
|--|--|--|--|--|--|--|--|--|--|
|Prototype Alignment| 52.4 | 73.0 | 82.6 | 89.5 | 56.0 | 70.1 | 76.8 | 81.9 | 68.9 |
|LSG| **55.4** | **75.5** | **83.8** | **90.7** | **57.7** | **70.6** | **77.2** | **82.2** | **71.1** | | null | null | null | null | null | null |
ForecastPFN: Synthetically-Trained Zero-Shot Forecasting | Accept (poster) | Summary: The work presents ForecastPFN, a zero-shot forecasting method trained using only synthetic data and evaluated on several real-world datasets.
Strengths: The paper is well written and the steps well described.
Moreover, I think that it could be an interesting approach when you have very few data.
Weaknesses: Weaknesses are listed in the Questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Rows 71-74: This is a repetition of what the authors wrote in rows 16-20. This part should be improved with some details. The same holds for rows 83-85 and rows 23-24.
The relation between zero-shot forecasting, transfer learning in the forecasting context, and the concept of global models is not discussed properly. I suggest the authors discuss this relation in more depth.
Regarding global models, the authors can refer to recent works such as: Januschowski et al., "Criteria for classifying forecasting methods", 2020; Buonanno et al., "Global vs. Local Models for Short-Term Electricity Demand Prediction in a Residential/Lodging Scenario", 2022; Montero-Manso, P.; Hyndman, R.J., "Principles and algorithms for forecasting groups of time series: Locality and globality", 2021; etc.
Rows 102-104
The authors use a multiplicative decomposition ($y_t = \gamma(t) \cdot z_t$). Why this choice and not an additive decomposition ($y_t = \gamma(t) + z_t$)? How does this choice impact the results of the work?
Row 109-110
Could you clarify this sentence? The relation $y_t = \gamma(t) \cdot z_t$ is always valid, not only when $\gamma(t)$ is deterministic. Moreover, if $z_t$ is sampled from a Gaussian, there are different possible output values, not only $y_T$. Am I missing something?
Rows 113-114
The authors wrote:
"We optimize the PFN’s parameters $\theta$ by making predictions on a held-out test set from D" --> Usually "test set" names the set used exclusively for evaluation, which is not involved in learning the model. In this case the authors define a test set used to evaluate the loss for parameter tuning. I think the name is misleading. The authors could use different names (e.g., training input and training output) to identify the two parts of the time series used in the training phase.
Row 129.
Is it not crucial to know the prior distribution of the time series in order to obtain good results on real datasets? In this work the authors use a large set of time series that tries to cover different situations, but how does the choice of the prior distribution impact the results? Maybe this could be an idea for future work.
Row 144
Is there a motivation for the use of the Weibull distribution for the noise?
How does a different choice of noise distribution impact the results?
Rows 146-147
Now $\gamma(t)$ contains two contributions: trend and seasonality. In row 102, instead, only trend is mentioned. I understand that in row 102 the authors want to introduce the general frame of the problem, but I would not use the name "trend" in row 102.
Rows 169
"Predict the MSE of this query" --> Does not the architecture predict the output? The next rows in fact the authors wrote "the output is a single prediction at a future time".
Also at row 178 it is written "the loss" and not the output.
Rows 180-183
It is not clear to me why classic scaling techniques are not applicable in this case. As discussed later, the problem is outliers, which have to be excluded since they can differ greatly across time series. Please explain this issue more clearly.
Rows 188-191
Is MSPE the mean squared prediction error?
Could you describe the issue raised in this section mathematically? I did not understand it.
Row 196
How do the authors evaluate convergence?
Row 235-236
Forecasting models are not always evaluated in retraining mode. For a fairer comparison, the authors could force the forecasting models not to retrain during the evaluation test. This condition is, in fact, closer to what happens in real contexts, where a model is not continuously retrained once deployed. Moreover, this modality (no retraining) is followed in several forecasting works, and using it would yield a fairer comparison between ForecastPFN and the other models.
Row 255
Cross product --> do you mean Cartesian product?
Figure 4
Is the time budget the same as the wall-clock budget defined in row 243?
Row 301
Is it not more important to compare classic standardization (without outlier removal) against robust scaling? The authors instead compared robust scaling with min-max scaling.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: A limitation I see is the maximum amount of training data used for the other models, limited to 500 points. It seems clear that with more data the other models outperform ForecastPFN (Fig. 5), while with few data ForecastPFN looks like a good solution.
Moreover, training ForecastPFN requires 30 hours, a cost that should be taken into account for a fair comparison with other approaches.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent and detailed feedback. We are glad to see your positive view of our work. Your comments and suggestions will greatly improve the final version of our paper. We reply to each question below.
**Q1 Repetition of text.**
Thank you for catching this. We have now added more details to lines 71-74 and 83-85.
**Q2: Relation to transfer learning and global models.**
Thank you for pointing this out. We will add a section on transfer learning, and another section on global models to our related work section.
**Q3: Multiplicative decomposition.**
Yes, additive vs. multiplicative noise is an interesting question. We chose multiplicative noise to better balance the signal-to-noise ratio across all series. With additive noise, the impact of the noise depends on the ratio between the scale of the base series and the scale of the noise. Since our base series have linear and exponential terms, additive noise would cause the signal-to-noise ratio to vary substantially over series with a trend component. Multiplicative noise is therefore a simpler parametrization for achieving a consistent signal-to-noise ratio for a series. We have updated our paper to include this discussion, and we agree that an ablation study is a good idea for future work.
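A small numerical illustration of this point (a toy example with made-up parameters, not our actual synthetic-data generator): on a base series with an exponential trend, multiplicative noise keeps the relative perturbation roughly constant over time, while additive noise of fixed scale becomes negligible as the trend grows.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
base = np.exp(0.02 * t)  # trending base series, grows ~50x over the window

mult = base * (1 + 0.1 * rng.standard_normal(200))  # multiplicative noise
add = base + 0.1 * rng.standard_normal(200)         # additive noise, fixed scale

def rel(y):
    # Relative perturbation of the noisy series w.r.t. the noise-free base
    return np.abs(y - base) / base

# Under multiplicative noise, rel() is roughly constant in both halves of the
# series; under additive noise, it shrinks as the trend grows.
```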
**Q4 Clarify equations.**
Thank you for pointing this out. This is indeed a typo, because $p(y_T|T,\varphi)=1$ is only true if there is a deterministic function $y_t=\varphi(t)$ with no noise, and this is not the case we consider in Section 3.2.
**Q5 Clarification on test set.**
Thank you for catching this! As you say, this is simply a naming mistake. We will now refer to the two parts of each training series as “training input” and “training output.”
**Q6 Ablation on prior distribution.**
We agree that the choice of prior distribution is important and that it would be good to have an ablation study. It is challenging to do a thorough ablation study, since it takes 30 GPU hours to train a model. As mentioned in the paper, we did light tuning in the early stages of the project, using the training loss. We are currently working on a simple ablation study on the noise of the synthetic data, which we will finish during the NeurIPS discussion period.
**Q7 Weibull distribution.**
This is a great question. Real world datasets seem to frequently have two types of noise: Gaussian and exponential; the latter is in the form of non-negative time series with high value outliers. The Weibull distribution is a simple and natural way to parametrize between these two distributions. While we do not claim the Weibull distribution is representative of the noise of all real world series, getting convergence on Weibull is already particularly non-trivial. Further study of the noise distribution is an interesting idea for future work.
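To make the interpolation concrete (a toy sketch, not our exact noise parametrization): the Weibull shape parameter moves between an exponential distribution (shape $k=1$, strongly right-skewed) and a nearly symmetric, Gaussian-like bell (around $k \approx 3.6$).

```python
import numpy as np

def skewness(x):
    # Standardized third moment: 0 for symmetric, 2 for exponential
    x = np.asarray(x, dtype=float)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

rng = np.random.default_rng(0)
exp_like = rng.weibull(1.0, 100_000)   # k = 1: exactly exponential
bell_like = rng.weibull(3.6, 100_000)  # k ~ 3.6: nearly symmetric
```

Sweeping the shape parameter thus spans the two noise regimes (Gaussian-like and exponential-like) that we observe in real-world series.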
**Q8 Naming of phi.**
We agree that it is clearer to replace “trend” in line 102 with a different phrase, such as “base series.”
**Q9 Prediction.**
Thanks for pointing out this typo in rows 169 and 178. Yes, the architecture predicts the output, and then we compute the MSE of the output against the ground truth.
**Q10 Classic scaling techniques.**
Yes, we will make this part clearer in the paper. Since ForecastPFN is designed to be a universal forecasting model that can handle a huge variety of time series, appropriately scaling the data becomes tricky. Our robust scaling procedure as described on line 184 gives the best level of standardization and outlier removal, which simpler methods such as min-max scaling or z-score normalization do not achieve.
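As a rough illustration of the idea (a generic sketch, not our exact procedure from line 184): robust scaling centers by the median and scales by the interquartile range, then clips extremes, so a single large outlier barely affects the scale of the remaining points, unlike min-max scaling or z-score normalization.

```python
import numpy as np

def robust_scale(y, clip=3.0):
    """Generic robust-scaling sketch: center by the median,
    scale by the interquartile range, clip extreme values."""
    y = np.asarray(y, dtype=float)
    med = np.median(y)
    q1, q3 = np.percentile(y, [25, 75])
    iqr = max(q3 - q1, 1e-8)  # guard against constant series
    return np.clip((y - med) / iqr, -clip, clip)

y = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])  # one extreme outlier
z = robust_scale(y)
```

With min-max scaling, the outlier would squash the other four points into a tiny interval; here they remain well spread while the outlier is clipped.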
**Q11 MSPE.**
By MSPE, we are referring to the [mean squared percentage error](https://www.sktime.net/en/stable/api_reference/auto_generated/sktime.performance_metrics.forecasting.mean_squared_percentage_error.html).
It is a great idea to give the equations of MSE, MSPE, and our scaled MSE. We will make this change to the final version of our paper.
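For concreteness, the two standard metrics could be written as follows (a sketch with toy numbers; the scaled-MSE definition will be stated separately in the paper):

```python
import numpy as np

def mse(y, yhat):
    # Mean squared error
    return np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2)

def mspe(y, yhat):
    # Mean squared *percentage* error: mean of squared relative errors
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean(((y - yhat) / y) ** 2)

y = np.array([100.0, 200.0])
yhat = np.array([110.0, 180.0])
# mse  -> (100 + 400) / 2 = 250.0
# mspe -> (0.01 + 0.01) / 2 = 0.01
```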
**Q12 Convergence.**
We used the word convergence informally to mean that the training loss is much higher than the final trained ForecastPFN. So, if we set the noise of the synthetic data too high, the training loss is very high. We will fix this in the text.
**Q13 Retraining.**
Thank you for pointing out this confusion. By “not a fair comparison”, we meant that the baseline methods are allowed to train on earlier time series from the same dataset. At test time, we do not permit the baseline methods to train on the test input. Indeed, we do the common technique of training the algorithms once per dataset and prediction length, and then, at test time, providing input to the model for $n$ timestamps and then asking the model to predict $l$ points in the future. We have updated the manuscript to clear up this confusion.
**Q14 Cross product.**
Yes, in row 255 we meant to say the cartesian product. Thank you for spotting this.
**Q15 Figure 4.**
Yes, “training time” in row 243 is the same as “time budget” in Fig. 4. We will make this clear in the paper.
**Q16 Standardization.**
That is a good point. In future work, we can do a much larger ablation study for scaling methods and the data distribution. It is challenging to do a large ablation study because each ForecastPFN model takes 30 GPU hours of pretraining.
Thank you very much, once again, for your positive view of our work and your excellent suggestions. If you find our responses satisfying, we respectfully ask that you consider increasing your score. On the other hand, we would be very happy to answer any follow-up or additional questions you have.
---
Rebuttal Comment 1.1:
Comment: Thanks for the answers and clarifications. I decided to increase my evaluation.
Strengths: The paper proposes an exciting direction of pre-training deep forecasting models with synthetic data, which can be quickly adapted in a Bayesian manner on a new unseen real time series, and be able to perform accurate forecasts. The synthetic data is proposed using a prior distribution which has properties similar to real world data. Such a direction is exciting and of huge significance, unlocking the power of deep learning models to be pre-trained on large datasets. The paper is well written, neatly organised in a logical flow, and contains comprehensive experiments.
Weaknesses: * While the premise of the paper is indeed exciting, I believe the claims made in the abstract and introduction are greatly exaggerated. Claims are made that the proposed method, ForecastPFN is able to beat SOTA forecasting methods. However, the experiments are only performed on severely handicapped scenarios (SOTA models are only given access to 100s of data points or maximum of 2 minutes of training). More effort should be made to highlight this handicapped scenario.
* Claims that "the zero shot methods are at a disadvantage because the other six methods were allowed to train on data from the evaluation time series" (lines 280 - 282) seem a little disingenuous since the zero shot methods were pre-trained extensively (30 hours of pre-training for ForecastPFN), whereas in certain cases the six methods were only allowed 1 second of training?
* The claim in line 173 - 175, that existing transformer models have prediction length to be fixed is inaccurate. This is a feature of Transformers in the long sequence time series forecasting setting, but not true of time series Transformers in general. See [1] for one such example.
* Only number of wins/ranking is given for the results, but not the actual MSE scores. The actual MSE values should be given for readers to understand whether the comparisons are meaningful at all.
* Following up on the previous point, 2 more very simple baselines should be added - the naive forecast (current value is taken as the forecast) and the historical mean. The number of wins/rankings doesn't really matter if any of these models can't beat such naive baselines (which is a fair comparison regardless of the train/time budget).
[1] Rasul, Kashif, et al. "VQ-TR: Vector Quantized Attention for Time Series Forecasting." (2022).
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: 1. Are the existing methods (FEDformer, etc.) also pre-trained on the synthetic data? If not, it seems quite artificial to claim wins when these methods are only given 50 datapoints (1 batch?) for training, or 1 second to train. In fact, it seems unfair to compare to ForecastPFN which has 30 hours of pre-training on a huge synthetic dataset.
2. How was Meta-N-BEATS trained on the M4 dataset? In the paper, 1 model was trained for each frequency. Which subset of M4 was the baseline trained on?
3. What does number of MSE wins mean?
4. What does training on 1 second mean? How is this implemented? Was a single gradient step even performed in this setting?
5. What are the experiment details of the baseline models? What machine was used to run these baseline experiments? It seems a little iffy to compare the time budget in terms of seconds. How to compare 120 seconds on a H100 GPU vs a K80 GPU or even a CPU?
6. How was the time restricted training for ARIMA/Prophet implemented? I understand for deep learning models you can easily stop the training, but what about ARIMA/Prophet if the model has not been fitted?
Please consider using number of batches/epochs trained instead of wall clock seconds.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 3 good
Contribution: 3 good
Limitations: The paper doesn't adequately address limitations. The limitations given in the limitations section aren't really true limitations, but rather things that are out of scope of this paper. Some suggestions of limitations to consider:
1. What if the proposed prior for synthetic data is completely different from the downstream real world data? Perhaps some sort of adversarial forecasting task?
2. ForecastPFN doesn't compute the posterior predictive distribution, it is just an approximation.
3. Does not consider probabilistic forecasting, which has more implications on the synthetic prior (not just the shape of time series, but also the distribution of data, need to consider some autocorrelation in the errors)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent feedback. We are very glad to see that you view this direction as exciting and of huge significance, and that you found the paper well-written, neatly organized, and with comprehensive experiments. We very much appreciate your comments, which we believe will substantially improve the final version of our paper. We reply to each question below.
**W1-2: “handicapped scenario” / “disingenuous claims.”**
Thank you for bringing up these points. We understand your points and have now made the experimental setup and settings very clear in our paper. We would like to bring up a few points for consideration. **The crux of this issue is that we focused on a novel paradigm that diverges significantly from prior work:** zero-shot forecasting; pretraining; and a setting that is important in practice: very limited in-distribution data and/or inference time. This makes it challenging to compare to prior work in general. To be clear, ForecastPFN is trained only once ever, and the weights are never changed when evaluating on new, unseen tasks. On the other hand, FEDFormer and the other transformers were not designed to be pretrained or to run zero-shot predictions. However, based on your comments, we *do* try out pretraining for FEDFormer. We find that after a week of attempting to train FEDFormer on the synthetic data, we were unable to achieve any non-degenerate performance on the 7 real-world datasets. Note that even training the ForecastPFN architecture (designed to be flexible and universal) was non-trivial: as described in the paper, we faced technical challenges such as handling scaling robustly, and a delicate parameterization of the synthetic data parameters.
Overall, our work introduces a new paradigm in which prior works cannot easily fit, and our model also performs particularly well in an important setting -- the low resource setting, for which most prior work was not designed. We are now much more explicit with our claims by mentioning the exact training settings, and noting how these differ from the settings for which prior work was developed. Thank you very much for giving us the opportunity to further clarify these points in our paper.
**W3. Transformer prediction length.**
Thank you for catching this! We have now corrected it.
**W4. Raw MSE values.**
Thanks for this suggestion; we can easily add the raw MSE values. See [our reply to Reviewer MwwS](https://openreview.net/forum?id=tScBQRNgjk¬eId=Z6mb79DbuS) and also the one-page pdf.
**W5. Add two additional baselines.**
We really like this suggestion. We added these two baselines; see the one-page pdf. We find that the historical mean achieves second place behind ForecastPFN in the lowest training and runtime settings. This is not unexpected and is a testament to the challenge of predicting a series when given very little training data or training time.
**Q1. FEDFormer and other models were not pretrained.**
Thank you for this suggestion. We give our full answer in the first bullet point above: in general it is non-trivial to achieve non-degenerate zero-shot performance (even for specially designed architectures such as ForecastPFN), and we have not been able to do so for FEDFormer after a week of effort.
**Q2. Meta-N-Beats setup**
Meta-N-Beats was trained separately for each configuration of prediction length and dataset frequency, just like the other Transformer baselines. We use the subset of M4 according to the corresponding frequency. In contrast, the more flexible ForecastPFN can accommodate different frequencies and prediction lengths as a single model.
**Q3: What does “number of MSE wins” mean?**
This means that we compute the MSE for a given configuration and count the number of times each model has the best MSE. For example, in Figure 3, we count the number of wins for each model across 7 datasets and 7 prediction lengths, for each train budget.
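As an illustrative sketch of the counting (with hypothetical model names, datasets, and MSE values, not our actual results):

```python
from collections import Counter

# Hypothetical MSE results: {(dataset, prediction_length): {model: mse}}
results = {
    ("etth1", 24): {"ForecastPFN": 0.31, "FEDformer": 0.35, "ARIMA": 0.40},
    ("etth1", 48): {"ForecastPFN": 0.45, "FEDformer": 0.41, "ARIMA": 0.52},
    ("weather", 24): {"ForecastPFN": 0.22, "FEDformer": 0.27, "ARIMA": 0.25},
}

# A "win" goes to the model with the lowest MSE in each configuration
wins = Counter(min(scores, key=scores.get) for scores in results.values())
# Here: ForecastPFN wins 2 configurations, FEDformer wins 1
```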
**Q4&6 Training on 1 second / Constrained budget for ARIMA/Prophet.**
Thank you, we will explain this better in the paper. Following the experimental setup of [TabPFN](https://arxiv.org/abs/2207.01848), we give each training run a time budget. After each training step, we stop the training if the total budget was exceeded. In case an algorithm is unable to output any predictions within the train budget (such as ARIMA with a 1 second budget), we set the output to 0’s.
**Q5 Hardware used.**
We ran all the experiments ourselves on a V100; therefore, all of our results are fair to compare. We respectfully point out that since we include a diverse set of methods, the epoch/step times are not comparable, and so the best practice is to report the wallclock time using the exact same hardware, as we did. For example, [footnote 8 here](https://arxiv.org/abs/1909.02453) has a good discussion. We would be happy to include other compute resource metrics in the appendix while being explicit about the aforementioned caveats.
**Limitations.**
Thank you for these great suggestions. We agree; our synthetic prior was created to focus on “human-like” or “earth-like” time series, but it would not do well on a purely “mathematical” time series with uncommon periods such as 41, 89, or 97. We also will clarify that our method only approximates the posterior predictive distribution and makes pointwise predictions. Since this is a new type of work, we focused on the pointwise predictions, but probabilistic predictions are an exciting area for future work.
Thank you very much, once again, for your excellent feedback. Your perspective and comments are very important for us to properly convey our work. We hope that you start to see our perspective on the main issues. Please let us know if you have any follow-up comments. If this starts to address your questions, especially since you agree that this direction is exciting and significant, we hope that you might reconsider giving our work a “strong reject” rating; we would very much appreciate it if you consider increasing your score.
---
Rebuttal Comment 1.1:
Comment: Thank you authors for the time and effort in crafting an extensive rebuttal. Overall I still have major concerns regarding the fairness of the comparisons.
1. I am still confused what is the point of comparing the settings for 1 - 10 seconds of training. Loading the data probably takes more than 1 second?
2. Why not let the fallback forecast when the model fails to output any predictions be the naive forecast?
3. There is also another simple baseline called the seasonal forecast. https://otexts.com/fpp2/simple-methods.html
4. Can we get some indication on how often in your plots, does ARIMA and Prophet fail to output a prediction?
5. I am wondering how realistic the case of a limited time budget is, i.e., in what real-world settings it occurs. It may be the case for ARIMA, where you usually want to retrain the model each time new observations come in. But for deep learning models, we usually train the model once and use it for a long time, i.e., there is no need to retrain when new observations come in.
6. I agree that the number of steps/epochs is not the right measure of compute resources in this case, since the setting being explored is one where time is a constraint. However, wall-clock time is still not a reliable measure. There are many unknowns for a reader: what batch size was picked? Was the most efficient batch size picked for all methods? Was the most efficient implementation of ARIMA used? Does Prophet have a GPU implementation? I find it hard to draw conclusions from the results without knowing these details. FLOPs seem to be a much more suitable measure of compute here, as they allow us to ignore these questions about implementation efficiency.
7. You mentioned that the ForecastPFN architecture is non-trivial. I am a little confused regarding this - based on my understanding, the architecture is a standard Transformer. Apart from the synthetic data generation and some scaling pre-processing, what is new in the architecture?
---
Reply to Comment 1.1.1:
Title: Second reply to 3SYZ, part 1
Comment: Thank you very much for replying and giving us a chance to further clarify and improve our submission. Overall, we are happy to see that your new questions are relatively minor, such as correctly handling low-runtime baseline predictions, clarifying the metrics, and discussing real-world applications for the low-resource setting, and we especially appreciate your pointer to the new baseline, which we have now added.
**1. Data loading time.**
Loading training data can take longer than 1 second. However, our training time, as in the TabPFN paper, only starts the runtime after the data and model are loaded and gradient computations start. See #5 for motivation for why it is important to evaluate 1-10 second of training.
**2. Fallback forecasts when the model fails.**
After further thought, we believe that the best way to handle this issue is to do the same thing as the original [TabPFN paper](https://arxiv.org/abs/2207.01848) (Fig. 5): for each baseline, start showing results when the baseline is able to make non-trivial predictions on **all** datasets/settings. We will also include raw results for each dataset for completeness. This is the best way to update our paper so as not to cause confusion about the low-runtime setting, and it is backed up by existing work. (For the specific runtimes of baselines, see answer #4.)
You bring up a great point that in real life, if using a model such as FEDFormer, then the practitioner would use FEDFormer for high budgets and the seasonal baseline for low budgets. The performance of this combination can easily be inferred from our plots by looking at the two corresponding methods, and we will make a point of this as a comparison to ForecastPFN in our updated manuscript.
**3. Seasonal baseline.**
Thank you, we are happy to add more baselines. Many of our low-resource experiments involve series that are fewer than 365 days long, so instead of monthly we use weekly, which uses the historical mean from the same day of the week as the current prediction. This baseline now becomes the 2nd best method in low train size and low runtime settings, 2nd to ForecastPFN, across all axes (MSE wins across train and time budgets, mean MSE rank across train and time budgets). See part 3 of our reply for these results. We have updated all plots in our paper to include this baseline.
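A minimal sketch of this weekly baseline (our assumed formulation for illustration: the forecast for each future step is the mean of all past values at the same position within the weekly cycle):

```python
import numpy as np

def weekly_baseline(history, horizon, period=7):
    """Forecast `horizon` steps ahead using the historical mean of all
    past values at the same phase of the weekly cycle."""
    history = np.asarray(history, dtype=float)
    n = len(history)
    # Mean of observations at each phase 0..period-1
    phase_means = [history[p::period].mean() for p in range(period)]
    # Future step n+h falls on phase (n+h) % period
    return np.array([phase_means[(n + h) % period] for h in range(horizon)])
```

On a perfectly weekly-periodic series this baseline is exact, which explains why it is so competitive when baselines have little data or time to train.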
**4. ARIMA and Prophet minimum times.**
We have calculated, for each time budget and each model that requires training or fitting, the percentage of configurations (datasets, prediction lengths, seeds) in which the model produced predictions within the budget. We see that ARIMA is the model that struggles the most at low time budgets.
| Model/Time Budget | 1.0 | 5.0 | 10.0 | 15.0 | 30.0 | 45.0 | 60.0 | 120.0 |
|:------------|------:|-------:|-------:|-------:|-------:|-------:|-------:|--------:|
| Arima | 9.68 | 77.42 | 88.71 | 92.9 | 100 | 100 | 100 | 100 |
| Autoformer | 70.97 | 97.74 | 100 | 100 | 100 | 100 | 100 | 100 |
| FEDformer | 71.94 | 99.68 | 100 | 100 | 100 | 99.35 | 100 | 100 |
| Informer | 72.58 | 100 | 100 | 100 | 100 | 100 | 97.42 | 100 |
| Prophet | 88.71 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Transformer | 66.77 | 100 | 100 | 100 | 98.39 | 100 | 100 | 100 |
---
Reply to Comment 1.1.2:
Title: Second reply to 3SYZ, part 2
Comment: **5. Motivation for the low runtime setting.**
First, we emphasize that there is a wide range of applications for a limited training set size, as cited in our paper (lines 23-24, 83-88).
Next, there is also a wide range of applications which require a low time (or computational) budget. As you say, deep learning models can train once and be used for a long time for in-distribution data. But, they will need to be retrained for new applications (out-of-distribution), unlike ForecastPFN. As an example of a nice property, ForecastPFN can be loaded onto a CPU (a standard function of tensorflow) and used on edge devices to make predictions. As an example of a specific application, ForecastPFN can then be used to forecast personalized utilities usage across households and across towns in developing countries. Other deep learning models would need to retrain when the data becomes out of distribution. Another application is as a forecasting data-exploration tool, allowing users to see *instant* forecasting visualizations as they navigate their dataset.
**6. Runtime.**
We respectfully disagree with your statement that “wall clock is still not a reliable measure”.
- First, recall that on lines 218-220, we mentioned that all of the baselines use the hyperparameters from their official release, and we specified the ARIMA implementation. We also open-sourced our entire codebase (line 61), so that readers can see the exact code and reproduce experiments if they want.
- Second, there is an enormous precedent throughout the ML literature to use wallclock time as a metric. Wallclock time is used in many prior works and is mentioned in best practices guides.
We agree, and we will specify that ARIMA and Prophet do not use GPUs, as well as the CPU type (an N1 series from GCP). We are happy to include additional plots that use FLOPs instead of wall-clock time, but we will need to add caveats that FLOPs can sometimes be misleading. As a quick example, a vanilla transformer is highly GPU-memory bottlenecked and would appear slower in FLOPs than an LSTM, even though LSTMs are substantially slower in wall-clock time. (This issue is solved by using FlashAttention for transformers.)
**7. ForecastPFN architecture.**
The transformer is more flexible than existing transformers for forecasting, because its input can consist of any (timestep, value) pair (not just a sequential input), and because of its robust scaling. Our main point is that *training* a universal forecasting model is highly nontrivial. Notably, the form and the hyperparameters of the synthetic data distribution, especially the noise parameters, must be set precisely in order for the model to achieve non-degenerate performance.
Thank you very much once again, for your time in reviewing our paper.
---
Reply to Comment 1.1.3:
Title: Second reply to 3SYZ, part 3
Comment: Here, we include results with the new "Weekly" baseline (from our answer to question 3). It often achieves 2nd place behind ForecastPFN in the low train budget and low runtime settings.
Train Budget vs. MSE Wins:
| Model | 50.0 | 100.0 | 150.0 | 200.0 | 250.0 | 300.0 | 500.0 |
|:--------------|-------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Arima | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Autoformer | 1 | 3 | 8 | 9 | 8 | 10 | 6 |
| FEDformer | 2 | 4 | 5 | 7 | 13 | **16** | **16** |
| ForecastPFN | **32** | **30** | **26** | **23** | **20** | 11 | 11 |
| Informer | 0 | 0 | 1 | 0 | 1 | 8 | 12 |
| Last | 9 | 9 | 9 | 9 | 9 | 9 | 9 |
| Mean | 5 | 4 | 3 | 1 | 2 | 0 | 0 |
| Meta-N-BEATS | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Prophet | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Weekly | 13 | 12 | 8 | 8 | 6 | 3 | 2 |
| Transformer | 0 | 0 | 2 | 5 | 5 | 4 | 5 |
Time Budget vs. MSE Wins:
| Model | 1.0 | 5.0 | 10.0 | 15.0 | 30.0 | 45.0 | 60.0 | 120.0 |
|:--------------|------:|------:|-------:|-------:|-------:|-------:|-------:|--------:|
| Arima | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Autoformer | 0 | 3 | 4 | 5 | 9 | 6 | 6 | 6 |
| FEDformer | 2 | 5 | 10 | 11 | 10 | **18** | **18** | **18** |
| ForecastPFN | **30** | **23** | **21** | **19** | **14** | 12 | 11 | 11 |
| Informer | 1 | 1 | 3 | 4 | 9 | 10 | 10 | 10 |
| Last | 10 | 9 | 9 | 9 | 9 | 9 | 9 | 9 |
| Mean | 5 | 4 | 3 | 2 | 1 | 0 | 0 | 0 |
| Meta-N-BEATS | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Prophet | 5 | 5 | 5 | 5 | 5 | 4 | 4 | 4 |
| Weekly | 9 | 7 | 3 | 2 | 1 | 0 | 0 | 0 |
| Transformer | 1 | 6 | 5 | 6 | 5 | 4 | 4 | 4 |
Train Budget vs. MSE Rank:
| Model | 50.0 | 100.0 | 150.0 | 200.0 | 250.0 | 300.0 | 500.0 |
|:--------------|-------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Arima | 7.44 | 8.29 | 8.59 | 7.96 | 7.97 | 8.26 | 8.26 |
| Autoformer | 6.33 | 5.52 | 3.65 | 3.33 | 3.69 | 2.78 | 3.26 |
| FEDformer | 4.89 | 4.78 | 4.43 | 3.3 | **2.65** | **2.54** | **2.83** |
| ForecastPFN | **2.03** | **2.42** | **2.85** | **2.98** | 3.13 | 3.47 | 3.56 |
| Informer | 7.71 | 6.21 | 6.15 | 6.26 | 6.04 | 4.9 | 4.27 |
| Last | 2.78 | 3.04 | 3.71 | 4.3 | 4.35 | 4.94 | 5.01 |
| Mean | 2.36 | 2.68 | 3.15 | 3.72 | 3.98 | 4.56 | 4.64 |
| Meta-N-BEATS | 3.2 | 3.41 | 4.14 | 4.7 | 4.72 | 5.29 | 5.41 |
| Prophet | 9.11 | 9.13 | 8.9 | 8.88 | 8.97 | 8.83 | 8.96 |
| Weekly | 2.95 | 3.07 | 3.95 | 4.51 | 4.61 | 5.25 | 5.15 |
| Transformer | 6.12 | 6.33 | 5.47 | 5.05 | 4.9 | 4.19 | 3.64 |
Time Budget vs. MSE Rank:
| Model | 1.0 | 5.0 | 10.0 | 15.0 | 30.0 | 45.0 | 60.0 | 120.0 |
|:--------------|------:|------:|-------:|-------:|-------:|-------:|-------:|--------:|
| Arima | 5.03 | 6.78 | 7.23 | 7.53 | 8.19 | 8.34 | 8.28 | 8.38 |
| Autoformer | 6.83 | 5.72 | 5.31 | 5.22 | 3.83 | 3.29 | 3.26 | 3.32 |
| FEDformer | 4 | 4.28 | 3.6 | 3.34 | **3.45** | **2.85** | **2.72** | **2.77** |
| ForecastPFN | **2.03** | **2.75** | **2.9** | **3.04** | **3.45** | 3.6 | 3.59 | 3.66 |
| Informer | 5.86 | 5.99 | 5.63 | 5.72 | 4.74 | 4.48 | 4.38 | 4.33 |
| Last | 2.78 | 3.66 | 4.06 | 4.18 | 4.85 | 5.06 | 5.07 | 5.13 |
| Mean | 2.42 | 3.36 | 3.79 | 3.84 | 4.44 | 4.63 | 4.58 | 4.76 |
| Meta-N-BEATS | 3.21 | 4.13 | 4.52 | 4.6 | 5.24 | 5.46 | 5.47 | 5.53 |
| Prophet | 6.09 | 7.28 | 7.62 | 7.66 | 7.9 | 7.95 | 7.89 | 8.01 |
| Weekly | 3.05 | 3.87 | 4.27 | 4.36 | 5.04 | 5.31 | 5.3 | 5.4 |
| Transformer | 6.12 | 6.3 | 5.77 | 5.4 | 3.77 | 3.98 | 4.32 | 3.71 | | Summary: This paper proposes a zero-shot prior-data fitted network (PFN) for time-series forecasting. Existing works have challenges in designing a general and flexible PFN for general time-series distributions, and tuning an architecture and training scheme. This work overcomes these by designing a novel synthetic time-series distribution in training and using a transformer architecture for queries to arbitrary future timestep. Experiments have been presented to demonstrate the improved performance of the proposed model over comparison models.
Strengths: 1. The proposed work tackles an interesting setting where few initial observations are available in forecasting.
2. The transformer-based network aims to predict arbitrary future time steps and the robust scaling deals with the extreme scale of time series, which increase the novelty of the methodology.
3. The results seem compelling as well, with a clear outperformance of the proposed model over comparison baselines.
Weaknesses: 1. In Section 3.2, it is not clear what motivates the author to choose a multiplicative data generation model with seasonal and trend components as the synthetic prior. It would be appreciated if the author could elaborate more on that. Also, the ablation study could be more sufficient: the diversity of the synthetic data generated for training should be explored since the model relies on a general synthetic data distribution.
2. The author could provide more details about the architecture in Section 3.3: 1) How does the network learn diverse temporal features of the input data? 2) How does the network deal with different input sizes? 3) What is the benefit of using transformer-based layers vs residual blocks or recurrent neural networks?
3. In Section 4, the author should explain the definition of train budget and time budget. Are highlighted texts “training time” and “training data” related to them? It would be better to make them consistent.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses mentioned above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent feedback. We are glad to see that you are positive towards our work, including our novel methodology and our results. We also find that your comments will substantially improve the final version of our paper. We reply to each question below.
**W1.1. Multiplicative data generation model.**
Yes, additive vs. multiplicative noise is an interesting question. We chose multiplicative noise to better balance the amount of signal to noise across all series. If we used additive noise, then the impact of the noise would depend on the ratio between the scale of the base series and the scale of the noise. Since our base series have linear and exponential terms, additive noise would cause the signal-to-noise ratio to vary substantially over series with a trend component. Therefore, we used multiplicative noise, which provides a simpler parametrization for controlling the signal-to-noise ratio of a series. We have now updated our paper to include this discussion, and we agree that including an ablation study is a good idea for future work.
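As a toy illustration of this point (not the actual synthetic prior, whose form and hyperparameters are described in Section 3.2), a series with a trend keeps a roughly constant signal-to-noise ratio under multiplicative noise, while additive noise of fixed scale becomes relatively smaller as the trend grows:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(200, dtype=float)
# toy base series: linear trend times a weekly seasonal factor
base = (1.0 + 0.01 * t) * (1.0 + 0.3 * np.sin(2 * np.pi * t / 7))

# additive noise has a fixed scale, so its relative impact shrinks
# as the trend grows and the signal-to-noise ratio drifts over the series
additive = base + rng.normal(0.0, 0.2, size=t.shape)

# multiplicative noise scales with the series, keeping the relative
# noise level (and hence the signal-to-noise ratio) roughly constant
multiplicative = base * (1.0 + rng.normal(0.0, 0.2, size=t.shape))

# relative noise of the multiplicative version is constant by construction
rel = multiplicative / base - 1.0
print(rel.std())  # close to the chosen noise scale of 0.2
```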
**W1.2. Ablation on the diversity of synthetic data.**
This is a great idea. Since each training procedure takes 30 GPU hours, it is challenging to do a thorough ablation study. As mentioned in Section 3.3, we performed light HPO on the synthetic data parameterization in the early stages of the project, using the training loss as a signal. We are currently working on a simple ablation study on the noise of the synthetic data, which we will finish during the NeurIPS discussion period.
**W2.1. How does the network learn diverse temporal features.**
This is a good question. We chose to use a transformer architecture because there is a large body of empirical evidence across different areas of research that transformers learn spatio-temporal features (e.g., vision, speech and language). We used a fairly lightweight transformer (one multi-head attention layer and two feedforward layers; details in Section 3.3), and our results show that it learns diverse temporal trends. In fact, a concrete demonstration that we can do in the future would be to train our transformer to approximate a partial Fourier transform.
**W2.2 How does the network deal with different input sizes.**
Again, since we use a transformer, it can easily deal with variable input lengths. In our experiments, we set the maximum input length to 200 purely because of computational constraints that we imposed. A good experiment for future work would be to increase the maximum input length to, e.g., 5000 inputs, although this would be substantially slower.
**W2.3 What is the benefit of using transformer-based layers vs residual blocks or recurrent neural networks.**
Note that ForecastPFN is a fully pretrained model – similar to an LLM, but for forecasting. In other research areas that create “foundation models” for language modeling and even computer vision, transformers are by far the most common choice. However, as you say, we could also have tried out other types of architectures. One other reason is that transformers are currently very fast on hardware due to receiving a lot of attention from the research community. Furthermore, other models such as RNNs are inherently sequential and harder to saturate GPU compute; therefore, transformers are the most efficient and best-performing choice for ForecastPFN.
**W3. Make definitions consistent.**
Thank you. We will clarify this point in Section 4. We define training time as the amount of wall-clock time that each model is permitted to train (1 to 120 seconds) and training budget as the number of data points given to each model for training (50 to 500). This is given on lines 242-244 in the present manuscript, but we have rewritten the description to be clearer.
Thank you very much, once again, for your positive view of our work and your excellent suggestions. If you find our responses satisfying, since we addressed your weaknesses, we respectfully ask that you consider increasing your score. On the other hand, we would be very happy to answer any follow-up or additional questions you have.
---
Rebuttal 2:
Title: Quick reminder and new ablation
Comment: Dear reviewer 5AR9, thank you once again for your insightful review. We appreciate that you find our work is novel, that it tackles an interesting setting, and that we achieve compelling results.
We would like to check in to see if you have any follow-up comments on our rebuttal. We replied to your three weaknesses on the motivation for a multiplicative data model, on the details of the architecture, and on the definitions of train budget and time budget.
We would also like to mention that we now have preliminary results for a synthetic data ablation study (which you mentioned in part 1 of your weaknesses). We trained ForecastPFN with three scales of our noise parameter, $m_{\text{noise}}$: 1 (original ForecastPFN), $2/3$ (low noise), and $1/6$ (lowest noise). We do not change the scale of the noise in the validation data. Below, we report the train and validation MSE on synthetic data for each model.
Training MSE Loss:
| Epoch | 0 | 1 | 25 | 50 | 100 | 200 | 300 |
|---| ---| ---| ---| ---| ---| ---| ---|
| ForecastPFN with lowest noise | 23.71 | 5.009 | 2.326 |1.991 | 1.484 | 0.8781 | 0.5912 |
| ForecastPFN with low noise| 0.3593 | 0.294 | 0.08648 | 0.08536 | 0.04981 | 0.03857 | 0.3835 |
| ForecastPFN| **0.2074** | **0.1061**| **0.06425** | **0.06642** | **0.04882** | **0.03696** | **0.03196** |
Validation MSE Loss:
| Epoch | 0 | 1 | 25 | 50 | 100 | 200 | 300 |
|---| ---| ---| ---| ---| ---| ---| ---|
| ForecastPFN with lowest noise | 1.13E6 | 1.58E5 | 2.78E4 | 1.31E5 | 7.12E4 | 4760 | 1.35E4 |
| ForecastPFN with low noise | 5.06E4 | 2.65E4 | **3.09E4** | **1.09E6** | 3.52E4 | 4487 | 1.57E4 |
| ForecastPFN | **4.94E4** | **2.28E4** | 4.2E4 | 1.88E5 | **1.66E4** | **4267** | **1.22E4** |
Recall that we remove noise in the ground-truth *predictions* of the training data as a design decision that improves performance (lines 200-203 and Appendix E.6), which is why the scale looks different for the train and validation losses. This new ablation study complements our ablation study in Appendix E.6 which considers noise in the predictions of the training data. We will continue updating this ablation study by adding more values of noise and more parameters.
Please let us know if you have any additional questions or comments. Thank you!
---
Rebuttal Comment 2.1:
Comment: Thank you for addressing my concerns and adding experiments. | Summary: ForecastPFN is a zero-shot forecasting model designed to overcome the limitations of traditional forecasting methods when dealing with data-sparse applications. Unlike most approaches, ForecastPFN is trained solely on synthetic data, which captures diverse time series patterns and incorporates multi-scale seasonal trends, global trends, and noise. This prior-data fitted network approximates Bayesian inference and enables accurate and fast predictions in a single pass. Extensive experiments presented in the paper demonstrate that ForecastPFN outperforms state-of-the-art methods across multiple datasets and forecast horizons, even when the latter are trained with significantly more data points.
Strengths: 1. The manuscript is well-written, and the presented ideas are easy to follow.
2. The concept of training a foundational time series model solely on synthetic data is innovative and exciting. This approach not only addresses the issue of lower forecasting accuracy in data-sparse scenarios but also has positive environmental implications.
3. The execution of this work is commendable. The paper provides detailed information on the generation of the synthetic dataset, the model architecture, and the training process. Additionally, the authors conducted an extensive set of experiments, exploring various model configurations, prediction horizons, and datasets from multiple domains
Weaknesses: The paper predominantly relies on plots to present the results, without including tables. While it is understandable that summarizing all the experiments in tabular form would be challenging due to the extensive nature of the research, it can be difficult for readers to extract meaningful information from a large number of plots. Additionally, certain plots, such as Fig 5, are scaled down to a point where they become challenging to interpret. The same issue extends to the plots presented in the appendix. To enhance clarity, it is recommended that the authors include some key results in tabular form and adjust the scale of the plots to improve readability for readers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Considering the focus on commercial forecasting applications in this work, it would be valuable to understand the reasons behind modeling ForecastPFN as a point forecaster instead of a probabilistic forecaster.
2. In real-world forecasting applications, the presence of exogenous features can provide additional information that directly affects the accuracy of forecasts. How does ForecastPFN handle the incorporation of exogenous features?
3. It would be interesting to know if the authors explored the option of fine-tuning ForecastPFN on the target data. This analysis could provide insights into the impact of fine-tuning on the performance of ForecastPFN in downstream forecasting tasks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the excellent feedback. We are glad to see that you find our idea innovative and exciting, and that you find the execution commendable. We also find that your comments will substantially improve the final version of our paper. We reply to each question below.
**W1: No tables.** Thank you, we can easily add tables for all of our main experiments, and we are happy to do so. We present tables below which show the superiority of ForecastPFN on a majority of datasets at different train budgets with a prediction length of 24.
We have printed tables with MSE values for Train Budget = 50 below.
| | ECL | ETT1 | ETT2 | exchange | ili | traffic | weather |
|-------------|------------:|------------:|------------:|------------:|------------:|------------:|------------:|
| Arima | 1.844 | 0.344 | 1.590 | 1.175 | 5.039 | 2.696 | 0.041 |
| Autoformer | 2.368 | 0.850 | 1.299 | 0.381 | 1.858 | 3.803 | 0.371 |
| FEDformer | 0.899 | 0.762 | 1.175 | 0.786 | 1.549 | 2.331 | 0.817 |
| ForecastPFN | 1.104 | **0.121** | **0.340** | 0.061 | **1.102** | 2.029 | **0.010** |
| Informer | 1.078 | 1.967 | 1.105 | 4.694 | 11.045 | 5.848 | 0.325 |
| Last | 0.889 | 0.191 | 0.492 | **0.024** | 1.487 | 3.030 | 0.017 |
| Mean | **0.658** | 0.174 | 0.649 | 0.042 | **1.102** | **1.558** | 0.012 |
| Metalearn | 0.871 | 0.192 | 0.500 | 0.025 | 1.572 | 2.795 | 0.014 |
| Prophet | 2.181 | 3.298 | 14.060 | 421.577 | 11.813 | 2.328 | 0.101 |
| Transformer | 0.827 | 0.630 | 0.538 | 1.833 | 5.578 | 3.093 | 0.182 |
And with Train Budget = 500 below.
| | ECL | ETT1 | ETT2 | exchange | ili | traffic | weather |
|-------------|------------:|------------:|------------:|------------:|------------:|------------:|------------:|
| Arima | 1.974 | 0.201 | 1.075 | 1.175 | 4.852 | 1.628 | 0.041 |
| Autoformer | **0.482** | 0.141 | 0.360 | 0.073 | **0.705** | **0.473** | 0.189 |
| FEDformer | 0.499 | 0.135 | 0.374 | 0.072 | 0.727 | 0.455 | 0.208 |
| ForecastPFN | 1.104 | **0.121** | **0.340** | 0.061 | 1.102 | 2.029 | **0.010** |
| Informer | 0.442 | 0.144 | 0.263 | 0.518 | 4.978 | 1.028 | 0.186 |
| Last | 0.889 | 0.191 | 0.492 | **0.024** | 1.487 | 3.030 | 0.017 |
| Mean | 0.658 | 0.174 | 0.649 | 0.042 | 1.102 | 1.558 | 0.012 |
| Metalearn | 0.871 | 0.192 | 0.500 | 0.025 | 1.572 | 2.795 | 0.014 |
| Prophet | 15.698 | 3.226 | 4.907 | 13.347 | 3.478 | 1.530 | 0.056 |
| Transformer | 0.621 | 0.157 | 0.277 | 0.351 | 3.576 | 0.804 | 0.012 |
All other tables have been put in the appendix of the paper.
**Q1. Probabilistic forecaster.**
This is a great suggestion. Since this work already introduces a new paradigm, we wanted to start with the simplest regression setting, but we agree that adding probabilistic predictions is an exciting and natural area for future work. Towards that direction, we are training three ForecastPFN models to show that we can achieve probabilistic forecasts by ensembling (which is a strong technique for uncertainty calibration). For future work, it will be interesting to have the model itself output probabilities.
**Q2. Exogenous features.** This is another great suggestion. Once again, we started out with a simpler setting so that we could clearly demonstrate our approach, and so that we could compare to the main experiments in prominent existing work such as FEDFormer and Informer. We think this is a great idea for future work.
**Q3. Fine-tuning.** While this is once again a nice suggestion, we make three points. First, one of our main contributions is to create a universal forecasting method that can perform well on any new series, without any fine-tuning. Second, we believe that the settings where ForecastPFN stands out the most compared to prior work are those with very little inference time available and/or very little training data available -- settings in which it would be very challenging to fine-tune. Third, we note that prior-data fitted networks (PFNs) are still a new research area just one year old, and fine-tuning has not yet been tried even for classification-based PFNs. From a theoretical standpoint, it is not clear that fine-tuning would maintain the desirable Bayesian properties of PFNs. Intuitively, though, we agree that fine-tuning could help in practice.
Thank you very much, once again, for your positive view of our work and your excellent suggestions. If you find our responses satisfying, since we addressed your weaknesses, we respectfully ask that you consider increasing your score. On the other hand, we would be very happy to answer any follow-up or additional questions you have.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and doing the additional experiments. I would suggest moving these tables into the main paper as well. After reading the other reviews and the corresponding rebuttal discussions, I would like to maintain my original score.
---
Reply to Comment 1.1.1:
Title: Second reply to MwwS
Comment: Thank you very much once again for your review. We appreciate that you found our work innovative and exciting, with positive environmental implications, and also that our work is well-written and easy to follow with extensive experiments.
Thank you for your update. Yes, since you listed the lack of tables as a primary weakness, we created tables with [raw MSE values](https://openreview.net/forum?id=tScBQRNgjk&noteId=Z6mb79DbuS), [MSE wins and MSE rank](https://openreview.net/forum?id=tScBQRNgjk&noteId=WZZvHCWdd0), and [percentage completion](https://openreview.net/forum?id=tScBQRNgjk&noteId=F2RWvstySO), and we added all of these into our paper.
We would also like to mention that we just added a preliminary [ablation study](https://openreview.net/forum?id=tScBQRNgjk&noteId=IkeN4k7sXF) on the noise of our synthetic data, and we included [parameter counts](https://openreview.net/forum?id=tScBQRNgjk&noteId=OexP4WpgRv) for the deep learning models.
Please let us know if you have any additional follow-up questions or reservations about our work, and we will be happy to reply. Thank you for your time! | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback and suggestions. Our work introduces the first synthetically-trained, zero-shot, universal forecasting model, which performs particularly well in low-resource settings (small amount of in-distribution data, and/or low inference time budget), which are very important settings in practice. We appreciate that the reviewers find our method novel and exciting (**MwwS, 5AR9, 3SYZ**), our paper well-written and easy to follow (**MwwS, 3SYZ, 458V**), our experiments extensive and compelling (**5AR9, 3SYZ**), and the low-resource setting interesting and impactful (**MwwS, 5AR9, 458V**).
We would like to go over a few small points of confusion, just so that we are all on the same page. At a high level, we found that since our work is a substantially different paradigm from most prior work (which most reviewers mentioned as an exciting direction), there was also some confusion about our method and experimental setup. Please see our points below.
- **ForecastPFN was trained only once, ever.** Similar to foundation models such as GPT4, we only needed to train ForecastPFN once, and the weights are never updated even when evaluating on new, unseen time series.
- **ForecastPFN is a _universal_ forecasting model**. ForecastPFN, a single model, can be used for different input lengths, prediction lengths, and frequencies. It empirically performs well in different settings such as currency exchange, weather, consumer electricity usage, and illness trends. The time to output predictions for a brand new series is under one second -- just a model evaluation. To the best of our knowledge, ForecastPFN is the first model to have this level of universality, and the first to be pretrained on synthetic data.
- **We overcame significant technical challenges to create a universal, pretrained model**, such as handling dynamic scaling, designing a general and flexible architecture, and carefully designing and parameterizing the synthetic data, especially the noise distribution. **Other architectures such as FEDFormer are not able to pretrain with this level of generality** (see below).
We would also like to highlight the additional experiments we finished during the rebuttal period.
- **The FEDFormer architecture (the second-best algorithm in our experiments) cannot be pretrained with our synthetic data.** Based on the suggestions of reviewer 3SYZ, we (attempted to) pretrain FedFormer using our synthetic data. Although we spent significant time during the week-long rebuttal period, we so far are not able to get the FEDFormer architecture to achieve non-degenerate performance on the real-world datasets. Note that as we mentioned in a point above, it is non-trivial to design and train a model to achieve zero-shot, universal forecasting performance, and the ForecastPFN architecture was specifically designed for this goal.
- **We included several new tables of raw MSE values, as well as table versions of our main figures**. Based on the suggestions of reviewers MwwS and 3SYZ, we included raw MSE values as well as more tables, to better view our results. See these tables in [our reply to Reviewer MwwS](https://openreview.net/forum?id=tScBQRNgjk&noteId=Z6mb79DbuS) and in our one-page pdf.
- **We added two very simple baselines: historical mean, and previous value.** See the figures in our one-page pdf. We see that the historical mean baseline does very well, achieving second-best performance behind ForecastPFN when there is very low train budget or train time. This is a testament to how challenging it is to predict with very little training data or very little training time.
We would be very happy to keep the discussion going for any points that might be still unclear or any new comments. Thank you very much, once again, for giving suggestions for our paper.
Pdf: /pdf/c652314bf7b56a56d459f5410b95f98c85f8d571.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Human-Guided Complexity-Controlled Abstractions | Accept (poster) | Summary: This paper proposes a method for learning discrete representations whose complexity can be smoothly annealed. Experiments demonstrate that, at the appropriate level of complexity, these representations can be useful for downstream classification tasks involving abstract categories.
Strengths: - The work proposes an interesting method for learning a discrete 'codebook' with a controllable level of complexity.
- Experiments demonstrate a clear non-monotonic relationship between complexity and usefulness in a downstream task involving abstract categories.
- The method outperforms baselines in this setting.
Weaknesses: I have previously reviewed this work, and it seems that the paper has not been revised in a manner that sufficiently addresses the concerns raised by myself and other reviewers. Specifically, a major concern with this work as currently formulated is that the proposed human-in-the-loop approach does not seem very realistic. In the abstract, the authors note that 'humans learn discrete representations ... at a variety of abstraction levels ... and use the appropriate abstraction based on tasks'. This is a compelling motivation, but unlike humans, the proposed approach does not involve any method for autonomously selecting the appropriate level of abstraction for a given task. It is unclear how this human-in-the-loop approach will be scaled in a realistic manner, and the authors do not devote enough attention toward addressing this limitation.
It would help if the authors could add more discussion of this issue, and also describe more concretely the scenarios in which they envision this approach being useful. Given pre-trained models over a range of complexity levels, and some downstream task on which a user wants to fine-tune, would it not be much more straightforward to simply use a validation set from the downstream task to evaluate the full set of pre-trained models? This also suggests the possibility of automating the process of selecting the right abstraction level in a meta-learning setup.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would appreciate if the authors could devote more attention to the limitations identified in the previous section, and also provide more concrete detail about the intended use case for the proposed approach.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are no discernible negative societal impacts related to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their continued engagement with our work!
>This is a compelling motivation, but unlike humans, the proposed approach does not involve any method for autonomously selecting the appropriate level of abstraction for a given task.
The reviewer is correct that we do not provide a method for autonomously selecting the appropriate level of abstraction for a given task. Fundamentally, we propose a human-in-the-loop framework rather than an autonomous selection framework.
Nevertheless, we appreciate the suggestion to regularize the finetuning models, using a validation set to tune regularization. As noted in our general response, we conducted such experiments and found that 1) there is a small benefit to regularization but 2) our main trends hold, such that tuning to the right complexity level supports the greatest finetuning accuracy, therefore resulting in our method still outperforming the (stronger) regularized baseline.
We discuss an additional method for autonomously selecting the right encoder based on the number of prototypes in our overall response. This method, too, fails for both theoretical and empirical reasons.
Our findings, with strengthened baselines and methods for choosing models based on the number of prototypes, establish the importance of our human-in-the-loop framework rather than autonomously selecting models. Ultimately, given the positive results from our human-in-the-loop framework, and our negative results from various autonomous selection baselines, we hope that the reviewer will reconsider their score.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks to the authors for these clarifications and additional experiments. The regularization experiments are interesting, but that is not what I had in mind when I mentioned fine-tuning with a validation set. What I meant was that, given some downstream task, and given models with a range of complexity, a natural solution would be to generate a validation set for evaluating those models based on accuracy for the downstream task. That seems like a much more straightforward solution than having a human look at the prototypes generated by the model, but perhaps I am missing something.
I also still feel that the paper needs to more clearly acknowledge the limitations of the human-in-the-loop paradigm, and the need to develop a method for autonomously selecting the right encoder. The paper does not really discuss these issues, and the authors have not indicated that they will revise the paper to include such a discussion.
---
Reply to Comment 1.1.1:
Title: Followup experiments; reframing in overall response
Comment: # Paper Reframing
In light of the discussion and reviewer’s recommendations, we have proposed a reframing of the paper. Please see the general comment above!
# Experiment: validation set
We thank the reviewer for the clarification on validation set experiments. We have now conducted such experiments and report our approach and the results below:
Using the suite of pre-trained encoders from the original paper, for each $k$, we generated a train-validation split by holding out a validation set, $v$, from the trainset. Predictors were trained on the remaining training data and evaluated on the validation set. We conducted 5-fold cross-validation for different train-validation splits to record the average validation accuracy. We then selected the encoder with the highest average validation accuracy and finetuned it on both the train and validation data. We then recorded the test-set accuracy of these “optimal” models. We repeated this experiment over 5 random seeds (corresponding to the 5 pre-training runs).
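The selection procedure described above can be sketched in a few lines. Everything below is a hypothetical stand-in (random-projection "encoders", a nearest-centroid probe, synthetic data), since the actual encoders and finetuning heads are not reproduced here; it only illustrates choosing an encoder by cross-validated accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the pre-trained encoders: each one is a fixed
# random projection to a different width, mimicking representations of
# varying complexity. Dimensions and data are illustrative only.
X = rng.normal(size=(200, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary finetuning labels
encoders = {dim: rng.normal(size=(32, dim)) / np.sqrt(32) for dim in (2, 8, 16)}

def nearest_centroid_accuracy(Z_tr, y_tr, Z_va, y_va):
    """Fit one centroid per class on the train split, score on validation."""
    centroids = np.stack([Z_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Z_va[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y_va).mean()

def cross_val_score(Z, y, folds=5):
    """Average validation accuracy over `folds` contiguous splits."""
    idx = np.array_split(np.arange(len(y)), folds)
    accs = []
    for va in idx:
        tr = np.setdiff1d(np.arange(len(y)), va)
        accs.append(nearest_centroid_accuracy(Z[tr], y[tr], Z[va], y[va]))
    return float(np.mean(accs))

# Select the encoder whose representation gives the best cross-validated
# accuracy; in the described pipeline, that encoder would then be finetuned
# on the combined train + validation data.
scores = {dim: cross_val_score(X @ W, y) for dim, W in encoders.items()}
best_dim = max(scores, key=scores.get)
```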
Results, averaged across the 5 random seeds, from our experiments are included in the tables below (for VQ-VIB$\_\mathcal{C}$). (Given the long duration of these cross-validation experiments, we are still waiting for the iNat results for $|v| = 5$ and will report them upon completion. So far, trends across all domains have been consistent.)
## FashionMNIST
| $k$ | 2 | 5 | 10 | 50 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| $\|v\|=1$ | 0.73 | 0.89 | 0.93 | 0.97 |
| $\|v\|=5$ | - | - | 0.94 | 0.97 |
| $\|v\|=10$ | - | - | - | 0.97 |
## CIFAR100
| $k$ | 2 | 5 | 10 | 50 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| $\|v\|=1$ | 0.71 | 0.79 | 0.83 | 0.90 |
| $\|v\|=5$ | - | - | 0.82 | 0.91 |
| $\|v\|=10$ | - | - | - | 0.91 |
## iNat
| $k$ | 2 | 5 | 10 | 50 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| $\|v\|=1$ | 0.65 | 0.79 | 0.89 | 0.90 |
| $\|v\|=5$ | - | - |  |  |
| $\|v\|=10$ | - | - | - | 0.89 |
With enough finetuning data, and a large enough validation set, this method indeed recovers near-optimal encoders. However, as in many of our experiments, in the small-data regime, this method remains suboptimal. For example, for FashionMNIST, $k=2$, the selected model on average achieves 73% accuracy, lower than the peak of roughly 80%. We suspect that this method fails in low-data regimes because performance on a small validation set may not reflect performance on the test set, so using a validation set for selection only works with a large enough validation set. Indeed, with small validation sets, this method sometimes selected high-MSE models that happened to perform well on the validation set but performed sub-optimally on the test set.
We thank the reviewer again for suggesting this technique. | Summary: This paper proposes a framework for human-in-the-loop training of machine learning models where humans select among pretrained models with different complexity levels based on prototypes. The authors demonstrate that finetuning performance is significantly impacted by representation complexity in the experiments considered. Moreover, a user study demonstrates that humans are relatively successful at helping choose the correct complexity level for finetuning.
Strengths: - The idea of using humans to help select abstractions for a given problem is an interesting idea for human in the loop ML.
- The proposed VQ-VIB approach seems like a reasonable implementation for this use case.
- There seems to be solid empirical verification of the very intuitive connection between representation complexity and fine-tuning performance.
Weaknesses: - The authors only show transfer performance within single benchmarks rather than across tasks or in realistic pretraining and finetuning settings such as ImageNet or LLMs.
- The authors do not compare with other methods for human in the loop training in which the amount of human effort can be compared. For example, feature engineering is another way that humans can potentially impact downstream performance -- a technique widely used in practice.
- The paper also does not discuss the effort involved in providing this human guidance.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Can you quantify the amount of effort needed by human annotators to select the best abstraction level?
2. Can you quantify the amount of sub-optimality in the human study when the correct abstraction was not selected?
3. How does the effort and efficacy involved in prototype based abstraction selection compare with that of feature engineering?
4. There is something off about the bird abstraction example (line 20). What precisely is the task the abstraction is being used for in this case? Is the expert birdwatcher teaching the child or other experts? What does complexity and simplicity mean in this case?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do not really address the limitations of the work. I believe the main limitations are the lack of alternative methods for human-in-the-loop training considered i.e. feature engineering and the use of somewhat synthetic benchmarks for evaluation that are disconnected from real-world use cases. It is natural to wonder if humans can be as helpful at prototype based abstraction selection for transfer tasks that are more distant from each other and not drawn from the same benchmark.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The authors only show transfer performance within single benchmarks rather than across tasks or in realistic pretraining and finetuning settings such as ImageNet or LLMs.
While we do not consider LLMs (we did not wish to expand to language domains), we do use feature extractors pre-trained on ImageNet (as noted on line 457).
More generally, we do not seek to make any claims or consider transfer to different domains. Cross-domain transfer is an interesting idea, but here we seek to answer the first question of finetuning efficiency within a single domain, with the goal of understanding how to best develop visual representations that are adaptable and flexible for different end tasks.
>The authors do not compare with other methods for human in the loop training in which the amount of human effort can be compared. For example, feature engineering is another way that humans can potentially impact downstream performance -- a technique widely used in practice. Can you quantify the amount of effort needed by human annotators to select the best abstraction level? How does the effort and efficacy involved in prototype based abstraction selection compare with that of feature engineering?
While comparison to other human-in-the-loop pretraining methods is an interesting (and novel!) space to explore, we believe our method is orthogonal to traditional “manual feature engineering”. First, feature engineering typically requires the designer to have full knowledge of the desired downstream task, effectively requiring *a priori* knowledge of the test task during pretraining (e.g., we need to first know that we are trying to classify birds to identify that beaks are important) [1,2,3]. A specific model is then trained to fit those features. In contrast, our method assumes that we do not possess this information (nor access to the end user during pre-training), but rather “fits” the right representation adaptively at test time. Second, traditional human feature engineering is known to be expensive in both labor and expertise [1,2], often requiring either designer knowledge of the feature-extraction space or specific visualization tools designed to extract this information [3]. For example, to run this comparison in line with [3], we would have needed to design a web-interface tool for FashionMNIST, CIFAR100, and iNat, and then bring in human participants to design features for each desired downstream task, which would be extremely time consuming (for both the designer and participants).
We apologize that we did not report results on the time/ease of our human experiment and will correct this omission. Our method averaged ~1m30s per human participant, **including teaching time**, for each question. This required no designer knowledge of the test task, nor time spent deploying visualization tools or interfaces for extracting end-user knowledge of desired features – we simply needed to output the representations that were already generated during pretraining. This illustrates the relative ease and flexibility of deploying our method across different tasks with different end users.
>Can you quantify the amount of sub-optimality in the human study when the correct abstraction was not selected?
Our computational results, in conjunction with the human study, indicate the suboptimality of choosing the incorrect abstraction. For example, when users selected the lowest MSE model, that corresponded to accuracy of 55\%, as opposed to 70\% accuracy when selecting the best model (via our method), for $k=1$. (Percentages taken from Figure 4c in the main paper). One can generally quantify suboptimality by looking at the options selected by participants and then looking up the accuracy of such options in our computational experiments.
>There is something off about the bird abstraction example (line 20). What precisely is the task the abstraction is being used for in this case? Is the expert birdwatcher teaching the child or other experts? What does complexity and simplicity mean in this case?
We describe two tasks: an expert classifying birds (line 23) or teaching a child to classify birds into crude categories (line 24). In this context, complexity corresponds to the level of detail about the bird used for the task (e.g., specific plumage patterns for exact species classification vs. general color for the simpler task). Thus, we expect the expert to use a complex representation for classifying species and a less complex representation for discussing high-level colorings with children.
[1] "An empirical analysis of feature engineering for predictive modeling." Heaton.
[2] "Runtime Support for Human-in-the-Loop Feature Engineering System." Anderson et al.
[3] "Get a human-in-the-loop: Feature engineering via interactive visualizations." Gkorou et al.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your detailed responses to my questions. This definitely helped clarify with respect to some of my concerns. In particular, it was quite interesting to know about the average time taken by the human participants. This was lower than what I would have expected. As a result, I have decided to raise my score after the rebuttal. | Summary: The authors observe that the downstream tasks for pretrained models can rely on representations of varying level of complexity: as a running example, a birdwatcher relies on significantly more complex representations of images to classify bird species, relative to a child who may want to identify the color of a bird. They thus propose pretraining a variety of models with representations of varying complexity, having a human choose the appropriate pretrained model for the task, and then finetuning that model for the downstream task. Relative to finetuning the model with the most complex representations, this should lead to increased data efficiency.
The authors suggest a modification of VQ-VIB (vector-quantized variational information bottleneck) for the pretraining in this setting, called VQ-VIB_C (C standing for categorical) – my understanding is that the main difference is that in the authors’ method, the representation is divided into n chunks and each of the n chunks is snapped to a quantized vector (the loss remains the same). I have not spent much time delving into the details as the authors say this is not their main contribution: in particular I may be wrong about the differences from VQ-VIB.
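The chunked quantization described in this reading (divide the representation into n chunks, snap each chunk to its nearest codebook entry) can be sketched as follows. This is an illustrative assumption-laden sketch, not the authors' implementation: the codebook contents, chunk count, and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes for illustration: 4 chunks of width 8, 16 codebook entries.
n_chunks, chunk_dim, codebook_size = 4, 8, 16
codebook = rng.normal(size=(codebook_size, chunk_dim))

def quantize(z):
    """Map a (n_chunks * chunk_dim,) vector to its chunk-wise nearest codes."""
    chunks = z.reshape(n_chunks, chunk_dim)
    # Squared Euclidean distance from every chunk to every codebook entry.
    dists = ((chunks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)  # one codebook index per chunk
    return codebook[idx].reshape(-1), idx

z = rng.normal(size=n_chunks * chunk_dim)
z_q, indices = quantize(z)  # quantized vector and its discrete code
```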
The authors implement this idea for a variety of image-based datasets: FashionMNIST, CIFAR-100, and iNaturalist 2019 (iNat). They pretrain an encoder using a reconstruction loss as well as a classification loss for the labels defined by the dataset, and increase the regularization on the representations to generate a variety of checkpoints that produce representations of different complexity. They then evaluate these encoders in a downstream classification task that is coarser than the original classification task (e.g. living vs. non-living for CIFAR-100).
Their experiments show that (1) VQ-VIB_C outperforms two baselines (VQ-VIB, $\beta$-VAE), and (2) when the classification task is simple (e.g. 2-3 classes) and there are very few data points (e.g. 1-5 examples per class), it is better to use a pretrained encoder with lower representational complexity (though in all other cases it is typically fine to use the encoder with maximal representational complexity).
They also perform a human experiment in which they show that, given visualizations of the learned quantized vectors, humans can correctly identify which encoder will perform best during finetuning.
Strengths: 1. To my knowledge, the idea of using simpler representations for downstream tasks in order to improve data efficiency is novel, and I think it is an interesting hypothesis to explore.
2. Although the authors don’t consider it their main contribution, their experiments suggest that the VQ-VIB_C performs notably better than other alternatives on their task. (However, I did not investigate in detail, e.g. it is possible that the baselines were not tuned well while the VQ-VIB_C was.)
3. More generally, the experiments are quite detailed and look at the effects of various hyperparameters in the method.
4. The paper has an experiment with real humans to justify its claim that humans could choose an appropriate model to use for a downstream task. While I think the experiment is pretty different from realistic conditions, this is still better than the vast majority of papers on ideas like these, which often don’t bother testing their claims about real humans at all.
Weaknesses: I have a few concerns:
1. Significance: It seems to me that the author’s experiments suggest that the main idea only has a benefit in very limited settings, which are unlikely to arise in practice.
2. Baselines: For k > 1, the appropriate baseline would be to use the most complex representation, with strong regularization determined using a validation set at finetuning time.
3. External validity: The experiments involve downstream tasks that are pure coarsenings of the pretraining task, whereas in typical settings the downstream tasks will likely not be these “pure coarsenings”, which would reduce performance.
4. Relevance of the human experiment: The author’s experiment seems to “give away” the answer (though there is a case to be made that this reflects a strength of the method, rather than a weakness of the experiment).
Overall I feel conflicted about this paper. On the one hand, it has a nice idea with careful technical work done to flesh it out, and a large set of experiments to understand how the idea works in practice. On the other hand, I think the experiments suggest that the idea is not very practically useful (whereas the paper suggests they validate the idea), despite the experiments having some aspects that bias them towards showing the idea to be good. Overall I’m recommending a borderline reject, but I can see the case for acceptance as well.
**Significance**
Looking at the experiment results (including the ones in the appendix), it appears that if n > 3, or k > 5, or you use any method other than VQ-VIB_C, it is usually best to use the most complex representation (i.e. the one that achieves the lowest reconstruction loss). Thus the benefits of the idea only occur when n <= 3 and k <= 5, which implies a tiny dataset of 15 examples or fewer. As we might expect from such a small dataset, the resulting finetuned classifiers do not perform particularly well, getting around 70-90% accuracy.
Thus it seems like the idea in this paper is only helpful when (1) there is a very simple classification task, (2) there is very limited finetuning data available, (3) you use VQ-VIB_C rather than a different method, and (4) the user is happy with performance numbers of 70-90% accuracy. This seems extremely restrictive, and I find it hard to think of a realistic setting that satisfies all of these constraints (especially #4). Overall, I would characterize these experiments as providing a negative result.
(I would actually be more inclined to accept a version of this paper that was upfront about this, and framed the paper as a negative result that has taught us that lower representation complexity only buys you a little bit of data efficiency, that is overwhelmed very quickly by a large enough dataset.)
**Baselines**
In the paper’s experiments, the finetuning is done the same way for representations with varying levels of complexity (I believe, at least I didn’t see anything to the contrary in Appendix 7). However, a natural baseline would be to use the regular (maximum-complexity) representations, but use stronger regularization to learn a simpler classifier. One might reasonably argue that it is unclear how to choose this hyperparameter – but at least when k > 1, it should be possible to take 1 example (or more) per class to form a validation set that is used to tune the hyperparameter. I think it is plausible that this significantly improves performance for high-complexity representations. If it does work it would be a significant improvement, as there would no longer be any need for human input.
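A minimal sketch of this suggested baseline, assuming a ridge-style classifier as a stand-in for whatever regularized finetuning head would actually be used (data, lambdas, and names here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "maximum-complexity" features and toy binary labels in {-1, +1}.
Z = rng.normal(size=(60, 16))
y = np.where(Z[:, 0] > 0, 1.0, -1.0)

# Hold out a small validation split from the finetuning data.
tr, va = np.arange(0, 40), np.arange(40, 60)

def ridge_fit(Z, y, lam):
    """Closed-form L2-regularized least-squares classifier."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)

def accuracy(w, Z, y):
    return float((np.sign(Z @ w) == y).mean())

# Tune the regularization strength on the validation split, then keep
# the strongest-scoring setting for the final classifier.
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
val_acc = {lam: accuracy(ridge_fit(Z[tr], y[tr], lam), Z[va], y[va])
           for lam in lambdas}
best_lam = max(val_acc, key=val_acc.get)
```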
**External validity of experiments**
All of the experiments in the paper have the downstream task labels $Y_t$ being a strict coarsening of the pretraining task labels $Y_p$ (or in other words, $Y_t$ is a deterministic function of $Y_p$). However, this may not be the case in realistic settings – for example, in the running example, there may be birds that are of the same species but have different colors, and so a representation that just identifies the species would be insufficient for predicting the color. Should we expect the method to work even in these cases? I’m not sure; the fact that the pretraining includes a reconstruction loss suggests that it would still work (though likely not as well as in the paper’s experiments). Ideally however the authors would conduct such an experiment to test it empirically.
**Relevance of the human experiment**
Looking at the pdf for the user study (Appendix 11), the instructions seem to “give away” the answer. In particular, the instructions contain:
> Generally, the more images the second robot is using, the less general it will be, and the worse it will do at categorizing the 3 new high-level categories.
and
> If the visualization shows that (1) the robot is not using many images, and (2) they roughly represent the three high-level categories [...] then the robot should perform well
and
> For example, here is one robot’s visualization which is perhaps too general, as there is not even three categories being used.
From which it is easy to infer “choose the option with fewest categories, subject to having at least three categories” – which then leads to the desired answer.
Arguably this represents a strength of the method – the approach for selecting the appropriate model is so simple that it can easily be communicated even to non-experts. However, in this case, it is so simple that there isn’t really any need for human input – we could equally well find the appropriate model using a simple heuristic of choosing the option that has a few more “most important images” than the number of classes in the classification task. (Perhaps this is another baseline which should be compared against.)
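The heuristic above is simple enough to write out directly; this is an illustrative formalization of the suggested baseline, not anything from the paper, and the prototype counts are made up.

```python
def select_by_prototype_count(prototype_counts, num_classes):
    """Pick the encoder with the fewest prototypes, but at least num_classes.

    Falls back to the largest available count if no encoder has enough
    prototypes for the downstream classification task.
    """
    eligible = [c for c in prototype_counts if c >= num_classes]
    return min(eligible) if eligible else max(prototype_counts)

# For a 3-way downstream task, the rule picks the 3-prototype encoder.
chosen = select_by_prototype_count([2, 3, 5, 10, 50], num_classes=3)  # → 3
```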
**Minor issues**
Line 91:
> Lastly, we assume that the pre-training labels are a sufficient statistic for the task-specific labels: $I(X; Y_p) = I(X; Y_t)$. Intuitively, this states that the pre-training objective must include relevant information for the finetuning task.
I don’t think that’s the right criterion. For example, for reconstruction pretraining with red-yellow classification as the downstream task, we have $Y_p = X$ and $Y_t$ is whether the bird is red or yellow. Then $I(X; Y_p) = I(X; X) = H(X) \geq 1$ (since entropy of natural bird images is large), but $I(X; Y_t) \leq H(Y_t) \leq 1$ (since $Y_t$ is a binary variable).
I think what you mean to say is that $I(X; Y_t \mid Y_p) = 0$, that is, conditioned on knowing $Y_p$, there is no more information about $Y_t$ that can be learned from $X$.
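A toy numeric check of this corrected criterion (the distributions are assumptions chosen purely for illustration): with $X$ uniform over 8 values, a fine labeling $Y_p$, and a coarsening $Y_t$, we get $I(X; Y_p) \neq I(X; Y_t)$ even though $I(X; Y_t \mid Y_p) = 0$.

```python
import numpy as np

# X uniform on 8 values; Y_p = X // 2 gives 4 fine labels;
# Y_t = Y_p // 2 is a deterministic coarsening into 2 labels.
xs = np.arange(8)
y_p = xs // 2
y_t = y_p // 2

def entropy(labels):
    """Entropy in bits of a deterministic labeling of uniform X."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# For deterministic Y = f(X) with uniform X, I(X; Y) = H(Y).
i_x_yp = entropy(y_p)  # 2.0 bits
i_x_yt = entropy(y_t)  # 1.0 bit

# I(X; Y_t | Y_p) = H(Y_t | Y_p) here: within each Y_p group, Y_t is
# constant, so the conditional mutual information is exactly zero.
cond_mi = sum((1 / 4) * entropy(y_t[y_p == g]) for g in np.unique(y_p))
```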
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could you give examples of realistic settings in which using low-complexity representations would be useful – bearing in mind the four requirements I mention in the “Significance” section in Weaknesses above? (Or alternatively, if I’m wrong about the four requirements, how / why am I wrong?)
2. Is there reason to expect poor performance from finetuning the most complex representation with stronger regularization? (Possibly with a validation set, as described above)
3. How well should we expect the method to work when $Y_t$ is not a deterministic function of $Y_p$?
Minor suggestion: Please increase the size of the text in Figure 4(c).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations brought up in the weaknesses section are not mentioned or addressed. In particular, I think the issues raised in “Significance” and “External validity of experiments” should obviously be mentioned (or otherwise addressed); the others are more debatable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful review. They understood the paper quite well and provided useful suggestions for strengthening baselines (which we implemented).
# Significance
>[T]he benefits of the idea only occur when n <= 3 and k <= 5… classifiers do not perform particularly well, getting around 70-90% accuracy. Thus it seems like the idea in this paper is only helpful when [4 constraints are met]…
The reviewer is correct that for every architecture other than VQ-VIB$_\mathcal{C}$ (our method), we see no benefit to selecting a less complex representation. However, it is not true that “the benefits of the idea only occur when n <= 3 and k <= 5.” For example, for FashionMNIST, performance peaks at the “right” complexity level even for $k = 50$ (Figure 10 in Appendix 8). Notably, this peak supports roughly 96% accuracy.
We acknowledge that the advantages of our method are limited to the small-data regime. However, rather than frame the whole paper as a negative result, we wish to emphasize that while some existing methods ($\beta$-VAE, VQ-VIB$\_\mathcal{N}$) do not confer benefits from simpler representations, VQ-VIB$\_\mathcal{C}$ does! In low-data scenarios, our method demonstrates its value over existing techniques.
# Limitations
>I think the issues raised in “Significance” and “External validity of experiments” should obviously be mentioned.
We note limitations of our approach as the amount of finetuning data increases (line 167) and repeat this conclusion throughout Appendix 8 (line 631). In future revisions, we will expand such discussion.
# Questions
>Could you give examples of realistic settings in which using low-complexity representations would be useful – bearing in mind the four requirements I mention…? (Or alternatively, if I’m wrong about the four requirements, how / why am I wrong?)
We believe that the real requirements are less strict than noted by the reviewer: we found benefits to our method when finetuning for FashionMNIST using 50 examples, and achieved 96% accuracy. Thus, requirements 2 (limited finetuning data, k <= 5) and 4 (performance under 90%) are not supported by our results. We also do not see how using VQ-VIB$_\mathcal{C}$ is a limitation: it outperforms baselines and therefore is the method that we advocate for in this work.
We believe that our approach could be relevant for low-volume visuomotor robotics settings. A considerable body of work studies how to rapidly train robots on sorting tasks from vision using few demonstrations, where minimizing the number of new demonstrations required at test time from the human user is desirable [1,2,3]. Using our approach, we could train a robot to identify clothing for sorting (as in our FashionMNIST example) using only 10 examples per class and achieve roughly 93% accuracy! If a factory wanted to retrain the robot on a different sorting task (e.g., a binary defect-identification task), it could quickly supply the small number of finetuning examples needed. This type of flexible test-time adaptation motivates our framework.
>Is there reason to expect poor performance from finetuning the most complex representation with stronger regularization?
We appreciate the suggestion to train with stronger regularization and did so! Results, as indicated in our overall response, are included in the attached pdf.
We found that regularization improved model performance overall, but there was nevertheless still the “peaking” effect of optimal performance when tuned to the right complexity level. Intuitively, this arises because regularization can help address aspects of domain complexity, but fundamentally it is still easier to train with less complex domains. Thus, our method continues to outperform the strong baseline of training a regularized classifier on the most complex representation.
> (Paraphrased:) What if the finetuning task labels are not a deterministic coarsening of the pre-training labels?
This is an interesting question which we explicitly flagged as outside the scope of our current paper. (“Lastly, we assume that the pre-training labels are a sufficient statistic for the task-specific labels…”) Nevertheless, we can speculate what behavior we may see.
We believe that, if the finetuning labels are less aligned with the pre-training labels, we would see peaked finetuning performance at higher complexity (lower MSE) than in our current experiments (or no peak at all). Generally, compressing model representations helped with finetuning, but only up to the point where information necessary for the finetuning task started being compressed out. With worse pre-training labels, we might expect models to start compressing information that is important for finetuning earlier. Thus, peak performance should occur at higher complexity values.
> (Paraphrased:) Could one automatically select the right complexity level based on the number of prototypes?
We discuss this question in our overall rebuttal. While an interesting hypothesis, there are theoretical reasons that this approach should fail, which we back up in experiments.
# Minor issues:
Thank you for pointing out the error. We wrote the correct text, “we assume that the pre-training labels are a sufficient statistic for the task-specific labels,” but miswrote the equation. We will correct the document to write $I(X; Y_t) = I(Y_p; Y_t)$.
[1] "Robot learning from randomized simulations: A review.” Muratore et al.
[2] "A Methodology to Design and Evaluate HRI Teaming Tasks in Robotic Competitions" Marrella, Andrea, et al.
[3] “Preferred interaction styles for human-robot collaboration vary over tasks with different action types” Schulz et al.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I have read through the authors' rebuttal and the other reviews, and am maintaining my score of 4.
## Baselines
Thanks for running the updated baseline! I'm more convinced now that the method shows benefits in the very-low-data regime ($n \leq 3, k \leq 5$).
## Significance
I am still not convinced that the method shows benefits for $k > 5$. Looking at $k = 50$ in Figure 10 in Appendix 8, the curves of best fit show a peak at non-maximal complexity, but if you ignore the curves of best fit and just look at the actual data, it is far from clear that there is an actual difference between maximal complexity vs. the point at which the curve of best fit peaks. If there is a difference, it is very small in magnitude. If we instead look at the version with updated baselines (lightest green points in Figure 24a in the author rebuttal pdf), there is even less of a visible difference. To me, these graphs suggest that there is a plateau of maximal performance that includes the maximal complexity representation.
However, even if we grant that Figure 10 and Figure 24a show an improvement in the $k = 50$ case for 3-way FashionMNIST, that is a cherrypicked example. If we instead look at Figures 24g / 13 (iNat 3-way), Figure 12 (CIFAR100 20-way), Figure 14 (iNat 34-way), Figure 15 (iNat 1010-way), we see that the best choice at $k = 50$ is the maximal complexity representation. Only Figure 11 / 24d (CIFAR100 2-way) show a peak at non-maximal complexity, and those are even more arguable than Figure 10 and 24a.
(Incidentally, I don't trust the curves of best fit. My guess is that the authors are fitting cubic polynomials to the data? When one fits a cubic to a set of data that increases and then saturates / asymptotes creating a plateau, the cubic will tend to put its maximum value in the middle of that plateau -- thus creating an illusion that the maximum value is in the middle of the plateau, rather than at the end of it.)
## Constraints and realistic settings
> Using our approach, we could train a robot to identify clothing for sorting (as in our FashionMNIST example) using only 10 examples per class and achieve roughly 93% accuracy.
Yes, I would agree that this would be a plausible application. But currently, my sense is that at $k = 10$: (a) the method provides a very small boost (if any) relative to using the maximum complexity representation, and (b) this only happens in some of the settings tested (FashionMNIST), while in others (iNat) there is no boost. So overall this does not seem like a significant improvement, especially given the additional cost of human intervention.
To put it a different way, the best case seems to be: "By default, we can train a robot to identify clothing for sorting using only 10 examples per class and achieve roughly 92% accuracy. With our method, we can get costly human input to select an appropriate representation, and then we can train it to 93% accuracy". And it is not clear to me whether even this is achieved, given the experiments in settings like iNat.
> Reviewers 99tN and TMXo suggested the hypothesis that the optimal encoder is the one that uses approximately the same number of prototypes as finetuning labels. Such an approach does not succeed.
I don't mean to imply that this holds in general. I am saying that your human experiment involved instructions that strongly suggested that the humans follow the rule "choose the encoder that has the minimum number of prototypes, but at least as many as there are finetuning classes", and that in the specific case studied in your human experiment this was the right answer. So I cannot tell from this experiment whether humans would choose encoders appropriately in more complicated settings, like the handbags with / without straps example you give.
---
Reply to Comment 1.1.1:
Title: Updated framing!
Comment: # Paper reframing
Please see the general comment above for a proposed paper reframing!
# Updated human experiment
> I cannot tell from this experiment whether humans would choose encoders appropriately in more complicated settings, like the handbags with / without straps example you give.
Thank you for clarifying this concern! As written in the general response, we also conducted a follow-up user study for the FashionMNIST experiment where models were finetuned on an arbitrary 3-way classification task (such as the theoretical example posed). Groupings were intentionally non-meaningful (Group 1: Tshirt, Trouser, Sandal; Group 2: Pullover, Dress, Sneaker; Group 3: Coat, Shirt, Bag, Boot).
In this domain, peak performance was achieved for models using far more than just 3 prototypes (and therefore the “choose the encoder that has the minimum number of prototypes, but at least as many as there are finetuning classes” heuristic fails). Using this novel 3-way grouping for FashionMNIST, our follow-up user study confirmed that participants were **still able to correctly identify the optimal abstraction level for this task**, indicating that humans are still able to correctly select the optimal encoder in more complicated settings when the heuristic does not hold. We are happy to present both human-validated tasks in the revision. | Summary: This paper explores an interesting premise -- how does the level of abstraction captured by a discrete (visual) representation dictate downstream task performance, where downstream tasks can be at arbitrary levels of abstraction. Specifically, the running example from the work that I really like is that of bird-watching; when communicating amongst experts, the extremely specific species names (e.g., "white osprey") is very useful, and captures the right level of information. However, when communicating with small children who are looking at things in the wild, coarser descriptions (e.g., "the white bird" or "the red bird") are much more useful.
To study this premise, this paper makes two contributions: first, it introduces a new method for learning discrete representations at various levels of abstraction, the Vector-Quantized Variational Information Bottleneck - Categorical (VQ-VIB_c). Second, and more importantly, it presents a thorough evaluation, including a human-in-the-loop user study (N = 17), showing conclusively that for downstream tasks with "known" abstraction levels, the "best" representation to use is the one that roughly matches that level of abstraction.
Strengths: This paper starts with a strong motivating question, introduces a new method to explore the hypotheses formed from this question, and critically performs a comprehensive evaluation and user study that confirms the proposed hypothesis. I think the user study linking "prototypes" from different models (capturing different levels of abstraction) to the downstream finetuning task's level of abstraction is the most convincing, and most meaningful result in this work. I really like this type of study, and it helps support the paper's key contributions.
Weaknesses: Unfortunately, I have several concerns with this paper, ranging from the very confusing presentation (it took me a very long time to understand the experimental protocol, and several details about the pretraining/finetuning procedure were missing or scattered across the Appendix) to the actual usability of such an approach for more practical applications.
First, on the presentation/soundness side -- I ask this question more explicitly below, but what is the exact link between the process of "degrading" a high-accuracy model (pretrained to high accuracy on the "full" classification task), and emerging levels of abstraction? In other words, how exactly does the process of changing the loss hyperparameters lead to models that capture different levels of granularity? From my read of the paper/appendix, this link is never made explicit, and the process is further left underspecified -- if hyperparameters are changing in a staged way (or you're running for a limited number of additional gradient steps given the "high accuracy" model), couldn't a possible confound be the initial batches seen during this "degradation process"? Another confound could just be the per-example/per-class difficulty, which is known to be uneven across classes in these datasets (e.g., iNaturalist has severe label imbalance out of the box, and papers from the Distributionally Robust Optimization literature report huge discrepancies in average vs. worst-class accuracy for other datasets). Is this controlled for in any way?
Beyond these questions, it is unclear to me how such an approach would be useful for practical applications. Is the idea that given some supervised dataset, you should train all models to "best accuracy" and then selectively degrade the resulting representations depending on your (family of) downstream tasks? Isn't this expensive/redundant?
---
EDIT (Post-Rebuttal): I think many of my soundness concerns have been addressed during rebuttal, and in discussion with the other reviewers. Raising my score to a 5.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: I am very confused by the two-phase approach for varying representational capacity during pretraining. The paper seems to indicate that at first, models are pretrained until they obtain perfect accuracy on the (full) classification problem (e.g., the 10-class classification in FashionMNIST, the 100-class classification in CIFAR100). Then, after that happens, by tweaking the loss hyperparameters (e.g., downweighting "utility" and upweighting "representation conciseness" / "commitment loss"), you compress the representations until they are not meaningful, and look at various saved models that are produced in the course of that process for downstream finetuning (hence the plots that track downstream accuracy vs. MSE).
Why does this approach make sense at all for evaluating the "conciseness/abstraction" of a representation? How are you supporting the claim that the "trajectory of models" between high accuracy and "indistinguishable" representations is capturing different levels of abstraction in the representation space? This feels like a major major connection that needs to be made explicit in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The main body of the paper does not explicitly discuss limitations; the appendix discusses some of the limitations of the VQ-VIB_c approach in passing, but a proper treatment of the pros/cons of this approach is not present.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review!
# Clarifications
The reviewer raises several questions about how we generate “degraded representations.” Here, we seek to clarify our approach with a brief summary and address specific questions later.
First, we train a VQ-VIB$_\mathcal{C}$ model to high accuracy and low reconstruction loss by setting high values for $\lambda_I, \lambda_U$ and a low value for $\lambda_H$. After convergence, we increase $\lambda_H$ by a small amount every epoch. By increasing $\lambda_H$, we impose a higher penalty on the entropy of representations. Thus, as $\lambda_H$ increases, the model uses fewer representations, each of which represents more abstract concepts. The idea of varying a hyperparameter to induce different representation complexity levels is widespread in the Information Bottleneck literature for continuous [1, 2] and discrete [3, 4, 5] representations.
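To make the two-phase schedule concrete, here is a minimal sketch (in Python/NumPy) of what the annealing phase could look like. This is an illustrative assumption, not the authors' implementation: the loss terms, the increment `delta`, and the `codebook_entropy` helper are all placeholders standing in for the actual VQ-VIB$_\mathcal{C}$ training code.

```python
import numpy as np

def codebook_entropy(assign_probs):
    """Entropy (in nats) of the marginal codebook-usage distribution.

    assign_probs: (batch, codebook_size) soft assignment probabilities.
    """
    p = assign_probs.mean(axis=0)   # marginal usage over the batch
    p = p[p > 0]                    # avoid log(0) for unused entries
    return float(-(p * np.log(p)).sum())

def total_loss(utility_loss, recon_loss, entropy, lam_U, lam_I, lam_H):
    # All terms are minimized; a larger lam_H penalizes high-entropy
    # (i.e., high-complexity) representations more strongly.
    return lam_U * utility_loss + lam_I * recon_loss + lam_H * entropy

# Phase 1 (not shown): train to convergence with high lam_U, lam_I and a
# low lam_H. Phase 2: raise lam_H a little every epoch, checkpointing each
# time, so later checkpoints use fewer, more abstract codebook entries.
lam_U, lam_I, lam_H, delta = 1.0, 1.0, 0.0, 0.01
schedule = []
for epoch in range(5):
    lam_H += delta      # the only hyperparameter varied in this phase
    schedule.append(lam_H)
    # ... one epoch of gradient updates on total_loss(...) would go here ...
```

As $\lambda_H$ grows, minimizing this objective pushes the usage entropy down, which is exactly the "fewer, more abstract representations" effect described above.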
# Questions
>What is the exact link between the process of "degrading" a high-accuracy model… and emerging levels of abstraction? … [H]ow exactly does... changing the loss hyperparameters lead to models that capture different levels of granularity?
We increase $\lambda_H$ after convergence to penalize the entropy of representations, resulting in the model using fewer representations, which necessarily represent more abstract concepts. This in turn leads to decreased model accuracy. These links between hyperparameter tuning, representation complexity, and model accuracy have also been well established in prior literature, such as [3], who explicitly show that decreasing complexity leads to more abstract representations. Similar ideas are explored both theoretically [6] and empirically [1, 2, 4] in other works as well.
We note in our paper that we increase $\lambda_H$ during pre-training (line 179) and show in experiments how representations change over the course of varying $\lambda_H$ (e.g., Figures 4, 7, 16, and 18).
>If hyperparameters are changing in a staged way… couldn't a possible confound be the initial batches seen during this "degradation process"? Another confound could just be the per-example/per-class difficulty…?
The reviewer makes an interesting point, but it appears unlikely that the suggested confounds had an effect in our experiments. We trained and evaluated all methods on the same classes and data, so class imbalance would be unlikely to favor one method over another. Furthermore, we trained across many random seeds and for many epochs, using traditional methods of loading randomly shuffled batches. Therefore, we believe the change in performance across models may be attributed solely to our manipulated variables, e.g. changes in $\lambda_H$ (for a given model type) or changes in architecture (e.g., comparing VQ-VIB$_\mathcal{C}$ to $\beta$-VAE).
>I am very confused at the two-phase approach for varying representational capacity during pretraining... [M]odels are pretrained until they obtain perfect accuracy… after that happens, by tweaking the loss hyperparameters (e.g., downweighting "utility" and upweighting "representation conciseness" / "commitment loss"), you compress the representations until they are not meaningful...
> Why does this approach make sense at all for evaluating the "conciseness/abstraction" of a representation? How are you supporting the claim that the "trajectory of models" between high accuracy and "indistinguishable" representations is capturing different levels of abstraction in the representation space?”
The reviewer mostly summarized our pre-training process correctly, although we note that we only vary a single hyperparameter during training to penalize the complexity of representations (and not, as stated by the reviewer, by varying the commitment or utility weights). (See line 179.)
We have already discussed above how we use $\lambda_H$ to penalize the entropy of model representations, which is motivated by substantial theoretical and empirical existing literature.
We show in experiments that VQ-VIB$\_\mathcal{C}$ models indeed learn different levels of abstraction at different complexity levels during our pre-training process. We show examples of a model from a single training run at high and low complexity in Figure 4. We show a similar example in the iNat domain in Figure 7. Moreover, in Figures 16 and 18, we show how representations evolved during training (as we penalize complexity of representations) for VQ-VIB$_\mathcal{C}$ models. In each figure, we show how lower-complexity models (which differ from high-complexity models only by training with a greater $\lambda_H$) use fewer and more abstract representations. Lastly, in every plot with MSE as an x-axis, we implicitly show that in pre-training, we forced models to go from low-MSE (high complexity) to high-MSE (low complexity) representations.
> The main body of the paper does not explicitly discuss limitations… a proper treatment of the pros/cons of this approach is not present.
We thank the reviewer for raising this point. We would like to highlight that we do include a discussion of limitations of our approach in the main paper (line 265), but will work to ensure they are emphasized more clearly in the future draft.
We hope these clarifications help increase the reviewer’s understanding of our work, and we ask that the reviewer adjust their score if their confusion is sufficiently addressed, or suggest specific changes they would like to see to address their concerns.
[1] Deep Variational Information Bottleneck. Alemi et al.
[2] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. Higgins et al.
[3] Trading off Utility, Informativeness, and Complexity in Emergent Communication. Tucker et al.
[4] Proto-VAE: A Trustworthy Self-Explainable Prototypical Variational Model. Gautam et al.
[5] Representation Learning in Deep RL via Discrete Information Bottleneck. Islam et al.
[6] The Information Bottleneck Method. Tishby et al.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Comment - Raising Score to a 5
Comment: Thanks to the authors for their clarifications! After reading through these and the global comments, as well as the comments from the other reviewers, I feel like many of my original questions around the paper have been addressed.
I still believe that the link between the current process of degrading a representation via an entropy penalty and making broader claims about emerging levels of abstraction to be theoretically tenuous, but the user study and responses from the authors raise many good points, downweighting this initial comment from my review. I think many of my issues around the soundness of the paper have been addressed.
I'm still not sure about the experiments controlling for the confounds I mention, or about the general viability of the new framing of the paper (the human-in-the-loop approach doesn't seem well motivated, for the reasons that Reviewer TMXo stated), but I am raising my score to a borderline accept (5). | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments. We have replied to specific questions in individual responses. Here, we briefly highlight results that address some common themes:
# Regularization baselines:
Reviewers 99tN and TMXo asked for stronger finetuning baselines, with regularized models trained with a validation set. We thank the reviewers for this excellent suggestion.
We implemented this approach, using both L2 and L1 regularization on model weights, sweeping through regularization weights from 0.00001 to 0.1 at powers of 10 (and the default of 0.0), and selected the best weight when assessed on a validation set of 1 example per class. Results from such experiments (using L2 regularization) are included in the attached pdf, for $\beta$-VAE, VQ-VIB$\_\mathcal{C}$, and VQ-VIB$_\mathcal{N}$, in all domains (using $n = 1$ for both VQ-VIB methods, as it performed best). Results were nearly identical for L1 regularization.
Overall, regularization improved finetuning performance of all models by a small amount, but the main trends of our experiments still held. That is, for small $k$, we observed that finetuning performance first increased and then decreased as VQ-VIB$_\mathcal{C}$ models used more abstract representations, confirming the value of our method.
The greatest benefit from using regularized models seemed to be in the lowest MSE range. With a “perfect” regularization method, we might expect that performance would plateau at low MSE rather than worsen. Instead, using L2 regularization, we found that performance decreased slightly as MSE decreased, but performance decreased less than without regularization. This aligns with the intuition that regularization can help for the most complex representations, but consistently finding the right regularization method remains challenging.
Critically, these findings corroborate the main idea of our paper: that compressing representations remains an important mechanism for optimal finetuning performance in low-data regimes, and that end users are adept at selecting the optimal compressed representations adaptively at test time. Upon acceptance, we would update all figures in the main paper to use the regularized baselines.
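As a rough illustration of the sweep described above, the following sketch fits an L2-regularized linear probe on frozen features and selects the penalty weight on a 1-example-per-class validation set. The synthetic Gaussian "features", the scikit-learn probe, and the `C = 1/weight` mapping are all assumptions made for illustration, not the authors' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n_per_class):
    """Stand-in for frozen pretrained features: one Gaussian blob per class."""
    X = np.vstack([rng.normal(0, 1, (n_per_class, 16)),
                   rng.normal(2, 1, (n_per_class, 16))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_train, y_train = make_split(10)   # k = 10 labelled examples per class
X_val, y_val = make_split(1)        # validation: 1 example per class

# Sweep L2 weights from 0.00001 to 0.1 at powers of 10, plus the default 0.0.
weights = [0.0, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
best_w, best_acc = None, -1.0
for w in weights:
    # scikit-learn parameterizes L2 strength inversely: C = 1 / weight.
    clf = LogisticRegression(C=(1e12 if w == 0.0 else 1.0 / w), max_iter=1000)
    clf.fit(X_train, y_train)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_w, best_acc = w, acc
print(best_w, best_acc)
```

With such tiny validation sets the selected weight is noisy, which matches the observation above that the right regularization strength is hard to find reliably in low-data regimes.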
# Real-world use cases
Reviewers 99tN and TMXo asked for a real-world use case of our method. We acknowledge in our paper that our method confers the largest benefit in low-data visual regimes (we do not claim to solve all transfer learning scenarios), which are well motivated in many real-world tasks such as visuomotor robot learning and end user home personalization, where the ability to collect data personalized to users remains limited [1]. Consider the scenario where a visuomotor policy has been trained to help a user sort clothing in their home or identify mugs for making coffee [2]. In this problem setting, we wish to be able to identify specific types of clothing or mugs that the user uniquely possesses in their home, and adapt our policy to these personalized preferences without the need for the user to provide large-scale training data from their home. Using our approach, we could train a robot to identify clothing for sorting (as in our FashionMNIST example) using only 10 examples per class and achieve roughly 93% accuracy. Moreover, if the user decides to give the robot to their family member, we could adaptively perform the same finetuning process to the new user’s preferences! This type of flexible test-time adaptation would be extremely desirable in these scenarios.
# Autonomously selecting the right representation via prototypes
Reviewers 99tN and TMXo suggested the hypothesis that the optimal encoder is the one that uses approximately the same number of prototypes as finetuning labels. Such an approach does not succeed.
Consider the following theoretical example in FashionMNIST. Using the same pre-trained encoders as in our main paper, we seek to finetune models based on a binary classification task: handbags with straps and handbags without straps. Given that there are two finetuning labels, one might consider encoders that use two prototypes. However, our FashionMNIST encoders that only use two prototypes typically use one prototype to represent shoes and one prototype to represent everything else. Such encoders would fail catastrophically at fine-grained discrimination between types of handbags.
Empirical results corroborate the intuition from this theoretical example.
In the CIFAR100 20-way finetuning task (which contains odd groupings, such as two distinct categories for vehicles) in the main paper, peak performance was at the lowest MSE values, which corresponded to using roughly 100 prototypes. Conversely, using 20 prototypes resulted in worse performance.
Furthermore, we conducted an additional FashionMNIST experiment where models were finetuned on a new 3-way classification task. Groupings were intentionally non-meaningful (Group 1: Tshirt, Trouser, Sandal; Group 2: Pullover, Dress, Sneaker; Group 3: Coat, Shirt, Bag, Boot). In this domain, peak performance was achieved for models using far more than just 3 prototypes. Once again, this establishes that if the fine-grained labels do not align with how VQ-VIB$_\mathcal{C}$ models learn compressed representations, models benefit from having more prototypes (and less compressed representations).
These experiments establish that autonomously selecting the optimal encoder for a finetuning task remains challenging. Conversely, humans are adept at choosing the right model. We conducted an additional online user study using the novel 3-way grouping for FashionMNIST and found that participants were able to correctly identify the optimal abstraction level for this task as well. This reinforces the value of our human-in-the-loop framework.
[1] "Robot learning from randomized simulations: A review." Muratore et al.
[2] "Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation." Peng et al.
Pdf: /pdf/c2d5d52bc832ca67823221cce400e550dfd5bc5b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |