title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Segment Anything in 3D with NeRFs | Accept (poster) | Summary: This paper introduces a simple and efficient method for general 3D segmentation based on SAM (a powerful 2D foundation model) and a NeRF-style representation. Instead of building a 3D foundation model from scratch, SA3D takes two steps to lift 2D SAM segmentation results to 3D in a concise and efficient way. Based on a well-trained NeRF, a rendered reference view and the human input prompts are sent to SAM to obtain the first segmentation. Then the proposed mask inverse rendering and cross-view self-prompting strategies optimize a volume-based 3D mask field and propagate segmentation information to different views in an iterative and incremental manner. Comprehensive experiments prove the effectiveness of the designed pipeline.
Strengths: 1. The proposed method is simple but effective. It provides a general interactive 3D object segmentation paradigm which does not rely on heavy pre-training.
2. The method is pretty efficient. Given a pre-trained NeRF, the 3D segmentation can be completed within only minutes with the help of SAM.
3. The experiments cover various datasets and comparisons to SOTA methods, and are comprehensive and persuasive.
4. The paper is well-written and easy to follow.
Weaknesses: 1. I expect that segmenting “anything” in 3D means the method can segment all the things in a 3D scene, so that it is consistent with the purpose of SAM. But the proposed SA3D can only segment one object at a time.
2. Furthermore, using prompts (such as points, scribbles, text) to achieve 2D segmentation of one target object is a long-studied subject. Besides SAM, there should be many alternative choices such as [1][2][3], which should have been evaluated for this 3D task. Or the irreplaceability of SAM needs to be explained.
3. I wonder if the method requires complete observation of the target object in at least one image. What if the target object is not observed completely in any view? How will the choice of reference view affect the 3D segmentation results? Is there any necessary strategy to achieve the best performance? The robustness of the method should be evaluated.
[1] Sofiiuk K, Petrov I A, Konushin A. Reviving iterative training with mask guidance for interactive segmentation[C]//2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022: 3141-3145.
[2] Liu Q, Xu Z, Bertasius G, et al. SimpleClick: Interactive image segmentation with simple vision transformers[J]. arXiv preprint arXiv:2210.11006, 2022.
[3] Chen X, Zhao Z, Zhang Y, et al. Focalclick: Towards practical interactive image segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 1300-1309.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. It seems that the method is not restricted to NeRF-based representations. Since the authors assume NeRF provides good depth, should the proposed method also work with other representations such as meshes and point clouds?
2. The 3D mask is modeled as a dense voxel grid, which may cause high memory consumption for high-resolution representations. How will different resolutions affect the performance? Maybe using an MLP is a better choice?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable comments. We answer your questions as follows and hope the response could clear your concerns.
### Weaknesses
> W1: ... SA3D can only segment one object at a time.
**A1:** SAM achieves “segment everything” by densely prompting the image. These prompts are stacked along the batch dimension and fed to the model. Such an implementation is, in essence, equivalent to querying the SAM decoder for many objects one by one.
SA3D also inherits this ability and mechanism. Segmenting multiple objects can be achieved by stacking multiple 3D mask grids into a 4D data structure (i.e. HxWxL -> NxHxWxL), where N denotes the number of objects. Then, iterative self-prompting and inverse rendering can be performed for all objects simultaneously. We attach some visualization results in Fig 3 (global response PDF).
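The stacking described above could look roughly like this (a hypothetical NumPy sketch; the names and the simple additive update rule are illustrative assumptions, not taken from the released code):

```python
import numpy as np

# N per-object mask grids stacked along a new axis (HxWxL -> NxHxWxL),
# updated in one batched step per view. Toy resolution for illustration.
N, H, W, L = 3, 8, 8, 8
mask_grids = np.zeros((N, H, W, L), dtype=np.float32)  # confidence scores

def accumulate_masks(mask_grids, voxel_idx, sam_masks, lr=1.0):
    """Schematic mask inverse rendering for N objects at once.

    voxel_idx: (R, 3) voxel hit by each of R rays
    sam_masks: (N, R) SAM foreground (1) / background (0) per object and ray
    """
    i, j, k = voxel_idx.T
    # Foreground votes raise confidence; background votes suppress it.
    mask_grids[:, i, j, k] += lr * (2.0 * sam_masks - 1.0)
    return mask_grids

rays = np.array([[0, 0, 0], [1, 1, 1]])                     # 2 toy rays
sam = np.array([[1, 0], [0, 1], [1, 1]], dtype=np.float32)  # 3 objects
mask_grids = accumulate_masks(mask_grids, rays, sam)
```

The batched indexing is what makes the per-view update cost grow only linearly in N.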
By the way, as stated in Lines 282-285, we admit that the current method still has limitations in segmenting everything. For example, it is difficult for SAM to guarantee cross-view consistency for extremely small parts, because some parts may be segmented into different instances with similar semantics in different views. Fixing these cases in 3D is a challenging issue and is left for future research.
> W2: ... Besides SAM, alternative choices [1-3] ...
**A2:** Yes. SA3D can generalize to these methods effortlessly, which demonstrates our claim in the abstract: "... lift a 2D vision foundation model to 3D, as long as the 2D model can steadily address promptable segmentation across multiple views". We conduct experiments on the NVOS dataset and the 'Replica office 0' scene to evaluate SA3D with these interactive segmentation methods:
| | SAM | | SimpleClick | | RITM | | FocalClick | |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Metrics | IoU | Acc | IoU | Acc | IoU | Acc | IoU | Acc |
| Mean | 90.3 | 98.2 | 87.7 | 0.9778 | 81.2 | 96.3 | 88.9 | 98.1 |
|Replica_office_0|SAM|SimpleClick|RITM|FocalClick|
|-|-|-|-|-|
|mIoU|84.4|72.7|69.0|63.6|
From the above results we find that lifting these interactive segmentation methods with SA3D can beat the previous SOTA (ISRF, mIoU 83.8; NVOS, mIoU 70.1) on the NVOS benchmark. Additionally, when encountering complicated indoor scenes like Replica, the other methods perform much worse, showing the robustness of SA3D as a foundation model.
> W3: What if not observed completely in any views? ... the choice of reference view ...? ... necessary strategy ...? robustness ...
**A3:** There are two cases: part of the object does not appear in some views (but appears in other views), or it does not appear at all. In the former case, SA3D can recover it using information from other views; in the latter case, parts of the target object can be missing (see Fig 9, global response PDF). This is an interesting future direction (e.g. applying generative models such as diffusion models).
To evaluate the robustness of SA3D, we randomly select 3 new reference views from the training set and compute the mean metrics on the NVOS dataset. The results are as follows:
| | Random 3 views | | Reported in the Paper | |
|:-:|:-:|:-:|:-:|:-:|
| Metrics | IoU | Acc | IoU | Acc |
| Mean | 89.8 | 98.2 | 90.3 | 98.2 |
Visualization results are provided to demonstrate the robustness of SA3D to reference view. Please check Fig 8 (global response PDF).
### Questions
> Q1: ... not restricted by NeRF ... mesh and point cloud?
**A4:** Yes. Extending SA3D to other 3D formats is straightforward. Here we outline a pipeline for point clouds:
1. Generate multi-view images of the point cloud data using off-the-shelf methods (e.g. Point2Pix, CVPR'23). In practice, we find that projecting 3D points with RGB information onto a 2D plane is enough for SAM to segment.
2. Select a reference view and input prompts; use SAM for segmentation
3. Assign the predicted mask to the corresponding 3D points (Mask Inverse Rendering)
4. Project the 3D mask onto a new view and get the corresponding 2D image (Mask Rendering)
5. Extract self-prompting points from the 2D mask (Self-prompting)
6. Feed the prompt and 2D image to SAM for segmentation
7. Repeat steps 3-6 until the full object is obtained.
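A minimal, hedged sketch of steps 3-4 of this pipeline (the pinhole projection, helper names, and toy intrinsics are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def project(points, K):
    """Project 3D points (camera frame, z > 0) to integer pixel coords."""
    uv = (K @ points.T).T
    return np.round(uv[:, :2] / uv[:, 2:3]).astype(int)

def assign_mask(points, mask2d, K, mask3d):
    """Step 3: a 3D point becomes foreground if its projection lands
    inside the 2D mask predicted by SAM (Mask Inverse Rendering)."""
    px = project(points, K)
    H, W = mask2d.shape
    inb = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
    hit = np.zeros(len(points), dtype=bool)
    hit[inb] = mask2d[px[inb, 1], px[inb, 0]]
    return mask3d | hit

def render_mask(points, mask3d, K, H, W):
    """Step 4: splat foreground points onto a new view (Mask Rendering)."""
    px = project(points, K)
    img = np.zeros((H, W), dtype=bool)
    inb = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
    sel = inb & mask3d
    img[px[sel, 1], px[sel, 0]] = True
    return img

# Round trip on toy data: two points, one inside the SAM mask.
K = np.array([[10.0, 0, 8], [0, 10.0, 8], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 1.0],    # projects to pixel (u=8, v=8)
                [0.5, 0.0, 1.0]])   # projects to pixel (u=13, v=8)
sam_mask = np.zeros((16, 16), dtype=bool)
sam_mask[8, 8] = True               # SAM marked pixel (8, 8) foreground
m3d = assign_mask(pts, sam_mask, K, np.zeros(2, dtype=bool))
new_view = render_mask(pts, m3d, K, 16, 16)
```

Steps 5-6 (self-prompting and querying SAM) would then operate on `new_view` exactly as in the NeRF case.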
> Q2: ... dense voxel grid ... high memory consumption ... different resolutions affect ... MLP is better?
**A5:** Good question! In our implementation, the resolution of the mask grids is set to match the TensoRF grids ($320^3$). We study different resolutions of mask grids on the NVOS dataset; results are shown below and visualizations are provided in Fig 7 (global response PDF). The effect of the grid resolution is slight.
| $320^3$ | | $160^3$ | | $80^3$ | | $40^3$ | | MLP | |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| mIoU | mAcc | mIoU | mAcc | mIoU | mAcc | mIoU | mAcc | mIoU | mAcc |
| 90.3 | 98.2 | 90.1 | 98.2 | 89.2 | 98.1 | 88.0 | 97.8 | 80.9 | 96.4 |
We also report the results of an MLP version (not better) in the above table. The advantages of explicit mask grids over an MLP are as follows.
1. Optimization for mask grids is explicit and straightforward. In contrast, assigning a 3D point as positive in an MLP may unexpectedly make other points positive in the 3D space, leading to unstable self-prompting.
2. In gradient descent, using explicit mask grids saves a lot of memory because the computation graph is simple. For an MLP, the computation graph contains many non-leaf nodes across its layers.
3. By adopting mask grids, the 3D segmentation results are explicit, which makes the downstream task, *e.g.* editing, more convenient. For example, removing the target object or extracting it from the scene only requires direct matrix multiplication between the mask grids and the density grids (if the density grids are also stored explicitly).
We agree that MLP enjoys theoretically infinite resolution and smaller storage costs. Therefore, exploring hybrid mask representations is a promising future direction.
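The editing operation from point 3 can be sketched as follows (a hedged illustration with assumed grid shapes and a hypothetical binarized mask; the real grids would come from SA3D and the NeRF):

```python
import numpy as np

# With explicit grids, extracting or removing the target object is just an
# elementwise product between the binarized mask grid and the density grid.
density = np.random.rand(32, 32, 32).astype(np.float32)  # scene density grid
mask = np.random.rand(32, 32, 32) > 0.5                  # binarized 3D mask

extracted = density * mask       # keep only the target object's density
removed = density * (~mask)      # delete the target object from the scene
```

Because `mask` and `~mask` partition the grid, the two edits together exactly reconstruct the original density field.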
---
Rebuttal Comment 1.1:
Title: Fixing Two Typos in the Rebuttal
Comment: Dear reviewer,
We would like to fix two typos in our rebuttal. In **A2**, the Acc of SimpleClick (in the table) should be **97.8** rather than 0.9778. Additionally, the final sentence "… showing the robustness of SA3D as a foundation model." should be revised to "… showing the robustness of **SAM** as a foundation model."
We apologize for any potential misunderstanding that may have arisen.
Best,
Authors | Summary: This paper proposes to lift 2D segmentations from foundation models such as SAM to 3D by iterating between SAM and NeRF, without re-training or re-defining either. Given a trained NeRF model, the model first renders a view, which is also processed by SAM given a user click. With the segmentation by SAM, the model optimizes a 3D segmentation volume such that it volume renders into a mask consistent with what SAM produced (“mask inverse rendering”). Next, the model projects this initial segmentation volume to other viewpoints, producing incomplete 2D masks. Finally, the model computes “good prompts” for SAM to complete these masks (“cross-view self prompting”). The model iterates between these steps until a complete segmentation volume has been produced.
Because of the generalization power of SAM, the model is able to segment almost anything in 3D, without requiring applying changes to SAM or NeRF, making itself a framework that can be applied to any 2D foundation models that we want to lift to 3D.
Strengths:
The proposed method is simple yet effective, following the recent trend of developing foundation models and/or open-vocabulary LLMs. More importantly, it also bridges the gap between powerful 2D models and 3D understanding as required by robots or autonomous vehicles. It is general and applicable to any 2D foundation models. The results are strong both qualitatively and quantitatively.
Weaknesses: Since the framework is claimed to be (and I think it is) general and applicable to any 2D foundation models, the paper will be much stronger if the authors could demonstrate the use of this framework to lift another foundation model’s output into 3D.
The paper will also benefit from showing some 3D shape results extracted from the 3D segmentation volume. Often, it's the underlying 3D geometry that matters for, say, robotic manipulation; "RGB looking good" is separate from that.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
Has runtime been reported? I recommend augmenting Table 4 with an additional row of runtime.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable feedback and hope our following clarifications and responses could clear your concerns.
### Weaknesses
> W1: Demonstrate the use of this framework to lift another foundation model’s output into 3D.
**A1:** Thanks for the suggestion. We try lifting SEEM [1], a concurrent work to SAM, with SA3D. As suggested by Reviewer jjUN, we also lift the three interactive methods with SA3D. SEEM supports more modalities but suffers from relatively worse segmentation performance. We evaluate the performance of SA3D with SEEM (and the three other interactive methods) on the NVOS dataset. The results are shown below:
| | SEEM | | SimpleClick | | RITM | | FocalClick | |
|:------------:|------------|--------|-------------|-----------|-----------|-----------|------------|-----------|
| Scene | IoU | Acc | IoU | Acc | IoU | Acc | IoU | Acc |
| fern | 0.7921 | 0.9293 | 0.8109 | 0.9347 | 0.7662 | 0.9154 | 0.8214 | 0.9404 |
| flower | 0.9313 | 0.9836 | 0.9362 | 0.9849 | 0.8912 | 0.9728 | 0.9372 | 0.9852 |
| fortress | 0.9851 | 0.9972 | 0.9761 | 0.9954 | 0.9755 | 0.9953 | 0.9839 | 0.9970 |
| horns_center | 0.8182 | 0.9627 | 0.9554 | 0.9921 | 0.8642 | 0.9759 | 0.9547 | 0.9920 |
| horns_left | - | - | 0.8599 | 0.9908 | 0.8451 | 0.9896 | 0.8407 | 0.9894 |
| leaves | - | - | 0.9326 | 0.9957 | 0.9341 | 0.9958 | 0.9258 | 0.9952 |
| orchids | 0.8655 | 0.9771 | 0.7869 | 0.9636 | 0.5744 | 0.9089 | 0.8984 | 0.9827 |
| trex | 0.7700 | 0.9701 | 0.7598 | 0.9647 | 0.6418 | 0.9472 | 0.7507 | 0.9636 |
| mean | 0.8604 | 0.9700 | 0.8772 | 0.9778 | 0.8116 | 0.9626 | 0.8891 | 0.9807 |
Please kindly note that the missing entries for SEEM are because it cannot generate reasonable segmentation results for the reference view images no matter how we adjust the prompts. Some visualization results of SEEM are provided in Fig. 2 of the global author response PDF. SEEM has the advantage of accommodating multiple cross-modal prompt inputs, but exhibits inferior segmentation performance compared to SAM. Further utilizing the cross-modal prompting ability of different foundation models to enhance the behaviour of self-prompting is a promising direction.
> W2: Showing some 3D shape results that are extracted from the 3D segmentation volume.
**A2:** We extract meshes from the segmented 3D objects using the standard marching cubes algorithm. Visualization results can be found in Figure 4 of the global author response PDF. Please note that the quality of these meshes can be further improved by applying more effective NeRF2Mesh methods [2][3]. The scripts for mesh extraction will also be released to facilitate follow-up research.
### Questions
> Q1: Augmenting Table 4 with an additional row of runtime.
**A3:** We update Table 4 with runtime supplemented as follows. It shows a trade-off between time cost (related to the number of views) and the segmentation quality.
| Number of Views | 5 (10%) | 9 (20%) | 21 (50%) | 43 (100%) |
|-|-|-|-|-|
| IoU on Fortress (forward facing) | 97.8 | 98.3 | 98.3 | 98.3 |
| Time Cost (s) | 7.56 | 12.80 | 28.98 | 58.97 |
| Number of Views | 11 (10%) | 21 (20%) | 51 (50%) | 103 (100%) |
| IoU on Lego (360 degrees) | 84.5 | 84.8 | 91.5 | 92.2 |
| Time Cost (s) | 23.49 | 43.54 | 103.83 | 204.93 |
[1] Zou, Xueyan, et al. "Segment everything everywhere all at once." arXiv preprint arXiv:2304.06718 (2023).
[2] Tang, Jiaxiang, et al. "Delicate textured mesh recovery from nerf via adaptive surface refinement." arXiv preprint arXiv:2303.02091 (2023).
[3] Yariv, Lior, et al. "BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis." arXiv preprint arXiv:2302.14859 (2023).
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Update
Comment: I thank the authors for an informative rebuttal, which includes several interesting new results. Provided that the authors will include these new results -- lifting another foundation model, 3D mesh extraction, etc. -- in their final paper (which will be a much stronger one), I remain positive about this paper and support its acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: Thank you very much for your response and efforts! Your comments have greatly improved our paper. The related new content will be incorporated into our final version. | Summary: This paper proposes a method for segmenting a pre-trained NeRF by utilizing SAM. Given a pre-trained NeRF, it first asks users to provide prompts (e.g., some points) for a reference view. It then utilizes SAM to generate a 2D segmentation for the reference view and utilizes the 2D mask to optimize a 3D segmentation mask. After that, it will iteratively update the 3D mask by (a) selecting a random view, (b) rendering the view, (c) utilizing the 3D mask to automatically generate prompts for the view, (d) utilizing prompts to query SAM, (e) utilizing SAM output to update the 3D mask. The method proposes a self-prompting strategy to generate prompts given the 3D mask and an IoU-aware view rejection to ignore bad predictions from SAM.
Strengths: 1. The paper proposes a novel method for segmenting a pre-trained NeRF by using SAM.
2. The paper proposes a self-prompting strategy for automatically generating prompts for SAM and a rejection strategy to ignore bad SAM predictions.
3. The authors conduct experiments on three datasets and provide some ablation studies.
4. The paper is easy to follow.
Weaknesses: 1. The method requires minutes for a single segmentation (e.g., of an object), which may greatly limit the usage of the method in many real-world applications (e.g., robotics manipulation).
2. Recently, there have also been many NeRF/point cloud-based open-world (vocabulary) 3D segmentation methods (for both scene level and part level) [1-8] that leverage pre-trained 2D VLMs (e.g., CLIP). However, the discussion and comparison with them are missing. It seems that many of these prior methods don't need per-instance optimization and can generate a 3D segmentation mask in just seconds. Please cite these papers and discuss the advantages of the proposed method.
3. In Line 153, the paper states, "Given an incomplete 2D rendered mask". Why is the rendered mask always incomplete? Is it possible that some SAM predictions are wrong and include extra regions, which leads to an enlarged 3D segmentation mask? If this is possible, the self-prompting strategy will also generate a wrong prompt for SAM in the later steps. Please explain whether this case is possible and how the proposed method can handle the wrong SAM prediction (including extra regions).
4. The negative refinement term (Line 137) is unclear to me. Please explain in more detail about the motivation for this term.
5. Equation (7) is not clear to me. Could you explain in more detail? Also, it would be better to provide an ablation study to verify the necessity of the confidence decay step (Equation (7)). Can the self-prompting strategy still work without confidence decay?
6. For all tables, could you include runtime for both the proposed method and baseline methods?
7. It would be better to include evaluations on some standard 3D segmentation benchmarks (e.g., ScanNet and PartNet) to have extensive comparison with existing methods as well.
[1] Kerr, Justin, et al. "Lerf: Language embedded radiance fields." arXiv preprint arXiv:2303.09553 (2023).
[2] Peng, Songyou, et al. "Openscene: 3d scene understanding with open vocabularies." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Ding, Runyu, et al. "PLA: Language-Driven Open-Vocabulary 3D Scene Understanding." arXiv preprint arXiv:2211.16312 (2022).
[4] Ha, Huy, and Shuran Song. "Semantic abstraction: Open-world 3d scene understanding from 2d vision-language models." 6th Annual Conference on Robot Learning. 2022.
[5] Zhang, Junbo, Runpei Dong, and Kaisheng Ma. "Clip-fo3d: Learning free open-world 3d scene representations from 2d dense clip." arXiv preprint arXiv:2303.04748 (2023).
[6] Liu, Minghua, et al. "Partslip: Low-shot part segmentation for 3d point clouds via pretrained image-language models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[7] Yang, Jihan, et al. "Regionplc: Regional point-language contrastive learning for open-world 3d scene understanding." arXiv preprint arXiv:2304.00962 (2023).
[8] Jatavallabhula, Krishna Murthy, et al. "Conceptfusion: Open-set multimodal 3d mapping." arXiv preprint arXiv:2302.07241 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How many gradient descent iterations do you need in each inverse rendering step?
2. In Table 2, SA3D performs worse than the baseline method (MVSeg). Is there any common pattern for these instances? Or any reasons for the performance differences?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your instructive comments.
### Weaknesses
> W1: ... minutes for a single segmentation ...
**A1:** The time cost reported in our paper is an upper bound. As shown in Table 4 (global response text), SA3D achieves satisfactory segmentation with a few sampled views, which only requires < 10 seconds.
> W2: ... [1-8] that leverage VLM ... cite and discuss ...
**A2:** Thanks. We will cite all these papers and add discussions below.
The most relevant work to SA3D is LERF [1], which trains a feature field of the VLM together with the radiance field. Compared with SA3D, LERF focuses on coarsely localizing specific objects with text prompts rather than fine-grained 3D segmentation. Its reliance on CLIP features makes it insensitive to the precise location of the target object. When there are multiple objects with similar semantics in the scene, LERF cannot perform effective 3D segmentation. We evaluate the segmentation ability of LERF on the NVOS dataset and attach the results in the global response PDF. Moreover, we also provide visualization results in Fig 6 of the global response PDF to support our statement.
The remaining methods mainly focus on point clouds. By relating the 3D point cloud to 2D multi-view images through camera poses, the features extracted by the VLM can be projected onto the 3D point cloud. The data acquisition of these methods is more expensive than ours, i.e., acquiring multi-view images for NeRFs.
> W3: Line 153 ... rendered mask always incomplete? ... handle the wrong SAM prediction?
**A3:** Sorry for the ambiguous statement "incomplete 2D rendered mask". We will replace "incomplete" with "inaccurate".
It is possible that SAM generates inaccurate predictions. We designed several mechanisms to tackle this problem:
- The negative refinement term in the mask inverse rendering loss (Eq. 5). In each iteration, if a region is predicted as background by SAM, its mask confidence score is then suppressed, which significantly alleviates inaccurate segmentation by SAM under certain views.
- The confidence decay term in the self-prompting strategy (Eq. 8). This term adjusts rendered mask confidence scores according to the 3D distance between the selected prompts and candidate coordinates, favoring prompt points that are close together in 3D.
- The IoU-aware view rejection (Lines 171-176). When SAM predicts a mask that greatly differs from the currently rendered one, i.e., obtaining a low IoU, this view will be skipped to avoid wrong mask allocation.
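The last mechanism can be sketched as follows (a hedged illustration; the threshold value is an assumption, not the paper's setting):

```python
import numpy as np

def reject_view(rendered_mask, sam_mask, thresh=0.5):
    """Schematic IoU-aware view rejection: skip the view when SAM's
    prediction disagrees too much with the currently rendered mask.
    The threshold here is illustrative, not the paper's value."""
    inter = np.logical_and(rendered_mask, sam_mask).sum()
    union = np.logical_or(rendered_mask, sam_mask).sum()
    iou = inter / union if union > 0 else 0.0
    return iou < thresh  # True -> skip this view, no mask update

rendered = np.zeros((4, 4), dtype=bool); rendered[:2] = True
agreeing = rendered.copy()                 # SAM agrees with the render
empty = np.zeros((4, 4), dtype=bool)       # SAM returns nothing
```

Rejected views are simply skipped, so an occasional bad SAM prediction never reaches the mask grids.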
> W4: The negative refinement term ... unclear ...
**A4:** This term suppresses the extra regions included in SAM predictions on the mask grids: when a region is not segmented as foreground by SAM, its mask confidence score is suppressed. Thus a region in the mask grids is labeled as foreground only if it is consistently classified as foreground across different views. We will clarify this in Lines 135-138.
> W5: Eq. (7) not clear ... ablation study...?
**A5:** Eq. 7 defines a confidence decay term for self-prompting, based on the intuition that prompt points should not be too far apart in 3D space. Assume we have gathered N prompt points. When we select the (N+1)-th point from the remaining candidates, we first check the distances between these candidates and the existing N prompt points. If a candidate is far from all of them, its confidence score is suppressed heavily.
To realize this, for a candidate, we traverse the existing prompts and get a set of decay terms. The smallest of them is determined as the final decay term for the candidate. This decay term between a candidate and an existing prompt point is defined as a product involving the confidence score of the prompt point and the min-max normalized 3D Euclidean distance between the two points.
In common cases, without the decay step, the self-prompting still works. But for some hard cases, it may fail. For the concrete ablation and discussion, please refer to our response to R-Xgwo's Q2.
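The decay logic described above can be sketched as follows (a hedged NumPy illustration; the min-max normalization and the final subtraction are assumptions, not the paper's exact Eq. 7):

```python
import numpy as np

def decayed_scores(cand_xyz, cand_scores, prompt_xyz, prompt_scores):
    """Suppress candidates that are far from every existing prompt point."""
    # Pairwise 3D distances between candidates and existing prompts: (C, P)
    d = np.linalg.norm(cand_xyz[:, None, :] - prompt_xyz[None, :, :], axis=-1)
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-8)  # min-max normalize
    # One decay term per (candidate, prompt) pair; keep the smallest one
    # per candidate, as described above.
    decay = (prompt_scores[None, :] * d_norm).min(axis=1)
    return cand_scores - decay

prompts = np.array([[0.0, 0.0, 0.0]])       # one existing prompt point
p_scores = np.array([1.0])
cands = np.array([[0.0, 0.0, 0.1],          # close to the prompt
                  [0.0, 0.0, 10.0]])        # far from the prompt
scores = decayed_scores(cands, np.array([0.9, 0.9]), prompts, p_scores)
```

The far-away candidate ends up with the lower score, so the next selected prompt stays near the existing ones.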
> W6: ... runtime ...
**A6:** See the global response texts for the time cost.
> W7: ... evaluations on some standard 3D segmentation benchmarks ...
**A7:** Please kindly note that SA3D performs interactive segmentation, which is quite different from traditional 3D segmentation approaches. However, making SA3D support traditional 3D segmentation is an interesting research direction. We still provide experimental results on ScanNet for a more comprehensive evaluation. The experimental setting follows Table 3. Some results (mIoU) are shown as follows:
||scannet0050_02|scannet0144_01|scannet0300_01|scannet0354_00|scannet0389_00|
|-|-|-|-|-|-|
|Single View|53.0|63.5|61.2|56.9|60.6|
|SA3D|72.9|77.1|75.8|69.8|78.9|
### Questions
> Q1: ... iterations ... in each inverse rendering step?
**A8:** Only one gradient descent iteration is required.
> Q2: In Table 2, SA3D performs worse than ... MVSeg.
**A9:** Yes. This gap stems from the inductive bias of SAM and some ambiguous segmentation targets. We provide some visualization results to support the statement in Fig 5 (global response PDF).
In the 'Orchids' scene, the segmentation target is a group of flowers. SAM tends to segment each flower separately. If SAM is forced to segment them into one, it may involve some unexpected regions in the prediction.
A similar phenomenon happens in the 'Room' scene. The SPIn-NeRF dataset treats the table along with objects on it as a whole. SAM segments the table separately, ignoring the placed objects.
For the 'Pinecone' scene, both MVSeg and SA3D perform well. This scene involves many details, which are ignored by the ground-truth sometimes. We believe a 0.5% mIoU gap is reasonable since both our segmentation results and corresponding annotations are not perfect.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank you for your detailed response! Most of my concerns have been addressed.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: We greatly appreciate your response and once again extend our sincere gratitude for your valuable time and effort spent on the review. If there are any points that require further clarification, we wholeheartedly welcome additional discussion. | Summary: The authors propose a combination of the newly introduced Segment-Anything Model (SAM) with Nerf, yielding the Segment Anything in 3D (SA3D) system.
SA3D can take a 3D scene reconstructed by Nerf and, based on a user prompt (e.g. a few keypoints), carve out distinct 3D objects from the scene. This is shown to outperform previous state-of-the-art systems.
There are two novel bits that successfully combine the two approaches:
Firstly, a loss function to induce a 3D mask field on a voxel grid based on the SAM outputs (Eq. 4), and secondly (and most importantly) a method to prompt SAM on another view based on the existing 3D mask - allowing one to prompt from only one image and then perform "prompt propagation" to the rest.
Strengths: Simple approach that should be easy to reproduce - the authors also share their code, which looks reasonably clean.
Solid experimental improvements - I am convinced by these numbers that the method outperforms some of the latest state-of-the-art systems.
Good ablations.
Interesting - but a bit hand-wavy - results that SA3D can improve SAM; some more validation of this would be welcome.
Weaknesses: - Overstatement: the authors are using SAM in tandem with Nerf, to get 3D objects out of a scene, which is great - but practically this is similar to classic co-segmentation or the systems that they compare to, but a bit better because of piggy-backing on SAM.
Stating in the abstract that "Our research offers a generic and efficient methodology to lift a 2D vision foundation model to 3D, as long as the 2D model can steadily address promptable segmentation across multiple views." suggests more than just this - one can start imagining extending the attention operations to 3D, supervising for depth/volumetric reconstruction, or, more importantly, having a foundation model for 3D, none of which is the case based on what we have in the present paper. Technically the statement is not false - but it implies more than what is actually happening in the paper.
- I could not find any discussion about the computational efficiency of the proposed algorithm. At the moment it's unclear what is the importance of this.
Minor:
- It is unclear intuitively what the second term does in Eq. 5 from the way it's written. I think a more obvious rewrite is
$(\lambda - 1) L_{proj} + \lambda \sum_r M(r) \propto L_{proj} + \frac{\lambda}{\lambda - 1} \sum_r M(r)$,
and explain that the second term is just a regularization term on the optimized segmentation field.
- l. 280: "we demonstrate limitations of SA3D in panoptic segmentation" -> could not find any pointer to this in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -some more validation of the statement that SA3D can improve SAM would be welcome.
- Self-prompting strategy: I am not too sure I see the necessity of this particular prompting strategy. The authors lift every prompt point to 3D, reduce the mask score in 3D in its neighborhood, regenerate the mask, and pick the next strongest point. How much better is this than a plain 2D non-maximum suppression strategy? That should be more efficient computationally and yield roughly the same results - if not, how much worse are they?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your efforts on the review work and your detailed comments. We answer your questions as follows and hope the response clears up your concerns.
### Weaknesses
> W1: Overstatement: ... "Our research ... as long as ..." suggests more ...
**A1:** Thanks for the suggestion. We acknowledge that SA3D may not have all the abilities you mentioned, but it indeed has some valuable potential. To better support our statement, we demonstrate the generalization ability of our framework with some other models (please check our **A1** to b3JM).
Following your suggestion, we change our statement to “Our research reveals a **potential** methodology to lift **the ability** of a 2D vision foundation model to 3D, as long as the 2D model can steadily address promptable segmentation across multiple views.” We welcome any other suggestions on improving this part.
> W2: ... discussion about the computational efficiency ...
**A2:** Please check the global response for a comprehensive discussion about computation efficiency.
> W3: Unclear ... Eq. 5 ... a more obvious rewrite ...
**A3:** Thanks for the suggestion. The second term is essentially a regularization term that adds a negative effect on all regions involved in rendering the current mask to suppress the potential inaccurately-segmented region.
Only if SAM consistently predicts a region as foreground from different views does SA3D mark the corresponding 3D region as foreground. We will clarify the statement around Lines 135-138 and take this suggested equation as a supplement for better understanding.
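For intuition only, the effect described above can be sketched in a few lines of numpy. This follows the reviewer's suggested additive form rather than the paper's exact Eq. 5, and `lam` is an illustrative weight:

```python
import numpy as np

def mask_projection_loss(rendered_mask, sam_mask, lam=0.1):
    """Reward mask confidence where SAM predicts foreground, and penalize
    *all* rendered confidence (the regularization term), so a 3D region is
    kept only if SAM marks it foreground consistently across views."""
    reward = -(sam_mask * rendered_mask).sum()   # agreement with SAM's mask
    regularization = lam * rendered_mask.sum()   # suppress every involved ray a bit
    return reward + regularization

# Rays where SAM sees foreground are net-rewarded; the stray confidence on
# the third ray (SAM background) is only penalized, driving it toward zero.
rendered = np.array([0.9, 0.8, 0.1])   # hypothetical per-ray mask confidence
sam = np.array([1.0, 1.0, 0.0])        # SAM's 2D mask for the same rays
loss = mask_projection_loss(rendered, sam)
```

Under this simplified form, a region that SAM marks as foreground in only a few views accumulates regularization penalties from all the other views and ends up suppressed, which matches the view-consistency argument above.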
> W4: l. 280: "we demonstrate limitations ... " -> could not find ...
**A4:** Sorry for the misleading statement. What we want to express here is that we admit SA3D has some limitations in panoptic segmentation.
We will revise the statement in Line 280 to "SA3D has limitations in panoptic segmentation". We hope this resolves the ambiguity.
### Questions
> Q1: ... validation of ... SA3D can improve SAM ...
**A5:** To further demonstrate our statement, we provide more visualization results (Fig. 1 of the global response PDF) on the 'bonsai' scene of the Mip-NeRF 360 dataset. Here we give a detailed explanation about why “SA3D can improve SAM” for clarification.
Segmentation models often face challenges in accurately capturing object details such as small holes and gaps due to limitations in resolution. Even though SAM exhibits fine-grained segmentation capabilities, this issue still persists.
The ability to improve SAM's performance stems from the fine-grained depth estimation (or the geometry information) provided by NeRF. The utilization of such information to aid segmentation has been a long-standing problem [1-3].
Specific to SA3D, the geometry information is utilized through the incorporation of a negative refinement (regularization) term in the projection loss (Eq. 5). When SAM overlooks small holes and gaps, the mask passes through these regions and gets projected onto the background behind the object. However, when the viewpoint is switched, these inaccurately-segmented regions shift from being behind the target object to its side. In these new views, SAM's foreground prediction no longer includes these regions. Consequently, the mask confidence score for these regions is effectively suppressed by the negative refinement (regularization) term. We hope this explanation can help clear up the confusion.
Presently, we cannot find a dataset that supports evaluation for both fine-grained segmentation and NeRF reconstruction. Consequently, we are unable to provide quantitative metrics to validate the statement. We believe that cultivating such a dataset or benchmark represents a promising avenue for future research.
> Q2: Self prompting strategy: ... the necessity of this particular prompting strategy... How much better than a plain 2D-based NMS? ... yield roughly the same results - if not, how much worse ...?
**A6:** Our self-prompting strategy can be treated as a variant of the NMS algorithm. Without the 3D distance based confidence score decay, it degenerates into a simple 2D NMS, which selects a prompt point with the highest confidence score and then blocks out a surrounding region of it (as shown in Line 159-160). To answer the question, we conducted an ablation experiment on the NVOS dataset:
|Scene|w/ Confidence Decay Term||w/o Confidence Decay Term||
|:-:|:-:|:-:|:-:|:-:|
||IoU|Acc|IoU|Acc|
|fern|82.9|94.4|82.9|94.4|
|flower|94.6|98.7|94.6|98.7|
|fortress|98.3|99.7|98.4|99.7|
|horns_center|96.2|99.3|96.2|99.3|
|horns_left|90.2|99.4|88.8|99.3|
|leaves|93.2|99.6|93.2|99.6|
|orchids|85.5|97.3|85.4|97.3|
|trex|82.0|97.4|64.0|93.3|
|mean|90.3|98.2|87.9|97.7|
The above table shows that for most cases, a simple NMS self-prompting is enough. But for hard cases like 'LLFF-trex' (a T-rex skeleton, as shown in Fig. 5 of our paper), which contains a large number of depth jumps, the confidence decay contributes a lot. In such a situation, inaccurate masks bleed through gaps in the foreground onto the background. If the self-prompting mechanism generates prompts on these inaccurately-segmented regions, SAM may produce plausible segmentation results that can cheat the IoU-rejection mechanism, and the final segmentation will include unwanted background regions.
In addition, performing the decay step only involves some basic matrix calculations, which add little time. Taking the 'LLFF-fern' scene as an example, replacing the self-prompting strategy with a simple NMS only decreases the cost from 27.04 seconds to 25.73 seconds.
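As an illustration of the mechanism (not the paper's implementation; the parameterization below, including `radius_2d` and the `decay_3d / distance` form, is a hypothetical sketch), the decay-augmented self-prompting can be written as:

```python
import numpy as np

def select_prompts(conf_2d, points_3d, n_prompts=3, radius_2d=1, decay_3d=0.5):
    """Pick prompt points from a 2D confidence map with a 3D-aware decay.

    conf_2d:   (H, W) mask confidence map for the current view.
    points_3d: (H, W, 3) 3D point of each pixel (e.g., from NeRF depth).
    """
    conf = conf_2d.astype(float).copy()
    prompts = []
    for _ in range(n_prompts):
        y, x = np.unravel_index(np.argmax(conf), conf.shape)
        prompts.append((int(y), int(x)))
        # 2D NMS step: block out a square neighborhood of the selected point.
        conf[max(0, y - radius_2d):y + radius_2d + 1,
             max(0, x - radius_2d):x + radius_2d + 1] = -np.inf
        # 3D decay step: additionally suppress pixels whose 3D points lie
        # close to the selected one, e.g. background seen through gaps.
        dist = np.linalg.norm(points_3d - points_3d[y, x], axis=-1)
        conf = conf - decay_3d / (dist + 1e-6)
    return prompts
```

Setting `decay_3d = 0` recovers the plain 2D NMS baseline compared against in the ablation above, so the extra cost of the decay is indeed just the per-step distance computation.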
[1] Couprie, Camille, et al. "Indoor semantic segmentation using depth information." arXiv preprint arXiv:1301.3572 (2013).
[2] Zhang, Zhenyu, et al. "Joint task-recursive learning for semantic segmentation and depth estimation." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[3] Kerr, Justin, et al. "Lerf: Language embedded radiance fields." arXiv preprint arXiv:2303.09553 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply - I upgrade my recommendation from weak accept to accept, as I have no issues with the paper based on the rebuttal.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: We sincerely thank you for your response! Your help in reviewing our paper has been very valuable in making it better. | Rebuttal 1:
Rebuttal: ## Common Response for Time Cost Analysis
We thank all reviewers for the insightful comments.
Since several reviewers (Q1 of Reviewer b3JM, W1 & W6 of Reviewer JhYG, W2 of Reviewer Xgwo) express concerns about the time overhead of our method, we discuss this issue here.
We provide per-scene time cost for the LLFF dataset and some unbounded scenes involved in Table 2. Please kindly note that the experiments of Table 1 and Table 2 are both based on the scenes of the LLFF dataset. Consequently, we omit the time cost for Table 1. Besides, the average time cost (seconds) comparisons for each object of the Replica dataset are shown in Table 3. Table 4 supplements time cost analysis regarding the number of views to illustrate the trade-off between time cost and accuracy.
The modified tables and attached table are shown as follows.
---
**Table 1. Comparison with ISRF**
There are three methods involved in the comparison in Table 1: Graph Cut (3D), NVOS and ISRF. The first two methods neither provide code for reproduction nor clarify the time overhead in their papers. ISRF provides rough time cost estimations, so we compare roughly with ISRF for segmenting an object in a scene:
| ISRF || SA3D | |
|:-:|:-:|:-:|:-:|
| Step |Time Cost | Step |Time Cost |
|||User Intervention (One Time) |-|
|||Initial Segmentation |< 1 second |
| Training feature field | 2.5 minutes |Training Mask Grids | 10 seconds - 3 minutes|
|User Intervention (Many Times)| - || |
| K-Means Clustering | 2 seconds |||
| 3D Feature Query | 1 second |||
| Bilateral Region Growing | 0.3 seconds|||
The coarse time cost of ISRF is gathered from their paper. For segmenting an object, ISRF requires many iterations of user intervention (and the subsequent steps). Compared with ISRF, SA3D takes a similar or lower time cost and enjoys a more concise procedure. Please note that for both methods the time cost of pre-training a NeRF is omitted for a clear comparison.
---
**Table 2. Comparisons on LLFF scenes and some 360 degrees scenes**
|Scenes|Single View|||MVSeg|||SA3D (ours)|||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||IoU(%)|Acc(%)|Time Cost(s)|IoU(%)|Acc(%)|Time Cost(s)|IoU(%)|Acc(%)|Time Cost(s)|
|Orchids|79.4|96.0|-|92.7|98.8|264|83.6|96.9|33.9|
|Leaves|78.7|98.6|-|94.9|99.7|271|97.2|99.9|30.1|
|Fern|95.2|99.3|-|94.3|99.2|246|97.1|99.6|26.1|
|Room|73.4|96.5|-|95.6|99.4|284|88.2|98.3|52.7|
|Horns|85.3|97.1|-|92.8|98.7|276|94.5|99.0|75.3|
|Fortress|94.1|99.1|-|97.7|99.7|273|98.3|99.8|46.8|
|Fork|69.4|98.5|-|87.9|99.5|244|89.4|99.6|27.7|
|Pinecone|57.0|92.5|-|93.4|99.2|257|92.9|99.1|73.4|
|Truck|37.9|77.9|-|85.2|95.1|261|90.8|96.7|333.5|
|Lego|76.0|99.1|-|74.9|99.2|263|92.2|99.8|180.9|
|mean|74.6|95.5|-|90.9|98.9|264|92.4|98.9|88.0|
Note that as we introduced in Line 215-217, "Single View" denotes directly projecting the mask of the reference view onto the mask grids without any following operations, which takes almost no time cost. Thus we omit its time cost in Table 2 and Table 3.
---
**Table 3. Comparisons on Replica**
|Methods|metrics|office0|office1|office2|office3|office4|room0|room1|room2|mean|
|:-:|:-:|-|-|-|-|-|-|-|-|-|
|Single View|mIoU(%)|68.7|56.5|68.4|62.2|57.0|55.4|53.8|56.7|59.8|
||Time Cost(s/object)|-|-|-|-|-|-|-|-|-|
|MVSeg|mIoU(%)|31.4|40.4|30.4|30.5|25.4|31.1|40.7|29.2|32.4|
||Time Cost(s/object)|1567|1360|1343|1617|1301|1292|1431|1527|1430|
|SA3D(ours)|mIoU(%)|84.4|77.0|88.9|84.4|82.6|77.6|79.8|89.2|83.0|
||Time Cost(s/object)|81.2|53.9|46.4|62.1|76.4|58.1|61.5|101.2|67.6|
MVSeg does not report the time cost in their paper. We reproduce MVSeg with its official code on the Replica dataset to measure its time cost. As shown in the modified Table 3, MVSeg incurs a high time cost because it needs to train a Semantic-NeRF for each target in its official implementation.
---
**Table 4. Ablation on different numbers of views**
||10% of views|20% of views|50% of views|100% of views|
|-|-|-|-|-|
|Number of Views (Fortress, forward facing)|5|9|21|43|
|IoU on Fortress|97.8|98.3|98.3|98.3|
|Time Cost(s)|7.56|12.80|28.98|58.97|
|Number of Views (Lego, 360 degrees)|11|21|51|103|
|IoU on Lego|84.5|84.8|91.5|92.2|
|Time Cost(s)|23.49|43.54|103.83|204.93|
Please kindly note that the time cost of SA3D depends on the number of views during the mask training phase. Consequently, there exist differences in the time overhead of SA3D between different scenes. Nonetheless, on the whole, the time overhead of SA3D typically remains within 1 minute.
---
We also provide a possible accelerating solution for future updating. Except for the first reference view, the following optimization can be parallelized. In each iteration, SA3D can select multiple new views and conduct self-prompting and inverse rendering for them simultaneously. Gradients on mask grids can then be gathered for optimization.
We hope the discussion can address the concern of time consumption.
Pdf: /pdf/f023906ab6a2ed5f183d019bef1f343b29fb67b6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Gaussian Membership Inference Privacy | Accept (poster) | Summary: The proposed f-MIP method incorporates a practical membership inference attack threat model, offering easily interpretable privacy guarantees. This approach improves utility, especially when the attacker's capabilities are realistically constrained. Through a theoretical analysis of likelihood ratio-based membership inference attacks on noisy stochastic gradient descent (SGD), μ-Gaussian Membership Inference Privacy (μ-GMIP) is introduced. μ-GMIP requires less noise compared to the corresponding Gaussian differential privacy (GDP) guarantees, resulting in higher utility.
Strengths: - The concept of applying control over type I and type II errors to MIP is interesting.
- The proposed approach holds promise for enhancing privacy protection in SGD.
- The numerical study provides evidence that the proposed method outperforms noisy SGD in terms of utility, highlighting its potential for practical applications.
Weaknesses: - The paper lacks a theoretical exploration of the relationship between f-MIP and f-DP, which hinders a comprehensive understanding of the proposed method. For instance, while mu-GMIP and mu-GDP are compared at the same mu in Figure 1, it remains unclear whether they actually share the same mu value.
- The absence of a discussion on post-processing or the composition rule (in terms of mu) limits the practical applicability and usefulness of the proposed approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As mentioned in the paper, it is necessary to focus on cases where the membership is accurately determined among those classified as members, which corresponds to achieving a high true positive rate (TPR) at low false positive rate (FPR) in threshold-type tests. Therefore, it would be logical to concentrate on scenarios with low FPR. However, the trade-off function is compared across the entire range [0, 1] rather than solely focusing on these low FPR cases.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The iid assumption is not realistic in practical scenarios, which suggests that the actual privacy guarantees would be lower than those stated in the paper.
- Since the f-MIP method is specifically designed for SGD, it raises questions about its generalizability to other problem domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and for referring to our approach as interesting and promising. We will answer the specific questions below.
> The paper lacks a theoretical exploration of the relationship between f-MIP and f-DP [...] in Figure 1, it remains unclear whether they actually share the same mu value.
The value of $\mu$ defines the possible trade-off curve attainable by an attacker. This can be seen as a proxy for the susceptibility of a model to specific types of attack. Regarding Figure 1, we compute noise levels $\tau$ such that the privacy notions share the same nominal value of $\mu$. The required noise levels for f-DP and f-MIP are different and shown in Table 4 (App.), where we show that more noise is required for $\mu$-GDP than for $\mu$-GMIP. The premises underlying MIP and DP are different, however. We refer to the general response, where we compare the threat models more explicitly.
The relationship between the $\mu$ values from G-MIP and G-DP can also be made explicit through the following Corollary:
**Corollary: Converting Trade-Offs between Gaussian-DP and Gaussian-MIP**
Under the usual setting of $K=d$ and the conditions stated in Corollary 4.1., the following conversions between the privacy parameter $\mu$ of a single step of noisy SGD hold:
1. Converting $\mu_\text{DP}$ into $\mu_\text{MIP}$
$\mu_{\text{MIP}} = \sqrt{\frac{d}{n +\frac{4C^2}{\mu_{\text{DP}}^2}+\frac{1}{2}}}$
2. Converting $\mu_\text{MIP}$ into $\mu_\text{DP}$
$\mu_{\text{DP}} = \begin{cases} \frac{2}{\sqrt{\frac{d}{\mu_{\text{MIP}}^2}-n -\frac{1}{2}}}, & \text{if } \mu_{\text{MIP}} < \sqrt{\frac{2d}{2n+1}} \\\\ \infty, & \text{else} \end{cases}$
**Proof Sketch.** This result follows from solving Corollary 4.2 and the result from Dong et al. [1, last line on page 30] for differential privacy of a single SGD step, $\mu_{\text{DP}} = \frac{1}{\sigma} = \frac{2C}{\tau n}$ (where Dong et al.'s $\sigma= \frac{\tau n}{2C}$ in our notation), for $\tau$, setting the two terms equal, and solving for $\mu_{\text{MIP}}$ or $\mu_{\text{DP}}$, respectively. The same strategy can be applied to composed results.
We observe that, as expected, $\mu_{\text{DP}}$ and $\mu_{\text{MIP}}$ increase and decrease mutually. For high values of $\mu_{\text{MIP}}$ (weak privacy without any additional noise added), no DP-guarantee can be found, while some level of GMIP privacy can be guaranteed.
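The two conversion formulas above are easy to evaluate directly. Below is a minimal sketch of the stated conversions (function names are ours; note that, as printed, the inverse formula omits $C$, so the round-trip example assumes $C = 1$):

```python
import math

def mip_from_dp(mu_dp, d, n, C=1.0):
    """mu_MIP of one noisy SGD step given mu_DP, per the stated formula
    (d: gradient dimension, n: batch size, C: clipping norm)."""
    return math.sqrt(d / (n + 4 * C**2 / mu_dp**2 + 0.5))

def dp_from_mip(mu_mip, d, n):
    """mu_DP given mu_MIP; no finite DP guarantee above the threshold."""
    if mu_mip >= math.sqrt(2 * d / (2 * n + 1)):
        return math.inf
    return 2 / math.sqrt(d / mu_mip**2 - n - 0.5)
```

For example, with `d = 1000`, `n = 100` and `C = 1`, a step that is 1.0-GDP is roughly 3.09-GMIP, and converting back recovers `mu_dp = 1.0`; pushing `mu_mip` past the threshold $\sqrt{2d/(2n+1)}$ yields an infinite (vacuous) DP guarantee, mirroring the case distinction above.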
> The absence of a discussion on post-processing or the composition rule (in terms of mu) limits the practical applicability and usefulness of the proposed approach.
We would like to highlight that our notion of privacy enjoys powerful post-processing and composition results as discussed in Appendix C.3 of our paper. Due to the hypothesis formulation, these follow from Theorem 4 and Theorem 11 of Dong et al. [1] and are used in Lemma 5.1 of this work. For example, Dong et al. [1, Corollary 2] provide a result which explicitly states that the n-fold composition of $i=1,\ldots, n$ steps which are $\mu_i$-GDP each is $\sqrt{\mu_1^2+ \ldots + \mu_n^2}$-GDP. This result also holds for $\mu$-GMIP.
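The composition rule quoted above is a one-liner in practice (a trivial helper whose name is ours):

```python
import math

def compose_gmip(mus):
    """Compose per-step guarantees: steps that are mu_1-GMIP, ..., mu_n-GMIP
    are jointly sqrt(mu_1^2 + ... + mu_n^2)-GMIP (cf. Dong et al., Cor. 2)."""
    return math.sqrt(sum(mu * mu for mu in mus))
```

For instance, 100 steps that are each 0.1-GMIP compose to 1.0-GMIP, and two steps with $\mu = 3$ and $\mu = 4$ compose to $\mu = 5$.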
> it would be logical to concentrate on scenarios with low FPR. However, the trade-off function is compared across the entire range [0, 1] rather than solely focusing on these low FPR cases
Our theory covers the entire trade-off curve, including arbitrary low FPRs. Following the suggestion of the reviewer, we investigate this regime further and provide the observed trade-off curves in Figures 1 and 2 of the uploaded rebuttal PDF. We see that our bounds hold very well in the low FPR regime. At very low FPR values of $10^{-4}$ the standard errors (across multiple runs) become very large due to the small number of samples, but the empirical estimates always lie well within one standard error.
> The iid assumption is not realistic in practical scenarios, which suggests that the actual privacy guarantees would be lower than those stated in the paper.
Note that the assumption that the gradient distributions in subsequent gradient descent steps are independent makes our bounds more conservative (we are thus more cautious, as desired from a privacy perspective: we apply more noise than required and obtain more privacy). This is because the attacker receives some redundant information about the samples in subsequent steps, since the sample gradients are correlated as the model and the sample gradients stay very similar across small model updates. This is confirmed in Figures 3b and 3c, where the lower curves indicate that the attack is harder to execute for the attacker than predicted by our theory.
> Since the f-MIP method is specifically designed for SGD, it raises questions about its generalizability to other problem domains.
DP SGD is the workhorse to implement Differential privacy in machine learning [2] and the standard object of study in recent works in privacy-preserving ML literature (e.g., [1,3]). The derivation of potential other mechanisms to implement GMIP is a fruitful avenue for future work.
------
Thank you for your thoughtful comments and constructive feedback, which we will gladly incorporate into our manuscript. We have addressed the key concerns including the relation between f-DP and f-MIP, composition theorems, and the low-FPR regimes. In light of our response, we kindly ask the reviewer to reconsider the overall assessment of this work.
------
[1] Dong, J., Roth, A., and Su, W. J. Gaussian differential privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):3–37, 2022.
[2] Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016.
[3] Nasr, Milad, et al. "Tight Auditing of Differentially Private Machine Learning." arXiv preprint arXiv:2302.07956 (2023).
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses provided by the authors; these have certainly improved my understanding of the paper. Nevertheless, concerning the composition rule, I am still uncertain about how to obtain the same result as proven in GDP. Could you kindly offer a step-by-step heuristic illustration?
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer vdUQ
Comment: Thank you for your kind reply.
Our general composition result (Lemma 4.1) follows as an application of Corollary 4 from Dong et al., which directly follows from Theorem 11 from Dong et al. Their Theorem does not only hold for f-DP in particular, but more generally as it only relies on the premise that *‘[...] $f$ is a symmetric trade-off function such that $f(0) = 1$ [...]’*. Note that our Gaussian trade-off function from Corollary 5.1 fulfills this condition as it fulfills all criteria of a trade-off function and is symmetric. Hence, we can directly apply Theorem 11 from Dong et al. to our setting.
We are also happy to provide the outline of the proof of the specific composition result for Gaussian trade-offs which also highlights why their theorems are generally applicable to our trade-off functions. To this end, we consider Theorem 4 by Dong et al., which covers finite compositions (whereas Theorem 11 covers an infinite number of compositions). While this Theorem is stated in terms of $f_i$-DP, note that the $f_i$s are general (not DP specific) trade-off functions that originate from comparing two distributions against one another using hypothesis testing.
The proof then proceeds in the following steps:
Consider 2 sequential tests with trade-off functions $f$ and $g$. We want to compose the results from these two tests in the best possible way (i.e., what is the best trade-off the attacker can obtain that features the results from both tests $f$ and $g$). Trade-off functions can equivalently be represented by pairs of distributions that are being tested against each other, e.g., $f = \text{Test}(P, Q)$, where $P$ and $Q$ are distributions.
Theorem 4 of Dong et al. states that composing tests $f=Test(P,Q)$ and $g=Test(P’,Q’)$ results in the combined trade-off $f \otimes g = \text{Test}(P \times P’, Q \times Q’)$, where $\times$ denotes the joint distribution and $\otimes$ the composed trade-off. As shown in their Lemma C.2, the choice of representation (in terms of $P, Q$) does not matter and has no effect on the result. This part of their proof implies that we can apply their result to testing distributions stemming from both f-DP and f-MIP, as long as we can characterize the tests’ trade-off curves.
As an example, suppose that we are specifically composing the Gaussian trade-offs $g_{\mu_1}$ and $g_{\mu_2}$ with parameters $\mu_1$ and $\mu_2$. Recall that $g_{\mu} = \text{Test}(N (0, 1), N (\mu, 1))$ is defined via testing two Gaussians of variance one at distance $\mu$ from each other. We can then do the following calculation:
$g_{\mu_1} \otimes g_{\mu_2} = \text{Test}(N (0, 1), N (\mu_1, 1)) \otimes \text{Test}(N (0, 1), N (\mu_2, 1))$
$= \text{Test}(N(0, 1)\times N (0, 1), N (\mu_1, 1) \times N (\mu_2, 1))$
$= \text{Test}(N ((0,0)^\top, \mathbb{I}), N ((\mu_1,\mu_2)^\top, \mathbb{I})$
$= \text{Test}(N ((0,0)^\top, \mathbb{I}), N ((\sqrt{\mu_1^2+\mu_2^2}, 0)^\top, \mathbb{I}) = \text{Test}(N (0, 1) \times N (0, 1), N (\sqrt{\mu_1^2+\mu_2^2}, 1)\times N (0, 1))$
$= \text{Test}(N (0, 1), N (\sqrt{\mu_1^2+\mu_2^2}, 1)) \otimes \text{Test}(N (0, 1), N (0, 1)) = g_{\sqrt{\mu_1^2+\mu_2^2}} \otimes \text{Id} =g_{\sqrt{\mu_1^2+\mu_2^2}}$
We use the fact that a rotation does not change the hardness of the test. The last line follows from the fact that composition with the $\text{Id}$ test (testing two identical distributions) contains no information (formally proven in Dong et al.) and $\mathbb{I}$ denotes a 2x2 unit matrix.
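The final identity can also be checked numerically. Below is a small Monte Carlo sketch (sample size, seed, and tolerance are illustrative choices of ours) that compares the empirical trade-off of the optimal test for the composed 2D problem against the closed-form Gaussian trade-off $g_\mu(\alpha)=\Phi(\Phi^{-1}(1-\alpha)-\mu)$ with $\mu=\sqrt{\mu_1^2+\mu_2^2}$:

```python
import numpy as np
from statistics import NormalDist

def g(mu, alpha):
    """Gaussian trade-off function: type II error at type I error alpha."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - alpha) - mu)

# N(0, I) vs N((mu1, mu2), I): the Neyman-Pearson statistic is the
# projection of the observation onto the mean direction.
mu1, mu2, alpha = 1.0, 2.0, 0.05
mean = np.array([mu1, mu2])
rng = np.random.default_rng(0)
n = 200_000
null_stat = rng.standard_normal((n, 2)) @ mean
alt_stat = (rng.standard_normal((n, 2)) + mean) @ mean
threshold = np.quantile(null_stat, 1.0 - alpha)         # empirical type I cutoff
beta_empirical = float(np.mean(alt_stat <= threshold))  # empirical type II error
# The composed test is exactly as hard as a single Gaussian test with
# mu = sqrt(mu1^2 + mu2^2):
assert abs(beta_empirical - g(np.hypot(mu1, mu2), alpha)) < 0.01
```

The check uses only the rotational invariance argued above: the 2D likelihood-ratio test reduces to a 1D Gaussian test at distance $\sqrt{\mu_1^2+\mu_2^2}$.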
We hope that our clarifications regarding the applicability of Theorem 11/Corollary 4 from Dong et al. and the simple outline of the underlying proof for Gaussian compositions clear the reviewer’s uncertainty. Please let us know if you have further questions. | Summary: The authors introduce a new privacy notion called f-membership inference privacy (f-MIP), which relaxes strict Differential Privacy (DP) assumptions thereby promising better model utility. The paper proposes a theoretical analysis of membership inference attacks on DP-SGD based on trade-off curves (similar to f-DP) and introduces a family of f-MIP guarantees called µ-Gaussian Membership Inference Privacy (µ-GMIP)(similar to GDP). The analysis follows a similar approach to the original DP-SGD analysis: first the privacy budget is derived for single step with subsampling which is then composed over the training run.
The paper then verifies the theoretical analysis by introducing gradient attacks based on likelihood ratio tests.
The attack requires knowing the underlying gradient distribution parameters. They present results for a single DP-SGD step with known parameters, and the privacy guarantees seem tight. They also present results for estimated distribution parameters, where the guarantees look looser.
Strengths: - The derivations seem sound and follow the well regarded hypothesis testing interpretation in DP
- Investigating the tightness and potential relaxation of DP-SGD is an important problems and the authors make a solid contribution.
- The paper is clearly structured and easy to follow.
Weaknesses: - It would be nice to see an investigation on the validity of the initial assumptions. The paper was motivated that DP is overly conservative since it also holds for pathological datasets such as empty datasets and singletons with an adversarial sample. While this is true, it has been recently shown that a simple canary insertion in an otherwise natural dataset may be sufficient to produce tight lower bounds for DP accountants [Nasr et al 2023].
- It would be great to extend the FPR and TPR ranges in the plots in figure 4 to smaller values. Ideally all the way to the first data point. These ranges capture very relevant adversary objectives where an adversary only cares about identifying one sample but that with high confidence.
- Minor:
- Only asymptotic guarantees for composition. Recently, there has been significant progress in numerical composition of DP guarantees.
**References**
Nasr, Milad, et al. "Tight Auditing of Differentially Private Machine Learning." arXiv preprint arXiv:2302.07956 (2023).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Is it assumed that the noise parameter $\tau$ includes the batch size? Algorithm 1 does not scale the noise by the batch size, which differs from typical DP-SGD [Abadi et al 16].
- Figure 4a. Purchase and CIFAR10 seem to exceed the analytical bound for low FPRs. Is this because of the earlier mentioned assumption that the challenge points are sampled from the distribution which for low FPRs may be already distributional outliers?
- Are the techniques for composing DP guarantees numerically also applicable in this setting?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: As discussed in weaknesses. I believe the discussion about the validity of the initial assumptions is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful review and were pleased to hear that the Reviewer found our paper to be *clearly structured* and to be a *solid contribution*. We will answer the individual points raised below.
> It would be nice to see an investigation on the validity of the initial assumptions. [...] it has been recently shown that a simple canary insertion in an otherwise natural dataset may be sufficient to produce tight lower bounds for DP accountants [Nasr et al 2023].
The reviewer’s observation is accurate. The problem of finding a tight lower bound for DP is considered solved, as canary insertions (i.e., outlier sample insertions into datasets) make the DP bound tight (Nasr et al 2023). Therefore, to improve the utility over DP-trained models, new takes on privacy are in demand where one maps the exact threat model to a suitable privacy notion, which is precisely what our work accomplishes. If we can rule out pathological canary insertions (e.g., because the adversary, by construction, has no dataset access), other notions than standard differential privacy can be considered. This is what has recently been identified as a fundamental open problem by a large group of leading researchers in the field of privacy-preserving ML [1, Sections 2.1, 4.1, 4.3].
We explicitly compare the assumptions underlying DP and MIP in a new table in the general comment. Regarding the validity of our theoretical assumptions, we conduct 4 ablation studies in the rebuttal PDF file (Figure 3). In all these cases, we find that our theoretical results accurately predict attack success.
> It would be great to extend the FPR and TPR ranges in the plots in figure 4 to smaller values
We thank the reviewer for this constructive suggestion. We provide the curves in Figures 1 and 2 in the uploaded rebuttal PDF (Figure 1 shows individual runs up to the first data point). Our bounds hold very well in the low FPR regime. At very low values for the FPR, the standard errors (across multiple runs) become very large due to the small number of samples, but the empirical estimates always lie well within one standard error.
> Is it assumed that $\tau$ in the noise parameter includes the batch size? Algorithm 1 does not scale the noise by the batch size which is different to typical DP-SGD [Abadi et al 16].
This is correct. Different works use different parameterization of the noise (e.g., Abadi et al. and Dong et al. [3] also use different parameterizations). As in our paper, we denote the dimension-wise variance of the total noise that is added to the average gradient by $\tau^2$. Thus, our parameter $\tau$ can be converted to the $\sigma$ used by Abadi et al. as follows: $\tau^2 = \frac{\sigma^2_{\text{Abadi}}C^2}{n}$. If it would make the manuscript more accessible to the reviewer, we will be happy to express the added noise through the parameterization used by Abadi et al. in our manuscript.
> Figure 4a. Purchase and CIFAR10 seem to exceed the analytical bound for low FPRs. Is this because of the earlier mentioned assumption that the challenge points are sampled from the distribution which for low FPRs may be already distributional outliers?
Our results cover the entire trade-off curve including the low-FPR regimes. Our theoretical predictions are probabilistic and our empirical results are always within the statistical standard error. To make this more clear, we provide plots using more samples in the rebuttal PDF file. On the setup corresponding to Figure 4a in the main paper (Figure 2a in the rebuttal PDF) our results always reside within the standard error and match the theoretical prediction almost exactly at the earlier limit of $10^{-2}$. At very low values of FPR of $10^{-4}$ the standard errors (across multiple runs) become very large due to the small number of available samples to estimate the FPRs and TPRs.
> Are the techniques for composing DP guarantees numerically also applicable in this setting?
We would like to highlight that our privacy notion already comes with powerful composition and post-processing results that were transferred from the work of Dong et al. [3]. We agree that the asymptotic result under subsampling and composition presented in Lemma 4.1 may potentially be improved using numerical estimation techniques similar to those proposed by Gopi et al. [2], which is a great suggestion for follow-up work.
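For illustration, the composition rule inherited from Gaussian DP (Dong et al. [3]) has a simple closed form; a small sketch follows. The function name is our own, and we assume here only the standard result that $\mu_i$-Gaussian mechanisms compose to a $\sqrt{\sum_i \mu_i^2}$-Gaussian guarantee.

```python
import math

def compose_gaussian_mu(mus):
    """Composition rule from Gaussian DP (Dong et al., 2022): mechanisms
    with parameters mu_1, ..., mu_k compose to a single mechanism with
    parameter sqrt(mu_1^2 + ... + mu_k^2)."""
    return math.sqrt(sum(mu ** 2 for mu in mus))

# T identical steps with parameter mu thus compose to sqrt(T) * mu.
```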
> I believe the discussion about the validity of the initial assumptions is limited.
To further address this point, we provide a table detailing the different assumptions underlying various attacks and privacy notions in the general comment, which we will gladly include in our final manuscript.
-------------------
We were happy to hear that the Reviewer found our work to be solid and sound overall. We sincerely hope that we have addressed your remaining concerns regarding the low-FPR regime and the discussion of the initial threat models. In light of this rebuttal and the changes listed in the general response, we would kindly ask you to consider updating your review score.
----------------
**References**
[1] Challenges towards the Next Frontier in Privacy, Cummings et al. (2023), arXiv:2304.06929
[2] Sivakanth Gopi, Yin Tat Lee, and Lukas Wutschitz. Numerical Composition of Differential Privacy. arXiv:2106.02848
[3] Dong, J., Roth, A., and Su, W. J. Gaussian differential privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):3–37, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed clarification and including the low FPR plots.
I think the table is a great addition, however, I think that some statements about the usefulness and the relevance of the threat model are overstated.
> The sample $x'$ for which membership is to be inferred is drawn from the data distribution $D$. Therefore, MI is concerned with typical samples that can occur in practice
I would argue that inserting a worst case sample is an easy task for an adversary in practice e.g. Census data, recommendation datasets, language modelling datasets. However, I admit that there are applications where the threat model in this paper holds and worst case samples are hard to insert e.g. medical datasets, etc.
I think a clear discussion of when the proposed threat model is applicable and **also when not** would strengthen the paper. The table is a great first step.
---
Reply to Comment 1.1.1:
Title: Reply to Official Comment by Reviewer Uu8E
Comment:
Thank you for your positive reply. We agree with the reviewer’s observations that the threat model is useful for some applications, but may not always be applicable as it requires an adversary with restricted dataset access:
- In financial and healthcare applications, the data is often collected from actual events (e.g., past trades) or only a handful of people (i.e., trusted hospital staff) have access to the records. In such scenarios, it might be overly restrictive to protect against worst-case canary attacks as attackers cannot freely inject arbitrary records into the training datasets.
- In other cases such as online surveys and census data, as mentioned by the Reviewer, an attacker may indeed be able to do sample injection. These attacks are not covered by $f$-Membership Inference Privacy and a more general notion such as Differential Privacy should be preferred.
We will happily add these clarifications to our paper. We will also extend the table with another row giving recommended example usages:
| | $f$-Differential Privacy | $f$-Membership Inference Privacy |
|---------------------------|---------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Best used in applications | where the specific attack model is unknown. Offers a form of general protection. | where dataset access (e.g., canary injection) by an attacker can be ruled out and the main attack goal lies in revealing private training data (e.g., membership inference, data reconstruction). |
Let us know what you think. We would be happy to incorporate further suggestions! | Summary: the submission extends the work on privacy guarantees specifically for protecting data membership inference attacks, and the extension is on using the Gaussian distribution to characterize the trade-off function. The proposed Gaussian Membership Inference Privacy has the same parametrization as the Gaussian Differential Privacy, but provides weaker privacy guarantees in the sense that an attacker only has access to the learned machine learning model and true data distribution or the public data.
The submission further shows that a single step of noisy SGD is approximately Gaussian membership Inference privacy with a privacy parameter $\mu$, and empirically shows that the theoretical trade-off curve is the upper bound of the ROC curves of practical attacks with the given public knowledge.
Strengths: 1. the membership inference attack captures a class of realistic attacks against machine learning models, where the attacker can't manipulate the private training data but has access to the data distribution and the learned model through an API. It indeed provides weaker privacy guarantees than Differential Privacy does, but studying it might lead us to a more practical privacy definition for protecting individuals in machine learning systems.
2. one-step noisy SGD can be captured by f-membership inference privacy, and can be captured approximately by Gaussian Membership Inference privacy.
3. a very important observation is that averaging over norm-clipped gradients satisfies GMIP with a bounded privacy parameter, which means that it already protects, to some extent, against membership inference attacks on gradients.
Weaknesses: 1. Gaussian Differential Privacy exactly captures the trade-off function of the Gaussian mechanism, which is used to release noisy gradients. However, Gaussian Membership Inference Privacy only approximately captures it, and I am wondering what 'approximately' entails, and whether it is actually useful compared to f-membership inference privacy.
2. the assumption for the algorithm that an attacker would use is that the algorithm has to return binary answers, but we've seen successful attacks using ranking, for example, the watchdog experiments in [1]. Thus, I am questioning this assumption on top of an attacker's algorithm.
3. I was hoping to see experiments with real machine learning models with only API access to them, e.g. an attacker can only have access to the trained classification neural network through the final probabilistic predictions over classes. The distributional assumption over the susceptivity does not hold anymore, and it is interesting to see how the proposed privacy definition captures the success of membership inference attacks.
[1] https://www.pnas.org/doi/epdf/10.1073/pnas.2218605120
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: please see the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for the positive comments, which highlight the relevance of the studied membership inference threat model, our ability to analytically capture SGD steps with f-MIP, and the finding that we can obtain MIP through averaging of gradients alone. We address the remaining points below.
> Gaussian Membership Inference Privacy only approximately captures so, and I am wondering what 'approximately' entails, and whether it is actually useful compared to f-membership inference privacy.
First, note that our exact formulation from Theorem 5.1 contains the CDF of a $\chi^2$ distribution, while our approximation from Corollary 5.1 uses the Gaussian CDF. In general, it is well known that for large $d$, the $\chi^2$ distribution converges to a Gaussian distribution at a convergence rate of $\mathcal{O}(1/\sqrt{d})$ (see Theorem 1 in [7]; see Figure 2a of the paper for an illustration). Regarding $n$, we can apply the Berry-Esseen theorem to get a finite-sample bound for the error between the true averaged gradient distribution and the Gaussian distribution. The finite-sample error depends on a universal constant (some number depending on third moments) and scales with $\mathcal{O}(1/\sqrt{n})$. Hence, as we increase the number of gradient vectors in a batch, we expect the error to decrease. To demonstrate that this is in fact the case, we have added ablation results in Figure 3 of the rebuttal PDF file for small dimensions ($d$) and for small batch sizes ($n$). In these experiments, we start with small values of $d$ and $n$ and increase them, which empirically demonstrates that our approximations are accurate even for small $d$ and $n$, despite our theory being carried out for larger sample sizes.
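The $\chi^2 \to$ Gaussian convergence invoked above is easy to check numerically. The following toy simulation is our own illustration (not the paper's experiment): it draws $\chi^2_d$ samples as sums of squared standard normals and standardizes them, so that for large $d$ the result is approximately $N(0,1)$.

```python
import random

def chi2_samples(d, n_samples, seed=0):
    """Sample a chi-square variable with d degrees of freedom as a sum
    of d squared standard normals."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(d))
            for _ in range(n_samples)]

def standardize(xs, d):
    """Standardize chi-square draws: (X - d) / sqrt(2d) approaches
    N(0, 1) for large d, at rate O(1/sqrt(d)) (Horgan & Murphy [7])."""
    return [(x - d) / (2 * d) ** 0.5 for x in xs]
```

For, say, $d = 50$ the standardized draws already have mean close to 0 and variance close to 1, matching the Gaussian limit.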
> the assumption for the algorithm that an attacker would use is that the algorithm has to return binary answers, but we've seen successful attacks using ranking, for example, the watchdog experiments in [1]. Thus, I am questioning this assumption on top of an attacker's algorithm.
Membership inference attacks are a common standard in the literature (e.g., [1-6]) and are one of the simplest attacks that can be carried out by an adversary. The assumption about the output of the membership inference attack follows the standard definition originally introduced by [4], which has since been adopted in many popular prior works (e.g., [1,5]). The reference provided by the reviewer considers reconstruction attacks, which are different in the sense that full samples are reconstructed. As opposed to reconstruction attacks, membership inference attacks attempt only to verify whether a given sample was part of the training dataset. Due to the simplicity of the MI attack, protecting against MI should also offer protection against other, more complex attacks. For example, one could cast the data reconstruction attack as a series of sequentially applied membership inference attacks where the task consists of verifying whether a given token was part of the training data set. We will gladly include a discussion of reconstruction attacks, such as the one mentioned by the reviewer, in our manuscript.
> I was hoping to see experiments with real machine learning models with only API access [...] it is interesting to see how the proposed privacy definition captures the success of membership inference attacks.
The membership inference threat model in our work uses access to gradients, which is a common scenario in federated learning setups. Due to the use of the gradients, our attack is stronger than attacks that rely solely on API access. To showcase the superiority of our attack, we ran the state-of-the-art API-only attack suggested by Carlini et al. [1] on real machine learning models and obtained the results shown in Figure 5 of the Appendix. These results show that our attack (Figure 4) is 2-10 times stronger than the state-of-the-art attack based on API access only. Thus, adding the noise required to protect against our optimal attack under gradient access also protects against attacks that rely on API access only.
-------
We appreciate the positive comments by the reviewer and are glad to hear that the Reviewer appreciated the “Soundness”, “Presentation”, and “Contribution” of our work by awarding a rating of "Good". We hope that we have addressed the remaining concerns with the new ablation studies and the results on the state-of-the-art API-only attack by Carlini et al. [1]. Given the positive evaluations and our clarifications, we would kindly ask the Reviewer to reconsider their overall score.
-----
**References**
[1] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramèr. Membership inference attacks from first principles. IEEE Symposium on Security and Privacy (SP), 2022.
[2] Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, and Reza Shokri. Enhanced membership inference attacks against machine learning models. ACM CCS, 2022.
[3] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), IEEE, 2017.
[4] Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 31st IEEE Computer Security Foundations Symposium, 2018.
[5] Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel. On the Privacy Risks of Algorithmic Recourse. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
[6] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[7] Donagh Horgan and Colin C. Murphy. On the Convergence of the Chi Square and Noncentral Chi Square Distributions to the Normal Distribution. IEEE Communications Letters, Vol. 17, No. 12, 2013 | Summary: Authors came up with a privacy definition that is more relaxed than DP. It is called Gaussian Membership Inference Privacy (GMIP) which consists of a hypothesis testing which is supposed to decide whether a single instance is present in the training data. DP implies GMIP. The proposed privacy framework is applied with SGD and in experiments found to be much better than DP in terms of utility.
Strengths: * There is a need to come up with some privacy definition that is more practical than DP.
* Research direction is promising.
Weaknesses: * I believe that standard notions from statistical hypothesis testing are reinvented, and the results seem not so surprising taking into account results already published in testing. I would suggest reusing those results (if possible). Nevertheless, please address my questions!
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Definition 4.1 is basically the most powerful test. $\mathcal{E}$ is the significance and $\beta$ is the power function, or more concretely 1 - power function?
* Regarding Theorem 4.1: according to the Neyman-Pearson fundamental lemma, the risk function is always convex when the null and alternative consist of a single distribution. And the testing problem presented in (3) is like that. See Lehmann-Romano: Testing Statistical Hypotheses, Sec 3.2.
* Tests are applied in many rounds, and I guess only a union bound is used over the individual SGD steps. Might some sequential hypothesis testing be worth considering?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I believe that this research direction is interesting and promising. However, convexity of the risk function seems not a novel observation, and it is the key to having the uniformly most powerful test, which is what the paper relies on. I do not understand why to stick to the Gaussian distribution, since any distribution from the exponential family can be used in a very similar way. Furthermore, FNRs are controlled in each SGD step independently from each other; however, sequential testing approaches might be applied here, which would make this paper much more interesting. So the technical contribution is somewhat limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful review and the questions raised. We will clarify the individual points below.
> I believe that standard notions from statistical hypothesis testing are reinvented, and the results seems not so surprising taken into account results already published in testing. [...]
We would like to highlight that we only *leverage* standard notions from statistical hypothesis testing. However, we frame the problem of membership inference privacy as a hypothesis testing problem to be able to use testing tools to develop a constructive theory. This theory helps us set up a privacy notion geared towards the realistic and empirically well studied threat model of membership inference attacks and a constructive algorithm which effectively protects against such attacks. Such results are highly sought after in the privacy literature, where the need to reconsider threat models has recently been proclaimed by leading researchers in the field [1, sections 2.1, 4.1, 4.3].
> Definition 4.1 is basically the most powerful test. $\mathcal{E}$ is the significance and $\beta$ is the power function, or more concretely 1 - power function?
The reviewer is right in that Definition 4.1 is reminiscent of the most powerful test. However, there are several differences to the standard definition of the most powerful test that are important in our work and that motivate the need for Definition 4.1. Most prominently, we adjust the definition of the most powerful test (i.e., hypothesis tests) to be applicable to the membership inference problem (this is what Definition 4.1 accomplishes). Please note that a straightforward construction of the most powerful test does not work in this setup. This is the case because the adversary does not only run one hypothesis test to figure out whether one sample belongs to the training data set or not; instead, the adversary draws samples $x$ and runs individual, sample-dependent and different hypothesis tests for each drawn sample. This is necessary due to the formulation of the distribution $A_1(x)$ under the alternative hypothesis in the formulation of the test (Eqn. 3), which depends on the sample $x$. The value of $x$ is known to the adversary. We therefore require a tool to compose the results from the different hypothesis tests, which we carefully craft in Definition 4.1. We can then compute the expected trade-off curve for an adversary that runs tests with different powers according to the observed samples $x$.
> Regarding Theorem 4.1: according to the Neyman-Pearson fundamental lemma the risk function is always convex when the null and alternative consist of a single distribution. And the testing problem presented in (3) is like that. [...]
In Definition 4.1, the distributions are not simple, but instead depend on a stochastic sample $x$. In particular, the distribution under the alternative hypothesis, $A_1(x)$ in Eqn. (3), depends on the individual sample $x$, which is known to the attacker and can be used to run sample-specific tests. Therefore, to compute the expected trade-off curve that can be reached by an attacker who samples $x$, we use the trade-off defined in Definition 4.1. While its properties shown in Theorem 4.1 may seem intuitive, the proof is not trivial, which is why we decided to include it for completeness. Note that we consider our main contributions to be Theorem 5.1 and Corollary 5.1, which are the first of their kind and precisely quantify the factors that lead to successful membership inference.
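As a concrete reference point for the trade-off curves discussed above, the Gaussian trade-off function of Dong et al. [2], $f_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, can be evaluated directly. The sketch below is our own helper (not code from the paper):

```python
from statistics import NormalDist

_nd = NormalDist()

def gaussian_tradeoff(alpha, mu):
    """Gaussian trade-off function of Dong et al. [2]:
    f_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), i.e. the smallest
    type-II error (false-negative rate) an attacker can achieve at
    false-positive rate alpha under a mu-Gaussian guarantee."""
    return _nd.cdf(_nd.inv_cdf(1.0 - alpha) - mu)

# mu = 0 recovers f(alpha) = 1 - alpha: the attacker can do no better
# than random guessing; larger mu lowers the curve (stronger attacks).
```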
> Tests are applied in many rounds, and I guess only a union bound is used over the individual SGD steps. Might some sequential hypothesis testing be worth considering?
While we do not use the union bound for the composition as suggested by the reviewer, our result composing multiple steps of SGD, given in Lemma 5.1, follows from Theorem 11 and Theorem 4 in Dong et al. [2]. These results already provide tight composition bounds for hypothesis tests that satisfy all our needs. We are, however, confident that it would be possible to derive similar results using union bounds as well.
> I do not understand why to stick to the Gaussian distribution, since any distribution from the exponential family can be used in a very similar way
Please note that we don’t *assume* the Gaussian distribution. Instead, we consider averages over parameter gradients commonly used in minibatch stochastic gradient descent, which then follow a Gaussian distribution by the central limit theorem (CLT). We put no distributional assumptions on the gradients at all.
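To illustrate this CLT argument, here is a toy one-dimensional simulation of our own (not the paper's experiment): even when the per-sample values are clipped and strongly non-Gaussian, their batch means concentrate around the true mean and become approximately Gaussian as the batch grows.

```python
import random

def clip(v, C):
    """One-dimensional analogue of per-sample gradient clipping to C."""
    return max(-C, min(C, v))

def batch_means(n_batches, batch_size, C, seed=0):
    """Means of clipped draws from a skewed, non-Gaussian distribution
    (exponential shifted to mean zero). By the CLT these batch means are
    approximately Gaussian even though single samples are not."""
    rng = random.Random(seed)
    return [
        sum(clip(rng.expovariate(1.0) - 1.0, C) for _ in range(batch_size))
        / batch_size
        for _ in range(n_batches)
    ]
```

With a batch size of a few hundred, the batch means cluster tightly around zero even though each per-sample draw is skewed and bounded below.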
> Furthermore, FNRs are controlled in each SGD step independently from each other; however, sequential testing approaches might be applied here, which would make this paper much more interesting. So the technical contribution is somewhat limited.
Note that we use tight results from the testing literature, which consider the full trade-off curve of a sequence of hypothesis tests (see, e.g., Theorem 4 of Dong et al. [2]). These bounds are optimal for any overall FNR and also cover cases where, e.g., different FNRs are targeted for the individual tests or the statistics of different tests are combined in non-trivial ways.
Overall, we would like to stress that we *use* tools from the hypotheses testing literature, but this is not our key contribution. The key contribution of our work lies in defining MI privacy through the composed trade-off curve and bounding the attack risk of DP-SGD with respect to membership inference attacks. We will adjust our manuscript to better differentiate our contributions from the existing tools that we use. We hope that our replies have clarified the matter and are happy to answer any follow-up questions.
**References**
[1] Challenges towards the Next Frontier in Privacy, Cummings et al. (2023), arXiv:2304.06929
[2] Dong, J., Roth, A., and Su, W. J. Gaussian differential privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):3–37, 2022.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Authors addressed my concerns properly. I liked the topic of the paper because to better understand inference attacks is a very timely question. Therefore I recommend the paper be accepted. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback, which allows for direct improvements of our work. Inspired by the reviewer comments, we plan to make the following changes and intend to use the additional page provided in the camera-ready version for the following additions:
* We add a table comparing the assumptions and premises of f-DP and our f-MIP privacy notion (shown below). We will additionally compare our suggested class of new membership inference attacks with other attacks from the literature.
* We empirically investigate the low-FPR regime using additional plots down to an FPR of $10^{-4}$ and find our empirical results to match our theory well.
* We ran 4 new empirical studies on the effect of the size of the parameter vector $d$, the number of samples $n$, the cropping threshold $C$, and the type of gradient distribution. They confirm 1) that our theoretical bounds are highly accurate across parameter ranges, 2) that the CLT can in fact be applied to means of cropped random variables (addressing Reviewer ``DVNo``'s concern), and 3) that our theory holds with minor ramifications when $n$ and $d$ are extremely small. To explain this (surprising) behavior, we provide theoretical insights on the convergence of the errors in the response to Reviewer ``qYbt``.
* In response to Reviewer ``vdUQ``, we provide a new Corollary relating the values of the parameter $\mu$ between $\mu$-GDP and $\mu$-MIP.
We would additionally like to stress that our work makes *several impactful contributions*:
1. We are the first to formulate membership inference as a hypothesis testing problem and use the trade-off function of the test to define a versatile notion of f-Gaussian Membership Inference Privacy. Such results are highly sought after in the privacy literature, where the need to reconsider threat models has recently been proclaimed by leading researchers in the field [1, sections 2.1, 4.1, 4.3].
2. Our hypothesis test formulation is valuable, as it allows for a fine-grained theoretical analysis of membership inference attacks. Our main contributions are Theorem 5.1 and Corollary 5.1, which precisely quantify the factors that lead to membership inference attack success in a step of stochastic gradient descent. Our results rely on constructing the most powerful test and are remarkably general: They also cover all ML models trained with standard gradient-based optimization, even without noise or gradient cropping.
3. Notably, through this formulation we can transfer composition and post-processing results from the existing literature. We apply those to steps of noisy stochastic gradient descent to bound the attack success for the training of an entire model.
4. Based on our theoretical insights, our final contribution consists of a constructive algorithm that quantifies the required noise level in SGD to defend against membership inference attacks.
Thank you again for your feedback. We will be happy to answer any further questions.
-------------------------
**Table: Comparing $f$-Differential Privacy and $f$-Membership Inference Privacy**
| | $f$-Differential Privacy | $f$-Membership Inference Privacy |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Adversary goal | Distinguish between $D$ and $D^\prime$ for any $D$, $D^\prime$ that differ in at most one instance | Distinguish whether $x^\prime \in S$ (training data set) or not. |
| Data access | Attacker has full dataset access. For example, the attacker can poison or adversarially construct datasets on which ML models could be trained; e.g., $D = \\{ \\}$ and $D^\prime = \\{100000\\}$ | Attacker has no access to the training data set; i.e., the model owner privately trains their model free of adversarially poisoned samples. |
| Protected instances | The instance in which $D$ and $D^\prime$ differ is arbitrary. This includes OOD samples and extreme outliers | The sample $x^\prime$ for which membership is to be inferred is drawn from the data distribution $D$. Therefore, MI is concerned with typical samples that can occur in practice |
| Model knowledge | The attacker knows the full model architecture including hyperparameters and has full access to samples from the distribution and the parameters during training | The attacker knows the full model architecture including hyperparameters and has full access to samples from the distribution and the parameters during training |
**References**
[1] Challenges towards the Next Frontier in Privacy, Cummings et al. (2023), arXiv:2304.06929
Pdf: /pdf/71d332478bbbcd112f7d4adbbe3230d890b4a685.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a notion of Gaussian membership inference privacy (GMIP) to capture the information leakage of a training algorithm about a data point $x$ when all the remaining training datasets are randomly drawn from a data distribution. The new GMIP definition has two main benefits compared to the prior leakage definition.
1. It captures the entire trade-off curve of membership inference as a hypothesis test and is thus more informative than prior average-case privacy definitions such as membership inference advantage.
2. It stochastically composes the trade-off curves over the randomly drawn remaining training dataset (other than the target data). It, therefore, captures leakage against a more realistic adversary that does not have control over the remaining training dataset (compared to the worst-case f-DP definition).
Following this new GMIP definition, the paper analytically proves GMIP for the DP-SGD algorithm via an analytical likelihood ratio attack on an observed noisy gradient (under assumptions on the gradient distribution, model dimension, and dataset size). The proved GMIP bound has interesting dependencies on the model dimension and a data-dependent constant associated with the gradient distribution. To further illustrate the tightness of this GMIP bound, the authors evaluate the performance of various attacks for a mu-GMIP noisy SGD algorithm and show when the attack performances are close to the GMIP upper bound. Finally, the authors evaluate model accuracy under mu-GMIP and show that it improves over the model accuracy under mu-GDP, which indicates a privacy-utility trade-off gain due to relaxed adversary assumption.
Strengths: - The paper studies an important problem of analytically bounding informative information leakage of training algorithms against realistic adversaries.
- The critical component is a novel analytical derivation of the likelihood ratio test, assuming that the aggregated noisy gradients follow a multivariate Gaussian distribution.
- The proved GMIP bound is novel and has interesting dependencies on various factors about the model and data distribution. The discussion about the tightness of the bound is detailed and supported by empirical evaluations.
Weaknesses: [W1] The main weakness is that the proved GMIP bound is based on several approximation arguments and relies on assumptions about the gradient distribution, the model dimension, and the dataset size. The authors should clarify these approximations and assumptions in the comparison in Figure 1. Otherwise, the comparison to the mu-GDP algorithm is unfair or misleading.
[W2] Specifically, one of the assumptions used in approximating the LRT (line 618 appendix D.1) is that the distribution of averaged gradient follows a Gaussian distribution, provided that the number of averaged samples is large enough. This assumption is invalid because the gradients are clipped before averaging, so the distribution of averaged gradient is bounded and is not close to a Gaussian distribution.
[W3] The proved GMIP bound Theorem 4.2. grows indefinitely with regard to model dimension d. Such dimension dependency does not exist in the standard mu-GDP bound for DP-SGD (which implies GMIP). This suggests that the GMIP bound may be less tight than the standard mu-GDP bound for the high-dimensional problem, which is the case for training a deep neural network.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: [Q1] What exactly are the approximations and assumptions used for the proved GMIP bound, for the comparison figure 1? How much would possible issues with assumptions (such as the one mentioned in weakness [W2] and insufficiently large dataset size n and model dimension d) break the proved GMIP bound?
[Q2] Is the proved GMIP bound less tight than the standard mu-GDP bound for DP-SGD, when the model dimension is large? See weakness [W3] for more information.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations regarding assumptions made for the analysis should be discussed more. See weakness [W1] and [W2] for more details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful review and the questions raised. We clarify the individual points below.
> [W1] The main weakness is that the proved GMIP bound is based on several approximation arguments and relies on assumptions about the gradient distribution, the model dimension, and the dataset size. The authors should clarify these approximations and assumptions [...]
We clarify that we do not make parametric approximation assumptions about the gradient distribution. Our theoretical results stem from the application of the central limit theorem (CLT) to averages formed over mini-batches of gradients and do *not* use parametric assumptions on the gradient distribution. Loosely speaking, the CLT states that the distribution of sample means converges to a Gaussian for large enough sample sizes, regardless of the distribution of the individual gradients. Therefore, we don't need to make assumptions about the shape of the gradient distribution. For a discussion of the other parameters, see the response to [Q1].
We provide a full discussion of threat models of other privacy notions in the general comment due to space constraints, and will incorporate them into the final version of our manuscript.
> [W2] Specifically, one of the assumptions used in approximating the LRT (line 618 appendix D.1) is that the distribution of averaged gradient follows a Gaussian distribution [...] This assumption is invalid because the gradients are clipped before averaging, so the distribution of averaged gradients is bounded and is not close to a Gaussian distribution.
While it is correct that the gradients are clipped, we stress that, in non-technical terms, the Central Limit Theorem states that the **distribution of sample means converges to a normal distribution for large enough sample sizes, regardless of the shape of the distribution of the individual samples** [1, p.66-68]. This means that the individual gradients may be clipped and that the CLT can still be applied to averages over clipped gradients. We demonstrate this empirically through an ablation study (see answer to [Q1])
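To make this point concrete, here is a minimal simulation (our own illustrative sketch, not an experiment from the paper; the heavy-tailed Student-t gradient model and all parameter values are assumptions) showing that means of clipped gradients are already close to Gaussian at moderate batch sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed per-example "gradients" (Student-t, df=3), clipped to [-C, C],
# then averaged over batches of size n -- mimicking clip-then-average.
n, num_batches, C = 50, 5000, 1.0
g = np.clip(rng.standard_t(df=3, size=(num_batches, n)), -C, C)
batch_means = g.mean(axis=1)

# Simple normality diagnostic: for a Gaussian, ~68.3% of the mass lies
# within one standard deviation of the mean.
z = (batch_means - batch_means.mean()) / batch_means.std()
frac_within_1sigma = float(np.mean(np.abs(z) < 1.0))
```

Even though each clipped gradient is bounded (and far from Gaussian), the standardized batch means pass this crude Gaussianity check, consistent with the CLT argument above.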
> [W3] The proved GMIP bound Theorem 4.2. grows indefinitely with regard to model dimension d. [..] This suggests that the GMIP bound may be less tight than the standard mu-GDP bound for the high-dimensional problem, which is the case for training a deep neural network.
> [Q2] Is the proved GMIP bound less tight than the standard mu-GDP bound for DP-SGD, when the model dimension is large? [...]
Indeed, there might exist settings where our results require more noise for $\mu$-MIP than $\mu$-DP. In these cases, one should resort to $\mu$-DP, as it implies $\mu$-MIP. This is, however, not the case for the realistic models considered in this work.
The dependency on the parameter $d$ is a consequence of our intentionally broad proving strategy. Our proof approach consists of two key steps: First, we establish an optimal LRT framework under general gradient distributions, without imposing any clipping constraints (App. D1). This initial step serves as the foundation for our subsequent analysis and is (1) as general as possible, i.e., it makes no distributional assumptions, and (2) optimal in the sense of the Neyman-Pearson Lemma, i.e., it cannot be improved. Our result covers all models trained with standard SGD and is remarkable in its generality, as it is the first to suggest clear conditions under which adding noise is not required to reach $\mu$-MIP.
Second, we specialize our findings to noise addition on clipped random variables (App. D2). This analysis may potentially be improved. We offer the following intuitive rationale: since the noise variance is fixed across dimensions, the introduction of additional dimensions naturally leads to an overall increase in norms, approximately on the order of $\mathcal{O}(\sqrt{d})$. It is important to note that the dependence on $d$ may thus be removed through full incorporation of the fixed clipping threshold $C$, as is common in DP. We leave this improvement for future work, as it requires a substantially more complicated analysis.
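The $\mathcal{O}(\sqrt{d})$ growth of norms can be checked numerically (an illustrative sketch with isotropic standard Gaussian vectors; not the paper's exact noise model):

```python
import numpy as np

rng = np.random.default_rng(0)

# With per-dimension variance fixed, the mean Euclidean norm of a vector in
# R^d grows like sqrt(d): the ratio E||x|| / sqrt(d) approaches 1 as d grows.
ratios = {}
for d in (10, 100, 1000):
    x = rng.standard_normal((2000, d))
    ratios[d] = float(np.linalg.norm(x, axis=1).mean() / np.sqrt(d))
```

This is why, with the noise scale held fixed per dimension, adding dimensions inflates overall norms on the order of $\sqrt{d}$.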
Finally, we emphasize that our work already achieves a significant milestone by being the first to analytically characterize the entire trade-off curve for membership inference attacks.
> [Q1] What exactly are the approximations and assumptions used for the proved GMIP bound, for the comparison figure 1? How much would possible issues with assumptions [...] break the proved GMIP bound?
While we require that the gradient dimension $d$ and batch size $n$ are sufficiently large, we put these aspects into perspective. Following the suggestion of the Reviewer, we ran four ablation studies in the rebuttal PDF to investigate the effects of these parameters on the GMIP bound. We do this by averaging gradients that follow a Gaussian or a uniform distribution and observe that the $\mu$-GMIP bound closely reflects the empirically observed trade-off curves for:
* Realistic values of the batch size $n$ ($n \geq 10$), both when the gradients follow a Gaussian (Figure 3a, e) or a uniform distribution (Figure 3b, f).
* A wide range of clipping thresholds $C$ from 1 to 10 (in this example, the expected value of $\lVert \theta_i \rVert = 5$; Figure 3c, g). We see no effect on the validity of our results (see the CLT argument).
* Small values of $d$. Our bound is an extremely good approximation even for values as small as $d=2$ (Figure 3d, h).
---
We thank the Reviewer again for the feedback, which will certainly help improve our manuscript. We sincerely hope that we have addressed your main concern, in particular why the CLT is in fact applicable and why no distributional assumptions are required, both of which are empirically confirmed in the new ablation experiments provided. We are happy to include these remarks in the final version of the paper. We would kindly request the Reviewer to reconsider the review score in light of this response.
Reference
[1] Y. Dodge. The Concise Encyclopedia of Statistics. Springer, 2008
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks to the authors for clarifying their CLT argument for the mean of the clipped gradient distribution. It definitely clarifies my doubt about the application scope of the bound. However, the CLT argument is an approximation that is only exact at infinite $n$. The fact that the authors are analyzing the converged Gaussian distribution due to the CLT argument makes their current upper bound invalid for any finite $n$ with some probability. To this end, a correct bound would require error-correction terms related to using the CLT (similar to [Theorem 3.7, Dong et al.]). I'd like to request the authors to update the statement of their GMIP bound to either add explicit error-correction terms due to approximations, or add precise descriptions of the assumptions required for the bound to hold (e.g., the gradient mean exactly follows the Gaussian distribution).
Another remaining concern (which is direct consequence of the approximation error mentioned above) is that the proved privacy bound grows with model dimension, and is less tight than the dimension-independent Gaussian DP bound for DP-SGD under large model dimensions. (As the authors acknowledge in the rebuttal.)
I still find the work interesting, and as the authors point out, it is the first-time that a trade-off function under MIA is analytically estimated. However, it also has the above two important limitations and therefore I'm keeping my score still as borderline accept.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer DVNo
Comment: Thank you for your thoughtful reply.
We will be happy to make this point more clear in our manuscript. We will also discuss how Berry-Esseen’s theorem, which yields an error of order $\mathcal{O}(1/\sqrt{n})$, might be used to bound the error. It is also worthwhile to check whether the bound of Dong et al. (pointed out by the reviewer) or its proving technique are applicable. Their bound provides an even faster convergence rate of $\mathcal{O}(1/n)$. Finally, we would like to stress that our empirical studies suggest that the error seems to be negligible even for moderate $n$ in the general and in the relevant low-FPR regime.
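For reference, the classical Berry-Esseen theorem (a standard result restated here for i.i.d. summands; $\rho$ denotes their third absolute moment) bounds the CLT approximation error uniformly:

```latex
\sup_{x \in \mathbb{R}} \left| F_n(x) - \Phi(x) \right|
  \;\le\; \frac{C_0 \, \rho}{\sigma^3 \sqrt{n}},
```

where $F_n$ is the CDF of the standardized sample mean, $\Phi$ the standard normal CDF, $\sigma^2$ the summand variance, and $C_0 < 0.5$ a universal constant. For bounded (clipped) gradients, $\rho$ is finite, so the error decays at rate $\mathcal{O}(1/\sqrt{n})$.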
Thank you for your time and your continued support of our work. | null | null | null | null | null | null |
Permutation Decision Trees using Structural Impurity | Reject | Summary: The authors proposed the permutation decision tree method, which uses Effort-To-Compress as the impurity measure to model the order dependencies of data instances, and extended the proposed permutation decision tree to a variant of random forest. They also conducted experiments to compare the performance of the proposed methods with random forests.
Strengths: The proposed structural impurity can actually capture the order dependencies of data instances, as shown in the examples in Table 1.
Weaknesses: The paper exhibits several weaknesses, which are outlined below:
1. Insufficient clarity regarding the chosen setting: The authors' intended focus appears to be on time series data; however, the task discussed pertains to multi-class classification, which is an i.i.d. setting. The authors are recommended to formalize the problem setup.
2. Inconsistent use of notation: The paper demonstrates inconsistencies in notation usage. For instance, the features presented in Table 3 are denoted as $f_{1}, f_{2}$, whereas in Figures 3-7, they are represented as $x_{0}, x_{1}$.
3. Unfair experimental setup and insignificant results: The experimental setup lacks fairness, and the obtained results do not exhibit statistical significance. On the only dataset where the proposed method outperforms, the random forest model employed only one tree, while the proposed permutation decision forest utilized five trees, indicating an apparent unfairness in the comparison. Furthermore, the hyperparameter `n_estimators` varies across datasets, which is unreasonable.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: 1. Which type of tasks does this paper care about? See Weaknesses #1.
2. If the concerned tasks are order-sensitive, why can the data instances be casually permuted when generating a forest?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors adequately addressed the limitations in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Addressing Weakness 1**: The applicability of the proposed method is not limited to just time series data. In our paper, we are interested in the use case where the order in which data instances are presented plays a crucial role; we are not interested in the dependency on a single data instance. In the modified manuscript, we have added the following section, titled “Temporal vs. Spatial Ordering in Decision Making”, to address the comments.
In the toy example (Table 2), the data instances were represented spatially (refer to Figure 2). However, it is also possible to imagine the data instances to be events in time. This has a significant bearing on the decision-making process.
Imagine that there is an outbreak of a new virus (like COVID-19) and data instances (rows of Table 2) correspond to the chronological admission of patients to a testing facility. The decision to be taken is to quarantine the patient (if the virus is present) or not (if the virus is absent). Let $f_0$ represent the severity (including diversity) of symptoms of the incoming patient and $f_1$ represent the value obtained by a molecular test performed on a bio-sample (e.g., a blood sample) taken from the patient. The labels $1$ and $2$ correspond to the presence (POSITIVE) and absence (NEGATIVE) of the virus, respectively. Consequently, a patient with a POSITIVE label is quarantined. It is also the case that determining the severity (and diversity) of symptoms is relatively straightforward, as it involves a thorough examination by the attending physician. On the other hand, performing the molecular test on the patient's blood sample involves considerable cost and time, not to mention the discomfort the patient is subjected to. The decision-making process needs to factor in all these additional constraints.
Now, the permutations A-E of the data correspond to different realities in which the events unfold over time. Even though the same $14$ patients, with exactly the same symptom severities and molecular test values, occur in each reality (we assume that the molecular test is done on all the patients for the purposes of medical records), the order in which they arrive is different. Consequently, the decision tree obtained by our method differs in each reality, which is intuitive and reasonable. A conventional DT with information gain or Gini impurity would make no distinction between these realities and yield the same DT in each. However, this would not be ideal, as we shall show.
To illustrate the contrasting decision trees obtained by the ETC-based impurity measure, consider the trees obtained for permutation B and permutation D. We have re-drawn them below to better aid the comparison (refer to Figure 8 and Figure 9). Both decision trees fit the same data (up to a permutation) and hence have identical performance metrics. However, the former needs molecular testing of all $14$ patients, whereas the latter needs the testing to be done on only $4$ patients. The argument to be made here is that in the reality encountered by permutation D, the incoming patients arrive in a particular order in which it becomes necessary to perform the molecular testing on all of them, whereas in the alternate reality corresponding to permutation B, the patients arrive in such an order that the decision to test first on severity of symptoms is the correct one. This has a huge impact on future events -- in Figure 8, each and every patient in the future will also be subjected to the molecular testing, whereas this is not the case in Figure 9. Thus, decision making strictly depends on the chronological ordering of events -- as is the case in real life.
Please find the link to the content we have added in the updated manuscript; the figures for Permutation B and D are provided at the link below: https://drive.google.com/file/d/1o91TpYjnbj-OZ7fHUeuSLhKnRwVSxPyA/view?usp=sharing
**Addressing Weakness 2**: Thank you for pointing this out. In the revised manuscript, we have corrected it.
**Addressing Weakness 3**: In response to the reviewer's feedback, we have made significant improvements to the manuscript. We have now included a dedicated section that presents a thorough performance comparison between the permutation decision tree and the classical decision tree, using various real-world datasets.
To ensure a fair and robust evaluation, we have provided detailed information on the hyperparameter tuning process. We conducted cross-validation experiments to validate the effectiveness of both approaches, and the results have been included in the revised manuscript.
Furthermore, we have included the test results obtained from the comparison, providing comprehensive insights into the performance of the permutation decision tree in comparison to the classical decision tree. Please find the screenshot of the results and insights: https://drive.google.com/file/d/1RzuO_-3Hyo96vKQKxMQLtVcVekw9UDzv/view?usp=sharing
In the revised manuscript, we have added the hyperparameter tuning details of Random Forest.
**Response to Questions**: In this paper, our focus lies on tasks involving data instances with temporal dependencies. Although the dataset need not be limited solely to time series data, we are interested in scenarios where each data instance exhibits a dependency on its preceding data instance. Our interest extends to encompass various cases where the relationships between consecutive data instances play a crucial role in the underlying patterns and decision-making process. These dependencies could manifest in diverse domains, and our proposed method aims to effectively capture and leverage them for enhanced performance and interpretability.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing such a comprehensive explanation. However, I regret to mention that the issues I raised remain unresolved.
I'm struggling to grasp the precise scope of the settings the authors intend to address, as it appears to be a somewhat ambiguously defined problem. Regrettably, the revised version of the paper still lacks a concrete mathematical formulation of the dependency in question. In light of this, I would like to suggest that the authors focus on clarifying the following points in their forthcoming iteration:
1. What is the dependency, mathematically? (for example, a transition rule in time series analysis)
2. What is the objective in the concerned setting? (for example, minimizing the accumulated loss in time series analysis)
I believe that addressing these aspects in the next version of the paper will substantially enhance its comprehensibility and value.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for the valuable comments. We will work on the suggestion in the next version. | Summary: In traditional decision tree algorithms such as CART and C4.5, impurity measures are used to split internal nodes. The paper proposes a decision tree induction method by using effort-to-compress (ETC) measure, which can capture order dependencies in the data. With ETC’s ability to capture order dependencies, permuting the data can result in different trees, thereby constructing a forest without the need for bagging. This proposed decision tree induction method can be used for datasets with temporal order.
Strengths: The paper uses ETC as a new impurity measure for constructing decision trees. Since ETC is sensitive to the order of data points, the tree built using this measure may be well-suited for temporal datasets. And it also provides a different way for constructing diverse trees and thereby getting a forest. Overall, the paper is clearly written and easy to follow.
Weaknesses: - What about the bias and variance in the permutation decision forest? Random forest uses bagging and random feature selection to make trees in the forest uncorrelated, thereby reducing variance. But trees in the permutation forest are not uncorrelated. Using the ensemble of these correlated trees may not reduce variance.
- In the toy example, some leaf paths are shown in different trees. I am wondering if there will be a significant number of duplicated leaf paths within the permutation forest.
- Experiments only show the comparison between random forest and permutation tree forest in terms of F1-score. How about other evaluation metrics, e.g. misclassification loss? The results don’t show that the proposed method outperforms random forest. And there is no comparison between the performance of single trees, such as CART vs. ETC tree.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I am wondering how the hyperparameters are chosen in the experiments. Random forest tends to have lower depth but more estimators, while permutation tree forest has deeper trees with fewer estimators.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations of the paper and propose future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Addressing Weakness 1**: In the revised manuscript, we have thoughtfully included a dedicated section titled "Model vs. Domain Interpretability, Temporal Generalizability, and Causal Decision Learning." In this section, we have explored the interpretability aspects of our proposed model, highlighting the key differences between our permutation decision tree and the random forest. The bagging procedure employed in random forests is problematic from the perspective of interpretability or explainability. Leaving out some of the features and randomly sampling data instances is bound to result in biased decision trees. Even though the final classification performance of a random forest may be good, the use of biased trees due to random sampling and feature subsampling leads to a loss of interpretability and reliability in the decision-making process. It should be noted that leaving out a particular feature or data instance when constructing a decision tree (as is the case in the random forest algorithm) is completely unjustified from the point of view of the application domain. It is an arbitrary, ad hoc step that has no valid justification from an explainability/interpretability point of view. Consider the scenario where the left-out data instance is an anomaly or a rare event with physical or engineering significance. For example, the removed data instance may pertain to a system overload event or a fault event in the monitoring of a power system. In a cyber security scenario, the left-out data instance could be an adversarial attack (a sparse and very rare event). The goal of the learning task in these applications is in fact to model such rare/extreme/anomalous events in order to understand them and garner insights, in which case the decision trees used to arrive at the final classification rule in a random forest are completely non-intuitive.
In the Permutation Decision forest, every permutation of the data instances corresponds to an 'alternate' reality (a counterfactual?!) where that particular order of the data instances is presented to the algorithm, resulting in a specific set of decisions made subsequently by the classifier. Two different permutations of the same data could signify two entirely different realities or states of the world. If each data instance corresponds to a specific time, then a different ordering corresponds to a different temporal sequence of events -- which is clearly a very different reality from the one that produced the training data. Random forest has no way of capturing these counterfactual realities. Thanks to the sensitivity of the structural impurity measure to data ordering, the Permutation Decision forest is able to efficiently capture this via different decision-making rules. In effect, what the Permutation Decision forest is learning is a *generalized* set of decision rules that is invariant (under all permutations) to temporal re-ordering. This is a form of generalization that is missing in Random forest. We could call this type of generalization a form of *temporal generalization* that respects counterfactual realities. Thus, the Permutation Decision forest is a precursor to a causality-informed decision tree algorithm. Future research will focus on making these causal underpinnings more explicit and pronounced, with suitable enhancements to upgrade the Permutation Decision forest into a full-blown causal reasoning / causal decision learning algorithm.
**Addressing Weakness 2**: It is possible that different permutations may sometimes give the same tree (if the ETC values are similar).
**Addressing Weakness 3**: In response to the reviewer's feedback, we have made significant improvements to the manuscript. We have now included a dedicated section that presents a thorough performance comparison between the permutation decision tree and the classical decision tree, using various real-world datasets.
To ensure a fair and robust evaluation, we have provided detailed information on the hyperparameter tuning process. We conducted cross-validation experiments to validate the effectiveness of both approaches, and the results have been included in the revised manuscript.
Furthermore, we have included the test results obtained from the comparison, providing comprehensive insights into the performance of the permutation decision tree in comparison to the classical decision tree. Please find the screenshot of the results and insights: https://drive.google.com/file/d/1RzuO_-3Hyo96vKQKxMQLtVcVekw9UDzv/view?usp=sharing
In the revised manuscript, we have added the hyperparameter tuning details of Random Forest.
**Response to Question**: Please go through weakness 3, where we describe how the hyperparameter tuning is done.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their detailed rebuttal. However, I believe the proposed methodology still needs more work. I will maintain my current score.
Looking at Table 4, DT outperforms PDT on some datasets. This makes me doubt if PDT is the best option.
I appreciate adding a discussion about interpretability and temporal generalizability. I agree that a single PDT with fewer leaves is interpretable. But when there are many individual trees in the permutation decision forest, there could be many decision rules and these rules may not always be consistent. How can you ensure interpretability when it becomes a tree ensemble?
The new section also mentions model interpretability and domain interpretability. Showing multiple DTs to domain experts and allowing them to select the most meaningful decision tree relate to the idea of Rashomon set or model multiplicity. But it is not clear to me how PDT or permutation decision forest outperforms a single optimal decision tree and the Rashomon set of trees.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for the valuable comments. When it comes to tree ensembles, the Permutation Decision Forest distinguishes itself by abstaining from any form of random sub-sampling or feature selection. In contrast to conventional methods, the Permutation Decision Forest preserves the entirety of its features, utilizing each one in its decision-making process. This approach results in the generation of distinct decision trees through the permutation of data instances alone. This commitment to utilizing all features is pivotal for achieving interpretability, as it ensures that no potentially crucial attributes are overlooked. Unlike the utilization of random features, which could inadvertently omit vital information, the Permutation Decision Forest effectively mitigates this concern. | Summary: The paper "Permutation Decision Trees using Structural Impurity" introduces a novel split criterion for the training of decision trees that also takes the order of labels inside the training data into account. This way, to obtain a forest, one only needs to shuffle the data before training individual trees. Moreover, the novel split criterion supposedly works better for data that includes (temporal) dependencies, although there are no experiments to support this claim.
Strengths: - I think the idea of tackling non-iid data with a novel split criterion is nice, and Effort-To-Compress as an impurity measure seems like a good choice
Weaknesses: - The experimental evaluation is very weak. The authors compare their method on 6 small real-world datasets and one artificial dataset and compare it only against Random Forests. Moreover, their method seems to be worse compared to RF. Hyperparameters are also incomparable, as the RF uses smaller trees than their method although it is well-known that RF benefits more from larger trees. In addition, the number of estimators changes for every experiment. There is no clear experimental protocol, and the authors do not use random repetitions and/or cross-validation but resort to a single train/test split. The experimental evaluation is hence borderline useless and can only be seen as a first test-experiment.
- The paper contains limited valuable information. While the Effort-To-Compress (ETC) measure seems to be of central interest here, the authors do not present a formal mathematical explanation of it. They mention the NSRPS algorithm to compute ETC, but also do not explain it mathematically, and only offer a single example. A thorough mathematical explanation and the typical explanations of the notation (model function f(x), samples X, labels Y, etc.) are missing entirely.
- The authors deal with the case in which the order of samples is important. This is completely against the typical IID assumption we have in Machine Learning. Unfortunately, the authors neither discuss this (certainly interesting) difference in more detail nor do they really present any real-world example of it.
- A dedicated Related Work section is missing, although there is plenty of space left in the paper. The authors decided to waste roughly two pages by printing different DTs, which does not add any new information to the paper. This space would have been used better to highlight related work or pinpoint the novelty of this work in more detail.
- Eq. (1) and Tab. 4 do not fit the page width
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: - Are there any datasets and/or real-world applications you are aware of in which the order of samples matters? What are the implications for "classical" IID ML here?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 1 poor
Limitations: The authors acknowledge that their method is worse compared to the state of the art and intend to perform more testing. As this paper presents a novel method I don't see any immediate negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Addressing Weakness 1**: In response to the reviewer's feedback, we have made significant improvements to the manuscript. We have now included a dedicated section that presents a thorough performance comparison between the permutation decision tree and the classical decision tree, using various real-world datasets.
To ensure a fair and robust evaluation, we have provided detailed information on the hyperparameter tuning process. We conducted cross-validation experiments to validate the effectiveness of both approaches, and the results have been included in the revised manuscript.
Furthermore, we have included the test results obtained from the comparison, providing comprehensive insights into the performance of the permutation decision tree in comparison to the classical decision tree. Please find the screenshot of the results and insights: https://drive.google.com/file/d/1RzuO_-3Hyo96vKQKxMQLtVcVekw9UDzv/view?usp=sharing
In the revised manuscript, we have added the hyperparameter tuning details of Random Forest.
**Addressing Weakness 2**: In the supplementary materials, we have provided the mathematical explanation of the NSRPS algorithm.
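For intuition, here is a simplified sketch of computing ETC via NSRPS (our illustrative variant, not the authors' exact implementation; published NSRPS formulations differ in pair counting and tie-breaking):

```python
from collections import Counter

def etc(seq):
    """Effort-To-Compress via a simplified NSRPS: repeatedly replace the
    most frequent adjacent pair with a fresh symbol until the sequence is
    constant or a single symbol; return the number of substitution steps.
    Assumes integer-coded symbols."""
    seq = list(seq)
    steps, fresh = 0, max(seq) + 1
    while len(seq) > 1 and len(set(seq)) > 1:
        pair = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
        out, i = [], 0
        while i < len(seq):
            if i < len(seq) - 1 and (seq[i], seq[i + 1]) == pair:
                out.append(fresh)  # substitute the pair, scanning left to right
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, fresh, steps = out, fresh + 1, steps + 1
    return steps
```

Unlike Gini impurity or entropy, this measure is sensitive to ordering: `etc([1, 2, 1, 2])` differs from `etc([1, 1, 2, 2])` even though both label sequences have the same class counts, which is what allows permutations of the training data to yield different trees.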
**Addressing Weakness 3**: In our paper, we are interested in the use case where the order in which data instances are presented plays a crucial role; we are not interested in the dependency on a single data instance. In the modified manuscript, we have added the following section, titled “Temporal vs. Spatial Ordering in Decision Making”, to address the comments.
In the toy example (Table 2), the data instances were represented spatially (refer to Figure 2). However, it is also possible to imagine the data instances to be events in time. This has significant bearing on the decision making process.
Imagine that there is an outbreak of a new virus (like COVID-19) and data instances (rows of Table 2) correspond to the chronological admission of patients to a testing facility. The decision to be taken is to quarantine the patient (if the virus is present) or not (if it is absent). Let $f_0$ represent the severity (including diversity) of symptoms of the incoming patient and $f_1$ represent the value obtained by a molecular test performed on a bio-sample (e.g., a blood sample) taken from the patient. The labels $1$ and $2$ correspond to the presence of the virus (POSITIVE) and absence of the virus (NEGATIVE), respectively. Consequently, the patient with a POSITIVE label is quarantined. Determining the severity (and diversity) of symptoms is relatively straightforward, as it involves a thorough examination by the attending physician. On the other hand, performing the molecular test on the patient's blood sample involves considerable cost and time, not to mention the discomfort the patient is subjected to. The decision-making process needs to factor in all these additional constraints.
Now, the permutations A-E of the data correspond to different realities in which the events happen over time. Even though the same $14$ patients, with exactly the same severity of symptoms and molecular test values, occur in every reality (we assume that the molecular test is done on all patients for the purposes of medical records), the order in which they arrive is different in each. Consequently, the decision tree obtained by our method is different in each reality, which is intuitive and reasonable. A conventional DT with information gain or Gini impurity would make no distinction between these realities and yield the same DT in each one. However, this would not be ideal, as we shall show.
To illustrate the contrasting decision trees obtained with the ETC-based impurity measure, consider the trees obtained for permutation B and permutation D. We have re-drawn them below to better aid the comparison (refer to Figure 8 and Figure 9). Both decision trees fit the same data (up to a permutation) and hence have identical performance metrics. However, the former needs molecular testing of all $14$ patients, whereas the latter needs the testing to be done on only $4$ patients. The argument to be made here is that in the reality encountered by permutation D, the incoming patients arrive in a particular order where it becomes necessary to perform the molecular testing on all of them. In the alternate reality corresponding to permutation B, the patients arrive in such an order that the decision to test first on severity of symptoms is the correct one. This has a huge impact on future events: in Figure 8, each and every future patient will also be subjected to the molecular testing, whereas this is not the case in Figure 9. Thus, decision making strictly depends on the chronological ordering of events, as is the case in real life.
Please find the link to content we have added in the updated manuscript, Figures for Permutation B and D are provided in the below link: https://drive.google.com/file/d/1o91TpYjnbj-OZ7fHUeuSLhKnRwVSxPyA/view?usp=sharing
**Addressing Weaknesses 4 and 5**: In the revised manuscript, we have added relevant previous works. We have also modified Equation 1 and Table 4.
**Response to Question**: Please go through the response to Weakness 3, where we have provided an example that highlights the use case of the method.
---
Rebuttal Comment 1.1:
Comment: After reading the author's response, I am still not convinced.
**Addressing Weakness 1**: I acknowledge the added comparison of PDT vs DT in the provided screenshot. However, the results you are now showing are inconsistent. For example, on the Iris dataset, looking at the cross-validated results, PDT beats DT, yet they have the same performance on the test set. On the breast cancer dataset, the effect is even worse: DT beats PDT when looking at cross-validation but loses on the test set. I understand that cross-validation is difficult when the order of data is essential (and there are a few papers discussing this in the time-series domain, although I am unsure how applicable they are; see e.g. [1]), but since the values partially contradict each other, I am still unsure which method is the best.
**Addressing Weakness 2**: Unfortunately, I cannot find the supplementary material for this paper. Is this part of the rebuttal?
**Addressing Weakness 3**: I am sorry for the miscommunication here. I appreciate the additional example, but I still feel there is a lack of notation and proper mathematical explanation. In the typical Machine Learning setting, we assume that data instances are i.i.d. samples. Now the authors introduce an algorithm that also respects the order of samples in the dataset. Hence the underlying assumption is that certain permutations might occur more frequently and/or certain permutations impact certain labels (as shown in the example). Now I wonder: How can we formally, i.e. in a mathematical sense, characterize this dependency? I believe this is the core issue I have with the paper, as a proper mathematical model would also help with the proper evaluation of the model.
[1] "A note on the validity of cross-validation for evaluating autoregressive time series prediction" by Bergmeir et al. 2018 | Summary: The paper proposes a novel in Decision Tree literature splitting criteria based on Effort To Compress (ETC) gain. Use of this criteria is justified by a desire to work with data that doesn't conform to i.i.d. assumption about the generating distribution. There is an experiment on a synthetic data that shows that different decision trees are generated when different orderings of the data are used for training. A permutation voting forest is introduced, that allows using random permutations of full data to obtain multiple different decision trees for use in the final ensemble of trees. There is an evaluation of Permutation Voting Forest against regular random forests on multiple real world datasets that however show slightly lower results when using proposed method.
Strengths: - The paper opens a novel line of research about using Decision Trees for modeling data, that comes in a sequence and does not follow i.i.d. assumption.
- There is a novel application of ETC measure as a splitting criteria in decision trees.
- A generalization of the proposed model: Permutation Decision Forest is introduced, that uses a novel idea of shuffling the data in the context of a splitting criteria that generates different trees for different permutations of the training data.
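The shuffling-based forest idea described in this strength can be sketched as follows. This is a hedged illustration of the mechanism only: `build_tree` is a hypothetical stand-in for fitting one permutation decision tree on a single ordering of the full data, and the majority vote is the usual ensemble aggregation.

```python
import random

def permutation_forest_predict(data, x, build_tree, n_trees=10, seed=0):
    """Sketch of a permutation forest: each tree is trained on a random
    permutation of the *full* dataset (no subsampling), so an order-sensitive
    splitting criterion yields different trees; predictions are aggregated
    by majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_trees):
        perm = data[:]          # copy, then shuffle the full data
        rng.shuffle(perm)
        tree = build_tree(perm) # returns a callable classifier
        votes.append(tree(x))
    return max(set(votes), key=votes.count)
```

Note the contrast with a classical random forest, which diversifies trees via bootstrap sampling and feature subsets rather than via reordering.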
Weaknesses: - It is noted that usage of ETC allows one to drop the i.i.d. assumption. But this claim needs more thorough theoretical analysis. If we want to keep the sequential structure of the data, the sequence still gets destroyed upon a split: a split does not partition examples in a consecutive way; some examples may go to the left split, then some to the right, then again to the left part of the split, and so on. So, the new left and right sequences after the split will have completely different properties. Consider the example from the introduction where ETC is used: musical compositions. Splitting a musical composition according to some feature, like the presence of some range of frequencies at a given moment, will result in unpleasant music on both sides of the split, because instead of hearing half of the composition, we will hear a "fractal": small parts of the original sequence, with small gaps in between, that got assigned to the left or right part of the split.
- Related to the previous point: at the testing phase there is no "memory" in the model, and the model still predicts elements by looking at them one by one. So, shuffling the testing set will result in exactly the same predictions. Can we say that the problem of non-i.i.d. distribution is solved if the behavior on the testing set is equal to the behavior of i.i.d. models?
- Testing of regular Decision Trees with proposed splitting criteria on real data is needed (only the forests were tested on real data, but proposed forests work differently due to the proposed shuffling of the input data, so regular trees must be evaluated separately as well). It would be nice to both test on regular datasets (that are not sequentially ordered, like the datasets from section 3.2), and also to find at least some example real datasets where ordering is important, and where proposed model (regular permutation decision tree) is both practically and theoretically better than the baseline decision tree models.
- In section 3.2 a more thorough experimental design would be more convincing. (a) If we compare the proposed model to the baseline, why are the hyperparameters different for the same dataset? If hyperparameter tuning was done, it should be thoroughly described. (b) Experiments should be run several times on different train-test splits, and mean scores and standard deviations should be reported to allow fair comparison in the presence of noise.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - The proposed decision tree and forest work very differently, because forest does internal data shuffling, which destroys ordering of the original input data. Shall we consider two variants of the forest: one with shuffling, and one without (where ETC splitting is used, but instead of shuffling we do regular feature and example subsets selection)? Then the forest with shuffling may be compared to original Random Forest (because both are not using sequential information); and the Forest without shuffling may be compared to the single proposed permutation decision tree.
- Minor note: typo on line 41: "muscial"
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 4 excellent
Limitations: Limitations are well described in the paper, which is good. Since significant limitations in the accuracy of the proposed permutation decision forests were identified (which may also affect the proposed single permutation decision trees), it would be more convincing to include additional experiments that clarify the extent of such limitations right away, without deferring them to future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for the valuable feedback. Please find the point-by-point response to each of the weaknesses pointed out.
**Addressing Weakness No 1**: In our paper, we are interested in the use case where the order in which data instances are presented plays a crucial role; we are not interested in the dependency on a single data instance. In the modified manuscript, we have added the following section, titled “Temporal vs. Spatial Ordering in Decision Making”, to address the comments.
In the toy example (Table 2), the data instances were represented spatially (refer to Figure 2). However, it is also possible to imagine the data instances to be events in time. This has significant bearing on the decision making process.
Imagine that there is an outbreak of a new virus (like COVID-19) and data instances (rows of Table 2) correspond to the chronological admission of patients to a testing facility. The decision to be taken is to quarantine the patient (if the virus is present) or not (if it is absent). Let $f_0$ represent the severity (including diversity) of symptoms of the incoming patient and $f_1$ represent the value obtained by a molecular test performed on a bio-sample (e.g., a blood sample) taken from the patient. The labels $1$ and $2$ correspond to the presence of the virus (POSITIVE) and absence of the virus (NEGATIVE), respectively. Consequently, the patient with a POSITIVE label is quarantined. Determining the severity (and diversity) of symptoms is relatively straightforward, as it involves a thorough examination by the attending physician. On the other hand, performing the molecular test on the patient's blood sample involves considerable cost and time, not to mention the discomfort the patient is subjected to. The decision-making process needs to factor in all these additional constraints.
Now, the permutations A-E of the data correspond to different realities in which the events happen over time. Even though the same $14$ patients, with exactly the same severity of symptoms and molecular test values, occur in every reality (we assume that the molecular test is done on all patients for the purposes of medical records), the order in which they arrive is different in each. Consequently, the decision tree obtained by our method is different in each reality, which is intuitive and reasonable. A conventional DT with information gain or Gini impurity would make no distinction between these realities and yield the same DT in each one. However, this would not be ideal, as we shall show.
To illustrate the contrasting decision trees obtained with the ETC-based impurity measure, consider the trees obtained for permutation B and permutation D. We have re-drawn them below to better aid the comparison (refer to Figure 8 and Figure 9). Both decision trees fit the same data (up to a permutation) and hence have identical performance metrics. However, the former needs molecular testing of all $14$ patients, whereas the latter needs the testing to be done on only $4$ patients. The argument to be made here is that in the reality encountered by permutation D, the incoming patients arrive in a particular order where it becomes necessary to perform the molecular testing on all of them. In the alternate reality corresponding to permutation B, the patients arrive in such an order that the decision to test first on severity of symptoms is the correct one. This has a huge impact on future events: in Figure 8, each and every future patient will also be subjected to the molecular testing, whereas this is not the case in Figure 9. Thus, decision making strictly depends on the chronological ordering of events, as is the case in real life.
Please find the link to content we have added in the updated manuscript, Figures for Permutation B and D are provided in the below link: https://drive.google.com/file/d/1o91TpYjnbj-OZ7fHUeuSLhKnRwVSxPyA/view?usp=sharing
**Addressing Weakness 2**: As you correctly pointed out, in the current scenario, the shuffling of test data does not have an effect. However, this issue can be addressed by implementing a method where each test data instance is predicted individually. After predicting a test data instance, we include it in the training set and build a new model. We then utilize the updated model to predict the subsequent test data instance, and this process continues iteratively. By adopting this approach, we effectively incorporate memory into the model, enabling it to consider the evolving knowledge from previously predicted test data instances.
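The iterative scheme just described can be sketched in a few lines of Python. This is a hedged illustration under our own conventions: `fit` and `predict_one` are hypothetical stand-ins for model training and single-instance prediction, and we follow the rebuttal's description in feeding the model's own prediction back as the label for the next refit.

```python
def predict_with_memory(train, test, fit, predict_one):
    """Predict each test instance in arrival order; after predicting it,
    append (instance, prediction) to the training data and refit, so
    earlier test instances influence later predictions (the "memory")."""
    preds = []
    data = list(train)            # copy so the caller's data is untouched
    for x in test:
        model = fit(data)         # refit on everything seen so far
        y_hat = predict_one(model, x)
        preds.append(y_hat)
        data.append((x, y_hat))   # fold the prediction back in
    return preds
```

With this loop, shuffling the test set generally changes the predictions, which is the behavior the reviewer's Weakness 2 asked about.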
**Addressing Weakness 3 and 4**: In response to the reviewer's feedback, we have made significant improvements to the manuscript. We have now included a dedicated section that presents a thorough performance comparison between the permutation decision tree and the classical decision tree, using various real-world datasets.
To ensure a fair and robust evaluation, we have provided detailed information on the hyperparameter tuning process. We conducted cross-validation experiments to validate the effectiveness of both approaches, and the results have been included in the revised manuscript.
Furthermore, we have included the test results obtained from the comparison, providing comprehensive insights into the performance of the permutation decision tree in comparison to the classical decision tree. Please find the screenshot of the results and insights: https://drive.google.com/file/d/1RzuO_-3Hyo96vKQKxMQLtVcVekw9UDzv/view?usp=sharing
In the revised manuscript, we have added the hyperparameter tuning details of Random Forest.
We believe that these additions significantly enhance the quality of the paper, addressing the concerns raised by the reviewer regarding the performance comparison. We are grateful for the valuable feedback, which has allowed us to strengthen our research and present more comprehensive and conclusive findings.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I acknowledge your clarifications and would like to note the following, while for now not changing my evaluation of the paper:
- On Weakness 1, you have provided an example where f_1 is more "costly" to obtain than f_0. However, in the real world (in another dataset or example that consists of the same feature values and example ordering, but with a different meaning) it could be the other way around, where f_0 would be more costly. So, in my opinion, this example does not answer the question in general, and provides only one specific case which is hard to generalize into a theory.
- On Weakness 2 I acknowledge your suggestion.
- Regarding your response to Weaknesses 3 and 4, I would like to note that a hyperparameter search over depths from 1 to 150 with step size 10 is not sufficient: depths over 10 are very unlikely to be optimal, because they result in overfitting on such small datasets. This corresponds to the resulting fact that only depths 1 and 11 were selected by the hyperparameter search, and both are likely to be suboptimal, because the range from 2 to 10 was not covered by the search. Since the hyperparameter space was not fully explored, the conclusions may not be valid in the possible presence of more optimal results. | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for the valuable feedback. We have modified the manuscript and addressed the comments raised by the reviewers. Following are the changes made:
1. A concrete example that illustrates the practical use case of the proposed Permutation Decision Tree. Please find the link for the same: https://drive.google.com/file/d/1o91TpYjnbj-OZ7fHUeuSLhKnRwVSxPyA/view?usp=sharing
2. We have thoughtfully included a dedicated section titled "Model vs. Domain Interpretability, Temporal Generalizability, and Causal Decision Learning (Section 5)." In this section, we have explored the interpretability aspects of our proposed model, highlighting the key differences between our permutation decision tree and the random forest. Due to a lack of space, please find the link to the contents of this section: https://drive.google.com/file/d/1-bn79_d9VYPLZ1QT3xQTNnBzp0UoVMTo/view?usp=sharing
3. In response to the reviewer's feedback, we have made significant improvements to the manuscript. We have now included a dedicated section that presents a thorough performance comparison between the permutation decision tree and the classical decision tree, using various real-world datasets.
To ensure a fair and robust evaluation, we have provided detailed information on the hyperparameter tuning process. We conducted cross-validation experiments to validate the effectiveness of both approaches, and the results have been included in the revised manuscript. Furthermore, we have included the test results obtained from the comparison, providing comprehensive insights into the performance of the permutation decision tree in comparison to the classical decision tree. Please find the screenshot of the results and insights: https://drive.google.com/file/d/1RzuO_-3Hyo96vKQKxMQLtVcVekw9UDzv/view?usp=sharing. In the revised manuscript, we have added the hyperparameter tuning details of Random Forest.
4. The mathematical details of the Non-sequential Recursive Pair Substitution (NSRPS) algorithm have been added in the supplementary material.
5. Correction to the notational inconsistency pointed out by the reviewer.
6. We shall provide the GitHub link to the codes used to build the proposed model.
Thank you for the valuable feedback. The feedback helped us to do more experiments and further provide more insights about the proposed permutation decision tree. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unifying Predictions of Deterministic and Stochastic Physics in Mesh-reduced Space with Sequential Flow Generative Model | Accept (spotlight) | Summary: The paper proposes PbGMR-GMUS to encode the graphs of physics systems into low-dimensional features, which are further reconstructed into the desire graph representations. An attention-based model is integrated into flow-based generative models to predict the future dynamics in the latent space. The proposed method is able to handle both deterministic and stochastic fluid dynamics thanks to the probabilistic model and achieve superior performance over previous methods.
Strengths: * An effective model to learn the graph representations in low-dimensional space.
* The simulation of dynamics is achieved by a probabilistic method using generative models, which rollout vivid predictions for long-term predictions.
* The method is able to solve both deterministic and stochastic fluid dynamics.
Weaknesses: * Missing citation at L62 for GMR-GMUS.
* On top right of Figure 1, is it "Coupling layer" instead of "Couling layer"?
* Unclear caption for Figure 5. Which one is the variance?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * How can the statement about the first benefit at L163-164 be proven? Is there any further explanation?
* Is the main difference between simulating deterministic and stochastic fluid dynamics that the $\mu$ in Equation 14 would change with time for stochastic dynamics?
* Any video demos for the predictions of both deterministic and stochastic fluid dynamics? The temporal consistency can be better illustrated by videos.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed PbGMR-GMUS is technically limited, being a variant of GMR-GMUS with simple modifications. Video demos are missing to verify the temporal consistency of the predictions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your important support, and we address each of your concerns below; they have been extremely helpful for improving the submission.
> **Q1:** Miss citation at L62 for GMR-GMUS.
**Response:** Thanks for pointing it out. We have added the reference in the revised version.
> **Q2:** On top right of Figure 1, is it "Coupling layer" instead of "Couling layer"?
**Response:** Yes, you are right. It should be "coupling layer", and we have fixed it.
> **Q3:** Unclear caption for Figure 5. Which one is the variance?
**Response:** We agree that the caption should be clearer. The upper panel shows the mean of the velocity, and the bottom panel shows the variance of the velocity. We have added more explanation in the revised version.
> **Q4:** How to prove the statement about the first benefits at L163-164? Is there any further explanation?
**Response:** As a stochastic process, probabilistic models on the original graph space $p(Y_{t+1}|Y_t)$ face a severe one-to-many mapping problem (an ill-posed or ill-conditioned problem) [24]. That is, a given previous-step state $Y_t$ can correspond to many different next steps $Y_{t+1}$, which increases the learning difficulty. Incorrect modeling of the ill-posed problem could result in overfitting on the training set and poor generalization on the test set. Thus, converting $Y$ into a more compact $z$ makes the mapping between $z_t$ and $z_{t+1}$ less ill-posed and eases model learning. We will add more explanation in the revised version.
> **Q5:** Is the main difference to between simulating deterministic and stochastic fluid dynamics that the $\mu$ in Equation 14 would change with time for stochastic dynamics?
**Response:** Thanks for asking for clarification. Here $\mu$ is a time-invariant global physical parameter, such as $Re$. The numerical simulation has another, time-varying parameter, such as the stochastic boundary condition for stochastic fluid dynamics. As for our model, predictions of deterministic and stochastic physics are unified; the model can automatically capture the stochasticity conditioned on $\mu$ during training.
> **Q6:** Any video demos for the predictions of both deterministic and stochastic fluid dynamics? The temporal consistency can be better illustrated by videos.
**Response:** Thanks for this good suggestion. We have uploaded some video demos. The link is anonymized and contains no author information. Per the policy, we first sent it to the AC in the official comment.
We appreciate your careful reading and suggestions for this paper. We believe these revisions and new supplementary experiments help improve the manuscript a lot. And your support means a lot to us. | Summary: This paper introduces a novel mesh-based machine-learning approach for modeling stochastic fluid dynamics. Similar to [11], the state transitions are modeled in a compact latent space (referred to as the mesh-reduced space) rather than the high-dimensional mesh space. Compared with [11], this paper makes several technical contributions below.
1. It proposes to use virtual pivotal positions instead of selecting pivotal nodes from the mesh topology.
2. It employs a deep generative model based on a normalizing flow approach, RealNVP, to model the stochastic dynamics. The model also incorporates multi-head attention over sequential latent states.
The experimental results demonstrate that the proposed model significantly outperforms [11] in the deterministic setup, while in the stochastic setup, the model is compared against an off-the-shelf CFD method named URANS, rather than learning-based models.
Strengths: Originality: To the best of my knowledge, this paper introduces the first learning-based probabilistic model that simulates fluid systems upon the mesh topology. The key innovation lies in integrating a flow method for stochastic temporal modeling into the GMR-GMUS [11] framework, which presents a reasonable and creative combination of existing approaches.
Quality and clarity: Overall, the paper is clear and easy to follow.
Significance: The proposed model is shown to be effective in deterministic setups, which consistently outperforms MeshGraphNet and GMR-GMUS [11] by substantial margins.
Weaknesses: Main concerns:
1. The proposed model appears to be a straightforward extension of GMR-GMUS [11] combined with RealNVP [25].
2. Ablation studies for using virtual pivotal positions, residual connections, and the conditional normalizing flow are lacking, which makes it difficult to determine the specific contributions of these components to the performance improvement shown in Table 1. It is suggested that the authors address this issue and provide a more comprehensive analysis.
3. Although the paper addresses the new problem of probabilistic modeling of fluids, the lack of comparison with existing learning-based models such as GMR-GMUS and MeshGraphNet in Section 5.2 raises doubts about the true challenges of the prediction task for stochastic fluid systems and the superiority of the proposed approach.
Minor concerns:
1. Accurate citations with corresponding publication sources should be provided. For example, GMR-GMUS [11] was published at ICLR 2022. This will enhance the credibility and traceability of the referenced papers.
2. It would be valuable to include a brief discussion regarding the limitations of the research to provide readers with a broader perspective on the potential areas for improvement or expansion.
3. In Line 62, the authors introduce GMR-GMUS without relevant sources, which made me confused until I came to Section 3.1.
4. In Lines 113-121, it would be good if the authors could describe the learning method by referring to specific components or steps in Figure 1, so the readers can better grasp the concepts and processes discussed in the text.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Line 118, the claim that the proposed model enables the enlargement of the detection distance compared to GMR-GMUS lacks detailed explanations. Further elaboration on this point would be beneficial.
2. In Line 87, it is mentioned that the stochastic dynamic system includes a random parameter $\mu$ that varies over time, such as perturbations in the boundary conditions of turbulent flow. However, in Line 161, $\mu$ is referred to as the global physical system parameter and seems to be time-invariant. It would be helpful for the authors to clarify whether $\mu$ is time-varying across the input and output sequences.
3. Regarding $\mu$, as well as the positions $p(i)$ of the mesh cells, it is not clear from the provided context whether they are given in the future prediction time horizon.
4. I am not very familiar with RealNVP, but based on the information provided, should it be written as $P(z_{1:T} \mid \mu, z_0, x; \theta)$ in Equation (10) instead of $P(z_{1:T} \mid \mu, z_0; \theta)$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No discussions on limitations or broader societal impacts are presented in the current text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First and foremost, we express our heartfelt gratitude for your warm acknowledgment of the novelty of the learning-based probabilistic model, meticulously simulating fluid systems across intricate mesh topologies. Here are our responses to your questions.
> **Q1:** The proposed model appears to be a straightforward extension of GMR-GMUS [11] combined with RealNVP [25].
**Response:** Our work goes beyond the deterministic setting of the prior SOTA works, significantly improves the SOTA reconstruction technique, and, more importantly, unifies deterministic and stochastic systems without loss of generalizability.
By replacing pivotal nodes with pivotal positions, we can enlarge the detection distance for each position embedding and thus aggregate information more efficiently. Also, a residual connection is added to the message-passing layer, which makes training more stable. These advantages give PbGMR-GMUS higher reconstruction accuracy than GMR-GMUS, and a low reconstruction error is significant for the success of latent generative models. To test the improvement brought by PbGMR-GMUS, we conducted comprehensive ablation studies; the results are shown in Tables 8 and 9 (supplementary experiments). The results show that all the newly added components improve performance. A detailed analysis is given in the global rebuttal.
Furthermore, our attention-based temporally conditioned generative model is not a simple extension of RealNVP. The original RealNVP is only a probabilistic model, and it is not straightforward to include temporal information. To achieve this, we propose an encoding-decoding transformer structure to incorporate the fixed physical parameters and all previous steps into a condition vector $c$. We then prove that the probability of a coupling layer is still easy to calculate if we concatenate the condition vector $c$ at every coupling layer. In fact, the key design is the attention-based temporal conditional model, not the selection of RealNVP: the proposed framework can be combined with other probabilistic models. To prove this, we replaced RealNVP with MAF, another flow model; Table 12 indicates the results are comparable to RealNVP's, so the whole framework is a flexible and useful paradigm. We also tested a single RealNVP with the temporal conditional model removed; the performance in Table 10 drops markedly.
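As a sketch of why concatenating the condition vector keeps the coupling-layer likelihood cheap (a NumPy toy, with an assumed one-layer "network" standing in for the real scale/shift nets):

```python
import numpy as np

def conditional_coupling(z, c, W, b):
    """Affine coupling layer conditioned on a context vector c (sketch).

    z : (batch, dim) latent vectors, dim even
    c : (batch, cond_dim) condition vector (e.g., from a transformer encoder)
    W : (dim//2 + cond_dim, dim) weights of a toy scale/shift network
    b : (dim,) bias

    Concatenating c to the network input leaves the Jacobian lower
    triangular, so log|det J| is still just the sum of the scales.
    """
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    out = np.tanh(np.concatenate([z1, c], axis=-1) @ W + b)
    s, t = out[..., :d], out[..., d:]
    y2 = z2 * np.exp(s) + t               # transform only the second half
    log_det = s.sum(axis=-1)              # cheap, exact log-determinant
    return np.concatenate([z1, y2], axis=-1), log_det
```

The first half of `z` passes through unchanged regardless of `c`, which is what preserves exact, inexpensive likelihood evaluation.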
> **Q2:** Ablation studies for using virtual pivotal positions, residual connections, and the conditional normalizing flow are lacking ...
**Response:** We have followed this suggestion and added the ablation studies on virtual pivotal positions, residual connections, and conditional normalizing flows; the results are in Tables 8-10 (supplementary experiment results). In general, all of the proposed components help improve the performance. Detailed analyses are in global rebuttal **Q1**.
> **Q3:** The lack of comparison with existing learning-based models such as GMR-GMUS and MeshGraphNet for stochastic fluid systems ...
**Response:** The reviewer's concern is absolutely reasonable. We agree that more learning-based baselines are needed for Section 5.2, and we trained GMR-GMUS [11] and MeshGraphNet [3] to address it. The results are attached in Table 13 and Figure 20. Our model has the best performance, and the deterministic models cannot capture the critical stochastic nature of the underlying physics. Detailed analyses are shown in the global rebuttal **Q2** due to the word limit.
> **Q4:** ... clarify whether $\mu$ is time-varying across the input and output sequences.
**Response:** Thanks for your help with the clarification. The correct form of Equation 2 is $\frac{\partial \mathbf{u}}{\partial t} = j(\mathbf{u}, \boldsymbol{\mu}, \boldsymbol{\iota})$, where $\iota$ is the parameter that varies randomly over time for stochastic systems (e.g., boundary conditions). So $\mu$ in lines 71, 86, and 87 should be replaced with $\iota$.
In Line 161, the $\mu$ is the global physical system parameter and is time-invariant (e.g., Re number). We revised the notations to make it more clear. We appreciate this excellent comment.
> **Q5:** Regarding $\mu$, as well as the positions $p(i)$ of the mesh cells, it is not clear from the provided context whether they are given in the future prediction time horizon.
**Response:** Thank you for asking for clarification. During time-series prediction in the latent space, $\mu$ (the time-invariant global physical parameter) is given through the transformer encoder in Equation (12). The positions of the mesh cells, however, are not given during the prediction in time. We only need to guarantee that the node order is fixed when concatenating them into the latent vector $z$.
> **Q6:** Should it be written as ... in Equation (10) instead of ...?
**Response:** Thanks for raising this discussion. RealNVP is a probabilistic model, and we define such a conditional probability only on the latent space. Note that 'latent space' here is not the same as the one mentioned in Section 2.2, which describes the internal operation of RealNVP. In Equation (10), we only care about how to define the temporal conditional probability; it does not depend on a specific probabilistic model. Though we select a normalizing-flow model in this work, the conditional distribution could also be defined with other models.
> **Q7:** It would be valuable to include a brief discussion regarding the limitations of the research.
**Response:** Thanks for this concern. We agree it is necessary to discuss the limitation of the proposed model. Please refer to global rebuttal **Q3** for details.
> **Q8:** Accurate source of GMR-GMUS [11], and proper reference for GMR-GMUS in Line 62.
**Response:** Thanks for pointing it out. We revised several references with accurate sources and added a reference in Line 62.
We believe your suggestions are helpful. Thanks a lot for your careful reading and efforts on our submission. We have also revised the minor issues in our manuscript.
---
Rebuttal Comment 1.1:
Title: Additional reply
Comment: Because of the word limit, there are two questions we could not reply to in the rebuttal. We think these two suggestions are very important, and we have already revised our manuscript based on them. We add our replies to these questions here:
> **Q9:** In Line 118, the claim that the proposed model enables the enlargement of the detection distance compared to GMR-GMUS lacks detailed explanations. Further elaboration on this point would be beneficial.
**Response:** We agree that we should add detailed explanations here. A pivotal node can only gather information from nearby nodes through message-passing layers. However, due to over-smoothing issues, the number of message-passing layers cannot be large (generally, 4 or 5 layers are adopted), so a pivotal node can only detect nodes within graph distance 4 or 5, which corresponds to a very small spatial distance. A pivotal position in Equation (8), in contrast, selects nodes directly based on spatial distance; thus, choosing a relatively large $k$ takes more nodes into account and covers a larger area at the beginning of recovery.
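The spatial-distance selection can be sketched in a few lines of NumPy (an illustrative helper; the function name and array shapes are our assumptions):

```python
import numpy as np

def nodes_near_pivot(node_pos, pivot, k):
    """Return indices of the k mesh nodes nearest to a pivotal position.

    Selection is by Euclidean distance, so the covered area grows directly
    with k, whereas a pivotal *node* only sees ~4-5 message-passing hops.
    """
    dist = np.linalg.norm(node_pos - pivot, axis=1)
    return np.argsort(dist)[:k]
```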
> **Q10:** In Lines 113-121, it would be good if the authors could describe the learning method by referring to specific components or steps in Figure 1, so the readers can better grasp the concepts and processes discussed in the text.
**Response:** Thanks for this perceptive suggestion. We believe it will be really helpful for readers to understand the whole framework. We followed this suggestion in our revised version.
---
Rebuttal Comment 1.2:
Comment: Thank you for the replies to my questions and comments. After reading the other reviews and answers, most of my concerns are addressed, so I’d be happy to support acceptance, and I’ll raise my score.
---
Reply to Comment 1.2.1:
Title: Thank you so much for your support
Comment: Thank you so much for your time and efforts in reviewing this paper. Your insightful questions really help us improve the manuscript. We really appreciate that you support acceptance of our submission! | Summary: The authors present a unified framework for solving deterministic and stochastic physical dynamical systems on high-dimensional mesh space. Instead of updating values at each discretized mesh, the paper introduces an approach that evolves states in a low dimensional latent space through encoding states and physical parameters. This encoding is done by a message passing graph neural network and multi-head attention (MHA) model, which encodes physics parameters into a conditional vector. The encoded state is then evolved using a conditional normalizing flow, dependent on the encoded conditional vector. The effectiveness of this method is shown through experiments, in which it outperforms other mesh-based ML models in terms of accumulation error in deterministic problems, with applications in stochastic problems also being demonstrated.
Strengths: The proposed method is simple and applicable to wide range of PDE-simulation problems with discretized domain. Wide variety of experiments are conducted, and both quantitative and qualitative evaluations are provided.
Weaknesses: About the novelty, it is still unclear that what components of the proposed model enables solving both deterministic and stochastic systems efficiently. Are there any limitations that prevent existing models from being applied to both systems?
It is concerning that the proposed method takes all the previous (latent) vectors as input to predict the next state, as opposed to the fixed small number of input vectors of MeshGraphNets reported in Section 5; this may result in unfair advantages over other methods. Also, although in Table 1 the reconstruction error of PbGMR-GMUS is compared against GMR-GMUS, it is still unclear how the proposed model would compare against an ablation model whose encoder is GMR-GMUS.
The proposed model is compared against baselines which evolve states in original space. There is a possibly missing reference [1], and the motivation of using latent evolution is similar to the presented one. How does this work compare against it? Is the proposed method also applicable to grid space?
[1] Tailin Wu, Takashi Maruyama, and Jure Leskovec. Learning to accelerate partial differential equations via latent global evolution (NeurIPS 2022)
**Minor comments**
In Appendix A.10, Figure 11 is cited, but seemingly wrong.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please have a look at the weakness mentioned in the strength and weakness section and address these. Overall the idea seems interesting, the authors need to substantiate their claims in light of existing literature and possibly more experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the limitations are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The reviewer is positive about the applicability and performance of the proposed framework on a wide range of PDE-simulation problems, which is excellent support for our work. The reviewer also encourages us to conduct further experiments to highlight our novelty over previous models. Here are our responses to your questions.
> **Q1:** It is still unclear that what components of the proposed model enable solving both deterministic and stochastic systems efficiently. Are there any limitations that prevent existing models from being applied to both systems?
**Response:** This is a great and important question. Please refer to our global rebuttal **Q2**. The most challenging part for previous models is predicting stochastic systems in mesh space. Our new experimental results indicate that previous SOTA models cannot closely capture the underlying distribution or produce different samples with the same physical statistics (Table 13, Figure 20, and Figure 21 in the new supplementary experiments).
Also, existing CNN-based models, such as video generation models, cannot handle irregular mesh spaces. To overcome the above limitations, we first propose PbGMR-GMUS to encode the graph space into a latent space with highly accurate reconstruction. A novel attention-based temporally conditioned flow model is then developed as the probabilistic model. At each time step, only a low-dimensional vector needs to be predicted, and the model can capture spatiotemporal dependencies and stochasticity efficiently.
We conducted comprehensive ablation studies, as shown in Table 8 and Table 9 in the one-page rebuttal. The results demonstrate that the proposed model increases the accuracy of deterministic training. Moreover, the proposed flow model enables learning stochastic processes and generates different realizations of the stochastic systems at low cost. Furthermore, we tested MGN and GMR-GMUS on the stochastic dataset. In Figure 20 and Table 13, the performance of the proposed model is better than that of the other learning-based models; in particular, MGN cannot capture the real distribution at all. The result of GMR-GMUS with a sequential model, though reasonable-looking in statistics, can only reproduce the same output given the same input, which does not reflect the stochasticity.
> **Q2:** It is concerning that the proposed method take all the previous (latent) vectors as input to predict the next state, as opposed to fixed small number of input vectors of MeshGraphNets reported in Section 5 ...
**Response:** MeshGraphNet's inputs/outputs are defined on the high-dimensional mesh space instead of a low-dimensional vector space, whereas the proposed model makes its predictions in latent space with vectors as inputs. Though our model takes more latent vectors as input, its computation is still less than MeshGraphNet's during the roll-out stage if the number of steps is not very large. Since the model benefits from longer time dependencies, we take all previous steps in the experimental setting. We can also use a moving window to reduce the cost further. We ran one additional experiment reducing the number of inputs at inference time, shown in supplementary Table 11: when the window size is reduced from 400 to 150, the degradation in accuracy is not significant. So, with a sliding window, the model is also applicable to cases with thousands of steps or more.
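The sliding-window rollout described above can be sketched in plain Python (the `step_fn` interface is an assumption standing in for the temporal model):

```python
def latent_rollout(step_fn, z0, n_steps, window=150):
    """Autoregressive rollout in latent space with a sliding window.

    step_fn : stand-in for the temporal model; maps a list of past
              latents to the next latent (an assumed interface)
    window  : only the most recent `window` latents condition each
              prediction, bounding per-step cost for long sequences.
    """
    history = [z0]
    for _ in range(n_steps):
        history.append(step_fn(history[-window:]))
    return history
```

With `window` equal to the sequence length this reduces to conditioning on all previous steps, as in the experimental setting.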
> **Q3:** Although in Table 1 the reconstruction error of PbGMR-GMUS are compared against GMR-GMUS, it is still unclear how the proposed model would compare against an ablation model whose encoder is GMR-GMUS.
**Response:** Since PbGMR-GMUS improves on GMR-GMUS, we show that PbGMR-GMUS achieves a lower reconstruction error than GMR-GMUS, and the subsequent time-sequence model benefits from it. In Table 1, our model performs better on all three datasets compared with [11], whose encoder is GMR-GMUS. To help evaluate the performance against an ablation model with a GMR-GMUS encoder, we list ablation results in Table 8 (in the new supplementary experiments) and Table 9. Table 8 shows that the residual connection and pivotal positions are necessary for highly accurate graph reconstruction. Table 9 shows that, even with the same attention-based conditional normalizing-flow model, the variant with GMR-GMUS as encoder (Variant 1) performs worse than the proposed model.
> **Q4:** The proposed model is compared against baselines which evolve states in original space. There is a possibly missing reference [1]. How does this work compare against it? Is the proposed method also applicable to grid space?
**Response:** Thanks for pointing out this related reference; we added it to the related work. The two works share a similar motivation of making predictions in latent space for efficient computation. However, this reference uses convolutional neural networks to encode the original space, which poses difficulties for irregular mesh spaces: using CNNs to encode images is a mature technique, but encoding graphs is difficult, which is precisely the motivation of this work to propose a general method for encoding graphs better. Also, [1] concentrates only on deterministic processes and does not demonstrate results on stochastic physics. Still, this reference predicts the PDE process in latent space, and it will help readers to know more related works in this area. A direct comparison is not straightforward since we concentrate on systems defined on irregular meshes. The proposed method is also applicable to grid space, but existing CNNs already work very well on such tasks.
> **Q5:** In Appendix A.10, Figure 11 is cited, but seemingly wrong.
**Response:** Thanks for pointing it out; we already corrected it in the appendix. It should be Figure 10.
Following the reviewer's comments has improved the quality of the submission significantly. We genuinely appreciate the reviewer's effort and help.
---
Rebuttal Comment 1.1:
Title: Follow-up question
Comment: I appreciate that the authors provided detailed explanation for my questions as well as conducted additional experiments. My concerns were satisfactorily addressed, and I was convinced that the proposed method is novel and very effective to solve deterministic and stochastic problems. Although the architecture looks straightforward given the scope of the paper, each of the components is chosen adequately — its adequateness is supported in experiments in the main text and strengthened by additional experiments such as ablation study and comparison against other strong (deterministic) baselines.
I still have one follow-up question. What would be the inference time of the proposed model and how does it compare against that of other baselines? Are there any components being bottleneck when performing forward simulation with the proposed model? While the runtime is out of the scope of this paper, it would be beneficial to make sure that the model does not incur significant increase in the computational cost.
---
Reply to Comment 1.1.1:
Title: Inference time
Comment: We really appreciate your careful reading of our response and your immediate reply, and we are very glad to know your concerns are addressed. We agree it is important to examine how the inference time changes when we unify deterministic and stochastic problems into the same framework. We compared the inference time of the proposed model with GMUS + Transformer [11] on all four datasets; the results are shown in the following table (in seconds; all datasets and models are tested on an RTX A6000). There are mainly two steps during inference for both models: 1) temporal prediction in the latent space, and 2) mapping latent vectors into mesh space (decoding). For the decoding part, Pb-GMUS has the same inference time as GMUS. For temporal prediction, the proposed conditional flow takes slightly more time: to include stochastic systems, there is an additional sampling process for the flow model, which accounts for the extra time. For both models, the decoding part takes more time, especially when the original mesh size is large (e.g., stochastic flow). So the newly proposed model does not significantly increase the total computational cost. We will add this result to the revised version, and we believe such an analysis is beneficial to this work. Thanks for your constructive suggestions!
| Dataset | (Pb-)GMUS | Transformer only | Conditional flow (Ours)|
|------------------------------------|-----------|------------------|------------------------|
| Cylinder flow (for 400 time steps) | 2 | 0.93 | 1.66 |
| Sonic flow (for 40 time steps) | 0.20 | 0.59 | 0.78 |
| Vascular flow (for 250 time steps) | 2.63 | 0.79 | 1.27 |
| Stochastic flow (for 240 time steps) | 6.93 | 0.87 | 1.38 | | Summary: The authors present a new approach for modeling fluid dynamics. The approach involves using GNNs to derive global latent space representations and construct a transformer-based conditional generative models for the dynamics. The resulting model is able to generate stochastic predictions from given initial conditions and system parameters at inference time. The proposed framework is compared against some competitive baselines on a few deterministic and stochastic benchmarks.
Strengths: * This work seeks to perform probabilistic modeling of high-dimensional dynamical systems originating from parametric PDEs. This is a very important problem as traditional methods are typically quite expensive and lacking ways to quantify uncertainties. The method proposed here is a nice attempt that addresses both challenges.
* The method presented is very flexible and easily adapts to various problems and configurations. Because of the complexity of the cases, it has the potential to serve as a fast surrogate for modeling turbulence useful for inner-loop applications.
* The introduced regeneration learning framework is quite novel as it effectively integrates different blocks (graph representation learning, attention-based model, normalizing flows). Such formulation offers an efficient and attractive alternative to directing learning long-time spatiotemporal fields on the original mesh. Moreover, the framework can handle deterministic/stochastic systems in a unified way.
* The presentation is very clear, logical and easy to understand in general. The plots are of high quality and informative.
* Claims are backed by strong and convincing empirical evidence. The examples in the numerical experiments section demonstrate sufficient complexity and variety to help justify the value of the proposed method.
Weaknesses: * A number of minor typos in texts and figures (see section below).
* More discussions on the limitations would strengthen the paper further.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Questions
* How much of the improvements can be attributed to using *pivotal positions* and how much to using a different architecture for the latent dynamical model? It would be nice to have some ablation studies
* What is the advantage of using the RealNVP vs other generative models (e.g. there is a number of normalizing flows [here](https://github.com/VincentStimper/normalizing-flows))?
Comments
* Ln 168 - I believe it’s inaccurate to say the geometry of the graph is not included in the latent vector - it’s really embedded implicitly in the representation.
* Ln 243 - “.. potentially improving the interpretability of deep learning systems” - UQ and interpretability are different concepts.
Typos
* Equation (1), $j$ is not defined in text. Based on what follows, it can be a stochastic operator as well?
* Ln 95 - functions that *are* parameterized by neural networks
* Equation (5) - consider not using $p$ for position to avoid confusion with probability densities?
* Typeset errors on ln 219 and 289?
* Figure 3 - axis labels are missing
* Table 3 - what is referred to by “Flow” is your model? I don’t believe this is referenced elsewhere in the text.
* In appendix, Ln 589 - the results are in Figure 10 not 11
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors do present ablation studies showing that a certain number of *pivotal positions* are required. Other potential areas to discuss are scalability, applicability to more complex systems, sampling requirements and parameter extrapolations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's supportive evaluation of our work. We are trying our best to address your concerns with the following answers.
> **Q1:** A number of minor typos in texts and figures (see section below).
**Response:** Thanks for your careful reading. We already fixed them in the revised version.
> **Q2:** More discussions on the limitations would strengthen the paper further.
**Response:** Thanks for this concern. We agree it is necessary to discuss the limitations of the proposed model. Please refer to global rebuttal **Q3** for details.
> **Q3:** How much of the improvements can be attributed to using pivotal positions and how much to using a different architecture for the latent dynamical model? It would be nice to have some ablation studies
**Response:** Thanks for asking this question. We already made more comprehensive ablation studies and the reviewer can refer to our global rebuttal and new supplementary experiment table 8 and table 9. Ablation studies show that pivotal positions are helpful to improve reconstruction accuracy. And the proposed conditional flow model performs better against a single transformer model with the same encoder-decoder structure. We believe these new ablation studies are helpful in identifying the use of each component in our new design.
> **Q4:** What is the advantage of using the RealNVP vs other generative models (e.g. there is a number of normalizing flows here)?
**Response:** Thanks for the reviewer's insightful question on different normalizing-flow models and for providing the link. The key design of the time-series prediction model is a transformer-based encoding-decoding structure that captures temporal conditions as well as global physical parameters. We adopt the flow model because it has fewer parameters to compute and sampling at each step is easier compared with other probabilistic models, such as a full Gaussian distribution. We think other normalizing flows are also feasible here: in supplementary Table 12, we test another normalizing-flow model, MAF, and find its results comparable to RealNVP's, indicating that the proposed framework is flexible.
> **Q5:** Ln 168 - I believe it’s inaccurate to say the geometry of the graph is not included in the latent vector - it’s really embedded implicitly in the representation.
Ln 243 - “.. potentially improving the interpretability of deep learning systems” - UQ and interpretability are different concepts.
**Response:** Thanks for reading our script carefully and giving constructive feedback. We agree that the geometry is implicitly embedded into the representation. Moreover, we agree that UQ and interpretability are different concepts. The current model can capture the UQ for deterministic systems and generate physical realizations for stochastic systems. We already fixed these in the revisions.
We've made updates based on the feedback provided, and we believe that these changes substantially improve the manuscript. We're grateful for the valuable comments and thank you for your time and expertise.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. The additional ablation studies are very informative - it is nice to see the effects of pivotal positions, residual connections, and conditional normalizing flows in isolation. And the additional baselines further justify the effectiveness of the proposed model.
I have another clarification question: regarding errors (both RMSE and CRPS) reported, are they based on the mean of several realizations of your model prediction? If so, what is the ensemble size?
---
Reply to Comment 1.1.1:
Comment: Thank you so much for carefully reading our rebuttal and giving us feedback! We are glad our rebuttal helped the reviewer better understand the novelty and significance of unifying predictions of deterministic and stochastic dynamics in one model; moreover, our proposed framework achieves this without introducing much computational overhead during inference. For RMSE, we report the mean RMSE error with an ensemble size of 10. For CRPS, we generated 30 different realizations for the stochastic dataset. We appreciate the reviewer's clarification question and will add this to the manuscript.
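For reference, the standard ensemble form of CRPS can be estimated as below (an illustrative NumPy estimator; the paper's exact evaluation code may differ):

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Empirical CRPS of an ensemble against one scalar observation:
    CRPS = E|X - y| - 0.5 * E|X - X'|   (lower is better).

    samples : 1-D array of ensemble realizations
    obs     : the observed scalar value y
    """
    x = np.asarray(samples, dtype=float)
    term_obs = np.abs(x - obs).mean()                       # E|X - y|
    term_spread = np.abs(x[:, None] - x[None, :]).mean()    # E|X - X'|
    return term_obs - 0.5 * term_spread
```

A perfect deterministic ensemble (every member equal to the observation) scores exactly zero.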
Rebuttal: Dear reviewers,
We would like to express our gratitude for your constructive feedback on our manuscript. The insights provided have been invaluable in refining our work. We have uploaded one-page supplementary experimental results in response to your comments.
Here we want to highlight several contributions mentioned in the reviews:
- The proposed method is simple and applicable to wide range of PDE-simulation problems with discretized domain. Wide variety of experiments are conducted, and both quantitative and qualitative evaluations are provided.
- This paper introduces the first learning-based probabilistic model that simulates fluid systems upon the mesh topology.
- This work seeks to perform probabilistic modeling of high-dimensional dynamical systems originating from parametric PDEs. This is a very important problem as traditional methods are typically quite expensive and lacking ways to quantify uncertainties. The method proposed here is a nice attempt that addresses both challenges.
- The introduced regeneration learning framework is quite novel as it effectively integrates different blocks (graph representation learning, attention-based model, normalizing flows). Such formulation offers an efficient and attractive alternative to directing learning long-time spatiotemporal fields on the original mesh. Moreover, the framework can handle deterministic/stochastic systems in a unified way.
- The simulation of dynamics is achieved by a probabilistic method using generative models, which rollout vivid predictions for long-term predictions. The proposed method is able to handle both deterministic and stochastic fluid dynamics thanks to the probabilistic model and achieves superior performance over previous methods.
Also, based on the constructive suggestions, we conducted several ablation studies to identify the contribution of each component and compared results on stochastic physics with the learning-based models MeshGraphNet [3] and GMR-GMUS + transformer [11]. Here, we address the issues the reviewers are most concerned about.
> **Q1** There should be ablation studies to identify the use of each component. (Reviewer YdeK, 9WF1, and 8v2N)
**Response:** We appreciate the reviewers' suggestions and believe such ablation studies are helpful. We first check the velocity reconstruction error on the backward-facing step flow dataset with and without the residual connection and pivotal positions. Table 9 indicates that each component improves the graph reconstruction accuracy. We also test the influence of PbGMR-GMUS and the attention-based conditional flow model on the final performance of the deterministic task. Table 10 in the supplementary experiments indicates that both the encoder-decoder part and the flow model are necessary for the success of the whole framework.
> **Q2** The model should be compared against other learning-based models, such as MeshGraphNet and GMR-GMUS, on the stochastic dataset to prove the superiority of the proposed approach. What are the true challenges of the prediction task for stochastic fluid systems? (Reviewer 9WF1, YdeK)
**Response:** It is very hard for previous models to predict stochastic mesh-based physical systems, because defining a probabilistic model on such sequential data is not easy. If we use a Gaussian distribution for the targeted system, which contains $N$ mesh nodes, then at each time step the model should predict $N$ means and an $N \times N$ covariance matrix, while also taking temporal and spatial dependencies into consideration; both make the computation too heavy to be applicable, and sampling from such a distribution is also computationally expensive. Most previous models, such as MeshGraphNet [3] and decoding-only GMR-GMUS [11], are not probabilistic models, so they cannot be applied to any stochastic system: given the same initial state and physical parameters, they always produce the same prediction. In contrast, as a probabilistic model, the proposed PbGMR-GMUS + conditional flow model fits such a stochastic process and draws varied samples with the same physical statistics. To demonstrate this, we test MGN and GMR-GMUS on the stochastic dataset. In Table 13, the performance of the proposed model is better than the other learning-based models on several metrics. In Figure 20, we visualize the mean and variance of velocity from each model and find that MGN cannot capture the real distribution at all. The result of the GMR-GMUS model, though reasonable-looking in statistics, can only produce the same output given the same input condition, which does not reflect the stochasticity of the dataset. In Figure 21, we draw two samples from GMR-GMUS [11] and find that the two samples are exactly the same.
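The parameter blow-up of a full Gaussian over the mesh can be made concrete with a back-of-the-envelope count (a simple arithmetic sketch, not part of the paper):

```python
def gaussian_params_per_step(n_nodes):
    """Free parameters of a full Gaussian over N mesh values at one time
    step: N means plus N(N+1)/2 independent covariance entries (the
    covariance matrix is symmetric)."""
    return n_nodes + n_nodes * (n_nodes + 1) // 2
```

For a modest mesh of 1,000 nodes this is already over half a million parameters per time step, which illustrates why predicting a low-dimensional latent vector instead is so much cheaper.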
> **Q3** More discussions on the limitations would strengthen the paper further. (Reviewer 9WF1, fgSk)
**Response:** We discussed the limitations of the proposed model in Appendix, Section A.12; Reviewer YdeK also noted that the limitations are discussed in the appendix. The model cannot capture an accurate distribution for stochastic systems close to boundary areas and in regions where the flow is not fully developed (details and figures are also provided there). We agree that readers may overlook this discussion, and we will add some analysis to the main paper with a reference to the appendix.
In conclusion, we have incorporated the insightful feedback provided by the reviewers, enhancing the overall quality of our paper. We sincerely value the time and effort invested in reviewing our work and believe that the revised manuscript and supplementary experiments address the concerns raised.
Pdf: /pdf/c9a06bb347c70158b791a789bc50aa2939ed64d7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Constructing Non-isotropic Gaussian Diffusion Model Using Isotropic Gaussian Diffusion Model for Image Editing | Accept (poster) | Summary: Gaussian diffusion is a common concept used in image processing, physics, and machine learning. It refers to a process where values (like the pixels in an image) are "smoothed out" or diffused according to a Gaussian function (also known as a normal distribution or bell curve), which provides a natural and mathematically convenient model for this diffusion process. In the context of Gaussian diffusion, the term "isotropic" means that the diffusion is uniform in all directions. In contrast, "non-isotropic" Gaussian diffusion means that the diffusion is not uniform in all directions: the amount of diffusion a pixel undergoes may depend on direction.
The authors propose a method to use a pre-trained Isotropic Gaussian Diffusion Model (IGDM) for sampling in the context of a Non-isotropic Gaussian Diffusion Model (NGDM). In the first step, they define the NGDM with added independent non-isotropic Gaussian noise. This suggests that they are introducing a type of Gaussian noise that's directionally dependent, in line with the concept of non-isotropic diffusion. Then, they detail how to implement the NGDM using the pre-trained IGDM. This process involves 'rectifying' the spatially different times of the noise and denoise procedures in the NGDM. It seems they're addressing the inherent differences between isotropic and non-isotropic models and adjusting the processes accordingly. Finally, they present a data sampling algorithm for the proposed NGDM that uses the pre-trained IGDM. This suggests that they're leveraging the pre-trained model's capabilities in the new context of non-isotropic diffusion.
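The core distinction the summary draws — one shared noise variance versus a per-pixel variance map — can be sketched in a few lines (a conceptual illustration only, not the authors' implementation; the toy image and `sigma_map` are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4))        # toy "image"

# Isotropic: a single scalar variance applied to every pixel.
sigma_iso = 0.5
noisy_iso = image + sigma_iso * rng.standard_normal(image.shape)

# Non-isotropic: a per-pixel standard-deviation map.
sigma_map = np.linspace(0.0, 1.0, 16).reshape(4, 4)
noisy_aniso = image + sigma_map * rng.standard_normal(image.shape)

# Pixels where sigma_map is ~0 are left essentially unchanged, so only the
# regions relevant to an edit need to be heavily noised.
assert noisy_aniso.flat[0] == image.flat[0]   # sigma is exactly 0 there
```

This is what lets the method preserve some image regions while translating or editing others: the variance map decides how much each pixel is perturbed before denoising.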
Strengths: 1. Overall, they're taking an existing model trained on isotropic diffusion, adapting it for non-isotropic diffusion, and presenting a new algorithm for sampling data within this context.
2. The authors show through experiments that their proposed method is competitive.
Weaknesses: 1. The authors discussed the settings of the parameters $\alpha$ and $\beta$, but I do not see a strong link to the quality of the generated images.
2. From the generated images shown in Figures 2-4 and 6, I cannot really say that the proposed method consistently outperforms the others. There is simply no "wow" effect. Figure 6 shows that the color of the head also changed, which is not described in "black leather jacket".
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Please compare the computational cost of all methods presented in Section 4.
2. Please show a couple of examples where NGDM performs worse, and give the reasons to explain why.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: It is simply impossible to thoroughly "prove" a method is better by just displaying some selected results. A much more robust and comprehensive empirical study followed by why the proposed NGDM works better would be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The link of $a$ and $b$ to the quality of the generated images.**
We present visual results as the hyper-parameters $a$ and $b$ vary in Figure 6 of the main paper. Keeping $b$ constant, when $a$ is small, all element values of the weighting matrix obtained after the Sigmoid transformation are relatively small. The noise variance on each pixel is then small, so the generated image stays similar to the source image but does not align well with the target prompt. As $a$ grows, the generated image begins to align with the target prompt. Keeping $a$ constant, $b$ acts in the opposite direction: when $b$ is small, the generated image does not preserve the source image information well; as $b$ grows, the generated image preserves more of it.
**Q2: The proposed method does not outperform the other consistently. Figure 6 shows that the color of the head also changed, not described in "black leather jacket".**
We focus on controllable image editing, which edits the image according to the target prompt while minimally modifying the source image. Compared with other methods, ours better preserves the background, pose, etc. For example, for the images with complex backgrounds in columns 2-5 of Figure 2, our method accurately keeps the background of the source image unchanged while translating cats into dogs; other methods either blur the background or fail to maintain it. In Figure 3, the source images in columns 2 and 4 have detailed backgrounds, and our method preserves these details to the greatest extent while editing the images.
As analyzed above and shown in Figure 6 of the main paper, when $a$ is 20 or $b$ is 0, the source image information cannot be well maintained, resulting in changes to the head's color. A balance between the two can be achieved by choosing appropriate parameters: with $a=10.0$ and $b=5.0$, our method preserves the head color while applying the edit described by "black leather jacket".
**Q3: Computational cost comparison.**
Tables r6-1 and r6-2 show the computational efficiency comparison. Our method has relatively small inference time and requires no additional training.
Table r6-1: Computational time and memory cost of methods with image space-based diffusion model.
|Method|SDEdit|ILVR|EGSDE|DDIB|DiffuseIT|NGDM (Ours)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Time per iteration (s)$\downarrow$|**18**|44|62|210|48|42|
|Memory(GB)$\downarrow$|3.3|**2.8**|4.5|3.8|16.6|7.4|
Table r6-2: Computational time and memory cost of methods with latent space-based diffusion model.
|Method|SDEdit|DiffEdit|SINE|DDS|InstructPix2Pix|EDICT|NGDM (Ours)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Time per iteration (s)$\downarrow$|**3**|9|3480|46|12|648|6|
|Memory(GB)$\downarrow$|10.0|**6.7**|28.0|16.7|18.0|13.8|**6.7**|
**Q4: A couple of examples where NGDM performs worse and reasons for explaining why.**
We show a few failure examples in Figure 6 of the Appendix. We think the failure in the first column may be due to artifacts generated by the underlying model we depend on, and the failures in columns 2 and 3 may be due to inaccuracies in the computed weighting matrix.
**Q5: A much more robust and comprehensive empirical study followed by why the proposed NGDM works better.**
We achieve controllable image editing by adding to each pixel an amount of noise corresponding to the degree to which it needs to be edited. Our method effectively preserves the original content by adding noise with small variance to regions irrelevant to the editing task. Compared with mask-guided image editing such as DiffEdit, our method avoids the edge-artifact problem caused by the mask.
We add more experimental results to justify the effectiveness of our proposed NGDM, including the comparison results on more datasets and more SoTA methods, the results of user study, and the results of computation efficiency.
We add two natural datasets (COCO-S and the DreamBooth dataset [Ruiz N, et al., CVPR2023]) and three SoTA methods, DDS [Hertz A, et al., arXiv:2304.07090, 2023], InstructPix2Pix [Brooks T, et al., CVPR2023], and EDICT [Wallace B, et al., CVPR2023], for comparison. The quantitative results in Table r6-3 indicate that our method outperforms competing methods by achieving a better trade-off between CLIPScore and LPIPS.
Table r6-3: Quantitative comparison on COCO-S dataset and Dreambooth dataset.
|Method|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours ($a$=10.0,$b$=5.0)|Ours ($a$=10.0,$b$=6.0)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CLIPScore$\uparrow$ (COCO-S dataset)|28.58|30.31|30.48|30.77|31.28|**31.75**|31.45|
|LPIPS$\downarrow$ (COCO-S dataset)|55.76|29.60|31.53|24.00|36.55|28.80|**23.43**|
|CLIPScore$\uparrow$ (Dreambooth dataset)|19.64|19.88|**20.17**|19.70|19.64|19.81|19.75|
|LPIPS$\downarrow$ (Dreambooth dataset)|59.82|27.47|33.26|22.52|40.50|24.86|**19.95**|
We conduct a user study by inviting 40 participants and providing each with 30 randomly selected source images, for which the generated results of the different methods are displayed in random order. Participants were asked to choose the image that best applies the requested edit while preserving most of the original image details. The percentage of votes for each method is shown in Tables r6-4 and r6-5, demonstrating that the participants exhibit a strong preference for our method.
Table r6-4: User study results on Cat $\rightarrow$ Dog task.
|ILVR|SDEdit|EGSDE|Ours|
|:-:|:-:|:-:|:-:|
|11.5%|10.5%|12.5%|**65.5%**|
Table r6-5: User study results on the remaining tasks.
|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|
|4.5%|10.0%|3.0%|4.5%|6.0%|**72.0%**|
---
Rebuttal Comment 1.1:
Title: Further clarification on qualitative comparisons in Figures, effects of $a$ and $b$, more experiments and analysis.
Comment: Dear reviewer, thanks for your comments and questions. Besides the above rebuttal, in the author-reviewer discussion phase, we would like to further clarify as follows.
(1) **Additional clarification on qualitative comparisons in Figures and user study.** Beside the rebuttal, we additionally clarified the qualitative comparisons and user studies in the official comment with the title "To ACs and Reviewers: further clarification on the qualitative comparison with other methods and on the user study in the responses", following the "Author Rebuttal by Authors". Please refer to them for detailed clarifications.
(2) **The effects of $a$ and $b$ on the quality of generated images.** Besides the responses to Q1, we have also reported the metrics of FID and SSIM (on Cat $\rightarrow$ Dog translation task) in Table 2 of the paper, by fixing one and changing the other. Please refer to the responses to Q1 and Lines 193-200 in the paper for the discussions.
(3) **About the selection of the displayed examples.** We clarify that the examples displayed in Figures 2-4 are randomly selected from each dataset for fair comparison. Due to the space limit we can only show a few examples, but the same conclusions hold for the other examples in these datasets.
(4) **More comprehensive experimental study.** In the rebuttal, we additionally conduct experiments on two natural datasets (COCO-S and DreamBooth Dataset) and additionally compare with three SoTA methods DDS, InstructPix2Pix, and EDICT. The quantitative results are shown in Table r6-3, and the qualitative results are shown in Figure r1 in the uploaded one-page pdf in the Author Rebuttal with the title "Author Rebuttal by Authors". Additionally, we conduct user studies in the rebuttal, of which the results are reported in Tables r6-4 and r6-5. Please refer to them.
(5) **Why NGDM works better.** Our method works better mainly because it is based on the proposed non-isotropic Gaussian diffusion model that takes different noise variances for different pixels in the Gaussian diffusion model. The input image is edited based on adding non-isotropic noises and then denoised by Algorithm 1 in the paper to generate the edited image, using the pre-trained isotropic Gaussian diffusion model. The pixel's noise variance is computed based on Eq. (8), according to the pixel's relevance to the editing task using the soft weighting matrix $\lambda(I)$. The pixels with higher noise variance will be edited more heavily than the pixels with smaller noise variance, to ensure that the input image is correctly edited while preserving the image regions unrelated to the editing task unchanged. The experiments and qualitative comparisons have demonstrated the advantage of our method.
---
Rebuttal 2:
Comment: Reviewer LtuA: Please respond to the author's rebuttal ASAP. | Summary: This paper presents a novel Non-isotropic Gaussian Diffusion Model (NGDM) for image-to-image translation and image editing tasks. The central idea of NGDM is to add independent Gaussian noises with different variances to different pixels, thus achieving controllable image translation and editing based on the amount of noise variance added to each pixel. Unlike prior models, the NGDM doesn't need specific training; instead, it rectifies into an isotropic Gaussian diffusion model where different pixels have varying total forward diffusion time.
To generate images from the diffused ones, the authors propose a sampling method that initiates denoising at different times for different pixels using a pre-trained isotropic Gaussian diffusion model. This process allows for the preservation of certain parts of the image while editing or translating others, thereby providing flexibility in performing such tasks.
The paper demonstrates the effectiveness of NGDM through experiments on three datasets encompassing real and synthetic images. The results show that NGDM outperforms state-of-the-art score-based diffusion models (SBDMs) in terms of FID and SSIM metrics for Cat → Dog image translation task. Similar superior performance was also observed in text-guided image translation and image editing tasks.
In summary, the main contributions of the paper are:
1. The proposal of a novel Non-isotropic Gaussian Diffusion Model (NGDM) for image-to-image translation and image editing.
2. Demonstration of how to rectify the NGDM into an isotropic Gaussian diffusion model, which allows leveraging a pre-trained isotropic model for denoising and image generation.
3. A new sampling method for generating images by starting denoising at different times for different pixels.
4. Validation of the proposed model against state-of-the-art SBDMs across several tasks and datasets, showing improved performance.
5. Exploration of the trade-off between image fidelity and alignment with desired translation/editing targets, providing an avenue for controlling this balance with varying hyperparameters.
Strengths: **Originality:** The proposed Non-isotropic Gaussian Diffusion Model (NGDM) demonstrates a high level of originality. It creatively deviates from the standard practice of isotropic diffusion by adding independent Gaussian noises with different variances to different pixels, enabling more controllable image translation and editing. This concept is novel and adds an interesting dimension to image generation models.
**Quality:** The paper is of high quality. The authors provide a comprehensive explanation of the model, including detailed descriptions of the methodology. They describe the process of rectifying NGDM into an isotropic Gaussian diffusion model, which allows leveraging a pre-trained model for denoising and image generation. The experimental results on multiple datasets convincingly support their claims, showing that their method outperforms state-of-the-art score-based diffusion models in several tasks.
**Clarity:** The paper is well-structured and clear. The authors do an excellent job explaining complex concepts in an understandable manner, making the paper accessible to readers with varying levels of familiarity with the topic. Diagrams and visual aids would further enhance the clarity of the paper.
**Significance:** This work holds significant potential for advancing the field of image-to-image translation and image editing. The method provides a new way to control the translation/editing process by varying the noise added to different pixels, which could be invaluable in numerous applications. Moreover, this model's superior performance across multiple tasks and datasets shows promise for practical use in real-world scenarios. By demonstrating how to leverage existing isotropic Gaussian diffusion models within the NGDM framework, the authors also open up possibilities for future research in this area.
Weaknesses: While the paper presents a novel and promising approach in the Non-isotropic Gaussian Diffusion Model (NGDM) for image-to-image translation and image editing tasks, there are several areas that could be improved:
1. **Contextual Understanding:** The authors might consider providing more information on how their work compares to other existing methods. While they mention a few specific models and techniques, a more extensive literature review and comparison would be beneficial. This would allow readers to better understand the novelty and advantages of NGDM.
2. **Experimentation:** The experiments conducted show promising results; however, the testing could be expanded. The authors only use three datasets for validation. More diverse datasets could help provide a more comprehensive performance evaluation of the method across different scenarios. Also, it would be helpful if the authors could include comparative visual results along with the quantitative metrics.
3. **Model Interpretability:** While the model's performance is impressive, its interpretability seems not addressed. It might be challenging to understand why specific noise variances are assigned to certain pixels during the diffusion process. Providing some insights or analysis into this aspect may make the model more understandable.
4. **Potential Limitations and Pitfalls:** The paper lacks a discussion about potential limitations and considerations when using the proposed model. Addressing possible issues such as computational cost, scalability, and any conditions under which the model might fail or perform suboptimally would provide a more balanced and realistic view of the proposed method.
5. **Societal Impact:** As with any machine learning model, it's crucial to consider the ethical implications and potential misuse cases. Given the model's capacity for image manipulation, discussions around data privacy and consent, as well as potential ways the technology could be misused, should be included.
These suggestions should help to strengthen the paper and broaden the understanding and applicability of the proposed NGDM.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Here are a few questions and suggestions that would benefit from further clarification by the authors:
1. **Noise Variance Distribution:** How is the variance of the Gaussian noise chosen for each pixel? Is there any strategy or algorithmic process behind this, or is it random? Knowing more about this could help understand how the NGDM manages to preserve certain parts of an image while translating/editing others.
2. **Comparison with Other Models:** Could you provide a more detailed comparison between NGDM and other state-of-the-art models? It would be beneficial to include both qualitative (visual comparisons) and quantitative (additional metrics) results where possible.
3. **Performance on Different Scenarios:** How does the model perform when applied to different tasks, especially those that have not been covered in the experimental section? Understanding its versatility and limitations across various scenarios will paint a fuller picture of NGDM's applicability.
4. **Computational Efficiency:** Could you elaborate on the computational efficiency of your model? Specifically, how does the time complexity of the proposed method compare to other isotropic diffusion models? Does adding noise with different variances to different pixels significantly increase computational cost?
5. **Ethical Considerations:** As your model allows substantial manipulation of images, what ethical considerations should be taken into account? Discussions around potential misuse, data privacy, and consent would add value.
6. **Potential Improvements:** Lastly, can you suggest potential avenues for further improving the performance of NGDM? This might include parameter tuning, integrating with other models, or even extending the model to other domains beyond image translation and editing.
Looking forward to your responses to these queries and suggestions, which I believe will provide a clearer understanding of your work and its implications.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Based on the provided information, it appears that the authors have not thoroughly addressed the potential limitations and negative societal impacts of their work.
Here are a few suggestions for improvement:
**Limitations:**
1. **Computational Efficiency:** The authors should address the computational efficiency of their proposed model. Adding noise with different variances to different pixels might increase computational cost, which could be a limitation in some scenarios.
2. **Generalizability:** The paper does not discuss the performance of the model when applied to other tasks or datasets beyond those tested. Addressing this would give a clearer picture of the model's versatility and robustness.
3. **Model Interpretability:** The interpretability of the NGDM also seems unclear. Understanding why specific noise variances are assigned to certain pixels during the diffusion process could be complex. Discussing these aspects would help readers better understand and apply the model.
**Societal Impacts:**
1. **Ethical Considerations:** As the proposed model allows substantial manipulation of images, it is important to consider the ethical implications. The authors should provide a discussion about potential misuse, data privacy, and consent.
2. **Potential Misuse:** With advanced image editing and translation capabilities, there may be potential for misuse of the technology, such as deepfake creation or unauthorized alteration of images. The authors should address these concerns and possible measures to prevent misuse.
Discussing these potential limitations and societal impacts would create a more balanced view of NGDM and help prepare users for any challenges they might face when applying the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Comparison with more methods and experiments on more datasets.**
For the Cat $\rightarrow$ Dog translation task, we add SDDM [Sun S, et al., ICML2023] for comparison. SDDM decomposes the score function into an image "denoising" part and a content "refinement" part for translation. In contrast, we perform translation by adding independent noise with different variances to different pixels. The results in Table r5-1 show that our method achieves the best results among the compared methods.
Table r5-1: Quantitative comparison on Cat $\rightarrow$ Dog translation task.
|Method|ILVR|SDEdit|EGSDE|SDDM|NGDM (Ours)|
|:-:|:-:|:-:|:-:|:-:|:-:|
|FID($\downarrow$)|74.37±1.55|74.17±1.01|65.82±0.77|62.29±0.63|**61.39±0.27**|
|SSIM($\uparrow$)|0.363±0.001|0.423±0.001|0.415±0.001|0.422±0.001|**0.478±0.001**|
For image editing task, we add DDS [Hertz A, et al., arXiv:2304.07090, 2023], InstructPix2Pix [Brooks T, et al., CVPR2023] and EDICT [Wallace B, et al., CVPR2023] for comparison. DDS utilizes delta scoring to provide effective gradients for editing. InstructPix2Pix trains a conditional diffusion model for editing that enables zero-shot generalization. EDICT proposes exact inversion of real and generated images. Differently, we edit the image by adding independent noise with different variances to different pixels.
We additionally consider two natural datasets (COCO-S and DreamBooth Dataset [Ruiz N, et al., CVPR2023]). The qualitative results are shown in Figure r1 in the uploaded one-page pdf. The quantitative results shown in Table r5-2 indicate that our method outperforms competing methods by achieving a better trade-off between CLIPScore and LPIPS value.
Table r5-2: Quantitative comparison with different methods on four datasets.
|Method|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours ($a$=10.0,$b$=5.0)|Ours ($a$=10.0,$b$=6.0)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CLIPScore$\uparrow$ (ImageNet)|30.39|28.98|28.13|27.64|29.28|**30.66**|29.61|
|LPIPS$\downarrow$ (ImageNet)|58.73|32.40|43.57|44.93|41.41|37.10|**31.32**|
|CLIPScore$\uparrow$ (Imagen)|35.81|35.19|34.69|34.75|**36.06**|35.66|35.53|
|LPIPS$\downarrow$ (Imagen)|50.84|21.81|31.38|**19.31**|46.07|24.29|20.28|
|CLIPScore$\uparrow$ (COCO-S)|28.58|30.31|30.48|30.77|31.28|**31.75**|31.45|
|LPIPS$\downarrow$ (COCO-S)|55.76|29.60|31.53|24.00|36.55|28.80|**23.43**|
|CLIPScore$\uparrow$ (Dreambooth)|19.64|19.88|**20.17**|19.70|19.64|19.81|19.75|
|LPIPS$\downarrow$ (Dreambooth)|59.82|27.47|33.26|22.52|40.50|24.86|**19.95**|
**Q2: Model interpretability.**
The image editing task aims to modify the source image under target guidance while leaving regions unrelated to the editing task unchanged.
It is empirically known that a diffusion model generates more diverse novel content when noise with larger variance is added to the image, and preserves content when smaller-variance noise is added. Motivated by this, we achieve controllable editing by adding noise of different variances to different pixels.
As illustrated in the lower part of Figure 1, with $\mathbf{\Lambda}(\mathcal{I})$ computed on the source image by method DiffEdit in Section 3.4, the nose region is given with large variance noise for translating cat to dog, while the background is preserved by adding small variance noise.
**Q3: Potential limitations and improvements.**
A limitation of our method is that an incorrect weighting matrix may lead to failure. Moreover, our method relies on a pre-trained diffusion model: artifacts are produced when the edit involves generation failure cases of the underlying model. We will add these limitations to the paper.
In future work, we will design a better way to calculate the weighting matrix more precisely and efficiently.
**Q4: Societal impact, ethical considerations and potential misuse.**
In our experiments, all the considered datasets are open-source and publicly available. Our work aims to manipulate images with minimum effort; however, the method might be misused to fake images. We will take care in releasing the method to avoid potential negative social impact, and we will support research on identifying and preventing malicious editing. To mitigate potential misuse, we will release our code under a license focused on ethical and legal use, stating explicitly that illegal and unethical use is not allowed.
**Q5: Noise Variance Distribution.**
We describe how to choose the variance of the Gaussian noise for each pixel in Section 3.4 of the main text.
**Q6: Performance on different scenarios.**
We perform two tasks not covered in the experimental section: local style transfer and gender transformation. Local style transfer aims to transform a specified object in the image into another style while preserving the content of the remaining regions; gender transformation aims to transform male into female. Due to space limitations, we show a few examples in Figure r2 of the uploaded one-page pdf, and we will show more examples in Appendix B of the revised version.
**Q7: Computational efficiency.**
Tables r5-4 and r5-5 show the computational time and memory cost of different methods. Our method is comparable to the other methods in both computational time and memory cost.
Table r5-4: Computation time and memory of methods with image space-based diffusion model.
|Method|SDEdit|ILVR|EGSDE|DDIB|DiffuseIT|NGDM (Ours)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Time per iteration (s)$\downarrow$|**18**|44|62|210|48|42|
|Memory(GB)$\downarrow$|3.3|**2.8**|4.5|3.8|16.6|7.4|
Table r5-5: Computation time and memory of methods with latent space-based diffusion model.
|Method|SDEdit|DiffEdit|SINE|DDS|InstructPix2Pix|EDICT|NGDM (Ours)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Time per iteration (s)$\downarrow$|**3**|9|3480|46|12|648|6|
|Memory(GB)$\downarrow$|10.0|**6.7**|28.0|16.7|18.0|13.8|**6.7**|
---
Rebuttal Comment 1.1:
Comment: I acknowledge I have read the rebuttal.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer ev4D for the comments.
Comment: Thanks, and we will carefully revise the paper according to these questions and comments in the reviews. | Summary: The proposed model uses a differentiated reverse sampling strategy for image editing and translation.
Strengths: The proposed model uses a differentiated reverse sampling strategy for image editing and translation.
Weaknesses: 1. Please include the user study results on evaluating the natural image editing for qualitative evaluation.
2. The method lacks novelty. Using different starting points for masked and non-masked regions is just a variation of the DiffEdit approach. Please elaborate on the differences from the baseline models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Please include the user study results on evaluating the natural image editing for qualitative evaluation.**
We conduct a user study by inviting 40 participants and providing each with 30 randomly selected source images, for which the generated results of the different methods are displayed in random order. Participants were asked to choose the image that best applies the requested edit while preserving most of the original image details. We newly add three SoTA methods, DDS [Hertz A, et al., arXiv:2304.07090, 2023], InstructPix2Pix [Brooks T, et al., CVPR2023], and EDICT [Wallace B, et al., CVPR2023], for comparison. The percentage of votes for each method is shown in Tables r4-1 and r4-2; the results demonstrate that the participants exhibit a strong preference for our method.
Table r4-1: User study results of Cat $\rightarrow$ Dog task.
|ILVR|SDEdit|EGSDE|Ours|
|:-:|:-:|:-:|:-:|
|11.5%|10.5%|12.5%|**65.5%**|
Table r4-2: User study results of the tasks on ImageNet, Imagen, COCO-S, and Dreambooth datasets.
|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|
|4.5%|10.0%|3.0%|4.5%|6.0%|**72.0%**|
**Q2: The method lacks novelty. Using different starting point with masked and non-masked region is just a variation of DiffEdit approach. Please elaborate the difference between baseline models.**
First, our method and DiffEdit have different motivations. DiffEdit automatically generates a mask indicating the regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts; it then uses the inferred mask to replace the background with pixel values from the encoding process at the corresponding timestep. In contrast, we add noise to each pixel based on its relevance to the editing task to achieve controllable editing. We establish the relationship between the noise variance and the timestep through a rigorous theoretical derivation, and propose to denoise different pixels at different timesteps so that the pre-trained isotropic Gaussian diffusion model can be reused.
Second, in terms of algorithm implementation, our approach is soft, whereas DiffEdit uses a hard mask to guide the denoising process. Our algorithm starts the denoising process with a few pixels, and then gradually includes more pixels. Each pixel begins denoising at a timestep determined by its relevance to the editing task. This helps to generate more natural images and avoids the artifacts caused by hard masks.
Finally, in terms of generation quality, our method generates images with a more natural appearance. DiffEdit may produce unsmooth results around the boundary of the mask; for example, the images generated by DiffEdit in the first and fourth columns of Figure 3 in the main text show an unnatural combination of background and foreground.
We will clarify more on these differences in revision.
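To make the soft-versus-hard distinction above concrete, here is an illustrative sketch (not the authors' code); the helper names and the linear mapping from relevance to start time are simplifying assumptions, standing in for the paper's Eq. (8).

```python
def active_pixels_hard(mask, t, T):
    # DiffEdit-style hard mask: masked pixels are denoised for the
    # whole schedule; unmasked pixels never enter denoising.
    return [m > 0.5 for m in mask]

def active_pixels_soft(relevance, t, T):
    # Soft schedule: each pixel k gets its own start time T_k derived
    # from its relevance lambda_k in [0, 1], and joins the denoising
    # process once the global time t drops to T_k. The linear map
    # T_k = lambda_k * T is only a placeholder for the paper's Eq. (8).
    return [t <= lam * T for lam in relevance]

T = 1000
relevance = [0.1, 0.5, 0.9]            # toy per-pixel relevance map
for t in (900, 400, 50):
    print(t, active_pixels_soft(relevance, t, T))
# As t decreases, more pixels become active, so the denoised region
# grows gradually instead of switching on all at once.
```

The hard variant keeps the same pixel set for every timestep, which is what produces the boundary artifacts discussed above; the soft variant enlarges the active set as the reverse process proceeds.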
---
Rebuttal Comment 1.1:
Comment: The author rebuttal addressed my concerns, I keep my original score.
---
Reply to Comment 1.1.1:
Title: Thank Reviewer rj2K for the comments.
Comment: Thanks, and we will carefully revise the paper according to these questions and comments in the reviews. | Summary: The authors proposed a Non-isotropic Gaussian Diffusion framework, which they used for image-to-image translation and editing, tasks which apparently are basically “soft inpainting”. The authors crafted a non-uniform version of regular Gaussian diffusion but ultimately used some knowledge from it to drive a regular Gaussian diffusion model. The method looks very similar to a naive inpainting method, with some notable differences.
Strengths: The paper shows a framework for image editing, which is an increasingly important task for practical deployment. The authors also justified their method with some theoretical backing, which is good.
Some good quantitative and qualitative results are shown.
Weaknesses: Despite the method being overall sound, I feel it is explained with more complication than necessary. The crux of the method is simpler than it looks.
Effectively, what the method is doing is the following: it creates a “soft mask” (as shown in Fig. 1 of the supplementary) $\Lambda = [ \lambda_k ]_k$ for a given image that dictates which pixel $k$ requires more change in order to accomplish the translation/editing task. A time endpoint $T_k$ is created for each pixel based on $\lambda_k$ — which is higher for pixels that require relatively more change during the generative process. Starting the generative process at $t=T$, the generation is “faked” (use of Eq. 11) until every pixel hits its own $T_k$; after that, the original de-noiser is used to “fill the gaps”.
- In light of the above explanation, it seems all the theoretical description about NGDM is sort of unnecessary and over-complicated. The NGDM isn’t really used in its true sense. All it does is figure out $T_k = \xi_k(t = T)$ — the expression in Eq. 8 isn’t really necessary apart from its value at $t=T$. Essentially, the NGDM part is solely used to establish “how long to fake the generation for each pixel”.
- Carrying on the previous point, the expression of $\xi_k(t=T)$ is also arbitrarily related to the soft mask values $\lambda_k$. There are many possible ways to incorporate $\Lambda(\mathcal{I})$ into the forward SDE, opening design choices for Eq.6. Depending on how $\Lambda(\mathcal{I})$ is incorporated into the SDE, the expression in Eq.8 will change. In that sense, it’s not clear as to why this specific design was chosen. One can completely do away with the NGDM framework and use an arbitrary function that relates $T_k$ to $\lambda_k$ — what’s wrong with that?
- Section B.2 has a rather good explanation with the hard-soft weighting matrix comparison. This should’ve been in the main paper and the explanation should’ve been geared towards “in-painting with soft weighting”.
- The really important part of this framework seems to be the computation of the attention map $\mathcal{A}(\mathcal{I})$, which isn’t really a contribution of this paper (Section 3.4). How costly is that, in terms of computation and relative to the core editing part?
- Quantitative results are pretty limited, unlike qualitative ones.
- What’s the difference between the editing and image-to-image translation task? How do you incorporate “cat” and “dog” class into the translation task? Isn’t it related to text-based editing itself?
Overall, I like the method, but I think the paper should’ve been written in a very different way. I would like the authors to take inspiration from the RePaint [1] paper.
[1] https://arxiv.org/pdf/2201.09865.pdf
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness section for consolidated comments and questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Not much limitation is mentioned. Some failure cases and their possible explanations are written in the supplementary. They should be in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Necessity of theoretical description and Eqs. (6)&(8)**
We clarify the necessity of the theoretical description and Eqs. (6) and (8) as follows.
Firstly, our motivation is to achieve controllable image editing by adding noise with different variances to different pixels of the image. This motivates us to construct the non-isotropic Gaussian diffusion model (NGDM).
Secondly, to avoid retraining, we propose to use the pre-trained isotropic Gaussian diffusion model (IGDM) to achieve data sampling for the NGDM. The key challenge is how to formulate the relation between the NGDM and the IGDM. Section 3.2 analyzes the relation between the noise variance in the NGDM and the timestep in the IGDM. The deduced result in Eq. (8) establishes the transformation from noise variance to timestep.
With this theoretical foundation, we can first calculate the weighting matrix $\mathbf{\Lambda}(\mathcal{I})$ based on the input image $\mathcal{I}$, then derive the denoising timestep of each pixel based on Eq. (8). If we did away with the NGDM and discarded Eq. (8), we would have no theoretical guidance for designing the function that relates $T_k$ to $\lambda_k$. Our theoretical analysis and Eq. (8) in Lemma 1 provide theoretically interpretable support for the selection of this function.
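As a hedged numerical illustration of this transformation (a sketch under assumptions, not the paper's exact derivation): suppose the rectified time $T_k$ is the point at which the original linear VP schedule has accumulated the fraction $\lambda_k$ of the total noise; then $T_k$ can be recovered from $\lambda_k$ by root-finding, and the special case $\xi_k(T)=\sqrt{\lambda_k}\,T$ for $\beta_{\rm min}=0$ stated later in this discussion can be checked numerically. The schedule constants and the integral-matching condition below are illustrative assumptions.

```python
def beta_integral(t, T, beta_min=0.1, beta_max=20.0):
    # Integral of beta(s) ds from 0 to t for the linear VP schedule
    # beta(s) = beta_min + (beta_max - beta_min) * s / T.
    return beta_min * t + (beta_max - beta_min) * t * t / (2.0 * T)

def start_time(lam, T, beta_min=0.1, beta_max=20.0):
    # Assumed reading of the Eq. (8) transformation: T_k solves
    #   beta_integral(T_k) = lam * beta_integral(T),
    # i.e. the isotropic schedule run until T_k accumulates the same
    # noise as the lambda-scaled schedule run until T. Solved by
    # bisection (beta_integral is increasing in t).
    target = lam * beta_integral(T, T, beta_min, beta_max)
    lo, hi = 0.0, T
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if beta_integral(mid, T, beta_min, beta_max) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Under these assumptions, `start_time` is monotone in `lam`, so less relevant pixels get earlier start times and are perturbed less; with `beta_min=0.0` it reduces to `sqrt(lam) * T`.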
**Q2: Including an explanation of the hard-soft weighting matrix comparison in the main paper with an explanation geared towards "in-painting with soft weighting".**
We will include the hard-soft weighting matrix comparison in the main paper. Mask-guided image editing is similar to the inpainting task: both are prone to incorrect artifacts along the mask boundary. Our Algorithm 1 gradually enlarges the denoising region as the denoising steps of the diffusion proceed. Each pixel begins to be denoised at the timestep determined by its relevance to the editing task. This helps to generate natural images, avoiding the artifacts caused by a hard mask.
**Q3: Cost of computing the attention map.**
We conduct experiments on a single NVIDIA GeForce RTX 3090. For an image with a resolution of 512 × 512, computing the attention map $\mathbf{\Lambda}(\mathcal{I})$ takes about 2.2 seconds, and the core editing part takes about 3.8 seconds. A complete edit of an image takes about 6 seconds in total.
**Q4: Quantitative results are pretty limited, unlike qualitative ones.**
We add two natural datasets (COCO-S and DreamBooth Dataset [Ruiz N, et al., CVPR2023]) and SoTA methods DDS [Hertz A, et al., arXiv:2304.07090, 2023], InstructPix2Pix [Brooks T, et al., CVPR2023] and EDICT [Wallace B, et al., CVPR2023] for more comparison. The quantitative results are shown in Table r3-1. It can be seen that our method outperforms competing methods by achieving a better trade-off between CLIPScore and LPIPS value.
Table r3-1: Quantitative comparison on COCO-S and Dreambooth dataset.
|Method|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours ($a$=10.0,$b$=5.0)|Ours ($a$=10.0,$b$=6.0)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CLIPScore$\uparrow$ (COCO-S dataset)|28.58|30.31|30.48|30.77|31.28|**31.75**|31.45|
|LPIPS$\downarrow$ (COCO-S dataset)|55.76|29.60|31.53|24.00|36.55|28.80|**23.43**|
|CLIPScore$\uparrow$ (Dreambooth dataset)|19.64|19.88|**20.17**|19.70|19.64|19.81|19.75|
|LPIPS$\downarrow$ (Dreambooth dataset)|59.82|27.47|33.26|22.52|40.50|24.86|**19.95**|
We conducted a user study by providing 40 participants with 30 randomly selected source images, with the corresponding generated results of the different methods displayed in random order. Participants were asked to choose the image that best applies the requested edit while preserving most of the original image details. The percentage of votes for each method is shown in Tables r3-2 and r3-3. The results demonstrate that the participants exhibit a strong preference for our method.
Table r3-2: User study results of the Cat $\rightarrow$ Dog task.
|ILVR|SDEdit|EGSDE|Ours|
|:-:|:-:|:-:|:-:|
|11.5%|10.5%|12.5%|**65.5%**|
Table r3-3: User study results of the tasks on the ImageNet, Imagen, COCO-S, and Dreambooth datasets.
|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|
|4.5%|10.0%|3.0%|4.5%|6.0%|**72.0%**|
We tested the computational efficiency of SINE on an NVIDIA Tesla V100 and that of the remaining methods on an NVIDIA GeForce RTX 3090. As Table r3-4 shows, our method is comparable to the other methods in both computational time and memory cost.
Table r3-4: Computational time and memory cost.
|Method|SDEdit|DiffEdit|SINE|DDS|InstructPix2Pix|EDICT|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Time per iteration (s)$\downarrow$|**3**|9|3480|46|12|648|6|
|Memory (GB)$\downarrow$|10.0|**6.7**|28.0|16.7|18.0|13.8|**6.7**|
**Q5: Difference between the editing and image-to-image translation task and the way to incorporate "cat" and "dog" classes into the translation task.**
Image editing modifies source images under specific guidance, while image-to-image translation aims to learn the mapping between two visual domains.
For translation, the "cat" images are used in the forward process, and the "dog" images are used to train the domain-specific diffusion model for generating dog images, without using any text information.
**Q6: The paper should’ve been written in a very different way by taking inspiration from [Lugmayr A, et al., CVPR2022].**
We will cite the paper and take inspiration from the paper in the final version preparation, if accepted.
**Q7: Discussing limitation. Including failure case and their possible explanation in the main paper.**
A limitation of our method is that an incorrect weighting matrix may lead to failure. Moreover, our method relies on a pre-trained diffusion model: artifacts are produced when the desired edit involves failure cases of the underlying generative model. We will include limitations and failure cases in the main paper.
---
Rebuttal Comment 1.1:
Title: Response #1 to rebuttal
Comment: Thanks for the clarifications and extra results. Some of them were helpful.
However, my primary objection remains -- I still do not see how the non-isotropic gaussian theory is necessary. Even though you said ..
> "If we do away with NGDM .. we have no theoretical guarantee for designing function that relates $T_k$ to $\lambda_k$"
.. I do not see any theoretical guarantee here either. Even if there is, it's not quite clear in the paper or in your response.
Also, I tend to agree with two other reviewers who said the qualitative results aren't clearly better, as other methods do quite well. Yet, surprisingly, your user study subjects prefer your method with a significantly high percentage!
At the end, I will keep my BA rating, but at the same time I can see there are grounds for rejection.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Cf2x (Part 1/3)
Comment: Thanks for the comments. We would like to further clarify the necessity of non-isotropic Gaussian diffusion model (i.e., Eq. (6)) and the deduction of the relation between $T_k$ and $\lambda_k$ (i.e., Eq. (8)), and the detailed comparison of different methods as follows.
**(1) Necessity of NGDM in Eq. (6).**
In controllable image editing, our goal is to translate/edit a specific object in an image while preserving the remaining parts of the image. As empirically shown in previous works [10,16], in diffusion models, adding noise with a larger scale (i.e., noise variance) to an image and then denoising it tends to modify the input image heavily, while adding noise with a smaller scale better preserves the information of the input image after editing. This motivates us to add noise with different variances to different pixels for controllable editing, ensuring that image regions with larger noise variance are changed more heavily. This idea is formulated as the non-isotropic Gaussian diffusion model (NGDM) in Eq. (6). In the NGDM, we introduce a $\lambda_k\in [0,1]$ for each pixel in the forward VP-SDE to control the scale of the added noise and the degradation of the image.
**(2) Necessity of the relation between $T_k$ and $\lambda_k$ in Eq. (8).**
Based on the NGDM in Eq. (6), we could train a score-based model for the NGDM and then run the corresponding reverse SDE to obtain the edited image. However, to avoid training the NGDM, we utilize an existing pre-trained isotropic Gaussian diffusion model (IGDM) to realize image sampling for the NGDM, which is the major contribution of this paper. Theorem 1 establishes that the NGDM in Eq. (7) can be rectified into the IGDM in Eq. (9), but with a different total diffusion time $T_k$ for each pixel $k$, determined by Eq. (8). This inspires us to utilize the pre-trained IGDM to achieve the data sampling of the NGDM for image editing. Since the IGDM in Eq. (9) has a different total diffusion time $T_k$ for each pixel $k$, in the reverse process we set a different starting time, i.e., $T_k$, for each pixel $k$ for denoising with the pre-trained IGDM. The corresponding data sampling procedure is Algorithm 1. In summary, the NGDM, Theorem 1, and Eq. (8) motivate and guide the design of the sampling Algorithm 1.
If we did away with the NGDM and Eq. (8) and instead heuristically designed the relation between $T_k$ and $\lambda_k$, we would face the following problems. (I) It is unclear how to design the relation between $T_k$ and $\lambda_k$; there is not even a clear motivation for deciding whether $T_k$ and $\lambda_k$ should be positively or inversely correlated. (II) The functional form of the relation is unclear; even if we chose it empirically, an explanation would be lacking. (III) The motivation and guidance for the design of the sampling algorithm, i.e., that different pixels should start at different times in the reverse diffusion process, would be unclear.
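A minimal sketch of the sampling scheme described in (2), with hypothetical `denoise_step` and `noisify` callables standing in for the pre-trained IGDM's one-step reverse update and its forward noising; this illustrates the per-pixel starting times, not the authors' Algorithm 1 verbatim.

```python
def sample_with_per_pixel_start(x0, T_k, T, denoise_step, noisify):
    # Each pixel k only joins the reverse (denoising) process once the
    # global time t has fallen to its own start time T_k; until then it
    # is kept at its forward-noised value, so pixels with small T_k
    # (low relevance) are barely altered by the sampler.
    x = noisify(x0, T)
    for t in range(T, 0, -1):
        x_prev = denoise_step(x, t)
        noised = noisify(x0, t - 1)
        x = [xp if tk >= t else xn   # started pixels take the reverse update
             for xp, xn, tk in zip(x_prev, noised, T_k)]
    return x
```

With toy operators (e.g. `noisify` adding `t` and `denoise_step` subtracting 1), every pixel returns to its clean value, which makes the bookkeeping easy to sanity-check.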
**(3) Comparison with examples of heuristic designed relation between $T_k$ and $\lambda_k$.**
To further test whether a heuristically designed relation between $T_k$ and $\lambda_k$ works, we compare our approach with the following heuristic designs: $T_k=\lambda_kT$, $T_k=\lambda_k^2T$, $T_k=\sqrt{\lambda_k}T$.
Table r-2-1: Results for different designs of the relation between $T_k$ and $\lambda_k$ on image editing task on COCO-S dataset.
||$T_k=\lambda_kT$|$T_k=\lambda_k^2T$|$T_k=\sqrt{\lambda_k}T$|Ours|
|:-:|:-:|:-:|:-:|:-:|
|CLIPScore$\uparrow$|31.22|28.78|31.47|**31.75**|
|LPIPS$\downarrow$|31.30|**27.55**|29.12|28.80|
From Table r-2-1, we can see that our approach achieves the best CLIPScore and the second-best LPIPS. $T_k=\lambda_k^2T$ achieves the best LPIPS but a noticeably lower CLIPScore than the other approaches; correspondingly, we find that $T_k=\lambda_k^2T$ often fails to modify the object in the image into the desired one to accomplish the editing task. The result of $T_k=\sqrt{\lambda_k}T$ is closest to that of our method, which may be because $\sqrt{\lambda_k}T$ is close to $\xi_k(T)$ in Eq. (8) (if we set $\beta_{\rm min}=0$ in Eq. (8), we have $\xi_k(T)=\sqrt{\lambda_k}T$). Nevertheless, as discussed in **(2)**, there is no a priori motivation for choosing $\sqrt{\lambda_k}T$.
---
Reply to Comment 1.1.2:
Title: Response to Reviewer Cf2x (Part 2/3)
Comment: **(4) Comparison of different methods qualitatively.**
We focus on controllable image translation and image editing that aim to modify the image regions related to the task, while leaving the other regions unchanged to preserve the structure/details of the source image as much as possible. We next clarify that our approach better achieves this goal qualitatively.
In Figure 2 in the main paper for cat $\rightarrow$ dog translation, it can be observed that the regions (background) of the source images outside the cat are preserved in the translated images (the 2nd row) by our approach, while the other approaches cannot always preserve them. For example, (1) *in the 2nd column*, the green grass on the background wall of the source image disappears in the translated images by EGSDE & SDEdit & ILVR, but our approach keeps the green grass; (2) *in the 5th column*, the white stones in the source image were preserved by our approach, but EGSDE & SDEdit & ILVR produce white blurred shapes that are not stones; (3) *in the 6th column*, the pose of the cat in the source image is preserved when the cat is translated into a dog by our method, but the poses of the cats in the images generated by the other methods are changed. Note that for the other columns, our method also better preserves the regions outside the cat. Please zoom in on the figure to see these differences between the translated images.
In Figure 3 in the main paper for translation on the ImageNet dataset, it can be observed that our method (the 2nd row) always accurately translates the category of the source image into the category given by the target prompt, while maintaining the original image regions that are not related to the translation task. The other methods may fail to translate or fail to maintain editing-irrelevant information in some cases. For example, (1) *in the first column*, DiffEdit produces artifacts when editing the oystercatcher into a flamingo, visible as the black area in the middle of the generated image; DiffuseIT does not successfully edit the oystercatcher into a flamingo; DDIB and SDEdit do not maintain the background outside the oystercatcher in the source image. (2) *In the 3rd column*, DiffEdit & DiffuseIT & DDIB & SDEdit do not preserve the regions of the source image outside the convertible, but our method preserves them well. (3) *In the 4th column*, the boundary of the lemon generated by DiffEdit has artifacts, and DiffuseIT & DDIB & SDEdit do not preserve the regions of the source image outside the custard apple, whereas our method preserves most of the details that are irrelevant to editing. (4) *In the 6th column*, DiffEdit & DDIB & SDEdit do not preserve the tree below the kite in the source image, and DiffuseIT does not generate a bald eagle based on the target prompt. Note that for the other columns, our method also accurately translates the category of the source image into the category given by the target prompt, while preserving information that is not related to translation. Please zoom in on the figure to see these differences between the translated images.
In Figure 4 in the main paper for image editing on the Imagen dataset, our method can successfully edit the source image based on the target prompt while making minimal modifications to the source image. For example, (1) *in the first column*, SINE does not edit the beach into a mountain, and the appearance of the panda generated by SDEdit changes compared to the panda in the source image. (2) *In the 2nd and 3rd columns*, DiffEdit and SDEdit do not preserve the background outside the animal in the source image. (3) *In the 4th column*, SINE does not edit the mountain into a beach, and SDEdit changes the appearance of the cat while editing. (4) *In the 5th column*, images generated by DiffEdit and SDEdit cannot preserve the appearance of the cat, while SINE cannot edit the sunglasses into a hat. Note that for the other columns, our method can also successfully edit based on the prompt, while preserving information that is not related to editing. Please zoom in on the figure to see these differences between the edited images.
---
Reply to Comment 1.1.3:
Title: Response to Reviewer Cf2x (Part 3/3)
Comment: In Figure r1 in the uploaded one-page PDF (in the attachment of the official comment titled "Author Rebuttal by Authors") for image editing on the COCO-S and Dreambooth datasets, our method can always edit the source image based on the target prompt while keeping the information irrelevant to editing unchanged, compared with the SoTA methods. For example, (1) *in the first column*, DiffEdit & SDEdit & InstructPix2Pix & EDICT do not generate images that match the target text; moreover, DiffEdit & SDEdit & DDS & InstructPix2Pix & EDICT cannot preserve the detailed background outside the bird. (2) *In the 2nd column*, DDS & InstructPix2Pix & EDICT do not generate luggage based on the target prompt, and DiffEdit & SDEdit & DDS & InstructPix2Pix & EDICT cannot preserve the regions that are not related to editing. (3) *In the 5th column*, DDS & InstructPix2Pix & EDICT do not change the background into a mountain, and SDEdit does not preserve the stuffed animal in the source image. (4) *In the 6th column*, the cats generated by SDEdit & DDS & EDICT do not wear a rainbow scarf, and the regions below the cat in the images generated by DiffEdit & SDEdit & DDS & InstructPix2Pix & EDICT are not similar to the corresponding region in the source image. Our method not only generates an image of a cat wearing a rainbow scarf, but also preserves the detailed background below the cat. Note that for the other columns, our method can also successfully edit based on the prompt, while preserving information that is not related to editing. Please zoom in on the figure to see these differences between the edited images.
In summary, our method can not only successfully edit the source image based on the target prompt, but also keep the information irrelevant to editing in the source image unchanged.
**(5) About the user study in the responses.**
For the user study, we asked 40 participants to evaluate 30 groups of randomly selected source images and the corresponding results generated by the different methods. The generated images of ours and the other methods were displayed in random order, and the participants did not know which method produced which image. Participants were asked to select the result that best applies the requested edit while minimally modifying the source image. For the Cat $\rightarrow$ Dog translation task, we set the question: "Which image below better translates the cat into a dog, while minimally modifying the source image?" For the text-guided image editing task, we set the question: "Which image below better applies the requested edit to the source image on top, while minimally modifying the source image?" In the results of the user study, most participants favored our method. As analyzed in the qualitative results above, our approach better preserves the image regions outside those that the editing task should modify; in the user study, participants tended to choose the successfully translated images that were minimally changed from the source image, which is consistent with the qualitative results. | Rebuttal 1:
Rebuttal: Dear ACs and reviewers,
Thanks for the insightful comments and suggestions on our paper. We have carefully responded to the comments of each reviewer. Meanwhile, we have uploaded a PDF file to show visual results as support material. We will revise our paper accordingly in the final version if accepted.
Best,
Authors
Pdf: /pdf/b1c2e34d7a421df0aec62ce56fe81f2525189dba.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a non-isotropic Gaussian diffusion model, in contrast to current popular isotropic Gaussian diffusion models. The paper also proposes new forward and reverse diffusion processes in accordance with the non-isotropic Gaussian corruption framework that is proposed. The motivation behind using non-isotropic Gaussian noise is that "the diffusion model can generate more diverse novel content if adding noise with larger variance to the image while preserving the image content if adding smaller variance noise". Experimental results on image editing show superior quantitative performance for the proposed method.
Strengths: - The idea of using non-isotropic gaussians for diffusion corruptions seems novel, to the best of my knowledge.
- The forward and backward processes make sense and seem to work well enough to achieve similar performance to vanilla isotropic models.
- The design allows for use of isotropic models which makes this a flexible method that can be easily inserted in many applications without retraining the model, which is very positive.
- The main strength seems to be that the method is able to insert a different level of detail in different parts of the image, which improves editing (since different parts of the image have to be edited in different ways to achieve strong results). One can see this in Fig. 3 with the custard apple -> lemon example.
Weaknesses: - It's hard to see, on average, a large perceptual difference and superiority of the method in the qualitative translation figures. Maybe pointing to critical regions would help, but on average it seems like other methods are not too bad. Also, how were the samples for figures selected?
- A user study on a large amount of data would clarify how users perceive these changes.
- The method does achieve low LPIPS for a large CLIP score, but DiffEdit is close in relative terms.
- The paper focuses on image editing but the title does not include the term. Would be good to include to be more specific.
Some related work that could potentially be included (not mine):
Bansal, Arpit, et al. "Cold diffusion: Inverting arbitrary image transforms without noise." arXiv preprint arXiv:2208.09392 (2022).
Daras, Giannis, et al. "Soft diffusion: Score matching for general corruptions." arXiv preprint arXiv:2209.05442 (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think the paper is well presented and I understood it. The experiments are well laid out. The question would be to directly try to rebut the weaknesses I claim, and I will take into account other reviews to see whether I am being too harsh with respect to the relevance of the effect size on experiments (and the breadth of tasks tackled).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the main limitation is the motivation and experiments. Although the method is interesting, and the theoretical motivation (different parts of the image need to be treated slightly differently) is compelling, the final results don't seem to support the motivation so strongly and are isolated to the editing task. Maybe tackling more tasks with such a general methodology could be compelling? Or a user study that shows that users prefer these images, with large effect size?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: On the performance improvement over baselines and the way to select visualization samples**
We tackle controllable image editing that modifies the image regions related to the editing task while leaving the other regions unchanged. As shown in Figure 2 in the paper, our method can better preserve the background, pose, etc., compared with other methods. For example, Figure 2 shows that for images in columns 2-5 with relatively complex backgrounds, our method can accurately keep the backgrounds of the source images unchanged while translating cats into dogs. Other methods either blur the backgrounds or fail to maintain the source image backgrounds correctly. In Figure 3, the source images in columns 2 and 4 have complex backgrounds, and our method can better preserve these details when editing images. Besides the metrics reported in the paper, we also conduct user study in the rebuttal, which shows an obvious performance improvement achieved by our method (please refer to Q2).
These displayed examples were randomly selected from each dataset, typically with source images having various poses and backgrounds or with various editing types (such as background replacement or object transformation). Due to the space limit, we can only show a few examples, but the same conclusions hold for the other examples in these datasets. We also randomly selected several failure examples, shown in Figure 6 of the Appendix.
**Q2: A user study on a large amount of data.**
We conducted a user study by providing 40 participants with 30 randomly selected source images, with the corresponding generated results of the different methods displayed in random order. Participants were asked to choose the image that best applies the requested edit while preserving most of the original image details. The percentage of votes for each method is shown in Tables r2-1 and r2-2. The results demonstrate that the participants exhibit a strong preference for our method.
Table r2-1: User study results of Cat $\rightarrow$ Dog task.
|ILVR|SDEdit|EGSDE|Ours|
|:-:|:-:|:-:|:-:|
|11.5%|10.5%|12.5%|**65.5%**|
Table r2-2: User study results of the tasks on ImageNet, Imagen, COCO-S, and Dreambooth datasets.
|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|
|4.5%|10.0%|3.0%|4.5%|6.0%|**72.0%**|
**Q3: The method does achieve low LPIPS for a large CLIP score, but DiffEdit is close in relative terms.**
DiffEdit is close to our method on the CLIPScore and LPIPS metrics. However, from a qualitative point of view, as shown in Figure 3, our method generates more natural images, while DiffEdit is prone to producing boundary artifacts caused by its hard mask, resulting in unsmooth blending of foreground and background. Please refer to the images in the first and fourth columns of Figure 3 for examples. This may not be reflected by CLIPScore and LPIPS. Note that the user study results in Table r2-2 show that the participants prefer the results of our method.
**Q4: Including "image editing" in title.**
Thanks for this suggestion; we will consider changing the title to "Constructing Non-isotropic Gaussian Diffusion Model Using Isotropic Gaussian Diffusion Model for Image Editing", as suggested.
**Q5: Including the related works [Bansal A, et al., arXiv:2208.09392, 2022] and [Daras G, et al., arXiv:2209.05442, 2022.]**
We will include these references in the Introduction section. Cold Diffusion [Bansal A, et al., arXiv:2208.09392, 2022] builds generalized diffusion models on arbitrary image transformations such as blurring and downsampling, with a trained restoration network performing the denoising. Soft Diffusion [Daras G, et al., arXiv:2209.05442, 2022] is based on general linear corruption processes and learns the diffusion model with the Soft Score Matching training objective for the linear corruption process. In contrast, we consider non-isotropic Gaussian noise and utilize a pre-trained isotropic Gaussian diffusion model to achieve sampling without retraining.
**Q6: More experimental tasks and user study to support motivation.**
The image editing task aims to modify source images under the guidance of a target prompt while leaving the regions unrelated to the edit unchanged, so that the generated images are as similar as possible to the source image. It is empirically known that a diffusion model generates more diverse novel content when noise with a larger variance is added to the image, and better preserves the image information when the added noise has a smaller variance. Motivated by this, we employ a non-isotropic diffusion model that adds noise with a different variance to each image pixel, where the noise variance reflects the degree to which the corresponding pixel should be edited or preserved. We will explain this motivation for using NGDM for controllable image editing in more detail in the Introduction section.
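To make the noise-weighting intuition concrete, here is a toy NumPy sketch of a mask-weighted forward noising step. It is only an illustration: the function name, mask, and schedule are hypothetical, and it is not the paper's Algorithm 1.

```python
import numpy as np

def forward_noise(x, mask, t, alpha_bar):
    """Add Gaussian noise whose per-pixel standard deviation is scaled
    by an edit mask: pixels with mask near 1 are noised strongly (to be
    re-generated under the target prompt), pixels with mask near 0 keep
    the source content.

    x         : source image, shape (H, W)
    mask      : per-pixel edit weights in [0, 1], shape (H, W)
    t         : diffusion time step (index into alpha_bar)
    alpha_bar : cumulative product of the noise schedule, shape (T,)
    """
    a = alpha_bar[t]
    eps = np.random.randn(*x.shape)
    # standard DDPM-style forward marginal, with the noise term
    # modulated per pixel by the edit mask
    return np.sqrt(a) * x + np.sqrt(1.0 - a) * mask * eps

# toy usage: a 4x4 "image" where only the right half should be edited
x = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[:, 2:] = 1.0
alpha_bar = np.cumprod(np.linspace(0.999, 0.95, 100))
x_t = forward_noise(x, mask, t=50, alpha_bar=alpha_bar)
# the unedited left half receives no noise here (only the rescaling)
```

For clarity this sketch gives preserved pixels zero noise; in the method described above the weighting is continuous, encoding the degree to which each pixel should be edited or preserved.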
**More tasks.** We add experiments on two additional tasks that are not covered in the experimental section: local style transfer and gender transformation. The local style transfer task aims to transform a specified object in the image into another style without changing the structure, while preserving the information of the rest of the region. For example, transforming a "real dog" into a "sculptural dog", while keeping the background unchanged. Gender transformation aims to turn males into females while keeping the structure of the face unchanged. Due to the space limit, we show a few examples in Figure r2 of the uploaded one-page pdf, and we will show more examples in Appendix B in the revised version.
**User study.** As suggested, we report the results of user study in Tables r2-1 and r2-2. The results show that the participants exhibit a strong preference for our method.
---
Rebuttal Comment 1.1:
Title: Further clarification on qualitative comparisons and user study
Comment: Dear reviewer, thanks for your comments and questions. Beyond the rebuttal, during the author-reviewer discussion phase, we additionally clarified the qualitative comparisons and user studies in the official comment titled "To ACs and Reviewers: further clarification on the qualitative comparison with other methods and on the user study in the responses.", which follows the "Author Rebuttal by Authors". Please refer to it for detailed clarifications.
---
Rebuttal Comment 1.2:
Comment: Thank you. The addition of (large enough) user studies and the careful rebuttal, along with the comments from other reviewers have convinced me to increase my score to 5.
---
Reply to Comment 1.2.1:
Title: Thank Reviewer Jr5M for the positive comments.
Comment: Thanks, and we will carefully revise the paper according to these questions and comments in the reviews. | Summary: The authors proposed a Non-isotropic Gaussian Diffusion Model for the task of image-to-image translation and image editing. The NGDM is achieved by adding different noise variances to different image pixels so as to control the regions to edit. Experimental results have demonstrated the state-of-the-art quality of the proposed method.
Strengths: - The paper proposed a practical method for image-image translation/editing by utilizing the off-the-shelf diffusion model. It seems like the proposed method is easy to re-implement and has demonstrated satisfactory results.
- The presentation of the paper is good.
Weaknesses: - Why would the proposed method choose to implement the non-isotropic diffusion process by controlling each pixel's denoising steps? I can quickly come up with one alternative: at each iteration, we can add different levels of noise to different pixels (also using the input-dependent weight matrix) and denoise as usual. It is a bit strange to set different denoising time steps for different pixels.
- I would like to see comparisons with the recent paper "Delta Denoising Score", which also targets the task of controllable image editing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The reason for implementing NGDM by controlling each pixel’s denoising steps.**
It is empirically known that a diffusion model generates more diverse novel content when noise with a larger variance is added to the image, and better preserves the image information when the added noise has a smaller variance. Motivated by this, we employ a non-isotropic Gaussian diffusion model that adds noise with different variances to different image pixels for controllable image editing.
As suggested by the reviewer, we can directly add different levels of noise to different pixels and denoise as usual using pre-trained diffusion models. Actually, we tried this strategy when working on this paper, but it only produced noise rather than images. The main reason could be that the transition kernel at time $t$ of the isotropic Gaussian diffusion model is different from that of the non-isotropic Gaussian diffusion model, which depends on the weighting matrix that determines the variance. Therefore, the score under the isotropic Gaussian diffusion model at time step $t$ does not match the score under the non-isotropic Gaussian diffusion model, and the pre-trained isotropic Gaussian diffusion model will produce incorrect score predictions at time $t$ for the non-isotropic Gaussian diffusion model.
One option would be to retrain the non-isotropic Gaussian diffusion model. Instead, we use the off-the-shelf pre-trained isotropic Gaussian diffusion model to achieve data sampling without retraining. To implement this idea, we use Lemma 1 in the main paper to establish a relationship between the noise level in the non-isotropic Gaussian diffusion model and the time step $t$ in the isotropic Gaussian diffusion model. We prove in Theorem 1 that the transition kernel at time $t$ in the non-isotropic Gaussian diffusion model is equal to the transition kernel at time step $\tau$ in the isotropic Gaussian diffusion model, where $\tau$ depends on the noise level in the non-isotropic Gaussian diffusion model. Finally, since the isotropic Gaussian diffusion model needs to accept the entire image as input, we propose a sampling algorithm for NGDM (Algorithm 1) to generate images using the pre-trained isotropic Gaussian diffusion model.
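As a purely illustrative numerical sketch (the function and matching criterion below are our own hypothetical construction, not the paper's Lemma 1/Theorem 1 correspondence), the per-pixel mapping from a noise weight to an isotropic-model time step can be pictured as a nearest-variance lookup:

```python
import numpy as np

def pixelwise_tau(lam, t, alpha_bar):
    """Map each per-pixel noise weight lam in [0, 1] to a time step tau
    of the isotropic model whose marginal noise variance
    (1 - alpha_bar[tau]) is closest to the non-isotropic target
    lam * (1 - alpha_bar[t]). Returns integer time steps."""
    target = lam * (1.0 - alpha_bar[t])   # per-pixel target variance
    var = 1.0 - alpha_bar                 # isotropic variance schedule
    # nearest time step per pixel
    return np.abs(var[None, :] - target[..., None]).argmin(axis=-1)

alpha_bar = np.cumprod(np.full(100, 0.99))
lam = np.array([0.0, 0.5, 1.0])           # preserve / partial / edit
tau = pixelwise_tau(lam, t=80, alpha_bar=alpha_bar)
# a fully preserved pixel maps to step 0, a fully edited pixel to step t
```

In the actual method the pre-trained model still consumes the whole image at every step, which is why a dedicated sampling algorithm (Algorithm 1) is needed rather than a literal per-pixel lookup like this.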
**Q2: Comparisons with "Delta Denoising Score".**
DDS [Hertz A, et al., arXiv:2304.07090, 2023] utilizes a score distillation sampling mechanism for image editing. The authors optimize to produce the edited image given a text description by distilling the output of the score-based model on the reference source image-text pair. In contrast, we aim to edit the image while leaving the regions unrelated to the target prompt unchanged. We achieve this goal using a non-isotropic Gaussian diffusion model that adds independent noise with different variances to different pixels. We further rectify it into an isotropic Gaussian diffusion model in which different pixels have different total forward diffusion times, so that pre-trained isotropic models can be utilized for sampling.
We conduct experiments on the ImageNet dataset and Imagen dataset introduced in Section 4 of the paper. In the rebuttal, we additionally consider two datasets (COCO-S and DreamBooth Dataset [Ruiz N, et al., CVPR2023]) for image editing task.
We show qualitative visual results on the COCO-S and Dreambooth datasets in Figure r1 of the uploaded one-page pdf. We report the CLIPScore and LPIPS metrics in Table r1-1, the user study results in Table r1-2, and the computational time and memory cost in Table r1-3. Table r1-1 shows that our method consistently achieves the best results on the four datasets. A larger CLIPScore denotes better alignment with the target text, while a smaller LPIPS value suggests higher fidelity to the source image. Compared with DDS, our method achieves a smaller LPIPS distance with a larger CLIPScore, which shows that our method makes smaller changes to the source image when editing.
Table r1-1: CLIPScore ($\uparrow$) and LPIPS ($\downarrow$) on four datasets.
|Method|Imagenet (CLIPScore)|Imagenet (LPIPS)|Imagen (CLIPScore)|Imagen (LPIPS)|COCO-S (CLIPScore)|COCO-S (LPIPS)|Dreambooth (CLIPScore)|Dreambooth (LPIPS)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|DDS|28.13|43.57|34.69|31.38|30.48|24.00|**20.17**|22.52|
|Ours|**30.66**|**31.32**|**35.66**|**20.28**|**31.75**|**23.43**|19.81|**19.95**|
We conducted a user study by providing 40 participants with 30 randomly selected source images; the results generated by the different methods were displayed in random order. Participants were asked to choose the image that best achieves the requested edit while preserving most of the original image details. The percentage of votes for each method is shown in Table r1-2. The results demonstrate that the participants exhibit a strong preference for our method.
Table r1-2: User study results.
|SDEdit|DiffEdit|DDS|EDICT|InstructPix2Pix|Ours|
|:-:|:-:|:-:|:-:|:-:|:-:|
|4.5%|10.0%|3.0%|4.5%|6.0%|**72.0%**|
We test the computational efficiency on an NVIDIA GeForce RTX 3090. From Table r1-3, it can be seen that the running time of DDS is about 7.7 times that of ours, and DDS also occupies more memory than our method.
Table r1-3: Computational time and memory cost.
|Method|DDS|Ours|
|-|:-:|:-:|
|Time per iteration (s)$\downarrow$|46|**6**|
|Memory (GB)$\downarrow$|16.7|**6.7**|
---
Rebuttal 2:
Comment: Reviewer xHHG: Your review is very thin and you have not responded to the authors' rebuttal. I will not be able to take your review into account unless you engage more meaningfully with this paper.
A General Theory of Correct, Incorrect, and Extrinsic Equivariance | Accept (poster) | Summary: The authors analyze how equivariant models behave under mismatches between the symmetries of the model and data distribution. They advance three notions: correct, incorrect, and extrinsic pointwise equivariance. They then propose bounds on the error for equivariant neural networks that are sensitive to the data distribution and give an example where having an equivariant model can hurt learning performance.
(Edit, 8/13: in light of comments by the Authors, I have changed my score to a 5)
Strengths: Theoretically, the bounds presented are more general than those proposed in Wang et al. [52]. The examples clearly demonstrate the underlying mathematical principles involved. Moreover, the error bounds presented, while not surprising to practitioners more versed in group theory, seem mathematically solid. On the whole, the task of attempting to precisely characterize how the data distribution interacts with symmetries imposed on the model is a worthy one.
Weaknesses: While the paper is solid, I think certain difficulties in presentation prevent it from fulfilling its potential. Most immediately, I think there are some issues in the presentation of the concepts of pointwise correct / incorrect / extrinsic equivariance. First, equivariance is a property of maps rather than spaces: saying a point has correct / incorrect / extrinsic equivariance, or even equivariance of any kind, isn’t precise. It would be better to say something along the lines of “h has correct / incorrect / extrinsic equivariance with respect to f and p at x”, particularly since the notion of correctness here is dependent on the model: this has the advantage of being a closer parallel with Defs 3.1-3.3. Second (forgive me if I missed it), at no point is the notion of pointwise equivariance depending on specific group elements used in the rest of the paper. It is not clear what is gained by this added generality.
Continuing on to Section 5, it is not clear to me to what extent the notions advanced in section 4 are necessary for stating these bounds: if I understand correctly the bounds presented are for general probability distributions and are independent of the definitions proposed in 4.1-4.3.
Section 6 seems a bit trivial to me: it seems obvious that if you can’t represent a decision boundary due to a constraint in the model you might end up having a bad day. Indeed, the specific example shown here seems to follow straightforwardly from https://doi.org/10.48550/arXiv.2110.07472. I think this section should be moved up and be a motivating example for the bounds in Section 5.
I feel like the meat of the paper is the error bounds, more specifically Theorem 5.8. I’m not convinced the concepts in Definitions 4.1-4.3 contribute much towards the construction of this bound. I would recommend a rewrite of the paper that focuses more on the bounds, their implications and applications, and maybe expand further.
Finally, (and this might just be me being a pedant), but I’m not sure the title really is reflective of the content of the paper. It is not clear to me what the general theory is that is being advanced. Typically a “general theory” is a precise characterization of a class of mathematical objects. If this is intended to refer to the decomposition into correct / incorrect / extrinsic equivariance, then this characterization seems to reduce to the fact that equalities can be true, not true, or involve the empty set. I don’t think any insight is gained innately from this characterization, rather than this being a useful language to describe further results in the text. The error bounds, while good contributions, are specific to certain choices of loss function. I mention this not to diminish their contributions, merely to observe that they are not “general” in a mathematical sense.
(edited for some spelling mistakes, although I'm sure more of them abound...)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I’m having a little trouble parsing the notion of fundamental domain in Definition 5.1. What happens if the orbit of a point passes over X multiple times? For instance, consider the real line, where the group is S_8 and the group action is the antisymmetric representation (reflect the line on odd permutations)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations of the work have been appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The Authors thank the reviewer for their insightful review. Please see our response below.
> equivariance is a property of maps rather than spaces...
This is a good point, we definitely agree with the reviewer that ‘equivariance’ is a property of maps rather than spaces. In the revision, we will revise the definitions based on the reviewer’s suggestions to make them more precise.
> at no point is the notion of pointwise equivariance used here being dependent on specific group elements used in the rest of the paper...
> it is not clear to me to what extent the notions advanced in section 4 are necessary for stating these bounds...
The pointwise equivariance is actually used in Proposition 5.4 as a way of calculating the error bound for the classification task. We also use it in Example 5.5 and in the experiment in Section 7.1. The pointwise definitions are necessary for understanding our lower bounds because, intuitively, the bounds calculate the error of the model where pointwise incorrect equivariance exists.
> Section 6 seems a bit trivial...
We agree that the discussion of Section 6 is not overly complicated. However, we believe it is still important to complete our theory, as extrinsic equivariance can be another source of failure for an equivariant model beyond the incorrect equivariance discussed in Section 5. Moreover, while incorrect equivariance is often considered in prior works, the effect of extrinsic equivariance is underexplored. The prior work by Wang et al. [52] showed that extrinsic equivariance is helpful in all of their experiments. Just by reading [52], one might think that extrinsic equivariance is always helpful, but this is not true. Section 6 discusses the scenarios in which extrinsic equivariance can be harmful.
> Indeed, the specific example shown here seems to follow straightforwardly from https://doi.org/10.48550/arXiv.2110.07472.
Although the example in Figure 1 of https://doi.org/10.48550/arXiv.2110.07472 [A] is similar to our example in Section 6, ours does not directly follow from it. Figure 1 in [A] demonstrates that there exists a subspace that is fixed by the group action and that it affects the perceptron capacity. While they consider separating invariant orbits of data samples, our example shows an extrinsically equivariant scenario where the C2 group action produces samples outside of the data domain. For example, the group action transforms $x=(0,0,-1)$ to $gx=(0, 0, 1)$, which is outside of the set of 4 points in Figure 5a, and so the symmetry relates in-distribution samples to points that are out-of-distribution. We will clarify these differences in the main text.
>I think this section should be moved up and be a motivating example for the bounds in Section 5.
Unfortunately, we respectfully disagree that Section 6 can be moved up to motivate Section 5, as they discuss the error caused by two different types of equivariance: Section 5 discusses incorrect equivariance and Section 6 discusses harmful extrinsic equivariance. Although both of them can cause the model to have an unsatisfactory performance, the underlying reasons are different. Specifically, incorrect equivariance leads to an inevitable error in the model due to the equivariant constraint, and we can calculate the theoretical lower bound of such an error. On the other hand, harmful extrinsic equivariance increases the complexity of the task, but unlike incorrect equivariance, it can be solved by increasing model capacity.
>I feel like the meat of the paper are the error bounds...
This is a good suggestion, thanks for pointing this out. We agree that the most important contribution of the paper is Section 5, rather than Section 4. In the revision, we propose to make the following changes to our paper:
1. We will rework the story to focus more on the contribution of the paper: the cases where equivariant models might underperform due to approximation errors and harmful extrinsic equivariance.
2. We plan to merge Section 3 and Section 4 into a combined background and preliminaries section.
>I’m not sure the title really is reflective of the content of the paper...
Thank you for the comment. Our paper builds heavily upon [52] where the authors defined the terms of correct, incorrect, and extrinsic equivariance. However, their discussion is not complete as they assume that the data density over the domain is group invariant and that the group is a finite group. Our work generalizes theirs by removing those simplifying assumptions, discussing the lower bounds for regression (rather than just classification as in the prior work), and discussing the possibility of extrinsic equivariance being harmful. Thus, we named our paper “A General Theory”. Nevertheless, we don’t mean to misuse the term “general” in a mathematical sense, and we are happy to remove the word “general” in the title of our paper.
>I’m having a little trouble parsing the notion of Fundamental domain...
Generally, if the orbit passes over $X$ multiple times, the stabilizer will be non-trivial, and the factor $\alpha(x, g)$ in Equation 3 will account for the over-counted conjugates. In the example, the group $S_8$ acts on $\mathbb{R}$ s.t. $s \cdot x = \operatorname{sgn}(s)\, x$ for $x\in \mathbb{R}, s\in S_8$, where $\operatorname{sgn}(s)\in\\{+1,-1\\}$ is the sign of the permutation; the fundamental domain $F$ would be either $\\{x \geq 0 \mid x\in\mathbb{R}\\}$ or $\\{x \leq 0 \mid x\in\mathbb{R}\\}$. Notice that this does not violate our assumption in line 161 because $\cup_{g_1F \neq g_2F} (g_1F \cap g_2F) = \\{0\\}$ has 0 measure under $\nu$. Since $S_8$ is a finite group, Equation 3 becomes $k(Gx) =\min_{y\in Y} \sum_{g\in S_8}p(gx) \mathbb{1} (f(gx) \neq y) \alpha(x, g) dg$, where $\alpha(x, g)=\frac{2}{8}$ will account for the over-counted conjugates.
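The parity bookkeeping behind this example can be checked with a short standard-library script. This only illustrates the orbit structure (even permutations act as $+1$, odd permutations as $-1$); it is not an implementation of Equation 3.

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple: +1 if even, -1 if odd,
    computed by counting inversions."""
    n = len(p)
    inversions = sum(1 for i in range(n)
                       for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inversions % 2 else 1

signs = [sign(p) for p in permutations(range(8))]
n_even = signs.count(1)    # permutations acting as the identity on R
n_odd = signs.count(-1)    # permutations acting as the reflection x -> -x
# orbit of x = 2 under s.x = sign(s) * x
orbit = {s * 2 for s in signs}
```

Since half of $S_8$ fixes every point of $F$ and the other half maps it into $-F$, each point of the two-element orbit is reached by $|S_8|/2$ group elements, which is exactly the over-counting that the factor $\alpha(x, g)$ compensates for.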
---
Rebuttal Comment 1.1:
Title: Response to the Response
Comment: Thank you to the authors for their detailed response to my comment. I still have reservations, but in light of proposed changes and the feedback of other reviewers, I am happy to raise my score to a 5.
A lot of the nature of the feedback is related to the question of, "how straightforward or how non-obvious are certain elements." I'll discuss this more in my response the other reviewers above.
There are two technical points I wanted to follow up on.
> The pointwise equivariance is actually used in Proposition 5.4 as a way of calculating the error bound for the classification task. Later on, we also used it in Example 5.5 and the experiment in Section 7.1. The pointwise definitions are necessary for understanding our lower bounds because, intuitively, the bounds calculate the error of the model where pointwise incorrect equivariance exists.
There are two issues at play here. The first is, "is p-wise equivariance necessary *in Section 5*?" It definitely is not in Proposition 5.4 (which holds for an arbitrary density), and I maintain that Ex. 5.5 could easily be rewritten to not use these definitions. This is not a criticism of the contribution of the bounds though: rather, it is me saying that the bounds in Section 5 are actually general for *all* probability densities, not just for incorrect equivariance as stated in the paper. My proposed (but only suggested) change is that the authors be bolder at selling the bounds, as they are a separate co-equal contribution to the theoretical concepts in section 3: currently, I think the importance of the bounds is undersold.
The second is, "at any point in the paper, is the fact that p-wise equivariance depends *on a specific group element $g$* used?" As far as I can tell it is not: only the dependence on $x$ is used, even in the examples mentioned by the authors. Consequently, it should be possible to make more specific definitions that only have dependence on $x$. (However, I recognize this might be more of a style thing, and changing the definition at this point may require rewrites.)
> Generally, if the orbit passes....
Ok, this is how I thought the fundamental domain worked. But in this case, isn't the assumption that the intersection of any two conjugates has measure 0 violated? As far as I can tell, the authors use the word "conjugate" to mean a set $gF$ for some $g$ in the group. (I might be wrong about this: the use of "conjugate" in this paper is not the typical one in group theory, and I don't see a definition in the paper.) In the example we are discussing, the group element $e$ and any even permutation all have {$x \geq 0 | x \in R$} as their conjugate. So, by assumption, {$x \geq 0 | x \in R$} has measure 0. By similar arguments, the same is true for {$x < 0 | x \in R$}. But this suggests that, by countable additivity, the entire real line has zero measure, which seems like a problem to me.
Did I misunderstand something? If not, their use of fundamental domain could noticeably restrict the applicability of their results, but I think this could be fixed without too much effort by modifying the non-intersection assumption to only hold for coset representatives for cosets of the stabilizer subgroup.
*edit: I just saw the Author's response above giving their definition of conjugate, which makes me more confident in the analysis above.
---
Reply to Comment 1.1.1:
Comment: The authors appreciate the reviewer’s follow up discussion and the score increase. Please see our response below.
>The first is, "is p-wise equivariance necessary in section 5" It definitely is not in proposition 5.4 (which holds far an arbitrary density), and I maintain that ex. 5.5 could easily be rewritten to not use these definitions. This is not a criticism of the contribution of the bounds though: rather, it is me saying that the bounds in Section 5 are actually general for all probability densities, not just for incorrect equivariance as stated in the paper. My proposed (but only suggested) change is that the authors be bolder at selling the bounds, as they are a separate co-equal contribution to the theoretical concepts in section 3: currently, I think and importance of the bounds is undersold.
First, we would like to kindly point out that pointwise equivariance is indeed used in Proposition 5.4: the integral is $\int_G p(gx') \mathbb{1}((x',g)\in I)\alpha(x', g) dg$, where $I$ is the set of pointwise incorrect equivariance. However, you are right that both Proposition 5.4 and Example 5.5 can be rewritten to not use the pointwise definitions (for Proposition 5.4, the version that does not use the pointwise definitions is Theorem 5.3.). We also agree that the bounds in Section 5 are general for all probability densities. Notice that we set 5.3 as a Theorem but 5.4 as a Proposition exactly because 5.3 is a more general form and does not depend on the pointwise definitions.
The pointwise equivariance is the source of the error bounds: if there is no pointwise incorrect equivariance, the error bounds would just be 0. However, we agree that Section 5 (without 5.3 and 5.4) can be mathematically correct without the pointwise definitions at all. In other words, Section 4 and Section 5 are connected, but not dependent on each other. We realize that the current paper might look like Section 5 entirely depends on Section 4, and the importance of the bounds is undersold. We appreciate the reviewer's suggestion for being bolder at selling the bounds, and will revise the paper to strengthen the importance of the bounds.
> The second is, "at any point in the paper, is the fact that of p-wise equivariance depends on a specific group element $g$ used? As far I can tell it is not: only the dependence on $x$ is used, even in the examples mentioned by the authors. Consequently, it should be possible to make more specific definitions that only have dependence on $x$. However, I recognize this might be more of a style thing, and changing the definition at this point may require rewrites).
The authors thank the reviewer for the question. The pointwise equivariance depending on a specific group element is used in Proposition 5.4, where we have the integral of the incorrect equivariance over all group elements. When we were developing the paper, we tried to make the pointwise equivariance depend on $x$ only, but could not find a satisfactory definition since we cannot evaluate the equivariance of the model on a particular $x$ without referencing its transformations. If we define the pointwise equivariance on $x$ only, there needs to be some averaging over the group, which will lose some information.
Regarding the conjugates, the authors appreciate the reviewer’s thoughtful response. In the paper, when we say "the intersection of any two conjugates has 0 measure under $\nu$", we meant "two distinct conjugates" ($g_1F \neq g_2F$, not necessarily $g_1 \neq g_2$). This is spelled out on Line 161, but not in the definition. We will be sure to clarify that in the paper. This agrees with your suggestion to modify the “non-intersection assumption to only hold for coset representatives for cosets of the stabilizer subgroup.”
The authors thank the reviewer again for the insightful discussion, please let us know if our response addresses your concerns. | Summary: This paper provides a general theory of correct, incorrect, and extrinsic equivariance of functions, mainly extending the framework of Wang et al., 2023 to more general case of pointwise equivariance of functions defined for pairs of group element and input data. The theory mainly concerns deriving lower bound of errors for classification and (invariant and equivariant) regression tasks. The authors further identify cases where extrinsic equivariance can be harmful for performance opposed to the empirical observations of Wang et al., 2023. The authors provide a range of experiments mainly on empirically verifying the derived lower bound of errors, and also demonstrating the cases where certain extrinsic equivariances can be harmful.
Wang et al., The surprising effectiveness of equivariant models in domains with latent symmetry (2023)
Strengths: S1. The theory proposed in the paper is indeed quite general as it considers pointwise equivariance, and covers a large range of partial symmetries. The fact that it addresses a wide range of tasks (classification, invariant regression, equivariant regression) also strengthens the generality of the paper.
S2. The paper is overall well written, with intuitive illustrations on pointwise equivariance and experimental setups, as well as results.
S3. The experimental results support the main claims of the paper on error bounds and harmful cases of extrinsic equivariances.
Weaknesses: W1. A weakness of this work is that, while certain cases are presented where extrinsic equivariances can be harmful, it offers little principled understanding or general theory of in which specific cases extrinsic equivariance is harmful or beneficial.
W2. While the presented theory on pointwise equivariance covers a wide range of approximate or misspecified symmetries, I am not sure if it is immediately useful, since in many applications involving approximately or misspecified equivariant neural networks, we would not be able to exactly know the extent of correct, incorrect, and extrinsic pointwise equivariance of the model (unlike the synthetic experimental setups considered in the paper). This could be a weakness of the proposed theoretical framework.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no particular questions, but would like to hear the opinions of the authors on the aforementioned weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have identified the limitation of the work in Section 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for their helpful comments. Please see our response below.
> W1. A weakness of this work is that, while certain cases are presented where extrinsic equivariance can be harmful, it offers little principled understanding or general theory of the specific cases in which extrinsic equivariance is harmful or beneficial.
You are right: in the context of our paper, we only show the possibility of extrinsic equivariance being harmful, instead of providing an understanding of when it would be harmful or helpful (in part, this is because the prior work [52] shows helpful extrinsic equivariance and we aim to provide a counterpart). This is a limitation of our work and a very important future direction. We will address this in the limitations section.
> W2. While the presented theory on pointwise equivariance covers a wide range of approximate or misspecified symmetries, I am not sure if it is immediately useful, since in many applications involving approximately or misspecified equivariant neural networks, we would not be able to exactly know the extent of correct, incorrect, and extrinsic pointwise equivariance of the model (unlike the synthetic experimental setups considered in the paper). This could be a weakness of the proposed theoretical framework.
We agree that in many real-world applications, we might not exactly know the extent of correct, incorrect, and extrinsic pointwise equivariance of the model and it might be hard to compute the lower bound. However, our analysis also provides a theory that can guide the model selection process in such a case. Equivariant models are usually selected on the basis of prior reasoning about properties of the ground truth function. We believe our theory provides intuition which can be part of this prior reasoning process and can be used to understand and analyze model performance in an iterative model development cycle. Moreover, in some real-world applications, we can indeed analyze the extent of different equivariance types. For example, in robotic manipulation, we might know a priori how the objects are distributed on a tabletop, so we can analyze how rotations of the entire workspace change the distribution.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the response. I recommend that the authors supplement the discussions on limitations in Section 8 with the provided response. I have no further questions for now. | Summary: An obvious limitation of equivariant networks is the assumption that the symmetry they hard-code matches the symmetry of the underlying ground-truth function exactly. What happens when the symmetry is only partially present in the domain, creating a symmetry mismatch between the ground-truth function and the neural network? This paper presents a systematic analysis of this situation, building on prior work by Wang et al. 2023.
Wang et al. classified models as having "correct" equivariance (the underlying function provided by nature matches the symmetry of the equivariant model), "incorrect" equivariance (there is a disagreement between the underlying true function and that encoded by the equivariant network), and "extrinsic" equivariance (model symmetry transforms in-distribution data to out-of-distribution data). This paper defines these notions pointwise which allows one to generalize to a continuum of equivariance types when some tasks may have different proportions of these three situations (and does not exclusively conform to only one of them). Lower bounds are provided on the model error from incorrect model symmetry (while improving some earlier results) for classification. Morally similar lower bounds are provided for the regression case in terms of the variance of the function to be modelled over the orbit of the group under consideration. It is also shown that extrinsic equivariance can be harmful (which doesn't contradict but can still be contrasted with earlier experimental results in the literature).
The theoretical results are provided in section 5 (after first introducing the notions of correct, incorrect and extrinsic equivariance of Wang et al., and then stating their pointwise generalizations, which open the path to interpolate between them -- a central aim of the paper). The lower bound for classification is intuitive and basically says that it is equal to the integral of the total dissent over the fundamental domain. The total dissent measures how many elements in the orbit of G have a different label than the majority label. Two examples are given for binary and multiclass cases to build intuition (also illustrated in figures 3 and 4). Next, the case of invariant regression is treated. The result is similar in spirit to the earlier one -- the error is bounded by an integral over the fundamental domain of p(Gx) times the variance of the function on the orbit Gx (so instead of the dissent, we look at the variance). A slightly more careful consideration permits an easy generalization to equivariant regression. A simple theoretical argument is then provided to show that extrinsic equivariance can be harmful for generalization, while noting that understanding its effect on generalization remains an open question.
Strengths: - The paper considers an important problem that has only recently started getting treated in the literature. It is clear that equivariant networks make a strong assumption about the symmetry of the ground truth function. However, it is not clear how much a mismatch matters. This paper provides some approximation lower bounds due to the mismatch for some different settings.
- Being able to interpolate between the three types can definitely help improve model generalization and permit more flexible models.
- While many of the points raised in the paper seem somewhat obvious at times, and computing the lower bound might not be possible in practice, the presented work can still give some guidance on model selection.
Weaknesses: - Somewhat fortuitously, I was a reviewer for this paper in an earlier iteration. Whatever weaknesses I had raised at that point seem to have been adequately addressed by the authors (in fact, they were addressed right then). I also notice that the other weaknesses brought up by other reviewers at the time (mostly in terms of experiments, and comparison to the work of Wang) have also mostly been incorporated. While I could nitpick, I have no hesitation in simply voting for acceptance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for their careful review. We are glad that the reviewer acknowledges that we have addressed the problems from the earlier iteration. If you have any other questions regarding our paper, please don’t hesitate to let us know and we are more than happy to discuss them. | Summary: The paper analyzes error bounds for models constrained to satisfy symmetries that only partially agree with the ground-truth functions. The main suggestion is to generalize previous definitions to the point level, allowing one to derive a lower bound on the model error with respect to the volume of the portion of the domain where symmetries are mismatched. The theoretical analysis is validated on some toy examples and some (relatively) small-scale datasets.
Strengths: The proposed analysis including the definitions, propositions, and theorems seems to be simple.
The paper is well-written and easy to follow. I appreciate the illustrations and toy examples provided.
The experiments seem to support the theoretical analysis. Where mismatches occur, reasonable explanations are provided.
Weaknesses: Missing discussion
I feel the text should elaborate more on the assumptions taken in the analysis of error bound. For example, it is assumed that the model assigns labels by taking a majority vote in an orbit. How reasonable is this assumption in relation to existing equivariant models?
In addition, it would be interesting to incorporate a model that approximates the majority vote by sampling for one of the experiments. E.g., does it improve the digit classification model?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No further questions.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for their thoughtful review. Please see our response below.
> I feel the text should elaborate more on the assumptions taken in the analysis of error bound. For example, it is assumed that the model assigns labels by taking a majority vote in an orbit. How reasonable is this assumption in relation to existing equivariant models?
Note that label assignment by majority vote is *not* an assumption of Theorem 5.3, which applies to any invariant $h$. We will clarify the hypothesis in the theorem statement. The $h^*$ which uses majority voting is part of the proof of Theorem 5.3. Intuitively, we compute a lower bound on $err(h)$ for any invariant $h$ by comparing it to the error of the best-case invariant hypothesis, $err(h^*)$. If the model were not to follow this majority voting scheme, it would make more mistakes in an orbit, resulting in a higher error (as illustrated in the equation at the end of page 4).
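To make this intuition concrete, here is a small toy sketch (my own illustration with hypothetical orbits and labels, not code or data from the paper): any invariant classifier must commit to one label per orbit, so its error on an orbit is at least the mass of the minority labels, and the majority-vote hypothesis $h^*$ attains that minimum.

```python
from collections import Counter

# Hypothetical discrete domain split into two orbits; labels of f over
# each orbit (orbit "A" has a dissenting point, orbit "B" is consistent).
orbits = {
    "A": ["cat", "cat", "dog"],
    "B": ["dog", "dog", "dog"],
}

def invariant_error(assignment):
    # Error of an invariant classifier that assigns one label per orbit,
    # with uniform weight on every point.
    total = sum(len(labels) for labels in orbits.values())
    wrong = sum(sum(1 for y in labels if y != assignment[o])
                for o, labels in orbits.items())
    return wrong / total

# Best-case invariant hypothesis h*: majority vote within each orbit.
h_star = {o: Counter(labels).most_common(1)[0][0]
          for o, labels in orbits.items()}
best = invariant_error(h_star)                  # errs only on the dissenter

# Any other invariant assignment makes at least as many mistakes.
worse = invariant_error({"A": "dog", "B": "dog"})
assert best <= worse
```

In this toy distribution the lower bound is `best = 1/6`, realized by the majority-vote `h_star`; deviating from majority voting (`worse = 1/3`) only increases the error, matching the comparison argument in the response.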
> In addition, it would be interesting to incorporate a model that approximates the majority vote by sampling for one of the experiments. E.g., does it improve the digit classification model?
Thank you for suggesting this experiment. We interpret the proposed approach as follows: instead of learning an invariant model for the digit classification task, we will have a sample-based model. To predict the label of $x$, this model will access $f(gx)$ for all $g\in G$ and calculate the majority vote as the output. This is a very interesting approach, but we think it might not be practical in our experiment because it will need access to an oracle ground truth function $f(x)$ during evaluation. Please let us know if we didn’t interpret your suggestion correctly. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents an extension of a lower bound on error in finite labeled classification introduced in "The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry" and investigates lower bounds on error for G-equivariant and G-invariant regression, offering valuable insights into the performance of symmetry-preserving models in regression tasks.
Strengths: This extension holds significant practical importance, particularly when considering real-world natural symmetry groups such as $G = SO(3)$, which go beyond the scope of the original paper. Moreover, the authors have done an excellent job in presenting the material in an accessible manner, making the paper easy to read and well-structured.
Weaknesses: 1. The paper's focus on pointwise definitions rather than emphasizing their implications may obscure the novelty of the results, potentially hindering a clear understanding of the significance of their contributions.
2. While the experiments conducted in the paper offer valuable insights, their omission of infinite groups may limit the demonstration of the full strength of the new lower bound. Additionally, the observations suggesting equivariant models' potential ineffectiveness in classical learning tasks raise valid questions about their relevance and applicability.
3. Although the paper draws upon certain notions from "The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry," it does so only partially. This partial utilization of concepts, coupled with the use of different evaluation metrics such as "accuracy" in the original paper and "error" in this work, may create challenges when attempting to directly compare the results between the two studies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you elaborate on the third experiment? Which state is flipped? (line 532, page 9)
2. Did you mean "orbits" instead of conjugates in the definition of the fundamental domain? (line 158, page 4)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The author did address the limitations of reflection symmetries on the expressivity of models in Chapter 6 (line 254, page 7), and the requirement of knowledge about the density function (line 386, page 9)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors appreciate the reviewer’s insightful comments and questions. Please see our response below.
> The paper's focus on pointwise definitions rather than emphasizing their implications may obscure the novelty of the results, potentially hindering a clear understanding of the significance of their contributions.
We appreciate your feedback. The pointwise definitions are a generalization of the prior work [52]. They help us analyze the problem of incorrect/extrinsic equivariance more completely and can be used to calculate the lower bound, as is shown in Proposition 5.4. However, the major contribution of our work is the bounds in Section 5 instead of the pointwise definitions. We plan to merge Section 4 into Section 3 and iterate on our writing to make the contributions clearer.
> While the experiments conducted in the paper offer valuable insights, their omission of infinite groups may limit the demonstration of the full strength of the new lower bound.
Thanks for the comment. While our theoretical findings are not restricted to finite groups, we opted to use finite groups in our examples and experiments for clarity and ease of understanding.
> Additionally, the observations suggesting equivariant models' potential ineffectiveness in classical learning tasks raise valid questions about their relevance and applicability.
Equivariant models have been shown to be effective in many learning tasks. However, given the constrained nature of equivariant models, it is possible that they will lead to inevitable errors in some cases. Our study explores these potential scenarios of ineffectiveness, which we believe can provide valuable insights for model selection and analysis.
> Although the paper draws upon certain notions from "The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry," it does so only partially. This partial utilization of concepts, coupled with the use of different evaluation metrics such as "accuracy" in the original paper and "error" in this work, may create challenges when attempting to directly compare the results between the two studies.
Our work builds upon the prior work “The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry” to extend a theory that the prior work only partially addresses. Consequently, some of their definitions and framework are too limited and make too many assumptions. Removing these assumptions and generalizing that work is a key part of our goal. For example, the prior work limits their analysis in classification tasks, so using “accuracy” makes sense in their context. However, our work aims to calculate a more accurate bound for both classification and regression, thus “error” would be a better choice.
> Can you elaborate on the third experiment? Which state is flipped? (line 532, page 9)
Thanks for asking this question. The state is a top-down RGBD image of the workspace, and by flipping the state, we mean that the image is flipped horizontally. Similarly, flipping the action means that the (x, y) component of the robot action is flipped horizontally. Due to the page limit, those details were put in Appendix K.5 instead of the main paper.
> Did you mean "orbits" instead of conjugates in the definition of the fundamental domain? (line 158, page 4)
We appreciate the question. However, we indeed meant "conjugates" not "orbits". An orbit of $x\in X$ is defined as the set $\\{ gx| g\in G\\}$, whereas a conjugate $gF$ is defined as $gF=\\{gx | x\in F\\}$ (please see https://mathworld.wolfram.com/FundamentalDomain.html). We understand that this could be a point of confusion, and we will ensure to clarify the definition of a conjugate in the revision. | Summary: This work addresses the situation in which data and model equivariance does not match exactly. It extends previous work from Wang et al., by proposing a pointwise version of their definition of correct, incorrect and extrinsic equivariance. The usefulness of these new concepts are demonstrated in three new performance lower-bounds derived from classification and regression problems. A series of examples and experiments are presented to illustrate the new proposed notions, confirm the lower-bounds derived and show that they usually seem tight in practice.
Strengths: ### Originality
Although it builds heavily on previous work from Wang [52], the paper presents novel theoretical concepts and results, as well as new experiments supporting them.
### Clarity
Overall, the paper is well written and structured.
### Quality
The paper presents new theoretical results with their corresponding proofs, which look sound to me (disclaimer: I have some familiarity with group theory, but I am not an expert). It also presents interesting results across a large range of experiments. The latter are all rather small/toy, but are still quite convincing IMO.
Weaknesses: ### Clarity
Minor:
- I think there is a mistake in the xlabel of fig 7b. Shouldn't it be "incorrect - correct"? Same for figure 7a: should be "correct - extrinsic" I guess, since when x=1 you have the highest INV model performance, which should correspond to c=1 and e=0 according to the text.
- L.363: I think you wanted to reference fig.9, not 8.
### Clarity and Quality
1. I think the definition of $p$ could be clarified. It is introduced in line 92 as the “probability density function of the domain”, so, at first reading, I thought p(x) was the “true” underlying population distribution from which both training and test examples are sampled. This seems to be confirmed by line 387 in the conclusion: “our theoretical lower bounds require domain knowledge like the density function over the domain”. But in section 6, it seems that you define extrinsic equivariance with respect to the actual examples in the training set, which is not the same thing: “F_E corresponds to an extrinsically equivariant class for **this data**” l.272
2. I also have a few other questions regarding the example presented in section 6. First of all, does the data from figure 5 represent training or test data? Is the data $S$ the whole support of $p(x)$? What is the true labeling function $f$ in this example?
These elements seem important to conclude. The reason I ask is that if $f$ is indeed the “exclusive or” on (x,y) coordinates (i.e. it is indeed C2-invariant) and if the examples from figure 5a are just the training data but the test data can go outside (e.g. their symmetric elements in fig 5b could be the test set), then despite the 0% error rate of the unconstrained linear model on the training data, it would learn an incorrect labeling function and its test performance would be 50%, while the invariant model would still have a 25% error rate, which would be better.
3. If $p$ really denotes the population distribution, I wonder whether we can talk about $p$ independently of $f$ and vice-versa. From your figures 1 and 2, it seems that $f$ can be defined outside of the support of $p(x)$, and I wonder whether this makes sense. For example, if $f$ is the labeling function in a digit classification problem, what should be its output for an image of the digit “9” rotated by 90 degrees? Should it be 9 or 6?
### Originality:
Minor:
4. In the related work, some references to class-specific and instance-specific automatic data augmentation works are missing ([1, 2, 3] for example), while they are strongly related to the idea of pointwise invariance proposed in the paper.
[1] https://arxiv.org/abs/2106.13695
[2] https://arxiv.org/abs/1510.02795
[3] https://arxiv.org/abs/2206.00051
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See 3 questions in the weaknesses section above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors mention one limitation of their work, which is the need to know $p$, which is never the case in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for their careful review. Please see our response below.
> I think there is a mistake in the xlabel of fig 7b.
We apologize for any confusion regarding the `-` in `incorrect - correct` and the other x label in Fig 7. The `-` here is not meant to denote subtraction, rather, it indicates that the distribution transitions linearly from incorrect to correct as we move along the x-axis from left to right. We appreciate your comment and will revise the label to make it more explicit by changing the x label of Fig 7a to ‘correct ratio’, and labeling the x=0 as ‘c=0, e=1’ and labeling x=1 as ‘c=1, e=0’.
> L.363: I think you wanted to reference fig.9, not 8.
We appreciate your careful reading. Indeed, there is a referencing error. We will correct it in the revised version.
> I think the definition of $p$ could be clarified...
Thank you for pointing this out. $p$ is indeed the probability density function of the domain and this description was omitted in Section 6. We consider the probability density $p$ to be uniform for this domain, where the data domain consists only of the 4 samples and $p$ is zero everywhere else. The $C_2$ group action acts along the z-axis and the transformed data contains samples that were not part of the original domain (e.g., $x=(0,0,-1), gx=(0, 0, 1)$). We will add these clarifications to the final revision.
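As a quick sketch of the setup this response describes (the coordinates below are hypothetical stand-ins, not the paper's actual Figure 5 data): a $C_2$ action that negates the z-coordinate maps every point of a small discrete support outside that support, which is exactly the extrinsic-equivariance situation.

```python
# Hypothetical 4-point domain with uniform density p; p is zero elsewhere.
domain = {(0, 0, -1), (1, 0, -1), (0, 1, -1), (1, 1, -1)}

def g(point):
    # Non-identity element of C2: negate the z-coordinate.
    x, y, z = point
    return (x, y, -z)

transformed = {g(p) for p in domain}
# Every transformed sample (e.g. g((0, 0, -1)) = (0, 0, 1)) lies outside
# the original support, so the symmetry is extrinsic for this distribution.
assert transformed.isdisjoint(domain)
```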
> I also have other few questions regarding the example presented in section 6. First of all, does the data from figure 5 represent training or test data? Is the data $S$ the whole support of $p(x)$? What is the true labeling function $f$ in this example? These elements seem important to conclude. The reason I ask is that if $f$ is indeed the “exclusive or” on (x,y) coordinates (i.e. it is indeed C2-invariant) and if the examples from figure 5a are just the training data but the test data can go outside (e.g. their symmetric elements in fig 5b could be the test set), then despite the 0% error rate of the unconstrained linear model on the training data, it would learn an incorrect labeling function and its test performance would be 50%, while the invariant model would still have a 25% error rate, which would be better.
In this example, we do not differentiate between training and test data, and focus only on the learnability of a non-equivariant/invariant function class for a specific distribution, not on its generalization. The domain is discrete, where Figure 5a shows the only four possible values of x, with probabilities p(x) = 1/4. As we consider the model’s capacity to fit arbitrary labelings (since we use Rademacher complexity), there is no ground-truth labeling function, and Figure 5a shows one instance of an assignment of labels. We consider the training set S to be all 4 data points. Section 6 serves to show that for a linear class of models, an extrinsically equivariant/invariant model has a non-zero error rate for this data. The example you compute is correct if the test data contained the 8 points shown in Figure 5b, but we do not consider this scenario as it is not an example of extrinsic equivariance. The test and training data are only the 4 points shown in Figure 5a. We will add these clarifications to the text.
> If $p$ really denotes the population distribution, I wonder whether we can talk about $p$ independently of $f$ and vice-versa. From your figures 1 and 2, it seems that $f$ can be defined outside of the support of $p(x)$, and I wonder whether this makes sense. For example, if $f$ is the labeling function in a digit classification problem, what should be its output for an image of the digit “9” rotated by 90 degrees? Should it be 9 or 6?
Thank you for raising this question. In our analysis, we only consider the behavior of $f$ within the support of $p$ because the output of $f$ outside the support of $p$ will not change our bounds. Note that in Equations 2 and 3, the equations are weighted by $p$, so for x outside the support of $p$, the value of $f$ does not matter. In Figures 1 and 2, we draw $f$ outside of the support of $p$ to make it continuous for ease of understanding. If they are confusing, we are happy to iterate on them to remove the part where $p(x)=0$.
In practice, there can be circumstances where defining $f$ outside the support of $p$ is meaningful. For example, $f$ can be expanded outside the support of $p$ in scenarios like a random crop data augmentation, where the output of $f$ will not change after a random crop, even though the cropped image is out of distribution. In the reviewer’s example regarding digits 6 and 9, if the rotated 6 is outside the support of $p$ (like in MNIST where the rotated 6 and 9 are different due to handwriting), $f$’s output would be undefined.
> In the related work, some references to class-specific and instance-specific automatic data augmentation works are missing ([1, 2, 3] for example), while they are strongly related to the idea of pointwise invariance proposed in the paper
[1] https://arxiv.org/abs/2106.13695
[2] https://arxiv.org/abs/1510.02795
[3] https://arxiv.org/abs/2206.00051
Thank you for pointing out those references! Yes, those related works are strongly related to our work. First, the data augmentation methods can be viewed as learning pointwise extrinsic equivariance with respect to transformations defined by the augmentation function. Second, the instance and class-specific augmentations can also be viewed as applying data augmentation where pointwise correct or extrinsic invariance exist, while avoiding incorrect invariance. Moreover, while our current work is focused on constrained model classes, our theory can also potentially be applied to analyze these data augmentation methods. We will definitely add them to the related work section.
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: Dear authors,
Thank you very much for your thoughtful rebuttal.
I can confidently say you have addressed all my (few) concerns. The only points I would like to slightly insist on after reading your clarifications would be:
1. I think indeed that it would be nice to make clear that $S$ is the whole support from which any data can be sampled in section 6 (even though the example is very simple);
2. I also think you should say explicitly what $p$ is and that you assume there is never a distribution shift (so $p$ is always the underlying distribution when training **and** testing). I understand now why you have drawn $f$ outside of its domain, as your figure would indeed be less clear if you had restricted it to $p(x)>0$.
---
Reply to Comment 1.1.1:
Comment: The authors appreciate the reviewer's helpful comments. We will be sure to update the paper based on your suggestions. | null | null | null | null |
ViSt3D: Video Stylization with 3D CNN | Accept (poster) | Summary: This paper proposes a 3D CNN based video stylization method which explicitly disentangles motion and appearance and adopts multi-phase training. Experiments show that the method achieves high-quality results.
Strengths: The proposed 3D CNN based framework and multi-phase training are novel and effective; the paper also extends AdaIN along the time dimension to AdaIN3D. The quality of the stylization results is also good.
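For readers unfamiliar with AdaIN, a minimal sketch of the temporal extension the review credits the paper with might look as follows. This is my own reconstruction in NumPy, not the authors' implementation: the function name, tensor shapes, and the choice to pool content statistics over (T, H, W) are all assumptions. AdaIN matches per-channel mean/std of content features to those of the style; a 3D variant would simply compute the content statistics over the temporal axis as well.

```python
import numpy as np

def adain3d(content, style, eps=1e-5):
    # content: video features of shape (N, C, T, H, W);
    # style: image features of shape (N, C, H, W).
    c_mean = content.mean(axis=(2, 3, 4), keepdims=True)
    c_std = content.std(axis=(2, 3, 4), keepdims=True) + eps
    s_mean = style.mean(axis=(2, 3), keepdims=True)[:, :, None]  # (N,C,1,1,1)
    s_std = style.std(axis=(2, 3), keepdims=True)[:, :, None] + eps
    # Normalize content per channel over (T, H, W), rescale with style stats.
    return (content - c_mean) / c_std * s_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(size=(1, 3, 4, 8, 8))
style = rng.normal(size=(1, 3, 8, 8))
out = adain3d(content, style)
```

After the transfer, each channel of `out` carries (approximately) the style's per-channel statistics jointly across all frames, rather than frame by frame, which is the point of pooling over the temporal axis.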
Weaknesses: I think the main weakness lies in the experiments.
1. In the ablation study, the authors only showed and described the results but did not analyze the reasons.
2. In the qualitative results, this paper compares with only a few previous methods, while some related works should also be discussed, for example:
Kotovenko D, Sanakoyeu A, Lang S, et al. Content and style disentanglement for artistic style transfer[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 4422-4431.
Kotovenko D, Sanakoyeu A, Ma P, et al. A content transformation block for image style transfer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 10032-10041.
Both papers showed high-quality video stylization results
3. For tasks where quality largely depends on subjective assessment, a user study is a common criterion. For this paper, I feel the comparison is not sufficient and a user study is needed.
4. In the quantitative comparison, the proposed method doesn't obviously outperform previous methods.
5. Lacks a comparison to single-image stylization with optical-flow motion compensation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why are only three previous methods included in the qualitative and quantitative comparison? There have been quantities of style transfer papers every year and many of them are about video stylization.
2. Why is user-study not included in the experiments?
3. What is the advantage compared to adding optical flow motion compensation to single frame style transfer method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper lacks several important parts:
1. The discussion and analysis of each part of the method are not included in the paper
2. A user study is missing from the experiments
3. More previous methods should be included in the comparison
4. It would be better if more results in diverse scenarios could be shown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments, we answer your questions and concerns as follows.
### Why only three methods compared
Video stylization is relatively less popular compared to image stylization. To the best of our knowledge, we reported the recent leading methods for this task. We have further included quantitative results from two more papers in the comparison table (final table provided in global comments), but as expected they perform below the leading works we had already cited and compared with. In the absence of specific citations, we are unable to give a better answer.
### Ablation study, results not analyzed
The ablation studies basically support the motivations given in our approach section. Due to lack of space we could not reiterate them; we will do so in the camera-ready version. We will also point out specifically that there are supporting videos in the supplementary material (showing before/after results) originally uploaded with the paper.
To summarize again:
- The first ablation shows that when the appearance and motion components in the features are not disentangled, the results have jerky motion artifacts, as the static style clip induces zero movement in the content clip. Once we do the disentanglement this is resolved. The supporting videos are in the supplementary material folder Videos_with_naive_extension_of_2D_stylization.
- The second ablation shows that the temporal loss is needed to make sure that pixel motion consistency is maintained in videos. Without this loss we get jittery movements in videos, and adding the temporal loss resolves this. Supporting videos are in the supplementary material folder Videos_without_temporal.
- The third ablation shows that without the intra-clip loss there is a slight drift in colors, as the stylization is not sufficiently tied across the different frames of the clip. With the intra-clip loss this is resolved. Supporting videos are in the supplementary material folder Videos_without_intra.
### User study
Not all previous works have reported a user study. Based on your kind suggestion we conducted a user study during the rebuttal period and report it in the global response, which we request you to kindly refer to. In summary, the proposed method is competitive with existing methods in terms of user preferences as well, while being the first demonstration of stylization using 3D CNNs.
### Comparison with methods
In the short period of time we could not compare to the methods specifically mentioned in the review, as their code or pre-trained models for inference were not available.
However, we were able to compare with SANet and MAST quantitatively, and also used these in the user study (so compared qualitatively).
We reiterate, however, that stylization is not a well-defined task with a single true output for a specific (content, style) pair. We have demonstrated that the proposed method is competitive with existing methods in terms of quantitative metrics, subjective qualitative results, and a user study.
### Advantage cf. adding optical flow to image stylization methods
We also found another paper which does specifically this [A], i.e. adds optical flow to 2D CNN based image stylization methods. We have added a video with comparisons to some of their results. In the first example, there is a large amount of flickering in the result generated with [A]; in the second example, that method smoothes out the texture of the pebbles on the beach. The proposed method does not have heavy flickering and also maintains the texture of the pebbles.
The two approaches give different results which may be used by practitioners in different use cases.
[A] Wang et al., Consistent Video Style Transfer via Relaxation and Regularization, Trans. Image Processing, 2020
### More previous methods in comparison
We had compared with AdaAttN, which had itself compared to many methods, so we hoped that demonstrating better results wrt AdaAttN would be sufficient. However, at your suggestion we have included two more methods in the comparison table and have given the updated table in the global response; kindly refer to that.
### More results in diverse scenarios
Due to space limitations we could not include more visual results in the main paper PDF. However, we had already included a wide variety of results in the supplementary material, which we summarize here: in the folder "Comparative Analysis" we had given 4 videos, each containing 3 (input video, target style image) pairs, for our method as well as AdaIN, MCCNet and AdaAttN. These covered indoor, outdoor, high-motion, animation and sports scenes. In addition, in the folder "Our results" we provided 6 result videos of one input video each (4 short videos and 2 longer videos) stylized with three different style images; these also cover a variety of scenes and styles. We request you to kindly refer to the supplementary result videos. In addition, we used 20 videos for the user study with 1 style each, which the users found reasonable as well (user study reported in the global author response). | Summary: Image stylization is more popular than video stylization; research on video stylization is scarce due to its challenges. This manuscript is the first to apply 3D CNNs to the video stylization task: it explicitly disentangles motion and appearance, stylizes the appearance part, and then adds back the motion to decode the final stylized video. The method is trained in multiple phases. Experimental results show the superiority of the proposed method over existing methods. However, there are still some problems in this manuscript.
Strengths: Stylization has been a very popular research area; however, video stylization is more challenging than image stylization. Most existing methods directly apply image stylization methods to videos by processing the videos frame by frame, and the results are not good. This manuscript proposes a novel method based on 3D CNNs and achieves better results than existing methods. Moreover, a large-scale dataset with 10,000 content clips curated from the public benchmark Sports1M is built for this task.
Weaknesses: The task is interesting and the proposed method shows some improvement, but the paper writing is a little weak. The logic of the article is confusing, especially in Section 3 APPROACH. The description and details of the network structure are not clear, making it hard to understand.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How the motion and appearance are disentangled by the 4 appearance subnets is not clear from your description.
2. In line 190, ‘O and I are the output and input clips respectively’, and in line 213, ‘O is the stylized clip’. It is confusing whether they are content or style clip?
3. Combined with the previous question, in Fig.2, it can be found that in all three phases, only content clip is used, where is the style clip? How to add style information to appearance and preserve motion?
4. In line 134, ‘Our encoder is a recent state-of-the-art 3D CNN, i.e. C3D [23]’, while C3D was proposed in 2015, so ‘recent’ may not be suitable.
5. There are some font inconsistencies, e.g., line 172 ‘Appearance Subnet 4’, line 202 ‘relu1_1, relu2_1, relu3_1, relu4_1’.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. The authors point out that some flashing still occurs in challenging edge cases.
2. In Table 1, it can be found that the proposed method did not achieve minimum error on all videos, which means it cannot preserve the motion in the original clip very well.
3. From the video results in supplementary material, it can be seen that the degree of stylization by the proposed method is not obvious, and the results still have some artifacts.
4. It is suggested to reorganize section 3 to introduce your method more clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments, we answer your questions and concerns as follows.
### Disentangle motion and appearance by 4 appearance subnets
The disentanglement of motion and appearance by the 4 appearance subnets happens through the phase 2 training. In phase 2 we keep the 3D CNN encoder fixed, so the features which are input to the appearance subnets have entangled motion and appearance information. We train the network to minimize the VGG-19 feature loss for each frame, and hence to retain only appearance information in the frames. When trained like that, the appearance subnets gate only the appearance information from the 3D encoder features, as only that is required to minimize the loss, which is applied independently to each frame and has no dependence on motion. The evidence for such disentanglement is indirect: when we do not do this, we get motion artifacts, which we explain by the static nature of the style clip (constructed by repeating the style image); but when we do this, those motion artifacts disappear, indicating that the features and the AdaIN3D-based statistics transfer affected only the appearance and not the motion.
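To illustrate the argument, here is a toy sketch of a per-frame perceptual loss (all names hypothetical; `feat` stands in for a frozen VGG-19 feature extractor, and frames/features are flat lists rather than tensors). The point is structural: the loss has no cross-frame term, so minimizing it cannot depend on motion.

```python
def l2(a, b):
    # squared L2 distance between two flat feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def per_frame_loss(decoded, targets, feat):
    # applied independently to each frame: there is no cross-frame term,
    # so minimizing this loss cannot require motion information
    return sum(l2(feat(d), feat(t)) for d, t in zip(decoded, targets))
```

This is only a sketch of the training signal's shape, not the paper's implementation.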
### In line 190, ‘O and I are the output and input clips respectively’, and in line 213, ‘O is the stylized clip’. It is confusing whether they are content or style clips?
We apologize for the confusion.
The losses, as functions of the two variables $I$ and $O$, are correct; what those variables refer to depends on the training phase.
To clarify, in L190 Eq.5 the reconstruction loss applies to phase 1 training of the autoencoder, so the two inputs to the loss function are the input clip, and the output reconstructed clip.
While in L213, Eq.10 the intra clip loss applies to phase 4 of the training of full stylization, so the inputs to the loss function are the frames of the output stylized clip.
Nonetheless, to avoid confusion, we will change $O$ in L190 to $O_r$, as it is the reconstructed output clip from the auto-encoder.
### How is style clip used and how is appearance and motion preserved while doing stylization?
The style clip is only used in the final phase of training, shown in Fig. 1. Style information is added using the AdaIN3D operation, which generates a combined feature by taking content and style features as input. The AdaIN3D operation introduces style to the disentangled appearance features only, which are then combined with the motion features. The combined features are then decoded, and the final losses (content, style, temporal and intra-clip, Eq. 11) make sure that the output is close in appearance to the content clip while having the style of the style clip, and at the same time does not have jittering and flashing artifacts.
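A minimal sketch of the AdaIN-style statistics transfer extended to 3D features, as described above. Our simplification (an assumption, not the paper's code): each channel is a flat list of activations over all T×H×W positions, standing in for a 5D tensor; each content channel is re-normalized to match the style channel's mean and standard deviation.

```python
from statistics import mean, pstdev

def adain3d(content, style, eps=1e-5):
    # content, style: dicts mapping channel -> flat list of activations
    # over all (time, height, width) positions of the clip
    out = {}
    for c in content:
        mc, sc = mean(content[c]), pstdev(content[c]) + eps
        ms, ss = mean(style[c]), pstdev(style[c]) + eps
        # normalize the content channel, then rescale to style statistics
        out[c] = [(x - mc) / sc * ss + ms for x in content[c]]
    return out
```

The only difference from 2D AdaIN is that the statistics pool over the time axis as well, which is what "AdaIN3D" suggests.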
### C3D proposed in 2015, ‘recent’ may be not suitable.
We will remove recent from the text mentioned.
### Font inconsistency, e.g., line 172 ‘Appearance Subnet 4’, line 202 ‘relu1_1, relu2_1, relu3_1, relu4_1”.
We wanted to highlight that those words indicate certain important layers’ features in the VGG-19 networks that are used. We will clarify this in the notations section.
---
Rebuttal Comment 1.1:
Comment: Some questions have been solved and additional results are provided, I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time and feedback on the paper, and reading the reviews. | Summary: The paper studies the task of video stylization. The paper aims to stylize the video using 3D CNN and AdaIn3D. To perform the stylization, the authors propose to disentangle motion and appearance first, and then stylize the appearance part using AdaIn 3D. The results show state-of-the-art results compared to the baseline methods.
Strengths: 1) The paper proposes a novel framework using 3D CNNs to perform stylization. To achieve this, multiple designs are proposed, such as the appearance subnets and the Entangle network.
2) It is reasonable to train the model in different stages to enforce the corresponding networks to learn their functionalities.
3) The results are significantly better compared to the baseline methods.
4) It is non-trivial to train a model with many sophisticated designs and achieve plausible results.
Weaknesses: 1) It is unclear why four appearance subnets are needed. If the number of appearance subnets is cut down, how is the performance affected?
2) Why is the Entangle Subnet necessary? What if the feature maps fed into the decoder are the weighted average of the outputs of the appearance network and the C3D encoder?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see my questions regarding the details of the network in Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments, we answer your questions and concerns as follows.
### Unclear why four appearance subnets are needed; effect on performance if their number is reduced
The different appearance subnets work at different scales. The C3D architecture can be seen as four sequential logical blocks, and we used one appearance subnet for the output features of each of those blocks. This was inspired by the skip-connection architecture of many pixel-to-pixel networks, e.g. U-Net for segmentation.
If we removed the appearance subnets, we would observe a deterioration in stylization quality along with undesirable artifacts. It has been shown qualitatively that in C3D features, appearance contributes more to the lower features (initial blocks) while motion contributes more to the higher features. If we removed the lower-layer appearance subnet we would observe distorted appearance in the output, while if we removed the upper-layer one, motion artifacts would corrupt the output.
### Why is Entangle subnet needed, decoder features could be weighted average of appearance net and C3D encoder features
Other than needing a simple projection layer to match the different feature dimensions, a weighted average as you suggest could be a valid candidate for feature fusion. The Entangle network we use is very lightweight, i.e. just one 3D conv and one ReLU layer, and it effectively performs a non-linear combination compared to a linear weighted average, making the fusion more effective.
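A toy illustration of this point (illustrative only: the real Entangle subnet is a learned 3D convolution, and the weights and bias below are made up, not learned). The bias plus ReLU make the fusion non-linear, unlike a plain weighted average:

```python
def relu(x):
    return x if x > 0.0 else 0.0

def entangle_fuse(app, mot, w=(0.6, 0.4), b=-0.1):
    # elementwise stand-in for "one 3D conv + ReLU": a learned affine
    # combination of appearance and motion features followed by ReLU;
    # the bias + ReLU give a non-linearity a weighted average lacks
    return [relu(w[0] * a + w[1] * m + b) for a, m in zip(app, mot)]
```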
---
Rebuttal Comment 1.1:
Comment: Thank you again for your comments and feedback, we are happy to answer any more questions you might have. Thank you for your time.
---
Rebuttal Comment 1.2:
Comment: I have read the rebuttal provided by the authors. The rebuttal addresses my questions regarding the details. I keep my rating as Weak Accept. | Summary: This paper proposes ViSt3D, which utilizes a 3D CNN (C3D) as the encoder backbone for video style transfer. However, the motion and appearance information in C3D is intrinsically entangled. To address this problem, ViSt3D aims to separate these two features with appearance subnets and AdaIN3D. Results on the Sports1M dataset are used to demonstrate the performance of the proposed method.
Strengths: - As claimed, this paper is the first to use 3D CNN for video stylization.
- The temporal and intra-loss improve the stylization stability as shown in the supplementary videos.
- Quantitative evaluations of optical flow demonstrate the effectiveness of the proposed method.
Weaknesses: - The usage of 3D CNNs is not well motivated. The authors should put more emphasis on the motivations and advantages of using 3D CNNs for stylization. This also makes the technical contributions a bit weak. And, as mentioned in the limitations, 3D CNNs also bring extra computational cost compared with 2D CNN-based methods.
- In the paper, the style clip is formed by repeating a static image multiple times. Thus, I do not understand why you need a C3D to extract the style feature -- a 2D CNN could be sufficient. I suggest the authors conduct related experiments and provide thorough discussions.
- In Equation (10), why not use the difference of warped features to measure the intra-clip consistency, as in Equation (9)? I think using the mean and standard deviation will hurt the results when the scene changes or there is large motion.
- The model needs to be trained with four cascaded phases. This makes the model less elegant and may be hard to train and reproduce.
- Comparison with SOTAs should be more comprehensive. The optical flow metric alone is not enough to evaluate the style transfer results (for example, directly outputting the content video would result in a relatively small error).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The authors are encouraged to address the weaknesses mentioned above, especially the motivations and contributions for using 3D CNN. If a 2D CNN is enough to address the stylization, it would be unnecessary to further design a disentangled module to solve a problem that is originally not there.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitation of the increase in both inference time and memory consumption. Although it may not be appropriate to directly compare the time and memory requirements of the approach with other methods based on 2D CNNs, it would still be helpful to give related statistics for a better understanding of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable comments, we answer your questions and concerns as follows.
### 3D CNN vs 2D CNN, solved problem, motivation
Stylization is not a deterministic problem, in the sense that there is no single correct stylization for an (input content, target style) pair. It is akin to a high-level filter which transfers characteristics from one image to another image or video. A 3D CNN was a natural choice when working with videos. 3D CNNs had never been used for this task, and we expected to obtain different stylization characteristics upon using them, as we eventually did and reported in the paper. 2D CNNs have also been used with some success for the task, but none of the methods (2D or 3D CNN based) can claim to be THE final solution. 3D CNNs do take more computational resources, but they produce competitive and useful results, as we reported in the paper and reinforce in the user study provided in the other response. Hence we believe our contribution of successfully doing stylization with 3D CNNs for the first time is valuable for the research community as well as practitioners. The simple motivation for using 3D CNNs was to determine whether stylization can be done with a 3D CNN at all (which, in hindsight, given the paper, looks obvious now, but was a hard problem to solve) and to derive a novel and different stylization which is useful.
### Why 3D CNN for style clip and not 2D CNN
Stylization requires the transfer of feature statistics from the style image to the content video/image. Just as, for image stylization, using one model for encoding the content image (e.g. ResNet) and another for encoding the style image (e.g. InceptionNet) is not expected to lead to sensible results, because the statistics of the intermediate features would be too different to be successfully transferred, in the present case using two different networks for encoding the content and style assets is not expected to work either. We did initial experiments with 2D CNN encoding for style images as a sanity check, but as expected it failed completely, i.e. the outputs did not converge to reasonable videos. Hence we moved on to encoding the style image with the 3D CNN by making a static clip, which is the naive extension of AdaIN with 3D CNNs for both content and style. We provided those results in the supplementary material folder Videos_with_naive_extension_of_2D_stylization.
Please note that similar repetition (as used for the single style image), albeit of features, was used to bootstrap/initialize the 3D CNN parameters in one of the 3D CNN video classification papers, so it is not a new concept and has been used successfully in the past.
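The static style clip construction described here is simply repetition along the time axis. A minimal sketch (hypothetical function name; a "frame" is just any image object):

```python
def style_image_to_clip(style_image, clip_len):
    # repeat the single style image clip_len times along the time axis,
    # so the same 3D CNN encoder can process both content and style clips
    return [style_image] * clip_len
```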
### Intra-clip loss, temporal loss, high motion, scene change
The intra-clip loss in Eq. 10 is to match the global color appearance of successive frames, while the temporal loss in Eq. 9 is to preserve pixel-level motion characteristics after considering the appropriate warping of frames. Even in the case of large motion, since Eq. 10 works with globally averaged values, it successfully performs its job. We had provided qualitative results without the intra-clip loss (folder Videos_without_intra) and without the temporal loss (folder Videos_without_temporal) in the supplementary material. Without the temporal loss we observe strong jittering, and without the intra-clip loss we observe a subtle flashing, as the average color of the frames drifts and then resets at a clip boundary. Adding the intra-clip loss fixes this flashing.
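A sketch of the intra-clip consistency idea as described above (simplified, and not necessarily the paper's exact Eq. 10: frames are flat lists of pixel values): penalize drift of the global mean/std between consecutive stylized frames.

```python
from statistics import mean, pstdev

def intra_clip_loss(frames):
    # penalize drift of global color statistics between consecutive
    # frames; working on globally averaged values makes this robust
    # even under large motion
    loss = 0.0
    for a, b in zip(frames, frames[1:]):
        loss += abs(mean(a) - mean(b)) + abs(pstdev(a) - pstdev(b))
    return loss
```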
Large sudden motions are challenging for all stylization methods; we had provided qualitative comparisons in the supplementary material (folder Comparative Analysis: result 2 has a large-motion clip from the Spiderman movie, and result 4 has a clip from the Avengers movie). The proposed method performs competitively or better than existing methods.
None of the video stylization methods address shot changes where the scene completely changes. The stylization is done on videos which have been separated by performing shot detection first, as we explain in our dataset creation section (l265).
### Model/training with four phases, less elegant, not reproducible
We agree that the current version of the model and training are quite involved. The current paper demonstrates that stylization can be done with 3D CNNs. In the work we are currently doing, we are shrinking the models in size and experimenting with end-to-end training with promising initial results. As a first result, we believe the current method is interesting for the community. We had already given the code of each subnetwork in the supplementary PDF, and each stage of training is a well understood training procedure for the community (auto-encoder, perceptual and reconstruction loss minimization), so we believe that the results can be reproduced by a reasonably skilled student researcher.
### Comparison to SOTA, optical flow not good/sufficient metric
We agree with this point, but we highlight that evaluating a subjective task is always hard, and some proxy metrics are generally used. Stylization, like some other tasks such as (semantic) edge detection or image captioning, does not have one true answer. So proxy metrics evaluating aspects of the output have been used by the community, which we also follow. In addition, in response to another reviewer's question, we also did a user study in the present rebuttal, and we request you to refer to that as well. In particular, among responses rating the preservation of content and the transfer of style, we see a trade-off: methods which preserve the content (in the extreme case, outputting the same video, as you mention) lag in transferring style. Overall the proposed method strikes a balance as good as or better than the existing methods, while being a completely novel way of doing stylization.
### Time and memory cf. 2D CNN
We processed 144 frames of 640x360. AdaAttN took 14s and 8GB of GPU memory, while the proposed method took 60s and 16GB of GPU memory, on a machine with a Core i9-10900X and an A4000 GPU.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your time and feedback. Kindly let us know, if you have any further questions after the rebuttal. | Rebuttal 1:
Rebuttal: We thank the reviewers and ACs for their valuable time and constructive comments on the paper.
The reviewers have raised many valid points and concerns which we have answered to the best of our abilities and hope that the reviewers will find them satisfactory.
We would like to reiterate that video stylization with 3D CNNs was not attempted before, and while we do not think any method can claim to be the absolute best for the task, as the task is subjective, the method we propose gives distinctive stylization when compared to existing methods. On the request of the reviewers we also did a user study where we found that our method is competitive to the existing methods in terms of the trade-offs between stylization and content preservation as evaluated by human subjects. We are looking forward to any questions or queries the reviewers might have given our responses.
### Details and discussion of user study
We selected 20 random stylized videos from a combination of 15 content videos and 15 style images. We considered four other leading style transfer methods for comparison, namely AdaIN, MAST, MCCNet, and AdaAttN.
We took a total of 900 votes from 15 users for this user study. We asked the users to vote separately for three preferences: Style, Content and Overall, i.e. which of the five presented options (i) transfers the style best, (ii) preserves the content best and (iii) is preferable overall.
The results of the user study are given in the attached PDF.
We observe that MCCNet performs best by a big margin in Style preference (31.3% MCCNet vs 20% AdaAttn and 16.7% proposed). While, our method leads in Content preference, closely followed by AdaAttN (43.3% vs 41%). In the case of Overall preference our method leads, and is closely followed by both AdaAttN, MCCNet (28.7%, 28.3%, 27.7% resp.). MCCNet tends to heavily stylize the output and distort the content, while the proposed method as well as AdaAttN maintain the content better. However, the styling as we observe in numerous qualitative examples in the supplementary as well, is quite different for these top performing methods. Hence we conclude that the proposed method is a competitive stylization method useful for end users.
### Discussion on the Optical flow metric table
As a response to a comment by reviewer qz5E, we were able to add two more methods, SANet and MAST, to the quantitative evaluation during the rebuttal period. We are giving the results here (table in the attached PDF) as they might be interesting for the other reviewers too. The optical flow metric is one of the possible metrics for the task; it helps to make sure that the stylization method preserves the motion in the original clip and that there are no obvious/drastic failures. All style transfer methods are expected to distort the content in some ways while performing stylization. Considering the average optical flow error, our method has the lowest mean error, and thus we can conclude that our method is on par with the leading methods for the task. Our method also mostly achieves rank 1 or 2 among the methods.
Pdf: /pdf/9c02cf71f0ec0d37a94924a291768bc5d454e00c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning | Accept (poster) | Summary: The paper presents a refined analysis of TS in Bayesian-regret reinforcement learning. Regret bounds are derived for tabular, linear, and finite mixture MDPs. The paper uses an information-theoretic approach: the information ratio, representing the exploration-exploitation trade-off, is analyzed and bounded by the episode length. The paper uses a discretization of the environment space, fixing a previous proof from the literature. The paper concludes by discussing related work and the optimality of the obtained regret bounds.
Strengths: The paper provides state-of-the-art Bayesian regret bounds for Thompson Sampling in reinforcement learning through a refined analysis of the surrogate environments and information ratio in many different settings, including tabular, linear, and finite mixtures. The obtained bound is general and independent of the dimension of transition/reward function space. Proof sketches are provided.
Weaknesses: The proposed analysis and bounds may have limited applicability to real-world RL problems beyond the considered settings.
The paper does not provide experimental results or empirical validation of the derived bounds.
Some assumptions and prior works are discussed without providing detailed explanations or comparisons.
Overall, the paper makes significant contributions to the theoretical understanding of TS in RL by providing refined analysis and state-of-the-art Bayesian regret bounds in various settings. However, practical applicability and empirical validation of the derived bounds need further investigation.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Could you please provide more references that specifically highlight the theoretical analysis of TS in the context of RL and bandit problems? Usually, Bayesian regret is easier to analyse than frequentist regret. Do you know if your approach could be applicable to frequentist regret?
What is the relationship between the Kolmogorov dimension for $l_1$-distance, the "value partition for surrogate learning," and the time-inhomogeneous Bayesian RL problem?
How does leveraging this relationship help in bounding the information ratio and achieving regret bounds in TS?
Can concrete estimates of the dimension be provided for linear, tabular, and finite mixtures applications? What is the significance of isolating the contributions of the information ratio and the cumulative mutual information terms?
Can you clarify the specific experimental results or empirical evidence that support the conjecture regarding the optimality of the regret bound when substituting $H$ in place of the variable?
Are there any specific experiments conducted with TS, assuming access to an oracle, that provide insights into the performance of the regret bound in practical scenarios?
You mention related work on bounding Bayesian regret for TS, including the use of confidence regions and algorithms such as UCBVI and OPPO. How does the proposed work in this paper compare to these approaches in terms of regret bounds and optimality? Are there any notable advantages or limitations of the proposed approach compared to the existing literature?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Can you provide a motivation for using TS-based policies rather than OFU ones? If this is about empirical performance, then it makes sense to run some experimental comparisons.
I am not sure I clearly understand what evidence or rationale is provided to support the claim that the regret in surrogate environments serves as a proxy for the main problem.
I think you could elaborate more on the methodology used in this analysis. How does it contribute to a better understanding of the trade-off between exploration and exploitation? Are there any limitations or assumptions associated with this analysis?
minor:
Lemma 11: fix the X vs. x notation
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your questions and comments, which help improve our paper. Please read the Author Rebuttal beforehand.
W:
1. See Q.6 and L.1 for experiments and empirical validations.
2. See L.4 and replies to reviewers: W.2 of 4ohk and W.1 of bSE3.
Q:
1. (a) Here are more refs: [1111.1797] for multi-armed bandits, [1611.06534] for linear bandits. For RL: [1306.0940, 1607.00215] for the tabular Bayesian RL problem with variants of TS. The theory of TS in general Bayesian RL only includes [2206.04640], which is the closest to our work.\
(b) It is yet unclear if our approach could be applied to the frequentist setting, as some of the derivations need averaging over the prior. We hope posterior consistency tools will play a role in frequentist analysis similar in spirit to that of concentration inequalities.
2. The $l_1$-distance determines the $l_1$-dim of the environment space, which bounds the size of any discrete set that approximates this space up to $\epsilon$ error. This surrogate set is used for surrogate learning (Lemma 1). Surrogate learning is meant to act as a proxy to the main learning problem for time-inhomogeneous Bayesian RL (Lemma 2). The proxy regret is then used to bound the main one, expressed via the $d_{l_1}$ estimate of the size of the surrogate set (Theorem 4). More precisely, the regret is split into an information ratio term and cumulative mutual information terms, and the latter is bounded by the size of the surrogate space. This shows how the relationship helps in bounding the regret.
3. These estimates are in Corollaries 5-7. Tabular: $SAH$; Linear: $l_1$-dim of the feature map space; Finite mixtures: $l_1$-dim of the space of mixture coefficients.
4. This is a technique introduced in [29]. On the significance of this decomposition of regret: (1) The ratio quantifies the exploration/exploitation trade-off of the algorithm at each step. Using Pinsker's inequality relating expectation and mutual information, one can bound this ratio. (2) The cumulative mutual information terms can always be bounded by entropy measures or relevant dimensions of the environments space. Our work improves and/or finds correct estimates for both terms in different RL settings.
5. The simulation supporting our conjecture is in [25, Fig 9]. There are 3 sets of data points in that figure, and the one in blue (PSRL) has the best asymptotic behavior of $\tilde{O}(\sqrt{HSAT})$. Note that we added $\sqrt{H}$ to convert to time-inhomogeneous.
6. Yes, such works include Optimistic PSRL [2209.14414], wherein PSRL with oracle access is the most performant. See also recent practical work in this area [2305.00477] stating: "Our extensive experiments on the Atari benchmark show that PSDRL significantly outperforms previous state-of-the-art randomized value function approaches, its natural model-free counterparts, while being competitive with a state-of-the-art (model-based) reinforcement learning method in both sample efficiency and computational efficiency".
7. Our work makes only a few reasonable assumptions, and does not assume a special RL setting such as the tabular one. Previous UCB-based works achieve competitive regret for frequentist model-free settings, and [1] has shown optimality with regret $\tilde{O}(d\sqrt{HT})$, where $d$ is the functional approximation dimension of the MDP. As frequentist model-free and Bayesian model-based settings are quite different, one must be careful in comparing them. Frequentist optimal/lower regret bounds can be larger than Bayesian regret bounds; our work achieves a regret $\tilde{O}(H\sqrt{d_{l_1}T})$ and conjectures the lower bound $\tilde{O}(\sqrt{Hd_{l_1}T})$, while not contradicting the optimal bound $\tilde{O}(d\sqrt{HT})$ in the frequentist model-free setting. In terms of limitations of TS: the main one is the oracle access to an optimal policy, which cannot always be satisfied efficiently. Nevertheless, clever engineering can make TS work even in large-scale Deep RL, as cited above.
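For context on the decomposition discussed in Q.4 and Q.7 above, here is a schematic version of the information-ratio regret bound in the style of [29]; the notation is simplified and illustrative, not the paper's exact statement:

```latex
% Schematic information-ratio decomposition (after Russo & Van Roy [29]).
% Let \Delta_\ell be the per-episode regret and \mathcal{E} the environment.
% If the information ratio is uniformly bounded,
%   (\mathbb{E}_\ell[\Delta_\ell])^2 / \mathbb{I}_\ell(\mathcal{E};\mathcal{H}_\ell) \le \Gamma,
% then by Cauchy--Schwarz:
\[
  \mathbb{E}\big[\mathfrak{BR}_T\big]
  = \sum_{\ell=1}^{T} \mathbb{E}_\ell[\Delta_\ell]
  \;\le\; \sqrt{\,T \,\Gamma\, \sum_{\ell=1}^{T}
      \mathbb{I}_\ell\big(\mathcal{E};\mathcal{H}_\ell\big)\,}
  \;\le\; \sqrt{\,T \,\Gamma\, \mathbb{H}(\mathcal{E})\,}.
\]
% The information ratio \Gamma and the cumulative mutual information
% (bounded by the entropy, or a relevant dimension, of the environment
% space) thus contribute separately to the regret, as described above.
```

This makes explicit the two terms the rebuttal refers to: bounding $\Gamma$ captures the exploration/exploitation trade-off, while the cumulative mutual information is controlled by the size of the (surrogate) environment space.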
L:
1. We cite the paper "why is PS better than optimism for RL?", which directly addresses this: "Computational results...demonstrate that PSRL dramatically outperforms existing algorithms based on OFU". In addition, more up-to-date works on PSRL [2209.14414, Fig. 1,3] compare it against the recent OFU-based UCBVI/UCBVI-B, wherein PSRL shows the lowest regret. These experiments provide enough support for the empirical performance of TS.
2. We address this by explaining the intuition and evidence. Intuition: the discrete surrogate environment set $\Theta^\epsilon$ is built to approximate the original environment set $\Theta$ up to a chosen accuracy $\epsilon$. The rationale behind building such a set is that any regret bound on this set is, up to some error related to $\epsilon$, a regret bound for $\Theta$, hence the use of the phrase "proxy for the main problem". Evidence: the supporting evidence is mathematically laid out in Sections 4 and 5, culminating in Theorem 4, where it is shown that the regret of the main problem scales with the size of $\Theta^\epsilon$. This is established by replacing every environment of the main problem by its $\epsilon$-close surrogate environment, and analyzing the regret in the surrogate space (Lemma 2). This proves the intuition behind the surrogate serving as a proxy to the main problem.
3. This reply is complementary to Q.3. We demonstrate how this trade-off is better understood in terms of the $l_1$-dim of $\Theta$ and a new notion called $\lambda$, the value diameter. Using posterior consistency tools, we achieve sublinear dimension scaling $d_{l_1}^{1/2}$ (vs. linear in previous tabular works). We also make a conceptual contribution by showing that, unlike in previous studies, the number of time-steps $H$ is not the right quantity to bound the information ratio. Instead, it is the value diameter.
4. On limitations/assumptions, as noted in Section 5, the posterior consistency assumption is needed to show $d_{l_1}^{1/2}$ scaling for the information ratio bound.
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and the rebuttal. I am satisfied with the answer provided by the authors. I will not change my score. | Summary: The paper shows a refined analysis of Thompson Sampling in RL. The analysis leverages the notion of Kolmogorov dimension, and results in an improved rate of the regret. The authors further presented the bounds in terms of several specific settings, which match the state-of-the-art results.
Strengths: The writing of the paper is clear and easy to follow. The paper studies an important problem in RL. The notion of Kolmogorov dimension as well as the corresponding analysis is novel. The paper also presents a complete discussion for specific settings, and makes sufficient comparisons with previous works, which shows the significance of its results.
Weaknesses: 1. While the bound relies on a new notion $\lambda$, the term can still be as big as $H$ in many cases. It is hard to quantify how much improvement is made by this notion; therefore, in the discussion of specific RL settings, the corresponding bounds only match the state-of-the-art results but do not improve them.
2. The paper doesn't provide a lower bound in terms of the proposed notion, which weakens the significance of this notion.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The authors mentioned the incorrectness of [12] in terms of surrogate loss. Can the authors discuss this and explain how they fixed the issue?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has addressed its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, which we will include accordingly in our final revision as detailed below. We also invite the reviewer to read the author rebuttal beforehand. Below are our replies:
**Weaknesses**:
1. Thank you for the comment. Given other reviewers' remarks, we have realized that we need to discuss in more detail the potential conceptual and practical improvement brought about by $\lambda$. A summary of this reply will appear in the revised version.\
The value-diameter $\lambda$ is bounded by another notion of environment-diameter, which is, roughly speaking, the longest path among the shortest paths between any two states. If states can be reached from one another quickly in environments, then $\lambda$ can be far smaller than $H$ (as alluded to after Definition 10). Perhaps more importantly, the conceptual contribution is the realization that the information ratio is not bounded by the terms $H$ and $d\_{l\_1}$, but by the value-diameter $\lambda$ and $d\_{l\_1}$. While it is true that $\lambda$ can always be upper bounded by $H$, it is important to recognize the nature of what is bounding the information ratio, which is not $H$, but the value-diameter; this opens the door to further optimizations in specific cases, one of which we just mentioned. Regarding the discussion of specific RL settings and the significance of our results, we would like to note again that our results generalize to nonlinear settings, and further, we achieve the first correct and state-of-the-art regret bounds for the Bayesian linear and finite mixture settings.
2. Thank you for the comment. We understand your statement as asking for a regret lower bound based on the notion $\lambda$. We thank you for reminding us, as we have mentioned this as a limitation of our work (after the section Conclusion). We will include this more explicitly as an open problem for future work in our revised version.
**Questions**:
1. Thank you for your question. We assume that you are referring to surrogate "environments"/"regret", as there is no notion of surrogate "loss" in our paper (or in the related work [12]). We will include a summary of our reply below in our revised paper to provide an outline of this matter.\
We recall that the goal is to construct surrogate environments that have a surrogate regret which approximates that of every environment in the same $\varepsilon-$value partition. We claim that the construction of surrogate environments in our paper corrects the one in [12]; the formalized statement is our Lemma 2, with proof and details given in our Appendix B. The incorrect proof by [12] in [App. B.1, 12] contains the use of a technical inequality lemma that does not apply to the setting the authors claim (please refer to our Remark 6 in Appendix B, line 502).\
What we show is that the desired property of a surrogate environment is achieved when the value function of TS is smaller than the average value over the partition. This informs us to take the surrogate environment as the average of the environments in the partition, and prove the statement.\
In addition to the surrogate environment construction, we corrected a fundamental issue with the surrogate information ratio bound. We have quoted the authors' argument in App. J.2.1, and shown in detail, with equations and intuition, why their argument fails. We elaborate on the intuition here as well.\
Note this ratio is used to bound the surrogate (and therefore, main) regret. In [12], the authors claim to bound this ratio by showing that the numerator (surrogate regret) is smaller than a particular surrogate mutual information: $\mathbb{I}^{\pi^*\_{\mathcal{E}}}\_\ell(\widetilde{\mathcal{E}}^*\_\ell;\mathcal{H}\_{\ell,H})$. Recall that the denominator in the (surrogate) information ratio is supposed to represent the information gain by the algorithm on the true (or surrogate) environment. What we observe is that the surrogate mutual information ratio that one should bound is:\
\
$
\frac{\left(\mathbb{E}\_\ell\left[V\_{1,\pi^*\_{\mathcal{E}}}^{\widetilde{\mathcal{E}}\_\ell^*}(s\_1^\ell)-V\_{1,\pi}^{ \widetilde{\mathcal{E}}\_\ell^*}(s\_1^\ell)\right]\right)^2}{\mathbb{I}\_\ell^{\pi}\left(\widetilde{\mathcal{E}}\_\ell^*; \mathcal{H}\_{\ell, H}\right)},
$\
\
where $\pi$ is the algorithm (so we select $\pi=\pi\_{\text{TS}}$). Indeed, clearly the algorithm can not know about the true environment $\mathcal{E}$, which makes it questionable to try to bound the surrogate regret
$\mathbb{E}\_\ell\left[V\_{1,\pi^*\_\mathcal{E}}^{\widetilde{\mathcal{E}}\_\ell^*}(s\_1^\ell)-V\_{1,\pi}^{\widetilde{\mathcal{E}}\_\ell^*}(s\_1^\ell)\right]$ by a mutual information such as $\mathbb I\_\ell^{\pi^*\_{\mathcal{E}}}\left(\widetilde{\mathcal{E}}\_\ell^*; \mathcal{E}\_{\ell, H}\right):= \mathbb I\_\ell\left(\widetilde{\mathcal{E}}\_\ell^*; \mathcal{H}\_{\ell, H}|\pi^*\_\mathcal{E} \right)$, where there is **assumed** knowledge of the true environment in the conditional $\pi^*\_\mathcal{E}$, as opposed to conditioning on the algorithm itself like in the ratio above. Therefore, the mutual information used in [12] is not the right one.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. My score will remain the same. | Summary: The paper presents uniform Bayesian regret bounds for Thompson Sampling by utilizing a uniform bound of information ratio and specific bounds of the Kolmogorov dimension in different settings.
Strengths: 1. The paper presents a uniform Bayesian regret bound for Thompson Sampling which yields results in a variety of settings, improving upon previous approaches in some scenarios.
2. The authors incorporate a comprehensive discussion with previous works which helps to understand the contribution of the proposed bounds.
Weaknesses: 1. Potential overclaim: in the introduction, the authors state that they first define Bayesian RL in the time-inhomogeneous setting, which might be an overclaim since previous works, such as [1], also discuss this setting.
[1] Brendan O'Donoghue. Variational Bayesian Reinforcement Learning with Regret Bounds.
2. The presentation of the paper would benefit from a table that includes all the results discussed in the paper for a comprehensive comparison.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why is time inhomogeneous specially highlighted in this work? If we consider time also as a part of the state observation, it would be homogeneous (i.e., share the same model across all timesteps). With that being considered, is it possible just to extend the time inhomogeneous bounds to this setting, and is the proposed bound also better than those?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments improving the paper, which we will include in our final revision. We also invite the reviewer to read the author rebuttal beforehand. Below are our replies:
**Weaknesses**:
1. Thank you for pointing this out. The phrase "we first define" was intended to mean that we are *starting* our presentation with this notion, similar to "we first do X/we then do Y", but we realize how the phrasing causes confusion, and so we change it accordingly in our revision. We have previously referred to [12] and other works for where this notion was previously introduced, but we thank the reviewer for providing the additional reference (to be also included in our revision).
2. We would like to thank the reviewer for this suggestion. For Bayesian RL, there are very few works, of which we can name [25] (for the time-homogeneous Dirichlet-prior tabular setting) and [12] (for time-inhomogeneous Bayesian RL); while the latter is general and makes comparable claims, part of our paper is essentially spent on correcting their claims and proofs. We will expand the comparisons with these works using a table. We note that frequentist model-based bounds in the table would be misleading, and such bounds are also limited to tabular settings.
**Questions**:
We thank the reviewer for their question and observation. Indeed, we see that if one applies the proposed mapping, we get a time-homogeneous environment as a result. Hence, we could apply the regret bounds in that setting. The regret bound in this setting would be, to our knowledge, the first Bayesian regret bound for these general time-homogeneous RLs. However, we are not sure if this mapping is surjective onto the set of time-homogeneous RL problems. We appreciate this remark. We will include it in our revised paper after our main theorem and mention the reviewer's contribution in the acknowledgments.
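For concreteness, the mapping discussed in this reply can be sketched as follows (the notation here is our own illustrative choice, not the paper's):

```latex
% Illustrative state-augmentation map: absorb the time index h into the
% state to turn a time-inhomogeneous MDP into a time-homogeneous one.
\[
  \tilde{s} = (s, h), \qquad
  \tilde{P}\big((s', h+1) \,\big|\, (s, h), a\big) = P_h(s' \mid s, a), \qquad
  \tilde{r}\big((s, h), a\big) = r_h(s, a),
\]
% so a single transition kernel \tilde{P} is shared across all timesteps,
% at the cost of multiplying the state space size by the horizon H.
```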
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply and I'll keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your reply and feedback. Please let us know if you have any further questions. | Summary: The authors propose a novel Bayesian regret analysis for the posterior sampling for reinforcement learning (PSRL) algorithm. The proposed regret bounds are applicable in a large variety of RL settings, such as tabular, linear and finite mixture MDPs.
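To make the algorithm under discussion concrete for readers less familiar with posterior sampling: below is a minimal, illustrative sketch of a tabular PSRL/TS loop with a Dirichlet posterior over transitions. The toy environment, the prior, and all names are our own illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 3, 2, 5  # toy tabular MDP: 3 states, 2 actions, horizon 5

# Unknown "true" environment, drawn once for this demo.
true_P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a dist over s'
true_R = rng.uniform(size=(S, A))                # mean rewards in [0, 1]

# Posterior: Dirichlet counts for transitions, empirical means for rewards.
alpha = np.ones((S, A, S))                       # Dirichlet prior = all ones
r_sum, r_cnt = np.zeros((S, A)), np.ones((S, A))

def plan(P, R):
    """Finite-horizon value iteration; returns a greedy policy per step."""
    V, pi = np.zeros(S), np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V            # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        pi[h], V = Q.argmax(axis=1), Q.max(axis=1)
    return pi

for episode in range(20):
    # 1. Thompson step: sample one environment from the posterior.
    P_hat = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)]
                      for s in range(S)])
    pi = plan(P_hat, r_sum / r_cnt)
    # 2. Act optimally for the sampled environment; 3. update the posterior.
    s = 0
    for h in range(H):
        a = pi[h, s]
        s_next = rng.choice(S, p=true_P[s, a])
        alpha[s, a, s_next] += 1
        r_sum[s, a] += true_R[s, a]
        r_cnt[s, a] += 1
        s = s_next
```

The "oracle access" limitation mentioned in the rebuttals corresponds to the `plan` step: here it is exact value iteration, which is only tractable in tiny tabular problems.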
Strengths: - A novel analysis of the posterior sampling algorithm in the Bayesian regret setting;
- The presented result holds not only in the setting of tabular MDPs but also in linear and finite mixture MDPs.
Weaknesses: - The weak notion of Bayesian regret is the main weakness of the presented result. Currently, there exist near-optimal results for posterior-sampling-based algorithms in the frequentist setting (see the Questions section for precise references).
- The computational side of the presented algorithm was not discussed.
- The upper bound in the linear setup seems to contradict the established lower bound in the setting of linear contextual bandits (see the reference below, for example). This effect requires additional explanation of why it is possible in the presented setting.
- Lattimore, Tor, and Csaba Szepesvári. *Bandit algorithms*. Cambridge University Press, 2020.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Are there any results in the literature where this type of dimension is called “Kolmogorov”? It seems that this definition is just the usual covering dimension (at least in the presented setup of $\ell_1$ distance).
- Are there examples where $\lambda$ is much smaller than $H$?
- What limits extending this analysis to a more general class of MDPs, such as MDPs with finite Eluder dimension?
- Where is the topological structure of $S$ and $A$ used in the proofs? What is their topological structure?
- Missing references on frequentist regret bounds for TS-based exploration in tabular and linear MDPs:
- Zanette, Andrea, et al. "Frequentist regret bounds for randomized least-squares value iteration." *International Conference on Artificial Intelligence and Statistics*. PMLR, 2020.
- Agrawal, Shipra, and Randy Jia. "Optimistic posterior sampling for reinforcement learning: worst-case regret bounds." *Advances in Neural Information Processing Systems* 30 (2017).
- Tiapkin, Daniil, et al. "Optimistic posterior sampling for reinforcement learning with few samples and tight guarantees." *Advances in Neural Information Processing Systems* 35 (2022): 10737-10751.
- Agrawal, Priyank, Jinglin Chen, and Nan Jiang. "Improved worst-case regret bounds for randomized least-squares value iteration." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 35. No. 8. 2021.
- Line 83: In [32], they consider a non-episodic setup. The state-of-the-art results in the episodic setup were presented in [4].
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper presented the theoretical research on Bayesian regret for posterior-sampling algorithms for reinforcement learning, thus it does not require discussion of ethical limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments improving the paper, and invite them to read the author rebuttal beforehand. Below are our replies:
**Strengths**:
Thank you for highlighting these strengths. We would also like to point out that the mentioned results are corollaries of our main contribution on general nonlinear RLs, stated in Theorem 4.
**Weaknesses**:
1. This suggests that the PSRL problem in the frequentist setting is (nearly) solved. However, among the references given, there exist results only for tabular Dirichlet-based RLs, a small subset of general RLs. Our work expands the scope of knowledge on Bayesian bounds for PSRL further than what exists in the frequentist case. Regarding the references: thank you, and we shall include them in the introduction for more context. Two of them are for model-free RL (ours is model-based PS), and the other two's results are very limited in scope (tabular RL with Dirichlet-based priors). To our knowledge, a frequentist bound for PS is still an open problem for general RLs, and those tabular cases solved optimally are for a *variant* of PSRL (Optimistic PSRL). Our bound is an appropriate first step towards a much more general principled study of Bayesian RLs.
2. A discussion on this will appear in our revision. For general RL settings, we refer to the ICML 2023 paper [arXiv:2305.00477], where experiments show TS outperforming other methods in Deep RL. We will also point to PSRL papers with experiments (some cited in Related Works) on TS and its variants [29,20,17,13,14], and to discussions of the computational efficiency of PSRL.
3. Despite sharing "linear" in the name, said bandit problem is not directly related to linear RLs. Though it could be cast as an RL problem, it will not be an episodic/Bayesian/inhomogeneous RL, and finally not a linear RL either, unless made to be so: (1) There is no "episode" in the definition of linear contextual bandits. (2) The inhomogeneity condition would need to be added too. (3) [Bandit Algorithms, 2020] mentions the frequentist case only. (4) Contexts $X_t$ are like states in an environment. According to the bandit's definition, these are pre-selected regardless of the actions, implying an environment where transitions $P(s'|s,a)$ are independent of $a$. (5) A linear RL requires these transitions to be linear, i.e., expressible as $\langle \phi(s), \psi(s')\rangle$; there is no parallel assumption in the bandit problem. So one would need to consider "episodic inhomogeneous Bayesian linear contextual bandits with linear contextual transition functions"! It is unclear what the lower bound for the regret would be after imposing this many assumptions.
**Questions**:
1. The term "Kolmogorov", as some limsup of covering *numbers*, has appeared before, e.g. in [arXiv:1406.1853]. Regarding terminology: our $l_1$-dimension is a type of upper box dimension, also referred to as the "Kolmogorov" dimension [Wikipedia, "Minkowski-Bouligand dimension"]. While most notions of dimension match on well-behaved metric spaces, their definitions/scopes are not identical. In particular, the *covering dimension* [Wikipedia, "Lebesgue covering dimension"] is a topological notion, while the upper box dimension requires a metric. Further, we have (for metric spaces):\
upper-box dim $\ge$ lower-box dim $\ge$ Hausdorff dim $\ge$ large inductive dim = covering dim;\
The first two inequalities are in the Minkowski-Bouligand wiki; the last two are in [Wikipedia, "Inductive dimension"]. Every single one of these inequalities can be strict; e.g., the metric space of rational numbers in $[0,1]$ with the $l_1$ metric has box dimension one and Hausdorff dimension zero, so the covering dimension is also zero [Wikipedia, "Hausdorff dimension"]. All in all, this motivates the use of the exact and appropriate dimension terminology.
2. Thank you for the question. This discussion will be included in our final version. Assume every state is reachable from any other in at most $D$ steps (defined more precisely as the "MDP diameter"). Using $0\le r(s,a)\le 1$, it can be shown that $\lambda \le D$. Therefore, if environments have small diameters, $\lambda$ can be much smaller than $H$. In addition, we note the conceptual contribution: the information ratio is bounded by (the $l_1$-dimension of $\Theta$) times (the diameter of the value function). The latter is always $\le H$, but its nature is different from $H$.
3. Thank you for the question. We will summarize what follows in our conclusion/future work. With regard to applicability to general MDPs, as long as our assumptions regarding posterior consistency hold, the analysis applies. We conjecture that (at least a weakened version of) our posterior consistency assumption should hold in general, and we leave that for future work. As for the (in)finiteness of the Eluder dimension, especially used in frequentist bounds, it is of no relevance here. As Theorem 4 states, there is no assumption except posterior consistency. One could ask about the relation between the Eluder and $l_1$ dimensions, and which is easier to derive. We leave this for future work.
4. There is no assumption other than those in the preliminaries: topological spaces with probability measures. The Bayesian bound does not depend directly on some dimension of these spaces. However, given that transition functions are defined on $\mathcal{S},\mathcal{A}$, it is not surprising to see the $l_1-$dim being expressed in terms of $\mathcal{S},\mathcal{A}$, such as in the tabular case.
5. We addressed this under "Weaknesses 1.".
6. Thank you for the correction. Please note this is for the tabular case only. To our knowledge, there is no lower bound in the frequentist model-based setting for general RLs.
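A heuristic version of the bound mentioned in reply 2 above (our own sketch; the paper's precise statement and proof may differ): if any state is reachable from any other within $D$ steps and rewards satisfy $0 \le r(s,a) \le 1$, then

```latex
% Heuristic sketch (not the paper's proof): travelling from the state
% attaining the minimum value to the one attaining the maximum takes at
% most D steps, forgoing at most reward 1 per step, so
\[
  V(s_{\max}) - V(s_{\min}) \;\le\; D
  \quad\Longrightarrow\quad
  \lambda \;\le\; D,
\]
% while \lambda \le H always holds, since rewards lie in [0, 1] over an
% episode of at most H steps.
```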
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. In particular, my main concern regarding the potential contradiction with the lower bound in linear setting was properly addressed.
Just as a small remark, under the covering number I meant the definition of [1905.00475]. However, I appreciate a deep discussion on the comparison between different types of the dimensions.
Overall, I am happy to increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for the reply. We appreciate very much your valuable feedback and the score increase. | Rebuttal 1:
Rebuttal: We thank very much all reviewers for their questions and comments, which we will include in our revision. Please note that we had to address each reviewer's response within the character limit. We kindly invite all reviewers to ask any further questions they have in the discussion period.
Please also note that in our replies, references are cited with either of these two formats:
(1) With the arXiv code, e.g. "[1406.1853]" is meant to reference arXiv:1406.1853, Or,
(2) With the associated number in the reference section of the paper, e.g. [29] refers to "Daniel Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling. Advances in Neural Information Processing Systems, 27, 2014." | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers the problem setting of Bayesian reinforcement learning, in which both the transition function and the reward function are sampled from a known prior distribution. The authors study Thompson sampling in this setting and prove a Bayesian regret bound of order $O(\lambda \sqrt{dT})$, where $\lambda$ is the average value diameter induced by the prior, $d$ is the Kolmogorov dimension (with respect to $l_1$ distance) of the environment, and $T$ is the number of episodes the learner interacts with the environment.
The authors instantiate their general result to two previously studied settings.
- In the tabular setting, the authors show that their main result implies a regret bound that matches the previous state of the art by Osband and Van Roy (2017), but generalized to hold over any prior distribution rather than specific to the product of Dirichlet distribution prior considered by Osband and Van Roy.
- In the linear MDP setting, the authors give a regret bound of $O(\lambda \sqrt{d^f T})$, where $d^f$ is the Kolmogorov l1 dimension of the feature space. The authors also present a counterexample showing that the previously claimed state of the art due to Hao and Lattimore (2022) was in fact incorrect.
The paper also provides corollaries for specialized finite mixtures settings.
Strengths: The paper's main contribution is a general treatment of Thompson sampling in MDPs. Specifically, the paper claims to provide the most general results to date on the Bayesian regret of Thompson sampling in RL. These results appear to generalize the results of Osband and Van Roy (2017) in the tabular case, and, when their counterexample to Hao and Lattimore (2022) is taken into account, provides the tightest upper bounds in the linear MDP case. The paper clearly signposts these contributions and provides proofs for their claims.
Weaknesses: Some of the theorem statements/assumptions could be more explicit. For example, the strong consistency assumption (Assumption 1) is not rigorously defined until Appendix J. Also, in my reading of the proof of Theorem 4, it appears that there is another assumption needed in the statement (Assumption 2 from Appendix D). I think it is also worth pointing out that T_0 in Theorem 4 is doing a lot of heavy lifting, as it seems like it could actually be a very large constant, depending on the prior and the structure of the RL problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: On line 138, it is claimed that the law of Thompson sampling aligns with the true posterior distribution. Is this true with no other assumptions? For example, if two environments induce the same optimal policy, then it seems like this would not be true.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does point out that they do not have lower bounds to substantiate the tightness of their upper bounds. Another limitation of the paper, not mentioned by the authors, is that the analysis is limited to the setting where the prior is specified with perfect accuracy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for their thorough and careful summary of our paper and our results. We greatly appreciate the time and the comments made to improve the manuscript. We invite the reviewer to read the Author Rebuttal beforehand.
**Weaknesses**:
$\bullet$ Thank you for raising this issue. We will give more context and rigor to Assumption 1 within the main text, and we will bring forth the second assumption into the main text as well.
$\bullet$ We will emphasize the limitation with respect to $T_0$; indeed, while it may not change the asymptotics of the bound, its effect can be dominant even for large $T$ in practice.
**Questions**:
Thank you very much for your question. This raises a subtle mathematical point which can be addressed without any additional assumptions. Our definition of TS is exactly the one by [12], which itself follows the seminal work on IDS [29] (Russo \& Van Roy, 2014). We will include the following reply in our revised version and acknowledge the reviewer's contribution.
The question is: If two or even a nonzero-measure set of environments give the same optimal policy, then how could one have
$\mathbb{P}(\mathcal{E}|\mathcal{D}\_\ell)=\mathbb{P}(\pi^\ell\_{\text{TS}}=\pi^*\_\ell|\mathcal{D}\_\ell),\forall\mathcal{E}$, while also having the latter as a probability measure on the set of optimal policies $\Pi^*$, i.e. $\int\_{\Pi^*} \mathbb{P}( \pi^*|\mathcal{D}\_\ell) \text{d} \rho\_{\Pi^*}= 1$?
The answer is to define the measure $\rho\_{\Pi^*}$ on $\Pi^*$ so that it assigns the appropriate measure to each optimal policy $\pi^*$, based on the set of environments for which $\pi^*$ is an optimal policy. Mathematically, this means that the map $star : \Theta \to \Pi^*$, where $star(\mathcal{E}) = \pi^*\_{\mathcal{E}}$, must be used to define $\rho\_{\Pi^*}$:
$ \rho\_{\Pi^*}(\mathcal{O}) := \rho(star^{-1}(\mathcal{O})), \ \forall \mathcal{O} \subset \Pi^* $
i.e., $\rho\_{\Pi^*}$ is defined as the push-forward of the prior measure $\rho$ on the set of environments under the map $star$. This ensures that even when a nonzero measure set of environments have the same optimal policy, it is possible to postulate the law of TS to be $\mathbb{P}( \mathcal{E}|\mathcal{D}\_\ell) = \mathbb{P}( \pi^\ell\_{\text{TS}} = \pi^*\_\ell |\mathcal{D}\_\ell), \forall \mathcal{E}$.
**Limitations**:
We thank the reviewer for their observation. Studying cases with prior misspecificity will be included as part of our limitations and future studies in our revised version.
---
Rebuttal 2:
Comment: Thank you to the authors for the thorough response and for answering my question. After reading the response and the other reviews, my view of this paper remains positive, and I will keep my score as accept. | null | null | null | null | null | null |
Any-to-Any Generation via Composable Diffusion | Accept (poster) | Summary: They present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel, and its input is not limited to a subset of modalities such as text or image. To handle the absence of training datasets for many combinations of modalities, they also propose aligning modalities in both the input and output space.
Strengths: 1 One model that takes any combination of modalities as input or output is novel and promising.
2 Given the lack of training data, aligning different modalities is very difficult. The proposed alignment method is very interesting.
Weaknesses: 1 The simple weighted interpolation of different representations is not so convincing. Why does this method work?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: see above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: not addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! Please find our response below:
> **1. "Missing discussion of limitation and societal impact."**
The discussion of limitation and societal impact can be found in Section D of the appendix. We attach those paragraphs below:
Deepfakes and Misinformation: As part of a common issue for generative AI models, the ability of CoDi to generate realistic and synchronized multimodal outputs also raises concerns about the creation and dissemination of deepfakes. Malicious actors could exploit this technology to create highly convincing fake content, such as fabricated videos or audio clips, which can be used for misinformation, fraud, or other harmful purposes.
Bias and Stereotyping: If the training data used for CoDi is biased or contains stereotypes, the generated multimodal outputs may also reflect these.
> **2. "The simple weighted interpolation of different representations is not so convincing. Why does this method work?"**
The input modalities are encoded by contrastively learned encoders, so their feature representations are aligned and scaled within the same space and distribution. Assuming the individual embeddings live in a continuous space where distances and directions have meaningful interpretations, a weighted average also lies within this space. The resulting representation maintains these properties, preserving the relationships between concepts.
There are also many advantages and reasons to use simple weighted interpolation:
**Simplicity and explainability:** Weighted averaging is a straightforward operation that is easy to understand and implement. It also allows for some level of interpretability, as one can analyze the contribution of each modality by manipulating the weights and observing the effect.
**Robustness:** If one modality is noisy or missing some information, the other modalities can compensate. The weighted average can be seen as a form of ensemble, potentially providing a more stable and robust representation.
**Flexibility in Emphasizing Modalities:** By adjusting the weights, you can emphasize or de-emphasize certain modalities according to their relevance or reliability for the task at hand.
**Efficiency:** This approach can be computationally efficient, as it doesn't require extensive fine-tuning or additional layers to merge the embeddings.
That said, we agree that it is a meaningful future work direction for creating learned or more complex representations of the contrastive aligned input embeddings to improve the complex interaction of different modalities.
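To make the mechanism above concrete, here is a minimal numpy sketch of weighted interpolation of aligned embeddings. The function name, the weight normalization, and the final re-projection to the unit sphere are illustrative assumptions, not CoDi's exact implementation:

```python
import numpy as np

def interpolate_conditions(embeddings, weights):
    """Weighted interpolation of modality embeddings in a shared space.

    embeddings: list of (d,) vectors from contrastively aligned encoders,
    assumed unit-normalized; weights: relative emphasis per modality.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights so the average stays in-distribution
    combined = sum(wi * e for wi, e in zip(w, embeddings))
    return combined / np.linalg.norm(combined)  # project back to the unit sphere

# e.g. condition a diffuser with 70% emphasis on text, 30% on audio
# (variable names hypothetical):
# c = interpolate_conditions([text_emb, audio_emb], [0.7, 0.3])
```

Because the encoders place all modalities on a shared sphere, the interpolated vector remains a valid point in that space, which is why the simple average can be fed to the diffuser's cross-attention unchanged.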
---
Rebuttal Comment 1.1:
Title: Rebuttal Read
Comment: Thanks for your answer; I have read it. | Summary: This paper presents a method that can generate any combination of output modalities, including language, audio, image, or video, from any combination of input modalities. The idea here is to first align four modalities in a shared feature space, and then learn to generate one or more modalities from that shared space. This design enables many combinations of modalities despite the lack of training datasets. Since the feature space is shared, the method is also flexible to extend to other modalities.
Strengths: * The idea of any-to-any generation is interesting, and it enables many different tasks in one model.
* The framework is flexible and customizable to many other potential modalities, such as semantic maps, heat map, depth map and so on.
* The performance of the proposed method achieves comparable or better results than previous SOTA methods.
Weaknesses: * The method part is not clear. The relation among the image diffusion model, video diffusion model, vision encoder, and vision UNet is confusing. Since four diffusion models are introduced but only three types of encoders and UNets are shown in Figure 2, it is not clear whether the image and video models share parameters.
* The evaluation of Table 3 is not sufficient. Only the text-video faithfulness (CLIPSIM) is evaluated, while the video quality (FVD) is not evaluated.
* The proposed framework enables many different tasks. However, it does not outperform previous SOTA methods in many tasks, such as text-to-video generation, text-to-image generation, image captioning and video captioning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * From Table 8, using both text and audio as input achieves a higher FID compared to using each single modality as input. Could you explain why the model achieves worse performance with more information as input?
* From table 2 and table 3, CoDi does not outperform previous SOTA results. Do you think a model that can do all tasks need to sacrifice its performance on each specific task?
* During training, the text encoder weights are frozen after training with images, would it result to a suboptimal problem when training with other modalities?
* In Sec 3.3, image diffusion model and video diffusion model are introduced separately. However, in Figure 2, only vision UNet and Vision Encoder are shown. Does it mean image diffusion model share parameters with video diffusion model during training?
* In table 4, why CoDi can outperform other diffusion-based method in image captioning?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review. We are committed to addressing the concerns raised to enhance the quality of this paper.
> **1. "The method part is not clear. The relation among image diffusion model, video diffusion model, vision encoder and vision unet is confusing. Since 4 diffusion models are introduced and only 3 types of encoders and unet are shown in Figure 2, It’s not clear whether image and video models share the parameters or not."**
In lines 148-149, we stated that we construct the video diffuser by extending the image diffuser with temporal modules. This implies that image and video generation share the same model. Concretely, both the encoder and the UNet use the same architecture for image and video: for the vision encoder, we add temporal layers on top of the image encoder for video encoding; similarly, for the vision UNet, we add temporal layers on top of the image UNet for video generation. In Appendix A, we also discuss the detailed video architecture. We will make this clearer in the paper.
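To make the "temporal layers on top of the image UNet" idea concrete, below is a rough numpy sketch of a factorized temporal self-attention layer, a common design for extending image diffusers to video. The function and projection matrices are illustrative assumptions, not the actual CoDi module:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(feats, Wq, Wk, Wv):
    """Attend over the time axis independently at each spatial location.

    feats: (B, T, H, W, C) features produced frame-wise by the image UNet.
    Returns features of the same shape, mixed across time only, so all
    spatial (image) weights can be reused unchanged.
    """
    B, T, H, W, C = feats.shape
    # fold space into the batch so time becomes the attention sequence
    x = feats.transpose(0, 2, 3, 1, 4).reshape(B * H * W, T, C)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ v
    return out.reshape(B, H, W, T, C).transpose(0, 3, 1, 2, 4)
```

In such designs the temporal layer is typically inserted after each spatial block and zero-initialized at its output projection, so the video model starts as an identity extension of the pretrained image model.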
> **2. "The evaluation of Table 3 is not sufficient. Only the text-video faithfulness (CLIPSIM) is evaluated, while the video quality (FVD) is not evaluated."**
In the table below, for video generation on UCF-101, CoDi performs competitively with the state of the art, matching the observation from the CLIPSIM metric on MSR-VTT.
| Method | Zero-Shot | IS ($\uparrow$) | FVD ($\downarrow$) |
|------------------------------------|-----------|---------|----------|
| CogVideo (Chinese) | Yes | 23.55 | 751.34 |
| CogVideo (English) | Yes | 25.27 | 701.59 |
| Make-A-Video | Yes | 33.00 | 367.23 |
| Video LDM | Yes | 33.45 | 550.61 |
| **CoDi (Ours)** | Yes | 32.88 | 596.34 |
> **3. "The proposed framework enables many different tasks. However, it does not outperform previous SOTA methods in many tasks, such as text-to-video generation, text-to-image generation, image captioning and video captioning."**
Please see our response in General Response 2 for the full discussion. We would like to reiterate that the main focus of CoDi is not to beat previous text-to-X SOTA methods, because many of them are proprietary and use private training data. In fact, our audio and text diffusion models achieve SOTA performance, and the image diffusion model (DM) uses the best open-sourced one (Stable Diffusion 1.5) at the time of this project. We acknowledge that CoDi's video DM has a gap relative to previously reported SOTAs (such as Imagen Video and Make-A-Video); however, those DMs are closed-source and trained on private data (such as company-internal videos). In contrast, CoDi's video DM is trained on public data and open-sourced.
The main focus of CoDi is enabling different pretrained DMs to communicate and interact with each other, such that we can achieve any-to-any generation even without the existence of paired training data.
Moreover, CoDi is designed with composability and modularity at its core. This means that it can readily incorporate various diffusion models into the framework without requiring a significant amount of retraining. This advantageous quality ensures that CoDi remains agile and adaptable, allowing it to seamlessly leverage many state-of-the-art (SOTA) diffusion models as they become available.
> **4. "From Table 8, using both text and audio as input achieves higher FID compared to using each single modality as input. Could you explain why the model achieves worse performance with more information as input?"**
Please see our response in General Response 2 for full discussion. In general, the primary goal of the model under discussion was to demonstrate the ability to handle multiple modalities without significant performance degradation. The slight increase in FID scores from 14.2 to 14.9 is not seen as a meaningful degradation, especially given the statistical significance level (p=0.086), and this trade-off is justified by the model's increased flexibility and broader application potential. Additionally, CLIPSIM metrics affirm the model's faithfulness in video generation to the input text, even when adding audio modalities. Table 10 further illustrates CoDi's effectiveness in integrating different input modalities, showing clear improvement for video and audio joint generation when adding the image modality.
> **5. "During training, the text encoder weights are frozen after training with images, would it result to a suboptimal problem when training with other modalities?"**
The proposed method bridging alignment leverages the property that the text encoder is frozen. There are several advantages or reasons:
**Efficiency:** The training is much more efficient (without finetuning the text encoder and jointly training all combinations of contrastive learning). Finetuning the text encoder will result in double the cost for all encoder training since text encoder is the bridging encoder that participates in all training.
**Data scale:** The text and image models are trained with a very large scale dataset with 400M which has very strong generalization potential. Such joint image and text embedding learned will be sufficient to be extended to other modalities. On the other hand, the text-audio and text-video dataset has a much smaller scale with only a few million samples.
**Learnable diffusion:** Regardless of the contrastive learned encoders’ performance, the diffusion model cross attention is trainable and will adapt its distribution to the desired or target tasks.
> **6. "In table 4, why CoDi can outperform other diffusion-based method in image captioning?"**
We use LAION-400M as the training data whereas previous works like SCD-Net shown in the table 4 uses a much smaller dataset, MSCOCO, which has below 1M examples. Training on significantly more data with similar architecture will result in better performance.
---
Rebuttal Comment 1.1:
Comment: For question 2, it's necessary to provide FVD results on MSRVTT to compare video qualities with other methods. Competitive CLIPSIM does not guarantee competitive FVD. Competitive FVD on UCF101 also does not guarantee competitive FVD on MSRVTT.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review.
We acknowledge the importance of comparing our method with previous work using the FVD metric on MSR-VTT. However, almost no previous work reports FVD on MSR-VTT, and most of these works are not open-source, so we cannot evaluate them ourselves. CogVideo stands out as an exception, being the only open-source video generation model available (it is already in Table 3), and as such we have included its FVD score in our table.
Below, please find the extended Table 3 showcasing MSR-VTT text-to-video generation performance, now including FVD scores.
### Table 3: MSR-VTT text-to-video generation performance extended to FVD.
| Method | Zero-Shot | CLIPSIM $\uparrow$ | FVD $\downarrow$ |
|-----------------|:---------:|:---------:|:---------:|
| GODIVA | No | 0.2402 | - |
| NÜWA | No | 0.2439 | - |
| CogVideo | Yes | 0.2631 | 801.98 |
| Make-A-Video | Yes | 0.3049 | - |
| Video LDM | Yes | 0.2929 | - |
| CoDi (Ours) | Yes | 0.2890 | 612.02 | | Summary: The paper introduces Composable Diffusion (CoDi), an innovative generative model capable of producing any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities simultaneously and is not limited to a subset of modalities like text or images. To address the challenge of lacking training datasets for many modalities combinations, the authors propose a modality alignment approach in both the input and output space. This enables CoDi to condition freely on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a unique composable generation strategy that establishes a shared multimodal space through alignment in the diffusion process. This allows for the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Offering high customization and flexibility, CoDi achieves impressive quality in joint-modality generation and either outperforms or matches the state-of-the-art unimodal models for single-modality synthesis.
Strengths: 1. The paper is addressing an important problem of mapping modalities from any domain to any domain without fully paired data.
2. The proposed method is novel and reasonable. It is good to see that each different component can be trained separately.
3. The proposed bridging alignment is interesting.
Weaknesses: The proposed method shares some similarities with previous works. Nevertheless, this paper still contributes to the community in my opinion. It could be better to have a more specific discussions on the difference with the related work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks, we agree that specific comparisons can provide more insight and clarify the unique contributions of CoDi. Here is how we position our work relative to different areas and communities, extending the discussion in the related work section:
>**1. "Comparison with previous diffusion models."**
We first provide a context to the current diffusion models community, in addition to the diffusion background introduced in 66-81.
**Scope:** Most diffusion models focus on specific generative tasks, such as text-to-X or single-to-single generation. Some examples are text-to-video, text-to-image, image/audio/video captioning, etc. In contrast, our work ambitiously aims to build a versatile framework for any-to-any generation within the realms of image, text, audio, and video.
**Multi-modal Integration:** Previous works [1,2,3,4] that involve multi-modal generation often lack a unified encoding space [1] or a method to maintain high correspondence between output modalities effectively [2,3]. CoDi innovatively addresses these limitations by proposing bridging alignment that facilitates the integration of various input modalities and ensures high correspondence in the generated outputs.
[1] Liu, Haotian, et al. "Visual instruction tuning." arXiv preprint arXiv:2304.08485 (2023).
[2] Xu, Xingqian, et al. "Versatile diffusion: Text, images and variations all in one diffusion model." arXiv preprint arXiv:2211.08332 (2022).
[3] Masahiro Suzuki and Yutaka Matsuo. A survey of multimodal deep generative models. Advanced Robotics, 36(5- 6):261–278, 2022.
[4] Wu, Mike, and Noah Goodman. "Multimodal generative models for compositional representation learning." arXiv preprint arXiv:1912.05075 (2019).
>**2. "Comparison with previous general multimodal frameworks."**
We also provide a broader context to other general multimodal frameworks like BLIP [5], Flamingo [6], Llava [1], etc, in addition to the multimodal background introduced in 82-89.
**Focus on Generation vs. Reasoning:** While these frameworks emphasize multi-modal reasoning tasks such as question answering or dialogue, CoDi's primary goal is to enable flexible generation from any combination of input modalities. This distinct focus sets our work apart and addresses a different set of challenges and opportunities.
**Bridging Alignment Contribution:** A key innovation in CoDi is the design of bridging alignment, which reduces the quadratic training/data cost to linear, enabling efficient handling of unpaired data. This technical advancement further distinguishes our work and has broad implications for efficiency and scalability.
We recognize the value in a more comprehensive discussion with related works and will indeed include this in the revised manuscript. This discussion will provide readers with a deeper understanding of CoDi's place within the existing landscape and highlight our novel contributions more prominently.
[5] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International Conference on Machine Learning. PMLR, 2022.
[6] Alayrac, Jean-Baptiste, et al. "Flamingo: a visual language model for few-shot learning." Advances in Neural Information Processing Systems 35 (2022): 23716-23736.
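The quadratic-to-linear claim in "Bridging Alignment Contribution" can be illustrated with a small counting sketch (the modality list is illustrative; text serves as the bridging modality, as described above):

```python
from itertools import combinations

modalities = ["text", "image", "audio", "video"]

# Full pairwise contrastive alignment: every modality pair needs its own
# paired dataset and training run -- O(N^2) in the number of modalities.
pairwise = list(combinations(modalities, 2))

# Bridging alignment: align each non-text modality to text only, relying
# on the shared text-aligned space to connect the rest -- O(N).
bridged = [("text", m) for m in modalities if m != "text"]

print(len(pairwise), len(bridged))  # 6 vs. 3 pairs for four modalities
```

The gap widens quickly: adding a fifth modality would require 10 pairwise datasets but only 4 bridged ones.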
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have no other questions. | Summary: The paper presents a new generative model called Composable Diffusion (CoDi). This model is capable of generating any combination of output modalities from any combination of input modalities, including language, image, video, or audio. Unlike other models that are limited to a subset of modalities like text or image, CoDi can generate multiple modalities in parallel.
The authors have designed CoDi to align modalities in both the input and output space. This allows the model to condition on any input combination and generate any group of modalities, even if they are not present in the training data.
A key feature of CoDi is its novel composable generation strategy. This involves building a shared multimodal space by bridging alignment in the diffusion process. This feature enables the synchronized generation of intertwined modalities, such as temporally aligned video and audio.
The paper reports that CoDi achieves strong joint-modality generation quality. It either outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis.
Strengths: 1. Originality: The paper introduces Composable Diffusion (CoDi), a new model in multimodal generation. This model is designed to process and generate modalities across text, image, video, and audio simultaneously. This is a novel contribution as it enables the generation of various output modalities from different combinations of input modalities.
2. Quality: The authors have conducted extensive experiments to demonstrate the capabilities of CoDi. The results show that CoDi can generate single or multiple modalities from a wide range of inputs. The model's performance is competitive with state-of-the-art models in tasks such as image and video generation, video captioning, and image synthesis from multiple input modalities.
3. Clarity: The paper is well-structured and provides clear explanations of the model's architecture and its generation strategy. The use of figures and tables helps to understand the model's capabilities and performance.
4. Significance: This work represents a step towards more comprehensive human-computer interactions by enabling the generation of multiple modalities in parallel. CoDi has potential applications in various areas, from content creation to human-computer interaction. The authors also provide a basis for future research in generative artificial intelligence.
In summary, the paper presents a significant and original contribution to the field of multimodal generation, demonstrating high-quality research and clear presentation.
Weaknesses: The paper presents a novel approach to multimodal generation, but there are several areas where it could be improved:
1. Evaluation Metrics: The evaluation of the model's performance is primarily based on quantitative metrics such as Frechet Inception Distance (FID) and CLIPSIM. These metrics, while useful, may not fully capture the perceptual quality or coherence of the generated outputs. Incorporating user studies or other qualitative evaluations could provide a more comprehensive understanding of the model's performance.
2. Quality of Generated Results: The quality of the generated results could be improved. The generated videos are relatively short, the quality of the images is perceptually low, and the generated text is often short and discontinuous. These factors could limit the practical utility of the generated outputs.
3. Preservation of Input Modality: The model primarily focuses on understanding between modalities, but it does not always preserve the faithfulness of the input modality. For instance, the output video and images do not consistently preserve the identity of the input image. This could limit the model's ability to generate accurate and coherent outputs across different modalities.
4. Cross-Modality Benefits: The paper does not convincingly demonstrate that the generation results benefit from cross-modality conditions. For example, Table 8 shows that the quality of image generation can even degrade when using conditions from two modalities. Similarly, Table 9 shows only marginal improvements in video quality when using multiple modalities. The authors should establish a benchmark that clearly demonstrates the benefits of using multiple modalities for generation. Without such evidence, the necessity of the proposed architecture could be questioned.
5. Omission of Baselines: In Table 2, the authors omit the StableDiffusion v1.5 baseline, which is the image Latent Diffusion Model (LDM) they used. Including this baseline could provide a more comprehensive comparison of the model's performance.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Evaluation Metrics: Could you provide more details on why you chose FID and CLIPSIM as the primary evaluation metrics? Have you considered incorporating user studies or other qualitative evaluations to assess the perceptual quality and coherence of the generated outputs?
2. Quality of Generated Results: Could you elaborate on the factors that might be contributing to the short and discontinuous text, short video length, and perceptually low-quality images? Are there potential improvements or modifications to the model that could address these issues?
3. Preservation of Input Modality: How does the model ensure the preservation of the identity or characteristics of the input modality in the generated outputs? Are there specific mechanisms in place to ensure this, or is it an area for future work?
4. Cross-Modality Benefits: Could you provide more evidence or a clearer explanation of how the generation results benefit from cross-modality conditions? The results in Tables 8 and 9 suggest that the benefits might be marginal or even negative in some cases. Could you clarify this?
5. Omission of Baselines: Why was the StableDiffusion v1.5 baseline omitted from the comparisons in Table 2? Including this baseline could provide a more comprehensive view of the model's performance relative to existing methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review.
> **1. "Evaluation Metrics: Incorporating user studies or other qualitative evaluations could provide a more comprehensive understanding of the model's performance."**
We performed a small-scale user study due to time constraints. We will conduct more comprehensive studies in the next version of the paper.
**Text to Audio**
We first conduct text to audio user study by comparing CoDi to previous text-to-audio generation SOTA AudioLDM on 30 text-to-audio generation examples.
For user 0: CoDi was favored in 23 instances; AudioLDM in 7 instances.
For user 1: CoDi was favored in 18 instances; AudioLDM in 12 instances.
For user 2: CoDi was favored in 21 instances; AudioLDM in 9 instances.
Overall, the three users clearly showed a preference for the CoDi method for text-to-audio generation.
**Audio to Image + Text**
We next conduct audio to image+text user study by comparing CoDi with joint generation to CoDi without joint generation capacity.
For user 0: CoDi without joint generation was favored in 10 instances; CoDi with joint generation in 20 instances.
For user 1: CoDi without joint generation was favored in 10 instances; CoDi with joint generation in 20 instances.
For user 2: CoDi without joint generation was favored in 8 instances; CoDi with joint generation in 22 instances.
Overall, the three users clearly showed a preference for the CoDi method with joint generation.
> **2. "Quality of Generated Results: The quality of the generated results could be improved."**
Please see General Response 1 for the full discussion. In general, CoDi focuses on multi-modal generation and on enabling different pretrained DMs to communicate and interact with each other. We aim to maintain competitive single-modality generation performance without sacrificing individual task performance. Previous SOTA diffusion models are often proprietary and use private data, while our model is fully open-source. Moreover, the individual diffusion models are already very competitive, and some achieve SOTA performance as shown in Tables 2-7.
> **3. "Preservation of Input Modality: The model primarily focuses on understanding between modalities, but it does not always preserve the faithfulness of the input modality."**
Thank you for the observation regarding the preservation of input modality. To this end, we have experimented with finetuning CoDi for image animation, i.e., input an image and generate a video that animates it. See **examples in the uploaded pdf**. The **generated videos** are also shared with the AC. Concretely, on top of the original CoDi video diffuser that takes in encoder embeddings, we concatenate the image to the diffuser inputs to improve faithfulness of image animation. This shows that CoDi can be easily modified or finetuned to perform other downstream tasks with higher faithfulness.
In general, faithfulness, or modality preservation, is an ongoing challenging topic in the diffusion model and AIGC community. Still, it is a meaningful future work to modify the architecture and training on specific tasks where faithfulness to input images for example can be preserved.
CoDi's design choices do not focus on preserving the faithfulness of the input modality for the following reasons:
(1) Focus on Modality Conversion: CoDi's primary goal is to explore modality conversion, enabling seamless transformation between various modalities like text, images, audio, and video. It's designed to handle flexible and innovative tasks, rather than strictly preserving the input modality. This approach can lead to creative and adaptive generation capabilities.
(2) Use Cases: The preservation of input modality may be context-dependent. In some applications or use cases, a higher degree of preservation may be desirable, while in others, more flexible and transformative generation may be preferred. Our model is designed to be versatile, and potential adjustments to emphasize input preservation could be explored depending on specific requirements or user needs.
> **4. "Cross-Modality Benefits: The paper does not convincingly demonstrate that the generation results benefit from cross-modality conditions."**
Please see General Response 2, where we discuss the difference in FID scores and CLIPSIM between single- and multi-modal approaches. The main goal was to show that the model can handle multiple modalities without a significant drop in performance, rather than to achieve a lower FID score. Though there is a slight increase in FID scores, it is not statistically significant (p = 0.086) and is outweighed by the model's increased flexibility and potential applications. CLIPSIM, a metric for video generation fidelity, shows that adding audio modalities as input conditions maintains similarity between the video output and the text input. The data in Table 10 further supports the effectiveness of integrating different input modalities using CoDi for joint video and audio generation (row Text -> Video+Audio (0.240 / 0.255) and row Text + Image -> Video+Audio (0.247 / 0.259)).
> **5. "Omission of StableDiffusion v1.5 baseline"**
In the table below, we compare our work to StableDiffusion v1.5, and CoDi still performs comparably. Note that CoDi is composable and modular; therefore, other diffusion models can be added to the framework very efficiently, without a significant amount of training. This advantage allows CoDi to take advantage of many SOTA diffusion models.
| **Method** | **FID $\downarrow$** |
|-------------------------------|-----------|
| CogView | 27.10 |
| GLIDE | 12.24 |
| Make-a-Scene | 11.84 |
| LDM | 12.63 |
| Stable Diffusion-1.4 | 11.21 |
| Stable Diffusion-1.5 | 11.12 |
| Versatile Diffusion | 11.10 |
| **CoDi (Ours)** | 11.26 | | Rebuttal 1:
Rebuttal: We are glad all reviewers appreciated our work and found it well-motivated (7M7p, dG2H, mZPw, hGLR), well-written (7M7p, dG2H, mZPw), and original in introducing Composable Diffusion as a novel model (7M7p, dG2H, mZPw, hGLR). The recognition of CoDi's capability to generate any combination of output modalities and the novel method of alignment (7M7p, dG2H, hGLR), the extensive experiments demonstrating CoDi's competitive performance with state-of-the-art models (7M7p, mZPw), the clear structure and explanation of the model's architecture (7M7p), and the idea of any-to-any generation that provides flexibility and customization (mZPw) were encouraging. Furthermore, the acknowledgment of CoDi's potential applications in various areas and its significance as a step towards more comprehensive human-computer interactions (7M7p) validates the impact of our work. We appreciate the thorough assessment and constructive feedback from all reviewers.
>1. **General Response 1 (the performance of each individual diffusion model)**:
We thank the reviewers for their feedback on the quality of the generated results.
It is essential to clarify the primary objectives of CoDi. Our key contribution lies in enabling different pretrained diffusion models to communicate and interact with each other. The goal is to establish a framework that facilitates this interaction, rather than fine-tuning individual aspects of unimodal diffusion models.
In building CoDi, we maximally leverage open-source models, including StableDiffusion, CLIP, etc., for better reproducibility and open access. We train our own audio, text, and video diffusion models. All the training data we used is open-sourced. We devote our resources to exploring joint generation and generation from multi-modality inputs, and only aim to maintain reasonably competitive single-modality generation performance.
**Videos**: It is worth noting that at the time of this project, the previously reported SOTA video diffusion models were closed-source (imagegen-video, gen-2, etc.) and trained on internal video data. While the generated videos are acknowledged to be relatively short, our video diffusion model demonstrates competitive performance, aligning with SOTA video diffusion models as shown in Table 3. Generating short videos as demos is a common approach in the video community [1,2,3]. Most video models can be extended to longer lengths by further finetuning, modifying the architecture, and autoregressively generating the video, as also shown in [1,2,3]. Still, generating long and coherent video is an ongoing and challenging topic in the video generation community.
[1] Hong, Wenyi, et al. "Cogvideo: Large-scale pretraining for text-to-video generation via transformers." arXiv preprint arXiv:2205.15868 (2022).
[2] Khachatryan, Levon, et al. "Text2video-zero: Text-to-image diffusion models are zero-shot video generators." arXiv preprint arXiv:2303.13439 (2023).
[3] Blattmann, Andreas, et al. "Align your latents: High-resolution video synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
**Images**: Regarding the perceived low quality of the images, our model is initialized from Stable Diffusion 1.5, the best open-source image diffusion model available at the time of submission. This foundation provides a robust starting point, and we welcome future work to enhance visual quality within the constraints of our novel cross-modal framework.
**Text**: The generated text primarily serves as a caption, encapsulating the core information in the audio/video/image. This brevity is intentional and in line with common practices in captioning. We have showcased SOTA performance on diffusion-based image captioning, SOTA performance on audio captioning, and competitive performance on video captioning in Tables 4, 5, and 6.
>2. **General Response 2 (multiple inputs performance)**:
We appreciate the observation regarding the difference in FID scores and CLIPSIM between the single-modality and multi-modal approaches. However, our primary goal was to demonstrate that our model can successfully handle multiple modalities without significant performance degradation, rather than to achieve a lower FID score per se. The observed difference in FID scores, from 14.2 to 14.9, while present, does not constitute a meaningful degradation (it is not statistically significant: p = 0.086, i.e., 0.05 < p < 0.1), especially considering the expanded capability of handling multiple input types. This trade-off in performance is outweighed by the increased flexibility and potential applications that our model offers.
CLIPSIM is an evaluation metric that measures how faithful the video generation is to the input text. We can see in Table 9 that adding audio modalities as input conditions does not decrease the similarity between the video output and the text input, showing how effectively CoDi can integrate different input modalities while remaining faithful to each.
In Table 10, row Text -> Video+Audio (0.240 / 0.255) and Text + Image -> Video+Audio (0.247 / 0.259), we can see a clear improvement for video and audio joint generation by adding the image modality. This further supports the effectiveness of CoDi on integrating different input modalities.
Pdf: /pdf/292757621123af3d0bb89e3b442e4aed5d8cc989.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning to Compress Prompts with Gist Tokens | Accept (poster) | Summary: Given the computational inefficiency of long prompts for language models (LMs) today, this work proposes a framework to compress prompts into a smaller set of "gist" virtual tokens. Unlike existing work that distills the context for a single NLP task, this work targets a distribution of tasks by predicting the gist tokens (via the LM's learned embeddings) for each task, built seamlessly upon instruction tuning. The design of their generator G is key to the generality of gist tokens. The experiments show the effectiveness of their designs.
Strengths: 1. Soft prompts are limited in computational efficiency, and fine-tuning is limited by the need to retrain LMs for each specific task; this work combines the strengths of the two by learning virtual tokens seamlessly on top of supervised instruction tuning (a common practice in current LLM deployment and serving). To me, the simple design working well at both training time and inference time is elegant and interesting;
2. The compression ratio looks really promising, making their technical contributions solid and interesting to some practitioners;
3. The paper is very well structured, and the experiments on several instruction tuning datasets look convincing.
Weaknesses: I still have some confusion about its working mechanisms, regarding unseen prompts and the number of gist tokens (newly added vocabulary for the LM).
Firstly, is it learning one gist token (concatenate k of them in training) per task, and for new task, we do need to add more vocabularies? So it would be like one task one gist token?
Secondly, for unseen and seen prompts, is the training of gist tokens like "average of the context distillation loss" over all seen prompts for one particular task, and for unseen prompt, but still the same task type, you simply re-use existing gist tokens (the gist token embeddings would be updated by the cross-attention by the unseen prompts and learned embeddings in inference, though no updates)? Hope my understandings are correct.
I am happy to raise my scores if you can help me address them, and make me feel more interested.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See my weaknesses part. Clarifying these in your paper clearly would at least be useful to some readers.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: In addition to what they claimed in their paper,
The starting point of their proposal is significant to the community, as deploying LLMs with structured, templated prompts is becoming common practice for a wide range of NLP (or general AI) tasks, and reducing the computational budget would benefit many practitioners. However, their framework still requires access to the model logits in the in-context distillation loss, restricting its impact to white-box LMs. More precisely, how to reduce the costs for black-box LLMs would be even more interesting to others.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to reviewer nmKf for their detailed and thoughtful review! We appreciate you find the paper “elegant and interesting”, with “really promising” compression ratios and “solid and interesting” technical contributions.
The main concern of reviewer nmKf seems to be confusion about the gist compression mechanism, which we agree should be explained more clearly in the paper. Below we’ll answer reviewer nmKf’s questions and explain how we’ll revise the paper for clarity. Please also see our response to reviewer 3Sa1, who had some similar questions regarding the explanation of gist tokens in our paper.
> Firstly, is it learning one gist token (concatenate k of them in training) per task, and for new task, we do need to add more vocabularies? So it would be like one task one gist token?
**Only one additional gist token,** ***total,*** **is needed to enable gist caching, regardless of the number of tasks.** In other words, we increase the size of the LM embedding matrix by 1. **For each task, we concatenate k copies of the same gist token.** We will make this clearer by clarifying in L96 that the “k successive gist tokens in between” are **k copies of the same gist token,** and that the gist tokens are the same for each task at both train and test time.
The reason why the gist token remains unchanged across tasks is because, as reviewer 3Sa1 also points out, **what changes is not the gist token or gist token embedding for each task, but rather the transformer activations computed *on top* of the gist tokens.** Providing the gist token as input to the model serves as a sort of “control signal” that encourages the model to learn to compress the prompt into the activations on top of the gist tokens. The model ideally learns to compress arbitrary prompts into the gist activations, so that it can generalize zero-shot to new prompts at test time.
> Secondly, for unseen and seen prompts, is the training of gist tokens like "average of the context distillation loss" over all seen prompts for one particular task, …
This description of the training process is mostly correct, though the loss is not “for one particular task”; the loss is the average distillation loss across all tasks, where you can think of a prompt as being equivalent to a task.
> … and for unseen prompt, but still the same task type, you simply re-use existing gist tokens (the gist token embeddings would be updated by the cross-attention by the unseen prompts and learned embeddings in inference, though no updates)? Hope my understandings are correct.
Here as well, we are not sure what you mean by “task type”: the unseen prompts define unseen tasks that the model needs to compress at test time.
Hopefully some of the confusion here is cleared up by our answer to your first question. **At inference time, for unseen prompts, we add the exact same gist token seen during training after the new prompt.** The embedding of the gist token is the same as what was learned during training, and no updates to this embedding happen during test time. **What *does* change is the transformer activations on top of this gist embedding:** via self-attention, the Transformer computes a new unique set of activations on top of the gist tokens, that hopefully contain a compressed version of the new prompt, even though the prompt was never seen during training time.
To summarize, by training to compress a wide distribution of tasks (=prompts) into the activations on top of the fixed gist tokens appended after the prompt (by averaging the context distillation loss across all training tasks), we hope that the transformer learns a compression mechanism that generalizes zero-shot to unseen tasks (=prompts) at test time.
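To make the input layout concrete, here is a minimal sketch of the construction described above: a single learned gist token (one new vocabulary entry, total), repeated k times after the prompt. This is an illustrative reconstruction; the function and variable names are hypothetical, not our actual implementation.

```python
# Sketch of the gist input layout: [prompt] + k copies of the SAME
# gist token + [input]. The gist_token_id is identical across all
# tasks and at both train and test time; what differs per prompt are
# the transformer activations computed on top of these positions.

def build_gist_input(prompt_ids, input_ids, gist_token_id, k):
    """Append k copies of the single learned gist token after the prompt."""
    return prompt_ids + [gist_token_id] * k + input_ids
```

For example, with a 3-token prompt and k = 2, the model sees `[p1, p2, p3, g, g, x1, x2]`, where both `g` positions carry the identical learned embedding.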
Again, please see our response to Reviewer 3Sa1 for additional ways in which we will make the description of our methods clearer in the paper, and let us know if you have further points of confusion—we are happy to continue answering any follow up questions.
> However, their framework still require the access to the model logits in their in-context distillation loss, restricting the impacts to white-box LMs. And more precisely, how to reduce the costs for black-box LLMs is more interesting to others.
This is a good point! To facilitate sharing and reproducibility with the scientific community, we focus our current paper on white-box, open source LMs for now, though optimizing inference costs of black-box LLM APIs is a fascinating problem.
---
Rebuttal 2:
Title: Have we addressed your concerns?
Comment: As the discussion period is coming to an end, we would like to know if you have had the chance to read the rebuttal? It seems like the main concern of your review is regarding clarity on the methods in the paper—given that we've endeavored to explain the methods better (both in the rebuttal and in the paper), please let us know if we have addressed your concerns and if you are open to increasing your score.
Thank you!
---
Rebuttal Comment 2.1:
Title: Replies from Reviewer nmKf
Comment: Thanks for your consistent help in clearing up my confusions. I am happy that my concerns have been addressed! I like your neat design, and I think it could be interesting to the community. So I retain my current positive ratings, and also I hope you can further improve your paper (e.g., make your technical details more clear to others) in the next round! | Summary: Prompting is the current way of using LLMs, but it occupies the context spaces. Instead of training the LLMs (e.g. fine-tuning), the paper presents a way to compress the prompt into gist tokens, which can be efficiently cached and reused. The method shows 26x compression rate, 40% FLOP reduction and wall clock time speed up.
Strengths: (1) The idea is well motivated and straight-forward.
(2) The implementation through masking is well derived, simple, and well illustrated.
(3) The baselines for experiments are well proposed.
(4) The experiment results support the proposed method well (Neg < TF-IDF < Gist <Pos) and is intuitive.
(5) The failure cases are shown and well explained.
Weaknesses: (1) The method achieves a 4% wall-clock time reduction, which is not significant, especially when it compromises accuracy. The claim that "longer sequence length and larger batch sizes can lead to higher speedup" (Line 267) is not shown quantitatively.
(2) Compressing from 26 tokens to 1 token is not significant compared to the model's usual 2K context length.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please address the weaknesses above. In particular, when does the prompt have a longer sequence length? What would be the corresponding speedup?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The compression is lossy. The reviewer thinks this is significant because the benefits are not clear yet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to reviewer tZ74 for the detailed and thoughtful review! We appreciate that you find our method well motivated, straight-forward, and intuitive. The main concern of reviewer tZ74 is that the efficiency gains reported in the paper are “not significant”, regarding (1) the wall clock time reductions, and (2) the 26x prompt compression rate.
We will respond to both claims below, but also keep in mind that **efficiency speedups are not the only contributions claimed by our paper**.
## On wall clock times
Reviewer tZ74 claims that a “4% wall-clock time reduction…is not significant.” Note that Table 3 has 95% confidence intervals which show a *statistically significant* improvement in wall times, so we assume the reviewer is using the phrase “not significant” colloquially as “not important”. Respectfully, we believe that the threshold for significance is highly dependent on the use case, and that for large models, a 4% wall-clock time reduction can add up to **significant cost savings over time,** especially when LMs are deployed and used thousands of times daily.
### A highly optimized implementation of our method is out of scope for this paper
For ease of experimentation and reproducibility, we implemented the gisting framework in PyTorch using the popular Huggingface Transformers library (see supplement), with substantial Python logic that likely diminishes our reported wall-time improvements. **We believe an optimized implementation of gisting** would increase wall-time improvements, **but is also out of scope for a proof-of-concept paper** introducing our technique, especially given the numerous non-wall-time benefits of gisting.
## On compression ratios
Reviewer tZ74 also argues that a 26x compression rate is not significant. Note there is some debate here, with reviewer nmKf describing the compression ratio as “really promising” and “interesting to some practitioners”. We have two responses:
### 1. 26x compression is an *average* compression rate and is the *maximum possible compression rate for this dataset*
First, 26x is an **average compression rate** for the Alpaca+ human validation split. The prompts range from a minimum of 9 to a maximum of **117 tokens** long, i.e. **117x compression** in some cases. Although reviewer tZ74 correctly notes that these prompts tend to be shorter than an LM context window, we expect that this level of compression will allow LM users to avoid “context window exceeded” errors in many cases.
Additionally, by using just a single gist token, we have achieved the **maximum possible compression rate** explorable with Alpaca+, which to our knowledge was created from the largest instruction following datasets available to us at the time of our experiments (Self-Instruct, Alpaca). Gisting may be competitive at compressing even longer prompts, but exploring this requires a larger and richer dataset that was unavailable to us at the time. We are happy to hear suggestions from reviewers for additional datasets to try.
### 2. 26x memory improvement is significant, *especially for prompt caching workflows*
We'd like to highlight two clearly significant improvements of gisting for **prompt caching**, where the KV caches of common prompts are stored to speed up inference. This is one of the primary methods for speeding up LLM inference in production (see paper for citations). Within this framework, gisting offers:
- **A**: **An *order of magnitude* decrease in storage and memory costs.** As stated in L284, gisting allows developers to cache up to **26x more prompts from users** than full instruction caching, using the same amount of storage. Even though the relative memory requirement of a *single prompt* is insignificant compared to the memory required for an LLM, if a developer wishes to have hundreds or thousands of prompts cached in GPU VRAM for fast decoding, the prompt storage requirements quickly dominate. For example, the average memory required to store the KV cache of a prompt in the human validation split for LLaMA-7B is 27.3 MB (see L281 in the paper), so caching 100 or 1000 prompts requires **2.73 or 27.3 GB VRAM**, respectively. In such cases, gisting *greatly* increases the number of possible prompts that can be simultaneously cached. We believe this is an important contribution, even if one believes the 26x reduction in prompt length is insignificant for a single prompt.
- **B**: **New options for prompt caching in encoder-decoder models.** As stated in L258, gisting enables a form of **prompt caching** that is not possible in ordinary encoder-decoder models, since the encoder normally expects to perform bidirectional attention between the full instruction and the input. We believe this is a valuable contribution which opens up new workflows for encoder-decoder LMs.
**In fact, we believe A and B above stand by themselves as valuable contributions to the community, even if there were no other reported efficiency benefits (e.g. wall-time).** We will make it clearer in the paper that these memory improvements are just as important as wall-time, expanding on L284-285.
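As a quick sanity check of the storage arithmetic in point A, the figures can be reproduced in a few lines (the helper names below are illustrative, not from the paper):

```python
# Back-of-the-envelope check of the prompt-caching figures quoted above:
# an average KV cache of ~27.3 MB per prompt for LLaMA-7B, and roughly
# 26x more prompts cacheable in the same VRAM budget with gisting.

PER_PROMPT_MB = 27.3  # average KV cache size per prompt (LLaMA-7B)

def cache_size_gb(n_prompts, per_prompt_mb=PER_PROMPT_MB):
    """Total storage needed to cache n_prompts full-instruction KV caches."""
    return n_prompts * per_prompt_mb / 1000.0

def prompts_in_budget(budget_gb, compression=1.0, per_prompt_mb=PER_PROMPT_MB):
    """How many prompt caches fit in a VRAM budget at a given compression."""
    return int(budget_gb * 1000.0 // (per_prompt_mb / compression))
```

For example, caching 100 or 1000 full prompts requires about 2.73 or 27.3 GB of VRAM, while a ~26x gist compression multiplies the number of prompts that fit in the same budget by roughly the same factor.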
**Overall,** while it is accurate to describe gisting as lossy, our empirical results show an often negligible impact on downstream task performance. Given the numerous improvements discussed above, we believe gisting will be a useful option to developers as they consider various tradeoffs between accuracy and efficiency in LM inference.
## Other paper contributions besides efficiency speedups
Finally, please keep in mind that efficiency reductions are not the only reported contributions of our paper. We also provide:
- A mathematical framework for “meta-context distillation” of a language model (Section 2.1);
- A novel way to learn compression in transformers via token-dependent attention masking (Section 3).
We believe that these aspects of the paper are also stand-alone, useful contributions for those interested in memory, compression, and efficiency in transformers.
---
Rebuttal 2:
Title: Have we addressed your concerns?
Comment: As the discussion period is coming to an end, we would like to know if you have had the chance to read the rebuttal? Please let us know whether we've addressed your concerns—we are happy to discuss more. We hope we've highlighted a variety of benefits of our method that are not explicitly concerned with wall time or the compression of a single prompt.
Thank you!
---
Rebuttal Comment 2.1:
Comment: Thanks for the great rebuttal! I will raise my score to a positive one. Please consider accepting this paper for all the strengths illustrated.
However, I am not convinced that a 4% speedup is significant. Usually, a good systems paper will claim at least a 15% speedup. Please consider further optimizing this. | Summary: The authors tackle the problem of wasted compute and wasted context window space from repeatedly encoding a prompt.
The authors propose gisting, in which gist tokens are inserted after the prompt, and the attention mask is modified such that tokens after the gist tokens cannot attend to tokens before the gist tokens. This forces the gist tokens to encode and compress the prompt.
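A minimal sketch of the described mask modification (an illustrative reconstruction from this summary, not the authors' code; assumes a decoder-only causal LM):

```python
# Causal attention mask with gisting: positions after the gist tokens
# may attend to the gist tokens and to each other, but NOT to the
# original prompt tokens before the gist. Pure Python for clarity
# (1 = may attend, 0 = masked out).

def gist_attention_mask(n_prompt, n_gist, n_input):
    n = n_prompt + n_gist + n_input
    # standard causal mask: position i attends to positions j <= i
    mask = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
    # gisting: block attention from post-gist tokens back to the prompt
    for i in range(n_prompt + n_gist, n):
        for j in range(n_prompt):
            mask[i][j] = 0
    return mask
```

Because post-gist positions can only reach the prompt through the gist activations, the model is pressured to pack the prompt's content into those activations.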
Experiments are conducted with two models, LLaMA 7B and FLAN-T5-XXL 11B, using a dataset the authors create from Self-Instruct and Stanford Alpaca. A hold-out set of 1000 seen, 1000 unseen, and 252 human prompts is used for evaluation.
The evaluation is conducted using ROUGE-L scores, ChatGPT-3.5 model evaluation, and human evaluation scores on a subset of the human prompts.
Compared to an "upper-bound" of training a model with a single gist token but no modifications to the attention mask, gisting scores near this "upper-bound" using both ROUGE-L and ChatGPT-3.5.
Human evaluation results show gisting winning 52.3% and 40.6% of the time over the “upper-bound” baseline for LLaMA and FLAN-T5-XXL respectively.
Compression rates from gisting are material compared with no caching strategy, but more modest compared with caching the full instruction.
Strengths: Gisting is simple to implement, requiring only adding one or more gist tokens, and a few modifications in attention masks
Gisting learns prompt compression and instruction following at the same time, incurring no additional training
Gisting can be used even with unseen prompts and has some generalization capability
Experiments use relatively large models and a relatively large dataset
Paper is clearly written
Weaknesses: The evaluation scheme is the primary weakness of the paper. Human evaluators are critical, given the nature of the tasks in the dataset (which consist of many open-ended generation tasks), for checking how reliable ChatGPT (along with the prompt the authors present in the Appendix) is as an evaluator. We see that human evaluators have low agreement (kappa of 0.24 and 0.33, Table 2). The overall experimental results would be more convincing if human agreement were high. For example, it would be interesting to see results on a high-human-agreement subset (say, tasks with clear objective answers) to see if Gist's Win % over Pos is still near 50%. Furthermore, the Negative Control scores ~25-30% using ChatGPT on various validation splits, which is high considering that ~59% of the tasks in Alpaca+ have no input and the task is the only information used to generate the output. It is plausible that human evaluators have very high agreement on outputs generated with the Negative Control, but with much lower scores than ChatGPT gives, which would indicate that ChatGPT plus the authors' prompt is an unreliable evaluator. Presented as is, it is hard to tell whether 1) the gisting process is doing well at compressing the task t, resulting in performance similar to using the original prompt, or 2) there is a large amount of randomness inherent in the evaluation process (from factors such as differing human preferences), resulting in a near-50% score (as measured by ChatGPT). These observations, along with the odd fact that reducing the compression rate does not materially improve the scores as measured by ChatGPT, cast doubt on the experimental results.
As Section 5.1 shows, for some tasks, specific details need to be preserved to successfully accomplish the task. However, the authors claim that increasing the number of gisting tokens k does not help with performance. Furthermore, the choice of the number of gisting tokens is fixed for all tasks prior to training. Therefore, this method does not allow for dynamically trading off compression levels and model output quality. It would be interesting to see evaluation results on tasks requiring differing levels of specificity.
The efficiency gains of gisting are modest compared to instruction caching.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Could you please clarify "having too many gist tokens hurts performance in some cases … the increased capacity enables overfitting to the training distribution"? In the extreme case of having enough gist tokens to copy the longest prompt t, we would expect equal performance between gisting and using the original task instructions t.
Could you please clarify the tie-breaking process referenced in this sentence: “Due to the subjectivity of breaking ties in a forced-ranking task…”. This is fairly important given that this is a proposed explanation for the low agreement numbers. Based on the prompt given to ChatGPT, instructions given to human evaluators, and evaluation examples found at the end of the Appendix, it appears no tie-breaking has occurred.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Gisting loses some nuance in the original instructions.
The compression level is not dynamic relative to the task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to reviewer W7yi for the detailed and careful review!
## On low inter-annotator agreement
> The overall experimental results would be more convincing if human agreement was high.
**On the contrary, when comparing two models of equal quality, we do not expect high inter-annotator agreement!** We will explain by answering this question:
> Could you please clarify the tie-breaking process: “Due to the subjectivity of breaking ties in a forced-ranking task…”.
Good question; we will revise this line for clarity. What we mean is that due to the subjectivity of open-ended generation tasks, there will be noisy reasons why an annotator prefers one response to another, **even if the responses come from models that are equally good.** We expect this to be true even for tasks with objective answers, since in open-ended settings, an answer can be phrased in several ways and annotators might prefer one wording over another.
In our experiments, annotators have the option to call a tie, and our agreement metric takes ties into account, but neither humans nor ChatGPT call ties often (27% for humans and 13% for ChatGPT). Thus our evaluation is basically a forced choice task, where even if responses are equally good, annotators will often arbitrarily choose one response over another.
**We ran a follow-up experiment to verify this by asking 3 additional human annotators to rate *two different sets of samples from the LLaMA positive control***, using the same setup as in the paper. As expected, the average win rate of one set of samples over the other is 50% according to humans (52%, 48%, 50%) and ChatGPT (54%). **Importantly, the average pairwise agreement of the humans in this experiment is low and similar to the paper: a kappa of 0.30.** Meanwhile, the agreement of ChatGPT with humans is 0.37. Please see the full table in the general rebuttal PDF, and recall that Table A.2 (a) in the paper shows an average agreement of **.24** for humans and **.29** for ChatGPT for LLaMA.
This means **we expect low inter-annotator agreement when models are hard to differentiate.** This supports the idea that ChatGPT is similar to human annotation, and that despite low annotator agreement, the win rates across human(s) and ChatGPT are similar, and converge on gisting being similar to the positive control in many cases.
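For reference, the agreement statistic quoted above (Cohen's kappa, with a tie verdict treated as its own category) can be computed as follows. This is an illustrative sketch on toy data, not our actual evaluation code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators; a 'tie' verdict is its own category."""
    n = len(labels_a)
    # Observed agreement: fraction of items where the annotators match.
    p_o = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement under independent marginal label distributions.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy forced-choice annotations over 6 items, with an explicit tie option.
a = ["model1", "model2", "tie", "model1", "model2", "model1"]
b = ["model1", "model1", "tie", "model2", "model2", "model1"]
print(round(cohens_kappa(a, b), 2))  # -> 0.45
```

Note that even with 4 of 6 verdicts matching, kappa stays well below 1, since chance agreement is subtracted out; this is why arbitrary preferences between equally good responses depress kappa.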
As a final point, we stress that open-ended LM evaluation is challenging, and the field lacks common evaluation standards. Our forced-choice eval is similar to those done by the LMSys Chatbot Arena and recent LM projects such as Alpaca, Koala, and Vicuna. To mitigate these challenges, we used a wide spectrum of evaluations: first, lexical overlap compared to a gold response (ROUGE-L); second, AI annotation (via ChatGPT) across 1000s of outputs; third, human annotation. Together, we believe these signals provide converging evidence that gisting is competitive with positive controls, even if the inter-annotator agreement is low by design.
## Should more gist tokens improve performance?
> In the extreme case, having enough gist tokens to copy the longest prompt t, we would expect equal performance between gisting and using original task instructions t.
**It is not the case that you would expect equal performance to the positive control when there are many gist tokens.** This is because gist compression is an ***entirely new model capability*** that the LM has to learn **almost completely from scratch**, since the new gist embedding is randomly initialized and the LM has never seen gist masking before.
The LM needs to learn a new, internal “model” $G(t)$ to compress prompts, and the number of gist tokens determines the “capacity” of this new internal model (more gist tokens = more gist activation parameters). **This means that the generalization performance of the compressor $G$ is subject to the same generalization tradeoffs we consider in standard machine learning:** more gist tokens (“a bigger model”) may result in lower training error (better compression of seen prompts) but worse test error (poorer OOD generalization).
Given a finite training set, in the extreme case, a model with enough gist tokens could learn to **memorize** each training prompt with a completely unique prefix, matching the positive control during training but failing to generalize.
Reviewer W7yi is correct that if the model learned to copy the prompt activations into the gist activations, and attend to the gist activations as if they were the original prompt activations, then this would result in equal performance as the positive control. **However, our results show that models do not learn this mechanism through optimization pressure alone.**
## Other points
> Negative Control scores ~25-30%...which is high considering that ~59% of the tasks in Alpaca+ have no input
We will clarify in the paper that the 59% figure refers to the **overall distribution** of Alpaca+. For evaluation, **we hold out prompts with non-empty inputs** (except for the Human split). **All prompts in the seen and unseen splits have inputs;** 83% of the prompts in the human split have inputs. See L126-128, though we will further clarify that the unseen prompts also have inputs. This means that the negative control can usually make an educated guess about the task given the input. We believe that the fact that the negative control is sometimes preferred demonstrates learnable biases in the distribution of Alpaca+, rather than issues with our evaluation.
> does not allow for dynamically trading off compression levels and model output quality.
We believe the problem of dynamically estimating how much compression is possible for a prompt is a fascinating one, but we leave this for future work.
> efficiency gains of Gisting is modest compared to instruction caching.
Please see our response to reviewer tZ74. Briefly:
- Gist caching is **1 OoM more memory efficient** than instruction caching.
- Instruction caching is not possible for encoder-decoder models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My concerns related to evaluation are mostly addressed. I still think it is far more convincing to add as a supplement experiments that use a dataset with objective answers (e.g. a multiple choice dataset) and compare raw scores for gisting vs baselines without using model evaluation.
However, after considering the authors' response, I believe the evidence weighs in favour of the authors' claim that "gisting being similar to the positive control in many cases". I think the paper should be accepted, and I increase my score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks to the reviewer for updating their score and for the additional feedback!
We agree that a multiple-choice (MC) evaluation would be interesting; one of the reasons we did not explore such an eval is that it is likely OOD for the models trained in our paper, since MC often requires remembering verbatim details in the input (e.g. the choices), which the gist models are not optimized to do (as we show in our limitations). It would be interesting to explicitly explore training on existing MC datasets (so that gist models are encouraged to remember relevant details), and perhaps this might even improve overall performance on open-ended generation tasks, but we have not yet explored this avenue due to compute constraints. | Summary: This paper presents "gisting", which trains language models to compress instruction prompts into smaller sets of compressed context -- including special '<GIST>' tokens and the activation stacks above these tokens. Compressing instruction prompts saves context-window space and the compute needed to encode instruction prompts. The task settings that this paper considers include:
* [instruction prompt][input][output] --> [compressed instruction prompt][input][output]
* [instruction prompt][output] --> [compressed instruction prompt][output]
The experimental results show that it is indeed possible to represent instruction prompts with much shorter compressed context while incurring only marginal performance loss. The authors comprehensively evaluated the gisting model against the uncompressed model on instruction-following tasks with ROUGE-L, ChatGPT preference, and human evaluation.
Strengths: * Novelty: compressing instruction prompts to save context windows and save compute for encoding instruction prompts is a novel and useful idea that can have good impact on real-world applications.
* Very solid and well-designed experiments and evaluations: the authors present comprehensive evaluations by ROUGE-L (which is lexical), ChatGPT (which can be considered an automatic semantic evaluation), and human. Furthermore, when showing the human evaluation (Table 2), which is based on a subset of evaluation set, it is compared with ChatGPT results side-by-side and shows that outcomes are consistent and inter-annotator agreement level is similar.
* Writing clarity: the paper is very easy to follow in most of the part. Figures, plots, and tables are well-polished.
Weaknesses: There are some descriptions in the paper such as **G(t) will be a set of soft gist tokens smaller than the number of tokens in t ...** (line 66-68), **Gisting compresses prompts into “gist tokens”** (Figure 1 caption), etc., suggesting that the gist **tokens** contain compressed information. However, if I understand the paper and code correctly, a gist token is really just a special additional token <GIST> in the vocabulary, and the gist token embedding does not change according to task t. What really contains the compressed information is the activations above the gist tokens.
In other words, gisting compresses prompts into gist "activations" instead of gist "tokens" (which are inputs to the model). G(t) are activations, not "soft gist tokens". When I think of soft tokens, I think of soft tokens as in prompt tuning, and this seems clearly different. Please correct me if I'm wrong, but I think these descriptions are incorrect and misleading and need to be fixed before the paper can be published.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How is compression factor (in figure 3) estimated? It doesn't seem to be explained in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are properly discussed in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to reviewer 3Sa1 for their detailed and helpful review! We are glad you find the paper “novel and useful”, with “very solid and well-designed experiments and evaluations”, as well as “very easy to follow”. Here are some responses to your comments and questions:
> However, if I understand the paper and code correctly, a gist token is really just a special additional token <GIST> in vocabulary and the gist token embedding does not change according to task t. What really contains the compressed information is those activations above the gist tokens.
Thank you for pointing this out! Your understanding is correct. We agree that we should clarify in the paper that the goal is to compress prompts into gist activations, rather than gist tokens, of which there is a single gist token embedding that remains unchanged for each task. An analogous way to think about gisting is that it compresses prompts into a soft KV prefix (a la prefix-tuning) rather than soft tokens (a la prompt tuning). We will make the following changes in the camera-ready version of the paper, if accepted:
- L6-7 (abstract): change “gist tokens” to “transformer activations on top of ‘gist’ tokens”
- L31: Change “gist tokens” to “transformer activations on top of ‘gist’ tokens”
- Figure 1 caption: change “compresses prompts into ‘gist tokens’” to “compresses prompts into transformer activations on top of ‘gist’ tokens”
- L67: Change “will be a set of soft gist tokens” to “will be the transformer activations on top of a set of gist tokens”
- L100: Replace “forces the model to compress the information in the prompt into the gist tokens” to “forces the model to compress the information in the prompt into the gist prefix”
- L142: change “failed to compress any information into the gist tokens” to “failed to compress any information into the gist prefix”
- L185: change “compressing prompts into a single token” to “compressing prompts into a single token’s worth of activations”
- L260: Replace with “Caching the compressed activations G(t) on top of the gist tokens”
Also see our response to reviewer nmKf, who has similar questions about the paper.
> How is compression factor (in figure 3) estimated? It doesn't seem to be explained in the paper.
Thank you for pointing this out. The compression factor is calculated by computing the average length (in tokens) of the instructions in each validation split and dividing by the number of gist tokens in the model (since the prompt tokens are replaced with N gist tokens). For a single-gist-token model, a 26x compression factor implies that the average length of the instructions in the human evaluation set is 26 tokens, as stated in L130. We will make this clearer by adding a sentence after L189 in the paper.
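The calculation described above can be sketched as follows (the token counts here are made-up illustrative values, not numbers from our splits):

```python
def compression_factor(instruction_token_lengths, num_gist_tokens):
    """Average instruction length (in tokens) divided by the number of gist tokens."""
    avg_len = sum(instruction_token_lengths) / len(instruction_token_lengths)
    return avg_len / num_gist_tokens

# If instructions in a split average 26 tokens and the model uses 1 gist token,
# the compression factor is 26x.
print(compression_factor([20, 26, 32], 1))  # -> 26.0
```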
---
Rebuttal Comment 1.1:
Title: Have we addressed your concerns?
Comment: As the discussion period is coming to an end, we would like to know if you have had the chance to read our rebuttal? Please let us know if we have addressed all of your concerns in the initial review, and if you have any additional follow-up questions!
Thank you! | Rebuttal 1:
Rebuttal: Thank you to reviewers 3Sa1, W7yi, tZ74, and nmKf for their uniformly detailed and constructive reviews, and to the area chair for overseeing this process!
We are glad that a majority of the reviewers are currently positive on the paper, and that reviewers found our ideas “novel and useful” (3Sa1), “well motivated and straight-forward” (tZ74), “simple to implement” (W7yi), and “elegant and interesting” (nmKf), and our experiments “very solid and well-designed” (3Sa1), “convincing” (nmKf) and “support[ing] the proposed method well” (tZ74). We are especially appreciative that reviewers find the paper “easy to follow” (3Sa1), clearly written (W7yi), and “very structured” (nmKf).
We have responded individually to each reviewer’s concerns in direct replies.
**Our attached rebuttal PDF contains a table of inter-annotator agreement statistics we observed from an experiment run in response to reviewer W7yi;** please see that review for additional context. For convenience, we have also reprinted the inter-annotator agreement statistics reported in the original paper as well.
Pdf: /pdf/970e8e031ef2d63e8fc3ddd3afface905295e47e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Federated Object Detection | Accept (poster) | Summary: To solve the challenges with limited high-quality labels and non-IID client data in federated learning, the authors present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data. Meanwhile, they propose the FedSTO, which consists of selective training, orthogonal enhancement, and personalized EMA-driven semi-efficient teacher. Finally, FedSTO achieves 0.082 and 0.035 higher mAP@0.5 when compared to partially supervised and SSFL baselines respectively.
Strengths: 1. Good motivation. On the one hand, selective training can address the primary challenge of establishing a robust backbone for object detectors in FL. Specifically, it fosters more consistent representations by sharing the same non-backbone part. On the other hand, orthogonal enhancement reduces the bias towards specific weather conditions and the heterogeneity of object categories, leading to improved performance.
2. Many experiments have also verified the effectiveness of the proposed FedSTO.
Weaknesses: 1. The authors do not introduce Personalized Pseudo Labeling for Unlabeled Clients in detail, which makes it hard to understand.
2. I don't understand what Fig. 3 is meant to express, i.e., why there are "Fully supervised" results under the unlabeled overcast, rainy, and snowy conditions.
3. In Fig. 1, I don't know how the orthogonal enhancement works on the neck and head. Besides, I cannot understand how pseudo labels are generated and assigned to the client models.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. What do T1 and T2 in Algorithm 1 represent?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors have described limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. No explanations for personalized pseudo-labeling for unlabeled clients
- Thank you for your question. We apologize if the presentation of the “Personalized Pseudo Labeling for Unlabeled Clients” was not clear in the main body of the paper due to space constraints. We have provided a comprehensive explanation in **Appendix H.2** which offers more details on this aspect.
- In our SSFOD framework, we adopt a local Exponential Moving Average (EMA) model for generating pseudo labels, which facilitates efficient utilization of each client's unlabeled data. The EMA model operates as an infinite impulse response filter that assigns exponentially decreasing weights. This arrangement ensures a balanced consideration of both historical and immediate data points, aiding in making reliable predictions.
- Each client maintains a local EMA model whose weights are a weighted average of the client's own weights and the weights of the global model. This correlation is depicted in **Eq. (6) in the Appendix**. After each server broadcast, the weights of the Local EMA model are reinitialized with the global model’s weights, ensuring global model consistency along with local data awareness.
- The client's local EMA model gradually tunes to the unique characteristics of its local data, integrating updates from the client's unlabeled data. Consequently, we create a balance of personalization and model performance in a FL environment while alleviating communication overheads.
- Following the weight updates, the local EMA model functions as the pseudo labeler for each client. This method ensures the generation of stable and reliable pseudo labels, despite the limited interactions with the global model. The local EMA model guarantees independent pseudo label generation at each client’s end without requiring frequent server updates. It also enhances the learning process’s robustness, as the local EMA model remains largely unaffected by the client model’s noisy gradient updates.
- We believe this explanation provides a better understanding of our approach. We hope to edit the main text for clarity and to add more details if we get extra pages for the camera-ready version of the paper. **For now, please refer to Appendix H.2 for a detailed understanding.**
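To make the two update steps described above concrete, here is a minimal sketch of the local EMA logic. The function names and the decay value are our own illustrative choices, and scalars stand in for parameter tensors; please see Eq. (6) in the Appendix for the exact formulation:

```python
def ema_update(ema_weights, client_weights, decay=0.99):
    """One EMA step: exponentially weighted blend of the running EMA state
    and the client's current weights (the decay value is illustrative)."""
    return {name: decay * ema_weights[name] + (1.0 - decay) * client_weights[name]
            for name in ema_weights}

def on_server_broadcast(global_weights):
    """Reinitialize the local EMA model from the freshly broadcast global model."""
    return dict(global_weights)

# Toy round, with scalars standing in for parameter tensors.
ema = on_server_broadcast({"conv1": 0.0})          # reset to the global model
ema = ema_update(ema, {"conv1": 1.0}, decay=0.9)   # absorb one local update
print(ema["conv1"])  # ~0.1 (0.9 * 0.0 + 0.1 * 1.0)
```

The high decay keeps the EMA teacher largely unaffected by any single noisy client update, which is the stability property we rely on for pseudo labeling.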
### W2. The purpose of Fig 3
- Thank you for your question about Figure 3. We apologize if it caused any confusion. In this figure, “Fully supervised” refers to the model trained with labeled data across all weather conditions—Cloudy, Overcast, Rainy, and Snowy. “Partially supervised” signifies the model trained only on labeled data from the “Cloudy” condition, while “Vanilla SSFL” implies the model trained on labeled data from 'Cloudy' and unlabeled data from “Overcast”, “Rainy”, and “Snowy” conditions. The performance of these models was then evaluated across all weather conditions, regardless of their training status.
- We understand that the labeling of the figure might be a bit confusing, and we appreciate your feedback. We will work on clarifying the labeling in the figure to avoid any such confusion in the future.
### W3. Overview of the method
- Thank you for your inquiry regarding Fig.1 and the processes involved.
- The orthogonal enhancement works by imposing an orthogonality regularization on the neck and head of the model (line 217). This regularization encourages the kernel weight matrices to be orthogonal, making the learned features robust to feature bias (theoretically supported in Appendix Section F).
- Regarding the generation of pseudo labels, this is accomplished using a pseudo labeler. As outlined in Appendix Section G, the pseudo labeler generates predictions which then undergo a non-maximum suppression step to form pseudo annotations. These pseudo annotations are subsequently used in conjunction with the loss function described in the same section of the appendix, guiding the learning of the client models.
- Due to space constraints in the main body of the paper, we have detailed these implementation specifics in the appendix. We appreciate your understanding and welcome any further queries you might have.
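As a simplified sketch of the confidence-filter-then-NMS pipeline described above (the thresholds and helper names are illustrative, not the exact Appendix G implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-9)

def pseudo_labels(boxes, scores, conf_thresh=0.5, iou_thresh=0.5):
    """Keep confident teacher predictions, then greedy NMS to form pseudo annotations."""
    order = sorted((i for i in range(len(scores)) if scores[i] >= conf_thresh),
                   key=lambda i: -scores[i])
    selected = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in selected):
            selected.append(i)
    return [boxes[i] for i in selected]

# Two heavily overlapping confident boxes plus one low-confidence box:
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.3]
print(pseudo_labels(boxes, scores))  # -> [[0, 0, 10, 10]]
```

The surviving boxes become the pseudo annotations used with the loss function in Appendix Section G to guide client-model training.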
### Q1. What are T1 and T2 in Algorithm 1?
- T1 and T2 in Algorithm 1 represent distinct phases in the learning process. Specifically, T1 refers to the number of pretraining rounds, i.e., “Representation Learning with Selective Training”. This phase aids in establishing a decent starting point for subsequent learning. On the other hand, T2 denotes the number of rounds dedicated to Orthogonal Enhancement. This phase is primarily designed to increase the diversity of feature learning by encouraging the kernel weight matrices to be orthogonal.
- We will further clarify it by explicitly mentioning T1 and T2.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
- In response to the feedback, we've conducted rigorous additional experiments to enhance the depth and robustness of our work. This includes:
1. Real-world experiments using 100 clients through 100 virtual machines.
2. An extensive analysis of communication costs.
3. Further ablation studies for a more comprehensive understanding.
- Additionally, we'd like to highlight that several points brought up during the review process have been addressed in our appendix. We've emphasized these in our updated responses for your convenience.
- Given the tight timeline, with the discussion phase concluding on Aug 21st at 1pm EDT, we kindly request you to review our responses. We believe our detailed responses provide clarity on the concerns raised. Your feedback is pivotal to the quality of our work, and we earnestly await your thoughts, especially since we have less than 6 days remaining.
Thank you for your time and understanding. | Summary: This paper introduces a novel framework called Semi-Supervised Federated Object Detection (SSFOD) to tackle the problem of object detection in a federated learning setting. In this framework, the server possesses labeled data, while the clients hold unlabeled data from different distributions. The proposed approach consists of two stages: selective training and orthogonal enhancement. In the selective training stage, the focus is on updating only the backbone parameters on the clients to establish a robust backbone for the object detector. This selective approach helps improve the model's generalization capabilities across different distributions. The orthogonal enhancement stage follows, where all parameters are fine-tuned with orthogonal regularization. This regularization promotes representation divergence and robustness, further enhancing the model's performance. The paper also introduces a personalized pseudo label assigner based on a local exponential moving average (EMA) model. This assigner generates high-quality pseudo labels for object detection tasks, facilitating the training process in the semi-supervised setting. To evaluate the proposed SSFOD framework, this paper conducts experiments on three datasets: BDD100K, Cityscapes, and SODA10M. The results demonstrate that the proposed method achieves state-of-the-art performance when compared to existing approaches in both semi-supervised and federated learning domains.
Strengths: 1, This paper is well-motivated. In practical applications, not all data on clients are labeled, and how to leverage unlabeled data is important for FL.
2, The writing is clear and easy to follow.
3, The paper performs extensive experiments on three diverse datasets, encompassing varying scales, complexities, and domains. Moreover, the proposed method is compared against multiple baselines as well as state-of-the-art techniques. The experimental results consistently demonstrate improvements across different metrics, object categories, weather conditions, and data distributions. This comprehensive evaluation reinforces the effectiveness and robustness of the proposed method, highlighting its superiority over existing approaches in various scenarios.
Weaknesses: 1, This paper lacks discussion on the communication efficiency and scalability aspects of the proposed method, which are important considerations for its practical implementation. Specifically, it does not address the communication overhead associated with uploading only the backbone parameters or utilizing local exponential moving average (EMA) models. Furthermore, the paper does not investigate the performance of the method as the number of clients or the size of unlabeled data increases.
2, It could be better to compare or relate the proposed method with existing works on semi-supervised or self-supervised FL [1,2,3,4]. What are the advantages of the proposed method over these existing methods?
[1] Zhuang, Weiming, Yonggang Wen, and Shuai Zhang. "Divergence-aware federated self-supervised learning." arXiv preprint arXiv:2204.04385 (2022).
[2] Zhang, Fengda, et al. "Federated unsupervised representation learning." arXiv preprint arXiv:2010.08982 (2020).
[3] Wu, Yawen, et al. "Federated contrastive learning for volumetric medical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2021.
[4] Dong, Nanqing, and Irina Voiculescu. "Federated contrastive learning for decentralized unlabeled medical images." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. Communication efficiency and scalability aspect + # of clients and size of unlabeled data increases
- We thank the reviewer for this thoughtful comment. Communication efficiency and scalability are indeed crucial considerations for any FL implementation. In our work, we are mindful of these aspects, and we have attempted to address them in the following ways:
1. **Communication Efficiency**: Our method communicates only the backbone parameters rather than the entire model, substantially reducing the communication payload. As the backbone constitutes only a fraction of the model parameters, this selective communication indeed reduces overhead. We have reported the relevant costs **in the attached PDF of the general response (Table D)**.
2. **Local EMA models**: The utilization of local EMA models helps mitigate communication constraints as it enables each client to independently generate pseudo labels. This reduces the need for frequent server updates, further improving communication efficiency.
3. **Scalability**: We appreciate your insight, and we agree that it is crucial to test our framework in a setting with a larger number of clients. To support this, we have conducted experiments with 1 server and 100 clients, as detailed in the supplementary material (**Appendix Section M**). The results presented in the appendix were obtained with a client sampling ratio of 0.1. Additionally, we put more results in **Table C in the attached pdf**.
- We would like to stress that our experiments are not merely theoretical or synthetic. They are conducted in a real-world setting, with genuine network communications occurring between 100 AWS virtual machines. We do this to underscore the practicality and applicability of our findings, and to convince the community that our proposed approach to personalized, semi-supervised FL is viable.
- As for detailed discussions and evaluations regarding these aspects, due to space limitations in the main paper, we have allocated this content to the **Appendix**. But, if extra pages are granted for the camera-ready version, we will certainly incorporate more details about these practical aspects in the main text.
- We hope this clarifies the reviewer's concern, and we appreciate the suggestion to focus more on these practical aspects. It will undoubtedly make our work more robust and closer to real-world applicability.
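To make the payload accounting above concrete, here is a back-of-the-envelope sketch. The client count, model size, and backbone fraction below are illustrative assumptions, not the measured numbers in our Table D:

```python
def payload_per_round(num_clients, params_per_model, shared_fraction, bytes_per_param=4):
    """Bytes uploaded per round when only a fraction of each model's parameters is shared."""
    return num_clients * params_per_model * bytes_per_param * shared_fraction

# 100 clients and a 7M-parameter detector whose backbone holds 60% of the
# parameters (hypothetical values for illustration only):
full = payload_per_round(100, 7_000_000, 1.0)
backbone_only = payload_per_round(100, 7_000_000, 0.6)
print(f"savings: {1 - backbone_only / full:.0%}")  # -> savings: 40%
```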
### W2. What are the advantages of the proposed method?
- We appreciate the reviewer's suggestion to compare and relate our method with existing works on semi-supervised or self-supervised FL. Indeed, these are relevant and important works in the domain of federated learning. Here is how our method differentiates and improves over these works:
1. **Domain-Specificity**: Our work primarily targets object detection tasks in the FL setup, specifically focusing on the challenges presented by real-world scenarios such as autonomous driving. While the works suggested primarily concentrate on image classification or medical image segmentation, we are operating in a different problem domain with its unique challenges.
2. **Methodological Advances**: Our SSFOD (Semi-Supervised Federated Object Detection) framework proposes a novel federated semi-supervised learning approach tailored to object detection tasks. It integrates techniques like Personalized Pseudo Labeling, Orthogonal Enhancement, and Selective Communication to effectively leverage unlabeled data, enhance feature representation, and reduce communication overhead, respectively.
3. **End-to-End Solution**: Our work presents an end-to-end solution: problem formulation, benchmark setting, improved methods, and experimental results. We first formalize the problem, set up a benchmark for semi-supervised federated object detection, and finally propose a novel method to solve it.
- That said, the suggested works do offer valuable insights and techniques in their respective fields. Some of their methodologies might be applicable and potentially beneficial to our work. For instance, Contrastive Learning techniques from [3] and [4] could be incorporated into our framework to further improve the feature representation capabilities of our model.
- We appreciate the reviewer pointing out these references, and **we will certainly consider adding a discussion regarding these works in our paper**, focusing on how they relate to our work and how their methodologies could potentially be integrated into our framework. This will also serve to highlight the unique contributions of our work. We believe such comparisons would enrich the context of our paper and make it more comprehensive.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
- In response to the feedback, we've conducted rigorous additional experiments to enhance the depth and robustness of our work. This includes:
1. Real-world experiments using 100 clients through 100 virtual machines.
2. An extensive analysis of communication costs.
3. Further ablation studies for a more comprehensive understanding.
- Additionally, we'd like to highlight that several points brought up during the review process have been addressed in our appendix. We've emphasized these in our updated responses for your convenience.
- Given the tight timeline, with the discussion phase concluding on Aug 21st at 1pm EDT, we kindly request you to review our responses. We believe our detailed responses provide clarity on the concerns raised. Your feedback is pivotal to the quality of our work, and we earnestly await your thoughts, especially since we have less than 6 days remaining.
Thank you for your time and understanding. | Summary: This work focuses on a practical application of federated learning, federated semi-supervised learning for object detection. It assumes that the server has labeled data and the clients only have unlabeled data. The proposed method is two-fold: selective training and orthogonal enhancement.
Strengths: - This paper integrates multiple existing techniques for federated semi-supervised learning. It is technically sound.
- The paper is generally well-written, clear, and easy to follow.
- Experiments on three object detection datasets demonstrate the effectiveness of the proposed method.
Weaknesses: - The novelty is somewhat limited. The novelty lies in the integration of existing methods and making them work on the target use case, while the key algorithms are more or less adopted from existing works.
- The evaluation scale can be extended to a larger number of clients. The majority of the experiments are run with 3 clients and only one experiment is run with 20 clients. The autonomous driving use case is more like cross-device FL. It’s important to evaluate on more clients with client sampling.
- Some existing works with related techniques are not discussed: [1] and [2] fix the head and only train the backbone, similar to the selective training. [3] and [4] also use EMA in local training.
- Since the first stage is to learn representation, a comparison could be done with federated self-supervised learning methods [3][4] for learning visual representations.
- The organization of section 3 is not very straightforward. For example, 'Personalized Pseudo Labeling for Unlabeled Clients' is more suitable in Section 4 instead of problem statement.
[1] Fedbabu: Towards enhanced representation for federated image classification, ICLR’22.
[2] Spherefed: Hyperspherical federated learning. ECCV’22
[3] Collaborative unsupervised visual representation learning from decentralized data. ICCV’21
[4] Divergence-aware federated self-supervised learning. ICLR’22.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Line 3 of the Algorithm conducts client sampling, while it seems that the experiments do not conduct client sampling.
- The server and client are assumed to be from the dataset. What would be the impact if the dataset in the server and clients are not from the same dataset? It would be more practical as we are unable to assume that we can collect the data in a server similar to the clients.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed in the paper. It could contain aspects such as the scale of the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We address each below.
### W1. Novelty
- We would like to respectfully disagree with the assertion that the novelty of our work is limited. When trying to naively apply existing techniques to the SSFOD setting, we observed notably poor performance, which led to our novel formulations. As we have pointed out in our paper, related FL research tends to focus on image classification and small image sizes, like in CIFAR. In contrast, our work tackles object detection, which is a more complex problem with larger and noisier images.
- To elaborate on the novelty in the general response, we propose the first SSFOD framework, where the server has labeled data while clients only have unlabeled data. To the best of our knowledge, this is a novel problem formulation that has not yet been addressed in the literature. Rather than simply predicting classes in image classification, pseudo labeling in object detection involves predicting ground-truth boxes, applying non-maximum suppression, and assigning class labels to the predicted boxes. Because each step is intricate, constructing and training a useful pseudo labeler—especially in the presence of data heterogeneity—is far from simple (theoretical and ablation analysis is provided in the **Appendix**).
- In the centralized learning setting, semi-supervised object detection usually uses labeled data to correct training on unlabeled data and performs domain adaptation with every batch. However, in our federated learning scenario, the clients are assumed to have no access to labeled data. Simply applying existing techniques in this context would result in suboptimal and unstable performance (**Alternate Training part in Table A** of the attached pdf & **Table 1 in the main paper**). We believe FedSTO is a novel contribution and hope that this explanation will reinforce this viewpoint.
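The pseudo-labeling steps named in the response above (predicting boxes, applying non-maximum suppression, assigning class labels) can be sketched as follows. This is an illustrative simplification, not the paper's implementation; the confidence and IoU thresholds are assumed values.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pseudo_labels(preds, conf_thresh=0.5, iou_thresh=0.5):
    """preds: list of (box, score, cls). Returns kept (box, cls) pseudo labels."""
    # 1) keep only confident predictions, highest score first
    kept = sorted((p for p in preds if p[1] >= conf_thresh),
                  key=lambda p: p[1], reverse=True)
    # 2) greedy NMS: drop boxes overlapping a higher-scoring box of the same class
    labels = []
    for box, score, cls in kept:
        if all(c != cls or iou(box, b) < iou_thresh for b, c in labels):
            labels.append((box, cls))
    return labels

preds = [((0, 0, 10, 10), 0.9, "car"),
         ((1, 1, 11, 11), 0.8, "car"),     # overlaps the first box -> suppressed
         ((50, 50, 60, 60), 0.7, "person"),
         ((0, 0, 5, 5), 0.3, "car")]       # below confidence threshold -> dropped
print(pseudo_labels(preds))
```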
### W2. Evaluation on a larger number of clients, client sampling & Q1. Client sampling
- We agree that it is crucial to test our framework in a setting with a larger number of clients. To support this, we have conducted experiments with 1 server and 100 clients, as detailed in **the Appendix Section M & Table C in the attached pdf**.
- We would like to stress that our experiments are not merely theoretical or synthetic. They are conducted in a real-world setting, with genuine network communications occurring between 100 AWS virtual machines. We do this to underscore the practicality and applicability of our findings, and to convince the community that our proposed approach to personalized, semi-supervised FL is viable.
### W3. Discussions for related techniques
- Thank you for highlighting related works [1]-[4]. Regarding the freezing in [1] and [2], we have evaluated similar methods, especially those in "Personalized Federated Learning with Feature Alignment and Classifier Collaboration, ICLR 2023," which emphasizes enhancing feature alignment by freezing the head. Though similar, they focus on image classification while our work addresses the unique challenges of object detection. We list more results and discussions of the ablation study on these methods [1, 2] in **Table 2, Appendix Section I & Appendix Table 7**.
- For the EMA in local training as in [3] and [4], we acknowledge the similarities in terms of leveraging EMA models. However, while [3] and [4] seem to be inclined towards using pseudo labels or representations directly from the EMA model in the image classification task, our method involves generating annotations after the non-maximum suppression step using the EMA model's predictions in the object detection task.
- **Given acceptance, we'll incorporate the references you provided in our final version.** This will certainly strengthen the connections to related methods.
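The EMA teacher update discussed in W3 follows the standard exponential-moving-average rule; a minimal sketch is below (the decay value is a common default, not a figure from the paper, and the dict-of-weights representation is a simplification).

```python
def ema_update(ema_params, params, decay=0.999):
    """Standard exponential-moving-average update for a teacher model.

    The teacher (EMA) weights drift slowly toward the student weights,
    which stabilizes the pseudo labels the teacher produces.
    """
    return {k: decay * ema_params[k] + (1.0 - decay) * params[k]
            for k in ema_params}

teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, decay=0.9)
print(teacher)  # {'w': 0.9}
```

In the object-detection setting described above, the teacher's raw predictions would then pass through non-maximum suppression before being used as annotations, rather than being consumed directly as in image classification.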
### W4. 1st stage for self-supervised learning
- Thank you for pointing out the potential comparison with federated self-supervised learning methods [3], [4]. While our approach has similarities with conventional self-supervised methods, our main focus was to progressively develop SSFOD, given the nascent state of research in this topic. We recognize the value of the cited works; however, they primarily target image classification. Directly applying their techniques to federated object detection introduces unique challenges not present in their original context. We are nonetheless enthusiastic about exploring a federated self-supervised backbone in our future research, and appreciate your insightful suggestion.
### W5. Not Straightforward Organization of section 3
- Thank you for your keen observation on the organization of Section 3. We concur that "Personalized Pseudo Labeling for Unlabeled Clients" plays a pivotal role in our problem formulation. Given its significance, especially when juxtaposed with "data heterogeneity", we believe its initial discussion is warranted in Section 3. Our proposal is to retain an introductory segment on personalization in Section 3 and defer the deeper, algorithmic details to Section 4. This restructuring aims to strike a balance, addressing the problem's essence while streamlining technical content.
### Q2. Server and Client have different dataset
- Thank you for the insightful comment. We concur that the server and client data from differing datasets introduce a valuable layer of heterogeneity.
- Our current focus revolves around addressing complexities like weather-induced feature distribution skew and label density heterogeneity, particularly in applications like autonomous driving, detailed in **Appendix Section E**.
- While the scenario you highlight—which could encompass out-of-distribution data or introduce new classes—is not directly addressed in our work, we anticipate our methods, such as personalized pseudo labeling, might provide some adaptability. However, this is a rich area warranting further exploration. In the future work, we will explore these scenarios more comprehensively.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
- In response to the feedback, we've conducted rigorous additional experiments to enhance the depth and robustness of our work. This includes:
1. Real-world experiments using 100 clients through 100 virtual machines.
2. An extensive analysis of communication costs.
3. Further ablation studies for a more comprehensive understanding.
- Additionally, we'd like to highlight that several points brought up during the review process have been addressed in our appendix. We've emphasized these in our updated responses for your convenience.
- Given the tight timeline, with the discussion phase concluding on Aug 21st at 1pm EDT, we kindly request you to review our responses. We believe our detailed responses provide clarity on the concerns raised. Your feedback is pivotal to the quality of our work, and we earnestly await your thoughts, especially since we have less than 6 days remaining.
Thank you for your time and understanding.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. Most of the concerns are well addressed with the provided explanations and supplementary experiments; the reviewer would like to increase the score to 5 and suggests the authors incorporate this content in the final version.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer,
We are truly grateful for your thorough evaluation and the subsequent score adjustment. We will certainly incorporate the provided feedback into the final version of our manuscript to ensure its quality and coherence. Your insightful comments have been pivotal in refining our work.
Thank you once again for your time and valuable input.
---
Rebuttal 2:
Comment: Dear Reviewer 8JqK, I'd like to kindly ask you to read the rebuttal provided by the authors. Please respond and/or update your rating if necessary. Thank you. -AC | Summary: This paper explores Semi-Supervised Federated Object Detection (SSFOD), a pioneering framework for distributed data sources with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. The authors present a two-stage strategy, FedSTO, encompassing Selective Training followed by Orthogonally Enhanced full-parameter training, to address data shift while representing the first implementation of SSFOD for clients with 0% labeled non-IID data. The proposed approach includes selective refinement of the detector backbone to avert overfitting, orthogonality regularization to enhance representation divergence, and local EMA-driven pseudo label assignment to produce high-quality pseudo labels.
Strengths: The paper provides helpful figures, explores the valuable direction of semi-supervised Federated Object Detection (SSFOD), and presents an effective two-stage strategy called FedSTO for addressing data shift in a distributed data source. The proposed approach achieves state-of-the-art results in multiple datasets. Additionally, the paper provides a clear problem statement that is easy to understand.
Weaknesses: However, there are a few areas for improvement before the paper can be considered for publication. First, the references list is incomplete as some essential references are missing, such as "Federated learning with label distribution skew via logits calibration."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss limitations and ethical issues in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your comments below.
### W1. Additional references (such as `Federated learning with label distribution skew via logits calibration')
- Thank you for pointing out the omission in our references list and for suggesting the inclusion of "Federated learning with label distribution skew via logits calibration." We appreciate your recommendation and acknowledge the significance of this work in the domain of federated learning.
- However, we would like to kindly note that while both our research and the suggested reference operate within the realm of federated learning, the specific challenges and methodologies they address are quite distinct. The cited paper delves into addressing label heterogeneity in federated image classification through logits calibration. In contrast, our work primarily targets federated object detection, placing particular emphasis on weather-conditioned heterogeneity. Moreover, the inherent complexity of label skewness in object detection – which encompasses annotation, objects per image, and class heterogeneity – sets it apart from image classification. Given the nuances of the problems and the intricate differences between the two contexts, the suggested reference may not be directly applicable to our study.
- Nevertheless, we recognize the value of drawing connections between related yet distinct works in federated learning. Should we be granted additional pages for the camera-ready version, we are fully committed to incorporating the recommended reference into **Section 2.1 "Federated Learning (FL): Challenges and Advances"**, highlighting its relevance and distinction from our approach.
- Once again, thank you for your valuable feedback and suggestions. We appreciate the time and effort you have dedicated to reviewing our paper.
### Limitations and ethical issues are not discussed
- Thank you for pointing out the lack of an explicit "limitations and negative social impact" section in the main body of the manuscript.
- Due to space constraints, and in an effort to maintain the coherence of the main text, we have extensively discussed the potential limitations and negative social implications **in the Appendix**. We understand the importance of addressing such concerns in contemporary AI research and have made a dedicated effort to ensure that these topics are addressed in detail, albeit in the supplementary section.
- We believe that it is essential for readers and practitioners to be aware of the possible pitfalls, limitations, and societal implications of the proposed methods. We hope that the provided discussion in the Appendix serves this purpose. In subsequent versions of this paper, we'll strive to make such sections more prominent by referencing it in the main text. Thank you for your understanding and feedback.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
- In response to the feedback, we've conducted rigorous additional experiments to enhance the depth and robustness of our work. This includes:
1. Real-world experiments using 100 clients through 100 virtual machines.
2. An extensive analysis of communication costs.
3. Further ablation studies for a more comprehensive understanding.
- Additionally, we'd like to highlight that several points brought up during the review process have been addressed in our appendix. We've emphasized these in our updated responses for your convenience.
- Given the tight timeline, with the discussion phase concluding on Aug 21st at 1pm EDT, we kindly request you to review our responses. We believe our detailed responses provide clarity on the concerns raised. Your feedback is pivotal to the quality of our work, and we earnestly await your thoughts, especially since we have less than 6 days remaining.
Thank you for your time and understanding.
---
Rebuttal Comment 1.2:
Title: Thank you for the rebuttal
Comment: Thank you for the additional details and explanations, which resolved some of my concerns. Nevertheless, the overall quality is still borderline, and my rating remains unchanged.
---
Reply to Comment 1.2.1:
Comment: Thank you for recognizing the efforts we've made to address the concerns you raised.
Given that we believe we've addressed the sole concern you mentioned, we're curious about any additional aspects of our manuscript that might still seem ambiguous or unclear to you. **In the rebuttal, the only weakness pointed out was related to the reference list**. Your feedback is invaluable to us, and if there are other areas that need further clarification or improvement, we'd be eager to know.
Your insights and guidance are instrumental, and we would appreciate any additional feedback you might have.
Thank you once again for your time and consideration. | Rebuttal 1:
Rebuttal: We extend our gratitude to all the reviewers for providing comprehensive and thoughtful feedback on our manuscript. We appreciate your valuable insight into the strengths and areas for improvement of our work.
### Summary of Strengths cited by Reviewers
- **Novelty in Approach and Framework:** We are encouraged by the observations of **Reviewers 2CR4** and **Reviewer qq85** noting the significant strength of our proposed SSFOD and FedSTO approaches, particularly in non-IID weather conditions. While it may seem that prior methods tackled similar problems for image classification, these methods perform poorly in SSFOD's substantially more complex and practically relevant setting. The novel components of FedSTO — two-stage selective training and high-quality personalized pseudo-labelers — substantially boost performance compared to simply applying prior work to SSFOD (**we present ablation studies and theoretical analysis in Tables 1 & 2 of the main text and Sections F - J of the Appendix**). Indeed, FedSTO nears the performance of centralized learning models even with access to only 25% of the labels and non-IID data.
- **Impact**: We appreciate **Reviewers AM8c, Reviewer 1XGa**, and **Reviewer qq85** for noting the important motivation and impact of our work. FedSTO reaches competitive performance while also exhibiting crucial real-world benefits of not sharing data to a central server and not requiring labeling at the edge. Preserving user privacy and reducing training cost are necessary improvements for feasibility in the real-world for a wider range of applications.
- **Clarity, Structure, and Presentation:** We are pleased to note the collective positive feedback on the clarity and presentation of our work, as acknowledged by **Reviewers 2CR4**, **Reviewer qq85**, **Reviewer 8JqK**, and **Reviewer AM8c**.
- **Technical Soundness and Integration:** **Reviewers 8JqK** and **Reviewer AM8c**'s emphasis on our paper's technical solidity is greatly appreciated. Presenting multiple techniques for semi-supervised FL and providing theoretical analysis (in the Appendix) while ensuring the coherence of the overarching framework was one of our primary objectives.
- **Extensive Experiments and Comparative Analysis:** We are encouraged by the feedback from **Reviewers AM8c** and **Reviewer 1XGa** regarding our empirical approach. We conducted comprehensive experiments across diverse datasets and made comparisons against multiple baselines—as well as SOTA techniques—to showcase the efficacy and robustness of FedSTO. **Reviewer 1XGa** specifically acknowledges the effectiveness of selective training in establishing a robust backbone for object detectors in FL, particularly its ability to reduce biases and heterogeneities.
- **Relevance to Practical Applications:** The observations made by **Reviewer AM8c** highlight our work's significance in real-world scenarios, where not all client data may be labeled. Leveraging unlabeled data in FL remains an imperative challenge, and our work attempts to provide tangible solutions in this direction. FedSTO also reduces communication cost by 20.52% in comparison to conventional FL algorithms, as **detailed in the attached PDF of tables**.
### Core Contributions of Our Work
- **Semi-Supervised Federated Object Detection (SSFOD)**: We are glad that **Reviewers 2CR4**, **Reviewer qq85**, **Reviewer 1XGa**, and **Reviewer AM8c** acknowledged the novelty of SSFOD, especially the challenging landscape of FL for distributed data sources with limited labeled data and non-IID client data. Our approach is positioned to significantly benefit applications such as autonomous driving. Not requiring labels in non-IID settings promotes not only cost reductions in training but also importantly preserves privacy as data is retained at the edge.
- **FedSTO**: As pointed out by **Reviewers 2CR4**, **Reviewer qq85**, and **Reviewer 1XGa**, the hallmark of our paper is the introduction of the two-stage training strategy, FedSTO, specifically tailored for clients with 0% labeled non-IID data. This strategy, with its selective training and orthogonally enhanced full-parameter training, is designed to tackle data shifts and represents a pioneering implementation of SSFOD.
- **Selective Training & Orthogonal Enhancement**: **Reviewers AM8c** and **Reviewer 8JqK** have emphasized our distinctive approach of selective training, aimed at refining the detector backbone to prevent overfitting, and orthogonal enhancement, which fosters representation diversity and robustness. These strategies collectively enable our framework to improve generalization capabilities across diverse data distributions.
- **Personalized Pseudo-labeling through EMA**: Both **Reviewers qq85** and **Reviewer AM8c** brought attention to our personalized EMA-driven pseudo-label assignment, a novel contribution that ensures the generation of high-quality pseudo labels for object detection tasks. This component elevates the performance of object detectors in a semi-supervised, non-IID FL context.
- **Empirical Validation and Superior Performance:** As recognized by **Reviewer 2CR4**, **Reviewer AM8c**, and **Reviewer 1XGa**, the empirical validation of our method across multiple datasets (BDD100K, Cityscapes, and SODA10M) demonstrates the superiority of our proposed methodology over existing federated and semi-supervised learning methods. The notable improvements in mAP@0.5 metrics when compared to baseline models further accentuate the efficacy of our approach.
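As a rough illustration of the orthogonal-enhancement idea summarized above, a common form of soft orthogonality regularization penalizes the deviation of a weight matrix's Gram matrix from the identity. The exact regularizer used in the paper may differ; this is a generic sketch, not the authors' formulation.

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality regularizer: || W W^T - I ||_F^2.

    Encourages the rows of W (e.g. flattened conv filters) to be
    mutually orthogonal, promoting diverse feature representations.
    """
    gram = W @ W.T
    eye = np.eye(W.shape[0])
    return float(np.sum((gram - eye) ** 2))

# An orthonormal matrix incurs zero penalty; correlated rows are penalized.
print(orthogonality_penalty(np.eye(3)))        # 0.0
print(orthogonality_penalty(np.ones((2, 2))))  # 10.0
```

In training, a term like `lam * orthogonality_penalty(W)` would simply be added to the detection loss, with `lam` a small weighting coefficient.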
### Contents of the attached PDF
In the attached PDF, we included the following results:
- Results for fully supervised FL
- mAP@0.75 results
- Global model's performance
- Results with 100 clients and various sampling ratios
In light of the feedback, we are committed to refining our manuscript further, addressing any lingering queries, and incorporating your valuable insights into the final version of our paper.
Pdf: /pdf/341947ee8034f275699110599ecf270f64cb776a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a Semi-Supervised Federated Object Detection (SSFOD) framework featuring a two-stage training strategy, FedSTO, designed to address the challenges of heterogeneous unlabeled data in federated learning. The proposed framework employs selective training and orthogonality regularization with personalized EMA-driven pseudo-labeling to facilitate robust and diverse feature learning, enhancing object detection performance across multiple weather conditions and data distributions. The empirical results provide evidence of the merits of FedSTO over existing federated and semi-supervised learning methodologies. Notably, despite non-IID clients having no labels, FedSTO achieves performance comparable to fully supervised centralized models.
Strengths: • The paper introduces a solution for SSFL for object detection with a one-stage detector in non-IID weather conditions.
• The paper outlines a new approach to SSFOD, combining existing methods and adapting them to the federated learning setting.
• The paper is well-structured, has clear headings and subheadings, and is easy to follow.
Weaknesses: 1. The inclusion of a federated setting with fully labeled data on at least one dataset would have provided a valuable comparison to the proposed approach. Also, while the comparison with the partially supervised baseline helps establish the lower bound of server pretraining, it is not a fair comparison due to the significant difference in training data volume.
2. Although the models are evaluated on their respective datasets, it would have been beneficial to assess their performance on a global test set as well. Context-specific evaluations are necessary, but examining the models' generalization can help avoid the issue of overly specialized clients, which is a concern in personalization.
3. An interesting analysis would have been to investigate how the performance gap of the proposed method changes with increasing amounts of data. The reported performance of the Fully Supervised Centralized YOLOv5 Large in the paper is notably lower compared to other papers on full-scale datasets (e.g., YOLOv5s achieves 77.2 in https://arxiv.org/pdf/2108.11250v7.pdf). This raises the question of whether the low performance, and the narrowing of the gap with the centralized setting, is due to the limited training data. It would be valuable to compare the proposed approach against scenarios with more data available or, if possible, employing full server pretraining.
4. Regarding Cityscapes, since the dataset does not provide precise weather information for each annotation, the data is distributed uniformly at random, and it is not clear how it is non-IID. Also, the non-IID aspect mentioned in the paper is limited to addressing the skew in feature distribution induced by weather variations. The authors could have utilized the Foggy Cityscapes and KITTI datasets to obtain a more realistic non-IID setting.
5. The proposed SSFOD problem setting is similar to unsupervised domain-adaptive object detectors and test-time domain adaptation. Even the ingredients of the proposed solution, such as EMA, are known in the semi-supervised OD and domain adaptation literature. To me, the only difference is that the model parameters are updated on local clients instead of via a central weight update. Please clarify the differences and advantages of the proposed solution compared to similar domain-adapted object detection methods.
6. Many key components such as FPT with orthogonal regularization are adapted from existing literature such as [16]. Although I agree that the proposed problem setting is unique and novel, it is also important to clarify the novel technical contributions of the proposed solution.
7. What is the rationale behind using YOLOv5? It would be beneficial to show that the proposed solution is generalizable to other families of object detectors such as Faster R-CNN and recent transformer-based object detectors such as DETR/Deformable DETR.
8. mAP@0.5 is not an ideal evaluation metric to qualify localization capabilities. Please report mAP@0.75 and COCO-style mAP for a better understanding of the precise localization capabilities of the model.
9. EMA, pseudo-label assignment on unlabelled data needs to be better explained. What are the training loss functions used for backbone weight update during the selective training step?
10. It is mentioned that the proposed solution uses augmentations such as Mosaic, left-right flip, and large-scale jittering. But it is not clear how the corresponding ground-truth box positions are determined in the case of unlabeled images.
11. As the authors mention that SSFOD introduces additional computational overhead, it is desired to quantify the computational resources employed.
12. The paper contains some errors, such as a missing dot in Line 139 and the use of "IID" in Table 5.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses mentioned above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not have an explicit section for "limitations and negative social impact", but that is not a major concern for the research topic studied in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback in helping refine our work.
### W1. Results of fully labeled FL & W8. mAP@0.75
- As you suggested, we have added fully-supervised IID and non-IID FL results to Table A of our Global response's PDF, as well as comprehensive evaluations using mAP@0.75. These results demonstrate that our FedSTO approach achieves competitive performance even when it only has access to 25% of the labels.
- The partially supervised baseline establishes a lower bound from server pretraining and shows the utility of unlabeled data. We agree that the strength of FedSTO is highlighted by the three other baselines: significant improvements over vanilla SSFL and performance close to the fully-supervised centralized and FL results.
### W2. Global performance
- We agree on the importance of analyzing the generalizability of our models to other data distributions. Table E of our Global response's PDF shows FedSTO's competitive global performance.
### W3. Lower performance compared to the previous work
- Upon inspection, the Yolov5 score of 0.772 in your referenced paper (https://arxiv.org/pdf/2108.11250v7.pdf) is only for the Car class. As listed in Table 4 of our paper, our centralized model reaches 0.788 and FedSTO reaches 0.740 for the Car class. Thus, our performance is comparable (especially since we only have access to 25% of the labels).
### W4. Non-IIDness on Cityscapes
- Your observation is correct that Cityscapes was distributed uniformly at random (stated in line 260 of our main text). As you suggested, we would like to incorporate experiments with Foggy Cityscapes, space permitting. As our paper pioneers the exploration of SSFOD, we chose to demonstrate the efficacy of our approach on standard settings of datasets and anticipate future extensions to more nuanced settings.
### W5. Differentiation from previous domain-adapted object detection (OD) methods
- Although inspired by past unsupervised domain adaptation methods, there are several major differences in our work. A novel challenge we address is the separation of labeled and unlabeled data. Conventional domain-adapted OD methods use labeled data instances to conduct SSL. Our scenario, where each client only holds unlabeled data and cannot access labels, presents a unique and complex problem.
- To overcome this, FedSTO uses personalized pseudo labelers and orthogonal enhancements:
- Going against conventionally used global models, we demonstrate that local EMA models are stronger pseudo labelers.
- Stabilize training with only unlabeled data by using the alternate server and local pseudo labelers.
- Introduce warm-up and alternate training phases which yield personalization and generalization.
- Mitigate instability from training exclusively with unlabeled data by employing selective training.
- Theoretical analysis is included in the Appendix.
### W6. Novel technical contribution
- Although we build on prior research, their context and application differ significantly. A detailed theoretical explanation is in **Section F of our Appendix**. The greater challenge in using single-stage detectors, along with strategic loss application, signifies novel technical contributions.
- For example, even though the penalizing loss may appear similar to existing literature such as [16] cited in our main paper, the intent behind its usage differs vastly. The focus of [16] was to enhance DNNs by applying the loss across various layers of the architecture. In contrast, we have strategically applied the loss to the neck and head of our architecture with a specific aim: robustness of detection quality to heterogeneities among client datasets. Similarly, components like FPT with orthogonal regularization were primarily tested on image classification tasks, whereas we have adapted them to the more challenging domain of object detection.
### W7. Rationale behind Yolov5
- Given our problem setting's novelty, we focused on single-stage detectors as they jointly achieve high performance, faster inference times, and small model sizes, which are crucial for real-world applications such as autonomous driving. Among single-stage detectors, Yolo models are a de facto standard. In addition, the nature of FL demands a focus on models that do not incur excessive communication costs. The suggested models, unlike Yolo, do not necessarily satisfy the fast inference or low communication cost criteria.
### W9. EMA pseudo label assignment + training loss
- Thank you for your interest in the pseudo-label mechanism and training loss. Due to space constraints, we provide explanations in **Appendix G and H**. As a brief summary, EMA constructs an enhanced local model that smooths fluctuations during model updates. The EMA-based model serves as a local pseudo labeler to label the unlabeled data held by each client. This labeling is essential for performing learning tasks on the unlabeled data. We employ the combination of objectness, bounding box, and classification losses included in the standard Yolov5 loss. We train only on datapoints whose pseudo labels have a high confidence score.
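- To make the mechanism concrete, the following is a minimal sketch of an EMA pseudo labeler in plain Python. The decay value, confidence threshold, and dictionary-based parameter representation are illustrative assumptions for exposition, not the paper's exact implementation.

```python
# Minimal sketch of an EMA (exponential moving average) pseudo labeler.
# The decay value and the dict-of-scalars parameter representation are
# illustrative only; a real detector would update tensors per layer.

def ema_update(ema_params, model_params, decay=0.999):
    """Blend current model weights into the EMA copy in place."""
    for k in ema_params:
        ema_params[k] = decay * ema_params[k] + (1.0 - decay) * model_params[k]
    return ema_params

def filter_pseudo_labels(predictions, conf_threshold=0.5):
    """Keep only high-confidence predictions as pseudo labels for training."""
    return [p for p in predictions if p["score"] >= conf_threshold]
```

  In this sketch, the EMA copy changes slowly relative to the raw model, which is what smooths the fluctuations between updates; only predictions above the (assumed) confidence threshold survive as pseudo labels.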
### W10. Ground truth boxes for augmented unlabeled images
- Ground-truth box positions for unlabeled images are determined from the predictions of our EMA pseudo labeler. These predicted boxes are subjected to data augmentation in parallel with the image data. Thus, we generate augmented image-label pairs from unlabeled images, which are then used for further training.
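- As a minimal illustration of transforming pseudo-label boxes in parallel with a geometric augmentation, consider a left-right flip. The `(x1, y1, x2, y2)` pixel-coordinate box format is an assumption for illustration; the same idea extends to Mosaic and scale jittering.

```python
# Minimal sketch: when an image of a given width is flipped left-right,
# its pseudo-label boxes must be mirrored with it. Box format (x1, y1, x2, y2)
# is an assumed convention for this illustration.

def hflip_boxes(boxes, image_width):
    """Mirror boxes horizontally so they stay aligned with the flipped image."""
    flipped = []
    for x1, y1, x2, y2 in boxes:
        flipped.append((image_width - x2, y1, image_width - x1, y2))
    return flipped
```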
### W11. Computational resources
- In our study, while SSFOD has inherent complexities due to pseudo labeling, it did not significantly increase memory use compared to fully-supervised FL; we utilized the same V100 GPU throughout. Furthermore, FedSTO reduces communication costs by 20.52% by freezing the model's neck, as seen in Table D of our response pdf. Going forward, we will aim for an optimal balance between performance and efficiency.
### Limitations
- Due to space constraints, potential limitations and societal implications are in the Appendix.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal and additional experiments.
Comment: Thanks for the well-organized rebuttal from the authors. The feedback resolves most of my concerns regarding the paper, so I am inclined to increase my rating to Weak Accept. However, I would also like to hear other reviewers' thoughts on the rebuttal and whether they have open issues.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 2CR4
Firstly, we'd like to express our sincere gratitude for taking the time to review our rebuttal in detail and for your consideration in adjusting your rating. Your initial feedback was instrumental in helping us clarify and enhance the aspects of our paper.
We understand and respect your desire to gather collective insights from other reviewers. If there are further questions or any additional concerns post-discussions, please feel free to raise them. We're committed to addressing all aspects to ensure the clarity and quality of our work.
Again, thank you for your constructive feedback, and we truly appreciate your open-minded approach to our rebuttal. | null | null | null | null | null | null |
Discriminative Calibration: Check Bayesian Computation from Simulations and Flexible Classifier | Accept (poster) | Summary: This paper presents a classifier-based approach to measuring miscalibration in Bayesian computation, including for methods such as Approximate Bayesian Computation (ABC) and Simulation-Based Inference (SBI) methods like neural posterior estimation. The method enables the test statistic to be learned from data and provides an interpretable divergence measure. The method is a form of two-sample testing applied in the amortized and simulation-based inference settings. Beyond standard approaches to using classifiers for two-sample testing, the authors develop several classifier-based approaches which use a form of "label mapping", and provide theoretical work on the validity and efficacy of these approaches. Experiments test the method for posterior inference in cases where the posterior is known and provide empirical verification of the theoretical results. The paper also compares to the commonly used simulation-based calibration method, showing clear improvements with the methods proposed in the paper. Experiments on cosmological data are also explored.
Strengths: This is a good paper, it provides:
- a clear description of the challenge they hope to address, specifically providing a better and more statistically interpretable measure of miscalibration over simulation-based calibration
- a clear description of the proposed methods, including how to use label mapping to develop calibration based diagnostics, and how this differs from other approaches
- theoretical grounding for the calibration based measures, including the expected behaviors in large sample limits
- useful discussions on implementation details
- useful discussion on the legitimacy and power of the tests.
Weaknesses: While the idea of using classifiers for two-sample testing is not new, the authors do develop new methods based on label mapping. While the new classification approaches are discussed and will likely be quite useful for simulation-based inference, some attempts at using classifiers to test posterior inference quality have been made before (albeit not directly with some of the new classification methods discussed in this work). For instance, Vandegar et al., "Neural Empirical Bayes: Source Distribution Estimation and its Applications to Simulation-Based Inference", AISTATS 2021, use the AUC of a classifier as a diagnostic in a simulation-based setting.
Further development of the experiments would be quite useful. A very simple experiment is shown, allowing an exploration of the methods, but some experiments more clearly showing the quality of the method in a controlled but more complex simulation-based inference setting could provide useful insights to readers. For instance, a common example is the Simple Likelihood, Complex Posterior (SLCP) problem from Papamakarios et al., "Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows", AISTATS 2019. As it stands now, it is difficult to assess the quality of the method in the more complex cosmological dataset experiment.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you provide more details on the posterior model in the multi-variate Gaussian example?
Could you provide experiments on more complex but controlled examples, for instance the SLCP problem from Papamakarios et al.? This could provide more evidence on the utility of the method in SBI settings.
How do these methods scale with feature and parameter dimension? Does this affect the ability to attain a useful diagnostic?
You mention that the presence of nuisance parameters does not affect the quality of the diagnostics. However, the Neyman-Pearson lemma does not guarantee that the likelihood ratio is the uniformly most powerful test for a given size in the presence of nuisance parameters. Perhaps I am not fully understanding the text; could you explain if this has any ramifications for the proposed tests?
Could you add references to other simulation-based inference work using classifiers for diagnostics? In addition, recent work on using neural networks for two-sample testing (e.g. Grosso et al., https://arxiv.org/abs/2305.14137) may be relevant to discuss in the related work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors provide clear discussions of limitations. It would be interesting to also know if there are limitations dependent on the dimension of the features or parameters, and how this may impact the number of samples needed to attain a useful test.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your careful reading and constructive reviews. Please find below a detailed point-by-point response to all of your comments.
**Further development of the experiments would be quite useful/more complex but controlled examples, for instance the SLCP problem?**
Thank you for your suggestions on the experiments! We find the SLCP example interesting. To showcase our method on a wider range of problems, we have added three simulation data examples from the `sbi-benchmark` repo: the Gaussian linear, Gaussian mixture, and SLCP examples. We run our calibration on these three datasets with varying inference settings, and we obtain positive divergence estimates when the inference is not exact. Please see our **Shared Response #1** and the attached pdf file.
**Could you provide more details on the posterior model in the multivariate Gaussian example?**
We would like to refer you to our Supplement B.2 for experiment details. It was an oversight that we did not state more clearly that this information was there, and we apologize. We will state this clearly in the revision.
In the closed-form Gaussian experiment in Section 5, we consider an easy example: a normal likelihood $y\sim$MVN($\theta$, Id) and a normal prior $\theta \sim$MVN(0, Id). The true posterior is a closed-form Gaussian. We consider a sequence of corrupted inferences obtained by adding a bias shift to the posterior mean or multiplying the posterior covariance matrix by a bias factor. Notably, we have kept both the true posterior distribution and the sampling corruption mean-field to make the calibration task easier for the traditional SBC rank test.
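The setup above can be sketched in a few lines of Python. Under the stated unit-variance likelihood and prior, the exact posterior is $N(y/2, 1/2)$ per dimension; the bias shift and variance factor values below are illustrative choices, not the exact corruption grid used in the paper.

```python
# Minimal sketch of the closed-form Gaussian calibration experiment:
# exact mean-field posterior draws vs. "corrupted" draws whose mean is
# shifted by a bias and whose variance is inflated by a factor.
import random

def true_posterior_draw(y, rng):
    """One mean-field draw from the exact posterior N(y/2, 1/2) per dimension."""
    return [rng.gauss(yi / 2.0, 0.5 ** 0.5) for yi in y]

def corrupted_posterior_draw(y, rng, bias=0.5, var_factor=2.0):
    """Draw with a mean shift and variance inflation, mimicking a miscalibrated q."""
    sd = (0.5 * var_factor) ** 0.5
    return [rng.gauss(yi / 2.0 + bias, sd) for yi in y]
```

A calibration method should flag the corrupted sampler (positive divergence) while reporting no divergence for the exact one.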
**How do these methods scale with feature and parameter dimension? Does this affect the ability to attain a useful diagnostic?**
First, traditional rank-based SBC faces challenges in high dimensions due to (1) the difficulty of checking interactions between dimensions and (2) the expense of multiple testing. Running SBC in high-dimensional problems is one of our initial motivations. In the real data cosmology example, the input dimension ($\theta \times y$ joint space) of the classifier input we trained is ~1000.
Strictly speaking, high dimensions in parameters do not pose any conceptual difficulty for our framework. However, to get useful calibration it is necessary that the classifier is able to distinguish the two or multiple classes—if the classification problem is too challenging, the learned classifier could be far from the optimal one, which would mean our divergence estimate is a weaker bound and more false negatives would occur in testing. Our Section 4 offered practical recommendations to help dimension scaling, including using (1) pre-learned features for dimension reduction, (2) statistical features such as log p and log q whenever available, and (3) network symmetry.
**the presence of nuisance parameters do not affect the quality of the diagnostics. However, the NP lemma does not guarantee that the likelihood ratio is the uniform most powerful test for a given size in the presence of nuisance parameters.**
Thank you for raising this interesting question! In the paper, we discussed the nuisance parameter when the full parameter space can be partitioned into two parts, say $\theta$, the parameters of interest, and $\phi$, the nuisance parameters. We are only interested in the marginal sampling quality of the $\theta$, that is if $p(\theta|y)= q(\theta|y)$. We argue that we only need to restrict the classifier to use the target dimensions $\theta$ and $y$, while other nuisance parameters can be discarded from the classifier. As a consequence of our Theorem 1, the estimated divergence from this classifier is the conditional divergence between $p(\theta|y)$ and $q(\theta|y)$. The nuisance parameter has no ramifications.
The Neyman-Pearson lemma does not apply to nuisance parameter problems as the null and alternative are composite. Here we do not have a composite hypothesis since we just look at marginal posterior distribution $\theta|y$. To be clear, when looking at the $\theta$ margin, the density ratio to be learned from the classifier is $p(\theta|y)/q(\theta|y)$, not the original full density ratio $p(\theta, \phi|y)/q(\theta, \phi|y)$, which is what the likelihood ratio means in the NP lemma.
**Could you add references to other simulation based inference work using classifiers for diagnostics? Recent work on using neural networks for two sample testing may be relevant to discuss in the related work.**
Thank you for your suggestions on references, which we will add.
First, using the classifier two-sample test (C2ST) is not a new idea. However, the traditional C2ST does not directly apply to the SBI problem because of the sequential sampling and the autocorrelation. Our present work not only formulates SBC as a classifier two-sample test task and interprets the divergence in the posterior distribution, but also extends the traditional C2ST to incorporate autocorrelation and extra log-density information, as well as a general label mapping framework. Please see our **Shared Response #2** for details of our method in relation to the C2ST.
It was an oversight that we did not cite Vandegar et al. (2021), who used ROC AUC to compare (1) samples from $p(\theta|y)$ and $q(\theta|y)$, and (2) the data y and the simulated posterior predictive $\int p(y|\theta) q(\theta|y) d\theta$. We apologize for the oversight and will add the citation in the revision. However, Vandegar et al. (2021) addressed a different task than ours. Assessment (1) requires samples from the exact posterior $p(\theta|y)$, which is different from our intended application in diagnostics. Assessment (2) is related to the posterior predictive check (e.g., Gelman 1996), a way to examine the model specification $p(y|\theta)$ rather than (purely) computation accuracy. Our diagnostics are only intended to diagnose the computation, not to check the model correctness (line 377).
---
Rebuttal Comment 1.1:
Title: Response
Comment: Dear Authors, thank you for your detailed responses. I continue to consider this a good paper and believe it should be accepted.
In terms of nuisance parameters, indeed NP is for simple hypotheses, but my point was that only likelihood ratios with simple hypotheses will result in uniformly most powerful tests. This is as opposed to Wilk's test / likelihood ratios test that deals with nuisance parameters through maximization, but is not guaranteed to be uniformly most powerful. For marginal likelihoods, it is not clear this will be a uniformly most powerful test. Nonetheless, we may be discussing an issue that is not the most relevant point for your work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful reading and insights! Let us clarify that the exact NP lemma and the uniformly most powerful test are only precisely applicable when we know the exact likelihood ratios, while in the classifier two-sample test, the density ratios are learned from finite samples. For a two-sample test, we agree that nuisance parameters can be informative too: for example, if there is a known correlation between target parameters and nuisance parameters, then ignoring the nuisance parameters may reduce the power of the test. In general, the theory of Neyman structure and the unbiased similar test generalizes the NP lemma to the presence of nuisance parameters, but these theories are beyond the scope of this paper. | Summary: In this paper the authors focus on the challenge of comparing two conditional distributions p, q from their samples. In particular, this is useful as a check for Bayesian computations.
To achieve this, the authors propose the use of a probabilistic classification approach where they create a new dataset combining samples from p and q and labels related to the particular distribution. Here, the authors propose different approaches and make connections to previous work.
Given the combined dataset, a classifier tries to predict the label from the features. Failing to do so, indicates that the classifier could not discriminate between the two distributions.
The performance of the probabilistic classifier is then used to estimate the divergence between the distributions and for hypothesis testing.
In the empirical section, the authors compare against SBC and show improvements in power given a reduced number of samples.
Strengths: The paper analyses different approaches for the generation of samples to be used by discriminators. They then present theoretical developments and demonstrate their results empirically. The sampling approaches and theoretical treatment are novel, of high quality and well written.
The main advantage here is that even for a sub-optimal classifier, one gets improvements in data efficiency.
Weaknesses: The main weakness of the paper is that in the empirical evaluation, the authors do not explore the case where the samples present more challenging behaviors such as auto-correlations, different types of imbalances, etc.
Other minor suggestions:
- The colors of the thetas in the introduction are hard to see.
- Describing why the bounds are tight could further help the reader in section 2.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I'm thinking of your paper as an approach to test whether the labels and the generated features are connected. This reminds me of [1] where they evaluate the mutual information via neural networks. I'm wondering both if it makes sense in your setting to estimate the divergence via the Donsker-Varadhan representation, and in their setting if your feature generation approach is beneficial. Do you have any opinions on that?
The same has applications for other approaches on independency testing via classifiers where your feature generation approach also has potential.
[1] Belghazi, Mohamed Ishmael, et al. "Mutual information neural estimation." International conference on machine learning. PMLR, 2018.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There are no potential negative societal impacts from this work. The authors mention limitations in a future work section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We are grateful for your careful reading of our manuscript and your constructive review comments. We have now addressed your concerns to improve and clarify the manuscript. Please find below a detailed point-by-point response to all of your comments and questions.
**The main weakness of the paper is that in the empirical evaluation.**
Thank you for your suggestions on the experiments. To showcase our method, we have added three simulation data examples from the sbi-benchmark repo: the Gaussian linear, the Gaussian mixture, and the simple likelihood complex posterior (SLCP) examples. We run our calibration method on these three datasets with varying inference settings, and it picks up positive divergence estimates when the inference is not exact. Please see our Shared Response #1 and the attached pdf file.
**Describing why the bounds are tight could further help the reader in section 2.**
Thank you for your suggestion. We use Section 2 to present four examples to help the reader quickly see the heuristics and intuition: why we can formulate simulation-based calibration as a joint-space classification, and how various label mappings can lead to different divergence estimates. We state the rigorous theory in Section 3, but we felt this would be too abstract without some examples first. We provide detailed derivations of all theorems, including the tightness of the bound, in Supplement C.
**I'm thinking of your paper as an approach to test whether the labels and the generated features are connected. This reminds me of [1] where they evaluate the mutual information via neural networks. I'm wondering both if it makes sense in your setting to estimate the divergence via the Donsker-Varadhan representation, and in their setting if your feature generation approach is beneficial.**
Thank you for your insightful remark. It is an interesting connection, which we did not anticipate! Nguyen et al. (2009) developed an M-estimation to compute the f-divergence from two samples using conjugate dual function theory. Belghazi et al. (2018) developed another tighter sample-based estimate of the two-sample KL-divergence using the Donsker-Varadhan representation. Both estimates have been applied in GAN to replace the traditional binary classifier.
First, let us clarify what makes the simulation-based calibration (SBC) setting different from generic two-sample divergence computation: the SBI joint draws $(\theta, y)$ and $(\tilde \theta_m, y)$ are not IID draws from two simulators, since all the $\theta$ and $\tilde \theta_m$ are paired with an identical $y$. Furthermore, when the inference $q(\theta|y)$ is MCMC, there is additional autocorrelation in $\tilde \theta_m$. Both the across-class and within-class dependence violate the IID sampling assumption in Nguyen et al. (2009) and Belghazi et al. (2018). In comparison, our proposed multiclass classification (Example 4) solves both issues. Even better, our multiclass classification is always balanced. Indeed, the main motivation of our multiclass classification development is to address imbalanced and autocorrelated samples, not purely to develop a sample-based KL divergence estimate, and any finite M produces a divergence metric as well.
Second, similar to how our method can be seen as a generalization of the C2ST, it is likely that one could adapt the KL estimates of Nguyen et al. (2009) or Belghazi et al. (2018) to the SBI setting. Indeed, a contribution of our paper is to formulate the traditionally rank- and histogram-based SBC problem as a sample-based joint-space discriminative task. Our framework is flexible enough to incorporate other sample-based divergence estimates and loss functions, such as the aforementioned sample-based KL estimates, Wasserstein distances, and integral probability metrics. When part of the density information of p or q is known, such as in SBI and GANs, it is straightforward to include it as features in the learning of the Donsker-Varadhan representation, as we have done in Section 4. We leave this extension for future work.
Lastly, the “byproduct” of our calibration method is a novel sample based KL divergence estimate using multi-class classification (example 4), which can be used in other applications. A direct consequence of our Theorem 3 is that, in a generic two-simulator setting, the divergence estimate obtained from any multi-class classifier and any simulation size M is always a lower bound to KL divergence from p to q—and this estimate becomes tight when the classifier is optimal and M goes to infinity. In the paper we have proven its convergence, yielded the convergence rate (theorem 3, where the constant is further the chi-squared divergence), and shown the empirical evidence. Perhaps the most interesting direction for future investigation (although unrelated to this paper) is to compare this new multi-class-classifier estimate to its orthogonal counterpart, the Donsker-Varadhan representation and the f-dual, in the IID sampling case (which is NOT SBC, but includes the GAN, two-sample test, and independence test) such that all three methods are applicable.
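The balanced multiclass label mapping discussed above can be sketched concretely. This is a simplified illustration consistent with the description "one label from each of the M+1 classes per simulation draw"; the exact feature design follows Example 4 of the paper, and the flat `(theta, y)` features here are an assumption.

```python
# Minimal sketch of the balanced multiclass label mapping: one simulation
# draw (theta, y) plus M inference draws tilde_theta_1..M yields exactly one
# training example per class (label 0 for the simulator draw, label m for the
# m-th inference draw), so the M+1 classes are balanced by construction.
# Feature design is simplified for illustration.

def multiclass_examples(theta, y, tilde_thetas):
    """Return (features, label) pairs: one example from each of the M+1 classes."""
    examples = [((theta, y), 0)]
    for m, tt in enumerate(tilde_thetas, start=1):
        examples.append(((tt, y), m))
    return examples
```

Because every simulation draw contributes exactly one example per class, increasing M never unbalances the classification task, in contrast to the naive one-vs-M binary labeling.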
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the provided examples. I still consider the paper to be good. I will keep my score and follow the discussion, suggesting it be accepted.
The foundation of the algorithm is laid in section 3, where the authors show the negative cross-entropy is associated with a generalised divergence between p and q. In particular, D4 becomes KL divergence between p and q as the number of posterior draws goes to infinity. Empirical study validate the theoretical results and shows the effectiveness of the algorithm.
Strengths: 1. This paper is generally well-written and the idea is intuitive and sound.
2. This paper addresses an understudied problem (SBC) in the literature, and the expansion from a single-dimensional rank statistic to a multidimensional test statistic seems to be a significant improvement
3. The algorithm provides theoretical support (Section 3) to the proposed algorithm.
Weaknesses: 1. From the methodological point of view, using a classifier to compare distributions is not a new idea (as the authors have discussed in Section 6).
2. When training a classifier, it seems authors are facing very imbalanced classes and training an imbalanced classifier itself can be difficult. I am wondering if this would cause issues in the tests performed later.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If one wants a divergence KL(p(theta|y), q(theta|y)), one can directly approximate it without using a classifier, right?
KL(p(theta|y), q(theta|y)) = KL(p(theta, y), q(theta|y)p(y)).
Suppose you can sample from p(theta, y), q(theta|y) and p(y). Then we can directly approximate above KL using f-GAN, and perform a permutation test or compute CI using bootstrap.
This approach sounds like a simpler way to obtain a divergence measure between p and q, so I wonder why the authors did not explore this idea nor use it as a benchmark method.
----------
Revision after author's response.
This approximation cannot be easily applied due to the violation of IID assumptions and the sequential nature of SBI, as the authors mentioned.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We are grateful for your careful reading and insightful comments. We have now addressed your concerns to improve and clarify the manuscript. Please find below a detailed point-by-point response to all of your comments and questions. We wish our explanations to help bring clarification.
**From the methodological point of view, using a classifier to compare distributions is not a new idea (as the authors have discussed in Section 6.**
Using the classifier two-sample test (C2ST) to perform a two-sample test is not new. However, the traditional C2ST does not directly apply to the SBI problem because of the sequential sampling and the autocorrelation. Our results not only formulate SBC as a classifier two-sample test task and interpret the divergence in the posterior distribution, but also extend the result to incorporate autocorrelation and extra log-density information. We also develop a more general label mapping framework. This framework and the novel multiple-class classifier approach may be applicable to other two-sample test problems.
Please see our **Shared Response #2** for details on why our paper develops the classifier two-sample test approach.
**When training a classifier, it seems authors are facing very imbalanced classes and training an imbalanced classifier itself can be difficult. I am wondering if this would cause issues in the tests performed later.**
We apologize for this confusion: our newly developed multiclass label mapping and the corresponding multiclass classifiers are always *balanced* (Example 4 in Section 2, where we create M+1 labels from M+1 classes from each simulation run, one label per class). Hence increasing the number of inference draws (M) is not a difficulty for us. We have developed asymptotic theory, a convergence rate (as $M\to \infty$), and empirical validation in this large-M limit.
In comparison, the naive one-to-M binary classification could run into the imbalanced-classification challenges you have raised, underscoring the benefit of our generalized label mapping. That said, our method still supports binary classifiers with reweighting (Theorem 8 in the Appendix). In our experiments, we find that even under highly imbalanced binary labeling (M=1000), with appropriate reweighting, the binary classifier is still able to detect the inference flaws and output accurate divergence estimates.
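For intuition only, the reweighting idea can be sketched as follows. This is not the paper's exact setup: the features, the shift size, and the choice of logistic regression are illustrative placeholders. A binary classifier with balanced class weights can still pick up a distribution mismatch despite 1:M labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
M, n = 50, 200                               # illustrative sizes
x0 = rng.normal(0.0, 1.0, size=(n, 1))       # class 0: one prior draw per run
x1 = rng.normal(0.5, 1.0, size=(n * M, 1))   # class 1: M inference draws per run,
                                             # shifted to mimic a flawed inference
X = np.vstack([x0, x1])
z = np.concatenate([np.zeros(n), np.ones(n * M)])

# "balanced" class weights counteract the 1:M label imbalance
clf = LogisticRegression(class_weight="balanced").fit(X, z)
# a positive coefficient indicates the shift between the two classes is detected
```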
We would like to refer to our **Shared Response #3** for a more detailed discussion of why our multiclass method leads to balanced classification.
**[f-GAN] If one wants a divergence KL(p(theta|y), q(theta|y)), one can directly approximate it without using a classifier, right?
Suppose you can sample from p(theta, y), q(theta|y) and p(y). Then we can directly approximate above KL using f-GAN, and perform a permutation test or compute CI using bootstrap.**
Thank you for your insightful remark. We note that the f-GAN cannot be directly applied to obtain a useful divergence for similar reasons as the C2ST discussed in the shared response.
Background: Nguyen et al. (2009) developed an M-estimation to compute the f-divergence from two samples. This estimate was used in f-GAN to replace the binary classifier. Belghazi et al (2018) developed another tighter sample-based M-estimation for KL-divergence.
The sample-based M-estimation in Nguyen et al. (2009) or Belghazi et al (2018) is not directly applicable to the simulation based calibration (SBC) setting because:
(1) the method in Nguyen et al. (2009) and f-GAN applies to IID samples, but the SBI joint draws $(\theta, y)$ and $(\tilde \theta_m, y)$ are not IID draws from two simulators since all the $\theta$ and $\tilde \theta_m$ are paired with an identical $y$, and
(2) when the inference $q(\theta|y)$ is MCMC, there is additional auto-correlation in $\tilde \theta_m$ draws.
Similar to how we can see our method as a generalization of the C2ST, it is likely that we could adapt Nguyen et al. (2009) or Belghazi et al. (2018)'s KL estimate to the SBI setting. Indeed, it is the contribution of our paper to formulate the traditionally rank- and histogram-based SBC problem as a sample-based joint-space discriminative task. Our framework is flexible enough to incorporate other sample-based divergence estimates and loss functions, such as the aforementioned sample-based KL estimates, Wasserstein distances, and integral probability metrics. However, we leave this as future work.
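For intuition only, the sample-based KL estimation idea discussed above (in the IID setting, where it does apply) can be sketched via the Donsker-Varadhan lower bound, here with the optimal critic $T = \log p - \log q$ plugged in for two known Gaussians; everything below is illustrative and not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# p = N(0,1), q = N(1,1); true KL(p || q) = 0.5
xp = rng.normal(0.0, 1.0, n)
xq = rng.normal(1.0, 1.0, n)

T = lambda x: 0.5 - x          # optimal critic: log p(x) - log q(x)
# Donsker-Varadhan bound: KL >= E_p[T] - log E_q[exp(T)]
kl_est = T(xp).mean() - np.log(np.exp(T(xq)).mean())
# kl_est is close to the true value 0.5
```

With a learned critic (as in f-GAN or MINE) rather than the closed-form one, this becomes an M-estimation problem over samples, which is exactly where the IID requirement bites in the SBI setting.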
---
Rebuttal Comment 1.1:
Title: Thanks for responding
Comment: Thanks for responding to my question, and that clarified my misunderstandings.
I will raise my score from 5 to 6 and vote to accept this paper. | Summary: This work generalizes the well known Simulation-Based Calibration method to a setting where the tests on the posterior approximation consistency are based on classifiers. The paper has a nice balance between theoretical discussion (Section 3) and more practical issues (Section 4) making it easy for readers to appreciate different aspects of their proposal.
The paper is solid and tells a nice story from start to finish, but I would be tempted to point out the limitation of assessing only the quality of the posterior approximation **in average**. The authors mention this downside but don't give any insight regarding in which situations this might be a problem and when it could be OK to abandon local information.
Furthermore, it comes somewhat as a surprise that the authors use a **discriminative** approach for assessing the quality of the posterior approximation but make absolutely no mention of the well-known classifier two-sample test (C2ST).
In all, I'm rather satisfied by the paper as a contribution to the SBI literature, but I'm not certain how wide its scope is for the NeurIPS conference. I'm giving it, therefore, a score of 6 (borderline accept), and would not be surprised if in the end it was not accepted to the conference main track.
Strengths: - The authors managed to recast the simulation-based calibration (SBC) method in a discriminative framework. This is very nice as it helps in better understanding the method and seeing how it compares to other approaches.
- The three theorems presented in Section 3 are of great practical utility and ensure that even though we only estimate a classifier from a fixed-size dataset, we can be sure that its performance will serve as a lower bound on other theoretical quantities. In other words, having to work within the "real life" setting of training a classifier from limited data is not a problem.
Weaknesses: - The mathematical notation can sometimes be a bit confusing. For instance, it is not always easy to discern what is a scalar input variable to a function from what is a random variable sampled from a pdf.
- There's no comment on assessing the quality of the classifier trained on calibration data and then used to build the statistical tests.
- It could have been of interest to consider more examples on simulated data to illustrate the procedure. For instance, using examples from the `sbi-benchmark` which are becoming more and more used in the simulation-based inference literature.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why is your Figure 2 **after** the set of Figures 3-6? This is very disturbing...
- How useful is it to have a diagnostic for $q(\theta \mid y)$ which is only valid on average over $y$? Can you tell a bit more about this?
- The class imbalance in your Examples 1-4 will probably make your classifier have a lot of trouble discriminating one class from the other. Indeed, it will become more of an "anomaly detection" kind of thing than an actual classification. You don't make many comments on this, but I think it would be very important to explain whether this is a difficulty or not.
- Can you add a reference to your claim in Line 82?
- Line 188: What do you mean with "classifier $c$ is good enough" ?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I've mentioned the limitations in the previous fields.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your careful reading and constructive comments. We have now addressed your concerns to improve and clarify the manuscript. Please find below a detailed point-by-point response to all of your comments and questions.
**Figure 2 index**
Sorry for this confusion. We will reindex.
**How useful is it to have a diagnostic which is only valid on average over $y$?**
Thank you for raising this important aspect! Simulation-based diagnostics are averages. Our method is consistent with the rest of the simulation-based calibration (SBC) literature (e.g., the seminal papers of Cook et al. 2006 and Talts et al. 2018) in computing a measure that is an average over y. (This is often called a “global” diagnostic in the SBC literature.) SBC was originally designed to validate whether *computer software* accurately draws samples. For this application, the “in-average” assessment is a feature, not a bug, especially for amortized inference. This is our main motivation for using such a measure.
That said, the “local” diagnostic can be a useful goal as well: when the inference is only intended to run once, and the user is only interested in the inference given one $y$. Our method is still relevant because KL$(p(\theta|y), q(\theta|y))=0$ if and only if $p(\theta|y) = q(\theta|y)$ almost everywhere (Theorem 1). With modern inference routines such as MCMC or neural posteriors, we do expect that the “global divergence” could often achieve zero, which guarantees a zero local divergence with probability 1.
Lastly, our classifier could be adapted to “local” calibration (last paragraph on page 9): it is enough to look at the classification performance in a small neighborhood of the observed data $y_{obs}$ in the simulation table. This underscores the flexibility of our framework, and we leave it for future work.
**The class imbalance in your Examples 1-4 will probably make your classifier have a lot of trouble discriminating one class from the other.**
We apologize for this confusion: our newly developed multiclass label mapping and multiclass classifiers are always *balanced* (Example 4, where we create M+1 labels from M+1 classes from each simulation run). Hence increasing the number of inference draws (M) is not a difficulty for us. We have developed asymptotic theory and empirical validation in this large-M limit.
We refer to our **Shared Response #3** for a detailed discussion of the balanced multiclass classifier.
**Can you add a reference to your claim in Line 82?**
Line 82 states that, when the inference is exact ($p(\theta|y) = q(\theta|y)$ almost everywhere) and we train a binary classifier to distinguish $(\theta, y)$ from $(\tilde \theta, y)$, the expected test log predictive density of a binary classifier with 1:M imbalanced labels can be no higher than the negative binary entropy $h(w) := w \log w + (1 - w) \log(1- w)$ of a Bernoulli distribution with $w := 1/(M + 1)$. Intuitively, under the null, the best classifier (in terms of the expected log predictive density) is a Bernoulli($w$) distribution. This statement is a direct consequence of our Theorem 1, Equation 8. A more detailed derivation can be found in the supplement, starting at line 589. We will clarify this point in the text.
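To make the bound concrete, here is a small numeric check (the value of M is illustrative) that the constant Bernoulli($w$) prediction attains exactly $h(w)$ when scored over one class-1 example and M class-0 examples per simulation run:

```python
import math

M = 1000                      # illustrative number of inference draws
w = 1.0 / (M + 1)
# negative binary entropy h(w) = w log w + (1 - w) log(1 - w)
h = w * math.log(w) + (1 - w) * math.log(1 - w)

# A constant classifier predicting class 1 with probability w, scored on
# 1 example with label 1 and M examples with label 0, attains h(w) exactly:
elpd = (math.log(w) + M * math.log(1 - w)) / (M + 1)
assert abs(elpd - h) < 1e-12
```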
**Line 188: What do you mean with "classifier is good enough"?**
This statement was unclear and we apologize. We will revise Line 188. What we meant was that (a) for any classifier, the divergence estimate from test data is always a lower bound of the divergence and (b) when the classifier is “good”, such classification performance is further a “good” estimate of the divergence. Here a classifier being “good” means it achieves a low test data error or equivalently a high expected log predictive density.
**It comes as a surprise that the authors make absolutely no mention of the well-known classifier two-sample test (C2ST).**
It was an oversight that we did not reference the phrase “classifier two-sample test” in the related literature, for which we apologize. We will add citations to the general C2ST framework in the revision. We stress that our paper is not simply applying C2ST to SBI. We generalize C2ST by incorporating a label mapping, which allows autocorrelation and multiclass classification in the two-sample test. We would like to refer to our **Shared Response #2** for a detailed discussion of our method in relation to the original C2ST.
**There's no comment on assessing the quality of the classifier trained on calibration data and then used to build the statistical tests.**
At a high level, the techniques used to try to make the classifier generalize well are the same as any classification problem where the goal is to generalize to test data. Our Section 4.1 gives practical recommendations on network training and feature engineering.
Besides, even when the learned classifier is not optimal (as would be common in practice) the divergence we estimate is always a valid lower bound and hypothesis testing is valid. Indeed, for any classifier, the estimated divergence is always a valid lower bound of the actual divergence (Theorem 1), and the proposed hypothesis test is always valid (Theorem 4). A good classifier only helps increase the power and reduce false negatives (false positives are always controlled). We do observe in finite sample examples that the power from our test is higher than others.
**It could have been of interest to consider more examples of simulated data to illustrate the procedure.**
Thank you for your suggestions. To showcase our method, we have added three simulated-data examples from the `sbi-benchmark` repo: the Gaussian linear model, the Gaussian mixture, and the simple likelihood complex posterior (SLCP) example. We run our calibration method on these three datasets with varying inference settings, and it picks up positive divergence estimates when the inference is not exact. Please see our **Shared Response #1** and the attached pdf file.
---
Rebuttal 2:
Title: Raising my score
Comment: I thank the authors for their very clear answers and for adding new results from the `sbi-benchmark`.
The points which were confusing to me are no longer obscure, and I will be raising my score from Weak Accept (6) to Accept (7).
Best regards. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their careful review and insightful comments. In addition to the point-by-point response we make to each reviewer individually, below we will give three shared responses, including additional experiments (in the submitted .pdf file).
### 1. Additional experiments
As suggested by reviewers, to showcase our method on a wider range of examples, we now add three simulation data examples from the `sbi-benchmark` repo:
1. the simple likelihood complex posterior (SLCP) example,
2. the Gaussian linear model,
3. the Gaussian mixture model.
The attached pdf file summarizes the experiments. Here we run our calibration method on these three datasets. For each dataset, we sample prior draws from the default prior and run the adaptive No-U-Turn Sampler (NUTS). We want to check the quality of the sampler after a fixed number of iterations. To this end, we run the sampler with various numbers of iterations from 2 to 2000 (for each point, we use the same number of iterations for warm-up and for sampling, and the warm-up samples are thrown away; so 1000 on the $x$-axis means 2000 total MCMC iterations were run and the last 1000 were kept as the inference output), and for each number of iterations, we run our classifier calibration. The $y$-axis is the estimated divergence at the given number of MCMC iterations, and we visualize plus and minus one standard error from our method. A positive divergence indicates a mismatch between the true posterior $p(\theta|y)$ and the inference $q(\theta|y)$, while a divergence of zero means that the posterior inference is exact, i.e., $q(\theta|y)= p(\theta|y)$ almost everywhere.
From the figures attached in the pdf file, in all three examples we are able to detect the inference flaws and return a positive divergence estimate after a few iterations. The estimated divergence also converges to zero at larger iteration counts, where we expect the inference to be nearly exact.
### 2. [C2ST] Our paper is not simply applying the classifier two-sample test (C2ST) to SBI calibration. We generalize the classifier two-sample test.
Indeed, using a classifier to perform a two-sample test is not a new idea. We did not intend to imply this but we acknowledge that the paper as submitted does not make this as clear as it should. That said, the classifier two-sample test is not directly applicable to simulation based calibration (SBC) due to four barriers:
1. In past SBC work, it was not clear what space to run the classifier on (i.e., whether it should include the parameters $\theta$, the data $y$, or the joint, or include likelihood values), nor how to interpret the results as a divergence between the true and inferred posteriors.
2. More importantly, the classifier two-sample test only works with IID examples (there are two distributions P and Q, and we have IID observations from each). The SBI joint draws are not IID because the simulation table contains a shared $y$. Consider our binary classification scheme: the example from class 0 is $(\theta, y)$, and the examples from class 1 are $(\tilde \theta_1, y), \dots, (\tilde \theta_M, y)$; they all share the same $y$ component.
3. When the inference is done via MCMC, there is additional autocorrelation in the $\tilde \theta$ draws, which further violates the C2ST requirement.
4. For two-sample tests, C2ST only creates binary labels and may perform poorly under imbalanced classification. Using naive binary classification in SBI faces a 1:M imbalance.
Our paper solves these issues by developing a general label mapping framework:
- For (1), our paper formulates the SBC problem as a sample-based joint-space discriminative task for the first time. We develop the relevant theory for SBI calibration and prove its relevance to posterior divergence. We also make use of additional information, such as log likelihoods, in a unified framework.
- For (2), our Theorems 1 and 9 extend the traditional IID two-sample tests and allow sequential sampling.
- For (3) and (4), we extend the straightforward binary classification to a general framework, the “label mapping” (line 132). The traditional binary C2ST is now a special case of our framework. In contrast, the multiclass classifier (Example 4 in Section 2) is always balanced and allows the examination of autocorrelated samples (Equation 12).
### 3. [balanced classifier] The naive binary classifier can suffer from imbalanced labels, but our novel multiclass classifier always has perfect label balances.
Our newly developed multiclass classifier (Example 4 in Section 2) is always perfectly *balanced*: each simulation run creates (M+1) examples from (M+1) classes, one from each. See the illustration table on page 3. Our Theorem 3 proves that the multiclass classification divergence converges to the meaningful KL divergence between p and q, and derives the convergence rate, in the limit $M \to \infty$, the regime in which naive binary classification becomes infinitely imbalanced.
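A minimal sketch of one way to realize such a balanced label mapping (illustrative only; the exact feature map is specified in the paper's Example 4). Here label $m$ marks the position at which the prior draw is inserted among the inference draws, and the shared $y$ is appended to every example:

```python
import numpy as np

def label_mapped_examples(theta, theta_tilde, y):
    """One simulation run -> M+1 balanced examples.
    Sketch of a permutation-style label mapping: label m marks the
    position at which the prior draw theta is inserted among the
    M inference draws theta_tilde; y is shared by all examples."""
    M = len(theta_tilde)
    examples = []
    for m in range(M + 1):
        seq = np.insert(theta_tilde, m, theta)   # theta at position m
        examples.append((np.append(seq, y), m))
    return examples

rng = np.random.default_rng(0)
ex = label_mapped_examples(rng.normal(), rng.normal(size=4), rng.normal(size=2))
assert [lab for _, lab in ex] == [0, 1, 2, 3, 4]   # one example per class
```

Each run contributes exactly one example per class, so the training set stays perfectly balanced no matter how large M grows.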
That being said, the binary classifier might be more intuitive to the users than the multiclass one, and we still support the use of binary classifiers in our calibration. In our experiment, we find that even under highly imbalanced binary labeling (M=1000), with appropriate sample reweighting (Theorem 8 in Appendix), the binary classifier is still able to detect the inference flaws and output accurate divergence estimates.
Pdf: /pdf/f7fd868a5026486322c733ab0024e30e33256d01.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Characterizing the Optimal $0-1$ Loss for Multi-class Classification with a Test-time Attacker | Accept (spotlight) | Summary: The paper generalizes lower bounds on the adversarially robust error on a finite dataset from binary to multi-class classification.
Strengths: 1. The paper is well presented and easy to read, which is no mean feat for the amount of theory that is introduced and developed.
2. The formalization and assumptions are clearly stated, well arranged, and do not contain unnecessary complications.
3. The developed theory is correct as far as I can tell.
4. The experimental evaluations make sense for the discussed topic, cover standard baselines, and are easy to understand. They are shown even though they don't support the necessity of the multi-class theory, which is nice to see.
5. The fact that calculating the bounds based on binary l2-ball intersections makes no difference in practice doesn't mean that the theory is not helpful for estimations that are in principle more correct.
6. Limitations are discussed honestly.
7. The code looks nice, but I haven't tested it.
Weaknesses: 1. While there are no clear weaknesses, there are some points that are unclear to me, which I list below in the Questions section and would much appreciate seeing answered and, in some cases, discussed in the paper.
2. A substantial part of the formalization, theory and experimental design are not completely newly developed, but carried over from the previous paper "Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries" which deals with the same problem in the more special case of binary classification. This is why I'm rating this submission currently as a standard "accept".
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. While Equation (1) is true, I think it is not trivial, but a theorem that depends on some assumptions (which appear to be fulfilled by the considered hypothesis classes). It was proven in (Pydi,Jog 2022: "The Many Faces of Adversarial Risk")[https://arxiv.org/abs/2201.08956], but there might be earlier, more well known versions of the theorem. I think the intuition why (1) is true should be given.
2. l. 89 I think it should better read "the vector of _robustly_ correct classification"
3. l. 97 Should "feasible" read "achievable" here to stay consistent?
4. l. 97 Do the nonnegative linear combinations need to have factors that sum to less than 1?
5. Is L without * correct in Eq. (2)? If so, it wasn't defined.
6. l. 122 missing word
7. l. 124 extra (
8. Is the construction of the full hypergraph indeed computationally expensive for an l2 threat model? Intuitively, the geometry might make the incidence matrix very sparse and its computation quite straightforward. Or is it rather an issue with the LP solver?
9. Would it make sense to regard the reverse truncation where, if we find a triple of pairs with overlap (as in Fig. 1 left), we assume that there is also a point that generates the degree-3 hyperedge (and so on for higher degrees)? This might yield an upper bound on $L^*(K)$.
10. l. 296 whether and in which sense the hypothesis class is much smaller is not obvious, and it is not clear if insufficient fitting within the class might be responsible for a large part of the gap.
11. In Figure 3, an empirical model evaluation as in Figure 2 should be included. ($L_{CW}$ is a bit confusing at first look, since many papers use that for Carlini-Wagner l2 attacks.)
12. Why not use the full AutoAttack?
13. The statistics on number of hyperedges should be included for all datasets, and a few values of epsilon (including 2 and 2.5), since they quantify the importance of regarding multi-class overlaps.
14. Also statistics like the average distance of an image to the closest one from another class would be nice to understand the geometry of the neighborhood overlaps.
15. It would be great if the authors could find a type of dataset and threat model where multi-class neighborhood overlap plays a bigger role than with MNIST and CIFAR. Maybe even a toy example would be illustrative.
16. Since the paper talks about optimal errors for finite distributions, it would make sense to show evaluations both on training and test sets.
17. A more concrete comparison to Trillos et al.[21], if applicable with evaluation numbers, would be helpful.
18. A discussion of the limitation to distributions with finite support and whether one can expect this assumption to be softened in future works building on this one would be interesting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are discussed in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. We are encouraged that you find our presentation clear and experiments interesting. We address your questions below:
1) This is a good observation. The technical issue with equation (1) is that for a particular $h$, the function $(x,y) \mapsto \sup_{\tilde{x} \in N(x)} \ell(h, (\tilde{x},y))$ may not be measurable, so the expectation may not be defined. In this case, our hypothesis class does not help us, but our assumption that the data distribution has finite support ensures that the adversarial risk is well defined.
One reason that we work only with finite-support probability distributions is to side-step these technical issues while still handling what we believe to be an interesting example of the problem. Effectively we are placing the power-set sigma algebra on our space $\mathcal{X}$: every set and function becomes measurable at the cost of reducing the number of measures available. This lets us avoid technical assumptions about the space $\mathcal{X}$ and the neighborhoods $N(x)$.
Pydi and Jog provide some conditions to ensure that adversarial risk is well-defined, one of which is taking $\mathcal{X} = \mathbb{R}^d$ and working with Lebesgue measurable functions. While we stated the theory portions of the paper in a more abstract setting, all of our experiments fit into that case.
We will move our assumption that the data distribution is discrete from Section 2.2 up to Section 2.1 to justify equation (1). We will also add a citation of Pydi and Jog (2022) with a comment about the technical complexities that can arise in a more general setting.
2, 3, 5, 6, 7) We will update the paper to fix these typos.
4) When we optimize over the correct classification polytope, the weights are the example probabilities so they do sum to one. However, even if the weights did not sum to one, extending the region to include the origin would not affect the result of the optimization.
8, 13) Sparsity helps since we do not need to search all $\binom{n}{3}$ triples of vertices for 3-way hyperedges, but even with sparsity, it is still expensive to find all hyperedges due to the many triangles within the graph. Solving the LP is also expensive at large $\epsilon$ due to the large number of constraints (we are generally bottlenecked by the LP solver's inefficiency before being unable to compute hyperedges). In Appendix Figure 5, we plot the number of edges, degree-3 hyperedges, and degree-4 hyperedges across $\epsilon$ for MNIST and CIFAR-10. We find that the number of edges/hyperedges grows exponentially, with the number of hyperedges increasing at a faster rate than the number of edges.
9) This is a good suggestion and would lead to another upper bound on the optimal loss ($L^*(K)$). The polytope that results from this process is the fractional independent set polytope of the conflict graph, which has a constraint for each clique. Optimizing over this polytope gives the fractional independence number of the conflict graph. In general, this polytope cannot always be computed quickly: there could be $\Omega(n^K)$ maximal cliques.
However, we could probably compute the fractional independence number in many of the cases we experiment with, and are willing to add it to the updated version if the reviewer thinks it would be interesting.
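To illustrate the type of computation involved, here is a minimal sketch, assuming the truncated LP takes the form of maximizing $p^\top b$ subject to $Bb \le \mathbf{1}$, $b \ge 0$ over a hyperedge incidence matrix $B$. The toy matrix below is the degree-2 truncation of a 3-vertex conflict graph and is illustrative, not taken from the paper's experiments:

```python
import numpy as np
from scipy.optimize import linprog

# rows: pairwise edges (u,w), (u,v), (v,w) and singletons u, v, w
B = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
p = np.full(3, 1 / 3)                    # uniform example probabilities

# linprog minimizes, so negate to maximize p @ b subject to B b <= 1, b >= 0
res = linprog(-p, A_ub=B, b_ub=np.ones(6), bounds=(0, None), method="highs")
best_correct = -res.fun                  # optimal correct-classification mass
loss_lower_bound = 1 - best_correct      # 0.5 here: each pair sums to at most 1
```

The fractional-independence variant suggested by the reviewer would simply swap in one constraint row per clique of the conflict graph.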
10) Thank you for pointing this out, the gap may be due to optimization or due to the hypothesis class. We will update the wording in this section to reflect this.
11) Thank you for the suggestion, we have updated Figure 3 to also include a line for PGD-AT performance as in Figure 2. Please see our updated plot in the rebuttal pdf (Figure 1). We have also shaded the region between the $L_{CW}$ and $L^*(2)$ lines to indicate the space where the true value of the optimal loss lies and make it more clear that $L_{CW}$ is an upper bound on optimal loss.
12) The second attack of the AA suite (APGD-T) uses the targeted DLR loss, which assumes at least 4 classes and cannot be used for the 3-class experiments. Additionally, we find that there is little change in robust accuracy when including all attacks, so we choose to use APGD-CE in our experiments to reduce computation time.
14) Thank you for the suggestion. We provide these statistics for each class in MNIST and CIFAR-10 in Table 1 of the rebuttal pdf. We will add this into the Appendix.
15) Currently, it is unclear for what data distribution and threat models the multi-class overlap would play a bigger role. In the Appendix, we present results for Gaussian data, but we observe similar trends as with MNIST and CIFAR-10 in this case.
16) Thank you for the suggestion, we provide some comparisons between evaluations on the training set and evaluations on the test set for MNIST classes 1, 4, and 7 in Table 3 of the rebuttal pdf. We observe that the optimal loss computed on the test set is close to the optimal loss computed on a sample of the training set of the same size as the test set. We will add this table to the Appendix of our paper.
17) The code used by Trillos et al. is unavailable and details about experimental setup are missing so it is difficult to compare. Looking at the plot in Figure 6 of their paper, it seems they compute a loss lower bound of ~0.05 for MNIST classes 1, 4, 6, and 9 at $\ell_2$ budget of 800/255. Computing $L^*(2)$ across the entire MNIST training set in this setting, we obtain $L^*(2)=0.21$. We believe that Trillos et al. likely provide results on a subset of the dataset causing this discrepancy.
18) Thank you for this suggestion, we will add a discussion of this into the paper. We note that the optimal transport formulation proposed by Trillos et al. allows for general distributions, and focuses on transforming the problem to one where known methods exist to enable efficient bound computation. While our results are restricted to finite support, we are focused more on computing bounds on natural datasets efficiently by transforming the conflict graph directly.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks a lot for the detailed and insightful response!
The additionally provided explanations and numbers are very helpful. I believe this is a strong contribution and am raising my score from 7 to 8.
---
> 9.
I personally would find upper bounds on the optimal lower bound very interesting, as they also enable judging the tightness of the lower bounds. I'd leave it to your judgement of the interestingness of these upper bounds and the actual numbers that can be calculated.
> 16. (/15. evaluations both on training and test sets.)
What I originally meant here was the question of how close the AT model comes to the lower bounds on the training set. I think it would be interesting how much of a gap in training error could still be optimized away (if we consider the lower bounds to be close to the maximal lower bound) by better (or more overfitting) AT schemes, train attacks and models. | Summary: Deep learning techniques achieve state-of-the-art performance on various classification tasks, but alarmingly, they are highly susceptible to adversarial perturbations. It is currently unknown whether there even exist classifiers that achieve low adversarial training risk on standard datasets. This paper aims to close this gap. The authors restate the adversarial learning problem over all classifiers as a linear program which can then be solved using standard LP techniques. This LP is stated in terms of the hyperedges of a hypergraph. As the resulting LP is computationally intractable, the authors then propose several ways to truncate this LP. The paper concludes with an experimental section in which they use the paper's techniques to upper bound the minimal possible adversarial loss for MNIST and CIFAR-10.
Strengths: - It is currently unknown if finding robust classifiers to real-world datasets is possible. This paper presents a strong argument that such classifiers exist
- Using linear programming to attack this problem is very creative!
Weaknesses: - I have some concerns about the correctness of this paper. Here are some specific issues
1. $q$ in line 20 of the supplementary material is not defined. This makes it quite hard to evaluate the correctness of the proof of Lemmas 1 and 2
2. I suspect Lemma 1 is false. Lower bounds on the adversarial risk computed in this paper rely on this lemma, so if Lemma 1 is false, these bounds would be invalidated.
Consider the following example: Consider 3 vertices with the incidence hypergraph of the left picture in Figure 1. Specifically, $\mathcal N(u)= \{u,v\},\mathcal N(v)=\{v,w\}, \mathcal N(w)=\{w,v\}$. Then the incidence matrix $B$ is
| |$u$|$v$|$w$|
|------------|-----|----|----|
|$e_{wu}$| 1 | 0 |1 |
|$e_{uv}$|1 | 1 |0|
|$e_{vw}$| 0 | 1 |1 |
|$e_{u}$ |1 | 0 |0 |
|$e_{v}$ |0 |1 |0 |
|$e_{w}$ |0 |0 |1|
Consider the vector $b= [0.5, 0.3, 0.2]$ (with vertices in order $u$, $v$, $w$). This vector clearly satisfies $b\geq 0$ and $Bb \leq \mathbf 1$.
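For concreteness, the feasibility claim is easy to check numerically (a throwaway sketch, with the three singleton rows written as the identity):

```python
import numpy as np

# Incidence matrix from the table above, columns ordered (u, v, w); the three
# singleton rows e_u, e_v, e_w are written as the identity.
B = np.array([
    [1, 0, 1],  # e_wu
    [1, 1, 0],  # e_uv
    [0, 1, 1],  # e_vw
    [1, 0, 0],  # e_u
    [0, 1, 0],  # e_v
    [0, 0, 1],  # e_w
])
b = np.array([0.5, 0.3, 0.2])  # candidate vector over (u, v, w)

assert (b >= 0).all() and (B @ b <= 1).all()  # feasible for the LP constraints
```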
We will now show that this vector is not in $\mathcal P_{\mathcal V, N,\mathcal H}$, as defined in line 96 of the paper. For contradiction, assume that there is an $h$ for which $q_N(h)$ satisfies $0\leq b_i\leq q_N(h)_i$ for $i\in \{u,v,w\}$. By definition, $q_N(h)_v=\inf$ { $h(x): x\in N(v)$}.
Thus $h(u)\geq .5$ and $h(v)\geq .5$. As $h$ is a probability vector, it follows that $h=(.5, .5, 0)$ and $q_N(h)=(.5,0,0)$. This is a contradiction as $h(v)\leq b(v)=.3$.
3. In definition 2, h is defined to be $\mathcal{Y}$-valued but the expression $1-h(\tilde x,c)_y$ assumes that this function is $\mathbb R$ valued. As this definition is central to section 3, the correctness of this section is hard to evaluate
- The paper does not introduce central mathematical concepts or introduces them poorly. For instance,
1. line 82: "architecture" is discussed before neural nets are introduced
2. what is 'downwards-closed' in line 110?
3. The phrase "correct probability vectors" in Lemma 1 is misleading because the q's typically don't sum to 1
4. What does 'fractional coverings' mean in line 119?
5. lines 241-246: an argument is presented involving the fractional vertex packing polytope and the independent set polytope but these are only introduced very briefly
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Can you explain the issues with Lemma 1 pointed out in the first bullet under weaknesses? Resolving this issue would convince me to change the review score
- Problems in optimal transport frequently require solving linear programs in $\mathbf q$ with the constraint $\mathbf q\geq 0$. For computational expediency, the Sinkhorn algorithm uses entropic regularization to deal with this constraint. Could you possibly make use of this technique?
- It seems that in the matrix inequality $Bq\leq 1$ in Lemma 1, many rows may be linearly dependent. Have you tried getting rid of such rows?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: One central limitation of this work is that it studies the minimal possible adversarial risk over all possible (soft) classifiers rather than a particular function class.
This limitation is discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed engagement with our work and aim to address their concerns below, particularly those regarding the correctness of the main Lemmas in the paper.
We are confident that Lemma 1 is correct. We show below that your suggested counterexample vector is achieved by an explicit classifier. We also walk through the constructions used in the proof for this example to provide some intuition for the general argument.
1. At line 20, $q$ is the vector that was introduced at line 15. It is an arbitrary point in $\mathcal{P}\_{m,V,N,\mathcal{H}_{soft}}$, the correct-classification-probability vector region. We will better explain the high level structure of the proof.
2. In your comment, you treated $h(u)$ and $h(v)$ as scalars, but they are in fact vectors in $\mathbb{R}^3$ and probability mass functions over $\mathcal{Y}$.
We have $q\_N(h)\_{(x,y)} = \inf \{ h(\tilde{x})\_y : \tilde{x} \in N(x) \}$ rather than $q\_N(h)\_{(x,y)} = \inf \{ h(x) : \tilde{x} \in N(x) \}$.
This confusion may be related to the typo discussed in point 3.
For the left example in Figure 1, $\mathcal{P}\_{V,N,\mathcal{H}\_{soft}}$ contains the vector $b = (0.5,0.3,0.2)^T$: it is $q\_N(h)$ for the constant classifier $h(x) = b$.
Applying the definitions, we have $q\_N(h)\_{(x,y)} = \inf \{ h(\tilde{x})\_y : \tilde{x} \in N(x) \} = \inf \{b\_y\} = b\_y$.
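To make this concrete, here is a small numeric sketch of the computation (the neighborhoods and per-point labels below are the ones from your example; the code itself is only illustrative):

```python
import numpy as np

# Neighborhoods from the example; each point x is labeled with its own class.
N = {'u': ['u', 'v'], 'v': ['v', 'w'], 'w': ['w', 'v']}
label = {'u': 0, 'v': 1, 'w': 2}
b = np.array([0.5, 0.3, 0.2])

h = lambda x_tilde: b  # constant soft classifier: the same pmf everywhere

# q_N(h)_{(x,y)} = inf { h(x_tilde)_y : x_tilde in N(x) }
q = np.array([min(h(xt)[label[x]] for xt in N[x]) for x in ['u', 'v', 'w']])
assert np.allclose(q, b)  # the vector (0.5, 0.3, 0.2) is achieved
```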
We can work out the whole structure of $\mathcal{P}\_{V,N,\mathcal{H}\_{soft}}$ by following the arguments in the proof of Lemma 2.
Lemma 1 states that $\mathcal{P}\_{V,N,\mathcal{H}\_{soft}}$ contains the vectors $(q\_u,q\_v,q\_w)$ that satisfy the inequalities $q\_u \geq 0$, $q\_v \geq 0$, $q\_w \geq 0$, $q\_u + q\_v \leq 1$, $q\_u + q\_w \leq 1$, and $q\_v + q\_w \leq 1$.
The latter three inequalities come from the three edges in the conflict graph.
We will demonstrate the derivation of one of these as an example.
Because the edge $\{u,v\}$ is present, there is some $\tilde{x} \in N(u) \cap N(v)$.
Let $u$ be from class $0$ and $v$ be from class $1$.
Thus we get inequalities $q\_N(h)\_u \leq h(\tilde{x})\_0$, $q\_N(h)\_v \leq h(\tilde{x})\_1$, and $h(\tilde{x})\_0 + h(\tilde{x})\_1 \leq 1$, which imply $q\_N(h)\_u + q\_N(h)\_v \leq 1$.
The polytope described above has extreme points $(0,0,0)^T$, $(1,0,0)^T$, $(0,1,0)^T$, $(0,0,1)^T$, and $(1/2,1/2,1/2)^T$.
The middle three points are the correct-classification-probability vectors of the three constant hard classifiers.
The latter point is the correct-classification-probability vector of the classifier that assigns probability $\frac{1}{2}$ to each of whichever two classes could have produced $\tilde{x}$.
This is described at lines 150-152 of the paper.
This is not a constant classifier, which allows it to have better performance.
We can achieve intermediate correct-classification-probability vectors (or better) by averaging the outputs of soft classifiers achieving the extreme points.
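These facts are easy to verify numerically; a small sketch (illustrative only) checking that each listed extreme point, and any convex combination of them, satisfies the inequalities:

```python
import numpy as np

# Edge inequalities q_u + q_v <= 1, q_u + q_w <= 1, q_v + q_w <= 1 (rows of A)
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
extreme = np.array([
    [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0.5, 0.5, 0.5],
])

for q in extreme:
    assert (q >= 0).all() and (A @ q <= 1).all()

# Averaging the classifiers that attain the extreme points attains at least the
# corresponding average of their vectors, so convex combinations stay feasible.
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(len(extreme)))  # random convex weights
q_mix = w @ extreme
assert (q_mix >= 0).all() and (A @ q_mix <= 1 + 1e-12).all()
```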
We will attempt to make space to add the explicit descriptions of $\mathcal{P}\_{V,N,\mathcal{H}}$ to Section 2.3.
3. We have a typo at line 174. It should read $h: \mathcal{X} \times \binom{\mathcal{Y}}{m} \to [0,1]^{\mathcal{Y}}$, paralleling the definition at line 71. Thus $1 - h(\tilde{x}, c)\_y$ is a real number. We are sorry for the confusion that this caused.
More:
1. We mean a function class in which each function is specified by a finite number of parameters. This encompasses neural networks but is more general. We will expand this discussion to improve the clarity.
2. The hyperedge set being downward closed means that if $e \in \mathcal{E}$ and $e' \subseteq e$, then $e' \in \mathcal{E}$. This follows from $\cap_{(x,y) \in e} N(x) \subseteq \cap_{(x,y) \in e'} N(x)$, i.e. the same witness of the hyperedge $e$ also witnesses $e'$.
3. The phrase should be read as correct-classification-probability vectors, because an entry of the vector is a correct classification probability. These are conditional probabilities (the entry $q_{N}(h)_v$ is the probability that the randomized classifier with distribution $h$ is correct given that the natural example is $v$), so as you point out, they do not sum to one in general. We can use this hyphenation.
4. A fractional covering is a standard concept from graph and hypergraph theory: it is a nonnegative weighting $z$ of the hyperedges such that each vertex is covered by at least its weight, i.e. the sum of the weights of the hyperedges containing $v$ is at least the weight of $v$: $B^Tz \geq p$. We will add a textbook reference.
5. The discussion is brief due to space constraints. We will add a citation to make it clear that these are standard facts in combinatorial optimization and not new claims.
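To make the fractional-covering notion in point 4 concrete, here is a small LP sketch on a toy triangle graph (the edge set and uniform vertex weights are chosen for illustration and are not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Incidence rows: pairwise edges e_uv, e_uw, e_vw, then singletons e_u, e_v, e_w.
B = np.array([
    [1, 1, 0], [1, 0, 1], [0, 1, 1],
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
])
p = np.array([1/3, 1/3, 1/3])  # vertex weights (uniform, for illustration)

# Minimum fractional covering: minimize 1.z subject to B^T z >= p, z >= 0.
res = linprog(c=np.ones(len(B)), A_ub=-B.T, b_ub=-p, bounds=(0, None))
assert res.success
print(res.fun)  # optimal value 1/2: put weight 1/6 on each pairwise edge
```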
Questions:
* This point about entropic regularization connects to several other papers as well as directions for further research.
If the inequality $q \geq 0$ is replaced with a cross-entropy regularization term $\sum_i p_i \log \frac{1}{q_i}$, which is infinite at the boundaries $q_i = 0$, the resulting problem is related to the optimal adversarial cross-entropy loss.
In Bhagoji et al. 2021, this is investigated for the two-class setting.
They compute the optimal cross-entropy loss by solving a sequence of 0-1 loss optimization problems.
The dual problem of optimizing over adversarial strategies is more connected to optimal transport.
Trillos et al. have multiple characterizations of the optimal loss in the multiclass setting in terms of various optimal transport problems and use established entropy-regularized solution methods to compute optimal losses.
There are more possible ways to apply entropy regularization and we think that this is a fruitful direction for further investigation.
* We do eliminate redundant constraints when possible, but it is not as simple as checking for linear dependence.
Because $Bq \leq 1$ is an inequality, a row of $B$ can only be eliminated when it is a nonnegative linear combination of other rows and rows of $-I$ (which comes from the other constraint $-Iq \leq 0$).
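To illustrate the elimination check being described (a sketch; `row_is_redundant` is an illustrative helper, not our actual code): a row $r$ of $B$ is redundant exactly when maximizing $r \cdot q$ over the remaining constraints cannot exceed 1, which by LP duality matches the nonnegative-combination condition above.

```python
import numpy as np
from scipy.optimize import linprog

def row_is_redundant(B, i):
    """Row i of B is redundant for {q : Bq <= 1, q >= 0} iff the maximum of
    B[i].q over the remaining constraints cannot exceed 1."""
    rest = np.delete(B, i, axis=0)
    res = linprog(c=-B[i], A_ub=rest, b_ub=np.ones(len(rest)), bounds=(0, None))
    return res.success and -res.fun <= 1 + 1e-9

# Toy system: the singleton row e_u is implied by q_u + q_v <= 1 together with
# q_v >= 0, i.e. a nonnegative combination that uses a row of -I.
B = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 0, 0]])
assert row_is_redundant(B, 3)
assert not row_is_redundant(B, 0)
```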
---
Rebuttal Comment 1.1:
Comment: 1. I'm convinced by your explanation, I misunderstood the role of $h$. I am updating my score.
Consider including a proof outline of Lemma 1 in the main text of the paper, to further explain why this lemma would be true.
Overall, I had a hard time with the exposition and organization of the technical portions of this paper. From the text, it is difficult to understand why the mathematical claims are true (and sometimes also what they mean).
2. Your responses to these two questions helped me understand your approach. Consider including these discussions somewhere in your paper
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick response. We are happy that you are convinced about the correctness of Lemma 1. We also appreciate your suggestions on adding a proof outline of Lemma 1 in the main text and adding discussions of the 2 questions to the paper. Currently, due to space constraints we are unable to add a proof outline, but we plan on incorporating the discussion of the 2 questions into the Appendix of the camera-ready paper.
On your comment about some mathematical claims being unclear, could you please elaborate on which claims you are referring to? We are happy to clarify these points during the discussion period. | Summary: This work proposes to theoretically evaluate the robustness of a multi-class classifier by establishing lower and upper bounds on the optimal loss, i.e., the lowest loss achievable for a given hypothesis family. The lower bound is established by extending the conflict graph-based framework previously applied to the binary classification setting. The upper bound is built by generalizing the Caro-Wei bound. Beyond theoretical analysis, this work also focuses on computable methods to estimate the bounds, i.e. using the lower bounds of binary classification problems to compute those for the multi-class problem.
Strengths: It is an important contribution to set up an upper and lower bound for the lowest achievable classification loss under the testing-time perturbation. These two bounds help to narrow down the possible range of the classification loss facing input noise, which measures accurately the robustness of a classifier. Furthermore, this work provides the link between the optimal loss bound of binary classification tasks and that for multi-class tasks. This contribution enables efficient computation of the bound estimates when the number of classes increases.
Weaknesses: I have to admit that I am not familiar with this theory. It took me quite some time to read the context before I could figure out how this framework can be integrated into the investigated problem. I think for any readers / reviewers without the background information, offering even a brief introduction could be very helpful for evaluating the contribution.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How tight are the upper and lower bounds? Especially for the upper bound: this is established based on the Caro-Wei bound, which is different from the conflict hypergraph framework. It is not clear how accurate the upper bound could be.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive appraisal of our paper and comments to improve it further. We address their comments below:
**Further details to evaluate the contribution:**
Our contribution lies in both theoretical and experimental aspects of characterizing the optimal robust 0-1 loss for multi-class classification. This optimal loss is important to characterize so that progress in defenses can be measured. There is value in knowing how well the best possible classifier could perform in a given adversarial setting, so we know how far from optimal our current defenses are.
In terms of a roadmap of our approach, we first provide an expression for the optimal loss and use Lemma 1 to connect the problem of finding the optimal loss with a linear program defined with respect to a graph of conflicts between data points from different classes (Section 2). Having established this connection, we then develop computationally more efficient methods to solve the conflict graph problem in practice (Section 3). We will elaborate upon our approach further in the beginnings of Sections 2 and 3.
We extend the approaches taken in previous work [1,2,3] that characterize the optimal loss in the case of binary classification, to the multi-class setting. Our use of the conflict graph is inspired by [1,2] which first introduced this concept, and our major contribution lies in extending it to the notion of conflict hypergraphs for multi-class classification.
**Tightness of the bounds:**
We would like to clarify that the Caro-Wei upper bound uses the conflict graph to determine the size of the maximum independent set. However, the reviewer is correct to observe that this upper bound does differ from the form of the other bounds, which truncate the hypergraph to obtain a lower bound on the optimal loss. The tightness of the truncation based lower bounds as well as Caro-Wei upper bound depend directly on the structure of the conflict graph. In practice, we find that the bounds are tight when the perturbation budget epsilon is small. The gap between the bounds grows larger as we increase epsilon. An exact theoretical characterization of the gap is beyond the scope of this paper and we can add it to the limitations if the reviewer thinks that will add clarity.
[1] A. N. Bhagoji, D. Cullina, and P. Mittal. Lower bounds on adversarial robustness from optimal transport. In Advances in Neural Information Processing Systems, pages 7496–7508, 2019.
[2] A. N. Bhagoji, D. Cullina, V. Sehwag, and P. Mittal. Lower bounds on cross-entropy loss in the presence of test-time adversaries. In International Conference on Machine Learning, pp. 863-873. PMLR, 2021.
[3] M. S. Pydi and V. Jog. Adversarial risk via optimal transport and optimal couplings. In Proceedings of the 37th International Conference on Machine Learning, pages 7814–7823, 2020. | Summary: This paper aims to analyze the optimal 0/1 loss under the strongest test-time attack. The study commences by formulating the problem of obtaining the optimal classifier (based on 0/1 loss) as a linear program on a graph. Subsequently, the authors address the high computational complexity of calculating the optimal 0/1 loss by proposing a reduction technique through graph truncation. This reduction enables the computation of a lower bound of the 0/1 loss in a feasible timeframe. Ultimately, the authors present empirical evidence comparing their bound with the empirical defense method on real-world data. Notably, they discover that the widely-used baseline (adversarial training) still offers significant potential for improvement.
Strengths: - The paper is written clearly and easy to follow.
- This paper extends the analysis of the optimal classifier under test-time attack to the multi-class classification setting. It seems to be a decent extension/contribution to the theoretical side of the field of adversarial robustness.
- Usually for this kind of problem, the computational complexity is one of the main challenges. However, they are able to find a way to speed it up. The idea of reducing it to a graph and then truncating the complex edges to reduce the computation needed for getting the bounds is interesting. In addition, they also show that empirically, such relaxation does not lose much information.
Weaknesses: - Although it is mentioned in the related work that this is different from verifying robustness. I think it would be a valuable information to include bounds for verified classifiers in the empirical section. It would be interesting to see how close the existing verified classifiers are to the optimal bound.
- A related work titled "Robustness for non-parametric classification: A generic attack and defense," published in International Conference on Artificial Intelligence and Statistics, 2020, can be added. This work also utilizes the idea of creating a graph with the vertices being each example and edges being conflicting example pairs. They tried to approach the optimal 0/1 loss by removing the minimum number of edges. Although they did not compute specific bounds on the optimal 0/1 loss, I think it is still worth being discussed.
- This is a work with solid technical contribution. However, as mentioned in the limitation, the lack of implication on how to close the gap between the current robust classifiers and the optimal classifier limits the impact of this work to a moderate-to-high impact paper.
- Although a method for speeding up the algorithm through truncating the graph is proposed, the applicability of the proposed algorithm still seems to be limited in practice due to heavy computational cost (in the experiments, only three-class classification problems are run).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there any of my review comments that misunderstood the paper? If so, please point them out. I am happy to adjust accordingly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors properly addressed the limitations of this work, which include the lack of scalability of their algorithm and the lack of implications on how to close the gap between the current robust classifiers and the optimal classifier.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback and positive appraisal of our paper. We are glad they found it clear and easy to follow. We address their questions and concerns below:
**Comparison to bounds for verified classifiers:** Thank you for the interesting prompt. We checked the available leaderboard (https://sokcertifiedrobustness.github.io/leaderboard/) on verifiably robust models for the settings we are concerned with ($\ell_2$ robustness for the MNIST and CIFAR-10 datasets), and found:
- For MNIST, the best certifiably robust model has a 0-1 loss of 0.27 at a budget of 1.52 and 0.44 at a budget of 2.0.
- For CIFAR-10, the best certifiably robust model has a 0-1 loss of 0.6 at a budget of 1.0 and 0.8 at a budget of 2.0.
These are much higher than the optimal lower bound that is achievable for these datasets, which is 0 in all these cases. We will add these numbers to the updated paper, as they provide a valuable comparison to verifiably robust classifiers.
**Related work on ‘Robustness for non-parametric classification’:** Many thanks for the pointer to this very interesting paper. Having gone through the paper, we find it to be quite relevant to our method for finding lower bounds. In particular, the construction of the graph in Section 3.1 matches that of our conflict graph. In addition, we have used a technique similar to the ‘adversarial pruning defense’ proposed in the paper to attempt to close the gap to optimal in Section D.9. of the Supplementary Material for neural networks, although we found little to no impact in the multi-class setting. Our technique was inspired by a similar one used in Bhagoji et al. (2021), which did find improvements in the two-class setting. We will update the related work to reflect the connection to this paper.
**Potential measures to close the gap between optimal and current robust classifiers:** The main focus of our work is to provide a measure of progress for defenses by comparing them to the optimal loss. Regardless, we agree that it is interesting to consider measures to close the gap between optimal and current robust classifiers. In Section D.9. of the Appendix, we propose dropping hard data points to close this gap (inspired by Bhagoji et al. (2021)). However, we found limited to no improvement, pointing towards a need for a deeper exploration of the way in which the optimal loss construction can be used to close the gap. A few potential steps on the training side are increasing the architecture size and using additional unlabeled data. We could also potentially use the optimal classifier to overrule decisions in parts of the input space where trained neural networks are wrong, and the optimal classifier is fully specified. However, the input space coverage of the optimal classifier is low as it is only specified on points in the training data. Methods to improve this coverage would be an interesting direction for future work.
**Results in the 10-class setting:** We would like to clarify that the paper does contain experiments beyond the 3-class setting. Section 4.2 of the paper has results and discussion for the 10-class case (See Figure 3). Due to computational limitations, we use a truncated version of the hypergraph containing up to degree-4 hyperedges. The Caro-Wei upper bound is also reasonably tight until $\epsilon=3.0$, indicating that the use of higher-order hyperedges will not provide any additional information about the optimal loss. We provide an updated version of Figure 3 in the attached pdf that may be clearer.
---
Rebuttal Comment 1.1:
Title: Thank you for responding to my concerns and questions.
Comment: After reading the responses, I still hold my original opinion that this is a technical solid paper and it can bring moderate-to-high impact on the development of theories for adversarial robustness. Therefore, I would like to maintain my original score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful and constructive engagement with the paper. As reviewers ourselves, we greatly appreciate the reviewers’ efforts at providing thorough and insightful commentary on the paper. We have addressed all the reviewers’ concerns in the respective rebuttals, including providing Reviewer **yCc4** with a detailed explanation for why we strongly believe that Lemmas 1 and 2 are correct, along with a refutation of the constructed counter-example.
We are glad that multiple reviewers found our theoretical contributions important (**baps**, **Qpxv**), approach creative (**yCc4**), and experiments interesting with easy to understand baselines (**Q9vC**). We are also happy that multiple reviewers found the presentation to be clear and easy to follow (**baps**, **Q9vC**), appreciated our proposed techniques for making the problem more computationally efficient (**baps**, **Qpxv**), and appreciated our code and honest discussion of limitations (**Q9vC**).
In the attached pdf, we provide
1. An updated version of Figure 3 in the paper with losses from adversarial training (**Q9vC**) and shading to indicate the space between the upper bound ($L_{CW}$) and tightest lower bound where the optimal loss $L^*(10)$ would lie (**baps**, **Q9vC**)
2. A table of statistics for the average distance of examples to their nearest neighbor in another class for each class in MNIST and CIFAR-10 (**Q9vC**)
3. A table of optimal losses computed on the MNIST train set and MNIST test set (**Q9vC**)
We plan on incorporating both tables into the Appendix of our paper.
In light of suggestions for improvement suggested by the reviewers and in addition to the clarifications already provided in the rebuttals, we also commit to making the following changes to the camera-ready version of the paper:
1. Improvements to the clarity of the text:
- Add an overview of our approach in Sections 2 and 3 (**Qpxv**)
- Add citations for graph theory concepts such as fractional coverings, fractional vertex packing polytope, and the independent set polytope (**yCc4**)
- Fix typos pointed out by reviewers (**yCc4**, **Q9vC**)
2. Add discussion of the following papers:
- Yang et al. 2020: “Robustness for non-parametric classification” (**baps**)
- Pydi, Jog 2022: "The Many Faces of Adversarial Risk" (**Q9vC**)
3. A comparison to bounds for verified classifiers (**baps**)
4. Discussion of limitation to distributions with finite support (**Q9vC**)
Pdf: /pdf/ea94d988fcf9751d6c6e79831176e85b5859d7e3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DOSE: Diffusion Dropout with Adaptive Prior for Speech Enhancement | Accept (poster) | Summary: This paper describes a new method for providing noisy-signal conditioning information (y) to the diffusion steps of a diffusion-based speech enhancement algorithm. Three specific innovations are proposed: (1) improve dependence of x_0 on y by dropping out x_t, at random with Bernoulli probability p. (2) In order to make innovation #3 possible, train each x_t explicitly using MSE of the implied x_0, rather than MSE of the error \epsilon. (3) For greater efficiency, generate x_0 in only two steps, selected from the T-step trained diffusion process using validation data.
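My reading of innovations #1 and #2 as a single training step, in sketch form (array shapes, the zero replacement value for the dropped-out input, and p=0.5 are my assumptions, not the paper's code):

```python
import numpy as np

def dose_training_step(model, x0, y, alpha_bar, p=0.5, rng=np.random.default_rng()):
    """Sketch of one training step: diffuse clean speech x0 to a random step t,
    drop out the diffused sample with Bernoulli probability p so the network
    must rely on the noisy observation y, and regress the implied x0 (MSE)."""
    t = int(rng.integers(len(alpha_bar)))
    a = alpha_bar[t]
    x_t = np.sqrt(a) * x0 + np.sqrt(1 - a) * rng.standard_normal(x0.shape)
    if rng.random() < p:           # innovation #1: dropout of x_t
        x_t = np.zeros_like(x_t)   # (assumed replacement value)
    x0_hat = model(x_t, t, y)      # network conditioned on the noisy signal y
    return np.mean((x0_hat - x0) ** 2)  # innovation #2: MSE on the implied x_0
```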
Strengths: This paper proposes a diffusion-based speech enhancement with both improved performance and (because of the two-step inference) improved efficiency. Both the performance gains and the efficiency gains are theoretically well motivated and empirically demonstrated.
Weaknesses: Clarity: (1) The dropout described in Eq. (13) is then not referenced again for the rest of the paper. I think that's because Eq. (13) affects the T-step training process, while equations (14)-(18) are about the proposed reduction of inference from T steps to 2 steps. But the division into training and testing is never really made explicit. I think this is because the derivations assume that the reader has fully understood Figure 1, but Figure 1 cannot be fully understood until one has first understood the algorithm; I had to go back and examine Figure 1 after reading the derivations in order to know what's going on. (2) I think that Eq. (17) should have an integral over dx_\tau. The algorithm might use the one-point approximation, but you've done a great job up to this point of keeping the theoretically-required integral in your equations, it seems a shame to abandon it here. (3) Shouldn't the first term on the RHS in Eq. (18) be p_\theta(\hat{x}_{\tau_2}|...)? Or are you trying to say that p(x_{\tau_2}|...) = p(\hat{x}_0|...)? That doesn't seem correct, since it misses the interpolation step. ... also, I think there should be an integration over d\hat{x}_{\tau_2}.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Significance: Dropout enhances accuracy. The two-step inference enhances efficiency. Is there any interaction between these two things? It seems like the dropout might reduce the accuracy degradation that two-step inference would otherwise incur; is that true? Similarly, why two -- is two the optimum number of steps in any way? Results show pretty clearly that two is better than one, but is there any theoretical reason for that? The theory seems to predict simply that the more steps you have, the more accurate is the inference.
"Diffusion enhancement methods have better generalizability than deterministic methods" -- By "deterministic methods" I think you mean DiffWave. In what sense is that a deterministic method?
p. 6 Considering the equivalently -> Considering the equivalence
p. 8 We contribute -> We attribute
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and ethical considerations are not explicitly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions.
***
>Q1: The dropout described in Eq. (13) is then not referenced again for the rest of the paper. I think that's because Eq. (13) affects the $T$-step training process, while equations (14)-(18) are about the proposed reduction of inference from $T$ steps to 2 steps. But the division into training and testing is never really made explicit. I think this is because the derivations assume that the reader has fully understood Figure 1, but Figure 1 cannot be fully understood until one has first understood the algorithm; I had to go back and examine Figure 1 after reading the derivations in order to know what's going on.
Thank you for your valuable feedback and comments! We apologize for any confusion caused by the paper writing. We will move Figure 1 to Sec 4 and reorganize Sec 4.2 to explicitly divide training and testing, as you suggested.
***
>Q2: I think that Eq. (17) should have an integral over $dx_{\tau}$. The algorithm might use the one-point approximation, but you've done a great job up to this point of keeping the theoretically-required integral in your equations, it seems a shame to abandon it here.
You are right that Eq. 17 should have an integral over $dx_{\tau}$. We will correct these in the revision.
***
> Q3: Shouldn't the first term on the RHS in Eq. (18) be $p\_\theta(\hat{x}\_{\tau\_2}|...)$? Or are you trying to say that $p(x\_{\tau\_2}|...) = p(\hat{x}\_0|...)$? That doesn't seem correct, since it misses the interpolation step. ... also, I think there should be an integration over $d\hat{x}\_{\tau\_2}$.
Thank you for pointing out these oversights in Eq. 18! The first term on the RHS of Eq. 18 should be $p\_\theta(\hat{x}\_{\tau\_2}|...)$, and there should be an integration over $d\hat{x}\_{\tau_2}$. We will correct these in the revision.
>Q4: Significance: Dropout enhances accuracy. The two-step inference enhances efficiency. Is there any interaction between these two things? It seems like the dropout might reduce the accuracy degradation that two-step inference would otherwise incur; is that true? Similarly, why two -- is two the optimum number of steps in any way? Results show pretty clearly that two is better than one, but is there any theoretical reason for that? The theory seems to predict simply that the more steps you have, the more accurate is the inference.
Great question! It's important to clarify that full-step generation does not always yield better results than few-step generation. This phenomenon was exemplified in DiffuSE [5], where generating speech in 6 steps outperforms 50-step generation. Similarly, in [6, 7], the authors demonstrated that generating samples in 1/10 of the steps outperforms full-step image generation (purification).
The notion that "the more steps, the more accurate the inference" holds true only if each step produces a better estimate than the preceding one. Suppose the model can always generate a better condition factor. According to Proposition 1, a smaller $t$ can then always be chosen. From a holistic standpoint, this loop continues as each step generates a better condition factor, so more steps yield a more accurate result (Sec 4.1, lines 164-166). In practice, however, this ideal scenario is hindered by empirical and generalization errors. Various factors contribute to model errors, including complexity, architecture, data quality, optimization, and stochasticity. Empirical evidence [16, 17] indicates that 2-step generation tends to outperform 1-step generation: the model can always produce an improved estimate relative to the initial condition $c=y$, making 2-step better than 1-step in most cases [16, 17]. However, we cannot guarantee that $K$-step generation ($K > 2$) is better than 2-step. We showed multiple visual cases in Fig. 5 and Fig. 15 where increasing the number of sampling steps leads to the inconsistency (error accumulation) problem and subpar results. Additional results for different $K$ (i.e., 2, 6, 50 steps) are attached in the table in the General Response. Note that research on speech enhancement based on progressive learning [18, 19] also shows that iterative learning over 5 steps often leads to performance degradation. Taking into account the computational complexity of optimizing hyperparameters for $K$ (Appendix A.6, lines 571-575), we opt to set $K=2$ directly, for both efficiency and stability. We appreciate your question and hope this explanation clarifies our approach and reasoning.
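For intuition, the 2-step procedure can be sketched as follows. This is a toy NumPy illustration only: `f_theta` is a stand-in stub for the trained conditional denoiser, and the noise schedule, `tau1`, `tau2`, and the mixing weight are chosen purely for illustration.

```python
import numpy as np

def f_theta(x_t, y, t):
    """Stand-in for the trained conditional denoiser f_theta(x_t, y, t).
    A real model would predict clean speech; here we just average inputs."""
    return 0.5 * (x_t + y)

def two_step_enhance(y, alpha_bar, tau1, tau2, rng):
    """Hypothetical 2-step sampling: denoise from an adaptive prior twice."""
    # Step 1: diffuse the noisy observation y to level tau1, then denoise.
    y_tau1 = (np.sqrt(alpha_bar[tau1]) * y
              + np.sqrt(1.0 - alpha_bar[tau1]) * rng.standard_normal(y.shape))
    x0_hat = f_theta(y_tau1, y, tau1)          # coarse estimate
    # Mild adaptive prior (Eq. 12 style): mix the estimate with y.
    c = 0.5 * (x0_hat + y)
    # Step 2: diffuse the prior to a smaller level tau2 < tau1, denoise once more.
    c_tau2 = (np.sqrt(alpha_bar[tau2]) * c
              + np.sqrt(1.0 - alpha_bar[tau2]) * rng.standard_normal(y.shape))
    return f_theta(c_tau2, y, tau2)            # final estimate

rng = np.random.default_rng(0)
alpha_bar = np.linspace(0.9999, 0.02, 50)      # toy noise schedule, T = 50
y = rng.standard_normal(16000)                 # 1 s of "noisy speech" at 16 kHz
x0 = two_step_enhance(y, alpha_bar, tau1=30, tau2=5, rng=rng)
assert x0.shape == y.shape
```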
***
>Q5: "Diffusion enhancement methods have better generalizability than deterministic methods" -- By "deterministic methods" I think you mean DiffWave. In what sense is that a deterministic method?
To ensure a fair comparison, we kept the model architecture exactly the same as that of DiffWave, but used $y$ in place of $x_t$ (so the model's input is two copies of the noisy speech $y$ concatenated along the channel dimension). We used a zero vector as the time-step embedding so that it carried no additional information. Comparing generative diffusion models with deterministic counterparts in this way is common practice in the literature [13, 20].
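Concretely, the deterministic counterpart's input can be sketched as below. This is an illustrative NumPy fragment: the function name, shapes, and the embedding size are placeholders, not our actual implementation.

```python
import numpy as np

def make_deterministic_inputs(y):
    """Build inputs for the deterministic baseline: the noisy speech y is
    fed both as the 'sample' and as the condition, i.e. two copies of y
    stacked along the channel dimension, with a zero time-step embedding."""
    x_t = y                                # y stands in for the noisy latent x_t
    net_in = np.stack([x_t, y], axis=0)    # (channels=2, samples)
    t_emb = np.zeros(128)                  # zero vector: no time-step information
    return net_in, t_emb

y = np.random.default_rng(0).standard_normal(16000)
net_in, t_emb = make_deterministic_inputs(y)
assert net_in.shape == (2, 16000) and not t_emb.any()
```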
***
>Q6: p. 6 and p. 8 typos.
Thanks for spotting the typos! We will correct these in the revision. | Summary: This paper presents a solution to the problem of condition-collapse in denoising diffusion models for speech enhancement by introducing the adaptive prior and sample dropout techniques. The paper is well-written and provides valuable insights into the functioning of the denoising diffusion probabilistic model for speech enhancement. While the concept of adaptive priors for conditioning the generative process is not entirely novel and has been explored in vision-related tasks, the authors' application of this technique, along with theoretical analysis, is intriguing. The authors also justify their choices regarding noise scheduling and propose a faster method for sampling clean speech from the trained model based on intermediate approximation.
The experimental strategy adopted in this study assesses the generalization capability of the proposed model using objective metrics such as STOI and PESQ, as well as subjective scores like CBAK and COVL. The effectiveness of the mixed conditioning strategy is demonstrated through the analysis of spectrogram plots, which is an interesting observation. It is important to note that while the proposed technique may not consistently outperform baseline methods across all scenarios, it does excel in specific matched scenarios.
Strengths: In this paper, a novel approach is introduced to address the problem of condition collapse in diffusion models for speech enhancement. The authors propose the utilization of adaptive prior and sample dropout techniques, which offer an interesting and promising solution to this issue. Furthermore, the paper delves into the theoretical aspects of clean speech recovery, shedding light on the conditions and constraints necessary for successful restoration.
One notable contribution of this work is the development of a fast sampling technique, which not only proves effective in the context of speech enhancement but also holds potential for application in other conditional generation tasks. This aspect highlights the broader implications and versatility of the proposed approach.
To evaluate the efficacy of the proposed technique, the authors conduct comprehensive experiments and compare their approach against various diffusion-based models specifically designed for speech enhancement. The experiments are meticulously designed and executed, providing a thorough analysis of the results. This level of detail and scrutiny enhances the credibility of the proposed approach and contributes to a better understanding of its strengths and limitations.
Weaknesses: One main weakness in my opinion is the understanding of Proposition 2. I do not understand how a diffusion model has high probability of recovering ground-truth if the inequality 23 from appendix holds. I might be missing some theoretical analysis on diffusion models but I am giving the benefit of doubt to the authors.
The dataset section lacks details about the type and level of noise present in the CHiME and VoiceBank corpora. Another issue is in the experiment section, where the authors have shown impressive performance on a wide range of metrics. I believe that WER can be easily calculated in the evaluation and is a very straightforward way to compare noise-reduction performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The conclusion section mentions that the model is sensitive to the choice of dropout probability and the sampling time-indices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions.
***
>Q1: One main weakness in my opinion is the understanding of Proposition 2. I do not understand how a diffusion model has high probability of recovering ground-truth if the inequality 23 from appendix holds. I might be missing some theoretical analysis on diffusion models but I am giving the benefit of doubt to the authors.
Unlike directly maximizing the conditional probability $p(x_0 = x|x_t = y_t)$ or the difference between the target speech and candidates $||p(x_0 = x|x_t = y_t) - \max (p(x_0 = x^{\prime}|x_t = y_t); x^{\prime} \in \mathcal{S}(x))||_2^2$, Proposition 2 (Eq. 23) presents a relatively relaxed constraint. If Proposition 2 holds, then given the characteristics of unconditional diffusion / score-based models [14, 15], particles starting at the adaptive prior $y_t$ are more likely to converge to the ground-truth objective $x$ through an iterative MCMC procedure (known as Langevin dynamics) than to other natural but inconsistent candidates $x^{\prime} \in \mathcal{S}(x)$. This constraint also suggests that we should select a smaller $t$ and narrow the gap between the condition factor and $x_t$, which guided the subsequent DOSE design. We appreciate your question and hope this explanation addresses your concerns.
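This intuition can be visualized with a toy 1-D score model: particles initialized from a prior near the ground-truth mode converge to that mode under Langevin dynamics rather than to a competing mode. Everything below (the two-mode density, step sizes, temperatures) is illustrative and unrelated to our actual networks.

```python
import numpy as np

def score(x, modes=(-2.0, 2.0), sigma=0.5):
    """Score (gradient of log density) of a toy two-mode Gaussian mixture."""
    w = np.array([np.exp(-(x - m) ** 2 / (2 * sigma**2)) for m in modes])
    w = w / w.sum(axis=0)                              # posterior mode weights
    grads = np.array([-(x - m) / sigma**2 for m in modes])
    return (w * grads).sum(axis=0)

rng = np.random.default_rng(0)
# Particles start near the "ground-truth" mode at +2 (a good prior y_t) ...
x = 2.0 + 0.5 * rng.standard_normal(1000)
eps = 0.01
for _ in range(500):  # Langevin dynamics: gradient step plus injected noise
    x = x + eps * score(x) + np.sqrt(2 * eps) * 0.05 * rng.standard_normal(x.shape)
# ... and overwhelmingly converge to +2 rather than the competing mode at -2.
assert (x > 0).mean() > 0.95
```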
***
>Q2: The dataset section lacks the details about the type and level of noise present in Chime and Voicebank corpora. Another issue is in the experiment section where the authors have shown impressive performance on a wide-range of metrics. I believe that WER can be easily calculated in the evaluation and is a very straight-forward way to compare the noise-reduction performance.
Thanks for the suggestion! The VoiceBank-DEMAND dataset is a classical benchmark for speech enhancement, using clean speech from the VCTK corpus. The training utterances are artificially contaminated with eight real-recorded noise samples from the DEMAND database and two artificially generated noise samples (babble and speech-shaped) at 0, 5, 10, and 15 dB SNR, amounting to 11,572 utterances. The testing utterances are mixed with different noise samples at 2.5, 7.5, 12.5, and 17.5 dB SNR, amounting to 824 utterances in total. The CHiME-4 simulated test data is created from real-recorded noises in four real-world environments (street, pedestrian area, cafeteria, and bus) using four speakers, with a total of 1,320 utterances. Following [12], we use the signals from the fifth microphone for evaluation. We will update the dataset details in the revision.
We have evaluated all speech enhancement methods using two public pre-trained ASR models (CRDNN-RNNLM and Conformer-Transducer) from huggingface. The result is shown in the table below.
**VoiceBank-DEMAND**

| Model | DOSE | DiffuSE | CDiffuSE | SGMSE | DR-DiffuSE | DiffWave(dis) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| CRDNN-RNNLM | **12.77%** | 14.28% | 12.97% | 14.81% | 13.01% | 14.31% |
| Conformer-Transducer | **9.83%** | 10.83% | 9.96% | 11.67% | 9.89% | 10.90% |

**CHiME-4**

| Model | DOSE | DiffuSE | CDiffuSE | SGMSE | DR-DiffuSE | DiffWave(dis) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| CRDNN-RNNLM | 39.66% | 38.10% | 37.59% | **37.17%** | 44.51% | 71.92% |
| Conformer-Transducer | 30.30% | 28.44% | **28.41%** | 28.62% | 31.07% | 59.76% |
We have the following observations:
* On the VoiceBank-DEMAND dataset (matched scenario), the performance gap between the diffusion enhancement models and the deterministic model is not prominent. For instance, WER ranges from 12.77% to 14.81% with CRDNN-RNNLM and from 9.83% to 11.67% with Conformer-Transducer.
* On the CHiME-4 dataset (mismatched scenario), the diffusion enhancement models significantly outperform the deterministic model: their WER ranges from 37.17% to 44.51% with CRDNN-RNNLM and from 28.41% to 31.07% with Conformer-Transducer, while the deterministic model's WER is 71.92% with CRDNN-RNNLM and 59.76% with Conformer-Transducer.
* We find that our method shows no significant difference from DiffuSE, CDiffuSE, and SGMSE in WER evaluation.
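For completeness, WER itself is the word-level edit distance normalized by the reference length; below is a self-contained sketch of the metric (the transcriptions are produced by the pre-trained ASR models above and are omitted here).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference -> WER = 0.25.
assert wer("the cat sat down", "the cat sat down") == 0.0
assert wer("the cat sat down", "the cat sat up") == 0.25
```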
---
Rebuttal Comment 1.1:
Title: No additional questions.
Comment: I thank authors for addressing my questions. I believe an accept (7) is a good score for this paper. | Summary: This paper focuses on a new approach in the field of speech enhancement called DOSE, which effectively addresses the problem of conditional collapse by incorporating conditional information into a diffusion enhancement model.DOSE employs two effective conditional enhancement techniques that can significantly improve the performance of the model while ensuring its efficiency. The paper demonstrates the efficiency and effectiveness of the method through comprehensive experiments on a benchmark dataset.
Strengths: 1. In this paper, the authors propose an Adaptive Prior, aimed at incorporating conditioning information during the generation process, thereby ensuring greater consistency in the generated samples and augmenting the efficacy of speech enhancement.
2. The paper elucidates that by employing dropout operations during the training phase, the model is compelled to prioritize conditioning elements, which efficaciously mitigates the conditioning collapse issue. This methodology engenders a dependency on conditioning information within the model during generation, culminating in the synthesis of more coherent speech.
3. The authors undertake a comparative analysis between DOSE and extant diffusion-based speech enhancement models on two public datasets. Notably, DOSE attains superior performance with an exceedingly limited number of sampling steps, which substantiates the efficacy of the proposed method.
Weaknesses: This work realizes a two-step sampling process. The designs in the sampling process include:
1. The parameter $T_1$ denotes an intersection point, enabling a shallow reverse process by leveraging the noisy speech $y$. A similar mechanism can be found in text-to-speech synthesis (DiffSinger, AAAI 2022) and image editing (SDEdit, ICLR 2022).
2. The coarse estimation $\hat{x}_{0}$ denoised from $T_1$ is first mixed with $y$ as the adaptive prior, and then corrupted with a shallow forward process, which generates the latent representation at $T_2$.
3. The high-quality estimation $\hat{x}_{0}$ can be recovered from $T_2$ in one step.
Two designs in training include:
1. The estimation target in the training objective is set to the clean waveform instead of the noise.
2. The $x_t$ is randomly dropped out to force the model to rely on the conditioning information $y$.
Questions:
1. I can understand that the dropout operation in training is helpful for utilizing the conditioner $y$. However, the adaptive prior is used to obtain the latent representation at $T_2$. I think it is manipulating the sampling trajectory. What is the relationship with the condition optimizer? The unchanged noisy observation $y$ has already been provided as the condition. The abstract claims two condition-augmentation techniques.
2. I do not understand the comparison study of adaptive prior analysis very well. What are the three variants when computing the adaptive prior \hat{x}_{0}? Why is an unconditional diffusion model used? To compare the design of Eq. 12, the conditional generation should be fixed.
3. This work mentions the error accumulation of diffusion models. However, the high-quality generation of diffusion models is usually guaranteed by its iterative refinement mechanism. In this work, 50 time steps are used in training, while 2 steps are used in sampling. I am curious about the results of increasing the number of sampling steps.
Experiments:
1. Subjective tests MOS and SMOS are conducted. But where is the demo page showing the generated samples?
2. DiffWave does not claim one-step mapping for either waveform generation or denoising. Is it good to use it as a discriminative model?
3. Generative baseline models such as DiffuSE, CDiffuSE, and SGMSE have been changed to keep the uniform model architecture and training method with this work. Will this cause performance decrease in their methods? What are the results of those unchanged baseline models?
Others:
1. The thesis writing looks over-complicated. Moreover, the adaptive prior does not mean condition optimizer from my perspective. It is manipulating the sampling trajectory with observation y.
2. I suggest showing the training and sampling algorithms in the main content instead of in appendix.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My detailed questions are as described above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: There are limitations to its use in real-time scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions.
***
>Q1: I think adaptive prior is manipulating the sampling trajectory. What is the relationship with condition optimizer? The unchanged noisy observation $y$ has been provided as the condition. The abstract claims two condition-augmentation techniques.
Yes, you are right: our adaptive prior is designed to explicitly incorporate condition knowledge by manipulating the sampling trajectory. Ideally, we could directly use the noisy speech $y$ to generate the adaptive prior, as in SDEdit [2]. However, in low-SNR scenarios (where the speech signal is severely contaminated by noise), we have to choose a relatively large $t$ to guarantee an acceptable error bound (Proposition 1). According to Propositions 1 and 2, if we can narrow the gap between the condition factor and the input $x$, we can opt for a smaller (better) $t$. This prevents excessive removal of the original semantic information in the condition factor (lines 168-172). To this end, the second condition-augmentation technique employs a condition optimizer to generate an enhanced adaptive prior (Eq. 12) and explicitly injects the condition knowledge at the inference stage. On the whole, it is a condition-augmentation technique tailored for diffusion enhancement models, aimed at alleviating the condition collapse problem. We appreciate your question and hope this explanation addresses your concerns.
***
>Q2: What are the three variants when computing the adaptive prior $\hat{x}_0$? Why is an unconditional diffusion model used? To compare the design of Eq. 12, the conditional generation should be fixed.
We explored using an unconditional diffusion model with the adaptive prior technique as a first attempt to address the condition collapse problem of conditional diffusion enhancement methods (Sec 4.1). We found it essential to consider failure cases of the condition optimizer, particularly in mismatched scenarios: using the estimated speech directly from the condition optimizer (Eq. 11, similar to DifFace [3] and DiffSinger [4]) can lead to the excessive suppression problem. To investigate this further, we defined three variants (cf. Appendix A.4, lines 504-506):
* Applying the adaptive prior with the noisy speech (similar to SDEdit [2]);
* Applying the adaptive prior with the estimated speech (similar to DifFace and DiffSinger, Eq. 11);
* Applying the adaptive prior with a milder one (Eq. 12).
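In code, the three variants differ only in how the condition prior is formed before it is corrupted by the shallow forward process. Below is a toy sketch, where `g` is merely a placeholder for the learned condition optimizer and the function names are illustrative.

```python
import numpy as np

def g(y):
    """Placeholder for the condition optimizer's estimated clean speech."""
    return 0.9 * y

def make_prior(y, variant):
    """Three adaptive-prior variants compared in Appendix A.4 (illustrative)."""
    if variant == "noisy":      # SDEdit-style: use the noisy speech directly
        return y
    if variant == "estimated":  # DifFace/DiffSinger-style: use the estimate (Eq. 11)
        return g(y)
    if variant == "mild":       # the milder prior (Eq. 12): equal-weight mix
        return 0.5 * (g(y) + y)
    raise ValueError(variant)

y = np.ones(8)
assert np.allclose(make_prior(y, "mild"), 0.5 * (g(y) + y))
```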
In our experiments (reported in Appendix A.4), we discovered that the mild condition is more stable in complex scenarios, while the unconditional diffusion model showed limited effectiveness in matched scenarios. Both of these insights were very valuable and were used in designing DOSE (cf. Sec 4.2).
***
>Q3: This work mentions the error accumulation of diffusion models. However, the high-quality generation of diffusion models is usually guaranteed by its iterative refinement mechanism. In this work, 50 time steps are used in training, while 2 steps are used in sampling. I am curious about the results of increasing the number of sampling steps.
Great question! We would like to emphasize that the definition of high quality in image/speech synthesis is not consistent with the one in speech enhancement: the former focuses on naturalness, while the latter focuses on point-to-point consistency. We presented multiple visual cases in Fig. 5 and Fig. 15 which illustrate that increasing the number of sampling steps leads to the inconsistency (error accumulation) problem and subpar results. Additional experimental results are shown in the table in the General Response. Note that several recent works [5, 6, 7] share our finding that consistency degrades when starting from a large $t$.
***
>Q4: Subjective tests MOS and SMOS are conducted. But where is the demo page showing the generated samples?
We have made the test page public. Due to the NeurIPS policy that "rebuttals should not contain any links to external pages", we have sent an anonymized link to the AC in a separate comment.
***
>Q5: DiffWave does not claim one-step mapping for either waveform generation or denoising. Is it good to use it as a discriminative model?
We use DiffWave as a discriminative model due to the following three reasons.
* As stated in DiffWave [8], its network architecture is based on WaveNet [9], a speech synthesis model that has been successfully applied to **speech enhancement** [10] and separation [11];
* Pioneering works [5, 12] in DDPM-based speech enhancement are all based on DiffWave. To ensure a fair comparison, we need to keep the model architecture exactly the same;
* A concurrent work [13] does a similar thing to ours, i.e., it uses NCSN++ [14], another generative model, as the basic architecture.
***
>Q6: Generative baseline models such as DiffuSE, CDiffuSE, and SGMSE have been changed to keep the uniform model architecture and training method with this work. Will this cause performance decrease in their methods? What are the results of those unchanged baseline models?
As we explained in Appendix A.12 (lines 641-644), current SOTA speech enhancement methods directly use the noisy speech as the condition factor, rather than the Mel-spectrogram. We note that this slight modification also improves the performance of these methods (please see the respective reported performances for more details).
***
>Q7: There are limitations to its use in real-time scenarios.
We would like to emphasize that one of the attractive properties of our method is that speech can be generated in only 2 steps. We believe our method can shed light on the design of future fast diffusion enhancement models.
***
>Q8: Writing and paper organization advice.
Thanks for the suggestion! We will reorganize the paper in the revision.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thank you for your detailed rebuttal and explanations provided to address my concerns. I have read through your answers and would like to further emphasize and elaborate on some points:
**A new question**: may I ask the main target of this work? Are the improving techniques designed for achieving high-generation quality or fast sampling speed?
**Regarding Q1**:
OK, I understand that the condition optimizer means the milder adaptive prior shown in Eq. 12. You inject the scaled observation $y$ into the latent representation at the second sampling step $\hat{x}_{t_2}$. I was hoping to ask two questions about the milder prior:
1. You set equal weight (predefined as 0.5) for the denoising result of the first sampling step $f_\theta(y_{t_1}, y, t_1)$ and the observation $y$. Does this mean that you have equal confidence in these two terms, even though the observation $y$ may have a different signal-to-noise ratio (SNR)?
2. When the observation $y$ has a low SNR, is it still helpful to use it as in Eq. 12? If $y$ is very noisy, you still inject it into the denoising result of the first sampling step. Would it be informative to the final generation results?
**Regarding Q3**: You mention that increasing the number of function evaluations (NFEs) will lead to inconsistency (error accumulation) and less satisfactory results when compared to the two-step approach. If this is indeed the case, it raises a fundamental question: what is the motivation or benefit of utilizing the diffusion-based framework for this task?
From my perspective, the intrinsic value of diffusion models lies in their promising generation quality achieved by iterative sampling. Moreover, a trade-off between (controlled) generation quality and sampling speed can be achieved by tuning the NFE. If the generation process is limited to NFE=1 or NFE=2 because of the error caused by discretized sampling steps, I think the method should be compared with more discriminative models.
**Regarding Q5**:
Regarding using DiffWave as the baseline of discriminative models: I am not claiming that the WaveNet architecture is not good. But I do not believe that changing DiffWave to one-step mapping is a proper choice of discriminative models. Other papers like CDiffuSE, UNIVERSE, and SGMSE+ show the comparison results with several published discriminative methods.
Comparing with such deterministic models would not only bolster the credibility of your results but also provide readers with a clearer context of where DOSE stands in terms of performance within the broader speech enhancement landscape.
I look forward to your further clarifications on these matters.
---
Reply to Comment 1.1.1:
Title: (1/2) Response to Further Questions Raised by Reviewer boQC
Comment: We appreciate your valuable feedback! Below we answer your questions :)
***
> Q1: The main target of this work.
Our aim is to tackle the condition collapse issue [1] in conditional diffusion enhancement models, ultimately improving denoising performance. We present a model-agnostic approach equipped with two conditional augmentation strategies to effectively exploit condition knowledge. Our adaptive prior bolsters inference speed by shortening the sampling trajectory from $T$ steps to a few steps. Considering the error accumulation problem, we set $K=2$, further enhancing the inference speed. We'd like to stress that 2-step is not necessarily the optimal choice; we chose it for both efficiency and stability reasons (please see Reviewer RTAK, R4 for more details).
***
> Q2: (1) Why set equal weights? \& (2) The influence of injecting $y$ when it has a low SNR.
(1): We'd like to emphasize that the motivation behind our strategy (Eq. 12) stands apart from the concept of "confidence". Instead, it acts more like a simple residual layer that integrates raw information to circumvent the excessive suppression problem. From another perspective, when the performance of the condition optimizer is uncertain, a judicious choice is an equal merging weight of 0.5; this decision is rooted in the maximum entropy principle.
(2): A low-SNR $y$ can impede the effectiveness of the adaptive prior mechanism. This is because, when dealing with a low-SNR condition factor, it becomes necessary to select a relatively large value for $\tau$ to satisfy Proposition 1, and the original semantic information is also removed if $\tau$ is too large (cf. Sec 4.1, lines 162-167). However, establishing the condition prior "adaptively" is one of the most attractive properties of the mechanism. When the SNR of the condition factor is high, the advantages of the adaptive mechanism are fully revealed, as it provides an informative prior and shortens the sampling path. Even with a low-SNR $y$, when the condition optimizer is effective, there is potential to attain an improved condition factor (as defined in Eq. 12) compared to the straightforward use of $y$.
***
> Q3: (1) Motivation behind diffusion enhancement models \& (2) The intrinsic value of diffusion models (iterative sampling).
(1): Almost all diffusion enhancement works claim that their methods generalize better than deterministic models.
(2): Good question! We concur with your insight that more steps generally lead to better sample quality for generation tasks. However, we'd like to stress that when applying DDPMs to fine-grained point-to-point mapping (regression) tasks, full-step generation does not always yield better results than few-step generation. Not only do our experiments verify this (Figure 5 and Figure 15 for visualization, and quantitative results in the General Response), but so do recent works in speech enhancement [2] (6-step is better than 50-step), inverse problems [3] (20-step is slightly better than 100-step), and image purification [4, 5] (1/10 of the steps is better than full-step). Additionally, we advocate that the specific training paradigm of DDPMs also brings benefits. We presented a generalization analysis (see Appendix A.11) explaining why diffusion enhancement models exhibit superior generalizability over deterministic counterparts (from the perspective of multi-task training). Recent research [4] highlights that the comprehensive training process of diffusion models substantially enhances one-shot denoising capabilities, making them more adaptable than previous works that focused on standalone denoisers at a single noise level.
***
> Q4: More discriminative methods should be compared like CDiffuSE, UNIVERSE, and SGMSE+.
We'd like to emphasize that our approach is a model-agnostic solution (as claimed in the Abstract) to the condition collapse issue. Unlike works such as CDiffuSE, UNIVERSE, and SGMSE+ that focus on designing exceptional network architectures to achieve state-of-the-art performance, we take a different path. Our work aligns more closely with a concurrent study [6] (published at ICASSP 2023), which compares the diffusion enhancement model with a discriminatively trained neural network employing the same network architecture for restoration tasks. (As they suggested: "However, to make a fair comparison of these two conceptually different approaches, similar network architectures and the same training data should be used.") In the future, we will adapt our method to more popular SE methods, as you suggested.
***
We appreciate your questions and hope these responses address your concerns. | Summary: This paper proposes a novel model-agnostic approach called DOSE for speech enhancement (SE) using denoising diffusion probabilistic models (DDPMs). In this paper, the authors focus on addressing the challenge of incorporating condition information into DDPMs with two efficient condition-augmentation techniques. Based on the experimental results, the authors claim that the proposed method obtains significant improvements in high-quality and stable speech generation, consistency with the condition factor, and efficiency.
Strengths: 1. The proposed method shows good generalization ability with good performance in both matched and mismatched scenarios.
2. This paper shows detailed experimental results and a comprehensive comparison with existing diffusion enhancement methods and deterministic mapping-based method enhancement methods.
3. This paper provides a proper introduction to the problem of condition collapse in generative speech enhancement.
4. This paper is well-written and the flow of the writing is natural so it was easy to read and follow.
Weaknesses: 1. To replicate the experiments, more training details and configuration should be provided.
2. It would be better if there were some qualitative analysis in the experiment section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can authors conduct the ablation study to present and analyze the effectiveness of DOSE.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not analyze the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions.
***
>Q1: To replicate the experiments, more training details and configuration should be provided.
We reported our configurations in Sec 5, line 278-283. We added more experimental details including speech processing, basic architecture, and baseline description in Appendix A.12. As Reviewer k89b suggested, we will update the dataset details in the revision. Note that we also provided the code of DOSE in both supplementary material and anonymous GitHub (Appendix A.1, line 451-452) for replication.
***
>Q2: It would be better if there were some qualitative analysis in the experiment section.
Fully appreciating the question, we'd like to note that there are multiple qualitative results discussed in the Appendix (which we'd be happy to refer to more explicitly in the revision). Specifically:
* We showed visual cases of excessive suppression in A.8;
* We presented visual cases of error accumulation in A.9;
* We conducted a counterfactual verification to understand the intrinsic mechanism of DOSE in A.10.
***
>Q3: Can authors conduct the ablation study to present and analyze the effectiveness of DOSE.
Thanks for the suggestion! We have conducted ablation studies to quantitatively show the significance of adaptive prior and dropout operation. The results are shown in General Response. We can observe that both of them are crucial for generating consistent samples. We also investigated the significance of adaptive prior and dropout (from other perspectives than metric scores) in Appendix A.6, A.7, and A.10.
***
>Q4: The authors do not analyze the limitation of this paper.
As Reviewer k89b pointed out, we had discussions about limitations in Sec 6 (line 325-338). We also discussed the broader impacts in Appendix A.13 (line 675-697). We'd be happy to dedicate a (sub)section to discussing the limitations and the societal impact in the main text.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their response. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for providing high-quality reviews and insightful feedback.
---
We are encouraged that reviewers think our paper "provides valuable insights into the functioning of the denoising diffusion probabilistic model for speech enhancement'' (R4), "an interesting and promising solution to condition collapse issue'' (R3, R4), "technically/theoretically in-depth'' (R1, R5), "comprehensive experiments and thorough analysis'' (R2, R3, R4, R5), "broader implications and versatility" (R4), and "well-written'' (R2, R4).
(We abbreviate reviewers Zy9S, CtVJ, boQC, k89b, and RTAK as R1, R2, R3, R4, and R5, respectively.)
---
We provide additional ablation results requested by R1, R2 and R3, shown in the table below -- $p$ denotes the dropout rate, $\epsilon$ and $x$ denote different training objectives, and the number of steps denotes how many steps are needed to generate speech during the inference stage.
| | VoiceBank | | | | | CHIME-4 | | | | |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Variable | STOI(%) | PESQ | CSIG | CBAK | COVL | STOI(%) | PESQ | CSIG | CBAK | COVL |
| p = 0 ( $\epsilon$ ) 2 steps (to R2) | 92.7 | 2.13 | 3.44 | 2.58 | 2.76 | 86.5 | 1.39 | 2.69 | 2.04 | 1.98 |
| p = 0 ( $x$ ) 2 steps (to R1) | 93.3 | 2.49 | 3.74 | 3.03 | 3.10 | 80.6 | 1.37 | 2.59 | 2.06 | 1.92 |
| p = 0.1 ( $x$ ) 2 steps (to R1) | 93.5 | 2.50 | 3.66 | 3.24 | 3.08 | 82.0 | 1.44 | 2.69 | 2.10 | 2.00 |
| p = 0.5 ( $x$ ) 2 steps (to R1) | **93.6** | **2.56** | **3.83** | **3.27** | **3.19** | **86.6** | **1.52** | **2.71** | **2.15** | **2.06** |
| p = 0.9 ( $x$ ) 2 steps (to R1) | 92.6 | 2.33 | 3.54 | 3.01 | 2.93 | 83.3 | 1.36 | 2.67 | 2.02 | 1.95 |
| p = 0.5 ( $x$ ) 6 steps (to R3) | 93.1 | **2.56** | 3.78 | 3.03 | 3.16 | 82.2 | 1.43 | 2.70 | 2.12 | 2.01 |
| p = 0.5 ( $x$ ) 50 steps (to R2 and R3) | 93.2 | 2.48 | 3.66 | 3.18 | 3.06 | 82.1 | 1.39 | 2.49 | 2.01 | 1.87 |
---
We list all needed references here to facilitate the subsequent point-to-point rebuttals.
[1] PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior, ICLR 2021.
[2] SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, ICLR, 2021.
[3] DifFace: Blind Face Restoration with Diffused Error Contraction, Arxiv, 2022.
[4] Diffsinger: Singing voice synthesis via shallow diffusion mechanism, AAAI, 2022.
[5] A Study on Speech Enhancement Based on Diffusion Probabilistic Model, APSIPA, 2021.
[6] (Certified!!) Adversarial Robustness for Free!, ICLR, 2023.
[7] DensePure: Understanding Diffusion Models for Adversarial Robustness, ICLR, 2023.
[8] DiffWave: A Versatile Diffusion Model for Audio Synthesis, ICLR, 2021.
[9] Wavenet: A Generative Model for Raw Audio, Arxiv, 2016.
[10] A Wavenet for Speech Denoising, ICASSP, 2018.
[11] End-to-End Music Source Separation: Is It Possible in the Waveform Domain?, Interspeech, 2019.
[12] Conditional Diffusion Probabilistic Model for Speech Enhancement, ICASSP, 2022.
[13] Analysing Diffusion-based Generative Approaches Versus Discriminative Approaches for Speech Restoration, ICASSP, 2023.
[14] Score-Based Generative Modeling through Stochastic Differential Equations, ICLR, 2021.
[15] Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS, 2019.
[16] A Recursive Network with Dynamic Attention for Monaural Speech Enhancement, Interspeech, 2020.
[17] A Time-domain Monaural Speech Enhancement with Feedback Learning, APSIPA, 2020.
[18] Densely Connected Progressive Learning for LSTM-based Speech Enhancement, ICASSP, 2018.
[19] A Multi-Target SNR-Progressive Learning Approach to Regression Based Speech Enhancement, TASLP, 2020.
[20] DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability, ICASSP, 2023. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose a model-agnostic method called DOSE that employs two efficient condition-augmentation techniques to incorporate condition information into DDPMs for SE. Experiments demonstrate that the approach yields substantial improvements in high-quality and stable speech generation.
Strengths: 1. In-depth presentation on diffusion SE, including formulation and methodology.
2. Good results. The authors compare different SE baselines and demonstrate the SOTA results.
Weaknesses: 1. It seems that the authors adopt the adaptive prior. Is it only used in the inference process? What is the difference from the adaptive prior in PriorGrad?
2. Why use similarity MOS to evaluate the enhancement model, and what's the difference from MOS?
3. How do you choose p for diffusion dropout? It lacks evaluation and ablation studies on p, which is an important parameter for the proposed diffusion dropout operation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there a training-inference mismatch? You randomly drop x_t in training; I wonder if it causes a mismatch, as dropout usually does.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions.
***
> Q1: It seems that the authors adopt the adaptive prior. Does it only use in the inference process? What is the difference from the adaptive prior in PriorGrad?
Yes, the adaptive prior is exclusively used in the inference process. Our method differs from PriorGrad [1] in three key aspects.
* PriorGrad injects instance-level prior knowledge at the initial timestep $T$ and requires modifications to the training process (line 3-5 of Algorithm 1 in [1]). In contrast, our adaptive prior is independent of the training process, allowing our approach to be directly applied to arbitrary pre-trained diffusion enhancement models.
* PriorGrad is designed to speed up training convergence. Our adaptive prior is used to provide condition knowledge (it can also accelerate inference speed). PriorGrad requires a complete inference process, whereas our approach starts from an intermediate timestep, shortening the sampling trajectory and thereby improving inference efficiency (e.g., generating clean speech in 2 steps).
* PriorGrad is sensitive to the prior selection -- they have tried several sources of conditional information to compute the prior, but only the normalized frame-level energy of the mel-spectrogram worked (cf. Sec 4.1 in [1]). This means that it is hard for developers to choose an appropriate adaptive prior. In contrast, our adaptive prior, computed directly from noisy speech, is stable and effective.
We appreciate your question and hope this explanation addresses your concerns.
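For illustration, the sampling idea above can be sketched as follows. This is a minimal, non-authoritative sketch assuming a standard linear noise schedule and a generic epsilon-prediction DDPM update; `denoiser`, `make_schedule`, and `tau` are illustrative stand-ins, not our actual implementation (whose training objective may differ):

```python
import numpy as np

def make_schedule(T, beta_start=1e-4, beta_end=0.05):
    """Hypothetical linear beta schedule and its cumulative alpha products."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars

def sample_with_adaptive_prior(denoiser, y, tau, T=50, seed=0):
    """Run the DDPM reverse process, starting at step tau from a prior built on y."""
    rng = np.random.default_rng(seed)
    betas, alpha_bars = make_schedule(T)
    # Adaptive prior: treat the noisy speech y as a partially diffused sample,
    # instead of drawing x_T from a standard Gaussian.
    x = np.sqrt(alpha_bars[tau]) * y \
        + np.sqrt(1.0 - alpha_bars[tau]) * rng.standard_normal(y.shape)
    for t in range(tau, -1, -1):
        eps_hat = denoiser(x, t, y)                 # model's noise estimate
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) \
            / np.sqrt(1.0 - betas[t])
        if t > 0:                                   # no noise at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(y.shape)
    return x
```

The shortened trajectory (tau steps instead of T) is where the inference speed-up comes from.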
***
> Q2: Why use similarity MOS to evaluate the enhancement model, and what's the difference from MOS?
As the primary focus of our work is to tackle the condition collapse problem in diffusion enhancement models, it is crucial to assess the consistency of the generated speech with real speech. While MOS is commonly employed to rate the overall naturalness and fluency of synthesized audio in speech synthesis, we introduce another metric, called similarity MOS, which specifically evaluates the consistency (content, timbre, emotion, and prosody) between the generated speech and the real speech. We provided details about subjective human evaluation in Appendix A.3 (line 486-496).
***
> Q3: How do you choose $p$ for diffusion dropout? It lacks evaluation and ablation studies on $p$, which is an important parameter for the proposed diffusion dropout operation.
Similar to the process of selecting $\tau_1$ and $\tau_2$, we determine the optimal values for $p$ by evaluating the performance on a validation dataset. In this study, we pre-defined a relatively coarse candidate set $\{0, 0.1, 0.5, 0.9\}$ for $p$ and found that $p=0.5$ generated appealing results. We have included supplementary ablation studies for the hyper-parameter $p$ in the above table (in General Response). Note that we presented parameter sensitivity experiments in Appendix A.7, and the influence of $p$ (line 589-604) was shown in Figure 11 and Figure 12. We will mention this explicitly in the main text.
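As a minimal illustration of this selection procedure (`validate` is a hypothetical scoring function evaluated on the validation set, not our actual pipeline):

```python
# Grid search over the candidate dropout rates described above; the candidate
# set matches the one in the reply, while `validate` is a placeholder that
# returns a validation score (higher = better).
def select_dropout_rate(validate, candidates=(0.0, 0.1, 0.5, 0.9)):
    scores = {p: validate(p) for p in candidates}
    return max(scores, key=scores.get)
```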
***
> Q4: If there is a training-inference mismatch? You randomly drop $x_t$ in training, I wonder if it causes the mismatch as the dropout usually does.
Great question! It is essential to clarify that our method is different from the conventional dropout technique typically applied to neural networks. In conventional dropout, random units (neurons) or their activations are dropped out during training to prevent overfitting and improve generalization. In contrast, our approach focuses on randomly dropping out $x_t$ (input features) to mitigate the condition collapse problem in diffusion enhancement models. This strategy would force the model to generate condition-consistent samples. While dropout might introduce a slight training-inference mismatch, we carefully validate our model's performance on benchmark datasets and ensure that it does not significantly affect the quality of the generated samples. We will include a more detailed explanation of the dropout technique and its implications in our revision. | null | null | null | null | null | null |
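For illustration, the dropout operation described above could be sketched as follows; the `model` callable, the schedule, and the x-prediction loss are simplified stand-ins for our actual training code:

```python
import numpy as np

def training_step(model, x0, y, alpha_bars, p=0.5, rng=None):
    """One illustrative training step with diffusion dropout on the input x_t."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = int(rng.integers(len(alpha_bars)))
    eps = rng.standard_normal(x0.shape)
    # Standard forward diffusion of the clean speech x0.
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    # Diffusion dropout: with probability p, drop the diffused input features
    # so the model must rely on the condition y alone.
    if rng.random() < p:
        x_t = np.zeros_like(x_t)
    x0_hat = model(x_t, t, y)       # x-objective: predict the clean speech
    return float(np.mean((x0_hat - x0) ** 2))
```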
ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns | Accept (poster) | Summary: This paper proposes to drape multi-layered garments on SMPL-based human bodies with different poses and shapes. The method is inspired by the commonly adopted sewing pattern and learns to map garments from 2D panels, i.e. front and back panels, to 3D surfaces. The author first deforms the single-layered garments in rest pose according to SMPL skinning function, followed by the corrections of interpenetration between layers of garments. Experiments show the effectiveness and efficiency of the method. The author further shows some possible applications such as garment reconstructions from images and garment editing.
Strengths: * An efficient method mapping implicit 2D sewing patterns to 3D garments with good performance.
* The qualitative results of draping multi-layered garments show less interpenetration.
* The method is differentiable and is able to be applied to inverse problems.
Weaknesses: 1. While this paper focuses on multi-layered garments, the experiments lack some quantitative results to support the effectiveness. For example, what is the rate of interpenetration between human body and multi-layered garments? What about the interpenetration between different layers of garments? Will different types of garments, such as dress or shirt, lead to different collision rate? How will the collision rates mentioned above change when the number of layers increase? In the meanwhile, please also provide more detail information about the test set for the above evaluation, such as the number of vertices or the resolution of the mesh.
2. As for efficiency, the author provides comparisons for the training process in Table 1. Could you provide similar comparisons for reconstruction time or model forward time during testing?
3. The distribution of the human parameters in the dataset is unknown. How many different human bodies are sampled? What is the distribution of different gender in both training and test set?
4. Is there any overlapping between training set and test set? How many unseen garments and human bodies are included in test set?
5. While the previous method mentioned at L62 is only able to drape multi-layered garments in T-pose, could the author provide some comparisons also in T-pose? Since the method in this paper supports different poses of human and previous method at L62 did very similar work, the comparisons in T-pose are needed to support the effectiveness of the method.
6. The writings can be further improved. Some descriptions and captions are not clear.
7. Please change the tone of the sentence at L3. As discussed in related work at L62, previous method is able to drape multi-layered garments.
8. The layering network at L191-221 seems like a post-processing step, making the multi-layered draping more like a simple extension of single-layered draping.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * For each garment, should we learn a unique latent vector z? Do we need to train different models for different garments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: While multi-layered settings are more challenging, this paper needs more experiments and further explanations to support the contributions in terms of multi-layered garments.
First, though the method shows better qualitative results, some more experiments are needed to support the strengths of the method in terms of multi-layered settings, the efficiency, and even the robustness.
More comparisons between the existing methods mentioned at L62 are needed to prove the effectiveness of this method.
Second, as for the dataset, the number of samples seems insufficient. The main concern is whether the limited data would lead to bias or overfitting and thus inflate the qualitative results. More quantitative results with clear explanations are needed.
Finally, the idea of getting interpenetration-free garments seems more like a post-processing step, where another model is trained specifically to move penetrated vertices out of the inner mesh. I am concerned about this idea, since it treats the multi-layered setting as a simple extension of the single-layered one, weakening the contributions in the field of multi-layered garments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews. Below are our responses.
1. *The experiments of layering.*
We conducted an experiment to further evaluate our layering model by measuring the intersection ratio between garment layers and between garments and the body. We generated 673 unseen bodies with unseen poses from [1] and shape parameters uniformly sampled from $[-2, 2]^{10}$. Each body was paired with 5 randomly generated unseen garments (1 skirt or 1 pair of trousers with 4 shirts), where the 1st layer consisted of the skirt or trousers.
The results, presented in Table 1 of the attached PDF file, display the intersection ratios. Diagonal values are the intersection ratio between the body and the $i$-th ($i=1,2,3,4,5$) layer, while other values are intersections between different garment layers. As prior works do not support layering, a direct comparison is impractical. The numbers inside and outside the bracket denote the results obtained with and without the layering procedure $\mathcal{D}_m$ respectively. The significantly lower intersection ratios with layering demonstrate the effectiveness of our model in handling intersections with the body and garments, even after layering five garments (~2%). The increase in intersections as more garments are layered is explained in L150-155 of the supplementary material, where we clarify the training of $\mathcal{D}_m$ is on single-layer draped garments, which are closer to the body. The average vertex numbers for shirts/skirt/trousers are 9.2k/9.4k/10.6k respectively.
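For illustration, the intersection-ratio metric can be sketched as below; the spherical SDF is a hypothetical stand-in for querying the actual body/garment meshes used in our evaluation:

```python
import numpy as np

def intersection_ratio(vertices, inner_sdf):
    """Fraction of a layer's vertices that penetrate the inner surface,
    i.e. whose signed distance to it is negative."""
    d = np.array([inner_sdf(v) for v in vertices])
    return float(np.mean(d < 0))

def sphere_sdf(p, radius=1.0):
    """Toy inner surface: signed distance to a sphere centered at the origin."""
    return np.linalg.norm(p) - radius
```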
2. *Similar comparisons about reconstruction time or model forward time during test.*
As stated in L247, the time reported in the left of Table 1 in the main paper refers to the reconstruction time (or model forward time) for the garments from the training set, rather than the time taken for the training process. Furthermore, the labels 'Train'/'Test' in Table 1 indicate that the results are evaluated on the training/test set respectively, and not refer to the training/testing process. As shown in Table 2 of the attached PDF file, the reconstruction time during testing is comparable to those in the left of Table 1 of the main paper.
3. *The distribution of the human parameters.*
We use SMPL to model the human body. Its usual parameters $\mathbf{\Theta}$ and $\mathbf{B}$ control the body pose and shape respectively.
Our training and evaluation protocol follows prior art [1,2], using the AMASS dataset. With 6519 poses for training and 673 unseen poses for testing, the dataset covers a wide range of human motions, including walking, running, jumping, arm/torso movements and dancing. Importantly, our self-supervised draping method, driven by a physics-based loss, eliminates the need of simulated or scanned garment data for training. This allows our model to easily generalize and extend to more pose data. During training and testing, we uniformly sample $\mathbf{B}$ from $[-2, 2]^{10}$.
SMPL offers 3 body models: female, male and neutral. While our study focuses on the female model, as do many prior works [2-5], there is nothing specific about it and our approach applies just as well to the other two.
4. *Is there any overlapping between training set and test set?*
No, there is no overlapping between training set and test set. As stated in L226-227 of the main paper, for the experiments of garment reconstruction, the test set comprises 20 shirts, 20 skirts, and 20 pairs of trousers that are not part of the training set (unseen). For garment draping, the test set contains 673 unseen $\mathbf{\Theta}$, along with $\mathbf{B}$ sampled from $[-2, 2]^{10}$ .
5. *Comparisons to [5].*
There unfortunately is no easy way to compare to [5] because the code is unavailable. In addition, it would only work in T-pose, whereas we generalize to arbitrary poses.
6. \& 7. *The writings and the tone of L3.*
We will revise our paper to make it clearer, and rephrase L3 to '*However, they are either unable to handle multi-layered clothing, which is prevalent in everyday dress, or restricted to bodies in T-pose*'.
8. *The multi-layered draping like an extension of single-layered draping.*
The layering network is indeed a straightforward extension of single-layered draping precisely because our single-layered model is designed to make this extension easy, which we view as part of our contribution. Single-layer draping provides a good starting point for our layering model. By predicting corrective displacement for vertices, rather than directly regressing vertex positions, we can ease the training process. This is important because our model is *self-supervised* using a physics-based loss, and directly regressing vertex positions would cause the training to collapse and not converge.
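As a toy illustration of the corrective-displacement idea, the sketch below pushes penetrating vertices outward along a numerical SDF gradient; this geometric stand-in is not our learned network $\mathcal{D}_m$, which predicts the displacements instead:

```python
import numpy as np

def resolve_penetration(vertices, inner_sdf, margin=1e-3, eps=1e-5):
    """Add a corrective displacement (not absolute positions) to each vertex
    that lies inside, or too close to, the inner surface."""
    out = vertices.astype(float).copy()
    for i, v in enumerate(vertices):
        d = inner_sdf(v)
        if d < margin:
            # Forward-difference SDF gradient ~ outward normal direction.
            g = np.array([(inner_sdf(v + eps * e) - d) / eps for e in np.eye(3)])
            g = g / np.linalg.norm(g)
            out[i] = v + (margin - d) * g   # displacement only, position kept otherwise
    return out
```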
9. *The latent vector and model for different garments.*
We use a single network to handle a wide range of garments of varying topology and geometry. Each garment is associated with a unique latent vector $\textbf{z}$ learned by auto-decoding, as in [6]. During training, the latent vectors for garments in the database are randomly initialized and optimized alongside the weights of our ISP network $\mathcal{I}_{\Theta}$ using backprop. For unseen garments, we randomly initialize a new $\textbf{z}$ and optimize it while keeping the network frozen, by fitting it to observations such as image segmentations or 3D meshes.
For our draping models $\mathcal{D}_s$ and $\mathcal{D}_m$, we learn a single generic network separately that can handle various garments rather than multiple garment-specific networks. This makes our approach more scalable, without having to train and maintain a separate network for each garment.
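As a toy sketch of auto-decoding an unseen garment's latent code: the network stays frozen and only $\textbf{z}$ is optimized to fit an observation. The linear `decode` and finite-difference gradients below are illustrative stand-ins for the frozen ISP network and backprop:

```python
import numpy as np

def fit_latent(decode, target, z_dim=8, steps=200, lr=0.1, eps=1e-4):
    """Optimize only the latent code z so that decode(z) fits the observation."""
    z = np.zeros(z_dim)                  # fresh init for an unseen garment
    for _ in range(steps):
        base = np.sum((decode(z) - target) ** 2)
        grad = np.zeros_like(z)
        for i in range(z_dim):           # finite-difference gradient w.r.t. z
            zp = z.copy()
            zp[i] += eps
            grad[i] = (np.sum((decode(zp) - target) ** 2) - base) / eps
        z = z - lr * grad                # decoder weights are never updated
    return z
```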
### References
*[1] N. Mahmood, et al. AMASS. ICCV 2019.*
*[2] I. Santesteban, et al. SNUG. CVPR 2022.*
*[3] L. Luigi, et al. DrapeNet. CVPR 2023.*
*[4] I. Santesteban, et al. VTO Garment Collisions. CVPR 2021.*
*[5] I. Santesteban, et al. Ulnef. NIPS 2022.*
*[6] J. Park, et al. Deepsdf. CVPR 2019.*
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thank you for your explanations.
However, I still find the technical novelty limited.
While the authors claim that "multi-layered draping" is one of the main contributions, two key questions remain unresolved, and the contributions to multi-layered settings are not well supported.
First, since ULNeF [1] is the most closely related work, which also tackles multi-layered garment draping with state-of-the-art performance in 2022, the authors should provide a fair comparison with ULNeF. However, no comparison between ULNeF and the proposed method is provided, even in the easier T-pose setting. Thus, the effectiveness of the proposed method cannot be established, and there is no evidence that it achieves better performance than existing work in terms of multi-layer draping.
Second, as the authors replied in Q8, the module designed for multi-layered settings is a straightforward extension of the work, while this idea of predicting displacements to reduce interpenetration is also widely applied in previous work such as GarSim [2]. Other work, such as [3], only regards the multi-layered setting as a simple extension rather than the main contribution, with a much easier way to extend to multi-layered garment animation. The straightforward extension is more like a post-processing step, which can also be achieved with numerical methods, as in TailorNet [4], to obtain interpenetration-free results.
In short, the paper is not ready due to the limited technical novelty. The straightforward extension (post-processing step) is not enough to be regarded as a main contribution.
[1] Santesteban, et al. ULNeF, NeurIPS2022
[2] Tiwari, et al. GarSim, WACV2023
[3] Meng, et al. MotionGuided, ToG2022
[4] Chaitanya, et al. TailorNet, CVPR2020
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedbacks.
First, we would like to point out that we have addressed all the reviewer’s comments, save the one about the comparison with ULNeF.
Second, we have to disagree with the reviewer's assessment of ULNeF: the authors provide neither code nor numbers on standard benchmarks. This makes a comparison almost impossible, and we view this as something lacking from the ULNeF paper. Furthermore, it only comprises results on a T-pose, without any evidence that the proposed approach would work under more challenging circumstances. In fact, the ULNeF authors themselves mention that the difficulty of extending their formulation to more complex poses is one of their limitations. By contrast, this limitation does not apply to us, and we show results in a much broader range of situations.
Regarding the layering, we propose a novel representation that enables us to simplify the complex problem of layering and treat it as a straightforward extension of the single-garment version of our approach. This is a valuable contribution because it provides a unified framework for handling multi-layered draping by converting the three-dimensional threading problem to the two-dimensional template level for processing, which does not appear in prior art. Furthermore, our proposed representation offers additional advantages beyond multi-layered draping. It is differentiable, and enables latent space interpolation and fitting to observations, which expands the applicability of our approach to garment generation, editing, and recovery tasks.
Finally, the method of [3] involves learning a garment-specific model using ground truth simulation data. It can be extended to cases with a limited number of garments---two in the paper---as long as ground truth data is available. However, handling a large collection of garments would require collecting simulation data for different garment combinations and train separate models for each case, which would be prohibitively costly and impractical. In contrast, our method leverages a single generic model and self-supervised learning that can handle a wide range of garments without the need for additional data collection or separate model training, which is another valuable contribution. | Summary: The authors address the task of draping individual multi-layer garments on human body models. In this context, they introduce a respective garment representation suitable for this task. Garments are represented as a set of individual 2D panels whose shape is defined based on a signed distance function (in more detail, the zero-crossing of a function with 2D location and latent vector specific to each garment as inputs). For each 2D panel, a 2D-to-3D mapping (conditioned on the 2D location and latent vector) is used to map the 2D panel to the 3D garment surface, while enforcing continuity across panels. In addition, draping networks are trained to allow draping multiple garments on human bodies (represented based on SMPL model) in different poses.
The authors provide quantitative and qualitative comparisons that indicate some potential of the proposed approach.
Strengths: Technical soundness:
- The approach seems reasonable and the results indicate some improvement over the competing techniques.
Evaluation:
- The authors provide quantitative and qualitative evaluations with comparisons to alternatives.
- The supplemental provides further experiments as well as limitations.
Exposition:
- The paper is well-structured and good to follow. Figures/tables and respective captions are informative.
- Text quality is good.
- The authors provide a comprehensive supplemental.
References:
- References seem ok, but I am not an expert in this field.
Weaknesses: Technical soundness:
- Table 1 only shows results for shirts. What are the results for the other categories like?
- The variations in the used dataset are not discussed in detail. More details on this would be interesting and allow a better assessment on the potential and remaining challenges of the presented approach.
Evaluation:
- The evaluation could have been improved by also showing examples or highlighting what works particularly well in comparison to other approaches and what still remains challenging for the presented method.
- Providing a visualization of the spatial distribution of the Chamfer distance across the surface would show where the errors are larger and where the errors are small. This could help to show the accuracy a bit better, also when comparing to others.
- The user study is quite simple regarding the fact that only direct preferences have been checked for. More detailed subquestions could have provided insights on what exactly the participants in the study did not yet like and where further improvement is needed.
- For Figure 6, zoom-ins could be used to highlight improvements/differences between methods.
Reproducibility:
- The paper presents a relatively complex system. This might complicate reproducibility. It is not clear whether code and data will be made available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address my comments under ‘Weaknesses’.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and failure cases have only been discussed in the supplemental.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your valuable reviews. Below are our responses to your comments.
1. *Table 1 only shows results for shirts. What are the results for the other categories like?*
The results for trousers and skirts can be found in Table 1 and Table 2 of the supplementary material respectively, where our method shows similar performance with higher accuracy and faster inference speed than UDF.
2. *The variations in the used dataset are not discussed in detail.*
We apologize for the oversight regarding the detailed discussion about the dataset in our paper.
For the generation of sewing patterns and corresponding 3D garment meshes, we used the software developed by [Korosteleva2021]. Our dataset encompasses three garment categories: shirts, skirts, and trousers. In [Korosteleva2021], shirts are parameterized by \{length, width, hem width, collar width, front/back collar depth, sleeve connection width, sleeve opening width, sleeve length\}, while skirts and trousers are parameterized by \{length, width, front/back curve\} and \{length, crotch depth, hem width\}, respectively. The values of these parameters were uniformly sampled from pre-defined ranges.
To create our training set, we generated 400 shirts, 300 skirts, and 200 pairs of trousers. Additionally, we prepared a separate testing set, consisting of 20 shirts, 20 skirts, and 20 pairs of trousers. In the supplementary material, Figures 2, 3, and 4 showcase the variations in garment shape and style that were incorporated into the dataset.
In line with the methodology described in [Santesteban2022], we relied on the AMASS dataset [Mahmood2019] for training and evaluating our draping models. The AMASS dataset offers a diverse range of poses, containing activities such as walking, running, jumping, arm movements, torso movements, dancing, and more, which ensures comprehensive coverage of various human motions.
3. *The evaluation could have been improved by also showing examples or highlighting what works particularly well in comparison to other approaches and what still remains challenging for the presented method.*
We conducted comparisons between our method and prior approaches in garment reconstruction, draping, and recovery. Our results demonstrate that our method is more accurate and faster than UDF in the context of reconstruction. Additionally, our method is able to handle multi-layered draping, which is not achieved by any other existing works. A more detailed comparison and comprehensive analysis can be found in the supplementary material (Sec. 1 and 4), where we provide further insights, limitations, and discussion of failure cases.
4. *Providing a visualization of the spatial distribution of the Chamfer distance.*
The visualization of the spatial error distribution can be found in Figure 1 of the attached PDF file. In this figure, we compare the error distribution between our reconstructions and those produced by UDF. Our reconstructions exhibit lower error across the entire surface compared to UDF's. We will include this figure in the revised version of our paper.
5. *The user study is quite simple.*
True, but the simple interface and instructions we used allowed us to query many users from diverse backgrounds at reasonably low cost. In future research, we will work on collecting finer-grained feedback while keeping user friction low.
6. *For Figure 6, zoom-ins could be used to highlight improvements/differences between methods.*
We will revise Fig. 6 for better visualization of the improvements.
7. *Reproducibility.*
As mentioned in Line 44, we will release our code and trained models.
8. *Limitations and failure cases have only been discussed in the supplemental.*
We will revise our paper to incorporate the discussion of limitations and failure cases, as shown in Sec. 4 and Fig. 21 of the supplementary material, in the main paper.
### References
*M. Korosteleva and S. Lee. Generating Datasets of 3D Garments with Sewing Patterns. In Advances in Neural Information Processing Systems, 2021.*
*I. Santesteban, M.A. Otaduy, and D. Casas. SNUG: Self-Supervised Neural Dynamic Garments. In Conference on Computer Vision and Pattern Recognition, 2022.*
*N. Mahmood, N. Ghorbani, N. F. Troje, G. Pons-Moll, and M. J. Black. AMASS: Archive of Motion Capture as Surface Shapes. In International Conference on Computer Vision, pages 5442–5451, 2019.*
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: I thank the authors for providing respective explanations to clarify several of the raised aspects.
Regarding the answer to aspect 3, I want to stress that I saw the comparisons in the paper and supplemental for the review, but the way of visually highlighting the potential benefits for the respective examples is not as good as it could be, and I would also expect the respective demonstration across more examples.
Furthermore, when visualizing spatial deviations from the reference via the Hausdorff error, the authors should use a wider range in the color spectrum to better visualize deviations and depict respective results for more different examples and also for the results obtained with competing methods.
I still wonder whether the provided information on the user study allows sufficient insight.
Finally, most of all, I would be interested in the authors' comment regarding Reviewer LvBx's feedback on the technical novelty and the comparison to other multi-layer draping methods.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback.
1. For the figures depicting spatial deviation, we will change the color code and select more representative examples for better visualizations in the final version of our paper.
2. With regards to previous art, reviewer LvBx’s major concern seems to be with the ULNeF method. In our response, we point out that
- The fact that the authors provide neither code nor quantitative results on accepted benchmarks makes comparison extremely difficult. We would have to re-implement their full method.
- Even more importantly, the paper only shows results on T-poses and acknowledges that the method may not be suitable for other poses. Its limitations section states: “The proposed approach for VTO using ULNeFs has only been validated with garments in T-pose. The root of this limitation is the difficulty in extending the formulation based on covariant fields to more complex poses”. Our approach does not suffer from any such limitation.
3. Reviewer LvBx discusses additional approaches in their latest comment. We responded to that as well.
4. In response to another comment, our method is end-to-end differentiable and trainable for both the single- and multi-layer cases. | Summary: This paper introduces ISP, a novel system that can, for the first time, drape a 3D human body with multi-layer garments without the need for physics-based simulation. Several technical contributions address the key aspects of garment draping. To enable learning of garment sewing patterns, the authors cleverly formulate the 2D patterns with 2D signed-distance fields (SDFs) and label fields. An AtlasNet-style network maps the 2D sewing pattern to 3D, thereby draping clothing on the body. To achieve multi-layer draping, an intuitive mechanism derives a virtual repulsive force based on the 3D locations of draped garments, thereby resolving cloth collisions.
Extensive experiments are performed to characterize the model in garment reconstruction, draping, image-based recovery and editing. In all the tasks, ISP has a clear advantage over existing methods, showing its great potential for many downstream tasks in visual computing.
Strengths: I appreciate this paper for its strengths in many aspects:
- Significance of the problem. The paper addressed an important yet long-overlooked problem: automatic modeling, editing and perception of multi-layer clothing. This is a technically challenging problem. Being the first method tackling this (to my knowledge), this paper can potentially have a good impact in various relevant fields in both research and industry (fashion, movie-making etc.).
- Novelty. The technical contributions of this work are strong. It is clever design to leverage neural fields as a formulation for the seemingly discrete 2D sewing patterns, and to use AtlasNet for the draping problem -- both being creative ways to use recent powerful tools. The layer-wise draping is another great example of combining the power of CNNs with physics-inspired losses. In addition, the method provides a unified pipeline for garments of different topology, such that different garment pieces can be modeled in a standardized way -- another favorable property for machine-learning based approaches.
- Evaluation soundness. The experiments holistically validate the characteristics of the proposed model across multiple relevant tasks in vision and graphics, showing the versatility of the method: 3D garment reconstruction, garment draping, image-based garment reconstruction, and shape/texture editing. The implementation details and runtime benchmarking are well documented. The user study of the draping quality makes the study more compelling.
- Model performance. To my knowledge, this is the first learning-based work that can drape multi-layer clothing on the body and handle cloth interpenetration adequately. Even in the single-layered setting, the proposed method has advantages over prior work in terms of fewer cloth-body intersections, draping realism, reconstruction accuracy, and inference speed, all being important aspects of garment draping.
- Presentation. The paper is well-written, easy to follow, and technically sound. The math formulations are specific and clear. The illustrations are clear and complement the text well.
Overall, this is a very strong submission from which I've learned much, and I believe it can be highly impactful.
Weaknesses: I do not spot major weaknesses in this paper, only one observation. In the image-based garment recovery experiment (Sec. 4.4), although recovered garments are correct in their topology and coarse dimensions (e.g., length), they do not match the details (overall deformation and wrinkles) in the image. This limitation is inherited from previous methods such as DrapeNet. Perhaps in future work a combination with photometric losses could result in more accurate wrinkle estimation?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Out of curiosity, how would the model perform when applied to garments that are too complex to be cut into simple front and back pieces (such as a robe or a double-breasted suit)?
- Currently the garment shape vector z is a single, global one for each garment. Would it increase the representation power (hence helpful to reconstruction tasks) if the z-vector becomes pixel-aligned with the uv map, as done in e.g. NeuralActor (Liu et al., SIGGRAPH Asia 2021) and PoP (Ma et al., ICCV 2021)?
- Following the arguments in L.135, it is only necessary to deform the areas within the front and back panels instead of the full square of the UV map. However in single layer draping (L.184), the displacement MLP outputs the NxNx3 arrays, i.e. covering the entire square, which seems a bit inconsistent to me. To my understanding, in this case the values for out-of-panel areas are null or zero, is that right? If so, how to ensure that?
- Perhaps adding the dimensionality of z to Fig 2 to make it consistent with x?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The technical limitations are discussed in the conclusions and the authors provide an outlook to the solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. We address your questions and comments as follows:
1. *A combination with photometric losses can result in more accurate wrinkle estimation.*
True, incorporating such a loss could significantly enhance our current method. However, formulating an effective photometric loss is not easy: The appearance of a garment in a single image is influenced not only by its geometry but also by factors such as texture and shading. This is something we plan to address in future work.
2. *How would the model perform when applied to garments that are too complex to be cut into simple front and back pieces?*
As demonstrated in Section 3 of the supplementary material, our method naturally extends to sewing patterns with multiple panels. This allows us to effectively handle the modeling of garments that are more complex than those represented by just two panels, i.e. the front and back panels. Thus, by incorporating additional panels, our method can accommodate intricate garment designs and accurately capture their details. However, it is worth noting that most common garments can easily be modeled by only two panels, and our model excels at handling them.
3. *Pixel-aligned latent codes would increase the representation power.*
We agree that employing a pixel-aligned encoding strategy would enhance the representation power of our method. By adopting such a strategy, we would be able to preserve more intricate local details of the garment than with a single global latent code. On the downside, however, a model using pixel-aligned latent codes would (a) require training images, that is, a realistic rendering for each garment geometry, and (b) be more suited to 3D reconstruction from images and less to tasks such as fitting 3D scans.
4. *In single layer draping (L.184), the displacement MLP outputs the NxNx3 arrays, i.e. covering the entire square, which seems a bit inconsistent to me. To my understanding, in this case the values for out-of-panel areas are null or zero, is that right? If so, how to ensure that?*
Indeed, in our single layer draping model, the output is a $N\times N \times 3$ array representing the displacement $D_s$ that covers the entire square. To ensure that the out-of-panel areas, denoted as $x$, have zero values, we employ the signed distance function (SDF) of the corresponding garment. Specifically, for areas where the signed distance values are positive (i.e., $s(x,\textbf{z})>0$), we set $D_s(x)=0$. This corresponds to masking the output of the displacement network outside the panels, using the predicted SDF. We will clarify the paper.
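As a minimal illustrative sketch of this masking step (not our actual implementation; the disk-shaped panel and all array shapes here are invented stand-ins for the predicted SDF and the MLP output):

```python
import numpy as np

# Toy stand-in for the predicted 2D SDF s(x, z) of a panel on an N x N UV grid:
# negative inside the panel, positive outside (here the panel is a centered disk).
N = 8
ys, xs = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N), indexing="ij")
sdf = np.sqrt(xs**2 + ys**2) - 0.6

# Hypothetical raw displacement field D_s from the MLP, covering the full square.
rng = np.random.default_rng(0)
D_raw = rng.normal(size=(N, N, 3))

# Mask: set D_s(x) = 0 wherever s(x, z) > 0, i.e., in out-of-panel areas.
D_masked = np.where((sdf > 0)[..., None], 0.0, D_raw)
```

In-panel displacements are left untouched, so only the out-of-panel values are forced to zero by the predicted SDF.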
5. *Adding the dimensionality of z to Fig 2.*
We will revise Fig. 2 to include the dimensionality of $\textbf{z}$.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the clarifications. My concerns are addressed.
After reading the thorough discussions with reviewer LvBx, I realized that I wasn't aware of the ULNeF paper. It's true that there are certain similarities of the principles handling multiple layers of clothing -- hence compromising the task-wise novelty by a bit, but the concrete way of doing so is different. For example, the 2D SDF in combination with a 2D-to-3D mapping function in this paper is an interesting formulation. Therefore, I still believe that this paper's technical contributions can inspire future work in this direction and should be accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and acknowledgement of our contributions. | Summary: The article introduces a 3D clothing generation and draping method inspired by the traditional clothing production process, making notable contributions to the reconstruction of clothing, especially multi-layer clothing. Through two-dimensional to three-dimensional mapping, the method can reconstruct multiple pieces of clothing, edit clothing shape and texture, and perform clothing draping and collision detection. The feasibility of the method is demonstrated through training and validation on a synthesized dataset.
Strengths: - A method has been proposed to solve the problem of multi-layer clothing interpenetration, converting the three-dimensional interpenetration problem to the two-dimensional pattern level for processing, improving the efficiency and effectiveness of collision detection, and providing a technical innovation for the reconstruction of multi-layer clothing.
- A pipeline inspired by the designer's clothing production process has been proposed, providing a new approach for the reconstruction of multi-layer clothing.
Weaknesses: - According to the description in line 226, the training set (400 shirts, 300 skirts, and 200 pairs of trousers) and the test set (20 shirts, 20 skirts, and 20 pairs of trousers) are both composed of the same three types of clothing. Might so few types limit the generalization ability of the method?
- According to the description in Section 3.1, the style of clothes is controlled by latent-space interpolation. This means that the style types and directions of variation seem to be determined by the extent of the latent space, and the existing dataset size seems insufficient to support a rich enough latent space.
- According to the description in line 96, the stitching method of clothing seems to be pre-defined, which limits the generalization of clothing styles. If more flexible and variable stitching methods could be chosen, better universality might be achieved.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: According to the description in Section 2.5 of the supporting materials, it seems that a latent code is needed to recover the human body from the image. How was this latent code obtained?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No extra.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your acknowledgement of our contribution in cloth modeling and multi-layer clothing. Below are our responses to your comments and questions.
1. *Fewer types may lead to limitations in the generalization ability.*
We could train on more types if we had the data for them, which would improve the generalization abilities of our model. This expansion is feasible because, in industrial practice, the majority of garments are decomposed into flat sewing patterns, and our representation would effectively apply to such patterns.
2. *The existing data size seems insufficient to support a rich enough latent space.*
Our dataset is generated using the software of [Korosteleva2021] with sewing pattern parametrization. The specific parameters we use to generate shirts are \{length, width, hem width, collar width, front/back collar depth, sleeve connection width, sleeve opening width, sleeve length\}. We uniformly sample values from pre-defined ranges for these parameters, enabling us to generate the sewing patterns and their corresponding 3D meshes. We follow the same process to generate data for trousers and skirts, each with their own set of parameters. Finally, we get 400 shirts, 300 skirts and 200 pairs of trousers for training, and 20 shirts, 20 skirts and 20 pairs of trousers for testing.
While our current dataset captures variations in style and shape, we acknowledge that it may not cover the entire range of garment shapes and styles due to their vast diversity. However, this can be mitigated by designing and training on additional patterns. As stated above, expanding the dataset to encompass a broader range of garment variations would enhance the robustness and generalizability of our method.
Furthermore, as demonstrated in Section 3 of the supplementary material, our approach can be easily extended to sewing patterns with more panels. This implies that our method has the potential to model complex garments beyond those represented in the current dataset. We believe this scalability and adaptability further strengthen the applicability and value of our proposed method.
3. *The stitching method of clothing seems to be pre-defined.*
Indeed, the stitching information is built into the sewing patterns, but our implicit formulation allows us to handle garment types with different stitching patterns using a single neural network. The reviewer is right to point out that a more flexible stitching method could potentially enhance the generalization of clothing styles. This is an interesting direction for future research.
4. *How was the latent code of the human body obtained from images?*
We utilize the SMPL model, which incorporates shape parameters $\mathbf{B}$ and pose parameters $\mathbf{\Theta}$, to model the human body in our approach. As stated in Lines 268-270 of the main paper, we estimate the parameters $\mathbf{B}$ and $\mathbf{\Theta}$ from the input images using the algorithm proposed by [Rong2021]. The latent codes for the garments are randomly initialized and then optimized with gradient descent to fit the observations, as in Eq. (11).
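As an illustration only, this fitting procedure can be sketched with a toy linear decoder standing in for our garment networks and the actual losses of Eq. (11); the decoder, dimensions, and learning rate below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(10, 4))   # hypothetical differentiable decoder (linear stand-in)
z_true = rng.normal(size=4)
obs = A @ z_true               # synthetic observations to be fitted

z = rng.normal(size=4)         # randomly initialized latent code
lr = 0.02
for _ in range(5000):
    grad = 2 * A.T @ (A @ z - obs)   # gradient of ||A z - obs||^2 w.r.t. z
    z -= lr * grad                   # gradient-descent update of the latent code
```

The real objective replaces the linear map with the garment networks and image-based terms, but the optimization over a randomly initialized latent code proceeds the same way.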
### References
*M. Korosteleva and S. Lee. Generating Datasets of 3D Garments with Sewing Patterns. In Advances in Neural Information Processing Systems, 2021.*
*Y. Rong, T. Shiratori, and H. Joo. Frankmocap: Fast monocular 3d hand and body motion capture by regression and integration. In International Conference on Computer Vision Workshops, 2021.* | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their valuable suggestions and constructive comments. We have carefully considered all of the suggestions and concerns raised by each reviewer and responded to each of them below. We will implement these suggestions in our revised paper.
The attached PDF file contains the figure, illustrating the spatial error distribution as suggested by Reviewer 5F53, and the tables presenting the experimental results as recommended by Reviewer LvBx.
Once again, we sincerely thank all reviewers for their expertise and the time they spent in reviewing our paper.
Pdf: /pdf/e7cb00b796dc76f6a4a8f158ecf39e1cfd2bdbfc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Solving Inverse Physics Problems with Score Matching | Accept (poster) | Summary: The authors leverage the framework of score matching, which has become popular for training diffusion-based models for generative tasks, to reverse physical processes defined by forward stochastic differential equations (SDEs). Given a system state at time t=T, they propose to solve for the initial conditions at t=0 by iteratively applying a reverse-time diffusion process defined by a reverse physics simulator, a diffusion term, and the score of the data distribution. In the main contribution of the paper, the authors introduce both a 1-step and multi-step loss for training a network to learn the diffusion and score terms for solving the reverse SDE. They prove the equivalence of their proposed training objectives to vanilla denoising score matching (and a related variational objective in the multi-step case). Extensive experiments demonstrate the efficacy of their method compared to baselines, and ablation studies show the utility of the proposed multi-step objective.
Strengths: ### Originality
To my knowledge, this is the first work that considers the drift term in the typical forward SDE used in diffusion-based generative models to be a realistic physical process. Other works [1,2] have considered different diffusion processes besides additive Gaussian noise, but even these use artificial processes such as synthetic blur and pixel masking. I appreciate that the authors of the current paper identify the connection between physically-defined differential equations and those used in training score-based models, and propose a novel method for solving physics problems.
### Quality
- The authors perform extensive experiments grounded in real-world physical processes and show the effects of varying numerous design choices in each experiment
- The proposed approach demonstrates superior performance compared to baselines across various settings
- Ablation studies show the utility of the multi-step objective over the 1-step variant
- The figures are well-made, particularly Figure 1 which clearly lays out the key ideas of the proposed approach
### Clarity
- The introduction does a good job of laying out the current state of diffusion-based generative models, the connection to physics processes, and the contribution of the current work
- Clear descriptions of the problem setup, model training and inference, and hyperparameters are given for each experiment
### Significance
Physics-based inverse problems arise in many fields such as astronomy, geophysics, and wireless communication, so finding better solutions is a highly significant problem with broad interest. Furthermore, quantifying the uncertainty of these solutions is crucial to the downstream decision-making process. The authors of the current paper provide a principled approach to both solve and provide uncertainty estimates (using multiple samples from the SDE) for inverse problems in physics.
### References:
[1] G. Daras, M. Delbracio, H. Talebi, A. Dimakis, and P. Milanfar, “Soft Diffusion: Score Matching with General Corruptions,” Transactions on Machine Learning Research, 2023, [Online].
[2] A. Bansal et al., Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. 2022.
Weaknesses: - I believe that the loss in Eq (2) is incorrect. The quantity within the squared L2 norm should be $x_m$ minus the quantity on the right-hand side of Eq (1). In its current form, Eq (2) does not match Eq (3) when Eq (3) is expanded with window S=2.
- While the thorough experimental details are appreciated, I believe some of that can be pushed to the appendix. More space should be dedicated to expanding on motivation and design choices. As it currently stands, section 2 reads as a constant flow of information with insufficient context for the proposed objectives.
- I believe that the organization of the sections could be improved by introducing the typical score matching SDE subject matter first, identifying the differences between that formulation and the physics-based inverse problem formulation, then motivating your proposed approach within this context.
- An obvious concern to me regarding the multi-step training is the huge memory expense arising from the recursive calls to $s_{\theta}$. However, the authors did not address this point in the main paper.
- The authors state that, for decreasing time step $\Delta t$, the reverse physics simulator is equivalent to the negative of the forward simulator (line 113). However, it is not clarified whether this is a simplifying assumption, true in general, or true in the specific case of the reverse-time ODE. The authors expand on the specific choices of the reverse simulators in the experiments section, but the relationship between the forward and reverse simulators remains unclear to me.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - What is your reasoning for training your model with the 1-step loss between $x_m$ and $x_{m-1}$ instead of $x_m$ and $x_0$ (as in typical diffusion-based generative models)? As I understand it, the forward and reverse physical processes are not time-dependent, and arbitrarily large time steps can be taken by simply increasing $\Delta t$. Intuitively, this method makes more sense to me than your proposed approach, and would also remove the need for multi-step training.
- How did you deal with the memory expense during multi-step training? Did you find that you needed to use smaller data and network sizes to fit the gradients in memory, or were there tricks and optimizations you used to reduce memory use?
- One concern is that your model will overfit to the physical parameters at training and perform poorly if there is a test-time distribution shift. How robust is your proposed approach to test-time shifts in the SDE parameters (namely, the coefficients of the simulator and diffusion terms)? I understand that there is limited time for responses, but a small experiment would be appreciated and may convince me to raise my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors mention limitations in their conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- We restructured our paper based on the feedback we received. In particular, we moved the related work section forward so that it follows the introduction and moved parts of the experimental details to the appendix to improve overall readability.
- **Memory expense**: Possible solutions to save memory are gradient cutting and gradient checkpointing. We have tested the effects of gradient cutting for the heat equation task. In this experiment, we unrolled the entire simulation with the multi-step loss but stopped backpropagating the gradient after n steps; n = 32 corresponds to the entire simulation trajectory. Our results show that there are no further performance benefits when backpropagating the gradient for more than 8 simulation steps for the ODE inference.
| Gradient cutting after | Avg. reconstruction MSE |
| ----| ----------- |
| 1 | 4.6e-5 |
| 2 | 1.4e-5 |
| 4 | 1.0e-5 |
| 8 | 0.8e-5 |
| 16 | 0.8e-5 |
| 32 | 0.8e-5 |
- For the evaluation of the tasks in the paper, we trained with no specific optimizations of the memory consumption. The most expensive task in terms of memory is the buoyancy-driven flow with obstacles experiment for which we used a single A100 GPU with 40GB of memory. Our neural networks are very small compared to architectures for diffusion models and generative modeling and have a size that’s comparable to typical learned correction approaches, cf. e.g., [Um et al. 2020, Kochkov et al. 2021].
- **Reverse-physics simulator**: In general there is no reverse simulator for larger time scales, as information can be lost and the initial state might not be possible to reconstruct. If we solve the underlying PDE iteratively, this can be described by the update rule $x_{t+1} = x_{t} + \Delta t$ PDEupdate$(x_t)$. Then, we may approximate $x_{t} \approx x_{t+1} - \Delta t$ PDEupdate$(x_{t+1})$. So locally we can identify the reverse simulator with the negative of the forward simulator. In our experiments with non-learned simulators, we use the implementation of the forward simulator to obtain the reverse simulator by using a negative step size $\Delta t$. We included additional comments about this in the main paper and expanded upon the relationship in the appendix.
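As a toy illustration of this local inversion (a 1D heat equation with explicit Euler steps; this is illustrative code, not our actual simulators), reversing a step with a negative step size recovers the previous state up to an $O(\Delta t^2)$ error:

```python
import numpy as np

def pde_update(x):
    # Discrete 1D Laplacian with zero-flux boundaries (toy heat-equation RHS).
    return np.concatenate(([x[1] - x[0]],
                           x[2:] - 2 * x[1:-1] + x[:-2],
                           [x[-2] - x[-1]]))

def step(x, dt):
    # One explicit Euler step x_{t+1} = x_t + dt * PDEupdate(x_t);
    # dt < 0 gives the local reverse simulator.
    return x + dt * pde_update(x)

rng = np.random.default_rng(1)
x0 = rng.normal(size=32)
dt = 1e-3
x1 = step(x0, dt)        # forward step
x0_rec = step(x1, -dt)   # reverse step via negative step size

# The inversion error shrinks quadratically with the step size.
err_small = np.max(np.abs(x0_rec - x0))
x1b = step(x0, 10 * dt)
err_large = np.max(np.abs(step(x1b, -10 * dt) - x0))
```

Here `err_small` is far below `err_large`, matching the statement that the identification only holds for decreasing $\Delta t$.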
- **1-step loss between $\mathbf{x}_0$ and $\mathbf{x}_m$**: In theory it would be possible to consider the 1-step loss between $\mathbf{x}_0$ and $\mathbf{x}_m$ and compute the trajectory in a single step. However, in practice there are numerical issues that need to be considered. In standard diffusion models, the score is easy to compute given sample and noise, but in our case, we still require an ODE solver. Our method is described from the viewpoint of Euler steps. However, other methods for time integration could be considered, which we leave to future work. Importantly, there are considerations regarding the implementation of the reverse-physics simulator. We found that many small steps + correction for each step is numerically significantly more stable than a single big step with one correction only.
- **Test-time distribution shifts**: We have tested the effects of test-time distribution shifts for the heat equation experiment. Here we train the score network for a specific combination of diffusivity and noise and vary both parameters for testing (always updating both the simulator and the test ground truth); see the PDF in the global response. Overall, for small changes of the parameters, there seems to be very little overfitting. Changes in the reconstruction MSE and spectral error can mainly be explained by the task itself becoming easier or harder, to which our network generalizes nicely (e.g., less noise or higher diffusivity -> smaller reconstruction error).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for your detailed responses as well as for your additional experiments. As I stated in my review, I will raise my score as I believe that you have sufficiently answered most of my questions. | Summary: The paper proposes using score matching to learn the backward process of a given forward SDE, notably coming from a physics application. The paper states that this can be used to simulate backward the distribution of the initial condition given the end state, by starting from the end state and drawing trajectories backward according to the backward SDE. This can be of interest in situations where the physics of a system is not invertible, as the authors point out is the case for most of the macroscopic state governing equations.
The authors propose two inference procedures, a SDE (sampler) and an ODE (deterministic) and test in 4 different settings, with different levels of complexity.
Strengths: *The paper exploits an interesting analogy between the current score matching framework used for generative modelling and SDEs coming from physics and the inverse problems associated with them. The idea is that the forward process does not need to be a diffusion and therefore we can learn the backward process for a given SDE to sample a trajectory backward. This is indeed an interesting analogy that can lead to several interesting applications in the future.
*The numerical examples are clear and nicely illustrated. The numerical examples show that the current framework works better than simply running the simulator backwards or simply approaching it by a neural network.
Weaknesses: * In line 70 the paper states that its goal is to sample from the posterior distribution $p(x_0 | x_M^*)$ where $p$ corresponds to the forward physical process. But, during the remainder of the paper, one of the proposed algorithms (SMDP-ODE) cannot be considered a sampler from $p(x_0 | x_M^*)$, since it is a deterministic function of $x_M^*$. As shown by [1], what holds is that the ODE pushes forward the distribution $p(x_M)$ to a distribution that converges (weakly) to $p(x_0)$, so it is not clear what the actual point is of starting the ODE from a given (fixed) $x_M^*$. In the conclusion the authors do touch on the point that the ODE variant is not a sampler, but I feel that the presentation generates a lot of confusion for the reader during most parts of the paper.
* As I understand it, only the toy problem presents a multimodal posterior. One would expect this kind of technique to be particularly useful in settings where multimodality of the posterior is present, so I would expect the authors to focus more on those cases. This is also reflected by the metrics being used (comparisons to the "true" $x_0$, either $RMSE$ or $LPIPS$), which would arguably make less sense in the case of multimodality and even in the case where the mode of the posterior distribution does not match the $x_0$ that produced the fixed $x_M^*$.
* The paper does not present any comparison with other inverse problem solvers, focusing only on score-matching-based approaches. Even though this is understandable to a degree, I feel there should be at least one comparison with a non-score-matching-based approach for solving inverse problems.
[1] Yang Song, et al. "Score-Based Generative Modeling through Stochastic Differential Equations." International Conference on Learning Representations. 2021.
Remarks:
* The posterior distribution is clearly defined, as being the distribution given by
$\int p(x_{0:M-1}, x_M^*) dx_{1:M-1}$, therefore I find it strange to say "a" posterior distribution in line 136.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * There is a practical difference between training the proposed model and the standard denoising diffusion model [2]. In [2] the losses are calculated by sampling from $p(x_t | x_0)$, which can be written as $\mu_t(x_0) + \epsilon$ where $\epsilon$ is some gaussian noise. In the context of the paper this is not always possible and one needs to rely on a given set of paths. How does this impact the training? Is it possible to construct the same kind of training objective in a way that does not depend on unrolling the paths?
* The values in Table 1 are counter-intuitive as far as I'm concerned. Why do the methods '1-step' and 'SSM-VR' seem to achieve the best posterior metric Q for smaller datasets in both ODE and SDE?
* Is the use of an inverse step solver $P^{-1}$ instead of $P$ motivated only by the numerical cost of running $P$?
[2] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems (pp. 6840–6851). Curran Associates, Inc..
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- **SMDP-ODE and $p(\mathbf{x}_0|\mathbf{x}_t)$**: We updated section 2.3 to more clearly distinguish the different design choices for the ODE and SDE samplers. Since the inference of the ODE method is similar to the trajectories of the multi-step loss based on maximum likelihood training, it can be regarded as a maximum likelihood solution.
- **Multimodal posterior**: As correctly stated by the reviewer, our proposed method is capable of sampling from multimodal posteriors similar to diffusion models. We have tested this extensively for the 1D toy problem. In more challenging high-dimensional problems, it is often difficult to obtain a ground truth posterior to which we can compare the posterior we obtain by sampling from the SDE if there is multimodality. Additionally, even if the ground truth posterior is known, defining a good metric to compare both high-dimensional posteriors for simulation data is non-trivial. Nonetheless, the heat diffusion experiments make a step in that direction as in our experiments the information from higher frequencies is lost and must be generated by the network. As demonstrated by the low spectral error, our network correctly synthesizes structures on smaller scales that match the ground truth data distribution.
- **Additional baselines**: We have included two additional baselines for the buoyancy-driven flow with obstacles experiment. The first baseline is based on classical optimization with differentiable physics and the L-BFGS algorithm [1]. The second baseline is a hybrid approach that combines a standard diffusion model with gradient-based optimization [2]. Both of them represent very strong baselines and highlight the effectiveness of our method in challenging high-dimensional inverse problems.
- **Standard denoising diffusion model training and relying on a given set of paths**: In principle, it is not necessary to rely on a specific set of paths. In standard diffusion models, it is easy to sample points at a specific time index from the space of paths that connect the data distribution at t=0 to the noise distribution at t=T, due to the simplicity of the underlying SDE. In our context, sampling only the points available in the dataset would correspond to this approach and resembles the 1-step loss. However, for standard diffusion models the paths can be generated easily during training, and in theory an infinite number of them can be considered, whereas in our scenario the paths can neither be easily generated nor expanded. In that sense, the multi-step loss increases the number of training points by generating additional possible paths from individual points in the training dataset, using the physics simulator and the current score model parameters. If we consider extremely large training datasets, as done for standard diffusion models in computer vision, we expect the 1-step and multi-step losses to attain similar performance. Such large, high-quality, and diverse datasets are, however, rarely available for more specific scientific applications.
- **Counter-intuitive values in Table 1**: The training and evaluation of this task are somewhat noisy. The posterior metric is very sensitive to the score field in a specific region (where the paths from -1 or 1 merge or separate). As a result, even though the training converges and losses decrease, the metric Q can be close to 0 due to an imbalance between the predicted classes (in these cases the evaluation yields paths for either -1 or 1 and struggles with the multimodality in this example). The multi-step loss is much less noisy than the other training methods here.
- **Inverse step solver $\tilde{\mathcal{P}}^{-1}$**: For all experiments where the physics simulator is not learned (experiments 1-3) we use the existing implementation of the forward solver to derive the reverse solver by changing $\Delta t$ to $-\Delta t$. This is not motivated by the numerical cost. When pretraining the inverse solver in experiment 4, we specifically use a solver pretrained for the inverse problem.
[1] Thuerey et al. "Physics-based deep learning" arXiv:2109.05237
[2] Chung, Hyungjin, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. "Diffusion posterior sampling for general noisy inverse problems." arXiv:2209.14687
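As a minimal sketch of the $\Delta t \to -\Delta t$ construction for deriving the reverse solver (our own illustration with hypothetical names, not the authors' solver code): an explicit Euler forward step can be reused as a reverse step by negating the time increment, with a round-trip error of $O(\Delta t^2)$ per step.

```python
def forward_step(x, dt, drift):
    """One explicit Euler step of the forward physics: x + dt * f(x)."""
    return x + dt * drift(x)

def reverse_step(x, dt, drift):
    """Reverse solver obtained from the same implementation by
    flipping the sign of the time step (dt -> -dt)."""
    return forward_step(x, -dt, drift)

drift = lambda x: -2.0 * x               # hypothetical linear decay drift
x0 = 1.0
x1 = forward_step(x0, 0.01, drift)       # forward in time
x0_rec = reverse_step(x1, 0.01, drift)   # backward in time

assert abs(x0_rec - x0) < 1e-3           # round trip agrees up to O(dt^2)
```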
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
In order to ensure the quality of the overall evaluation, please acknowledge the authors response and indicate whether you want to update or keep your original evaluation. This paper is in the borderline range and it would be helpful to have your feedback to make an informed decision.
Thanks again for your time and effort.
The AC. | Summary: The authors propose diffusion-based inverse problem solvers involving the temporal evolution of physics systems. The method utilizes a combination of a score function and an inverse physics simulator, which corresponds to the reverse of the drift term in diffusion models, to move the system's state backward in time. They demonstrate the effectiveness of their method on a wide range of inverse physics problems.
Strengths: The authors propose a multi-step loss to capture long-range dependencies in physical systems, which has potential implications for general diffusion models.
The experiments are conducted extensively.
Weaknesses: The method is a modification of the diffusion model that adopts an inverse physics simulator in order to address the nonlinear drift term of the physical system. In this regard, its novelty is limited and its applicability may be restricted to certain scenarios.
Minor/errata
Hats are missing in equation (4)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In the evaluation of this method, is it acceptable not to compare it with conventional methods such as finite element methods?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: It is suggested to include the future direction of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- **Conventional methods as baseline**: We evaluated an additional baseline which represents a more traditional method for solving inverse problems for the buoyancy-driven flow with obstacles experiment. This baseline is based on classical gradient-based optimization for differentiable physics and the L-BFGS algorithm, see [1]. Results can be seen in the pdf attached to the global response.
- **Future directions**: We now included future directions and extensions of this work in our conclusion section. In particular, we are interested in enhancing the trajectories and obtaining solutions by including gradient-based optimization during inference.
[1] Thuerey et al. "Physics-based deep learning" arXiv:2109.05237
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
In order to ensure the quality of the overall evaluation, please acknowledge the authors response and indicate whether you want to update or keep your original evaluation. This paper is in the borderline range and it would be helpful to have your feedback to make an informed decision.
Thanks again for your time and effort.
The AC.
---
Rebuttal 2:
Comment: I have carefully reviewed the other reviewers' comments and the authors' responses. Although I didn't recognize it earlier, this work potentially carries a significant contribution, being the first diffusion-based approach to partial differential equations. I also acknowledge the authors' proactive efforts to address the various concerns raised.
However, an important factor influencing my assessment, which I hadn't recognized and thus didn't express in my initial review, is my limited familiarity with physical systems. As a result, I find it difficult to raise my evaluation of this paper, but I lower my confidence score in order to ensure a fair decision process.
As mentioned earlier, this work has potential impact for researchers working on diffusion models. If you introduced physical systems in more detail in the revised version, it could greatly capture the interest of researchers in the diffusion model domain. I look forward to an updated version that provides additional insights. For example,
- Why are the PDEs in the experiments important? What are the implications for the community of computing reverse simulations accurately?
- How meaningful is the inclusion of additional stochastic terms in the heat equation? Is it common practice to introduce stochastic terms into PDEs?
- Given that this work employs real physical time ‘t’, in contrast to diffusion generative models that use imaginary ‘t’ to smooth complex distributions, does the inclusion of ‘t’ as input in the score function hold significance? Consider a trajectory $x(t), t \in [0, T]$ of a physical system. If we encounter another scenario with initial condition $x(s)$, is the corresponding trajectory $\{x(t+s)\}_{t\in[0,T-s]}$? Can we infer that $s(t,x)=s(t-s,x)$?
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their suggestions and for acknowledging the paper's contributions. We believe that the additional questions asked by the reviewer are very insightful, and we try to give brief answers here:
1. Aside from possible general improvements for diffusion-based generative modeling, where a more general class of drift functions (based on all kinds of PDEs) coupled with, e.g., the training setup of this paper could improve the performance, speed of inference and quality of conditional samples, there are many applications in the area of scientific machine learning. Including the physics simulator in the diffusion process enables a general framework to tailor generative modeling and conditional sampling to specific, challenging inverse physics problems, e.g., in astronomy, geophysics or climate science. This yields very effective models that can be used to obtain high-quality samples for inverse problems and uncertainty quantification. As these models include a simulator as an inductive bias, they are able to produce accurate and high-quality samples even in areas with limited training data.
2. In our experiment with the heat equation, we included additional stochastic terms to easily embed the heat diffusion PDE in the mathematical framework of SDEs and diffusion models. An advantage of this is that it theoretically implies that the distribution of samples from inference matches the correct posterior. The weight of the stochastic term was very small and had little to almost no effect on the overall dynamics of the PDEs. To our knowledge, similar approaches for other PDEs in the context of uncertainty quantification and diffusion models have not been considered so far, but we believe that a thorough analysis of this would be of great interest to researchers and leave it as a subject to future work.
3. This question proposes a very interesting experiment. We have thought about omitting the time as input to the score model; however, in the experiments considered in this paper the distribution of states $p_t$ at a specific point in time $t$ was always different than the distribution of states at a different point in time (e.g. the smoothness of states from $p_t$ in the heat equation changes with $t$). In these cases, additional information about the time $t$ will help the model. On the other hand, omitting $t$ could potentially improve the generalization capabilities of the model. It could also be very useful in exactly the cases mentioned by the reviewer, where $p_t \approx p_{t'}$ but $t \neq t'$. We leave additional experiments and an extended analysis of this to future work. | Summary: This paper proposes a diffusion-based unrolled strategy for learning to solve ordinary differential equations (stochastic or not). After introducing the problem at stake and proposing two training strategies for solving it (namely a 1-step loss approach and a multi-step approach), the authors draw a theoretical parallel with denoising score matching and with probability flow ODEs. More precisely, the authors claim an equivalence between training an architecture with the proposed 1-step loss and training the same architecture with a score matching objective; and an equivalence between minimizing the multi-step loss and maximizing a variational lower bound. Finally, the authors investigate the performance of the proposed approach in 4 setups, including deterministic and stochastic problems, ranging from a simple 1D experiment to a Navier-Stokes simulation.
Strengths: - The paper investigates an original idea proposing to link traditional 1-step or multi-step losses in trajectory estimation for dynamical systems with diffusion and score matching. To the best of my knowledge, this idea is new in the literature and makes a link between traditional training approaches and diffusion processes.
- The authors perform a large number of experiments to support their claims.
Weaknesses: - Albeit well written, the article is difficult to follow due to sections that are not always well organised and a very large amount of information.
- The chosen baselines do not seem very relevant, because they do not rely on standard losses for training denoising diffusion architectures for inverse problems.
- There is a potential problem with one of the theoretical results (Theorem 2.1)
- The literature review is not sufficient; while important and recent works are appropriately cited, references to physics-informed diffusion approaches for inverse problems are lacking.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The following points list my concerns / questions / suggestions for the authors.
**1. Main concerns**
**1.1 Difficulties to follow the article** My main concern is that the paper is very difficult to follow. While overall well written, I tend to lose track of what the authors are aiming to do. For instance, in "method overview", the problem is not clearly stated. In my view, equation (6) of the main paper is the model that the authors aim to solve throughout, but it comes in the middle of a section explaining score matching. In fact, the problem setting description from the appendix is rather clear while the first paragraph of section 2 is not. There is a numerical update rule, but where does it come from? What is P? What is x? How are they related? Maybe if (6) had come in the introduction, the "Problem formulation" from section 2 would have been clearer.
**1.2 Comparison with baselines** While the theoretical explanations and intuitions link quite clearly denoising score matching with the problem of interest, the case where a network $s_{\theta}(x, t)$ is trained as a denoiser with a time embedding in a diffusion fashion does not seem to be present in the experiments (in ISM and SSM, $s_\theta(x, t)$ are not trained as denoisers if I understand correctly; please correct me if I'm wrong). Moreover, these architectures do not scale to higher dimensional problems (see line 225). As such, I wonder whether these baselines provide fair comparisons. Given the embedding of $s_\theta(x, t)$, I would suggest to use a method relying on denoising diffusion such as [HJA20]. If not applicable, maybe Diffusion Schrodinger Bridge would be a strong baseline to compare to [1]. Furthermore, given the similarities between the problem the authors want to tackle and the one from [1], I think adding a brief explanation on how the problems / approaches differ might be welcome.
**1.3 PDE literature** I have difficulties with the relations to other works. In my opinion, section 4 comes too late and should come much earlier, probably before section 2. More precisely: in "Learned corrections for numerical errors", a more detailed review of learning-based methods for solving PDEs / stochastic PDEs would be helpful.
**1.4 Lack of references to physics informed diffusion approaches** In general I believe that a large part of the imaging inverse problems literature is not mentioned. While this is not the core topic of the paper, it remains interesting in its own right since many papers have proposed methods for incorporating a measurement operator (or P(x) in the authors' words) within diffusion models, thus making the diffusion process "physics aware", which is precisely what the authors want to do here. A cornerstone reference, which the authors included, is [Chu+22]. However, the authors mention that this work performs uncertainty quantification, and state "either focus on the denoising objective common for tasks involving natural images, or the synthesis process of solutions does no directly consider the underlying physics.": I disagree with this, see e.g. Figure 4 of the paper. If this reference does not convince the authors, here are other references where underlying physics / acquisition procedures are taken into account in a diffusion process: [2, 3, 4, 5]. Note also that an extensive literature in the inverse imaging literature has focused on a similar approach to your multistep loss, via architectures known as unfolded architectures incorporating the physics model inside the architecture [6].
**1.5 Proof of Theorem 2.1** I wonder whether the proof of $\Rightarrow$ is correct. My concern is with the particular sentence: "let $\theta^*$ denote a minimizer such that $\mathcal{L}(\theta) \to 0$ as $\Delta t \to 0$. Note that at least one minimizer exists as we can choose $s_{\theta^*}(x, t) = \nabla_x \operatorname{log} p_t(x)$." While I agree that a minimizer to this convex functional exists regardless of the nature of $s_\theta(x, t)$, I am not sure you can assume that the minimum value of the functional tends to 0 without any further assumption on the very nature of $s_\theta(x, t)$: take for instance a simplistic model that is not powerful enough to approximate $\nabla_x \operatorname{log} p_t(x)$...
**2. Additional painpoints**
**2.1 Title** IMO, there is a mismatch between the title and the article: the authors do not propose a denoising score matching method, but a radically different approach that they claim to be equivalent to score matching.
**2.2 Spectral loss** Why is a spectral loss necessary (line 241)? Aren't l2 (or l1, often more efficient) sufficient? Choosing a spectral loss seems unusual to me; maybe linking to some other papers using a similar loss would be welcome?
**2.3 Cumbersome notations** Notations are sometimes difficult to follow; maybe clarifying them would be useful. Some examples: around line 167, $p_1$ and $p_{-1}$ clash with $p_0$ and $p_T$. Around line 63, $\Delta t = t_j-t_k$, but then the authors use the convention $(t_m)_{0 \leq m \leq M}$. Why not replace $j$ and $k$ with $m+1$ and $m$? etc...
**2.4** Shouldn't the term inside the norm in eq. (2) be updated $x_{m+1}-x_m - \Delta t \cdots$?
**2.5** line 110 of the supplementary, should (9) not be (8) instead?
**2.6** Eq. (38) in supplementary: is the brownian term not missing?
**2.7** Table 1 from supplementary: DilatedConv --> Dil-ResNet?
**2.8** Line 426 of supplementary: 100% of what?
**2.9** line 565 of supplementary: define the exact expression for $s_1$ and $s_2$.
**References:**
[1] De Bortoli, Valentin, James Thornton, Jeremy Heng, and Arnaud Doucet. "Diffusion Schrödinger bridge with applications to score-based generative modeling." Advances in Neural Information Processing Systems 34 (2021): 17695-17709.
[2] Chung, Hyungjin, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. "Diffusion posterior sampling for general noisy inverse problems." arXiv preprint arXiv:2209.14687 (2022).
[3] Zhu, Yuanzhi, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, and Luc Van Gool. "Denoising Diffusion Models for Plug-and-Play Image Restoration." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1219-1229. 2023.
[4] Kawar, Bahjat, Michael Elad, Stefano Ermon, and Jiaming Song. "Denoising diffusion restoration models." Advances in Neural Information Processing Systems 35 (2022): 23593-23606.
[5] Rout, Litu, Negin Raoof, Giannis Daras, Constantine Caramanis, Alexandros G. Dimakis, and Sanjay Shakkottai. "Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models." arXiv preprint arXiv:2307.00619 (2023).
[6] Adler, Jonas, and Ozan Öktem. "Learned primal-dual reconstruction." IEEE transactions on medical imaging 37, no. 6 (2018): 1322-1332.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- **Structure of sections and extended literature review**: In line with feedback from other reviewers, we have moved the related work section to appear after the introduction. We also removed some of the experimental details and pushed them to the appendix. With the new space, we expanded the method overview section to include more explanations, similar to the extended method overview in the appendix, and we now already discuss the physical system as an SDE (equation 6) in the introduction. We also expanded the discussion of additional learning-based methods for solving PDEs / stochastic PDEs and physics-informed diffusion approaches in the related work. We hope that this improves the structure of the paper and facilitates following the main goals of the article while only making minor changes to the content.
- **Additional baselines and physics-informed diffusion approaches**: We have included additional baselines for the buoyancy-flow problem, which represents the most challenging task. In particular, we include direct numerical optimization with the L-BFGS method and the differentiable solver, as well as one of the recent physics-informed diffusion approaches [2] mentioned by the reviewer. Our method compares very favorably to these baselines for challenging tasks; see the pdf of the global response. The solutions obtained by classical optimization with L-BFGS and the differentiable solver attain a good reconstruction MSE, although still outperformed by SMDP. However, as can be seen in the visualizations, the solutions are far away from more natural flows contained in the training dataset. In addition to that, SMDP provides a significant speedup ($\sim$ 100x) since it does not rely on expensive gradient computations during inference. Also, our approach obtains better performance than [2], Algorithm 1 for this task. We believe that there are several reasons why our method performs significantly better than [2] in this situation. The training dataset is quite small for standard diffusion models as it comprises only 250 simulations. When training a DDPM to generate flow fields for a specific point in time, there are only 250 samples, which might not be sufficient. Additionally, the computation of the forward simulator $\mathcal{A}$ by the differentiable solver is very slow (>60 minutes for inference of a single simulation) and gradients backpropagated through many simulation steps are not very helpful anymore for the optimization. If we run [2], Algorithm 1 on this task, the initial state is too noisy and gradients are extremely high or contain NaNs. We therefore sample the first 900 steps with the standard DDPM algorithm and switch to [2] after that. However, this could degrade the performance of the method.
- For the toy problem, where we compare with ISM/SSM, we are mainly interested in methods that learn the score of a given arbitrary SDE. Here, it would be trivial to learn the mapping between the distribution of initial states at $t=0$ and the Gaussian distribution at $t=10$ with a standard diffusion model or the Diffusion Schrödinger Bridge method, but that was not the main objective of this task.
- **Proof of Theorem 2.1**: We included an additional assumption for this theorem that the hypothesis space corresponding to the model architecture $s_\theta(x,t)$ includes the correct score $\nabla_\mathbf{x} \log p_t(x)$. This should fix the mentioned issue.
- **Spectral loss**: Using a spectral error for scientific data is not uncommon, see e.g. Um et al. (2020). The spectral error facilitates evaluating how the prediction matches the statistical properties of the ground truth on different scales, which is not possible with the l2 error.
- **Notation issues**: We thank the reviewer for the detailed comments. We have fixed all mentioned issues in the main paper and supplementary where appropriate.
We believe these updates address the issues raised by the reviewer, and hence we kindly ask the reviewer to consider raising their score.
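To make the scale-by-scale spectral comparison mentioned above concrete, here is an illustrative sketch (our own simplified definition in NumPy; the paper's exact metric may differ) showing why a spectral error captures information that a pointwise comparison does not:

```python
import numpy as np

def spectral_error(pred, target):
    """Compare a prediction to the ground truth scale by scale:
    mean absolute difference between the FFT amplitude spectra.
    (Illustrative definition, not the paper's exact metric.)"""
    amp_pred = np.abs(np.fft.rfft(pred, axis=-1))
    amp_true = np.abs(np.fft.rfft(target, axis=-1))
    return float(np.mean(np.abs(amp_pred - amp_true)))

# A phase-shifted sinusoid has the same amplitude spectrum as the target,
# i.e. matching statistics on every scale, while white noise of comparable
# energy spreads its energy over all frequencies.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
target = np.sin(8 * x)
shifted = np.sin(8 * x + 0.5)          # same spectrum, different pointwise values
rng = np.random.default_rng(0)
noise = rng.standard_normal(256) * target.std()

assert spectral_error(shifted, target) < spectral_error(noise, target)
```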
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications; I am looking forward to reading the updated version of the paper.
**1.** I thank the authors for running experiments with [2]. I am not surprised that their method performs better due to both the small number of samples and the differentiability issues of $\mathcal{A}$.
**2.** A major difference holds between the pointed references (including [2]) and the work of the authors, namely: in [2] and others, $s_\theta(x, t)$ is trained to remove Gaussian noise from images. It is not trained to deblur / restore / inpaint etc, in other words, $s_\theta(x, t)$ is not shown any measurement operator during the training procedure, which is a strong advantage of [2] ($s_\theta(x, t)$ does not need to be retrained for new imaging inverse problems, and might generalize to inverse problems where almost no training data is available, just like the case studied by the authors). I believe that it is important to stress this difference in the paper.
**3.** I appreciate the effort to correct the assumptions of Theorem 2.1, but I am afraid that this is not sufficient since problem (4) is of course *not* convex in $\theta$ (unless very strong assumptions on $s_\theta$). Assume that $\nabla_x \log p_t$ belongs to the hypothesis space, there still exists local minimizers such that $\mathcal{L}(\theta) \nrightarrow 0$. I may very much be wrong, but I don't think the proof in its current form might be fixed as such. Instead, would it be possible to replace the $\mathcal{L}(\theta) \rightarrow 0$ by $\mathcal{L}(\theta) \rightarrow \varepsilon$ for some small $\varepsilon$? If this is not possible, I find that $\Leftarrow$ is a nice result that is sufficient in itself.
**4.** I agree that 250 samples is too few to train a meaningful diffusion model. However, $s_\theta(x, t)$ would traditionally be trained as a pure denoiser, for which 250 samples is enough (with data augmentation). Furthermore, since the proposed algorithm includes the physics of the problem, it is possible that this smaller, toy diffusion model might be sufficient. After all, the authors are not trying to generate high-quality data from pure noise, but to revert a physical process starting with some data. Do the authors confirm that they have not tried to train $s_\theta(x, t)$ as a denoiser? (I am not expecting the authors to run this experiment, but it might be of interest for future work.)
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and suggestions.
- We agree with the reviewer that there is a methodological difference between our work and the work by [2] and others, which we will stress in the paper. We have not tested using the trained $s_\theta(x,t)$ for other tasks, but because training includes the physics simulator, $s_\theta(x,t)$ might generalize less well to other problems than training based on a denoising objective. On the other hand, this enables our method to attain excellent performance on specific individual tasks even with little training data, which is difficult with more general approaches to inverse problems such as [2] and others, due to, e.g., the mentioned issues of the expensive measurement operator. We believe this represents a noteworthy empirical result.
- We are not sure if we understand the concern correctly. For "$\Rightarrow$", we assume that there is a sequence $\Delta t_1, \Delta t_2, ...$ with $\lim_{n \to \infty} \Delta t_n = 0$ and a sequence $\theta_1, \theta_2, ...$, where $\theta_n$ is a minimizer of the objective $L_\mathrm{single}^{\Delta t_n}(\theta)$ that depends on the step size $\Delta t_n$. If there is $\theta^*$ such that $s_{\theta^*}(x, t) \equiv \nabla_x \log p_t(x)$, then $L_\mathrm{single}^{\Delta t_n}(\theta_n) \leq L_\mathrm{single}^{\Delta t_n}(\theta^*)$. From "$\Leftarrow$" we know that $\lim_{n \to \infty} L_\mathrm{single}^{\Delta t_n}(\theta^*) = 0$ and therefore also $\lim_{n \to \infty} L_\mathrm{single}^{\Delta t_n}(\theta_n) = 0$. Note that we do not try to find one of possibly multiple global minima of $L_\mathrm{single}^{\Delta t_n}(\theta)$ here, but instead assume that $\theta_n$ is returned by some optimization process. As mentioned by the reviewer, the objective is not convex in most cases. It might also be the case that $\theta_n \nrightarrow \theta^*$, but we are only interested in showing that $s_{\theta_n}(x,t) \to s_{\theta^*}(x,t)$.
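Written out, the argument for "$\Rightarrow$" is a squeeze between zero and the loss at the true score (assuming the loss is non-negative, as for squared-error objectives):

```latex
0 \;\le\; L_\mathrm{single}^{\Delta t_n}(\theta_n)
  \;\le\; L_\mathrm{single}^{\Delta t_n}(\theta^*)
  \;\xrightarrow{\; n \to \infty \;}\; 0
\quad\Longrightarrow\quad
\lim_{n \to \infty} L_\mathrm{single}^{\Delta t_n}(\theta_n) = 0 .
```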
- We have trained $s_\theta(x,t)$ with a setup similar to that of denoisers and found that while training and inference work well, backpropagating gradients through multiple steps yields improved results for the buoyancy-driven flow experiment. To give more details about the exact setup: in appendix Figure 6, we compare training and inference with "joint" (method as explained in main paper) and "separate updates". The main difference between them is also explained in appendix Figure 1. For the 1-step training with separate updates, we draw samples $(x_i,t_i)$ and $(x_{i+1}, t_{i+1})$ from the dataset, predict $\hat{x}_i$ from $x_{i+1}$ using the simulator (physics step), add noise to $\hat{x}_i$ and denoise the state again with $s_\theta(x, t)$ (denoising step). Our evaluation for this experiment showed that while the 1-step training is simple to implement and very memory-efficient, the multi-step training achieves improved performance for longer rollouts during inference. | Rebuttal 1:
Rebuttal: We thank all reviewers for their helpful suggestions and comments. Based on the feedback, there are several updates that are of interest to all reviewers:
- **Additional baselines and physics-informed diffusion approaches**: We have included additional baselines for the buoyancy-driven flow problem, which represents the most challenging task. In particular, we include direct numerical optimization with the BFGS method and the differentiable solver as well as a recent physics-informed diffusion approach [1] mentioned by one of the reviewers. Our method compares very favorably to these baselines for challenging tasks while inference is significantly faster; see the attached pdf.
- **Test-time distribution shifts**: We have tested the effects of test-time distribution shifts for the heat equation experiment. Here we train the score network for a specific combination of diffusivity and noise and vary both parameters for testing (always updating both the simulator and test ground truth), see the attached pdf. Overall, for small changes of the parameters (<30%), there seems to be very little overfitting. Changes in the reconstruction MSE and spectral error can mainly be explained by variations of the parameters making the task itself easier/harder to which our network generalizes nicely (e.g. less noise or higher diffusivity -> smaller reconstruction error).
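To make the distribution-shift setup concrete, a toy 1D analogue of such a heat-equation experiment with a tunable diffusivity and measurement-noise level could look as follows. This is our own sketch with placeholder grid sizes and parameter values, not the paper's actual solver:

```python
import numpy as np

def heat_step(u, diffusivity=1.0, dx=1.0, dt=0.2):
    # One explicit finite-difference step of the 1D heat equation
    # u_t = diffusivity * u_xx with periodic boundaries.
    # Stable for dt <= dx**2 / (2 * diffusivity).
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    return u + dt * diffusivity * lap / dx**2

def simulate(u0, n_steps, diffusivity, noise=0.0, seed=0):
    # Forward-simulate, then add measurement noise to the final state,
    # mimicking the train/test parameter variations described above.
    rng = np.random.default_rng(seed)
    u = u0.copy()
    for _ in range(n_steps):
        u = heat_step(u, diffusivity)
    return u + noise * rng.standard_normal(u.shape)

u0 = np.zeros(64)
u0[32] = 1.0                               # point source
obs_train = simulate(u0, 50, diffusivity=1.0)
obs_shift = simulate(u0, 50, diffusivity=1.3)  # +30% test-time shift
```

Varying `diffusivity` and `noise` here corresponds to the parameter sweep the rebuttal describes, where both the simulator and the test ground truth are regenerated with the shifted parameters.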
- **Empirical verification of Theorem 1**: We have included new results in the attached pdf where we analytically compute the correct score and compare it to the learned score of our methods (1-step, multi-step). This demonstrates that the 1-step training learns the correct score very accurately, whereas the multi-step also learns an approximation to the score, but overall pulls trajectories during inference closer to the training data set distribution, which can be seen visually in the learned score field representation.
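Such a check is possible because, for simple processes with Gaussian marginals, the score is available in closed form. The following toy sketch (our own illustration, not the paper's experiment) compares the analytic score of a Gaussian marginal with a linear score recovered from samples via empirical moments:

```python
import numpy as np

def analytic_score(x, mu, sigma):
    # Score of a Gaussian marginal p_t = N(mu, sigma^2):
    # grad_x log p_t(x) = -(x - mu) / sigma**2.
    return -(x - mu) / sigma**2

mu, sigma = 0.5, 2.0
rng = np.random.default_rng(1)
samples = mu + sigma * rng.standard_normal(100_000)

# For Gaussian data the score is linear, s(x) = a*x + b with
# a = -1/sigma**2 and b = mu/sigma**2, so plugging in empirical
# moments stands in for a "learned" linear score model here.
a_hat = -1.0 / samples.var()
b_hat = samples.mean() / samples.var()

xs = np.linspace(-3.0, 4.0, 50)
err = np.max(np.abs(a_hat * xs + b_hat - analytic_score(xs, mu, sigma)))
```

With enough samples the learned linear score matches the analytic one closely, which is the kind of agreement the rebuttal reports for the 1-step training.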
- **Structure of sections and extended literature review**: In line with feedback from other reviewers, we have moved the related work section to appear after the introduction. We also removed some of the experimental details and pushed them to the appendix. With the new space we expanded the method overview section to include more explanations, similar to the extended method overview in the appendix. We also already discuss the physical system as an SDE (equation 6) in the introduction now. Moreover, we expanded the discussion of additional learning-based methods for solving PDEs / stochastic PDEs and physics-informed diffusion approaches in the related work. We believe this improves the structure of the paper and facilitates following the main goals of the article while only making minor changes to the content.
[1] Chung, Hyungjin, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. "Diffusion posterior sampling for general noisy inverse problems." arXiv preprint arXiv:2209.14687
Pdf: /pdf/84b5d0c7bdb7dc3995df00559ed32cd88a5fe063.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work describes a new method for solving inverse problems for time-evolved physical systems. It does this by combining two components: a time-independent reverse physics simulator (based on a priori domain knowledge or learned) and a learned time-dependent correction term. The posterior for the initial state can then be obtained by sampling from a related SDE (or a related ODE). This method is referred to as score matching via differentiable physics (SMDP). The authors argue that the learned correction term corresponds to the score (the gradient of the log-likelihood of the trajectory), providing a probabilistic interpretation of their model.
The method is numerically evaluated on four different physical systems, where it is shown to yield results superior to various baselines. This is the case for both the SDE and ODE variants of the method. The authors also discuss relations with existing methods for inference based on score matching as well as generative methods such as diffusion models.
Strengths: The paper is well laid out, clearly explaining the task considered and the proposed solution. Various quantities and models are well defined and motivated. The numerical section is also well presented, with appropriate comparison to baseline methods such as implicit score matching and sliced score matching for one of the tasks and other neural network models for another.
Weaknesses: There is little discussion of previous work on the inverse problem studied in this work. Indeed, it is only towards the end of the work that the authors discuss a set of related work, with little or no discussion of methods applied to the inverse problem in question. If it is the case that this particular problem has not been studied and therefore there are no state-of-the-art methods, this should be clearly spelled out.
Another issue is that the central theoretical result (that of the equivalence of the correction term to the score function of the system) is not empirically verified. This should be done since the theoretical result only holds in the continuous limit, so it is not clear how this applies in the discrete case. While this may not be possible in the more complex tasks studied, it should be doable for the 1D toy problem considered in Section 3.1. Due to the simplicity of the model, it should be possible to calculate the score function and compare it to the learned correction term.
Another issue is that the tasks are very close to the actual application, and a more detailed description may be necessary for readers who are not domain experts. For example, it is not clear what is meant by “We use semi-Lagrangian advection for the velocity and MacCormack advection for the hot marker density.”.
Overall, the novelty of the proposed method is not very high. This is a learned correction to a standard method of reverse time-stepping a simulation. Perhaps a more careful study of the learned correction term and its properties would help here, but as it stands, the results are incremental.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: – How is the problem in Section 2 motivated? Is this supposed to be a discretization of an SDE?
– Why is the step *added* to x_m in eq. (1) in order to obtain x_{m+1}? If this is stepping backwards, shouldn't it be subtracted? The conclusion on line 113 would make more sense in that case.
– In Section 2, N trajectories indexed by i from 1 to N are introduced, but never make another appearance. Instead, eq. (2) and (3) refer to an expectation over trajectories. How are we to think of the trajectories, as samples or as a random vector?
– Some more discussion as to the difference between the SDE and ODE variants would be useful. If I understand correctly, the former samples the posterior states, while the latter provides an approximation of its mode (calculating the maximum a posteriori through a normalizing flow-type construction).
– Figure 2 is hard to parse. In (a), are we observing the true score or its approximation by the correction term? Also, it is not clear what is shown in (c), especially the trajectories at the bottom. Are these exploding trajectories occurring for GRID or for MLP? Are they for one step or multiple steps? The discussion of these results (lines 178–186) is similarly hard to follow.
– The two sentences on lines 212–215 seem to contradict one another (if the forward solver cannot be used, how are we implementing the reverse step using the forward solver?).
– Why are the results of the SDE not shown in Figure 5?
– Why are we adding Gaussian noise to each state in Section 3.3? If this is a deterministic system, should we not be considering deterministic methods to solve it?
– The conclusion on lines 302–303 contradicts the results in Figure 6(b) (ODE obtains better spectral error, not MSE).
– In various places, the authors use a period as a thousands separator. This should be a space in an English-language context.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors briefly discuss limitations (and possible future directions) in the last section. I don't believe there are any potential negative societal impact from the work that should be considered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- **Previous work for inverse problem tasks**: We have updated the related work section of the paper to include additional references to prior work on inverse problems, particularly the heat equation. The buoyancy-driven flow task is new with no prior state-of-the-art methods, as we wanted to design a suitable problem that is high-dimensional and includes non-trivial physics simulators with difficult-to-learn dynamics (caused by placements of different obstacles). We have included additional baselines (see global response) for general inverse problem approaches to further highlight the effectiveness of our approach compared to state-of-the-art methods. As also suggested by other reviewers, we extended the related work and moved problem-specific details to the appendix. We restructured the paper by placing the related work section right after the introduction.
- **Empirical verification of Theorem 1**: We have included new results in our global response where we analytically compute the correct score and compare it to the learned score of our methods (1-step, multi-step). This demonstrates that the 1-step training learns the correct score very accurately, whereas the multi-step also learns an approximation to the score, but overall weighs the prior implicit in the training dataset higher, which can be seen visually in the learned score field representation.
- **Novelty of method**: We want to highlight that a diffusion-based approach to reverse a simulation in a probabilistic way has to the best of our knowledge not been considered in prior work. By theoretically linking learned correction training with the 1-step and multi-step loss to the score matching objective and maximum likelihood training, this approach can be very efficiently and reliably used for a large number of specific scientific downstream applications that rely on inverting highly non-linear dynamics and uncertainty quantification.
- **Answers to questions**:
  - While the training setup can be applied in more general settings (where the noise is not Gaussian), in our experiments and theory section we consider Gaussian noise. In this case, this is an SDE with a step size $\Delta t$ that depends on the specific application.
  - Trajectories should be thought of as vectors (the size of the vector grows with the sliding window).
  - We will expand the discussion of the differences between ODE and SDE inference in the main paper. Your understanding is correct: the ODE inference generates a maximum likelihood solution, whereas the SDE samples from the entire posterior.
  - Fig. 2(a) shows the approximation of the score. We have included a comparison of the actual (analytic) score of an SDE and the learned correction in the global response pdf. Exploding trajectories in (c) occur for the GRID model trained with the 1-step loss; for GRID multi-step training, there are no exploding trajectories.
  - In lines 212–215, we mean that we cannot use a single backward step because of exploding pixel values, but small steps plus corrections are possible.
  - In Fig. 5, SDE results are not shown because of space limitations, but they are shown in the supplementary material (Figure 8).
  - For deterministic systems, there is often still noise in the measurement, and another practical reason is that adding small noise helps performance in the case of multimodal posteriors.
  - There was an error in Fig. 6b where the colors of ODE and SDE were swapped. The sentences in the text are correct; we apologize for the confusion.
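The inference procedure discussed here (small reverse-physics steps plus score corrections, with noise injected only in the SDE variant) can be sketched on a toy linear system. This is our own illustration with placeholder dynamics and a hand-written stand-in for the learned score, not the paper's solver:

```python
import numpy as np

def reverse_physics(x, dt):
    # Placeholder reverse simulator: one explicit backward step of the
    # toy linear dynamics dx/dt = -x (so stepping backwards grows x).
    return x + dt * x

def learned_score(x):
    # Stand-in for the trained correction s_theta(x, t); for a
    # stationary N(0, 1) marginal the exact score would be -x.
    return -x

def reverse_inference(x_T, dt=0.01, n_steps=100, mode="sde", seed=0):
    """Small reverse-physics steps plus score corrections.
    mode='sde' injects noise (sampling from the posterior);
    mode='ode' is the deterministic, maximum-likelihood-style variant."""
    rng = np.random.default_rng(seed)
    x = np.array(x_T, dtype=float)
    for _ in range(n_steps):
        x = reverse_physics(x, dt)        # physics step backwards in time
        x = x + dt * learned_score(x)     # score correction
        if mode == "sde":
            x = x + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
    return x

x0_ode = reverse_inference(np.ones(4), mode="ode")
x0_sde = reverse_inference(np.ones(4), mode="sde")
```

The ODE run is deterministic, while repeated SDE runs with different seeds would yield different posterior samples around it.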
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
In order to ensure the quality of the overall evaluation, please acknowledge the authors' response and indicate whether you want to update or keep your original evaluation. This paper is in the borderline range and it would be helpful to have your feedback to make an informed decision.
Thanks again for your time and effort.
The AC.
---
Rebuttal 2:
Comment: I would like to thank the authors for their response to my review and those of the other reviewers. In light of the changes made to the manuscript, I have decided to increase my rating. | Summary: The paper proposes an approach to sample from the posterior distribution based on score-based diffusion models with a particular focus on inverse problems in physics.
The proposed approach, to my understanding, has two novel contributions:
- they use reverse-physics simulations and augment them with score estimates that supposedly leads to superior performance in sampling,
- they propose a multi-step training regime where scores are estimated in a sequential manner for a given time horizon, in contrast to the standard single-step score estimation.
The method was tested on different synthetic data experiments, and both the proposed approaches seem to produce meaningful improvements.
Strengths: - The multi-step training of the score function seems to be effective in synthetic data applications.
- The theoretical results, although not particularly novel, provide a complete picture of the proposed methods.
- The experiments presented validate the proposed ideas well.
Weaknesses: - One of the novelties claimed in the paper is the use of score-function to simply _refine_ the outputs of a reverse-physics simulator. Can't one simply view the reverse-physics simulator as (non-learned) _part of_ the parametric score model? In this case, the claim simply becomes that a physics-informed model to approximate the score is better than one that is oblivious to the physics? Isn't this an unsurprising statement? This is the underlying motivation behind physics-informed neural nets (PINNs), a relatively large area of research. Could the authors clarify?
- In some of the experiments, the authors use the LPIPS metric to evaluate the quality of the solution. This does not make much sense as LPIPS is designed to evaluate the "perceptual" quality of the image, which has nothing to do with the physical accuracy (the metric one cares about).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How is the ground truth obtained in the case of SDEs?
- In Section 3.4 (Fig. 6), why is the spectral error worse for SDEs compared to ODEs while it is the other way around in other experiments?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - Releasing the code could be crucial as there are many delicate details one might need to get right for the method to work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and helpful suggestions.
- **Refinement of model outputs**: We agree that the reverse-physics simulator and the score network can be unified in a single joint model. Nonetheless, a clear distinction between the simulator and score network makes sense from a methodological viewpoint, as the learned score is data-driven with little to no inductive bias at all, whereas the simulator includes only inductive biases and conservation laws. Also, during inference, the update step by the simulator can be regarded as a step between different points in time, whereas the score network corrects the simulation state by projecting it towards a region with higher likelihood at the same point in time. One can expect an improved performance from a physics-aware score-matching approach, but nonetheless this connection has not been made before in the form we present it. As such, we believe our results and theory provide an important basis for future work at the intersection of both methods.
- **LPIPS metric**: Our primary metric for simulation data is the L2 distance. However, the L2 distance can become very large when the prediction does not match the ground truth on a per pixel basis, even though the prediction might still be close to the ground truth visually. In these situations the LPIPS distance can be more informative as it is less sensitive to the exact pixel positions.
- **Ground truth for SDEs**: We draw individual samples from the testing dataset at time 0 and simulate them forward in time. The prediction is compared to this sample via the spectral error and reconstruction MSE, which are both less sensitive to the exact pixel values of the sample.
- **Spectral error in Fig. 6**: In this specific plot, there is a mistake and the colors for the SDE and ODE should be switched. We noticed this mistake shortly after submission but could not fix it any more. We apologize for the confusion.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
In order to ensure the quality of the overall evaluation, please acknowledge the authors' response and indicate whether you want to update or keep your original evaluation. This paper is in the borderline range and it would be helpful to have your feedback to make an informed decision.
Thanks again for your time and effort.
The AC. | null | null | null | null |
Learning Functional Transduction | Accept (spotlight) | Summary: This paper proposes a new deep learning approach for the problem of meta-learning. Inspired by the theory of reproducing kernel Banach space, the proposed method jointly trains a deep transformation as a representation, and a parametrized kernel function $K(vi, \cdot)$ of the problem instances, at the meta-training stage. Experimental results validate the effectiveness of the proposed method.
Strengths: The motivation is strong, as the problem of meta-learning is a rather important problem in the community. The link between meta-learning and the theory of RKBS discussed in this paper may bring some new ideas.
Weaknesses: My main concern comes from the fact that the proposed method also needs a meta-training procedure. At this point, the comparison in some experiments (for example, section 5.1, if there is no misunderstanding) seems somewhat unfair. Also, it would be better if there were more comparisons between the proposed method and other meta-learning approaches.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be better if the authors could give a thorough discussion of their method's superiority, as there are many meta-learning approaches in a similar style, that is, first training a meta-model on a dataset containing sufficient information about the task space, then applying task-specific procedures to the target task. It would be better if the authors could clearly state several specific points at which their method is better than most others in the literature.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and interest in reviewing our work. We would like to bring some answers and clarifications in relation to your comments, and hope they might persuade you to increase your score. It seems that your concern is about not properly evaluating our approach against other meta-learning approaches, so we try to emphasize this aspect in the specific replies below.
> Regarding _“[…] the proposed method jointly trains a deep transformation as a representation, and a kernel function […] of the problem instances”_.
- We want to clarify that no vectorial representation is carried over from the meta-training procedure. Rather, the meta-optimization fits the kernel function $K: V \times V \to U$, which is expressed as an iterative application of the intermediate parametric function $k_{\theta}^{l}$ (equation 5). This kernel function is then directly applied to each new dataset along with query points to produce new output estimates.
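As an illustration of how such an iterated parametric kernel can be realized with attention-style operations (one layer applied to the dataset and query points at once), here is a minimal sketch; all weight names, shapes, and the softmax normalization are our own placeholder choices, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kernel_layer(q_x, data_x, data_y, W_q, W_k, W_v):
    # One attention-style parametric kernel evaluation: query points
    # attend over the dataset's (input, output) examples, producing
    # features that a subsequent layer would refine (iterated kernel).
    scores = (q_x @ W_q) @ (data_x @ W_k).T / np.sqrt(W_q.shape[1])
    return softmax(scores) @ (data_y @ W_v)

rng = np.random.default_rng(0)
d_in, d_out, d_model = 3, 2, 8
W_q, W_k = rng.standard_normal((2, d_in, d_model))
W_v = rng.standard_normal((d_out, d_model))

data_x = rng.standard_normal((16, d_in))   # support inputs v_i
data_y = rng.standard_normal((16, d_out))  # support outputs
queries = rng.standard_normal((5, d_in))   # query points
out = kernel_layer(queries, data_x, data_y, W_q, W_k, W_v)
```

The key property is that the whole dataset enters as an input at inference time, so no per-task gradient steps are needed.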
> Regarding _“the comparison in some experiments […] seems somewhat unfair.”_
- We break down our reply into different arguments: First, we dedicated a whole experiment to a task previously used to benchmark meta-learning models in [1], namely MNIST-like dataset classification, and compared against several state-of-the-art meta-learning systems, which directly provides evidence that the Transducer's meta-learned adaptation transfers better to new tasks when meta-trained on the **same task distribution**. Second, in section 5.3, we show that our formalism can enhance the performance of a regular supervised method on a large natural dataset, even though it is again trained with the **exact same amount of data**. Third, in section 5.1 we emphasize that our original meta-learning approach can find better predictive solutions than previous neural operator approaches even when neural operators have access to large regimes of data (~1000 training trajectories). This is itself an interesting result, as meta-learning systems have been very scarcely applied to the problem of operator regression. To our knowledge, there is little literature on applying MAML or RNN-based adaptation to neural operators. However, to convince you further of the interest of our approach, we are working on complementing section 5.1 with a comparison with a MAML-like adaptation procedure applied to FNO/DeepONet and will communicate the results as soon as possible. (Note, however, that such an adaptation technique is much slower than our feedforward adaptation process; see "Speed" below.)
> Regarding _“It would be better if the authors could give a thorough discussion about their method’s superiority […]”_
- In the experimental section, we tried to provide different arguments showcasing the benefits of our method in several domains, against different classes of models and along different features. We synthesize the main aspects below:
- **Accuracy**: First, our original transductive system can fit solutions to operator regression problems at the level of accuracy of previous (gradient-based) neural operator regression approaches (Section 5.1). Conversely, we show that our system is also a better meta-learner than several state-of-the-art meta-learning approaches on a task where they have been extensively benchmarked (Section 5.4). Finally, we show that our philosophy can boost purely inductive systems in section 5.3.
- **Speed**: The adaptation mechanism of our model is parallel by nature and bypasses the need for sequential adaptation such as gradient- or RNN-based adaptation. This in turn translates to a significant gain in computation time, showcased in section 5.2 (outlier detection), where 5000 operator regression instances can be performed in under a few seconds, while other meta-learning systems would need much more time because of their sequential adaptation process (i.e., iterations of gradient descent, for instance). This can potentially unlock new operator regression applications requiring fast sampling of multiple fitted models (such as bagging, conformal prediction or Monte-Carlo sampling in the operator space…). We will add a specific remark regarding this point in section 5.2.
- **Theoretical soundness and interpretability**: As you noted, our model builds on clear theoretical results regarding the existence and analytical form of the solutions to the considered regression problems. This is in contrast to few-shot gradient-based learning, which performs only a few gradient steps (while, paradoxically, gradient descent is an asymptotic process by nature) and offers, in the general setting, weak guarantees regarding the quality of the found solutions. Furthermore, since our model explicitly takes the whole dataset as input, it offers a direct possibility, contrary to other approaches, to apply sensitivity analysis of the model with respect to each example of each task regression instance, which can be leveraged to directly interpret model decisions.
- **Originality**: The original use of the theory of RKBS, which generalises that of RKHS, makes it possible to encompass both operator (infinite-dimensional) and regular finite-dimensional regression problems in a single meta-regression framework, and is, as such, worthy of exploration and analysis alongside gradient-based or RNN-based adaptive systems, as it might open new discussions and connections with existing questions, such as a kernel interpretation of in-context learning in attentional models.
We thank you again for your time and remain committed to answer any additional clarification that you might need and engage with you further during the discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for your patience in resolving my confusion! I have now understood the experimental settings in section 5. Though I keep my point that the work has sound theoretical grounding (mostly due to the lack of understanding of DNNs), I admit that the idea of meta-learning neural operators has its novelty. I will no longer stand in the way of acceptance.
---
Reply to Comment 1.1.1:
Title: Second reply to reviewer w6wT
Comment: To reviewer w6wT,
We thank you for your positive re-evaluation of our work. Moreover, we are happy that you praise the theoretical aspect of our work in your reply. To help convince you definitively, as we proposed in our rebuttal, we would like to complement our experimental results in section 5.1 (ADR equations) by providing a comparison with model-agnostic meta-learning (MAML) applied to FNO with different inner training budgets (10/50/100 gradient steps with inner learning rate at 1e-2/7e-3/5e-3), in line with your main suggestion (as well as that of reviewer Sczv). We synthesize the results in terms of RMSE and fine-tuning time in the following table:
| RMSE / Adaptation time (sec.) | 10 gradient steps | 50 gradient steps | 100 gradient steps |
|---|---|---|---|
| FNO-MAML | $6.5e^{-1}$ / $2.6e^{-1}$ | $3.1e^{-1}$ / $6.7e^{-1}$ | $1.4e^{-1}$ / $2.1e^{0}$ |
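For context, the inner-loop adaptation being benchmarked in the table above can be illustrated on a toy linear regression task. This is a minimal analogue of MAML-style test-time fine-tuning with matched step budgets, not the actual FNO-MAML setup:

```python
import numpy as np

def adapt(theta_init, X, y, n_steps, lr=1e-2):
    # MAML-style inner loop: a few gradient steps on the new task's
    # support set, starting from a (meta-learned) initialisation.
    theta = theta_init.copy()
    for _ in range(n_steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(X)
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0])
X = rng.standard_normal((32, 2))
y = X @ true_theta                 # a new task's support data
theta0 = np.zeros(2)               # stand-in for a meta-learned init

errs = [np.mean((X @ adapt(theta0, X, y, n) - y) ** 2)
        for n in (10, 50, 100)]    # budgets matched to the table above
```

Accuracy improves with the budget, but each extra gradient step adds sequential compute, which is the cost the rebuttal contrasts with feedforward transduction.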
As you can see, on this task, a meta-learned parameter initialization with a limited fine-tuning budget does not improve over the previously tested fine-tuned approaches. More importantly, this approach is much less accurate and computationally more intense than our system (due to the need for sequential gradient computation). We hope that this control experiment will help dissipate your concern and contrastively demonstrate the potential value of our own approach. We remain happy to answer to any additional comment before the end of the discussion period. | Summary: This paper proposes a method for transductive learning based on reproducing kernel banach spaces (RKBS). The resulting model is capable of learning *in-context* in the sense that given a new instance of a learning problem or a new dataset $\mathcal{D}$, it can infer the resulting functional relationship at inference time. The authors show that transformer attention layers can be considered reproducing kernels for this model. They provide a meta training objective that allows them to learn the kernel operations $\kappa^\ell$ and nonlinearities $F^\ell$ which generalize well over the distribution $\mathfrak{D}$ of possible datasets. Once trained, the model can generate functions for new datasets.
The authors test the proposed ideas by learning operators $\mathcal{O}$ for PDEs, including the advection-reaction-diffusion equation and Burgers' equation, as well as climate modeling. Lastly, they test on an MNIST-like task with pixel-permuted and class-permuted versions of the dataset. In all of these settings they identify benefits of their approach over several existing benchmarks.
Strengths: The paper provides a nice framework and perspective to think about transductive learning through reproducing kernels. To the best of my knowledge, this is a novel idea.
The paper also provides several experiments that showcase the benefits of the approach, especially in PDE modeling.
Weaknesses: As one of the primary motivations of the model is a meta-learning algorithm capable of in-context learning, I think that an experiment involving in-context learning of language patterns would greatly strengthen the paper. Most of the experiments at this point are for PDEs but the proposal is much broader.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: On the MNIST experiment, how well would a pure supervised algorithm (perhaps with a provided context signal to indicate permutation) trained on all instances of the data perform? I am wondering if the meta-learning objective outperforms standard supervised learning because it sees a larger amount of total data.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors addressed their need to work with finite-dimensional outputs for the PDE modeling (Fourier coefficients) and also acknowledged the computational and statistical requirements to optimize the meta-learning objective.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and the interesting suggestions that you shared. We are happy about your positive assessment and we reply specifically to your comments below.
> Regarding adding "an experiment involving in-context learning of language patterns"
- This is an intriguing suggestion. We agree that an experiment involving natural language could definitely strengthen the interpretation of in-context learning in attention layers as a form of reproducing kernel fitting, since ICL has been predominantly demonstrated in this domain. However, analyzing natural language in terms of functional reproducing kernel Banach spaces is non-trivial, and we chose to select more straightforward example tasks to illustrate our presentation. We plan to develop this idea further in a follow-up work.
> Regarding the MNIST experiment.
- We want to emphasize that all baselines are meta-learning systems that receive the same training curriculum as our model. Hence, the difference in performance cannot be attributed to training data volume. Moreover, note that providing a context signal to indicate permutation would be counter-productive at test time, since it would provide a shortcut to circumvent meta-learning of a genuine regression program, thereby preventing transfer learning on other datasets (FashionMNIST and KMNIST). On the other hand, training a supervised baseline on a large number of class and pixel permutations with no permutation signal is likely to yield no learning at all. We agree with you, however, that quantifying how our meta-learned kernel solution converges as a function of data volume is an important future direction for our approach.
We thank you again for your time and valuable feedback.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I appreciate the authors' answers to my questions about the possibility of further ICL experiments and data volume. Though natural language experiments would strengthen the work, it could also be left as a future application. I will maintain my support for acceptance. | Summary: **SUMMARY AFTER REBUTTAL**: as described below, most of my comments were addressed and the authors made a significant job in including new results. I have increase my score during the rebuttal phase and I strongly vote for acceptance.
---
Neural operators are neural networks that are trained to approximate a function-to-function mapping. This paper proposes an algorithm to perform few-shot learning of these models, where at inference time the network is given a set of examples of the operator, and an input function as the test point.
The specific algorithm they propose is built on reproducing kernel Banach spaces, where for a given prediction, the kernel and the dictionary on which it is evaluated is computed by a recursive embedding of the few-shot examples. This is inspired by the transformer architecture. When examples are continuous, they are discretized with an FFT.
They analyze a series of classical benchmarks including modeling PDEs, showing the method to perform well.
Strengths: As far as I know, the idea of performing few-shot learning of neural operators is novel. These networks are used extensively in physical modelling, so the paper can have a sizable impact there.
The paper is also relatively clear (with a few exceptions, see below), especially if one considers the complexity of some underlying ideas. The connection between their method and the transformer is also interesting, but from what I understand this is extended from Wright and Gonzalez (2021).
The experiments are varied and cover a wide range of use cases.
Weaknesses: 1) I have found the initial discussion on the difference between transduction and inference a bit misleading. According to Vapnik, 2006 (which they cite), almost anything which is used today in ML / deep learning is inductive. Transductive methods are only those that can make predictions on a given set of test points but *cannot* operate outside those. For example, SVM is inductive in general, except for some variants such as the transductive SVMs discussed in Collobert et al., 2006. In fact, what they are calling "transductive" is what is typically called "instance-based" in ML (kNN, SVM). Their setup is a standard few-shot setup extended to operators. This also leads to some strange sentences, such as "inductive neural learning with gradient descent is compute-intensive" referred to neural networks; SVM training is also notoriously intensive (and it can also be done via gradient descent).
2) There are some "standard" methods for performing few-shot learning in the literature (e.g., MAML, prototype networks, ...). Many of these methods start from a standard neural network and adapt it to a few-shot scenario. Architectures for performing operator learning are known, and it is not clear from reading the paper why the few-shot learning methods are not immediately extensible to this setup (e.g., why can't we do MAML on a standard neural operator?), and why we need instead to resort to a more complex formulation in terms of kernels. To clarify: I think the algorithm shown here is interesting, but it's a bit hard to motivate it by reading the paper itself.
3) The computational complexity of the method is not discussed, in particular the need to execute multiple FFTs for the few-shot examples (Section "Discretization"). This also connects to recent literature on performing FFTs in an efficient way on GPUs (e.g., FlashConv). Also for testing, if I understand correctly the authors are mostly comparing their method (which is only a forward pass) with a full training of its competitors, which is a bit unfair. No baselines for few-shot learning are tested. For example, they state that their setup is "Similar to (Pathak et al., 2022)", but they do not compare.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I think rephrasing the entire "transductive" discussion in terms of few-shot learning could significantly improve the paper. At the least, objectively incorrect sentences should be amended.
- Discussing better training and test complexity is important.
- I think the paper can improve readability by providing a practical example of (5)-(6) in the more familiar case of finite-dimensional input-output spaces.
- There has been a limited amount of works that have explored using neural networks to perform Bayesian posterior inference on-the-fly on a large family of distributions (e.g., Prior-Data Fitted Networks (PFNs)). I would be curious to see a discussion on the connection with this work. On the related work part, there is also some relevant literature on recursive (recurrent) kernel evaluation which is not mentioned.
- There is a small typo on page 3: point evalutation.
- Will the code be released?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors are clearly discussing the limitations of their few-shot setup. However, they are not discussing the computational complexity of the method, which I think would be the biggest limitation in scaling this up to large setups. A few sentences on this would be appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and for this thoughtful review. We are happy that you seem enthusiastic about our approach, and we reply to your remarks and questions point by point below:
> Regarding “the initial discussion on the difference between transduction and inference”.
- Indeed, we definitely agree with you that current deep learning techniques rely, in a vast majority, on inductive learning principles, which constitutes a starting motivation of our work. We also agree that "instance-based learning" is another valid denomination for how decision rules are formed in our approach. However, we disagree with your definition of transductivity (_"Transductive methods are only those that can make predictions on a given set of test points but cannot operate outside those"_). See for instance the seminal _Learning by Transduction_ [1] (Section 1: "This is a problem of transduction, in the sense that we are interested in the classification of a particular example rather than in a general rule for classifying future examples"; and Section 6: "'Transduction' is inference from particular to particular; for the problem of pattern recognition, it means that, given the classifications $Y_i$, $i = 1, \ldots, l$, of the $l$ points $x_1, \ldots, x_l$ in the training set, we are only trying to guess the classifications of the $k$ points $x_{l+1}, \ldots, x_{l+k}$ in the test set."). Remark 5 of the same reference also directly agrees with your denomination of "instance-based learning", which we will mention in the introduction. Finally, we reworked the expression "inductive neural learning [...] is compute-intensive" to emphasize instead that iterative optimization procedures for tackling high-dimensional problems are compute-intensive and bottlenecked by their sequential nature.
> Regarding data regimes and comparison “with standard methods".
- We would like to point out that our method is not restricted to the "few-shot" data regime but can be applied to problem instances with more than a thousand examples (for instance, in Section 5.3 we test our model with up to 1000 example pairs). However, as you noted, our model requires an original meta-optimization procedure that is unusual in the recent neural operator literature. Following your suggestion, we are working on complementing Section 5.1 with a comparison against a MAML-like adaptation procedure applied to FNO/DeepONet and will communicate the results as soon as possible. Note, however, that such an adaptation technique will be much slower than our feedforward adaptation process.
> Regarding the computational complexity of the method, scaling and the cost of FFTs.
- We actually perform a single FFT/IFFT transformation as a pre-/post-processing operation, in line with recent literature on Fourier operators [2]. This largely mitigates the cost of this operation, which allowed us to scale our system to very high-resolution climate prediction (720x720 images). Note that in this experiment we are not aiming for state-of-the-art climate variable prediction as in Pathak et al., 2022; rather, we show that our transductive approach can augment popular inductive models in the context of operator regression problems.
> Regarding improving readability of equations 5-6 with a finite-dimensional example.
- Following your suggestion, we are working on integrating a schematic of the kernel computation into the examples depicted in Figure 1 to improve readability.
> Regarding related work on Bayesian posterior inference and recursive kernel iterations.
- Thank you for these interesting suggestions; we will definitely mention this relevant literature in our discussion. Regarding recursive kernel evaluation, are there specific pointers that you could share so we can adapt our discussion?
> There is a small typo on page 3: point evalutation.
- Thank you for this catch.
> Will the code be released?
- Yes we plan to release a public repository with the code to replicate the experiments as well as pre-run notebooks to help the interested reader familiarize with our model.
We thank you again for your interest and this valuable feedback, which definitely helps strengthen our work.
[1] Learning by transduction A. Gammerman, V. Vovk, V. Vapnik. UAI'98: Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence
[2] Transform Once: Efficient Operator Learning in Frequency Domain. Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri Dao, Christopher Ré, Stefano Ermon. NeurIPS 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed feedback. Concerning the definition of "transduction", I understand the point but I also think that "transduction" used in this sense can be misleading, especially since there are other definitions that are more common in today's literature. However, this does not impact my evaluation in any way. I will keep my score as-is waiting for the other points. For recursive kernels, the authors might be interested in this paper coming from the kernel signal processing literature, which is a bit niche but connected: https://ieeexplore.ieee.org/document/6722955
---
Reply to Comment 1.1.1:
Title: Second reply to reviewer Sczv
Comment: To reviewer Sczv,
Thank you for your reply as well as the interesting reference. We are happy that our rebuttal was informative. In addition, as mentioned in our rebuttal, we would like to complement our experimental results in Section 5.1 (ADR equations) with a comparison against another meta-learning method. While we re-emphasize that meta-learning of neural operators is a new topic with no standard approaches, following your suggestion we applied gradient-based model-agnostic meta-learning on our meta-dataset of operators to the same FNO model with different inner training budgets (10/50/100 gradient steps with fixed inner learning rates of 1e-2/7e-3/5e-3). The following table, which will complement Table 1 of the paper, synthesizes the results in terms of RMSE and fine-tuning time.
| RMSE / Adaptation time (sec.) | 10 gradient steps | 50 gradient steps | 100 gradient steps |
|---|---|---|---|
| FNO-MAML | $6.5e^{-1}$ / $2.6e^{-1}$ | $3.1e^{-1}$ / $6.7e^{-1}$ | $1.4e^{-1}$ / $2.1e^{0}$ |
As you can see, on this task, a meta-learned parameter initialization with a limited fine-tuning budget does not improve over the previously tested fine-tuned approaches. More importantly, this approach is much less accurate and computationally more intensive than our system (due to the need for sequential gradient computation). These complementary results further validate the interest of our approach. We agree that this comparison will also help the reader better situate our approach in the literature.
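For reference, the sequential inner-loop adaptation that makes the FNO-MAML baseline above slower than a single feedforward pass can be sketched on a toy linear model (illustrative numpy code only; the model and hyperparameters are not the paper's setup):

```python
import numpy as np

def inner_adapt(w0, X, y, steps, lr):
    """Gradient-based inner-loop adaptation (MAML-style fine-tuning)
    of a linear model on a small support set, starting from the
    meta-learned initialization w0."""
    w = w0.copy()
    for _ in range(steps):  # strictly sequential: step t needs step t-1
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ rng.normal(size=5)
w0 = np.zeros(5)

# Larger inner budgets reduce the support-set error, at the cost of
# more sequential gradient computations.
errs = [np.mean((X @ inner_adapt(w0, X, y, s, 0.05) - y) ** 2)
        for s in (10, 50, 100)]
print(errs)
```

Each additional inner step must wait for the previous gradient, which is the sequential bottleneck the authors contrast with their feedforward adaptation.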
We remain happy to answer any additional comments before the end of the discussion period. | null | null | Rebuttal 1:
Rebuttal: We would like to thank again all three reviewers for their interest and precious feedback. We are happy that this work has been positively regarded by reviewers Sczv and f2BS, while reviewer w6wT seemed less confident. We answered specific remarks and questions directly in each review thread (see below) while integrating into our text the common suggestions, which we believe strengthen our proposal.
As a synthesis, all three reviewers recognised that our work constituted an original proposal with a _"strong motivation"_ and _"novel ideas"_ that can potentially impact research at the intersection of neural operators and meta-learning. Reviewer Sczv noted a _"clear"_ presentation of ideas and reviewer f2BS liked _"a nice framework and perspective"_. A central concern of reviewer w6wT was _"to give a thorough discussion about [our] method's superiority"_ against other meta-learning approaches. However, this is specifically the aim of Section 5.4, which compares our system against existing meta-learning approaches, while we demonstrate several original results in the other experimental sections: namely, in-context learning in infinite-dimensional spaces in Section 5.1, outlier detection in Section 5.2, and scaling/inductive model boosting in Section 5.3. We are working on complementing our results following reviewer w6wT's suggestion, but we note at the same time that reviewer Sczv underscored that our _"experiments are varied and cover a wide range of use cases"_ and that reviewer f2BS remarked that we _"identify benefits"_ in all of them.
We will be happy to discuss these points further during the discussion period and remain committed to answering any additional questions that may arise after our rebuttal.
The authors | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Sharpness-Aware Minimization Leads to Low-Rank Features | Accept (poster) | Summary: The submission provides a study of sharpness-aware minimization (SAM) on the numerical rank of features. The conclusions are that (1) SAM reduces feature rank throughout the training, with more rank reduction for larger neighborhood size rho, (2) intermediate values of rho result in representations that are more generalizable with K-NN (3) in a small theoretical model and some realistic architectures the rank reduction is due to inactivity of ReLU units caused by the weights *below* the activation, and (4) reducing the rank with a bottleneck layer does not recreate the generalization benefits of SAM.
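For readers unfamiliar with the metric under discussion, the numerical rank of a feature matrix is commonly defined as the number of singular values above a threshold relative to the largest one; a minimal sketch under that assumption (the paper's exact cutoff may differ):

```python
import numpy as np

def numerical_rank(F, rtol=1e-3):
    """Numerical rank of a feature matrix F (n_samples x n_features):
    the number of singular values above rtol times the largest one."""
    s = np.linalg.svd(F, compute_uv=False)
    return int(np.sum(s > rtol * s[0]))

rng = np.random.default_rng(0)
# Features that are approximately rank 4: four directions plus tiny noise.
F = (rng.normal(size=(256, 4)) @ rng.normal(size=(4, 64))
     + 1e-5 * rng.normal(size=(256, 64)))

print(numerical_rank(F))  # the noise singular values fall below the threshold
print(numerical_rank(rng.normal(size=(256, 64))))  # a generic matrix is full rank
```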
Strengths: The main claim of the paper (rank reduction) is supported by extensive empirical evidence. The knowledge that SAM reduces the rank or the number of active units from the beginning of the training can help with understanding SAM and the flat-minima phenomenon in general. It can also lead to faster neural network training and inference by combining SAM with compression methods based on spectral decomposition or pruning. The paper is well organized and adequately covers the background and related work.
Weaknesses: Weaknesses W1, W2, and W3 below are the reason for the current low score. I am open to raising this score if these concerns are addressed in the rebuttal or the revision and depending on the discussions.
**W1: In different experiments representations are extracted from different points of the network.** Most experiments in the paper extract the representation from a hidden layer close to the output but the ones on ResNets use the second last block. The justification is that neural collapse interferes with the low-rank behavior of SAM. Neural collapse typically happens towards the end of the training but the low-rank bias of SAM in the submission starts from the beginning of training. I did not understand how the two would interfere then. Even if the result on the low-rank behavior of SAM on the last ResNet block is negative, the authors should add the plots in a revision (at least in the appendix).
**W2: Neuron activity is only evaluated on a limited set of architectures.** The first part of the submission evaluates the low-rank phenomenon on a set of architectures. The second part traces this phenomenon to ReLU inactivity in a theoretical model and then evaluates ReLU inactivity on a *different* set of architectures. It is not clear to me if the low-rank phenomenon in the first part of the submission also due to ReLU inactivity. Whether the answer is positive or negative, the revision (at least in the appendix) should include the ReLU inactivity plots for the first set of architectures.
**W3: The text and captions do not distinguish monotonic and U-shaped patterns.** As rho changes, some metrics like generalizability of the features show a U-shaped pattern (i.e. they're maximized or minimized at an intermediate value of rho) and others monotonically change with rho. The text does not highlight this difference and, for example, simply says that SAM reduces rank and creates more generalizable features. This is confusing as it implies there is a high correlation between rank and generalizability in these results, which would be true if the two metrics changed in the same way. The caption for Figure 5 is even more problematic. This caption says higher values of rho generalize better even though in the plot the intermediate values generalize better. Overall I recommend editing the captions to distinguish monotonic and U-shaped patterns.
Minor comments:
m1: A citation for bottleneck layer would help with motivating its use for inducing low-rank features.
m2: Line 128: "Augmentations play a crucial role in revealing the low-rank trend with respect to the ρ of SAM, whereas the addition of only weight decay is insufficient for revealing a similar trend" Is this inferred from any of the results in the section or is this a separate experiment the authors conducted?
m3: The text should briefly explain the teacher-student setup and what "teacher neuron" means. The general audience is not familiar with this theoretical framework.
m4: The proof for proposition 1 is mostly a sketch and is hard to verify. I suggest laying out the intermediate steps of the proof in the appendix.
m5: Do the nearest neighbor generalization results fit with the overall narrative about rank or is this a separate finding?
-------------------
After rebuttal: Raised the score as the rebuttal addresses the main comments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the Weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Comments W1, W2, and W3 are critical. See the Weaknesses section for details
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the detailed feedback.
---
> ***W1: In different experiments representations are extracted from different points of the network.***
We observed the following behavior for the penultimate layer (consistent across CIFAR-10, CIFAR-100, and Tiny ImageNet):
- at the very beginning of the training, the feature rank is consistently smaller for SAM compared to SGD which is consistent with the feature rank at the intermediate layer reported in the paper,
- later in training, however, SAM leads to a higher feature rank (but not by a large margin), most likely because SAM prevents full convergence to a neural-collapsed solution.
We will include detailed plots and discussion on this phenomenon in the appendix.
> ***W2: Neuron activity is only evaluated on a limited set of architectures***
For pre-activation models, the rank reduction pattern is closer to the one we described for vision transformers in **Section 5: Investigation of low-rank mechanisms on deep networks** (paragraph **Pre-activation ViT on MS-COCO**). We cannot change the paper during the rebuttal phase, but we will include the corresponding plots in the appendix of the revised version.
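As an illustration of the ReLU-inactivity measurement discussed in this thread, the fraction of hidden units that never fire over a batch can be estimated as follows (a toy sketch with a hypothetical bias shift standing in for the weight changes induced by SAM; not the authors' protocol):

```python
import numpy as np

def inactive_fraction(pre_acts):
    """Fraction of hidden units whose ReLU output is zero for every
    input in the batch (pre_acts: batch x n_units pre-activations)."""
    active_any = (pre_acts > 0).any(axis=0)
    return 1.0 - active_any.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 32))      # a batch of inputs
W = rng.normal(size=(32, 100))      # one hidden layer with 100 ReLU units
b_centered = np.zeros(100)
b_shifted = -30.0 * np.ones(100)    # large negative bias keeps pre-activations below zero

print(inactive_fraction(X @ W + b_centered))  # ~0: every unit fires for some input
print(inactive_fraction(X @ W + b_shifted))   # close to 1: most units never fire
```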
> ***W3: The text and captions do not distinguish monotonic and U-shaped patterns***
We did not expect that our caption could lead to the impression of monotonicity of the generalization improvement. We will definitely make it clearer. E.g., in Figure 5, we meant instead that all the reported $\rho$ of SAM improve *over standard training*, but the generalization improvement is clearly U-shaped. We will emphasize these U-shaped trends in the revision.
---
> *m1: A citation for bottleneck layer would help with motivating its use for inducing low-rank features.*
We agree and we will include a corresponding citation. To the best of our knowledge, one of the first studies on such low-rank reparametrizations is known as the Burer-Monteiro factorization studied in [A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization](https://link.springer.com/article/10.1007/s10107-002-0352-8).
> *m2: Line 128: "Augmentations play a crucial role in revealing the low-rank trend with respect to the ρ of SAM, whereas the addition of only weight decay is insufficient for revealing a similar trend" Is this inferred from any of the results in the section or is this a separate experiment the authors conducted?*
This is a separate experiment that we conducted. We will include it in the appendix.
> *m3: The text should briefly explain the teacher-student setup and what "teacher neuron" means. The general audience is not familiar with this theoretical framework.*
We will include a few references that also consider the same teacher-student setup. By *“3 teacher neurons”* we merely meant that the teacher network has one hidden layer with only 3 ReLU activations, while the student network is overparameterized with 100 ReLU activations. The goal of the student network is to learn the same function that is represented by the teacher network.
> *m4: The proof for proposition 1 is mostly a sketch and is hard to verify. I suggest laying out the intermediate steps of the proof in the appendix.*
Indeed, we skipped the intermediate steps to save space in the main part. We will include them in the appendix.
> m5: Do the nearest neighbor generalization results fit with the overall narrative about rank or is this a separate finding?
We will improve our explanation of why we reported the kNN error. Basically, we wanted to confirm the generalizability of the low-rank features taken at an *intermediate layer*. This experiment highlights the suitability of the intermediate features for transfer learning, especially when using nearest neighbor-based classification. Without this experiment, one could assume that, since these features are not from the penultimate layer, they may be of limited use for downstream tasks.
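The kNN evaluation of intermediate features described here can be sketched as follows (a toy nearest-neighbor classifier on synthetic, well-separated features; the paper's pipeline and datasets are not reproduced):

```python
import numpy as np

def knn_error(train_f, train_y, test_f, test_y, k=1):
    """Classification error of a k-nearest-neighbor classifier
    operating directly in feature space."""
    errors = 0
    for f, y in zip(test_f, test_y):
        d = np.linalg.norm(train_f - f, axis=1)   # distances to all training features
        votes = train_y[np.argsort(d)[:k]]        # labels of the k closest points
        errors += int(np.bincount(votes).argmax() != y)
    return errors / len(test_y)

rng = np.random.default_rng(0)
# Two well-separated Gaussian classes in a 16-d feature space.
centers = np.stack([np.zeros(16), 8 * np.ones(16)])
train_y = rng.integers(0, 2, size=200)
train_f = centers[train_y] + rng.normal(size=(200, 16))
test_y = rng.integers(0, 2, size=100)
test_f = centers[test_y] + rng.normal(size=(100, 16))

print(knn_error(train_f, train_y, test_f, test_y, k=5))
```

Generalizable features, in this sense, are those whose class clusters stay separated enough for such a parameter-free classifier to succeed.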
---
> “I am open to raising this score if these concerns are addressed in the rebuttal or the revision and depending on the discussions.”
We hope our rebuttal addressed your concerns. We are happy to engage in a follow-up discussion.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. It addresses the comments and I raised the score to 6.
For the revision I suggest discussing the phenomenon in W1 more prominently and in the main paper, not in the appendix. | Summary: This submission studies a new property of deep networks trained with sharpness-aware minimization (SAM), namely feature rank reduction. The existence of this property is supported by experimental analysis on image classification, and on contrastive language-image tasks, as well as theoretical analysis on a two layer ReLU network.
Strengths: - Understanding the properties of models trained with SAM is a relevant topic, which has gained a lot of interest in the research community in recent years
- The main finding of this submission – features of lower rank for SAM-trained models – is well supported empirically, through experiments on different tasks and datasets
- The submission is quite well written, most claims are well supported and is technically sound
Weaknesses: - While it is quite clear from Figures 1-4 that models trained with SAM, particularly with large values of $\rho$, exhibit lower feature rank, the rank differences between SGD and SAM are less substantial when we consider the optimal values of $\rho$, i.e. achieving lowest test error. This can be seen better in Figures 3 and 4, as well as Table 1. For example, while the rank reduction seems to be monotonic in $\rho$, the same is not true for generalization error. It would have been better if the authors presented a unified graph of this effect, for example a heat map showing the interplay between generalization and feature rank, as a function of $\rho$.
- Following the previous point, it is not clear what would be the practical usefulness of a relatively small rank reduction, achieved for the optimal value of $\rho$. It would have been more convincing if the authors also presented a practical application illustrating the consequences of rank reduction of SAM (e.g. the authors mention faster retrieval).
- Similarly, looking at the middle plot from Figures 1-3 showing the kNN error, there does not seem to be a direct correlation between generalization and rank reduction. Overall, the scope of these plots is not very well explained.
- There clearly seems to be a very different behavior between the “minimal” and “state-of-the-art” settings: while for the minimal setting the rank seems to stabilize during training, for state-of-the-art it actually increases after an initial drop. There is almost no mention of this phenomenon in the submission.
- It is not clear whether the results in Proposition 1 imply a decrease in the pre-activation values which would eventually lead to sparse features, as the authors mention. For example, the other term in the equation below line 205 could be negative. The authors should better clarify what is the exact implication of this proposition.
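As background for the role of $\rho$ discussed in the weaknesses above: SAM first ascends by $\rho$ along the normalized gradient and then uses the gradient evaluated at that perturbed point, which amplifies the update most along sharp directions. A minimal sketch on a toy quadratic (illustrative values only, not the paper's setup):

```python
import numpy as np

# Toy quadratic loss with one sharp and one flat direction:
# L(w) = 0.5 * (10 * w1^2 + 0.1 * w2^2), so grad(w) = a * w.
a = np.array([10.0, 0.1])
grad = lambda w: a * w

def sam_gradient(w, rho):
    """SAM's gradient: ascend rho along the normalized gradient,
    then evaluate the gradient at the perturbed point."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return grad(w + eps)

w = np.array([1.0, 1.0])
r = sam_gradient(w, rho=0.05) / grad(w)
print(r)  # amplified ~5% along the sharp direction, ~0.05% along the flat one
```

Larger $\rho$ strengthens this sharp-direction amplification, which is one intuition for why the feature-rank effect in the paper grows monotonically with $\rho$ while generalization does not.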
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - It would be better to change the color scheme in Figures 1-3, as it can be quite difficult to distinguish between some of the values of $\rho$
- It is hard to judge where the differences between the “minimal” and “state-of-the-art” settings come from. The authors mention in Section 4.1 that SAM has a different behavior from weight decay, but weight decay could also lead to low-rank features, as some results suggest. It would have been better to disambiguate the effect of weight decay, momentum and large learning rate, by, for example, performing experiments with only weight decay switched off, but keeping the other settings the same.
- The authors mention in Section 3.1 that Cifar-100 was used for feature kNN classification, but it looks like in Figure 3 for Tiny ImageNet, Cifar-10 was used instead. Could the authors clarify this?
- It looks like the decrease in feature rank from SAM is less pronounced for Tiny ImageNet than for Cifar datasets. What would be the trend for the same experiment performed on ImageNet? Additionally, I think it would have been more interesting to check the generalization of the features on less related datasets, such as Pets, Flowers or Birds (e.g. please see Kornblith et al., 2018, “Do better ImageNet models transfer better?” for examples of transfer tasks)
- As I am not familiar with contrastive language-image training, could the authors please clarify whether the setup they are using in Section 3.3 for finetuning using the InfoNCE contrastive loss is a standard one, and, if so, give the appropriate citations? Otherwise, it feels like more details on this setup are needed for reproducibility. Also, the InfoNCE loss should be cited.
- Similarly, in Section 4.1, could the authors clarify what the teacher-student setup is, exactly, and give the appropriate citation? Also, please mention what dataset you are using in this experiment.
- Can the authors clarify the exact implications of Proposition 1? As I previously mentioned, I think this doesn’t imply that the pre-activations are driven towards negative values, as mentioned in lines 204-205, since one of the terms in the update can be negative (due to $a_j$).
- In Section 5 it seems that for post-activation ResNets (the standard version, actually) the rank decrease is less pronounced in earlier blocks. What is the behavior in this case for pre-activation ResNets, and what do the authors believe would be the reason for this?
- Can the observations that the rank reduction from SAM is related to activation sparsity be used to leverage more efficient training of neural networks, or at least reduce the training costs of SAM? Can the authors please comment on that?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe one important limitation of the study presented in this paper is the lack of practical evidence regarding the usefulness of low rank features from SAM. However, the main conclusions are well supported by plenty of experiments spanning multiple datasets and tasks, and the submission is technically solid, which ultimately motivates my final rating.
------------------------------------
----- Edited after rebuttals------
------------------------------------
After reading the authors' response, I decided to keep my initial score, while increasing the score for the Contribution from 2 to 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the extremely detailed feedback!
---
> *the rank differences between SGD and SAM are less substantial when we consider the optimal values of $\rho$*
We totally agree: the low-rank effect can be much stronger if we are allowed to sacrifice some accuracy compared to the best SAM model. We will make it clearer in the paper and present a scatter plot of generalization vs. feature rank for models trained with different $\rho$.
> *practical usefulness of a relatively small rank reduction, achieved for the optimal value of $\rho$? … a practical application illustrating the consequences of rank reduction of SAM (e.g. the authors mention faster retrieval)?*
It is true that for the *optimal* $\rho$, the rank decrease is not too large. However, if we take the largest $\rho$ that still improves upon standard training, the rank decrease is more substantial (in many cases, up to a $30\%$ rank reduction). While we did not present a direct practical application for faster retrieval, naive exhaustive search is linear in the dimension, i.e., the rank reduction directly translates to a faster search. The complexity of practical nearest neighbor search methods varies, and various approximations are widely used. We considered this a distinct topic and decided to focus solely on the reduction in the dimensionality of the embedding space.
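As a rough illustration (entirely our own sketch, not from the paper), the numerical feature rank can be estimated from singular values, and projecting features onto their top singular directions shrinks the per-query cost of exhaustive nearest neighbor search proportionally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 1000 samples in 512 dimensions, but with
# an effective (numerical) rank of 64, mimicking low-rank SAM features.
features = rng.normal(size=(1000, 64)) @ rng.normal(size=(64, 512))
features += 1e-6 * rng.normal(size=(1000, 512))

def numerical_rank(x, tol=1e-3):
    """Number of singular values above tol times the largest one."""
    s = np.linalg.svd(x, compute_uv=False)
    return int((s > tol * s[0]).sum())

r = numerical_rank(features)

# Project onto the top-r right singular directions: exhaustive nearest
# neighbor search then costs O(n * r) instead of O(n * 512) per query.
_, _, vt = np.linalg.svd(features, full_matrices=False)
compressed = features @ vt[:r].T

print(r, compressed.shape)  # 64 (1000, 64)
```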
> *the scope of these plots [the middle plot from Figures 1-3 showing the kNN error] is not very well explained.*
We will improve our explanation for why we reported the kNN error. Basically, we wanted to confirm the generalizability of the low-rank features taken at an *intermediate layer*. This experiment highlights the suitability of the intermediate features for transfer learning, especially when using nearest neighbor-based classification. Without this experiment, one could assume that since these features are not from the penultimate layer, they can be of limited use for downstream tasks.
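For instance (a toy sketch of ours, not the paper's evaluation code), kNN classification on frozen features takes only a few lines; this is the kind of probe that tests how well intermediate-layer features transfer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for intermediate-layer features extracted on a dataset
# different from the training one: two classes, one shifted coordinate.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)
feats = rng.normal(size=(n, d))
feats[:, 0] += 6.0 * labels  # class-dependent shift in one direction

def knn_error(train_x, train_y, test_x, test_y, k=5):
    """Fraction of test points misclassified by a majority kNN vote."""
    wrong = 0
    for x, y in zip(test_x, test_y):
        dist = np.linalg.norm(train_x - x, axis=1)
        votes = train_y[np.argsort(dist)[:k]]
        if np.bincount(votes, minlength=2).argmax() != y:
            wrong += 1
    return wrong / len(test_y)

err = knn_error(feats[:150], labels[:150], feats[150:], labels[150:])
print(err)  # well below chance level: the features transfer
```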
> *while for the minimal setting the rank seems to stabilize during training, for state-of-the-art it actually increases after an initial drop*
The initial drop is unrelated to SAM and we verified that it originates from the usage of initial large learning rates. We will discuss this observation further and add an ablation study.
> *It is not clear whether the results in Proposition 1 imply a decrease in the pre-activation values which would eventually lead to sparse features*
Indeed, there is no guarantee that the pre-activations will necessarily decrease on each iteration of SAM because of the potential cancellation with the first term. However, the second term always biases the training dynamics towards smaller pre-activations: in case of positive first term, SAM will make it larger, while if the first term is negative, SAM will make it smaller. We agree that it is not a strong theoretical result, but we think that it still provides an intuition about the origin of the low-rank effect.
---
> *The authors mention in Section 3.1 that Cifar-100 was used for feature kNN classification, but it looks like in Figure 3 for Tiny ImageNet, Cifar-10 was used instead.*
In this experiment, we wanted to measure the *transfer learning performance*, so we had to choose a dataset different from the one on which the model was trained. Thus, for CIFAR-10 we used CIFAR-100, for CIFAR-100 we used CIFAR-10, and for Tiny ImageNet we used CIFAR-10 again.
> *It looks like the decrease in feature rank from SAM is less pronounced for Tiny ImageNet than for Cifar datasets. What would be the trend on ImageNet?*
We believe the rank decrease depends on the degree of overparameterization. When using the same network on Tiny ImageNet and CIFAR-10, the network trained on Tiny ImageNet will require more dimensions to fit the data. So we expect that for networks of the same size, the rank reduction will be less prominent on the full ImageNet. However, for a larger network (as typically used for larger datasets), we expect the rank reduction to be as prominent as in our current experiments.
> *for post-activation ResNets the rank decrease is less pronounced in earlier blocks. What is the behavior in this case for pre-activation ResNets, and what do the authors believe would be the reason for this?*
The behavior for pre-activation ResNets is close to the behavior of vision transformers: the rank reduction due to SAM occurs gradually, and mostly happens at later layers of the network. Intuitively, we think that the first layers learn a variety of generic features (e.g., various edge and color detectors) which are shared for different training methods. In later layers, these basic features might be combined in multiple ways. While redundant dimensions will be automatically “pruned” by SAM, they may persist with standard training.
> *Can the observations that the rank reduction from SAM is related to activation sparsity be used to leverage more efficient training of neural networks?*
Perhaps, iterative pruning procedures which are employed to prune weights can be adapted to prune the whole redundant neurons or some redundant subspaces. This is an interesting direction to explore.
---
Following your recommendations, we will also incorporate the following changes:
- Changing the color scheme in Figures 1-3.
- Providing an ablation study where we include the components of the state-of-the-art vs. minimal setting one-by-one.
- Adding more details for reproducibility of the CLIP setting and the appropriate citation to the [InfoNCE loss](https://arxiv.org/abs/1807.03748).
- Describing the teacher-student setup in more detail: the goal of the student network is to recover the teacher network from a finite set of training points that are sampled from a Gaussian distribution and labeled by the teacher network. We will add appropriate references that consider the same setup.
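Concretely, this teacher-student setup can be sketched as follows (a minimal version; all sizes are hypothetical choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# The teacher is a fixed one-hidden-layer ReLU network; the student
# would be trained to recover it from Gaussian inputs that the teacher
# labels. Sizes here are hypothetical choices of ours.
d, m, n = 10, 8, 256
W_teacher = rng.normal(size=(m, d))   # hidden-layer weights
a_teacher = rng.normal(size=m)        # output weights

def teacher(x):
    return np.maximum(x @ W_teacher.T, 0.0) @ a_teacher

X = rng.normal(size=(n, d))  # training points sampled from N(0, I)
y = teacher(X)               # regression targets from the teacher

print(X.shape, y.shape)  # (256, 10) (256,)
```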
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed answers! After also reading the other reviews, I would like to keep my score. While the current work could be improved by providing further evidence on larger datasets and also by better emphasising the practical implications, I believe the low-rank property of SAM-trained models is an interesting observation, which would benefit the community. | Summary: The paper proposes a new optimization method called Sharpness-Aware Minimization (SAM) that aims to improve generalization performance in deep learning models. The authors demonstrate that SAM can effectively reduce the generalization gap and improve the accuracy of various models on different datasets. The paper also provides a theoretical analysis of SAM and shows that it encourages the optimization process to converge to flatter minima, which can lead to better generalization. Overall, the paper presents a novel and promising approach to improving generalization in deep learning models.
Strengths: 1. The paper's contributions are significant in several ways. First, SAM is a promising optimization method that can improve the generalization performance of deep learning models. Second, the paper provides a mechanistic understanding of how SAM leads to low-rank features in neural networks, which can have implications for more efficient feature quantization and nearest neighbor retrieval. Finally, the paper's theoretical analysis of SAM can provide insights into the optimization landscape of deep learning models, which can lead to further improvements in optimization methods. Overall, the paper is a significant contribution to the field of deep learning optimization.
2. The paper presents a thorough empirical evaluation of SAM on various deep learning models and datasets. The authors demonstrate that SAM can effectively reduce the generalization gap and improve the accuracy of the models. The paper also provides a mechanistic understanding of how SAM leads to low-rank features in neural networks, which is a valuable contribution to the field.
3. The paper introduces a novel optimization method called Sharpness-Aware Minimization (SAM) that is different from traditional optimization methods. SAM aims to minimize the sharpness of the loss function, which encourages the optimization process to converge to flatter minima. This approach is different from other methods that focus on minimizing the loss function itself or its gradient. The authors also provide a theoretical analysis of SAM, which further demonstrates its originality.
Weaknesses: 1. Sensitivity to batch size: Since sharp minima are often observed in large batch size training, which is becoming increasingly important in current large model-based methods such as batch normalization and dropout, it is crucial to investigate the relationship between batch size and performance in the proposed method. However, the authors only show the results for batch sizes of 128 and 256 in this paper and the appendix, which limits the generalizability of the findings.
2. The observation that sharp minima hurt generalization is easier to make in large-scale training, especially for large models. Therefore, it is recommended that the authors test their proposed method on large-scale training, such as CLIP. Conducting research on the low-rank effect on larger models, such as CLIP or other large text-image training, would make the proposed method more valuable, considering the current trend.
3. The authors are recommended to compare and discuss a recent related work, "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization," which is a highlight paper at CVPR2023.
Note: If the authors respond with experiments, even preliminary results are acceptable. For example, showing the relationship between batch size and performance by investigating a few batch size training settings would be sufficient. Additionally, presenting some simple fine-tuning results on pre-trained CLIP using the proposed method would be highly appreciated.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Questions: Please see Weaknesses. I would like to update my evaluation after the discussion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the positive comments.
> *Sensitivity to batch size: … it is crucial to investigate the relationship between batch size and performance in the proposed method. However, the authors only show the results for batch sizes of 128 and 256 in this paper and the appendix. … For example, showing the relationship between batch size and performance by investigating a few batch size training settings would be sufficient.*
We agree that this is an interesting question. **We attach the results of this experiment in the one-page pdf in the global response.** Similarly to the experiments reported in the paper that were done with batch size $256$, SAM with larger batch sizes ($512$ and $1024$) also improves test error, leads to more generalizable features, and noticeably reduces the feature rank at the intermediate ResNet block.
> *The observation that sharp minima hurt generalization is easier to make in large-scale training, especially for large models. Therefore, it is recommended that the authors test their proposed method on large-scale training, such as CLIP. Conducting research on the low-rank effect on larger models, such as CLIP or other large text-image training, would make the proposed method more valuable, considering the current trend.*
We do not have the computational budget to do large-scale training from scratch at the scale of the original CLIP training, which involves training on 400 million image-caption pairs. However, we believe that our results with the *CLIP training objective* presented in **Section 3.3: Low-rank features in contrastive language-image training on MS-COCO** already point out the useful role of SAM in this setting.
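For reference, the InfoNCE objective used in CLIP-style contrastive training can be sketched as follows (our own minimal numpy version with hypothetical embedding sizes; the actual training uses learned image and text encoders):

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(img, txt, tau=0.07):
    """Symmetric InfoNCE: matched pairs are positives, the rest of the
    batch serves as negatives (van den Oord et al., 2018)."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau  # cosine similarities, temperature-scaled
    def ce_diag(z):             # -mean log-softmax of the diagonal
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        return -np.mean(np.diag(z) - np.log(np.exp(z).sum(axis=1)))
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))

img_emb = rng.normal(size=(8, 32))
loss_random = info_nce(img_emb, rng.normal(size=(8, 32)))
loss_matched = info_nce(img_emb, img_emb)  # perfectly aligned pairs
print(loss_matched < loss_random)  # True: alignment lowers the loss
```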
> *The authors are recommended to compare and discuss a recent related work, "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization," which is a highlight paper at CVPR2023.*
Thank you for this reference. During the rebuttal phase, we have done new experiments with this method using ResNets on CIFAR-10 with the same setup as in our main experiments from **Section 3.1**. We select the default settings given in their code repository and vary only the perturbation radius $\rho$. We obtain the following results which confirm that the low-rank observation also extends to other recent SAM variants (in addition, we also explored [ASAM](https://arxiv.org/abs/2102.11600) to answer the concern of **Reviewer jzeR**).
| $\rho$ of [GAM](https://arxiv.org/abs/2303.03108) | 0.0 | 0.2 | 0.4 | 0.8 | 1.6 |
| - | - | - | - | - | - |
| Test error | 4.04% | 3.65% | 3.64% | 3.81% | 4.81% |
| Feature rank | 7633 | 7381 | 7303 | 6897 | 6927 |
> *“Additionally, presenting some simple fine-tuning results on pre-trained CLIP using the proposed method would be highly appreciated.”*
We agree this is an interesting experiment. We will include it in the revised version of the paper.
> *Note: If the authors respond with experiments, even preliminary results are acceptable.*
We hope our rebuttal and our new experiments addressed your concerns. We are happy to engage in a follow-up discussion.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: Thanks to the author's response, I'm inclined to keep the original rating on the positive side. Also, I look forward to the author discussing (not experimentally comparing) the differences in philosophy between this submission and other related papers (e.g., ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks and Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization). | Summary: This paper investigates the effect of Sharpness-Aware Minimization (SAM) on low-rank features learned by neural networks. The authors present empirical evidence of low-rank features for different models trained with SAM on four classification tasks, as well as for contrastive text-image training. They also provide a mechanistic understanding of how low-rank features arise in a simple two-layer ReLU network. The authors discuss the implications of low-rank features learned by SAM, including more efficient retrieval and feature quantization. They also suggest future research directions, such as understanding the impact of SAM on learned features that lead to generalisation improvements on natural data, and further theoretical analysis of the low-rank effect of SAM for more complex architectures.
Strengths: - The paper presents empirical evidence of low-rank features for different models trained with SAM on different tasks.
- The authors provide a mechanistic understanding of how low-rank features arise in a simple two-layer ReLU network.
- The implications of SAM-trained low-rank features are discussed in detail, including more efficient retrieval and feature quantization.
- The authors suggest future research directions that could build on the results of this paper.
- The paper is well organised and clearly written.
Weaknesses: - The paper is interesting, but only the observational results are presented instead of the methodological contributions based on the observation.
- The paper does not provide a comprehensive comparison of recent SAM variants.
- The empirical evidence presented is limited to a few datasets and models that may not generalise to other scenarios.
- The paper does not explore the impact of low-rank features on other tasks beyond retrieval and quantification.
- The theoretical analysis of the low-rank effect of SAM is limited to simple architectures and may not apply to more complex architectures.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The following directions would be interesting:
- Investigate the impact of SAM on learned features that lead to generalisation improvements on natural data, beyond retrieval and quantization tasks.
- Investigate the low-rank effect of SAM on more complex architectures, such as those involving skip connections and self-attention layers.
- Develop a theoretical framework to explain the low-rank effect of SAM and its relationship to other optimisation methods.
- Investigate the impact of SAM on transfer learning and fine-tuning scenarios.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the Weakness and Question sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the feedback.
> *The paper is interesting, but only the observational results are presented instead of the methodological contributions based on the observation.*
We believe a new paper does not necessarily have to present a new methodological contribution. For example, ["Understanding deep learning requires rethinking generalization"](https://arxiv.org/abs/1611.03530) has been very impactful in the community, although it did not present a new method. We believe that a better understanding of existing methods such as SAM can also be very useful. We think this should not be treated as a weakness of our work.
> *The paper does not provide a comprehensive comparison of recent SAM variants.*
We chose the *original* SAM since it is still the most popular SAM variant in the community and it is implemented without any further approximations. However, we acknowledge that there have been many recent variants of SAM such as [ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks](https://arxiv.org/abs/2102.11600) and [Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization](https://arxiv.org/abs/2303.03108) (suggested by **Reviewer fXy2**). Thus, we have done new experiments with them using ResNets on CIFAR-10 with the same setup as in our main experiments from **Section 3.1**. We select the default settings given in their code repositories (which includes a smaller network for ASAM compared to GAM) and vary only the perturbation radius $\rho$. We obtain the following results which confirm that the low-rank observation also extends to other recent SAM variants.
| $\rho$ of [ASAM](https://arxiv.org/abs/2102.11600) | 0.0 | 0.5 | 1.0 | 2.0 | 4.0 |
| - | - | - | - | - | - |
| Test error | 7.29% | 6.53% | 6.38% | 7.12% | 10.64% |
| Feature rank | 5048 | 4801 | 4699 | 4578 | 4383 |
| $\rho$ of [GAM](https://arxiv.org/abs/2303.03108) | 0.0 | 0.2 | 0.4 | 0.8 | 1.6 |
| - | - | - | - | - | - |
| Test error | 4.04% | 3.65% | 3.64% | 3.81% | 4.81% |
| Feature rank | 7633 | 7381 | 7303 | 6897 | 6927 |
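For context, the original SAM update that these variants build on can be sketched in a few lines (our own toy example on a quadratic loss, not the paper's code): an ascent step of radius $\rho$ followed by a descent step using the gradient at the perturbed point.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w with one sharp and one flat
# direction; ASAM and GAM modify the inner perturbation but keep the
# same two-step structure.
A = np.diag([10.0, 1.0])

def grad(w):
    return A @ w

def sam_step(w, rho=0.1, lr=0.05):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent to a worst case
    return w - lr * grad(w + eps)                # descend from there

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w)
print(np.round(w, 3))  # ends up near the minimum at the origin
```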
> *The empirical evidence presented is limited to a few datasets and models that may not generalise to other scenarios.*
We would like to kindly point out that we already have multiple datasets (CIFAR-10, CIFAR-100, Tiny ImageNet, ImageNet, MS-COCO, synthetic data) and models (ResNets, Vision Transformers, MLP-Mixers, text transformers in BERT). We will be happy to add further datasets and models if you have some particular suggestions.
> *The paper does not explore the impact of low-rank features on other tasks beyond retrieval and quantification.*
In the paper, we present results on multiple tasks which cover classification (the standard deep learning benchmarks in Section 3.1 and ImageNet in Section 3.2), contrastive learning (the multimodal retrieval in Section 3.3), and regression (the teacher-student setup in Section 4.1). Moreover, the retrieval setting includes both image and language modalities. We would appreciate it if you could suggest some other valuable settings to explore, which we will be happy to include.
> *The theoretical analysis of the low-rank effect of SAM is limited to simple architectures and may not apply to more complex architectures.*
We believe that one-hidden-layer ReLU networks are quite insightful and point out the origins of the low-rank feature phenomenon. Moreover, we believe a similar argument should hold for deeper networks, as we illustrate empirically in Figure 6 for a post-activation ResNet-18.
> *[The following directions would be interesting] Investigate the impact of SAM on learned features that lead to generalisation improvements on natural data, beyond retrieval and quantization tasks.*
We recognize this as a great open question; however, our paper's focus is deliberately centered on the low-rank phenomenon of SAM rather than its broader generalization benefits.
> *[The following directions would be interesting] Investigate the low-rank effect of SAM on more complex architectures, such as those involving skip connections and self-attention layers.*
We have already investigated this question precisely in **Section 5: Investigation of low-rank mechanisms on deep networks**, see paragraphs **Post-activation ResNet on CIFAR-10** and **Pre-activation ViT on MS-COCO**. We hope these paragraphs would address your concern.
> *[The following directions would be interesting] Develop a theoretical framework to explain the low-rank effect of SAM and its relationship to other optimisation methods.*
We see our theoretical result on one-hidden-layer ReLU networks as the first step in that direction. We agree that in the future, stronger and more general results would be of great interest.
> *[The following directions would be interesting] Investigate the impact of SAM on transfer learning and fine-tuning scenarios.*
Actually, we have already investigated the transfer learning scenario in **Section 3.1: Low-rank features for ResNets on standard classification tasks** by using the kNN classifier on the extracted features of deep networks. We tested how well the features from CIFAR-10 transfer to CIFAR-100, and from CIFAR-100 and Tiny ImageNet to CIFAR-10. We observed that SAM improves the transfer learning performance in these settings.
As for fine-tuning, we have investigated it in the CLIP training in **Section 3.3 Low-rank features in contrastive language-image training on MS-COCO** where we fine-tuned a pre-trained R+Ti/16 vision transformer and BERT on MS-COCO using the InfoNCE contrastive loss. We observed that SAM both leads to better generalization and features of lower rank even for fine-tuning.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response. My concerns have been addressed and I will raise my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for the detailed feedback and positive evaluation such as
- *“The implications of SAM-trained low-rank features are discussed in detail, including more efficient retrieval and feature quantization”* (**Reviewer jzeR**)
- *"The paper also provides a mechanistic understanding of how SAM leads to low-rank features in neural networks, which is a valuable contribution to the field"* (**Reviewer fXy2**)
- *“the main conclusions are well supported by plenty of experiments spanning multiple datasets and tasks, and the submission is technically solid”* (**Reviewer 2Roy**)
- *"The main claim of the paper (rank reduction) is supported by extensive empirical evidence."* (**Reviewer wq2P**)
Following the reviewers’ suggestions, we have extended our empirical evaluation by adding the following experiments:
- results with different batch sizes (512 and 1024 in addition to 256 reported in the paper) on CIFAR-10 (**see the attached 1-page pdf**),
- results with different SAM variants on CIFAR-10 ([ASAM](https://arxiv.org/abs/2102.11600) and [GAM](https://arxiv.org/abs/2303.03108)).
In the revised version, we will further expand the experiments and add the following:
- ablation of the minimal vs. state-of-the-art settings for classification tasks (including analysis of the feature rank drop at the beginning of training),
- results of fine-tuning with SAM on a pre-trained CLIP model,
- experiments on the role of augmentations for classification datasets,
- behavior of the feature rank at the penultimate layer on CIFAR-10, CIFAR-100, and Tiny ImageNet.
We will also carefully take into account all writing and presentation suggestions including:
- presenting a scatter plot of generalization vs. feature rank for models trained with different $\rho$,
- adding a clarification that we included the kNN error to check the generalizability of the features from the intermediate layer,
- emphasizing the U-shaped trend of test error vs. $\rho$,
- describing the teacher-student setup in more detail,
- adding the suggested additional citations.
We thank the reviewers again, and we are happy to engage in a follow-up discussion.
Pdf: /pdf/fe574d64d755a1c6c4e952328ddc079d549ad3df.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Improving Graph Matching with Positional Reconstruction Encoder-Decoder Network | Accept (poster) | Summary: In this paper, the authors propose to improve the existing deep graph matching paradigm via positional reconstruction. The authors claim that existing deep GM works mainly focus on visual features to compute the affinities while neglecting the locations and positions of the keypoints in the graphs. To this end, they propose to capture the spatial features as well as the visual features in their model PREGM. The proposed encoder-decoder PR-EnDec can learn effective graph positional encodings with well-designed loss functions. The experiments on multiple datasets show the performance of the proposed method.
Strengths: 1. The consideration of utilizing spatial information is interesting to me since existing deep GM works mainly focus on how to improve the extraction of visual features. The positional encoding is further fused with the visual features, and updated by the well-defined loss functions.
2. The experiments on multiple datasets show that the proposed method PREGM can outperform current SOTA methods.
3. The paper is well-written and easy to follow.
Weaknesses: 1. In fact, in the framework of existing works such as BBGM, the geometry information is already the input of SplineCNN. So, I think the authors may over-claim their novelty since existing works also consider spatial information to some extent.
2. In the experiments on the Willow and SPair71k datasets, not all baselines are reported in the table. I wonder whether there is a reason for that. I checked the best baseline (COMMON) on the Pascal VOC dataset and found that they reported their performance on SPair71k (84.5\%), which is higher than the performance of the proposed method PREGM.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. I think the authors should further show the difference in spatial information between their work and BBGM since BBGM does consider spatial information.
2. The experiments table may need to be completed. I think it is OK when your method does not outperform all the baselines in one of the datasets, but you need to show the results, right?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your thorough evaluation and insightful comments on our paper. Your feedback has significantly contributed to refining the clarity and impact of our work.
We are committed to addressing your queries and suggestions:
Claim of Novelty Regarding Spatial Information: We sincerely acknowledge your perspective and understand that the framework of existing works, such as BBGM, incorporates spatial information to a certain extent. In our manuscript, we aim to highlight that we are further applying spatial information in a distinct manner. We appreciate your guidance in framing our contribution accurately.
Difference in Spatial Information:
We thank the reviewer for suggesting a deeper analysis of the distinction in spatial information between our work and BBGM. Our method introduces spatial information at multiple levels:
a. Higher-order Information: The parameters involved in SplineCNN training within BBGM do not encompass spatial information beyond order 2 (edges). Our method, on the other hand, extends this by incorporating spatial information directly into the positional encoding of individual nodes.
b. Global Information: Our encoder directly facilitates information exchange between nodes, bypassing the reliance on edge construction through triangular dissection. This design choice enables better global spatial information utilization compared to BBGM.
As the reviewer pointed out, both approaches harness spatial information, but the embodiment and integration of this information differ significantly. We will elucidate these distinctions further in the revised manuscript to ensure clarity.
Incomplete Experimental Results:
We acknowledge the reviewer's concern regarding missing baseline results in the experimental table, particularly for the Willow and SPair71k datasets. We appreciate the suggestion to include all relevant baseline data. The omission of these results was due to the unavailability of some baseline methods' data on these specific datasets.
Specifically, the baseline method COMMON exhibits an accuracy of 84.5\% on the SPair-71k dataset, surpassing our proposed method. However, we observed a discrepancy in the reported accuracy of BBGM across different experimental sources (78.9\% in the original paper, 82.1\% in COMMON's experimental data), which makes it challenging to directly compare results. Moreover, differences in dataset parameters may contribute to these variations. We have taken the reviewer's advice seriously, tested the open-source code of COMMON with the same settings, and achieved an accuracy of 82.85\% on the SPair-71k dataset. We will include this result in the revised manuscript.
Furthermore, we apologize for any confusion caused by not fully populating the experimental table. We commit to diligently addressing this concern by providing comprehensive experimental results, including those where our method does not achieve the best performance. We appreciate the reviewer's understanding and will ensure that our revised manuscript reflects these improvements.
Your hints have been immensely valuable in providing context and guiding our responses. Your dedication to a thorough evaluation has motivated us to enhance the rigor and clarity of our research.
Once again, we express our gratitude for your diligence and insightful feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. My doubts have been cleared.
However, I take a look at other reviewers' comments and find one thing I care about:
In your reply to Reviewer LuNk, you claimed that "We want to clarify that the convention of comparing graphs with equal numbers of points is commonly used in graph matching.", which I do not think is the case. In many existing deep GM works, such as NGM and BBGM, the number of points in the two images is not required to be equal. In the BBGM paper, they use two filtering strategies to handle the cases where the two images do not have the same keypoints, namely Intersection filtering and Inclusion filtering, shown in Figure 6 of the BBGM paper. Therefore, I do not think the "Keypoints Number Constraint" claim is correct.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comment. We appreciate your engagement with our response to Reviewer LuNk's concerns. Your feedback has provided us with valuable perspectives that will guide our research in the future. We would like to clarify our stance on the matter. While it's true that certain deep graph matching works like NGM and BBGM do not strictly require an equal number of points in the compared graphs, we also want to emphasize that the convention of comparing graphs with equal numbers of points is indeed prevalent in the field of graph matching.
We acknowledge the inclusion filtering method employed in the BBGM paper to handle cases where two images do not have the same points. Non-equal point matching is indeed valuable for addressing more flexible matching scenarios. However, our claim about the "Keypoints Number Constraint" was intended to highlight a common practice in the field and was not meant to discount alternative approaches.
We also agree that exploring non-equal point matching is an interesting avenue for future research. As you mentioned, it's clear that the BBGM and NGM papers have demonstrated successful techniques for addressing this challenge. Moving forward, we plan to investigate such approaches as part of our ongoing efforts to enhance the robustness and versatility of graph matching techniques. | Summary: This paper introduces a positional reconstruction encoder-decoder (PR-EnDec) to model intrinsic graph spatial structure for image key-points matching. By using graph to represent the image structure, the proposed model can utilize the high-order spatial information by reconstructing the locational structure of graphs contained in the node coordinates.
Strengths: 1. The structure of the whole manuscript is good. The introduced positional encoder learns effective graph positional encodings with affine transformation invariance.
2. The figures and tables are well illustrated.
Weaknesses: 1. For experimental comparison, the compared methods are old. Besides, the evaluation datasets, such as PascalVOC [8], Willow ObjectClass [4], and SPair-71k, are small and somewhat old. More recent large-scale datasets, such as PhotoTourism (IMC-PT) 2020, should be evaluated.
2. Given the problem formulation of graph matching in Sec. 3, the two compared graphs have the same number of nodes. However, for real applications, this keypoint-number constraint is not reasonable.
3. Figure 2 shows the framework of the proposed PR-EnDec network, which consists of the detailed structure of its positional encoder and spatial relation decoder. However, in Figure 1, the Spatial Relation Decoder is missing.
4. Compared to previous works, the two-stage training pipeline is complex.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The proposed method takes reference [28] as the baseline model (line 278); thus, this method should be compared on all three datasets.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our sincere appreciation for your diligent review and insightful feedback on our paper. Your thoughtful comments have greatly contributed to the enhancement of our work.
We are committed to addressing your concerns and queries:
Experimental Comparison and Dataset Choice: We thank you for highlighting the importance of using recent and large-scale datasets for evaluation. In response, we have included experimental results on the PhotoTourism (IMC-PT) 2020 dataset in the Appendix. Furthermore, we will present comprehensive tabular data and in-depth analysis of the results in the revised manuscript to ensure a thorough understanding of the proposed method's performance.
Keypoints Number Constraint in Problem Formulation: We want to clarify that the convention of comparing graphs with equal numbers of points is commonly used in graph matching. However, we acknowledge the importance of considering scenarios with unequal keypoints, as it aligns more closely with real-world applications. In line with your suggestion, we are planning to explore and incorporate unequal point matching as a valuable future direction for our research.
Figure 1 and Spatial Relation Decoder: We appreciate your attention to the spatial relation decoder. While it might appear missing in Figure 1 (graph matching phase), please note that it is not absent from the overall framework. The decoder is employed specifically to enhance the training of the positional encoder. Its primary purpose is to improve the performance of the positional encoder by providing a reconstruction target during the training phase.
Complexity of Two-Stage Training Pipeline: Your observation about the complexity of the two-stage training pipeline is accurate. We acknowledge that the concurrent training of positional encoders and graph matching presents a certain level of complexity. In light of this, we'd like to highlight that even the existing approach DGMC employed a two-stage training pipeline. While our results demonstrate superior performance, we acknowledge your point. Moving forward, we envision training positional encoders and conducting graph matching simultaneously as a promising direction.
Comparison with Baseline Model: We appreciate your observation and confirm that we have compared the proposed method with the baseline model BBGM on all three datasets.
Once again, we are genuinely grateful for your feedback, and we are committed to addressing these issues to ensure the clarity, comprehensiveness, and effectiveness of our paper. Your dedication to fostering high-quality research and constructive critique is invaluable.
Thank you for your time and consideration.
---
Rebuttal Comment 1.1:
Title: About the rebuttal
Comment: Thanks for providing the results. The rebuttal has answered all my concerns. I raise the score to weak accept. | Summary: The paper introduces an improved method for graph matching in semantic keypoint matching - the Positional Reconstruction Encoder-Decoder Network (PR-EnDec) and an end-to-end graph matching network PREGM. PR-EnDec efficiently learns node spatial embedding and reconstructs the locational structure of graphs from node coordinates. PREGM models the intrinsic spatial structure of keypoints and captures visual information, enhancing node positional features. Tests on three keypoint matching datasets showed improved performance over existing methods, with an ablation study demonstrating the effectiveness of each PREGM component.
Strengths: The paper introduces an innovative PR-EnDec model to effectively capture and utilize the spatial context information hidden in the locations of keypoints, which has not been adequately addressed in existing methods. The PR-EnDec incorporates a positional encoder and a spatial relation decoder, which not only capture the relative spatial relations but also learn the affine transformation invariance, enabling the network to learn more refined location information. Given the widespread use of image matching in various fields, such as object tracking, image retrieval, and pose estimation, the proposed improvements could have significant impacts on a variety of applications.
Weaknesses: 1. Lack of Visualizations
While the paper presents an innovative approach and comprehensively assesses the performance of the proposed method, I would like to express a concern regarding the absence of sufficient visualizations. Visualizations can offer empirical evidence of model performance and further support the reported quantitative results. For instance, visual representations of keypoint matching results can provide intuitive insights into how the proposed method works in practice and highlight its ability to handle complex spatial relationships. In light of the above, the lack of sufficient visualizations in this paper might impede comprehensive understanding and thorough evaluation of the presented method. Therefore, I would suggest incorporating necessary visual aids into the paper to provide a more effective and comprehensive presentation of the methodology, the performance, and the practical implications of the proposed PREGM model.
2. The Formatting of Table 4 and Table 5
In the current presentation, multiple experimental results appear to be consolidated into single rows within these tables. This presentation may potentially lead to confusion and misinterpretation of the results. Placing more than one set of results on a single line may obscure important details, making it harder for readers to draw meaningful conclusions from the data. As a suggestion to enhance clarity and readability, it would be beneficial to dedicate each row to one specific experimental result. This layout would allow for more detailed descriptions of the corresponding experimental settings and the related results, thus improving understanding. Moreover, it will facilitate the direct comparison of different experimental conditions and results, which is particularly critical in identifying trends, nuances, and potential implications. Therefore, I would recommend revising the formatting of Table 4 and Table 5 to present one set of experimental results per row, thereby ensuring that the wealth of information is conveyed as comprehensibly as possible.
3. Lack of discussion about limitations and potential negative societal impact
While the paper exhibits a commendable effort in proposing and testing a new methodology in graph matching, I would like to raise a concern regarding the lack of discussion on the potential limitations of the proposed method and its possible negative social implications. I suggest that the authors include a discussion of the possible limitations of the PREGM model and potential negative societal impacts in the paper, thereby presenting a more comprehensive and nuanced understanding of their work.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weaknesses part in the above section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are not described in this paper, and potential negative societal impact is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our gratitude for your meticulous evaluation and constructive feedback on our paper. Your insights have significantly contributed to the refinement of our work.
We are committed to addressing your concerns and queries:
Lack of Visualizations: We sincerely appreciate your suggestion regarding the inclusion of visualizations. Visual aids indeed offer a valuable means of providing empirical evidence and enhancing the clarity of our proposed methodology and its performance. We acknowledge the importance of intuitive insights, and we will ensure the incorporation of meaningful visual representations of keypoint matching results in our revised manuscript. These visualizations will help elucidate the inner workings of our approach and provide readers with a comprehensive understanding of its practical applications.
The Formatting of Table 4 and Table 5: Your feedback regarding the formatting of tables is duly noted. We understand the significance of clear and concise presentation for facilitating an accurate interpretation of experimental results. In line with your recommendation, we will reformat Table 4 and Table 5 to ensure that each set of experimental results is presented in a dedicated row. This adjustment will improve the clarity and readability of the tables, enabling readers to readily compare different experimental conditions and outcomes.
Lack of Discussion about Limitations and Potential Negative Societal Impact: We appreciate your concern regarding the omission of a discussion on the limitations of our proposed method and its potential societal implications. Specifically, equal point matching is acknowledged as one of our limitations in our work. In the revised version of our paper, we will thoroughly address the possible limitations of the PREGM model and consider potential negative societal impacts associated with its application.
We genuinely value your feedback and suggestions, as they underscore our commitment to enhancing the quality and comprehensiveness of our research. Your dedication to fostering a deeper understanding of our contributions is invaluable.
Thank you once again for your time and insightful review.
---
Rebuttal Comment 1.1:
Comment: After thoroughly reviewing the authors' rebuttal and considering the feedback from other reviewers, I appreciate the detailed responses provided to address the concerns raised. The commitment to incorporate visualizations, reformat the tables for clarity, and the addition of a discussion on the potential limitations and societal implications of the PREGM model are commendable. These revisions promise a more comprehensive presentation of the paper. Based on these considerations, I maintain my initial rating of "weak accept" for this submission. | Summary: This paper presents a new method to improve graph matching by supplementing visual features with positional encodings. Specifically, an encoder-decoder model is pre-trained to reconstruct a graph’s relative spatial relation based on node coordinates only. The encoder is additionally trained under a contrastive loss to classify positive or negative graph pairs. The pre-trained encoder provides the positional encodings, which is used to compute node and edge affinities with CNN visual features jointly. The proposed method is validated on three graph matching datasets: PascalVOC, WillowObject Class, and Spair-71k, with the best performance among all compared methods. Ablation study shows the contribution of different components in the proposed method.
Strengths: 1. The proposed positional reconstruction encoder-decoder network is a simple and effective method to extract positional features. Reconstructing the spatial relation between nodes in a graph is a good target for the encoder-decoder network.
2. This paper proposes to use positional encodings to improve graph matching, which proves to be effective on three datasets. This is also in line with conclusions of some existing research, such as the vision transformer, in which positional encodings are shown to be very important. This work suggests all future work on graph matching can benefit by adding positional features.
3. The ablation study has shown the importance of each component in the proposed method, including both encoder and decoder, the reconstruct targets, and visual features. Parameter analysis also provides the reason for the choice of hyper-parameters.
4. This paper is well-written and easy to follow.
Weaknesses: Some technical details of the proposed method are not clarified. What are the detailed configurations of the multi-head self-attention layers? How are positive and negative pairs for the encoder generated? Are the visual features sampled for each node of the graph? How are the graph edges defined? How are the edges used in the encoder? How are the node and edge affinities computed?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. For the visual features, why is VGG-16 used instead of a deeper model like ResNet-50?
2. For the decoder reconstruction target, is the original coordinate, which is a natural choice, a good choice? If not, why?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper has not discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review and valuable feedback on our paper. Your insightful comments have greatly contributed to the refinement and clarity of our work.
We are pleased to address your specific concerns and queries:
Detailed Configurations of Multi-Head Self-Attention Layers: For each multi-head self-attention layer, we employed an attention mechanism with 8 attention heads and a feed-forward network. This configuration allows the model to capture complex relationships between nodes effectively.
Generation of Positive and Negative Pairs for the Encoder: Positive pairs are those in which graph $\mathcal{G}^2$ is an affine transformation of graph $\mathcal{G}^1$, or in which the keypoints in both graphs are one-to-one matched. Negative pairs correspond to situations where the keypoints in the two graphs are shuffled and not one-to-one matched. We expect the corresponding node positional encodings $f_i^1$ and $f_i^2$ to be similar only if node $V_i^1$ matches $V_i^2$.
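A minimal sketch of this pair-generation scheme (the coordinates, affine matrix, translation, and permutation below are illustrative toy data, not the paper's actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)
kpts1 = rng.random((10, 2))  # keypoint coordinates of graph G^1 (toy data)

# Positive pair: G^2 is an affine transformation of G^1, so node i in G^1
# matches node i in G^2 by construction.
A = np.array([[0.8, -0.3], [0.3, 0.8]])   # arbitrary invertible linear part
t = np.array([0.1, -0.2])                 # arbitrary translation
kpts2_pos = kpts1 @ A.T + t

# Negative pair: the keypoints of G^2 are shuffled, so the index-wise
# one-to-one correspondence with G^1 is broken.
perm = np.roll(np.arange(len(kpts1)), 1)  # a fixed non-identity permutation
kpts2_neg = kpts2_pos[perm]
```

Under this construction, the encoder is trained so that positionally encoded node $i$ of the positive pair stays close to node $i$ of $\mathcal{G}^1$, while the shuffled negative pair does not.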
Sampling Visual Features for Each Node of the Graph: Yes, we sampled visual features for each node. We computed feature vectors using the relu4\_2 and relu5\_1 layers of the VGG-16 network. These feature vectors are spatially interpolated at the keypoint locations using bilinear interpolation.
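The bilinear sampling step described above can be sketched as follows (a hypothetical helper on a toy feature map, not the paper's code):

```python
import numpy as np

def bilinear_sample(fmap, x, y):
    """Sample a (C, H, W) feature map at a continuous (x, y) location
    via bilinear interpolation (illustrative helper)."""
    _, h, w = fmap.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * fmap[:, y0, x0] + dx * fmap[:, y0, x1]
    bot = (1 - dx) * fmap[:, y1, x0] + dx * fmap[:, y1, x1]
    return (1 - dy) * top + dy * bot

# Toy 2-channel 4x4 "feature map"; a keypoint at (x=1.5, y=2.5) gets a
# per-node feature vector interpolated from its four nearest grid cells.
fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
feat = bilinear_sample(fmap, 1.5, 2.5)  # shape (2,)
```

In practice the keypoint coordinates would first be rescaled from image space to the spatial resolution of the chosen VGG layer.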
Definition and Use of Graph Edges in the Encoder: The edges within each graph are generated through Delaunay triangulation. However, edge information is not incorporated into the positional encoder. Instead, it plays a key role in the message passing module during the graph matching phase.
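The Delaunay-based edge construction can be sketched with `scipy.spatial.Delaunay` on toy 2D keypoints (illustrative data, not the paper's code):

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy 2D keypoints: the four corners of the unit square plus its centre.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
tri = Delaunay(pts)

# Collect the undirected edge set from the triangulation's triangles.
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))
```

The resulting edge set (here: the four square sides plus four spokes to the centre) would then feed the message passing module of the matching phase.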
Computation of Node and Edge Affinities: Node affinities are calculated using an approach similar to the unary costs in BBGM. Specifically, we compute $c^v_{i,j}=\sum_k f^v_s(i)_k a_k f^v_t(j)_k$, where $f^v_s(i)$ and $f^v_t(j)$ represent the feature vectors of vertices $i$ and $j$ in the source and target graphs, respectively. The weight vector $a$ is obtained using a one-layer neural network from the global feature vector $g$ extracted by max-pooling the final VGG-16 layer. Edge affinities, on the other hand, are not employed in our model.
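The affinity formula above is a weighted inner product between source and target node features. A minimal numpy sketch (random illustrative features, and a fixed weight vector `a` standing in for the one produced by the learned one-layer network):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                # feature dimension (illustrative)
Fs = rng.standard_normal((5, d))      # source node features f^v_s(i)
Ft = rng.standard_normal((6, d))      # target node features f^v_t(j)

# In the method, a comes from a one-layer network applied to the global
# feature g; here it is just a fixed vector for illustration.
a = rng.standard_normal(d)

# c[i, j] = sum_k Fs[i, k] * a[k] * Ft[j, k]  -- a weighted inner product,
# vectorised via broadcasting.
C = (Fs * a) @ Ft.T                   # affinity matrix of shape (5, 6)
```

The broadcasted form `(Fs * a) @ Ft.T` computes all pairwise affinities at once rather than looping over node pairs.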
Choice of VGG-16 over Deeper Models: We used VGG-16 as a convention inherited from PCA. While we experimented with replacing it with ResNet-50, the improvement was not significant, leading us to stick with VGG-16 for consistency.
Decoder Reconstruction Target: The choice of using original coordinates as the reconstruction target has its rationale. While the translation invariance is somewhat compromised, we focus on capturing high-order spatial relationships between nodes. First-order spatial information as a target would reduce the encoder's utilization of spatial information, diminishing the model's overall performance.
Once again, we express our gratitude for your insightful feedback. Your constructive critique has helped us enhance the clarity and effectiveness of our proposed method. We are committed to addressing the remaining technical details and ensuring a comprehensive discussion of limitations in the revised manuscript.
Thank you for your dedication to improving the quality of scientific research. | Rebuttal 1:
Rebuttal: The attachment contains the experimental results for IMC-PT.
Pdf: /pdf/e0312598bd89c334be67095e2681a98cf4090f4f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors introduce a positional reconstruction encoder-decoder (PR-EnDec) to model intrinsic graph spatial structure, and present an end-to-end graph matching network PREGM based on PR12 EnDec. The PR-EnDec consists of a positional encoder that learns effective node spatial embedding with the affine transformation invariance, and a spatial relation decoder that further utilizes the high-order spatial information by reconstructing the locational structure of graphs contained in the node coordinates.
Strengths: I have not personally conducted any research on graph matching, but the idea looks interesting to me.
I suggest the AC rely on other, more experienced reviewers in this area.
Weaknesses: My knowledge of graph matching is limited, and therefore I cannot find any weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What is the purpose of graph matching? Any use case related to NeRFs or generative models?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I have no idea about the limitations of graph matching.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our paper on graph matching. Your feedback is invaluable to us as it provides insights that contribute to the overall quality of our work.
To address your question about the purpose of graph matching, we are glad to provide a brief explanation. Graph matching plays a crucial role in various fields, including computer vision, image analysis, and pattern recognition. Its purpose is to establish correspondences between nodes of different graphs, facilitating tasks such as object recognition, image alignment, and shape matching.
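As a toy illustration of what "establishing correspondences between nodes" means: in the simplest setting, where only node affinities are considered (ignoring edge/structural terms used by full graph matching methods), the problem reduces to a linear assignment solvable with the Hungarian algorithm. The affinity matrix below is made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy node-affinity matrix: entry (i, j) scores matching node i of one
# graph to node j of the other (higher is better).
C = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

# Maximise total affinity over one-to-one matchings.
rows, cols = linear_sum_assignment(C, maximize=True)
matching = dict(zip(rows, cols))  # {0: 0, 1: 1, 2: 2}
```

Full graph matching additionally scores pairwise (edge) consistency, which is what makes the problem a quadratic assignment and motivates learned relaxations such as PREGM.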
Once again, thank you for your time, and we look forward to incorporating your suggestions and improving our work based on your feedback. | null | null | null | null | null | null |
Going beyond persistent homology using persistent homology | Accept (oral) | Summary: - The paper presents a comprehensive analysis of two types of color filtrations on graphs, focusing on their expressiveness.
- The paper introduces a novel topological summary RePHINE that combines both node and edge persistence diagrams. The proposed summary is proven to be more expressive than either node or edge persistence diagrams alone.
- The authors conduct experiments on synthetic and real-world datasets. They leverage a combination of RePHINE and a GNN structure to evaluate its performance.
Strengths: - The presented method RePHINE for mixing 0- and 1- dim information is interesting and, as far as I know, is novel.
- A remarkable strength of the proposed method is its theoretical expressiveness, which surpasses the capabilities of utilizing 0- or 1-dimensional information in isolation.
- The paper is well presented and effectively communicates its ideas, ensuring a high level of clarity and ease of understanding for readers.
Weaknesses: One weakness of the paper lies in the experimental evaluation section, which could benefit from a more comprehensive comparison with existing methods, such as the method proposed in [28]. Specifically, the paper could have included a comparative analysis between the proposed method and the local Persistent Homology (PH) method based on subgraph persistence diagrams included in [28].
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - line 47-52: I think there is some syntax issue with the appearance of [Theory], [Methodology] and [Experiments]
- line 142: "disconnect" should be "disconnects".
- line 172: please mention that Q is a color separating set in this sentence.
- line 502: what does "disjoint union" here mean? What does "these sets" refer to? could you please provide an explicit description of the set you are constructing here?
- line 510: I don't think $\{\{X_i\}\}$ should have $l$ as a superscript. Are you also referring to connected components with $l$ colors instead of subgraphs?
- line 513: "l+1 the" -> "l+1 th"
- line 693: "it is not injective" should be "is not injective".
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the method are not adequately discussed. I think it would be beneficial for the authors to discuss the applicability of the proposed method to large-scale graphs.
There is no potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback. In the following, we address all your questions.
**W1: "One weakness of the paper lies in the experimental evaluation section, which could benefit from a more comprehensive comparison with existing methods, such as the method proposed in [28]."**
We note that our initial experiments mainly focused on corroborating our theoretical analysis (synthetic datasets) and on assessing the benefits of combining RePHINE diagrams with popular GNNs for improved predictive performance. However, thanks to your comment, we have also compared RePHINE with a method that leverages extended persistence diagrams: PersLay. Our new results (please see Table 1 of the attached PDF) demonstrate that RePHINE outperforms PersLay on popular graph classification benchmarks.
We have opted for PersLay instead of [28] due to the similarity between the evaluation setups and the ease of adaptation --- [28] mainly considers node classification tasks. Moreover, we are currently running experiments to investigate further benefits of integrating the RePHINE diagrams into modern GNNs (e.g., PNA, SpecFormer). We will include these results in the revised version of the manuscript.
**Q1: "I think there is some syntax issue with the appearance of [Theory], [Methodology] and [Experiments]"**
Thanks for pointing this out. We will make sure that there will be no such syntax issues in the revised manuscript.
**Q2: "disconnect" should be "disconnects"**.
Yes, thanks for the careful reading.
**Q3: please mention that Q is a color separating set in this sentence.**
We have modified that in the revised paper. Now the sentence reads:
*We note that when $G$ and $G'$ have identical component-wise colors, the sets $\\{w \in V | c(w) \in Q\\}$ and $\\{w \in V' | c'(w) \in Q\\}$ induced by the color-separating set $Q$ are separating sets for $G$ and $G'$, respectively.*
**Q4: "what does "disjoint union" here mean? What does "these sets" refer to? could you please provide an explicit description of the set you are constructing here?"**
We agree that the construction was not clear. Thanks for pointing this out. We meant that when $G$ and $G'$ have distinct component-wise colors, there must be some connected component $C_h$ in $G$ such that $\\{X_h\\} \neq \\{X'_j\\}$ for all $1 \leq j \leq k$.
Then, if we assume $\beta^0_G = \beta^0_{G'}$, the unmatched component colors come in pairs, i.e., $G$ must have as many unmatched component-color sets as $G'$. The collection of unmatched component-color pairs is considered in the proof.
We have clarified the language throughout the proof, and rewritten the mentioned part as follows :
*When $G$ and $G'$ have distinct component-wise colors [math was omitted here due to markdown issues], there must be at least one connected component $C_h$ in $G$ such that $\\{X_h\\} \neq \\{X'_j\\}$ for all $1 \leq j \leq k$. Let us now call such $\\{X_h\\}$ a set of unmatched component colors*.
**Q5: "I don't think $X_i$ should have $l$ as a superscript. Are you also referring to connected components with $l$
colors instead of subgraphs?"**
This is a typo; the superscript should be $k$. We have fixed it.
**Q6: line 513: "l+1 the" -> "l+1 th"**.
We have fixed that typo in the revised paper.
**Q7: "it is not injective" should be "is not injective".**
Yes, thanks for the careful reading.
---
We hope our answers have addressed the points you have raised, and improved your view of the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions.
Upon reading your response to Q4, I have some further questions:
The authors highlighted that "when $G$ and $G'$ have distinct component-wise colors, there must be some connected component $C_h$ in $G$ such that $\{X_h\} \neq \{X_j'\}$ for all $1 \leq j \leq k$." My understanding is that when referring to the distinct component-wise colors of the two graphs, we're essentially discussing the differences between two multisets. However, two distinct multisets can still share the same individual elements. Given this, I'm not entirely convinced about the validity of the aforementioned claim.
Could the authors provide further explanation?
---
Reply to Comment 1.1.1:
Comment: We apologize for the confusion. Our idea was to convey that when these multisets differ, there cannot be a bijection between the connected components of $G$ and $G'$ such that each component is paired with a component having the same component-wise colors.
Thanks to your comment, we have reformulated this part of the proof in a way that we believe is much more accessible for the readers. Thus, the base case in Lemma 2 now reads:
---
If there is only one color, component-wise colors cannot differ for graphs with $\beta^0_G = \beta^0_{G'}$. Let us consider two colors (say, $b$ and $w$). With two colors, there are only three possibilities for what $X_h$ in $\\{\\{X_i\\}\\}_{i=1}^k$ may be: $\\{ b \\}$, $\\{ w \\}$, or $\\{b, w \\}$.
Now, let us denote the multiplicities of $\\{ b \\}$, $\\{ w \\}$, and $\\{b, w \\}$ in $\\{\\{X_i\\}\\}_{i=1}^k$ by $n_1$, $n_2$, and $n_3$, respectively.
For $G$ and $G'$ with $\beta^0_G = \beta^0_{G'}$, we have that $n_1 + n_2 + n_3 = n'_1 + n'_2 + n'_3$.
Thus, when $\\{\\{X_i\\}\\}_{i=1}^k \neq \\{\\{X_i^\prime\\}\\}_{i=1}^k$, there are four cases to consider:
1. $n_1 \neq n'_1, n_2 \neq n'_2, n_3 = n'_3$: In this case, $n_2 + n_3 \neq n'_2 + n'_3$; these sums are the multiplicities of the real holes $(w,\infty)$ for $G$ and $G'$, respectively, in a filtration that introduces the color $w$ first.
2. $n_1 \neq n'_1, n_2 = n'_2, n_3 \neq n'_3$: Again, $n_2 + n_3 \neq n'_2 + n'_3$, the multiplicities of the real holes $(w,\infty)$ for $G$ and $G'$ in a filtration that introduces the color $w$ first.
3. $n_1 = n'_1, n_2 \neq n'_2, n_3 \neq n'_3$: Now, $n_1 + n_3 \neq n'_1 + n'_3$, the multiplicities of the real holes $(b,\infty)$ for $G$ and $G'$ in a filtration that introduces the color $b$ first.
4. $n_1 \neq n'_1, n_2 \neq n'_2, n_3 \neq n'_3$: Similarly, $n_1 + n_3 \neq n'_1 + n'_3$, the multiplicities of the real holes $(b,\infty)$ for $G$ and $G'$ in a filtration that introduces the color $b$ first.
Note that cases such as $n_1 \neq n'_1, n_2 = n'_2, n_3 = n'_3$ are not possible, as $n_1 + n_2 + n_3 = n'_1 + n'_2 + n'_3$.
---
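The four-case argument above can also be verified by brute force. Here is a small illustrative sketch (ours, not part of the proof): for every pair of distinct multiplicity triples $(n_1, n_2, n_3)$ with the same total, at least one of the real-hole counts $n_2 + n_3$ (filtration with $w$ first) or $n_1 + n_3$ (filtration with $b$ first) differs.

```python
from itertools import product

# Brute-force check (illustrative only): for every pair of distinct
# multiplicity triples (n1, n2, n3) summing to the same number of
# components, the real-hole multiplicities n2 + n3 (color w first)
# or n1 + n3 (color b first) must differ.

def distinguishable(t, tp):
    n1, n2, n3 = t
    m1, m2, m3 = tp
    return (n2 + n3 != m2 + m3) or (n1 + n3 != m1 + m3)

def check_all(total):
    triples = [t for t in product(range(total + 1), repeat=3)
               if sum(t) == total]
    return all(distinguishable(t, tp)
               for t in triples for tp in triples if t != tp)

# Holds for every total number of components we try.
assert all(check_all(n) for n in range(1, 8))
```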
Thanks again for checking the proofs. We genuinely appreciate your contribution to strengthening the paper. | Summary: In this paper, the authors discuss the limitations of message-passing graph neural networks (MP-GNNs) in terms of the Weisfeiler-Leman test for isomorphism. They explore the use of persistent homology (PH) to augment graph models with topological features but highlight the challenge of identifying the class of attributed graphs that PH can recognize. To address this problem, they introduce the concept of color-separating sets. They establish necessary and sufficient conditions for distinguishing graphs based on the persistence of their connected components using filter functions on vertex and edge colors. Based on these insights, they propose RePHINE, a method for learning topological features on graphs. RePHINE integrates both vertex-level and edge-level PH, claimed to be more powerful than either category alone. When incorporated into MP-GNNs, RePHINE enhances their expressive power.
Strengths: The paper presents new theoretical results and introduces a concept of color-separating sets, providing a resolution to the problem of recognizing attributed graphs based on the persistence of their connected components. The authors establish necessary and sufficient conditions for distinguishing graphs using filter functions on vertex and edge colors. They also propose RePHINE, a method for learning topological features on graphs, which integrates both vertex-level and edge-level persistent homology.
Weaknesses: While the theoretical contributions of this paper are new, there is room for further exploration and validation in real-world experiments. The current evaluation primarily focuses on controlled simulated datasets, limiting our understanding of RePHINE's performance in practical scenarios. It is equally important to conduct more comprehensive experiments using real-world data to fully assess the efficacy and applicability of the proposed approach. Addressing these limitations would strengthen the practical relevance of the paper's findings. Are there any specific limitations or challenges when applying RePHINE to real-world applications? Additionally, it would be interesting to understand the factors that contribute to the marginal performance gain of RePHINE on most real-world datasets.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to the sections above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the sections above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback. We reply to your comments/questions below.
**"There is room for further exploration and validation in real-world experiments. The current evaluation primarily focuses on controlled simulated datasets, limiting our understanding of RePHINE's performance in practical scenarios."**
While we agree that there is room for improvement, we would like to highlight that our initial experiments were mainly designed to 1) validate our theoretical findings (synthetic datasets), and 2) show that the proposed diagrams can be easily integrated into GNNs to boost their predictive performance on popular graph classification benchmarks. Thanks to your comment, we are currently running experiments to investigate further benefits of integrating the RePHINE diagrams into modern GNNs (e.g., PNA, SpecFormer). We will include these results in the revised version of the manuscript.
Importantly, we have now compared RePHINE diagrams with Extended Persistence diagrams used in the PersLay model. Our results demonstrate that RePHINE outperforms PersLay on four real-world datasets by a large margin (please see Table 1 in the attached pdf). Thus, we believe that beyond the theoretical analyses, the empirical benefits of RePHINE can already be demonstrated.
**Are there any specific limitations or challenges when applying RePHINE to real-world applications?**
We believe that topological descriptors (such as RePHINE) work as complements to existing graph classifiers rather than as standalone methods. As a standalone method, RePHINE has known theoretical limitations. In particular, for unattributed graphs, RePHINE cannot separate graphs of equal size with the same number of components and cycles (please see the example in the attached PDF).
However, given RePHINE's improved expressivity at the same computational cost as vanilla vertex-color filtrations, RePHINE has the potential to become the default choice for topologically enriched graph models in real-world applications.
**It would be interesting to understand the factors that contribute to the marginal performance gain of RePHINE on most real-world datasets.**
While it is hard to provide a definitive explanation for subtle performance differences, we believe that the minor gains come from our specific architectural choices, which isolate persistence diagrams while keeping other components unchanged. We note that in the comparison against the PersLay model, we can observe a significant difference in performance.
---
Rebuttal 2:
Comment: Dear reviewer,
Please **briefly acknowledge the rebuttal** by the authors and consider updating your score—we want to avoid borderline scores for reviews, and the discussion phase will close soon. If you have any additional questions to the authors please ask them **now**.
Thanks,\
Your AC
---
Rebuttal Comment 2.1:
Comment: Dear AC,
Many thanks for your kind reminder.
Dear reviewer,
Thank you again for your feedback. Given that the author-reviewer discussion deadline is approaching, we would like to highlight additional experiments on real-world benchmarks to alleviate your concerns.
The first set of experiments compares RePHINE to another PH-based model: PersLay (AISTATS, 2020) (for details, please see our response to Reviewer x2he, W1/Q1). The results are:
| Method | NCI109 | PROTEINS | IMDB-B | NCI109 |
| -------- | ------- | ------- | ------- | ------- |
| PersLay | 90.48 $\pm$ 2.97 | 94.64 $\pm$ 4.69 | 90.40 $\pm$ 4.90 | 85.16 $\pm$ 6.11 |
| RePHINE+Linear | **93.97 $\pm$ 4.42** | **98.93 $\pm$ 3.39** | **94.70 $\pm$ 7.50** | **93.80 $\pm$ 4.05** |
As we observe, RePHINE+Linear outperforms PersLay by a significant margin.
We also ran experiments regarding the combination of RePHINE and a SOTA GNN: PNA (NeurIPS 2020). We report results on the ZINC dataset. The MAE values are: PNA (0.195 $\pm$ 0.004) and PNA+RePHINE (**0.189 $\pm$ 0.006**). Importantly, we did our best to conduct a fair comparison with reproducible results. We will include these and a few others in our revised manuscript.
Finally, we report fundamental theoretical results to uncover the representational limits of PH. These, alongside a provably more expressive topological descriptor, are our main contributions. We expect our work will help the Graph ML and Topological DL communities design better, more nuanced models that are both theoretically well-grounded and practically efficacious.
Thank you again for taking the time to review our submission and for your constructive feedback. We would greatly appreciate if you would kindly consider upgrading your score. | Summary: The authors introduce RePHINE, which calculates 0-dimensional persistent homology (PH) with respect to the filtration on edge colors, augmented with so-called missing holes and vertex color information. They establish the necessary and sufficient conditions for distinguishing graphs. RePHINE is shown to be more expressive than both standard 0-dim and 1-dim PH, can be easily integrated into GNNs, and is demonstrated to boost their expressive power on several benchmarks for graph classification.
Strengths: (S1) The three questions in the Introduction nicely position and motivate the work, and the presentation of the main contributions is very clear.
(S2) The theoretical results are relevant, as they discuss in detail the expressivity of PH on vertex-color and edge-color filtrations, and expressivity of RePHINE. I do not have a good overview of PH on graphs, so I cannot comment on the novelty of results, and I did not check the supplementary material for proofs.
(S3) Experiments are carried out on 3 synthetic and 5 real-world datasets.
Weaknesses: (W1) The related work seems not to be detailed enough and is hard to identify.
(W2) The name RePHINE is clever and nice. However, the word interleaving suggests the interleaving distance (especially relevant in TDA), so it would probably be better to replace it with, e.g., interplay or integration. Also, it is somewhat of a misnomer since you are not really using a filtration on nodes (this information does not influence birth and death values), so it might be better to rephrase that part too (e.g., Refined PH by incorporating node-color on edge-based filtration)?
(W3) From the beginning of the paper, I was confused about what the particular interplay between vertex- and edge-based PH would be. This was most pronounced in Section 3.3, since your definition of edge coloring, if used together with vertex coloring, does not satisfy the definition of a simplicial complex. For example, f(orange)=1, f(blue-orange)=2, f(blue)=3, so an edge can appear in the filtration before its two incident vertices. Stressing earlier on (already in Section 1) that you calculate PH on an edge-based filtration, including so-called missing holes and augmenting it with vertex-color information, would be helpful. See the related comment (W2) on the acronym RePHINE above.
(W4) You write: “We note that missing holes correspond to cycles obtained from 1-dim persistence diagrams.” How does your approach compare to concatenated standard 0- and 1-dim PH on an edge-based filtration?
(W5) Experimental results on synthetic data: More information about the data should be included (in Appendix C.1). What exactly is the problem/goal here? What type of graphs results in the same RePHINE representation (in particular for cub12-3)? You write that you compare with 0- and 1-dim PH on vertex-color filtration, but what exactly do you mean by this, is this information concatenated (union of sets is considered)? What about PH on edge-color filtration? Why are standard PH and GNNs performing so poorly?
(W6) Experimental results on real-world data: You write that you compare against standard color-based PH, but what do you mean by this? See related comments in (W4). It would be interesting to look at the results in more detail, in particular to discuss examples that are wrongly classified by other approaches but successfully tackled by your method, as well as examples that cannot be classified properly with your approach (the discussion may be placed in an appendix).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Main questions are formulated in the weaknesses (W1)-(W6) above.
(Q1) Abstract: “… provably more powerful than both”. The word both is confusing here, is your method more powerful than the two combined, or more powerful than each of them separately?
(Q2) What is a suitable metric for the augmented persistence diagrams? A brief comment is sufficient, and can be placed in future research.
(Q3) Which software do you use for calculation of PH?
Some minor suggestions:
- “Experiments support our theoretical analysis and show the effectiveness of RePHINE on five real-world datasets.” -> “Experiments support our theoretical analysis and show the effectiveness of RePHINE on three synthetic and five real-world datasets.”
- Provide a reference for persistence diagram (e.g., in Section 2), so that readers can find more information, and to make it clear that this is not defined for the first time here in your work.
- Explicitly mention that all lemmas and theorems are proved in Appendix B, since one might wonder if some or all of these are earlier results.
- Lemma 1 Vertex-based filtrations … -> Lemma 1 Injective vertex-based filtrations …
- For better readability, it would be good to also have explicit Definitions for separating and disconnecting set. Improve consistency in naming theoretical results, e.g., if Lemma 6 (Edge-based almost holes as disconnecting sets), then it is better that Lemma 4 (Vertex-based almost holes as color-separating sets).
- A visual or table summary (can be placed in Appendix B) of your theoretical results could really help the readability of Section 3 and the impact of your work. For example, some of the table rows could be the following:
1) Real holes (d = infty) of 0-dim PH wrt vertex-color filtration --- Component-wise colors --- Lemma 2
2) Almost holes (b neq d, d < infty) of 0-dim PH wrt vertex-color filtration --- Color-separating sets --- Lemma 3
3) Birth time of 0-dim persistence interval wrt vertex-color filtration --- Vertex color --- Lemma 5
4) Almost holes (b neq d, d < infty) of 0-dim PH wrt edge-color filtration ---- Disconnecting sets --- Lemma 6
- What is {{ on line 135, line 160, line 182, line 199, …?
- In Section 3, you denote birth and death values with b and d (I think this is more common and readable), but you use a and b in Section 4.
- Introduce the augmented PH as an explicit Definition in Section 4, as this is your main contribution.
- Cite specific Appendix (e.g. Appendix B), rather than pointing to the general Appendix.
- Capitalization in References: PersLay, Kolmogorov, Rayleigh-Bénard, Leman
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations and future work are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and thoughtful review. You have raised very pertinent points. Below, we address your questions/comments.
**W1: "The related work seems not to be detailed enough and is hard to identify."**
Thanks for your comment. In the Introduction, we decided to group references together for conciseness. To alleviate the issue you raised, we will provide a more detailed overview of related works in the supplementary material (in a new section Related Works).
**W2: "The name RePHINE is clever and nice. However, the word interleaving suggests interleaving distance (esp. relevant in TDA), so it would probably be better to replace this with e.g. interplay or integration."**
Thanks very much for the excellent suggestion. We agree that 'Refined PH by incorporating node-color into edge-based filtration' aptly describes the proposed method, and we have accordingly adopted this rephrasing.
**W3: "Stressing earlier on (already in Section 1) that you calculate PH on edge-based filtration, including so-called missing holes and augmenting it with vertex-color information would be helpful."**
We have reworded descriptions in the Abstract, Introduction, and Section 4 to better reflect how RePHINE works. For instance, the Abstract now reads: "RePHINE efficiently incorporates vertex-color information into edge-level filtrations, achieving a scheme ...".
**W4: "How does your approach compare to concatenated standard 0- and 1-dim PH on edge-based filtration?"**
We note that RePHINE is more expressive than the union of the multisets of 0- and 1-dim PH on edge-based filtration. In particular, the two graphs in Figure 4(c) of the paper cannot be distinguished by either 0-dim or 1-dim PH. However, we can obtain two different RePHINE diagrams for such a graph as we show in part (3) of the proof of Theorem 4.
**W5: Regarding details and further analysis of the results on synthetic data.**
The cubic datasets comprise non-isomorphic 3-regular graphs. From these graphs, we create a classification problem by assigning each graph a binary class. Given the partition, we assess if the existing methods can overfit (correctly classify all) the samples. When we compare RePHINE with 0- and 1-dim PH on vertex-color filtration, we mean the union of the 0- and 1-dim diagrams.
Regarding the analysis, we have only compared RePHINE to vertex-color filtrations, as these are used in the reference work (TOGL). Nonetheless, in the revised version of the paper, we will also include results for diagrams obtained from edge-level filtrations. To understand why PH performs poorly, we report diagrams (after learning) for two instances of cubic-10-2 in the attached PDF. We noticed that the original TOGL code for vertex-color filtrations uses $\max(f(c))$ instead of $\infty$, which prevents it from distinguishing almost holes from real holes in some instances. We will also provide a similar discussion on the reasons for the failure of GCNs and report the diagrams obtained for other graphs (and synthetic datasets) in the supplementary material.
**W6: "Experimental results on real-world data: You write that you compare against standard color-based PH, but what do you mean with this?"**
By standard color-based PH, we mean 0-dim and 1-dim persistence diagrams obtained from vertex-color filtrations. To the best of our knowledge, edge-color filtrations have not been used in graph learning. We will clarify this in the revised manuscript.
**Q1: "Is your method more powerful than the two combined, or more powerful than each of them separately?"**
RePHINE is more expressive than the union of the families of node- and edge-color filtrations. In particular, in Theorem 4, saying that RePHINE is strictly more expressive than vertex- *or* edge-level filtrations implies that it is more powerful than the two combined (the union only allows the separation of graphs that can be separated by at least one of the filtration types).
**Q2: "What is a suitable metric for the augmented persistence diagrams?"**
This is an interesting question. In particular, future research could consider the suitability of the bottleneck distance for RePHINE diagrams with necessary changes.
**Q3: "Which software do you use for calculation of PH?"**
Our code is based on the official repo of Topological GNNs; we have modified their routine to compute our augmented diagrams. It consists of a Torch implementation using a union-find data structure.
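To illustrate the kind of routine involved, here is a plain-Python sketch (our simplified illustration, not the authors' Torch implementation) of 0-dim persistence for a vertex-based filtration via union-find under the elder rule:

```python
# Sketch (illustrative, not the actual Torch code): 0-dim persistence
# of a vertex-based filtration via union-find. A vertex enters at its
# filtration value f(v); an edge enters at max(f(u), f(v)). When an
# edge merges two components, the younger one dies (elder rule).

def zero_dim_diagram(f, edges):
    """f: dict vertex -> filtration value; edges: list of (u, v) pairs."""
    parent = {v: v for v in f}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    diagram = []
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        t = max(f[u], f[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle (a 1-dim class), no merge
        # Roots are kept so that f[root] is the component's birth time;
        # the younger component (larger birth) dies at t.
        young, old = (ru, rv) if f[ru] >= f[rv] else (rv, ru)
        diagram.append((f[young], t))
        parent[young] = old
    # Components that never die are real holes (death = infinity).
    diagram += [(f[r], float("inf")) for r in {find(v) for v in f}]
    return sorted(diagram)

# Two early vertices joined through a later vertex: one component is
# born at 1 and dies at 2 (an almost hole), one real hole survives.
print(zero_dim_diagram({"a": 1, "b": 1, "c": 2},
                       [("a", "c"), ("b", "c")]))
# [(1, 2), (1, inf), (2, 2)]
```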
In the following, we list the minor suggestions. For conciseness, we mark with $\checkmark$ the accepted suggestions.
1. $\checkmark$ “Experiments ... five real-world datasets.” $\rightarrow$ “Experiments ... on three synthetic and five real-world datasets.”
2. $\checkmark$ Provide a reference for persistence diagram (e.g., in Section 2)
3. $\checkmark$ Explicitly mention that all lemmas and theorems are proved in Appendix B.
4. $\checkmark$ Lemma 1 Vertex-based filtrations … $\rightarrow$ Lemma 1 Injective vertex-based filtrations …
5. It would be good to also have explicit Definitions for separating and disconnecting set. --- **This might be adopted if the page limit allows.**
6. $\checkmark$ Improve consistency in naming theoretical results.
7. $\checkmark$ A visual or table summary (can be placed in Appendix B) of your theoretical results could really help the readability of Section 3 and the impact of your work. **See attached PDF.**
8. $\checkmark$ What is $\{\{$ on line 135, line 160, line 182, line 199? **We use $\{\{$ to represent multisets.**
9. $\checkmark$ In Section 3, you denote birth and death values with b and d, but you use a and b in Section 4. **We are now using $b$ and $d$, as suggested.**
10. $\checkmark$ Introduce the augmented PH as an explicit Definition in Section 4.
11. $\checkmark$ Cite specific Appendix (e.g. Appendix B).
12. $\checkmark$ Capitalization in References
---
We're grateful for your thoughtful and perceptive comments, and hope our answers have improved your assessment of this work.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the detailed response and for the great improvements! Please make sure to include most of these clarifications in the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: Sure. We will include all clarifications in the revised version of the paper. Thanks again for your insightful review and support of our work. | Summary: The discriminative power of the persistent homology (in certain homological degrees) of vertex- and edge-filtered graphs is characterized in terms of the combinatorial structure of the graphs. It is shown that there exist pairs that can be distinguished by the persistent homology of vertex-filtrations but not by that of edge-filtrations, and vice versa. This is used to motivate the introduction of a new topological descriptor of graphs which is strictly more discriminative than the persistent homology of both vertex- and edge-filtrations.
The performance of this descriptor is evaluated on benchmark datasets.
[Post rebuttal edit] Raised score to 6.
Strengths: 1. The theoretical results on the expressiveness of persistent homology of vertex- and edge-filtered graphs are interesting, and they are put to good use in that they motivate the design of a provably stronger topological descriptor of graphs.
2. The fact that the invariant being introduced is strictly more discriminative than the persistent homology of vertex- and edge-filtrations seems to be relevant in practice.
Weaknesses: 3. The exposition is, at times, not that easy to follow. See, in particular, my comment about line 174, below.
4. The experimental section is interesting, but the empirical claims about the performance of the RePHINE would be better substantiated with experimental evaluation on more datasets.
5. The conclusions are a bit too succinct. In particular, the limitations of the approach are not discussed in detail. Relatedly, no future work or open questions are mentioned.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: ## Main questions
6. Your approach seems similar to extended persistence (in the sense of your reference [2]).
How does your main strategy (RePHINE) compare to extended persistence and in particular to Perslay (both in theory and in practice)?
Can you substantiate this with theory or experiments?
7. Is it clear that RePHINE is isomorphism invariant? Since this is an important property for the theoretical section of this paper, I believe this should be addressed explicitly.
8. Is RePHINE stable in the sense of persistent homology? Is this relevant in your setup?
9. In line 58 you mention that you aim to fully characterize graphs that can be distinguished with persistent homology.
Was this achieved or are there still open questions?
## Minor questions and comments
10. Line 61: Should the subset inclusion be $\in$?
11. Line 135: the double brace notation has not been introduced and is quite important for this paper.
12. Line 174: I understand that, when you say "graph" here, you mean a graph together with a vertex coloring function, as well as a filtration defined on those colors.
However, in line 198 (Theorem 1), the graphs $G$ and $G'$ don't come with a filtration.
The convention on what exactly is a graph and what extra structure is used in each result should be made more explicit.
If graphs are always assumed to be colored, I would also suggest using the term "colored graph", or a term to the same effect.
13. Line 198: In Theorem 1, when you write $D_G$, do you mean diagrams in homological degree 0, in homological degree 1, or both?
14. Line 219: Could you please comment on the relevance of Lemma 7? How can it be used (in theory or in practice)?
15. Line 312: Is there a geometric or topological motivation for "case a=0" in line 312?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper would benefit from a discussion of the limitations of the approach, even if brief. This would help readers assess how useful the approach is for specific tasks, as well as what remains to be done and what the current promising avenues are on the theoretical front.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your insightful comments and suggestions to improve the paper. Below we address your concerns.
**W1/Q1: "the empirical claims about the performance of the RePHINE would be better substantiated with experimental evaluation on more datasets." / "How does your main strategy (RePHINE) compare to ... Perslay"**
Our initial experiments aimed to 1) corroborate our analysis (synthetic datasets), 2) show that RePHINE also works in scenarios where topological descriptors are combined with GNNs for tackling real-world problems. In total, we considered eight datasets. To further assess the effectiveness of RePHINE, we are currently running experiments regarding integrating RePHINE into SOTA GNNs, including results on additional datasets. We will report these additional experiments in the revised paper.
Based on your feedback, we have now also conducted experiments to compare PersLay and RePHINE on 4 real datasets. To ensure a fair comparison, we processed extended and RePHINE diagrams similarly. In particular, we followed the design of PersLay, i.e., the vectorizations of the diagrams are combined with graph-level features (same as the one used by PersLay) and treated with a linear classifier. We ensured that both methods use identical data samples and used early stopping with the same patience for both methods. The results are in Table 1 of the attached PDF. Overall, RePHINE outperforms PersLay by a significant margin.
**W2: "the limitations of the approach are not discussed in detail...no future work or open questions are mentioned."**
While a complete characterization of the expressivity of RePHINE remains an interesting open problem, we can already exhibit concrete limits of its capacity. In particular, if two graphs have a single color, RePHINE cannot separate graphs of equal size with the same number of components and cycles. For example, we show that RePHINE cannot separate the 4-node star and path graphs (details in the attached PDF).
Importantly, there are many relevant open questions regarding RePHINE/PH in graph learning, including generalization capabilities of existing methods, local versions of RePHINE, and the characterization of which graph properties RePHINE (and other PH-based methods) can compute. Comparing RePHINE and extended PH (or assessing the power of an extended variant of RePHINE) from a theoretical perspective is another interesting open problem. We will add this discussion to the subsection 'Limitations/Future Works' in the revised version of the paper.
**Q2: "Is it clear that RePHINE is isomorphism invariant?"**
Thank you for raising this question that has led us to formally prove that RePHINE is indeed isomorphism invariant as a new Corollary:
*Let $G$ and $G'$ be isomorphic graphs. Then, any edge-color and vertex-color filtrations produce identical RePHINE diagrams for $G$ and $G'$.*
We sketch the essential arguments of the proof here. The isomorphism invariance of RePHINE diagrams stems from the fact that a diagram is a function of a filtration on a graph, and this filtration is obtained from isomorphism-invariant colorings. Further, when matching a vertex with a diagram element (i.e., deciding which vertex 'died' at the death of a component), RePHINE uses minimum and maximum functions, which are invariant to the order in which comparisons are made.
**Q3: "Is RePHINE stable in the sense of persistent homology?"**
Thank you for the interesting question. We believe analyzing the stability properties of RePHINE could be an interesting follow-up work. We will mention this in the newly added subsection ‘Limitations and Future Works’.
**Q4: "you mention that you aim to fully characterize graphs that can be distinguished with persistent homology. Was this achieved or are there still open questions?"**
We analyzed the general case of filtrations based on node and edge colors and indeed provided a complete characterization of attributed graphs that can be distinguished with PH methods that employ these filtrations (using the new notion of color-separating sets). However, there are other types of filtration functions, e.g., based on the spectral decomposition of graph Laplacians, that we have not considered in this paper. Also, there are important open problems, including generalization, stability, and complete characterization of the proposed method, i.e., RePHINE. We believe the novel analyses introduced as part of this work could help in resolving these open problems, and may also foster the rise of other powerful topological descriptors.
Below, we address the minor issues you pointed out.
1. **Should the subset inclusion be $\in$**: Yes.
2. **the double brace notation has not been introduced**: We introduced the notation for multisets in the revised paper.
3. **when you say "graph" here, you mean a graph together with a vertex coloring function, as well as a filtration defined on those colors ...**: We note that filtration functions do not come with our definition of graphs. We have added the term 'colored (or attributed) graphs' when we define graphs.
4. **In Theorem 1..do you mean diagrams in homological degree 0, in homological degree 1, or both?** We meant 0-dim diagrams. We have replaced $\mathcal{D}_G$ with $\mathcal{D}_G^0$ for clarity.
5. **Could you please comment on the relevance of Lemma 7? How can it be used (in theory or in practice)?**
Lemmas 6 and 7 motivate the introduction of color-disconnecting sets and help to characterize the expressivity of almost holes. We have added a reference also to Lemma 7 when introducing color-disconnecting sets.
6. **Is there a geometric or topological motivation for "case a=0"?** Case a = 0 corresponds to pairs that are augmented with an independent vertex-color filtration.
---
We hope our answers (including empirical comparison with PersLay, proof of isomorphism invariance, and discussion about open problems and limitations) have sufficiently addressed most of your concerns and that you would kindly consider increasing your score.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: I thank the authors for their response.
Overall, I am satisfied with their answers, and I have raised my score accordingly. My score is not higher mainly due to some unanswered theoretical questions (stability and characterization of the expressivity of RePHINE, and comparison to expressivity of extended persistence). | Rebuttal 1:
Rebuttal: We are grateful to all the reviewers for their time and insightful comments, as well as to the (senior) area, program, and general chairs for their service to the community.
We are glad to note the positive response of all the reviewers, and specifically, their acknowledgments that our work is **interesting and novel** (x2he, BFX1) and **provides a resolution to the problem of recognizing attributed graphs** based on the persistence of their connected components (oULm). Also, reviewers found that our work can **allow one to understand the limits of standard filtering schemes** and **build new enriched schemes to overcome them** (PDW5). Finally, our **theoretical contributions are said to be well presented and clear** (rXNh, BFX1, PDW5), **relevant in practice** (rXNh, x2he), and **supported by experiments on eight datasets** (rXNh).
To the best of our efforts, we’ve tried to address all the specific comments, including the minor ones, that have been raised by each reviewer. In particular, some of the main revisions are:
- Additional experimental results on the comparison between RePHINE and Extended Persistence Diagrams (PersLay);
- Proof that RePHINE is isomorphism invariant (newly added Corollary 1);
- New subsection about 'Limitations and Future Works';
- Clarifications regarding the main contributions (newly added Table with an overview of our theoretical results), and exposition of RePHINE diagrams (now as a formal definition);
- Added visualizations about the graphs and diagrams obtained from the synthetic experiments.
Moreover, we are currently running experiments to show additional results on 1) the combination of RePHINE with SOTA GNNs; 2) more (larger) datasets.
We believe that acting on reviewers’ feedback has reinforced the many strengths of this work, and we thank them again for their very constructive comments.
Pdf: /pdf/4ba3bf4c6432de9fae71f43f5fe547468987dfa3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper provides a theoretical analysis of the expressive power of Persistent Homology (PH) features in distinguishing different colored graphs. The paper characterizes the family of graphs that is separable by a 0-dimensional PH using either node filtering or edge filtering and identifies the failure cases. Based on the theoretical analysis, a new PH filtration is proposed that overcomes the previous limitation and is provably strictly more expressive than either node or edge filtering. The new filtering is compared with standard PH filtering on a synthetic dataset and a few graph-classification benchmarks.
Strengths: The main contribution of the paper lies in the theoretical analysis of the expressive power of PH filtering, which allows one to understand the limits of standard filtering schemes fully and to build a new enriched scheme to overcome them.
The paper is generally well-written and drives you through the reasoning that led to the development of the method while introducing all the essential theorems. Fully understanding them and grasping all the implications requires some effort from the reader, and possibly jumping back and forth between the main paper and the supplementary material where proofs are given, but this is to be expected in this kind of paper.
Weaknesses: While the paper is well structured as far as the theoretical analysis is concerned, the experimental one is a bit lacking. In particular, in its current state, the paper does not position itself with respect to recent methods for graph classification, and it is not clear if it is a practical competitive alternative or if the contribution mostly lies in the theoretical analysis and just paves the road to future possible developments.
I also felt that an analysis of the expressive power bounds of the proposed method could have been interesting. For instance, is there any particular graph structure in which the proposed method cannot distinguish two non-isometric graphs?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Going through the Lemmas introduced in the preliminaries and the following section, it is not clear which lemmas/theorems are introduced by the authors and which ones are just reported (if any).
I would add to the comparison some SOTA methods for graph classification.
For MOLHIV the ROC-AUC is usually reported.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I do not foresee any particular negative societal impact. A discussion on the theoretical and practical (if any) limitations of the proposed method, also w.r.t. SOTA on graph classification, would better help to understand the current and future potential of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful feedback. We address all your questions below.
**W1: "the paper does not position itself with respect to recent methods for graph classification"**
Thanks for the opportunity to position our work appropriately. Persistent homology methods that we consider here provide valuable topological information that can often be integrated into recent graph embedding methods such as variants of (geometric) graph neural networks (GNNs) to boost their performance. Therefore, we believe their role is complementary to the existing graph classification methods, so our initial experiments focused on corroborating the benefits of augmenting popular GNNs with topological descriptors. We are currently running experiments to investigate further benefits of integrating the RePHINE diagrams into modern GNNs (e.g., PNA, SpecFormer). We will include these results in the revised version of the manuscript.
Based on the reviews, we have now also compared the proposed method RePHINE with a state-of-the-art persistent homology method Extended PH (Perslay) on several real datasets such as NCI109, Proteins, IMDB-B, and NCI1. Our results demonstrate that RePHINE performs better on these datasets (please see Table 1 in the attached pdf). Thus, beyond the theoretical analyses, the empirical benefits of RePHINE can already be demonstrated.
**W2: "is there any particular graph structure in which the proposed method cannot distinguish two non-isometric graphs?"**
That's another excellent question. Indeed, there are non-isomorphic graphs that cannot be separated based on RePHINE diagrams. In particular, if two graphs have one color, RePHINE cannot separate graphs of equal size with the same number of components and cycles. For instance, a 4-node star graph and a 4-node path graph cannot be separated. We've added a visualization in Figure 1 (in the attached pdf) to show this limitation.
**Q1: "it is not clear which lemmas/theorems are introduced by the authors and which ones are just reported."**
All Lemmas/Theorems, including those in the preliminaries and following sections, are introduced and proven in this paper. To emphasize this, we've now added a Table with a summary of our contributions in the Introduction --- we've also included the Table in the attached PDF. Please note that we have now additionally proved that RePHINE is isomorphism invariant (see our reply to reviewer x2he for details).
**Q2: "I would add to the comparison some SOTA methods for graph classification."**
Thanks for your comment. As we mentioned in the answer to R1-W1 above, we are running further experiments to substantiate the benefits of integrating RePHINE into SOTA GNNs. We will include these results in the revised version.
**Q3: "For MOLHIV the ROC-AUC is usually reported."**
Thanks for catching this. Indeed, the numbers in the paper for MOLHIV are ROC-AUC values. We will make this clear in the revised manuscript.
***
Many thanks again for your constructive feedback. Based on your review, we will update our paper to include additional results regarding the combination of RePHINE + modern GNNs (SOTA) and clarify our contributions (including their limits). We hope our answers have sufficiently addressed your concerns, and the same translates into your stronger support for this work.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: Dear Authors,
thanks for taking the time to answer my doubts. The newly added material and experiments seem convincing. Should you have some preliminary results on the use of your method with SOTA GNNs before the reviewers/authors discussion period, I would be curious to see them.
---
Reply to Comment 1.1.1:
Title: Results using PNA on ZINC
Comment: Thanks for your feedback. We are glad to hear you found the newly added material and experiments convincing.
Regarding experiments with SOTA GNNs, we run a fair comparison between PNA and PNA+RePHINE on the ZINC dataset (public splits). We leverage the topological descriptors as described in the paper (see equations in Section 4). Both methods use the same hyper-parameters (available at the PyTorch-geometric toolbox) and training procedures. **The results obtained over ten independent runs with different seeds are: PNA ($0.195 \pm 0.004$ MAE) and PNA+RePHINE ($0.189 \pm 0.006$ MAE)**. In this case, RePHINE improves the performance of PNA, achieving MAE that lies one standard deviation away from that of PNA alone. Moreover, we plan to consider at least a Transformer-based architecture and another OGB dataset in the final paper. | null | null | null | null | null | null |
Noether Embedding: Efficient Learning of Temporal Regularities | Accept (poster) | Summary: The paper presents a method for detecting, and embedding, temporal regularities from time-stamped event data, where events have a discrete type, and temporal regularities correspond to one body event type preceding another head event type by a characteristic relative time. A temporal regularity (TR) is defined in the following way. A temporal regularity associates a body event type $b$ with a head event type $h$, alongside a mean relative time $\tau$, and a universal factor $\eta$, such that events of type $b$ at time $t$ are "regularly" followed (or maybe also preceded) by an event of type $h$, with a mean time delay of $\tau$ but with a time variation in the interval $\Delta = [\tau(1-\eta), \tau(1+\eta)]$. One challenge of this is to discover these temporal regularities purely from time-stamped events, i.e. to abstract away from the specific time of the body event type. The principal contribution of this work is to train Noether Embeddings (NE), which the authors state "enable both the data efficient formation and rapid retrieval of TRs simply through embedding each event sample".
The authors define a collection of metrics for temporal regularities between any given body and head types $(b,h)$: standard confidence (sc), head coverage (hc), and general confidence (gc), the last of which is a key measure of how strong the evidence is for a temporal regularity under a particular interval $\Delta$ (which is itself a consequence of the mean time $\tau$). They go on to specify two tasks. The first, "TR detection", is to find the best value of general confidence for each pair of event types (for any value of relative time) and specify this pair of event types as a TR if this value is above some threshold. The second task, "TR query", asks what the characteristic relative time between a pair of events is. Noether embeddings are then a composition of event embedding and time embedding, in such a way that the complex field is used to capture the time information within just one part of the composition. TR detection (or scoring) can then be efficiently calculated between any pair of embedded events. Associated methods are then used to determine the TR detection and query operations.
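As an illustration of the key idea (my own toy construction, not the authors' exact score function), rotating each event embedding in the complex plane by its time stamp makes the body-head inner product depend only on the relative time, never on the absolute time, which is precisely the time-translation symmetry the method exploits:

```python
import numpy as np

# Toy complex "rotation" embedding: each event type gets a complex vector,
# and an occurrence time t rotates it by per-dimension frequencies omega.
rng = np.random.default_rng(0)
d = 8
omega = rng.normal(size=d)                           # rotation frequencies
e_b = rng.normal(size=d) + 1j * rng.normal(size=d)   # body-event embedding
e_h = rng.normal(size=d) + 1j * rng.normal(size=d)   # head-event embedding

def embed(e, t):
    """Embedding of an event occurring at time t."""
    return e * np.exp(1j * omega * t)

def score(t, tau):
    """Similarity of a body event at t and a head event at t + tau."""
    return np.vdot(embed(e_b, t), embed(e_h, t + tau)).real

# Time-translation symmetry: the score depends on tau only, never on t,
# since the e^{i*omega*t} factors cancel in the conjugated inner product.
print(score(0.0, 2.5), score(10.0, 2.5), score(-3.0, 2.5))
```

All three printed values coincide (up to floating-point noise), so a single pass over event pairs suffices to read off the characteristic relative time.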
The authors compare their method of evaluating TR detection and querying with appropriately adapted methods from the literature. It isn't entirely clear from the paper whether these other methods are trained to maximise scores between valid TRs in some way (but the assumption is that they are). The authors show that, on three datasets derived from large-scale temporal knowledge graph data used elsewhere in the literature, their method performs substantially better at detecting and scoring (querying) temporal regularities in data.
Strengths: The method is well motivated in terms of Emmy Noether's work corresponding symmetries with conservation laws. The authors develop a neat way to encode things that lends itself to efficient training and their performance on three different datasets is substantially better than other methods.
It isn't entirely clear how to judge the significance of this work, as the authors define their own task and their own metrics, possibly because there are no suitable pre-existing candidates for this.
Weaknesses: The paper could be a little clearer in parts. For instance:
* It is unclear whether the authors' definition of temporal regularity (TR) is unique to the authors or is defined elsewhere. The authors relate this to prior work on temporal regularities, but the precise formulation appears to be the authors' own. It would be helpful to know whether and how this relates to other formal definitions of temporal regularities.
* More effort could be made to give intuitive (informal) descriptions of the defined terms, e.g. standard confidence, TR Detection, TR Query, ground truth confidence, and so on.
* How the data is partitioned into test and train, and how various methods are optimised from training data could be made clearer in the main body of the paper.
* It isn't really clear until late in the paper that the time-stamps are discrete. I am not sure whether this method would be tractable for continuously valued time-stamps
* The intuitive meaning of what is being established by temporal regularities and the associated metrics could be clearer too. For instance, what does it mean to have a ground-truth confidence greater than 0.8. Can that be given a (loose) statistical interpretation at all?
* The notion of an interval $\Delta$ which depends on the mean relative time $\tau$ is not clearly discussed. Is this like saying that the TRs model events whose relative time is uniformly distributed in some range? Is this a little brittle compared to a method that "smoothly" traded off relative-time-clustering with observation frequency?
The notation is a little confusing at times. In particular, Equation 2 or the text around it, could make some things clearer:
* that this metric applies to a specific pairing of event types b and h and a time interval (I realise this is wrapped up in the tr variable but I found the notation pretty opaque).
* that the tr, sp, hc, and gc terms depend on $\Delta$ but really more fundamentally depend on the mean relative time $\tau$. Later the authors refer to $gc$ as a function of $\tau$ so this seems to me to be the fundamental variable. Would it be better to define tr as $(ev_b, ev_h, \tau, \eta)$ where $\Delta$ is derived from these values?
* that this metric is defined over an event set (by which I meant that the event set could appear formally in notation)
* b(q;tr) and n(b(tr)) mean two slightly different things, the latter is a count over b(q;tr) for all q in the event set I think. Similarly replacing b with h.
The definition of TR Detection and TR Query seem to be the authors own, and so the fairness of the experimental comparison is difficult to judge. The performance is impressive on first appearance, but I am left wondering how to interpret this. The paper would be greatly strengthened by a clear justification of why this is the fairest available comparison between methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Why does the queried time set $\mathbb{T}_r = \{1-T_a,\ldots, 0, \ldots, T_a-1\}$ mean that you only query one side of an asymmetric pair of events? Surely this means that the total number of relative time durations is $2T_a -2$, meaning that any pair of events $(ev_i, ev_j)$ at times $t_i$ and $t_j$ would be queried in both directions so long as $t_i\neq 0$ or $t_j \neq T_a$ and vice versa.
What is the measure of TR Query (denoted r in table 1)?
How is the data in the experiments split into training and testing parts?
Why are there only 1 element per periodic time delay in the $\omega$ vectors? Does this mean that you can only discover 1 TR (body-head type) per characteristic time delay $\tau$? What if two distinct TRs (body-head types) had the same characteristic time delay?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors could be clearer about the limitations of this work. In particular, certain choices such as predefined fixed length interval factors for detection, predefined thresholds for detection, point estimates for TR Queries may be necessary to make the method tractable, but they do introduce a certain arbitrariness to the prediction frame work.
Equally, the fact that the experiments are comparing the performance of their method in optimising against an objective that motivated the design of their approach is potentially problematic too. The ground truth general confidence of a TR is the best performing time delay for a pair of events, which is the same as the objective for their detection method. The argmax of this is both the ground-truth for TR Query and the training objective for TR Query on their model. The authors should justify why this is a fair comparator, and possibly develop alternative independent measures for evaluation too.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript and providing insightful comments. We have improved our paper accordingly but discovered some misunderstandings concerning the content and contributions of the work. Our responses are provided below.
**The foremost QA**
**Q**: ‘The paper would be greatly strengthened by a clear justification of why this is the fairest available comparison between methods.’
**A**: **The answer is in the second section of the global response at the top of the webpage.**
**From ‘Weaknesses’**
**Q1**: ‘whether TR is unique to the authors or is defined elsewhere’
**A1**: Our definition is unique. A relevant definition for knowledge graph streams is in [1]. The main difference is that TR is suitable for arbitrarily structured events, whereas the relevant definition in [1] applies only to knowledge graph streams. Additionally, TR introduces an adaptive $\eta$ to allow vibrations of $\tau$ in real-world data distributions.
**Q2**: ‘intuitive (informal) descriptions of the defined terms…’
**A2**: We have revised our paper accordingly. For example, the intuitive description of standard confidence can be viewed as the probability that the head event will occur at time $t+\bigtriangleup$ once the body event occurs at time $t$.
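As a further illustration (a hypothetical sketch with made-up event times and helper names; only the definitions of sc, hc, and their harmonic-mean combination into gc follow the paper), these metrics could be computed over a small event set as follows:

```python
# Hypothetical sketch of the confidence metrics for a candidate TR
# (ev_b, ev_h, tau, eta); the helper name and toy data are ours.

def confidences(body_times, head_times, tau, eta):
    lo, hi = tau * (1 - eta), tau * (1 + eta)   # tolerance window
    # Body events supported by some head event within the window:
    supported_b = sum(any(lo <= h - t <= hi for h in head_times)
                      for t in body_times)
    # Head events explained by some preceding body event:
    supported_h = sum(any(lo <= h - t <= hi for t in body_times)
                      for h in head_times)
    sc = supported_b / len(body_times)          # standard confidence
    hc = supported_h / len(head_times)          # head coverage
    gc = 2 * sc * hc / (sc + hc) if sc + hc else 0.0  # harmonic mean
    return sc, hc, gc

body = [0, 10, 20, 30]
head = [5, 15, 25, 99]   # three heads follow a body by ~5 steps
print(confidences(body, head, tau=5, eta=0.2))
```

Here three of the four body events are followed by a head within $\Delta=[4,6]$, giving sc = hc = gc = 0.75.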
**Q3**: ‘How the data is partitioned into test and train, and how various methods are optimised from training data…’
**A3**: The train and test sets are the same set. To enhance understanding, we can draw a comparison between TR learning and clustering. In this analogy, event items correspond to clustered samples, and TRs correspond to the clusters. Therefore, NE can be viewed as a memory with unsupervised learning capabilities due to its specific structural biases.
The loss functions of NE and existing embeddings all separate the score functions of positive and negative event samples, treating representation learning as a two-class classification problem. All the baseline embedding models adopt the log-softmax form of loss functions, as in their original settings.
**Q4**: ‘whether this method would be tractable for continuously valued time-stamps’
**A4**: It would be tractable because continuously valued time stamps can be discretized.
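For instance, a generic binning step (our own illustration, not specific to NE) suffices to map continuous stamps onto a discrete time axis:

```python
# Hypothetical preprocessing step: map continuous time stamps to
# discrete bin indices before embedding.

def discretize(t, t0, bin_width):
    """Map a continuous time stamp to a discrete index."""
    return int((t - t0) // bin_width)

stamps = [0.0, 0.4, 1.7, 3.2]
print([discretize(t, t0=0.0, bin_width=0.5) for t in stamps])  # [0, 0, 3, 6]
```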
**Q5**: ‘what does it mean to have a ground-truth confidence greater than 0.8.’
**A5**: This implies that there is an 80% probability for the co-occurrence of a body event at time $t$ and a head event at time $t+\bigtriangleup$.
**Q6**: ‘The notion of an interval $\bigtriangleup$ which depends on the mean relative time $\tau$ is not clearly discussed’
**A6**: Since such a definition is used for calculating metrics such as general confidence, it is required to be independent of specific data distributions. Intuitive meaning: the tolerance of noises (width of $\bigtriangleup$) is proportional to the size of relative time $\tau$.
**Q7**: ‘The notation is a little confusing at times. In particular, Equation 2 or the text around it, could make some things clearer’
**A7**: We have made improvements in the revised paper. For example, we have changed $tr: (ev _b,ev _h,\bigtriangleup)$ into $tr: (ev _b,ev _h,\tau,\eta)$ as suggested.
**From ‘Questions’**
**Q1**: ‘Why does the queried time set mean that you only query one side of an asymmetric pair of events?’
**A1**: Consider two event types $ev _i, ev _j$. Since $g(\tau;ev _i,ev _j)=g(-\tau;ev _j,ev _i)$, it is only necessary to calculate one of the decoding functions $g(\tau;ev _i,ev _j)$ and $g(-\tau;ev _j,ev _i)$ at the decoding stage.
**Q2**: ‘What is the measure of TR Query’
**A2**: As stated in Section 2.2, TR query is to output the correct $\tau'=\tau _g$ for valid TRs, where the ground truth $\tau _g$ is set as the value that maximizes $gc(\tau)$ in computing $gc _g$. For each tested query $(ev _b,ev _h)$, a ratio $r'=\frac{gc(\tau')}{gc _g}$ is computed. The averaged ratio $r$ over all queries is reported.
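As a small worked example (with made-up $gc(\tau)$ values, not results from our experiments), the ratio for a single query could be computed as:

```python
# Hypothetical sketch of the TR-query metric: given a gc profile over
# candidate relative times, a queried tau_q is scored against the
# ground-truth optimum by the ratio r' = gc(tau_q) / gc_g.

def query_ratio(gc_profile, tau_q):
    gc_g = max(gc_profile.values())      # ground-truth confidence gc_g
    return gc_profile.get(tau_q, 0.0) / gc_g

gc_profile = {1: 0.2, 2: 0.9, 3: 0.6}    # made-up gc(tau) values
print(query_ratio(gc_profile, tau_q=2))  # exact query -> 1.0
print(query_ratio(gc_profile, tau_q=3))  # near miss -> 0.666...
```

The reported $r$ is then the mean of these per-query ratios.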
**Q3**: ‘…training and testing parts?’
**A3**: The same as in Answer 3 from ‘Weaknesses’.
**Q4**: ‘Why are there only 1 element per periodic time delay in the $\omega$ vectors?’ & ‘What if two distinct TRs (body-head types) had the same characteristic time delay?’
**A4**: One advantage of NE is exactly its ability to efficiently store large amounts of intertwined TRs. The specific reason is in 3(2) of the global response at the top of the webpage.
**From ‘Limitations’**
**Q1**: ‘they do introduce a certain arbitrariness to the prediction frame work’
**A1**: We have conducted data analysis to rationalize the parameters as much as possible. For example, we have analyzed that TRs whose gc ∼ 0.8 constitute a small percentage of all tested TRs. We have also made reasonable adjustments. For example, we have conducted experiments in Appendix C.3.3 where the threshold for distinguishing valid and invalid TRs is chosen as 0.7,0.8,0.9, respectively. We emphasize that, similar to the classic clustering task, some degree of ‘arbitrariness’ is inevitable due to the unsupervised nature of TR learning.
**Q2**: ‘justify why this is a fair comparator ... independent measures for evaluation too’
**A2**: Fairness is ensured by treating both NE and existing embeddings equally as justified in the global response. Moreover, we disagree with the statement that ‘the argmax is … for TR Query on their model’. At the decoding stage for TR query, the argmax in evaluation is performed on the relative time. At the training stage, however, no information about the relative time is needed. The training loss is only relevant to the specific information $(ev, t)$ of each event sample and whether it is a positive or negative sample.
We have greatly improved the paper by the reviewer's valuable feedback. We emphasize that NE's advantage of 'efficiency' is evident even without the comparative experiments.
**Reference**
[1] Omran P G, Wang K, Wang Z. Learning Temporal Rules from Knowledge Graph Streams. AAAI, 2019.
---
Rebuttal Comment 1.1:
Comment: Hi Reviewer,
This paper has divergent scores. So, please give your feedback after reading author rebuttal and other reviewers' comments.
Your AC
---
Rebuttal Comment 1.2:
Comment: Thank you for your detailed responses to my questions.
I note your preference to refocus the paper "as a first ‘efficient’ TR learner with event embeddings" and I think this is a good choice. I still think there are some weaknesses to the paper and much of this aligns with my original review. In particular I think it would help to define temporal regularity formally, independently of your detection mechanism. In the paper, you write "We define the simplest and most basic 1-1 TR...". This implies that you at least have an intuitive notion of what a more general description would be. And that is before we challenge the use of 1-1 (what would be a non 1-1 TR?).
Something informed by causal theory would probably be useful here (e.g. see the Pearl textbook or one of the many more recent works in the field). You could also relate TRs to other formalisms for reasoning about temporal events, such as "the event calculus" or "temporal relations". Most valuable would be a clear articulation of what you think a TR actually is, and then define some subclass (or set of nested subclasses) that indicate the simplifying assumptions you are making).
For instance, to my understanding, causal theory states that correlations between two random variables, X and Y, can arise if one causally influences the other directly, or if there is some third variable (or set of variables) that influence both (or other more complex relationships). The fact that your TR formalism is sensitive to order, implies that it would be more sensitive to the former case than the latter (i.e. one RV causally influencing another). In that case, one could still define a model that makes weaker assumptions than you do. For instance, that the causal RV (let's say X) induces a distribution over the wait time before which Y is observed. Your formalism (as I indicated in my original review) makes further assumptions that a) the distribution is uniform in some interval b) that the interval width is determined by the mean wait time.
This is, I think, highly relevant for work that claims to be the first meaningful attempt to detect TRs.
I also have some issues with some of your descriptions about the meaning of metrics. In particular, in response to my question:
>Q5: ‘what does it mean to have a ground-truth confidence greater than 0.8.’
You reply:
> A5: This implies that there is an 80% probability for the co-occurrence of a body event at time $t$ and a head event at time $t+\Delta$.
But I think that your descriptions elsewhere are more accurate, namely that it is the probability that you will see a head event at time $t+\Delta$ given that you have observed a suitable body event at time $t$.
Finally, I think that my original statement still stands about your experimental conditions limiting the possibility of recognising two distinct TRs with the same mean wait time (something that your general formalism doesn't restrict).
Nonetheless, I have read the other responses and reviews, and I feel that I can be persuaded to increase my rating to weak accept based on three main strengths of the paper:
* The novelty of the formalism and its relationship to Noether's theorem. The mathematical structure, which utilises complex numbers to capture characteristic wait times, may well have a much wider applicability and I think raises the interest of this paper significantly.
* The experiments are well constructed and the results convincing.
* The fact that there are no existing methods (that I know of) that can detect temporal regularities with this kind of efficiency.
---
Reply to Comment 1.2.1:
Title: New Response
Comment: **Thank you for your response.**
1. We greatly appreciate your suggestion to provide a clear articulation of what a TR actually is, 'independently of our detection mechanism'. We will incorporate your suggestion in the revised paper to enhance clarity.
2. Regarding Q5, we understand that your interpretation of ‘ground-truth confidence’ aligns with the ‘global confidence’ in our paper. Our response is appropriate if our understanding is correct (inappropriate if wrong). In that context, your preferred answer corresponds to the ‘standard confidence’ in the paper, which is combined with the 'head coverage' to form the ‘global confidence’ using a harmonic mean.
3. In our comparative experiment setting, we excluded the scenario where $\tau=0$ because it represents a large proportion of all TRs but does not effectively reflect the difficulty of the TR query task. Baselines can easily handle queries with $\tau=0$ as they are too simple. In our grouped experiment (Figure 4(c)), we specifically test this scenario to demonstrate the flexibility of NE.
We appreciate your increased score, and thank you again for your further response and valuable suggestions. **Please note that ‘weak accept’, as you suggested, corresponds to a score of 6 rather than 5 (denoting borderline accept), as you previously assigned**.
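For concreteness, the combination described in point 2 (global confidence as the harmonic mean of standard confidence and head coverage) can be sketched as follows; the function name is ours and purely illustrative:

```python
def global_confidence(standard_conf: float, head_coverage: float) -> float:
    # Harmonic mean of the two scores; zero if either score is zero.
    if standard_conf == 0 or head_coverage == 0:
        return 0.0
    return 2 * standard_conf * head_coverage / (standard_conf + head_coverage)

# The harmonic mean is dominated by the weaker of the two scores.
assert global_confidence(0.5, 0.5) == 0.5
assert global_confidence(1.0, 0.5) < 0.7   # roughly 2/3: penalised by low coverage
```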
---
Reply to Comment 1.2.2:
Comment: **On the definition of TR, we have made improvements in the revised paper following the reviewer's comment. We sincerely ask the reviewer whether the following description is acceptable:**
By temporal regularity we refer to the building structure of event schemas [1]. According to the cognitive science literature, event schemas are learned directly from experience by statistically accumulating common event structures. These schemas also encode chronological relationships [2]. Building on this statistical interpretation, we formally define temporal regularity as a temporal association that remains invariant to time shifts: $ ( ev_b, t ) \to ( ev_h, t + \tau) \quad \forall t \in T_a $. Here, $\tau=0$ represents the synchrony of event occurrences, and both $t$ and $\tau$ can be either discrete or continuous. Since real-world data distributions often contain noise, we introduce an adaptive window $\bigtriangleup=[\tau(1-\eta), \tau(1+\eta)]$ to replace $\tau$ when evaluating statistically learned temporal regularities from events. This evaluation approach is both simple and practical, as it tolerates intuitive fluctuations in event timing.
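As a sanity check of this definition, TR support under the adaptive window can be evaluated by simple counting; the following sketch uses illustrative names and toy data, not code from the paper:

```python
def tr_support(body_times, head_times, tau, eta=0.1):
    # Fraction of body events followed by some head event inside the
    # adaptive window [tau*(1-eta), tau*(1+eta)] from the definition above.
    lo, hi = tau * (1 - eta), tau * (1 + eta)
    hits = sum(
        any(lo <= th - tb <= hi for th in head_times) for tb in body_times
    )
    return hits / len(body_times)

# Toy data: the head event tends to follow the body event by about 10 steps.
body = [0, 10, 20, 30]
head = [10.5, 20.2, 29.8, 55.0]
assert tr_support(body, head, tau=10) == 0.75   # 3 of 4 body events matched
```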
**We appreciate again the reviewer for your constructive comments.**
References
[1] Pudhiyidath A, Roome H E, Coughlin C, et al. Developmental differences in temporal schema acquisition impact reasoning decisions[J]. Cognitive Neuropsychology, 2020.
[2] Ghosh V E, Gilboa A. What is a memory schema? A historical perspective on current neuroscience literature[J]. Neuropsychologia, 2014.
---
Rebuttal 2:
Title: Willingness to answer further questions
Comment: Dear reviewer 7yn1
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | Summary: This paper defines the tasks of temporal regularity (TR) detection and query and their evaluation metrics, and proposes Noether Embedding (NE) that enables encoding TRs from limited event samples and rapid retrieval of TRs in constant time complexity. NE possesses time-translation symmetries of temporal regularities that are indicated by conserved local energies in the embedding space. NE is evaluated on ICEWS14, ICEWS18, and GDELT datasets, and achieves about double F1 scores for detecting valid TRs and over 10 times confidence scores for querying TR intervals compared with baseline embeddings with additional calculation. NE is further shown to be useful for social event prediction and personal decision-making scenarios.
Strengths: - This work defines a pair of particularly novel tasks: temporal regularity detection and query. Both are critical to human-level intelligence.
- It proposes Noether Embedding which for the first time enables learning temporal regularities directly from events and rapid retrieval of the regularities.
- NE achieves promising performances on ICEWS14, ICEWS18, and GDELT datasets, and is shown to be useful for social event prediction and personal decision-making scenarios.
Weaknesses: Please see Questions for doubts to be addressed in the next version of the paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. L279 argues that NE is qualitatively different from rule mining methods in that a search method may require fixing different relative time points before mining rules, while NE enables direct fitting and plotting of an approximate validity distribution of relative time points. But L150-151 suggests that users will select a set of relative time points. So how does NE differ? If users provide their selected set of relative time points, why couldn't we detect time regularities by counting?
2. What structural biases do NE assume? Could you provide theoretical analyses for the capacity limitation of NE?
3. Could you add to Section 3.2 the geometric interpretation of the Hadamard product in Eq. (4)?
4. L116: Is the $\omega$ vector a hyperparameter or learned, and what is the semantic implication of it being certain values?
5. L122: Should $t^\prime$ exclude all $\hat{t}$ where $(ev,\hat{t})$ is in the dataset?
6. To confirm my understanding: the second equal sign in L134 always holds true, and by construction, $g$ being invariant to $t$ is always true. Right?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The Conclusion section mentions limitations. The paper doesn't seem to discuss negative societal impacts. It could say something about when the method may fail to detect or query temporal regularities in applications and potential consequences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript and providing insightful comments. We have improved our paper accordingly, and our responses are as below.
**Q1**: ‘So how does NE differ?’ & ‘why couldn't we detect time regularities by counting?’
**A1**: The main difference lies in the transfer of time complexity. Specifically, the counting time in the search for each $(ev _b,ev _h,\tau)$ is proportional to the number of relevant events. This time complexity is transferred to the training stage of NE, so that decoding each $(ev _b,ev _h,\tau)$ is only O(d), and even O(1) when parallelised on GPUs. Therefore, the NE vectors after the training stage are functionally equivalent to an approximate memory of all TR validity results after the counting process for each $(ev _b,ev _h,\tau)$. We have made this clearer in the revised paper.
**Q2**: ‘What structural biases do NE assume?’
**A2**: NE’s structural biases are directly inspired by Noether’s theorem. Specifically, (1) the event embedding $\pmb{q}(t;ev)$ should be constructed to make each local energy $g$ invariant to $t$; (2) the training loss should be constructed to make the value of $g$ approximate TR validity; (3) we should use local energies $g$s as the decoding function. The detailed reasons for how these biases enable NE are explained in Section 3.3 and theoretically analyzed in Appendix B.2.
**Q3**: ‘Could you provide theoretical analyses for the capacity limitation of NE?’
**A3**: In Appendix B.2.2, we theoretically analyzed two inequality constraints that control NE’s performance. We found that the vector dimension $d$ should be larger than the number of absolute time points $T _a$ to enable NE. This requirement is also confirmed by the ablation experiment in Table 5 in Appendix C.3.2. We will provide a stricter proof of this capacity limitation in the next version of our paper.
**Q4**: ‘Could you add to Section 3.2 the geometric interpretation of the Hadamard product in Eq. (4)?’
**A4**: The Hadamard product can be depicted as a rotation of event-type vectors by time in the d-dimensional complex space. We have added this interpretation in our revised paper.
**Q5**: ‘Is the $\omega$ vector a hyperparameter or learned, and what is the semantic implication of it being certain values?’
**A5**: The $\pmb{\omega}$ vector is manually set as an exponential distribution in the paper. We also show in Section 4.4 (Ablation Studies -- Frequency Distribution) that the exponential distribution surpasses linear distribution for fitting larger datasets.
$\pmb{\omega}$ provides the basis for Fourier-like expansions. Specifically, since the score function $f(t;ev)=\sum _{j=1}^d Real(\pmb{u} \circ e^{i \pmb{\omega} t}) _j$ and the decoding function $g(\tau;ev _b,ev _h)=2-2\sum _{j=1}^d Real(\overline{\pmb{u} _b} \circ \pmb{u} _h \circ e^{i \pmb{\omega} \tau}) _j\in [0,4]$ can be viewed as Fourier-like expansions, the global time vector $\pmb{\omega}$ provides the expansion basis, while the event-type vectors $\pmb{u}$ store the coefficients for $f(t)$ (revealing event occurrence) and compose $\overline{\pmb{u} _b} \circ \pmb{u} _h$ as the coefficients for $g(\tau)$ (revealing TR validity).
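The two functions above can be written out directly in numpy. The vector initialisation and frequencies below are illustrative assumptions (in the paper, $\pmb{u}$ is learned and $\pmb{\omega}$ follows an exponential distribution), but the identity between $g(\tau)$ and a time-invariant squared distance holds for any unit-norm vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

def unit_complex(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

omega = rng.normal(size=d)   # global time (frequency) vector: the shared basis
u_b = unit_complex(d)        # body event-type vector, unit norm (assumed)
u_h = unit_complex(d)        # head event-type vector, unit norm (assumed)

def q(u, t):
    # Event embedding: the event-type vector rotated by time t.
    return u * np.exp(1j * omega * t)

def g(tau):
    # Decoding function (local energy), bounded in [0, 4] for unit vectors.
    return 2 - 2 * np.sum(np.real(np.conj(u_b) * u_h * np.exp(1j * omega * tau)))

# g(tau) equals the squared distance ||q_b(t) - q_h(t + tau)||^2 for ANY t,
# which makes the time-translation invariance explicit.
for t in (0.0, 5.0, -2.5):
    dist2 = np.sum(np.abs(q(u_b, t) - q(u_h, t + 3.0)) ** 2)
    assert np.isclose(dist2, g(3.0))
assert 0.0 <= g(3.0) <= 4.0
```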
**Q6**: ‘Should $t’$ exclude all $\hat{t}$ where $(ev,\hat{t})$ is in the dataset?’
**A6**: Your comment is reasonable. Strictly, exclusion is needed for rigor's sake. However, it is also acceptable not to exclude them, trading a small performance drop for a faster implementation. This is because a negative sample contributes much less ($\frac{1}{N}$) than a positive sample ($1$) as long as the number of negative samples $N$ is much larger than $1$, which is the general case. Therefore, the positive samples ‘wrongly treated’ as negative samples make only a minor difference in the loss.
**Q7**: ‘the second equal sign in L134 always holds true, and by construction, $g$ being invariant to $t$ is always true. Right?’
**A7**: Yes, both hold. This constitutes NE’s major structural bias, as discussed in Answer 2.
**Q8**: ‘The paper doesn't seem to discuss negative societal impacts’
**A8**: NE fails when it cannot fit the whole dataset well, which generally happens when the vector dimension $d$ is set smaller than the number of absolute time points $T _a$. The potential consequence is that false positives may lead to inaccurate predictions and false negatives may lead to untimely warnings. We have added this discussion to the revised paper.
Once again, we would like to express our gratitude to the reviewer for the valuable feedback, which has helped us further improve our manuscript.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Many thanks for your reply, which addressed many of my doubts. I still have a major concern about the usefulness of the embedding, as compared to counting. To know if there is a time regularity between two event types, one could, for example, check their pairwise cooccurrences and see if the time offsets form a unimodal distribution. This simple counting method can detect time-invariant time regularities.
You indicated that the time-complexity of counting is transferred to the embedding training stage; but counting can also happen offline at "training" time. You argued that one of the two main advantages of NE is data-efficient learning; but counting can always use the same amount of data to obtain the same or better detection accuracy. Counting should be the default choice since it avoids unnecessary structural biases of using certain score and decoding functions. While your proposed functions satisfy the desiderata of time-invariance, they introduce additional priors that are hard to interpret.
In terms of efficient querying, a simple lookup dictionary obtained by counting will be O(1) to query. Alternatively this lookup dictionary can be compressed in various ways to consume less space.
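For reference, the counting baseline sketched in this comment (tallying pairwise time offsets and reading off the dominant one) could look like the following; the event format and toy data are illustrative:

```python
from collections import Counter

def count_offsets(body_times, head_times):
    # Tally every pairwise time offset (head - body) between two event types.
    offsets = Counter()
    for tb in body_times:
        for th in head_times:
            offsets[th - tb] += 1
    return offsets

# Toy data: the head event consistently follows the body event by 3 steps.
body = [1, 10, 20, 31]
head = [4, 13, 23, 34]
offsets = count_offsets(body, head)
assert offsets.most_common(1)[0] == (3, 4)   # dominant offset 3, seen 4 times
```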
Could you help me understand the scenario when (each component of) NE is more favorable than counting? In any case, counting methods seem to be necessary baselines in the paper. These are my current doubts, and I hope to learn thoughts from authors and other reviewers. Meanwhile, I have carefully read the comments from other reviewers and agree that clarification and revision are needed before the work is more complete.
Nevertheless, I acknowledge the importance of the proposed tasks and appreciate the existing experiments and insights, which provide much food for thought. I hope the next version of the paper will address the issues raised in the rebuttal conversations.
---
Reply to Comment 1.1.1:
Title: The scenario does exist
Comment: Thank you for your additional question. It is worth noting that there is a specific scenario in which NE functionally surpasses counting, and that is when vectors are required for storage and querying purposes. The emergence of large language models has led to a rapid growth in vector databases, with many startups entering this domain recently. Looking ahead, there is a high likelihood that vector databases will become the prevailing storage solution for various data types, encompassing unstructured data, structured knowledge, structured events, and more. Given this future trend towards vector storage dominance, our research holds significant potential utility.
---
Rebuttal 2:
Comment: Hi Reviewer,
This paper has divergent scores. So, please give your feedback after reading author rebuttal and other reviewers' comments.
Your AC
---
Rebuttal 3:
Title: Willingness to answer further questions
Comment: Dear reviewer zRYw
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | Summary: This paper defined the complementary problems of TR detection and TR query, formulated their evaluation metrics, and adopted classic datasets for evaluations. Towards the TR problem, this paper proposed Noether Embedding (NE), which for the first time, enabled both the data-efficient formation and rapid retrieval of TRs simply through embedding each event sample.
Strengths: 1) This paper targeted at detecting and encoding temporal regularities (TRs) in events, which are of great importance.
2) This paper introduced a Fourier basis to intrinsically model temporal regularities.
Weaknesses: 1) The authors overclaimed the first contribution.
The problems of TR detection and TR query are closely related to temporal association mining, albeit with different evaluation metrics. However, these metrics are commonly adopted in other fields, so the problems are not new.
2) No strong baselines were compared in the experiments.
None of the compared baselines targeted this problem, or they had different assumptions, and therefore the comparisons do not support the superiority of NE.
3) The paper is poorly presented.
The tasks were not presented clearly.
In addition, this paper defines too many symbols without explaining them well, which leaves readers lost.
4) The connection between Noether’s theorem and the proposed method is unclear and weak.
As evidence, keywords such as "Noether's theorem" and "conservation law" appear in only two paragraphs, and I cannot find a concrete description of how Noether's theorem motivated the formulation of Noether Embedding.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1) The paper emphasized "distributed" representation was an advantage of NE. Is it a general property of knowledge graph embedding?
2) The paper also emphasized that the proposed TR formulation is "data-efficient". Is this property brought by the Fourier expansion? Conversely, what are its drawbacks or limitations?
3) How to perform the tasks with NE during inference? It seemed unclear in the paper.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: 1) The limitation of Fourier expansion for real-world temporal regularities was not discussed.
2) Potential negative societal impact was not discussed, e.g. causality of TR, privacy issue brought by TR detection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript and providing insightful comments. We have improved our paper accordingly but also discovered some misunderstandings concerning the content and contributions of the work. Our responses are as below.
**From ‘Weaknesses’**
**Q1**: ‘The authors overclaimed the first contribution… the problems were not new’
**A1**: **To our best knowledge, the problem is new. The detailed answer is provided in the first section of our global response at the top of the webpage.**
**Q2**: ‘No strong baselines were compared in the experiments…therefore did not support the superiority of NE.’
**A2**: This is because our problem is new, so there are no existing baselines, to our best knowledge, that exactly match our approach in the available research fields. Besides, NE's main superiority of 'efficiency' is evident even without comparative experiments. **The detailed answer is provided in the second section of our global response.**
**Q3**: ‘The paper is poorly presented ... too many symbols but did not well explained them, which made readers lost.’
**A3**: We have accordingly revised our paper. For example, we provide intuitive descriptions of the defined terms, such as explaining that standard confidence can be viewed as the probability that the head event will occur at time $t+\bigtriangleup$ once the body event occurs at time $t$.
**Q4**: ‘I can't find how "noether’s theorem" motivated the formulation of Noether Embedding in a concrete description’
**A4**: The motivation is described by the underlined text in Section 3.1. Specifically, Noether’s theorem inspires us with three structural biases when constructing NE: (1) the event embedding $\pmb{q}(t;ev)$ is constructed to make each local energy $g$ invariant to $t$; (2) the training loss is constructed to make the value of $g$ approximate TR validity; (3) the local energy $g$ is used as the decoding function. We have made the connection clearer in the revised version of the paper.
**Q5**: ‘The connection between Noether’s theorem and the proposed method is unclear and weak’
**A5**: The connection is evident in three aspects. Firstly, Noether’s theorem directly motivates the construction of NE, as shown in Section 3.1. Secondly, we attribute NE’s unique efficient learning capability directly to the Noether-inspired structural biases. Specifically, the first bias enables the data-efficient formation of TRs, the second bias mainly contributes to accurate TR detection and query, and the third bias directly leads to the rapid retrieval of TRs. Detailed explanations are in Section 3.3 and Appendix B.2. Thirdly, a more strict correspondence between NE variables and those in a physical system is shown in Appendix B.1.1.
**From ‘Questions’**
**Q1**: ‘The paper emphasized "distributed" representation was an advantage of NE. Is it a general property of knowledge graph embedding’
**A1**: NE and knowledge graph embeddings utilize distinct aspects of distributed representations. Knowledge graph embeddings leverage their generalization (by interpolation) capabilities to achieve good performance on the completion task. NE, instead, uses complex vectors to apply Fourier-like expansions, thus fitting and storing both event occurrences and TR validities in the embedding space. Specifically, the score function $f(t;ev)=\sum _{j=1}^d Real(\pmb{u} \circ e^{i \pmb{\omega} t}) _j$ and the decoding function $g(\tau;ev _b,ev _h)=2-2\sum _{j=1}^d Real(\overline{\pmb{u} _b} \circ \pmb{u} _h \circ e^{i \pmb{\omega} \tau}) _j\in [0,4]$ can be seen as Fourier-like expansions. The global time vector $\pmb{\omega}$ serves as the expansion basis, while the event-type vectors $\pmb{u}$ store the coefficients for $f(t)$ (event occurrence) and compose $\overline{\pmb{u} _b} \circ \pmb{u} _h$ as the coefficients for $g(\tau)$ (TR validity).
**Q2**: ‘The paper also emphasized the proposed TR formulation was "data-efficient". Is this property brought by Fourier expansion?’
**A2**: We attribute NE’s data-efficient capability mostly to the fact that $g$ is invariant to $t$, which is directly inspired by Noether’s theorem. Detailed explanations are in Section 3.3. Fourier-like expansion, instead, is necessary for allowing good fitting and large-capacity storage, serving as a prerequisite for NE to perform well on large-scale real-world datasets used in the paper.
**Q3**: ‘How to perform the tasks with NE during inference’
**A3**: As described in Section 3.2, $\mathop{\min}\limits _{\tau\in\mathbb{T} _r} g(\tau)$ is computed, which is compared with a global threshold $g _{th}$ to decide whether a potential TR is valid or not (for TR detection). For a valid TR, the $\tau'$ which minimizes $g(\tau), \tau \in \mathbb{T} _r$ is selected as the model output of the relative time (for TR query). We have provided clarifications on the relationship between NE decoding and TR detection and TR query in the revised paper.
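The inference procedure described in A3 can be sketched as follows; the toy energy curve below merely stands in for actual decoded $g(\tau)$ values:

```python
import numpy as np

def detect_and_query(g_values, taus, g_th):
    # TR detection: valid iff the minimum decoded energy is below the
    # global threshold g_th; TR query: the minimising tau is the answer.
    i = int(np.argmin(g_values))
    if g_values[i] < g_th:
        return True, taus[i]
    return False, None

taus = np.arange(-5, 6)
# Toy energy curve with a clear dip at tau = 3 (stands in for g(tau)).
g_values = 2 - 1.8 * np.exp(-0.5 * (taus - 3) ** 2)
valid, tau_star = detect_and_query(g_values, taus, g_th=1.0)
assert valid and tau_star == 3
```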
**From ‘Limitations’**
**Q1**: ‘The limitation of Fourier expansion for real-world temporal regularities was not discussed’
**A1**: In Appendix B.2.2, we have theoretically analyzed the requirement for the vector dimension $d$ to be larger than the number of absolute time points $T _a$ to avoid significant performance degradation of NE, as observed in the GDELT dataset. This limitation imposes a storage capacity constraint for large datasets. We have included notifications regarding this limitation in the revised paper.
**Q2**: ‘Potential negative societal impact was not discussed, e.g. causality of TR, privacy issue brought by TR detection.’
**A2**: Thank you for the reminder. We have revised our paper accordingly to address these concerns.
We believe that our work makes a nontrivial contribution to the representation learning community by developing NE as a first efficient TR learner with event embeddings and proposing tasks to fairly evaluate embeddings' TR learning capabilities.
---
Rebuttal 2:
Comment: Hi Reviewer,
This paper has divergent scores. So, please give your feedback after reading author rebuttal and other reviewers' comments.
Your AC
---
Rebuttal 3:
Title: Willingness to answer further questions
Comment: Dear reviewer Uoii
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | Summary: This paper introduced a new task, $\textit{temporal regularity mining}$, and proposed a Noether Embedding to rapidly retrieval TR.
Strengths: - The argued temporal regularities sound interesting.
- Good writing.
Weaknesses: - The proposed temporal regularity mining is not a new task with a new paradigm that existing methods cannot tackle well. I think it is more like an extended task based on event embedding using temporal knowledge graph data.
- All existing event embedding methods can be used to tackle the proposed TR mining task. However, there is no discussion clarifying why these existing methods cannot deal well with this new task.
- Mining general TRs sounds interesting and useful, but the mined TRs are not general. In other words, they are more like temporal associations between events. For example, as shown in the supplementary material, (China, Appeal for diplomatic cooperation, Malaysia) $\rightarrow$ (South Korea, Express intent to settle dispute, China). An ideal TR should hold regardless of who performs the body event: the head event then occurs soon after, invariant to time.
- The temporal range between the body and head events is large, from several days to several years. So how are $\tau$ and $\eta$ set? Please discuss this detail.
- More experiments on traditional tasks in event embedding, such as event prediction, should be conducted to further demonstrate the effectiveness of the proposed method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript. However, there do exist many factual errors, which are justified below.
**From ‘Weaknesses’**
**Q1**: ‘The proposed temporal regularity mining is not a new task with a new paradigm’
**A1**: To our best knowledge, the problem is new. Our main problem is how to enable event embeddings with an efficient TR learning capability, which is distinct from those in the temporal rule mining field. Specifically, temporal rule mining is typically studied for practical applications, aiming to uncover event regularities in specific domains [1] [2] [3]. Our goal, instead, is to advance the representation learning field by enabling embedding models to efficiently learn the atomic structure (TR) of event schemas, similar to how humans learn [4] [5]. We, therefore, develop NE with such a novel capability and propose two complementary tasks for evaluating embedding capabilities. Our TR tasks are defined with evaluation metrics borrowed from the rule mining field [6] only to guarantee fair and reasonable evaluations.
**Q2**: ‘I think it is more like an extended task based on event embedding using temporal knowledge graph data.’
**A2**: We would like to clarify that we use temporal knowledge graph data because they are classic and authoritative, allowing for fair evaluations. However, our problem targets the most basic form of events $(ev, t)$, which serves as the foundation for our definition of TR. It is important to note that both our proposed tasks and our method easily generalize to arbitrary forms of structured events; a temporal knowledge graph is just a special case. For example, by setting each (s,p,o) as an event type, our tasks and the NE method can handle temporal knowledge graph data in the form of (s,p,o,t). Similarly, by setting (s,p) as an event type, they can handle data in the form of (s,p,t), and so on.
**Q3**: ‘All existing event embedding methods can be used to tackle the proposed TR mining task’
**A3**: Existing embeddings, indeed, cannot tackle the proposed tasks on two levels. Firstly, the ‘efficient’ learning capability is unique to NE and not present in existing embeddings. We attribute this unique capability of NE to its structural biases inspired by Noether’s theorem. Specifically, (1) the event embedding $\pmb{q}(t;ev)$ is constructed to make the local energies $g$ invariant to $t$; (2) the training loss is constructed to make the value of $g$ approximate TR validity; (3) the local energy $g$ is used as the decoding function. Secondly, even when setting aside the ‘efficient’ requirement, existing embeddings still cannot learn TRs accurately, as demonstrated by the experiments in Table 1 and evaluated by the proposed TR detection and query tasks. This is primarily because they over-apply the generalization capabilities of distributed representations, which hinders the fit of event occurrences, as discussed in Section 4.2.
**Q4**: ‘the mined TR is not general’
**A4**: We would like to clarify that NE can indeed learn ‘general’ TRs when we set (s,p,o,t) with the same p (predicate) to denote an event type, rather than with the same (s,p,o) as set in the paper. This further proves the wide range of potential applications for NE, and we appreciate the reviewer for suggesting this. We have emphasized this potential in the revised paper.
**Q5**: ‘how to set the $\tau, \eta$? Please discuss this detail’
**A5**: We have provided detailed explanations in Section 3.2 and Section 4.1 on how to set the values of $\tau$ and $\eta$. In our experiments, $\tau$ is traversed through set $\mathbb{T} _r$ of the relative time points such as $\mathbb{T} _r: \{-\tau _{max},...,0, ..., \tau _{max}\}$ to plot the decoding results. We set $\tau _{max}=T _a-1$. As for $\eta$, we set it to 0.1 in $\bigtriangleup$s for strict evaluations and take the upper integer $\bigtriangleup=[\tau-\lceil \tau\eta \rceil, \tau+\lceil\tau\eta\rceil]$. It is important to note that even in extreme situations where both body and head event occurrences are equal to 2, stochastic noises are still unlikely to interfere with the evaluation of TR validity since $\eta=0.1$ is strict.
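The upper-integer window described above can be sketched as follows (the function name is illustrative):

```python
import math

def tolerance_window(tau, eta=0.1):
    # Adaptive window around the relative time tau with upper-integer margin,
    # i.e. [tau - ceil(tau*eta), tau + ceil(tau*eta)].
    m = math.ceil(abs(tau) * eta)
    return (tau - m, tau + m)

assert tolerance_window(10) == (9, 11)    # margin ceil(1.0) = 1
assert tolerance_window(25) == (22, 28)   # margin ceil(2.5) = 3
assert tolerance_window(0) == (0, 0)      # synchrony: exact match required
```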
**Q6**: ‘More experiments on traditional tasks in event embedding, such as event prediction, should be conducted to further demonstrate the effectiveness of the proposed method’
**A6**: We appreciate the reviewer for making this suggestion. However, we have already shown that NE is the first efficient TR learner with event embeddings. Specifically, only NE can encode TRs from limited event items (as shown in Figure 4) and rapidly retrieve TRs (by applying $g(\tau)$). This uniqueness is fundamental and qualitative, requiring no comparative experiments to demonstrate. Our comparative experiments further demonstrate NE's superiority over existing embeddings when the 'efficient' requirement is set aside and only TR learning accuracy is compared. Therefore, we believe that our experiments are sufficient to support the effectiveness of NE.
Thank you for your valuable feedback, and we have made sure to address the issues raised in your review.
**References**
[1] Segura‐Delgado A, Gacto M J, Alcalá R, et al. Temporal association rule mining: An overview considering the time variable as an integral or implied component. DMKD, 2020.
[2] Chen M, Chen S C, Shyu M L. Hierarchical temporal association mining for video event detection in video databases. IEEE, 2007.
[3] Yoo J S, Shekhar S. Similarity-profiled temporal association mining. IEEE, 2008.
[4] Pudhiyidath A, Roome H E, Coughlin C, et al. Developmental differences in temporal schema acquisition impact reasoning decisions. Cognitive Neuropsychology, 2020.
[5] Chaudhuri R, Fiete I. Computational principles of memory. Nature neuroscience, 2016.
[6] Galárraga L, Teflioudi C, Hose K, et al. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal, 2015.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewers,
Please have a careful look at the author response and give a feedback, as your score is lower than others.
Your AC
---
Rebuttal 2:
Title: Willingness to answer further questions
Comment: Dear reviewer tTxw
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | Rebuttal 1:
Rebuttal: Three main justifications are provided below.
## 1. Novelty of the problem
**Q**: ‘The authors overclaimed the first contribution…the problems were not new’
**A**: To our best knowledge, the problem is new. Our main problem is how to enable event embeddings with an efficient TR learning capability, which is distinct from those in the temporal rule mining field. Specifically, temporal rule mining is typically studied for practical applications, aiming to uncover event regularities in specific domains [1] [2] [3]. Our goal, instead, is to advance the representation learning field by enabling embedding models to efficiently learn the atomic structure (TR) of event schemas, similar to how humans learn [4][5]. Our problem is not only new but also important because (1) embedding models are a promising technology field; (2) TRs are a basic world structure among events; (3) efficient learning is a long-pursued human-like capability.
To address this problem, we have developed NE as the first efficient TR learner using event embeddings. Additionally, we have proposed two complementary tasks for evaluating the TR learning capabilities of embedding models. Our tasks are defined with evaluation metrics borrowed from the rule mining field [6], only to ensure fair and reasonable evaluations.
To avoid misunderstandings, we have reversed the order of our two contributions, making the development of NE the first contribution and the proposal of tasks the second. Both of these contributions aim to advance the representation learning field.
## 2. Fairness of comparisons
**Q**: ‘The paper would be greatly strengthened by a clear justification of why this is the fairest available comparison between methods.’
**A**: Our main contribution is developing NE as the first ‘efficient’ TR learner with event embeddings. The uniqueness of NE in terms of efficiency is fundamental and qualitative, requiring no comparative experiments to demonstrate. Specifically, only NE can encode TRs from limited event items (as shown in Figure 4) and rapidly retrieve TRs (by applying $g(\tau)$), thanks to its specific structural biases. Existing embedding models lack these capabilities.
Only when setting aside the ‘efficient’ requirement are comparative experiments necessary to compare the TR learning accuracy between NE and existing embeddings. This is secondary compared to the ‘efficient’ distinction. Since there are, to the best of our knowledge, no existing embedding baselines that exactly match our requirements in the available research fields, we have made efforts to ensure fairness in comparison through various means:
(1) Models are applied in the same way. We input the same event data during the training stage and add the same interface $g'(\tau)$ to the respective model outputs of score functions. This interface is applied to both NE and the embedding baselines, in the same manner, to indirectly compute TR validity from stored event occurrences. The excellent performance of NE with this interface validates its effectiveness.
(2) Evaluations are reliable. The evaluation metrics used in our proposed tasks are borrowed and adapted from the mature field of rule mining [6]. This guarantees a reliable evaluation of the TR learning capabilities of embedding models.
(3) Baselines are classic. We have chosen baselines from the well-developed field of temporal knowledge graph embedding, which has a wide range of classic embeddings for structured events.
(4) Dataset is convincing. Our main experiments are conducted on three classic real-world event datasets.
(5) Performance is explainable. We provide detailed explanations in the paper regarding why NE works and why existing embeddings do not, both theoretically and experimentally.
We have further clarified these points in the revised paper.
## 3. Why NE works and overwhelms
The success of NE can be attributed to two main factors: the Noether-inspired structural biases and the Fourier-like memory representations. One contributes to NE’s efficient TR learning capability while the other enables NE’s large-capacity storage for both TR validity and event occurrences.
(1) The Noether-inspired structural biases
The Noether-inspired structural biases can be summarized as below: (i) the event embedding $\pmb{q}(t;ev)$ is constructed to make each local energy $g$ remain invariant to $t$; (ii) the training loss is designed to make the value of $g$ approximate TR validity; (iii) the local energy $g$ is used as the decoding function.
(2) The Fourier-like memory representations
The score function $f(t;ev)=\sum _{j=1}^d Real(\pmb{u} \circ e^{i \pmb{\omega} t}) _j$ and the decoding function $g(\tau;ev _b,ev _h)=2-2\sum _{j=1}^d Real(\overline{\pmb{u} _b} \circ \pmb{u} _h \circ e^{i \pmb{\omega} \tau}) _j\in [0,4]$ can be viewed as Fourier-like expansions. The global time vector $\pmb{\omega}$ provides the expansion basis, while the event type vectors $\pmb{u}$s store the coefficients for $f(t)$ (revealing event occurrence) and compose $\overline{\pmb{u} _b} \circ \pmb{u} _h$ as the coefficients for $g(\tau)$ (revealing TR validity).
Detailed explanations can be found in Section 3.3 and Appendix B.2.
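For concreteness, these Fourier-like forms can be sketched numerically. The snippet below is our illustration, not the authors' implementation; in particular, the unit-modulus normalization of the event-type vectors is our assumption, chosen so that $g$ stays within its stated $[0,4]$ range:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                    # embedding dimension (assumed value)
omega = rng.normal(size=d)                # global time vector: frequencies of the expansion

def phase_vec():
    # Event-type vector with unit-modulus entries scaled by 1/sqrt(d),
    # so the inner products used below lie in [-1, 1].
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=d)) / np.sqrt(d)

u_b, u_h = phase_vec(), phase_vec()       # body- and head-event type vectors

def f(t, u):
    """Score function f(t; ev) = sum_j Real(u ∘ e^{i ω t})_j (reveals event occurrence)."""
    return float(np.sum(np.real(u * np.exp(1j * omega * t))))

def g(tau, u_b, u_h):
    """Decoding g(τ) = 2 - 2 sum_j Real(conj(u_b) ∘ u_h ∘ e^{i ω τ})_j ∈ [0, 4] (TR validity)."""
    return float(2.0 - 2.0 * np.sum(np.real(np.conj(u_b) * u_h * np.exp(1j * omega * tau))))
```

Because $\overline{\pmb{u}_b} \circ \pmb{u}_h$ plays the role of Fourier coefficients over the relative time, sweeping $\tau$ through $g$ reads out stored TR validity in constant time per query.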
## Reference
[1] Segura‐Delgado A, Gacto M J, Alcalá R, et al. Temporal association rule mining: An overview considering the time variable as an integral or implied component. DMTD, 2020.
[2] Chen M, Chen S C, Shyu M L. Hierarchical temporal association mining for video event detection in video databases. IEEE, 2007.
[3] Yoo J S, Shekhar S. Similarity-profiled temporal association mining. IEEE, 2008.
[4] Pudhiyidath A, Roome H E, Coughlin C, et al. Developmental differences in temporal schema acquisition impact reasoning decisions. Cognitive Neuropsychology, 2020.
[5] Chaudhuri R, Fiete I. Computational principles of memory. Nature neuroscience, 2016.
[6] Galárraga L, Teflioudi C, Hose K, et al. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal, 2015.
Strengths: 1. The authors introduce Noether Embedding (NE), a new model that enables the data-efficient formation and rapid retrieval of temporal regularities simply through embedding each event sample. NE possesses the intrinsic time-translation symmetries of TRs, which facilitates TR encoding insensitive to sample size and TR retrieval in constant time complexity. This is a novel approach that has not been explored in previous works.
2. The authors formally define complementary problems of TR detection and TR query, formulate their evaluation metrics, and evaluate NE on classic ICEWS14, ICEWS18, and GDELT datasets. This is a rigorous evaluation of the proposed model and provides evidence of its superior performance compared to classic embeddings with additional calculation efforts.
3. The paper is well-written and clear, with a concise abstract and introduction that provide a good overview of the problem and the proposed solution. The authors also provide detailed explanations of the model and the evaluation metrics.
Weaknesses: 1. In Table 2 of the Appendix, the recall rate of NE is lower than that of TASTER. It would be better if the reason could be explained.
2. It should be further explained why gc(τ) in line 79 has a different input from gc(tr) in formula 3.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Table 2 in the Appendix shows that NE has extremely high accuracy, while the recall rate is not the highest. May I know the reason for this phenomenon?
2. Could you further explain why NE has an overwhelming advantage over all baselines?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author proposes the limitations of NE and solutions in terms of storage efficiency in lines 307-311.
1. In the future, methods will be explored to store event occurrences and time patterns in different regions to improve the storage efficiency of NE.
2. Future research will explore methods to compose 1-1 TRs into graphical and hierarchical event schemas and combine NE with deep learning and reinforcement learning methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript and providing insightful comments. We have improved our paper accordingly and our responses are as below.
**From ‘Weaknesses’**
**Q1**: ‘In Table 2 of the Appendix, the recall rate of NE is lower than that of TASTER. It would be better if the reason could be explained.’
**A1**: The reason is that we report the highest F1 score of each model in comparative studies by tuning their respective global threshold, denoted as $g _{th}$. As the F1 score is calculated using the formula $F1 = \frac{2 * precision * recall}{precision + recall}$, TASTER achieves its highest F1 score by reporting many false positives, resulting in a relatively high recall rate but an extremely low precision rate. We have included this explanation in the revised Appendix and emphasized the ‘highest F1 score’ evaluation in the revised paper.
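As a purely illustrative numerical check (the operating points below are invented, not taken from Table 2), the harmonic mean in the F1 formula shows why a high-recall model with many false positives can still score below a high-precision one:

```python
def f1(precision, recall):
    # F1 = 2 * precision * recall / (precision + recall)
    return 2 * precision * recall / (precision + recall)

# Many false positives: high recall but very low precision.
high_recall = f1(precision=0.10, recall=0.90)     # 0.18
# Few false positives: high precision, moderate recall.
high_precision = f1(precision=0.95, recall=0.60)  # ≈ 0.735
```

The harmonic mean punishes whichever of precision or recall is smaller, so tuning the global threshold for the highest F1 can legitimately trade recall for precision, as described above.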
**Q2**: ‘It should be further explained why gc(τ) in line 79 has different input from gc(tr) in formula 3.’
**A2**: We would like to clarify that they refer to the same gc but with different emphases. On the one hand, the $gc(tr)$ in Formula 3 represents the definition of global confidence for each tr. On the other hand, since $tr: (ev _b, ev _h, \bigtriangleup) = (ev _b, ev _h, \tau, \eta)$, we have $gc(tr)=gc(ev _b, ev _h, \tau, \eta)= gc(\tau; ev _b, ev _h, \eta)$. Therefore, the $gc(\tau)$ in line 79 emphasizes the fact that $gc$ can be expressed as a function of $\tau$ with fixed $ev _b, ev _h, \eta$. We have clarified this further in the revised paper.
**From ‘Questions’**
**Q1**: ‘Table 2 in the Appendix shows that NE has extremely high accuracy, while the recall rate is not the highest. May I know the reason for this phenomenon?’
**A1**: The reason is that we report the highest F1 score of each model in comparative studies by tuning their respective global threshold, denoted as $g _{th}$. As the F1 score is calculated using the formula $F1 = \frac{2 * precision * recall}{precision + recall}$, NE achieves its highest F1 score by reporting few false positives, resulting in a relatively high precision rate but a relatively low recall rate. We have emphasized the ‘highest F1 score’ evaluation in the revised paper.
**Q2**: ‘Could you further explain why NE has an overwhelming advantage over all baselines?’
**A2**: Firstly, the 'efficient' learning capability is unique to NE, which is fundamental and qualitative, requiring no comparative experiments to demonstrate. Specifically, only NE can encode TRs from limited event items (as shown in Figure 4) and rapidly retrieve TRs (by applying $g$). We attribute this mainly to three structural biases inspired by Noether's theorem: (1) the event embedding $\pmb{q}(t;ev)$ should be constructed to make each local energy $g$ invariant to $t$; (2) the training loss should be constructed to make the value of $g$ approximate TR validity; (3) we should use the local energy $g$ as the decoding function. Without such structural biases, baseline embeddings cannot learn TRs efficiently.
When the 'efficient' requirement is set aside by adding the same interface $g'$ to all models, the baselines still cannot learn TRs as accurately as NE, as demonstrated by the comparative experiments. The main reason is that baseline models over-apply the generalization capabilities of distributed representations, which hinders the fit of event occurrences, as discussed in Section 4.2 with Figure 3.
Once again, we would like to express our gratitude to the reviewer for the valuable feedback, which has helped us further improve our manuscript.
---
Rebuttal 2:
Title: Willingness to answer further questions
Comment: Dear reviewer w2A9
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | Summary: In this paper, the authors introduce the concept of temporal regularities (TRs), which are temporal associations invariant to time shifts between events. The authors claim that existing models lack the TR learning capability. Based on this idea, the authors define two tasks, TR detection and TR query, as well as their evaluation metrics. They further develop a new framework to learn a set of event representations regularized by fixed time embeddings. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed framework and its superiority on TR learning compared to existing methods.
Strengths: * The proposed temporal regularity problem is important, and many existing models lack such TR learning capabilities.
* The proposed TR detection and TR query tasks are well-designed. The corresponding evaluation metrics are also reasonable.
* The proposed solution is simple and effective. Experiments demonstrate significant improvement on TR compared to previous methods.
Weaknesses: * The event embedding implementation part is not very clear. Details are insufficient for reimplementation.
* Font sizes in Figure 5 and 6 are too tiny.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * In Lines 166-168, each (s,p,o) combination corresponds to a specific event embedding. From the authors' code, it seems like the authors have tried different embedding strategies, such as encoding s, p and o separately, or encoding (s,p,o) as a whole. What's your choice among these different implementations?
* In formula 7, the conserved local energy is only relevant to two events. If the relative time is conditioned on other events, could this framework handle it?
* I'm interested in how much the model has learnt from statistical priors. Have you tried to compute the averaged relative time for every event pair in the training set and then use such averaged relative time for testing?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for spending valuable time reviewing our manuscript and providing insightful comments. We have improved our paper accordingly and our responses are as below.
**From ‘Weaknesses’**
**Q1**: ‘The event embedding implementation part is not very clear. Details are insufficient for reimplementation.’
**A1**: Original details of model implementation include: the determination of hyperparameters (in Section 4.1) and the training details (in Appendix C.1). We have added justifications of how NE decoding relates to TR detection and TR query (in Section 3.2) in the revised paper. Specifically, $\mathop{\min}\limits _{\tau\in\mathbb{T} _r} g(\tau)$ is computed, which is compared with a global threshold $g _{th}$ to decide whether a potential TR is valid or not (for TR detection). For a valid TR, the $\tau'$ which minimizes $g(\tau), \tau \in \mathbb{T} _r$ is selected as the model output of the relative time (for TR query).
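A minimal sketch of this decision rule (our paraphrase in code; the function name and toy numbers are hypothetical):

```python
import numpy as np

def detect_and_query(g_values, taus, g_th):
    """Given g(τ) evaluated over the candidate set T_r (smaller g means higher TR validity):
    - TR detection: the potential TR is declared valid iff min_τ g(τ) < g_th;
    - TR query: for a valid TR, the minimizing τ' is output as the relative time."""
    i = int(np.argmin(g_values))
    is_valid = bool(g_values[i] < g_th)
    return is_valid, (taus[i] if is_valid else None)

# Toy example: g dips well below the global threshold at τ = 5.
print(detect_and_query([3.1, 0.4, 2.8], taus=[0, 5, 10], g_th=1.0))  # (True, 5)
```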
**Q2**: ‘Font sizes in Figure 5 and 6 are too tiny.’
**A2**: We have taken note of this feedback and have made the necessary changes by enlarging the font sizes in Figure 5 and 6 in the revised paper.
**From ‘Questions’**
**Q1**: ‘From the authors' code, it seems like the authors have tried different embedding strategy, such as encoding s, p and o separately, or encoding (s,p,o) as a whole. What's your choice among these different implementations?’
**A1**: We appreciate the reviewer's careful observation. Our choice of the current form of NE is based on its ability to fit large datasets effectively while still maintaining the efficient learning capability. We have found that encoding s, p, and o separately leads to a higher loss after training convergence. This reduces the capacity of NE for large datasets and subsequently affects its performance in both TR detection and TR query. Making the model smaller while maintaining effectiveness is an area that can be explored in future work.
**Q2**: ‘If the relative time is conditioned on other events, could this framework handle it?’
**A2**: One advantage of NE is exactly its ability to efficiently store large amounts of interlinked TRs. We attribute such a large capacity of NE storage to its Fourier-like representations. Specifically, the score function $f(t;ev)=\sum _{j=1}^d Real(\pmb{u} \circ e^{i \pmb{\omega} t}) _j$ and the decoding function $g(\tau;ev _b,ev _h)=2-2\sum _{j=1}^d Real(\overline{\pmb{u} _b} \circ \pmb{u} _h \circ e^{i \pmb{\omega} \tau}) _j\in [0,4]$ can be viewed as Fourier-like expansions. The global time vector $\pmb{\omega}$ provides the expansion basis, while the event type vectors $\pmb{u}$s store the coefficients for $f(t)$ (revealing event occurrence) and compose $\overline{\pmb{u} _b} \circ \pmb{u} _h$ as the coefficients for $g(\tau)$ (revealing TR validity).
**Q3**: ‘I'm interested in how much the model has learnt from statistical priors. Have you tried to compute the averaged relative time for every event pair in the training set and then use such averaged relative time for testing?’
**A3**: While we have not specifically explored this setting, we have demonstrated the flexibility of NE through a grouped experiment. We have grouped valid TRs based on their golden relative time and showcased NE's performance in TR query. The results in Figure 4 (c) indicate that NE performs consistently well in learning TRs with varying $\tau$s. We have not tested TR detection using grouped $\tau$s, as the golden relative time is not meaningful for invalid TRs.
Once again, we would like to express our gratitude to the reviewer for the valuable feedback, which has helped us further improve our manuscript.
---
Rebuttal 2:
Title: Willingness to answer further questions
Comment: Dear Reviewer bn9B
We thank you for your precious time and constructive comments. As the discussion period will end soon, we are not sure whether our responses have addressed your questions. If you still have any questions about our work, we are more than happy to provide further responses for you. | null | null | null | null |
General Munchausen Reinforcement Learning with Tsallis Kullback-Leibler Divergence | Accept (poster) | Summary: This paper studies the Tsallis regularized MDPs, and proposes a practical algorithm for Tsallis KL divergence based on Munchausen RL. The experiments show that the resulting algorithm MVI($q$) performs notably better than its counterpart MVI.
Strengths: 1. The paper is well-written with sufficient introduction of background
2. Properties of Tsallis policies are sufficiently studied in the paper.
Weaknesses: 1. $\exp_q Q_2$ in Eq.(8) should be removed. In fact, since the permutation is changeable, no weighted average is performed in Eq.(8).
2. There is no clear theoretical justification nor intuition for why Tsallis regularization is beneficial for regularized value iteration
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Theorem 4 seems to suggest that the Tsallis KL regularized policy focuses more on the Q values at early iterations based on the second term in Eq.(7). Is this desirable?
2. Why would MVI($q$) provide notable gains while Tsallis value iteration provides no benefits?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper only considers $q>1$ for Tsallis regularization, but does not explore the case of $q<1$
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Addressing the reviewer’s comments on weaknesses:
1. We thank the reviewer for pointing out the typo $\exp_q Q_2$ in Eq. (8). It should be corrected as $\exp_q (\frac{Q_2}{1 + (1-q)Q_1})$. As the reviewer pointed out, the permutation is changeable, so there exist many ways to expand the term $\exp_q (\sum_{i=1}^{k} Q_i)$; our Eq. (8) serves mainly to illustrate the point that the TKL policy differs from the uniform average of KL. We can also see the difference between TKL and KL policies by inspecting the second term in Eq. (7): this action-value cross-product term further increases the probability of any actions that have had consistently larger values across iterations. This observation agrees with the mode-covering property of Tsallis KL.
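The pseudo-additivity behind this order-dependence can be checked numerically. Below we use one common convention for the q-logarithm and q-exponential (Tsallis, 1998); the paper's exact parameterization may differ, so treat this as a sketch of the identity $\ln_q(xy) = \ln_q x + \ln_q y + (1-q)\ln_q x \ln_q y$:

```python
import numpy as np

def log_q(x, q):
    """q-logarithm: ln_q(x) = (x^{1-q} - 1) / (1 - q); recovers ln(x) as q -> 1."""
    return np.log(x) if q == 1.0 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    """q-exponential, inverse of ln_q on its domain; recovers e^x as q -> 1."""
    return np.exp(x) if q == 1.0 else max(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

x, y, q = 2.0, 3.0, 2.0
# Pseudo-additivity: ln_q(xy) = ln_q(x) + ln_q(y) + (1 - q) ln_q(x) ln_q(y).
lhs = log_q(x * y, q)
rhs = log_q(x, q) + log_q(y, q) + (1.0 - q) * log_q(x, q) * log_q(y, q)
# Plain additivity (which the standard log enjoys) fails for q != 1,
# so exp_q(sum_i Q_i) cannot be factored term by term into a clean product.
```

This is exactly why expanding $\exp_q(\sum_i Q_i)$ produces cross-product terms rather than a plain uniform average.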
2. It is true that we don’t have a theoretical justification for the benefits of Tsallis KL, but we would like to point out that this was the case for Shannon entropy and Tsallis entropy methods as well, where only empirical evidence was provided. Our results do provide intuition and empirical evidence showing when Tsallis KL may be preferable.
Specifically, at Line 84 we explained the superiority of KL regularization in terms of a uniform average over the history, and in Eq. (7) and (8) we explained how Tsallis KL regularization inherits this uniform average plus an additional cross-product term boosting actions that have had consistently large values across iterations. This average is not available to Tsallis regularized value iteration (Tsallis-VI). On the other hand, compared to MVI, Tsallis KL regularization provides an additional degree of freedom in truncating the action support and therefore in better exploiting high-probability actions, which is not available to Shannon entropy/KL divergence regularized methods like MVI.
Answering the reviewer’s questions:
1. We would like to point out that the TKL regularized policy does not focus more on Q values at early iterations. Though policies at iteration $k+1$ can only make use of action values up to $k$, all action values are equally weighted. Instead, as we explained above, the TKL policy does boost actions with consistently large values across iterations.
2. We hypothesize that it is for the same reason that KL divergence is usually better than Shannon entropy: the KL results in a policy that averages over the history of action values, whereas Shannon entropy uses only the most recent action values (see [Vieillard et al., 2020] *Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning*). MVI(q) has a parameter $\alpha$ that weights the KL regularization; for $\alpha = 0$, no KL regularization is used and we only have entropy regularization. In other words, for $\alpha = 0$, we do not get averaging over the history of action values. Our results for $q > 1$, therefore, parallel what is often observed for $q = 1$ (namely for KL divergence and Shannon entropy). Tsallis-VI uses Tsallis entropy only and corresponds to $\alpha=0$. MVI(q) with $\alpha > 0$ averages over the history of action value estimates to smooth out errors.
Addressing the reviewer’s concern of not investigating $q<1$:
We did not investigate $q<0$ since the function $-\ln_q x$ would be concave and no longer a member of $f$-divergence.
We tested some values from $q \in (0,1)$, however, they all showed bad performance. Since Shannon entropy corresponds to $q=1$ (full support) and sparsemax entropy to $q=2$, no regularization to $q=\infty$, we have decided to focus on $q>1$.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for addressing my questions. I see that my question 1 relates to my misunderstanding of the meaning of $\sum_{i_1=1<\dots<i_j}^k$--I thought $i_1$ was always equal to $1$. I got the meaning after checking [T. Yamano. 2002](Some properties of q-logarithm and q-exponential functions in tsallis statistics), which wrote it clearer as $\sum_{1=i_1<\dots<i_j}^k$. Regarding the other questions addressed in response, I decide to raise my rating from 5 to 6. | Summary: This paper introduces a principle way of generalizing KL-divergence regularized RL into Tsallis KL regularized RL. There have been studies that replaced Shannon entropy with Tsallis entropy to obtain sparsemax policies, but they had limited success. On the other hand, this paper extends Munchausen value iteration to get MVI(q). The extension is not straightforward due to the pseudo-additivity of $\ln_q$, but the paper manages to do it with approximation, and empirically show that the apprixmation error remains negligible for small $q$. In the experiments, the paper achieves large performance gain across 35 Atari games, where the authors conjecture that the improvement is due to the change in how MVI(q) do the exploration.
Strengths: - Clear theoretical foundations for establishing the MVI(q) algorithm
- The paper is written clearly and easy to follow
- Proposed algorithm shows notable performance improvement over existing baselines
Weaknesses: - The main characteristics/advantages of the proposed algorithm are not well explained. While the authors assume that the Tsallis policy can be better at exploitation, allowing the policy to exploit high-probability actions, it is also possible to control the regularization coefficient (i.e., temperature) to get a similar effect, and it is hard to know the difference between these two options from the paper. The paper overall explains the derivation of the algorithm very well, but it is hard to understand how it works and in what aspects it is better. The paper may benefit from including case scenarios on toy problems where MVI(q) works better than other algorithms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The paper only explains that MVI(q) does less exploration compared to MVI, and that it explores better compared to Tsallis-VI. What happens if we control the regularization coefficient to negate these effects (we can try adding Shannon entropy regularization on top of Tsallis-VI if it concentrates too quickly)? If these effects cannot be negated, in what aspects does MVI(q) explore/exploit better than the other algorithms?
- How sensitive is MVI(q) to the regularization coefficient $\alpha$ compared to existing algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Entropy**: Tsallis entropy truncates the action support but Shannon entropy does not. For softmax policies induced by Shannon entropy or KL divergence to ignore some actions, the temperature would have to be set to infinity, which is impossible in practice. On the other hand, the entropic index $q>1$ offers this flexibility. Moreover, for Tsallis-VI, adding Shannon entropy on top of Tsallis entropy would still not provide the average over history, and would additionally destroy the sparsity, as can be seen from existing papers characterizing necessary conditions for action sparsity, e.g., Theorem 2 of *A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning* [Li et al., 2019]. In short, MVI(q) is capable of exploiting truncation of the action support (controlled by $q$, not available to MVI, i.e., $q=1$), as well as averaging over the history of action values (controlled by $\alpha$, not available to Tsallis-VI).
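To make the truncation point concrete: for $q=2$ the Tsallis policy reduces to sparsemax (Martins & Astudillo, 2016), which assigns exactly zero probability to low-value actions, something no finite-temperature softmax can do. A standard sparsemax sketch (illustrative; not the paper's exact MVI(q) policy):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the simplex: the q = 2 Tsallis (sparsemax) policy."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv           # actions kept in the support
    k_star = k[support][-1]
    threshold = (cssv[k_star - 1] - 1) / k_star
    return np.maximum(z - threshold, 0.0)

# The lowest-scoring action gets exactly zero probability:
print(sparsemax([1.0, 0.8, 0.1]))
```

In contrast, a softmax over the same scores keeps all three actions at strictly positive probability for any finite temperature, which is the flexibility gap described above.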
**Sensitivity**: We believe MVI(q) is similar in terms of sensitivity to $\alpha$. In Appendix Table 2 we provided hyperparameters of MVI(q) and MVI, which were respectively fine-tuned. They shared the same $\alpha=0.9$.
As for the comment about not including the limitations of our work: we do try to mention limitations throughout. In Section 3.3, we acknowledge that we as yet do not have a strong theoretical motivation for the policy form under Tsallis KL, and instead rely primarily on experiments to motivate it. The limitations of our approximation to obtain Tsallis KL regularization are discussed in lines 230-239. However, it could be useful for the reader to have a paragraph dedicated to summarizing the limitations of the work at the end of the paper. With the additional space for the final paper, we can add such a paragraph to the conclusion. | Summary: The paper introduces the idea of Tsallis KL divergence for regularizing RL algorithms. The authors first introduce Tsallis entropy based regularization formally whilst intuiting how the q-exponential and q-logarithm, as defined by Tsallis (1998), have a truncation effect on the divergence. They formalise this idea of truncation in Theorem 1 and also provide a provably tractable approximation of the *threshold function*. The authors then go on to very interestingly analyze the Tsallis KL regularized policy and assert that it "averages over the history of value estimates". They finally introduce an algorithm similar to Munchausen Value Iteration (Vieillard et al., 2020b) which uses Tsallis KL regularization instead of vanilla KL. Their empirical results show significant gains in many environments.
Strengths: The paper is largely well written. The manner in which the authors build on previous work in addition to explaining various concepts so well, section after section, is impressive. I liked how their theories and their empirical applications associated with Tsallis KL are explained via illustrations (figures 1,2,3). The empirical results suggest a strong improvement over MVI with numerous Atari environments. Figure 5 (left) shows that MVI(q) has significant improvements over MVI in various environments.
Weaknesses: I see no major weaknesses. I have a couple of comments:
1. For section 3.3: is this average of histories a feature of Tsallis KL or is it to be expected of other KL regularization as well? Please let me know if this is mentioned somewhere (maybe line 183-184?). I would be interested in knowing how come this is unique for Tsallis KL case.
2. Section 4.2: you empirically choose to "omit the residual term" and, judging from Section 5, this works well in practice. Despite this, I would be interested in seeing the return on the CartPole environment as iterations increase. How is the residual calculated or estimated for Figure 3? Please feel free to point me to the appendix or elsewhere in case I might have missed this.
Minor issues and fixes:
1. Line 77-78: citep for Sason and Verdu [2016]
2. Line 102: the use of the constant $p$ is not defined or explained while being introduced
3. How many seeds are the results of Figures 4 and 5 averaged over?
4. I would expect some discussion of the computational overhead especially in terms of GPU compute time MVI(q) adds to the algorithm.
5. For the sake of completeness, it would be beneficial to report all the hyper-params used by you even if you are porting them from past work. Additionally a detailed algorithm would also be a helpful addition in the Appendix.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Please see the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I do not see any major limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Averaging**: Averaging is a feature of other KL regularization as well. The primary difference is the form of averaging. KL regularization induces a uniform average of the history, as can be seen from line 84. On the other hand, Tsallis KL inherits this uniform average plus an additional cross-product term between action values, which boosts the probability of actions that have had consistently large values across iterations.
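For concreteness (an illustration of our own, assuming the standard KL-regularized update $\pi_{k+1} \propto \pi_k \exp(Q_k/\tau)$ from a uniform prior): unrolling that recursion yields a softmax of the sum of past action values, i.e., the uniform average of the history referred to above.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
tau = 1.0
qs = rng.normal(size=(5, 4))  # 5 iterations of action values, 4 actions

# iterative KL-regularized update: pi_{k+1} ∝ pi_k * exp(Q_k / tau)
pi = np.full(4, 0.25)
for q in qs:
    pi = pi * np.exp(q / tau)
    pi /= pi.sum()

# closed form: pi_{k+1} ∝ exp(sum_i Q_i / tau), a uniform average of history
pi_closed = softmax(qs.sum(axis=0) / tau)
print(np.allclose(pi, pi_closed))  # True
```

The Tsallis KL case replaces the exponential with the $q$-exponential, which is where the extra cross-product term between action values enters.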
**Residual**: The residual is given by line 230. We apologize for not including the details of computing the residual. To compute it, we store a copy of the target network for $Q_{k-1}$. The policies $\pi_{k+1}$ and $\pi_k$ are then computed as shown in Algorithm 1. To prevent divide-by-zero issues, we clip the policy to the range $[0.01, 0.99]$.
For the return on CartPole-v1, please refer to Figure 2 in the attached PDF. M-VI $S_2(\pi)$ denotes simply replacing the standard logarithm in MVI by the $q$-log, which performed poorly due to the pseudo-additivity of the $q$-log. We will include the residual computation procedure and Figure 2 in the final version of the paper.
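For readers unfamiliar with the $q$-logarithm, one common convention (an assumption on our part; the paper may use a slightly different parametrisation) is $\ln_q x = (x^{1-q}-1)/(1-q)$, which recovers $\ln x$ as $q \to 1$. The pseudo-additivity mentioned above, $\ln_q(xy) = \ln_q x + \ln_q y + (1-q)\ln_q x \ln_q y$, can be checked numerically:

```python
import math

def q_log(x: float, q: float) -> float:
    """q-logarithm; reduces to the natural logarithm as q -> 1."""
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

q, x, y = 2.0, 0.3, 0.7
lhs = q_log(x * y, q)
rhs = q_log(x, q) + q_log(y, q) + (1.0 - q) * q_log(x, q) * q_log(y, q)
print(abs(lhs - rhs))                        # ~0: pseudo-additivity holds
print(abs(q_log(x, q) + q_log(y, q) - lhs))  # not ~0: plain additivity fails
```

This is why naively swapping $\ln$ for $\ln_q$ inside MVI breaks: the $q$-log of a product of policies no longer decomposes into a sum of $q$-logs.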
To answer the questions listed under minor issues and fixes:
- Atari results were averaged over 3 seeds, and the error bars denote 95% confidence intervals. We apologize that we did not make this information clear, and will include it in the final version of the paper.
- The computational overhead of MVI(q) is approximately equal to that of MVI, with only an additional sort of a list whose length is the number of actions, which is small for the environments we considered (please refer to the code or Algorithm 1 on page 16).
- A detailed algorithm for implementing MVI(q) is provided in page 16 of the appendix.
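The "additional sorting" mentioned above is consistent with computing a sparsemax-style truncated policy (the policy Tsallis entropy with $q=2$ induces). As a purely illustrative sketch, not necessarily the authors' exact implementation (Algorithm 1 in the paper is authoritative), here is a generic sparsemax whose only super-linear cost is one sort over the action values:

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of z
    onto the probability simplex. One O(A log A) sort, then O(A) work."""
    z_sorted = np.sort(z)[::-1]               # descending
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum       # prefix of supported actions
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z   # threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax(np.array([2.0, 1.0, -1.0]))    # sparse: mass on one action
p2 = sparsemax(np.array([0.1, 0.0, -0.1]))   # dense: all actions supported
```

For the small action sets in the experiments (e.g., Atari's up-to-18 actions), this sort is negligible next to the network forward passes, matching the rebuttal's claim.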
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: Thank you for your response! I will stick to my score and I hope the authors can make the addition about the residuals in the revised version. Thank you for clarifying the averaging statement and for pointing me to the detailed algorithm.
I would encourage the authors to increase the number of seeds for future versions to make the results more convincing. | Summary: Preface: this paper was assigned to me as an emergency review paper, so I had less time to do an in-depth review.
The paper extends Munchausen Reinforcement Learning by replacing the commonly used KL divergence with a more generalised form, i.e., the Tsallis KL, and empirically demonstrates its benefits in various simulation experiments. The authors provide theoretical proofs to motivate their intuitive extension. Results show that adding the Tsallis KL can bring impressive improvements over the simplest M-RL algorithm on game simulations.
=======post-rebuttal=========
Based on the clarifications and new experimental results, I raised my score to 6.
Strengths: The paper is generally well-motivated and well-written at the beginning, with a clear introduction to new concepts such as the Tsallis KL divergence and an intuitive visualisation of its effect, as in Figure 2. It extends the standard KL-based MVI to a more generalised form, MVI(q), supported by a theoretical proof of the feasibility of using the Tsallis KL instead of the KL. Further discussion explores how the Tsallis KL reweights the current policy, in analogy to other well-recognised algorithms.
Weaknesses: 1. Contribution. The main contribution of this paper is the extension of the Tsallis KL to M-RL, together with the necessary mathematical evidence of why it works, which I sincerely appreciate. However, the authors fail to address why reweighting the policy distribution by the Tsallis q matters under the hood. From a critical point of view, this work extends a work established 3 years ago by replacing the policy regularisation function with a more variable form in which a new hyper-parameter is introduced. Although the paper is self-contained, the contribution to the RL community seems minor.
2. Potentially misleading and inadequate experimental results. The experiments only compare MVI(q) (i.e., the proposed method) with MVI and Tsallis-VI. It would be appreciated if the authors could include more baselines, especially those that consider policy regularisation, such as SAC, TRPO, etc. Moreover, the experiments were not sufficiently standardized. The authors may consider comparing their method on commonly accepted benchmarks, such as Atari-57 or MuJoCo. I also have some concern about Figure 5. The performance is obtained by computing "Improvement over Tsallis-VI on Atari environments, normalized with Tsallis-VI scores", which, in an extreme case, can look exaggerated when the Tsallis-VI scores are small enough. An easy fix is to compare MVI(q) to an existing benchmark.
3. Lack of deeper explanation of experiments. It is not yet clear to me how MVI(q) boosts performance. Figure 3 tries to discuss this but is still insufficient. The authors bring up many guesses in the paper, such as line 179 and line 272, but they do not dig into any of them.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Will the choice of q affect the performance significantly? Is it possible that some q choices are more suitable for some particular environments? If so, can you provide an empirical study of how to choose q adapting to the characteristic of task or environment?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate you taking on an emergency review, and understand you did not have as much time. Your comments are nonetheless appreciated.
We would like to address the overall goal and contribution of the paper. It is not yet certain if Tsallis KL regularization will prove to be an effective choice in RL, nor when it might be preferable to the standard KL regularization developed for max-ent RL. However, we cannot even begin to assess the utility of this regularization until we develop approaches to use Tsallis KL. The purpose of this paper is to provide such a strategy and to begin the investigation into its properties, knowing that it will take several papers to properly investigate this line of work. The original max-ent paper also did not fully establish the utility of entropy regularization; rather, this direction has become an important line of inquiry and we are starting to better understand when it is and is not useful.
This paper 1) introduces Tsallis KL regularization to RL, a general class of regularizers that also includes regularizers like the alpha-divergence (see e.g. Appendix A of [Li and Turner, 2016] or [Wang et al., 2018, Belousov and Peters, 2019], mentioned in the footnote on page 4.) 2) highlights theoretical properties of this regularizer including the form of the resulting policy, 3) provides some motivation for why this regularizer might be useful, 4) provides a practical, approximate implementation of the idea (MVI(q)) and 5) provides empirical evidence for the potential utility of generalizing q > 1. We are not setting out to get state-of-the-art, or show impressive performance on challenging benchmarks. We want to understand this regularizer. In that sense, we respectfully disagree that we needed to do benchmarks like Mujoco and compare to a different class of methods, like actor-critic methods (e.g, SAC). More on this below, explaining why we provide a normalized score to Tsallis-VI.
Extending MVI to MVI(q) is by no means trivial. In fact, due to the properties of the Tsallis KL, it is downright difficult. We had to make approximations that took time to develop. Our extension does become the original MVI when $q=1$ (as we did intentionally), but MVI(q) for $q > 1$ is a totally new algorithm. Arguably, others might actually come up with better ways to approximate Tsallis KL regularization, because, as we discuss in the work, we had to make several approximation steps. We believe these to be reasonable, with some conceptual and empirical motivation, but they are approximations.
We compare to Tsallis-VI because we are asking: given the same system/architecture, what is the impact of adding this new Tsallis KL regularization? As mentioned above, we are not trying to outperform SAC or be state-of-the-art. In that sense, the performance is not exaggerated, because we are not making a bold claim that it provides this level of improvement over all approaches. We are simply asking how much improvement is obtained when incorporating Tsallis KL instead of just Tsallis entropy regularization; the answer is that it can give significant improvement, for this agent.
Finally, for your question about the choice of $q$, it can affect performance significantly. For many settings, $q = 1$ and $q = 2$ perform reasonably well, where $q = 1$ corresponds to the standard Shannon entropy/KL divergence and $q = 2$ is what was previously used for Tsallis entropy to get the sparsemax. One conclusion from this work is that shifting from $q = 1$ to $q = 2$ can often provide performance improvements, without even considering the other possible values of $q$. In other words, a reasonable choice for this generalized MVI is to use $q = 2$. However, we did find other values of $q$ could also be effective; see Figure 1 in the uploaded PDF, on Acrobot-v1, with all $q$ independently fine-tuned. Looking at Eq. (3), different values of $q$ control not only the truncation but also the root, and hence the smoothing effect. We do not yet know why certain $q$ might be better than others, and how it relates to properties of the environment; this is absolutely one of the important next steps for this work.
Rebuttal: Included figures for rebuttal:
- Figure 1: MVI(q) on Acrobot-v1 across different $q$
- Figure 2: MVI(q) on CartPole-v1
Pdf: /pdf/665249bf686802b415bd277a20f563b81eecb603.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Beyond Normal: On the Evaluation of Mutual Information Estimators | Accept (poster) | Summary: The authors propose a method for creating expressive distributions via injective mappings that maintain their original MI. They state this in Theorem 2.1 and prove it in the Appendix. In addition to this, the authors benchmark a variety of estimators for MI on a set of tasks, including high-dimensional, long-tailed distributions, sparse interactions, and high mutual information. This information is contained in Sections 3-4, where they describe each task and critique each estimator for the tasks. From these experiments, the authors provide guidelines for practitioners on how to choose an estimator in Section 6.
Strengths: The paper is clear, provides many novel benchmarking tasks, and is easily verifiable. The results are reproducible as the authors have shared their code and documented their experimental parameters.
Weaknesses: 1. The author does a poor job of motivating each data setting and explaining why each one is important. It would be beneficial to provide examples of domains where long-tail distributions are common, such as insurance, and elaborate on the significance of the other data settings as well.
2. I believe the last row of Figure 2, "True MI," could be better highlighted as it was difficult to discern that it was the point of comparison.
3. The main contributions of the paper seem relatively minor, as the authors are primarily considering more data settings than previous work when comparing MI estimators. However, addressing the first point could help alleviate this concern.
4. The main theoretical result, Theorem 2.1, appears to have already been demonstrated in the appendix of the following paper: https://arxiv.org/pdf/cond-mat/0305641.pdf.
5. The authors seem to have switched \citep for \cite in their paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why not utilize the result of Theorem 2.1 and apply it to Normalizing Flows? I believe this combination would enhance the paper's distinctiveness.
How is Theorem 2.1 different from that of the result in the appendix of https://arxiv.org/pdf/cond-mat/0305641.pdf?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
> The main theoretical result, Theorem 2.1, appears to have already been demonstrated in the appendix of the following paper: Kraskov et al. (2004) [...] How is Theorem 2.1 different from that of the result in the appendix of Kraskov et al. (2004)?
We believe that Theorem 2.1 has been known in the community and we do not list it as a contribution. We use it to define the distributions used in our benchmark, which is the main content of the paper. However, we could not find an appropriate version of this theorem with a formal proof which we could reference, hence for completeness, we provide our own proof in the Appendix.
The highly influential paper of Kraskov et al. (2004) covers the case where (a) applied mappings are diffeomorphisms and (b) all measures involved have probability density functions. The proof we give in the Appendix applies more generally (and this is needed e.g., for the "Swiss roll" distribution which uses topological embeddings or for the half-cube mapping which is not a diffeomorphism at $0$).
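As a quick numerical illustration of the theorem (our addition, not from the paper; scikit-learn's `mutual_info_regression` is a Kraskov-style kNN estimator), the estimate is essentially unchanged when both variables are pushed through injective maps:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
rho, n = 0.8, 5000
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

true_mi = -0.5 * np.log(1.0 - rho ** 2)  # analytic MI for bivariate normal
mi_raw = mutual_info_regression(x[:, None], y, random_state=0)[0]
# exp and cube are injective (strictly monotone), so MI is preserved
mi_tf = mutual_info_regression(np.exp(x)[:, None], y ** 3, random_state=0)[0]
print(true_mi, mi_raw, mi_tf)  # the three values should be close
```

This is exactly the mechanism the benchmark uses to obtain expressive distributions with known ground-truth MI.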
> The author does a poor job of motivating each data setting and explaining why each one is important. It would be beneficial to provide examples of domains where long-tail distributions are common, such as insurance, and elaborate on the significance of the other data settings as well.
The main motivation for selecting the distributions was to evaluate and compare existing estimators. We would like to stress that the space of all possible distributions is vast, and until now, was left almost entirely unexplored except for the simplest case of a normal distribution. This is particularly striking for mutual information, which is commonly used precisely for its applicability to any distribution and invariance to diffeomorphisms.
We approach this issue by casting a wide net of tasks, paying particular attention to the following:
1. **Dimensionality**. High-dimensional datasets are becoming more common, particularly in machine learning and systems biology.
2. **Sparsity**. While the data might be high-dimensional, the effective dimension may be much smaller (e.g., only a few genes out of thousands convey information about the amount of a particular protein).
3. **High MI**. Estimating high MI is known to be difficult. However, it is usually something we might know approximately in advance -- if there are 4000 image classes, MI between image class and representation is at most 12 bits. An additional interesting observation is that CCA performs very well, suggesting that in this scenario incorporating prior information is crucial.
4. **Long tails**. Since the Student distribution has heavier tails than the multivariate normal distribution, this was a natural choice. An interesting conclusion is that even after "removing the tails" (see Appendix E), these distributions remain difficult. Thus, we have low- and moderate-dimensional distributions that are unimodal and without heavy tails, yet still challenging to estimate.
5. **Robustness to diffeomorphisms**. Mutual information is often chosen because it is theoretically invariant to diffeomorphisms. We wanted to challenge this invariance when only a finite sample is available.
However, we agree that including real-world examples illustrating these motivations will improve the manuscript.
> The main contributions of the paper seem relatively minor, as the authors are primarily considering more data settings than previous work when comparing MI estimators. However, addressing the first point could help alleviate this concern.
This is the first benchmark which can be used to compare different mutual information estimators in a systematic and reproducible manner. As mentioned above, it is critical to evaluate mutual information estimators on non-normal distributions. Our benchmark contains a diverse set of 40 distributions, addressing problems like sparsity of interactions, long tails and invariances to selected mappings.
As such, we believe this manuscript will be useful for the community for two reasons:
- For the mutual information researchers, it provides a standard benchmark which can be used to conveniently test their ideas and understand their strengths and limitations. To increase the chances that it is adopted by the community, we made it cross-platform, used well-defined interfaces in the API, and implemented a diverse set of baseline estimators.
- For researchers willing to use mutual information in selected problems ranging from machine learning to natural sciences (e.g., Uda (2020), Young et al. (2023)), a set of points to be aware of when a reliable estimate is needed.
> I believe the last row of Figure 2, “True MI,” could be better highlighted as it was difficult to discern that it was the point of comparison. [...] The authors seem to have switched \citep for \cite in their paper.
Thank you for these suggestions: we have improved the figure accordingly and fixed inconsistency with citations.
> Why not utilize the result of Theorem 2.1 and apply it to Normalizing Flows? I believe this combination would enhance the paper’s distinctiveness.
Indeed, using normalizing flows is an interesting idea which can be used as:
1. Transformations to define more expressive distributions.
2. Another preprocessing technique (similar to the ones studied in Appendix E).
We had considered both options, but eventually, we have decided to use interpretable alternatives: proposed mappings were able to capture several interesting phenomena already, and we were concerned that using normalizing flows may distort the distributions in unexpected manners, making it difficult to separate failure modes of different estimators.
We hope that our response provided an additional perspective on the significance of this work and that the introduced changes have increased the manuscript's clarity. Hence, we would like to kindly ask the Reviewer to consider raising the scores.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Theorem 2.1:
I appreciate your effort in providing further clarification. Nevertheless, I maintain my belief that the novelty of the theorem might still appear ambiguous to readers. To enhance its clarity, I suggest considering referencing the proof by Kraskov et al. (2004) or another similar result. Despite its familiarity within the community, it's important to remember that individuals from outside the community might perceive this as novel.
Motivating data settings:
I commend the authors for their thorough approach in encompassing a diverse range of distributions. Thus making their benchmark applicable to a large set of domains..
My primary reason for the initial low score pertains to the treatment of Theorem 2.1. If the authors could furnish additional context surrounding Theorem 2.1, I would happily raise my evaluation. I appreciate the authors for their considerate response and answers to all my questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion and encouraging words on the diverse range of distributions covered!
We have now understood your argument regarding Theorem 2.1 and we fully agree that adding more context to it will significantly increase manuscript clarity. We will add the following paragraph to Section 2:
> Theorem 2.1 is a well-known property of mutual information, formulated in various versions. For example, Kraskov et al. (2004) consider a case in which $f$ and $g$ are diffeomorphisms and all measures have probability density functions. For the sake of completeness, we include a proof of Theorem 2.1 (covering singular measures and any continuous injections) in Appendix A.
Please, let us know if you have any further suggestions. | Summary: This paper identifies a clear problem with many mutual information estimation benchmarks: most of them focus on simple (normal) distributions. The authors present a new set of forty (!) tasks that contain ground-truth informations, which can be constructed by noting that only injectivity is needed for an information-preserving transformation. The authors identify four distinct challenges: interaction sparsity, long tails, invariance, and high information values. Several conclusions about existing estimation algorithms are then made regarding the extensive analysis.
Strengths: - I enjoyed reading this paper. It clearly defines a goal and identifies key problems with existing approaches.
- The paper presents mathematical background in a precise and effective way.
- The structure of the paper is clear, using figures for clarifications where needed.
- The paper is well-written with clear sentence structures and no grammatical or spelling errors.
- The analysis is thorough, presenting forty distinct MI estimation tasks.
- I enjoyed how instead of simply presenting the results, the authors dug deeper and identified four distinct challenges for MI estimators.
Weaknesses: - One could say that the paper lacks slightly in terms of originality and contributions.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Out of curiosity: could modern invariant neural network architectures be used to obtain MI estimates invariant to diffeomorphisms?
### Conclusions
While the overall contribution could be limited in terms of a model development sense, I think the paper identifies serious issues with modern MI estimation benchmarks. The paper not only provides new benchmarks that address these issues but also makes an effort to identify what aspects of MI estimation can make the problem hard. I foresee much new research originating from the identification of these aspects, where future papers focus in on them and propose methodologies that overcome these challenges. On top of that, the paper is very well written. Hence, I would recommend acceptance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors clearly discuss the limitations of their study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive assessment and encouraging words.
> Out of curiosity: could modern invariant neural network architectures be used to obtain MI estimates invariant to diffeomorphisms?
Thank you for your insightful question! Sadly, we cannot achieve full invariance to diffeomorphisms for finite samples since an arbitrary diffeomorphism can transform any set of $n$ points to any other set of $n$ points in $\mathbb R^k$ when $k\ge 2$. Nonetheless, we can hope for invariance (or robustness) to a subset of diffeomorphisms. We hypothesise that such invariant neural networks could require smaller sample sizes due to their useful inductive biases. However, we have not experimented with invariant neural networks ourselves, so we are unable to support this hypothesis with data.
We think that there are several solutions how this problem can be approached: either by encoding invariance to, e.g., the group of rigid motions by appropriate layers (in this case based on the representation theory of the orthogonal group) or using training schemes increasing robustness to diffeomorphisms, akin to [1, 2]. Our benchmark can be used to generate data sets for the latter approach, however thorough investigation of this idea is beyond the scope of this work.
[1] Benton, Gregory, et al. "Learning invariances in neural networks from training data." Advances in neural information processing systems 33 (2020): 17605-17616.
[2] Petrini, Leonardo, et al. "Relative stability toward diffeomorphisms indicates performance in deep nets." Advances in Neural Information Processing Systems 34 (2021): 8727-8739.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the detailed response. Your elaboration on the potential applications and limitations of invariant neural networks in relation to diffeomorphisms is enlightening. Thanks for the references. | Summary: A test benchmark for the evaluation of mutual information estimators is established and many different estimators compared. The test cases contain student-t and normal distributionas and their injective transformations. Difficult cases are discussed and evaluated in more detail.
Strengths: The code is reproducible and thus might be used for other estimators in the future
The paper is well-written and easy to understand.
It is important to make the community aware that evaluation of MI estimators on Gaussian distributions is pointless, as these only depend on the covariance structure, so the big advantage of MI, that it goes beyond the linear dependencies, is ignored. The paper makes a strong point here by including a simple covariance estimator as well.
The results on heavy tails are particularly interesting.
Weaknesses: The choice of the distributions used is not sufficiently argued. In particular, it is known that no MI estimator can evaluate MI correctly on arbitrary distributions. Only with a restriction to a class of probability distributions (e.g., probability density functions with Lipschitz constraints) is there hope that estimation works. It is thus quite pointless to evaluate MI estimators on what seems like an "educated guess" of diverse distributions.
The authors seem not to be aware of the huge theoretical background of MI, e.g., that arbitrary measurable injective mappings do not change MI, and thus their only theorem, Theorem 2.1, is well-known in a much more general setting. This can be argued by the data processing inequality in two directions (X-Y-f(Y) and X-f(Y)-Y are both Markov chains) or directly via the definition of MI through countable partitions, which do not change if we use a measurable injective mapping.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What was the reason for choosing this specific set of distributions?
What is "pointwise MI"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors state that there are more interesting cases and they only cover transforms of normal and student-t distributions. They also mention that prior information might be incorporated.
However, as said above it is known that an MI estimator can be fooled arbitrarily (giving any value for the MI) if one can choose the distribution freely. Thus, prior information is also included in the test cases here it is just not mentioned explicitly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed comments.
> The choice of the distribution used is not sufficiently argued. In particular, it is known that no MI estimator can evaluate MI correctly on arbitrary distributions. (...)
Indeed, one version of a no-free-lunch theorem for MI estimation follows from the fact that for $n\ge 1$ and $k\ge 2$ any set of $n$ points in $\mathbb R^k$ can be mapped to any other set of $n$ points by a diffeomorphism (if $M$ is any connected smooth manifold of dimension at least $2$, the group $\mathrm{Diff}(M)$ acts $n$-transitively on it). It is thus clear that the only truly diffeomorphism-invariant estimators have to be constant functions (for a given number of points).
Nonetheless, invariance to diffeomorphisms is precisely the reason why MI is useful in practice. The theorem above implies that all estimators have to break at some point. However, while no estimator can work on all distributions, different estimators can work better on certain types of distributions. From a practical standpoint, it is interesting whether certain distributions which intuitively seem "reasonable" are difficult for estimators (e.g., even low-dimensional Student distributions are challenging for most estimators), as well as which estimators are particularly suitable for which types of problems (e.g., neural estimators are good at solving problems involving sparse interactions).
The issue of which distributions are "reasonable" is indeed vague. Certainly, some distributions are "unreasonable" (e.g., a complicated embedding into a 1000-dimensional space), and we do not expect any estimator to solve them. In our study, we avoid such uninformative problems by excluding tasks which no estimator was able to solve. We also include visualisations of the considered distributions to allow for visual inspection (Appendix F).
> What was the reason for choosing this specific set of distributions?
As explained above, the goal was to understand the limits and advantages of different estimators. To this end, we decided to focus on relevant phenomena (e.g., robustness to "sufficiently nice" diffeomorphisms, sparsity) by designing a set of transformations wide enough to (a) understand the mentioned issues and (b) construct a standardised benchmark, which can be used to diagnose strengths and weaknesses of new estimators.
For the motivations of individual phenomena, see the General Response.
> They also mention that prior information might be incorporated. However, as said above it is known that an MI estimator can be fooled arbitrarily (giving any value for the MI) if one can choose the distribution freely. Thus, prior information is also included in the test cases here it is just not mentioned explicitly.
Yes, since no estimator can work on all distributions, each estimator can be thought of as having an implicit bias towards distributions it can handle well. For example, we show that the KSG estimator is not competitive with neural approaches when interactions are sparse. Thus, if we believe that the distribution we are analyzing has sparse interactions, we should use a neural approach. We argue that using (and developing) estimators with explicitly known assumptions and biases could result in significant performance gains. We demonstrate this by implementing CCA (which uses very strong prior information), and showing that it is an excellent choice when the distribution matches assumptions.
We think the term "prior information" in the manuscript may have been misleading and not convey the above meaning. We will clarify it in the manuscript.
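To make the CCA point concrete: for jointly Gaussian variables, mutual information has the closed form $I(X;Y) = -\frac{1}{2}\sum_i \log(1-\rho_i^2)$, where $\rho_i$ are the canonical correlations. The following numpy sketch of such a CCA-based estimator is an illustration of the idea only, not necessarily the implementation used in our benchmark:

```python
import numpy as np

def cca_mi(x, y):
    """MI estimate (nats) under a joint-Gaussian assumption:
    I(X; Y) = -1/2 * sum_i log(1 - rho_i^2), with rho_i the
    canonical correlations between X and Y."""
    n = x.shape[0]
    xc, yc = x - x.mean(0), y - y.mean(0)
    sxx, syy, sxy = xc.T @ xc / n, yc.T @ yc / n, xc.T @ yc / n

    def inv_sqrt(m):  # symmetric inverse square root
        w, v = np.linalg.eigh(m)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    # canonical correlations = singular values of Sxx^{-1/2} Sxy Syy^{-1/2}
    rho = np.linalg.svd(inv_sqrt(sxx) @ sxy @ inv_sqrt(syy), compute_uv=False)
    rho = np.clip(rho, 0.0, 1.0 - 1e-12)
    return -0.5 * np.sum(np.log(1.0 - rho ** 2))

rng = np.random.default_rng(0)
r = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=100_000)
estimate = cca_mi(xy[:, :1], xy[:, 1:])
exact = -0.5 * np.log(1.0 - r ** 2)  # ground-truth MI of the Gaussian pair
```

When the Gaussian assumption holds, this estimator is both fast and accurate; when it does not, the bias can be arbitrary, which is exactly the trade-off between strong prior information and generality discussed above.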
> The authors seem to not be aware of the huge theoretic background of MI, e.g., that arbitrary measurable injective mappings do not change MI and thus their only Theorem 2.1 is well-known in a much more general setting. This can be argued by the data processing inequality in two directions (X-Y-f(Y) and X-f(Y)-Y are both Markov chains) or directly the definition of MI via countable partitions that do not change if we use a measurable injective mapping.
We believe that Theorem 2.1 has been known in the community for a long time and therefore we did not list it as a contribution. In spite of our efforts, we were not able to find a reference with a formal proof of the theorem which covers singular measures, infinite MI, and topological embeddings. Since we use this result extensively, we supplied an appropriate version in the appendix. We would be grateful if the Reviewer could suggest a reference covering this (or a more general) result, so that we can cite it. The proof provided by Kraskov et al. (2004), for example, does not cover the “Swiss roll” distribution, which uses a topological embedding, or the half-cube mapping, which is not a diffeomorphism at 0.
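The empirical content of the invariance that Theorem 2.1 formalises can be checked numerically: a kNN-based estimate of MI should be (approximately) unchanged when one variable is passed through an injective map. The sketch below uses a minimal KSG (Kraskov et al., 2004, algorithm 1) estimator written for illustration; it is not our benchmark code:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """Minimal KSG (algorithm 1) estimate of I(X; Y) in nats."""
    x, y = x.reshape(len(x), -1), y.reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])
    # eps_i: max-norm distance to the k-th neighbour in the joint space
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    # strict counts within eps_i in each marginal (self included -> psi(n_x + 1))
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True)
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True)
    return digamma(k) + digamma(n) - np.mean(digamma(nx) + digamma(ny))

rng = np.random.default_rng(0)
r = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=5_000)
mi_raw = ksg_mi(xy[:, 0], xy[:, 1])
mi_mapped = ksg_mi(xy[:, 0], np.cbrt(xy[:, 1]))  # injective map of Y; MI unchanged in theory
exact = -0.5 * np.log(1.0 - r ** 2)
```

On a finite sample the two estimates agree only approximately; how badly this approximation degrades under less benign transformations is precisely one of the phenomena our benchmark probes.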
> What is pointwise MI?
We apologise for using the term "pointwise MI" in Section 6 rather than "pointwise mutual information", which we use in all the other sections. We fixed this notational inconsistency.
> The authors state that there are more interesting cases and they only cover transforms of normal and student-t distributions.
There are many distributions which cannot be obtained as $P_{f(X)g(Y)}$, where $(X, Y)$ is jointly multivariate normal or Student distributed and $f$, $g$ are continuous injective mappings. However (except for some simple bivariate distributions and trivial cases with zero MI), an analytical expression for non-zero ground-truth MI is currently tractable only for the multivariate normal and Student families.
We consider enlarging the proposed family an important open direction of the problem and we designed our benchmark so that adding new distributions with known ground-truth MI requires little effort.
We hope that we were able to answer questions raised in the review and clarify our contributions. Given that the Reviewer confirmed excellent soundness and presentation, and a fair contribution, we would be thankful if the Reviewer could reconsider the overall score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
Indeed, I had a hard time finding a source for Theorem 2.1 as well and can only provide data processing inequalities that then imply the authors' result as a corollary. I also have to admit that mere measurability as conjectured in my original review is not sufficient and some way to show measurability of the inverse (defined on the range) is also required.
I'm still reluctant to support the given set of distributions as reasonably representative but have to admit that so far MI has been estimated on much worse datasets, thus it is a step in the right direction. I also question the statement that "an analytical expression for non-zero ground-truth MI is currently tractable only for the multivariate normal and Student families." With some basic math, other approaches like sums of uniform distributions should also be tractable.
Nevertheless, I'll increase my score as the paper is a step in the right direction but hope that soon more versatile distributions will be added to the benchmark and we are not stuck with a less than perfect solution for the next decade.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response. We agree that the chosen set of distributions is not universal, but, at the same time, is a step in the right direction.
Regarding the sums of uniform distributions, we would like to thank you for the suggestion. We have already included in our benchmark the bivariate case (lines 134–137): $Y = X+N$, where $X\sim \mathrm{Uniform}(0, 1)$ and $N\sim \mathrm{Uniform}(-\varepsilon, \varepsilon)$, but we agree that a multivariate generalization (with independent $X_1, \dotsc, X_k$ and $N_1, \dotsc, N_k$) has tractable ground-truth mutual information as well. We will add it to the benchmark. | Summary: This paper focuses on the topic of mutual information and shows how to construct a diverse family of distributions with known ground-truth mutual information. It's worth noting that obtaining a closed-form solution for mutual information is highly dependent on the specific assumptions and functional forms used for the variables X and Y. In practice, deriving closed-form expressions for mutual information can be challenging and may require additional simplifying assumptions or specific knowledge about the distributions involved.
In contrast to previous works that typically assess mutual information estimators using simple probability distributions, this paper introduces a novel approach to constructing a diverse family of distributions with known ground-truth mutual information. Additionally, the authors propose a language-independent benchmarking platform to assess mutual information estimators. The authors explore the applicability of classical and neural estimators in scenarios involving high dimensions, sparse interactions, long-tailed distributions, and high mutual information. By examining these challenging settings, they provide insights into the strengths and limitations of different estimators.
Moreover, the paper offers guidelines for practitioners to select the most suitable estimator based on the specific problem's difficulty and considerations when applying an estimator to new datasets. By presenting a comprehensive evaluation framework and practical recommendations, this research aims to advance the understanding and application of mutual information estimation in various domains.
Strengths: The mutual information estimator is an essential tool in causality; it can help discover the underlying causal graph or infer the strength of causal relations. However, as I mentioned earlier, deriving closed-form expressions for mutual information can be challenging in practice and may require additional simplifying assumptions or specific knowledge about the distributions involved.
The paper introduces a method to construct a diverse family of distributions with known ground-truth mutual information. This is a significant contribution as it allows researchers to explore and evaluate mutual information estimators across various scenarios, encompassing various data characteristics and relationships. For example, explore gene regulatory networks, understand the treatment effect in medical health care, gain insight for constructing a recommendation system, etc.
The research paper presents a comprehensive evaluation framework for mutual information estimation, encompassing the construction of diverse distributions, benchmarking platform, exploration of challenging scenarios, and practical guidelines. This framework provides a holistic view of the estimation process, aiding researchers and practitioners in understanding, comparing, and selecting mutual information estimators effectively.
Furthermore, the authors investigate the applicability of classical and neural estimators in challenging scenarios involving high dimensions, sparse interactions, long-tailed distributions, and high mutual information. This exploration provides valuable insights into the performance, strengths, and limitations of different estimators under these challenging conditions, enhancing our understanding of their effectiveness in real-world settings.
Weaknesses: 1. Not all joint distributions can be represented in the form of $P_{f(X)g(Y)}$, limiting the applicability of the benchmark to a specific set of distributions. Extending the family of distributions with known mutual information and efficient sampling is seen as a natural direction for future improvement.
2. Even though the benchmark demonstrates that distributions with longer tails pose a harder challenge for the considered estimators, applying a transformation like the asinh transform does not fully address the issues.
3. The summary does not mention external validation or comparisons between the proposed approach or estimators and existing methods or benchmarks in the field. The absence of such external validation makes it difficult to assess the generalizability or superiority of the contributions in relation to established techniques or alternative approaches.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above "weaknesses".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above "weaknesses".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review. Regarding the questions and limitations:
> Not all joint distributions can be represented in the form of $P_{f(X)g(Y)}$, limiting the applicability of the benchmark to a specific set of distributions. Extending the family of distributions with known mutual information and efficient sampling is seen as a natural direction for future improvement.
Yes, we agree with this assessment. As noted by the Reviewer, obtaining a closed-form solution for mutual information usually requires simplifying assumptions, which constrain the distributions.
One way to extend the family of distributions is to allow for random variables for which mutual information can be efficiently estimated numerically. For example, when the joint and marginal probability distributions have tractable PDFs, mutual information can be estimated by averaging pointwise mutual information over a sufficient amount of Monte Carlo samples. In most cases, the standard error of the mean should be a sufficient diagnostic to provide a precise and accurate estimate. This type of task could be readily implemented in our package.
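A minimal sketch of this Monte Carlo scheme, using a bivariate Gaussian only because its joint and marginal densities (and the exact MI) are all available for checking; this is an illustration, not code from our package:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
r = 0.7
cov = np.array([[1.0, r], [r, 1.0]])
n = 200_000
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# pointwise mutual information i(x, y) = log p(x, y) - log p(x) - log p(y)
pmi = (multivariate_normal([0.0, 0.0], cov).logpdf(xy)
       - norm.logpdf(xy[:, 0]) - norm.logpdf(xy[:, 1]))

mi_mc = pmi.mean()                      # Monte Carlo estimate of MI
sem = pmi.std(ddof=1) / np.sqrt(n)      # standard error of the mean, as a diagnostic
mi_exact = -0.5 * np.log(1.0 - r ** 2)  # closed form for the Gaussian case
```

Any distribution with tractable joint and marginal log-densities can be plugged into the same three lines; only the ability to sample and evaluate `logpdf` is required.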
> Even though the benchmark demonstrates that distributions with longer tails pose a harder challenge for the considered estimators, applying a transformation like the asinh transform does not fully address the issues.
Indeed, our experiments suggest that although long tails make estimation harder, simple approaches of "shrinking" the tails (as with the asinh transform or different preprocessing strategies explored in Appendix E) cannot resolve this issue completely. We consider this an interesting open problem and suspect that this may be related to the shape of the "PMI profile" (defined below).
Let $i$ be the pointwise mutual information (PMI) function for distribution $P_{XY}$. We define the PMI profile to be the distribution of $i(X, Y)$. This distribution is defined on the set of real numbers and its expected value is the mutual information. Higher moments (and, informally, the overall "shape") of this distribution can influence how well the expected value can be estimated. Importantly, the PMI profile does not change under diffeomorphisms, so the PMI profiles of $P_{XY}$ and $P_{f(X)g(Y)}$ are the same.
We hypothesise that this may explain why these simple preprocessing strategies (which proceed by applying diffeomorphisms) cannot fully resolve the issues with long-tailed distributions, but we have not validated this hypothesis yet.
> The summary does not mention external validation or comparisons between the proposed approach or estimators and existing methods or benchmarks in the field. The absence of such external validation makes it difficult to assess the generalizability or superiority of the contributions in relation to established techniques or alternative approaches.
A lack of universally accepted, easy-to-use benchmarks for testing mutual information estimators is one of our main motivations. One of the packages implementing several estimators we consider in our benchmark was covered by unit-tests which only ensured that the returned value is a `float`(!). Other benchmarks (Song and Ermon (2020), Khan et al. (2007), and Poole et al. (2019), which are discussed in Section 5) are typically constructed for evaluating a specific class of estimators, and usually focus on Gaussian variables (with known MI) or complicated high-dimensional tasks (with MI not known). This has made it difficult to build a general understanding of available methods and their applicability. We believe that our benchmark can serve as a strong reference point for future work. | Rebuttal 1:
Rebuttal: We would like to thank the Reviewers for their insightful comments and appreciate that they find that our work *"provides valuable insights into the performance, strengths, and limitations of different estimators"* (AkdF), that our *"results on heavy tails are particularly interesting"* and that the paper *"makes a strong point here by including a simple covariance estimator as well"* (FjRx). It *"presents mathematical background in a precise and effective way"* (vCqe) and *"the paper is clear, provides many novel benchmarking tasks, and is easily verifiable"* (62uG). We find it particularly encouraging that all reviewers find that our paper is clear and that Reviewer vCqe concludes *"I foresee much new research originating from the identification of these aspects"*.
Further, we would like to comment on two points asked by both Reviewer FjRx and Reviewer 62uG.
Regarding the motivation of used distributions, we approach the issue of benchmark construction by casting a wide net of tasks. We decided to focus on the following phenomena:
1. **Dimensionality**. High-dimensional datasets are becoming more common, particularly in machine learning and systems biology.
2. **Sparsity**. While the data might be high-dimensional, the effective dimension may be much smaller (e.g., only a few genes out of thousands convey information about the amount of a particular protein).
3. **High MI**. Estimating high MI is known to be difficult. However, it is usually something we might know approximately in advance -- if there are 4000 image classes, MI between image class and representation is at most 12 bits. An additional interesting observation is that CCA performs very well, suggesting that in this scenario incorporating prior information is crucial.
4. **Long tails**. Since the Student distribution has heavier tails than the multivariate normal distribution, this was a natural choice. An interesting conclusion is that even after "removing the tails" (see Appendix E), these distributions remain difficult. Thus, we obtain low- and moderate-dimensional distributions that are unimodal and without heavy tails, yet still challenging to estimate.
5. **Robustness to diffeomorphisms**. Mutual information is often chosen because it is theoretically invariant to diffeomorphisms. We wanted to challenge this invariance when only a finite sample is available.
Secondly, we would like to note that our main contribution lies in constructing a general benchmark and the analysis of its results; we do not consider Theorem 2.1 (a necessary tool to construct our benchmark) to be a novel contribution of this paper (Reviewers FjRx, 62uG), as we feel it has been known by the community for a long time. However, we could not find the proof in the literature (and the proof given by Kraskov et al. (2004), e.g., does not cover singular measures and mappings other than diffeomorphisms), so we included our proof in the Appendix for completeness. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CAPro: Webly Supervised Learning with Cross-modality Aligned Prototypes | Accept (poster) | Summary: This paper proposes a unified prototypical contrastive learning framework, named Cross-modality Aligned Prototypes (CAPro), to learn visual representations with correct semantics. CAPro exploits web data across modalities to formulate semantically-correct textual and visual prototypes. The authors propose text matching to leverage textual prototypes to establish their noise-robust estimation. They also bring in text enhancement with visual guidance from image neighbors to mitigate the effect of noisy texts on clean sample selection. Further, the authors propose collective bootstrapping (CB) to provide smoother label reference by extending bootstrapping with collective knowledge. Experimental results on WebVision1k and NUS-WIDE (Web) are provided to validate the effectiveness of the proposed method.
Strengths: - The idea of bridging visual and texture prototypes to cope with webly supervised learning is interesting and promising.
- The motivation is clear, and the paper is written well.
- The code is released. The ablation study seems extensive.
Weaknesses: - Eq.(5), (6), and (7) seem very similar to those in [28]. It is expected to discuss the difference between the proposed one and [28]. Why is the proposed one better?
- The total objective loss function shown in Ln 242 contains 4 loss weight hyper-parameters. However, only \lambda_{bts} is discussed in the ablation study. What about other hyper-parameters? Moreover, it is also a concern that the proposed method involves too many hyper-parameters, including these four loss weights, \gamma, update frequency, etc. How to decide the proper value of these hyper-parameters? What about the robustness?
- In Table 1, the result of NCR[58] (i.e., 73.9) seems wrong. The result shown in the NCR paper is 75.7 for NCR and 76.8 for NCR+Mixup+DA, both surpassing the reported result of the proposed method. This raises a concern about the performance of the proposed method.
- Some minor issues:
- In Ln 62, a period is missing between "selection" and "We".
- In Ln 69, the sentence uses the past tense, while the former and latter sentences basically use the present tense. It is recommended to change this sentence to present tense to make them consistent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have briefly discussed the limitations and broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A4.1 Thank you.
A4.2 First, Eqs. (5) and (6) are exactly the same as Eqs. (1) and (4) in MoPro [28] because we adopt the same prototypical and instance-wise contrastive learning.
Second, the control flow in Eq. (7) is inspired by Eq. (5) in MoPro [28], but we differ in that the labels of the top-$K$ matched examples are kept unchanged. Our design guarantees that these top-$K$ samples would provide consistent guidance on noise removal and prototype update, which avoids overfitting in highly noisy classes. Our superior results under both single-label and multi-label scenarios demonstrate that MoPro [28] is prone to overwhelming noise in certain categories, where clean samples would be easily ruled out by the majority of noisy ones. A detailed comparison between our noise removal policy (Eq. 7) and MoPro [28] is in the supplementary (line 119).
Third, our CAPro differs from MoPro in three aspects:
CAPro notices the semantic noise problem and takes advantage of both textual and visual prototypes to handle semantic misalignment. Our visual prototypes are maintained and polished only by semantically-correct examples.
CAPro "creatively" reuses the dictionary (originally set for instance-wise contrastive learning) for collective bootstrapping, where visually-similar neighbors provide label reference by performing dictionary look-up.
CAPro, to the best of our knowledge, is the first to extend prototypical contrastive learning for multi-label classification, where the overwhelming noise ratio and intra-class positive-negative imbalance pose great challenges to optimization. To solve that, CAPro performs prototypical learning in subspaces of the shared embedding space to stabilize training.
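For intuition, the dictionary look-up in the second point above could be sketched as a similarity-weighted average over a memory bank of embeddings and their stored class predictions. The temperature and the particular softmax weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def collective_targets(queries, bank_emb, bank_probs, tau=0.1):
    """Soft label reference for each query embedding: a softmax-weighted
    average of the class predictions stored for dictionary entries."""
    logits = queries @ bank_emb.T / tau                 # embedding similarity
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                   # softmax weights over the bank
    return w @ bank_probs                               # collective label reference

bank_emb = np.eye(3)                                    # 3 stored embeddings (toy)
bank_probs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
query = np.array([[0.99, 0.1, 0.0]])                    # closest to bank entry 0
targets = collective_targets(query, bank_emb, bank_probs, tau=0.05)
```

Because the dictionary already exists for instance-wise contrastive learning, such a look-up adds label smoothing from visually similar neighbors at essentially no extra memory cost.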
A4.3 Please refer to our response A2.5 for the instructions on how to tune the hyper-params. of loss weights in line 242. We will add more explanations on hyper-params. of the loss function.
A4.4 First, ablation studies on $\lambda^{bts}$ and top-$K$ can be found in lines 301-304 of the manuscript. The discussion on threshold $\gamma$ and update frequency is in our supplementary (lines 88, 103).
For $\gamma$ on single-label datasets, its value is related to the percentage of noise in datasets. For WebV1k/Ggl500 ($34\\%$ noise [3]), $\gamma=0.6$ works better than $\gamma=0.8$. For one's own web dataset, if the noise ratio is larger, $\gamma$ should be tuned lower so that wrong labels could be corrected at an earlier stage before overfitting.
For $\gamma$ on multi-label datasets, its value is related to both the percentage of noise and the number ratio of positive-negative samples. For NUS-WIDE ($50\\%$ noise [78] and $0.02$ for avg. ratio of positive-negative examples), $\gamma=0.9$ works better than $\gamma=0.6$. For one's own web dataset, if the noise ratio is smaller and the positive/negative ratio is smaller, $\gamma$ should be tuned higher so that hard positive samples will not be easily discarded to avoid underfitting.
For the update frequency, its value is related to the dataset scale and noise level. For WebV1k/Ggl500, visual prototypes should be updated per epoch to improve their diversity, which better handles the domain gap between web and realistic datasets. For NUS-WIDE, the update frequency could be reduced to stabilize training, where the prototypes can be prone to overwhelming negative examples in each category.
For $\lambda^{bts}$, we suggest $0.1$ would achieve a balance between the individual and collective label references. A much larger value may cause over-smoothing and over-regularization of visual learning.
For top-$K$, its value is related to the percentage of noise. If the noise ratio is less than $30\\%$, $K$ should be set higher than $50$ to include more diverse examples.
The settings of other hyper-params. can be found in the supplementary (Tab. S2). We find current settings work well. For one's own dataset, we believe these values can be set as starting points and finetuned accordingly.
Among all hyper-params., our ablation results show that for $\lambda^{bts}$, $\gamma$, and top-$K$, their values do affect performance and should be set following the rules mentioned above (such as the dataset scale, the noise ratio, and the positive-negative ratio).
For other hyper-params. such as the prototype update frequency, we do not observe significant fluctuation. In other words, the model is robust to these hyper-params.
A4.5 First, the result of $73.9\\%$ for NCR [58] is correct for the batch-size of $256$ and can be found in Tab. 6 in their supplementary [58].
Second, as we pointed out in line 257 (manuscript) and in line 24 (our supplementary), we follow most SOTA methods (see Sec.4.2 in MoPro [28]) to use the standard batch-size of $256$ on the WebV1k dataset.
Third, the improvement by large batch size has been studied by NCR [58] as their results for batch-size=$256$ and batch-size=$1,024$ are respectively $73.9\\%$ and $75.7\\%$. Their vanilla baseline with R50 backbone achieves a top-1 Acc of $74.9\\%$ [58].
Fourth, NCR [58] does not report their top-5 results on WebV1k and does not provide both top-1 and top-5 results on ImgN1k, where we cannot tell if their method can properly handle the domain gap between web and realistic datasets.
Finally, we agree that comparability between different methods should be carefully studied. To help interpret Tab. 1, please see our response to the Reviewer HQEo on how to compare different methods.
We will add results of NCR $\dagger$ [58] ($75.7\\%$ for batch-size of $1,024$) to Tab. 1. More explanations on how to interpret and compare methods will also be added.
A4.6 We correct these sentences accordingly.
A4.7 More discussions on limitations will be added. Please see response A1.9 for details. | Summary: This paper dives into the study of webly-supervised learning and aims to utilize the neglected alt-text of web images to enhance the learning process. To this end, the authors propose the approach called Cross-modality Aligned Prototypes (CAPro). CAPro is equipped with two modules, namely, text matching & enhancement, and collective bootstrapping. The former aims to assign text to the corresponding prototypes by resorting to the LLMs and cross-modal nearest neighbor mechanism. The latter provides smoother label reference by extending bootstrapping with collective knowledge. Extensive experiments have been conducted on several benchmarks to verify the effectiveness of the proposed CAPro.
Strengths: Most existing webly-supervised learning works mainly focus on the web images and corresponding (noisy) labels while omitting the potential alt-texts (captions). This paper provides a new perspective that uses the alt-text to complement the webly-supervised learning. From this point, the motivation and idea of this paper are novel and interesting.
Weaknesses: 1. My major concern is the differences between this work and the existing problem or techniques including the noisy correspondence, noise-robust learning from NNs, and noise removal. First, this paper claims that the existing webly-supervised learning works mainly address certain types of noise including label-flipping noise and out-of-distribution (OOD), while neglecting the semantic noise, namely, the misalignment between image contents and the associated texts. The claim might be correct for the webly-supervised learning community. However, the so-called semantic noise is very similar to the definition of noisy correspondence [61,62, 65]. The authors should provide more discussion to clarify the differences between the so-called noisy correspondence and semantic noise. If the two problems are somewhat similar, I think it would be better to give more discussion on the related works like 'Noisy Correspondence Learning with Meta Similarity Correction'. Second, the differences between the used KNN-graph mechanism and the works in 'Noise-Robust Learning from Neighbors' are encouraged to be further discussed. Third, the proposed noise removal strategy seems to be similar to the sophisticated WSL method (MoPro), which is an important baseline for WSL learning.
2. The performance improvement is limited (See Table 1). This paper adopts a relatively complex pipeline and additionally resorts to existing LLMs. However, the performance improvement is marginal compared to the sophisticated WSL baseline (MoPro, ICLR 2020).
3. There are some typos and unclear statements. For example:
i) the definition of 'concept definition texts (Line 56-57)' is lacking
ii) 'with visual guidance from image neighbors (Line 61)' is unclear.
iii) 'selection We' (Line 62)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My major concern is the differences between this work and the existing problem and techniques as eloborated on in Weaknesses. Thanks
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A3.1 Thank you.
A3.2 We would like to explain the differences between our work and previous studies in noisy correspondence learning.
First, the reasons behind these two problems are different.
The semantic noise is caused by polysemous retrieval keywords used to crawl web images. For example, when we try to retrieve web images of "drumstick" for the percussion music instrument, we may end up with a bunch of images of "drumstick dishes (chicken)" and "drumstick trees (moringa oleifera)". The associated texts of these irrelevant images indeed contain "drumstick", but the image contents are irrelevant to the expected concept.
Noisy correspondence [61, 62, 65] emphasizes the mismatch between an image and its associated text itself. It is mainly caused by mistakes in partitioning the interleaved image and text web data. For example, the caption "a bunch of cows grazing in a dry field together" is wrongly assigned to an image of "giraffes" [65]. Mismatched image-text pairs mainly hinder the performance of cross-modality retrieval.
Second, the solutions to these two problems are different.
The semantic noise belongs to the label noise, which means that web images are incorrectly annotated. We focus on learning visual representation with categorical noise, which falls under the scenarios of unimodal single-label or multi-label classification.
Noisy correspondence tackles the instance-level mismatched image-text pairs. Its task is to identify alignment errors in the paired data and remove false positives of matching. Most noisy correspondence methods [61, 62, 65] learn to align image and text embeddings to facilitate cross-modal retrieval.
Finally, we agree that there exists an intersection between the two problems. In our webly-supervised learning, we resort to both images and their texts to form semantically-correct visual prototypes. These images and texts can be mismatched and therefore we adopt text enhancement by $k$-reciprocal-NN-based smoothing and reranking to alleviate the noisy correspondence problem. From this point of view, it is indispensable to take noisy correspondence into consideration.
We add another section to the related work:
Noisy Correspondence Rectification
One paradigm similar to WSL might be noisy correspondence rectification or calibration [60,65,62,66,61,63,68,79]. It tackles the mismatched image and text pairs and aims to simultaneously learn aligned visual and textual embeddings for improved cross-modal retrieval. Huang et al. [65] utilizes the memorization effect of neural networks to partition clean and noisy data and then learns to rectify correspondence. Hu et al. [61] derives a unified framework with contrastive learning to reform cross-modal retrieval as an N-way retrieval. Han et al. [66] proposes a meta-similarity correction network to view the binary classification of correct/noisy correspondence as the meta-process, which facilitates data purification.
Although the noisy correspondence removal is closely related to our task, it differs in two aspects: 1) We focus on the label noise where web images are wrongly-annotated by weak keywords or hashtags. Noisy correspondence emphasizes the instance-level mismatch between an image and its associated text. 2) We aim to learn visual representations with correct categorical labels while most methods on noisy correspondence try to align image and text embeddings to improve cross-modal retrieval.
A3.3 More discussions on the differences between previous nearest-neighbor methods and our CAPro will be added to the related work as follows.
It is noted that nearest neighbors play a vital role throughout the components of our CAPro, from text enhancement to text matching and collective bootstrapping. Compared with previous methods of learning from neighbors, our mechanism differs in that:
1) We acquire guidance from cross-modality neighbors, where noisy texts are enhanced by image neighbors to alleviate the mismatch problem. In contrast, most previous studies investigate neighbors within one modality.
2) We exploit reciprocal structures to filter and rerank nearest neighbors for pertinent text matching, while most existing works neglect those top-ranked false positive neighbors.
3) We resort to neighbors for on-line collective bootstrapping in a manner of dictionary look-up instead of explicit global graph construction.
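For concreteness, the k-reciprocal filtering idea mentioned in point 2) can be sketched as follows. This is a hypothetical illustration with invented names, not the paper's actual implementation: a neighbor j of item i is kept only if i is also among j's top-k neighbors, which filters top-ranked false positives.

```python
import numpy as np

def k_reciprocal_neighbors(features, k):
    """Keep neighbor j of item i only if i is also among j's
    top-k neighbors (k-reciprocal filtering of false positives)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                    # cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self-matches
    topk = np.argsort(-sim, axis=1)[:, :k]   # top-k neighbor indices per item
    return [[j for j in topk[i] if i in topk[j]] for i in range(len(feats))]
```

Here, two items that mutually rank each other highly survive the filter, while a one-sided match is discarded.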
A3.4 First, CAPro shares the same prototypical contrastive learning with MoPro [28]. The differences between MoPro and our CAPro include:
CAPro notices the semantic noise problem and takes advantage of both textual and visual prototypes to handle semantic misalignment. Our visual prototypes are polished only by semantically-correct examples.
CAPro "creatively" reuses the dictionary for collective bootstrapping, where visually-similar neighbors provide label references by performing dictionary look-up.
CAPro, to the best of our knowledge, is the first to extend prototypical contrastive learning for multi-label classification, where the overwhelming noise and intra-class imbalance pose great challenges. CAPro performs prototypical learning in subspaces of the shared embedding space to stabilize training.
Second, please see the response A4.2 for the differences between our noise removal strategy (Eq. 7) and MoPro (Eq. 5).
Third, the advantage of CAPro over MoPro is highlighted in Fig. 1(c), where performance on $235$ polysemy categories is validated. CAPro indeed improves MoPro by addressing the semantic noise that prevails in these classes. In this case, CAPro can be seen as an effective "booster" to MoPro, which further enables the handling of semantic misalignment with wiser collective bootstrapping. The performance gains are expected to be much more significant in practical usage.
A3.5 We have corrected these typos and statements.
A3.6 Thank you for the insightful comments.
A3.7 Thank you.
---
Rebuttal Comment 1.1:
Title: Replying to the authors' rebuttal
Comment: Thanks for the detailed rebuttal. In the rebuttal, the authors have clarified the differences between some related works. I would like to maintain my positive rating. | Summary: To handle label noise problems, especially the semantic noise in webly supervised learning, the authors propose a unified prototypical contrastive learning framework named Cross-modality Aligned Prototypes (CAPro). It exploits web data across modalities to formulate semantically-correct textual and visual prototypes. Besides, collective bootstrapping is proposed to encourage smoother and wiser label references from appearance-similar instances in a manner of dictionary look-up. Extensive experiments on WebVision1K and NUS-WIDE(Web) demonstrate that the proposed CAPro can handle realistic noise under different scenarios. It also achieves new state-of-the-art performance on open-set recognition.
Strengths: 1. The authors propose a new cross-modality prototypical learning framework named as CAPro, which aims to handle various noises, especially semantic noise.
2. The proposed CAPro framework is carefully designed and fully explores the cross-modality intervention to filter out noise.
3. Extensive experiments have been conducted to prove the effectiveness of the proposed method. On the other hand, the code is available and enables reproducing this work.
Weaknesses: 1. The framework figure, i.e., Fig. 2, is too complex to understand. The arrows do not all point in the same direction, and the symbols are not introduced in the caption or text. There is also a lack of explicit module division in the figure. These aspects increase the difficulty of understanding the whole framework.
2. In experiments, it seems that different methods obtain different “vanilla” performance even with the same backbone, such as R50. Why does the performance differ among methods? Does this imply an unfair comparison?
3. The whole framework is complex since it introduces many modules to handle noise. The whole training process should be organized as an explicit algorithm to facilitate understanding.
4. There are a lot of hyper-parameters in the total objective. How to choose them during experiments?
5. There are some typos in the manuscript.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please try to address the weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: More explanations will be added to the manuscript.
A2.1 Thank you for the comments.
A2.2 First, we would like to further explain the function of each module in Fig. 2:
Siamese image encoders: extract features $\mathbf{v}_i$, $\mathbf{v}_i'$ from inputs $\mathbf{x}_i$ and their augmented counterparts $\mathbf{x}_i'$.
Text encoder: generates embeddings $\mathbf{s}_i$, $\mathbf{s}^c$ respectively from the instance $\mathbf{t}_i$ and the category $\mathbf{t}^c$.
Classifier: maps $\mathbf{v}_i$ to predictions $\mathbf{p}_i$ over $C$ classes.
Projector: distills discriminative low-dimensional embeddings $\mathbf{z}_i$ from $\mathbf{v}_i$, followed by $\ell_2$-normalization for a unit-sphere constraint on $\mathbf{z}_i$.
Reconstructor: recovers $\tilde{\mathbf{v}}_i$ from $\mathbf{z}_i$ to be close to $\mathbf{v}_i$.
Auxiliary classifier: outputs predictions $\mathbf{q}_i$ on $\mathbf{z}_i$.
Dictionary: records keys for both contrastive learning and collective bootstrapping. The latest embeddings $\mathbf{z}_i'$ are enqueued while the oldest are dequeued.
Second, we will modify Fig. 2 to make it easier to read and follow (see PDF). The refined captions are as follows:
Overview of CAPro. Images $\mathbf{x}_i$ and texts $\mathbf{t}_i$ are respectively fed into the image and text encoders for features $\mathbf{v}_i$ and $\mathbf{s}_i$. Then, $\mathbf{v}_i$ is projected into the embedding space as $\mathbf{z}_i$, followed by the reconstruction from $\mathbf{z}_i$ to $\mathbf{\tilde{v}}_i$. Visual prototypes $\mathbf{z}^c$ are initialized with anchor instances that are selected by matching enhanced texts $\mathbf{\tilde{s}}_i$ to textual prototypes $\mathbf{s}^c$ for semantic alignment. They are constantly polished up by clean images and engage in contrastive learning to constrain cluster distribution. Collective bootstrapping exploits visual dictionary for regularization on the auxiliary classifier output $\mathbf{q}_i$, where each key embedding is matched to the query for the reference $\mathbf{b}_i$. Web labels $y_i$ are simultaneously refined as $\tilde{y}_i$ for ``denoised'' supervision on the classifier output $\mathbf{p}_i$.
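The dictionary behavior described above (newest embeddings enqueued, oldest dequeued, queries matched against all stored keys) can be sketched minimally as follows. This is a hypothetical, simplified illustration with invented names, not the paper's implementation:

```python
from collections import deque
import numpy as np

class EmbeddingDictionary:
    """Fixed-size FIFO dictionary of key embeddings: the newest keys
    are enqueued and the oldest dequeued; a query is matched against
    every stored key via dot product (a dictionary look-up)."""
    def __init__(self, size):
        self.keys = deque(maxlen=size)   # maxlen evicts the oldest entry

    def enqueue(self, z):
        self.keys.append(np.asarray(z))

    def lookup(self, query):
        # similarity of the query embedding against all stored keys
        return np.stack(list(self.keys)) @ np.asarray(query)
```

In the actual framework, such similarities would weight the label references from visually similar neighbors; here only the queue-and-match mechanics are shown.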
A2.3 First, even under the same R50 backbone, different results of "vanilla" baselines are reported [26,28,58,78]. We believe the training settings, especially the batch-size, are the reasons that certain vanilla baselines surpass most SOTAs. In the present study, we follow MoPro [28] to use the standard settings of ImageNet training. Please refer to our supplementary (line 21).
Second, we take VSGraph [78] and NCR [58] as examples to show how the batch-size would affect performance.
Both VSGraph [78] and NCR [58] adopt the R50 backbone.
However, VSGraph, NCR, and their vanilla baselines are trained with a batch size of $1,024$, and their vanilla methods surpass most SOTAs that are trained with a batch size of $256$.
The benefits of a larger batch-size (1,024 over 256) on WebV1k are studied in NCR [58]. Its top-1 Accs for batch-sizes of $1,024$ and $256$ are respectively $75.7\\%$ and $73.9\\%$.
We believe batch-size is the key factor that affects comparability.
Due to the limited GPU budget, training with a batch-size of $1,024$ is currently not affordable but we are willing to experiment in further research.
Third, we would like to clarify how to fairly interpret Tab. 1:
The comparison between SOTA methods with ours:
Methods in rows 1-5 are not comparable with the proposed CAPro since their backbones are different.
Methods in rows 6-8 are not comparable due to their optimized training settings.
Methods in rows 9-18 are all trained with R50 with a batch size of $256$, which are comparable with ours.
The comparison between SOTA methods with their vanilla baselines:
SCC can be compared with the 4th row.
VSGraph and CoTeach can be compared with the 6th row.
MoPro can be compared with the 9th row.
Our CAPro can be compared with the second last row.
A2.4 Thank you for the suggestion. We prepare an algorithm to clearly explain the entire process (see PDF), which will be added to our paper.
A2.5 First, for the total objective (line 242), we follow MoPro [28] to use $\lambda_{pro}=1$ and $\lambda_{ins}=1$. Out of simplicity, we also use $\lambda_{prj}=1$ as the default.
Second, we would like to explain the effect of $\lambda_{pro}$, $\lambda_{ins}$, and $\lambda_{prj}$ on regularization.
A larger $\lambda_{pro}$ may pull instances too close to their prototypes, which "shrinks" class clusters in the embedding space.
A larger $\lambda_{ins}$ will enforce stronger visual discriminability between two instances. It may cause two examples from the same category to differ greatly and thereafter downgrades the visual prototype update and class cluster regularization.
A larger $\lambda_{prj}$ improves the reconstruction quality of $\tilde{\mathbf{v}}_i$, which encourages $\mathbf{z}_i$ to retain more information of $\mathbf{v}_i$ in the embedding space.
Since the projection-reconstruction loss is only involved in the pre-training stage (see line 62 in our supplementary), $\lambda_{prj}$ will not affect the prototypical and instance-wise contrastive learning in the following stage.
Third, for custom web dataset, we suggest that $\lambda_{pro}$, $\lambda_{ins}$, and $\lambda_{prj}$ should be tuned according to the performance results under
1) $\lambda_{pro}=0$ vs. $\lambda_{pro}=1$;
2) $\lambda_{ins}=0$ vs. $\lambda_{ins}=1$;
3) $\lambda_{prj}=0$ vs. $\lambda_{prj}=1$.
According to our experiments on both single-label and multi-label datasets, the default settings of $\lambda_{pro}=1$, $\lambda_{ins}=1$, and $\lambda_{prj}=1$ should work well.
For settings of other hyper-params., please refer to our response A4.4.
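The shape of the total objective discussed in A2.5 can be sketched as a simple weighted sum. This is a hypothetical illustration; the term names are invented and the paper's objective (line 242) may include additional terms:

```python
def total_loss(l_cls, l_pro, l_ins, l_prj,
               lam_pro=1.0, lam_ins=1.0, lam_prj=1.0):
    """Weighted sum of classification, prototypical contrastive,
    instance-wise contrastive, and projection-reconstruction terms;
    the defaults mirror the lambda settings quoted above."""
    return l_cls + lam_pro * l_pro + lam_ins * l_ins + lam_prj * l_prj
```

Setting one lambda to 0 vs. 1, as suggested for custom web datasets, simply drops or keeps the corresponding term.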
A2.6 We will double-check grammar and word spelling.
A2.7 We will modify the manuscript accordingly. Please see the point-by-point response above.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for addressing most of my concerns and questions. Therefore, I will keep my rating. | Summary: The authors propose a prototypically-aligned contrastive learning framework for vision and language in order to enable better web-scraping of fine-grained, rare concepts that are easily confused or mapped to other more common concepts when either vision or language is considered in isolation (what they term “semantic noise”). Their goal is to improve webly-supervised learning, which is often plagued by label noise, particularly for fine-grained categories. They call their method CAPro, for “Cross-modality Aligned Prototypes.” Their method is built from a few simple ideas: 1) use text prototypes, well-defined in the literature, to scrape a cleaner set of visual prototypes for fine-grained classes, 2) Use visual features to fill gaps or correct errors within text prototypes for fine-grained concepts, generating “semantically-aligned visual prototypes,” 3) an additional cleaning step to further reduce noise between the visual and textual prototypes that uses cluster regularization, 4) “collective bootstrapping” to further smooth concepts or “label references” via essentially an adapted/aggregated/bootstrapped dictionary lookup over the entire dynamic concept dictionary that reduces the effect of examples divergent from the average when making predictions, particularly helpful for visually similar classes. Their method is highly similar to MoPro, but with the addition of the above described semantic noise correction and regularization across modalities.
After reading the authors' rebuttal and discussing with them, as well as reading the comments and discussion with other reviewers, I will increase my score to a 6.
Strengths: The authors show that their method performs well for single-label and multi-label settings, on WebVision1k and Imagenet1k, and shows some open-set generalizability. In particular, they define 235 categories from these datasets as exhibiting “polysemy concepts”, and show that their method is particularly effective for these concepts. The gains shown are nice, and the more detailed performance breakdown of the 235 class versions of both datasets is interesting (though it would be nice to also show the performance breakdown for the non-polysemy concepts to help build reader intuition of the impact of polysemy). The method does seem to clearly have an advantage in multi-label challenges, and performs better than CoTeach or VSGraph on open-set concepts. The qualitative examples in table 3 are quite compelling, though they seem potentially highly cherry-picked.
Weaknesses: Notably, their system is quite complex, and feels a bit like a bag of tricks (albeit an effective one), and the method both performs quite similarly to MoPro and often underperforms VSGraph on Top1 (the authors point out that these may not be directly comparable, as they use different backbones, but this makes it difficult to understand or interpret the table. Perhaps they could somehow point out which methods are comparable along which axes in the table? As it is, it’s not very easy to determine the method’s value vs. other methods). I would also have liked to see more thorough ablations to better understand how impactful their suggested components were vs. the additional computational complexity added, to better understand how “worth it” each component would be to use or implement. I find table 4 to be a bit difficult to interpret. I would also like a more thorough analysis of why top-1 suffers, but top-5 benefits (this may be related to over-regularization somehow?).
Nits:
In the abstract: “exacerbates fine-grained visual concept learning.” should perhaps be “exacerbates the challenge of fine-grained visual concept learning.”
Line 26: “The large scale” -> “Large scale”
Line 305: “study” -> “studies”
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I’m curious how this method would handle visual dimorphism in fine-grained classes, for instance dark-eyed juncos which have multiple distinct color morphs, some more common than others. Since the method explicitly seeks prototypical examples, is this at the expense of recognition of anomalies or rarities within-concept? Have the authors seen any interesting failure modes for their method, where simpler methods succeed?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not addressed this at all, which I see as a significant weakness. What are the potential benefits or harms when building prototypical definitions of concepts? What might this mean for instances which do not fit the prototypical visual appearance of a given concept on the internet? How might biases in internet data limit our prototypes?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: A1.1 We improve Fig. 1 (c) with non-polysemy performance (see PDF).
A1.2 CAPro is indeed a complicated, systematic solution. It is specifically designed to address web noise instead of piling up tricks.
It follows a similar paradigm of prototypical learning as MoPro [28] and PCL [27].
Unlike them, we take semantic noise into serious consideration.
Our advantage over MoPro is highlighted on 235 polysemy concepts to demonstrate the benefits of visual-semantic alignment.
A1.3 In Tab. 1, CAPro does not surpass VSGraph in top1 but exceeds it in top5 on WebV/ImgN1k. On Ggl/ImgN500, CAPro outperforms it by 7.9%/8.9% in top1. CAPro consistently outperforms VSGraph in Tabs. 2-4 for different benchmarks, tasks, and encoders.
A1.4 First, VSGraph adopts the same R50 but is trained with a batch-size of 1,024. The benefits of a larger batch size are studied in NCR [58]. Its top1 Accs. for 1024 & 256 batch-sizes are 75.7% & 73.9%. We believe batch size is the reason that a baseline [78] surpasses most SOTAs. Due to the limited budget, training with a batch-size of 1024 is currently not affordable, but we will experiment in the future.
Second, we would like to clarify that:
Methods in the same parts of Tab. 1 are comparable to each other. Methods in rows 1-5 are not comparable to CAPro due to different backbones. Methods in rows 6-8 are not comparable to CAPro due to optimized training settings. CAPro is fairly compared with all methods in rows 9-18.
A1.5 First, we present the params. & GFLOPs of encoders.
|Encoders|Params.|GFLOPs|
|---|---|---|
|R50|25M|3.8|
|MiniLM|22M|4.7|
|XLNet|110M|29|
|GPT-Neo|1.3B|3400|
Second, we present the costs vs. gains for text enhancement, where N is the # of nodes, k is # of neighbors per node, and $d_v$ is the dim. of $\mathbf{v}$.
|Text Encod.|Text Enhance.|Cost|Ggl500 Top1|ImgN500 Top1|
|---|---|---|---|---|
|MiniLM/XLNet/GPT-Neo|VSGraph|$O(N^2d_v)$+$O(N{d_v}^2+Nkd_v)$|72.0/71.6/72.0|66.9/66.8/67.2|
|MiniLM/XLNet/GPT-Neo|Ours|+$O(3Nkd_v)$+$O(4k)$+$O(4klog(4k))$|+3.5/+3.8/+3.7|+4.6/+4.7/+4.4|
With k=5 & k=10 for WebV1k & NUS-WIDE, our improvement over VSGraph is worthy at the expense of such a low cost.
Third, we present the costs vs. gains for reference provider, where m is the batch size, d_v is the dim. of v, Q is the size of dictionary, d_p is the dim. of z, and C is the # of classes.
|Text Encod.|Ref. Provider|Cost|Ggl500 Top1|ImgN500 Top1|
|---|---|---|---|---|
|MiniLM|MixUp/BootStrap/LabelSmooth/SCC|--|75.7/75.5/75.4/73.8|71.4/71.3/71.2/70.2|
|MiniLM|NCR| $O(m^2(d_v+C))$|-0.2/+0/+0.1/+1.7|+0.1/+0.2/+0.3/+1.3|
|MiniLM|Our CB| $O(mQ(d_p+C))$|+0.3/+0.5/+0.6/+2.2|+0.6/+0.7/+0.8/+1.8|
Operations of our CB are fast to compute since PyTorch supports efficient matrix multiplication on GPUs.
Our $d_p$=128 is 16x << $d_v$=2048 in NCR & our m=256 is 4x << m=1024 in NCR.
For WebV1k, our cost is 1.35x < NCR.
For NUS-WIDE, our cost is 20.37x << NCR.
It is reasonable to conclude that our CB is more efficient & effective than NCR.
Finally, text encod.& enhance. are off-line and executed only once. They do not participate in network optimization. Besides, the pretrained text encoders are only used for inference under 1 V100 GPU. Therefore, the additional cost is acceptable in return for semantically-correct web images.
A1.6 First, for Tab. 4, methods in rows 2-7 outperform the 1st row with better top1&top5, validating the textual knowledge. For different text encoders, we outperform VSGraph in both top1 & top5.
Second, SCC estimates confidence independently and neglects the relationship between an instance and its prototype. NCR brings no gains; its effect is limited without a large batch size. Bootstrap & label smoothing degrade both top1 & top5 on ImgN500 while mix-up benefits top1 on Ggl500.
Mix-up improves CAPro in top1 but lowers top5. It adopts convex combinations for both inputs & targets, enforcing a stronger regularization than our CB where we only recombine targets.
For WebV1k, examples with noisy labels still resemble their prototypes and therefore neighbor knowledge brings useful reference. Mix-up does not consider appearance similarity and causes over-regularization.
A1.7 Thank you. We fix typos.
A1.8 First, CAPro can handle fine-grained categories on WebV1k. The introduction of atypicals increases the risk of noise. For generalization on anomalies or rarities, one solution is to choose both top-K and randomly sampled instances.
Second, for WebV1k, both MoPro&CAPro underperform the vanilla on a total of 387&312 classes. Top5 failures: screen, sunGlasses, bellCote, ballPlayer, popBottle. For ImgN1k, MoPro&CAPro underperform the vanilla on a total of 450&358 classes. Top-5 failures: silkyTerrier, walkerHound, academicGown, standardSchnauzer, bellCote.
Findings:
1) Domain gap exists between web and realistic datasets.
2) The vanilla tends to overfit the training set so that it outperforms on highly similar concepts: screen & monitor, sunGlasses & sunGlass.
Mistakes on silky & yorkshire Terrier, walker & englishFox hound are ascribed to over-regularization. The inter-class relationship might be used for class-wise adjustment.
A1.9 Two benefits:
Noisy web data can be corrected by measuring the instance-prototype distance.
Inter-class relationship can be statistically studied to shed light on similarities between species.
Three drawbacks:
Tolerance to the intra-class minority. Web data follow long-tailed distribution. The more common one instance is, the greater the likelihood that it gets exposed.
=>to introduce randomness into init. & update of prototypes.
The domain bias of web data. Their styles (advert. & render.) are different from realistic ones. Specific modalities (infrared & CT) are unavailable.
=>to prepare realistic images for guided-training.
Prior knowledge about class hierarchy. Coarse-grained or improper descriptions about hierarchical structure would devalue semantic alignment.
=>to perform a thorough analysis of concepts.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarification and analysis in your rebuttal
Comment: I thank the authors for their rebuttal, and I believe that with the additions they have proposed in the general rebuttal across all reviews the paper will be significantly clarified and strengthened which is a win for the peer review process and will lead to a notably improved manuscript. If all 1-10 of the promised improvements are incorporated thoroughly and well into the final manuscript I believe that would justify an increased score of 6. | Rebuttal 1:
Rebuttal: Dear reviewers, area chairs, and senior area chairs,
We sincerely thank all four reviewers for being positive towards our paper and for providing detailed, constructive comments.
According to these comments and suggestions, we will modify our manuscript by:
1) improving all figures to make them easy to understand and follow;
2) adding more explanations on the differences between our CAPro with MoPro;
3) adding another section into the related work (Noisy Correspondence) to introduce the differences between webly supervised learning and noisy correspondence removal;
4) adding more explanations on the differences between our work and previous methods of learning from noisy labels;
5) adding computational complexity to the ablation study to show the additionally introduced cost;
6) adding more explanations on how to interpret the comparability between SOTA methods when different backbones and training settings are adopted;
7) adding more discussions on how to tune hyper-parameters;
8) adding more discussions on our findings of failure cases;
9) adding more discussions on the limitations;
10) polishing English writings.
Please see the point-by-point response below for each reviewer.
Besides, we provide additional **tables, figures, and algorithms** in the uploaded **one-page PDF** for better explanations. These tables, figures, and algorithms will be magnified in the manuscript, but have to be compressed now to fit into one page only for rebuttal.
Finally, we would like to express our gratitude again for all the valuable comments that would significantly help improve the quality of the manuscript.
Pdf: /pdf/45725829174066ed49bc1ade1e52551d76ceaa4b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models | Accept (oral) | Summary: This work proposed a new framework that augments the diffusion-based synthesis with physical dynamical simulation in order to generatively co-design task-driven soft robots in morphology and control. The extensive experiments in simulation to verify the effectiveness of DiffuseBot.
Strengths: 1. The paper is well-written and easy to follow.
2. The visualization is good and help me understand how the method works.
3. The application of diffusion is always be encouraged and diffusion model is an interesting and promising backbone.
Weaknesses: 1. The novelty is limited; the work feels more like an incremental application of the diffusion model to the 3D soft-body generation task. The relationship to and difference from Softzoo, mentioned in the paper, should be more clearly illustrated.
2. The baselines for comparison are relatively weak, consisting of some outdated works from a few years ago. The paper should include stronger and more recent baselines.
3. I do not agree with the evaluation protocol: 'To avoid being trapped in the local optimum, we run each baseline with 20 different random initializations and choose the best one. Since DiffuseBot is a generative method, we draw 20 samples and report the best.' If the baselines are sensitive to the initialization seed, more detailed results should be reported rather than choosing the best one, and the 20 runs should report the mean and variance rather than the best. The explanation about the generative model is not convincing enough for me.
4. The novelty is limited and the work reads like an incremental application of the diffusion model in the 3D generation domain.
5. The physics-augmented component is not included in the ablation studies, which harms the persuasiveness of the paper's conclusions.
6. Why diffusion and not other generative models, such as VAE or transformer? Ablation studies should be listed, or the paper will likely remain incremental work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please give more detailed clarification and experiments results to address my concerns.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The novelty is limited and the motivation for using the diffusion model is not well illustrated.
2. The baselines are weak and dated, and I do not agree with the evaluation protocol. The ablation studies are not sufficient.
3. The role of, and improvement brought by, the physics-augmented component is not well studied.
4. The limitation part is absent in the submitted version. I hope the authors can discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer vV19 for bringing up several concerns. We provide additional experimental results (see the one-page pdf in the global response) and discussion as the below.
**Limited novelty.**
We aim to design robots with physical function and simple pattern generation will not get us there. We hence propose an entirely new framework by leveraging the computational power of a physics simulator to achieve meaningful results. Generative models are far from sufficient as:
- The generated contents of most large-scale pre-trained generative models don’t reason about physics and fail to achieve any physical utility or functionality.
- Unlike most training of generative modeling, computational robot design normally doesn’t have access to real data (which in our case, a dataset of well-performing robots). Instead, we need to leverage the physics-based simulation that evaluates the performance of a robot.
Please refer to 1-3 paragraphs in the introduction for more detailed justification. Thus, we make the following technical contributions,
- introduce a new framework that augments the diffusion-based synthesis with physical dynamical simulation in order to generatively co-design task-driven soft robots in morphology and control (the last paragraph in the introduction).
- propose a method to robotize 3D shapes from diffusion samples for meaningful evaluation in physics-based simulation (section 2.3).
- present methods for driving robot generation toward improved physical utility by optimizing input embeddings and incorporating differentiable physics into the diffusion process (section 2.4).
**Stronger baselines.**
As opposed to pure 3D generative modeling, soft robot co-design remains relatively unexplored. We believe most prior methods are covered as baselines for benchmarking, e.g., we adapt one of the most well-known (if somewhat dated) methods, CPPN [49], into Diff-CPPN, which can be used with more recent and powerful techniques via differentiable physics. Nevertheless, we further compare with a very recent paper, DiffAqua [27]. Originally, we didn't include this baseline since the method is designed for swimming tasks and lacks generality across a wide range of tasks. Briefly, DiffAqua proposes to compute the Wasserstein barycenter among a set of primitives of underwater creatures. We report mean and standard deviation for all tasks in Table A1.
We can observe that DiffuseBot outperforms DiffAqua across all tasks. There is a natural tradeoff in the method between choosing a larger set of primitives for potentially better performance across diverse tasks and obtaining good solutions in Wasserstein barycenter optimization. To this end, we believe leveraging the power of large-scale pre-trained 3D generative models remains a more scalable and general method toward soft robot co-design.
**Report the mean and var.**
We report the best results since the soft robot co-design problem normally expects to produce only one final robot design that achieves high performance for a certain task (somewhat similar to other applications like drug discovery). To make the analysis more thorough, we report the mean and standard deviation in Table A2. Most results lead to conclusions consistent with Table 2 (using the best), except for hurdling, where the implicit function (IF) gives slightly better performance. However, IF is much more unstable, as indicated by the much larger 0.63 standard deviation.
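The two reporting protocols under discussion (best over repeated runs vs. mean and standard deviation) can be sketched minimally as follows; the helper name is invented and this is only an illustration:

```python
import statistics

def summarize_runs(scores):
    """Best (what co-design settings often report, since only one
    final design is kept) alongside mean and sample standard
    deviation over repeated runs or drawn samples."""
    return {
        "best": max(scores),
        "mean": statistics.mean(scores),
        "std": statistics.stdev(scores),
    }
```

Reporting all three numbers, as the reviewer requests, exposes both the attainable peak and the sensitivity to initialization.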
**Ablation on physics components.**
The ablation for the physics-augmenting components is shown in Table 1, Table 3, and Table 4 (the figure with the caption “Varying starting...”; it is incorrectly labeled as a table).
- In Table 1, we show the results of improved physical utility by augmenting physical simulation with diffusion models; specifically, we demonstrate how the 3D diffusion model (Point-E) works poorly (1st row), and how the proposed components in DiffuseBot greatly enhance the task performance (2nd, 3rd rows). Please refer to section 3.2 paragraph “Physics-augmented diffusion” for more detailed discussion.
- In Table 3 and Table 4, we conduct more fine-grained ablation studies on embedding optimization and diffusion as co-design. Please refer to section 3.3 for more detailed discussion.
**Baselines such as VAE or transformer.**
While there exist many 3D generative models other than diffusion-based models (including VAE [A1,A2], normalizing flow [A3], GAN [A4], etc.), it is non-trivial to incorporate physics priors into the generative process. Compared to other generative models, diffusion models allow us to elegantly build theoretical constructs to incorporate external knowledge like physics-based simulation as guidance throughout the iterative generative process, which is also the major technical contribution of DiffuseBot. Besides, diffusion models have emerged as the de facto standard for content generation, inspiring our work to harness such power for the soft robot design application.
To further strengthen the paper, in Table A3 we compare with a recent VAE-based 3D generative model [A1], which outperforms other widely adopted baselines [A2-A6]. Since it is an open question how to incorporate physics into VAE-based models (and, in fact, any other generative models), we perform direct co-design optimization on the generated samples of [A1] to leverage physics-based simulation. With the more advanced ways of injecting physics priors into the generative process proposed in DiffuseBot, much superior performance is achieved. Lastly, DiffuseBot uses a transformer-based architecture to generate 3D point clouds, as in Point-E.
**Limitations.**
Please check the paragraph about limitations in the global response.
**References**
[A1] Cheng. Autoregressive 3d... ECCV 2022.
[A2] Kim. Setvae: Learning hierarchical... CVPR 2021.
[A3] Yang. Pointflow: 3d point... ICCV 2019.
[A4] Wu. Multimodal shape... ECCV 2020.
[A5] Luo. Diffusion...point cloud generation. CVPR 2021.
[A6] Zhou. 3d..point-voxel diffusion. ICCV 2021.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Thanks to the authors.
Novelty is currently acceptable to me.
I am very grateful for the efforts made by the author, especially for incorporating the VAE-based generation method (although I was anticipating seeing the advantages of diffusion over other generative models by incorporating physics constraints or knowledge as conditions in the backbone of conditioned VAE). However, considering the limited rebuttal time window, I will not be demanding more.
Furthermore, I still believe that providing the mean and variance through multiple evaluations is better than reporting only the best results. For instance, in the supplementary Table A3 provided by the authors, for the locomotion task and the task of moving a box, the variance is particularly high, and there is already significant overlap with the VAE baseline that does not incorporate physics constraints. The authors should provide the mean and variance for the baselines in Table 2 of the main submission to enhance the persuasiveness of the results and conclusions.
If my concerns are addressed, I will immediately increase the score.
---
Reply to Comment 1.1.1:
Title: Addressing the remaining concern
Comment: Thanks for acknowledging our effort for the rebuttal and bringing up the remaining concern about reporting the mean and variance.
We will provide the mean and variance for the results in Table 2 of the main submission, drawn from the supplementary Table A3, along with the discussion presented above, to further strengthen the persuasiveness of our analysis, per your suggestion. As we cannot edit the main paper now, we will incorporate those results and changes as soon as we are able to do so.
We greatly appreciate your timely follow-up on our rebuttal. | Summary: This paper presents DiffuseBot, a framework that uses physics-augmented diffusion models to generate soft robot designs and control strategies for various tasks. The authors propose to optimize the embeddings conditioned by the diffusion model to improve the physical utility of the generated robots, and to reformulate the diffusion sampling process as a co-design optimization that leverages differentiable simulation. The authors demonstrate the effectiveness of their method on several tasks, such as balancing, landing, crawling, hurdling, gripping, and moving objects. They also show how to incorporate human feedback and fabricate a physical robot prototype.
Strengths: The paper is well-written and clear. The proposed method is novel and interesting, combining diffusion models for shape generation, physics-based simulation and co-design optimization. The paper provides extensive experimental comparisons with baselines and ablation studies on both latent optimization and co-design to validate the proposed method. The paper also shows some fun qualitative results of diverse robot designs that function under passive dynamics, locomotion tasks and manipulation tasks.
Weaknesses: - The paper does not discuss the limitations or failure scenarios of the proposed method. In addition, discussion of design choices would help: how are hyper-parameters chosen, such as the guidance scale or the number of MCMC steps?
- Figure 1 shows that the motivation of this work is to deploy the optimized soft robot into the real world. However, though I may have missed it, I did not find discussion in the paper about the feasibility of manufacturing the resulting soft robots, such as the soft gripper. Given that the actuators are currently assumed to be muscle fibers, manufacturing can be difficult.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can limitations of the current pipeline be discussed and included in the paper?
- Can the paper include a discussion about manufacturing?
- How are design choices set? How sensitive is the current pipeline to hyper-parameters?
- How should the metric reported in Tables 1 and 2 be interpreted? "We report the average performance with standard deviation in the superscript." -- Is the performance a success rate?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please include a section about limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer syw2 recognizing our work as well-written and novel, with extensive results. We address the remaining questions below.
**Limitations or failure scenarios.**
Please check the paragraph about limitations in the global response.
**Hyper-parameter choices.**
Hyperparameters were chosen mostly by intuition and by balancing numerical scales, with very little tuning. Below, we briefly discuss the design choices for all hyperparameters listed in Tables 5 and 6 in the appendix.
- *Min buffer size, samples per epoch, training iterations per epoch, batch size*: we roughly ensure the data used in the optimization is sufficiently diverse, and use the same setting for all tasks.
- *Buffer size*: we start with 60 and, if we observe instability in the optimization, increase it tenfold to 600 (similar to online on-policy reinforcement learning); note that buffer size refers to the maximal size, and increasing it does not affect runtime.
- *Buffer Top-K*: we start with 6 and double it if we observe limited diversity of generation throughout the optimization (or a lack of exploration).
- *$t_{max}$, $t_{min}$, and $\Delta t$* (we made some typos in Table 6; all 60’s should be 50’s): we roughly inspect how structured the generation is in terms of achieving the desired robotic task to determine $t_{max}$, and modify $\Delta t$ accordingly to match a similar number of MCMC sampling steps (e.g., $t_{max}$/$\Delta t$: 400 / 50 $\approx$ 150 / 25).
- *Number of MCMC steps K*: we simply set 3 for passive tasks and 5 for active tasks by intuition.
- *$\sigma$*: we simply follow one of the settings in [11].
- *Guidance scale $\kappa$ and renorm scale*: we check the numerical values of $\epsilon$ and the gradient from differentiable simulation, make them roughly similar in magnitude, and use the same scale for all tasks for simplicity.
- *$\gamma$*: we set 0.001 for trajectory optimization and 0.01 for parameterized controllers, based on our experience working with differentiable physics.
Overall, from our empirical findings, the only hyperparameters that may be sensitive include buffer size and buffer Top-K for optimization stability and generation diversity, and guidance scales, which need to be tuned to match the numerical magnitude of other terms so as to take proper effect.
We will include the above descriptions in the appendix section C.
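The magnitude-matching heuristic for the guidance scale described above could be sketched as follows; the function name and the exact renormalization rule are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def renorm_guidance(eps, sim_grad, scale=1.0):
    """Rescale a simulation gradient to roughly match the magnitude of eps.

    `eps` stands for the denoiser's noise prediction and `sim_grad` for the
    gradient from differentiable simulation; the guidance term is rescaled so
    its norm is `scale` times the norm of `eps`, letting it take proper
    effect without overwhelming the diffusion update.
    """
    g_norm = np.linalg.norm(sim_grad)
    if g_norm == 0.0:
        return np.zeros_like(sim_grad)
    return scale * np.linalg.norm(eps) / g_norm * sim_grad
```

With a rescaling like this, a single guidance scale can plausibly be reused across tasks, since the guidance term automatically tracks the scale of the denoiser output.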
**Details on robot manufacturing.**
The details of the physical robot experiment and the manufacturing are in appendix section G. We describe how to build muscle fibers with tendon-driven actuators, how to achieve soft bodies with a lattice structure, and how to fabricate the physical robot with a carbon 3D printer. Please find more details in section G, along with videos of the soft gripper picking up various types of objects on the project site (link shown in line 685). In addition, we conduct a simple quantitative analysis of the behavior of the simulated and physical robots. Please see the response to reviewer 7Vvn for more details.
Although, at present, the compilation of the virtual robot into a physical, digitally fabricated counterpart involves manual post-processing of the algorithm's output, most, if not all, of these steps could be automated. Our method outputs a point cloud (defining geometry), actuator placements, and an open-loop controller, along with a prescribed stiffness. Since we can easily convert the point cloud into a 3D triangle mesh, the geometry can be created by almost any 3D printing method. To realize an effective stiffness and material behavior, stochastic lattices, specifically Voronoi foams, have been used in the past [A1,A2] and are employed here to match target material properties. Given the actuator placement, tendons [A3,A4] can be aligned with the prescribed (contiguous) regions. Since a lattice is used, threading tendons through the robot body is simple, and we note that even more complex routings have been studied in detail in the literature [A5]. Creating attachment points for the tendons is a relatively simple geometry processing problem [A6]. Thus, converting a virtual robot to a specification that incorporates geometry, material, and actuation can be automated in a straightforward way.
We note that, when controlled, the physical robot may not always match the virtual robot's motion. This is the sim-to-real gap, which is significantly harder to overcome in our case than translating the virtual robot to physical hardware. A significant body of literature specifically tackles the sim-to-real gap, and in our case it would require its own dedicated project; however, we note that hardware can often be adapted to work by modifying only the control policies using feedback from real-world experiments, often with little human intervention [A7].
**Metrics.**
They are more of a soft version of a success rate. The definition of the metrics in Tables 1 and 2 is in appendix section D. We will add a pointer at line 182 in section 3.1:
“We refer the reader to the appendix Section D for more detailed task descriptions and performance metrics.”
**References**
[A1] Martínez et al. "Procedural voronoi foams for additive manufacturing." TOG 2016.
[A2] Goswami et al. 3D‐architected soft machines with topologically encoded motion. Advanced functional materials 2019.
[A3] In et al. A novel slack-enabling tendon drive that improves efficiency, size, and safety in soft wearable robots. ToM 2016.
[A4] Kim et al. Slider-tendon linear actuator with under-actuation and fast-connection for soft wearable robots. ToM 2021.
[A5] Bern et al. "Interactive design of animated plushies." TOG 2017.
[A6] Chen et al. Encore: 3D printed augmentation of everyday objects with printed-over, affixed and interlocked attachments. UIST 2015.
[A7] Ha et al. Learning to Walk in the Real World with Minimal Human Effort. CoRL 2021.
---
Rebuttal Comment 1.1:
Title: nice work!
Comment: Thanks a lot for the rebuttal and the nice work! The authors cleared most of my concerns. I would like to keep my rating of 7 (Accept).
Strengths: 1. In general, the paper is well written, with only minor flaws. Even those unfamiliar with soft robot design will find the paper easy to comprehend.
2. Although diffusion models are expressive and powerful, their performance on tasks involving physical interaction often falls short. Thus, injecting a physics prior, or a 'physics-augmented diffusion model', is crucial. I think the method proposed in this paper is interesting and promising.
3. The evaluation is comprehensive and thoughtful. The physical robot is impressive.
Weaknesses: Overall, I did not identify any major weaknesses in the paper, but here are a few points that could strengthen it:
1. While the writing is generally clear, certain sections could benefit from clearer exposition, such as:
* The section on diffusion as co-design is not very intuitive, especially for audiences not familiar with soft robot design. Specifically, it should be clearer how gradient-based optimization benefits robot design and what exactly line 152's "synergy" means.
* It would be helpful if the authors clarify that the "condition" in this work actually refers to text.
2. The robot's actuator and stiffness seem oversimplified, having only constant stiffness. Given that the gradient of $\Psi_{act}$ is almost zero, it appears that the actuator and stiffness are solely determined by the geometry.
3. A similar idea of tuning in the embedding space is proposed in [1]. A discussion and connection to this existing work could be interesting.
4. In general, the method the paper uses to inject a physics prior into the generation process could be applicable to more general scenarios. Works like Diffuser[2] or Decision Diffuser[3] generate state sequences with diffusion models, but the generated states can sometimes be physically implausible. A deeper discussion about the potential of the method could make the paper stronger.
[1] Gal, Rinon, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik and Daniel Cohen-Or. “An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion.”, ICLR, 2023.
[2] Janner, Michael, Yilun Du, Joshua B. Tenenbaum and Sergey Levine. “Planning with Diffusion for Flexible Behavior Synthesis.”, ICML, 2022.
[3] Ajay, Anurag, Yilun Du, Abhi Gupta, Joshua B. Tenenbaum, T. Jaakkola and Pulkit Agrawal. “Is Conditional Generative Modeling all you need for Decision-Making?” ICLR, 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. I do not fully understand how the k-means clustering is performed for actuator and stiffness generation. Specifically, what kind of feature is used for clustering?
2. In line 86, which structural biases are you referring to?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer zr6A for acknowledging that our approach is interesting and promising. We address the remaining questions below.
**Clearer exposition in diffusion as co-design.**
Gradient-based optimization has been shown to achieve more efficient and effective design search in soft robot co-design [19,25,44,48], especially for soft robots with a continuum of bodies and non-rigid contact. Thus, we aim to connect gradient-based optimization to diffusion-based generative processes in our work.
The “synergy” in line 152 is between diffusion models and energy-based models, not robot co-design; it draws a connection to MCMC sampling in energy-based models and makes the update of the diffusion process more “gradient-descent-like”. This allows us to elegantly formulate gradient-based optimization, which is commonly used in soft robot co-design, within the diffusion-based generative process. Please refer to [11,12,42] for more details on the theory, and to the appendix section D paragraph “Connection to MCMC” for a more complete theoretical motivation.
More precisely, the condition refers to the embedding, which comes either from text inputs, from image inputs, or from direct optimization in the embedding optimization stage to achieve physical utility.
We will improve the exposition in section 2.4 paragraph “Diffusion as Co-design” in the revision.
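To make the “gradient-descent-like” reading concrete, a toy sketch of one such MCMC-style update follows; all names, scales, and the quadratic stand-in for the simulation loss are our illustrative assumptions, not the exact update in the paper:

```python
import numpy as np

def guided_step(x, eps_model, sim_grad, kappa=1.0, eta=0.01, sigma=0.005, rng=None):
    """One gradient-descent-like MCMC update on a diffusion sample.

    `eps_model(x)` plays the role of the learned noise prediction (a score-like
    term) and `sim_grad(x)` stands in for the gradient of a physics objective
    from differentiable simulation, weighted by the guidance scale `kappa`.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grad = eps_model(x) + kappa * sim_grad(x)
    return x - eta * grad + sigma * rng.standard_normal(x.shape)

# Toy check: with a zero score term and the gradient of a quadratic
# "simulation loss" |x|^2, repeated updates drift the sample toward
# the low-loss region near the origin.
rng = np.random.default_rng(0)
x = np.full(3, 5.0)
for _ in range(2000):
    x = guided_step(x, lambda z: np.zeros_like(z), lambda z: 2.0 * z, rng=rng)
```

The point of the sketch is only that the physics gradient enters the sampler exactly like an extra gradient-descent term, which is what connects the diffusion update to co-design optimization.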
**Actuator and stiffness.**
The goal of DiffuseBot is to demonstrate the potential of using diffusion models to generate soft robot designs and to leverage the knowledge of pre-trained generative models learned from a large-scale 3D dataset. Under this setting, the generated output of the diffusion model provides only the geometry of robot designs, leading to our design choice of making the actuators and stiffness fully dependent on the geometry. This may be a reasonable simplification, as prior work [48] has shown that geometry, along with properly set actuators and stiffness (we manually design a proper mapping from geometry to actuators and stiffness in this work), roughly reflects the performance of a soft robot design. For better generality, one potential remedy is to optimize the actuators and stiffness independently of the geometry generated by the diffusion model, i.e., apply DiffuseBot and then (or simultaneously) directly optimize the actuators and stiffness. Another interesting direction, for actuators, is to leverage part-based models [A5] to decompose a holistic geometry into parts (or different actuator regions in soft robots).
**The connection to [A1].**
There is some synergy between the textual inversion of [A1] and the embedding optimization in DiffuseBot. Both aim to tune the embedding toward reflecting certain properties of the generated output, i.e., describing the generated images in [A1] and improving physical utility in DiffuseBot. The major difference lies in the data/samples used to carry out the optimization. Textual inversion performs a direct optimization using the latent diffusion model loss (Eq. (2) in [A1]), which computes losses on noisy samples/latents corrupted from a real dataset. On the other hand, since it is tricky to define a real dataset for robot design (as discussed in lines 40-44 and 125-130), the embedding optimization in DiffuseBot computes losses on noisy samples corrupted from self-generated data filtered by robot performance (as in Algorithm 1 and section 2.4). Conceptually, it is more like a mixture of diffusion model training and online imitation learning such as DAgger [A4].
We will include this discussion in the revision.
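The loop described above (generate, filter by simulated performance, fit the embedding on the survivors) might be sketched as follows; the callables and the scalar toy setup are placeholders standing in for the paper's Algorithm 1, not its implementation:

```python
import numpy as np

def optimize_embedding(generate, evaluate, loss_grad, emb,
                       steps=200, samples_per_epoch=8, top_k=4, lr=0.1):
    """Toy embedding optimization on self-generated, performance-filtered data.

    Each epoch samples designs conditioned on `emb`, keeps the top-K by the
    (simulated) performance `evaluate`, and takes a gradient step on a
    diffusion-style loss over that buffer via `loss_grad(emb, buffer)`.
    """
    buffer = []
    for _ in range(steps):
        buffer.extend(generate(emb) for _ in range(samples_per_epoch))
        buffer = sorted(buffer, key=evaluate, reverse=True)[:top_k]  # keep top-K
        emb = emb - lr * loss_grad(emb, buffer)
    return emb

# 1D toy run: designs near 3.0 score best, so the embedding drifts toward 3.
rng = np.random.default_rng(0)
out = optimize_embedding(
    generate=lambda e: e + rng.standard_normal(),
    evaluate=lambda d: -abs(d - 3.0),
    loss_grad=lambda e, buf: e - float(np.mean(buf)),
    emb=0.0,
)
```

The DAgger-like flavor is visible here: the data the embedding is fit to is produced by the current embedding itself, then filtered by performance, rather than drawn from a fixed real dataset.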
**Discussion on more general applications [A2,A3].**
A potential and interesting way to adapt DiffuseBot to other applications such as motion planning or control [A2,A3] is to view a generated robot as one snapshot/frame of a motion/state sequence, where the physics prior becomes a dynamics constraint across timesteps (e.g., robot dynamics or contact dynamics that enforce non-penetration). The physics prior can be injected similarly to diffusion as co-design, propagating the enforcement of physical plausibility from differentiable physics-based simulation to the diffusion samples. For example, considering states at two consecutive timesteps, we can compute a loss in the differentiable simulation measuring the violation of physical constraints on robot dynamics or interaction with the environment. We can then compute gradients with respect to either control or design variables; for gradients in control, this would essentially augment works like [A2,A3] with classifier-based guidance toward physical plausibility; for gradients in design, this would closely resemble optimizing toward the motion sequence of a shape-shifting robot.
We will include this discussion in the revision.
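As a concrete toy instance of such a constraint-violation loss, the sketch below penalizes a generated state sequence for deviating from simple Euler integration; the real dynamics discussed above (contact, actuation) are far richer, and every name here is an illustrative assumption:

```python
import numpy as np

def dynamics_violation(states, dt=0.1):
    """Penalty measuring how much a state sequence violates toy dynamics.

    `states` is a (T, 2) array whose rows are [position, velocity]; the
    checked constraint is plain Euler integration:
    pos[t+1] = pos[t] + dt * vel[t]. The gradient of this penalty with
    respect to the states could then guide diffusion samples toward
    physically plausible sequences.
    """
    pos, vel = states[:, 0], states[:, 1]
    residual = pos[1:] - (pos[:-1] + dt * vel[:-1])
    return float(np.sum(residual ** 2))

# A constant-velocity trajectory satisfies the constraint exactly...
t = np.arange(5) * 0.1
consistent = np.stack([2.0 * t, np.full(5, 2.0)], axis=1)
# ...while a "teleporting" sequence is penalized.
broken = consistent.copy()
broken[3, 0] += 1.0
```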
**Features used in k-means clustering.**
We use the 3D coordinates offset by the center of the geometry as the features for k-means clustering. We will clarify this at line 120 in the revision.
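A minimal sketch of this clustering step, assuming a plain Lloyd's-algorithm k-means and a hypothetical cluster count (the actual implementation and settings may differ):

```python
import numpy as np

def cluster_regions(points, n_clusters=2, iters=20, seed=0):
    """Toy k-means over center-offset 3D coordinates.

    Mirrors the feature choice described above (coordinates offset by the
    geometry's center); the clustering itself is a minimal Lloyd's loop.
    """
    feats = points - points.mean(axis=0)  # center-offset features
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_clusters, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centers (keep the old center if a cluster empties)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels

# Two well-separated blobs should fall into two distinct clusters,
# e.g., two candidate actuator regions.
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(-5, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
labels = cluster_regions(pts, n_clusters=2)
```

Because the features are just centered coordinates, the clusters are spatial regions of the robot body, which is what actuator placement needs.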
**Structural biases in line 86.**
The structural biases refer to the knowledge in pre-trained 3D generative models learned from large-scale 3D datasets. The idea is, instead of searching for good robot designs from scratch, to explore the space of what a 3D generative model has learned, which provides biases toward diverse and sensible 3D structures. We will provide a clearer description in the revision.
**References**
[A1] Gal et al. “An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion.”, ICLR, 2023.
[A2] Janner et al. “Planning with Diffusion for Flexible Behavior Synthesis.”, ICML, 2022.
[A3] Ajay et al. “Is Conditional Generative Modeling all you need for Decision-Making?” ICLR, 2023.
[A4] Ross et al. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS 2011.
[A5] Kaiser et al. A survey of simple geometric primitives detection methods for captured 3D data. CG 2019.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough response.
It would be great if the authors could integrate them into the revised manuscript, particularly those changes regarding clarity.
Keeping the writing clear for the audience is important.
Since I am not an expert in soft robot design, I will keep the score as is.
Thanks! | Summary: The paper introduces DiffuseBot, a system that aims to simplify and automate the design of soft robots in simulation and real-world systems. DiffuseBot uses diffusion-based algorithms to co-design soft robot morphology and control for specific tasks, combining the diversity of evolutionary algorithms with the efficiency of gradient-based optimization. The system is made possible by advancements in AI-driven content generation.
However, existing generative algorithms face challenges when applied to physical soft robot co-design, such as the lack of consideration for physics and task performance. To overcome these, DiffuseBot uses physical simulation to guide the generative process of pretrained large-scale 3D diffusion models. It also develops an automatic procedure to convert raw 3D geometry into a format compatible with soft body simulation.
The system optimizes the embeddings that condition the diffusion model, skewing the sampling distribution toward better-performing robots as evaluated by a simulator. It also reformulates the sampling process to incorporate co-optimization over structure and control.
DiffuseBot has been tested on a wide range of tasks, demonstrating its superiority to comparable approaches. It also allows for human input in the robot generation process and has been used to create a proof-of-concept 3D-printed real-world robot. The paper contributes a new framework that augments the diffusion-based synthesis with differentiable physics simulation, methods for driving robot generation in a task-driven way toward improved physical utility, and extensive experiments in simulation to verify the effectiveness of DiffuseBot.
Strengths: This paper is robust and comprehensive in its approach. It introduces an innovative method that applies diffusion models to the co-design of robots, representing a significant contribution to the field. The authors have ensured thorough experimental coverage by testing their system, DiffuseBot, on a diverse range of tasks. This extensive testing underscores the versatility and applicability of the proposed method. Furthermore, the paper is not limited to theoretical constructs but extends to practical, real-world applications. The authors demonstrate this by providing a proof-of-concept 3D-printed real-world robot, thereby solidifying the relevance and potential of their research in real-world scenarios.
Weaknesses: The paper does not exhibit any significant shortcomings or areas of concern.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In your paper and supplementary materials, you've provided a qualitative discussion on the challenges of translating simulated results into the fabrication of real robots based on the design developed in simulation. Could you delve deeper into this issue by providing more detailed, quantitative results that highlight the discrepancies between the behavior of the simulated robot and its real-world counterpart? Additionally, could you propose potential solutions aimed at minimizing this gap between simulation and reality?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper effectively addresses all identified limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer 7Vvn for recognizing our paper as a robust, comprehensive, and innovative work supported by extensive experiments and a proof-of-concept physical robot that demonstrates the potential for future research. We address the remaining suggestions below.
**Simulation and physical robot: quantitative analysis and potential future solution.**
Thank you for raising this extremely interesting question. To start, we would like to highlight that the main focus of our work is to showcase the fascinating possibility of applying diffusion-based generative models to soft robot co-design by augmenting physics, and the hardware experiment is more of a proof-of-concept that minimally demonstrates the potential. The sim-to-real issue in soft robots involves materials, actuation, fabrication, and many other factors, and is an open and extremely challenging research question.
To explore the quantitative gap between the behavior of the physical and simulated robots, we conducted an experiment under the following conditions, with a setup commonly adopted in the soft robot literature [A1]. The objective was to measure the change in distance between the two tips when we pull/release two tendons: one for closing the gripper (flexion) and the other for opening it (extension). The tendons were pulled or released in increments and decrements of 2mm, and the results are depicted in Figure A1 in the one-page pdf in the global response.
When contracting the tendon to flex or extend the fingers, both the simulation and real-robot results show log-shaped curves. The pattern in the physical robot plot is a commonly observed phenomenon called hysteresis. However, the main difference between the simulation and the real-world case appears when releasing the tendon from a fully contracted state: in the real robot experiment, the tip distance changes rapidly, while in the simulation, the opposite effect is observed.
One plausible explanation for this disparity is the friction direction and elongation of the tendons. During the transition from tendon contraction to tendon release, the tension of the tendon at the end-effector may change suddenly due to the change in friction direction. Also, since we only control the motor position (not the tendon position) to pull/release the tendon in 2mm steps, the exact tendon length may not be exactly the same once tendon elongation is considered.
Given that the gap between simulation and real robot performance seems to originate from the actuation/transmission method, our future work will focus on developing a tendon-driven actuation simulation framework. This framework aims to address the differences and improve the accuracy of our simulations. We are exploring other possible explanations for the sim-to-real gap and will investigate any additional factors that may contribute to the observed discrepancies. Overall, as a high-level general solution, we believe (1) adjusting parameters based on the observed sim-to-real gap and repeating the design process, or (2) building a more accurate physics-based simulation (which can be straightforwardly plugged into DiffuseBot), can largely bridge the sim-to-real gap of fabricating physical robots; or, more interestingly, connecting generative models to commercial-level design and simulation software.
**References**
[A1] Fang, B., Sun, F., Wu, L., Liu, F., Wang, X., Huang, H., Huang, W., Liu, H. and Wen, L., 2022. Multimode grasping soft gripper achieved by layer jamming structure and tendon-driven mechanism. Soft Robotics, 9(2), pp.233-249.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I understand that it was not the main focus of the paper, and I appreciate the detailed analysis of the potential sources of the sim2real gap. I'll keep the rating as is, and I think it's a strong and interesting work. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful and constructive feedback. We are encouraged to hear the reviewers acknowledge,
- that the proposed approach is robust, innovative, and extends beyond theoretical construct to practical, real-world applications (reviewer 7Vvn), interesting and promising as a crucial solution to inject a physics prior to diffusion models (reviewer zr6A), novel and interesting (reviewer syw2), and introduces a new framework that augments diffusion-based synthesis with physical dynamical simulation (reviewer 5Fqn);
- that the paper is well-written and easy to follow with overall clarity (reviewer zr6A, syw2, vV19);
- that the results verify the effectiveness of DiffuseBot (reviewer 5Fqn), have thorough experimental coverage and extensive testing (reviewer 7Vvn), are comprehensive and thoughtful (reviewer zr6A), and provide extensive comparisons with baselines and ablation studies (reviewer syw2);
- that the proof-of-concept physical robot solidifies the relevance and potential of the research in real-world scenarios (reviewer 7Vvn) and is impressive (reviewer zr6A).
In response to this feedback, we provide individual responses below to address each reviewer's remaining concerns, improve the clarity of missing details, and provide additional discussion that strengthens our paper. Briefly, we summarize the added experiments and revisions to the paper:
- Add quantitative analysis on the behavior of the simulated and physical robots, along with further discussion on physical robot fabrication.
- Add a paragraph for the discussion on limitations.
- Add a comparison to an additional baseline: a more recent soft robot co-design method.
- Add a comparison to an additional baseline: a VAE-based generative model.
- Report additional statistics including mean and standard deviation for baseline comparison.
- Add more clarification to the paper and discussion on relevant works.
For more details, please check individual responses. We thank all reviewers for their time and efforts! We hope our responses have persuasively addressed all remaining concerns. Please don’t hesitate to let us know of any additional comments or feedback on improvement.
Note that we include all additional experimental results in the one-page pdf submitted along with this global rebuttal response.
**A paragraph dedicated to limitations.** Re reviewer 5Fqn, syw2, vV19. We will add this to the revision.
“The major limitation of DiffuseBot is that we make a simplification in the parameterization of actuators and stiffness; we make the two design specifications depend on robot geometry (see more technical details in Section 2.3, paragraph Actuators and Stiffness). This works well with a properly-crafted mapping from geometry to the other specifications, yet it limits the potential by human prior with little use of the generative power. While this may be reasonable, since properly-set actuators and stiffness based on geometry (hand-tuned empirically in this work) roughly reflect task performance, a more flexible parameterization can definitely lead to improved performance. A potential remedy is using part-based 3D generative models for actuators and direct optimization for stiffness. Another limitation is the gap between simulated results and real robots. While the hardware experiment serves as a proof-of-concept that minimally demonstrates the potential, physical robot fabrication and real-world transfer face countless non-trivial challenges, including stiffness and actuator design, the sim-to-real gap, etc. Addressing these may require studies on more high-fidelity physics-based simulation, which can be straightforwardly plugged into DiffuseBot.”
Pdf: /pdf/f16d5eb664b497787f779ea7f0de90a17099cb87.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a physics-augmented diffusion model, called DiffuseBot, that generates soft robot morphologies capable of excelling in a wide spectrum of tasks. DiffuseBot bridges the gap between virtually generated content and physical utility by (i) augmenting the diffusion process with a physical dynamical simulation which provides a certificate of performance, and (ii) introducing a co-design procedure that jointly optimizes physical design and control by leveraging information about physical sensitivities from differentiable simulation. In the experiments, they showed a range of simulated and fabricated robots along with their capabilities.
Strengths: 1. The paper introduces a new framework that augments diffusion-based synthesis with physical dynamical simulation in order to co-design task-driven soft robots in morphology and control.
2. The method leverages optimizing input embeddings and incorporating differentiable physics into the diffusion process for driving robot generation in a task-driven way toward improved physical utility.
3. They performed experiments in simulation to verify the effectiveness of DiffuseBot, extensions to text-conditioned functional robot design, and a proof-of-concept physical robot as a real-world result.
Weaknesses: The presentation in this paper was sometimes unclear, as raised in the questions below. The paper admits that there are countless non-trivial challenges in physical robot fabrication and real-world transfer, including stiffness and actuator design and the sim-to-real gap. However, in my opinion, the other contributions of this paper may outweigh these weaknesses.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Section 2 before 2.1, there was no reference to 2.2. Is it fine?
2. I did not find the explicit definition of bold c in Eq. (5).
3. L159: what does the slash mean?
4. In the experiments, I want to know why such tasks were selected (i.e., the motivation for the tasks).
5. I cannot find an explanation of the performance metrics used in, e.g., Tables 1 and 2 (which sometimes have negative values). This information is important and should be mentioned in the main text.
6. A (short) introduction of the baseline models, and the reasons for choosing them, could be mentioned.
7. The figure at the bottom of page 7 was Table 4, but may be incorrect. And Figure 4 in L233 may be incorrect.
8. Section 3.4: The paper discusses the use of textual inputs and is very interesting. Can the authors discuss the potential for using other input formats?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The conclusion section seems to contain little discussion of limitations drawn from the experimental results; general limitations were mentioned in other parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 5Fqn for positive comments on the soundness and the contribution of our work. We address the remaining questions as below.
**Challenges of physical robot fabrication.**
Thanks for recognizing the contribution of our work in spite of these non-trivial challenges of real-world transfer. We further conduct a simple quantitative analysis on sim-to-real transfer. Please check out more details in the response to reviewer 7Vvn along with the experimental results shown in the one-page pdf in the global response as well as the in-depth discussion on manufacturing physical robots in the response to reviewer syw2.
Note that this experiment only serves as a preliminary study; a more comprehensive and in-depth analysis, along with further contributions, should be done in future research. We will also dedicate a paragraph to discussing these challenges and potential remedies in the limitation section in the revision.
**No reference to 2.2.**
We will add the following in the first paragraph of section 2 in the revision,
“… then describe the proposed DiffuseBot framework, which consists of diffusion-based 3D shape generation (Section 2.2), a differentiable procedure …”
**Explicit definition of \bold c.**
The **c** in Eq. (5) is the embedding to be optimized. We will add a clearer definition at line 133 right after Eq. (5) in the revision.
**L159: what does the slash mean?**
It is a typo and it should be a period. We will update this in the revision.
**Motivation for the tasks.**
At a high level, we select tasks that
1. can cover a wide spectrum of existing robotics tasks: we briefly categorize tasks into passive dynamics, locomotion, and manipulation. Note that passive dynamics tasks are explicitly considered here since there is no active control of robot bodies, making optimization on robot design a direct factor toward physical utility.
2. only involve lower-level control/motion without the complications of long-term or higher-level task planning: we select tasks that mostly involve a few motor skills; e.g., in manipulation, instead of pick-and-place, we simply aim at picking up/gripping an object.
3. are commonly considered in other soft robot co-design literature: all proposed active tasks are widely used in the soft robot community, including crawling [5,7,35,48], hurdling/jumping [19,A1,A2], and manipulating objects [3,8,27].
4. may induce more visible differences in robot designs between performing and non-performing ones, to facilitate evaluation and algorithmic development: we select tasks based more on heuristics and intuition; e.g., in crawling, we expect leg-like structures to outperform other random designs.
We will include the above discussion in the appendix section D in the revision.
**Explanation about the performance metric.**
The performance metrics are described in the appendix Section D. We will add a brief description as below and a pointer in Section 3.1:
“We refer the reader to the appendix Section D for more detailed task descriptions and performance metrics.”
**Brief introduction to the baselines.**
In the revision, we will add the following paragraph in section 3.2 with more details in the appendix:
“In Table 2, we compare with extensive baselines of soft robot design representations: the particle-based method has each particle possessing its own distinct parameterization of design (geometry, stiffness, actuator); similarly, the voxel-based method specifies design at the voxel level; the implicit function uses a shared multi-layer perceptron to map coordinates to design; DiffCPPN uses a graphical model composed of a set of activation functions that takes in coordinates and outputs design specifications. These baselines are commonly used in gradient-based soft robot co-design [19,44,48].”
**Incorrect labeling of Table 4.**
Thanks for catching these typos. The Table 4 at the bottom of page 7 should be labeled as Figure X, and the reference to Figure 4 in L233 should be to Figure X. We will fix the labeling and referencing in the revision. (After fixing this issue in the manuscript, X is 6 and the original Figure 6 becomes Figure 7.)
**Other input formats than texts.**
The use of textual inputs in addition to the embeddings optimized toward physical utility is possible because both can be consumed by the diffusion model to produce guidance for the diffusion process $\epsilon$. More concretely, in DiffuseBot we use the CLIP feature extractor as in Point-E, which allows extracting embeddings for both text and image modalities; these can then be used as a condition $\mathbf{c}$ in the diffusion model. Thus, we can also incorporate images as inputs and perform the exact same procedure as for textual inputs. Theoretically, the textual inputs are incorporated following the intuition in lines 162-165, where they additionally provide gradients toward following the textual specification. Similarly, image inputs can also be processed to provide gradients, since CLIP embeddings live in a joint space of images and language. More interestingly, if we build DiffuseBot on models other than Point-E that can consume embeddings for other modalities, such as audio, as conditioning, we can straightforwardly perform robot design generation guided by those corresponding input formats (and, meanwhile, toward physical utility). Note that this critical feature of compositionality across different sources of guidance throughout the reverse diffusion process is one of the biggest advantages of diffusion-based models as opposed to other types of generative models.
We will include this discussion in the appendix in the revision.
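The compositionality of guidance sources described above can be sketched schematically. Below is a minimal, hypothetical illustration (not the authors' implementation; function names and weights are assumptions) of combining two conditional signals in a classifier-free-guidance style, where each conditional noise prediction's deviation from the unconditional one is weighted and summed:

```python
def composed_guidance(eps_uncond, eps_text, eps_phys, w_text=1.0, w_phys=1.0):
    """Schematic composition of two guidance sources (e.g., text and
    physical utility) for one reverse-diffusion step: sum the weighted
    deviations of each conditional noise prediction from the
    unconditional one. Scalars stand in for the model's predictions."""
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_phys * (eps_phys - eps_uncond))

# Toy example: unconditional 0.0, text-guided 1.0, physics-guided -0.5
eps = composed_guidance(0.0, 1.0, -0.5, w_text=1.0, w_phys=1.0)  # -> 0.5
```

Because each guidance term enters additively, further modalities (e.g., an image- or audio-conditioned prediction) could be appended as extra weighted deviations without changing the sampling loop.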
**Limitation in the conclusion.**
Please check the paragraph about limitations in the global response.
**Reference**
[A1] Tolley, M.T., et al., An untethered jumping soft robot. IROS 2014.
[A2] Bartlett, N.W., et al., A 3D-printed, functionally graded soft robot powered by combustion. Science 2015.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for replying to my comments. I confirmed and understood them, but I am not an expert in soft robots, so I will leave my rating as is. | null | null | null | null | null | null |
Comparing Causal Frameworks: Potential Outcomes, Structural Models, Graphs, and Abstractions | Accept (poster) | Summary: In this paper, the authors compare the causal frameworks of the Rubin causal model (RCM) and the structural causal model (SCM) from a logical perspective. This is a pretty theoretical work in which RCMs are connected to SCMs using the notion of *representability*, which describes whether an RCM can be represented by an SCM. Utilizing causal abstractions, it is shown that there exists a representable low-level RCM for any RCM. Using a neutral formal language, the authors also model assumptions for causal inference, which can be applied to both RCMs and SCMs. The authors highlight how related both frameworks are and do not state a general preference towards one or the other.
Strengths: 1. This paper is a good contribution towards a theoretical comparison of RCMs and SCMs, comparing them on a very precise, fundamental level.
2. The examples included in the paper are nice and are helpful for understanding (but also see point 2 in weakness section).
3. The writing is very clear and concise.
4. I like that a neutral language is used, giving no preference to either framework.
Weaknesses: 1. While I acknowledge the page limit for the submission, the paper would have benefited from some additional examples and clarifications for an easier and faster understanding. Maybe the appendix would be a good place for that.
2. There are several parts in the text where the notation could have been explained better and is not easily understandable (see "Minor Criticism and Comments").
3. Results about the relationship between RCMs and SCMs included in this paper are not very surprising to me, however, the theoretical framework is still very useful, so this should not be seen as a major weakness.
### Minor Criticism and Comments
- Line 53: It should say here that "SUTVA" means "Stable Unit Treatment Value Assumption"
- Line 77: What is $B$?
- Line 121, also later: "Ex. 2" does not link to "Example 2"
- Line 197: What is the inverse of a projection, i.e., $\pi^{-1}$? Intuitively, it makes sense to me but I think it could be explained more clearly
- Line 200, 201: $m$ and $n$ are used without explanation
- Line 203: I understand the idea behind $\tau^{-1}$ but I think a clear definition would be useful
- Line 283: What is $Z$? As in what is the meaning of $Z$? Is it the treatment assignment, with $X$ being whether treatment is applied?
- **Please check** I think the monotonicity assumption (Equation 6) is not defined correctly. It depends on how exactly one interprets $X$ and $Z$, but this definition looks incorrect to me. I read it as "If an individual gets assigned treatment and takes it, it follows that this individual would also take treatment if it was not assigned"
- Line 304: Please state what "ITT" stands for ("Intention-To-Treat" I assume)
- Lines 347, 348: Should it not be $\mathbf{Pa}_Y^\mathcal{G}$ instead of $\mathbf{Pa}_V^\mathcal{G}$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As far as I understand it, the SCM definition only covers hard interventions (Definition 4). How would the extension to soft interventions for SCMs change further results of the paper?
2. I do not understand the sentence in line 354/355. How can equation 6 not be valid if $Z \rightarrow X$? Do we not assume that $Z \rightarrow X$ (that the treatment assignment has an effect on whether treatment is applied)? Or is there another mistake in my understanding? Overall, this sentence is not clear to me.
3. While not the scope of this paper, I would be interested in the implications of the differences, which were hinted at. For example, what kind of "insights that arise when using each that are less transparent when using the other"? Could a language like the one in this paper help make insights more transparent in the framework in which they would otherwise be seen as less transparent? And more generally speaking, how do these results help us use causal frameworks "better"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No concerns on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their meticulous comments.
We appreciate the call for more examples and clarifications and have added several new examples, with discussion, to the appendix. Please see also our "global" response to all reviewers.
Concerning the minor criticism and comments:
- Line 53: thanks for the catch. We have moved the full explanation of this abbreviation from line 247 to 53.
- Line 77: $B$ is an arbitrary indexing set for the family mentioned on line 77. We have now made this explicit.
- Line 121: we have fixed the link.
- Line 197: the ${}^{-1}$ refers to the inverse image, a notion defined for mappings in general (the same applies regarding $\tau^{-1}$ in your comment about line 203). To make this maximally transparent, we have now defined inverse images in Section 1.1 ("Preliminaries").
- Line 200, 201: $m$, $n$ abbreviate the cardinalities $|O_{\mathrm{L}}|$ and $|O_{\mathrm{H}}|$ of low- and high-level outcome sets respectively. We have added an explanation.
- Line 203: see response to comment about Line 197
- Line 283: yes. To make this clearer, we have added an explanation of the meaning of $X$, $Z$, which are indeed interpreted as treatment and prescribed treatment respectively, and elaborated on our explanation of monotonicity ("no defiers," line 281).
- "Please check": Indeed, $z^+$ and $z^-$ should be swapped in equation (6), and we have done this for the final version.
- Line 304: we have added an explanation.
- Line 347, 348: we have fixed these typos.
Responses to questions:
1. Perhaps surprisingly, it is possible to show that soft interventions are fully reducible to hard interventions within our language $\mathcal{L}$ (in the sense that soft effects are equivalent to distributions on the results of hard effects). Thus, including such soft interventions would not change our results to the extent that they are couched in $\mathcal{L}$. If the reviewer feels it would be important to mention this, we would be happy to add a footnote (and short accompanying appendix subsection) clarifying this.
2. We mean that there is no $\mathcal{G}$ including the edge $Z \rightarrow X$ such that $\mathrm{T}(6)$ is valid over $\mathfrak{M}(\mathcal{G})$, i.e., is true in all SCMs with graph $\mathcal{G}$. Since, as you say, we do indeed assume that $Z \rightarrow X$, this means that monotonicity (6) is not implied by any acceptable graph. Thus it is not a graphical assumption—its source must be fundamentally extra-graphical, contrasting it with, e.g., the exclusion restrictions. We have inserted an additional sentence making this clear.
3. We really appreciate this question and think it's an important point for connecting this theoretical work with more practical issues in causal inference. We believe the framework in the paper offers a distinctive—and notably, objective—perspective on the comparison. By framing causal inference in terms of derivability in $\mathcal{L}$, this can help illuminate inferential relationships among assumptions and possible conclusions. Examples of this are the observations in the paper about what exactly is necessary for the well-known LATE derivation, as well as our graphical completeness result. But we definitely don't want to suggest that the framework is unique in its promise for such clarification. The important body of research on SWIGs and related topics (cf. the discussion with Reviewer vtiC) has many other examples of shifts in transparency afforded by new ways of packaging and formulating existing ideas and results, which can in turn spur new results. We are optimistic that the (meta-)framework in the present work will continue to be an important contributor to this wider endeavor.
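As background for the LATE derivation referenced above, the standard identification result (stated here in common instrumental-variables notation, which may differ from the paper's exact formulation; it assumes exclusion, exogeneity, relevance, and monotonicity for a binary instrument $Z$ and binary treatment $X$):

```latex
% Local average treatment effect (Wald ratio) for the compliers:
\mathrm{LATE}
  = \mathbb{E}\bigl[Y(x^+) - Y(x^-) \,\big|\, X(z^+) = x^+,\; X(z^-) = x^-\bigr]
  = \frac{\mathbb{E}[Y \mid Z = z^+] - \mathbb{E}[Y \mid Z = z^-]}
         {\Pr[X = x^+ \mid Z = z^+] - \Pr[X = x^+ \mid Z = z^-]}
```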
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I would like to thank the authors for their response. I will maintain my rating. | Summary: The paper offers a mathematically rigorous comparison of the potential outcomes and structural causal models frameworks for causality. Their comparison goes further than previous work, in part by invoking the idea of an abstraction, and in part by invoking recent axiomiatizations for probabilistic SCMs. As a result, they offer several new and important connections between the two frameworks, as well as offering novel insights into the frameworks separately.
Strengths: Given the importance of both the RCM and SCM frameworks, and the long-standing debate as to how exactly they relate, these novel results about their relation are extremely valuable. This paper does so much in such little space, that I am confident its results will have an impact on many different issues within the causality literature. I believe this paper is an important milestone when it comes to the mathematical foundations of causal frameworks.
Weaknesses: The paper is extremely dense, introducing very sophisticated and complex ideas that are usually given chapter-length expositions in just a few paragraphs, using very compact notation. Therefore it requires both a substantial amount of familiarity with the related literature and a lot of effort from the reader to understand all of it. Ideally there would have been more simple examples such as Example 1 that highlight the main intuitions and illustrate all of the important concepts, making life easier for the reader. I suggest to the authors to write up a much longer journal version of this paper that simply lays out the same content but with a lot more handholding for the reader, both in terms of conceptual clarifications as in concrete examples.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Main questions:
1: As the authors point out, in the literature on abstractions it is common to add a set of allowed interventions to an SCM, bringing them closer to RCMs. In fact, this makes me wonder: is an SCM that includes a set of allowed interventions, simply an RCM which satisfies effectiveness, composition, and reversibility? Clarifying this would help in understanding the intuitions behind some of the results.
2: Relatedly, an RCM such as R' in Proposition 1 that satisfies these three axioms and has no proper extension must be one that includes every intervention, and thus would simply be equivalent to an SCM, right? This would also help in interpreting the comment in 182: an RCM generalizes an SCM by not requiring all interventions to be allowed, and by not requiring the same axioms to hold. It might be interesting to compare how GSEMs fit into this picture. (GSEMs are Generalized Structural Equation Models, introduced by Halpern and Peters.)
3: Again a follow-up: L in Proposition 2 would then be equivalent to an SCM, right? It has no proper extension, thus it must include all interventions, and hence by Proposition 1 it satisfies the axioms for an SCM.
4: An obvious question: have you looked into whether new interesting results emerge if you generalize beyond constructive abstractions?
Minor technical questions/clarifications:
1: When introducing SCMs, why assume that U_V and Pa_V are strict subsets?
2: 228: "close under constructive abstraction".
This wasn't clear to me, because for effective we go from H to L, whereas for representable we go from L to H, so it sounds like close means something different in each case.
3: monotonicity: Here the variables are given a specific interpretation, right? X is treatment, and Z is prescribed treatment? Or is the idea that this relation holds for all potential outcomes? More generally, my background in PO is limited, and I have no sense of what this condition is capturing. Some intuition would be nice.
4: Theorem 2: I’m confused here, because members of S are part of the base language and thus do not include quantification, so are we assuming an implicit universal quantification? Because that sounds at odds with the possibility of using an existential quantifier in Definition 11.
Typos:
35: a "a...
44: tactictly
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review!
Responses to questions:
1. Yes, fixing a set of allowed interventions over an SCM yields an RCM with these three properties. However an RCM with these properties does not always come from an SCM (see Ex. 3). We have clarified this on line 181, changing "possibly not sufficient" to "not sufficient, in light of Ex. 3" and added a footnote to the text of the example.
2. Indeed, such an RCM can be considered equivalent to an SCM. It will, for example, be identical with regard to interpreting our probabilistic logical language. Note however that RCMs and SCMs (Def. 1, 3) are still defined as very different objects. We have updated the explanation on line 185 ("However, ...") to point out this equivalence explicitly.
We are also grateful for the suggestion to compare Peters and Halpern's GSEMs, which were also observed to be expressively equivalent to Blom et al.'s "causal constraint models" (UAI 2019). These are similarly offered as "mechanism-free" generalizations of SCMs. But the focus in that work is on allowing multiple possible outcomes, corresponding, e.g., to multiple equilibria in a dynamical systems model. We've added citations to this important related work.
3. Yes, such an $\mathcal{L}$ would likewise be equivalent to an SCM.
4. We thank the reviewer for this suggestion, which we had not considered. Generalizing beyond constructive abstractions is an interesting direction for future work, as it may allow one to require, e.g. that $\mathcal{R}_{\mathrm{L}}$ be representable by an SCM with a specific graph.
Minor questions:
1. We have changed the notation $\subset$ to $\subseteq$. Indeed, this was not intended to indicate a strict subset.
2. In both directions, the abstraction is from $\mathcal{L}$ to $\mathcal{H}$ (finer to coarser); the phrasing was in opposite orders though so we can see that it may have been misleading. Note that we have actually decided to remove the final claim of Prop. 2 (see response to Question 2 of Reviewer w4ki), so in the event, there will no longer be any conflict here.
3. This principle means that there are no units that do the opposite of what they were prescribed. To make this clearer, on line 283 we have added an explanation of the meaning of $X$, $Z$, which are indeed interpreted as treatment and prescribed treatment respectively, and elaborated on our explanation of monotonicity ("no defiers," line 281).
4. $S$ is meant to be a set of quantified (over $u$) assumptions in $\mathcal{L}_{\text{base}}$, making it consistent with Def. 11, although as you point out this was not explicit. We have clarified this in line 290.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal of the authors. Thanks for the clarifications. | Summary: This paper compares the Rubin causal model (RCM) and structural causal model (SCM) frameworks for causal inference. Specifically, the authors show that RCMs, when encoded with the composition and reversibility properties, represent the same space of counterfactual distributions as SCMs. Moreover, they show that all RCMs are constructive abstractions of some RCM that is representable by an SCM. Finally, they characterize the axioms and assumptions that are sound and complete for counterfactual inferences in both frameworks.
Strengths: 1. The paper offers a refreshing and productive perspective of the two causal inference frameworks that compares them on their axiomatic properties and abilities to encode causal assumptions and perform inferences, as opposed to more controversial philosophical aspects.
2. The work is logically grounded given the definitions and axioms and provides concrete connections between the frameworks. The logic is explained clearly through examples.
3. I found the usage of causal abstractions in the paper unexpected and very interesting, especially since the original sources were developed under the SCM framework. Causal abstractions provide a powerful framework for describing causal properties across different sets of variables. Prop. 2 demonstrates that certain properties such as effectiveness are preserved across abstractions, allowing for a more flexible way of interpreting RCMs.
4. Causal assumptions are an important aspect of causal inference research, but they are typically studied with respect to a fixed framework. This paper provides an interesting approach to encoding the assumptions in a manner that allows comparisons between the RCM and SCM frameworks.
Weaknesses: 1. It is not clear what is the ultimate takeaway of this work, as most of the paper details mathematical connections but not much about their implications. For example, Sec. 1.1 and Prop. 1 claim that SCMs and RCMs are equivalent frameworks in theory, but how should a reader change the way they approach causal inference given this knowledge?
2. The flow between different sections is somewhat strange. It is not clear exactly what the abstractions of Sec. 1.2 are adding to, say, the results of Prop 1. Sec. 2 also seems to be mostly unrelated to the previous section. Perhaps the transitions between the sections could be reworked to fix this.
3. As mentioned in the conclusion, Thm. 3 only applies to graphs for which the components connected through bidirected edges are complete cliques, meaning that the theory is currently incomplete for SCMs. This did not impact my score in this review, but this is definitely an interesting direction of future work. I am curious if the authors have any counterexamples of cases with other graphs where there are additional inequality constraints not captured by the given axioms.
I would also appreciate it if the authors could answer some of my questions in the next section.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1. The preliminaries focus on the space of effective RCMs $R_{\text{eff}}$ and the space of SCMs $M_{\text{uniq}}$. Are there SCMs that are not in $M_{\text{uniq}}$? Is this uniqueness property a characterization of effectiveness in the SCM framework?
2. Can you clarify the relationship between Thm. 1 and Prop. 2? I think both results are interesting, but they seem to have conflicting messages. Prop. 2 seems to be saying that $\mathcal{H}$ is representable if it is an abstraction of a representable RCM, while Thm. 1 seems to be saying that all RCMs are abstractions of a representable RCM.
3. Is Eq. 6 reversed? It seems like the $z^+$ and $z^-$ should be swapped.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments.
In response to your very helpful questions:
1. We have chosen to focus on effective RCMs because this assumption is almost always made in practice. Effectiveness is so desirable that (as we have noted) one would go so far as to introduce additional variables should it fail to hold.
On the SCM side, we focus on $M_{\mathrm{uniq}}$ because it is the most general class (up to measure zero) in which the SCM induces a unique counterfactual distribution, thus permitting comparison with that induced by an RCM. We have added a sentence explicating this. There are certainly SCMs that are not in $M_{\mathrm{uniq}}$. Given the above consideration, they are outside the scope of our paper, but one example is the following: $f_X(Y = y, U = u) = y$, $f_Y(X = x, U = u) = x$. If $X$, $Y$ are both binary then $(X, Y) = (0, 0), (1, 1)$ are both solutions to this SCM under any $u$.
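For illustration (our sketch, not part of the original rebuttal), the non-uniqueness in this example can be verified by enumerating candidate solutions of the cyclic system:

```python
# The cyclic SCM f_X(Y=y, U=u) = y, f_Y(X=x, U=u) = x with binary X, Y:
# a solution is a pair (x, y) satisfying both structural equations at once,
# i.e., x = f_X(y) = y and y = f_Y(x) = x.
def is_solution(x, y):
    return x == y  # both equations reduce to x == y

solutions = [(x, y) for x in (0, 1) for y in (0, 1) if is_solution(x, y)]
print(solutions)  # -> [(0, 0), (1, 1)]: two solutions under any u, so no uniqueness
```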
There is no relationship per se between the uniqueness property and effectiveness, but because of the interventional semantics, effectiveness will hold in every SCM solution (whether these are unique or not), which we have also pointed out in a new footnote.
2. Spurred on by your comment, we have decided to remove the final claim of Prop. 2 (and its mention in Ex. 1) from the main text. We decided it is more important to focus attention on Thm. 1 and avoid the potential confusion you identified. Thm. 1 is central to the paper as a whole, as it identifies a source of apparent incompatibilities between the RCM and SCM frameworks—namely, the level of abstraction.
3. Indeed, $z^+$ and $z^-$ should be swapped in equation (6), and we have done this for the final version. We thank the reviewer for the catch.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: I have read the rebuttal, and I thank the authors for answering my questions. I will maintain my positive rating. | Summary: The paper presents a logical framework to represent both the Rubin potential outcomes (PO) approach and the structural causal model (SCM) approach to causality. It shows that under mild assumption (composition and reversibility) every PO model is representable by an SCM. The paper proceeds to show how the underlying logical framework can then be used to elucidate the assumptions necessary for instrumental variable inference (commonly discussed in the PO context), in particular for the derivation of the local average treatment effect (LATE), and similarly, how the logical language permits the derivation of graphical conditions used for identifiability results in the SCM framework.
Strengths: -- offers formal connection between potential outcome framework and structural causal model framework which elsewhere is often only stated informally
-- gives specific examples using the underlying logical language (developed in more detail in the papers cited) that show the need for specific assumptions in the PO framework that have been subject to much discussion
-- shows with examples how to recover identifiability results in the graphical models framework
-- offers the prospect of a unifying logical framework for understanding causality
These are very interesting results and I really encourage a thorough and clear discussion of how they fit in.
Weaknesses: -- Section 1 provides the most interesting contribution by formally connecting the PO and SCM framework, but the paper then quickly moves on to treat questions of causal inference, rather than fully explaining what can and cannot be translated between the two frameworks now. There has been a very substantive discussion in the literature about (a) the connection between PO and SCMs and to what extent it has been addressed by the single world intervention graphs (SWIGs, citation 28) and (b) the differences in the treatment of counterfactuals even within different versions of the structural equation modeling framework. The present paper is missing a more detailed discussion of how the present contributions fit into that context. Obviously, this would require more space and so a journal paper would be more appropriate for this material. As it stands the paper is very dense and provides a mixture of half a discussion of the unification of PO and SCM and then a couple of very nice examples of re-derivations of known results using the logical framework.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) I would very much like to understand the connection of the present work to SWIGs. This seems to me to be crucial for a paper that provides a unifying approach to PO and SCM frameworks.
2) Am I missing something about the causal abstraction? A whole page is dedicated to it, but for what is discussed in the actual paper it appears to only ensure that one can appropriately control the state spaces of the variables.
3) Please explain cf-sep as a generalization of d-separation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: limitations are appropriately discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments!
We very much appreciate the point about SWIGs and alternative frameworks for reasoning about counterfactuals. In fact, in an earlier draft we had included some remarks about SWIGs in particular, including how they fit into our logical framework. We removed this due to space constraints, but we could use a fraction of the additional allotted page to discuss that connection (still relegating some of the more technical discussion to the appendix).
In short, we can define FFRCISTGs as a subclass of RCMs and show that SWIGs are in fact sound and complete (in the logical sense) with respect to FFRCISTGs. Specifically, the SWIG framework derives the crucial independences and restrictions of these models within our formal language L. We will explain this connection—also re-emphasizing its importance in the larger reconciliation project between counterfactual and graphical traditions—in a new, brief subsection (2.3).
There are aspects of the SWIG framework that are, so to speak, extralogical—meaning that they pertain to how easy it is for humans to understand and assess them—but we believe it is a further testament to the framework in the present work that it can help locate these "quasi-graphical" models in the broader context of RCMs and SCMs, particularly from an inferential perspective.
In response to Question 2 about abstraction, we have edited the text to make the role of abstraction maximally clear. The point here is that the state space of the RCM is given, and our Thm. 1 reveals that by moving to some finer state space (given in the proof) it is possible to ensure that principles characteristic of SCMs hold while maintaining consistency between the finer and coarser spaces.
As for Question 3 about $\textsf{cf-sep}$, it is a generalization in the sense that $\textsf{cf-sep}$ and exclusion restrictions ($\textsf{ER}^{\mathcal{G}}$) suffice (by none other than our completeness result) to derive *all* conditional independences given by d-separation. On the other hand, every instance of $\textsf{cf-sep}$ represents *some* conditional independence implied by d-separation (in an appropriate twin network); our result just shows that strictly speaking the others can be derived from these few (given $\textsf{ER}$). We have elaborated the discussion in our paper to make this relationship clearer.
---
Rebuttal Comment 1.1:
Title: great work, but would like to see connection to SWIGs first before recommending acceptance
Comment: I have read all the comments and thank the authors for the very clear and helpful feedback. I think this is a very interesting paper, but would like to see the details of the connection to the SWIGs worked out before recommending acceptance. I find the rebuttal comments to my review most intriguing and would like to understand them fully. I like this work a lot and think it will eventually be a very significant contribution once the connections are discussed and clarified. I will maintain my rating given the current manuscript.
---
Reply to Comment 1.1.1:
Title: Further details
Comment: We are very grateful to the reviewer for the positive and encouraging remarks, and we completely understand the reservations about adding new material. Just in case it might be helpful, we wanted to offer some further clarifying remarks about the role of SWIGs and what we plan to include about them.
We see the SWIG framework not so much as a means for comparing the RCM and SCM approaches, but rather as an elegant example of how ideas and concepts from the two approaches can be productively combined. As such, it fits nicely in the section on inference, highlighting methods that draw from both approaches.
A SWIG can be understood as a kind of graphical model, characterized in terms of factorization of joint distributions on potential outcomes. Aside from facilitating reasoning with a wider class of expressions than is possible with the notation of do-calculus — see [1] on (conditional) path-specific effects for a great example of this — it also enjoys completeness with respect to independencies implied by a particular kind of RCM (namely, a FFRCISTG [3]). Drawing on existing results [2], we can re-present (and ever so slightly strengthen) this fact as a completeness result for the language L in our paper. The point of doing so is again to gauge the strength of assumptions implied by a SWIG in a common language, which we see as very much in the spirit of the SWIG framework.
In sum, we wanted to clarify that the new subsection is not intended to introduce fundamentally new results, but merely to illuminate how this graphical framework fits into the narrative of our paper. In addition to illustrating the benefits of hybrid frameworks, it also anticipates some of the very connections that we have tried to bring out in our own work. In any case, we would be delighted to answer any other questions about this, or about what we intend to include in an additional allotted page.
[1] D. Malinsky, I. Shpitser, and T. S. Richardson. A potential outcomes calculus for identifying conditional path-specific effects. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89, pages 3080–3088, 2019.
[2] T. S. Richardson and J. M. Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Working Paper Number 128, Center for Statistics and the Social Sciences, University of Washington, 2013.
[3] J. M. Robins. A new approach to causal inference in mortality studies with sustained exposure periods – applications to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512, 1986.
Rebuttal: We are sincerely grateful to the reviewers for their truly helpful and constructive feedback, and also for their encouraging remarks about the work and its significance. We feel that addressing their constructive suggestions has improved the paper and made it more effective.
In the individual responses below we comment on the specific points raised by each. At a general level, we appreciate the concern from several of the reviewers about density. In addition to some of the particular amendments detailed in the individual responses, we are enthusiastic about using the extra page for the final version to add further explanation of key ideas, notation, and concepts. In addition, as discussed in the response to vtiC, we will use about 1/3 of the extra page to comment on how SWIGs fit into the broader framework, emphasizing the importance of such work for illuminating the many facets and connections between research programs in these two traditions.
We look forward to further discussion and opportunities for clarification, and want to thank the reviewers again for their careful and thoughtful attention to our work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Computing a human-like reaction time metric from stable recurrent vision models | Accept (spotlight) | Summary: This paper introduces a new measure of visual artificial network computation 'time', $\xi_{cRNN}$, as the time-averaged uncertainty of a convolutional RNN trained with an evidential deep learning loss (Sensoy et al.). The authors proceed to analyse the dynamics of this network as it solves a range of classification tasks. Importantly, beyond the network learning to solve the tasks, $\xi_{cRNN}$ is generally well correlated with human reaction times, with which it also shares several qualitative features across tasks.
Strengths: Deep network models of the visual system have often focused purely on classification accuracy and neural representations when comparing to biological data. However, this neglects an important aspect of human behaviour, namely the time it takes to arrive at a decision. The present paper tackles this important question using recent ideas from the machine learning literature and represents an interesting step in the direction of capturing the temporal variability in human behaviour.
The authors analyse an impressive breadth of tasks and behavioural data and include many interesting analyses, both qualitative and quantitative.
Weaknesses: The major weakness of the submission is that while there are strong _correlations_ between $\xi_{cRNN}$ and human reaction times, there is less evidence that $\xi_{cRNN}$ is mechanistically similar to human reaction times, since it is fundamentally a measure of uncertainty rather than computation time. In particular, human reaction times generally involve a tradeoff between computation/evidence accumulation and decision making (as in common drift diffusion models). On the contrary, the cRNN has a fixed computational budget and has no need or even capacity for evaluating this tradeoff. This is in contrast to a few previous deep learning models in the literature that are capable of explicitly trading off computation and actions (e.g. Graves et al., Pascanu et al., and refs by the authors in the ML literature, and Jensen et al. in the Neuro/Cogsci literature). These considerations are important because e.g. task difficulty is likely to correlate with both uncertainty and reaction time, and this raises the question of whether a model of one is automatically a model of the other.
It might be interesting to compare the current model to alternative models that more explicitly have adaptive/variable computation time, such as a cRNN that computes until $\epsilon$ reaches a certain threshold (akin to classical drift diffusion models).
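As a toy illustration of this thresholded variant (all quantities hypothetical, not from the paper): run the recurrence until the per-step uncertainty drops below a bound, and read the stopping step out as a reaction time.

```python
# Toy uncertainty-threshold stopping rule, loosely analogous to a drift
# diffusion bound (hypothetical sketch; not the authors' model).
def reaction_time(uncertainty_trace, threshold=0.2):
    for t, eps in enumerate(uncertainty_trace, start=1):
        if eps < threshold:
            return t  # commit as soon as uncertainty crosses the bound
    return len(uncertainty_trace)  # computational budget exhausted

easy = [0.9, 0.4, 0.1, 0.05]  # uncertainty falls quickly -> short RT
hard = [0.9, 0.8, 0.7, 0.6]   # stays uncertain -> RT capped at the budget
print(reaction_time(easy), reaction_time(hard))  # -> 3 4
```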
As the authors mention in L326, another potential weakness is that the present approach does not easily generalize beyond classification tasks, which form a small subset of the types of problems humans and animals are faced with during natural behaviour.
_References:_ \
Graves et al.: "Adaptive computation time for recurrent neural networks", arXiv 2016.\
Pascanu et al.: "Learning model-based planning from scratch", arXiv 2017.\
Jensen et al.: "A recurrent network model of planning explains hippocampal replay and human behavior", bioRxiv 2023.\
Bansal et al.: "End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking", NeurIPS 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: L106: what does 'without loss of generality' mean here?
L123 "our control models lack inference-time flexibility." What does this mean?
L144: It might be worth comparing to or discussing recent work by Bansal et al. (NeurIPS 2022), which develops a way of stably training convolutional RNNs with long computation times that generalize from small to large problem settings.
L156 "for a complete mathematical specification of this loss, refer to SI 1". $\epsilon$ does not seem to be defined in either the main text or SI beyond a qualitative description as 'uncertainty'. It would be good to provide a precise mathematical definition somewhere.
L204: it would be good to briefly explain how this distance measure by Jeurissen et al. is defined (or at least refer to SI where it can be described in more detail).
Figure 3b and 5c: it's not entirely clear to me how these 'activity maps' are computed?
Figure 7b: it might be worth talking a bit more about the differences between the human and model data here instead of doing linear regression and calling it a day. The cRNN data is decidedly not linear and exhibits something resembling a plateau followed by a sharp drop at high discriminability. This seems rather different from the monotonic decrease in human RT, and the authors could perhaps speculate on or discuss this difference.
Supplementary material: it would be worth having a separate folder with just a few example videos of each condition in addition to the larger set of videos currently provided.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately discussed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and useful comments. We offer clarifications, additional numerical analyses, and visualizations to address concerns raised in this review. We hope the reviewer sees value and effort in our responses and is willing to adjust their score.
> **Fixed computational budget of the cRNN**
Thanks for allowing us to clarify this point. We draw an important distinction between a model having a fixed "total" budget versus a budget that can be expended *dynamically* in a stimulus-dependent manner. Consider the results presented in **Fig. R5**. Inspecting the step sizes of the recurrent dynamics (in a low-dimensional latent space) reveals that the cRNN utilizes its computational budget differently for easy (**Fig. R5b**) versus hard (**Fig. R5c**) stimuli. The network practically *stops* (i.e., step size becomes small) after $t=t^*$, where $t^*$ is stimulus-dependent. This is unlike standard BPTT-trained cRNNs and shows how our cRNN computes dynamically and has the potential to evaluate the tradeoff between computation/evidence accumulation.
> **Alternative models with explicit adaptive computational time**
We appreciate this suggestion. We now implement and include comparisons to cRNNs trained with ACT on the incremental grouping task. Here, we report quantitative results for one out of three well-performing (val. acc. $\geq 0.95$) runs. For the other two, we note that the results were qualitatively similar. None of these models showed a significant *positive* correlation between the ACT step count ($\rho$) and human RT, $r= -.26, p=.04$. While for some (but not all), regressing $\rho$ onto Euclidean distance yielded a significant positive linear slope, $b=0.07$, $SE=0.02$, $z=4.20$, $p<.001$, none of the models seemed to capture the effects of other factors known to affect human RT: narrow vs. wide, $d=0.38$, $t(21)=-1.82$, $p=.08$; curved vs. straight, $d=0.03$, $t(21)=-0.13$, $p=.90$; and A vs. B vs. C vs. D in **Fig. 2c**, $F(3,63)=1.84$, $p=.15$.
We hypothesize that, despite ACT being a clever scheme for small networks, the fundamental bottleneck arises from the memory demands imposed by BPTT, which limit step counts for cRNNs and condemn any derived time metric to be coarse. We also note that these are only preliminary results that warrant further exploration.
> **Beyond classification tasks**
We agree that this is a very promising direction for future research. Alternative forced choice tests comprise a dominant part of visual cognitive psychology paradigms for which researchers have amassed a wealth of human RT data. And thus, our motivation to start here. To the best of our knowledge, we present one of the early attempts to leverage RT metrics for alignment, and we are confident in expanding the scope of this work in the future.
> **L106: what does 'without loss of generality' mean here?**
We have clarified this as follows:
The model of interest chosen in this work is the horizontal gated recurrent unit (hGRU; [19]), a convolutional recurrent neural network model that we will canonically refer to as cRNN throughout the rest of the paper. This choice does not imply any loss of generality, as our framework applies to any cRNN architecture.
> **L123 "our control models lack inference-time flexibility." What does this mean?**
By the *lack of inference-time flexibility*, we refer to the inability of our control models trained on easier task settings to generalize to arbitrarily harder task settings.
> **Comparison to Bansal et al. (2022)**
Thanks for this reference. We will factor this into our revised discussion.
> **Mathematical definition of $\epsilon$**
Agreed! We have added this in the description.
Starting at **L41** in **S1.2**: “$S = \sum_{j=1}^{K}\alpha_j$ represents the "total evidence" accumulated by the model (or the Dirichlet strength). $D_{KL}$ is the Kullback–Leibler divergence measure. $\boldsymbol{\hat{\alpha}}$ are the Dirichlet parameters of just the "misleading" evidence given by $\boldsymbol{\hat{\alpha}} = \mathbf{y} + (1-\mathbf{y})\boldsymbol{\alpha}$. We define the instantaneous uncertainty as $\epsilon = \frac{K}{S}$.”
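A minimal numerical sketch of these quantities (our illustration of the standard evidential deep learning bookkeeping; the evidence values are made up):

```python
import numpy as np

# Illustrative Dirichlet parameters for K = 3 classes (values are hypothetical)
alpha = np.array([8.0, 1.0, 1.0])   # strong evidence for class 0
K = len(alpha)
S = alpha.sum()                      # Dirichlet strength: the "total evidence"
epsilon = K / S                      # instantaneous uncertainty: 3 / 10 = 0.3

# "Misleading" evidence for a one-hot label y: alpha_hat = y + (1 - y) * alpha
y = np.array([1.0, 0.0, 0.0])        # true class is 0
alpha_hat = y + (1.0 - y) * alpha    # -> [1., 1., 1.]: true-class evidence removed
print(epsilon)                       # -> 0.3
```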
> **Distance measure from Jeurissen et al.**
We will add this description to the SI. "Jeurissen et al. [41] describe a growth cone distance metric that is sensitive to the topology of the object, proximity of boundaries, and narrowness. The authors construct a series of non-overlapping, varying-sized discs whose centers trace a curve (originating at the fixation location and ending at the cue location). The radii are such that a disc is the maximum inscribed disc at its location. Disks never cross object boundaries. The final distance metric is defined as the total number of discs that constitute this curve."
> **Computing the activity maps**
Thanks for the comment. We mention this in the main text (**L174**). We will repeat it in the figure caption to make it easier for the reader.
"First, we track cRNN dynamics by looking at its latent activity $\mathbf{h_t}$ at every time step $t$, averaged across the channel dimension and smoothed by averaging $\mathbf{h_t}$ and $\mathbf{h_{t+1}}$."
> **Clarification on Figure 7b**
Point well taken! We apply our cRNN on this scene categorization task to showcase the generality of our framework in extending to naturalistic stimuli. While the ability of our metric+cRNN to capture aspects of variations in scene discriminability is a promising sign, we acknowledge that there could be other cRNN architectures that are better suited for this particular task and are subsequently better aligned. We intended to propose our metric as a tool to precisely enable these sorts of comparisons.
> **Re-organizing SI videos**
Thanks for this suggestion. We will keep your comment in mind to highlight representative examples (per-condition) in our project webpage (URL currently protected for double-blindness).
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response of the authors to myself and the other reviewers.
### Fixed computational budget
I appreciate that network iterations might change the latent space less at later iterations for the simpler task compared to the more complex task, and that this intuitively seems to suggest that the model should be able to rely on less computation in this case. However, at the end of the day, the two settings use the same number of network iterations and therefore FLOPs (unless I've misunderstood something). It is exactly this intuition that was the motivation for suggesting an 'adaptive computation time' network, and it is interesting that this did not work since it seems like computation could be terminated earlier for simpler tasks, and the results of the authors show that the network has lower uncertainty earlier for simpler tasks. I wonder if there is another criterion that would be more suitable for 'early stopping' of the computation to develop a network that truly has a 'reaction time' rather than only an uncertainty?
### Without loss of generality
I would suggest not saying "without loss of generality" and just mentioning that the framework generalizes to other cRNN architectures.
### Figure 7b
I appreciate that your framework of course cannot explain all empirical results, and that being able to identify results that are/are not consistent with a given framework is part of its value. However, for this reason, it's also important to be up front about the cases where the data differs from the model.
---
Reply to Comment 1.1.1:
Title: A productive discussion on capturing uncertainty vs. time
Comment: Thanks for reading our rebuttal and your prompt response.
> Computational budget
Another insightful comment, and we sympathize with your FLOPs remark. Typically, techniques for early stopping involve at least one (or more) heuristic choices, such as the threshold value in ACT, that need careful finessing. We try to build a formulation that does not enforce such a selection. By training our cRNN with an attractor dynamics constraint, we wanted to achieve stable latent states that would also result in stable readouts (and thus a stable, bounded metric).
One observation we would like to highlight is that achieving lower uncertainty states does not immediately suggest that these states were reached quickly. In other words, models in our framework can achieve low uncertainty states eventually, after a prolonged period of high uncertainty. Naturally, there are cases where the model reaches attractor states with low uncertainty rapidly (for very easy stimuli), or gradually stabilizes in states with high uncertainty. So in essence, our metric also implicitly captures aspects of time, since it integrates over instantaneous uncertainty. Empirically, we also find this to be critical: this integrated metric, as opposed to just the uncertainty value at the final state, consistently aligns better with human RTs.
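Schematically (our illustration, with made-up uncertainty traces): because the metric averages uncertainty over all steps, a slow descent to the same final uncertainty yields a larger value than a fast one.

```python
# Two hypothetical uncertainty traces ending at the same final value
fast = [0.9, 0.2, 0.1, 0.1, 0.1]  # low uncertainty reached quickly
slow = [0.9, 0.8, 0.6, 0.3, 0.1]  # same endpoint, reached late

def xi(trace):
    return sum(trace) / len(trace)  # time-averaged uncertainty (schematic metric)

print(xi(fast) < xi(slow))  # -> True: the integrated metric implicitly encodes time
```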
Thinking out loud, one way to reconcile these two desiderata (no heuristics *and* an explicit measure of time) might be to learn a controller policy (with RL) that can choose "stop" as an action. Though admittedly, this is a more challenging optimization problem. We plan to add this comment to our extended discussion section.
> Without loss of generality
Thanks. We will follow your suggestion here.
“*The model of interest chosen in this work is the horizontal gated recurrent unit (hGRU; [19]), a convolutional recurrent neural network model that we will canonically refer to as cRNN throughout the rest of the paper. In principle, our framework applies to other cRNN architectures as well.*”
> Figure 7b
Fair enough! We will include this text in manuscript Section 7.2 (Scene categorization; Results):
*While our cRNN models of choice here do not perfectly account for empirical patterns in human reaction times, specifically in their non-monotonic reaction time readouts for the low-discriminability stimuli, they provide a promising start.*
And this text in manuscript Section 8 (Discussion):
*The value proposition of this framework includes the ability to adjudicate models based on the consistency of their latent dynamics with human reaction time data.* | Summary: The authors in this work propose a combination of model output uncertainty predictions along with stable recurrent vision architectures in order to derive a proxy for models' reaction time to process input static images. The authors use the previously published horizontal GRU architecture combined with stable RNN training methods (contractive RBP) in their experiments. The proposed work is technically sound with extensive evaluation on 4 diverse datasets (some inspired by prior work in visual psychology) where the computed proxy model reaction time ($\xi$) trends correlate positively with that of humans performing the same task. Overall, this paper contributes a new way to estimate output uncertainty of recurrent convolutional networks and to evaluate similarity between model & human perception by taking into account temporal dynamics of processing static inputs.
Strengths: + The authors make several key contributions: (1) combining Evidential Deep Learning-based output uncertainty prediction with convolutional RNNs and using the AUC of the output uncertainties over time as a measure of the model's reaction time; (2) training stable recurrent vision models using the above EDL-based readout + objective function in order to obtain models whose temporal dynamics of processing visual inputs matches that of humans performing the same task.
+ The experiments performed are thorough on all datasets, it is clear that the positive correlation between model and human reaction times is present on the tasks evaluated.
+ If the authors were to release the code for some of these datasets (especially the incremental grouping task in Fig 2), this would make the contribution even more valuable as it would encourage further exploration of these understudied grouping problems
+ Clear presentation; The authors have exactly stated their contributions and have written the paper with good clarity and detail. The figures are intuitive and appreciate the video presentations showing model activations and uncertainty through time in the Supplementary.
Weaknesses: - Choice of tasks: While the presented 4 visual cognition tasks are interesting and relevant to cognitive scientists, only the scene recognition task concerns real-world stimuli and natural vision. It would be great if the authors found any interesting patterns of reaction time on natural images or more naturalistic tasks than the ones shown here.
- Advantages of similarity in reaction time: It is very interesting that the authors are presenting recurrent networks that are correlated with humans in terms of reaction time (and the benefits as a model of biological visual processing are clear), but does this similarity translate to any significant advantages for machine learning? Are models with better RT correlation with humans also well calibrated, or more robust compared to others? In summary, inter-model comparison based on how well they rank on RT similarity with humans would be an interesting direction that the authors don't comment on here.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to my review above for questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their extremely positive feedback and very helpful comments. We have provided clarifications and additional numerical analyses below to address concerns raised in this review.
> **The figures are intuitive and appreciate the video presentations**
We really appreciate the reviewer for going through the supplementary information in detail and for watching the videos.
> **Code and data sharing**
We plan to make all code (model training/analyses) and datasets (with the accompanying data generation code) publicly available for the community to build on.
> **Extensions to more naturalistic tasks**
We agree! This is a great direction and one that we envision for this work in the future. In this manuscript, we curtail our focus to detailing a framework without over-engineering very large-scale models that typically are needed to be performant on naturalistic benchmarks. However, our framework does not preclude scalability. We look forward to accomplishing this in future work.
> **Are there benefits for ML conferred by temporal alignment?**
This is an excellent point! In the past, aligning deep neural network models to neuroanatomy, neural, and behavioral data has shown benefits in terms of performance (R1), interpretability (R2), and adversarial robustness (R3). One of our motivations for developing this framework was to enable investigations into the benefits of temporal alignment.
While this was not a primary focus at the time of writing this manuscript, intrigued by the reviewer's comments we piloted experiments to test what we think is an important computational benefit conferred by temporal alignment: *generalization ability*.
We consider two model classes (A and B) in the context of our incremental grouping task. cRNNs in Model Class A were *aligned* models that learned to solve the task. cRNNs in Model Class B (15x15 kernels, BPTT, 6 timesteps) also learned to solve the task (val. accuracy $\geq 0.99$) but were *not aligned*. They did not capture narrow vs. wide or curved vs. straight effects. They also did not correlate significantly with human RT: $r = -.08, p = .51$. Interestingly, when we tested both these model classes for strong generalization (experimental details in **SI 2**), the aligned model class showed superior average performance (**0.70** vs **0.61**).
Though this result is anecdotal and needs a more thorough investigation, we believe that there are benefits for ML that accompany model temporal alignment.
[R1] Fel et al. (2022) Harmonizing the object recognition strategies of deep neural networks with humans.
[R2] Kubilius, Schrimpf et al. (2019) Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs.
[R3] Dapello et al. (2020) Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. | Summary: The authors present a novel metric for analyzing the temporal alignment between Recurrent Neural Networks (RNNs) and human behavior. The approach, rooted in estimating task uncertainty via the Dirichlet distribution, is both interesting and pertinent. In an era where understanding the interface between machine learning models, specifically RNNs, and human behavior is of utmost importance, the authors' research is timely and bears the potential for broad impact. A commendable aspect of this study is the diverse range of tasks that the authors cover. This inclusive approach elevates the practical relevance of their research, ensuring the applicability of their findings across multiple contexts. However, there are two crucial areas that the authors need to address to improve the overall quality and comprehension of the manuscript. Firstly, in the same/different dot object task, the authors discuss a correlation between the task's difficulty, reaction time, and dot distance. It is important to note that human reaction time is also tied strongly to factors such as the object's topology and occlusion. The authors assert that their network and measurements reflect human reaction time with respect to these more intricate features, but the demonstration of this claim in the manuscript is not sufficiently clear. To solidify their claim, the authors should condition the different stimulus conditions on the distance between dots and explicitly illustrate how elements like the narrowness of the object outline and occlusion correlate with human reaction time and their RNN metric. Secondly, the specificity of the proposed metric to the particular cortically inspired architecture employed in this research should be clarified.
It would be beneficial to understand if this metric can be generalized to other architectures. For instance, would it be feasible to use this metric if one were to implement a convolutional LSTM or another vanilla RNN? The authors' response to this concern could significantly influence the wider application of their proposed metric. In conclusion, while the paper is insightful and tackles a highly relevant topic, it is recommended that the authors address the concerns raised to improve the robustness and clarity of their research. By doing so, the potential impact of the manuscript can be further augmented, providing a more substantial contribution to the field.
Strengths: A relevant and timely paper extending the comparison between human behavior and ANNs.
Weaknesses: It is not clear how well they demonstrate that their metric predicts human reaction time beyond the more straightforward relationship among dot distance, task difficulty, and reaction time. In other words, does the metric predict reaction time under object occlusion and for more complex object topology (narrowness of segmentation boundaries, etc.)?
Does their metric extend to other architectures beyond the specific one they use here?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Perform analysis to condition on dot distance to demonstrate that other more intricate relationships predict human reaction time.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and very helpful comments. We have provided clarifications and additional numerical analyses below to address concerns raised in this review.
> **Moving beyond Euclidean distance between cue and fixation dots to other factors such as object topology**
Thanks for this comment. We agree with the reviewer that human reaction times are affected by factors beyond the Euclidean distance between cue and fixation dots in our incremental grouping task. We perform several analyses in the manuscript (and further ones in this rebuttal) to back this claim. We will take this opportunity to highlight these here and better position them in the main text.
#### Narrowness and topological factors
We explore stimulus manipulations presented in Ref [41] in detail in this manuscript (subsection starting at L198 and titled “$\xi_{cRNN}$ recapitulates the spatial anisotropy in human RT data”). We consider four distinct manipulations (A-D) in the incremental grouping task. The cue and fixation dot are nearest in Condition A. Condition B and Condition C are *matched* in Euclidean distance (larger than that in Condition A) but Condition C has the cue dot on *narrow* parts of the image. While Conditions A, B, and C correspond to the cue and fixation dot on the same object, Condition D has counterbalanced cue locations for the "different" case.
1. We find that $\xi_{cRNN}$ is significantly higher in Condition C than Condition B even though they are matched in Euclidean distance, consistent with prior work on human RTs. This suggests that our metric clearly captures non-Euclidean factors (**Fig. 2c; L212** in the manuscript).
1. The variance in $\xi_{cRNN}$ is significantly better explained by a non-Euclidean distance metric (introduced in [46], explanation provided in our response to **4B3C**). We report this on **L208**.
1. To further test effects beyond Euclidean distance we generated a novel stimulus set that explores two additional manipulations. In the first test, we keep *both* Euclidean distance and topological distance constant, but manipulated whether the path between the dots passed through narrow regions. $\xi_{cRNN}$ was significantly higher in the Narrow condition than in the Wide condition. In the second test, we kept the Euclidean distance constant but manipulated whether there was a straight path between the dots or not (making the latter condition higher in topological distance). $\xi_{cRNN}$ was significantly higher in the latter condition. **These results are summarized starting from L216 in the main text and discussed in more detail in the supplementary information. Fig. S5 and Fig. S6 of the SI also show all stimuli used.**
#### Fixed Euclidean distance. Varying distances to boundaries.
In a follow-up experiment, we varied the distance between the fixation dot and an object boundary while keeping the Euclidean distance between the cue and fixation dot constant (**Fig. R4a**). We found that our model exhibits higher $\xi_{cRNN}$ values near the boundaries (**Fig. R4b,c**). This is strikingly similar to human behavior presented in prior literature (Ref [41], Fig. 3d).
#### Occlusions
As we discuss in our response to reviewer **yVHQ**, we also test our model on "occluded" stimuli (**Fig. R1**). When compared to control conditions, we find that our model exhibits extended periods of higher uncertainty in the occluded stimuli (**Fig. R1b**).
> **Specificity of the proposed metric to the architecture considered in the manuscript**
This question motivated us to clarify an aspect of our framework and we have now taken steps to perform additional model comparisons. We believe this makes our manuscript stronger and we thank the reviewer for that.
First, we take this opportunity to clarify that, mathematically, the choice of cRNN architecture and our evidential metric can be disentangled. Suppose $f_\theta$ is our cRNN with parameters $\theta$ specifying the model architecture. For the evidential loss term, we rely on interpreting the outputs of a readout function $g(\cdot)$ operating on the cRNN states in the following manner: $\hat{\boldsymbol{\alpha}}(t) = g(f_{\theta}^t(x))$. Here $x$ denotes the stimulus and $f^t$ denotes the recursive application of function $f$, $t$ times. The subsequent construction of the evidential loss as well as our metric rely only on $\hat{\boldsymbol{\alpha}}(t)$. Within this framework, we are able to swap out various choices of $\theta$ and $f$ (the model architecture) without affecting the downstream formulation. This makes our metric an ideal choice to compare and contrast models in terms of their temporal alignment.
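To make this decoupling concrete, here is a minimal sketch (ours, not the authors' code; all names and the toy core/readout are hypothetical) of how an evidential Dirichlet readout can wrap an arbitrary recurrent core, so that per-timestep uncertainty depends only on $\hat{\boldsymbol{\alpha}}(t) = g(f_\theta^t(x))$, not on the architecture of $f_\theta$:

```python
import numpy as np

def evidential_trace(f_step, g_readout, x, h0, T):
    """Roll out any recurrent core f_step for T steps and map each hidden
    state to Dirichlet concentration parameters alpha(t) via g_readout.
    Returns the standard evidential uncertainty u(t) = K / sum(alpha(t))
    at every timestep (illustrative; the paper's metric may differ)."""
    h = h0
    uncertainties = []
    for _ in range(T):
        h = f_step(h, x)                       # swap in any architecture here
        alpha = g_readout(h)                   # alpha(t) = g(f_theta^t(x))
        K = alpha.shape[-1]
        uncertainties.append(K / alpha.sum())  # in (0, 1] since alpha >= 1
    return np.array(uncertainties)

# Toy two-class example: a linear-tanh core and a softplus readout.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)) * 0.1
U = rng.normal(size=(8, 4)) * 0.1
R = rng.normal(size=(2, 8))

f_step = lambda h, x: np.tanh(h @ W + x @ U.T)
g_readout = lambda h: np.log1p(np.exp(R @ h)) + 1.0  # softplus + 1 => alpha > 1

u = evidential_trace(f_step, g_readout, x=rng.normal(size=4),
                     h0=np.zeros(8), T=6)
```

Replacing `f_step` with any other recurrence (e.g., a convLSTM cell) leaves the uncertainty computation untouched, which is the architecture-independence claimed above.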
Having said that, our choice of architecture was motivated by the demands that a task like incremental grouping imposes. The visual cognitive psychology literature posits that incremental grouping requires feedback instantiated by lateral connectivity. Empirically, we find this as well. A recurrent CNN without horizontal connectivity, such as a convolutional LSTM (convLSTM), fails to learn generalizable solutions on this task (**Fig. S1**) and hence was not an appropriate model choice to test our metric.
However, the rapid scene categorization task presents an opportunity to demonstrate the generality of our metric. This task is more akin to purely perceptual processing and relies less heavily on lateral feedback. We implement and train a convLSTM equipped with our metric on this task and indeed find that our metric captures variations in scene discriminability (**Fig. R3**): $b = -0.48$, $SE = 0.03$, $t = -16.88$, $p < .001$. The correlation between $\xi_{convLSTM}$ and human RT is $r = .17$, $p < .001$. | Summary: This study introduces a metric for evaluating the alignment between model and human behavior with respect to task complexity as reflected in reaction times. The metric is easy to compute and shows qualitative correspondence with human RTs in different tasks.
Strengths: The paper is well written and easy to understand.
The study attempts to go beyond choice responses to establish correspondence between model and behavior. For image-computable models, this is both novel and important.
The idea of using model uncertainty as a proxy for RT is intuitive and appealing.
Weaknesses: The models are trained using an algorithm that imposes attractor dynamics which causes instantaneous uncertainty to become zero. This is critical for their proposed metric to remain bounded. But this is a bit odd because subjective uncertainty should be non-zero for difficult tasks even if given a long time. This raises the question of whether the metric is meaningful for other tasks e.g. with ambiguous stimuli.
The authors make a convincing case for the utility of this metric for evaluating models. But the metric itself is artificial and might not have a correlate in neural activity. An alternative (and more brain-like) but admittedly more challenging approach would be to learn a policy that chooses left/right/wait at each moment (e.g. via RL) based on the latent representation such that you can directly get a reaction time from the model. This might even work with BPTT.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Sentence in lines 88-90: I don't follow. Please unpack.
Lines 38-39. This assertion is a bit strong. There have been some preliminary attempts to explain human RTs using image-computable models so it would be good to cite them.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As stated above, would be useful to give some examples of the challenges which may arise when applying this approach to other types of tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall positive feedback and insightful comments.
>**Does instantaneous uncertainty have to go to zero? What happens in the case of ambiguous stimuli?**
We thank the reviewer for this observation and their suggestion to test the model on ambiguous stimuli.
We take this opportunity to clarify that while our training procedure promotes stable attractor states in the model's *latent* representation, this does not directly correspond to a model reaching an uncertainty (obtained from the *readout*) value of *zero*. Additionally, given that we operate within a discrete, finite-time horizon paradigm, our metric does indeed remain bounded. In **Fig. R2**, we present one such example where a challenging stimulus results in an attractor state with non-zero uncertainty.
We also curated a novel dataset consisting of "occluded" stimuli, a classic scenario of ambiguity (**Fig R1a**). We perform zero-shot transfer of our trained model to this novel test set and observe that its uncertainty traces match the reviewer's intuition (**Fig. R1a**). A paired t-test reveals that our metric is significantly higher in the Occluded condition compared to the Control condition: Cohen's $d = 2.85$, $t(37) = 17.54$, $p < .001$ (**Fig R1b**). We will add this analysis as part of the discussion in the manuscript.
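For reference, the comparison reported above amounts to a paired t-test on matched stimulus pairs plus Cohen's $d$ computed on the within-pair differences. A minimal numpy sketch on synthetic stand-in data (illustrative only; not the authors' measurements):

```python
import numpy as np

def paired_effect(cond_a, cond_b):
    """Paired t statistic, Cohen's d, and degrees of freedom for matched
    samples, all computed on the within-pair differences."""
    diff = np.asarray(cond_a) - np.asarray(cond_b)
    n = diff.size
    sd = diff.std(ddof=1)
    t = diff.mean() / (sd / np.sqrt(n))
    d = diff.mean() / sd  # Cohen's d for a paired design
    return t, d, n - 1

# Synthetic stand-in for 38 matched stimulus pairs.
rng = np.random.default_rng(1)
control = rng.normal(1.0, 0.2, size=38)
occluded = control + rng.normal(0.6, 0.2, size=38)  # elevated uncertainty

t, d, dof = paired_effect(occluded, control)  # dof == 37, as in t(37)
```

With 38 pairs the test has 37 degrees of freedom, matching the $t(37)$ notation used in the rebuttal.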
>**Biological plausibility of the metric**
Excellent point! We agree that metrics mined from an RL-trained model are likely to have analogous biological correspondences (as opposed to models trained via supervision). However, this is a hypothesis that needs empirical confirmation. We are excited to pursue this line of research in future work.
>**Sentence in lines 88-90: I don't follow. Please unpack.**
We apologize for the confusing wording. We will make the correction as follows:
The elegant approach of [26] suffers from the primary drawback that it relies on the backpropagation through time (BPTT) algorithm. BPTT imposes high memory demands, limiting the number of timesteps a cRNN can perform and therefore condemning any derived measure to be coarse. Furthermore, vanilla BPTT forces the dynamics to be constant (i.e., the cRNN arrives at a solution at t = T for any input) making it non-stimulus-adaptive [8].
>**Lines 38-39. This assertion is a bit strong. There have been some preliminary attempts to explain human RTs using image-computable models so it would be good to cite them.**
Thanks for the comment. We will include the following references as prior art in the domain of explaining human RTs using image-computable models. We shall also highlight better reference [26] from the current main text.
Mirzaei et al. (2013) Predicting the human reaction time based on natural image statistics in a rapid categorization task. Vision Research, 81(5), 36-44.
Kumbhar et al. (2020) Anytime Prediction as a Model of Human Reaction Time. arXiv preprint arXiv:2011.12859.
Duinkharjav et al. (2022) Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency. ACM Transactions on Graphics, 41(4), 1–15. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time in reading our manuscript and for their extensive feedback. In this general response, we address some common themes across the reviews. We provide detailed answers to specific reviewers' comments in subsequent responses. To go with this rebuttal, we also provide a PDF with additional figures labeled **Fig. R1-R5**, as well as an interactive version of our SU maps (URL supplied to the AC).
First, we sincerely thank the reviewers for their overwhelmingly positive feedback. All reviewers noted the importance and novelty of our work, as well as the thoroughness of our experiments.
One common critique across reviews pointed towards the need for a comparison across model architectures and alternative training frameworks.
We do believe that our framework is broadly applicable. And towards addressing questions raised by the reviewers, we perform and include several new analyses in this rebuttal. Specifically,
1. To demonstrate the generality of our metric, we trained a convolutional LSTM (convLSTM) model on the scene categorization task and show that our metric can work with alternative model architectures as well (**Fig. R3**) (More details in our response to **JuhY**)
1. We *implemented* and made a direct comparison to a model that explicitly has adaptive computational time (ACT) trained on our incremental grouping task (as suggested by **4B3C**). More details in our response to **4B3C**.
1. We probe the generalization benefits of models exhibiting temporal alignment with humans (RTs). Specifically, we demonstrate that poorly-aligned models have a propensity to fail at out-of-distribution generalization while well-aligned models are consistently good in their generalization performance. We refer to our response to **fhSL** for more details in this regard.
We hope you agree that our manuscript has improved through your feedback and that our findings will have a significant impact on computational cognitive neuroscience and machine learning.
Pdf: /pdf/4e837a33e35c2224e1c4283583fa0e194b1de7ce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Hierarchical VAEs provide a normative account of motion processing in the primate brain | Accept (poster) | Summary: - This empirical study develops a new framework for neural representation of motion in cortex.
- The authors propose a novel stimulus synthesis method for model training, parametrically generating optic flow fields w/ low-dim latent structure, rather than using pixel-space (i.e. image-computable) inputs.
- They modify previously-developed hierarchical Variational Autoencoders, and argue that the inductive architectural biases of such networks produce a more desirable representation of motion, as quantified via entanglement metrics and neurophysiological alignment via linear regression.
- Apart from the stimulus synthesis method, which I believe to be well-motivated and a sensible approach to modeling the inputs, I believe the theoretical advance of this study is marginal, and am unconvinced that the proposed architecture that is central to the paper provides a significant advancement to our understanding of cortical motion processing.
UPDATE: Sep 1, 2023. I have read the rebuttal, my issues with the paper remain, and I maintain my score.
Strengths: - I enjoyed the proposed stimulus synthesis method for training, and the last figure teasing apart contributions of the different generative factors to performance is nice.
- Writing is mostly clear and easy to follow (clarity suggestions below).
- The use of hierarchical VAEs to produce more disentangled/unentangled motion representations is interesting.
Weaknesses: - The logic surrounding the physiological alignment analyses feels unsettlingly circular and/or redundant. This perhaps applies to Higgins' paper too since they do similar analyses. 1) The cNVAE is shown to produce improved DCI scores over vanilla VAE. 2) Your assessment of alignment effectively penalizes models with worse disentanglement (i.e. a more distributed code), and therefore adjudicates that the cNVAE has better alignment with physiological data.
- In fact, a central motivation of this study relies on a conjecture that disentanglement and/or unentanglement are normative goals of the dorsal stream. While this objective would be appealingly analogous to those proposed in the object recognition and ventral visual stream literature, I am unconvinced that this is the case given the evidence presented here.
- The above points are compounded by the fact that, as you mentioned in Fig. 7, the non-hierarchical VAE architecture does comparably on predicting MT responses to the cNVAE.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The paragraph introducing the architecture (L108) remains opaque to me. It would be helpful to include a more thorough description of the dimensionality of x, z1, z2, z3. Are you concatenating z1,2,3 to form your 420D cNVAE latent, and comparing it to a single 420D z VAE latent? This is what I am inferring from Fig. 3.
- There are many acronyms that are introduced and never explicitly written out. I am familiar with much of the VAE and disentanglement literature, but the general reader shouldn't have to be. E.g., NVAE, DCI, TCVAE (since this one isn't pertinent to your study, you can just remove it altogether instead of introducing more terms to the reader).
- Please elaborate on what end-point error is (L137).
- Please include a sentence on what permutation importance is; I had to look up the sklearn docs.
- VAEs are inherently stochastic neural networks with exact mean and covariance structure available to the experimenter. However, the covariance is typically thrown out in most studies using VAEs, including this one. This is at odds with the fact that noise exists and neural coding is impacted by noise (e.g., the classic Averbeck 2006 review on neural correlations). You mention there are repeats in some trials of the CRCNS dataset; is it possible to quantify whether or not your model captures the stochastic structure of the neural responses?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: There was discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we want to contextualize all “weaknesses” about the brain alignment score (Fig. 8) in the greater scope of the manuscript. Fig. 8 is one of the 5 metrics we use to evaluate the learned representations (the others being untangling, disentanglement, completeness, and neural prediction). All 5 metrics favor the cNVAE. Therefore, this is a small fraction of the results.
> The logic surrounding the physiological alignment...
We interpret your comment to mean our alignment metric favors disentangled codes, therefore any disentangled code will better align with MT (or any other representation, for that matter). Although we agree that our alignment score is far from perfect, there are several reasons why we believe this is not a weakness:
1. Maybe “redundant” but not “circular”: If the alignment metric was biased to favor disentangled codes, this would only make it redundant because we do not start from a motivation that codes should be disentangled (more below).
2. There is no such thing as a *universally* disentangled code, and therefore this alignment score would have to depend on what is being disentangled. In its most common definition, disentanglement depends explicitly on how we define ground truth variables. We explain this point in the appendix (section: “disentanglement is in the eyes of the beholder”; Fig. S8) and show that disentanglement scores change with different ground truth definitions.
3. The disentanglement score for the cNVAE is not that different from vanilla VAE (Fig. 5). Where the models differ most is in untangling and completeness. Given this, we think it is unlikely that disentanglement is driving the high MT alignment.
4. Our alignment score (Fig. 8) is more dependent on the hierarchy than DCI metrics, as indicated by the good performance of the cNAE.
5. We believe *untangling* is the most important feature of cNVAE, and where it shines (Fig. 4b). Despite their similar names, “untangling” and “disentanglement” are independent concepts, where “untangling” simply means the ground truth factors are linearly decodable ([DiCarlo & Cox, 2007](https://tinyurl.com/dicarlocox)). Although several recent lines of work have argued that disentangled codes are desirable, others have argued that distributed codes are good as long as they support untangling ([Rigotti et al., 2013](https://tinyurl.com/rigotti13)).
6. The reviewer is holding us to a standard that no prior work is held to. When it comes to prediction performance, the best model is cNVAE (at $\beta=0.8$) with an improvement of 0.008 over the best VAE (see Table 3). In comparison, Mineault et al. (ref 29, which was a spotlight NeurIPS paper) report a minuscule improvement of 0.001 to select their best model.
We completely agree with the reviewer that our measure of alignment is not perfect and we highlighted limitations of prediction and alignment in our main text and supplemental material. We think the reviewer is correct to be dissatisfied with disentanglement and linear regression as a measure of brain alignment. There is a lot of future work to do before the field converges on metrics for evaluating the similarity of pairs of representations, as highlighted in recent work (e.g., [Han et al. 2023](https://tinyurl.com/hanicml23)). A potential route forward is geometric analyses ([Williams et al., 2021](https://tinyurl.com/awshape21)). Ultimately, we don’t think disentanglement is central to our results, but is one of several metrics that can be used to evaluate a code. Because Fig. 8 is a small part of our results, we are happy to move it to the supplemental material and modify the text accordingly if the reviewer feels we overstated its importance.
> In fact, a central motivation of this study...
That was not a motivation for this study. Our study was motivated by conjectures from Helmholtz and Mumford (see our global rebuttal). In our efforts to compress our introduction to fit the page requirements, we collapsed this idea too much and this probably misled the reviewer. We will fix this in our revisions and address this point directly here.
We were primarily motivated by the idea that hierarchical inference is important for representation learning, which we evaluated using a number of metrics that have been proposed by neuroscientists and ML researchers and concluded that hierarchical inference leads to several improvements. Our approach contrasts with previous work on dorsal stream (e.g., Mineault et al. 2021) where linearly decoding ground truth was literally the objective (they used supervised ResNets). Our models were trained solely using the standard ELBO or beta-VAE loss. Our results demonstrate that unsupervised models w/ hierarchical inference untangle more than models w/o hierarchy. Importantly, untangling and disentanglement are metrics we use to evaluate the role of hierarchy in learned representations, and they were neither a motivation nor objective.
> The above points are compounded…
Our best model was a cNVAE by 0.008 over the best VAE, which is 8 times larger than the selection criterion used by the SOTA model of the dorsal stream (Mineault et al., 2021), which we outperform by over a factor of 2.
> VAEs are inherently stochastic…
We agree there is an interesting parallel between the covariance in VAEs and the covariance of neurons in the brain. One practical reason why we cannot examine this is our dataset consists of single-unit recordings, making it impossible to compare the covariance of these neurons to VAEs.
Interestingly, we did look at the covariance of latent representations and found that cNVAE encodes its input in a much larger dimensionality than the vanilla VAE (Figure R1-f in the global rebuttal). What information is encoded in those correlations? Is it related to noise correlations? We don’t know, but we like this suggestion and plan to study this later with a population-recording dataset.
> Questions
We will implement the remaining suggestions in our revisions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses to my comments, and the preliminary changes you wrote in your global rebuttal.
I was not aware that Mineault et al. [29] received a spotlight for their paper; however, I don't think this is constructive to the discussion (I notice you also mention it in another reviewer's rebuttal), nor would it be appropriate for me as a reviewer to factor that into my judgment of the present study.
I can appreciate that your average prediction score is higher than [29]. I was mainly bringing attention to your statement in both the main text and fig 7 that cNVAE and VAE performance are comparable (indeed, the standard errors overlap in the table). This leads me to question how effective cNVAE and its inductive biases are to motion processing, when prediction performances of these models overlap.
I'm inclined to raise my score from 3 -> 4 for the clarity changes you promised. But, provided the evidence in your study, I remain unconvinced that untangling, as emphasized in the paper and your rebuttal, are imperative to neural coding of motion.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our responses and for engaging with us.
We genuinely appreciate your effort in delving into our work and sharing your thoughts. Your insightful critique of our work prompted us to think carefully about our contributions. As a result, we included some of those clarification points in our global rebuttal and will add more to our revised manuscript. We were pleased to learn that you find our efforts to enhance clarity in line with your expectations.
We believe there might be some remaining misinterpretations that we would like to address in a more transparent manner below.
> I remain unconvinced that untangling, as emphasized in the paper and your rebuttal, is imperative to neural coding of motion
We did not set out to test whether untangling is imperative to cortical motion processing, nor do we claim that. We were inspired by the hypothesis-driven approach of testing for the presence of particular information with a decoder (e.g., see [Kriegeskorte & Diedrichsen, 2019](https://www.annualreviews.org/doi/abs/10.1146/annurev-neuro-080317-061906)): if an [artificial or biological] information processing system represents a feature of its sensed inputs, then those features should be easily decodable from the representation.
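This decoding criterion is straightforward to operationalize: fit a linear readout from the representation to the feature and report its R². A minimal numpy sketch of such a test (the ridge readout, function name, and synthetic data are illustrative, not the paper's exact decoding setup):

```python
import numpy as np

def linear_decodability(Z, g, alpha=1.0):
    """In-sample R^2 of a ridge readout from a representation Z
    (n_samples x n_latents) to one ground-truth variable g (n_samples,)."""
    Z = Z - Z.mean(axis=0)
    g = g - g.mean()
    # closed-form ridge solution: w = (Z'Z + alpha*I)^{-1} Z'g
    w = np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ g)
    residual = g - Z @ w
    return 1.0 - (residual @ residual) / (g @ g)

rng = np.random.default_rng(0)
g = rng.normal(size=500)                      # a ground-truth variable
# a representation in which g is linearly embedded alongside noise dims
Z = np.concatenate([(g + 0.01 * rng.normal(size=500))[:, None],
                    rng.normal(size=(500, 9))], axis=1)
score = linear_decodability(Z, g)             # high: g is easily decodable
```

A feature that is "untangled" in the representation yields a score near 1, whereas a feature encoded only nonlinearly yields a low linear-decoding score.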
As stated in our rebuttal, our objective was to investigate the role of hierarchical inference in learned representations. We were inspired by longstanding conjectures in neuroscience that representations should explicitly represent generative causes of the senses.
Motivated by this hypothesis, we set out to test whether VAEs w/ hierarchy do better in untangling compared to those w/o hierarchy. Our results provided strong evidence in favor of that.
In a separate set of experiments, with a totally different motivation, we asked whether the hierarchical models are also more “aligned” with biological representations. To test this, we used the existing MT dataset from Mineault et al. [29] and found that not only were we able to outperform SOTA with over 2x gain, but also our cNVAE was better at predicting MT neurons, although marginally.
> I can appreciate that your average prediction score is higher than [29]
We are grateful for your acknowledgment of the substantial performance gain observed in our study compared to [29]. We believe this achievement stands as a significant contribution in its own right, offering valuable insights into the potential range of attainable performances on the MT data.
In sum, we hope that these added insights will contribute to a clearer understanding of our motivations and conclusions. If you come across any specific instances in our manuscript that might not align with this clarification, please let us know. We are more than happy to make any necessary language adjustments to ensure that our motivations and conclusions are accurately reflected. | Summary: The paper introduces a synthetic data framework called Retinal Optic Flow Learning (ROFL) and uses that framework to test the performance of unsupervised models on two learning tasks: reconstructing ground truth variables and predicting the response of MT neurons. By imposing a latent hierarchical structure, the authors observed improvements along three axes: on the linear decodability of the ground truth, on the predictability of MT neuron responses, and, finally, on identifying the causal structure of the world as a major factor driving these results.
Strengths: As far as I understood (given the lack of clarity on a few aspects regarding the problem definition and related work, please look at my comment below), the authors introduced a method to generate synthetic data and then modeled the generated data using an ensemble of model architectures of their choice. While I am dubious of the novelty and the sensibility of the idea, the experimental results presented seem to validate the argument. Another important strength of the paper is that the authors are pretty open and clear about the weaknesses of their method, which is something to commend them for.
Weaknesses: I am dubious of the idea because it seems that the authors generate synthetic data and then choose an ensemble of architectures to model them. Also, it is hard to understand what the main problem tackled by the paper is, how this work relates to the state of the art, and, most importantly, what the improvement and contribution are compared to the SOTA.
There’s no Related Work section in the manuscript and the method is not compared against any other methods in the experiments. Are there many papers that deal with the same problem? Reference 29 seems to be one such paper and it is compared against in the experiments. However, given that this reference has a different dataset and model, I see no common ground for comparison. And given the peculiar structure of this problem area (generating data and then modeling them), the authors need to find a way to highlight their contributions.
All these make it hard to assess the paper. I recommend the authors add a related work section, clearly connecting their paper with the state of the art, and clarifying the novelty and contribution.
In your related work section, you may want to consider referencing some papers that use hierarchical latent spaces for real-world data, such as the following one (which is a newly published paper, so it’s understandable that you hadn’t included it).
Generative Decoding of Visual Stimuli, https://openreview.net/pdf?id=57OuafQmu8
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: “Importantly, this framework allows us to manipulate both the architectures of the models and the causal structure of the world they are trained on” -> As far as I understood, you chose the models and the method for generating synthetic data. Being able to manipulate the architecture and structure of data is somehow expected. Why is that then an important fact and statement to make? If it’s not, please consider removing it.
“We found that a single inductive bias, hierarchical latent structure, yields several improvements.” -> What is the metric that you use to assess performance and what do you compare against? The metrics are only qualitatively described in the paper. Please consider condensing that information in a clearly defined section, something like “Metrics and Baselines”, and give a clear, quantitative definition of the metrics.
“the brain engages in hierarchical Bayesian inference ” -> Based on what I read, I do not believe you have presented sufficient evidence to back up such a strong claim. The experimental study on predicting MT response does not suffice. If you meant to back the claim using more of the results you presented please make it more clear in the main manuscript or else remove it.
The metrics that are used in the experimental section such as informativeness, disentanglement and completeness are not clearly defined. There are some papers cited where those metrics are mentioned in the text and I am sure that someone would find the definition if they look them up. However, given that these metrics are not widely used, I think the authors should give the definition in the text.
What are the details of the data that you used for the MT learning task? There’s little to no information regarding what these data are, how they are collected, pre-processed etc. I understand that you cite a couple of papers that probably have these details, however, I think you have to give some details and explain how you use this dataset.
What is the learning task for the MT prediction, i.e., what is the quantity you’re predicting? Is it the firing rate shown on figure 6? If so, how is that related to the hierarchical modeling in the previous section? Details on the dataset would help clarify.
What is the AirSim dataset that the authors compare against in the MT prediction task? And how does it relate to the newly introduced ROFL? If it’s a completely different synthetic data framework I do not see the point of comparing against it.
Is the label in Figure 3 correct? There’s a reference to Figure 2. Maybe the author meant to refer to the bottom of Figure 3.
On lines 225-227 you say “all metrics for a broad range of β values (Figure 5).” I am assuming that by all metrics you mean the “untangling”, “disentangling”, and “brain-alignment” mentioned in line 195. However, the labels on Figure 5 are slightly different: “Informativeness”, “Disentanglement” and “Completeness”. Please clarify what the metrics are and quantitatively define them, as they are not well-known metrics.
Why the word “learning” in ROFL? There is no learning involved in the data generation, as far as I understood.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have clearly and openly discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > As far as I understood…comment them for.
We struggled with how to answer this review. The comments seem to alternate between not knowing the relevant literature and asserting (confidently) that the work was not novel and/or incorrect. Furthermore, in several cases, we already have whole sections devoted to some of the reviewer's concerns.
As a result, we hope that we can largely address these concerns by clarifying our contributions, and have reworked parts of the text accordingly. Along these lines, we have included a draft Related Work section in the Global Rebuttal, and this should clarify where this paper sits in the existing literature. We have also reiterated the main points in our global response to reviewers and addressed individual ones here. We hope this clarifies the “sensibility” of the ideas, which stem from two major conjectures in neuroscience.
> I am dubious of the idea…
We were motivated by the idea that hierarchical inference is important for learned representations. This is an old idea in neuroscience and a newer idea in ML. To evaluate the role of hierarchical inference in learned representations, it was essential to generate synthetic data with ground truth variables. We then evaluated the representations using a number of metrics and concluded that hierarchical inference leads to several improvements using metrics that have been proposed by neuroscientists and ML researchers.
In addition to our main goals, our paper moves beyond solely evaluating the reconstruction performance of hierarchical VAEs. As far as we know, a comprehensive investigation of representation learning in hierarchical VAEs has not yet been done: this is the first.
> …and the method is not compared against any other methods in the experiments. Are there many papers that deal with the same problem?
We compare directly to a recently established benchmark, which we cite extensively: ref [29] – which was a spotlight NeurIPS paper in 2021. We are the only other paper to use this benchmark thus far.
> Reference 29 seems to be one such paper…no common ground for comparison.
We use the same MT dataset as Ref 29 (crcns-mt1), and the model is indeed different which we now detail in the Related Work.
> And given the peculiar structure of this problem area …
Synthetic data with ground truth factors is essential to interrogate the learned representations and is standard practice in disentangled representation learning: https://paperswithcode.com/task/disentanglement
Our manuscript had an entire section on this point (specifically, section 2.1 “Using synthetic data to test hypotheses about the causal structure of the world”)
> In your related work section, you may want to consider referencing...
Thank you for this ref. We now address this (and other) papers in the Related Work section. We note here that Miliotou et al. have a very different goal than ours: to decode images from fMRI voxel activations (using a hierarchical VAE); whereas, our work focuses on learned representations and how they depend on architecture (hierarchical vs. non-hierarchical), loss function (variational vs. maximum likelihood), and the generative factors of variation in the training set.
> Q1: As far as I understood...
Briefly, our ability to manipulate the structure of the data (and model) is exactly why it is useful for our work. We highlight this point at multiple places in the manuscript, including in the Introduction and discussion (line 290). We even include a section about it (section 2.1 “Using synthetic data to test hypotheses about the causal structure of the world”).
> Q2: What is the metric that you use to assess performance…
Because our focus was on the learned representations, the definition of the performance metrics based on the reconstruction loss was relegated to the Appendix (Fig. S5 shows a summary). We already did state in the Introduction (line 54) what we meant by “improvements”. We will add a table summarizing reconstruction performance and additional text describing the metrics computed.
> Q3: “the brain engages in hierarchical Bayesian inference "...
Here we agree. Our observations are consistent with this notion, rather than prove it. We will modify the language accordingly.
> Q4: The metrics that are used in the experimental section...
These metrics are widely used in the VAE literature, but perhaps not more generally. We will add more details about these metrics in the revised text.
> Q5: What are the details of the data...
We will add some key details to the main paper, and refer the reader to the supplemental where we include every detail to make this work completely explained given the length constraints of the main text.
> Q6: What is the learning task for the MT prediction...
We follow standard neural modeling approaches and predict the binned spike count of each neuron, which when normalized by the time bin size, results in a “firing rate” (number of spikes per second).
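For concreteness, the binning convention described here can be sketched in a few lines of numpy (the bin size and spike times below are made-up values, not from the CRCNS dataset):

```python
import numpy as np

def binned_firing_rate(spike_times, t_start, t_end, bin_size):
    """Count spikes in fixed-width time bins; dividing the counts
    by the bin width converts them to spikes per second."""
    edges = np.arange(t_start, t_end + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / bin_size

# four spikes over 0.4 s, binned at 100 ms
rates = binned_firing_rate([0.05, 0.07, 0.12, 0.31], 0.0, 0.4, 0.1)
```

The prediction target per neuron is then the vector of binned counts (or equivalently, rates), which a model's readout is trained to match.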
> Q7: What is the AirSim dataset…
Both ref 29 and our manuscript analyze MT data from the same CRCNS dataset. We both train models using completely separate synthetic datasets that we then use to predict MT. Ref 29 used AirSim to train supervised 3D ResNets that they then use to predict MT (using a completely different stimulus). We created ROFL to simplify the causal structure of motion in the world and then, as a part of our analysis of the learned representations, we predict MT. Importantly, we get a factor of 2x performance gain over the previous best model (ref 29).
> Q8: Is the label in Figure 3 correct?
Yes.
> Q9: On lines 225-227 you say...
“Untangling” and “informativeness” are the same which we discuss in lines 197-202 and also in supplemental section 1.4.
> Q10: Why the word “learning” in ROFL?
The dataset is for unsupervised learning of latents: thus we consider this to be a “learning framework” overall.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebutal and appologies for not responsing earlier.
Even though I still remain dubious of the idea, I slightly increased my score to 6. With the changes that the authors said they'll do, I think that the paper would be in a much better shape. The most important one that I'd like to see is a clearly defined Related Work section.
Also, please add details and clearly define all the metrics. Regarding Figure 5, it is still not clear what "Completeness" is.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments and questions. Below we address the remaining items.
> Also, please add details and clearly define all the metrics.
The camera-ready version will contain a Table and an associated text section that defines and explains the “Metrics and Baselines” used in this work, as originally suggested by the reviewer, which we thank them for.
> Regarding Figure 5, it is still not clear what "Completeness" is.
Completeness measures the average number of latent variables $z_i$ required to capture any single ground truth variable $g_j$. If a single latent contributes to $g_j$’s prediction, the score will be 1 (complete). If all latent variables equally contribute to $g_j$’s prediction, the score will be 0 (maximally overcomplete).
It is noteworthy that the completeness score has also been called *compactness* ([Ridgeway & Mozer, 2018](https://proceedings.neurips.cc/paper_files/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html)). For more info, please see the original DCI paper ([Eastwood & Williams, 2018](https://openreview.net/forum?id=By-7dz-AZ)) or a recent extension of it ([Eastwood et al., 2023](https://openreview.net/forum?id=462z-gLgSht)).
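Following the DCI formulation of Eastwood & Williams (2018), completeness can be computed from a latents-by-factors importance matrix via the entropy of each factor's column. A hedged numpy sketch (the function name and toy importance matrix are ours, not the paper's code):

```python
import numpy as np

def completeness_scores(R, eps=1e-12):
    """DCI completeness per ground-truth factor, from an importance
    matrix R (n_latents x n_factors), e.g. absolute regression weights
    or random-forest feature importances."""
    n_latents = R.shape[0]
    p = R / (R.sum(axis=0, keepdims=True) + eps)  # distribution over latents
    entropy = -(p * np.log(p + eps)).sum(axis=0)
    # 1 = one latent captures the factor; 0 = importance spread uniformly
    return 1.0 - entropy / np.log(n_latents)

R = np.zeros((4, 2))
R[0, 0] = 1.0      # factor 0: fully captured by latent 0 -> score near 1
R[:, 1] = 0.25     # factor 1: spread across all latents -> score near 0
scores = completeness_scores(R)
```

This matches the verbal definition above: a single contributing latent gives a score of 1 (complete), while uniform contributions give 0 (maximally overcomplete).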
In the final version of our manuscript, we will clarify what each metric measures in the main text, and add this background information about metrics utilized in our study including mathematical formulas used to compute the scores, in the “Metrics and Baselines” section in the supplemental.
We hope that this will properly address the reviewer's concerns and comments. | Summary: The authors present a framework to evaluate motion detection in different DNNs. First, they introduce a new concept to create flowfields for optical stimuli, which include local and global motion and additionally fixation points. They use the parametrized stimuli to train a new hierarchical VAE (cNVAE) and compare it to other DNNs in terms of different disentanglement metrics and similarity to neural recordings of MT. cNVAEs outperform standard VAEs in the disentanglement setting, and are comparable to VAEs for neuronal prediction while their latents are more aligned with single neural recordings. Training on different synthetic flowfield datasets showcases the influence of the training data on predicting neural responses.
Strengths: Clear description of stimulus generation, and overall a solid paper.
Clear and detailed description of the limitations.
Interesting hierarchical VAE model which could be transferred to other learning tasks and brain areas.
Weaknesses: **Major**
- Evaluation: While the authors do a good job in a high-level evaluation of the models (applying different evaluation scores and comparison to other models), further in-depth investigation would strengthen the paper significantly.
For example, it would be interesting to investigate: the learned representation (manifold); the necessary dimension of the latent space, and how the performance and disentanglement change with different dimensions; the hierarchical structure (in the model and the neural data); counterexamples where the model fails and why it fails; further investigation of the receptive fields for different neurons; some of them are touched on in the Appendix, but a clear link to the main text is missing.
- The paper would benefit from some restructuring to disentangle previous work and Methods. Each subsection in Section 2 seems to start with a short introduction and previous work.
- l. 108 ff. The model architecture (especially the processing of the sampling layers) should be described more clearly.
- It is not clear why the authors chose a specific $\beta$ in different places. (For example in Fig. 4; Fig. 8b seems to be cherry-picked for cNVAE, and an especially bad example for VAE)
- All Figures: more descriptive captions would be beneficial.
**Minor**
- The authors could reference to specific sections/figures in the appendix to make navigation easier.
- The layout of the tables does not follow the style guidelines for scientific tables (see NeurIPS guidelines)
- Fig. 4a is not mentioned or described in the text.
- Fig. 4b: mention R^2 score in caption. Write R^2 numbers down also for “bad” models. It is not clear why some bars are “missing”.
- The authors should include a short description of the DCI framework.
- Fig. 6: figure labels are very tiny and hard to read, 6d is so tiny that it is hard to interpret it at all.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Which $\beta$ is used for Fig. 3?
- Can the authors reiterate how cNVAE is used for data generation?
- How does cNVAE compare to the non-compressed version (with matched latent dimensions)?
- How do the different metrics perform on the raw data? This would add a simple baseline which could help to interpret the results.
- Fig 4 a: from which layer do the neurons come? Maybe even highlight them in Fig. 3.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, very well done.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Evaluation...
We like these suggestions and find some particularly exciting, although we consider several of them to be interesting future directions, given that the manuscript already covers a lot of ground (see general response for a summary). In short, the present work is meant as an empirical report of all the cool things that happen to the representations when the latent space is designed in a hierarchical way. As a next step, we were planning to quantify the geometry of the latent spaces using tools from neural population geometry approaches that have recently gained traction.
> the learned representation (manifold)
Motivated by your comment, we performed some first-level analyses to investigate the geometry of representations using the simple method of “effective dimensionality”, which is computed from the eigenvalues of the covariance matrix and provides an estimate of manifold dimensionality. We found that across a broad range of $\beta$ values, the dimensionality of cNVAE representations was substantially larger than that of the VAE, suggesting that their representational geometries are ultimately different in a quantifiable way. This result can be taken as a starting point for a thorough analysis of the representational geometries. Please see our Figure R1 and global rebuttal for more details.
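A common estimator of this kind of effective dimensionality is the participation ratio of the covariance eigenvalues; a small numpy sketch (our function name and toy data, not the authors' analysis code):

```python
import numpy as np

def effective_dimensionality(Z):
    """Participation ratio of the eigenvalues of the covariance of a
    representation Z (n_samples x n_latents): (sum(lam))^2 / sum(lam^2)."""
    lam = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(1)
iso = rng.normal(size=(2000, 20))                            # uses all 20 dims
low = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 20))  # rank-2 code
```

Isotropic noise yields a participation ratio near the ambient dimension, while a rank-limited code yields a value near its true rank, which is the sense in which cNVAE and VAE geometries can be compared quantitatively.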
> the necessary dimension of the latent space
It is interesting to quantify the dependence of our results on the number and organization of latents (i.e., how many latent groups, how many latent variables per group, etc). We did not perform these analyses for the rebuttal due to lack of time, but if the reviewer feels like this would enhance the quality of our paper we will do so and include the results in the final version. Please let us know!
> counterexamples where the model fails and why it fails; further investigation of the receptive fields for different neurons
We will also explore more neurons and find counterexamples to report in the supplementary. We will dig a bit deeper to understand why some neurons are more aligned, while others are not.
> The paper would benefit from some restructuring
Thank you for the suggestion, which we agree with. We will add a “related work” section and clarify some of the missing key background info, which is added to our global rebuttal.
> l. 108 ff. The model architecture (especially the processing of the sampling layers) should be described more clearly.
We apologize for the lack of clarity and will include more details in a substantially revised description of the model architecture and our specific contributions.
> It is not clear why the authors chose a specific beta in different places. (For example in Fig. 4; Fig. 8b seems to be cherry-picked for cNVAE, and an especially bad example for VAE)
Overall, we (and others) find that different choices of the $\beta$ will result in different model properties. As a result, while we scanned across all betas, we displayed results for certain betas to best demonstrate our points. For instance, in Figures 3 and 4 we chose beta values that maximized the overall informativeness score for each architecture in order to make a fair comparison ($\beta = 0.15$ for the cNVAE, $\beta = 1.5$ for the VAE -- please ignore the typo in Figure 4 caption, it should say $\beta=1.5$, we will fix this). You can see that this is the case from Figure 5 where we show DCI scores for all betas. In Figure 8b we deliberately chose a larger beta for VAE because previous work ([Higgins et al. 2021](https://www.nature.com/articles/s41467-021-26751-5)) suggested that increasing $\beta$ values increases alignment, which we did not observe here. In contrast, even a very small $\beta = 0.01$ for cNVAE results in a large alignment. This result (paired with other observations) suggests that alignment, as we measured it here, emerges due to the architecture rather than from large $\beta$ values alone---although we do observe some dependence on $\beta$ values, so ultimately a combination of both architecture and loss is important (but mostly architecture).
> All Figures: more descriptive captions would be beneficial.
This will be fixed in our revised manuscript, including the other minor suggestions (which we thank the reviewer for).
> Which beta is used for Fig. 3?
$\beta = 0.15$ for cNVAE, and $\beta = 1.5$ for VAE.
> Can the authors reiterate how cNVAE is used for data generation?
cNVAE was not used for data generation.
> How does cNVAE compare to the non-compressed version (with matched latent dimensions)?
With a matched number of latents, it is expected that NVAE will severely underperform cNVAE. This is partly because trying to match their latent dimensionality necessitates reducing the number of hierarchical latent groups in the NVAE. From previous work ([Child 2021](https://openreview.net/forum?id=RLRXCV6DbEJ)), we know that the “stochastic depth” of hierarchical VAEs is the key reason behind their effectiveness; therefore, we expected that reduced depth was going to hurt an NVAE with matched # of latents.
To test this, we trained a non-compressed NVAE with roughly the same # of latents (440 vs. 420 for cNVAE), the same number of parameters and conv layers, but necessarily with a reduced number of latent groups (11 vs. 21 for cNVAE). We tested its *untangling* performance (similar to Figure 4b) and found that it dropped significantly compared to cNVAE in predicting every ground truth variable, but it was still higher than VAE. The average untangling scores are as follows:
- cNVAE: **0.898**
- NVAE: 0.639
- VAE: 0.406
We thank the reviewer for this question because it prompted us to demonstrate clearly that the latent space of NVAE is unnecessarily and redundantly large, an observation made by others as well ([Hazami et al., 2022](https://arxiv.org/abs/2203.13751)). We are planning to include this new result in Figure 4b, along with performance on raw data (as was also suggested) to further enhance the comparisons.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their thorough responses and I appreciate the additional experiments.
Especially the results of the ablation experiments are very interesting, and a good starting point for an in-depth investigation and discussion of different aspects of the model.
For example, I still think that investigating different dimensionalities of the latent space could add interesting results to the paper. As for a perfectly disentangling model, the stimulus dimension should already be sufficient.
When reading the reviewers’ comments and all responses, I see a lot of promised clarifications and additional experiments. While I know that it is not possible in the short rebuttal period to run a lot of different experiments, nor is it possible to upload a revised version of the manuscript, this shows that the current manuscript is, if at all, in a borderline state for publication. Therefore, I stand by my initial assessment.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments.
> Especially the results of the ablation experiments are very interesting
We share the reviewer’s enthusiasm for these experiments. The ablation experiments are one of many ways to follow up on our initial results and are consistent with our goal of demonstrating the utility of our modeling framework—a goal that will enable us to explore all the exciting future research.
> I still think that investigating different dimensionalities of the latent space could add interesting results to the paper. As for a perfectly disentangling model, the stimulus dimension should already be sufficient.
We agree and also find this result quite intriguing. While the ground truth is 11-dimensional, we observe dimensionalities of ~50 for the cNVAE latents. What is encoded in those extra dimensions? We speculate that this relates to the nature of non-linear embeddings, where a lower-dimensional but curved manifold can be approximately captured in a higher-dimensional (but still dimension-limited) subspace.
More broadly, our simulation/modeling framework presented here provides a straightforward instance to understand how hierarchically structured systems can approximate natural laws, which are generally non-linear. Such laws that govern the structure of stimuli in the world typically connect a limited number of ground truth variables to complex, high-dimensional realizations. In our case, the rules of projective geometry determine how the flow frames (raw data), spanning hundreds of dimensions, are derived from the 11 underlying variables.
In this point of view, the extra latent dimensions can be regarded as effective or *emergent* degrees of freedom that are nontrivial (and likely nonlinear) combinations of the true generative factors. Are those extra dimensions physiologically relevant? One example could be the angle between the *heading direction* and the *gaze direction*. Moving forward, we are excited about investigating the learned latent codes more thoroughly in an attempt to find answers to such questions.
Nevertheless, a deep dive into exploring the relationship between an 11-dimensional ground truth and an effectively larger dimensional latent code calls for a dedicated, subsequent paper. In our estimation, such an in-depth treatment would be more valuable than hastily incorporating a brief and surface-level analysis into our current paper.
The present work is meant to describe the framework's main results and is focused on establishing foundational points that must precede the direction that the reviewer suggests. Please also see our latest follow-up comment under the general rebuttal. Thus, while we are in complete agreement that these are interesting directions, we assert that our current figures are necessary to establish this framework, and are important in their own right.
> When reading the reviewers’ comments and all responses, I see a lot of promised clarifications and additional experiments. While I know that it is not possible in the short rebuttal period to run a lot of different experiments, nor is it possible to upload a revised version of the manuscript, this shows that the current manuscript is, if at all, in a borderline state for publication. Therefore, I stand by my initial assessment.
More broadly, the fact that additional analyses would also be interesting should not necessarily detract from the current content of the paper, which is also novel and establishes the foundation for the future directions suggested by the reviewer. We believe that it is an unfair metric to judge a paper by what could be added rather than its current content. Given that we clarify our current work (as the reviewers have given us great direction on), we hope the reviewer would see this work as an advance, with the potential of exciting follow-up work as a positive rather than negative factor in their judgment. | Summary: The paper investigates the alignment of representations in deep generative models with activity in mammalian nervous systems. They provide a novel dataset on motion perception (Retinal Optic Flow Learning or "ROFL") against which to test computational models of Helmholtzian analysis-by-synthesis. The dataset generates retinal flows based on disentangled latent factors determining the motion and appearance of objects. The paper then tests a novel extension of the deeply hierarchical Nouveau VAE which imposes a "pyramidal" scaling of latent spaces through the hierarchy for its ability to capture the retinal-flow dataset in a "Helmholtzian" way, as well as its alignment with a pre-published macaque electrophysiology dataset on the middle-temporal visual area (MT).
Strengths: The paper presents an original deep generative model architecture, trainable with the typical ELBO or $\beta$-VAE objective by black-box variational inference, which captures a desirable and sought-after feature of generative modeling in the brain. The experimental evaluation compares against maximum-likelihood/reconstruction-loss estimation and a deterministic autoencoder, a nice comparison for making sure that the neural data aligns best with a deep probabilistic generative model as opposed to a non-probabilistic deep neural network.
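The $\beta$-VAE objective mentioned above has a simple closed form when the approximate posterior is a diagonal Gaussian and the prior is standard normal. A minimal numpy sketch of that negative ELBO, purely illustrative (this is not the paper's implementation; the function name and signature are hypothetical):

```python
import numpy as np

def beta_vae_loss(recon_log_likelihood, mu, log_var, beta=1.0):
    """Negative ELBO with a beta-weighted KL term (beta=1 recovers the
    standard ELBO). Assumes a diagonal-Gaussian posterior
    q(z|x) = N(mu, exp(log_var)) and a standard-normal prior, for which
    KL = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return -recon_log_likelihood + beta * kl
```

With `mu = 0` and `log_var = 0` the KL term vanishes and the loss reduces to the negative reconstruction log-likelihood; increasing `beta` penalizes posteriors that deviate from the prior more strongly.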
Weaknesses: As the authors admit in the Discussion section, they trained their compressed Nouveau VAE (cNVAE) on optical flow data rather than on video/pixel data. While this does map better to the MT area than pixels do, a very deep model ought to be able to learn to represent optic flow in the course of predicting pixels.
While this supervision with feature-engineered optical flow data in place of learning from time-series does remain a weakness, the authors have presented a sufficiently interesting additional contribution in addressing my question below that I will be raising my score.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Do the authors have a plan to apply their cNVAE in a "clockwork VAE" setting to predict videos? Can we see if optic flow emerges from training a deep generative model? Moreover, is there an ablation study able to show how the recognition model $q$ versus the generative model $p$ in the cNVAE contribute to its alignment with neural representations?
The authors have presented interesting ablation studies in their rebuttal, whose results make perfect sense in retrospect and yet which I could not have predicted a priori. I am thus convinced I need to raise my score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have addressed potential limitations and impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > As the authors admit in the Discussion section, they trained their compressed Nouveau VAE (cNVAE) on optical flow data rather than on video/pixel data.
We appreciate this point, and it relates fundamentally to key choices in our approach. The raw visual image (photons on the retina) is processed by many layers of the visual pathway through specialized computations in the retina and cortex, and these transformations are the subject of much computational work (e.g., [Nishimoto & Gallant](https://www.jneurosci.org/content/31/41/14551)). Here, we wanted to simplify the stimuli to focus on a clearly defined question about optic flow: starting from the optic flow (the velocity field resulting from movement in the scene and by the observer/eyes), we test whether our artificial model can extract the generative “ground truth” variables from the optic flow stimulus. This stimulus also purposefully matches the majority of experiments in area MT that use moving dots, including the dataset in our study.
The use of optic flow instead of raw-pixel stimuli allows the cNVAE to focus on the problem of inferring a representation of the generated motion fields, without having to simultaneously solve the difficult problem of extracting motion fields from pixel data (and/or the conjoint problem). In short, our manuscript focuses on this problem – please see our summary of the general goals of this manuscript – and, by achieving high performance in explaining MT data, implies that MT may be doing something similar (when given equivalent dot-motion stimuli).
In terms of solving the harder problem of processing raw spatiotemporal movies, we propose that our approach provides a possible new path, and we plan to extend the cNVAE to operate on raw images in the future.
> While this does map better to the MT area than pixels do, a very deep model ought to be able to learn to represent optic flow in the course of predicting pixels.
Yes, in the future we will test this. We expect such solutions based on raw pixel data will be achievable with the same framework and more layers of latent variables, although it is possible that these problems are linked.
> Do the authors have a plan to apply their cNVAE in a "clockwork VAE" setting to predict videos? Can we see if optic flow emerges from training a deep generative model?
Thank you for this question. In the Clockwork VAE paper, they use hierarchical latents but in the temporal domain and report the benefits of this inductive bias. In our study, we decoupled spatial and temporal aspects of motion processing and focused solely on the spatial hierarchy. This simplification allowed us to demonstrate that hierarchical models were able to understand multiscale data (like the real world) where a non-hierarchical VAE struggled.
It would be interesting to add temporal hierarchical structure to ROFL in a spatiotemporal video setting and use it to investigate how latents would capture coexisting fast and slow dynamics. For instance, one can include objects that move much faster or slower than self-motion. In the clockwork VAE paper, they found that higher latents captured objects with slower time scales. Here, we found that higher latents specialize in encoding object-related features. It would be interesting to combine their setting with our architecture to find out what happens in the spatiotemporal domain and whether the model learns brain-like representations. This is a scientifically relevant and important question because there is also a hierarchy of intrinsic time scales in cortical dynamics ([Murray et al., 2014](https://www.nature.com/articles/nn.3862)). However, we believe such an important problem deserves its own full treatment in a separate, follow-up paper.
> Moreover, is there an ablation study able to show how the recognition model q versus the generative model p in the cNVAE contribute to its alignment with neural representations?
Thank you for this excellent suggestion. We performed ablation experiments and found complementary insights into why cNVAE outperforms alternative models. We report these results in Figure R1, along with a brief discussion and interpretations in our global rebuttal. Overall, we found your suggestion to be very interesting because understanding the contributions from bottom-up and top-down connections in the cortex is a central problem in neuroscience. We are planning to tackle this question more systematically in the future.
> The authors do not appear to include a Broader Impacts section to review.
We had to fill it out here on OpenReview rather than having a designated section in the paper. Our response is copied below for the reviewer’s consideration:
"**Broader Impacts**: We introduce a new simulation framework that facilitates hypothesis generation and testing in science, but it is not sophisticated enough for potentially harmful applications. Thus we do not anticipate negative social impacts from this work." | Rebuttal 1:
Rebuttal: Here we address the most common concerns and highlight additional analyses inspired by them. We realize our work did not come across clearly to all reviewers and we offer a brief summary of our main points first:
We were motivated by the idea that representations of the natural sensory world involve the inference of the underlying causes of the senses. This is foundational in theories of perception, which posit that our brains learn hierarchical generative models of the world. We focused on the role of hierarchical inference in the learned representations of natural motion and its causes (moving objects and self-motion). Our paper’s main points are:
(i) We created a simulation framework (ROFL) for synthesizing motion stimuli. ROFL enables control over ecologically relevant factors (self-motion and objects) while avoiding confounds due to texture. Importantly, ROFL has a hierarchy of spatial scales — just like the real world — which interact in nontrivial ways (see Figs. 1c-d).
(ii) We introduced the compressed NVAE (cNVAE), which greatly reduces the number of latents.
(iii) We evaluated the representations of hierarchical and non-hierarchical models using a multitude of metrics and found the cNVAE performed favorably.
(iv) We measured neural prediction and alignment of the cNVAE and comparison models using recordings from area MT, where we outperform the SOTA by a factor of 2.
In sum, we focused on understanding the relationship between the learned representations (VAE latents) and “ground truth” causes of sensor data. We showed how representations depend on architecture (hierarchical vs. non-hierarchical), loss (variational vs. maximum likelihood), and the training set.
We will add a Related Work section to the manuscript in the appendix, which should further clarify the contributions of the manuscript. As we are limited to 6K characters, we have included an abridged draft below.
# Related work (draft)
## Neuroscience and VAEs
The connection between VAEs and neuroscience is reviewed by (Marino 2022), but direct comparisons have been limited to these recent papers:
Higgins 2021: Trained beta-VAE models on face images and found that beta-VAE discovers individual latents that are aligned with IT neurons.
Storrs 2021: Trained PixelVAE on synthesized images of glossy surfaces and found that the model spontaneously learned to cluster images according to underlying physical factors and mimicked human perceptual judgments.
Csikor 2022: Investigated the properties of representations and inference in a biologically inspired hierarchical VAE called Topdown VAE that captures key properties of representations in V1 and V2 of the visual cortex.
Miliotou 2023: Learned mappings from the latent space of a hierarchical VAE to fMRI voxel space, which supported improved reconstruction of images from brain data. Ablation experiments found that hierarchy is an essential component.
## Hierarchical VAEs
Ladder VAE (LVAE) was the first to introduce hierarchy to VAEs, which improved upon standard VAEs by sharing information between the inference and generative networks, allowing LVAEs to learn deeper representations. Building on LVAE, Maaløe et al. 2019 introduced Bidirectional-Inference Variational Autoencoder (BIVA), using skip-connections to further enhance the flow of information among latent variables. Both the LVAE and BIVA enabled VAEs to effectively leverage deep stochastic hierarchies.
Recently, the hierarchical Nouveau VAE (NVAE) (Vahdat & Kautz, 2020) achieved SOTA in several benchmarks and generated high-quality faces. Very Deep VAE (vdvae; Child 2021) achieved impressive performance on complex image benchmarks. Neither work evaluated how the hierarchical latent structure changed the quality of learned representations. As far as we know, ours is the first study focused on the evaluation of representations in hierarchical VAEs with applications to neuroscience data.
Additionally, NVAE and vdvae have an undesirable property: their convolutional latents result in a latent space that is several orders of magnitude larger than the input space, defeating the main purpose of autoencoders. For vdvae, only a tiny subset (3%) of its latent space is necessary ([Hazami 2022](https://arxiv.org/abs/2203.13751)). We demonstrate that it is possible to compress hierarchical VAEs, and we focus on investigating the latents.
## Evaluating DNNs on predicting biological neurons.
Many studies have evaluated DNNs on predicting brain responses, but most address static image processing (the “ventral stream”). In contrast, motion processing in the dorsal stream has only been considered thus far by Mineault et al. (2021), who used a deep neural network to extract ground-truth variables from simulated drone flight ("AirSim"). They evaluated neural prediction across many areas and achieved SOTA in prediction for the dataset we consider here, which we greatly outperform.
# New rebuttal results, future work
## Ablation experiments
We performed ablation experiments and found that this offers insight into what is represented by the latents in cNVAE. Lesioning top latents in the encoder pathway removes the object from the reconstructed output, leaving an unperturbed background (see our rebuttal Figure R1-a). In contrast, bottom latent lesions disrupt self-motion and leave object position and velocity mostly undisturbed (Fig R1-d). We extended these ablations to our MT neuron predictions. The performance drops most dramatically when the bottom latents are disrupted. Lesioning the decoder did not lead to significant performance drops (Figure R1-e; top).
## Manifold analysis
We performed a manifold analysis using effective dimensionality (ED; see Figure R1 for a definition) and found ED is larger for cNVAE compared to VAE even though both models have the same number of latents and achieve almost identical loss (Fig. S5). This suggests a future direction to understand how cNVAE accomplishes these properties.
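For reference, a common way to quantify effective dimensionality is the participation ratio of the eigenvalues of the latent covariance matrix; the sketch below is illustrative only (the rebuttal's exact definition is given in Figure R1, and the function name here is hypothetical):

```python
import numpy as np

def effective_dimensionality(latents):
    """Participation-ratio estimate of effective dimensionality:
    ED = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the covariance of the latent responses.
    `latents` has shape (n_samples, n_latents)."""
    cov = np.cov(latents, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()
```

Under this definition, isotropic latents yield an ED near the number of latent dimensions, while highly correlated latents collapse toward ED = 1, matching the intuition that a larger ED reflects a richer latent manifold.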
Pdf: /pdf/a77b775dc0ab01e1a645850aea51cc36f3e4bfb8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition | Accept (oral) | Summary: The paper aims to build a fairer face recognition model.
First, the authors conduct large-scale experiments to show that architectures and hyperparameters matter for fairness (Section 3). Concretely, a wide range of models in different architectures and hyperparameters are evaluated in terms of performance (metric: “Error”) and fairness (metric: Rank Disparity), showing that some models (e.g., DPN) are indeed Pareto-optimal compared to others.
Motivated by this finding, unlike previous bias mitigation strategies based on a fixed neural architecture and a set of hyperparameters, the paper provides a new angle on bias mitigation by searching for fairer neural architectures and hyperparameters.
The paper designs a search strategy to satisfy three desiderata: (1) both architectures and hyperparameters are optimized, (2) both accuracy and fairness are used as the objective, and (3) the searching process should be efficient. To this end, the paper uses some existing approaches, such as SMAC, Hyperband, and ParEGO.
The results show that the proposed method is Pareto-optimal compared to existing methods on two face datasets. Furthermore, the experiments show that the proposed method can also generalize to other datasets and protected attributes.
Strengths: 1. The proposed method is well-motivated by the experiments in Section 3 to show that architectures and hyperparameters matter for fairness.
2. The paper gives a novel angle from architectures and hyperparameters toward bias mitigation.
3. The paper provides insights into why this method works from the perspective of linear separability of protected attributes (L293).
4. The experiments are extensive.
5. In terms of the results, the proposed method is Pareto-optimal, beating existing bias mitigation methods.
6. The code is provided for better reproducibility.
7. The paper is well-written and easy to follow.
Weaknesses: ## Generalization of Pareto-optimal results to different datasets
I appreciate the results of cross-dataset generalization. However, Table 2 only shows the performance results. Therefore, whether or not the proposed method is still Pareto-optimal remains unknown.
### Minor Comments:
The plots in Figure 2-4 are in low resolution. I suggest the authors export the plots in PDF, SVG, or EPS formats instead of image formats (e.g., jpeg).
Fonts in Table 1 are in a strange aspect ratio. I suggest the authors use the “adjustbox” package to adjust the table size.
L300: probes[1] -> probes [1]
Appendix, L806, L808: broken \ref link to the figure. “Figure ??” -> Figure 16
Appendix, Caption of Figure 17: “(b) SMAC model second last layer (b) DPN MagFace on the second last layer” -> “(c) SMAC model second last layer (d) DPN MagFace on the second last layer.”
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Can the authors add the fairness-performance results of cross-dataset generalization instead of only showing the performance results?
2. I appreciate the authors’ efforts in explaining why the proposed method works (L293). However, in terms of neural architecture, is there any pattern that makes some architectures more Pareto-optimal than others? This would be interesting because future works may use such a pattern to manually design a fairer architecture.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations (L333-348). From my perspective, the paper does not have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and thoughtful feedback on our manuscript. We appreciate that you find our approach well-motivated, our angle of architectures and hyperparameters novel, our experiments extensive, our method reproducible and the paper well-written and easy to follow. We address each of your points below:
**Weakness & Q1: Fairness performance on cross-dataset generalization**
Thank you for raising this point; we have updated our manuscript to include Rank Disparity in Table 2, and we replicate that table below. We note that the only dataset with usable protected attribute labels is AgeDB, so we present that result here. We divide the ages into groups of 0-25 yrs, 25-50 yrs, 50-75 yrs and 75-100 yrs, and report the maximum disparity among these groups. We note that the SMAC models are Pareto-*dominant* here, showing the lowest error and lowest rank disparity.
| Dataset | Model | Accuracy | Disparity |
|----------|-------------|------------|------------|
| CelebA | DPN_CosFace | 64.84 | 0.2824 |
| | DPN_MagFace | 60.00 | 0.3129 |
| | SMAC_000 | 80.23 | 0.2188 |
| | SMAC_010 | **82.35** | **0.1229** |
| VGGFace2 | DPN_SGD | 71.866 | 0.2247 |
| | DPN_AdamW | 61.316 | 0.2114 |
| | Rexnet_100 | 59.1833 | 0.2892 |
| | SMAC_301 | **81.533** | **0.1883** |
**Q2: Why are these architectures more fair?**
Thank you for your question. We precisely search for a recurring Dual Path Network block in terms of architecture. The handcrafted DPN block contains a Conv3x3Bn (3x3 convolution followed by batch norm), a BnConv5x5 (batch norm followed by 5x5 convolution) and a BnConv3x3 (batch norm followed by 3x3 convolution). Among the searched architectures, we find a strong preference for the BnConv3x3 operation (every architecture contains at least one such operation). Furthermore, in terms of the optimal face recognition head, we surprisingly find a strong preference for “CosFace” over “MagFace” and “ArcFace”, with “ArcFace” the least preferred during search. Moreover, we also discover that the SGD optimizer, often with high learning rates (> 0.1), is preferred over the AdamW and Adam optimizers from our search space.
The proposed multi-objective neural architecture search and HPO simultaneously optimize two objectives: accuracy and a fairness metric (e.g. rank disparity). Hence, we bias the search toward models which do not exploit the protected attribute (e.g. gender) to make classifications. We hypothesize that the SMAC models learn to use more fine-grained facial features to distinguish faces instead of exploiting obvious coarse features like protected attributes (gender, race, age). We leave a more detailed analysis of the properties of the learned features for future work.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' responses and reviewers' comments. The response addresses my concern. I raise my rating to "Strong Accept." I encourage the authors to add the response to the final version. | Summary: This paper proposes a brand new framework (NAS+HPO) to mitigate biases in FR. The discussion is extensive and interesting, but experiments on authoritative face recognition datasets are required, e.g., Ms1m, Glint360K and webface260m.
Strengths: a. The presentation is easy to follow.
b. The discussion is extensive and interesting.
c. The paper proposes a new framework (NAS+HPO jointly) to mitigate biases in FR.
Weaknesses: The reported FR performance of this method should also be verified on large-scale FR datasets, since many methods work on small datasets but fail on large ones. Some authoritative face recognition datasets are suggested, e.g., Ms1m v3[1], Glint360K[2] and webface260m[3].
[1] Lightweight face recognition challenge.
[2] Killing Two Birds with One Stone:Efficient and Robust Training of Face Recognition CNNs by Partial FC
[3] WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face Recognition
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: null
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 4 excellent
Limitations: null
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We’d like to first thank you for your time and thoughtful feedback on our manuscript. We appreciate that you find our presentation easy to follow, our discussion extensive and interesting. We have conducted new analysis and answer your question below:
**New Results**
**The Effect of Pretraining**
We now study the effect of pre-training vs. training from scratch (Figure 2 (a) and (b) in the rebuttal PDF) for face recognition using the Dual Path Network architecture, which is the basis for our search space definition. Interestingly, we find that the disparity of the pre-trained model is **much** higher than that of the model trained from scratch. Moreover, we observe that while the pre-trained model starts strong in terms of accuracy, the model trained from scratch eventually catches up. This opens up an interesting direction of future work on how to effectively exploit pre-trained models for face recognition systems without increasing bias.
**SMAC Pareto-dominates other NAS Methods**
We now study models discovered by other NAS methods (using a limited time budget for search), and we observe that SMAC (multi-fidelity + Bayesian optimization) optimizes both compute-efficiency and performance.
| | Accuracy | Rank Disparity | Disparity | Ratio | Rank Ratio | Error Ratio |
|:-----------------|-----------:|-----------------:|------------:|----------:|-------------:|--------------:|
| MO-ASHA_032 | 0.934739 | 0.390588 | 0.0485621 | 0.0533381 | 0.448144 | 0.542336 |
| NSGA-II_728 | 0.868105 | 0.599085 | 0.0857516 | 0.103913 | 0.490213 | **0.490651** |
| SMAC_301 | **0.963366** | **0.230327** | **0.0300871** | **0.0317269** | **0.367554** | 0.582215 |
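The Pareto-dominance criterion invoked throughout these comparisons has a simple operational form: one candidate dominates another if it is at least as good on every objective and strictly better on at least one. A minimal sketch with illustrative numbers (not the exact values from the tables above; the function name is hypothetical):

```python
def pareto_dominates(a, b):
    """True if candidate `a` Pareto-dominates `b`: at least as good on
    every objective and strictly better on at least one. `a` and `b` are
    dicts mapping objective name -> value, where lower is better
    (e.g. error and disparity, both minimized)."""
    at_least_as_good = all(a[k] <= b[k] for k in a)
    strictly_better = any(a[k] < b[k] for k in a)
    return at_least_as_good and strictly_better
```

Note that two candidates can be mutually non-dominating (each better on a different objective); such candidates both lie on the Pareto front, which is why the multi-objective search returns a set of models rather than a single winner.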
**Fairness w.r.t. Age Groups on AgeDB**
We conducted an analysis on fairness across age groups on AgeDB and find that our models are **pareto-dominant**.
| Dataset | Model | Accuracy | Disparity |
|----------|-------------|------------|------------|
| CelebA | DPN_CosFace | 64.84 | 0.2824 |
| | DPN_MagFace | 60.00 | 0.3129 |
| | SMAC_000 | 80.23 | 0.2188 |
| | SMAC_010 | **82.35** | **0.1229** |
| VGGFace2 | DPN_SGD | 71.866 | 0.2247 |
| | DPN_AdamW | 61.316 | 0.2114 |
| | Rexnet_100 | 59.1833 | 0.2892 |
| | SMAC_301 | **81.533** | **0.1883** |
**Training on Larger Datasets**
We appreciate your point that our learned architectures were not evaluated on additional very large-scale FR datasets. We did not conduct these experiments and leave them for future work, since we specifically focused on datasets which have protected attribute labels, unlike Glint360K. During the rebuttal period, we were unable to obtain the WebFace260M dataset given the process and license agreement protocol. Finally, as the ethics reviewer has stated, the use of some of these datasets is controversial; MS-Celeb-1M, for example, is listed as a [deprecated dataset](https://neurips.cc/public/deprecated-datasets) by NeurIPS itself.
---
Rebuttal Comment 1.1:
Title: S
Comment: Some of my concerns are resolved; I'll raise my score. Good job. | Summary: The paper focuses on Bias Mitigation for face identification, i.e., ensuring that face identification works “well” for different identities: gender, race, etc. Unlike prior work, which focuses on backbone-agnostic methods to mitigate bias, this work explores the relevance of the inductive bias encoded in different Deep Learning models' backbones to the bias issue. In other words, are there model backbones better at learning robust features and ensuring good performance on samples from different identities? Through an extensive empirical analysis based on Neural Architecture Search on two face identification datasets, the authors uncover model configurations that significantly improve performance over standard model backbones. More surprisingly, the authors show that these backbones' performance is better than or competitive with standard backbones paired with standard bias mitigation methods. Moreover, the authors confirm the generalizability of the configurations uncovered using CelebA and VGGFace2 by testing them on other datasets, further confirming their competitive performance. Finally, the authors analyze the new model configurations and verify that the features learned by the models are less likely to be discriminative between the biased groups, confirming that they learn more diverse and robust features.
Strengths: 1- The work is well motivated; prior work needs to include an analysis of model architecture relevance to the bias issue.
2-The results are interesting and relevant to fairness/bias community practitioners.
3-The advantages of newly discovered model configurations are explored by a well-designed empirical analysis that confirms the configurations' more robust learned features.
Weaknesses: 1- Some choices in the experimental design would benefit from further motivation. For example:
Why was the “multi-fidelity Bayesian optimization method SMAC3” chosen in particular? Are there other methods that could also work?
2- SMAC_301 was the architecture that works well across datasets. I understand that 301 denotes the operations that constitute the novel architecture. However, the authors do not discuss the details of these operations or why they think they are meaningful choices compared to other choices ruled out by the NAS. Some discussion here would be helpful. Why do the authors think this configuration is better able to learn non-linearly separable bias features?
3- In Section 3, why are the models trained on a gender-balanced subset of the dataset? Wouldn’t one want to train on a gender-imbalanced split to see which architectures are less likely to be influenced by the imbalance? Is this the same in Section 4? Please clarify.
4- In the analysis in Section 4.2, were the hyper-parameters (learning rate, optimizer) of the timm models tuned too? Or were they the ones used by the original papers? If it is the latter, then it is an unfair comparison to the SMAC models since those hyper-parameters were tuned as outlined in Section 4.1.
5- From reading the paper in detail, I understand now that one of the motivations of Section 3 analysis was to limit the number of architectures explored in Section 4 (only DPN was considered since it achieved Pareto optimal performance on both datasets). However, I did not get that from the first read, so further clarification in the text about that would be helpful.
6- In Section 3, the hyper-parameters of the different models were not optimized. It is likely very computationally expensive. Nevertheless, it does undermine some elements of the analysis, so I would make that clear as limitations.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I am overall positive about this work. However, I need further clarifications as outlined in the Weaknesses section, particularly questions (2, 3, 4). I am happy to increase my score upon adequate further clarification.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I commend the authors for explicitly discussing their work's technical limitations and that while it improves the notion of technical fairness, the advancement could still be harmful in downstream applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and thoughtful feedback on our manuscript. We appreciate that you see the novelty in our work being the first to systematically conduct a large-scale analysis on the problem of fairness face recognition with different architectures and hyperparameters. We address each of your points below.
**W1: The choice of SMAC3**
Thank you for your suggestion. We agree that, given the plethora of methods for multi-objective NAS+HPO, there are multiple algorithms one could choose from. Given that SMAC3 supports parallelization across GPUs and multi-fidelity search, we initially restricted ourselves to SMAC3. Following your advice, we have now studied two other multi-objective methods, MO-ASHA [1] and NSGA-II [2] from the syne-tune [3] library, using our search space design. Note that we ran the search for a limited time budget of 48 hrs, so the models discovered would likely improve with a longer search budget. We will include an extended experiment in our updated manuscript. We present the results below:
| | Accuracy | Rank Disparity | Disparity | Ratio | Rank Ratio | Error Ratio |
|:-----------------|-----------:|-----------------:|------------:|----------:|-------------:|--------------:|
| MO-ASHA_032 | 0.934739 | 0.390588 | 0.0485621 | 0.0533381 | 0.448144 | 0.542336 |
| NSGA-II_728 | 0.868105 | 0.599085 | 0.0857516 | 0.103913 | 0.490213 | **0.490651** |
| SMAC_301 | **0.963366** | **0.230327** | **0.0300871** | **0.0317269** | **0.367554** | 0.582215 |
[1] Schmucker, R., Donini, M., Zafar, M.B., Salinas, D. and Archambeau, C., 2021. Multi-objective asynchronous successive halving. arXiv preprint arXiv:2106.12639
[2] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T.A.M.T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE transactions on evolutionary computation, 6(2), pp.182-197
[3] Salinas, D., Seeger, M., Klein, A., Perrone, V., Wistuba, M. and Archambeau, C., 2022, September. Syne tune: A library for large scale hyperparameter tuning and reproducible research. In International Conference on Automated Machine Learning (pp. 16-1). PMLR
**W2: Why is SMAC_301 the best model?**
In most of the fair models discovered by NAS+HPO, we see a prevalence of the BnConv3x3 operation (every discovered architecture contains at least one such operation). Furthermore, in terms of the optimal face recognition head, we surprisingly find a strong preference for “CosFace” over “MagFace” and “ArcFace”; “ArcFace” is selected least often during the search. Moreover, we also discover that the SGD optimizer with high learning rates (> 0.1) is often preferred over the AdamW and Adam optimizers in our search space. We believe these are some of the architectural and hyperparameter characteristics important for making models fairer. We will include an extended discussion of these components in our updated manuscript.
**W3: Gender-balanced training**
We have employed the training regime for fair face identification as described by [4], which shows the importance of training models with fully balanced datasets (both balanced in identities and number of images per identity). They point out how these two types of imbalances (both at training and testing time) can cause researchers to draw misleading or incorrect conclusions. Thus, balancing the training and testing data as we did in our experiments is an important step to disaggregate the disparity introduced by the model architecture and hyperparameters, from the disparity introduced by the data imbalance.
[4] Cherepanova, V., Reich, S., Dooley, S., Souri, H., Goldblum, M., & Goldstein, T. (2022). A deep dive into dataset imbalance and bias in face identification. Sixth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2023.
**W4: Hyperparameter tuning**
We conduct our large scale analysis with handcrafted architectures and the hyperparameters as reported in their respective papers. In addition to this, we also study every model with 9-13 different hyperparameter combinations for each model, to allow for more flexibility in terms of optimizers, face-recognition heads, and learning rates (Section 3.2 Experimental Setup). Our goal is to compare these already strong pipelines with ones that can be discovered automatically using joint NAS+HPO.
**W5&6: Clarity of writing**
We greatly appreciate your careful read of our paper. We have updated the manuscript to incorporate this feedback, and we will include these edits in our updated manuscript.
---
Rebuttal Comment 1.1:
Comment: The response addresses my concerns. I encourage the authors to revise the manuscript and include the updates. I revised my score accordingly. | Summary: This paper presents a new perspective on bias mitigation in machine learning models, challenging the conventional belief that one should first find the highest-performing model and then apply a bias mitigation strategy. The authors propose that finding a fairer architecture offers significant gains compared to conventional bias mitigation strategies. To test this hypothesis, they conduct the first neural architecture search for fairness and a search for hyperparameters in face recognition.
Strengths: This paper proposes a new way to mitigate biases in face recognition systems from the perspective of fairer model architectures.
This paper conducts the first large-scale analysis of the impact of architectures and hyperparameters on bias in face recognition, demonstrating that the implicit convention of choosing the highest-accuracy architectures is a sub-optimal strategy for fairness.
This paper may be the first to apply existing tools from NAS and HPO to design a fair face recognition model automatically.
Weaknesses: (1) According to the definition of rank difference, the smaller the model's error, the fairer the model. Therefore, differences in model parameters will lead to differences in model performance. The paper lacks a consistent constraint on the magnitude of the model parameters when searching for a fairer structure.
(2) Why are the results in Table 2 far worse than those reported in other papers?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first thank you for your time and thoughtful feedback on our manuscript. We are glad that you find our approach novel and interesting. We address each of your questions below:
**W1: Rank Disparity Definition**
Thank you for raising this point. Precisely as per the definition of rank disparity, Rank(image) = 0 if and only if Error(image) = 0. This, however, **doesn’t** necessarily imply that decreasing error would correspond to decreasing rank disparity. Unlike the ratio of errors metric, rank disparity is a much richer metric which **doesn’t** have a strong correlation with error rate.
To probe this question, we conducted a new analysis (Figure 1 (a) in the PDF) which examines the correlation of each fairness-metric with model statistics. We compute statistics like number of parameters, model latency, number of convolutions, number of linear layers, and number of batch-norms in a model’s definition. Interestingly, we observe very low and non-significant correlations between parameter sizes and different fairness metrics. This observation supports the claim that increases in accuracy and decreases in disparity are very closely tied to the architectures and feature representations of the model, irrespective of the parameter size of the model. Hence, not constraining the parameter size helps our NAS+HPO approach search in a richer search space.
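For illustration, the kind of correlation analysis described can be sketched in plain Python. The statistic names and numbers below are hypothetical examples, not values from the paper:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a model statistic and a fairness metric."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: parameter counts (in millions) vs. rank disparity
# for five models; a near-zero r would mirror the non-significant
# correlations reported in the rebuttal.
params = [12.0, 25.5, 44.0, 60.2, 86.1]
rank_disparity = [0.41, 0.23, 0.58, 0.31, 0.45]
r = pearson(params, rank_disparity)
```

The same routine can be repeated for latency, convolution counts, etc., against each fairness metric to build a correlation table like Figure 1 (a).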
**W2: Table 2 Results**
Table 2 reports results of transfer learning from the given source datasets (VGGFace2 on top and CelebA below) to the listed target datasets. Thus, the performance will be lower than if each model were fine-tuned or had its hyperparameters optimized on each target dataset, or if a different pre-training dataset were used. We highlight that the transfer learning results are strong and indicate that the representations learned by our novel architectures are generalizable in a way that those of the other models are not. We have clarified this point in our updated manuscript.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have read the authors' responses and reviewers' comments. The response addresses my concern. I keep my rating. | Rebuttal 1:
Rebuttal: We first thank all the reviewers for their insightful feedback and suggestions. Our work shows that bias in face recognition systems is actually inherent to their architectures and hyperparameters, and we can design fairer systems by searching for fair architectures, in fact significantly surpassing previous approaches. We appreciate that the reviewers find our perspective on bias mitigation interesting and fresh (**jfj2**, **kze1**, **Nw7v**, **3GhH**), our presentation clear and well-motivated (**jfj2**, **Nw7v**, **mXr9**, **3GhH**) and our experiments and analysis thorough and extensive ( **mXr9**, **3GhH**, **Nw7v**). Following suggestions made by the reviewers, we conducted further analyses and evaluations, some of which we highlight below:
**New Results**
1. We now conduct an analysis on fairness across age groups on AgeDB and find that our models are **Pareto-dominant**.
| Dataset | Model | Accuracy | Disparity |
|----------|-------------|------------|------------|
| CelebA | DPN_CosFace | 64.84 | 0.2824 |
| | DPN_MagFace | 60.00 | 0.3129 |
| | SMAC_000 | 80.23 | 0.2188 |
| | SMAC_010 | **82.35** | **0.1229** |
| VGGFace2 | DPN_SGD | 71.866 | 0.2247 |
| | DPN_AdamW | 61.316 | 0.2114 |
| | Rexnet_100 | 59.1833 | 0.2892 |
| | SMAC_301 | **81.533** | **0.1883** |
2. We now study models discovered by other NAS methods (using a limited time-budget for search), and we observe that SMAC (multi-fidelity+Bayesian Optimization) optimizes compute-efficiency and performance.
| | Accuracy | Rank Disparity | Disparity | Ratio | Rank Ratio | Error Ratio |
|:-----------------|-----------:|-----------------:|------------:|----------:|-------------:|--------------:|
| MO-ASHA_032 | 0.934739 | 0.390588 | 0.0485621 | 0.0533381 | 0.448144 | 0.542336 |
| NSGA-II_728 | 0.868105 | 0.599085 | 0.0857516 | 0.103913 | 0.490213 | **0.490651** |
| SMAC_301 | **0.963366** | **0.230327** | **0.0300871** | **0.0317269** | **0.367554** | 0.582215 |
3. We now study the effect of pre-training vs. training from scratch (Figure 2 (a) and (b) in rebuttal PDF) for face-recognition using the Dual Path Network architecture which is the basis for our search space definition. Interestingly, we find that the disparity of the pre-trained model is **much** higher compared to the model trained from scratch. Moreover, we observe that while the pre-trained model starts strong in terms of accuracy, the model trained from scratch eventually catches up. This opens up an interesting direction of future work on how to effectively exploit pre-trained models for face-recognition systems without increasing bias.
**Ethical Concerns**
We strongly believe that our findings need to be placed into the larger sociotechnical context of facial recognition. The impacts of facial recognition technologies on individuals are well-documented, and our work considers a new way to reduce harms caused by disparities in these systems. We adhered to the [NeurIPS deprecated dataset guidelines](https://neurips.cc/public/deprecated-datasets) for our choice of datasets. MS-Celeb-1M and MegaFace are two datasets widely used by the face recognition community, even today, which we omitted from our experiments due to ethical issues. We have updated our manuscript to reflect these points and to further highlight the representational issues with CelebA as pointed out by reviewers.
Pdf: /pdf/02e7407946d48c9b07e683b0b69ad99309847564.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors offer a fresh view on mitigating fairness bias in ML by leveraging neural architecture search (NAS) and hyperparameter optimization (HPO).
The authors demonstrate their idea on the exemplary problem of face identification, where fairness biases have tangible consequences for society. They utilize a wide range of model architectures and define a NAS+HPO search strategy where multi-objective optimization helps balance the tradeoff between accuracy and fairness (as quantified, e.g., by the rank disparity metric).
Strengths: + The authors formally describe a new paradigm compared with classical bias mitigation techniques that have traditionally focused on postprocessing/rectifying ML predictions, preprocessing/balancing the datasets, or extending the loss.
+ The solution is systematically devised and the experiments are straightforward to follow.
+ The experimental results are insightful and helpful in practice. I find it inspiring that the NAS models generalize to new protected attributes in new datasets.
+ The authors provide their source code besides a variety of analysis scenarios available to run via notebooks.
Weaknesses: - There was no discussion on the impact of pretraining. With the availability of a large number of foundational models, it would be very relevant to shed light into which ones generalize better and why.
- The theoretical analysis is a bit lacking. I was expecting more explanation of why the NAS models outperform other bias mitigation strategies and generalize better to other sensitive attributes. Is it because you are forcing the model to work harder and to avoid misusing these sensitive attributes as shortcuts when making predictions? (This would explain the reduced linear separability of protected attributes.)
- The visualization of the results could be more insightful. For example, confusion matrices / similarity matrices might be useful for conducting error analysis and shedding light on the improvements facilitated by the NAS paradigm. [Such analysis](https://arxiv.org/abs/2007.06068) has revealed many characteristics of VGGFace, e.g., oftentimes, gender misclassification is due to a labeling issue (e.g., the scraped image is not of the actor, but of their opposite-gender spouse).
Minor and language issues:
- The figures could be better annotated (e.g. to explain that the two red dots in Figure 2 are two variants of DPN)
- when comparing to other bias mitigation techniques => compared?
- at the most extreme low errors => unclear
- oepration
- to supports
- are Pareto-optimal the top performing … => are the Pareto-optimal top-performing …
- recognititon
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you considered other alternatives to SMAC3 or ParEGO? How generalizable are the insights in section 4.2. to possible alternatives?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Sufficiently discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your time and thoughtful feedback on our manuscript. We appreciate that you see our view on mitigating fairness bias in ML as fresh and interesting, our solution systematically devised, and our experiments straightforward to follow. Further, we are glad that you find our results insightful, inspiring and useful in practice. We address each of your questions below:
**W1: The Effect of Pre-training**
Thank you for raising this important point. Prompted by your feedback, we have now fine-tuned a pre-trained DPN model obtained from timm on VGGFace2, as DPN is the most representative of our search space. We compare the error trajectory of this model with and without pre-training. Interestingly, we observe that while the pre-trained model starts strong in terms of accuracy, the model trained from scratch catches up quickly. More importantly, the disparity of the pre-trained model is **much** higher compared to the model trained from scratch. The corresponding plots can be found in the attached PDF (Figure 2 (a) and (b)). We have updated our working draft accordingly, and we will perform more experiments to include in the updated manuscript.
**W2: Theoretical Analysis**
The proposed multi-objective neural architecture search and HPO simultaneously optimizes two objectives, firstly the accuracy and secondly the fairness metric (e.g. rank disparity). Hence, we bias the search toward models which do not exploit the protected attribute (e.g. gender) to make classifications. This is also reflected in the reduced linear separability of the features of the models discovered by SMAC. We hypothesize that the SMAC models learn to use more fine grained facial features to distinguish faces instead of exploiting obvious coarse features like protected attributes (gender, race, age). We leave a more detailed analysis on the properties of the features learned for future work.
**W3: Visualizations**
We appreciate this feedback. We have now conducted a new analysis in accordance with your suggestion in Figure 1 (a) of the attached PDF. Specifically, we visualize a dendrogram of the SMAC\_301 model as well as three highly performing DPNs. The visualization shows the correlation of the logits for each image in each identity. From this analysis, we observe that SMAC\_301 and DPN\_CosFace\_SGD have smaller cross-logit similarities, pointing to the fact that they do not try to exploit easier image properties like protected attributes to cluster images. The average similarities are lower for these models compared to others and the max similarities are much higher. We hypothesize that this property aids fair classifications.
**Q1 SMAC3 and ParEGO Pareto-dominate other methods**
We agree that given the plethora of methods for multi-objective NAS+HPO, there are multiple algorithms one could choose from. Given that SMAC3 supports parallelization and multi-fidelity search, we primarily restricted ourselves to it for compute optimal search. However, following your advice, we now studied two other multi-objective methods: MOASHA [1] and NSGA-II [2] from the syne-tune [3] library, using our search space design. Note that we run the search for a limited time budget of 48 hrs, so the models discovered may improve with a longer search budget. We will include an extended search comparison in our updated manuscript. The results are found below and indicate that our chosen method Pareto-dominates the other methods for all metrics except for Error Ratio where it is Pareto-optimal.
| | Accuracy | Rank Disparity | Disparity | Ratio | Rank Ratio | Error Ratio |
|:-----------------|-----------:|-----------------:|------------:|----------:|-------------:|--------------:|
| MO-ASHA_032 | 0.934739 | 0.390588 | 0.0485621 | 0.0533381 | 0.448144 | 0.542336 |
| NSGA-II_728 | 0.868105 | 0.599085 | 0.0857516 | 0.103913 | 0.490213 | **0.490651** |
| SMAC_301 | **0.963366** | **0.230327** | **0.0300871** | **0.0317269** | **0.367554** | 0.582215 |
[1] Schmucker, R., Donini, M., Zafar, M.B., Salinas, D. and Archambeau, C., 2021. Multi-objective asynchronous successive halving. arXiv preprint arXiv:2106.12639
[2] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T.A.M.T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE transactions on evolutionary computation, 6(2), pp.182-197
[3] Salinas, D., Seeger, M., Klein, A., Perrone, V., Wistuba, M. and Archambeau, C., 2022, September. Syne tune: A library for large scale hyperparameter tuning and reproducible research. In International Conference on Automated Machine Learning (pp. 16-1). PMLR
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response and for conducting the additional experiments. This confirmed my assessment about the utility and the solid results of the presented method. | null | null | null | null | null | null |
Unexpected Improvements to Expected Improvement for Bayesian Optimization | Accept (spotlight) | Summary: This paper addresses a major weakness of Bayesian expected improvement acquisition functions, which are ubiquitously used for black-box optimization tasks such as computational hyperparameter tuning, materials science, and biomedical research. It is very common to use gradient-based optimizers to find local maxima of the acquisition surface, however expected-improvement acquisition functions have the very unfortunate pathology of a completely flat acquisition surface (where both the acquisition value and acquisition gradient are 0) on large regions of input space, particularly as optimization progresses and the best known solution improves. This pathology makes Bayesian optimization extremely sensitive to implementation decisions, particularly the initialization scheme of the acquisition maximization subproblem, which hinders BayesOpt practitioners in academia and industry. This paper rightfully places numerical precision and stability as one of the primary considerations in acquisition function design, and proposes simple, intuitive modifications to expected improvement acquisition functions that significantly improve performance.
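For illustration, the flat-acquisition pathology is easy to reproduce in double precision. The sketch below (my own, not the paper's implementation) compares a naive computation of the analytic EI factor h(z) = phi(z) + z * Phi(z), where z = (mu - best)/sigma, against a simple log-space asymptotic; the threshold and Mills-ratio expansion are rough illustrative choices:

```python
import math

LOG_SQRT_2PI = 0.5 * math.log(2.0 * math.pi)

def ei_factor(z):
    # Naive h(z) = phi(z) + z * Phi(z); EI(x) = sigma * h(z).
    # Both terms underflow to exactly 0.0 for z << 0 in float64.
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi + z * big_phi

def log_ei_factor(z):
    # log h(z). For z <= -5, use the leading Mills-ratio asymptotic
    # h(z) ~ phi(z) / z**2 -- a crude sketch, not the paper's scheme.
    if z > -5.0:
        return math.log(ei_factor(z))
    return -0.5 * z * z - LOG_SQRT_2PI - 2.0 * math.log(-z)

print(ei_factor(-40.0))      # numerically exactly 0.0 (no gradient signal)
print(log_ei_factor(-40.0))  # finite, about -808.3 (still informative)
```

At z = -40 the naive EI and its gradient are identically zero, so a gradient-based acquisition optimizer initialized there cannot move, while the log-space value still discriminates between points.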
Strengths: I am strongly in favor of accepting this paper. The basic problem the authors are addressing is one I have often encountered myself, and I have even tried some similar ideas as those presented in this paper to try to address the problem, however I had to shelve the project due to competing demands for my time. I'm delighted to see the problem addressed so thoroughly here.
The greatest strength of this paper is the emphasis placed on how acquisition function design interacts with the optimization algorithms used to find their maxima. Generally speaking I feel this aspect of acquisition function design is often neglected in many Bayesian optimization papers, to the great detriment of the field.
Given the widespread use of BayesOpt across industries, and the use of EI-style acquisition functions in particular, I think this paper could have significant practical impact on multiple fields.
Weaknesses: This work is ready for publication without any significant revisions. I would encourage reviewers in general to think about the opportunity cost of burdening authors with minor or tangential concerns, slowing the development of follow-up work.
It's worth noting that the weaknesses of EI-style acquisition functions are fairly well documented in latent-space BayesOpt papers, such as [1] and [2]. Both of those works employed a heuristic I didn't see mentioned in the paper, which is to scale the $\max_{x_i \in D} f(x_i)$ term in the acquisition function by some factor < 1 (e.g. 0.9). Some brief discussion of this point in the related work could help better communicate the potential impact of this paper.
[1] Tripp, A., Daxberger, E., and Hernández-Lobato, J. M. Sample-efficient optimization in the latent space of deep generative models via weighted retraining. Advances in Neural Information Processing Systems, 33, 2020.
[2] Stanton, S., Maddox, W., Gruver, N., Maffettone, P., Delaney, E., Greenside, P., & Wilson, A. G. (2022, June). Accelerating Bayesian optimization for biological sequence design with denoising autoencoders. In International Conference on Machine Learning (pp. 20459-20478). PMLR.
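For concreteness, the incumbent-scaling heuristic from [1] and [2] looks roughly like the sketch below (illustrative only: it assumes maximization with a positive incumbent, and the function name and default factor 0.9 are my own choices):

```python
import math

def scaled_ei(mu, sigma, incumbent, scale=0.9):
    # Analytic EI evaluated against a shrunk incumbent, scale * incumbent.
    # With 0 < scale < 1 and incumbent > 0, the improvement threshold is
    # lowered, which keeps EI and its gradient away from exact zero longer.
    # Note: for a negative incumbent, the same scaling raises the bar
    # instead, so the heuristic is sign-sensitive.
    best = scale * incumbent
    z = (mu - best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best) * big_phi + sigma * phi
```

The sketch makes the drawback visible: the right factor is a problem-dependent hyperparameter, unlike a reformulation that fixes the numerics directly.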
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can you comment on whether you expect batch acquisition value optimization to outperform sequential greedy optimization when there are strong locality constraints placed on the inner loop problem (i.e. $d(x_0, x_t) < \varepsilon$ for all $x_t$ optimization iterates)? Intuitively it seems that the performance of batch acquisition optimization once again comes down to heuristics for choosing the right collection of points as the initial solution (e.g. [2]), since the iterates may not be able to move far enough from the initialization for the improved acquisition landscape to make much difference.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I think the discussion section could be expanded a bit. In particular I think the following rather vague sentence could be made more specific:
"While our contributions may not apply verbatim to other classes of acquisition functions, our key insights and strategies do translate and could help e.g. with improving information-based [20, 42], cost-aware [26, 36], and other types of acquisition functions that are prone to similar numerical challenges."
I take this to mean that this paper has primarily focused on resolving numerical difficulties arising from the use of the max operator, and other acquisition functions may have numerical issues from operators that are not the max. If this is the case I think it could be stated more clearly, as it would give readers a clearer picture of avenues for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and encouragement.
__Regarding the scaling of the incumbent__
Thank you for pointing us to the heuristic of scaling the incumbent by a factor. We have encountered and experimented with this heuristic in the past, and while it can be employed to try to avoid numerical degeneracies, it has drawbacks compared to the solution proposed in the present paper:
- For a fixed scaling factor, the resulting acquisition function – to the best of our knowledge – does not have a principled grounding, requires setting a hyper-parameter that the optimization is sensitive to, and it is unclear which scaling factor is sufficient to remedy the numerical problems a-priori.
- Using a homotopy approach to sequentially increase the scaling factor can be more robust, but this requires solving a sequence of acquisition function optimization problems, which is more computationally expensive than optimizing LogEI once. We experimented with this approach in the early stages of this project.
LogEI successfully circumvents the downsides of the incumbent-scaling approach without adding the computational overhead of solving multiple acquisition function optimization problems.
We agree that discussing this will further improve the paper and the motivation of the methods. Will do so in the related work section.
__Regarding greedy vs sequential batch optimization__
We are not sure we fully understand your comment regarding “strong locality constraints”, but believe it may refer to explicitly constraining the distance between new candidates and previously seen points in the context of latent space BO. Such a heuristic has been used in the literature to avoid exploring parts of the latent space that the decoder cannot map back well to the original input space. Assuming this is indeed the setting, it isn’t immediately clear to us whether these imposed locality constraints would have a significant effect on whether joint optimization is beneficial over sequential greedy. Certainly, as $\varepsilon \to 0$ we would expect the benefit to disappear (and similarly we would expect to recover it fully as $\varepsilon \to \infty$, assuming the decoder performance is good across the entire latent space), but it’s hard to say in general what the behavior would be. It’s an interesting question that deserves further study.
__Regarding the extensibility of the methods__
We seek to highlight two distinct aspects:
- Analytical LogEI relies primarily on careful and stable implementations of Gaussian log-probability functions, their sums, differences, and fractions. We believe information-based acquisition functions like Gibbon and Joint Entropy Search (JES) could benefit from similar treatments. A particularly striking result of our experiments revealed that JES fails to outperform random search on the Michalewicz function in more than 8 dimensions, which could be caused by numerical issues in the acquisition function.
- Monte-Carlo (“q”) LogEI primarily relies on novel smooth approximations to the ReLU nonlinearity and the max operator, in addition to the associated log-tricks. We believe that the fat-tailed smooth operators in particular (Appendix A.3) could find wider application even outside of Bayesian optimization, e.g. in the design and optimization of deep architectures, where saturating nonlinearities and vanishing gradients are a known problem. We aim to explore this in follow-up work.
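For illustration, the two basic smoothing ingredients (a smooth ReLU and a smooth max) can be sketched as below; this shows only the generic idea and omits the log-space tricks and the fat-tailed variants of Appendix A.3:

```python
import math

def softplus(x, tau=1.0):
    # Smooth, strictly positive surrogate for relu(x) = max(x, 0);
    # its gradient never vanishes exactly. tau -> 0 recovers relu.
    # Numerically stable form: tau * (max(y, 0) + log1p(exp(-|y|))).
    y = x / tau
    return tau * (max(y, 0.0) + math.log1p(math.exp(-abs(y))))

def smooth_max(xs, tau=1.0):
    # logsumexp-based smooth surrogate for max(xs); overestimates the
    # true max by at most tau * log(len(xs)).
    m = max(xs)
    return m + tau * math.log(sum(math.exp((x - m) / tau) for x in xs))
```

Both surrogates are differentiable everywhere, which is what lets a Monte-Carlo acquisition value propagate useful gradients even where the hard ReLU/max would be flat.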
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thanks for your response, I remain strongly in favor of acceptance. I feel that it is relevant to mention that the problem the authors solve has been such a headache for me that I've already started using the code the authors included in the supplementary material for my own work. I can't think of better evidence for the potential impact of the paper.
I completely agree with your comments on the incumbent scaling approach. I will have to think a bit more about how to make my question regarding constrained batch optimization more clear. Your clarification of the discussion section is great, I hope you include it in the camera-ready.
---
Reply to Comment 1.1.1:
Comment: We will include the additional clarification in the paper, and are happy to hear you are already using the code! | Summary: The paper proposes LogEI, family of acquisition functions with improved numerical stability over EI that makes it more suitable for gradient-based acquisition function optimization, all while retaining similar optima as EI. Pathologies of EI are visualized and analyzed, and the approximation error between qEI and qLogEI is theoretically bounded. Empirically, LogEI clearly outperforms EI on most tasks, suggesting that it can act as a drop-in replacement for EI.
Strengths: __Good motivation__: Acquisition function optimization is an often-overlooked aspect of Bayesian optimization, and the paper does a good job of displaying the difficulties of acquisition function optimization (Fig. 1) and how the proposed approach remedies the issue.
__Simple, effective and extensible solution:__ Simple solutions that work are great, and LogEI (and its extensions) is good example.
__Very good empirical performance:__ The improvement over EI in the results is striking on most tasks, suggesting it is simply a superior acquisition function to the default.
Weaknesses: __Anecdotal evidence for similarity with EI:__ Intuitively, it is sensible that LogEI is similar to EI. However, there is little evidence (Fig. 1) and theory (Lemma 2) to support this. I would greatly appreciate a similar Lemma for the analytic variant, and examples of when the two may not be identical. The performance gain of LogEI compared to EI is rather substantial on Ackley (~4x on Ackley-16!) and Mich, which suggests that the two may in fact not be very similar (but that LogEI may in fact simply be superior). Specifically, I don't believe the statement in Row 9, "LogEI, whose members either have _identical or approximately equal optima_ as their canonical counterparts", is well supported.
__Existing LogEI and Lacking references to Related work:__ The idea of a LogEI is not novel (LogEI was an acquisition function in SMAC at one point). Admittedly, that implementation regards log-transformations of the objective [1, 2] and would not help with numerical stability in the same manner. Nevertheless, I think these warrant citation and _comparison in the experiments_, given the similarities. Moreover, it limits the novelty of the approach.
__Relevance in high dimensions:__ Currently, I am not convinced by the justification for LogEI in high dimensions. To me, LogEI aims to address the pathology where the acquisition function is zero, but I don't see that happening in high-dimensional problems due to the abundance of high-uncertainty regions (which would make the acquisition function _non-zero, but constant?_). So, why is the proposed approach particularly important in high dimensions - i.e., why does LogEI help when the acquisition function is constant as opposed to (almost) zero? This would, in my opinion, require a separate motivation than Fig. 1, empirical results aside. Moreover, I think the _zero-value_ versus _constant-value_ distinction is very important, and should be emphasized more.
With this in mind, I find Fig. 2 striking and odd. 60 data points (which is when almost all points have a zero-valued gradient) on an 8-dimensional problem is not a large amount of data (not even a 2x2x...x2 grid), yet the uncertainty is small to the point of "EI and its gradients become numerically zero across most of the domain"? With all due respect, are the authors sure that this is not _just_ the gradient (and not the function value), or that the model has oddly long lengthscales?
__Minor:__
- __Noisy tasks:__ Adaptation of LogEI to noisy problem settings is missing
- __Lack of conventional benchmarks__: As a potential drop-in replacement for EI, seeing the performance of the method on the most conventional low-dimensional tasks (Branin, Hartmanns) would be informative. Moreover, it would be helpful for future benchmarking.
For all of these bullets, I believe that evidence to clarify (and not necessarily disprove) the remarks would substantially strengthen the paper.
[1]. An experimental investigation of model-based parameter optimisation: SPO and beyond. F Hutter, HH Hoos, K Leyton-Brown, Kevin P. Murphy. _GECCO '09: Proceedings of the 11th Annual conference on Genetic and evolutionary computation_. 2009.
[2] Sequential model-based optimization for general algorithm configuration. F Hutter, HH Hoos, K Leyton-Brown. _Learning and Intelligent Optimization: 5th International Conference_. 2011.
LogEI in SMAC: https://github.com/automl/SMAC3/blob/29355618b35dcf4b3ce3e773d633109f036dba17/smac/optimizer/acquisition.py#L503
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Is there any setting where LogEI can _not_ act as a plug-in replacement for EI, or where performance would be expected to be worse?
- Have the authors experimented with LogEI on non-continuous search spaces, and if so, what are the findings?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Some suggestions for addressing limitations have been stated in the Weaknesses section, but are otherwise adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback about areas that deserve additional discussion. We seek to clarify the points raised in the following.
__Equivalence of optima of analytical LogEI and EI__
In the CR, we will clarify this statement through a brief Lemma. If the maximum of EI is greater than 0, LogEI and EI have the same set of maximizers. Furthermore, if $\max_{x \in \mathbb X} EI(x) = 0$, then $\mathbb X = \arg \max_{x \in \mathbb X} EI(x)$. In this case, LogEI is undefined everywhere, so it has no maximizers, which we note would yield the same BO policy as EI (where every point is a maximizer). The Lemma is as follows:
_Lemma_: If $\max_{x \in \mathbb X} EI(x) > 0$, then $\arg \max_{x \in \mathbb X} EI(x) = \arg \max_{x \in \mathbb X, EI(x) > 0} LogEI(x)$.
_Proof_:
Suppose $\max_{x \in \mathbb X} EI(x) > 0$. Then $\arg \max_{x \in \mathbb X} EI(x) = \arg \max_{x \in \mathbb X, EI(x) > 0} EI(x)$.
For all $x \in \mathbb X$ such that $EI(x) > 0$, $LogEI(x) = \log(EI(x))$. Since $\log$ is monotonic, we have that $\arg \max_{z \in \mathbb R_{>0}} z = \arg \max_{z \in \mathbb R_{>0}} \log(z)$. Hence, $\arg \max_{x \in \mathbb X, EI(x) > 0} EI(x) = \arg \max_{x \in \mathbb X, EI(x) > 0} LogEI(x)$.
Figure 1 shows that LogEI holds approximately the same value as the logarithm of EI, except in regions where EI is numerically zero (and its logarithm therefore undefined).
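As an illustrative aside (with made-up acquisition values, not from the paper): restricting to the strictly positive region and maximizing the log leaves the argmax unchanged, which is the crux of the proof above.

```python
import numpy as np

# Hypothetical EI values over a discretized domain; index 0 has EI = 0.
ei = np.array([0.0, 1e-30, 2e-3, 5e-2, 1e-4])
pos = np.flatnonzero(ei > 0)

# log is strictly monotone on (0, inf), so restricting to EI > 0 and
# maximizing log(EI) recovers the maximizer of EI itself.
assert np.argmax(ei) == pos[np.argmax(np.log(ei[pos]))]
```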
__Figure 2, Model quality, and constant vs. zero values__
First, we’d like to clarify the data generating process (DGP) for the training data: Training points are not chosen uniformly at random, but rather, 80% are sampled uniformly at random from the domain, and 20% are sampled according to a MVN centered at the function maximum with a standard deviation of 25% of the length of the domain. The idea behind this DGP is to mimic the kind of data one would see during a BO loop (for illustration purposes, without having to run thousands of BO loops to generate the figure). We will clarify this in the CR.
Figure 2 in the MT reflects the fact that under this data generating process with the chosen test problem, the incumbent (best observed point) is much better than the values at the random test locations, and this becomes increasingly the case as the dimensionality of the problem increases and the number of training points grows. For a particular replicate, Figure 1 in the attached PDF shows the model fits in-sample (black), out-of-sample (blue), and the best point identified so far (red) for our DGP with 60 training points and a random subset of 50 (out of 2000) test points. One can see that the model produces decent mean predictions for out-of-sample data, and that the uncertainty estimates appear reasonably well-calibrated (e.g., the credible intervals typically cover the true value). Because of this, we do not see any clear evidence that there is something odd about the model and its length scales.
What Figure 1 in the attached PDF does show is that while there is ample uncertainty in the predictions of the model away from the training points, for the vast majority of points, the mean prediction is many standard deviations away from the incumbent value (the error bars are +/- 2 standard deviations). This is the key reason for EI taking on zero (or vanishingly small) values and having vanishing gradients.
To illustrate this, Figure 2 in the attached PDF shows the histogram of `z(x)` values, the argument to the function `h` in eq (2). It also contains the thresholds corresponding to the values `z` below which `h(z)` is less than the respective threshold. Since `sigma(x)` is close to 1 for most test points (mean: 0.87, std: 0.07), this is more or less the same as saying that `EI(z(x))` is less than the threshold. It is evident from the histogram that the majority of the test points fall below these threshold values (especially for larger thresholds), showing that the associated acquisition function values (and similarly the gradients) are numerically almost zero and causing issues during acquisition function optimization.
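To make the thresholds concrete, here is a minimal sketch of our own (not the paper's code): `h` is the standard analytic-EI kernel $h(z) = \varphi(z) + z\Phi(z)$ from eq. (2), and `log_h` computes its logarithm stably via a Mills-ratio identity — one possible stabilization, chosen here for brevity and not necessarily the expansion used in the paper. The naive value underflows to exactly zero in float64 once $z$ is sufficiently negative, while the log-space value stays finite and informative.

```python
import numpy as np
from scipy.special import erfcx
from scipy.stats import norm

def h(z):
    # "Naive" analytic EI kernel: EI(x) = sigma(x) * h(z(x)), h(z) = phi(z) + z * Phi(z)
    return norm.pdf(z) + z * norm.cdf(z)

def log_h(z):
    # Stable log h(z) for z < 0, using the Mills ratio
    # Phi(z)/phi(z) = sqrt(pi/2) * erfcx(-z / sqrt(2)),
    # which avoids the underflow of pdf and cdf themselves.
    mills = np.sqrt(np.pi / 2) * erfcx(-z / np.sqrt(2))
    return norm.logpdf(z) + np.log1p(z * mills)

assert np.isclose(log_h(-1.0), np.log(h(-1.0)))  # agrees where h is representable
assert h(-40.0) == 0.0                           # naive EI underflows to exactly zero
assert np.isfinite(log_h(-40.0))                 # log-space value remains finite (~ -808)
```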
__Relevance in high dimensions__
LogEI tends to be particularly effective in high dimensions and we can see how the discussion of Lemma 1 did not adequately address this.
It is important to note that dimensionality alone is not the key factor at play, but the measure (“volume”) of points that attain significantly sub-optimal objective values in the search space (the right-hand side of Lemma 1).
While the average posterior uncertainties tend to be larger in a larger proportion of the search space as the dimensionality increases, they are usually much smaller than the empirical standard deviation of the objective values, and this means that the predictive mean can still be many predictive standard deviations away from the incumbent value for large swaths of the search space.
Fig. 1 and 2 in the attached pdf are an empirical validation of this intuition: while the posterior uncertainties are large and cover the errors, the corresponding EI values are vanishingly small.
__Existing work__
As you already noted in your review, SMAC’s “LogEI” acquisition function tackles a fundamentally different problem than our LogEI. In the former, the surrogate model is fit to log-transformed outcomes in the hope that a log-transformation improves the model fit and optimization performance. Despite the nominal similarity, transformations of the outcomes before fitting the surrogate model are entirely orthogonal to this work, as they do not seek to solve the fundamental problem of the acquisition function itself being hard to optimize due to vanishing gradients. In fact, the techniques underlying our LogEI acquisition function are complementary to the outcome transformation (e.g. a log transform in SMAC’s LogEI) and would help resolve numerical issues and vanishing gradients. We will clarify these differences in the section on related work. See Fig. 4 in the PDF for a comparison of LogEI with SMAC's LogEI.
---
Rebuttal Comment 1.1:
Title: Initial response
Comment: Thanks for the additional plots. These really strengthen the motivation for the paper, and (in my opinion) highlight the vanishing gradient even better.
An aside:
_I found myself checking the exact value at which torch.float64 (and torch.float32) rounds to zero, and thereafter, exactly how many standard deviations of tolerance EI supports. Such reference values (in both Rebuttal Fig. 1 and Fig. 2) are, in my view, even more informative than the provided thresholds._
__Existing work__
I agree with the authors on this point. The existence of SMAC LogEI does not limit the novelty. Thanks for including it in the rebuttal plots.
__Equivalence of optima of analytical LogEI and EI:__
The extraordinary difference between LogEI and EI is unsettling to me, especially given the Lemma for the CR. This, to me, should mean one or two (or both) things:
1. This method is very, _very_ necessary since EI truly is zero just about everywhere
2. Not enough budget is allocated towards optimizing EI
While I am _decidedly not_ advocating for simply increasing the budget of EI, it seems like the gap is simply too large at the moment. Since the lemma suggests that an increased budget is guaranteed to solve the problem, are the authors able to provide an ablation as to when this happens (or at least, provide a sense of the rate)?
For now, I have increased my score to a 6. To further increase it (which I am willing to do) I would appreciate the aforementioned ablation to assess the practical impact on the budget allocated to BO acquisition function maximization.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful questions and interest.
__Ablations on initialization heuristics__
> I would appreciate the aforementioned ablation to assess the practical impact on the budget allocated to BO acquisition function maximization.
We believe the ablations in the last two figures (Fig. 17 and Fig. 18) of the Appendix of our submission can clarify this.
- Fig. 18 shows the regret of q(Log)EI on the 16-dimensional Ackley and Levy test problems using 1, 4, and 16 random restarts, and for q = 1, 4, and 16. For the Ackley q = 1 case, we see that increasing the number of restarts does improve the performance of qEI. However, extrapolating from the small increase in performance from 4 to 16 restarts, it seems exceedingly unlikely one could match the performance of qLogEI within a practical compute budget by scaling up the number of restarts. On Levy q = 1 on the other hand, the performance of qEI and qLogEI is similar. This is because the distribution of near-optimal values (relevant for the right-hand side of Lemma 1) is much less peaked around the optimal input for Levy, than it is for Ackley (see plots of Levy and Ackley for a visual illustration).
- Fig. 17 displays an ablation on the impact of initialization heuristics, comparing random restarts with BoTorch’s default Boltzmann-sampling-based approach. One can see that BoTorch’s initialization heuristic helps ameliorate the performance of canonical EI somewhat, but in no way closes the performance gap to LogEI. While prior research has produced many more initialization heuristics, as we discuss in Appendix B.1, they do not resolve the fundamental issues in computing EI that are addressed here.
__Numerical thresholds__
> An aside: I found myself checking the exact value at which torch.float64 (and torch.float32) rounds to zero, and thereafter, exactly how many standard deviations of tolerance EI supports. Such reference values (in both Rebuttal Fig. 1 and Fig. 2) are, in my view, even more informative than the provided thresholds.
That's a good point. It is important to clarify that there are at least two numerical thresholds that lead to pathologies:
1) Numerical underflow: $x = 0$ numerically, but $x \neq 0$ mathematically.
2) Numerical precision: $1+x = 1$ numerically, but $x \neq 0$ numerically.
Underflow (1) implies that both the value and gradient of standard EI is numerically exactly zero.
Numerical precision (2) plays a key role in gradient-based optimization, where the parameters are incremented by a scaled gradient $x_{n+1} = x_n - \alpha \nabla f(x_n)$ and $\alpha$ is the step size. If $\alpha \nabla f(x_n)$ in this expression becomes smaller than the numerical precision ($\approx$ 1e-7 for single and $\approx$ 2e-16 for double precision floating point numbers), the gradient increment is likely to be a no-op, i.e. $x_{n+1} = x_n$ numerically, even if $\alpha \nabla f(x_n) \neq 0$ *numerically*.
For quasi-second order methods like L-BFGS, which we used for the experimental results, the gradient is further scaled by an approximation to the inverse Hessian, which makes reasoning about the precise thresholds more involved, but the fundamental issue remains the same. The optimization step becomes a no-op if the *(step size + inverse Hessian)-scaled* gradient value is below numerical precision, which is increasingly likely to happen for the thresholds we indicated.
Most practical implementations further use non-zero convergence tolerance parameters that trigger the optimizer to terminate when the gradient magnitude is sufficiently small. In order not to conflate these effects, we did not use any (non-zero) convergence tolerance parameters for the experiments of this paper.
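Concretely, both thresholds can be reproduced in a few lines of generic float64 code (an illustration independent of any BO implementation):

```python
import numpy as np

# (1) Underflow: mathematically nonzero, numerically exactly zero.
assert np.exp(np.float64(-800.0)) == 0.0          # e.g. phi(-40) ~ e^{-800}

# (2) Precision: adding a representable nonzero value below the
# spacing of floats around 1.0 is a no-op.
eps = np.finfo(np.float64).eps                    # ~2.2e-16
assert 1.0 + eps / 4 == 1.0

# Consequence for gradient steps: x - alpha * grad can equal x exactly,
# even though alpha * grad is itself nonzero numerically.
x, alpha, grad = np.float64(0.5), 1e-3, np.float64(1e-16)
assert alpha * grad != 0.0
assert x - alpha * grad == x
```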
We will add this elaboration. | Summary: This paper identifies a numerical pathology with the expected improvement (EI) family of acquisition functions: the vanishing gradients of the acquisition function lead to failure in acquisition function optimization. A set of modified EI acquisition functions that fix the numerical pathologies has been proposed. In experiments, the proposed EI acquisition functions are shown to outperform the canonical EI acquisition functions and to perform on par with other state-of-the-art acquisition functions on Bayesian optimization benchmarks.
Strengths: - This paper focuses on a previously neglected aspect of Bayesian optimization, acquisition function optimization, and identifies the numerical pathologies associated with EI acquisition functions. This paper provides a theoretical analysis of the vanishing gradient issue.
- Improvements have been proposed to EI, Monte Carlo Parallel EI, Constrained EI and EHVI.
- Experiments clearly show the numerical pathology of EI optimization and the superior performance of the improved EI version.
- The proposed improved EI acquisition functions perform on par with state-of-the-art acquisition functions on high dimensional synthetic functions in both sequential and batch settings.
Weaknesses: - The proposed treatment only works for the acquisition functions with vanishing gradient issues.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - It is great to have an error bound of the qLogEI. However, for acquisition functions, preserving the relative order of values is more important than absolute difference. I wonder to what extent qLogEI preserves the relative order of values compared to qEI.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitation of the proposed method has been discussed in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review.
> It is great to have an error bound of the qLogEI. However, for acquisition functions, preserving the relative order of values is more important than absolute difference. I wonder to what extent qLogEI preserves the relative order of values compared to qEI.
While the smooth approximation to the ReLU in the integrand of qEI is monotonically increasing and upper-bounds the ReLU, the smooth approximation to the max operator can change the relative ordering. Therefore, while the relative ordering is fully maintained for the analytical LogEI version, this cannot generally be guaranteed for the Monte-Carlo (batch) version.
Nevertheless, the absolute error bound guarantees that the maximizer of qLogEI attains a similar acquisition value to the maximizer of qEI. Whether or not that is a similar point in the input space depends on the surrogate model during the particular iteration. If the EI acquisition value is a good indicator of optimization performance, such a change in ordering would not lead to a significant decrease in optimization performance, and if EI is not a good indicator of optimization performance on a particular problem, it would not be recommended to use EI in any case.
Notably, our empirical experiments demonstrate that the benefits of a smooth acquisition optimization landscape with strong gradients far outweigh the potential for a small change in relative ordering of points with promising acquisition values.
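For intuition, the two relaxations discussed above can be sketched as follows (a minimal illustration of our own with made-up values; the temperature `tau = 1` in the last two lines is deliberately large to make an ordering flip visible, whereas practical implementations use small temperatures and work in log space throughout):

```python
import numpy as np
from scipy.special import logsumexp

def softplus(z, tau):
    # Smooth, strictly positive upper bound on relu(z) = max(z, 0).
    return tau * np.logaddexp(z / tau, 0.0)

def smooth_max(v, tau):
    # Temperature-scaled logsumexp relaxation of max; upper-bounds the hard max.
    return tau * logsumexp(np.asarray(v) / tau)

assert softplus(-5.0, 0.01) > 0.0             # never exactly zero, unlike relu
assert abs(softplus(3.0, 0.01) - 3.0) < 1e-6  # ~= relu away from the kink

a, b = [1.0, 0.0, 0.0, 0.0], [0.99, 0.98, 0.98, 0.98]
assert max(a) > max(b)                        # hard-max ordering ...
assert smooth_max(a, 1.0) < smooth_max(b, 1.0)  # ... can flip under smoothing
```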
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I understand a guarantee for relative ordering is hard, especially with Monte Carlo approximation. The response makes sense. It addresses my concern. Overall, the paper presents a nice trick that addresses a real-world challenge of qEI.
They subsequently propose a numerical reformulation, LogEI, which achieves substantially better performance on a quite extensive range of benchmarks. This reformulation applies to all the members of the EI family: constrained EI for constrained BO, parallel EI for batch BO, and expected hypervolume improvement for multi-objective BO.
Strengths: - The paper is well-written and well-organised.
- The proposed numerical fix for EI is likely to have a great impact as it may benefit all public implementations of EI. Furthermore, it does not incur excessively longer computation times except perhaps for multi-objective BO (roughly one order of magnitude larger).
- LogEI seems to produce more consistent results with respect to the initial optimization starting point compared to its canonical counterpart EI, thus reducing the need for heuristics for that matter.
- All claims are backed up by an impressive amount of numerical experiments and ablation studies in each setting (vanilla BO/ constrained BO/ batch BO/multi-objective BO).
Weaknesses: I did not spot any weakness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not have any questions.
Edit: I have read the rebuttal and the discussions between the authors and other reviewers, my score remains unchanged.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive review. | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed and predominantly positive reviews. We are attaching a one-page pdf with additional figures to help answer questions that arose during the review process, and are responding to each reviewer's questions in detail below and in the comments.
__Generality of the Methodology__
> The proposed treatment only works for the acquisition functions with vanishing gradient issues.
Our work brings to light the fundamental challenges of optimizing the popular EI family of acquisition functions (AFs), and proposes remediations tailored to the analytic, batch, constrained, and multi-objective setting, which cover extensive application domains. Through the lens of this popular family of AFs, we investigate how the formulation of AFs contributes to one’s ability to optimize them, and in general how experimental results may be subject to the quality of the AF optimization procedure. While we consider a broad class of EI-based acquisition functions in this work, we hope that it can inspire further investigation, thought, and potential remediation in the development of other types of acquisition functions, with entropy-based acquisition functions being a promising example.
__Extension to Noisy Observations__
We point out that LogEI extends naturally to the Noisy Expected Improvement (NEI) acquisition function. We did not include NEI in our original manuscript since the construction is a trivial extension of qEI. The only difference between qEI and qNEI is that the computation of "best_f" values is based on samples of the GP at previously observed points, rather than taking the empirically observed objective value (cf. Balandat et al. NeurIPS 2020, S5.2). We then directly use the forward pass of qLogEI to compute qLogNEI.
In the attached PDF (Fig. 3), we show empirical optimization performance of qLogNEI compared to qLogEI, EI, NEI, and Gibbon on Hartmann 6D, Ackley 8D, and Ackley 16D for varying noise levels. We set the noise level as a proportion of the total range of the respective function, which is ~3.2 for Hartmann and ~20 for Ackley. Thus, a noise level of 1% * Range(f) is equivalent to Gaussian noise with a std of 0.2 for Ackley.
On Hartmann 6D, qNEI, qLogNEI, and Gibbon consistently find the optimum in the allocated number of iterations. qEI and qLogEI exhibit higher variance in their optimization traces especially at the highest noise level, which is expected since the `best_f` value they rely on becomes highly stochastic and might be far from the true (noiseless) best objective value corresponding to the queried inputs.
Notably, qLogNEI outperforms both canonical EI counterparts and leads to significantly improved optimization on Ackley. qLogNEI also leads to notable improvements over Gibbon for the higher-dimensional functions.
We will include a description of qLogNEI, include its implementation in the code release, and add a treatment of these results in the SM.
__Non-continuous search spaces__
Our work largely focuses on the problem of vanishing gradients, which is most pronounced for problems involving continuous or mixed spaces. We expect the proposed methods to have less impact in fully discrete or mixed spaces, especially when some combinations of input parameters can be exhausted entirely. Nevertheless, since our work ensures that the acquisition function values (not just the gradients) do not numerically become zero, it will result in better optimization performance in settings where the feasible choices are all “far” from the incumbent. Promising approaches to gradient-based optimization of fully-discrete or mixed spaces with difficult-to-enumerate combinations such as Probabilistic Reparameterization or straight-through estimators (Daulton et al. NeurIPS 2022) may particularly benefit from LogEI. We will add a discussion of this to the CR.
__Conventional Benchmarks__
> As a potential drop-in replacement for EI, seeing the performance of the method on the most conventional low-dimensional tasks (Branin, Hartmanns) would be informative. Moreover, it would be helpful for future benchmarking.
We included benchmarks that had flexible numbers of input dimensions, many constraints, or many outcomes to highlight how these factors impact the vanishing gradient problem and benefit from LogEI. We agree that such canonical test problems are worth documenting and will include them in the CR. In the interim, please see the attached PDF for the results on Hartmann and Branin (Fig. 4). LogEI outperforms alternatives on Hartmann (6 dimensional) and matches the performance of EI on Branin (2 dimensional).
__LogEI as Replacement of EI__
LogEI should work for any case where EI works, as the two acquisition functions have the same optima (in the analytic case) — LogEI just makes optimizing for those optima much easier. One could imagine cases where the model is incorrect or so degenerate that other model-free policies such as random search or evolutionary strategies work better. To our knowledge, such concerns are true of most model-based acquisition functions studied in the literature. Otherwise, we confidently recommend LogEI as a drop-in replacement of EI.
On a practical note, our implementations indeed already share the same interface as BoTorch’s EI variants, making such a replacement particularly convenient in one popular BO framework.
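As a sketch of why optimization for the same optima becomes easier (our own toy numbers; the Mills-ratio identity below is one stable way to evaluate in log space, not necessarily the paper's implementation): at a candidate whose standardized improvement $z = (\mu - \mathrm{best}_f)/\sigma$ is far negative, the exact EI derivative $\mathrm{d}EI/\mathrm{d}z = \Phi(z)$ underflows to zero, while the log-space derivative $\Phi(z)/h(z)$ stays finite and large (roughly $|z|$):

```python
import numpy as np
from scipy.special import erfcx, log_ndtr
from scipy.stats import norm

z = -70.0  # hypothetical standardized improvement, far from the incumbent

# Gradient of EI w.r.t. z is Phi(z): numerically exactly zero -> ascent on EI stalls.
assert norm.cdf(z) == 0.0

# Gradient of log EI w.r.t. z is Phi(z)/h(z); evaluated in log space it stays finite.
mills = np.sqrt(np.pi / 2) * erfcx(-z / np.sqrt(2))  # Phi(z)/phi(z), stable for z < 0
log_h = norm.logpdf(z) + np.log1p(z * mills)         # log(phi(z) + z * Phi(z))
grad_log_ei = np.exp(log_ndtr(z) - log_h)
assert np.isfinite(grad_log_ei)                      # roughly |z| = 70 here
```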
Pdf: /pdf/2bd7b0b09a18243db51685abd3443cad19883064.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Boosting with Tempered Exponential Measures | Accept (poster) | Summary: This work proposes a generalization of the popular ADABOOST algorithm based on the use of the t-logarithm/exponential. Their method is derived by replacing the standard relative entropy with the _tempered_ relative entropy (introduced in eq. 2), and solving a constrained optimization problem (eq. 3). This results in a solution (eq. 4) which is a tempered generalization of ADABOOST's exponential update. Their approach recovers the standard ADABOOST updates at t->1, and maintains the exponential convergence rate of ADABOOST for values of 't' between 0 and 1. A new family of tempered losses is derived from the loss that t-ADABOOST minimizes, and experimental results using t-ADABOOST+trees show significant improvements can be gained by tuning 't'.
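For reference, the tempered logarithm and exponential underlying these updates take the following standard form in the tempered-exponential-family literature (a generic sketch, not the paper's own code); both recover the ordinary log/exp in the limit t -> 1:

```python
import numpy as np

def log_t(x, t):
    # Tempered logarithm: (x^{1-t} - 1) / (1 - t); recovers log(x) as t -> 1.
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # Tempered exponential, inverse of log_t on its range; recovers exp as t -> 1.
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

x = 2.5
for t in (0.0, 0.5, 0.9):
    assert np.isclose(exp_t(log_t(x, t), t), x)          # inverse pair for t in [0, 1)
assert np.isclose(log_t(x, 1.0 - 1e-8), np.log(x), atol=1e-6)  # t -> 1 limit
```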
Strengths: - This paper has both strong theoretical and experimental results, and show compelling improvements over baselines
- ADABOOST is a very popular and performant ensemble technique, and so this work has the potential to have significant applications for ML practitioners, and could easily be implemented in existing libraries (sklearn etc )
- The paper is well written and easy to follow, and their algorithm is accompanied with a comprehensive theoretical analysis
Weaknesses: - The major limitation is that performance is highly sensitive to the choice of t, and currently it looks like the only way to choose t is to perform expensive hyperparameter sweeps. Further, their Table 1 (should that be "Figure 1"?) seems to imply there is no rhyme or reason to the datasets which perform better with large/medium/small t. Unless practitioners have a way of learning/tuning t it's unlikely this approach will be adopted widely in practice
- AFAICT, there are no ablation experiments testing the effect of the new losses, separate from the effect of the new exponential update. Which loss(es) were used in section 7? How do we know they work? How should a practitioner choose their loss?
## Four dimensions
**Originality:**
- Are the tasks or methods new?
- Yes
- Is the work a novel combination of well-known techniques?
- Yes, the work builds off developments in ADABOOST showing connections to exponential families and bregman divergences, and uses this to generalize ADABOOST by replacing exp/log with t-exp/t-log
- Is it clear how this work differs from previous contributions?
- Yes, section 2 discusses this well.
- Is related work adequately cited?
- Yes.
**Quality:**
- Is the submission technically sound?
- Yes, the technical contributions seem sound.
- Are claims well supported (e.g., by theoretical analysis or experimental results)?
- Theoretically yes, experimentally yes, with the exception of missing experiments evaluating the new losses
- Are the methods used appropriate?
- Yes
- Is this a complete piece of work or work in progress?
- Yes, modulo missing loss experiments
- Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?
- yes
**Clarity:**
- Is the submission clearly written?
- Yes
- Is it well organized?
- Yes
- Does it adequately inform the reader?
- Yes
**Significance:**
- Are the results important?
- If the authors provide a mechanism of choosing t, yes the results would be important.
- Are others (researchers or practitioners) likely to use the ideas or build on them?
- Yes
- Does the submission address a difficult task in a better way than previous work?
- Yes
- Does it advance the state of the art in a demonstrable way?
- Yes
- Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?
- Yes
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In https://arxiv.org/abs/2107.00745 the authors show t is a highly sensitive hyperparameter, and best results are obtained using tiny perturbations away from 1. This is due to numerical problems associated with using a log-t density instead of the more numerically-stable log-density. Did the authors see similar effects? Might their grid search of [0,0.2, 0.4, 0.6, 0.8, 0.9] be too coarse? Could significant gains be achieved with a finer grid search?
- If I'm a practitioner who is willing to do a grid search to find the best t, why wouldn't I spend that effort just tuning the hyperparameters of standard ADABOOST instead? If I take a hyperparameter-tuned ADABOOST, _then_ tune t, do I get significant gains compared to tuning t with untuned ADABOOST?
- With the introduction of the new losses in section 6, does that introduce a second t I need to tune, or should I use the same one?
- Does the second column of table 1 indicate clamping isn't effective?
- Do all/most values of t pick out the same "difficult" examples, or are the weights totally different for diff values of t?
- Is there any relationship between number of examples N and number of features D, and the best t? i.e do "tall+skinny" datasets get different optimal t's than 'short and fat' datasets, and might this offer tuning suggestions to practitioners? What about the depth of the weak classifier?
- ADABOOST can also be used for regression - would their approach work in this setting? A few comments about regression would be useful even if not feasible under the proposed approach.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There does not appear to be a "Limitations and Broader Impacts" statement in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for having evaluated our paper despite its heavy notational content.
> The major limitation is that performance is highly sensitive to the choice of t,
[XQ4B:A] We respectfully disagree with the argument: our theory says that regardless of the value of $t \in [0,1]$, the regime of performances does not degrade (both from the standpoint of the 0/1 loss and in terms of margins [4pmA:B][4pmA:C][4pmA:D]). Our experiments show that different datasets correspond to different values of “good” $t$s on test. Thus, there is an incentive to **not** stick with a single $t$ (e.g. AdaBoost).
Now, more broadly – since the reviewer makes the comment from the angle of practical considerations – ML is a field where heavily-hyperparameterized algorithms abound: in deep learning, apart from architectural choices (BatchNorm vs. LayerNorm, adding/removing skip-connections, adding/removing dropout, etc.), there are several hyperparameters to tune: the optimizer alone, say Adam, has four parameters (lr, beta1, beta2, eps), plus the schedule for the learning rate (number of warm-up steps, decay type, etc.), the weight decay parameter, etc. Even in the field of tabular data learning, these are very common: the number of hyperparameters of XGBoost is in the two digits. In this big world of heavily tuned algorithms, adding one extra hyperparameter does not add too much complexity (compared to deep learning methods; even a ResNet-50 model has two digits or more of hyperparameters, plus many more architectural design choices). **This being said**, it would be nice to get “good guesses” for $t$ beforehand. *However*, as the new experiments suggest (**see attached pdf**) [6L9a:H], we in fact believe that the best way to approach the question could be to **learn** $t$ during training and adapt it at *each* iteration, thus equivalently **learning the loss**, a problem getting traction in ML.
We also dispute the claim that there is “no rhyme or reason” in the results: in many cases, one could group the results for small $t$ and “big” $t$, with one group performing better than the other. That this picture changes among domains is not a downside: it is the *opposite*. It shows that the problem is worth solving *and* it is non-trivial. But it is out of the scope of our paper. We can even say it *has to be* out of the scope of our paper: almost all reviewers noticed the already heavy load of material, which is probably to be made even heavier after some fruitful remarks [4pmA:B][4pmA:C][4pmA:D]. It would be a *tour-de-force* to fully explore *in addition* all dimensions of this problem. In fact, we claim it probably deserves its own paper (also considering the problem of learning the loss, see above)!
> AFAICT, there are no ablation experiments testing the effect of the new losses
[XQ4B:B] We do not understand this question: each value of $t$ gives rise to a different loss to optimize, so each time we fix a different $t$, the problem solved changes radically. Even more (Section 6), the full range of $t$ for the induction of decision trees covers the full range of known boosting rates!
> How should a practitioner choose their loss?
This is exactly the question/problem of choosing $t$ [XQ4B:B]! We hope that this, the discussion in [XQ4B:A], and the new experiments (**see attached pdf**) [6L9a:H] show the interest of our experiments and the fact that this problem deserves its own devoted “iteration” (= paper).
(questions)
> In https://arxiv.org/abs/2107.00745 the authors [...] Did the authors see similar effects?
Essentially, no. One reason is that we address a completely different problem.
> Could significant gains be achieved with a finer grid search
This is precisely relevant to [XQ4B:A].
> why wouldn't I spend that effort just tuning the hyperparameters of standard ADABOOST instead?
Because we add a new “robust” dimension (= one for which the underlying theory still holds) for which experiments – as the reviewer has already remarked – clearly show that there is value in exploring this dimension [XQ4B:A]. And depending on the package, the way some hyperparameters are fixed might just be recycled for a standard search for a better $t$ than $t=1$ (AdaBoost). We also refer to the discussion in [XQ4B:A].
> should I use the same one?
We use the same one.
> Does the second column of table 1 indicate clamping isn't effective?
Quite the opposite: it shows that it can be quite effective (for example, $t=0.9$ and **see attached pdf** [6L9a:H]) – with the additional benefit of doing computations for training / inference that remain in a prescribed “precision” interval, which could especially be relevant to specific applications [KBAC:A].
> Do all/most values of t pick out the same "difficult" examples,
Good question. From a purely theoretical standpoint, we would not expect this to be true, in particular for the comparison clamped / unclamped models.
> Is there any relationship between number of examples N and number of features D, and the best t?
There does not seem to be. We conjecture this has more to do with properties of the full domain itself.
> ADABOOST can also be used for regression
Good question, we conjecture it is possible, though some care-and-caution has to be applied, in particular for clamped models.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Hi - I found your rebuttal convincing and appreciate the thoroughness of your response. None of the issues I raised were make-or-break, so I will keep my scores as they are. Looking forward to reading the final camera ready! | Summary:
The paper introduces a generalization of the classic AdaBoost algorithm to a family of exponential losses based on TEMs (tempered exponential measures). They demonstrate the validity of their approach both theoretically and empirically.
Strengths: This technique allows their method to overcome numerical issues that typically arise when classic AdaBoost is employed.
Experiments show that tuning $t$ can lead to significant improvements compared to AdaBoost.
Moreover, the algorithm is simple and gives an interesting and practical generalization of AdaBoost.
Weaknesses: My main issue here is the notational choices and somewhat unclear technical presentation.
Notation issues :
- Intuition regarding the notation used (in particular the terms in Eq. 4) would have been helpful, as would intuition for the definitions of $\log_t$ and $\exp_t$, especially as these are key parts of the main algorithm, which is very simple otherwise.
- Equation 5 is unclear - how is \mu defined? and the notation Z_t(\mu) is confusing.
- What is Card (Line 140)? Is that short for cardinality?
- Eq 14 - I’m assuming these are indicators returning +1/-1 ?
Because of the notation issues above, Theorem 2 is harder to parse. However, it does resemble in form the standard bounds on AdaBoost; maybe it would be nice to emphasize that comparison more explicitly.
Also, consider giving more explanation of the meaning of theorem 2.
Maybe the paragraphs below it try to do that, but it was a little dense and technical.
I am also confused about Figure 1: it essentially plots the convergence rate, and the arrows indicate the cases $t=0$ and $t=1$, but is that easily seen from the connection between the $x$-axis value $\rho$ and the value of $t$? Is it possible to also give a plot with an axis for values of $t$?
Table 1 does not say what each line indicates (it did say in the appendix though), but I am still not sure what the red line is.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions in the above comment
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for having evaluated our paper, despite its heavy technical nature.
(weaknesses)
> My main issue here is the notational choices and somewhat unclear technical presentation.
[J18J:A] We apologize for the inconvenience, probably also due to several typos – fortunately spotted by the reviewers – that dampened readability. As we explain in the general rebuttal, we believe there is a way forward to making the paper more readable. This is all the more important as we have been asked to consider additional theoretical results that will incur a few additional notations [4pmA:B][4pmA:C][4pmA:D].
> Intuition regarding [...]
[J18J:B] We believe this is one of the first times the tempered algebra (extending that over the reals) is used. To ease the reading, we propose to add in Section I of the supplement not just a primer on TEMs, but also a primer on this algebra and its properties.
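As a concrete illustration of what such a primer would cover, here is a minimal numerical sketch of the tempered pair (our own illustration using the standard definitions from Naudts, not the paper's code): $\log_t(x) = (x^{1-t}-1)/(1-t)$ and $\exp_t(x) = [1+(1-t)x]_+^{1/(1-t)}$ for $t \neq 1$, both reducing to $\log$/$\exp$ as $t \to 1$.

```python
import math

def exp_t(x, t):
    # Tempered exponential: [1 + (1-t)x]_+^{1/(1-t)}; plain exp at t = 1.
    if t == 1.0:
        return math.exp(x)
    return max(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

def log_t(x, t):
    # Tempered logarithm, inverse of exp_t on x > 0: (x^{1-t} - 1)/(1-t).
    if t == 1.0:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

# Inverse pair, plus the product rule of the tempered algebra:
# exp_t(u ⊕_t v) = exp_t(u) * exp_t(v), with u ⊕_t v = u + v + (1-t)uv
# (valid when no clipping occurs).
u, v, t = 0.2, -0.3, 0.5
assert abs(exp_t(log_t(2.5, t), t) - 2.5) < 1e-9
assert abs(exp_t(u + v + (1 - t) * u * v, t) - exp_t(u, t) * exp_t(v, t)) < 1e-12
```

The tempered sum $\oplus_t$ checked above is the operation that replaces ordinary addition throughout the derivations.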
> Equation 5 is unclear [...]
We used the same formalism as [11] (their eq. (1.5)).
> What is Card? (Line 140) is that short for cardinality?
Correct. We will make this explicit.
> Eq 14 - I’m assuming these are indicators returning +1/-1 ?
No, this is Iverson's bracket (see Knuth in [12]), returning the truth value in $\{0,1\}$.
> Maybe the paragraphs below it try to do that, but it was a little dense and technical.
[J18J:C] This is actually correct. Hopefully, with the additional page, we can put a bit of spacing in here to make it more readable.
> I am also confused about Figure 1 - it is essentially plotting the convergence rate
No. The function plotted is the one in (12). The $x$-value is in fact $\rho_j$, which is in $[-1,1]$. The *color code* provides all different curves for all the different $t$s that are relevant. To simplify, the LHS of (12) is of the form $(K_t(\rho))^J$. Hence, if $K_t(\rho)$ is smaller than 1 (and the smaller it is), we have geometric convergence (and the faster it is). The plot shows that the curve for AdaBoost ($t=1$) is in fact the “highest” among all, and thus provides the “worst” guarantee of all, for $t\in [0,1]$. We could put $t$ as an axis but this would make the plot a 3D plot, not necessarily easily parsable at this size.
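To make the geometric-convergence reading concrete (a generic sketch with hypothetical per-round factors, not the paper's exact $K_t$ values): if every boosting round multiplies the training-error bound by a factor $K < 1$, the number of rounds needed to reach error $\varepsilon$ is $\lceil \log(1/\varepsilon)/\log(1/K) \rceil$, so a lower curve in Figure 1 directly means fewer rounds for the same guarantee.

```python
import math

def rounds_needed(K, eps):
    # Smallest J with K**J <= eps when the per-round factor K is < 1
    # (geometric convergence of the training-error bound).
    return math.ceil(math.log(1.0 / eps) / math.log(1.0 / K))

# Hypothetical per-round factors: the smaller the factor (the lower the
# curve), the faster the guaranteed convergence to a 1% error bound.
slow = rounds_needed(0.99, 0.01)
fast = rounds_needed(0.90, 0.01)
```

With these placeholder factors, `slow` is an order of magnitude larger than `fast`, which is the whole point of the "highest curve = worst guarantee" reading.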
> Table 1 does not say what [...]
The red/thickest line is for $t=1.1$. We picked this color code because it is in fact the only value of $t$ for which convergence is not guaranteed by our theory – but experiments obviously display that it works as well!
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and clarifications. | Summary: The paper proposes a variant of AdaBoost algorithm based on a generalized exponential function parametrized by the temperature. The generalized exp function also induces a new criterion for splitting nodes of decision trees. The paper shows a training error bound of the AdaBoost variant. The experimental results show that the generalized AdaBoost often perform better than AdaBoost.
Strengths: The strength of the contribution is the potential advantages of the proposed exp function. The exp function can derive new Bregman divergences and be applied to online learning or Boosting. In a practical sense, the new exp function and updates derived based on it seem more robust in numerical computation.
Another strength is a theoretical guarantee of its convergence rate of training error, which is the same as AdaBoost's, when given weak hypotheses with edge gamma. This iteration bound is optimal in the worst case (when the final hypothesis is a majority vote).
Weaknesses: A crucial weakness of the paper is the lack of explanation about the generalization ability of proposed algorithms. Previous boosting algorithms, including AdaBoost, are motivated by their margin maximization properties. In fact, standard generalization bounds of weighted-voting classifiers depend on the margin over the sample. Furthermore, there are boosting algorithms explicitly designed to optimize the soft margin optimization problem (e.g., SoftBoost or entropy regularized LPBoost). So far, the paper only considers the convergence rate of the training error.
So far, theoretical results do not improve previous ones, say, the convergence rate. So, the merit or theoretical advantages of the proposed function and the resulting boosting algorithm are unclear yet.
I read the rebuttal comments and the new analysis seems to resolve my concern.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Does the temperature control the smoothness of the function or the resulting distributions over the sample? If so, please discuss the relationship with previous boosting algorithms that keep distributions smooth (e.g., SmoothBoost by Servedio or AdaFlat by Gavinsky).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As raised above, the paper does not show any theoretical advantages of the proposed function. That is a huge drawback of the paper and thus the paper seems pre-matured.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > A crucial weakness [...] generalization ability [...] margin maximization properties.
[4pmA:A] We conjecture in L187 that similar rates of convergence (as the one we provide for the 0/1 loss) hold for margins as well; we also do not discuss generalization. We would also like to point out that margins are not discussed in the original AdaBoost paper [A] because they were not the focus of that paper. Historically, margins came later, in the explanation of why AdaBoost indeed works so well.
While we initially thought that our paper was already dense enough to leave margins + generalization for a further step, the additional page of camera-ready might be used in part to address the reviewer’s concern. We have two good news in the direction they point at: one from the standpoint of margins, one on generalization. We apologize in advance for the use of Markdown to formally summarize those results (we were told that the 1 page pdf cannot be used for proofs).
[4pmA:B] 1. Margins bounds. Instead of bounding $L_{0/1}(H)$ as in (12), we want to upperbound $E_i[f(y_iH(x_i)) \leq \theta]$ where $f(z)$ is increasing and typically in a bounded domain, say $[-1,1]$. [B] normalize their hypothesis space by the sum of leveraging coefficients. [C] prefer to pass the unnormalized classifier, with arbitrary real value, through the $\tanh$ function. A simple way forward for us is to generalize the approach of [C] to TEMs as well, which is in fact easy. Define the tempered hyperbolic tangent as $\tanh_t(z) = (1-\exp_t(-2z))/(1+\exp_t(-2z))$ and the margin of $H$ on example $(x, y)$ as:
$\nu((x, y), H) = \tanh_t(yH(x)/2)$.
Developing and reorganizing the predicate $[[\nu((x, y), H) \leq \theta]]$, it is the same as predicate
$-yH(x) + \log_t\left(\frac{1+\theta}{1-\theta}\right) - (1-t) yH(x) \log_t\left(\frac{1+\theta}{1-\theta}\right) \geq 0$,
which, with the tempered algebra of [16] (our paper), just states $\left(-yH(x)\right) \oplus_t \log_t\left(\frac{1+\theta}{1-\theta}\right) \geq 0$. Since $[[z \geq 0]] \leq \exp_t^{2-t}(z), \forall t \in [0,1], \forall z\in \mathbb{R}$ ($[[\cdot]]$ = Iverson’s bracket) and $\exp_t(u \oplus_t v) = \exp_t(u) \cdot \exp_t(v)$, we derive:
$[[\nu((x,y),H) \leq \theta]] \leq \exp_t^{2-t}\left[ (-yH(x)) \oplus_t \log_t\left(\frac{1+\theta}{1-\theta}\right)\right] = \exp_t^{2-t}\left[ \log_t\left(\frac{1+\theta}{1-\theta}\right)\right] \cdot \exp_t^{2-t}(-yH(x))$,
and so, after simplification, we get
$[[\nu((x,y),H) \leq \theta]] \leq \left(\frac{1+\theta}{1-\theta}\right)^{2-t} \cdot \exp_t^{2-t}(-yH(x))$.
We then branch directly to Section II.II.2.3, eq. 24, replacing $[[\mathrm{sign}(yH(x)) \neq y]]$ by $[[\nu((x,y),H) \leq \theta]]$, which yields in lieu of the (unnumbered) identity just before Lemma F,
$\frac{1}{m} \cdot \sum_i [[\nu((x_i,y_i),H_J) \leq \theta]] \leq \left(\frac{1+\theta}{1-\theta}\right)^{2-t} \cdot \prod_{j=1}^{J}Z_{tj}^{2-t}$,
and the rest of the proof of Theorem 2 remains unchanged. So, the left ineq. (12) (to save space) in our paper becomes the much more general inequality on **margins**:
$\frac{1}{m} \cdot \sum_i [[\nu((x_i,y_i),H_J) \leq \theta]] \leq {\color{blue}\left(\frac{1+\theta}{1-\theta}\right)^{2-t}} \cdot\prod_{j=1}^J \tilde{Z}^{2-t}_{tj}$
When $\theta = 0$, we recover (12) and if $t=1$, the ${\color{blue}\mbox{blue}}$ factor above recovers the one in Theorem 1 in [C]. First discovery, thanks to the reviewer:
[4pmA:C] **$t$-AdaBoost is a margin maximization algorithm** (for any $\theta\in[-1,1), t\in[0,1]$)
Drilling a bit reveals a perhaps more interesting phenomenon: when $\theta < 0$ (examples badly classified, possibly with large confidence), the blue factor $\left(\frac{1+\theta}{1-\theta}\right)^{2-t}$ can be substantially *smaller* than the same factor for $t = 1$ (this is the difference between $z$ and $z^2$ for $z\in [0,1]$), while when $\theta > 0$ (examples receiving the right class), the blue factor can this time be substantially *larger* than the same factor for $t = 1$. Hence, the analysis brings the interesting second discovery that
[4pmA:D] **Fixing $t<1$ increases the “focus” of $t$-AdaBoost on increasing the margins of examples badly classified, compared to AdaBoost ($t=1$)**
We certainly propose to use part of the camera-ready to state and prove [4pmA:C] and [4pmA:D], with due acknowledgements.
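The key pointwise inequality behind this margin bound, $[[\nu((x,y),H) \leq \theta]] \leq \left(\frac{1+\theta}{1-\theta}\right)^{2-t} \exp_t^{2-t}(-yH(x))$, can be sanity-checked numerically. The sketch below uses our own helper definitions (clipped $\exp_t$ and the $\tanh_t$ defined above) and is an illustration, not the paper's code:

```python
def exp_t(x, t):
    # Tempered exponential [1 + (1-t)x]_+^{1/(1-t)}, for t in [0, 1).
    return max(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

def tanh_t(z, t):
    # Tempered hyperbolic tangent, as defined in the rebuttal.
    e = exp_t(-2.0 * z, t)
    return (1.0 - e) / (1.0 + e)

def margin_bound_holds(t, theta, z, tol=1e-12):
    # Check [[tanh_t(z/2) <= theta]] <= ((1+theta)/(1-theta))^{2-t} * exp_t(-z)^{2-t},
    # with z = y * H(x) playing the role of the (signed) edge of H on (x, y).
    indicator = 1.0 if tanh_t(z / 2.0, t) <= theta else 0.0
    bound = ((1.0 + theta) / (1.0 - theta)) ** (2.0 - t) * exp_t(-z, t) ** (2.0 - t)
    return indicator <= bound + tol

# Grid check over temperatures, margin thresholds and edge values.
assert all(margin_bound_holds(t, th, z)
           for t in (0.0, 0.3, 0.6, 0.9)
           for th in (-0.5, 0.0, 0.5)
           for z in [i / 10.0 - 5.0 for i in range(101)])
```

The check passes on the whole grid, consistent with the equivalence $\tanh_t(z/2) \leq \theta \Leftrightarrow \exp_t(-z) \geq \frac{1-\theta}{1+\theta}$ used in the derivation.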
[4pmA:E] 2. Generalization guarantees (simplified analysis, summarized). Suppose $H \in [-v,v]$, i.e. $|H|$ is bounded (e.g. we learn clamped classifiers). Then it is straightforward to see that the Lipschitz constant $L_t$ of the tempered exponential loss in $[-v,v]$ is $L_t = (2-t) \exp_t(v)$. If $v$ is sufficiently large, then $L_t < L_1$, $\forall t \in [0,1)$. Hence, from [D], the empirical minimization of the tempered exponential loss is better “aligned” with generalization for $t<1$ compared to AdaBoost. Note that if $v$ can be small, a more careful analysis is required, but could lead to nice characterizations of the appropriate *loss* to minimize for convergence rate *and* generalization.
> So far, theoretical results do not improve previous ones [...] unclear yet.
Even without [4pmA:C], [4pmA:D] and [4pmA:E], we strongly disagree with this statement. Just one example (space limit): it has been known for decades that AdaBoost can quickly run into numerical errors [13] because of its unbounded leveraging coefficients. We show that $t<1$ just gets rid of this problem, **and** at no cost convergence-wise. Nor margin-wise [4pmA:C], [4pmA:D]. Nor generalization-wise [4pmA:E].
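The numerical-robustness point can be illustrated directly (a generic sketch, not the paper's implementation): the standard exponential overflows a float64 for arguments beyond roughly 709.78, while $\exp_t$ with $t<1$ grows only polynomially, so the corresponding quantities stay finite.

```python
import math

def exp_t(x, t):
    # Tempered exponential [1 + (1-t)x]_+^{1/(1-t)}; polynomial growth for t < 1.
    return max(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

x = 800.0
try:
    y = math.exp(x)      # overflows float64: math.exp overflows past ~709.78
except OverflowError:
    y = float("inf")

print(y)                 # inf: AdaBoost-style exponential weights blow up
print(exp_t(x, 0.5))     # (1 + 0.5*800)^2 = 160801.0, perfectly finite
```

For $t=0.5$ the growth is only quadratic in the argument, which is the sense in which $t<1$ "gets rid of" the overflow problem.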
> Does the temperature control [...] SmoothBoost [...] AdaFlat
Briefly, $t<1$ allows controlling the divergence of the weights – equivalently, the smoothness parameter as in D. Gavinsky's work – implicitly (vs. explicitly in both SmoothBoost and AdaFlat).
References:
[A] Freund & Schapire, JCSS 55, 119-139, 1997.
[B] Schapire, Freund, Bartlett & Lee, ICML 1997, 322-330.
[C] Nock & Nielsen, ECAI 2006, 509-515.
[D] Bartlett & Mendelson, JMLR 3, 463-482, 2002.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply to my concerns. The new part on margin error bound is convincing to me (but I am unsure if the new analysis might go beyond the rebuttal answer). I would appreciate if you could add the new analyses in the final version. I will raise my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We thank very much the reviewer for the reply. We are committed to putting the margin analysis in the camera-ready, inclusive of comments -- in particular for the interesting behaviour [4pmA:D] --, using part of the additional space available. | Summary: This paper introduces a generalization of AdaBoost called “t-AdaBoost”. Boosting algorithms aggregate multiple weak classifiers into a strong classifier. AdaBoost is a well-established boosting algorithm.
t-AdaBoost generalizes AdaBoost by introducing an additional parameter $t$, which is a “tempering” parameter. When $t=1$, the algorithm becomes identical to AdaBoost. Thus, the main focus of the paper is to analyze whether any benefit can be gained by selecting a value of $t$ other than $t=1$.
The paper provides a thorough theoretical and experimental analysis of t-AdaBoost, for various values of $t$. The theoretical analysis shows that different values of $t$ are viable to study, and bounds the convergence rate. The experimental analysis shows that, while $t=1$ works best for some data sets, and some values of $t$ seem equivalent to AdaBoost on other data sets, there do exist some data sets for which various values of $t$ perform better than AdaBoost. So, the overall analysis suggests that $t$ would be a good additional parameter to introduce on top of AdaBoost. For some data sets, tuning $t$ could lead to improvements over AdaBoost, so it is worth consideration.
Strengths: I think the paper is well-written, and conducts a thorough analysis of the proposed technique. It is built upon well-respected approaches.
Weaknesses: The proposed method can have numerical instabilities, resulting from exponentiation, but the authors give reasonable consideration of this in their theory and experiments. This weakness is not very significant.
Also, see my experiment question below.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Originally, I went looking for the full plots from the experiments, and did not see them in the supplement. I then realized that they occurred after the references. Perhaps adding an entry to the “Table of Contents” would be appropriate, or else make sure the experiment plots come before the references.
- Please include the code, or a link to the repository, along with the final version.
- For some data sets, the plots show high variability as the decision trees change (see, for example, sonar, bottom). I know you mentioned that some of this might be due to overfitting. However, I also see that, in some cases, the error repeatedly rises and falls as the number of decision trees increases. This makes me wonder whether the variability here is something intrinsic. For example, if you rerun your experiments (perhaps with a very slight variation), do you get the same results? If not, then I think it is important to execute multiple runs and give standard deviations, or another variance measure.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't think this paper has any negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: (questions)
> Originally, I went looking for the full plots from the experiments,
Excellent suggestion! We will oblige.
> Please include the code,
We commit to sharing all code, including the plotting code.
> If not, then I think it is important to execute multiple runs and give a standard deviations, or other variance measure.
All our experiments are done using 10-fold stratified cross-validation (L246); thus, they also yield standard deviations. We chose not to plot the deviations on the figures to keep them readable (we tried with them, and readability was very poor for many plots). **However**, we emphasize that the summary results in Table 2 use a Student paired $t$-test to assess significance (cf. Table), and thus integrate the variability for comparisons.
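For concreteness, the significance test on matched per-fold results can be sketched as follows (a generic Student paired $t$-test; the fold errors below are hypothetical placeholders, not our actual numbers):

```python
import math

def paired_t_statistic(errs_a, errs_b):
    # Student paired t-test statistic on matched per-fold test errors.
    diffs = [a - b for a, b in zip(errs_a, errs_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # unbiased variance
    return mean / math.sqrt(var / n)

# Hypothetical per-fold errors from a 10-fold stratified CV (placeholders).
adaboost_errs = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15, 0.14, 0.13]
t_ada_errs    = [0.10, 0.13, 0.11, 0.12, 0.12, 0.14, 0.11, 0.13, 0.12, 0.12]
t_stat = paired_t_statistic(adaboost_errs, t_ada_errs)
# |t_stat| above the critical value (~2.26 at 9 dof, 5% two-sided) indicates
# a significant difference between the two methods on this dataset.
```

Because the folds are matched, the test operates on the per-fold differences, which is exactly what absorbs the fold-to-fold variability mentioned above.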
---
Rebuttal Comment 1.1:
Title: Thank you for the follow-up
Comment: I have read your rebuttal comments, and appreciate the clarifications. | Rebuttal 1:
Rebuttal: We would like to thank all six reviewers for their work and appreciate the globally positive tone of the reviews given the notation-heavy nature of our paper. To ease the cross-search among the pieces of our rebuttal, we have put tokens of the form **[Reviewer-Id:Letter]** in rebuttals, making it easy to search for cross-references among rebuttals. Each rebuttal proceeds by quoting the review and replying, in the order of the review’s comments. References shown like [number] refer to our paper’s bibliography. References shown like [letter] refer to references given in each rebuttal.
This is a general rebuttal, summarizing the major changes, all of which could be easily done given the additional page in the camera-ready.
## From the standpoint of its theoretical content
We note that our paper *might* see an increase in notations after new results that we got for a rebuttal on margins and generalization **[4pmA:B][4pmA:C][4pmA:D][4pmA:E]**. We believe however that reviewers have done a fantastic job of typo-spotting and that the final version of our paper will surely gain in readability. We apologize however for having used the keen eye of six reviewers to find those.
To ease reading our paper, we propose
1. To correct all typos (we believe they contributed to hiccups in reading)
2. To put in the camera-ready, before Section I of the supplement, a table of notations, grouped by topics (models, TEMs, properness, etc.) and possibly by paper section, to help the reading of the paper
3. To strengthen part I in the supplement to include not just a primer on TEMs but also on the tempered algebra of [16]
## From the standpoint of its experimental content
Experiments on overfitting suggested by **[6L9a:H]** have been put in **the attached .pdf**; they present an interesting direction which we believe could be valuable, to be briefly mentioned in the main file and then detailed in the supplement.
Pdf: /pdf/f185b61fc1db6f5ce507bfac576ddf98da7cac82.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a generalization of the ADABOOST algorithm using tempered exponential measures. To do this, they begin by introducing $\log_t$ and $\exp_t$ and the generalization of entropy. A first theorem is proposed to explain how to find the solution for the minimization of this entropy, and an adaptation of the ADABOOST algorithm, called t-ADABOOST, is presented. Section 5 is dedicated to explaining the convergence of the t-ADABOOST algorithm and a theoretical study of its behavior. In section 6, the authors take the time to introduce the tempered loss and the associated decision trees. The authors conclude with experiments and a discussion.
Strengths: I think the paper is quite original in that the authors generalize ADABOOST using the functions $\log_t$ and $\exp_t$. In this way, they obtain an algorithm generalizing the former and extending the perspectives.
I was interested in the reflections on $\log_t$ and $\exp_t$ and how the introduction of these functions forced the authors to rethink the rest of the procedure.
The authors propose theorems that are consistent with our expectations as readers of these new notions, and the order seems logical.
Finally, the authors have chosen a complete path (new notions + theory + experiments + discussions), which makes the whole coherent in my opinion.
Weaknesses: For me, a big weakness of the paper is the introduction of numerous notations that I didn't always find interesting (for example, I didn't understand the use of clamped sums) and which makes reading difficult. Certain notations, such as $t^*$, complicate reading in my opinion (because when $t$ and $t^*$ behave in opposite ways, you have to do some mental gymnastics to follow the reasoning). Finally, some notations are used differently: for example, $\textbf{q}$ is sometimes a vector (Definition 3.1) and sometimes a matrix (Algorithm 1), or $Z_t$ is sometimes a function in $\mu$ and sometimes not.
Similarly, the graphs are not intuitive for me. For example, why did the authors choose a color scale when there are only 7 values of $t$ presented?
A last small weakness of this paper for me is that the authors present “just” an extension of the ADABOOST algorithm, and I do not get the sense of whether it will actually help to improve things (it is a point of discussion). I would have liked them to highlight at least one case where the algorithm actually worked around a weakness of ADABOOST.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I think the paper would be easier to read if the number of notations were reduced (for example, $t^*$ is not compulsory in my opinion, as it's often $1/t^*$ that's used). I think it should be made clear whether $Z_t$ is a function or not. In the same vein, I think some equations lack rigor. For example, $\log_t$ and $\exp_t$ are introduced at the same time, even though their spaces of definition are different. On line 106, $\mu$ is not introduced either.
I didn't understand where clamped sum was used. Given the limited space available, I think there's a point in putting a paragraph on it. Could the authors explain?
What does the $\|\cdot\|_{1/t^*}$ norm represent? Is it just the Hölder norm with $\alpha=1/t^*$? Or is it a new norm based on the new dissimilarity measure introduced?
I thought I read a few typos:
* Page 2, line 90: [11, 6, 17] is not in increasing order.
* Page 7, line 246: Table A1 or Section III1 instead of Section A1?
* Page 8, Table 1: isn't it a figure?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I didn't see any limitations that weren't addressed by the authors. I appreciated their honesty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we thank the reviewer for having evaluated our paper despite its heavy notational nature.
(weaknesses)
> For me, a big weakness of the paper is the introduction of numerous notations that I didn't always find interesting
[KBAC:A] We would like to emphasize that it has been asked that we consider additional theoretical results that will add additional notations [4pmA:B][4pmA:C][4pmA:D]. We do believe there is value in those additional results, and we believe the proposal we make (see general rebuttal) to better present our notations will be of great help in understanding the paper. We also believe that some hiccups in reading were in fact created by typos that the reviewers have spotted; removing them will substantially contribute to easing the reading.
For clamped summations, we in fact do believe that such classifiers could be of substantial use, in particular when training or inference is done with *reduced computational power* (either because of machines, e.g. ML at the edge, or because of compute constraints, e.g. MPC / secure multiparty computation). It is important to realize that clamping / clipping a value -- as it is usually done -- still requires *storing the full value* before “simplifying” it (or you lose arbitrary numerical precision, which can be damaging). In our case, *all steps in the computation of the output can be performed with the desired complexity* and do not alter the guarantees.
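A toy illustration of this point (our own sketch; the clamped summation of (9) in the paper may differ in detail): clamping each *partial* sum keeps every intermediate value in a prescribed interval, so no step of the computation ever needs more numerical range than the final output.

```python
def clamped_sum(values, v):
    # Running sum where each PARTIAL sum is clamped to [-v, v]: every
    # intermediate value stays within the prescribed precision interval.
    s = 0.0
    for x in values:
        s = max(-v, min(v, s + x))
    return s

# An ordinary summation would visit the out-of-range value 150.0 before
# coming back down; the clamped version never leaves [-100, 100].
print(clamped_sum([90.0, 60.0, -30.0], v=100.0))  # 70.0
```

Note that this differs from clamping only the final total (which would give 100.0 here while still requiring full-range intermediates); it is the per-step containment that matters for reduced-precision training and inference.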
> q is sometimes a vector (Definition 3.1) and sometimes a matrix
No, $q$ is always a vector. The first index in the notation in Algorithm 1 is the iteration number, explicit in Step 2.
> $Z_t$ is sometimes a function in $\mu$ and sometimes not
No. Dependences may be left implicit to lighten the text. $Z$ is the normalization coefficient, which *de facto* is always a function of the leveraging coefficients (and possibly their additional parameters). This is a consequence of (4).
> For example, why did the authors choose a color scale when there are only 7 values presented?
Simply because we wanted to give the reader different ways to visually compare the curves, one using the thickness of the curves, one using the colors of the curves. By this choice, we hope to be more inclusive with respect to the readers of our paper.
(questions)
> I think the paper would be easier to read if the number of notations were reduced
We hope the comments above clarify some proposed changes on $Z$.
> for example, $t^*$ is not compulsory in my opinion
This notation was introduced in [3]. We agree with the reviewer; however, we believe removing the $t^*$ notation (and substituting the value $\frac{1}{2-t}$) will eventually make the expressions longer and impair the readability of the derivations.
> I think some equations lack rigor. For example, $\log_t$ and $\exp_t$ are introduced at the same time, even though their spaces of definition are different
$\log_t$ and $\exp_t$ are indeed inverse functions introduced by Naudts (2011), as generalizations of the standard $\log$ and $\exp$. If the reviewer is referring to their respective domains, we can make further clarifications in the text. Did the reviewer also want us to change the width of the equations? (This would affect the height as well and would decrease the readability of our paper.)
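For concreteness, the standard Naudts-style definitions can be sketched as below (an illustrative sketch for $t < 1$; function and variable names are ours, not from the paper). It also makes the domain point visible: $\log_t$ is defined on $\mathbb{R}^+_{\star}$, while $\exp_t$ accepts any real input.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm: defined for x > 0; reduces to log at t = 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential: defined on all of R (for t < 1, clipped to 0
    below x = -1/(1-t)); reduces to exp at t = 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

# exp_t inverts log_t on the positive reals
x, t = 2.5, 0.5
assert np.isclose(exp_t(log_t(x, t), t), x)
```

At $t = 1$ both functions collapse to the ordinary $\log$ and $\exp$, which is why the derivations recover classical AdaBoost in that limit.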
> $\mu$ is not introduced either.
$\mu$ is just a real number. We can put $\mu \in \mathbb{R}$.
> I didn't understand where clamped sum was used
As explained in (9), clamped sums are used to define models.
> What does the $\|.\|_{1/t^*}$ norm represent?
The $L_p$ norm with $p = 1/t^*$. Standard notation.
We acknowledge the typos found and will make the modifications. Thanks!
(limitations)
> I didn't see any limitations[...]. I appreciated their honesty.
We would like to very much thank the reviewer for this comment.
(Reference)
Naudts, J. (2011). Generalized thermostatistics. Springer
---
Rebuttal Comment 1.1:
Title: I still have some questions
Comment: I thank the authors for clarifying the text. I hope it is more understandable by the uninitiated.
> No, $q$ is always a vector. The first index in the notation in Algorithm 1 is the iteration number, explicit in Step 2.
Ok, thanks. As $q$ is a vector, I think it would be better to use the $q^{(1)}$ notation for the iteration.
> No. Dependences may be implicitly noted to lighten the text. $Z$ is the normalization coefficient, which de facto is always a function of the leveraging coefficients
Sorry, I think my comment was misunderstood. In math, if $f$ is a function, $f(x)$ is a scalar. My problem is your choice to consider $Z_t$ to be a scalar when $Z_t(\mu)$ is the scalar. I understand that it is painful to write it every time but I find it more precise to add it.
> (and their additional parameters, eventually).
If you say that there may be other parameters, I think it is all the more important to add them.
> This notation was introduced in [3]. We agree with the reviewer; however, we believe removing the $t^*$ notation (and substituting the value $\frac{1}{2-t}$) will eventually make the expressions longer and impair the readability of the derivations.
We agree on the advantages/problems of both choices, and whether to take $t^*$ or $\frac{1}{2-t}$ is obviously just a matter of point of view. Mine remains that since you're talking about $t$ convergence, it's clearer for the reader to understand what's going on if they see $\frac{1}{2-t}$ directly rather than $t^*$. Alternatively, you could put convergences directly in $t^*$ to avoid any intellectual gymnastics.
> If the reviewer is referring to their respective domains, we can make further clarifications in the text. Did the reviewer also want us to change the width of the equations ?
My remark was simply to remind you that $\log_t$ was defined on $\mathbb{R}^+_{\star}$ and $\exp_t$ on $\mathbb{R}$. Once again, I understand that, if there is no clarification, the first idea is to copy the definition sets of $\log$ and $\exp$. But since these are extensions, what proof do we have that the set isn't different? A bit like the extension of the logarithm to the imaginary, which can be defined almost anywhere.
> We can put $\mu\in\mathbb{R}$.
Thanks.
> As explained in (9), clamped sums are used to define models.
Sorry, my question was *where $H_J^{(\delta)}$ was used?* I must have had a moment's inattention because it was line 145. Sorry for my question.
> The $L_p$ norm with $p=1/t^*$.
Thank you for your reply. I don't remember why, on first reading, it bothered me. Sorry about that. | Summary: The paper presents an extension of the ADABoost algorithm for binary classification, called t-ADABoost, by modifying the weight optimisation under simplex constraint formulation of the original algorithm. The generalised formulation involves optimisation of a modified Bregman divergence between new and old weights under a so-called co-simplex constraint (i.e. a sum constraint on the power of the weights). The paper establishes that these new optimisation problems amount to a one-parameter optimisation problem in a way similar to ADABoost, then that the resulting construction of a strong learner will, under a generalised assumption, have an empirical error which decreases exponentially quickly.
The paper complements these results with experiments comparing the behavior of the t-ADABoost algorithm for decision trees using different t values. The results show that depending on the data and presence of noise, the relationship between t value and test error can change.
Strengths: The paper is well introduced and describes an original extension of the ADABoost algorithm, incorporating the notion of tempered exponential measures, which was found to improve robustness in clustering.
The authors give sufficient evidence that the extended algorithm is able to decrease the empirical risk at at least the same rate as the original ADABoost algorithm (Theorem 2 and the discussion following) and that the use of an oracle t-ADABoost algorithm can reduce the test error compared to the initial ADABoost algorithm (Table 1). This, as the authors note, implies that while work remains to be done in order to learn the optimal t for each new dataset, this generalisation has its use cases.
Weaknesses: Part of section 5 and Algorithm 1 are confusing. Notably, in the description of Algorithm 1, the beginning of Step 2.2 starts with what seems to be a typo (I believe $\mu_{ji} = y_i h_j(x_i)$ should read $u_{ji} = y_i h_j(x_i)$), and neither of the coefficients $\nu_j$ and $\alpha_j$ is defined until Theorem 2. The wording "choose leveraging coefficient" seems to imply that the choice can be arbitrary, while, considering that Algorithm 1 is described just after Theorem 1, one could infer that the coefficient $\nu_j$ is the minimizer of equation (5).
In equation 11, the quantity $m^{1-t^*}$ is not introduced anywhere. The next mention is in line 180, where it is stated that it can be discarded in the unclamped case, but that it does play a role in equation 12, where the quantity does not appear (while $1 + m^\dag (q_j^\dag)^{2-t}$ appears and could play the "dampening part" mentioned later on).
Parts of section 6 could be clarified. There is some confusion between the notations $L$ and $\underline{L}$ ($\mathbb{E}_\lambda[\underline{L}(p_\lambda)]$ on line 200, but $\mathbb{E}_\lambda[L(p_\lambda)]$ on line 219, which in itself does not make sense as $L$ takes 2 arguments). Moreover, equation (16) together with line 200 implies that the DT is trained from the Bayes risk, and as such, reverse engineering should start by computing the Bayes risk rather than the CPE loss.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The algorithm presented here works for binary classification, as is the case for the initial ADABoost algorithm. ADABoost has been extended to multiclass settings. Could such an extension also work for t-ADABoost?
The interpretation of theorem 2 relies (line 161) on the assumption that $\lvert \rho_j\rvert \geq \gamma$. If $m_j^\dag > 0$, the value of $\rho_j$ depends on $t$ and therefore the assumption above could be false for certain $t$. Is there any insight on how this impacts the exponential decay? Moreover, even in the case where $m_j^\dag = 0$, considering that $\rho_j = \frac{1}{R_j}\sum_{i\in[m]} q_{ji}\, y_i h_j(x_i)$ and that the $q_{ji}$ no longer need to sum to $1$ but rather should belong to a co-simplex, the assumption that $\lvert\rho_j\rvert \geq \gamma$ does not have the same implication depending on the value of $t$. Could you comment on whether this is a stronger requirement in the case where $t<1$?
Are there any overfitting issues related to t-ADABoost, notably when $t \ll 1$ or when $t > 1$? Could these be investigated by allowing the number of trees grown to be larger than 20 in the experiments?
In section 6, it is mentioned that reference 10 notes that Matusita's loss implies a near-optimal boosting rate while the empirical risk gives the worst possible guarantee. Could you specify where these two results are stated in the reference?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The current form of the algorithm seems to be usable only in the case of binary classification, while the original ADABoost has been extended to multiclass settings. It is unclear whether this restriction can be lifted efficiently.
The authors clearly mention that the current work should be complemented by further insights on the selection of $t$, which is coherent with their experiment results (where the optimal $t$ value depends on the dataset and noise).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for reading our paper and noticing the typos mentioned.
(weaknesses section)
> Part of section 5 and Algorithm 1 are confusing
[6L9a:A] We sincerely apologize for the confusion, resulting from our choice of organization and unwanted typos. You are correct about $\mu_{ji}$.
> The wording "choose leveraging coefficient" seems to imply that the choice can be arbitrary
[6L9a:B] We followed the convention for AdaBoost [28] (our paper). We can do otherwise.
> could infer that the coefficient $\nu_j$ is the minimizer
[6L9a:C] (we assume it is $\mu_j$) It is a minimizer of an upper bound of (5) with the "secant trick" (II.II.2.4, supplement).
> the quantity $m^{1-t^*}$
[6L9a:D] $m$ is the number of training examples, $t^* = 1/(2-t)$. This is just a factor (the same for all leveraging coefficients) that makes the formal analysis “simple”. It logically disappears from (12) because it is part of $H$.
> Parts of section 6 could be clarified
[6L9a:E] As we state in L219, the simplification ends up with a loss of the form $\mathbb{E}_\lambda[L(p_\lambda)]$. Here, $L$ may just be any function from $[0,1]$ to $\mathbb{R}$. It turns out that, in our case, it ends up being a very particular function with the key property of being proper – and thus eliciting the Bayes prediction as its minimizer – for any $t\in [-\infty,2)$, a function usually noted $\underline{L}$ [22,23] (our paper). We can make this more explicit.
Note: we insist on the fact that in a field (ML) where we are used to seeing a plethora of loss functions, such an invariant is quite exceptional because one can safely slide $t$ in a huge range without ever breaking properness *while* guaranteeing boosting rates in the complete spectrum of known rates (and even unknown rates, for $t\in(1,2)$; L243). We know of no other parameterized loss with such a property.
(Questions section)
> The algorithms presented here works for binary classification... Could such an extension [to the multiclass setting] also work for t-ADABoost?
[6L9a:F] Most certainly, following the blueprint of [28], Section 6.
> The interpretation of theorem 2 relies (line 161) on the assumption
Excellent questions! First, the theoretical analysis covers all cases, including $m_j^\dagger > 0$. In our experiments, we always noted $m_j^\dagger = 0$, so the first question seems to be essentially relevant to theory. The question(s) led us to reconsider why we observed this, and in fact there is a simple explanation as to why $m_j^\dagger > 0$ would be very rare *if* the weak learner is not “that” weak (which is our case, with decision trees): one needs to consider Lemma J (supplement) together with (12). Let $Q_j = 1 + m_j^\dagger (q_j^{\dagger})^{2-t}$ and $\tilde{\rho}_j = \rho_j \cdot Q_j$. Note that $\tilde{\rho}_j$ is of the form $\beta \cdot E_{p_j}[yh]$ where $p_j$ lives on the *simplex*, and $|yh| \leq 1$, $\beta \leq 1$. Using Lemma J with (12), to keep geometric convergence, it is sufficient that
$Q_j \log Q_j \leq \tilde{\rho}_j^2 / (2t^*) $
Since $q_j^{\dagger}$ is homogeneous to a tempered weight, one would expect in general $m_j^\dagger (q_j^{\dagger})^{2-t} \ll 1$, so using $Q_j \log Q_j \sim_1 -1 + Q_j$, one gets the sufficient condition
$m_j^\dagger (q_j^{\dagger})^{2-t} \leq \tilde{\rho}_j^2 / (2t^*)$
Note that for $t=0$, $2t^* = 1$, so one roughly gets that to keep geometric convergence, one needs $m_j^\dagger (q_j^{\dagger})^{2-t} = O(\tilde{\rho}_j^2)$. What does that mean?
1. If it is not violated, then we have geometric convergence
2. If it is, then a large number of training examples have $q_{ji} = 0$, which means that they are receiving *the right class with large margin* [4pmA:C], [4pmA:D]. In this case, breaking geometric convergence is not an issue: we already have a very good ensemble !
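The first-order step used above ($Q \log Q \approx Q - 1$ near $Q = 1$) can be sanity-checked numerically; this is our own illustration with hypothetical values of $Q_j$, not numbers from the paper.

```python
import math

def qlogq(Q):
    """Left-hand side of the sufficient condition, as a function of Q."""
    return Q * math.log(Q)

# Near Q = 1, Q*log(Q) is well approximated by Q - 1 (relative error ~ (Q-1)/2),
# which is what turns the condition Q_j log Q_j <= rho~_j^2 / (2 t*) into the
# simpler m_j^dag (q_j^dag)^{2-t} <= rho~_j^2 / (2 t*), since by definition
# Q_j - 1 = m_j^dag (q_j^dag)^{2-t}.
for Q in (1.001, 1.01, 1.05):
    approx = Q - 1.0
    assert abs(qlogq(Q) - approx) < 0.05 * approx + 1e-12
```

So as long as the tempered mass $m_j^\dagger (q_j^{\dagger})^{2-t}$ stays small, the simplified condition is an accurate proxy for the exact one.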
[6L9a:G] We propose to put this analysis, which we believe is enlightening, at least in the supplement.
> Are there any overfitting issues related to t-ADABoost,
[6L9a:H] Excellent question. We have performed additional experiments on learning ensembles with a larger number of trees ($J$), each of them being smaller ($T$) *and* tested domains with a larger amount of training noise (because noise affects just training, it could induce overfitting by models “focusing” more on fitting training data or the noise patterns, at the expense of the full domain). We crammed a few in **the attached .pdf**, selected for the topic raised (we would be in a position to present all results in the supplement of the camera-ready). What emerges:
1. Overfitting *can* happen (winered $\eta = 0.2, 0.4$, sonar $\eta = 0.4$) but affects very differently the algorithm at different $t$ values and yields very substantial differences (by several % points). Overall, this displays that tuning $t$ can also have the purpose of handling overfitting.
2. Clamped models can be better at resisting overfitting (qsar, all $\eta > 0$). Strong incentive to train clamped models as well.
3. Some plots (sonar $\eta > 0.1$) suggest the idea that more than just tuning $t$ beforehand, good strategies could in fact **learn** $t$ and adapt it **during training** (so each iteration $j\in J$ would use a specific $t_j$). We suggest adding this in conclusion.
> In section 6, it is mentioned that reference 10 notes that Matusita's loss
[6L9a:I] Optimality of Matusita = paragraph following Theorem 1 in the JCSS version (open access). Empirical risk = worst: Fig 2 + Fig 5 (+legends). Note: [10] prove the optimality of Matusita’s from an information-theoretic standpoint. It is also shown from a computational complexity standpoint in [A]
[A] Nock & Nielsen, “On domain-partitioning induction criteria: worst-case bounds for the worst-case based” (TCS 321, pp 371-382, 2004).
---
Rebuttal Comment 1.1:
Comment: I have read the authors' answer, and thank them for their clarification.
[6L9a:A], [6L9a:D], [6L9a:F], [6L9a:H], [6L9a:I] are satisfactorily explained/taken into account.
For [6L9a:B] and [6L9a:C]: First of all, you're right, $\nu_j$ in my comment should be read $\mu_j$. My comment did not concern the rationale for the choice of $\mu_j$, but was only concerned with the presentation of the algorithm: if reading the paper linearly, the first time $\mu_j$ is mentioned, it is not yet defined. Since it is defined a page later, it could be confusing. I would advise mentioning in the Algorithm caption that the different values involved in the algorithm are chosen from Theorem 2, only to help readability.
[6L9a:E] is cleared up, although I would still advise using a different letter than L in line 219 (I've only just noticed that the letter was not in italic, and as the notation L in italic stands for a function of 2 arguments as defined in equation 15, this can be confusing).
I've read the development on [6L9a:F], though I'm unsure I understand it thoroughly.
The authors conducted thorough analysis to assess resistance to overfitting in [6L9a:H]. As the authors note, experimental results suggest that the choice of $t$ and the use of clamped models can impact overfitting risks. This increases the impact of the extension.
All in all, the authors satisfactorily answered all remarks. I'm inclined to increase the rating from 6 to 7, mostly due to the impact the generalised Adaboost procedure could have on overfitting risks.
---
Reply to Comment 1.1.1:
Title: Thanks for the last comments & inclination to increase score (6 -> 7)
Comment: We thank the reviewer for their last comments and reported impact on changing their score.
The reviewer mentions an inclination to change the rating from 6 to 7, though we are not sure it has been actioned in OpenReview.
With regards,
the authors. | null | null | null | null |
Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture | Accept (poster) | Summary: In this study the authors propose the incorporation of mechanistic biologically inspired filtering and normalization components in deep convolutional networks (DCNs) with the goal of increased alignment of model responses to V1 neural responses and tuning properties. The authors add center-surround receptive fields, local receptive fields, tuned divisive inhibition and cortical magnification to DCNs that are trained with downscaled ImageNet-64x64. The authors perform an extensive ablation of the components above to show their relative importance for alignment with V1 neural responses. The proposed best-performing model produces quite significant improvements on Brainscore's V1 model alignment. The authors also touch upon whether the above models with high V1 alignment are more robust to perceptual distortions.
Strengths: + The authors propose a unique combination of biologically inspired components that have been shown to exist in primate early visual mechanisms. These components are also explained in good depth and clarity for readers who may not be familiar with the specific computations. To the best of my knowledge, even though each of these components by themselves are not novel, the unique combination explored here appears to be novel and not explored before from the perspective of bettering alignment to V1 properties.
+ The proposed model is significantly outperforming the previous SOTA on explaining neural activity and tuning properties from the Brain-score dataset. The authors have run multiple simulations with different random seeds and add more credibility to their observed findings.
Weaknesses: - As the authors have pointed out, there is a significant drop in accuracy with respect to image classification accuracy. I believe the authors must try to address, from their perspective, why this drop occurs. It is common in this area that models that are trained to better represent biological neural activity tend to suffer from poor classification accuracy. Addressing this issue will be quite a strong contribution.
- The scope of this work is quite limited; I am unsure if the produced improvement on alignment to V1 neural responses and tuning properties is sufficient as the only major contribution in this work.
- It may help if the authors could please add some intuition about why each of the explored components help in improving neural predictivity; it is also important to find out whether these components are useful to improve model-brain alignment regardless of the underlying architecture.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see my review in the Weaknesses section for questions and suggestions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have adequately addressed limitations in their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insights, all of which have helped us improve this work. We ran additional experiments to answer questions about the drop in classification accuracy and reveal insights about the contribution of each component to explaining neural activity. In the latter experiment, we studied the features learned by each model by generating images that maximally activated neurons in artificial V1 layers via gradient ascent, and analyzed the learned parameters of the trainable, biologically-inspired components. We address the concerns raised by the reviewer and elaborate on the results of these experiments below.
> "...there is a significant drop in accuracy with respect to image classification accuracy... Addressing this issue will be quite a strong contribution."
Although not a focal point of this work, we agree that preserving image classification accuracy would be valuable to the targeted research audience. The reduction in accuracy primarily resulted from the introduction of cortical magnification, which assumed that the model’s gaze was focused on the center of the image. As a result, visual stimuli at greater distance from the image center are under-sampled during the polar transformation (Fig. 1 of rebuttal PDF). Images in which the object of interest is not located at the image center became more challenging to classify, reducing model accuracy. This challenge could be mitigated with a mechanism that dynamically determines the fixation center of this polar transform (e.g., saliency or attention maps) or performs inference using multiple fixations. We demonstrate the efficacy of this strategy with a naive proof of concept in which classification is performed by averaging predictions from the five static crops of each image. This simple strategy improved the validation accuracy of the network with all components from 55.1% to 59.9% (without affecting V1 scores). A more sophisticated, dynamic strategy (DOI:10.48550/arXiv.1709.01889) could further reduce this accuracy drop. We have added these details to our manuscript.
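The naive five-crop averaging strategy described above can be sketched as follows. This is an illustrative stand-in, not our actual pipeline: `toy_model`, the crop size, and the class count are placeholders.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def five_crop_predict(model, image, crop=56):
    """Average class probabilities over four corner crops and a center crop,
    so off-center objects land near the fixation center of at least one crop."""
    h, w, _ = image.shape
    tops = [0, 0, h - crop, h - crop, (h - crop) // 2]
    lefts = [0, w - crop, 0, w - crop, (w - crop) // 2]
    probs = [softmax(model(image[t:t + crop, l:l + crop]))
             for t, l in zip(tops, lefts)]
    return np.mean(probs, axis=0)

# toy "model": mean intensity per channel as 3-class logits
toy_model = lambda x: x.mean(axis=(0, 1))
img = np.random.default_rng(0).random((64, 64, 3))
p = five_crop_predict(toy_model, img)
assert np.isclose(p.sum(), 1.0)
```

A dynamic variant would replace the five static fixation points with ones chosen from a saliency or attention map.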
> "I am unsure if the produced improvement on alignment to V1 neural responses and tuning properties is sufficient as the only major contribution in this work."
While our results did focus on architecture-driven improvements to model-V1 alignment, this work has further contributions that we briefly summarize below and will detail in a camera-ready version:
- As the reviewer noted, we systematically analyzed the contribution of biologically-inspired components to explaining V1 activity. This analysis revealed complementary interactions of these components that improved model-V1 alignment beyond what would be suggested by any individual component.
- The developed models (the most accurate models of V1 to date) are in-silico platforms for analyzing processing in V1. Such image-computable models can enable neuroscientists to study complex dynamics of large neural populations that aren’t readily observable through data-limited, time-consuming, in-vivo observations (DOI:10.1126/science.aav9436) or run surrogate experiments that cannot be done with humans. Their processing and learned parameterization suggest new hypotheses about processing in V1 and provide evidence and alternative views for existing theories. As an example, learned features in networks with tuned normalization were commonly more diverse than those of baseline ResNets (detailed further in response #3). The improvements in V1 property scores that we observed from networks with this component can be treated as additional evidence that competition among neurons driven by tuned normalization gives rise to diverse tuning properties.
- Improving model-V1 alignment does not trivially improve image classification robustness. While small improvements to corruption robustness from tuned normalization layers were observed, alternative components stood as counterexamples to prior works that have suggested strong correlations between model-V1 alignment and classifier robustness to corrupted images.
> Re: intuition about the contribution of each explored component.
Data-driven approaches have elucidated our strongest intuitions about the contributions of these components and have also produced new neuroscientific insights about processing in V1. We briefly summarize findings from these analyses in the points below, all of which will be expanded upon in the camera-ready paper.
- Center-surround antagonism improves spatial frequency properties by learning features that are selective to a high variance of spatial frequencies. Most trainable DoG kernels learned low-variance center gaussians, suggesting strong preferences for high frequency patterns and textures (Fig. 2 of rebuttal PDF).
- We theorized that local RFs would improve response selectivity properties of artificial neurons by removing weight sharing. Our ablation studies demonstrated a drop in the response selectivity property score when local receptive fields were omitted. Surprisingly, this was not observed in isolated-component evaluation.
- Ablation studies support the role of tuned normalization in improving spatial frequency, response selectivity, receptive field size, surround modulation, and response magnitude tuning alignment. Inter-neuron competition resulting from tuned normalization led to a more diverse feature set (qualitatively depicted in Fig. 2 and quantitatively shown by statistically lower perceptual similarities), likely contributing to these property score improvements.
- Given the retinotopic organization of V1, we hypothesized that cortical magnification would give rise to better-aligned response selectivity and receptive field size tuning distributions, meanwhile improving neural predictivity. In each trial where cortical magnification was removed, these respective scores dropped, supporting this hypothesis.
The generalization of these components to different architectures has been planned for future work.
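As an illustration of the center-surround (DoG) convolutions discussed above, here is a generic difference-of-Gaussians kernel (a textbook sketch with made-up sigmas, not our trained parameters):

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians: excitatory center minus inhibitory surround.
    A low-variance center Gaussian (small sigma_c) yields a bandpass filter
    that prefers higher spatial frequencies, as observed in our trained DoGs."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)

k = dog_kernel()
# center is excitatory, surround is inhibitory
assert k[4, 4] > 0 and k[4, 0] < 0
```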
---
Rebuttal Comment 1.1:
Title: Thank you for the author rebuttal
Comment: I thank the authors for responding to the reviewers' concerns in the rebuttal. I appreciate the authors effort to successfully reduce the drop in ImageNet accuracy using static crops of the images. I do agree with the reviewers that the proposed work will be valuable to neuroscientists as a model of V1 processing, I am slightly improving my score from my pre-rebuttal evaluation of the paper. | Summary: The paper considers deep networks as a model of the visual stream, specifically V1. The authors systematically study the impact of various biological additions to deep networks on alignment of deep nets' representations with V1 recordings.
Strengths: The paper considers several features of the early visual stream that can be added to deep networks, and tests the influence of those features on image classification performance and V1 alignment. While some individual features have been considered before, the approach and especially ablation studies here are novel and, in my opinion, interesting to the community.
The final result is interesting too: combining all 4 architectural features alone resulted in the best V1 alignment (0.605 vs. 0.594 of the top1 V1 model www.brain-score.org/model/vision/623). Adding adversarial features improved it to 0.629, which seems very significant -- the median V1 score at www.brain-score.org/ is less than 0.5.
Weaknesses: The results in Tab. 3 suggest that V1-like features significantly hurt ImageNet performance -- the best V1 model is 16% less accurate than the best ImageNet model. This is mostly due to adversarial training. I think the authors should discuss why it has such an effect.
Two important ablation studies are missing:
1. Adversarial training only, since it has a big effect on both V1 alignment and ImageNet performance.
2. Untrained networks with all/some biological features. All discussed features change the distribution of neural responses even in untrained networks, so it might be that V1 improvements come from that distribution change alone, not from training with those features.
I also suggest the authors include, at least in the appendix, Brain-Scores for other areas (V2, V4, IT) and behavioural data. This is the standard way to evaluate models on Brain-Score, so having all results would make comparisons to other models easier.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to provide a baseline for the "best possible" V1 score by comparing Brain-Score neural data to itself (e.g. with K-fold cross-validation)? I don't think it was done in the original Brain-Score papers, so it's definitely not a hard requirement here. But it would be a great addition.
### Minor issues
> [Line 20] Advances in neuroscience have long been proposed as essential to realizing the next generation of artifical intelligence (AI).
Is that true? I'm not quite sure… also misspelled artificial
> [22] (e.g, convolutional neural networks and mechanisms of attention) owe their origins to biological intelligence
Conv nets need a citation; attention too, and I'm not sure if attention mechanisms in transformers were even inspired by biology on the implementation level (see https://www.frontiersin.org/articles/10.3389/fncom.2020.00029/full which says "While the spirit of attention in machine learning is certainly inspired by psychology, its implementations do not always track with what is known about biological attention, as will be noted below.")
Overall: \cite doesn’t generate links to bibliography?
> [45] In specific
I don’t think this phrase is commonly used. “Specifically” or “in particular” would read better.
> [151] these DoG convolutions were only applied to a fraction of the input feature map
Why?
Fig. 1D can benefit from a more detailed explanation. I want to say the original image is on the right but then the transformation doesn’t preserve retinotopy.
> [209] alternate
Alternative
Color-coding Tab. 3 would be great!
The code is in the supplementary, so the authors should indicate it in the main text (and perhaps add it to github afterwards)
### Rebuttal acknowledgement
I have read the rebuttal and responded to the authors. I think it addressed all (minor) concerns that I had, and I still think 7 (accept) is an appropriate score.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations and impacts are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback and questions. Our responses to the concerns raised are provided point-by-point below.
>The results in Tab. 3 suggest that V1-like features significantly hurt ImageNet performance -- the best V1 model is 16% less accurate than the best ImageNet model. This is mostly due to adversarial training. I think the authors should discuss why it has such an effect.
While the focus of our work was to highlight the contribution of a biologically-inspired architecture towards explaining V1 activity in response to arbitrary image stimuli, we also realize that preserving image classification accuracy would be valuable to practitioners who wish to use such models in experiments for which core object recognition inference is important. There are two primary reasons for the reduced accuracies:
1. In comparison to the 64x64 ResNet50 baseline, the observed reduction in image classification accuracy of models that were not adversarially trained primarily resulted from the introduction of cortical magnification, which assumed that the model’s gaze was focused on the center of the image; consequently, visual stimuli at greater distance from the image center are under-sampled during the polar transformation (Fig. 1 of rebuttal PDF). Images in which the object of interest is not located at the image center (common in ImageNet) became more challenging to classify, reducing model accuracy. This challenge can be mitigated with a mechanism that dynamically determines the fixation center of this polar transform (e.g., saliency or attention maps) or performs inference using multiple fixations. We demonstrate the efficacy of this strategy with a naive proof of concept in which classification is performed by averaging predictions from the five static crops of each image. This simple strategy improved the validation accuracy of the network with all components from 55.1% to 59.9% (without affecting V1 scores). A more sophisticated, dynamic strategy could further reduce this accuracy drop.
2. The accuracy of the model that achieved the highest model-V1 alignment (the adversarially trained, biologically constrained model) was further reduced by adversarial training. We evaluated the classification accuracy of an adversarially trained ResNet50 at 55.5% (rebuttal PDF, Table 2).
These revisions will be detailed and clarified in the camera-ready paper.
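For concreteness, the five-crop averaging strategy described in point 1 can be sketched as follows. This is a minimal NumPy illustration, not the actual implementation; `model` is a hypothetical callable that maps an image crop to class probabilities.

```python
import numpy as np

def five_crops(image, crop):
    """Extract the four corner crops and the center crop of an HxWxC image."""
    h, w = image.shape[:2]
    ch, cw = crop
    tops = [0, 0, h - ch, h - ch, (h - ch) // 2]
    lefts = [0, w - cw, 0, w - cw, (w - cw) // 2]
    return [image[t:t + ch, l:l + cw] for t, l in zip(tops, lefts)]

def averaged_prediction(model, image, crop=(48, 48)):
    """Average class probabilities over the five static crops."""
    probs = [model(c) for c in five_crops(image, crop)]
    return np.mean(probs, axis=0)
```

Because each crop places a different image region near the fixation center of the polar transform, averaging the five predictions partially compensates for off-center objects.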
>Two important ablation studies are missing: 1) Adversarial training only and 2) Untrained networks with all/some biological features
We thank the reviewer for these suggestions and agree that they are important. We have run these evaluations and included the results in Tables 1 and 2 of the rebuttal PDF. For ease of reference, the V1 Overall Brain-Score for the adversarially trained ResNet50 was 0.581 and no statistically significant correlation was observed between untrained and trained model V1 Overall scores. These evaluations will be added to the paper.
>I also suggest the authors include, at least in the appendix, Brain-Scores for other areas (V2, V4, IT) and behavioral data. This is the standard way to evaluate models on Brain-Score, so having all results would make comparisons to other models easier.
We agree with the reviewer’s suggestion. These scores will be added to our supplementary material. For reference, V2, V4, and IT scores for the top performing model (all-components with adversarial training) are .343 (rank 26), .459 (rank 118), and .343 (rank 168), respectively.
> Question: Is it possible to provide a baseline for the "best possible" V1 score by comparing Brain-Score neural data to itself (e.g. with K-fold cross-validation)? I don't think it was done in the original Brain-Score papers, so it's definitely not a hard requirement here. But it would be a great addition.
V1 Predictivity scores are computed as correlations between measured and predicted neural activity, normalized by the internal consistency of the measured neural data. V1 Property scores are similarly ceiled according to maximum distribution similarities observed among the measured neural data.
Regarding these internal consistency scores and maximum distribution similarities, we unfortunately could not calculate this as we do not have access to the private neural evaluation data. This is an interesting question, however, and the answer to it could have interesting implications regarding what we could expect from an “optimal” model.
> Minor: [151] these DoG convolutions were only applied to a fraction of the input feature map. Why?
We skip the DoG convolution for some channels to account for the fact that not all V1 neurons have a symmetric surround suppression region (DOI:10.1523/JNEUROSCI.19-23-10536.1999, 10.1038/nn1310).
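As an illustration of this design choice, a difference-of-Gaussians (DoG) filter can be applied to only a fraction of the channels, leaving the remaining channels without a symmetric surround. This is a simplified NumPy sketch with hypothetical kernel sizes and sigmas, not the exact implementation.

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """2D difference-of-Gaussians: narrow excitatory center minus broad surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)

def partial_dog(feature_map, fraction=0.5, size=5, sigma_c=0.8, sigma_s=2.0):
    """Convolve only the first `fraction` of channels with a DoG kernel;
    pass the remaining channels through unchanged (no symmetric surround)."""
    c, h, w = feature_map.shape
    k = dog_kernel(size, sigma_c, sigma_s)
    n = int(c * fraction)
    out = feature_map.copy()
    pad = size // 2
    for ch in range(n):
        padded = np.pad(feature_map[ch], pad, mode="edge")
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```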
>Minor: Fig. 1D can benefit from a more detailed explanation. I want to say the original image is on the right but then the transformation doesn’t preserve retinotopy.
We apologize for this confusion. The original image is on the left and the transformed image is on the right. Fig. 1 of the rebuttal PDF shows two example images before and after the transformation. The caption of Figure 1D has been updated and these example figures will be added to the supplementary material for clarity.
>Minor: The code is in the supplementary, so the authors should indicate it in the main text (and perhaps add it to github afterwards)
We plan to make our code publicly available on github following anonymous review and will add the corresponding link in the main text.
>Minor: Remaining minor issues regarding grammar, spelling, formatting, citation issues, and unclear claims about the importance of neuroscience in AI and biologically inspired mechanisms in artificial neural networks.
We would like to thank the reviewer for highlighting these oversights. We will resolve these issues and clarify these claims in the camera-ready paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the response! Overall, I think it addressed all (minor) concerns that I had, and I still think 7 (accept) is an appropriate score.
> We demonstrate the efficacy of this strategy with a naive proof of concept in which classification is performed by averaging predictions from the five static crops of each image
Great! I believe this technique, test-time augmentation, is not uncommon in deep learning, and makes sense for models of the visual stream.
I also think the new results with untrained networks, which achieved much lower V1 alignment than the trained ones, strengthen the results -- they suggest that all added components don’t just match superficial features of V1 processing, but lead to better (in terms of alignment to V1) training.
Other reviewers have noted the limited scope of this paper, but I don’t completely agree. I think the contribution of this paper is significant enough for the task of building better models of the visual stream. | Summary: The authors incorporated four well-known architectural components of V1 into an earlier layer of the CNN, resulting in a reduction in task performance but an improved alignment with V1 neurons' behaviors. Their study demonstrated that cortical magnification led to the most significant enhancement in alignment, as observed in the overall property and predictability of V1 in the Brain-Score benchmark test. Tuned normalization also improved alignment in certain V1 properties, while the contribution of center-surround mechanisms appeared to be minimal or data-dependent. These improvements generally ranged between 1% and 2%.
Strengths: The motivation and hypotheses of the study are reasonable, and the exploration is conducted in a systematic and logical manner. The finding that cortical magnification provides some improvement in alignment is interesting.
Weaknesses: The introduction of brain architectural components was expected to enhance the alignment of the model with V1 data in the Brain-Score test, so the results are not particularly surprising. While the paper contains some systematic and well-done experiments, it does not provide an explanation as to why certain architectural components would have specific effects. As a result, the main contribution is simply showing incorporating more information relevant to the data would improve performance in explaining the data, at the expense of the model's task performance. Although the work may hold value, its contribution might not reach the level typically expected in a NeurIPS paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be helpful to discuss why and how certain architectural components would produce greater alignment while others do not. Is there any theoretical and conceptual framework that can help us to make sense of the results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations on performance drop has been discussed. The work could have implications on neuroscience.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insights. We have broken down the raised concerns and provide our responses below.
>The introduction of brain architectural components was expected to enhance the alignment of the model with V1 data in the Brain-Score test, so the results are not particularly surprising
We appreciate this feedback but suggest the contrary, that the results are nontrivial and surprising. From one perspective, classical neuroscientific models of V1 (inherently based on architectural components of the brain) fail to predict V1 neural responses as well as task-driven ANNs (DOI:10.1101/2021.03.01.433495). Second, none of the top-scoring models on Brain-Score feature biological components and prior work has demonstrated strong correlations between ImageNet accuracy and model-brain alignment (DOI:10.1101/407007). Further, in isolation (main text, Table 1), most modules did not improve model-V1 alignment (center-surround antagonism and both divisive normalization components reduced the V1 Overall score, on average). Surprisingly, it was only when these components were combined that we observed drastically improved explanations of V1, suggesting their complementary contribution.
>it does not provide an explanation as to why certain architectural components would have specific effects
We thank the reviewer for this insight and have since run additional studies to explain these observations. Specifically, we analyzed the features learned by each network variant by visualizing images that maximally activated neurons in artificial V1 layers via gradient ascent and studied the learned parameters of trainable, biologically-inspired components. We summarize the conclusions of these experiments below and will further detail these insights in the camera-ready paper:
- Center-surround antagonism improves spatial frequency properties by learning features that are selective to a wide range of spatial frequencies. Most trainable DoG kernels learned low-variance center gaussians, suggesting strong preferences for high frequency patterns and textures (Fig 2 of rebuttal PDF).
- We theorized that local RFs would improve response selectivity properties of artificial neurons by removing weight sharing. Our ablation studies demonstrated a drop in the response selectivity property score when local receptive fields were omitted. Surprisingly, this was not observed in isolated-component evaluation.
- Single and multi-component studies support the role of tuned normalization in improving spatial frequency, response selectivity, receptive field size, surround modulation, and response magnitude tuning alignment. Inter-neuron competition resulting from tuned normalization led to a more diverse feature set (qualitatively depicted in Fig 2 and quantitatively shown by statistically lower perceptual similarities), likely contributing to these property score improvements.
- Given the retinotopic organization of V1, we hypothesized that cortical magnification would give rise to better-aligned response selectivity and receptive field size tuning distributions, meanwhile improving neural predictivity. In each trial where cortical magnification was removed, these respective scores dropped, supporting this hypothesis.
>the main contribution is simply showing incorporating more information relevant to the data would improve performance in explaining the data, at the expense of the model's task performance
While our results did focus on architecture-driven improvements to model-V1 alignment, it has further contributions that we briefly summarize below and will expand upon in the camera-ready paper:
- As the reviewer noted, we systematically analyzed the contribution of biologically-inspired components to explaining V1 activity.
- The developed models (the most accurate models of V1 to date) are in-silico platforms for analyzing processing in V1. Such image-computable models can enable neuroscientists to study complex dynamics of large neural populations that aren’t readily observable through data-limited, time-consuming, in-vivo observations (DOI:10.1126/science.aav9436) or support experiments that cannot be run in humans. Their processing and learned parameterization suggest new hypotheses about processing in V1 and provide evidence and alternative views for existing theories. As an example, learned features in networks with tuned normalization were commonly more diverse than those of baseline ResNets. The improvements in V1 property scores that we observed from networks with this component can be treated as additional evidence that competition among neurons driven by tuned normalization gives rise to diverse tuning properties.
- Improving model-V1 alignment does not trivially improve image classification robustness. While small improvements to corruption robustness from tuned normalization layers were observed, alternative components stood as counterexamples to prior works that have suggested strong correlations between model-V1 alignment and classifier robustness to corrupted images.
Regarding diminished classification accuracy, these reductions primarily resulted from the introduction of cortical magnification, which assumed that the model’s gaze was focused on the center of the image. Images for which the object of interest is not located at the image center became more challenging to classify, reducing accuracy (rebuttal PDF, Fig 1). This challenge can be mitigated with a mechanism that dynamically determines the fixation center of this polar transform (e.g., saliency or attention maps) or performs inference using multiple fixations. We demonstrate the efficacy of this approach with a naive strategy that performs inference by averaging predictions from five static crops of each image, improving the accuracy of the network with all components from 55.1% to 59.9% (without affecting V1 scores). A more sophisticated, dynamic strategy (DOI:10.48550/arXiv.1709.01889) could further reduce this accuracy gap.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thank you for the responses. It is indeed interesting to show that "it was only when these components were combined that we observed drastically improved explanations of V1, suggesting their complementary contribution." Somehow I did not get this on my earlier reading.
Thank you also for the additional experiments which do provide some insights.
I am willing to upgrade my score, but my overall sentiment is very much aligned with that of Reviewer bbGj.
Incidentally, it would also be interesting to compare this transfer-learning + neural constraint model with purely data-driven model, e.g.
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006897
However, I suppose that data set won't have contextual modulation effects, and thus can't serve as a real neuron-in-silico.
Perhaps some Allen Institute dataset on mice could work. | Summary: This paper incorporates a wide range of biologically inspired components into the initial stages of a ResNet model to see if these result in improved alignment with properties of V1 neurons. Specifically, the authors incorporate architectural components for Center-Surround, Local Receptive Fields, Divisive Normalization, Tuned Normalization, and Cortical Magnification. They find that some of these components are complementary when explaining different properties of V1. Adversarial training further improves the match to V1 responses.
Strengths: This paper provides a thorough discussion about many hypothesized computations in V1, and incorporates these computations into deep neural networks as modules with learnable components. The individual contributions of these components are systematically evaluated in terms of how well they capture properties of V1 and predict V1 responses to stimuli. Building this type of biologically inspired model with many V1 components is novel and the discussion of incorporating biological components into AI systems is currently of high interest for the NeurIPS audience (both on the CS and neuroscience communities).
Weaknesses: * The idea of including biological components into neural networks is not novel, and the paper lacks full discussion of other attempts along these lines. Perhaps most relevant here is VOneNet (Dapello et al. 2020) which incorporates properties of V1 into a convolutional neural network (this paper is cited but not in the context of modeling V1). Although the exact architectural components added are different here (and designed for specific response properties), the paper would benefit from an explicit discussion about what sets this work apart from what was previously done.
* Re clarity: As the paper is currently written, I found it difficult to follow what is previous work and what is built into the model. In some parts of the “Background” it is mentioned that certain things are adopted in the previous work, but other parts are not discussed as being incorporated until the Methods. It might help the reader if these sections were restructured so that it is clear how the previous work builds into the proposed architecture.
* The paper claims to “introduce architectural components based on neuroscience foundations” however, from what I can tell, all of the included architecture components have been previously proposed. The novelty of this paper is including all of them within one model and analyzing them systematically.
* Due to the number of models and response properties that are tested, and the somewhat small differences in many cases, it is difficult to interpret which of the brain-score results are significant, although I do appreciate that the authors report mean and standard deviation across 3 trials of training and evaluation. Perhaps just changing the colors to make it clear which differences are or are not significant (correcting for multiple comparisons) would help? But generally, the changes in Brain-Score seems relatively small as all models are still far from the noise ceiling, making some of the claims of the paper seem too grandiose given the reported results.
* The robustness to corrupted images experiment also seems like the differences from the baseline model may be within the noise after correcting for multiple comparisons, and the overall changes are very small.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: a) It would be helpful to include a model with *just* adversarial training and none of the biological components as an additional baseline model in Table 3.
b) I’m a bit confused about the greedy backwards elimination approach. Was the architectural component that “reduced overall V1 alignment” selected specifically at that given stage of the architecture (ie was a model trained eliminating each component separately, and then just one was chosen?) Or was this elimination order based on the single-architectural component experiment?
c) On line 199-200 it states that the cortical magnification is applied immediately before the ResNet50 layer 1. Does this ResNet still have the first convolutional layer that precedes all of the residual blocks, such that the cortical magnification is acting on the output of the convolutional layer/pooling? This detail should be clarified.
d) Was the decision to downsample the ImageNet images to 64x64 based on biology or computational constraints? Is it possible to train on more standard image sizes? This would possibly help the accuracy of the models.
e) Related to the above – is the baseline ResNet50 presented trained on these 64x64 images as well?
f) What networks are being referred to on lines 327-330? Are these the networks with adversarial training? (If so, this claim maybe should be toned down, as the margins are not “large” and still far off from the noise ceiling).
Minor:
* Lines 25-28: It would be helpful if specific “neuroscientific” models were listed here. I think the idea is that the models referred to in this section are hand-designed based on observed neural properties, rather than being optimized in some other manner?
* Line 32: “Through typical task-driven training alone” – this is not quite true, as a (typically linear) readout must be trained to map the activations of the neural network onto the responses of neurons.
* Lines 36: It would improve the paper if citations showing that CNNs are not achieving properties of the visual system were included here.
* Lines 68-73 are vague and uninformative to the context of the presented work. It would help to be more specific about failures of the models and previous work.
* Line 325: Include a reference to the supplementary section documenting the training details for the adversarially trained network.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discuss the limitations in the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insights. Our responses to each raised concern are provided in the points below and we will add these details to the camera-ready paper.
>Re: The paper would benefit from an explicit discussion about novelty
We appreciate this feedback and have run additional experiments to expand our results and discussions. We studied the features learned by each network by generating maximally activating images of artificial V1 layers via gradient ascent, analyzed learned parameters of trainable components, and elaborated on the original ablation studies. We summarize our findings in the points below.
- Center-surround antagonism improves spatial frequency properties via features selective to a wide variety of spatial frequencies. Most trainable DoG kernels learned low-variance center gaussians, suggesting strong preferences for high frequency patterns and textures (rebuttal PDF, Fig 2).
- We theorized that local RFs would improve response selectivity of artificial neurons by removing weight sharing. Ablation studies demonstrated a drop in the response selectivity property score when local receptive fields were omitted. Surprisingly, this was not observed in isolated-component evaluation.
- Ablation studies supported the role of tuned normalization in improving spatial frequency, response selectivity, receptive field size, surround modulation, and response magnitude tuning alignment. Inter-neuron competition resulting from tuned normalization led to a more diverse feature set (qualitatively depicted in Fig 2 and quantitatively supported by statistically lower perceptual similarities), likely contributing to these improvements.
- Given the retinotopic organization of V1, we hypothesized that cortical magnification would give rise to better-aligned response selectivity and receptive field size tuning distributions and additionally improve explanation of neural responses. Each of these scores dropped whenever cortical magnification was removed, supporting this hypothesis.
Regarding explicitly identifying the novelty of this work:
- Prior works have evaluated a subset of these components independently. Our systematic analysis revealed nontrivial and complementary interactions that improved model-V1 alignment beyond what would be suggested by individual components, yielding SOTA models of macaque V1.
- The developed models are in-silico platforms for analyzing processing in V1. Such image-computable models can enable neuroscientists to study complex dynamics of large neural populations that aren’t readily observable through data-limited, time-consuming, in-vivo observations (DOI:10.1126/science.aav9436) or support experiments that cannot be run in humans. Their processing and learned parameterization suggest new hypotheses about processing in V1 and provide evidence and alternative views for existing theories.
- Improving model-V1 alignment does not trivially improve classifier robustness. While small improvements to corruption robustness from tuned normalization layers were observed, alternative components stood as counterexamples to trends suggested in prior works.
>Re: Interpretability of brain-scores across many models and small changes in many cases
We wish to clarify that integrating individual components had little impact on model-V1 alignment (with the exception of cortical magnification). It was when these components were integrated together that substantial improvements emerged (rebuttal PDF, Fig 3). To date, these are the most accurate models of macaque V1 and prior work has demonstrated that even models with neural alignment far from the noise ceiling have utility in revealing novel insights about processing in the brain (DOI:10.1126/science.aav9436).
>Re: Small changes in robustness to corrupted images
We agree that the changes resulting from tuned normalization were minor (akin to the architecture-only effects observed in VOneNet (Dapello et al. 2020)). Notably, different components challenged the presumed link between model-V1 alignment and classifier robustness. Systematically investigating this divergence through analysis of the training dataset, training dynamics, and architecture would be an intriguing avenue for future exploration.
>Re: Adversarially trained ResNet50 as a baseline
We appreciate this suggestion. These results have been included in Table 2 of the rebuttal PDF.
>Re: Greedy backwards elimination
In this approach, we iteratively removed individual components from the architecture, computed which contributed the least to the V1 Overall score, and then removed this single component. This was done in a top-down approach, starting with our top-performing model, to deduce the critical components without having to evaluate every model permutation.
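The elimination loop described above can be sketched in a few lines. Here `score_fn` is a hypothetical callable that returns the V1 Overall score of a model trained with a given subset of components; in practice each evaluation involves retraining, which this sketch omits.

```python
def greedy_backward_elimination(components, score_fn):
    """Starting from the full component set, repeatedly remove the single
    component whose removal hurts the score the least, recording the order."""
    remaining = set(components)
    order = []
    while remaining:
        # Score every model with exactly one component removed.
        candidates = {c: score_fn(remaining - {c}) for c in remaining}
        # Drop the component whose removal keeps the score highest,
        # i.e., the one contributing least to the current model.
        least_critical = max(candidates, key=candidates.get)
        remaining.remove(least_critical)
        order.append(least_critical)
    return order
```

Because only one component is removed per step, this explores a linear number of models instead of every subset of the full architecture.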
>Re: Cortical magnification integration
This ResNet still has the first conv and batch norm layers. The cortical magnification layer replaced the pooling layer before the first residual block, as it implicitly performs pooling among pixels of the same polar cell.
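As an illustrative sketch of cortical magnification as a resampling layer: a log-polar grid centered on the image oversamples the center (fovea) and undersamples the periphery. This minimal NumPy version uses nearest-neighbor sampling and hypothetical grid sizes; the actual layer differs in details such as pooling among pixels of the same polar cell.

```python
import numpy as np

def polar_resample(image, n_rings=32, n_angles=64):
    """Resample an HxW image onto a log-polar grid centered on the image
    center: exponentially spaced rings oversample central pixels and
    undersample peripheral ones (cortical magnification)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_r = min(cy, cx)
    # Exponentially spaced radii emphasize the fixation point.
    radii = max_r * (np.exp(np.linspace(0, 1, n_rings)) - 1) / (np.e - 1)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    out = np.empty((n_rings, n_angles))
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        out[i] = image[ys, xs]  # nearest-neighbor lookup along each ring
    return out
```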
>Re: ImageNet downsampling
This downsampling was done for all models due to computational constraints (not a requirement of any component). We agree that training without downsampling is likely to improve model accuracy. Reductions in model accuracy were also found to be mitigated by multi-fixation inference strategies that addressed classification challenges associated with cortical magnification.
>Re: Toning down claim on lines 327-330
The networks in question refer to the top performing models with and without adversarial training. Top V1 alignment scores were previously separated by small margins, and these models were evaluated as the most accurate models of activity in V1 to date. These substantial improvements are depicted in Fig 3 of the rebuttal PDF.
>Re: Paper clarity and minor suggestions
We appreciate these suggestions, agree with the reviewer’s points, and will address these points in the camera-ready paper.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thank you for the response. From this, it is more clear that the benefit of this work comes from integrating all of the different V1 components into the same model, showing that this results in better alignment with V1 responses. If this is the case, I think that should be made more explicit throughout the paper (ie in the listed bullet points on line 46).
I also agree with Reviewer yZMG about the Untrained models being a nice addition.
After the author responses I still find this work borderline. I appreciate the amount of effort it takes to integrate all of these components into a single model (and that itself may be useful for the field as a new baseline, as the authors discuss). But as currently presented, I'm not sure if that is "new" enough or if the insights gained are deep enough for a typical NeurIPS paper.
Finally, this is a bit of my personal preference in terms of wording (and thus is not influencing my score), but as neuroscientists, I wonder if we want to fall into the trap of publishing paper after paper highlighting the achievements of "SOTA" on a particular benchmark when the improvements are around +2-3%? To me, it seems especially problematic given the limited size of the current datasets. This is why I was highlighting the concerns about multiple-comparisons above, and suggestions to tone down the language about things like "unprecedented explanation" of V1. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and productive feedback. We were grateful to read that the reviewers agreed that the biologically-constrained models proposed in this work significantly outperform previous SOTA on explaining neural activity observed in macaque V1 (Y5Aa, yZMG), were systematically evaluated (EW9X, bbGj, yZMG), and that the work is of high interest to the NeurIPS audience (bbGj, Y5Aa).
The reviewers raised constructive questions regarding intuition behind the contribution of each analyzed component towards explaining neural activity in V1 (Y5Aa, bbGj, EW9X), solutions to mitigate reductions in image classification performance upon introducing these biologically-motivated components (Y5Aa, yZMG, EW9X), and further implications of the observed results (Y5Aa, bbGj, EW9X).
We address the questions and concerns raised by each reviewer point-by-point in the respective threads below. In summary, data-driven insights emergent from studying learned features and parameters of the trained models are suggestive of each component's contribution to explaining neural activity in V1. Static center-of-gaze assumptions of the cortical magnification layer made the model more susceptible to misclassifying images for which the object of interest was outside of the image center, a challenge that could be mitigated with dynamic or recurrent fixation strategies. Finally, the learned parameters and processing strategies of these state-of-the-art models of V1 could unveil hypotheses and provide evidence for visual processing strategies in the primary visual cortex.
Pdf: /pdf/ea612736a52e5b8774624268be5522203b2c42db.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Topological Obstructions and How to Avoid Them | Accept (poster) | Summary: The authors investigate two types of topological obstructions that pose challenges for models aimed at learning a particular structure in the embedding space. Specifically, the authors identify figure-eight local minima and mismatches in winding numbers as two defects that make learning the right latent structure difficult. The authors propose a VAE method to improve the learning procedure in order to better reflect the underlying data’s structure. The authors run experiments on three shapes and measure continuity of the learned space in comparison to existing VAE models.
Strengths: - Understanding challenges arising in the learning dynamics of models to ensure they appropriately reflect the underlying structure of data is an important topic.
- Authors acknowledge a reasonable set of limitations in the conclusion.
- It’s nice to see the authors applying tools from topology to examine how well the right structure can be learned by a model’s embedding space.
Weaknesses: - The writing flow for Section 2 and 3 could use more work to improve clarity
- Problem statement would benefit from an illustrative example of an application for f, h, and pi to help make the setup in line 61-64 more clear. I’d suggest the authors consider moving the running example earlier or using another application to motivate and illustrate this setup.
- Figure 2 isn’t referenced at all in the running examples section and doesn’t appear in the text until Section 3 (line 107). I’d recommend moving the reference to Figure 2 earlier.
- Figure 2: the figure would benefit from additional labels to indicate what each component is. For example, M as I understand is comprised of points for each degree of rotation for the object. The three right hand diagrams should be labeled clearly to indicate they are 3 runs (with different seeds).
- I found the empirical evidence, especially for the Torus experiment in 5.2, to be quite weak. For example, the authors base the results on only 5 runs, grounding the conclusion on the fact that “VAE learns a homeomorphic mapping 0 times” while “GF-VAE learns the mapping 2 out of 5 times”. I don’t find this sufficiently convincing evidence, as the authors test a single learning rate and setup across only 5 runs. I would recommend the authors run many more trials to prove this out.
- Unfair comparison to baselines: In the appendix, the full table shows Beta-VAE (with beta = 4) is quite an effective baseline, but this is absent from the main table 1 in the paper, which only compares against vanilla AE and VAE even though the GF-VAE model uses a beta term. This is not an apples-to-apples comparison to tease out the impact of the authors’ proposed method. I suggest the authors include beta-VAEs in the main table and appropriately compare their method to the same beta-VAE baseline.
- I’m also surprised the authors did not include a comparison to other topological models that impose equivariance to Lie groups, but only to standard VAE models that do not impose group structure in the latent space.
Details
- typo in citation line 74
- typo line 227: “stand” → "standard”
- typo 239: extra space
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Why does the setup include a projection function pi from y to Z? You cite Falorsi, but I couldn’t find intuition or motivation for this at all in the paper. Could you elaborate?
- How prevalent in practice is the figure-eight local minimum problem you identify in 3.1? This seems like a very specific topological artifact. Is it found in real applications, or does it arise in the training of deep networks?
- Could you explain the justification for the claim on line 178: “In order for f to be a homeomorphism, the winding number must be -1 or 1?”
- What exactly are the columns in table 1? The caption is quite sparse and doesn’t adequately describe how #H or Continuity are measured. The discussion of continuity seems to be relegated entirely to the appendix. At least a brief description with some intuition should be in the main text.
- What’s the rough computational cost of training with the additional normalizing flow regularization?
- How come the baseline “reg-y” model performs so poorly? I’m surprised that explicitly encouraging the latent space to be close to S1 doesn’t improve continuity on two of the three shapes.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your kind review and constructive comments. We appreciate your thorough review and detailed questions! We are also delighted that you recognize the topic of our work as an important one. We believe we can address most of your concerns. We will respond to your questions and comments below:
**Clarity of Section 2 and 3**:
Thank you for your comments. We will make sure that Figure 2 is referenced in the running example part, and we will revise it to make clear that the plots on the right correspond to 3 different random seeds. We will also try to add a separate example at the beginning of Section 2 if there is sufficient space.
**Tori Results**:
Thank you for your constructive comment. The reviewer is completely correct. We were slightly short on time before the submission deadline and therefore did not run the Tori experiment for 15 seeds or perform any further hyperparameter tuning. We have since run additional experiments for the Tori setting: GF-VAE (β = 6) converges to a homeomorphic mapping 9/15 times, versus 0/15 for VAE (β = 6). We hope this has addressed your concern.
**Comparison against β-VAE**:
Thank you for your constructive comment. We will make sure to add the β-VAE result in the main paper. As mentioned in our Appendix as well as [1], β was an important hyperparameter as it maximizes entropy which makes sure that all of the Lie group is covered and therefore improves continuity. However, as it is shown in the Appendix, you normally need both β and a GF-VAE architecture to learn a homeomorphic mapping (see β-VAE results for Tetrominoes and Airplanes).
**Comparison against equivariant neural networks**:
Thank you for your constructive comment. Equivariant neural networks achieve homeomorphism by definition, so the reviewer is correct that if we have prior knowledge of the Lie group *and* we can design an equivariant network for it, we should. However, as we are trying to build towards cases where we do not assume prior knowledge of the group, or at least no design of an equivariant neural network for it, here we consider a setting where no constraints are imposed on the neural network and only on the latent space. We hope that this, together with our general response and our response to Reviewer mNP4, has addressed your concern.
**Why does $\pi$ exist?**
Ordinary neural networks only map from Euclidean to Euclidean space. Therefore, if we want to map to a different space such as a Lie group, we have to add a separate mapping which is the role of $\pi$ in our design. For example, in the case of $S^1$, we can do this by either encoding to $\theta \in \mathbb{R}^1$ and then map to the circle $[\cos \theta, \sin \theta]$, or encode to $y \in \mathbb{R}^2$ first and then project to the circle $\pi(y) := y / \|y\|$. Hope this clarifies the role of $\pi$.
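For concreteness, a minimal sketch of these two options (our own illustrative code, not from the paper's implementation):

```python
import numpy as np

def encode_via_angle(theta):
    """Option 1: encode to an angle theta in R^1, then map onto the circle S^1."""
    return np.array([np.cos(theta), np.sin(theta)])

def pi(y):
    """Option 2: encode to y in R^2 first, then project: pi(y) = y / ||y||.

    Note that pi is undefined at the origin, which is one reason the
    geometry of the y-space matters during optimization.
    """
    return y / np.linalg.norm(y)
```

Both maps land on the unit circle; the difference is whether the network's raw output lives in $\mathbb{R}^1$ or $\mathbb{R}^2$.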
**How prevalent is the figure-8 obstruction?**
We mainly decided to analyze this obstruction because, empirically, it was the most common obstruction we faced in practice, as can be seen in the encodings in Figure 5. It is not unique in the sense that it sometimes manifests itself in the form of a trefoil or figures with crossing numbers higher than 1 as well. However, we focus on the figure 8 in our theory because it is the simplest case. Please see our general response for more details.
**Why should the winding number be either 1 or -1 for a homeomorphic mapping?**
Thank you for the question! The winding number can intuitively be explained as how many times we ‘wrap around’ in the output space if we wrap around once in the input space. Winding number 1 means that as we go from $-\pi$ to $\pi$ (rotating counter-clockwise) in the input, the circle in the output space is also covered once in the same direction. Winding number -1 is the same thing except the circle in the output space is wrapped in the clockwise direction as we go from $-\pi$ to $\pi$ in the input space.
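As an illustrative sketch (our own code, not from the paper's implementation, assuming a finely discretized closed loop of 2D points), this wrapping count can be computed by summing angle increments:

```python
import numpy as np

def winding_number(points):
    """Winding number of a closed, discretized loop of 2D points around
    the origin. `points` is an (N, 2) array tracing the loop; the segment
    from points[-1] back to points[0] closes it. Assumes consecutive
    points differ by less than pi in angle (fine discretization)."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    # Angle increments, including the closing segment, wrapped into [-pi, pi).
    diffs = np.diff(np.append(angles, angles[0]))
    diffs = (diffs + np.pi) % (2 * np.pi) - np.pi
    # Total signed angle traversed, in units of full turns.
    return int(round(diffs.sum() / (2 * np.pi)))
```

Counter-clockwise loops give +1, clockwise loops give -1, and a loop covering the circle twice gives +2.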
**Details of our Evaluation Metrics**:
Thank you for the question! We could have done a better job describing the details of our evaluation. The #H column shows the number of runs that converged successfully to a homeomorphic mapping. Evaluating homeomorphism is difficult in general as it requires verifying continuity across the full domain. Here, we evaluated it based on two criteria: (1) the continuity score is less than 8 (empirically, we observed the encoding to appear smooth below this threshold), and (2) the winding number is 1 or -1. The other column is continuity, for which we describe the details of the calculation in the Appendix. It is computed by taking an equidistant trajectory in the input space and keeping track of the pairwise distances in the output space. We then divide the maximum $q_i$ (where there was the most discontinuity) by the 90-th percentile of $\{q_i\}$. We will provide more details of our evaluation in the final manuscript.
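A minimal sketch of this continuity score (illustrative only; the exact implementation details are in the Appendix, and we assume here a closed trajectory):

```python
import numpy as np

def continuity_score(encodings):
    """Continuity score of latent encodings of an equidistant, closed
    input trajectory. q_i are the consecutive pairwise distances in the
    latent space; the score is max(q_i) / (90th percentile of q_i).
    A score near 1 means no step is much larger than the rest, while a
    large score flags a visible discontinuity (a "jump")."""
    closed = np.vstack([encodings, encodings[:1]])  # close the loop
    q = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    return q.max() / np.percentile(q, 90)
```

A smooth circular embedding scores about 1, while an embedding with a tear in it scores far higher.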
**Additional Computational cost of Flows**:
This is a valid point and we will add it to our discussion of limitations in the paper. For the set of experiments in the paper, the additional computational cost was relatively small because in our experience, only a single flow layer was enough. Moreover, the flow is applied on the latent space which is very low dimensional therefore does not require a lot of extra computation. As an example, the training run time for a VAE on the Airplane dataset was 1 hour and 17 minutes while for GF-VAE it was 1 hour and 31 minutes.
**$reg\text{-}y$ performance**:
This was somewhat of a surprise to us too. What occurred in practice when regularizing the $y$-space was that, if the mapping was already close to something homeomorphic, the regularization helped stabilize the convergence to a homeomorphic mapping. However, if the representation was stuck in a “figure 8” minimum, it made it even more difficult for the model to escape this local optimum, as most of the arcs of the “figure 8” had to stay close to the circle and had no room to move.
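For reference, a minimal sketch of a penalty of this form (our own illustrative code, assuming the reg-y baseline uses the $|\,\|y\| - 1\,|$ loss described in the paper):

```python
import numpy as np

def reg_y_loss(y):
    """| ||y|| - 1 |: penalize an unconstrained encoding y (e.g. in R^2)
    for straying from the unit circle. Added on top of the usual
    reconstruction objective in the reg-y baseline."""
    return abs(np.linalg.norm(y) - 1.0)
```

The penalty is zero exactly on the circle, which is why arcs of a "figure 8" that already hug the circle gain no slack to rearrange.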
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a detailed set of responses. In light of the additional runs for the Tori experiment, authors agreeing to move Beta-VAE to the main paper for direct comparison, and clarifications, I'm raising my score.
I'd suggest including a few words about the authors' surprise regarding `reg-y` performance to aid readers and inspire future explorations. | Summary: This paper theoretically and empirically characterizes obstructions to training homeomorphic encoders with geometric latent spaces, such as local optima due to singularities or incorrect degree or winding numbers.
Strengths: Originality: The paper is original in its approach to addressing topological obstructions in machine learning models. While previous work has explored the use of geometric inductive biases to improve interpretability and generalization, this paper is one of the first to systematically investigate the topological challenges of encoding to a specific geometric structure from the training perspective and propose a novel solution using adapted normalizing flows.
Quality: The paper is of fair quality, in terms of its theoretical rigor and its empirical evaluation. The authors provide a detailed analysis of the topological obstructions that can arise in encoders with geometric latent spaces, and their proposed GF-VAE model is well-motivated to address these challenges.
Clarity: The paper is fairly written and organized. The authors provide ample background and motivation for their work, and their theoretical and empirical analyses are both presented in an accessible manner. The use of figures and examples throughout the paper helps to illustrate the authors' ideas and make the paper more understandable.
Significance: The authors' characterization of topological obstructions and their proposed GF-VAE model have the potential to improve the interpretability, generalization, and robustness of topological machine learning models. The paper is likely to inspire further research in this area and has the potential to lead to significant improvements in the performance and reliability of machine learning models.
Weaknesses: While the paper is fairly written with novel contributions to the training of geometric deep learning, there are a few areas where it could be improved:
1. While the paper provides a clear and accessible presentation of the authors' ideas, some of the mathematical concepts could be more clearly explained. For example, the motivation for the homeomorphic encoder and its applications in generative models should be discussed. The concrete geometries considered in this paper are restricted to $S^1$, which is a commutative group. As the title refers to general topological obstructions, the paper could benefit from covering the more general Lie group case.
2. The design of GF-VAE is to apply a flow model as a plug-in prior for the VAE; however, the combination of VAE and flow models is not new. Therefore, there are likely other approaches and techniques relevant to GF-VAE that are not discussed in detail.
3. The empirical evaluation of the proposed GF-VAE model is also limited to just two domains, and it is unclear how well the model would perform on other types of data. It would be helpful for the authors to provide additional experiments on a wider range of datasets to demonstrate the generalizability of their approach.
4. The paper could benefit from a more detailed discussion of the limitations of the proposed approach.
Typos: stand-> standard in line 227
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why did the authors select the figure-eight and degree cases to represent topological obstructions? Are there any principles for classifying topological obstructions?
2. What is the precise definition of the projection from $Y$ to $Z$ in formula (1) ?
3. Figure 2 doesn't contain the deconvolutional decoder $f^*$?
4. How is the flow $r$ realized (in line 227) for general lie groups?
5. The last paragraph of page 5 seems to provide a simpler solution to avoid topological obstructions than the GF-VAE?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper could benefit from a more detailed discussion of the limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your kind review. We are delighted that you recognized our theoretical and empirical contributions as significant and novel! We will respond to your questions and comments below:
**Motivation for learning a homeomorphic representation**:
Thank you for your comment. As stated in our introduction, we believe a reasonable notion of a good representation is a representation that reflects the underlying data structure, and we believe a sensible definition of this reflection is a representation that locally preserves the distances between the ground truth representations (and by extension, the data) which is described by the concept of homeomorphism. This should not only help us to perform better in downstream tasks, but also has the additional advantage of interpretability which is a desired property of any learned representation.
**Does our theory apply to more general Lie Groups?**
Thank you for your constructive comment. We believe there is a small misunderstanding. Concepts such as winding numbers and crossing numbers can both be defined for Lie groups of higher dimensions as well. We hope that our general response has addressed your concern.
**Combination of VAEs and Flows**:
Thank you for your constructive comment. Just to clarify, we apply the flow to the inference model $q_\phi(z|x)$ and not the “prior” as the reviewer wrote. Nevertheless, we certainly did not mean to claim that we are the first paper to combine VAEs and flows, and we will clarify this in the final manuscript. There are two main differences between our work and previous work that combined VAEs and flows: (1) Previous work on Flow-VAEs only considered Euclidean latent spaces. To the best of our knowledge, we are the first work that employs a geometric latent space in this setting, mainly because defining flows on geometric spaces is a challenging task. (2) The motivation for applying flows to $q_\phi(z|x)$ in prior work is to tighten the variational gap and therefore obtain a higher $\log p_\theta(x)$, while our motivation is to learn a homeomorphic mapping. Does this address your concern?
**Limited Scope of experiments**:
We hope that our general response has addressed your concern. If the reviewer had a specific domain in mind, we would be happy to try.
**Limitations**:
Thank you for your constructive comment. We will expand on the limitations of our work mentioned in the conclusion.
**Other types of topological obstructions**:
That is a great question! Yes, it would be possible to characterize different types of topological obstructions using tools from algebraic topology, namely homology and homotopy theory. Computing such topological invariants would give a classification of possible obstructions to optimization, although many may be unlikely to occur in practice. The main reason we considered the figure-8 shape and the winding number mismatch is that, empirically, the combination of these two was the most common obstruction we faced in practice (Figure 5).
**Definition of $\pi: \mathcal{Y} \rightarrow \mathcal{Z}$**:
In the case of SO(2), $\pi(y)$ is just a projection on the circle: $\pi(y) := y / ||y||$. Thank you for your comment. We will clarify this in the final manuscript.
**Decoder in Figure 2**:
Our intention with this figure was to depict our encoder design as well as the various types of potential representations we could learn from this design. As the decoder $f^*$ is just an ordinary neural network, we did not include it in that figure due to space constraints. We are happy to include it in Figure 2 if the reviewer feels it helps with clarity.
**Normalizing flows for general Lie groups**:
This is a great question! As mentioned in the related work section, defining normalizing flows on geometric spaces is a field in itself. One general way of defining a flow on a Lie group is to define the flow on the Lie algebra and use the exponential map to map it to the Lie group, which has been defined and discussed for most Lie groups. However, more care needs to be taken if we want to avoid a discontinuous pdf, as discussed in [1]. We refer the reviewer to [1,2] for more details on this topic.
**Last paragraph on page 5**:
If the reviewer is referring to the “directly decoding from $y \in \mathcal{Y}$ but push embeddings to the unit circle using the loss $| ||y|| − 1|$” we believe there has been a misunderstanding. As pointed out in the same paragraph and Figure 8 in the Appendix, optimizing this objective could result in the wrong winding number.
**[References]**
[1] Danilo Jimenez Rezende, George Papamakarios, Sébastien Racaniere, Michael Albergo, Gurtej Kan-war, Phiala Shanahan, and Kyle Cranmer. Normalizing flows on tori and spheres. In the International Conference on Machine Learning, pages 8083–8092. PMLR, 2020.
[2] Luca Falorsi, Pim de Haan, Tim R Davidson, and Patrick Forré. Reparameterizing distributions on lie groups. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3244–3253. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Response from the reviewer
Comment: I'm generally satisfied with the author's response. I will keep my 'marginal accept' score. | Summary: This paper explores the challenges of encoding data into geometric spaces and proposes a solution using Group-Flow Variational Autoencoders (GF-VAEs). The authors discuss how incorporating geometric inductive biases can improve interpretability and generalization but also present obstacles due to topological constraints. They identify two types of local optima that can arise: singularities and incorrect degree or winding number. To overcome these challenges, the authors propose GF-VAEs, which utilize normalizing flows to define multimodal distributions on geometric spaces. The paper characterizes topological defects in encoders, introduces evaluation criteria based on winding number, crossing number, and continuity, and demonstrates that GF-VAEs can escape local optima and achieve a more reliable convergence to a homeomorphic mapping. The main contributions of the paper include characterizing topological defects, proposing GF-VAEs as a solution, and empirically validating their effectiveness.
Strengths: 1. *Theoretical and Empirical Analysis*: The paper combines theoretical analysis with empirical evaluations to provide a comprehensive understanding of the challenges and solutions related to encoding data into geometric spaces. This approach strengthens the validity of the proposed methods and their practical implications.
2. *Identification of Obstructions*: The paper effectively identifies and characterizes topological defects that can occur in encoders mapping to geometric structures. By recognizing the specific challenges such as singularities and incorrect degree or winding number, the authors provide a clear understanding of the obstacles that need to be addressed.
3. *Proposal of GF-VAEs*: The introduction of Group-Flow Variational Autoencoders (GF-VAEs) as a solution to the identified obstructions is a significant contribution. The paper explains how GF-VAEs leverage normalizing flows to model complex multimodal distributions on Riemannian manifolds. This proposal offers a practical approach to circumvent local optima and achieve more reliable convergence.
Weaknesses: 1. *Idealized Assumptions*: The paper acknowledges that the theoretical analysis is limited by the idealized assumptions necessary to analyze the method using topological tools. These assumptions may not exactly match the real-world scenarios encountered in practice. This limitation undermines the direct applicability of the theoretical findings to real-world problems and raises questions about the generalizability of the proposed solutions.
2. *Limited Metrics for Higher Dimensions*: The metrics defined in the paper, such as winding number and crossing number, are primarily designed for lower-dimensional manifolds. It is mentioned that these metrics become harder to define and compute for higher-dimensional manifolds. This limitation restricts the applicability of the evaluation criteria to higher-dimensional geometric spaces, potentially limiting the scope of the proposed approach.
3. *Restricted Scope of Experiments*: While the paper presents empirical evaluations on two domains, it does not cover a wide range of datasets or scenarios. The limited scope of the experiments may not fully capture the diversity of real-world applications and datasets, leaving open questions about the performance and generalizability of the proposed GF-VAEs in different contexts.
4. *Lack of Comparative Analysis*: The paper lacks a comprehensive comparative analysis of the proposed GF-VAEs with existing methods. While the empirical evaluations demonstrate the effectiveness of GF-VAEs in escaping local optima and achieving better convergence, a thorough comparison with alternative approaches would provide a clearer understanding of the strengths and weaknesses of the proposed method in relation to existing state-of-the-art techniques.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the Section 'Weaknesses'.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors presented in the paper the most important limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your kind review. We are delighted that you recognized our contributions as significant! We will respond to your questions and comments below:
**Assumptions in Propositions 3.1 and 3.2**:
The reviewer raises a valid point about the idealistic nature of our assumptions, as we acknowledge in our paper. However, undertaking theory work in a fully realistic setting is indeed challenging, particularly considering that, to the best of our knowledge, our work represents the first attempt at a theoretical analysis of topological obstructions. While we recognize the limitations of our assumptions, we would like to emphasize that these idealized conditions serve as a foundational step towards understanding the fundamental constraints and possibilities in the context of topological obstructions. Our intention is to establish a theoretical framework that lays the groundwork for future research in this promising area. Moreover, it is essential to note that we adopt a perspective that interprets gradient descent as a discretization of a continuous path. Under this viewpoint, our theory highlights that even if continuous optimization were feasible, obtaining a homeomorphic mapping would not be possible: the presence of topological obstructions fundamentally means we have to break the continuity of the optimization path. This is an intriguing and novel result that showcases the inherent challenges in learning such mappings, and it holds significance irrespective of the idealized assumptions. As the field progresses, we anticipate that future works will build upon our theoretical framework to consider more realistic scenarios and explore techniques to address practical challenges.
**Limited Metrics for Higher Dimensions**:
We believe there is a small misunderstanding. Concepts such as winding numbers and crossing numbers can be defined for Lie groups of higher dimensions as well; it is simply a matter of discretizing the latent space, though they become more challenging to compute in higher dimensions. We will make sure to clarify this in the final manuscript. Please see our general response for more details.
**Restricted Scope of Experiments**:
Thank you for your constructive comment. We hope that our general response has addressed your concern. If the reviewer had a specific domain in mind, we would be happy to try it.
**Lack of Comparative Analysis**:
Thank you for your constructive comment. Due to the uniqueness of our setting, finding suitable prior works for a fair comparison presents challenges. Let us elaborate on the reasons behind our experimental choices and the considerations we took into account while designing our approach. In our work, we deliberately refrain from imposing constraints on the neural networks, such as equivariance, to maintain generality. This choice, as well as the fully unsupervised nature of our setting, aligns with our long-term goal of eventually moving towards scenarios where we do not assume prior knowledge of the correct Lie group. Thus, as a foundational step, we opt to utilize an ordinary encoder and solely impose constraints on the latent space itself. It is important to acknowledge that if we were to possess knowledge of the correct structure in advance, employing an equivariant neural network would be the correct choice. However, as mentioned in our related work, existing approaches either heavily rely on equivariant neural networks or assume additional information about the data, such as the presence of group elements 'g' between pairs. These approaches, while useful in their respective contexts, deviate significantly from our fully unsupervised and unconstrained setting. Also, please note that the primary focus of our experiments was to investigate the hypothesis that multimodal distributions have a higher probability of learning a homeomorphic representation when compared to vanilla VAEs with a Lie group latent space. If the reviewer has any specific baseline that they would like us to compare against, we would be happy to hear it. | Summary: The paper addresses several topological obstructions that cannot be easily resolved during the optimization of VAEs. To solve this problem, the paper proposes training a special NF to escape the defect, which the authors call GroupFlow.
The paper then evaluated the proposed method on synthetic image datasets with different kinds of manifolds.
Strengths: The paper is clearly written and the mathematical formation of the problem is precise.
The analysis on the obstructions during optimization clearly shows the invariants which lead to defects.
The proposed method naturally follows the defects and seems interesting to me.
The evaluation results indicate the proposed method achieves the best homeomorphism.
Weaknesses: I am not an expert in topology, so please refer to other reviewers for comments on the theoretical analysis. However, it seems the authors only analyzed a very simple type of manifold with few degrees of freedom. It would be more interesting if there were results on more complex manifolds beyond rotation and colorization. There is a similar concern for the experiments.
From Table 1 it seems the $\beta$ parameter has large impact on the homeomorphism result. There should be more (intuitive or theoretical) discussion on the relationship between $\beta$ (disentanglement) and homeomorphism. It is also important to demonstrate results from the standard $\beta$-VAE.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does your theory apply to more complicated manifolds?
Why does a higher $\beta$ lead to better homeomorphism? How about results of the standard $\beta$-VAE?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper has adequate discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your kind review. We are delighted that you found our theory and proposed approach clear and interesting! We will respond to your questions and comments below:
**Applicability to other Lie groups**:
Thank you for your question! Yes, our theory does apply to other compact Lie groups as well. Please see our general response for more details.
**Role of β**:
The reviewer is absolutely correct that β indeed makes a difference. We show the results for the β-VAE in Table 3 in the Appendix. We believe the reason for its effectiveness is that, because the prior $p(z)$ here is uniform, regularizing the KL term corresponds to maximizing entropy, which encourages the encodings to cover all of the latent Lie group (a necessary condition for a homeomorphic mapping). As the results in Table 3 show, sometimes increasing β can be enough to learn a homeomorphic mapping (e.g., Teapots). However, as we can see in the other cases, we generally need both a GF-VAE and a high β to achieve a homeomorphic mapping (e.g., Tetrominoes & Airplanes). We will make sure to add the β-VAE results to the main paper as well.
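The entropy argument can be stated in one line; the following identity (a standard fact, sketched here for illustration rather than taken from the paper) holds for any encoder distribution when the prior is uniform over a compact latent group $G$:

$$\mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big)=\int_G q_\phi(z\mid x)\,\log\frac{q_\phi(z\mid x)}{1/\mathrm{vol}(G)}\,dz=-H\big(q_\phi(z\mid x)\big)+\log\mathrm{vol}(G),$$

so, up to the constant $\log\mathrm{vol}(G)$, minimizing the β-weighted KL term is exactly maximizing the entropy $H(q_\phi)$, which pushes the encodings to spread over the whole group.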
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thanks to the authors for addressing my questions. The answers seem convincing. I will keep my score because I am not expert enough to evaluate the correctness and impact of the theoretical (especially the topology) part of the paper. I also ask the AC to give higher weight to the comments from other reviewers who are more expert in this topic. | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable feedback provided by all the reviewers. Your constructive comments and efforts in evaluating our paper are highly appreciated. We are pleased to see that the general consensus is that our theoretical analysis of topological obstructions, along with the proposed GF-VAE, are significant and novel contributions to the field. There are some shared concerns among reviewers regarding the application of our theory and method to other domains and topological spaces, which we will address below before addressing individual comments:
**Applicability to other Lie groups and topological obstructions**:
All reviewers inquired about the applicability of our theoretical analysis beyond the specific case of $\mathrm{SO}(2)$ and the “figure-8” shape. The wording in the conclusion might have made it seem that the mathematical concepts we employ are limited to $\mathrm{SO}(2)$, but this is not the case. To clarify, the concepts of winding number and crossing number can be extended to other Lie groups and topological spaces of higher dimensions. In higher dimensions, the winding number is known as the degree of a continuous mapping and can be computed over any compact manifold; thus Proposition 3.2 may be readily generalized. The crossing number can be generalized in different ways, including as a measure of the size of the self-intersection. For instance, we can consider a 2D sheet intersecting itself in a manner akin to a twisted ribbon. Additionally, in the case of the figure-8 example, we note that other shapes with an incorrect crossing number, such as a trefoil or quatrefoil, can also exist as possible encodings and would be covered by Proposition 3.1. Computing these metrics for other topological spaces primarily involves discretization of the latent space, though we recognize that the computational challenges increase in higher dimensions. We apologize for any confusion caused by our phrasing, and we will rectify this in the final manuscript to make it clear that the defined metrics apply to various Lie groups and topological spaces.
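For the $\mathrm{SO}(2)$ case, the winding number discussed above can be computed directly from a discretized latent path. The sketch below is our own illustration of that computation (not the paper's implementation): it sums shortest-arc angle differences around a closed loop of latent angles.

```python
import numpy as np

def winding_number(angles):
    """Winding number of a closed path on the circle S^1.

    `angles` are latent angles (radians) of consecutive samples along a
    closed loop; the winding number counts how many net times the
    encoding wraps around the latent circle (the "degree" in 1D).
    """
    angles = np.asarray(angles, dtype=float)
    # Close the loop, then wrap each successive difference into
    # [-pi, pi) so the path follows the shortest arc between samples.
    diffs = np.diff(np.concatenate([angles, angles[:1]]))
    wrapped = (diffs + np.pi) % (2 * np.pi) - np.pi
    return int(round(wrapped.sum() / (2 * np.pi)))
```

A homeomorphic encoding of a circle gives winding number ±1, a double cover gives 2, and a collapsed (constant) encoding gives 0; only the ±1 case is invertible.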
**Limited Scope of Experiments**:
Reviewers mNP4, 1s1k, and WSwE pointed out that we only consider circles and tori in our experiments. In the field of equivariance/disentanglement, and generally of extracting the right Lie group from data, $\mathrm{SO}(2)$ and $\mathrm{SO}(2) \times \mathrm{SO}(2)$ are two of the most common Lie groups. We could consider adding translation to the features as well (e.g., $(\mathbb{R}^2, +) \times \mathrm{SO}(2)$), but translation can be modeled with a standard Euclidean latent space and therefore does not lead to the topological obstructions discussed in the paper. The only common Lie group we did not consider in our experiments is $\mathrm{SO}(3)$, which we will try our best to add to the camera-ready. However, we would like to point out that, besides the homeomorphic-VAE, our work would be the only VAE-based work that manages to learn a homeomorphic mapping from images to $\mathrm{SO}(3)$ in a fully unsupervised manner without using an equivariant neural network. Moreover, even the homeomorphic-VAE requires two additional regularizers as well as a hand-tuned β-scheduler to have a single successful run. While we understand the reviewers' desire to see broader experiments, we feel that, given the novelty of our theoretical and methodological contributions, the focus on circles and tori is sufficient for this paper. However, we assure the reviewers that we will explore the extension of our approach to other Lie groups in future work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Decorate3D: Text-Driven High-Quality Texture Generation for Mesh Decoration in the Wild | Accept (poster) | Summary: The authors present Decorate3D, a technique for text-driven texturing of a 3D mesh given a NeRF representation of a scene. To this end, the authors introduce a two-stage texturing scheme. First, the NeRF is decomposed into a 3D mesh and a view-dependent texture map. Second, given the reconstructed diffuse UV texture map, the authors edit the mesh using a modified score-distillation objective that considers the structure, or depth, of the input. Finally, to mitigate some jittering artifacts of the resulting edited texture map, the authors propose a few-view resampling technique. The authors compare their texturing scheme to existing techniques and provide many visual examples to show the 3D consistency of the textured meshes. Additional ablation studies are provided to validate the core design choices of Decorate3D.
Strengths: - The visual results achieved by Decorate3D are impressive and appear to surpass existing texturing methods. Additional quantitative evaluations across numerous objects are used to further validate the effectiveness of the technique.
- I am not familiar with existing works that operate over a real 3D scene and NeRF model. For example, to the best of my understanding, most works assume that a 3D mesh is provided. Here, the authors operate in a real-world setting, which adds an additional challenge that is overcome quite nicely.
- Although the overall system is quite complex, the different components are presented quite nicely and can be understood after careful reading. The intuitions provided by the authors to motivate the different components help in understanding the design of Decorate3D.
- Finally, many ablation studies are provided to validate the different components of Decorate3D. Although I would have liked to see more visual results, I believe this can easily be added to the revision.
Weaknesses: **General Points:**
- The visual results of TEXTure raise some concern about whether the method was run correctly by the authors. From my experience with the official code base, the results should be of much higher quality. In the TEXTure paper itself, the authors show an Ironman texture of a mesh, and the results look far better than those presented in this paper. Moreover, in DreamAvatar [Cao et al. 2023] the authors also compare to TEXTure and achieve much better results for TEXTure. These visual results also seem to contradict the quantitative results, which placed TEXTure quite close to Decorate3D. I want to give the authors the benefit of the doubt here, but could the authors please clarify and verify how the results for TEXTure were obtained?
- The method cannot edit the geometry, which is needed in real-world applications. Specifically, the authors do not explore the robustness of the method to the quality of the input mesh. For example, does the method still work nicely if the mesh contains defects such as holes or very few faces? Moreover, the authors assume that there is some semantic relation between the prompts and the geometry in order to get reasonable results.
- This is discussed by the authors as a limitation.
**Ablation Studies:**
- After the decomposition stage, is the resulting diffuse texture map consistent across all views? From my understanding, already at this point, the texturing should be 3D-consistent. However, I could not find results obtained after the decomposition stage (e.g., reconstruction) to verify this.
- It is difficult to assess the contribution of the structure-aware SDS from a single visual example. This also holds for the other ablations performed by the authors (e.g., the FVR training). Additional visual results, and ideally, more quantitative evaluations (e.g., as done in Table 1) would greatly assist in truly evaluating the contribution of each component.
- I had a difficult time understanding the contribution of the few-view resampling training. If I understood correctly, Figure 8 is designed to show the improvements obtained using the FVR training. Could the authors provide some additional examples that illustrate this improvement? Could the super-resolution model be applied directly to the previous result? And if we do so, would this also help with the jittering effect? That is, I am wondering whether the improved results are from the FVR or from the super-resolution model. Based on the results provided, it appears that the FVR does provide some minor improvements, but additional examples and an ablation study on applying the super-resolution model directly to the previous step would be helpful to highlight this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Regarding the FVR stage, the authors chose to sample 8 views, if I understand correctly. Couldn’t this miss texturing areas of objects with complex areas? Does the FVR assist mainly in simpler geometries? A discussion on where to use the FVR would be beneficial since I assume that the FVR can assist differently for different geometries.
- For the 3D-consistent texture editing, wouldn’t editing the UV texture still maintain the 3D consistency? What is the intuition behind editing one of the rendered images?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We kindly remind you to check our supplementary material, which provides video results. We believe the video results will help address your concerns.
***
* **Q1:** Concern on results of TEXTure. Could the authors please clarify and verify how the results for TEXTure were obtained?
**A1:** We use the officially released code provided by the authors of TEXTure. To make sure we have set everything up correctly, we validated the results by employing the same human mesh model shown in their paper. Further details can be found in the accompanying one-page PDF document (see Figure C).
We also ran the demo on Hugging Face, which produces similar results. The texture looks decent in the front view, but the artifacts are quite obvious in the back view. This is caused by error accumulation in their progressive texture-updating strategy, which starts from the front view and proceeds to the back. We found the same artifacts when using our experimental mesh models.
Our mesh models are even more challenging, because
* **(1)** it is captured from the real world, the mesh is not perfectly clean, and
* **(2)** the coordinates of the reconstructed object cannot be perfectly aligned with their settings of front/back view.
The supplementary material includes an anonymous project page, where we provide the human object file used in our experiments. If necessary, the reviewer could verify the TEXTure's results with our mesh file. In the anonymous page, you can click the 'code' button and the file path in the anonymous code link is 'docs/samples/qualitative/knightarmor/texture/mesh.obj'.
***
* **Q2:** Does the method still work nicely if the mesh contains defects such as holes or a few faces?
**A2:** Since the test cases of our mesh model are reconstructed from real-world captured images, they are not guaranteed to be watertight and there are some holes around the borders (see Figure C).
Since we use a UV parameterization to represent the texture, the generated texture quality is not affected by the number of faces. For example, the mesh of the Monitor object only has a few faces (1,216), while the generated texture is still decent.
Please refer to Figure C in the one-page pdf document and the supplementary material.
***
* **Q3:** After the decomposition stage, is the resulting diffuse texture map consistent across all views?
**A3:** Yes, the diffuse texture is consistent across views. Please refer to the one-page pdf document and the demo video (from 3:00 to 3:20) in the supplementary material.
***
* **Q4:** Additional visual results, and ideally, more quantitative evaluations (e.g., as done in Table 1) would greatly assist in truly evaluating the contribution of each component.
**A4:** Please refer to Table A and Figure A in the one-page pdf document. The supplementary material provides the video results.
***
* **Q5:** Could the authors provide some additional examples that illustrate the improvement of using FVR?
**A5:** In the one-page pdf document, we visualize the jittering problem on the left side of Figure B. The figure shows the error maps of neighboring views, where the views are aligned to a reference view by using the rendered depth. Please also refer to the supplementary video (starting from 1:50 to 2:04) to compare the results before and after FVR training.
***
* **Q6:** Could the super-resolution model be applied directly to the previous result to address the jittering effect?
**A6:** FVR is designed to solve the jittering problem and super-resolution is applied directly on the UV Texture to enhance the resolution of the global texture. If we neglect the FVR while directly applying super-resolution on the rendered view after the Neural Renderer, i.e. $\text{SR}(V(\mathcal{R}(\psi,\mathcal{M},\mathcal{P}_i)))$ , the jittering effects will still exist.
***
* **Q7:** In the FVR stage, could choosing 8 views miss texturing areas of objects with complex geometry?
**A7:** Yes, there could be some missing areas if they are not covered by the sampled views. For the objects in our experiments, we sample N=8 views (2 elevation angles chosen from \{$-20^\circ,20^\circ$\} and 4 azimuth angles uniformly spaced over [$0^\circ$, $360^\circ$)), which cover most of the mesh surface.
Technically, for more complex geometries, we can adopt a more general solution. We can infer a UV mask from the UV texture to indicate which areas are overlooked by the previous N views. This UV mask can be easily computed using the camera matrices of the previously sampled N views and the UV atlas. Then we can take one more step and sample M additional views to cover the areas overlooked by the previous N views. Finally, we train the $\text{MLP}_{\tilde{\phi}}$ with the N+M views using the FVR training.
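The fixed view layout in A7 (2 elevations × 4 azimuths on a sphere) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the camera radius and y-up convention are assumptions for the example.

```python
import itertools
import numpy as np

def sample_fvr_views(radius=2.0,
                     elevations_deg=(-20.0, 20.0),
                     n_azimuths=4):
    """Sample N = len(elevations_deg) * n_azimuths camera positions on a
    sphere of the given radius around the object (N = 8 by default,
    matching the 2 elevations x 4 azimuths described above)."""
    azimuths_deg = np.arange(n_azimuths) * 360.0 / n_azimuths
    cams = []
    for el, az in itertools.product(elevations_deg, azimuths_deg):
        el_r, az_r = np.deg2rad(el), np.deg2rad(az)
        # y-up convention (an assumption for illustration): elevation is
        # measured from the horizontal plane, azimuth around the y axis.
        cams.append(radius * np.array([
            np.cos(el_r) * np.cos(az_r),
            np.sin(el_r),
            np.cos(el_r) * np.sin(az_r),
        ]))
    return np.stack(cams)
```

For the M extra views of the fallback, the same sampler can be rerun with azimuths/elevations chosen to face the regions flagged by the UV mask.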
***
* **Q8:** For the 3D-consistent texture editing, wouldn’t editing the UV texture still maintain the 3D consistency? What is the intuition behind editing one of the rendered images?
**A8:** Yes, directly editing the UV texture would surely maintain 3D consistency. However, editing the UV texture is quite inconvenient for non-professional users, particularly when the targeted region becomes fragmented across non-connected parts of the UV map. Hence, compared with editing the UV texture, editing rendered images is much easier for general users. 3D consistency is maintained by propagating the edits onto the UV map.
***
If the reviewer has any further concerns, we are most willing to discuss them.
---
Rebuttal 2:
Title: Followup Response to Reviewer 7jco
Comment: Dear Reviewer 7jco:
We sincerely thank you again for reviewing our paper and we appreciate your precious advice regarding additional ablations to support our proposed technique. We deeply hope that our response has properly helped in addressing your concerns, especially the comparison results of TEXTure.
If there are additional questions, please do not hesitate to let us know.
Best,
Paper 2236 Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the clarification regarding the comparison to TEXTure. The other clarifications made by the authors also helped ease some of my reservations, and I am therefore happy to raise my rating.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer 7jco
Comment: Dear Reviewer 7jco,
We are delighted to hear that your questions have been properly addressed. We'd like to thank you again for making our work stronger, and for your time and patience in reviewing our paper.
Best,
Paper 2236 Authors | Summary: This paper proposes Decorate3D, a method for re-texturing real-world 3D objects using text-conditioned image diffusion models. The proposed method can be split into a 3D reconstruction phase and a re-texturing phase. In the 3D reconstruction phase, a 3D mesh is reconstructed from a set of multi-view images via NeuS, and a view-independent texture map is distilled via differentiable rendering. In the re-texturing phase, a depth-conditioned latent diffusion model is combined with SDS to optimize the texture map. The texture map is re-rendered by passing it through the encoder-decoder of Stable Diffusion to remove neural artifacts, but this step introduces jittering artifacts. The jittering artifacts are then removed by optimizing an MLP through few-view resample training to reconcile view inconsistencies. Lastly, a super-resolution diffusion model is used to up-res the produced texture map.
Strengths: 1. The method is technically impressive, utilizing many interesting tricks to overcome problems associated with latent diffusion models, and thereby obtaining visually impressive experimental results.
2. The presentation of the method is precise and easy to follow. Despite the many moving parts, never once did I feel a need to backtrack due to inconsistent notations or frivolous math equations.
3. The experimental procedure is detailed and well documented. One can be confident of the reproducibility of the results (as long as the authors release the real-world data they've collected). The ablations are also fairly thorough, giving clear intuitions as to the effect of each component.
Weaknesses: 1. Novelty:
Most components utilized in this method are either well known in the literature or straightforward extensions of existing workflows, such as NeuS for mesh reconstruction, disentangling view-dependency via differentiable rendering of two MLPs, using depth conditioning for text-to-3D, and applying super-resolution diffusion models to UV textures. Though the problem of SDS with LDMs as observed in Figure 3 has not been formally studied in a research paper, knowledge of this problem is folklore within the community, and the proposed neural-renderer solution is rather simplistic. As such, it is not clear to me whether this paper contains enough technical novelty to be impactful in the text-to-3D field.
2. Fairness of comparisons:
The experiments would be more convincing if other SDS-based approaches (namely DreamFusion and Latent Paint) were also equipped with depth-conditioned diffusion backbones instead of the vanilla backbone. I think these are sufficiently simple modifications that they can still be considered the same method, but adapted for the re-texturing task. By the same token, none of the included baselines was designed for the task of re-texturing, and the use of an initial texture provides a significant performance boost, as illustrated in one of your ablations. It would be more fair if the view-independent MLP were provided as initialization for the baselines, as was done for Decorate3D.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What is the rendering model used in the SDS optimization step? Are any lighting/view-dependent effects incorporated into the rendering equation (as used in Magic3D and Fantasia3D), or is it purely UV-based retrieval from a neural texture? Do you have any ablations on this?
2. Could this method be adapted for texture synthesis by removing the initialization and using a textureless rendering of the geometry as input to the depth estimator (thereby preventing compounding artifacts between a bad initial texture and bad depth estimation)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I think the computational cost of this method is a limitation worth mentioning - it is by far the most expensive method to run versus its baselines, whose runtimes range from seconds to tens of minutes on a single GPU, whereas Decorate3D requires hours on 8 GPUs.
Potential negative societal impacts such as identity theft, deep fakes, and manufacturing of disinformation should be mentioned.
---------------------------------------------Post rebuttal:
I think the changes to the manuscript promised by the authors will significantly improve the delivery and message of the paper by firmly substantiating their claims regarding the effectiveness of proposed techniques with more ablations. Therefore I'm changing my suggestion to acceptance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * **Q1:** Novelty: Most components utilized in this method are either ..., such as NeuS for mesh reconstruction, disentangling view-dependency via differentiable rendering of two MLPs, using depth condition for text-to-3D, and applying super-resolution diffusion models on UV textures ... As such, it is not clear to me whether this paper contains enough technical novelty to be impactful in the text-to-3D field.
**A1:** **We extend our gratitude to the reviewer for acknowledging the technical excellence of our proposed Decorate3D which achieves state-of-the-art results.**
We would like to re-emphasize here the major difference between our Decorate3D and existing re-texturing techniques. Decorate3D is designed to handle noisy 3D objects derived from real-world images. In contrast, existing re-texturing approaches usually work with a given ideal mesh, commonly collected from synthetic models. The real-world setting of Decorate3D gives rise to extra challenges, which cannot be addressed by a simple extension of existing workflows. For example, as shown by the ablation study, the performance of Decorate3D drops considerably without the proposed structure-aware initialization, structure-aware SDS optimization, or FVR training, etc. To the best of the authors' knowledge, Decorate3D is the first method to provide a complete and effective text-driven texture generation solution, simultaneously achieving high-quality and geometry-aware texture generation, for 3D objects obtained from real-world captured images. Therefore, we have reason to believe that the work is novel and significant, as appreciated by the other reviewers, and we humbly request that the reviewer reconsider the novelty assessment.
***
* **Q2:** The experiments can be more convincing if other SDS-based approaches (namely dreamfusion and latent paint) are also equipped with depth-conditioned diffusion backbones instead of the vanilla backbone. ... It would be more fair if the view-independent MLP is provided as initialization for the baselines as was done for Decorate3D.
**A2:** First of all, we want to point out that incorporating a depth-conditioned backbone can mitigate the multi-face Janus problem, but the visually blurry results from DreamFusion and Latent Paint may not be attributed to the lack of depth guidance. DreamFusion is optimized over a NeRF and Latent Paint is optimized over the latent space. Neither works in the UV RGB texture space, which Decorate3D is the first to verify. This also prevents them from using the same initialization technique as Decorate3D to address the challenges of re-texturing real-world 3D objects. We also want to mention that the latest SOTA re-texturing work, i.e., TEXTure, adopted the depth-guidance strategy but still performed much worse than Decorate3D.
Second, our superior performance does not just come from depth-conditioned diffusion backbone and initialization. **These two proposed techniques are used to guarantee that the generated texture can match the geometry.** Our neural renderer and Few-view Resampling Training play an important role in our high-fidelity texture. **We are the first to address this problem in UV texture generation, which the competitors cannot achieve.**
***
* **Q3:** What is the rendering model used in the SDS optimization step?
**A3:** We used the pure diffuse rendering model for UV texture without any lighting effects incorporated. This paper primarily focuses on improving the accuracy and quality of texture generation. While we agree that the utility of lighting decomposition and material modeling are useful (which will be our future work), our generated texture quality and fidelity are not affected by those factors.
***
* **Q4:** Could we remove the initialization and using a textureless rendering as input to the depth estimator?
**A4:** The depth estimator is trained over natural images, therefore using textureless rendering might not contribute to better depth estimation. If we remove the initialization, we could use the rendered depth from z-buffer which has to be normalized to feed into the depth-conditioned diffusion model. Please refer to Figure A for the results of the ablation study involving the removal of initialization.
***
* **Q5:** Computational cost and potential negative societal impacts such as identity theft, deep fakes, and manufacturing of disinformation should be mentioned, and a discussion on societal impacts.
**A5:** Optimization-based generation naturally has a higher computational cost than feed-forward methods. This shortcoming is a common and unsolved problem of current text-driven 3D generation techniques. Reducing the computational cost remains future work. Regarding societal impacts, we will add a broader impact statement.
---
Rebuttal 2:
Title: Followup Response to Reviewer U4jz
Comment: Dear Reviewer U4jz:
We would like to thank you again for the invaluable time you dedicated to reviewing our paper. We hope that our response can address your concerns regarding the contributions of the paper. Please feel free to share with us if you have further questions.
Best,
Paper 2236 Authors
---
Rebuttal 3:
Title: Response to rebuttal
Comment: My apologies for the late response.
Regarding novelty, I agree that the task addressed by Decorate3D is novel in itself, and no prior works have demonstrated similarly complete pipelines for real-world capture -> reconstruction -> retexturing. However, as pointed out by another reviewer as well, the meat of the task is in the re-texturing phase, whereas the reconstruction phase is already solved reasonably well by prior works, and this paper directly uses existing methods (NeuS for reconstruction and xatlas for UV parameterization) for this phase.
Hence, while the paper indeed proposes an original pipeline with novel combination of well-known techniques, I stand by my point that the "technical novelty" of this paper, in the stricter sense, is limited. Nonetheless, I would evaluate the novelty of the paper as a net-neutral, and my bigger concern is with the question about the fairness of comparisons.
Regarding fairness of comparisons, concurrent (1) and recent but prior (2,3) works have demonstrated the feasibility of obtaining sharp textures with vanilla SDS losses, without the proposed structure-aware SDS or few view resample training.
Thus it is not clear whether structure aware SDS or few view resample training truly improve the quality of generated textures when compared to a well optimized and carefully implemented baseline adapted to this task.
I was hoping that the authors would implement a reasonably naive baseline where a depth-conditioned SDS loss (with depth from the render buffer) is used to optimize a surface-color MLP parameterized in UV space, thereby allowing the use of the same initialization from the original texture. Instead, the authors seem to consider this less a baseline and more an ablation of their method (e.g., Decorate3D minus the neural renderer and FVR). If one does consider this setup a baseline, then the relative improvement between Decorate3D and said baseline will appear smaller, and more thorough comparisons would be required than the 1-2 images currently presented in the ablation studies. I think this change would still be a net-positive for the impact of this paper, because a paper that presents a method which firmly improves upon realistic baselines will elucidate more knowledge than the paper in its current form.
I also concur with reviewer xCPF's comment that this paper stands to gain more clarity if the reconstruction phase is moved out of the methods sections to focus solely on the retexturing task.
An additional question I have after browsing through the project webpage is that a majority of the results have a "starry" pattern and a toy-ish palette (of sharp greens and purples), even in prompts where such textures are unexpected (such as the Newton sculpture, plush teddy bear doll, and the tables under the Zootopia figure). I suspected that there is interaction between the initial texture and depth estimation in the original review, and Figure A's w/o-init and w/o-depth results seem to agree with this suspicion. Do we know at which stage in the pipeline these patterns occur, and are there ways to mitigate them to produce more photorealistic textures?
Lastly, an important barely concurrent (~March 2023) work (4) also addressing the problem of editing real world 3D scenes should be discussed.
1. DreamHuman: Animatable 3D Avatars from Text (https://arxiv.org/abs/2306.09329)
2. threestudio's implementation of Magic3D (https://github.com/threestudio-project/threestudio)
3. Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation (https://arxiv.org/abs/2303.13873)
4. Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (https://arxiv.org/abs/2303.12789)
---
Rebuttal Comment 3.1:
Title: Followup Response To Reviewer U4jz
Comment: Thanks for your continued engagement and feedback to help us improve our paper. We are delighted to hear that you have reassessed our technical novelty and acknowledged our contribution to the overall pipeline of re-texturing the real captured objects.
***
- **Comparison with Magic3D and Fantasia3D:**
We uploaded extra comparisons with them using the same text prompt as ours. **The video results are available on our anonymous project homepage, where the link can be found in the supplementary pdf file.** Since there is extra uncertainty in the geometry generation for those two methods, for a fair comparison we adjusted those two approaches by initializing the DMTet [5] geometry representation with the same mesh model as in our paper; we lock the geometry as non-trainable and only activate the texture generation function. It is worth mentioning that, for Magic3D, we also incorporated the depth-conditioned SDS, and for Fantasia3D, we used the ControlNet guidance model with normal-map conditioning. As you will see in the video, the generated textures from Magic3D and Fantasia3D are sharper than those produced by DreamFusion but noisier, and are not as high-fidelity and clear as ours.
Our superior performance in generating clean and clear textures owes to the proposed simple yet effective Neural Renderer and Few-View Resampling Training.
[5] Shen et al., "Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis", NeurIPS 2021
***
- **Contributions in Neural Renderer and Few-View Resampling (FVR) Training**
- We have provided several ablations of the Neural Renderer and FVR in the rebuttal. Over-saturated and noisy textures are common issues in SDS-based methods, and to the best of our knowledge we are the first to propose an effective solution. Albeit simple and straightforward, the Neural Renderer and FVR are effective and elegant. The aforementioned comparison experiments with Magic3D and Fantasia3D, as well as our extra ablations in the rebuttal, all support that our Neural Renderer and FVR are essential for generating high-fidelity textures.
- We agree that having additional ablations on them could be a net-positive for the impact of our paper. As suggested by the reviewers, we will re-organize the structure of our paper and include more ablation studies and comparison results, as demonstrated in the rebuttal and in this response. To be more specific, since it is difficult to measure the results with and without the Neural Renderer using metrics, we will include more visual comparisons in addition to the ablations demonstrated in the rebuttal.
- To validate the effectiveness of FVR, we can measure the difference in pixel values between neighboring frames (which are warped onto the reference view) to any chosen reference image before and after FVR. We have included an error map (left side of Fig. B) in the one-page rebuttal document. Furthermore, in the Table below, we show the averaged quantitative results on all the cases we have tested.
| Rotation | w/o FVR | w/ FVR |
| :----: | :----: | :----: |
| $+1^\circ$ | 0.022 | $<10^{-5}$ |
| $+5^\circ$ | 0.038 | $<10^{-5}$ |
| $+10^\circ$ | 0.044 | $<10^{-5}$ |
For each 3D model, we randomly chose a reference image, and we got its neighboring frames through the Neural Renderer by sampling a camera with a rotation angle of 1 degree, 5 degrees, and 10 degrees. For evaluation, we aligned the neighboring frames to the reference view by a depth-guided warping and then computed the jittering errors.
As shown in the Table above, without FVR there is a pixel error between neighboring frames, which causes jittering artifacts in the 360° rendered video. In contrast, after FVR the pixel error decreases to almost 0. We neglected occluded pixels when calculating the pixel errors.
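As a toy illustration of the metric described above (all names are hypothetical; the depth-guided warp onto the reference view is assumed to have been applied already), the jitter error between a reference frame and a warped neighboring frame, ignoring occluded pixels, could be computed as:

```python
import numpy as np

def jitter_error(reference, warped_neighbor, occlusion_mask):
    """Mean absolute pixel error between a reference frame and a
    neighboring frame warped onto the reference view, skipping
    pixels marked as occluded (mask == True)."""
    valid = ~occlusion_mask
    diff = np.abs(reference.astype(np.float64) - warped_neighbor.astype(np.float64))
    return diff[valid].mean()

# toy example: frames agree everywhere except at one occluded pixel
ref = np.zeros((4, 4))
nbr = ref.copy()
nbr[0, 0] = 1.0                      # disagreement only at an occluded pixel
occ = np.zeros((4, 4), dtype=bool)
occ[0, 0] = True
print(jitter_error(ref, nbr, occ))   # 0.0 -- occluded pixel is ignored
```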
***
- **Structure-aware SDS loss:**
We believe that we all agree on the importance of our structure-aware SDS loss, which has been verified in our ablations with both quantitative evaluations and visual results. Instead of treating it as a baseline, the reasons we think it is worth mentioning in the paper are: 1) to the best of our knowledge, at the time we submitted our paper, there was no existing work addressing this; 2) it brings useful insights to the 3D generation community on the importance of incorporating structure constraints into score distillation sampling.
---
Reply to Comment 3.1.1:
Title: (Continued) Followup Response To Reviewer U4jz
Comment: - **"starry" patterns and a "toy-ish" palette (of sharp greens and purples):**
From our experimental experience, these texture artifacts arise when no specific text prompt corresponds to those surface regions. For example, in the Zootopia and Teddy Bear cases, there is no prompt describing the tables under Zootopia or the floor the Teddy Bear stands on. This leads to random, periodic patterns. To elaborate, you can check the results of the human models with the prompts "Ronald McDonald" and "Captain America stands on the desert": the former has the aforementioned meaningless ground patterns, while the latter has generated ground that matches the prompt "desert".
***
- Thanks to Reviewer U4jz for recommending several concurrent works.
1) Instruct-NeRF2NeRF [4] conducted the 3D editing in the NeRF representations. Instead of exploiting SDS losses, they progressively and explicitly replaced the multi-view images for NeRF reconstruction by adopting a pre-trained 2D image editing model called Instruct-Pix2Pix. From our understanding, the major drawback of their method is that they didn't have any pixel-wise correspondence constraints over different view directions when explicitly conducting image editing. Instead, we use a UV map to maintain pixel-wise consistency. Therefore, their rendered images from optimized NeRF are rather blurry compared with ours.
2) For Magic3D and Fantasia3D, we have provided extra comparisons on our homepage.
3) DreamHuman was released only recently, after the submission, and there is no publicly available codebase. Therefore, we cannot conduct any experimental comparisons. But judging from the models demonstrated in their paper and on their website, we believe we still achieve clearer texture details at much higher resolution.
***
Thanks again for your suggestions. We hope our response can dispel your concerns. Please do not hesitate to let us know if there are any other questions, and we are more than willing to help with them. | Summary: This paper proposes a method to edit the textures of neural fields (NeRFs) using score distillation sampling and also export a textured mesh model that can be used in traditional graphics pipelines (e.g., game engines, VFX). More specifically, the main contribution that I see from this work is the "Few-view Resampling Training", which can take an SDS-optimized RGB diffuse texture map (which is noisy due to the nature of SDS with LDMs) and refine it through LDM-driven re-rendering, taking advantage of both "LDM as a renderer" and having a real, 3D-consistent model. This is a general technique that could be widely applicable in a variety of different tasks.
In addition to this, they also create an entire pipeline to extract editable & high quality mesh representations from multi-view images (i.e. ones with good geometry, good UV parameterization, diffuse + specular separation, mostly based on existing tools) as well as another case study on SDS-driven texture generation.
Strengths: The main strength of this paper is in the "few view resample training", which takes as input a noisy 3D model, renders the 3D model, refines the rendered image using a "neural renderer" (which in this case is the VAE of an LDM), and back propagates the refinement back to the 3D model. This as far as I'm aware is an original idea that I have not seen at least in this specific context. The method also seems to be effective from the limited results I am able to see, and is something that can likely be incorporated into many different contexts.
The paper also proposes an end-to-end pipeline for doing NeRF -> mesh -> editing, and evaluates several different tricks to make this pipeline effective which they also evaluate in some limited ablation studies. This is significant as it provides a case study for implementation tricks in making this pipeline work (which in my experience tends to be a big part of SDS based pipelines).
The clarity of the paper could be improved, but is not something that significantly detriments the paper. This will be discussed further in the weaknesses.
Weaknesses: The biggest weakness of this paper is in its clarity. With some restructuring and refinement, however, I think that this paper could be very convincing.
First, the paper introduces the problem of 'mesh decoration'. The task really at hand is 'retexturing' or 'texture editing'. I'm not sure what the motivation for using the word 'decoration' is, but this is something that makes the paper unnecessarily confusing to grasp.
Second, the paper puts a lot of weight on discussing the end-to-end pipeline from reconstruction to retexturing. In reality, the meat of the contributions for this paper lies in the retexturing method (and specifically the few view refinement), and the rest feels like a distraction that is not core to the contribution. Making the writing and contribution statements more specific to the retexturing part of the pipeline, and treating the end-to-end pipeline as almost an 'implementation detail' would make the paper much more convincing. (i.e. the decomposition stage isn't really core to the method, since the same pipeline could be applicable for an existing 3D mesh).
Third, the paper does not sufficiently compare and contrast their method with concurrent works like TEXTure, which can be considered prior art given it appeared more than 2 months before the deadline. The paper does compare against them in the evaluations, which is great, but it could use more discussion on _why_ these prior arts produce bad artifacts in their generation, and what fundamental differences make this paper more advantageous.
Fourth, the results shown on the core contributions are rather light. It would be very illustrative to show (on multiple models) the rendered 3D models after the SDS optimization (with their artifacts & UV textures), after "neural rendering", and after refinement to really showcase the efficacy of the refinement method.
Fifth, the ablations are good but they could be on different figures with more examples. Some of the text space that is currently used for the description of 'prior things' like SDS and the decomposition stage could probably be placed in the supplemental or taken out to make more space for results. More results on especially the effects of higher viewpoints for the refinement step could be very useful.
Lastly, it would be useful potentially to write the approximate time for completion for each stage in Figure 2 to make the costs more clear.
These are not things that affect my rating, but nitpicks:
22: "Since the implicit representations of the NeRF model are tightly coupled" this should be explained in more detail. I believe the authors are referring to the fact that it's difficult to disentangle geometry and texture from a typical NeRF model, but this phrase does not communicate this at all.
41: "The reason is that the optimized UV texture stands for neural features in effect, which produce rendered neural images that necessitate a neural interpreter" I find the whole section starting with this sentence rather confusing and hard to interpret what it really means (without having read the rest of the paper at this point yet). Trying to describe this more precisely would help. For example, what does "stands for ... in effect" mean? What is a "neural interpreter"? I can make inferences but those are then inferences. At this point I'm also confused as a reader why the UV texture needs to be 'neural' or 'latent' at all.
113: "However, diversified 3D generation is often infeasible due to a lack of enough data pairs of text and 3D models" I assume this sentence is in reference to an auto encoder sort of framework, but this is not explained anywhere. Also, I'm not really sure what "diversified" means.
131: It's not made clear in this general section that the thing that is being passed to the encoder is a rendered image, not the texture map. Although this is clear from the figure, explicitly stating this here would be nice.
The writing could be improved stylistically in various places, like not starting sentences with "And".
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1. Why is the task at hand called 'decoration' as opposed to texturing or texture editing?
2. At least to my eyes, it does not seem like the higher N for FVR makes a big difference. What would happen if we choose more extreme numbers for these, like N=1,2,4,8 and N=1024,2048? Is there a way to design a loss to make it robust to both extremes?
3. Are there any cases where the multi-view consistency fails for FVR? How does the number of views affect this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations. I think they could make a broader statement on the societal impacts, as content creation tools like this are something that could potentially impact labor markets (and displace artists) and is something that is based on diffusion models trained on large amounts of data (which often also means they are unattributed / has no provenance back to artists).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful suggestions that are helpful in improving our paper. We will carefully check and refine the descriptions that may lead to ambiguity.
As suggested by the reviewer, we demonstrate more ablations of each component of the proposed pipeline and show the results in the one-page response pdf. We will also reorganize the paper structure, emphasize more on the re-texturing part, and add those ablations into the final version.
***
* **Q1:** The motivation for using the word 'decoration'.
**A1:** We concur with your assessment that the essence of 'mesh decoration' lies in retexturing. However, a slight difference exists between them. Decorate3D's ultimate objective is to offer a solution for controlling or editing the texture of real-world 3D objects where a given 3D mesh is unavailable.
The proposed method successfully accomplishes the entire pipeline: 'real-world images $\rightarrow$ user editing signal $\rightarrow$ textured mesh', whereas most retexturing methods presume the availability of an ideal 3D mesh. In the real-world context, Decorate3D encounters an additional challenge and effectively overcomes it.
Additionally, from the standpoint of non-specialized users, the term 'decoration' carries greater expressiveness. That said, we will consider revising the 'decoration phase' to 'retexturing phase' and changing 'Decorate3D' to 'TextureGen3D' or 'DecoTexure3D' to improve clarity.
***
* **Q2:** Analysis and comparison with the TEXTure paper.
**A2:** In addition to the comparison cases presented in our main paper, we have provided more results in the supplementary material file (from Figure 18 to Figure 20) and the demo video (please refer to the demo video from 0:30 to 1:30).
TEXTure proposes a progressive UV texture generation approach utilizing 10 selected views. Specifically, it initiates texture generation by first creating a front-view image through a pre-trained stable diffusion model. Subsequently, this front view is propagated onto the UV texture, and its neighboring view is generated via an inpainting model conditioned on the previously generated texture. Consequently, TEXTure excels at generating a high-quality texture for the object's front view. However, the final generated UV texture may exhibit seams across different sampled views, and the inconsistency could accumulate when progressively updating from the front view to the back view.
In contrast to the progressive updating strategy, our UV texture undergoes global optimization through a structure-aware SDS loss. As a result, the generated texture is seamless and consistent across both the front and back views. Additionally, our FVR training significantly enhances the quality of the generated UV texture.
***
* **Q3:** It would be very illustrative to show (on multiple models) the rendered 3D models after the SDS optimization (with their artifacts & UV textures), after "neural rendering", and after refinement to really showcase the efficacy of the refinement method.
**A3:** We added the results of extra quantitative and qualitative ablation studies in the one-page response pdf. In detail, Figure A demonstrates the ablations with and without initialization, depth conditioning, and the Neural Renderer. Figure B shows the difference map between neighboring frames before and after FVR to better visualize the jittering problem. We use ground-truth depth to compute the perspective warping from neighboring frames to the reference frame.
***
* **Q4:** It would be useful potentially to write the approximate time for completion for each stage in Figure 2 to make the costs more clear.
**A4:** The decomposition stage takes about 3 minutes. The neural texture optimization in the decoration phase takes about 2 hours for 100K iterations, and the FVR training takes about 5 minutes.
***
* **Q5:** The effects of setting the number of views for FVR training. Is there a way to design a loss to make it robust to both extremes?
**A5:** If setting a rather small number of views, like N=1,2,4, the UV texture may not be fully covered by these limited views. As illustrated on the right side of Figure B in the one-page general response PDF, we demonstrated an extreme case with N=2048. The resulting generated texture appears slightly blurry, but the difference is not easily discernible when compared with N=256 or 512. However, it should be noted that increasing the number of views will also entail additional computation costs.
In our experiments, we set N=8 empirically, which includes 2 elevation angles $\{-20^\circ, 20^\circ\}$ and 4 azimuth angles uniformly sampled within $[0^\circ, 360^\circ]$. This configuration adequately covers most of the mesh surface.
As suggested by the reviewer, we can consider an algorithm that maximizes the UV coverage while at the same time minimizing the overlapping pixels in the UV.
We can deduce a UV mask from the UV texture to identify the areas that were overlooked by the previous N views. This UV mask can be easily and accurately computed using the camera matrices of previously sampled N views and the UV atlas. Subsequently, we can take an additional step by sampling more views (M views) to cover the previously overlooked areas. Ultimately, we train the $\text{MLP}_{\tilde{\phi}}$ using the FVR training with N+M views.
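The coverage bookkeeping sketched above could look like the following (a simplified illustration with hypothetical names; in a real pipeline, the visible texels would come from rasterizing the UV atlas with each sampled view's camera matrix):

```python
import numpy as np

def update_coverage(coverage, visible_texels):
    """Mark the UV texels seen from one sampled view (visible_texels
    is any valid numpy index into the coverage grid)."""
    coverage[visible_texels] = True
    return coverage

def uncovered_fraction(coverage):
    """Fraction of the UV atlas not yet covered by any sampled view."""
    return 1.0 - coverage.mean()

H = W = 8
coverage = np.zeros((H, W), dtype=bool)
# pretend the first N views together see the left half of the UV atlas
update_coverage(coverage, (slice(None), slice(0, W // 2)))
print(uncovered_fraction(coverage))  # 0.5 -> sample M extra views for the rest
update_coverage(coverage, (slice(None), slice(W // 2, W)))
print(uncovered_fraction(coverage))  # 0.0
```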
***
* **Q6:** Is there any cases where the multi-view consistency fails for FVR? How does the number of views affect this?
**A6:** Before applying FVR, the rendered images from different views are consistent except for the jittering effects. We have not found any cases where multi-view consistency fails.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing additional details for the paper. Most of the concerns I had about the paper had to do with clarity, which the authors commented on, giving some suggestions on ways of making it more clear. I believe including a more verbose comparison to TEXTure and highlighting the global optimization step would be super important for the final manuscript. I will update my score to an accept.
---
Reply to Comment 1.1.1:
Title: Thank you for the update, Reviewer xCPF!
Comment: Thank you very much for the update! We are glad that we have addressed your concerns. We'd like to thank you again for making our work stronger and for your time and patience in reviewing our paper.
---
Rebuttal 2:
Title: Followup Response to Reviewer xCPF
Comment: Dear Reviewer xCPF:
We sincerely thank you again for your great efforts in reviewing our paper, especially for the insightful suggestions on re-organizing the paper structure and the additional ablation analysis of each component, which effectively strengthen our paper. Please do not hesitate to let us know if there are additional questions; we would be more than happy to help with them.
Best,
Paper 2236 Authors | Summary: This paper introduces Decorate3D, which enables text-guided 3D model editing by extracting and editing a learned UV texture. Specifically, Given multi-view images, Decorate3D first generates 3D mesh and UV textures based on NeuS. Then, it optimizes neural textures by the guidance of 3D structure (depth) and stable diffusion model. Finally, an RGB UV texture is optimized and upsampled to generate the final result. Experiments demonstrate the state-of-the-art performance of Decorate3D.
Strengths: - The idea of this paper is well-motivated and presented.
- A carefully designed pipeline (Nerf rendering, depth-aware texture optimization, few-shot texture re-optimization, and texture super-resolution) enables high-quality generation results. The 3D consistent decoration phase is novel and effective.
- Best performance is achieved compared to SOTA.
Weaknesses: - The proposed method uses few-view resample training to obtain a UV texture that reduces the jittering effects. I am wondering if this step can be done in the decomposition phase or the text-driven neural texture optimization.
- The super-resolution is applied to the UV texture. However, the UV texture is not a natural image, if the model is trained with natural images, will there be some domain gap leading to inferior results? Moreover, as super-resolution is only a post-processing step that enhances the results and is not one of the main contributions, I recommend spending fewer texts on this point.
- I would like to see some quantitative ablation studies like Table 1.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Some discussions about the above weaknesses and a quantitative ablation study would make the submission stronger.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Important limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for your valuable comment and positive feedback. We have demonstrated extra ablations, including visual results and quantitative ablation study results. Please refer to the one-page response pdf.
***
* **Q1:** The proposed method uses few-view resample training to obtain a UV texture that reduces the jittering effects. I am wondering if this step can be done in the decomposition phase or the text-driven neural texture optimization.
**A1:** The texture after decomposition is consistent, and there is no jittering problem. Therefore, it wouldn't make much difference if applying FVR in the decomposition phase. We use FVR after the text-driven neural texture optimization since the jittering problem is caused by the Neural Renderer when rendering the optimized texture across various camera views.
***
* **Q2:** The super-resolution is applied to the UV texture. However, the UV texture is not a natural image, if the model is trained with natural images, will there be some domain gap leading to inferior results?
**A2:** The rationale behind this successful application of UV texture upscaling is that 1) the upsampling operation has a spatial locality, concentrating on local textures such as edges. For this reason, SR models are usually trained on cropped image patches instead of the whole image to increase training efficiency. This spatial locality of SR allows the SR model trained on natural images to be directly applied to the UV texture. 2) We used the SR model that is fine-tuned over the pre-trained stable diffusion, which has a strong prior to boost its generalization ability.
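The patch-wise application implied by point 1) could be sketched as follows (a toy stand-in: the "SR model" here is just nearest-neighbour upsampling, whereas the actual pipeline uses a diffusion-based SR network; it also assumes the texture dimensions divide evenly into patches):

```python
import numpy as np

def upscale_patch(patch, scale=2):
    """Placeholder for the SR model: nearest-neighbour upsample.
    A real pipeline would invoke a diffusion-based SR network here."""
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_tiled(texture, patch=4, scale=2):
    """Apply the SR model patch-by-patch, exploiting its spatial
    locality, then stitch the upscaled patches back together."""
    h, w = texture.shape
    out = np.zeros((h * scale, w * scale), dtype=texture.dtype)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i * scale:(i + patch) * scale, j * scale:(j + patch) * scale] = \
                upscale_patch(texture[i:i + patch, j:j + patch], scale)
    return out

tex = np.arange(64, dtype=np.float32).reshape(8, 8)
up = upscale_tiled(tex)
print(up.shape)  # (16, 16)
```

Because the placeholder model is purely local, the tiled result matches a full-image upsample exactly; a learned SR model would only approximate this, which is why the spatial locality argument matters.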
***
As suggested by the reviewer, we will move some details of the super-resolution part into the Implementation Details.
---
Rebuttal 2:
Title: Followup Response to Reviewer MiDH
Comment: Dear Reviewer MiDH:
Thanks again for your valuable suggestions; we sincerely appreciate your acknowledgement of our work. We hope our clarifications on FVR and super-resolution resolve your concerns. Please feel free to reach out if you have any more questions.
Best,
Paper 2236 Authors
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for the rebuttal that addressed my concerns. The responses to other reviewers are also quite helpful for the audience to understand the merits of the paper. My rating remains.
---
Reply to Comment 2.1.1:
Title: Thank you, Reviewer MiDH!
Comment: We are glad to hear from you, and sincerely appreciate your acknowledgement of our work. Many thanks again for your insightful suggestions.
Rebuttal: Thanks to all the reviewers for their constructive suggestions. Extra visual results and quantitative evaluations are included in our submitted one-page pdf document, as suggested by reviewers:
* (1) Figure A shows more ablation study results on the proposed components of Decorate3D including initialization, depth guidance, and our neural renderer;
* (2) Table A provides the quantitative ablation study results;
* (3) Figure B visualizes the error map to demonstrate the jittering artifacts and provides more ablation study results using a bigger N for FVR training;
* (4) Figure C provides extra results of TEXTure, rendering results after our decomposition phase, and the real-world meshes.
**We would like to bring to your notice that video results have been included in our supplementary material**. It is suggested to watch them for more visually appealing results from Decorate3D.
Next, we will respond to each reviewer separately about their comments and questions.
Pdf: /pdf/3c01edc9a30376e115714b6661cb6053b5cf4dac.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation | Accept (poster) | Summary: This paper leveraged large language model (LLM) based and mutation-based strategies to generate high-quality test cases for the popular dataset HumanEval. The extended dataset HumanEval+ provides a better code generation benchmark for assessing the performance of LLMs such as ChatGPT and GPT4. Experimental results showed that compared to the HumanEval dataset, the extended one can detect more issues of the generated code, where the generation performance of 19 LLMs is reduced by 13.6-15.3% on average in terms of pass@k. To save the testing time, this paper also provided a refined set of test cases with fewer numbers but with the same code coverage, killed mutants, and sample killings.
Strengths: + Proposing a test case generation method for LLMs, which combines the LLM-based and mutation-based strategies.
+ Evaluating 19 LLMs on the extended datasets and showing the overestimation of the original dataset.
+ Providing a test-suite reduction for quick evaluation.
Weaknesses: - Some technical details, such as the quality of seed inputs, are unclear, which affect the soundness of the proposed approach.
- There are some unclear claims/overclaims.
- Lack of comparisons with other test case generation methods and related work.
The claimed contribution indicates “that prior popular code synthesis evaluation results do not accurately reflect the true performance of LLMs for code synthesis”. Can the extended dataset proposed by this paper reflect the “true performance”? Similarly, the paper said that the existing tests “often fall short in capturing all possible scenarios”, leading to “false confidence” in the results. Can your test cases “capturing all possible scenarios”? Furthermore, this paper claimed that the proposed benchmark can “precisely evaluate the functional correctness of LLM-generated code” by generating “interesting test inputs”. It is unclear to me how “precise” can the evaluation provide and what are the “interesting test inputs”. Please clarify them.
EvalPlus “first uses ChatGPT to generate a set of high-quality seed inputs for later mutation”. The quality of the seed inputs is not verified. Are these inputs correct? How do you ensure the quality of the generated seed inputs? and how many seed inputs did the ChatGPT generate? What are the prompts used?
The type-aware input mutation “randomly” selected some inputs from the generated seed pool as the inputs of mutants. This pipeline did not clearly describe how randomness affects the result of the input mutation and the quality of the evaluation.
The proposed approach “adopted a programming by contract philosophy by systematically adding code assertions as contracts (e.g., assert n > 0) to ensure the test inputs for the function are well-formed”. The context did not tell how to define the assertions specifically, how to ensure the correctness of the definitions, and which docstrings need clarification with these assertions. Will it incur a lot of manual effort?
For the evaluation, the paper did not compare the method with other baseline methods. The paper says that traditional automated test generation methods are inapplicable to generating semantically meaningful inputs. Why is that so? More descriptions are needed here. Otherwise, it is difficult to determine the advantages of the proposed method over the traditional methods.
Recently, there has also been related work on applying ChatGPT prompts to code generation, where the authors also discussed the quality/correctness of the generated code: C. Liu et al., Improving ChatGPT Prompt for Code Generation, https://arxiv.org/abs/2305.08360.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How is the quality of the seed inputs generated by ChatGPT? Are these inputs correct? How do you ensure the quality of the generated seed inputs? and how many seed inputs did the ChatGPT generate? What are the prompts used? Will the quality of seed inputs affect your approach significantly?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:How is the quality of the seed inputs generated by ChatGPT and will the quality of seed inputs affect your approach significantly?
The quality of seed inputs and the effect it has on mutation-based test generation and fuzzing has been well-studied in prior work [5, 6]. Similarly, the quality of seed inputs will also affect our approach significantly, and that is exactly why we are leveraging ChatGPT to generate high-quality seed inputs. We found that compared to the original HumanEval test, by adding the 30 high-quality seeds generated by ChatGPT, we can already improve coverage from 96.7% to 98.4%, and finally decrease the average pass@k by 11% (full EvalPlus is around 14%). This demonstrates the quality of the seed inputs generated by ChatGPT.
[5] Rebert, Alexandre, Sang Kil Cha, Thanassis Avgerinos, Jonathan Foote, David Warren, Gustavo Grieco, and David Brumley. "Optimizing seed selection for fuzzing."
[6] Pailoor, Shankara, Andrew Aday, and Suman Jana. "MoonShine: Optimizing OS fuzzer seed selection with trace distillation."
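For reference, the pass@k numbers quoted in this answer are typically computed with the standard unbiased estimator from the Codex paper (Chen et al.); below is a minimal sketch of that estimator, not EvalPlus's actual code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the probability that at least one of k samples,
    drawn without replacement from n total samples of which c are
    correct, passes all tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 200 samples of which 20 are correct, `pass_at_k(200, 20, 1)` is 0.1.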
> Q2:Are these inputs correct?
The raw inputs generated by ChatGPT are not guaranteed to be correct. However, as mentioned in Section 2.3, EvalPlus uses program input contracts to filter out ill-formed inputs and the ones satisfying contract conditions are bound to be correct.
> Q3:how many seed inputs did the ChatGPT generate?
Please kindly see Section 3 for the discussion. For each programming task, we generate 30 seed inputs using ChatGPT.
> Q4:What are the prompts used?
We apologize for not including the detailed prompt in our supplementary material (we did show an overview of the prompt in Figure 2), and will include the detailed prompt in the appendix of the next revision. Below is our detailed prompt used for seed generation.
```
Here is a function that we want to test:
[[FUNCTION]]
These are some example inputs used to test the function:
[[EXAMPLES]]
[[INSTRUCTION]]
```
Specifically, the meanings of these macros are:
[[FUNCTION]]: The ground-truth solution of the programming task
[[EXAMPLES]]: (At most) 5 inputs randomly selected from original HumanEval
[[INSTRUCTION]]: We randomly select one of the following instructions:
“Please generate complex inputs to test the function”
“Please generate corner case inputs to test the function”
“Please generate difficult inputs to test the function.”
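Assembling these pieces, here is a hedged sketch of how such a seed-generation prompt could be constructed from the macros above (the helper and template names are ours, not the authors'):

```python
import random

TEMPLATE = """Here is a function that we want to test:
{function}
These are some example inputs used to test the function:
{examples}
{instruction}"""

INSTRUCTIONS = [
    "Please generate complex inputs to test the function",
    "Please generate corner case inputs to test the function",
    "Please generate difficult inputs to test the function.",
]

def build_seed_prompt(ground_truth: str, example_inputs: list, rng=random) -> str:
    # [[EXAMPLES]]: at most 5 inputs sampled from the original tests.
    examples = rng.sample(example_inputs, min(5, len(example_inputs)))
    return TEMPLATE.format(
        function=ground_truth,                        # [[FUNCTION]]
        examples="\n".join(repr(e) for e in examples),
        instruction=rng.choice(INSTRUCTIONS),         # [[INSTRUCTION]]
    )
```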
> C1: Lack of comparisons with other test case generation methods and related work.
We first want to clarify that our main contribution is not our test generation techniques but rather our generated dataset and the accompanying rigorous evaluation study on recent popular LLMs, which demonstrates for the first time that prior popular benchmarks are insufficient to evaluate the functional correctness of LLM-generated programs. Regarding the comparison with prior test case generation methods, there has been a ton of work on automatically generating tests for programs [7, 8] (see Section 4 for more detail). These prior works mainly focused on either generating new inputs (generation-based) or mutating existing seeds (mutation-based) to create new test cases. Unfortunately, such traditional methods are mostly inapplicable to generating semantically meaningful inputs for arbitrary problems programmed in a dynamically-typed language. We address this by using ChatGPT to inspect the ground truth (i.e., white-box) to initialize interesting seeds, based on which type-aware mutation (i.e., black-box) scales the test inputs to a large amount. Of course, we fully agree with the reviewer that our work may inspire more test generation techniques for this important domain. We will also work towards adding more discussion and comparison with previous test case generation methods in the next revision.
[7] Fraser, Gordon, and Andrea Arcuri. "Evosuite: automatic test suite generation for object-oriented software."
[8] Serebryany, Kosta. "Continuous fuzzing with libfuzzer and addresssanitizer."
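As an illustration of the mutation-based family discussed above, a toy type-aware mutator might dispatch on the runtime type of a value (a hedged sketch only; EvalPlus's actual mutation operators differ):

```python
import random

def mutate(value, rng=random):
    """Return a mutated copy of `value`, dispatching on its runtime type."""
    if isinstance(value, bool):          # check bool before int
        return not value
    if isinstance(value, int):
        return value + rng.choice([-1, 1, rng.randint(-100, 100)])
    if isinstance(value, float):
        return value * rng.choice([0.5, 2.0]) + rng.uniform(-1, 1)
    if isinstance(value, str):
        if value and rng.random() < 0.5:
            i = rng.randrange(len(value))
            return value[:i] + value[i + 1:]          # delete a character
        return value + rng.choice("abcXYZ 0")         # append a character
    if isinstance(value, list):
        out = [mutate(v, rng) for v in value]
        if out and rng.random() < 0.3:
            out.append(rng.choice(out))               # grow the list
        return out
    if isinstance(value, dict):
        return {k: mutate(v, rng) for k, v in value.items()}
    return value  # unsupported types pass through unchanged
```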
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I will keep my current score.
---
Reply to Comment 1.1.1:
Comment: Thanks for taking the time to read our response thoroughly! We truly appreciate it and will address all minor comments raised by the reviewer.
---
Rebuttal 2:
Comment: Reviewer, please confirm that you read this rebuttal and adjusted your score and review if appropriate. | Summary: In this paper, the authors introduce EvalPlus, an evaluation framework crafted to assess the code generation of LLMs. Combining LLM and mutation-based techniques, EvalPlus diversifies the generated test inputs, thereby broadening the evaluation spectrum for LLM-produced code. Through comprehensive experimentation, the authors highlight EvalPlus's ability to expose shortcomings in previous LLM-based code evaluations. Additionally, the study introduces HumanEval+, an enriched dataset built upon the existing HumanEval, offering an abundance of test inputs and a more dependable ground truth for improved reliability.
Strengths: * This paper distinguishes itself through its innovative approach to test input generation. It extends beyond the conventional usage of ChatGPT for creating test inputs, by incorporating a test-mutation technique and a “distill” method. This results in a diverse yet non-redundant range of test inputs, illustrating the authors' commitment to crafting a robust and efficient evaluation framework.
* This paper is well-written and easy to read.
* The study stands out for its well-designed experiment, which encompasses a comprehensive selection of contemporary LLMs.
Weaknesses: * HumanEval [8], in comparison to real-world coding projects, presents a significantly simpler challenge. It's a benchmark that's even simpler than others, such as APPS [Hendrycks et al.]. Given this context, the effectiveness of EvalPlus when applied to more complex tasks remains uncertain. This indicates a valuable area for enhancement in this study—specifically, future research could delve into assessing how well EvalPlus performs in more complicated, real-world coding scenarios. Such explorations could contribute crucial insights to understanding the scalability and adaptability of EvalPlus.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Given the impressive results demonstrated by EvalPlus within the comparatively simpler context of the HumanEval benchmark, how do you envision expanding this work to more complex environments? Specifically, what strategies or modifications do you plan to implement to ensure EvalPlus is effective in assessing code generated for real-world projects, such as those found in open-source Python libraries? Could you elaborate on potential challenges you anticipate and how you intend to navigate them in this expansion?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * While this study presents a promising step forward with EvalPlus, a limitation lies in the evaluation scenario chosen for testing. Given that the proposed method is intended to be general, using the relatively simple HumanEval dataset for testing may not sufficiently demonstrate the framework's generalization capabilities. To conclusively assert its wide applicability and robustness, it would be beneficial for future research to include tests on more complex datasets or in more challenging real-world contexts, such as open-source Python libraries. This would provide a more comprehensive understanding of EvalPlus's potential in diverse, practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:how do you envision expanding this work to more complex environments? (strategies, modifications and challenges to ensure EvalPlus is effective in assessing code generated for real-world projects?)
To begin with, we want to re-emphasise that our main contribution is to show that the existing popular benchmarks for LLM-based code synthesis evaluation (e.g., HumanEval) contain insufficient tests that cannot faithfully evaluate the functional correctness for LLM generated code. As such, the reported performance of LLMs when evaluated on these benchmarks can be inaccurate or exaggerated (compared to their actual performance). For example, this work shows for the first time that almost all recent LLMs for code can be affected by such dataset issues. Furthermore, the paper also helps more precisely understand the strengths/limitations of existing LLMs via automated test generation/reduction, and can in turn help build more powerful LLMs in the near future.
Meanwhile, this is a great question that allows us to discuss potential future work! First, we plan to expand EvalPlus by taking in more knowledge and context from the entire project rather than a single function. For example, developer documentation can be directly used as input to help ChatGPT synthesize interesting inputs that invoke not only a single function but also multiple functions or API sequences. Second, one key challenge is that, unlike the benchmarks that we seek to improve in this paper, real-world projects do not have exact ground-truth solutions. To address this challenge, we can instead apply partial test oracles such as 1) crashes: discovering rare inputs that trigger crashes in developer code (e.g., segmentation faults), 2) differential testing: evaluating developer code on two different setups (e.g., CPU vs. GPU for deep learning programs) to discover bugs, and 3) LLM-based oracle generation: we can even leverage the generative and code-understanding power of LLMs themselves to generate the oracle. These partial oracles may not give us the same guarantees as using the reference ground-truth solution, but EvalPlus can still leverage these oracles to assist developers in more complex real-world systems.
---
Rebuttal 2:
Comment: Reviewer, please confirm that you read this rebuttal and adjusted your score and review if appropriate. | Summary: The authors propose an evaluation framework for validating the correctness of large language model-generated code. In particular, the framework first utilizes ChatGPT to generate multiple seed inputs, which are then expanded into a large set of inputs through type-aware mutation. In addition, to ensure evaluation efficiency, the input set can be reduced by employing code coverage, mutant killings, and LLM sample killings.
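The test-suite reduction described in this summary — preserving code coverage, killed mutants, and killed LLM samples — is an instance of the set-cover problem; below is a hedged greedy sketch (not necessarily the paper's exact algorithm):

```python
def greedy_reduce(tests, requirements):
    """Greedy set-cover reduction: keep the fewest tests whose combined
    'requirements' (covered branches, killed mutants, killed LLM samples)
    equal those of the full suite. `requirements[t]` is a set per test."""
    remaining = set().union(*(requirements[t] for t in tests))
    kept = []
    while remaining:
        # Pick the test satisfying the most still-unmet requirements.
        best = max(tests, key=lambda t: len(requirements[t] & remaining))
        gained = requirements[best] & remaining
        if not gained:
            break  # leftover requirements unreachable (shouldn't happen)
        kept.append(best)
        remaining -= gained
    return kept
```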
Strengths: The proposed solution addresses the problem of insufficient testing for LLM-generated code. The expanded dataset outperforms the original one, while the reduced dataset achieves results similar to the expanded one.
The authors evaluate the proposed solution via comparative analysis of 19 large language models, analyze the pass rate distribution of the employed dataset HUMANEVAL, and identify several errors in the ground-truth solutions.
The related work section discusses previous research on large language models for code, the coding benchmark, and automated test generation. The authors discuss the research gap between this study and the related work.
Weaknesses: The authors do not employ a more capable large language model in the proposed solution, and there is a lack of evaluation of efficiency.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I wonder if GPT-4 can generate more interesting seed inputs than ChatGPT. Explicitly specifying the employed component in EvalPlus may affect the generality of the solution; I suggest introducing this as implementation or setup information.
- The authors can elaborate on how to filter out the invalid inputs in Section 2.1.
- The evaluation does not examine the efficiency of type-aware input mutation, and test-suite reduction.
- Table 4: “Killed samples” has ambiguity.
- Section 2: “EvalPlus obtains a augmented benchmark …” -> “an”.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not explicitly identify the potential societal impact or limitations of this work, but they claim that their future work includes extending testing to more code benchmarks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:I wonder if GPT-4 can generate more interesting seed inputs than ChatGPT.
Please note that the EvalPlus input generation component does not rely only on ChatGPT but is general and can be implemented using any other foundation model like GPT-4 or LaMDA. We have also thought about using GPT-4 to generate additional interesting seed inputs. However, due to the high monetary cost of invoking the GPT-4 API (running it on the 164 different HumanEval tasks would cost more than 600 dollars, compared to only 60 dollars using ChatGPT), we chose to use only ChatGPT for our work. In fact, ChatGPT has already shown impressive performance in generating high-quality seed inputs. Thanks again for the suggestion; we will use GPT-4 for seed generation in the future (which, as the reviewer suggests, will very likely lead to even better performance).
> Q2:Elaborate on how to filter out the invalid inputs in Section 2.1.
This has been elaborated in Section 2.3. In short, we adopt a programming-by-contract philosophy by systematically adding code assertions as contracts to ensure the test inputs for the function are well-formed. Invalid inputs during generation will trigger assertion failures and thus be detected and discarded by EvalPlus.
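As a hedged illustration of this programming-by-contract filtering (the function names and the example contract are ours, not the authors'), ill-formed inputs are rejected when they trip an assertion in the contract-annotated ground truth:

```python
def filter_valid_inputs(candidates, contracted_fn):
    """Keep only inputs accepted by the contract-annotated ground truth.
    Inputs that trip a contract (assertion) or otherwise crash are dropped."""
    valid = []
    for args in candidates:
        try:
            contracted_fn(*args)
        except Exception:  # contract violations raise AssertionError
            continue
        valid.append(args)
    return valid

# Hypothetical contracted ground truth: the assertion encodes the contract.
def sqrt_floor(x):
    assert isinstance(x, int) and x >= 0, "input must be a non-negative int"
    return int(x ** 0.5)
```

For instance, `filter_valid_inputs([(4,), (-1,), ("a",), (9,)], sqrt_floor)` keeps only `(4,)` and `(9,)`.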
> Q3:The evaluation does not examine the efficiency of type-aware input mutation, and test-suite reduction.
Thanks for the suggestion and we will emphasize the efficiency results in our next revision.
Previously we did not rigorously study the efficiency of input generation and reduction because these are one-time efforts for each dataset (can be reused to evaluate any current or future code models) and can be done reasonably fast. Specifically, EvalPlus is able to generate on average 1100+ valid tests for each of the 164 problems in half an hour (i.e., ~3000 valid tests/hour). The test reduction for all 164 problems in total can also finish in one hour.
> Q4: Table 4: “Killed samples” has ambiguity.
We apologize for the ambiguous term. “Killed samples” refers to the number of incorrect LLM-generated samples that are “killed” (i.e., fail at least one test) by a test-suite. Please see Section 2.2 for more detail. We will make the term less ambiguous.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their rebuttal. I have no further comment at this time.
---
Reply to Comment 1.1.1:
Comment: Huge thanks for taking the time to read our response thoroughly. We truly appreciate it! | Summary: This paper introduces a rigorous evaluation framework EvalPlus for program synthesis driven by automated test generation. For automated test generation, this work proposes to combine both LLM-generated tests and mutation-based input generation to largely augment the text inputs for an existing code benchmark of HumanEval. EvalPlus is evaluated with a diverse set of code LLMs and results show that the new augmented benchmark can identify a significant amount of previously undetected wrong code generated by LLMs, leading to a more accurate evaluation. It further introduces a mini-version benchmark reserving almost the same test effectiveness.
Strengths: * The paper studies how to evaluate code LLMs more accurately, which has become an important topic in the era of LLMs as coding is one of the key capabilities of LLMs to showcase. The proposed EvalPlus of combining both LLM-generated tests and mutation-based tests is well motivated and technically sound.
* The paper is well written and easy to follow. The evaluation is very comprehensive (considering most of state-of-the-art LLMs) and provides convincing results to authenticate the effectiveness of the proposed EvalPlus.
Weaknesses: * The biggest weakness of this paper is the lack of discussion of and comparison to other related work, namely AlphaCode and CodeT [1], which share common techniques such as mutation-based input generation and unit tests generated by LLMs. In this regard, the novelty of EvalPlus is relatively limited. Note that in AlphaCode, they have already explored similar mutation-based techniques to largely augment the test cases for program synthesis. Besides, they also explored training an LLM-based test input generation model to generate test cases in a clustering process. For CodeT, they have explored using LLMs to generate test cases, though these generated test cases are used for a different purpose of reranking the generated programs. The authors should discuss and compare with these works to better justify their novelties.
[1] CODET: CODE GENERATION WITH GENERATED TESTS
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Can you explain more on the difference between EvalPlus and AlphaCode/CodeT?
* I noticed that this work compares with the recently released StarCoder and CodeGen2, but not another SoTA code LLM of CodeT5+ which was released at the same time with better results on HumanEval. Any reasons for not including it?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I did not find any discussion on limitations from the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:Can you explain more on the difference between EvalPlus and AlphaCode/CodeT?
Great question. First, we want to clarify that our main contribution is not the input generation technique but rather our generated dataset (HumanEval+) and accompanying rigorous study on recent popular LLMs. In short, we demonstrate for the first time that: 1) existing popular datasets (i.e. HumanEval) used to evaluate almost all recent LLMs for code are not reliable, containing not only insufficient tests but also incorrect groundtruths, 2) such deficiencies in existing datasets can drastically affect the evaluation results of almost all recent LLMs for code, with around 15% decrease in performance when using HumanEval+ compared to base HumanEval. EvalPlus can help researchers more precisely understand the strengths and limitations of existing models, and can in turn help build more powerful models in the near future.
In terms of the comparison with AlphaCode and CodeT, while indeed they both use LLM-based input generation, the key difference is that both AlphaCode and CodeT only use the LLM generated inputs for clustering in order to determine the ranking of samples for evaluation. In fact, none of them leveraged the LLM-generated inputs to filter out incorrect solutions, since they do not have an exact oracle to ensure the generated tests are correct. In contrast, in EvalPlus, we directly leveraged the generated test inputs to perform differential testing across the groundtruth implementation and LLM-generated solutions to detect any potential incorrect solutions. Furthermore, compared with the simple mutation operators used in AlphaCode to augment the evaluation tests (different from their LLM-based test input generation that is only for clustering), our type-aware mutation approach also includes an additional mutation operator to collect data fragments, from previously generated inputs, and reuse them during later mutation. This allows our mutation strategy to generate more structurally aware inputs that are likely to pass the structural constraints of certain tasks (e.g., need to be palindrome or open/close bracket strings).
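The differential-testing step described above — executing both the ground truth and an LLM-generated solution on each generated input and flagging disagreements — can be sketched as follows (a toy sketch, not the actual EvalPlus implementation):

```python
def differential_test(ground_truth, candidate, inputs):
    """Return the inputs on which `candidate` disagrees with `ground_truth`,
    either by producing a different output or by crashing."""
    failures = []
    for args in inputs:
        expected = ground_truth(*args)
        try:
            actual = candidate(*args)
        except Exception:
            failures.append(args)  # crash counts as a failure
            continue
        if actual != expected:
            failures.append(args)
    return failures
```

For example, testing `lambda x: x` against `abs` on inputs `(1,)`, `(-2,)`, `(0,)` flags only `(-2,)`.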
Thanks again for these great references and we will surely discuss more such related work in our revision.
> Q2:I noticed that this work compares with the recently released StarCoder and CodeGen2, but not another SoTA code LLM of CodeT5+ which was released at the same time with better results on HumanEval. Any reasons for not including it?
Additional models are not included due to the lack of time and space. After submission we have been improving EvalPlus and adding more outstanding models, including CodeT5+. Specifically, here are the pass@1 (i.e., greedy decoding) results for the 2B, 6B and 16B CodeT5+ models. Overall, our main findings still hold on CodeT5+. Note that using 512 as the maximum new-token size (i.e., the default setting of our paper) can cause OOM on an A6000-50G for certain problems. Consequently, we dynamically reduce the new-token size by a factor of 0.8 until the OOM is resolved.
| Model | Variant | pass@1 |
| :--- | :----: | ---: |
| CodeT5+ 2B | base | 25.0 |
| | +extra | 22.0 (-12%) |
| CodeT5+ 6B | base | 29.3 |
| | +extra | 23.8 (-19%) |
| CodeT5+ 16B | base | 31.7 |
| | +extra | 26.2 (-17%) |
---
Rebuttal 2:
Title: Official comment by reviewer SGiT
Comment: Thanks for the detailed response which sufficiently addresses my concerns. I will increase my rating to 6 for this work.
---
Rebuttal Comment 2.1:
Comment: Big thanks for taking the time to read our response thoroughly. We truly appreciate it! Should you have any new questions or concerns, please don’t hesitate to let us know. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments and suggestions to improve the paper! We address the main questions (labeled as Q) and concerns (labeled as C) in the response to individual reviewers below. Furthermore, we will also revise the paper accordingly to address all other minor suggestions and comments.
Please kindly let us know if there is any misunderstanding of the questions, and we are very happy to further communicate with all the reviewers during the reviewer-author discussion period (Aug 10-16). | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper describes an enhanced test dataset and test-driven evaluation of code generation by LLMs. The paper compares different LLMs and a stricter metrics using the enhanced evaluation dataset and shows that these LLMs are about 15% less correct than is reported based on earlier test datasets.
Strengths: LLMs appear to accomplish a number of programming tasks but often fail to get them completely right, and they fail in unexpected ways and places. Improving the evaluation metrics is critical to the use of LLMs in product and commercial contexts. This paper improves the test beds by synthetically creating additional tests through a framework that uses both LLMs and mutation-based techniques.
The paper demonstrates that about 13-15% of code generated by typical LLMs that would be qualified as passable according to previous metrics is disqualified by this method. Since LLMs are often used in NL->code contexts, this problem may arise because of insufficient tests or because of less specific NL descriptions of the code to be written. They increase the test suites by 81x and also create a reduced version that shrinks them 47x for fast evaluation.
The test suites are enhanced by augmenting the prompts with ground truth tests and generating higher quality seed data. From these seed inputs, type-aware mutations are created to enhance the test data sets.
Weaknesses: While the enhanced data set does call out more problems in the LLM-generated code, it is not clear if it is the best it can do. For instance, just by running a static analyser, a syntax checker, or some other engine, the issues in the generated code could be found and fixed. The authors should have at least compared one such approach. There is no comparison with other approaches that improve the performance of LLMs in context. One of the ways LLMs are often used for code generation is to generate good-enough code and then, through human interaction or other static tools, rerank, correct, or qualify the generated code.
The enhanced data set is definitely useful, but it is not clear how useful, in that it flags about 13-15% of the code. Some human evaluation or A/B test to qualitatively say how good this is would have been more insightful.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Have you tried any A/B experiment or other human evaluation to see how much more useful is this improved test set to the LLM context in a real application?
2. Have you thought about other LLM tasks which have similar challenges?
3. As LLMs are constantly improving likely the value of such an enhanced test set diminishes. Have you thought about studying that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations - LLMs are used in the context of human input, so it is not clear how useful this enhanced data is. It is also not clear if the enhanced test set increases coverage in critical dimensions.
LLMs are constantly improving, and in the absence of a good A/B test or application context it is hard to tell the usefulness of the system.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:Have you tried any A/B experiment or other human evaluation ... in a real application?
In this work, we focus on improving the evaluation of functional correctness, which can be deterministically, objectively and automatically measured through testing and verification techniques. In contrast, A/B testing or other human evaluation [1] can be useful for understanding the overall usefulness of NLP applications whose correctness is hard to evaluate systematically, but it incurs high manual effort. Please note that we did in fact manually examine the LLM-generated samples that had passed the original HumanEval tests but failed on our EvalPlus tests and found that such solutions can contain subtle but important logic errors (e.g., Figure 1). However, due to the sheer number of samples generated (>45,000), manually performing A/B testing or human evaluation on all samples would be infeasible.
The HumanEval+ benchmark produced by EvalPlus can better reflect the true performance of LLM code synthesis. Not only do we show the average performance drop is around 15%, but even widely used open-source (e.g., CodeGen) and proprietary state-of-the-art LLMs (e.g., ChatGPT/GPT-4) suffer from significant performance decreases (13% to 19%). As LLMs become more widely used for program synthesis, it is critical to ensure the functional correctness of LLM generated code, with the first step being developing a rigorous benchmark as we did in this work. Furthermore, the input generation proposed in EvalPlus can also be applied to test real-world projects and we hope EvalPlus can inspire more future work in helping developers more rigorously evaluate code for real-world projects.
[1] Zheng, Lianmin, et al. "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena."
> Q2:Have you thought about other LLM tasks which have similar challenges.
Thanks for this interesting question! Comprehensively evaluating LLMs is definitely a common challenge in many LLM tasks. In this work, we target program synthesis as it has been the holy grail of computer science since the 1950s, with many recent new LLMs including program synthesis as a key evaluation metric. We believe the general idea and approach of EvalPlus can also help improve datasets, such as MathQA and GSM8K in mathematical reasoning tasks.
More generally, it is an open research question to apply automated software testing to improve the evaluation of other LLM tasks. For example, consistency checks [2] can automatically evaluate the self-consistency [3] of LLMs, which is a form of “metamorphic testing”: leveraging multiple inputs and their metamorphic relationship as the test oracle. For instance, we can ask an LLM to evaluate two semantically equivalent chess positions to check if the resulting evaluations are also equivalent.
[2] Fluri, Lukas, et al. "Evaluating Superhuman Models with Consistency Checks."
[3] Wang, Xuezhi, et al. "Self-Consistency Improves Chain of Thought Reasoning in Language Models."
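As a toy illustration of such a metamorphic/consistency check (with a plain function standing in for the LLM; all names here are ours):

```python
def metamorphic_check(model, input_pairs):
    """For each pair of semantically equivalent inputs, flag the pairs on
    which the model's outputs diverge. `model` stands in for an LLM call."""
    return [(a, b) for a, b in input_pairs if model(a) != model(b)]

# Toy 'model': score a board; mirrored boards should score equally.
def score(board):
    return sum(board)  # deliberately symmetric toy evaluator

pairs = [([1, 2, 3], [3, 2, 1]),   # mirrored positions, equivalent
         ([0, 5], [5, 0])]
```

A consistent model (like `score`) yields no flagged pairs, while an order-sensitive one would fail both checks.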
> Q3:As LLMs are constantly improving likely the value of such an enhanced test set diminishes?
It is true that EvalPlus and any software testing technique is useless when we have perfect LLMs. However, please note that this is the ultimate goal for the community and still requires long-term efforts. In fact, our work also aims to contribute towards such an ambitious goal: EvalPlus helps to better evaluate all code models that are still imperfect. We believe it is extremely important to precisely understand the strengths and limitations of existing code models to make informed decisions in improving them, and hope this initial work can inspire more researchers to join this effort.
Meanwhile, please also note that while models are getting stronger (e.g., ChatGPT to GPT-4), the performance decrease (i.e., the value of such an enhanced test set) does not necessarily become smaller: according to our experiments, while the pass@1 value of ChatGPT drops by 15.5% when using HumanEval+, it drops by 16.0% for GPT-4.
> C1:just by running a static analyser, the issues in the code generated can be found and fixed.
Thanks for the suggestion. We agree that static code analyzers can detect compile-time (e.g., syntactical error) and simple runtime errors (e.g., wrong arguments). However, code with syntactical errors would already fail the original HumanEval tests, and thus simple syntax checkers cannot help detect any incorrect solutions missed by the original HumanEval. In addition, more advanced static analyzers can only handle very limited types of errors, and are well known to have high false positive rates. For example, Pyright, a state-of-the-art analyzer, can only detect 1 of the 20 incorrect solutions detected by HumanEval+ for GPT-4, and at the same time reported various false positives. As a result, almost all the popular code generation datasets use testing to validate the generated solutions, and EvalPlus also focuses on enhancing tests for such popular datasets.
> C2:no comparison of another approach that improves the performance of LLMs in context.
Great comment. Please kindly note that our main technique is to rigorously evaluate instead of directly improving LLMs. As a result, we focus on evaluating a large spectrum of LLMs. Given the large number of LLMs studied, we only focus on the default application scenario for each model. Meanwhile, since our main findings hold on all the studied LLMs, our results may very likely further generalize to more LLMs or even LLMs with more advanced in-context learning (e.g., with chain-of-thought or execution feedback). For example, even the recent prompting techniques using execution feedback still rely on the existing tests in the dataset, and may still produce incorrect solutions overfitting to the dataset. Thanks again for the comment, and we can surely add more experiments to further validate this argument.
---
Rebuttal 2:
Comment: Reviewer, please confirm that you read this rebuttal and adjusted your score and review if appropriate. | null | null | null | null | null | null |
Energy Discrepancies: A Score-Independent Loss for Energy-Based Models | Accept (poster) | Summary: Energy discrepancy (ED) is presented as a new loss for the training of EBMs.
ED interpolates between the losses of score matching and maximum-likelihood estimation.
The efficacy of ED on a latent-variable energy-based model is demonstrated to tackle the manifold hypothesis, an important challenge in the adoption of likelihood-based training.
Extensive numerical experiments are done to show ED's superiority over contrastive divergence and score matching.
Strengths: 1. Theoretical derivation is rigorous.
2. Numerical experiments are solid.
3. The paper is in general well written.
4. The proposed algorithm is easy to implement.
Weaknesses: No obvious weakness to me.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How important is the w-stabilization term for the numerical experiments? If w is small (close to 0), do the numerical experiment results still hold?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No negative social impact is seen.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive and helpful comments. Here is our answer regarding the w-stabilisation:
> How important is the w-stabilization term for the numerical experiments? If w is small (close to 0), do the numerical experiment results still hold?
>
**ANSWER:** We found that w-stabilisation is important to make Energy Discrepancy scalable. Initially, Energy Discrepancy was developed without this stabilisation and worked well on two-dimensional data when $M= 256$ contrastive samples were used. With the w-stabilisation, however, we were able to reduce the number of contrastive samples to $M=1$ in two-dimensional settings and learn high-dimensional distributions as in our image modelling experiments using $M=16$. A study on the effect of $w$ and $M$ can be found in Appendix D.5. in Figure 21.
In general, we found that our experimental results are robust to the choice of $w$ and the parameter requires no fine tuning. The choice becomes less consequential the larger $M$ is chosen. Large values for $w$ encourage smoother energy landscapes. Small values of $w$ may underestimate the variance of the distribution if $M$ is chosen too small, but typically stabilise the training sufficiently to produce good results. Since $w$ adds no computational complexity and corrects the approximation bias of the contrastive potential, we found that it is reasonable to choose $w=1$, which appears to work well in all settings we investigated.
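As a rough illustration only (not the authors' implementation), a stabilised sample-based loss of the kind described above might be sketched as follows; the toy quadratic energy, the Gaussian perturbation scale $t$, and all variable names are our own assumptions:

```python
import numpy as np

def energy_discrepancy_loss(energy, x, t=1.0, M=16, w=1.0, seed=None):
    """Sample approximation of a stabilised energy discrepancy loss (sketch):
    mean_i log( w/M + (1/M) sum_j exp(E(x_i) - E(x_i + sqrt(t) * noise_ij)) ).
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    # M Gaussian "contrastive samples" per data point.
    x_pert = x[:, None, :] + np.sqrt(t) * rng.standard_normal((n, M, d))
    diff = energy(x)[:, None] - energy(x_pert.reshape(-1, d)).reshape(n, M)
    return np.mean(np.log(w / M + np.mean(np.exp(diff), axis=1)))

# Toy quadratic energy E(x) = ||x||^2 / 2 on synthetic 2D data.
energy = lambda z: 0.5 * np.sum(z ** 2, axis=-1)
x = np.random.default_rng(0).standard_normal((128, 2))
loss = energy_discrepancy_loss(energy, x, t=0.5, M=8, w=1.0, seed=1)
```

Setting `w=0` recovers the unstabilised loss discussed above, which the authors report requires far more contrastive samples to remain stable.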
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. My concern is well addressed. I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: We are glad that we could address your concerns. Thank you for reviewing our work! | Summary: This paper proposes a new loss function, Energy Discrepancy, for training energy-based models without MCMC computation. The proposed loss function can be derived directly from the energy function without relying on MCMC samples.
Strengths: The proposed energy discrepancy can be computed directly from the energy function, alleviates the nearsightedness problem of score matching, and approximates maximum-likelihood estimation. The experiments show that energy discrepancy could achieve better performance than score matching and contrastive divergence in image generation.
Weaknesses: I think this is a very good paper that proposes a novel optimization method for energy-based models. I only have several questions about the experiment part. Please refer to the question section.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The model in [1] utilizes an MCMC-based maximum likelihood algorithm to train a normalizing flow model for image generation. Could the proposed energy discrepancy be used in this model?
2. According to Table 1, the performance of CD-LEBM and the proposed ED-LEBM is quite close. The gap is much smaller than in the density estimation experiments.
3. Are there any comparisons with GAN-based models?
4. For comparison with score-based methods, [2] shows a much better performance in FID on Cifar10. Could the author give some discussion about these results as the author has claimed that the proposed energy discrepancy has advantages of score matching?
Reference:
[1] Xie, Jianwen, et al. "A Tale of Two Latent Flows: Learning Latent Space Normalizing Flow with Short-run Langevin Flow for Approximate Inference." arXiv preprint arXiv:2301.09300 (2023).
[2] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." arXiv preprint arXiv:2011.13456 (2020).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Please refer to the question section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and helpful suggestions! Our answers are listed below.
> The model in [1] utilizes an MCMC-based maximum likelihood algorithm to train a normalizing flow model for image generation. Could the proposed energy discrepancy be used in this model?
>
**ANSWER:** Thank you for the interesting reference. The model in [1] works analogously to the latent EBM [3] used for image data in our work, with the EBM prior replaced by a normalising flow prior. Since the normalising flow has a tractable likelihood, maximum likelihood training is possible without EBM training approaches like CD, SM, and Energy Discrepancy. However, [1] requires MCMC sampling from the posterior $p(\mathbf z \vert \mathbf x)$ to generate latent representations of data, and this step is necessary in our work as well.
It is an exciting avenue for future research to use a normalising flow as the base distribution of an EBM [4], which would enable training of EBMs with Energy Discrepancy without the need of a latent variable model or MCMC.
> According to Table 1, the performance of CD-LEBM and the proposed ED-LEBM is quite close. The gap is much smaller than in the density estimation experiments.
>
**ANSWER:** This is true, and we can currently only speculate as to why this is the case. Most likely, the reason lies in the latent variable model used in both methods:
In the density estimation experiments, we observed that CD learns energy landscapes that are biased towards being too smooth. In the image modelling experiments, the latent representations obtained by sampling from $p(\mathbf z\vert \mathbf x)$ may be fairly noisy, which corresponds to smooth energy landscapes that can be learned equally well by CD and ED. Flat energy landscapes are also suggested by the interpolation results, which show that recognisable images can be produced from midpoints between two latent representations.
Additionally, it is noteworthy that the results are compared in different metrics. In our density estimation results we are comparing the MSE between the estimated density and ground truth density. For the image modeling results, we are comparing the FID of images, which not only measures the quality of the learned EBM prior but also that of the decoder. We do not have access to the true prior to assess the accuracy of the learned EBM and to what extent the decoder compensates for poorly learned energies.
We would like to point out that Energy Discrepancy requires significantly fewer computations per iteration to produce comparable results. Furthermore, we hope that we can improve our experimental results in the future by finding alternatives to the latent EBM model.
> Are there any comparison with GAN-based models?
>
**ANSWER:** Neither VAEs nor EBMs can currently outperform GAN-based models when it comes to image generation. The appeal of EBMs lies in the fact that they don’t just generate data but also encode data into a probability distribution. An exciting research direction, however, is Generalised EBMs [5], which combine EBMs with GANs and outperform both methods. Exploring new perturbation strategies to train Generalised EBMs with Energy Discrepancy holds promise for future work.
> For comparison with score-based methods, [2] shows a much better performance in FID on Cifar10. Could the author give some discussion about these results as the author has claimed that the proposed energy discrepancy has advantages of score matching?
>
**ANSWER:** The results in [2] are possible because the score-based method in [2] trains a diffusion model, i.e. the work uses an annealing scheme to learn the scores of data at various noise scales. This achieves two things: Firstly, the annealing scheme makes score matching aware of other modes in the distribution and the scores in low-density areas are estimated accurately. Secondly, the approach learns a sampler as part of the model, which enables a high sample quality. However, score-based diffusion models do not learn an energy-based model or a density. Energy-based models, on the other hand, learn the data generating density but are hard to sample from, resulting in lower FID scores compared to diffusion models.
Training energy-based models with score matching is challenging because the estimated scores are only accurate in the vicinity of modes of the data distribution. [6] elaborates on these difficulties of score matching and how they influenced the development of score-based diffusion models ([2, 6]). Energy Discrepancy alleviates some of these difficulties by implicitly incorporating data information at different noise scales, since the Gaussian Energy Discrepancy is equivalent to a multi-noise-scale score matching loss.
[1] Xie, Jianwen, et al. "A Tale of Two Latent Flows: Learning Latent Space Normalizing Flow with Short-run Langevin Flow for Approximate Inference." arXiv preprint arXiv:2301.09300 (2023).
[2] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." arXiv preprint arXiv:2011.13456 (2020).
[3] Pang, Bo, et al. "Learning Latent Space Energy-Based Prior Model." NeurIPS, 2020.
[4] Nijkamp, Erik, et al. "MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC." ICLR, 2022.
[5] Arbel, Michael, Liang Zhou, and Arthur Gretton. "Generalized Energy Based Models." ICLR, 2021.
[6] Song, Yang, and Stefano Ermon. "Generative Modeling by Estimating Gradients of the Data Distribution." NeurIPS, 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. My question is well-answered and I will keep my rating. | Summary: This paper proposes a new loss function for training energy-based models, called energy discrepancy (ED). ED does not rely on score functions and MCMC samples. Instead, it is defined as the difference between the energy function of data and some conditional samples. They prove that optimizing this objective function yields the appropriate energy function. Then, they build connections between ED, score matching and contrastive divergence. Finally, this paper focuses on training the latent energy-based prior model, which is a VAE with EBM prior.
For experiments, the authors first showcase density estimation on several 2D pdfs. ED outperforms SM and CD on pdf estimation and sampling. Then, they train latent EBMs on SVHN, CIFAR-10 and CelebA. ED also outperforms SM and CD on image reconstruction/generation and out-of-distribution detection.
Strengths: 1. Training EBMs is an important problem in generative modeling. This paper proposes a new training criterion, which does not rely on possibly ill-conditioned score functions and time-consuming MCMC samples.
2. The authors also build connections between the proposed method and score-matching estimates or MLEs.
Weaknesses: 1. The authors do not train EBMs directly on the data space. Instead, they train a VAE with an EBM prior. Since CD or SM can lead to competitive EBMs, I am wondering if there are any empirical results on training EBMs directly on the data space using the proposed loss?
2. Since the proposed loss function is similar to the conditional NCE, I think one promising baseline could be training the same model using CNCE loss.
3. The authors said ED provides fast training (in Line 338). Is there any comparison of the running times?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the Equation between Line 187 and 188, is there $E_\theta(x^i)$ missing behind $w / M$?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive feedback! Here are our responses to the mentioned weaknesses and questions:
**Weaknesses:**
> The authors do not train EBMs directly on the data space. Instead, they train a VAE with an EBM prior. Since CD or SM can lead to competitive EBMs, I am wondering if there are any empirical results on training EBMs directly on the data space using the proposed loss?
>
**ANSWER:** In our density estimation experiments, the energy-based models are trained on data space, and Energy Discrepancy outperforms CD or SM in this domain. Furthermore, we have new empirical results for discrete EBMs trained directly on pixel space for various binary image data sets (See Figure 3 in the attached PDF).
The VAE approach is used only in our image modelling experiments. The so-called manifold hypothesis suggests that most image data lives on a lower-dimensional subset of the higher-dimensional pixel space. As a consequence, the energy function outside of the manifold is undefined, i.e. for a noisy data point we have $p_\mathrm{data}(\tilde{\mathbf x}^i)=0$, which implies for the learned energy that $E_\theta(\tilde{\mathbf x}^i) \approx -\log p_\mathrm{data}(\tilde{\mathbf x}^i) \to \infty$. For this reason, we use a latent variable model to construct representations of the data that are supported in the whole latent space. The probability of a noisy latent representation is still positive, $p_z(\tilde{\mathbf{z}}^i)>0$, and we can learn a well-defined energy.
Contrastive divergence is not as sensitive to this problem because it only pulls up low-energy states, and the value of the energy function stays bounded. However, this comes at the cost of biased energy landscapes as shown in Figure 3, which pose difficulties in likelihood-based tasks like anomaly detection as shown in Table 2. Furthermore, sampling from the learned energies remains challenging.
Score-based generative models normally require the use of multiple noise scales [1, 2] to alleviate issues with its nearsightedness and scores being undefined outside the data manifold. Training EBMs with score matching directly on data space without annealing faces challenges similar to the ones outlined for Energy Discrepancy with the additional difficulty of SMs nearsightedness.
> Since the proposed loss function is similar to the conditional NCE, I think one promising baseline could be training the same model using CNCE loss.
>
**ANSWER:** This is a great suggestion. In fact, in the special case of Gaussian perturbations with one contrastive sample ($M = 1$ in ED, $\kappa = 1$ in CNCE [3]) and $w = 1$, the two approaches coincide.
To further explore CNCE as a baseline, we conducted experiments on the 25-gaussian dataset for different choices of $M$ ($\kappa$ in CNCE) and different choices for $t$ ($0.5\epsilon^2$ in CNCE). Our results (see Figure 6 in the attached pdf) show that the performance of CNCE is comparable to Energy Discrepancy. On image data, we expect that CNCE performs similarly to Energy Discrepancy and that many of the techniques we use for ED, like the latent EBM, would also improve the experimental results for CNCE. We believe that both CNCE and our proposed loss function are promising approaches for fast and accurate training of EBMs.
Conceptually, Energy Discrepancy has the appeal of being closely connected to the original maximum likelihood estimation problem, which has the lowest asymptotic variance amongst all unbiased estimators. For this reason, Energy Discrepancy may be more data efficient if sufficient compute is available. Additionally, CNCE requires tractable likelihoods of the perturbation, while Energy Discrepancy only requires being able to simulate the perturbation. Thus, ED is applicable to non-Gaussian perturbations for which the likelihood is intractable.
> The authors said ED provides fast training (in Line 338). Is there any comparison of the running times?
>
**ANSWER:** The statement is based on complexity analysis. Energy Discrepancy requires $\mathcal O(M)$ evaluations of the energy net, while SM and CD methods require at least $\mathcal O(d)$ evaluations of the energy net to compute the gradient, where $d$ denotes the dimension.
For the density estimation experiment, one step of the optimiser for ED, CD, and SM takes 0.006s, 0.018s, and 0.021s, respectively. Additionally, ED converges more quickly as shown in Figure 4 in our paper. The resulting run time can be compared in Figure 5 in the attached pdf. In the image modelling experiments, ED reduces the computational cost significantly as $M=16$ is used throughout all image modelling experiments irrespective of the dimension.
**Questions:**
> In the Equation between Line 187 and 188, is there $E_\theta(x^i)$ missing behind $w/M$?
>
**ANSWER:** In this equation, the energy term behind the $w/M$ cancels with the non-contrastive energy term, i.e. for each $\mathbf x^i$ we write the energy as $E_\theta(\mathbf x^i) = \log \exp(E_\theta(\mathbf x^i))$ and use $\log(a)+\log(b) = \log(ab)$ to take the logarithm outside of all other operations. This gives for each sample in the batch:
$E_\theta(\mathbf{x}^i) + \log\left(\frac{w}{M}\exp(-E_\theta(\mathbf{x}^i)) + \frac{1}{M}\sum_{j=1}^M \exp(-E_\theta(\tilde{\mathbf{x}}^{i, j}))\right) = \log\left(\frac{w}{M} + \frac{1}{M}\sum_{j=1}^M \exp(E_\theta(\mathbf{x}^i)-E_\theta(\tilde{\mathbf{x}}^{i, j}))\right)$
We will add the identity to the appendix and hope this makes things clearer.
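The identity can also be verified numerically; the energies below are arbitrary illustrative values, not taken from the paper:

```python
import math
import random

# Check: E(x) + log( (w/M) exp(-E(x)) + (1/M) sum_j exp(-E(x~_j)) )
#        == log( w/M + (1/M) sum_j exp(E(x) - E(x~_j)) )
random.seed(0)
w, M = 1.0, 4
E_x = random.uniform(-2.0, 2.0)                          # energy of the data point
E_pert = [random.uniform(-2.0, 2.0) for _ in range(M)]   # energies of perturbed samples

lhs = E_x + math.log(w / M * math.exp(-E_x)
                     + sum(math.exp(-e) for e in E_pert) / M)
rhs = math.log(w / M + sum(math.exp(E_x - e) for e in E_pert) / M)
assert abs(lhs - rhs) < 1e-12
```

Multiplying the argument of the logarithm by $\exp(E_\theta(\mathbf x^i))$ and pulling the factor inside is exactly the step the check exercises.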
[1] Song, Yang and Ermon, Stefano: Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2019
[2] Zengyi Li, Yubei Chen, Friedrich T. Sommer: Learning Energy-Based Models in High-Dimensional Spaces with Multi-scale Denoising Score Matching, 2019
[3] Ceylan, Ciwan, and Michael U. Gutmann. "Conditional noise-contrastive estimation of unnormalised models." *ICML*, 2018.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses, which address my issues.
I would like to raise my score to 7.
---
Reply to Comment 1.1.1:
Comment: We are delighted to hear that we could address your concerns and your decision to increase your score! Thank you for your review of our work. | Summary: The authors propose a new loss function for training energy-based models, which they dub "energy discrepancy." The aim is to provide a viable alternative to contrastive divergence and score matching based methods that suffer from near-sightedness -- these approaches lack global information and can have difficulties fitting well-separated Gaussians. The energy discrepancy method seeks to overcome this difficulty by perturbing the distributions to increase the mass of the low probability regions separating the peaks of the distribution. This is done at different noise levels and then integrated. The proposed loss is theoretically justified and then validated experimentally in both synthetic and real-world settings.
Strengths: The approach is well-motivated for the most part and the presentation is mostly clear. The idea is novel to me and experimentally improves over existing approaches.
Weaknesses: I'm not certain that the approach is as practical as it appears. My main concern is that the w-stabilisation procedure seems to be doing a lot of heavy lifting and that it is tailored to the Gaussian case. Given other discussion in this work, I don't think this is a huge problem. However, the case for acceptance would be bolstered if a broader set of applications were considered.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - There is a bit of magic in the w-stabilisation procedure. I read the explanation in Appendix B, but I feel that more explanation is needed in the main text. In particular, does this approach generalize beyond the Gaussian case? This makes me think that the proposed approach is really quite specific (at least if you want to be able to do it in practice). Does this need to be included as a limitation?
- Is it possible to include some discrete setting experiments as well?
Minor typos:
- "normalisation of EBMs, also known as partition function" sounds funny. Maybe "normalisation constant"?
- The sentence in lines 30-33 doesn't really make sense.
- "i.e." and "e.g." must always be followed by a comma.
- "line out in" -> "outline in"?
- Lots of "which" usage that I think requires a comma.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, but see above comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your valuable comments. Your suggestions are very helpful in further improving the work, and we are refining the manuscript accordingly.
> My main concern is that the w-stabilisation procedure seems to be doing a lot of heavy lifting and that it is tailored to the Gaussian case. Given other discussion in this work, I don't think this is a huge problem. However, the case for acceptance would be bolstered if a broader set of applications were considered.
>
**ANSWER:** Thank you for raising this concern. The $w$-stabilisation is critical to scale up our experiments to high dimensions and significantly reduce the computational cost. The stabilisation is, however, not restricted to the Gaussian case. To support our case, we have added experiments using a Bernoulli perturbation in various discrete settings. We observe a similar effect of the $w$-stabilisation in discrete spaces (see Figure 4 in the attached pdf).
Additionally, it is possible to avoid the stabilisation in certain use cases. For example, it is feasible to estimate densities using a large value for $M$, such as $M=512$, on low-dimensional data (see, e.g., Figure 21 in the appendix). The stabilisation is also not needed if the energy function is sufficiently regular. This typically applies to models that are not deep architectures and that have interpretable parameters, such as products of Gaussian experts with finite variance.
> There is a bit of magic in the w-stabilisation procedure. I read the explanation in Appendix B, but I feel that more explanation is needed in the main text. In particular, does this approach generalize beyond the Gaussian case? This makes me think that the proposed approach is really quite specific (at least if you want to be able to do it in practice). Does this need to be included as a limitation?
>
**ANSWER:** Thanks for the comment. We will extend the main text to make the w-stabilisation more intuitive. The w-stabilisation generalises beyond the Gaussian case, and we successfully use the same stabilisation in the discrete setting, which uses a Bernoulli perturbation. We included a study on the effect of w for the Bernoulli perturbation in Figure 4 of the attached PDF. The main idea behind the stabilisation is the following:
In theory, the difference between the positive energy term and the contrastive energy term is bounded, and thus Energy Discrepancy has an existing minimum at $\exp(-U^\ast)\propto p_\mathrm{data}$. In practice, Energy Discrepancy requires approximating the contrastive energy term with samples, which we call “contrastive samples”. At the edge of the data support, it can happen at random that all contrastive samples have high energy. The resulting parameter gradient encourages energy landscapes for which the energies of contrastive samples go to infinity. We visualise this in Figure 2 in our paper. The w-stabilisation effectively adds the unperturbed data point to the set of contrastive samples. If all contrastive samples have unusually high energies, the log-sum-exp operation is dominated by the energy of the unperturbed data point which does not blow up because it is minimised in the non-contrastive term at the same time. This heuristic argument applies to all perturbations.
We agree that further theoretical work on the effect of the w-stabilisation and its optimal value would be of great interest and it is conceivable that different stabilisation procedures could improve the effectiveness of our method. However, at this stage of the work, the w-stabilisation has several big advantages:
- It is simple, adds no computational complexity to the optimisation, and there is numerical evidence for its effectiveness
- The $w$-parameter requires little tuning, and experimental results are of similar quality for all values that are of the order of 1.
- It is theoretically justified by Theorem 3, which we have generalised to other perturbations in the meantime
We hope we could address your concerns and are happy to discuss further.
> Is it possible to include some discrete setting experiments as well?
>
**ANSWER:** We have included new experiments on discrete data using the Bernoulli perturbation in the attached PDF, following the setting in [1]. The experiments are conducted on predicting the connectivity matrix of an Ising model (Figure 1), learning the density of 2D data mapped via Gray codes to the discrete space $\\{0, 1\\}^{32}$ (Figure 2), as well as binary MNIST, Polyglot, and Silhouettes data sets (Figure 3). It is noteworthy that no latent variable models were used in the discrete image modelling while still achieving competitive results. We leave it as future work to find suitable perturbations for other structured data like graphs and text.
[1] Dinghuai Zhang et al.: Generative Flow Networks for Discrete Probabilistic Modeling, ICML 2022 | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive and extensive comments that help us to improve this work.
We would first like to summarise the paper according to the reviewers:
- Our work proposes a new practical, easy-to-implement, and fast training technique for energy-based models that does not require MCMC sampling. Reviewers agree that this approach is novel:
> **BMfu:** “Energy Discrepancy (ED) is a novel loss for training energy-based models.”; **BJCK:** “The idea is novel to me and experimentally improves over existing approaches.”; **NEwe:** “This paper proposes a new training criterion, which does not rely on possibly ill-conditioned score functions and time-consuming MCMC samples.”; **M9yD:** “I think this is a very good paper that proposes a novel optimization method for energy-based models.”; **37sQ:** “The proposed algorithm is easy to implement.”
>
Furthermore, the reviewers agree that the contribution is significant:
> **BMfu:** “Although the ED loss function works only for low-dimensional data distributions, this work lays the basis for alleviating the issues of maximum likelihood methods and score-based methods for training energy-based models.”; **M9yD:** “The experiments show that energy discrepancy could achieve better performance than score matching and contrastive divergence in image generation.”
>
- Our work introduces theoretical guarantees demonstrating the validity of our approach. Furthermore, we draw connections to score matching and maximum likelihood estimation. The reviews reflect the soundness of our approach as follows:
> **NEwe:** “The authors also build connections between the proposed method and score-matching estimates or MLEs.”; **37sQ:** “ED interpolates between the losses of score matching and maximum-likelihood estimation. Theoretical derivation is rigorous.”
>
We now summarise the concerns that were raised most frequently by reviewers and explain how we addressed them in our rebuttal:
- **Experiments on discrete spaces**
Reviewer **BJCK** is interested in additional experiments on discrete spaces. We added three new experiments to the attached pdf regarding different discrete settings: Using an Energy Discrepancy based on a Bernoulli perturbation, we learn the connectivity matrix of an Ising model, two-dimensional densities that are mapped into $\\{0, 1\\}^{32}$ via Gray codes, as well as various image data sets with binary pixel values (MNIST, Polyglot, Silhouettes). Our results are competitive at a low computational cost.
Reviewer **NEwe** is interested in experiments directly on data space. We would like to refer to the new experiments on discrete data (Figures 1, 2, 3 in the attached PDF) to demonstrate that Energy Discrepancy is capable of learning high-dimensional distributions directly on pixel space. However, we also discussed why we think that for contrastive learning methods like ours, latent variables or other types of hybrid models are necessary to model many image data sets due to the manifold hypothesis.
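As a side note on the Gray-code mapping mentioned above, the standard binary-reflected construction can be sketched as follows; the quantisation range and bit width here are our own illustrative choices, not necessarily those used in the experiments:

```python
def to_gray_bits(value, low, high, n_bits=16):
    """Quantise a scalar in [low, high] to 2**n_bits levels and Gray-encode it.

    Adjacent quantisation levels differ in exactly one bit under a Gray code,
    which makes the encoding a natural way to discretise continuous data.
    """
    levels = (1 << n_bits) - 1
    k = round((value - low) / (high - low) * levels)
    k = min(max(k, 0), levels)          # clamp to the valid range
    gray = k ^ (k >> 1)                 # binary-reflected Gray code
    return [(gray >> i) & 1 for i in reversed(range(n_bits))]

# A 2D point becomes a vector in {0, 1}^32 (16 bits per coordinate).
bits = to_gray_bits(0.3, -1.0, 1.0) + to_gray_bits(-0.7, -1.0, 1.0)
assert len(bits) == 32 and set(bits) <= {0, 1}
```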
- **Concerns regarding the w-stabilisation:**
Reviewer **BJCK** is concerned that the w-stabilisation only applies to the Gaussian case and that more motivation is needed in the main text. To respond to this concern, we are going to add a new paragraph to the main paper that motivates the w-stabilisation and how it applies to **any** type of perturbation. The main idea of the stabilisation is to control the variance of a log-sum-exp operation in the sample-approximated loss functional. To support our case that this stabilisation applies to other perturbations, we have added ablation studies (see Figure 4 in the attached PDF) comparing empirical results with and without $w$ in the case of a Bernoulli perturbation.
Reviewer **37sQ** asks whether Energy Discrepancy works if the stabilisation term goes to zero. Here, we would like to refer to the appendix of our paper. Figure 21 shows a comparison of learned energies for various choices of $w$ and $M$. One can see that Energy Discrepancy works for all values of $w$ if the number of perturbed samples $M$ is large. Introducing even a very small stabilisation $w$ reduces the number of required samples drastically.
Pdf: /pdf/98d8a84ddb55d392151ecd39a148c894c529f3a5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the authors consider the problem of training energy-based models. Due to the limitations of two current approaches known as approximate maximum likelihood methods (which might lead to malformed estimators of the energy function) and score-based methods (which fail to resolve global features in the distribution without big data), the authors propose a novel loss function for this task, named Energy Discrepancy (ED), to overcome those issues with rigorous theoretical guarantees. Moreover, they conduct several experiments to demonstrate the efficacy of the ED loss function in learning low-dimensional data distributions compared to the two previous methods. However, the authors point out that their approach does not work for high-dimensional data due to the manifold hypothesis.
Strengths: 1. Originality: Energy Discrepancy (ED) is a novel loss for training energy-based models.
2. Quality: Theoretical results are solid and accompanied by rigorous proofs, though I did not spend time double-checking all of them. Additionally, many experiments are carried out to empirically justify the effectiveness of the proposed ED loss function. This significantly strengthens the contributions of the paper.
3. Clarity: The paper is well written and organized, which makes it easy to follow.
4. Significance: Although the ED loss function works only for low-dimensional data distributions, this work lays the basis for alleviating the issues of maximum likelihood methods and score-based methods for training energy-based models.
Weaknesses: 1. Clarity: There are some places that the authors introduce results without necessary intuitions. For instance, in line 96, they directly suggest using Gaussian kernels without explanation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In equation (2), why is the score-matching loss defined between $p_{\text{data}}$ and $E_{\theta}$ rather than between $p_{\text{data}}$ and $p_{\theta}$?
2. In the beginning of section 3, can we use other kernels rather than Gaussian kernels to perturb $p_{\text{data}}$ and $p_{\theta}$?
3. In equation (5), is the energy discrepancy $ED_q$ a proper metric?
4. In Theorem 2, the authors should either briefly introduce the definition of Wasserstein distance or cite relevant papers to that distance so that readers from different communities can understand.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are discussed in Section 7 of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments. Here is our response to your questions:
> There are some places that the authors introduce results without necessary intuitions. For instance, in line 96, they directly suggest using Gaussian kernels without explanation.
>
**ANSWER:** Thank you for making us aware of this. The intuition for Gaussian kernels is two-fold: Firstly, Gaussian kernels are the simplest choice as they are easy to sample from and allow easier calculations for the analysis of the method. They are often used to apply noise in score-based methods [1], contrastive methods [2], and as spread noise [3]. Consequently, this is the first perturbation we tried. Secondly, we observed empirically that Gaussian perturbations work well in practice. They allow us to make the connection to score matching which we describe in our motivation in equation (3) as well as maximum likelihood estimation which characterises Gaussian perturbations as optimal among a large class of Markov transition kernels. We will include an additional sentence to motivate our choice of perturbation in line 96 and leave other choices for follow-up work.
> In equation (2), why is the score-matching loss defined between $p_{\mathrm{data}}$ and $E_\theta$ rather than between $p_\mathrm{data}$ and $p_\theta$?
>
**ANSWER:** Thank you for pointing out that this notation is unclear. This notation was chosen for the following reasons:
1. Unlike the Fisher divergence from which score matching is derived, the score-matching objective in equation (2) is not a statistical divergence between $p_\mathrm{data}$ and $p_\mathrm{ebm}$, but an estimation criterion for the energy function. For this reason, we treat score-matching as a functional of the learned energy function $U$ or $E_\theta$, respectively and make this explicit in the notation.
2. We also reuse the notation in equation (3) where we find this notation more concise than the alternatives.
3. Finally, this notation is consistent with our notation for Energy Discrepancy, which we treat as a functional of the energy function in the same way.
We are going to edit the sentence that precedes the equation to make the chosen notation more logical.
> In the beginning of section 3, can we use other kernels rather than Gaussian kernels to perturb $p_\mathrm{data}$ and $p_\theta$?
>
**ANSWER:** The proposed Energy Discrepancy can indeed be defined for other kernel functions. The connection to score matching made at the beginning of section 3, however, holds only for Markov transition kernels associated with SDEs of the form $\mathrm d\mathbf x_t = \mathbf a(\mathbf x_t)\mathrm dt + \mathrm d\mathbf w_t$.
In practice, Gaussian kernels are favourable due to easy sampling and the many analytical tools available, so we conducted our analysis and experiments for this choice of kernel first. Additionally, Theorem 2 shows that Gaussian kernels are optimal among possible Markov transition kernels in the sense that the maximum likelihood objective can be approximated with a Gaussian kernel of sufficiently large variance.
One example of a non-Gaussian kernel for Energy Discrepancy is the Bernoulli perturbation that can be applied in discrete spaces (see section B.3 in the appendix). The additional experimental results in the attached PDF (Figure 1-4) demonstrate that this perturbation is effective in training EBMs in various discrete settings. We will explore other kernels in future work.
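To illustrate the two kernel families discussed in this answer, the snippet below sketches a Gaussian transition kernel for continuous data and a Bernoulli bit-flip kernel for binary data (parameter names `t` and `eps` are our own assumptions); both produce the perturbed samples that the perturbation-based loss contrasts against the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_perturb(x, t=0.25):
    """Gaussian Markov transition kernel: x' ~ N(x, t * I),
    i.e. the kernel of the Brownian-motion SDE dx_t = dw_t."""
    return x + np.sqrt(t) * rng.standard_normal(x.shape)

def bernoulli_perturb(x, eps=0.1):
    """Discrete-space analogue: flip each binary coordinate
    of x (entries in {0, 1}) independently with probability eps."""
    flips = rng.random(x.shape) < eps
    return np.where(flips, 1 - x, x)
```

The Gaussian kernel admits closed-form analysis, while the Bernoulli kernel shows that the construction carries over to discrete spaces with no continuous structure at all.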
> In equation (5), is the energy discrepancy a proper metric?
>
**ANSWER:** Energy Discrepancy is not a metric but is designed as a criterion for density estimation, similar to maximum likelihood estimation. This means that for Energy Discrepancy, $p_\mathrm{data}$ is always fixed while $p_\theta$ is learned, and the roles of the two arguments cannot be interchanged. In particular, ED is not symmetric. When Energy Discrepancy is minimised in $p_\theta$, the minimum is attained at the ground truth $p_\theta = p_\mathrm{data}$.
While not a metric, Energy Discrepancy can be related to a statistical divergence called KL contraction, which is a weaker notion of distance (see Section A.6 in the appendix).
> In Theorem 2, the authors should either briefly introduce the definition of Wasserstein distance or cite relevant papers to that distance so that readers from different communities can understand.
>
**ANSWER:** Thank you for pointing this out, we will refer to [4] in our revision.
[1] Song, Yang and Ermon, Stefano: Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2019
[2] Michael Gutmann and Aapo Hyvärinen: Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, AISTATS 2010
[3] Mingtian Zhang, Peter Hayes, Tom Bird, Raza Habib, David Barber: Spread Divergence, ICML 2020
[4] Peyré, Gabriel and Cuturi, Marco: Computational Optimal Transport, Foundations and Trends in Machine Learning 2019
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for your thorough rebuttal. The authors have already addressed my concerns about the paper. Therefore, I will maintain my score of 6, provided that these changes are included in the revision of this paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We are glad to hear that we could address your concerns! The changes will be included in the next revision. Thank you for your assessment of our work. | null | null | null | null | null | null |
Large Language Models are Visual Reasoning Coordinators | Accept (poster) | Summary: 1. The author proposes to utilize a language model as the coordinator between different outputs from different VLMs, leveraging their strengths for visual reasoning.
2. The proposed method achieves SOTA results on multiple visual reasoning benchmarks.
3. The analysis shows how zero-shot / fine-tuned language models can coordinate VLM outputs.
Strengths: 1. The proposed method, with zero-shot VLMs and zero-shot / fine-tuned LLMs as coordinators, is novel.
2. The performance of the model is significant, with sufficient experimental results.
Weaknesses: 1. Different datasets require different visual / reasoning capabilities and knowledge, and the proposed method performs differently across datasets. For example, the ZS version performs similarly to 'Ensemble' on VQA v2 and OK-VQA, but outperforms 'Ensemble' significantly on the other datasets. It would be better to analyze the reason.
2. Why the proposed method performs better than other methods needs more discussion, for example, the comparison with BLIP-2 on VQA.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It is a little unclear whether in Table 3 the Cola-FT model is fine-tuned on BLIP+OFA outputs or on the 3 OFA variants. Why is the Cola-FT performance here much lower than the performance in Table 2 and Figure 3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are discussed and potential solutions are proposed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We kindly request the reviewer to check our detailed responses and revisions in the following. Your time, effort, and affirmation of our research are truly valued.
**Q: Different dataset requires different visual / reasoning capabilities and knowledge….**
We appreciate the reviewer's insightful question. This is inherently determined by the mechanism of Cola-Zero. It leverages few-shot in-context examples to guide the LLM in harmonizing VLM outputs to obtain the final accurate answer. Its efficiency largely hinges on the quality of these examples.
Key factors to consider include:
1. **Question Format:** Datasets like VQA v2 and OK-VQA contain open-ended questions, while A-OKVQA, e-SNLI-VE, and VSR use multiple-choice. Converting VQA v2 and OK-VQA to classification introduces complexities for traditional ensemble methods, as evident in Table 2. Classic methods struggle with generative models like API-based GPT-4, underscoring Cola's value as an end-to-end ensemble strategy for extensive (vision-)language models. Moreover, Cola-Zero’s efficiency also relies on the question format – it's easier for LLMs to answer when given choices like in A-OKVQA. Conversely, Cola-FT finetunes LLMs to discern answer formats.
2. **Knowledge Demands and Out-of-Distribution Challenges:** Datasets requiring deep reasoning, such as A-OKVQA, emphasize Cola-Zero and FT’s advantages over Ensemble. With A-OKVQA demanding broad commonsense and world knowledge, pretrained LLMs (like our FLAN-T5) are favored. On such datasets, Cola-Zero outperforms the ensemble baseline, while established models like OFA and BLIP-2, trained on VQA v2, set higher baselines, making significant improvements by Cola challenging.
3. **In-context Examples:** Cola-Zero's performance is influenced by its in-context examples. Misrepresentative examples can hamper its ability to synchronize VLMs effectively. We use random selection as our baseline for this learning. However, Cola-FT overcomes such limitations. By finetuning, it offers a comprehensive grasp on harmonizing VLM outputs, consistently delivering excellent results, and potentially matching top-tier results.
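As a concrete (hypothetical) illustration of the mechanism described above, the sketch below assembles a Cola-style prompt from each VLM's caption and plausible answer plus optional in-context examples. The function name, template wording, and field layout are our own assumptions, not the exact template from the paper.

```python
def build_cola_prompt(question, choices, vlm_outputs, examples=()):
    """Assemble a coordination prompt for the LM (illustrative template).

    vlm_outputs: mapping of VLM name -> (caption, plausible_answer).
    examples:    few-shot in-context examples, the mechanism Cola-Zero
                 relies on; Cola-FT instead finetunes the LM on prompts
                 of this shape against ground-truth answers.
    """
    parts = list(examples)
    for name, (caption, answer) in vlm_outputs.items():
        parts.append(f"{name} caption: {caption}")
        parts.append(f"{name} plausible answer: {answer}")
    parts.append(f"Question: {question}")
    parts.append("Choices: " + ", ".join(choices))
    parts.append("Answer:")
    return "\n".join(parts)
```

The coordinating LM then completes the prompt, free to agree with one VLM, combine both, or produce an answer neither model proposed.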
**Q: Why the proposed method performs better than other methods needs more discussion…**
Compared to individual models like BLIP and OFA, Cola's strength is its integration of responses from multiple models, refined by a language model, to produce superior results. For instance, Figure 2's third example shows that while OFA and BLIP individually fail, Cola-FT uses their captions to deduce the correct answer, “maybe”. While Cola's efficacy may vary across datasets, its LLM can harness comprehensive knowledge to improve VLM outputs. For example, the left side of Figure 8 in the supplementary material showcases instances where both VLMs err. Yet, both Cola-Zero and Cola-FT interpret the data to deliver the right answer, “to rest”. While Cola-Zero's effectiveness is limited by dataset question formats and context examples, Cola-FT's fine-tuning broadens its adaptability across diverse questions. See also our response to Reviewer 3ict.
**Q: It is a little unclear that in Table 3, the Cola-FT model is fine-tuned on BLIP+OFA outputs or 3 OFA variants…**
We greatly appreciate this constructive question raised by the reviewer, which warrants further clarification. In Table 3, we run OFA-base models under different random seeds, represented as OFA-Base-1, OFA-Base-2, etc. Therefore, the corresponding results of Ensemble and Cola-Zero/FT in Table 3 are based on the outputs of the aforementioned OFA-Base-1/2/3 models. Similarly, in Table 4, the Ensemble and Cola-Zero/FT results are based on the outputs of the OFA-tiny/medium/base models.
In both experiments, the BLIP model is not involved and the prompt is slightly changed to accommodate more VLMs. We have clarified the settings in our revised paper.
As an ensemble method, Cola demonstrates larger performance gains compared to single VLMs when the VLMs are more complementary in their capabilities. Table 3 presents an example where the three VLMs are nearly identical. As a consequence, the performance gains of Cola-Zero and Cola-FT are limited, and so are Ensemble baselines. In Table 4, OFA-tiny/medium/base possesses different capabilities, which results in a much larger performance gain for Cola-Zero (3.6% on A-OKVQA, 5.0% on e-SNLI-VE) and Cola-FT (8.5% on A-OKVQA, 11.1% on e-SNLI-VE). In Table 2, BLIP and OFA models are quite different in their pre-training data so they show complementary capabilities (see examples in Figure 2, 8-11). Therefore, the performance gains for Cola-Zero and Cola-FT are even larger.
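For contrast with Cola, the kind of classic Ensemble baseline referred to above can be sketched as a simple majority vote over the VLMs' answer strings. This is a hypothetical minimal version; the paper's Ensemble baseline may be implemented differently.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common answer string; ties fall to the first seen.

    Unlike Cola's LM coordinator, this baseline cannot use captions and
    can never return an answer that no individual VLM proposed, which is
    why it struggles when the ensembled VLMs are nearly identical."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][0]
```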
---
Rebuttal Comment 1.1:
Title: Sincerely Looking Forward to Your Reply
Comment: Dear Reviewer,
We extend our heartfelt gratitude for the invaluable suggestions and comments you have provided, which have significantly contributed to refining our paper. Specifically, your insights on the performance gap between Cola-Zero and ensemble baselines helped us better present our analysis of why Cola-Zero works. We hope our response has addressed your concerns.
We eagerly anticipate your response to our revised submission and are earnestly open to engaging in any discourse aimed at enhancing the quality of our paper.
Warm regards,
The authors | Summary: The paper proposes an ensemble based approach to solve visual reasoning problems. The paper proposes to use an instruction fine-tuned large language model to integrate answers to visual reasoning problems provided by vision language models. The paper presents two variants of the aggregation model -- using fine-tuning and using in-context learning. The model is evaluated on the VQA v2, A-OKVQA, OK-VQA, e-SNLI-VE and VSR datasets.
Strengths: * The paper reports promising results on the VQA v2, A-OKVQA, OK-VQA, e-SNLI-VE and VSR datasets.
* The paper includes a variety of ablations that show the effectiveness of the proposed method -- including model size, scaling, number of video-language models as ensemble members.
* The paper includes qualitative examples which highlight the effectiveness of the proposed method.
* The paper is well written and easy to understand.
Weaknesses: * While the results are very promising, it would be helpful to add results on more complex datasets such as CLEVR (CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning) and GQA (GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering) which study compositional reasoning.
* The paper should also consider comparing to prior work such as Visual Programming: Compositional visual reasoning without training, CVPR 2023, which also uses an external large language model to coordinate vision / language-vision models.
* The paper claims in L291 "This work demonstrates the first step toward applying language models for visual reasoning", but Flamingo (Flamingo: a Visual Language Model for Few-Shot Learning, NeurIPS 2022) already shows zero-shot visual reasoning on datasets such as Next-QA.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The paper should include further details on the computational resources used. L680 in the supplementary material just states that V100 or A100 GPUs were used, but the paper should include further details about the total computational resources used.
* The paper should include further motivational details on why the particular datasets VQA v2, A-OKVQA, OK-VQA, e-SNLI-VE and VSR were used? Why were more datasets which require more complex reasoning abilities such as CLEVR and GQA were not used.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not include any discussion about its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for your feedback on our paper, and we will provide our responses to these concerns in the following. We have also made targeted modifications to the paper, and these issues will help improve our paper.
Our work aims to demonstrate that by using LLM, multiple VLMs can be coordinated to achieve better visual reasoning effects. We hope this work can bring some of our insights to the currently booming fields of LLM and VLM. We hope the reviewer will carefully review our responses and modifications, and we sincerely thank the reviewer for your time and effort, as well as your acknowledgement of our work.
**Q: CLEVR and GQA results (The paper should include further motivational details on why the particular datasets VQA v2, A-OKVQA, OK-VQA, eSNLI-VE and VSR were used? Why were more datasets which require more complex reasoning abilities such as CLEVR and GQA were not used.)**
**Q: The paper should also consider comparing to prior work such as Visual Programming: Compositional visual reasoning without training, CVPR 2023, which also uses an external large language model to coordinate vision / language-vision models.**
We thank the reviewer for the suggestion of experiments on compositional reasoning datasets and Visual Programming. Below we report the results on the GQA and CLEVR datasets. On the GQA validation set, Cola-FT shows a marginal improvement over the best VLM (OFA) and outperforms Visual Programming [1].
| GQA | Acc. | $\Delta$ |
| --- | --- | --- |
| BLIP | 41.7 | |
| OFA | 58.0 | |
| Cola-FT | 60.3 | +2.32 |
| VisProg [1] | 50.5 | |
On the CLEVR validation set, Cola-Zero shows marginal performance gain and Cola-FT shows substantial performance gain over single VLMs. It’s interesting to note that InstructBLIP (FLAN-T5-XL) [2] outperforms InstructBLIP (FLAN-T5-XXL) by a large margin in the 0-shot evaluation.
| CLEVR | Acc. | $\Delta$ |
| --- | --- | --- |
| InstructBLIP (FLAN-T5-XL) | 33.7 | |
| InstructBLIP (FLAN-T5-XXL) | 16.6 | |
| Cola-Zero (2-shot) | 34.4 | +0.7 |
| Cola-FT | 54.3 | +20.6 |
[1] Gupta, T., & Kembhavi, A. (2023). Visual programming: Compositional visual reasoning without training. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 14953-14962).
[2] Dai, W., Li, J., Li, D., Tiong, A.M., Zhao, J., Wang, W., Li, B., Fung, P., & Hoi, S. (2023). InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. *ArXiv, abs/2305.06500*.
**Q: The paper claims in L291 "This work demonstrates the first step toward applying language models for visual reasoning", but Flamingo (Flamingo: a Visual Language Model for Few-Shot Learning, NeurIPS 2022) already shows zero-shot visual reasoning on datasets such as Next-QA.**
We greatly appreciate the reviewer pointing out the oversight regarding Flamingo in our article. Our claim, 'This work demonstrates the first step toward applying language models for visual reasoning,' is primarily based on our unique approach, in which we employ a standalone, end-to-end LLM to coordinate the outputs of various VLMs and combine them for enhanced results. Compared to either a single VLM or a multi-VLM ensemble, this led to significant improvements on multiple visual reasoning datasets, as observed with our Cola-Zero/FT. Thus, a better phrasing of our claim would be "This work demonstrates the first step toward applying end-to-end language models for visual reasoning", which we have revised in the paper.
On this front, compared to VLM models that incorporate powerful LLMs—such as Flamingo which can integrate with OPT[1] or Chinchilla [2], and OpenFlamingo [3] which can combine with LLaMA [4]—our Cola strategy could potentially benefit from merging with a larger scale LLM to interpret VLM outputs, leading to even better outcomes. For example, Cola-FT obviously applies to end-to-end LM APIs (like Open GPT-4 [5] and Anthropic Claude [6]), which is challenging for conventional ensemble methods or VLMs that need to be finetuned from LLMs.
[1] Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., ... & Zettlemoyer, L. (2022). Opt: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.
[2] Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., ... & Sifre, L. (2022). Training compute-optimal large language models. *arXiv preprint arXiv:2203.15556*.
[3] Awadalla, A., Gao, I., Gardner, J., Hessel, J., Hanafy, Y., Zhu, W., ... & Schmidt, L. (2023). OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models. *arXiv preprint arXiv:2308.01390*.
[4] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M. A., Lacroix, T., ... & Lample, G. (2023). Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*.
[5] OpenAI (2023). GPT-4 Technical Report. *ArXiv, abs/2303.08774*.
[6] Anthropic \ Introducing Claude https://www.anthropic.com/index/introducing-claude
**Q: the paper should include further details about the total computational resources used.**
We greatly appreciate the reviewer for pointing out the need for additional detail in this section. To address this, we've incorporated a table illustrating the GPU hours as the computation cost in our training process. These computations are performed with reference to a server with 8 NVIDIA V100 GPUs.
| Cola-FT | V100 hours |
| --- | --- |
| A-OKVQA | 12 |
| e-SNLI-VE | 8 |
| VSR | 8 |
| GQA | 24 |
| VQA v2 | 80 |
| OK-VQA | 12 |
| CLEVR | 24 |
For most datasets, the inference time of Cola-Zero and Cola-FT is ~16 questions / second, with 1 A100 GPU. Each question is composed of 90 to 150 tokens. We have included these details in our revised paper.
---
Rebuttal Comment 1.1:
Title: Update
Comment: The results on CLEVR and GQA are promising. The final version should discuss prior work in more detail and update claims accordingly. I would keep my score and vote for acceptance.
---
Reply to Comment 1.1.1:
Title: Genuinely Thankful for Reviewers Feedback
Comment: Thanks! your reviews and suggestions significantly help us to improve our work. We hope our effort can bring value to the research community. | Summary: The paper introduces a new paradigm called Cola, which aims to coordinate multiple vision-language models (VLMs) for visual reasoning tasks. While several VLMs have demonstrated strong commonsense reasoning abilities in different domains, effectively combining their capabilities remains a challenge. Traditional methods like ensembling struggle to achieve higher-order communications between these models.
Strengths: Cola proposes a solution by employing a language model (LM) to coordinate the multiple VLMs. The LM facilitates natural language communication, leveraging the distinct and complementary capabilities of each VLM. The authors introduce two variants of Cola: Cola-FT, which involves fine-tuning the models, and Cola-Zero, which performs in-context learning without the need for fine-tuning.
The authors conduct extensive experiments to evaluate the performance of Cola on various visual reasoning tasks, including visual question answering (VQA), outside knowledge VQA, visual entailment, and visual-spatial reasoning. They demonstrate that Cola-FT achieves state-of-the-art results in these tasks. Additionally, Cola-Zero exhibits competitive performance in zero and few-shot settings, without requiring fine-tuning. The paper further includes ablation studies and visualizations to validate the effectiveness of the coordinator LM. These analyses confirm that the coordinator LM comprehends the instruction prompts and understands the individual functionalities of the VLMs, allowing it to coordinate their efforts and enable visual reasoning capabilities.
Using a language model as a coordinator for different VLMs is novel.
Adequate experiments show a promising performance over baselines.
Weaknesses: Does the language coordinator play a role in determining correctness when comparing multiple VLMs according to their answering language descriptions? What if both VLMs are wrong?
As is mentioned in the limitations, no rational explanations or logical steps are applied in either the VLMs or the LM.
The paper missed recent work in visual reasoning, like Flamingo (https://nips.cc/Conferences/2022/ScheduleMultitrack?event=54165), STAR (http://star.csail.mit.edu/), GAMR (https://openreview.net/pdf?id=iLMgk2IGNyv), SHG-VQA (https://arxiv.org/abs/2304.08682), etc.
Only two VLMs are applied. It would be interesting to see the marginal gains of Cola when compared with the best VLMs in the experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer to the above comments
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Refer to the above comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to extend our gratitude to reviewer’s insightful critique of our paper. These thoughtful comments have guided revisions that will undoubtedly enhance the quality of our work. We hope that by highlighting the potential of language models (LMs) to coordinate multiple vision-language models (VLMs), our findings can provide new insights into large pretrained models and visual reasoning.
**Q: Does the language coordinator play a role to determine the correctness of comparing multiple VLMs according to the answering language description? What if both VLMs are wrong?**
**Q: As is mentioned in the limitations, no rational explanations or logical steps are applied in either the VLMs or the LM.**
Yes, the language coordinator determines the correctness of VLM plausible answers implicitly, given their captions and the caption and answer labels (see Figure 3 and Sec 3.4). It’s all plausible answers, captions, VLM answer/caption labels, and the world knowledge the LM it encodes in itself, that contribute to the final decision of the language coordinator. Even in the cases where both VLMs give wrong answers, there are chances the language model arrives at the right answer.
In the paper, we did not apply techniques such as rationale generation or Chain-of-Thought to make the LM's logical process more explicit. However, in the supplementary experiments, we attempted on a small scale to modify the prompt template of Cola-FT so that the model would output its logical deduction process, allowing us to observe the specific behaviors of the LM as it coordinates the VLMs. We finetune Cola-FT to output rationales before answers, using the A-OKVQA ground-truth rationales; in the modified prompt, we ask the model to provide rationales.
Cola FT's demo output is as follows:
- Case 1: leftmost example of Figure 8
> Rationale: People might sit here to rest. The umbrellas are on the riverwalk. The answer is:rest
OFA gives a reasonable answer to the question (though one outside the choices), while BLIP gives an irrelevant answer. In this case, both answers are wrong. However, both Cola-Zero and Cola-FT are able to infer from the captions and plausible answers and give the correct answer "to rest". The rationale suggests that the LM understands the scene, that the umbrellas are on the riverwalk, and guesses based on commonsense that people might sit here to rest. The final answer is correct.
- Case 2: leftmost example of Figure 11
> Rationale: The people are in a canoe. The people are in a canoe. The people are in a canoe. The answer is:kayaking
Both VLMs are wrong in their plausible answers. OFA's answer "boating" is semantically close to the correct answer "kayaking", but it is not the correct answer because this is a multiple-choice question. Cola-Zero gives the answer "OFA", which is obviously wrong because "OFA" is the name of one of the VLMs given in the prompt and is out of the choices too. However, Cola-FT gives the correct answer "kayaking", recognizing the correct choice from the prompted captions and plausible answers after being finetuned. Even though the OFA and BLIP captions fail to identify that the people in the water are on a canoe, the LM identifies that the people in the water are associated with a canoe. The rationale is valid and helpful, though repetitive. The final answer is correct.
- Case 3: third example of Figure 11
> Rationale: The bike is parked in a no parking zone. The bike is parked next to a pedestrian crossing sign. The answer is:no parking
From the rationale, we can tell that the LM understands that the bike is parked next to a pedestrian crossing sign. However, it “overthinks” that this is a no parking zone and therefore gives the wrong answer. This rationale helps us understand why Cola-FT gives a wrong answer.
The inference results on A-OKVQA validation set are as follows:
| | Acc. |
| --- | --- |
| w/ rationale | 74.3 |
| w/o rationale | 77.7 |
Forcing the LM to output a rationale does not improve the reasoning performance of Cola. This might be attributed to the low quality of the ground-truth rationales provided by the A-OKVQA dataset that we use to train the LM: such rationales are just short, objective descriptions of the scene, without suggesting the underlying outside knowledge needed to answer the question. Therefore, training the LM to output rationales is harmful, though it does offer insight into the LM's behavior during reasoning.
**Q: The paper missed the recent work in visual reasoning, like Flamingo, STAR, GAMR, SHG-VQA, etc.**
We appreciate the reviewer for pointing out this issue. We believe the related works mentioned here are indeed relevant papers in the field of visual reasoning. Even though we have encompassed a citation list of 120+ papers in total in our revision, it is hard to cover all relevant works in this rapidly and vigorously developing area. We have incorporated the related works mentioned above. This ensures that our audience can better reference more excellent papers by reading our paper.
**Q: Only two VLMs are applied. It’s interesting to see the margin gains of Cola when compared with the best VLMs in the experiments.**
We benchmark the best VLMs on A-OKVQA, InstructBLIP (FLAN-T5-XXL) and InstructBLIP (FLAN-T5-XL). For the InstructBLIP baselines, we use InstructBLIP alone (i.e., without the LM) to evaluate 0-shot performance. Interestingly, InstructBLIP (FLAN-T5-XL) slightly outperforms InstructBLIP (FLAN-T5-XXL). As the results below show, both Cola-Zero and Cola-FT improve reasoning performance by substantial margins over the single VLMs.
| | Acc. | $\Delta$ |
| --- | --- | --- |
| InstructBLIP (FLAN-T5-XL) | 60.4 | |
| InstructBLIP (FLAN-T5-XXL) | 59.8 | |
| Cola-Zero (0-shot) | 68.0 | +7.6 |
| Cola-Zero (2-shot) | 72.3 | +11.9 |
| Cola-FT | 78.1 | +17.7 |
---
Rebuttal Comment 1.1:
Title: Sincerely Looking Forward to Your Reply
Comment: Dear Reviewer,
We sincerely appreciate you taking the time to provide thoughtful suggestions and comments, which have been immensely helpful in improving our paper. In particular, your feedback regarding the coordinator LM and Cola with the best VLMs has enabled us to strengthen and clarify these sections. We have revised the writing and added experiment results to our paper.
We sincerely look forward to hearing your perspective on our response. Please know that we remain open to any discussion that could further enhance our work, and we highly value your constructive input.
Sincerely,
The authors
---
Rebuttal Comment 1.2:
Title: Thank you for your responses
Comment: Hi, thank you for your responses. Although I have a different opinion on the first question, some of the questions were resolved by the responses.
However, there is no revised version of the paper in the system; please check whether it was uploaded successfully.
Will add scores to reflect the change.
---
Reply to Comment 1.2.1:
Title: Thank You for Recognizing Our Rebuttal
Comment: Dear Reviewer,
Thank you for your reply! NeurIPS does not allow uploading the revised manuscript or giving external links during the discussion periods. We will upload the revised manuscript as a camera-ready version. Please let us know if you would like to discuss further the first question or any other aspect of our work. We greatly appreciate your effort in making our paper better!
Sincerely,
The authors | Summary: The paper introduces a novel approach to ensembling multiple vision-language models (VLMs) for solving visual reasoning tasks. More specifically, the authors propose to use a language model (LM) to coordinate answers from various VLMs, which outperforms traditional ensemble approaches. Multiple experiments demonstrate the effectiveness of the proposed Cola approach.
Strengths: 1. The paper is well written, with great explanations of the proposed Cola approach and various figures and tables.
2. Using the LM to coordinate VLMs for visual reasoning tasks is novel and interesting.
Weaknesses: The authors observed that long input (multiple VLMs) does not guarantee higher performance in Section 3.5. Specifically, Figure 4 shows the approach is vulnerable to the number of models. Cola models may also be affected by which VLMs are used.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Table 3, the authors implemented "Ensemble (average)" based on Equation (1). Was each $P_i(v, q)$ normalized before averaging them?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors discussed the limitations of the work in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's insightful critique and thoughtful suggestions for improving our work. In response, we have substantially revised the paper to better demonstrate the effectiveness of language models for coordinating multiple vision-language models.
We humbly ask the reviewer to check our responses and the revisions. We greatly value the time, effort, and consideration you have given to our work.
**Q: The authors observed that long input (multiple VLMs) does not guarantee higher performance in Section 3.5. Specifically, Figure 4 shows the approach is vulnerable to the number of models.**
Regarding the limitation on long context inputs, we agree this is an inherent challenge for LLMs themselves. It is an interesting topic that should be investigated further, and we have added references [1,2,3] to recent works proposing techniques that extend LLMs' ability to model long contexts, which could enhance Cola in the future.
[1] Chen, S., Wong, S., Chen, L., & Tian, Y. (2023). Extending context window of large language models via positional interpolation. *arXiv preprint arXiv:2306.15595*.
[2] Mu, J., Li, X. L., & Goodman, N. (2023). Learning to compress prompts with gist tokens. *arXiv preprint arXiv:2304.08467*.
[3] Bulatov, A., Kuratov, Y., & Burtsev, M. (2022). Recurrent memory transformer. *Advances in Neural Information Processing Systems*, *35*, 11079-11091.
**Q: In Table 3, the authors implemented "Ensemble (average)" based on Equation (1). Was each $P_i(v, q)$ normalized before averaging them?**
We also appreciate the clarification request on ensemble prediction averaging. To confirm, we did not normalize the per-model scores $P_i(v, q)$ before averaging and majority voting, since they are already on the same 0-1 scale, so normalization is not needed in this circumstance. Thank you for raising this detail; we have updated the text to state clearly that no normalization was applied.
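For concreteness, a minimal sketch of the two ensemble baselines, assuming each VLM already emits a normalized distribution over the candidate answers (function names and numbers below are illustrative, not from the paper):

```python
import numpy as np

def ensemble_average(probs):
    """Average per-model answer distributions P_i(v, q).

    probs: list of 1-D arrays, one per VLM, each already a probability
    distribution over candidate answers (0-1 scale), so no extra
    normalization is applied before averaging.
    """
    return np.mean(np.stack(probs), axis=0)

def majority_vote(probs):
    """Each VLM votes for its argmax answer (tie-breaking arbitrary)."""
    votes = [int(np.argmax(p)) for p in probs]
    return max(set(votes), key=votes.count)

# Two hypothetical VLM outputs over 4 multiple-choice answers.
p1 = np.array([0.1, 0.6, 0.2, 0.1])
p2 = np.array([0.2, 0.5, 0.2, 0.1])
avg = ensemble_average([p1, p2])
assert np.allclose(avg, [0.15, 0.55, 0.2, 0.1])
assert int(np.argmax(avg)) == majority_vote([p1, p2]) == 1
```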
---
Rebuttal Comment 1.1:
Title: Sincerely Looking Forward to Your Reply
Comment: Dear Reviewer,
Your suggestions and comments have greatly helped polish our paper, regarding the long input and the ensemble baseline. We sincerely look forward to your reply to our response, and we are open to any discussion to improve our paper.
Best wishes,
The authors | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewers' time and effort in providing thoughtful feedback on our work. We are pleased that the reviewers recognize the novelty of our Cola framework for coordinating multiple VLMs for visual reasoning. We also appreciate the suggestions to strengthen the paper through additional experiments and analysis.
First, we are deeply appreciative of the reviewer’s recognition of our paper by their comments on:
- Introduction of a novel Cola approach using a language model (LM) to effectively coordinate multiple visual language models (VLMs) for visual reasoning tasks;
- Comprehensive experiments that highlight Cola's notable performance across several datasets;
- Detailed analysis, inclusive of ablation studies and visual illustrations, confirming the coordinator LM's adeptness at integrating VLMs.
In response, we have conducted several new ablation studies which provide further insights into Cola:
- Rationales help explain Cola's reasoning steps but do not improve performance, confirming that the coordinator alone captures the necessary reasoning.
- Cola boosts the latest SOTA VLMs like InstructBLIP, showing general effectiveness across model families.
- Cola improves compositional reasoning on GQA and CLEVR, demonstrating broad applicability.
Additionally, we present a detailed analysis differentiating the behaviors of Cola-Zero and Cola-FT. References are provided to specifically address the requests of Reviewers 3ict and Yu6H.
Through these added experiments and analyses, we believe the paper better conveys the strengths of our approach. We are grateful to the reviewers for their feedback pushing us to strengthen the work, and we hope the revisions satisfactorily address the concerns raised. We look forward to continuing the discussion and thank the reviewers for their time and consideration. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Uncertainty-Aware Instance Reweighting for Off-Policy Learning | Accept (poster) | Summary: This paper delves into the issue of off-policy learning, the objective of which is to devise a new action selection policy based solely on the logged feedback derived from a logging policy. The paper pays particular attention to scenarios in which the logging policy remains unidentified and its estimation proves challenging. Under these conditions, common estimators, like IPS, may lose beneficial attributes such as unbiasedness. To address this complication, the paper introduces a new off-policy learning (OPL) approach called Uncertainty-aware off-policy learning. This new framework aims to optimize the uncertainty-aware objective function by employing a novel weighting scheme that is tuned by minimizing an upper bound of MSE in estimation. A local convergence guarantee for the proposed method is also shown. Experimental results indicate that this proposed framework outperforms a range of benchmark methods on both semi-synthetic and real-world recommendation datasets.
Strengths: - The paper addresses the practically relevant problem of dealing with uncertainty in logging policy estimation in off-policy learning.
- The paper proposes a reasonable and conceptually straightforward method to handle the issue of uncertain logging policies, providing theoretical guarantees regarding estimation and local convergence.
- The paper presents comprehensive experiments, not just basic performance comparisons, but also experiments on off-policy evaluation (OPE) and critical hyperparameters (some of which are included in the appendix).
Weaknesses: - Given that several papers already exist on the topic of distributionally robust off-policy learning (OPL), as discussed in the paper, the formulation of a problem addressing the uncertainty of logging policies may not be groundbreaking, even though I understand that their motivations differ somewhat.
- In the experiments, the issue might also be tackled by simply applying calibration during the estimation of the logging policy, as seen in the following paper:
Aniruddh Raghu, Omer Gottesman, Yao Liu, Matthieu Komorowski, Aldo Faisal, Finale Doshi-Velez, and Emma Brunskill. Behaviour Policy Estimation in Off-Policy Policy Evaluation: Calibration Matters. https://arxiv.org/pdf/1807.01066.pdf
- In most of the experiments, CE performs quite well and is not substantially outperformed by UIPS. Therefore, considering the current experiment results, I may not use UIPS in practice and would rather rely on CE, which is much easier to implement (there is no need to estimate the logging policy when using CE), and does not require the tuning of additional hyperparameters as with UIPS.
- Related to the previous point, in most experiments, the second-best methods for each metric and dataset perform very similarly to UIPS. I am not sure how essential it truly is to address uncertainty in logging policy estimation. I understand that the results are statistically significant, but results can be deemed significant even with a slight performance difference if the sample size is sufficient. In this context, my focus is on the performance difference.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How do baseline methods perform in the experiments when they are combined with a calibrated logging policy estimator? Some additional results about this would be useful.
- Could you provide the results relative to the performance of the (true) logging policy? This enables us to see how much improvements the methods bring compared to the logging policy.
- When does UIPS become really crucial? That is, are there any situations where UIPS performs well while all other methods do not work satisfactorily? In the current experiments, the second-best methods perform very similarly to UIPS in all datasets and metrics. Moreover, CE performs reasonably and stably for a range of metrics and datasets, which makes it a really good choice in practice indeed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper touches on the tightness of the bound as a limitation and future work in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer 19wU
We thank the reviewer for pointing out the related work on calibration, and for posing valuable questions that have assisted in clarifying crucial arguments.
> [Q1] "Difference between UIPS and the line work of distributionally robust off-policy learning (OPL)."
In Section 7.6, we highlighted the distinct differences between UIPS and the line of work on distributionally robust RL in terms of the source of uncertainty, the motivation for utilizing uncertainty, and the techniques employed to handle uncertainties. Furthermore, the experiments conducted in Section 7.6 demonstrate that directly adapting methods from distributionally robust OPL to handle inaccurately estimated logging probabilities leads to poor performance, and is thus not a suitable solution.
> [Q2] "Comparison to Calibration Methods."
The paper [1] found that the accuracy of off-policy evaluation (OPE) strongly depends on the calibration of the estimated logging probabilities. The research specifically highlights that ApproxKNN exhibits the lowest calibration error, leading to the most accurate OPE. However, the paper does not explore how to calibrate the estimated logging policy models for better OPE.
Following the reviewer’s suggestion, we include two new baselines: 1) ApproxKNN, following [1]; 2) IPS-C-TS: IPS combined with logging probabilities calibrated via temperature scaling [1]. We adopted temperature scaling for calibration for two reasons: 1) the logging policy is inherently a probability distribution over actions, and 2) temperature scaling has been widely acknowledged as one of the most effective calibration methods in multiple classification settings.
The average performance and standard deviations of ApproxKNN, IPS-C-TS and UIPS on both real-world and synthetic datasets are reported in **Table 2 and 3 in the PDF attached in the global response**.
We found that both ApproxKNN and IPS-C-TS generally achieved better performance than BIPS-Cap, implying the effectiveness of calibration. However, UIPS still consistently outperformed both ApproxKNN and IPS-C-TS, particularly on real-world datasets.
The main reason is that calibration primarily focuses on adjusting the predicted probabilities to ensure **on average** the model's predictions are reliable and accurate. In contrast, UIPS specifically handles the impact from each individual sample in policy learning. Moreover, a perfectly calibrated model is clearly beneficial for IPS, but a perfect model for IPS is not necessarily calibrated: a scaled version of the ground-truth logging model is well-suited for IPS, but terrible in calibration. Hence, a small calibration error can lead to a large IPS error and therefore a poorly learned policy.
[1] Raghu A, Gottesman O, Liu Y, et al. Behaviour Policy Estimation in Off-Policy Policy Evaluation: Calibration Matters[J].
[2] Guo C, Pleiss G, Sun Y, et al. On calibration of modern neural networks. ICML 2017.
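For illustration, a minimal sketch of temperature scaling as used for the IPS-C-TS baseline; in practice the temperature $T$ is fit by minimizing NLL on held-out data, which is omitted here, and all numbers are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Calibrate a policy model's action logits with a single scalar
    temperature T (T > 1 softens the distribution, T < 1 sharpens it).
    Scaling by T preserves the argmax action."""
    return softmax(logits / T)

logits = np.array([2.0, 1.0, 0.0])
p_raw = softmax(logits)
p_cal = temperature_scale(logits, T=2.0)
# Calibrated distribution is flatter but keeps the same top action.
assert np.argmax(p_raw) == np.argmax(p_cal)
assert p_cal.max() < p_raw.max()
```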
> [Q3] "CE performs quite well and is not substantially outperformed by UIPS. "
We first want to clarify that UIPS consistently exhibits strong performance over CE, particularly on the real-world datasets. The following table shows the relative improvement ratio of UIPS over CE on three real-world datasets regarding Recall@K, Precision@K and NDCG@K.
| | Yahoo | Coat | KuaiRec |
|----------------------------------|-------|------|---------|
| improvement ratio on Recall@K | 1.7% | 2.8% | 3.6% |
| improvement ratio on Precision@K | 1.9% | 3.0% | 4.2% |
| improvement ratio on NDCG@K | 3.3% | 1.0% | 4.1% |
We can observe that on KuaiRec dataset, characterized by a large action space and sparse interactions (which is common in real-world scenarios), UIPS achieves approximately a 4% improvement over CE in terms of Recall@K, NDCG@K, and Precision@K metrics.
As shown by the recent literature [2,3,4] referenced in our answer to CQ3, an improvement at this scale is regarded as significant for our adopted metrics. In particular, an improvement ratio of around 2% in these offline metrics can lead to enhanced online performance, resulting in increased GMV/transactions or longer user stay time.
> [Q4] "Performance difference to the best baseline."
Please find our answer to CQ3 in general response to all reviewers.
> [Q5] "Performance of the (true) logging policy."
Please find our answer to CQ2 in the general response to all reviewers.
> [Q6] "When does UIPS become really crucial? That is, are there any situations where UIPS performs well while all other methods do not work satisfactory."
We thank the reviewer for posing this intriguing question. Empirically, Table 1 and Table 4 demonstrate that UIPS offers distinct advantages in scenarios where the ground-truth logging policy is skewed (indicated by smaller $\tau$ values) or when dealing with larger action spaces and sparse interactions (such as the KuaiRec dataset). This is primarily due to the fact that in such scenarios, the accurate estimation of logging probabilities becomes challenging, amplifying the adverse impact of inaccurate logging policies on overall performance. Considering that in most real-world scenarios, the logging policy tends to be skewed, accompanied by large action spaces and sparse interactions, UIPS holds practical applicability and relevance.
Above empirical observation is also supported by theoretical findings presented in Theorem 3.4.
This theorem suggests that with small and inaccurate estimated logging probabilities, particularly when $\beta^*(a|\boldsymbol{x}) \geq 2 \hat{\beta}(a|\boldsymbol{x})$, UIPS can still be guaranteed to converge to a stationary point with the ground-truth policy gradient being zero.
However, the convergence of BIPS is unknown.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' clarifications. Most of my main concerns were addressed nicely. I still think that CE is quite impressive given its simplicity and effectiveness, and thus it might be preferred in practice compared to UIPS (of course the datasets used in the experiments are substantially smaller compared to those of the industry, so it might not be the problem in the empirical analysis.) However, I also acknowledge the importance of studying how well we can do at best under such uncertainty in research and I can increase my score to 5 to indicate that at least I am no longer on the negative side.
> Empirically, Table 1 and Table 4 demonstrate that UIPS offers distinct advantages in scenarios where the ground-truth logging policy is skewed (indicated by smaller $\tau$ values) or when dealing with larger action spaces and sparse interactions (such as the KuaiRec dataset).
This actually seems to imply, at first glance, that the advantage of UIPS comes from the fact that it unintentionally deals with high variance of typical estimators such as IPS and SNIPS, rather than via dealing with the uncertainty in logging policy estimation. However, the author indeed compared the variance reduction method such as Shrinkage from [Su et al. 20], so at least empirically, dealing with the uncertainty seems to have an additional positive effect on the policy performance. I think this is a very interesting point, and it would be nice to add this discussion in the revision.
---
Reply to Comment 1.1.1:
Title: Reply to Official Comment by Reviewer 19wU (Part 1)
Comment: We highly appreciate the reviewer’s timely feedback and acknowledgement that our responses were helpful in addressing most of the reviewer’s concerns!
> “I still think that CE is quite impressive given its simplicity and effectiveness, and thus it might be preferred in practice compared to UIPS.”
We would like to highlight the practical benefits of UIPS over CE with the following additional notes.
Firstly, we should emphasize the remarkable and statistically significant improvements of UIPS over CE in all our experiments, as discussed in our previous response to reviewer’s question Q3. An improvement at this scale, as indicated by recent literature [1,2], is very likely to lead to a substantial increase in GMV/transactions of the platform, resulting in billions of profits in practical industry applications. As a result, we firmly believe UIPS has strong appeal and practical value.
Furthermore, recent work [3] has demonstrated the benefits of off-policy algorithms over CE in an industry recommender system with an action space in the orders of millions. Their findings specifically indicate that directly learning from the logged feedback (i.e., CE) is subject to biases caused by solely observing feedback on logged recommendations, resulting in the 'richer get richer' effect or popularity bias. Off-policy correction methods, such as BIPS in the study, effectively mitigate these biases.
As an off-policy learning algorithm, UIPS naturally inherits the aforementioned benefits. To verify this, the table below depicts the frequency at which different algorithms tend to recommend the most popular items in the logged training set of the KuaiRec dataset. A higher recommendation ratio signifies that the algorithm is more influenced by the logging/popularity bias present in the training data, and thus amplifying the "rich get richer" phenomenon.
| | CE | BIPS | POXM | UIPS |
|---------------------------------------------|----|------|------|------|
| recommendation_ratio of top-10% popular items | 0.445 | 0.296 | 0.476 | 0.297 |
| recommendation_ratio of top-20% popular items | 0.749 | 0.617 | 0.748 | 0.585 |
Recall that POXM is the best baseline on the KuaiRec dataset. While UIPS significantly outperformed BIPS in terms of recommendation accuracy (Table 1 and 4), it is noteworthy that UIPS also exhibits a tendency to recommend less popular items compared to CE, highlighting its effectiveness in mitigating the 'richer get richer' effect while maintaining the quality of recommendation.
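The recommendation-ratio metric in the table above can be sketched as follows; the function name and toy numbers are hypothetical, and the actual evaluation protocol on KuaiRec may differ in detail:

```python
import numpy as np

def popularity_recommendation_ratio(recommended, train_counts, top_frac=0.1):
    """Fraction of recommendations that fall among the top-`top_frac`
    most popular items of the logged training set. A higher ratio means
    the algorithm is more influenced by popularity/logging bias.

    recommended: 1-D array of recommended item ids
    train_counts: train_counts[i] = #logged interactions of item i
    """
    n_top = max(1, int(len(train_counts) * top_frac))
    popular = set(np.argsort(train_counts)[::-1][:n_top].tolist())
    hits = sum(1 for item in recommended if item in popular)
    return hits / len(recommended)

# Toy example: 10 items, item 0 is by far the most popular.
counts = np.array([100, 5, 4, 3, 3, 2, 2, 1, 1, 1])
recs = np.array([0, 0, 3, 7])
assert popularity_recommendation_ratio(recs, counts, top_frac=0.1) == 0.5
```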
Lastly, UIPS does not incur significant computational overhead, as discussed in Section 7.1 of the appendix. The estimation of the logging policy can also be further simplified by parameterizing the learning policy and logging policy within one network and learning them simultaneously [3].
[1] Zheng et al. Multi-Objective Personalized Product Retrieval in Taobao Search. KDD 2021.
[2] Li et al. Embedding-based Product Retrieval in Taobao Search. KDD 2021.
[3] Chen et al. Top-k Off-Policy Correction for a REINFORCE Recommender System. WSDM 2019.
---
Reply to Comment 1.1.2:
Title: Reply to Official Comment by Reviewer 19wU (Part 2)
Comment: > “This actually seems to imply, at first glance, that the advantage of UIPS comes from the fact that it unintentionally deals with high variance of typical estimators such as IPS and SNIPS, rather than via dealing with the uncertainty in logging policy estimation. However, the author indeed compared the variance reduction method such as Shrinkage from [Su et al. 20], so at least empirically, dealing with the uncertainty seems to have an additional positive effect on the policy performance. ”
When using the estimated logging policy, samples with either **high estimation uncertainty** or **small estimated probability** tend to introduce high variance and high bias, thereby impeding subsequent off-policy learning. This is demonstrated in Proposition 2.1 of our paper. Figure 1 further illustrates that these two factors are usually accompanied, exacerbating their detrimental effects.
As a result, variance reduction methods, such as Shrinkage from [Su et al. 20], which solely handle small estimated probabilities, cannot handle all situations and thus performed worse than UIPS. In contrast, UIPS effectively handles **both high estimation uncertainty and small estimated probability** through incorporating an uncertainty-aware sample weight to minimize the mean squared error (MSE) of the estimator to its ground-truth value (line 155-161), leading to its strong performance. Again, as we explained in the rebuttal, the design of UIPS is top-down: from the principle of minimizing the MSE of the offline estimated policy value, to estimating the per-sample weights to control impact from samples with high estimation uncertainty and small estimated probability.
This also explains why UIPS works especially well in scenarios where the ground-truth logging policy is skewed (e.g., smaller $\tau$ value in Table 1) or when dealing with larger action spaces and sparse interactions (e.g., on our real-world datasets). In order to achieve good performance in such situations, effective handling of both high estimation uncertainty and small estimated probability becomes crucial. | Summary: The paper considers a scenario in off-policy evaluation where we don't have access to the action probabilities of the logging policy, which we need to compute the propensities in IPS. Prior work would estimate these probabilities from data, but would ignore the uncertainties associated with these estimates. In this work the authors propose to re-weight the propensities based on these uncertainties. The exact form of the weights is derived by minimizing an upper bound on the MSE. The resulting method consistently beats SOTA baselines on both toy and real datasets.
Strengths: A novel IPS variant that is grounded in and backed by theory (i.e. is derived by minimizing the MSE of the estimator), and is designed to solve a concrete problem in existing methods.
Solid experimental methodology, sufficient details on the experimental setup to enable reproducibility.
Strong results: the proposed method beats a broad list of SOTA baselines on both toy and real datasets.
Weaknesses: I have doubts re. the importance of the problem that the method is solving: we can simply log the propensities, right? How common is it in practice that these propensities are not available?
At the same time, I see that in some experiments the method improves upon using GT propensities. This would be a good selling point of the method, but it goes against the initial motivation. How can a method that takes uncertainty into account be better than having _no_ uncertainty? This warrants more discussion in the paper, and makes me wonder if there are alternative explanations to why the method works.
In general, I've found the manuscript to be rather dense and hard to read in places. I would've liked to see more intuition and/or pedagogical examples to understand what the method does exactly. E.g. on line 158: "UIPS assigns them with an increasing weight [..] as uncertainty increases" -- why does it make sense?
The quality of the write-up could be improved: a few grammar mistakes, figure captions could be more informative, some opaque references to prior work could be expanded to make the paper more self-contained.
The related work could be stronger: lines 301 - 320 largely repeat the introduction. What about other methods in on-policy/off-policy that exploit the uncertainties? UCB in bandits, for example?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How sensitive is the method to the uncertainty estimation approach? Does something simple like MC-Dropout produce similar results?
Also see a few questions in the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Little discussion of limitations: e.g. what about additional cost due to having to estimate uncertainties? Limited societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer YnY8
We thank the reviewer for the positive comments on our work and valuable suggestions. We have clarified several important arguments as outlined below. And we will diligently address other suggestions by further polishing our paper, making the figure captions more informative, and enhancing the references accordingly.
> [Q1] " How common is it in practice that these logging propensities are not available?"
The absence of ground-truth logging probabilities and taking the estimated logging policy for off-policy learning has been a long-standing practice in the off-policy learning literature [1-3].
There are several reasons that hinder the recording of logging probabilities.
First, in some situations, such as the healthcare domain discussed in [1] or industrial recommender systems as discussed in [2], access to the ground-truth logging policy is not feasible.
Another important reason is legacy issues, i.e., the probabilities were not logged when collecting data (e.g., space efficiency was prioritized when designing the logging system). However, even if one is willing to bear the high cost in time and resources to re-collect data with “everything” logged, the newly collected data alone provide only a partial depiction of users' preferences. Hence, to gain a clearer understanding of users, leveraging their historical interaction logs becomes crucial, which necessitates estimating the logging probabilities.
Moreover, in certain cases, there may be security concerns where individuals/companies are unwilling to disclose the logging policy to prevent potential adversarial attacks, while these public resources may hold significant value. As a result, estimating the logging policy is still an important and necessary effort in practice.
[1] Raghu et al. "Behaviour policy estimation in off-policy policy evaluation: Calibration matters." arxiv 2018
[2] Chen et al. Top-k off-policy correction for a REINFORCE recommender system. WSDM 2019.
[3] Strehl et al. Learning from logged implicit exploration data. NeurIPS 2010.
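For context, a minimal sketch of the vanilla IPS value estimator that this discussion revolves around; when the estimated propensities $\hat{\beta}$ are inaccurate, the estimator loses its unbiasedness, which is the issue our paper addresses (the optional clipping shown here is a common variance-control heuristic, not our method):

```python
import numpy as np

def ips_value(rewards, pi_target, beta_hat, clip=None):
    """Vanilla IPS estimate of a target policy's value from logged data:
    V_hat = mean( r * pi(a|x) / beta_hat(a|x) ).
    beta_hat are *estimated* logging propensities; inaccuracies in them
    translate into bias and variance of the estimate."""
    w = pi_target / beta_hat
    if clip is not None:
        w = np.minimum(w, clip)  # cap extreme importance weights
    return float(np.mean(rewards * w))

# Four logged interactions with hypothetical propensities.
r = np.array([1.0, 0.0, 1.0, 1.0])
pi = np.array([0.5, 0.2, 0.4, 0.1])
beta = np.array([0.25, 0.4, 0.2, 0.5])
v = ips_value(r, pi, beta)  # ≈ 1.05
assert abs(v - 1.05) < 1e-9
```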
> [Q2] "Why do UIPS improve upon using GT propensities ?"
Please find our answer to CQ2 in the general response to all reviewers.
> [Q3] "More intuition and/or pedagogical examples to understand what UIPS does exactly."
Please find our answer to CQ1 in the general response to all reviewers. We will make further revisions to enhance the readability of the corresponding section.
> [Q4] " What about other methods in on-policy/off-policy that exploit the uncertainties? UCB in bandits, for example?"
In the context of on-policy RL/bandits, the use of uncertainty aims to strike a balance between exploration and exploitation by adopting an optimistic approach (i.e., UCB in bandits). On the other hand, most research on off-policy RL/bandits tends to be more conservative, employing techniques such as Lower Confidence Bounds (LCB) in bandits or penalizing samples with high uncertainty. But those principles are fundamentally different from what we developed in UIPS, which directly minimizes the mean squared error of off-policy evaluation. The closed-form solution of the resulting per-instance weight in UIPS reflects how uncertainty contributes to the policy evaluation error.
Our UIPS-O and UIPS-P baselines leverage uncertainties using the two aforementioned general principles respectively. However, empirical findings indicate that blindly penalizing or boosting instances based on uncertainty leads to inferior performance compared to UIPS, as they do not directly suggest how uncertainty in the estimated logging probability is related to policy evaluation.
We thank the reviewer once again for your suggestion regarding the related work. We will incorporate the aforementioned discussion to enhance the related work section.
> [Q5] "How sensitive is the method to the uncertainty estimation approach? Does something simple like MC-Dropout produce similar results?"
Our framework is agnostic to the uncertainty estimation methods, as long as the estimated uncertainty is reliable. In the paper, we conducted experiments using the uncertainty estimation framework described in [4] due to its computational efficiency and theoretical soundness. But alternative methods for estimating uncertainties can be readily incorporated into our framework. Inspecting the impact of different uncertainty estimation methods on the quality of the policy evaluation as well as the resulting policy optimization in UIPS is an important and interesting future direction.
[4] Xu P, Wen Z, Zhao H, et al. Neural Contextual Bandits with Deep Representation and Shallow Exploration. ICLR 2021.
> [Q6] "what about additional cost due to having to estimate uncertainties?"
The computational complexity of UIPS is discussed in Section 7.1 in the appendix. Given a logged dataset containing $N$ samples, an action space of size $A$, and latent dimension $d$, the computational cost of precomputing uncertainties of the logging probabilities is $O(Nd^2 + d^3)$, where $O(d^3)$ accounts for the matrix inverse and $O(Nd^2)$ for calculating the uncertainties of all samples.
Note that calculating logging probability for each sample, which is essential for both UIPS and all IPS-type algorithms, takes $O(NAd)$ time. But considering that the dimension $d$ is typically much smaller than the action size $A$ and sample size $N$, and with precomputed logging probabilities and uncertainties, UIPS can efficiently calculate the sample weight in $O(1)$ time during off-policy learning. Therefore, we can conclude that **UIPS does not introduce significant computational overhead compared to the original IPS**.
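To make the stated costs concrete, here is a minimal sketch (not the authors' code) of precomputing per-sample uncertainties of the form $\sqrt{x^\top A^{-1} x}$, as in shallow-exploration neural bandit methods; the exact uncertainty form, regularizer, and variable names here are assumptions for illustration:

```python
import numpy as np

# Illustrative sketch of precomputing uncertainties in O(N d^2 + d^3):
# a single d x d matrix inverse, then one quadratic form per sample.
rng = np.random.default_rng(0)
N, d = 10_000, 32                      # samples, latent dimension
X = rng.normal(size=(N, d))            # per-sample latent features

A = X.T @ X + np.eye(d)                # d x d design matrix, O(N d^2)
A_inv = np.linalg.inv(A)               # matrix inverse, O(d^3)

# per-sample uncertainty sqrt(x^T A^{-1} x), O(N d^2) in total
uncertainty = np.sqrt(np.einsum("nd,de,ne->n", X, A_inv, X))
```

Once `uncertainty` is precomputed, each per-sample weight during off-policy learning is an $O(1)$ lookup-and-arithmetic step, consistent with the claim above.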
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
> relying solely on newly-collected data provides only a partial depiction of users' preferences afterwards
Nit: while I understand the additional cost argument, I don't fully follow the "partial depiction" argument: wouldn't newer data provide a more up-to-date depiction of users' preferences?
In general, I appreciate the author's arguments for the problem importance, and I think future readers would benefit from a short summary of those in the introduction.
Unfortunately, I struggled to follow the intuition for why the relationship between the uncertainty and the sample weight is non-linear, as provided in the common response. Unpacking the argument further and being more rigorous could help. At the same time, the method is theoretically grounded -- while a clear, intuitive interpretation would be useful, I do not consider it to be essential.
Overall, the authors have addressed many of my concerns/questions, hence I increase my score.
---
Reply to Comment 1.1.1:
Title: # Reply to Official Comment by Reviewer YnY8
Comment:
We genuinely appreciate the reviewer's dedicated time, efforts, and invaluable input in responding to our submission and rebuttals. We are delighted to know that our responses effectively addressed most of the reviewer's concerns. Furthermore, we extend our gratitude to the reviewer for raising the recommendation.
Following the reviewer's suggestion, we will incorporate necessary discussions about the problem's importance into the introduction and expand our current discussion of the derived closed-form solution of the per-sample weight in the method section.
Here are some additional notes on the questions mentioned in the reviewer’s latest comment.
> “I don't fully follow the "partial depiction" argument: wouldn't newer data provide a more up-to-date depiction of users' preferences?”
We agree that newer data does have the advantage of providing a more up-to-date depiction of users' preferences. However, there are two potential drawbacks in solely relying on newer data. First, it would overlook aspects or patterns that were present in the past but are no longer captured in the recent data. Second, to get a more comprehensive picture, a longer window for data collection is needed, which however slows down model update and system optimization. Hence, effectively leveraging historical data together with any newer data is a more economical and preferred way to understand user preferences in large practical systems [1].
[1] Pi Q, Zhou G, Zhang Y, et al. Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. CIKM 2020.
> “why the relationship between the uncertainty and the sample weight is non-linear”
This is also something particularly interesting to us: the closed-form solution for the per-sample weight is rigorously derived based on the minimax optimization problem in Eq (8); and this optimization problem is formulated based on the principle of minimizing MSE of value estimation. Hence, this nonlinearity cannot be manually instructed beforehand. As our empirical study suggested, simply boosting or penalizing samples based on the uncertainty in their estimated logging probabilities did not work out, which further confirmed the validity of our derivation. This motivates us to look further into this new perspective in sample importance in off-policy learning. | Summary: This paper proposes an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for off-policy learning, taking into account the uncertainty in the estimated logging policy. The authors demonstrate that the commonly used method of estimating the logging policy can lead to biased estimators, particularly for samples with small estimated logging probabilities. UIPS addresses this issue by reweighting the propensity scores based on the uncertainty of the estimated logging policy. The paper provides a theoretical analysis of the convergence properties of UIPS and presents experimental results on synthetic and real-world recommendation datasets, comparing against state-of-the-art baselines.
Strengths: ● The paper addresses an important problem in off-policy learning and proposes a novel method, UIPS, to improve the quality of the discovered policy.
● The authors provide a comprehensive theoretical analysis of UIPS, including a convergence guarantee.
● The experimental results demonstrate the effectiveness of UIPS compared to some baselines on both synthetic and real-world datasets.
Weaknesses: ● There remain some issues unsolved in the paper, such as the availability of the logging policy. See the questions for details.
● There are some related works that are not mentioned in this paper. In off-policy RL, several papers work on behavior-agnostic instance reweighting [1,2]. They compute the prioritization weight without the need of obtaining a behavior policy. There are also papers that discuss the importance ratio term when applying RL to recommendation systems [3,4].
● Introducing another neural network to estimate $\beta^*$ will increase the system complexity and the computational cost during training and testing. This may hinder the practical application of the algorithm.
● The synthetic dataset and the offline evaluation can give biased evaluation results of the algorithms.
[1] Sinha, Samarth, et al. "Experience replay with likelihood-free importance weights." Learning for Dynamics and Control Conference. PMLR, 2022.
[2] Liu, Xu-Hui, et al. "Regret minimization experience replay in off-policy reinforcement learning." Advances in Neural Information Processing Systems 34 (2021).
[3] Cai, Qingpeng, et al. "Reinforcing User Retention in a Billion Scale Short Video Recommender System." arXiv preprint arXiv:2302.01724 (2023).
[4] Chen, Minmin, et al. "Off-policy actor-critic for recommender systems." Proceedings of the 16th ACM Conference on Recommender Systems. 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How is this paper related to behavior-agnostic methods [1,2,5]? With a GAN-like estimator, these methods no longer reconstruct all those behavior policies. They may also be regarded as baselines to compare with.
2. Why are the probabilities $\beta^*(a|x)$ not recorded in the data? With stochastic logging policies, it is easy to store probabilities together with states and actions when generating data. With deterministic logging policies, a common practice is to sample actions from a Gaussian distribution, with the policy output as mean and a certain standard deviation. The action probability will also be available.
3. Why do UIPS-O and UIPS-P lead to poor performance?
[5] Nachum, Ofir, et al. "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections." Advances in neural information processing systems 32 (2019).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The usage of off-policy correction and uncertainty-based reweighting is limited to policy-based techniques based on the REINFORCE trick. Such techniques can have higher variance than value-based techniques such as TD3 and SAC, and may lead to unstable training.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer QiPS
We thank the reviewer for valuable suggestions provided, which help clarify important arguments and enhance the overall quality of the paper.
> [Q1] "Why are the probabilities not recorded in the data?"
The absence of ground-truth logging probabilities, and hence the use of an estimated logging policy for off-policy learning, has been a long-standing assumption in the off-policy learning literature [6-8]. There are several reasons that hinder the recording of logging probabilities.
First, in many situations, such as the healthcare domain discussed in [6] or industrial recommender systems as discussed in [7], access to the ground-truth logging policy is not feasible.
Another important reason is due to legacy issues, i.e., the probabilities were not logged when collecting data (e.g., space efficiency was prioritized when designing the users' interaction logging system). However, even if one is willing to bear the high costs in time and resources to re-collect data with “everything” logged, relying solely on newly-collected data provides only a partial depiction of users' preferences afterwards. Hence, in order to gain a clearer understanding of users, leveraging their historical interaction logs becomes crucial, necessitating the estimation of logging probabilities.
Moreover, in certain cases, there may be security concerns where individuals/companies are unwilling to disclose the logging policy to prevent potential adversarial attacks, while these public resources may hold significant value. As a result, estimating the logging policy is still an important and necessary effort in practice.
[6] Raghu et al. "Behaviour policy estimation in off-policy policy evaluation: Calibration matters." arXiv 2018.
[7] Chen et al. Top-k off-policy correction for a REINFORCE recommender system. WSDM 2019.
[8] Strehl et al. Learning from logged implicit exploration data. NeurIPS 2010.
> [Q2]"Comparison with the behavior-agnostic methods[1,2]".
The main goal of work in [1,2] is to prioritize instances in the replay buffer for better TD learning, rather than accounting for uncertainty in the estimated logging policy for improved off-policy learning. However, we do acknowledge that their proposed solution for directly estimating the propensity ratio has the potential benefit of avoiding estimating the logging policy. To compare its effectiveness, we also included a new baseline called IPS-LFIW, which implements the approach proposed in [1] to directly estimate the propensity ratio for off-policy learning.
The average performance and standard deviations of IPS-LFIW and UIPS on three synthetic datasets are reported in **Table 1 in the PDF attached in the global response**.
Notably, UIPS consistently outperformed IPS-LFIW with statistically significant improvements.
One major reason for the worse performance of IPS-LFIW is that it does not consider the accuracy of the estimated propensity ratio, in a direct analogy to failing to handle uncertainty in the estimated logging probabilities in existing IPS-type algorithms.
Furthermore, another advantage of UIPS over the work in [1,2] is that UIPS provides a theoretical guarantee regarding the performance of the learnt policy (Theorem 3.4). In contrast, the behavior-agnostic methods in [1,2] do not offer such a guarantee.
> [Q3] "Difference between UIPS and the DICE line of work."
The difference has been discussed in Section 7.5 in appendix. To briefly recap, we demonstrated that in the contextual bandit setting, DualDICE degenerates to the IPS estimator that approximates the unknown ground-truth logging policy with its empirical estimate from the given logged dataset.
Denoting the adapted algorithm from DualDICE as DICE-S, we compared its performance against UIPS in Table 9 and 10 in appendix. We found DICE-S underperformed UIPS significantly in all datasets.
> [Q4] "The relevant papers on RL for recommendation systems [3-4]."
We will incorporate the discussion of them in the related work section. However, it is worth noting that none of the aforementioned works attempt to account for the inaccuracy of the estimated logging policy. In particular, Chen et al. [4] directly used the estimated logging probabilities as the ground-truth in their IPS estimator.
> [Q5] "Introducing another neural network to estimate $\beta^*$ will increase the system complexity and the computational cost during training and testing."
We thank the reviewer for pointing out the place that unfortunately caused misunderstanding.
For UIPS, both the estimated logging policy and its associated uncertainty can be pre-computed. The uncertainty can be directly estimated using the same model employed for estimating the logging policy (line 162 to 170), with no requirement for an additional neural network.
During training, with the precomputed logging probabilities and uncertainties, UIPS calculates the per-sample weight in O(1) time, and thus it incurs no additional computational cost and does not require an extra network. The evaluation is performed on the learnt policy. Further details regarding the computational cost can be found in Section 7.1.
> [Q6] "The synthetic dataset and the offline evaluation can give biased evaluation results of the algorithms."
While training is performed on non-uniform training datasets (collected under a specific logging policy), all algorithms are evaluated on the **unbiased test dataset**, either from randomized controlled trials (Yahoo & Coat) or a fully-observed interaction dataset (KuaiRec, synthetic datasets).
Consequently, all algorithms are evaluated in an unbiased manner, and such an evaluation setting is one of the preferred procedures in off-policy learning.
If the reviewer has any further concerns, please let us know. We are more than willing to engage in further discussion.
> [Q7] "Why do UIPS-O and UIPS-P lead to poor performance?"
Please find our answer to CQ1 in the general response to all reviewers.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. I am more pleased with the paper, and I wish you luck.
---
Reply to Comment 1.1.1:
Title: # Reply to Official Comment by Reviewer QiPS
Comment: We sincerely thank the reviewer for invaluable time and efforts in handling our submission. We are delighted to learn that our explanations in the rebuttal have addressed the reviewer’s concerns and the reviewer is satisfied with our submission. We are also excited about the theoretical validity and empirical effectiveness of our proposed UIPS algorithm for off-policy learning, and thus are eager to share it with the community.
Given the rebuttal period is coming to its end, we kindly inquire if there is any additional guidance or request that is necessary for the reviewer to consider increasing the evaluation of our work. Thank you. | Summary: This paper proposes UIPS, a method that models the uncertainty of the estimated logging policy to improve off-policy learning. It assigns weights to each observation instance instead of simply dropping those with high uncertainty. The paper deduces the optimal form of weights from minimizing the upper bound of the resulting estimator’s MSE. Then it gets the improved policy by a two-step iterative optimization. This method is evaluated on both synthetic data and three real-world datasets in which UIPS outperforms multiple baselines.
Strengths: The paper addresses an important problem in policy optimization and proposes an innovative way to handle the uncertainty in the logged data. It derives a closed-form solution for the upper bound of MSE and proves the convergence of the method. It also conducts extensive evaluation on synthetic and real-world dataset to demonstrate the effectiveness of UIPS.
Weaknesses: 1. For both synthetic and real-world data, although UIPS achieves the best results, it does not outperform the second-best method by a significant extent. It's not clear how this method will be useful in application.
2. The paper lacks a more detailed study and comparison with existing work on handling uncertainty. More explanation is needed on why UIPS is better compared to other methods.
3. Some derivations are not clear. For example, intermediate steps are needed to show the "log trick" used in formula (2). The figure illustration is not very clear either; for example, it is confusing to use "Log(item freq)" on the X-axis. In Tables 3 and 4, the metrics are not very clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How many iterations and how much time does the algorithm run in the evaluation? Does the result verify Theorem 3.4?
2. For Figure 1, the conclusion "items with lower frequencies in the logged dataset have lower estimated logging probabilities" is not a universal trend but only applies for frequency less than 7. How do you explain this observation, and does this affect your other results?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors are upfront about their limitations and listed future directions to address them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer ha2F
We appreciate the reviewer's positive feedback, insightful questions, and suggestions for improving the paper. We will incorporate the suggested revisions, including providing a detailed derivation step for the 'log trick' and offering further explanations on the metrics used.
>[Q1] "Performance difference to the best baseline".
Please find our answer to CQ3 in the general response to all reviewers.
>[Q2] "The paper lacks a more detailed study and comparison with existing work on handling uncertainty. "
To the best of our knowledge, our work is the first to explicitly model uncertainty in the estimated logging policy for improved off-policy learning. We have noticed that some work on offline RL also utilizes uncertainties to address OOD issues or extrapolation errors. However, the main idea of those studies is to penalize states/state-action pairs with high uncertainties, which is fundamentally different from the principle in UIPS. Please also refer to our answer in [CQ1].
To understand the impact of these different choices in leveraging uncertainty, we introduced the baseline UIPS-P, which always penalizes samples with high uncertainty, and UIPS-O, which always boosts samples with high uncertainty. The clearly worse performance of UIPS-P and UIPS-O suggests that blindly reweighting through uncertainties is not effective, regardless of the scale of propensity scores.
If the reviewer is aware of any other relevant work that leverages the uncertainty of estimated logging policy for off-policy learning, please let us know. We are more than happy to discuss this further.
> [Q3] " Use Log(item freq)" on the X-axis in Figure 1."
The range of values for X is immensely broad in the real-world KuaiRec dataset, spanning multiple orders of magnitude. Consequently, we employ a logarithmic scale to ensure the readability of the figure.
> [Q4] In Tables 3 and 4, the metrics are not very clear.
Let us first briefly recap the motivation behind the experiments in Table 3. UIPS is divided into two steps:
- Derive the optimal instance weight $\phi_{\boldsymbol{x},a}$, so that $\hat{V}\_{\rm UIPS}(\pi_{\boldsymbol{\vartheta}})$ in Eq. (4) approaches the ground-truth $V(\pi_{\boldsymbol{\vartheta}})$ as closely as possible.
- Update $\pi_{\boldsymbol{\vartheta}}$ by maximizing $\hat{V}\_{\rm UIPS}(\pi_{\boldsymbol{\vartheta}})$.
Thus, Table 3 evaluates the mean square error (MSE) of $\hat{V}_{{\rm UIPS}}$ and the estimators used in the related baselines in approximating the ground-truth $V$. A smaller MSE indicates a higher accuracy. The results reveal that incorporating uncertainty makes the UIPS estimator the most accurate.
In Table 4, we employed widely adopted offline metrics in recommendation algorithms, namely Recall@K, NDCG@K, and Precision@K, to evaluate the learnt policy $\pi_{\vartheta}$ for recommendation. Their definitions are described in line 233-235, with provided references.
Due to space constraints, we are unable to present the detailed definitions here. However, we are more than willing to discuss further details during the discussion stage if the reviewer expresses interest.
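Since the rebuttal defers the metric definitions to the paper, a minimal sketch of the standard Recall@K definition may help; the function and variable names below are illustrative, not from the paper:

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of the ground-truth relevant items that appear
    in the top-k positions of the ranked recommendation list."""
    top_k = set(ranked_items[:k])
    return len(top_k & set(relevant_items)) / len(relevant_items)
```

NDCG@K and Precision@K follow the same top-K pattern, with position-discounted gains and top-K normalization respectively.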
>[Q5]"How many iterations and how much time does the algorithm run in the evaluation? Does the result verify Theorem 3.4?"
The objective of Theorem 3.4 is to demonstrate the optimality of the learned policy, rather than its convergence rate. The theorem suggests that even without direct access to the true policy gradient (due to the unknown ground-truth logging policy), UIPS converges to a stationary point where the true policy gradient approaches zero.
The computational complexity of UIPS is discussed in Section 7.1 in appendix. Notably, both the estimated logging policy and its associated uncertainty can be pre-computed, resulting in no additional computational cost during the off-policy learning process of UIPS. Empirically, we also found that UIPS achieved the optimal performance using approximately the same number of epochs as in the BIPS-Cap baseline.
>[Q6] "For Figure 1, the conclusion "items with lower frequencies in the logged dataset have lower estimated logging probabilities" is not a universal trend but only applies for frequency less than 7. How do you explain this observation, and does this affect your other results?"
Proposition 2.1 in the paper suggests that samples with either **high estimation uncertainty** or **small estimated probability** tend to introduce large errors in off-policy learning. And Figure 1 suggests that these two factors are usually accompanied, exacerbating such errors.
However, we should also emphasize that UIPS does not have any threshold to define what is small in the estimated logging probability, nor does it assume any monotonic relation among a sample’s estimation uncertainty, estimated logging probability, and observation frequency. The estimated uncertainty is leveraged in Eq (8) for policy learning, with a particular way of estimating it provided in lines 162 to 170. Hence, the reviewer’s observation in Figure 1 does NOT affect the application or performance of UIPS.
On the other hand, there could be many reasons why we did not see a universal trend in Figure 1. For example, in the model's learnt embedding space, some lower frequency items might be similar to some high frequency ones and therefore their estimated logging probabilities are not smaller than those with higher observation frequencies (e.g., around log frequency 7). But again, as long as we can quantify the associated estimation uncertainty, the proposed UIPS solution can be applied.
---
Rebuttal Comment 1.1:
Comment: While my overall decision of the paper remains the same, the authors' response is quite sufficient and is appreciated. They more or less address my concerns.
---
Reply to Comment 1.1.1:
Title: Reply to Official Comment by Reviewer ha2F
Comment: Thank you for your timely feedback. We are very glad to find that the reviewer is satisfied with our submission and found our explanations in the rebuttal sufficient. We ourselves are also excited about this work, because of its potential to provide a theoretically justified off-policy learning solution, especially when learning from offline data with no knowledge of the logging policy.
We are more than happy to know if there is anything that is necessary and may potentially convince the reviewer to increase the recommendation, which would help us improve both the quality and visibility of this work. | Rebuttal 1:
Rebuttal: # General Response
We thank all reviewers for their insightful comments and suggestions, which will significantly help us strengthen our paper. In the following, we will first respond to the common suggestions from all reviewers, and then respond to each reviewer individually.
> [CQ1] "Explain what UIPS exactly does and why do UIPS-O and UIPS-P lead to poor performance?"
UIPS minimizes the mean square error of the estimated value of a learnt policy, via estimating a per-sample weight $\phi\_{\boldsymbol{x},a}$ using bi-level optimization defined in Eq (8). Eq (8) nicely leads to a closed-form of $\phi\_{\boldsymbol{x},a}$ (in Theorem 3.2), with very intuitive and insightful physical meanings (in line 155 to 161):
- For samples whose largest possible propensity scores are under control, i.e., $\frac{\pi_{\boldsymbol{\vartheta}}(a|\boldsymbol{x})}{\min \boldsymbol{B}_{\boldsymbol{x},a}} < \sqrt{\lambda}$, higher uncertainty in the estimated logging probability implies smaller values of $\pi / \hat{\beta}$ and even smaller values of $\pi(a|\boldsymbol{x})$.
This suggests samples of this type with positive rewards are underestimated, and the degree of underestimation increases with larger uncertainty.
UIPS thus chooses to increase their weights as uncertainty increases, to emphasize these long-tail positive samples.
- Conversely, for samples with large propensity scores, UIPS decreases the weights of these samples as the uncertainty increases, so as to prevent their distortion in policy learning.
We should emphasize that these insights were purely extracted by the closed-form solution of $\phi\_{\boldsymbol{x},a}$, rather than manually injected beforehand.
The learning problem induced by UIPS also has a theoretical guarantee on the learnt policy (Theorem 3.4), which suggests that with a high probability UIPS can converge to a stationary point where the ground-truth policy gradient is zero.
In contrast, UIPS-P and UIPS-O blindly penalize or boost samples based on uncertainties, without considering their impact on the accuracy of the resulting estimator and the learnt policy.
As a result, they either overlook long-tail positive samples with high uncertainty or become distorted by samples with high uncertainties. This ultimately leads to their inferior performance.
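A schematic sketch of the qualitative reweighting behavior described in this response may help; note this is NOT the paper's derived closed-form $\phi_{\boldsymbol{x},a}$ — the functional forms, threshold handling, and names are assumptions purely for illustration:

```python
def schematic_weight(ratio, uncertainty, sqrt_lambda=2.0):
    """Qualitative sketch: ratio = pi(a|x) / beta_hat(a|x),
    uncertainty >= 0 is the logging-probability uncertainty."""
    if ratio < sqrt_lambda:
        # likely-underestimated long-tail sample:
        # boost its weight as uncertainty grows
        return 1.0 + uncertainty
    # large propensity score: shrink the weight as uncertainty
    # grows, to limit distortion of policy learning
    return 1.0 / (1.0 + uncertainty)
```

By contrast, UIPS-P would shrink and UIPS-O would boost in *both* branches, which is exactly the blind reweighting that the empirical results show to be ineffective.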
> [CQ2]"Provide the results relative to the performance of the (true) logging policy, and explain why UIPS improves upon using GT propensities on some datasets."
We have included the IPS-GT baseline in our evaluation on synthetic datasets, which represents the performance of an IPS estimator utilizing the ground-truth logging probabilities.
We can observe from Table 1 that UIPS achieved similar and even better performance than IPS-GT when the ground-truth logging policy is skewed, specifically when $\tau=0.5$ and $\tau=1$. This is because IPS-GT suffers from high variance due to small logging probabilities associated with a skewed ground-truth policy. However, UIPS achieves a better bias-variance trade-off by effectively controlling the negative impact of these high-variance samples.
When the ground-truth logging policy is smoother (e.g., $\tau=2$), the variance of the IPS estimator becomes much smaller, and off-policy correction with the ground-truth logging probabilities leads to better model performance. But UIPS still outperformed all baselines without accessing the ground-truth logging policy.
> [CQ3] “Clarification on the performance.”
To evaluate the significance of improvement, we performed a t-test between the performance of UIPS and the best baseline on all datasets over 10 random trials created by distinct random seeds. Note that the hypothesis testing was conducted at the trial level rather than the instance level.
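The trial-level significance test described above can be sketched as a paired t-test over the 10 seeds; the numbers below are made up for illustration, not the paper's results:

```python
import numpy as np

# One performance value per trial (random seed), paired across methods.
uips          = np.array([0.312, 0.305, 0.318, 0.309, 0.315,
                          0.311, 0.308, 0.317, 0.310, 0.314])
best_baseline = np.array([0.301, 0.298, 0.306, 0.300, 0.304,
                          0.299, 0.297, 0.305, 0.302, 0.303])

diff = uips - best_baseline                     # per-trial paired differences
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

# two-sided critical value at alpha = 0.05 with df = 9 is about 2.262
significant = t_stat > 2.262
```

Testing at the trial level (rather than the instance level) avoids inflating the sample size with correlated per-instance observations.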
Table 1 (synthetic datasets) and Table 4 (real-world datasets) demonstrate that **the best baseline varies across datasets** and also **under different metrics**, i.e., no consistent best baselines. In contrast, **UIPS consistently outperformed the best baseline across all metrics with a high level of statistical significance.** This proves the practical generality and applicability of UIPS.
Furthermore, the adopted offline metrics, namely Recall@K, NDCG@K, and Precision@K, have been demonstrated to align well with the online performance of recommendation algorithms [1]. As shown in recent literature [2,3,4], similar improvements on these offline metrics as UIPS has achieved, already suggest enhanced online performance, such as increase in GMV/transactions or longer staytime.
Finally, the benefit of UIPS has also been theoretically guaranteed (Theorem 3.4).
[1] Wang X, et al. How well do offline metrics predict online performance of product ranking models? SIGIR 2023.
[2] Zheng et al. Multi-Objective Personalized Product Retrieval in Taobao Search. KDD 2021.
[3] Li et al. Embedding-based product retrieval in Taobao search. KDD 2021.
[4] Zhang et al. Disentangled Representation for Diversified Recommendations. WSDM 2023.
Pdf: /pdf/adf9db1927992de76180f272a2a46a81e4074004.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Alternating Updates for Efficient Transformers | Accept (spotlight) | Summary: This work proposes an efficient way to increase the width of transformer models, i.e. Alternating Updates (AltUp). AltUp can increase the width of an existing model with little computational overhead. The authors evaluate their approach on T5 models and observe improvements on well-established benchmarks including SuperGLUE, SQuAD, and TriviaQA.
Strengths: 1) This paper is well-motivated. Scaling efficiently is indeed an important topic. Considering how useful scaling is, reducing the cost of scaling up is very helpful to our community.
2) Authors conduct a comprehensive evaluation and explored various modifications of this approach, such as Recycled-AltUp and Sequence-Altup.
Weaknesses: My main concern is the effectiveness of the proposed method. In Figure 4, we can see that AltUp does not bring a very significant improvement. For instance, in Figure 4b, if we connect B+AltUp and L+AltUp, the L model without AltUp falls almost exactly on that line. Admittedly, since the x-axis is on a log scale, this line-connecting argument is not entirely fair; still, it suggests that the improvement from AltUp is indeed not very significant.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Would AltUp introduce more activation memory? If yes, although AltUp introduces few additional FLOPs, given fixed hardware the maximum batch size would be smaller. For instance, if the original T5-Base can use batch size 256 on 16 TPUs, with AltUp it might only be able to use batch size 128. Then, to support a larger global batch size, you would have to use gradient accumulation. This issue would be more serious for longer sequences.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The theoretical analysis is good. But as stated in the weaknesses section, my main concern is whether AltUp works well enough for more people to try and adopt it. Since trying a new approach is expensive when scaling transformers, if AltUp does not demonstrate strong enough results in the official paper, people will not actually use it, and this work would then have little impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful consideration of our paper and constructive feedback.
### Effectiveness of AltUp
We would like to point out that AltUp achieves significant speedups in *wall-clock time* (not only theoretical FLOPS) compared to dense baselines as shown in our evaluations in Sec. 5. In fact, AltUp enables up to 87% speedup relative to the dense baselines at the same accuracy for SQuAD and is highly effective on the other evaluated datasets as well.
Regarding your point about Figure 4b: here, we show the comparison of T5 dense baselines and compare them to T5 Base + AltUp2x and T5 Large + AltUp2x. We use a log-scale because it is standard in prior work in this area, e.g., in scaling laws for neural language models [1,2]. As we can see in Figure 4b, there is a clear trend of diminishing returns (even in the log-scale, even for the baselines) as we go from T5L to T5XL. This means that it is not entirely fair to connect the B+AltUp point to L+AltUp and state that L lies on this line, since L+AltUp is well past the point of diminishing returns that the models experience as we increase their capacity. We would like to highlight that SuperGLUE is particularly exceptional in terms of this diminishing returns phenomenon. For example, if we conduct the same point connecting procedure in any of the other plots (GLUE, SQuAD, Trivia-QA), we see that the dense baselines fall well below the lines formed by AltUp’s data points.
In order to make this comparison for SuperGLUE more direct, we compared L + AltUp with a size-matched dense baseline – T5L+4, which adds 4 encoder layers and 4 decoder layers to the T5L model. Note that this model has the same number of parameters as L + AltUp, but uses more computation and is roughly 10% slower than L + AltUp. On SuperGLUE, T5L+4 achieves an average score of 82.57, versus the average score of 82.75 for L + AltUp. Therefore, AltUp achieves better quality even when compared to a compute-heavier dense baseline. Note that this is specific to SuperGLUE, and we see larger speedups – up to 87% – on other datasets. We would be happy to include this discussion in our final submission.
Moreover, we would like to highlight that the lightweight version of AltUp, Recycled-AltUp (from Sec. 4), adds virtually no additional latency or additional parameters to the dense baseline models and strictly improves their performance (see Fig. 5). In Table 8 of the appendix, we also demonstrate that Recycled-AltUp leads to similar gains as AltUp does on GLUE, SuperGLUE, SQuAD, and TriviaQA. Given the lightweight and latency-matching nature of Recycled-AltUp, these improvements directly translate to clear gains over the dense baselines. We will clarify this point in our revision and highlight the effectiveness and practicality of AltUp.
### AltUp and Activation Memory Footprint
Although AltUp has a higher activation memory footprint compared to the baseline *during training*, this additional footprint is largely negligible when compared to the activation memory footprint of the baseline. Thus, AltUp would not require a drastic reduction in batch size in practical applications.
Following the computations in “Reducing Activation Recomputation in Large Transformer Models” [1], a transformer model with $L$ layers, model dimension $h$, batch size $b$, sequence length $s$ and number of attention heads $a$ has total activation memory of
$ s b h L (34 + 5 a s / h) $.
When we add AltUp (with $K=2$) to this model, the additional activation memory due to AltUp is
$ (s b h + 2 s b h) L = 3 s b h L$,
which is less than 10% of the vanilla transformer’s activation memory footprint. Moreover, a significant portion of memory is used by the weight parameters which only increase marginally with AltUp. Overall this results in a <10% additional memory usage with AltUp. In addition, since the additional blocks are inexpensive to recompute, we can also recompute these blocks in the backprop without storing them. Model parallelism and techniques such as gradient checkpointing, sequence parallelism, or selective activation recomputation can mitigate the impact of a wider activation vector even further.
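To sanity-check the rebuttal's <10% claim: the overhead ratio simplifies to $3 / (34 + 5as/h)$, which is below $3/34 \approx 8.8\%$ for any positive shapes. A quick illustrative check (the T5-Base-like shape values below are our assumptions, not numbers from the paper):

```python
def baseline_activation_memory(s, b, h, L, a):
    # total activation memory of a vanilla transformer, per the formula above
    return s * b * h * L * (34 + 5 * a * s / h)

def altup_extra_activation_memory(s, b, h, L):
    # additional activations with AltUp (K=2): (s*b*h + 2*s*b*h) * L = 3*s*b*h*L
    return 3 * s * b * h * L

# assumed T5-Base-like shapes (illustrative only)
s, b, h, L, a = 512, 8, 768, 12, 12
overhead = altup_extra_activation_memory(s, b, h, L) / baseline_activation_memory(s, b, h, L, a)
# overhead = 3 / (34 + 5*a*s/h) = 3/74, roughly 4%, comfortably below 10%
```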
During inference, the memory footprint of activations is mostly due to the size of the KV cache. In AltUp, since each transformer layer takes a single sub-block, the additional activations introduced by AltUp do not show up in the KV cache.
[1] https://arxiv.org/abs/2203.15556
[2] https://arxiv.org/pdf/2001.08361.pdf
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. My concerns are resolved. I have decided to raise my rating to 6.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are happy to hear that we were able to address your concerns in our rebuttal!
It seems that the score is not updated in the original review. Could you please update it at your convenience?
Thank you again for your time and consideration of our paper. | Summary: The paper proposes a novel technique named “AltUp” to expand Transformer’s feature dimension while preserving the computation cost. The key idea of AltUp is to divide wide hidden features into multiple blocks, where only one block is processed by Transformer sub-layers, while the other blocks are computed through a linear combination of all blocks. The position of the updated block alters across layers. Furthermore, the authors present two variants of AltUp, “Recycled-AltUp” and “Sequence-AltUp”, which focus on reducing the embedding parameters and effective sequence length, respectively. Experimental results show that the models trained with AltUp are up to 87% faster in inference compared to the model with the same performance.
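The predict-compute-correct update described in this summary can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: `P` and `C` stand in for the learned prediction and correction coefficients, and scalars stand in for the $d$-dimensional sub-blocks.

```python
def altup_layer(blocks, layer_fn, active_idx, P, C):
    # blocks: the widened representation split into K sub-blocks
    # (scalars here as stand-ins for d-dimensional vectors)
    K = len(blocks)
    # Predict: each block's next value is a linear combination of all current blocks
    predicted = [sum(P[i][j] * blocks[j] for j in range(K)) for i in range(K)]
    # Compute: only the active sub-block is processed by the d-dimensional layer
    computed = layer_fn(blocks[active_idx])
    # Correct: mix the computed result back into every predicted block
    return [predicted[i] + C[i] * (computed - predicted[active_idx]) for i in range(K)]

# the active block index alternates across layers, e.g. 0, 1, 0, 1, ... for K = 2
out = altup_layer([1.0, 0.0], lambda x: 2.0 * x, active_idx=0,
                  P=[[1.0, 0.0], [0.0, 1.0]], C=[1.0, 0.5])
# out == [2.0, 0.5]
```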
Strengths: * The concept of “widening representations” is unique and hasn’t been explored much before. This approach effectively disentangles the computation and feature dimensions. Importantly, the paper introduces a non-trivial solution that minimizes additional computation costs.
* AltUp demonstrates promising speedup across various model sizes, with K=2 generally performing better than the original version.
Weaknesses: * Experiments are conducted on T5 models; however, the paper could be strengthened if encoder-only or decoder-only architectures are also evaluated.
* The concept of “Sequence-AltUp” may be considered somewhat different from the “widening representations”. Maybe this could be justified by “effective width for each token” … More clarification would be helpful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * (Minor) Is Conformer (L214) an appropriate example for “striding operation”?
* (Suggestion) I basically agree with that ‘large K leads to less frequent activation’ (L279-280), but could this issue be compensated by 2-4x longer training steps?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have discussed the limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and insightful suggestions. Please find below our specific responses.
1. Thank you for suggesting experiments on other architectures. We conducted a preliminary study on a lightweight BERT model with 12 layers, a model dimension of 256, and 4 attention heads. On the masked language modeling pretraining task (trained on the BERT pretraining data), we observe that lightweight BERT achieves 54.7 MLM accuracy while lightweight BERT + AltUp2x achieves 56.2 MLM accuracy. We will include these empirical results in the updated version of the paper.
2. Thank you for pointing out this potential source of ambiguity and for your suggestion. You are fully right that Sequence-AltUp is different from the widened representations idea since it applies the predict-compute-correct idea of Algorithm 1 to the sequence dimension but does not increase the sequence dimension. We revised our paper to clarify the text surrounding Sequence-AltUp based on your helpful suggestion of framing it as increasing the effective width of each token.
3. You are right that Conformer is not an appropriate example for “striding operation” on Line 214. We have updated the paper to remove this ambiguity. Thank you for your careful consideration of our work.
4. That is a great point – we definitely believe that a larger number of train steps can help with the infrequent activations that arise when we use a large value of $K$. Our ongoing work is focused on training recipes to enable the application of AltUp with a very large expansion factor $K$.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my points. I will keep my score. | Summary:
The study introduces Alternating Updates, a novel method to increase the capacity of transformer models without significantly raising latency. AltUp broadens the token representation, operating on a subblock of the widened representation at each layer, and employs a predict-and-correct mechanism for updating inactive blocks. AltUp also extends to the sequence dimension and can work synergistically with existing techniques like Sparse Mixture-of-Experts models. This allows for the creation of efficient models with high capacity. The effectiveness of the method is demonstrated across various scenarios, with notable performance improvements in language modeling and understanding benchmarks. It allows for a speedup of up to 87% on SuperGLUE and SQuAD benchmarks without accuracy loss.
Strengths: - the paper is scientifically sound and well written
Weaknesses: The authors do not mention other efficient transformer variants and their latencies (e.g., FlashAttention); therefore, it is hard to judge the model's performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - how the model would perform on the LRA benchmark?
- how is it comparable with other efficient attention variations?
- does the model scale to other domains beyond the GLUE and SuperGLUE benchmarks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have openly discussed some limitations of their work, recognizing that there is a lack of a deep theoretical understanding of the properties of their proposed technique, AltUp, given the complexity of analyzing transformer models. They acknowledge that the optimal hyperparameter K might vary on an application-specific basis and plan to investigate this in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer’s careful consideration of our paper and their helpful feedback. Please see our specific comments below.
### Comparison to other efficient transformer variants
As we mention in our response to Reviewer RDoa, the favorable properties of our method and its operation on the representation dimension make it difficult to directly compare to existing techniques which target different components of the architecture. For example, an MoE T5 model [1], Switch-Base, leads to a 2.5x speed-up measured in pretrain accuracy (not downstream evaluations) relative to a T5-Large dense model. However, Switch-Base contains 10x more parameters than a T5-Large dense model (7B vs. 0.7B), necessitates careful sharding along the model and expert dimensions, introduces auxiliary losses for load balancing and stability that need to be implemented, and requires tedious hyper-parameter tuning. Even when resources are available for sharding, there is a high degree of implementation and maintenance complexity involved with sharding-aware models. These challenges are in contrast to AltUp, which introduces an often negligible amount of additional parameters (especially with Recycled-AltUp), does not require sharding, and only introduces a single integral hyperparameter $K$ with a default value that works well across the scenarios we considered.
More generally, we view AltUp as an orthogonal method that can work synergistically with existing approaches like MoE or Flash Attention (which provides up to 3x speedup [2]). We will include this contextualization in our final submission so that it is easier to judge the improvements with AltUp.
### Remaining questions
1. In our work, we adopted the standard T5 training procedure, with an input sequence length of 512 and target sequence length of 114. This makes our trained models ill-suited for long input tasks, such as LRA. For example, LongT5 and CoLT5 models are T5 based models targeting long contexts and they are pre-trained with input sequence length of 4098 and target sequence length of 920. We leave long-context evaluations with our approach to future work.
2. As we highlight in our submission, our approach is a conditional computation approach where the conditionality is in activating a subblock of the token representation at each layer. So for AltUp with $K=2$, for example, the efficiency comes from using attention and MLP blocks that operate on a $d$-dimensional input, while we maintain a $2d$-dimensional representation from one layer to the next. This differs from the more well studied efficient attention work that focuses on the *sequence* dimension with the goal of reducing the quadratic dependence of attention on the sequence length, e.g., Longformer, Linformer, Performer, BigBird. As with MoE, these efficient attention mechanisms are orthogonal to our approach and can be combined synergistically.
3. In addition to GLUE and SuperGLUE benchmarks, our evaluations (Fig. 4 and Table 1) contain results for SQuAD and TriviaQA. These datasets are considered fairly standard and comprehensive for the evaluation of T5 models [4]. Evaluations on translation tasks are not feasible due to the monolingual vocabulary. Nevertheless, we have recently obtained promising preliminary results on MBPP and MATH datasets that follow the same trends as those in our submission.
[1] https://arxiv.org/abs/2101.03961
[2] https://arxiv.org/abs/2205.14135
[3] https://huggingface.co/blog/long-range-transformers
[4] https://arxiv.org/pdf/1910.10683.pdf
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments, I've updated the score to "accept"
Strengths: The results are strong - it is impressive to get real-world speedup using sparsity without sacrificing accuracy. The idea is simple and appears to work well on hardware. Overall good work.
Weaknesses: The evaluation section is missing baselines to compare against. It is hard to evaluate novelty without also evaluating against similar works that use sparsity to accelerate model inference.
One example of recent work that is very similar is Deja Vu (https://openreview.net/forum?id=wIPIhHd00i). I'm not 100% sure when the paper went public relative to the NeurIPS deadline so it is fine if that one turns out to be concurrent, but it is a little odd to me that there would be no baselines at all.
It would also be useful to contextualize the results against complementary methods such as quantization.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What are similar baselines in the literature that should be compared against to contextualize the strength and novelty of results?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and helpful feedback. Please see our specific comments below.
### Comparison to Deja Vu
Thank you for your helpful reference to Deja Vu [1]. As we state in our coverage of prior work, virtually all prior approaches in conditional computation, such as MoE, apply to selecting a subset of parameters of the MLPs and/or attention blocks to activate, usually in an input-specific way. This includes the work in Deja Vu, which centers around selecting subsets of the attention and MLP parameters to apply for each input (contextual sparsity). Our work is orthogonal to Deja Vu and synergistic with these approaches at large, as it focuses on conditional computation along the *representation dimension* (shown in Sec. C of the appendix for MoE, for example). However, as you guessed, we weren’t aware of Deja Vu at the time of our submission and will include it in a discussion of related work.
### Contextualization of the strength and novelty of results relative to baselines
To the best of our knowledge, our work is the first in conditional **representation** computation for transformers, which makes it challenging to find similar baselines for comparison. We would like to emphasize that the dense T5 baselines depicted in Figures 4 and 5, as well as our comparisons to 2x and 4x dense baselines in Table 4 of the appendix, serve as comparison points since these models are well-optimized and popularly-deployed T5 models. Any speedups over these baseline dense models with Alternating Updates signify gains that can be further improved by combining it with other techniques such as Deja Vu, MoE, or quantization. For instance, Sec. C of the supplementary material depicts this synergistic combination with MoE (see Table 6).
The favorable properties of our method and its operation on the **representation** dimension make it difficult to directly compare to existing techniques. For example, an MoE T5 model [2], Switch-Base, leads to a 2.5x speed-up measured in pretrain accuracy (not downstream evaluations) relative to a T5-Large dense model. However, Switch-Base contains 10x more parameters than a T5-Large dense model (7B vs. 0.7B), necessitates careful sharding along the model and expert dimensions, introduces auxiliary losses for load balancing and stability that need to be implemented, and requires tedious hyper-parameter tuning. Even when resources are available for sharding, there is a high degree of implementation and maintenance complexity involved with sharding-aware models. These challenges are in contrast to AltUp, which introduces an often negligible amount of additional parameters (especially with Recycled-AltUp), does not require sharding, and only introduces a single integral hyperparameter $K$ with a default value that works well across the scenarios we considered.
More generally, we view AltUp as an orthogonal method that can work synergistically with existing approaches like MoE or quantization (which reportedly provides 4-5x speedup [3]). Thank you again for your constructive feedback, we will include this discussion surrounding the contextualization of our results and comparison to baselines in our final submission.
[1] https://openreview.net/forum?id=wIPIhHd00i
[2] https://arxiv.org/abs/2101.03961
[3] https://proceedings.neurips.cc/paper_files/paper/2022/hash/adf7fa39d65e2983d724ff7da57f00ac-Abstract-Conference.html
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions, and I look forward to the updated camera ready. I will be keeping my score a 6. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Online Convex Optimization with Unbounded Memory | Accept (poster) | Summary: This paper focuses on online convex optimization with memory, a topic with increasing attention recently. Traditional framework assumes that the current environment is only affected by the decisions of a limited past, while this work considers that the current environment is affected by \emph{all} previous decisions. Specifically, the authors generalize the existing framework by studying the sequences of decisions in a typical sequence space. The authors then define the notion of memory capacity with the help of a bounded weighted norm in such sequence space and achieve new policy regret bounds. The authors verify its applicability by reducing several problems into the proposed framework, including variants of online linear control and performative prediction.
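The framework summarized above maintains a history that is a linear function of all past decisions, $h_t = A h_{t-1} + B x_t$. A minimal sketch of the $\rho$-discounted special case (scalar decisions stand in for the decision space; names are illustrative, not the paper's notation):

```python
def update_history(h, x, rho):
    # rho-discounted special case of the linear history update h_t = A h_{t-1} + B x_t,
    # with A = rho * I and B = I: the decision made k steps ago keeps weight rho**k
    return rho * h + x

# every past decision influences the current history, with geometrically decaying weight
decisions = [1.0, 1.0, 1.0]
h = 0.0
for x in decisions:
    h = update_history(h, x, rho=0.5)
# h == 1 + 0.5 + 0.25 == 1.75
```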
Strengths: (1) The motivation is meaningful. It is definitely important in online learning literature to capture the historical impact in the sequence.
(2) The solution is simple, and the proof is clear and correct.
Weaknesses: (1) My first concern comes from the novelty of the proposed framework. Although it enables an infinite memory length, the impact of history is modelled as typical linear operators, and positive results are obtained only when the operator behaves like a geometric combination of past decisions, which has been studied in some areas like linear control and reinforcement learning. Could the authors provide more special cases (in sec. 2.3), or elaborate more (in the paragraph just above def 2.3), to show that the proposed framework does give some new intuition to this problem?
(2) The technical contribution seems insufficient. The results mostly follow the traditional proofs in (Anava et al., 2015), with the operations on a finite sequence replaced by linear operators on a functional space of sequences. Could the authors highlight the technical contributions in the proofs of their theory? For example, the explanation of Theorem 3.4 is quite clear and intuitive; is there something worth highlighting in the main upper-bound results?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the comments above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see the comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We are glad you found the motivation meaningful, and the liked the simplicity and clarity of our proofs.
## Novelty of the extension from OCO with finite memory to OCO with unbounded memory and technical contributions.
* Our formulation of OCO with unbounded memory bears a strong resemblance to online linear control with adversarial disturbances. In fact, you would be correct in asserting that our setting is a special case of online linear control with adversarial disturbances **if the latter problem allowed for an infinite-dimensional state space**. However, the treatment of online linear control in [Agarwal et al., 2019b] and follow-up work is limited to finite-dimensional state spaces because their regret bounds include a polynomial dependence on a dimension-dependent constant that makes them inapplicable to problems with infinite-dimensional state unless one makes additional assumptions. One of the key contributions of our work is to find the right generalization of finite-dimensionality (namely, finite effective memory capacity) that enables generalizing these results to problems with infinite-dimensional state.
* Our OCO with unbounded memory framework and upper bound (Algorithm 1 and its analysis) might seem like simple extensions of their OCO with finite memory counterparts. However, we believe it is a feature that our framework and upper bound provide a clean abstraction for the user, while at the same time giving them a lot of power and hiding the technical details. For instance, our framework allows the user to define non-standard norms on the decision and history spaces. This can be a simple but powerful way of encoding prior knowledge about a problem. The technical complications that arise from this are captured in bounding the relevant quantities of interest, e.g., the Lipschitz constant $\tilde{L}$, the operator norm $\| A \|$, etc. Indeed, consider the application to online linear control with adversarial disturbances (section 4.1). Our seemingly simple framework and upper bound applied to this problem (Theorem 4.1 and Appendix E.3) improve upon the existing upper bound, which used a finite memory approximation. See Lemmas E.2 and E.6 for an illustration of the technical details involved when using non-standard norms, e.g., lines 885 - 891 in the proof of Lemma E.6.
* One of our main technical contributions is the **first lower bound** for OCO with finite memory, and therefore, for OCO with unbounded memory. Furthermore, this **lower bound is tight**. While the proof of the upper bound is an extension of existing proofs, our tight lower bound, which was previously unknown and uses new technical ideas, shows that the upper bound is unimprovable in the worst-case. Therefore, without additional assumptions, no other algorithm or proof technique can improve the upper bound in the worst-case. (We also provide an explicit proof of the lower bound for OCO with $\rho$-discounted infinite memory problem in Theorem D.2 in the appendix.)
* Another technical contribution is the upper bound for online linear control with adversarial disturbances (Theorem 4.1, lines 328 - 332), which improves upon existing results (lines 328 - 330 and Appendix E.3). Our regret bound (Theorem 4.1, lines 328 - 332) quantitatively improves upon the existing one (lines 328 - 330 and Appendix E.3). This is possible due to a novel use of defining weighted norms on the history and decision spaces, and using that to bound the relevant quantities in the upper bound (Lemmas E.2 and E.6).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I understand the points that you argue, and admit that the proposed lower bound is new. But to be honest, I am still concerned about the novelty of this paper.
On the techniques used to derive the framework: it is not surprising to me that the assumption of "bounded regularity in each direction" in the finite-dimensional case is replaced with the assumption of a bounded norm of the coefficient matrices in the infinite-dimensional case.
On the novelty of the framework, I maintain my point that the current version is not enough to show its potential to provide new ideas for designing algorithms. In particular, could the authors provide more special cases that the proposed framework can recover (while existing works cannot)?
---
Reply to Comment 1.1.1:
Comment: Re: special cases - In addition to dynamics that exhibit geometric decay, our framework also captures examples that exhibit transient behavior. Even though we use a scalar decay factor in our OCO with infinite memory example for simplicity, a more general example is when the decisions are multiplied by a matrix Z with spectral radius less than one. Dynamics describing a vector rotating with decreasing norm have operator norm and spectral radius less than 1. However, by stretching the space, the dynamics would be rotating according to an ellipse, so the norm would alternately grow and shrink even as the vector eventually decays to the origin; as a result, the transients are nontrivial and the operator norm is larger than 1 even as the spectral radius is less than 1. This is another simple example that our proposed framework can recover. | Summary: This paper introduces a problem called online convex optimization (OCO) with unbounded memory, which generalizes the existing problem of OCO with memory. To address this problem, the authors employ a follow-the-regularized-leader (FTRL) algorithm and analyze its regret. Moreover, the authors demonstrate that this new problem and their algorithm have several applications.
Strengths: 1) The problem of online convex optimization (OCO) with unbounded memory is more general than the existing problem of OCO with memory.
2) The proposed algorithm enjoys a regret bound and has some applications.
3) It seems that the results in this paper can simplify the regret analysis for the problem of online linear control.
Weaknesses: 1) The extension from OCO with memory to OCO with unbounded memory seems to be straightforward. Moreover, the algorithm for OCO with unbounded memory and the corresponding analysis are very similar to existing algorithms for OCO with memory and their analysis. So, to some extent, the novelty of this paper is limited.
2) Although OCO with unbounded memory seems to be more challenging, the authors only consider histories determined by fixed linear operators, i.e., $A$ and $B$ in the paper. Moreover, the authors do not explain why they consider only this case.
3) In the experiments, the authors simply set the step size of the proposed algorithm and the existing algorithms to $1/\sqrt{T}$, instead of tuning it according to their theoretical guarantees. In this way, it is not clear whether the improvement in the experiments is consistent with their theoretical guarantees.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Please explain the reasons for only considering the history determined by fixed linear operators.
2) It would be better if the authors could conduct experiments by setting the parameters of algorithms according to their theoretical guarantees.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We are glad that you liked the generality of our framework and the simplification of the results for online linear control.
## Novelty of the extension from OCO with finite memory to OCO with unbounded memory.
* Our OCO with unbounded memory framework and upper bound (Algorithm 1 and its analysis) might seem like simple extensions of their OCO with finite memory counterparts. However, we believe it is a feature that our framework and upper bound provide a clean abstraction for the user that they can use for a variety of applications. For instance, our framework allows the user to define non-standard norms on the decision and history spaces. This can be a simple but powerful way of encoding prior knowledge about a problem. The technical complications that arise from this are captured in bounding the relevant quantities of interest, e.g., the Lipschitz constant $\tilde{L}$, the operator norm $\| A \|$, etc. Indeed, consider the application to online linear control with adversarial disturbances (section 4.1). Our seemingly simple framework and upper bound applied to this problem (Theorem 4.1 and Appendix E.3) improve upon the existing upper bound, which used a finite memory approximation. See Lemmas E.2 and E.6 for an illustration of the technical details involved when using non-standard norms, e.g., lines 885 - 891 in the proof of Lemma E.6.
* One of our main technical contributions is the **first lower bound** for OCO with finite memory, and therefore, for OCO with unbounded memory. Furthermore, this **lower bound is tight**. Our tight lower bound, which was previously unknown and uses new technical ideas, shows that the upper bound is unimprovable in the worst-case. Therefore, without additional assumptions, no other algorithm or proof technique can improve the upper bound in the worst-case. (We also provide an explicit proof of the lower bound for OCO with $\rho$-discounted infinite memory problem in Theorem D.2 in the appendix.)
* As alluded to above, another technical contribution is the upper bound for online linear control with adversarial disturbances (Theorem 4.1, lines 328 - 332), which improves upon existing results (lines 328 - 330 and Appendix E.3). Our regret bound (Theorem 4.1, lines 328 - 332) quantitatively improves upon the existing one (lines 328 - 330 and Appendix E.3). This is possible due to a novel use of defining weighted norms on the history and decision spaces, and using that to bound the relevant quantities in the upper bound (Lemmas E.2 and E.6).
## History is determined by fixed linear operators.
* We consider linear operators because the composition of such operators with convex functions remains convex. Nonlinear dynamics would lead to non-convexity. That is, if $f_t$ is convex, then $\tilde{f}\_t(x) = f_t ( \sum_{s=0}^{t-1} A^s B x )$ is convex if $A$ and $B$ are linear operators. Instead, if the history evolved according to nonlinear dynamics, then $\tilde{f}_t(x)$ would be $f_t$ applied to a $t$-fold composition of nonlinear operators acting on $x$, and this may not be convex.
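As a small numerical illustration of this convexity argument (our sketch, with an arbitrary convex loss and hypothetical dynamics): the composition $g(x) = f(Mx)$ of a convex $f$ with the linear map $M = \sum_s A^s B$ satisfies the midpoint convexity inequality.

```python
import numpy as np

# Illustrative check: composing a convex f with a linear map keeps convexity.
rng = np.random.default_rng(0)
d = 3
A = 0.6 * np.eye(d)                     # assumed stable linear operator
B = rng.normal(size=(d, d))
M = sum(np.linalg.matrix_power(A, s) @ B for s in range(10))  # sum_s A^s B

f = lambda h: float(np.sum(h ** 2))     # a convex loss
g = lambda x: f(M @ x)                  # composition with the linear map

x, y = rng.normal(size=d), rng.normal(size=d)
lhs = g((x + y) / 2.0)                  # midpoint value
rhs = (g(x) + g(y)) / 2.0               # average of endpoint values
```

With nonlinear dynamics in place of `M`, this inequality can fail, which is the non-convexity issue described above.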
## Choice of the step-size in the experiments.
* We have attached a PDF to the global response with the results of running the experiments with the theoretically optimal step-size. We will update the revision with these experiments.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Summary: The paper considers an online learning problem between a learner and adversary. The learner chooses action $x_t$ each round, and the state $h_t$ evolves according to the dynamics $h_t = Ah_{t-1} + Bx_t$. The oblivious adversary commits to a loss function $f_t$ each round. The learner suffers cumulative loss $\sum f_t(h_t)$. This paper uses FTRL to solve this problem.
Strengths: The paper studies online learning with a linear control component. The framework captures a setting where past decisions affect future loss. The paper presented theoretical results with upper and lower bounds.
The paper showed that the difficulty of this problem is captured by the so-called $p$-effective memory, which essentially quantifies the memory(less) property of the operator $A$.
Weaknesses: While the problem proposed in this paper is seemingly new, it does seem to shed much new algorithmic ideas and insights.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: na
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed limitations and directions for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We are glad that you liked the definition of effective memory capacity, and tight upper and lower bounds on regret.
## "While the problem proposed in this paper is seemingly new, it does seem to shed much new algorithmic ideas and insights."
* From context (i.e., the fact that this sentence was listed under "Weaknesses") we suspect the reviewer meant to write "_doesn't seem to shed much new algorithmic ideas and insights_" rather than "_does_ seem to shed...". It's true that the algorithmic ideas and insights in our paper are strongly influenced by Anava et al.'s (2015) use of FTRL to solve online convex optimization with finite memory. An important innovation in our work is the use of weighted norms to prove regret bounds in the case of linear sequence dynamics, which allows us to derive non-trivial regret bounds in the case of unbounded-length histories and even leads to improved regret bounds in the case of online linear control (lines 328 - 330 and Appendix E.3). A brief summary of the novelty and technical contributions is as follows:
* Our OCO with unbounded memory framework and upper bound (Algorithm 1 and its analysis) might seem like simple extensions of their OCO with finite memory counterparts. However, we believe it is a feature that our framework and upper bound provide a clean abstraction for the user that they can use for a variety of applications. For instance, our framework allows the user to define non-standard norms on the decision and history spaces. This can be a simple but powerful way of encoding prior knowledge about a problem. The technical complications that arise from this are captured in bounding the relevant quantities of interest, e.g., the Lipschitz constant $\tilde{L}$, the operator norm $\| A \|$, etc. Indeed, consider the application to online linear control with adversarial disturbances (section 4.1). Our seemingly simple framework and upper bound applied to this problem (Theorem 4.1 and Appendix E.3) improve upon the existing upper bound, which used a finite memory approximation. See Lemmas E.2 and E.6 for an illustration of the technical details involved when using non-standard norms, e.g., lines 885 - 891 in the proof of Lemma E.6.
* One of our main technical contributions is the **first lower bound** for OCO with finite memory, and therefore, for OCO with unbounded memory. Furthermore, this **lower bound is tight**. Our tight lower bound, which was previously unknown and uses new technical ideas, shows that the upper bound is unimprovable in the worst-case. Therefore, without additional assumptions, no other algorithm or proof technique can improve the upper bound in the worst-case. (We also provide an explicit proof of the lower bound for OCO with $\rho$-discounted infinite memory problem in Theorem D.2 in the appendix.)
* As alluded to above, another technical contribution is the upper bound for online linear control with adversarial disturbances (Theorem 4.1, lines 328 - 332), which improves upon existing results (lines 328 - 330 and Appendix E.3). Our regret bound (Theorem 4.1, lines 328 - 332) quantitatively improves upon the existing one (lines 328 - 330 and Appendix E.3). This is possible due to a novel use of defining weighted norms on the history and decision spaces, and using that to bound the relevant quantities in the upper bound (Lemmas E.2 and E.6).
---
Rebuttal Comment 1.1:
Comment: Thanks for the response ( yes I did mean to say 'doesn't', apologies for the typo ). I think this is a technically sound paper and will keep my original score. I will also read other reviewers' comments. | Summary: This paper studies a generalization of online convex optimization (OCO) with memory. The setting allows the current stage cost to depend on all past decisions via a discrete-time linear dynamical system. The authors proposed a follow-the-regularized-leader algorithm that can achieve a sublinear static regret against any fixed action. They also showed a lower bound that matches the regret upper bound in the order of horizon $T$ and Lipschitz constants. The authors discussed two applications of their results to online control and online performative prediction.
Strengths: Theory for online (convex) optimization with unbounded memory is important in the field of learning for control. While this problem is intractable in general, it is good to see results that formally define the “effective memory” and study the corresponding upper/lower bounds.
Weaknesses: My major concern about this work is about the problem setting: I believe the setting proposed here is a special case of online control with adversarial disturbances (e.g., [Agarwal et al., 2019b]). Specifically, the history $h_t$ corresponds to the state and the decision $x_t$ corresponds to the control input. The only difference might be the benchmark: While online control compares against the best DAC policy, this work compares with a fixed action. But the DAC policy class can easily contain any fixed action if we add a dummy dimension with entry 1 in the (adversarial) disturbances. I hope the authors can correct me if my understanding is wrong, and a discussion in the revision may be helpful.
Since the proposed problem setting can be reduced to (or maybe equivalent to) online control with adversarial disturbances, one should evaluate the results in this work by comparing with not only [Agarwal et al., 2019b], but also more recent works on online control like [Minasyan et al., 2022], [Chen et al., 2022], and [Lin et al., 2022]. To the best of my knowledge, existing results can handle much more complicated settings that involve time-varying dynamics, unknown dynamics, or even nonlinear dynamics. They considered stronger benchmarks like adaptive regret or dynamic regret. I encourage the authors to do a more detailed literature review about online control and clarify the significance of the main results.
Besides, I also have a concern about the time complexity of the proposed algorithm. It seems that the time/memory complexity of constructing $\tilde{f}_t$ from past $f_1, \cdots, f_t$ grows linearly with respect to time $t$. However, existing online control algorithms like the one in [Agarwal et al., 2019b] only requires $log(T)$ time/memory for each decision. Thus, I am not sure how practical the proposed FTRL algorithm is.
[Minasyan et al., 2022]: https://arxiv.org/pdf/2202.07890.pdf
[Chen et al., 2022]: https://arxiv.org/pdf/2110.07807.pdf
[Lin et al., 2022]: https://arxiv.org/pdf/2210.12320v1.pdf
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see my comments in the previous section. I gave the score based on my current understanding about the similarity with online control, and I would be happy to raise the score if the authors address my concern.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed about some future directions in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We are glad that you liked the definition of effective memory capacity, and tight upper and lower bounds on regret.
## Application to Online Linear Control with Adversarial Disturbances.
* You are correct that our formulation of OCO with unbounded memory bears a strong resemblance to online linear control with adversarial disturbances (OLC) with a fixed control input. In fact, you would be correct in asserting that our setting is a special case of OLC **if the latter problem allowed for an infinite-dimensional state space**. However, the treatment of OLC in [Agarwal et al., 2019b] and follow-up work is limited to finite-dimensional state spaces because their regret bounds include a polynomial dependence on a dimension-dependent constant that makes them inapplicable to problems with infinite-dimensional states unless one makes additional assumptions. One of the key contributions of our work is to find the right generalization of finite-dimensionality (namely, finite effective memory capacity) that enables generalizing these results to problems with infinite-dimensional states.
* Our framework is *not* a special case of OLC. In fact, in lines 305-330 we show the reverse.
* Decisions in our framework correspond to policies in the linear control framework, not a fixed control input. The decision is a DAC policy (Definition 4.1) defined by a sequence of matrices and a fixed matrix (line 294).
* Our DAC policy acts on the entire history of past disturbances, whereas the "truncated" DAC policy used in [Agarwal et al., 2019b] only acts on a fixed, constant number of past disturbances. The class of strongly-stable linear controllers is a subset of our DAC policy class, but not a subset of the "truncated" DAC policy class. (See [Agarwal et al., 2019a, Section 16.5].) Unbounded-length DAC policies are why modelling an infinite-dimensional space is relevant.
* Our regret bound (Theorem 4.1, lines 328 - 332) improves upon the existing one by $O(d (\log T)^{3.5} \kappa^5 (1-\rho)^{-1})$ (Appendix E.3). This is possible due to a novel use of defining weighted norms on the history and decision spaces, and using that to bound the relevant quantities in the upper bound (Lemmas E.2 and E.6).
* A "truncated" DAC policy is a sequence of $d \times d$ matrices of length $2 \kappa^4 (1-\rho)^{-1} \log T$ [Agarwal et al., 2019b]. Our DAC policy is a sequence of $d \times d$ matrices of unbounded length. Yet, we capture the dimension of this infinite-dimensional space in a way that still improves the overall bound, including completely eliminating the dependence on $\log T$, and improving the dependence on $d, \kappa$ and $(1-\rho)$ (Theorem 4.1 and Appendix E.3).
* Thank you for citing the additional works. We will add them to the revision. However, all of them analyze the problem under a finite memory approximation even though it is inherently an unbounded memory problem. We focused on addressing gaps in the OCO with memory literature by first developing the general framework of OCO with unbounded memory. Then, we proved an upper bound and a **tight lower bound** on the regret, including a previously unknown, tight lower bound for OCO with finite memory. The basic online control setting is just one application - we also consider an application to a performative prediction problem, showing how our general framework can unify two seemingly disparate areas of work. The value of our work lies in (i) an improvement in the upper bound for control (Theorem 4.1); (ii) a simplification of the regret analysis; (iii) a new lens to study extensions that you cited. Our improvements will carry over to these extensions after developing appropriate extensions of our framework, and it is an important direction for future work.
## Time complexity of the proposed algorithm.
* We provide details for efficient implementation of Algorithm 1 in Appendix G. Here, we summarize how it only has a constant overhead compared to standard FTRL and FTRL for OCO with finite memory (Algorithm 1 in Anava et al.)
* Our Algorithm 1 chooses iterate $x_{t+1}$ as the minimizer of $\sum_{s=1}^t f_s(\sum_{k=0}^{s-1} A^k B x) + \frac{R(x)}{\eta}$. Each such minimization problem requires evaluating $\sum_{s=1}^t f_s(\sum_{k=0}^{s-1} A^k B x)$. In Appendix G, we show how to do this in $O(t)$ time by iteratively updating the function arguments. Here, $O(\cdot)$ notation hides constant factors excluding $t$ and $T$ but includes constant factors related to the dimensionality of the decision space. See Appendix G for details.
* Standard FTRL and Algorithm 1 of Anava et al. also choose iterate $x_{t+1}$ by minimizing over a sum of $t$ functions. For example, Algorithm 1 of Anava et al. chooses iterate $x_{t+1}$ as the minimizer of $\sum_{s=1}^t f_s(x, \dots, x) + \frac{R(x)}{\eta}$. Each such minimization problem requires evaluating $\sum_{s=1}^t f_s(x, \dots, x)$. This takes $O(t)$ time, where the $O(\cdot)$ notation hides constant factors excluding $t$ and $T$ but includes constant factors related to the dimensionality of the decision space.
* For OLC specifically, the squared-norm regularizer $R$ (Lemma E.4) results in an update rule similar in spirit to online gradient descent and the existing algorithm in [Agarwal et al., 2019b]. The dimensionality of our decision space depends on $O(T)$, whereas it depends on $O(\log T)$ in [Agarwal et al., 2019b]. This is the only factor in $T$ that leads to a difference between the runtimes of the two algorithms. One can change our decision space to be the "truncated" DAC policy class and the runtime will depend on $O(\log T)$. This will introduce an error term that is the difference between the costs of the best strongly-stable linear controller and of the best "truncated" DAC policy. We can bound this using Lemma 5.2 of [Agarwal et al., 2019b]. This is not the dominant term in the regret bound (Appendix E.3 of our paper), so the final bound is unchanged.
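For illustration, the iterative-argument evaluation described above can be sketched as follows (our sketch, not the paper's implementation; the dynamics, losses, and names are hypothetical). Maintaining $M_s = \sum_{k=0}^{s-1} A^k B$ via the recurrence $M_s = A M_{s-1} + B$ gives each new function argument with $O(1)$ extra matrix work, so the whole FTRL objective is evaluated in $O(t)$ time.

```python
import numpy as np

# Sketch: evaluate the FTRL objective  sum_s f_s(M_s x) + R(x)/eta,
# where M_s = sum_{k<s} A^k B, by iteratively updating the arguments.
rng = np.random.default_rng(0)
d, T, eta = 3, 5, 0.1
A = 0.5 * np.eye(d)                          # assumed dynamics with norm < 1
B = np.eye(d)
fs = [(lambda h, c=rng.normal(size=d): float(c @ h)) for _ in range(T)]

def ftrl_objective(x):
    total, M = 0.0, np.zeros((d, d))
    for s in range(T):
        M = A @ M + B                        # M_{s+1} = A M_s + B: O(1) per round
        total += fs[s](M @ x)
    return total + float(x @ x) / eta        # squared-norm regularizer R(x)/eta

val = ftrl_objective(np.ones(d))
```

Recomputing each geometric sum from scratch would instead cost $O(t^2)$ matrix operations in total.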
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response!
Comment: I want to thank the authors for the detailed response. I apologize for ignoring the difference in the infinite/finite dimension issue in my initial review. I believe allowing the state (or history) space to have infinite dimensions is a good technical contribution, and I would appreciate it if the authors can elaborate more about the significance of this contribution.
1. I’m not fully convinced about the necessity of allowing the state (or history) space to have infinite dimensions. From the current two examples, it seems like the infinite-dimension state (or history) space is a result of using a specific proof technique, while other techniques may solve the same problems in finite dimensions.
2. If I understand correctly, one can still apply the algorithm in [Agarwal et al., 2019b] to the setting of this paper, but the regret will be unbounded when the state (or history) space has infinite dimensions. Is the dependence on the dimension a fundamental limit of the algorithm in [Agarwal et al., 2019b] or due to the proof technique?
3. The computational cost of the algorithm depends on how fast one can implement an oracle call to the operator A. I’m not sure if the oracle $O_A$ can always run in a constant time especially when the dimension of the state (or history) space is infinite.
---
Reply to Comment 1.1.1:
Comment: 1.
* Infinite dimensional objects arise from finite dimensional control problems when we use a **policy** to react to disturbances.
* The linear control problem is not convex unless we lift from system state space into the (infinite dimensional) DAC policy space. This is *not* a result of a proof technique and has algorithmic implications.
* The linear dynamics that define the control problem have a finite ($d$) dimensional representation. However, the presence of external disturbances means that we need to model the *system response* rather than merely the *system state*. The system response is infinite dimensional. For example, see [System Level Synthesis, Anderson et al., 2019](https://arxiv.org/abs/1904.01634).
* To compute how policies (decisions) affect costs (losses) through the function $\tilde{f}_t$, we need to model the system response. So, this is an algorithmic consideration and not just the result of a proof technique.
* [Agarwal et al., 2019b] consider a *truncated* system response of length $O(\log T)$ that is an approximation. This is also where they appeal to results for OCO with finite memory, which our paper should be seen as a replacement for.
* More generally, other examples of infinite dimensional history/states include infinite impulse response filters in signal processing (e.g., elliptic filters).
* To contrast, suppose instead of choosing policies, we were interested in a "fixed constant input" model of control. That is, the learner chooses a fixed control input in each round and the benchmark is the best fixed control input. This would indeed result in a finite dimensional history space.
* The history combines the "noiseless" state with the action: $\bar{s}_{t+1} = F \bar{s}_t + G a_t$, $h_t = (\bar{s}_t, a_t)$.
* The linear operators are block matrices: $A = [F, 0; 0, 0]$ and $B = [G; I] / \| [G; I] \|$. (The scaling on $B$ is to ensure that it has unit norm, which we assume in our paper for convenience.)
* The loss functions are $f_t(h_t) = c_t( \bar{s}\_t + \sum_{k=1}^t F^k G w_{t-k} , a_t) = c_t(s_t, a_t)$.
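For concreteness, this block construction can be verified numerically; the sketch below is ours (the unit-norm scaling of $B$ is omitted for clarity, and the dynamics are arbitrary).

```python
import numpy as np

# Check that h_t = A h_{t-1} + B a_t with the block operators above
# reproduces the noiseless control recursion and stores the last action.
rng = np.random.default_rng(0)
d, m = 2, 2
F = 0.5 * rng.normal(size=(d, d))
G = rng.normal(size=(d, m))
A = np.block([[F, np.zeros((d, m))], [np.zeros((m, d)), np.zeros((m, m))]])
B = np.vstack([G, np.eye(m)])

h = np.zeros(d + m)                 # history h_t = (s_bar_t, a_t)
s_bar = np.zeros(d)
for _ in range(5):
    a = rng.normal(size=m)
    s_bar = F @ s_bar + G @ a       # noiseless control dynamics
    h = A @ h + B @ a               # history update via the block operators
```

After each round, the first block of `h` equals `s_bar` and the second block equals the last action, as the construction intends.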
2. Yes, you are correct. The algorithm of [Agarwal et al., 2019b] and their proofs work for standard Euclidean norms and the corresponding matrix norms. If the state and history spaces are infinite dimensional with non-standard norms, then there is a mismatch between the problem setup and their algorithm and proofs. So, in some sense the regret is unbounded because of both the algorithm and the proof technique. The algorithm would use a regularizer that is strongly convex with respect to the Euclidean norm, resulting in suboptimal regret compared to our algorithm that uses a regularizer adapted to the appropriate norm; the proofs would compute the Euclidean norm of infinite dimensional objects.
3. We agree with your point.
* See lines 981 - 988 in the appendix where we discuss this issue. We believe that it is a feature that our framework provides a clean abstraction for the user that they can use for a variety of applications. The dimensionality of $\mathcal{X}$, the choice of the operator $A$, etc. are application dependent. The user could use our framework with a lower dimensional decision space $\mathcal{X}'$ and then analyze the error that results from such an approximation. Meanwhile, our framework allows the user to define non-standard norms on the decision and history spaces. This can be a simple but powerful way of encoding prior knowledge about the application, and our framework handles the technical complications that arise from this (e.g., bounding the Lipschitz constant, operator norms, etc. - see Lemmas E.2 and E.6 (especially lines 885 - 891) for an illustration). | Rebuttal 1:
Rebuttal: We have responded to each reviewer individually. This global rebuttal only includes plots for the experiments requested by Reviewer xqDp.
Pdf: /pdf/bf6696b3bed33d47f1c0a9899ea0ffa42e11bd7d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Energy Guided Diffusion for Generating Neurally Exciting Images | Accept (poster) | Summary: In this work, the authors first employed an attention readout to train a model for predicting neural responses ($y$) from images ($x$), aiming to address the issue of attention effects in the V4 area. Subsequently, they applied the image ($x$) from text ($y$) method proposed by Dhariwal and Nichol [1] to generate images ($x$) from neural responses ($y$).
Reference:
[1] Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. May 2021.
Strengths: Compared to previous research, this study incorporates two technologies to enhance the performance of their model in predicting neuronal responses and MEIs. The first technology involves integrating an attention map into their encoding network. From a biological standpoint, the attention method allows for the receptive field of neurons to be dynamically adjusted based on different inputs, which aligns with observations made in experiments on V4. Empirically, attention-based Vision Transformer (ViT) models have shown superior performance compared to ResNet models in the field of computer vision. In this study, the performance of the proposed model surpasses the current state-of-the-art in predicting neuronal responses.
The second technology employed in this research is the use of Bayesian diffusion methods to decode input from neural responses. The diffusion model is renowned for its ability to generate intricate image details, and the authors demonstrate that their approach, called EGG, outperforms the previous GA method.
Although the authors did not develop these two methods themselves, it is possible that their application in neural data analysis is novel. To the best of my knowledge, this study appears to be the first instance of using the attention map in the encoding network for predicting neuronal responses, taking into consideration the observed phenomenon in V4. Furthermore, the utilization of the Bayesian diffusion method, specifically the EGG approach, for decoding neural responses seems to be a novel application in this context.
Weaknesses: 1. The current images are grayscale. Can the MEI images be colored?
2. The current neural data are collected by silicon probes. Can we use techniques such as two-photon or single-photon imaging to record more neuronal data?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How many neurons are used in EGG? 1,244 individual neurons?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review. We respond to your concerns below. In case you have any more questions we would be happy to discuss them.
### RE 1: Color MEIs
The diffusion model generates color images, so in principle, it can generate color MEIs. We attach some examples (Fig. B). Since the encoding models are trained on grayscale images because the animals only saw grayscale images the colors in these may not be meaningful. However, if one were to use color stimuli it would be possible to generate MEIs that are colored and potentially meaningful.
### RE 2: Applicability to other Neural Experimental Techniques
See general response **RE 3: Applicability to other Neural Experimental Techniques**
### RE Q1: How many neurons are used in EGG?
Yes, we use 1,244 individual neurons. For each MEI a single neuron is selected and for the reconstructions, we use all 1,244 neurons. | Summary: The authors tackle the problem of synthesizing most exciting inputs for neurons in the higher visual cortex (V4 in their case) of macaque, with data collected using electrophysiology.
The paper makes two claimed contributions:
1. The authors propose a new encoding architecture which uses a data-driven CNN core with a cross-attention readout layer. The cross-attention layer is parameterized similarly to traditional cross-attention in other machine learning papers. In the authors' design, there is a learned per-neuron query vector, and spatial key/value embeddings derived from pixel-wise linear projections of the CNN feature map that is shared for all neurons. The authors compare this against a task-optimized backbone with a learned Gaussian readout. The authors show that the attention encoder performs better with high probability (via a Wilcoxon test) on novel non-training images.
2. The authors propose energy guided diffusion, whereby they modify the score prediction with the derivative of a vector-valued function (not a score function corresponding to a well-posed distribution). They propose a modification which does not require the energy function to take noisy images as input; they accomplish this by using the "pred_xstart" code provided by [1].
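The cross-attention readout described in point 1 can be sketched as follows; this is an illustrative NumPy reconstruction with assumed shapes and names (`W_k`, `W_v`, `queries`), not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_neurons, n_pix, c_feat, c_emb = 4, 16 * 16, 32, 8
feats = rng.normal(size=(n_pix, c_feat))           # flattened shared CNN feature map
W_k = rng.normal(size=(c_feat, c_emb)) / np.sqrt(c_feat)
W_v = rng.normal(size=(c_feat, c_emb)) / np.sqrt(c_feat)
queries = rng.normal(size=(n_neurons, c_emb))      # one learned query per neuron

K, V = feats @ W_k, feats @ W_v                    # pixel-wise linear projections
attn = softmax(queries @ K.T / np.sqrt(c_emb))     # per-neuron spatial attention map
readout = attn @ V                                 # per-neuron pooled features
```

Each neuron's attention map depends on the input image through `K`, which matches the biological motivation of input-dependent receptive fields.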
The authors validate their method by first comparing it against gradient ascent + gradient blurring in MEI synthesis. They find that EGG is able to generate MEIs faster and with better cross-architecture generalization properties than GA.
In the MENI experiment, they show a tradeoff between naturalness and excitation by adjusting the strength of the gradient of the energy function. They find that their synthesized MENIs ($\lambda=1$) are roughly comparable to ImageNet top-1 images in cross-architecture predicted activations.
In the third experiment, the authors experiment with stimulus reconstruction. They accomplish this by modifying the energy function to minimize the L2 distance between predicted and ground-truth neuron activities. They find that EGG-regularized stimulus reconstruction is more faithful.
[1] Dhariwal, Prafulla, and Alexander Nichol. "Diffusion models beat gans on image synthesis." Advances in Neural Information Processing Systems 34 (2021).
In the supplementary, the authors provide additional descriptions of data collection (32 channel) and experiment design. The authors provide further experiments that compare ResNet & Attention encoders, the strength of $\lambda$, and how pure gradient methods compare with EGG in stimulus reconstruction.
Strengths: The paper on balance is well written, the authors are largely clear in their experimental design and their evaluation. The authors provide sufficient detail in the paper itself for reproduction, with additional code provided in the supplemental. The code is well written and easy to follow.
The use of diffusion models to regularize the synthesis of most exciting inputs for monkey V4 collected using electrophysiology is novel, and to this reviewer's knowledge it has not been attempted before.
The authors perform a variety of experiments, and I find their proposed design for the attention encoder and evaluation of the attention encoder to be convincing.
Weaknesses: The authors did a good job writing a clear paper and do a great job providing details. But in my opinion, the authors overstate their contribution with regard to "Energy Guidance" (EGG). If the authors include additional citations, reduce their overly broad claims, and provide additional experiments/metrics, the paper would be improved significantly.
**The paper could stand out based on the experiments alone, but the authors have emphasized energy guidance to be a central contribution** without citing the vast number of papers in computer vision that have been published in the past two years that:
1. Similarly do not use a well-posed score function in the form of the derivative of a classifier. **This aspect is not novel.** In fact, I would argue that most gradient-conditioning papers on diffusion models published today explicitly do not use a score function in the form of a classifier gradient. These papers are not cited.
2. Estimate a clean sample ($x_0$) and do not feed a noise-corrupted image to the model providing guidance. The paragraph in lines 162-174 seems to indicate that this approach is novel in the context of classifier/gradient guidance for diffusion models. However, **similar approaches have been widely used in the computer vision literature**. These papers are not cited.
* On contributions and prior work
* GLIDE from 2021 [1] proposed to guide image synthesis using the gradient of the dot product of a CLIP image embedding and a text embedding to modify the diffusion output. Note that GLIDE used a CLIP model trained on noisy images to perform guidance; however, this approach seeks to maximize a dot product, which does not yield a "proper" distribution score function when you take the gradient.
* GLIDE and the DALL-E 2 [2] paper cite crowsonkb's 2021 open-source CLIP guidance work [3, 4]. These two codebases combine CLIP guidance with the pred_x0 trick (eq. 6 of this paper) without retraining the diffusion or gradient model. Similarly, the Hugging Face diffusers library minimizes the orthodromic distance (whose derivative is not a proper score function) in CLIP space rather than maximizing a dot product, and also uses the pred_x0 trick without retraining the diffusion or gradient model. This estimated $x_0$ trick has also been formally described in [5], Eqs. 3 and 4. I suggest the authors cite at least one of these papers and clarify their contributions.
* On the soundness of the experiments
* I also found some of the experimental setups to be inconsistent.
* In the MEI experiment (line 221), the naïve SGD optimizer is used, and this forms the basis of the claim in Figure 4 that GA is much slower than EGG. However, in the image reconstruction experiment (line 282), the more sophisticated AdamW optimizer is used. There is no reason why the AdamW optimizer cannot be used with Gaussian-blur gradient conditioning via filtering at a higher stage of the backpropagation.
* For the MEI experiment, GA is run for 1,000 steps, while EGG was run for 100 steps. This does not seem to be an entirely fair comparison time-wise.
* I'm not sure why the authors decided to normalize the image norm to 25 (line 230) for MEI, 50 (line 251) for MENI, and 60 (line 281) for reconstruction. This step seems to quite explicitly break the energy guidance output. Can you provide a justification in the text for why this is done, and why you use different norms for different tasks? Can the authors perform MEI/SGD/AdamW experiments without this step?
* On the lack of evaluation
* There is a lack of quantitative evaluation metrics aside from predicted neural activity. The paper relies almost entirely on qualitative claims when it comes to MEI/MENI/Reconstruction output. There are no actual metrics to indicate that the MEI/MENI/Reconstruction outputs are more similar to the ground-truth most exciting natural stimuli.
* I suggest that the authors use common vision metrics like SSIM/PSNR/MSE to evaluate the low level similarity of the images, or high-level image metrics like perceptual loss (VGG), CLIP cosine distance, or distribution-wise comparisons like FID/CLIP-FID (see question 3 in section below). I don't think all the suggested metrics here are needed, but at least a few (any of SSIM/Perceptual VGG/CLIP for image reconstruction, and any of FID/CLIP-FID for MENI; or if the previous metrics are not possible, perhaps a human survey on Mechanical Turk/Prolific to evaluate if the images are better?) should be added where appropriate.
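To illustrate, the low-level metrics are cheap to compute. A self-contained sketch (function names are mine; this is a simplified single-window SSIM, whereas library implementations such as scikit-image's `structural_similarity` average over local sliding windows):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    return float(10 * np.log10(data_range ** 2 / mse(a, b)))

def ssim_global(a, b, data_range=1.0):
    """Simplified global SSIM (one window over the whole image)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Any of these, computed between reconstruction and ground-truth stimulus, would turn the qualitative claims into numbers.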
Overall I think the authors have presented an interesting system, but there is no citation or acknowledgement of prior work from computer vision that uses non-classifier-based gradient guidance of diffusion models, or that uses the estimated $x_0$ to alleviate the need for noisy-trained classifiers. Otherwise I think the paper is interesting and would be improved if the authors can clarify the scope of their contributions and better quantify their claims. I would happily re-evaluate if the authors can improve this paper in a subsequent revision.
[1] Nichol, Alex, et al. "Glide: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).
[2] Ramesh, Aditya, et al. "Hierarchical text-conditional image generation with clip latents." arXiv preprint arXiv:2204.06125 (2022).
[3] https://github.com/afiaka87/clip-guided-diffusion/blob/c1d5906225586bc8455bb17c29a3c2caf9a02766/cgd/cgd.py#L141
[4] https://colab.research.google.com/drive/12a_Wrfi2_gwwAuN3VvMTwVMz9TfqctNj#scrollTo=X5gODNAMEUCR&line=41&uniqifier=1
[5] Li, Wei, et al. "UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance." arXiv preprint arXiv:2210.16031 (2022).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Could you clarify why the MEI experiment uses SGD, but the image reconstruction experiment uses AdamW?
2. For Figure 3, you make qualitative claims that the EGG MEIs are better; can you back this up with quantitative numbers like SSIM/perceptual loss (VGG)/Inception/CLIP distance against the natural input that most excites the neuron?
3. For Figure 5, can you characterize the distributional similarity of the images using standard image metrics like Fréchet inception distance or the CLIP-FID proposed by Kynkäänniemi (MENI vs. top-k of natural images for a neuron)? Something like the Pareto curves in Figure 4 of Imagen [1], which measure how the guidance scale affects the image distribution distance (FID/CLIP). For Q3/Q4, if the image metrics are not possible, perhaps a human survey on Mechanical Turk/Prolific to evaluate whether the images for the experiments are more similar/better.
4. For Table 1, Figure 3B, Figure 5C, and Figure 6B, could you clarify what is the "base" model, as in which model you use for image synthesis, and which model is the evaluating model?
5. Can you clarify the solver you use in the diffusion model? From the code, it seems you use the DDPM solver; however, there are a variety of stronger solvers (DDIM/PNDM/DPM-Solver [2,3,4]) which yield convergence in as few as 10 steps. Is there any reason you decided to go with such an old solver?
Overall I think the clarity of the paper is good, but could be further improved with a few small clarifications and incorporation of standard vision metrics (SSIM/VGG perceptual loss/FID/CLIP-FID).
[1] Saharia, Chitwan, et al. "Photorealistic text-to-image diffusion models with deep language understanding." Advances in Neural Information Processing Systems 35 (2022): 36479-36494.
[2] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020).
[3] Liu, Luping, et al. "Pseudo numerical methods for diffusion models on manifolds." arXiv preprint arXiv:2202.09778 (2022).
[4] Lu, Cheng, et al. "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps." arXiv preprint arXiv:2206.00927 (2022).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors clearly describe the limitations of their experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your helpful review. Please find our responses to your questions below. If you have any further questions we are happy to discuss.
### RE: Prior work and scope
We will include and discuss the additional prior work, and make sure to make it even clearer that we do not claim to have invented gradient conditioning and the clean sample trick.
GLIDE and DALL-E 2 use a different trick: their model directly predicts $x_0$, whereas we predict $\epsilon$ and obtain an approximate $x_0$ from $x_t$ and $\epsilon$. Crowsonkb's work indeed uses an approach similar to ours for guiding diffusion models with an un-noised CLIP model. Thank you for bringing this open-source project to our attention. We will cite and discuss the suggested work.
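For concreteness, the approximation is simply the forward process $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$ solved for $x_0$. A toy numerical sketch (schedule value and variable names chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
x0 = rng.normal(size=8)
eps = rng.normal(size=8)
abar_t = 0.37  # an arbitrary point on the noise schedule

x_t = np.sqrt(abar_t) * x0 + np.sqrt(1 - abar_t) * eps

# Invert the forward process with the noise estimate (here: the exact noise)
# to obtain a clean-sample estimate for the un-noised guidance model.
x0_hat = (x_t - np.sqrt(1 - abar_t) * eps) / np.sqrt(abar_t)
```

With the true $\epsilon$ the inversion is exact; with the model's $\hat{\epsilon}$ it yields the approximate clean sample that our guidance model sees.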
### RE: Experimental Setup - MEIs
For the GA optimization, we use the established method for generating MEIs that has been tested in vivo [1]. However, we perform a comparison study to show that the parameters chosen are selected to maximize the performance of the GA method. We rerun the MEI optimizations using the AdamW optimizer and find a significant decrease in performance in comparison to the SGD optimizer (r = 0.69). We also run the MEIs for 100 steps instead of 1000 and also find a performance decrease (r = 0.95) (Fig. F).
### RE: Norm constraint
Controlling for the norm/contrast of the image is a standard procedure when optimizing MEIs [2] because neurons are strongly driven by contrast. When the contrast is not controlled, a trivial solution is to simply increase the contrast of images to values that are not realizable on a monitor. Furthermore, controlling the norm also controls the locality of the synthesized images. When optimizing MEIs, we choose a lower norm because neurons have a localized receptive field; for MENIs we increase the norm to 50 to allow for more full-field images; for the reconstructions, we further increase the norm budget to operate in a contrast domain similar to the ground-truth natural images.
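Concretely, the constraint amounts to projecting the image back onto a fixed-norm sphere after each update (an illustrative sketch, not our exact implementation):

```python
import numpy as np

def clip_norm(img, budget):
    """Project an image onto the sphere ||img||_2 = budget."""
    return img * (budget / np.linalg.norm(img))

# Applied after each optimization step, with a task-dependent budget
# (25 for MEIs, 50 for MENIs, 60 for reconstructions):
img = np.random.default_rng(2).normal(size=(100, 100))
img = clip_norm(img, 25.0)
```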
### RE: Evaluation
The goal of MEI generation is to elicit maximal responses in the brain. Most exciting natural images (selected from the ImageNet dataset) have been established to be less activating than the GA-optimized MEIs [1], which have also been tested in vivo. Therefore, a ground-truth image to which MEIs could be compared via standard computer vision metrics does not exist. Hence, for MEIs, we rely on the predicted neural responses as the established metric of MEI performance. Since validating the images in the recorded neurons is not viable for us, we introduced the cross-architecture evaluation paradigm as the *in silico* proxy evaluation of the stimuli's transferability to the brain. We quantitatively show that the EGG MEIs are more robust to model idiosyncrasies, and we would thus expect them to perform better *in vivo*. Another reason why we use neural activation is that, for the reconstruction of images from neural responses, it has been previously shown that standard computer vision metrics do not necessarily capture the desired objective, i.e., a better correlation of the neural responses to the reconstructed image with the responses to the real image [3].
We agree, however, that additional metrics could be helpful for the MENIs and Reconstructions. Please see the general response for the additional evaluation results and discussion.
### RE Q2: Are EGG MEIs actually better?
For MEIs the hallmark metric is their performance in terms of maximizing neural responses. Natural images, on the other hand, are not the ground truth for MEIs. As seen in [1, 4], existing MEIs outperform natural images in the brain. Therefore, comparing MEIs to natural images via SSIM/Perceptual Loss would not indicate improvement in maximizing in-vivo neural responses. Please let us know if we somehow misunderstood your point.
### RE Q3: MENIs distributional similarities
We performed the requested evaluation of distributional similarities (see Fig. D and general response).
### RE Q4: Base model clarification
Firstly, we will improve the consistency of model naming: i.e. *Gaussian* is the ResNet + Gaussian Readout model, *Attention* is the CNN + Attention Readout model.
Table 1: The models in the brackets are the base models, for *within* we use the same architecture, for *cross* we use the other architecture. We will clarify that in the caption of the table.
Figure 3B: The label to the left of the plots is the base model, i.e. blue is the *Gaussian* model (ResNet + Gaussian readout), and pink is the *Attention* model (CNN + Attention readout). We will make that clearer in the caption of the figure.
Figure 5C: We use the *Gaussian* model, we have also included the *Attention* model comparison in the rebuttal. We will make that clearer in the caption.
Figure 6B: We use the *Gaussian* model, we have also included results from the *Attention* model in the rebuttal. We will make this clearer in the caption.
### RE Q5: Solver
We chose to use DDPM over the other solvers for its simplicity and closest resemblance to Langevin Dynamics providing the most elegant framework for incorporating energy gradients. It is possible that even better results could be achieved with more complex solvers. However, we considered this out of scope for the current study.
**References**
[1] Walker et al “Inception loops discover what excites neurons most using deep predictive models” (2019)
[2] Willeke et al. “Deep learning-driven characterization of single cell tuning in primate visual area V4 unveils topological organization” (2023)
[3] Cobos et al. “It takes neurons to understand neurons: Digital twins of visual cortex synthesize neural metamers” (2022)
[4] Bashivan et al. "Neural population control via deep image synthesis" (2019)
---
Rebuttal Comment 1.1:
Comment: After careful consideration, I have bumped the score from a 3 (reject) to a 4 (borderline reject).
I think while the paper proposes an interesting method, there are still a couple of aspects that give me pause. **Considered as a whole, I think there needs to be substantial revisions to the original paper to clarify the scope of the original work.** I understand that revisions cannot be submitted at the current stage, and my comment is more regarding the scope of the needed changes.
I thank the authors for their extensive and clear response. I have re-read the main paper, the supplemental materials, the general response, and the subject-wise responses.
* It is slightly concerning to me that the authors did not initially cite prior work on using the derivative of an energy function to guide diffusion, or were not familiar with the widespread use of the predicted-x0 trick (both in crowsonkb's work and in the diffusers library). I understand that revisions cannot be submitted during this period. On re-reading the original paper, **I still get the sense that the authors were overly broad in their claims, even after reading the rebuttal.**
* I don't completely buy the claim that a norm constraint is needed, as current energy guided models (via CLIP) do not, and neither do approaches which use diffusion + energy to do image colorization, or inpainting, or object replacement. **This suggests to me that perhaps some aspect of their system is not well tuned.** In theory, the diffusion model should yield a constraint automatically that pulls the generated image towards the distribution of natural images, and the use of a norm constraint suggests that perhaps there needs to be more tuning. I understand the claim by the authors that a norm constraint is standard practice, but the paper you cite also doesn't use a diffusion model.
On the other hand, the following responses have answered my questions:
* On the prior work side, I agree that GLIDE & DALLE-2 do not use the predicted x_0 trick for gradient guidance. My point was confusingly worded, and I meant to indicate that GLIDE & DALLE-2 use CLIP/energy based gradient conditioning with noisy images.
* On additional FID scores, SSIM scores, and VGG scores, I thank the authors for providing those values in the new PDF. I would have been happy with any one of the scores (or CLIP distance, as in most diffusion work! But it struck me that my request for CLIP was perhaps not totally valid, as CLIP eval is typically applied to naturalistic RGB images, while the authors focus on greyscale images, so SSIM/FID scores are more appropriate.)
* I agree the solver is not in the scope of this study; I meant it more as a suggestion for potential improvements. This work is one of the only papers on diffusion models that I've read in the last two years that uses the original DDPM setup, and not a fancy DDIM/PNDM/DPM-Solver (or the brand-new UniPC solver).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer H4Az
Comment: Thank you for your quick response and increasing the score. We would like to respond briefly to your two remaining points.
### RE: Scope
We would like to respectfully disagree. We did cite prior work for both in the original manuscript. For the clean sample trick, we wrote: “This is achieved by a simple trick, used in the code of Dhariwal and Nichol [46], of inverting the forward diffusion process” (l.163)
Regarding the gradient guidance we wrote “Here we extend this [Dhariwal and Nichol et al. 2021] approach to i) use neuronal encoding models, such as the ones described above, to guide the diffusion process and ii) to use a model trained on clean samples only.” (l.153), thus saying that we extend it for neural encoding models, not generally.
So while we did not cite all references you suggested (thanks for pointing them out), we neither claimed originality on the clean sample trick nor guidance. We understand that it is an important issue and we will make sure to be even more clear about this point in the revised manuscript.
### RE: Norm Constraint
We understand now where your concern comes from, but just to re-emphasize: The motivation behind the norm constraint comes from neurophysiological experiments. This constraint aims to prevent confounding results, as actual neurons respond to contrast [see, e.g., Cheng et al. 1994]. Specifically, when comparing two images by how much they activate a neuron in an experiment, both images need to have the same norm/contrast to avoid a trivial result in which one image just has more contrast than the other. When you check experimental papers that use MEIs, you will find that they control for either the contrast or the norm. For instance, [Walker, Sinz et al. 2019 - Nature Neuroscience] rescale the GA MEI after generation to a particular contrast level; [Franke, Willeke et al. 2022 - Nature] and [Bashivan et al. 2019 - Science] directly apply the norm constraint, as we do.
To show the importance of constraining the norm, we optimized the MEIs without the norm constraint and controlled for contrast post-optimization. We found that responses dropped to 0.45 of those to the norm-constrained MEIs in the within-architecture validation and to 0.81 in the cross-architecture validation. Without constraining the norm, the generated MEIs have a mean norm of 1524 and a standard deviation of 1511.
Given the nature of the optimization problem, it is expected to observe an escalation of the contrast/norm. For simplicity, imagine a linear neuron model. In this case, the energy gradient leads to a consistent shift towards images with higher norms. This behavior holds true for more complex encoding models as well, since the encoding model's relationship with image contrast is monotonic. Consequently, by introducing the norm constraint, we introduce an additional prior that counteracts the continuous shift.
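A toy sketch of this escalation in the linear case (illustrative only, not our encoding model):

```python
import numpy as np

w = np.ones(10)            # toy linear "neuron": response = w @ x
x_free = np.zeros(10)      # unconstrained gradient ascent
x_proj = np.zeros(10)      # gradient ascent with a norm budget of 5

for _ in range(100):
    x_free += 0.1 * w                       # norm grows without bound
    x_proj += 0.1 * w
    x_proj *= 5.0 / np.linalg.norm(x_proj)  # project back onto the budget

# after 100 steps the unconstrained norm has grown to 10*sqrt(10) ~ 31.6,
# while the projected image stays at norm 5
```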
This is significantly different from the CLIP-guided methods: while we are looking for a method that provides more meaningful insight into the function of neurons, CLIP-guided methods optimize for better-looking images. To achieve this, they minimize the distance between CLIP embeddings. The CLIP embeddings cannot be "hacked" by increasing contrast (unlike neurons), and as a result the CLIP-based energy function is robust to contrast and does not require norm constraining; nor is it a requirement posed by the tasks for which these methods are designed.
**References**
Cheng et al. “Comparison of neuronal selectivity for stimulus speed, length, and contrast in the prestriate visual cortical areas V4 and MT of the macaque monkey” (1994)
Walker, Sinz et al. “Inception loops discover what excites neurons most using deep predictive models” (2019)
Franke, Willeke et al. “State-dependent pupil dilation rapidly shifts visual feature selectivity” (2022)
Bashivan et al. "Neural population control via deep image synthesis" (2019) | Summary: Further characterizing the complex coding properties of V4 neurons might require (1) better encoding models of neuronal activity as well as (2) better methods to generate informative most exciting inputs (MEIs). The paper tackles (1) by proposing a new readout mechanism for a convolutional data-driven core based on attention that outperforms the SOTA at predicting responses of neurons in macaque V4. It then tackles (2) through a new simple method called EGG that generates MEIs using a pre-trained diffusion model and the guidance signal of the encoding model. EGG can also be used for image reconstruction from neuronal activity; in both applications it generalizes better across architectures than the more traditional gradient ascent (GA). Finally, their method can produce most exciting natural inputs (MENIs) on par with highly activating natural images.
Strengths: - The problems tackled are well motivated
- The paper is clear
- The proposed readout mechanism is: novel to the best of my knowledge, biologically motivated, sound, and backed up by positive results (outperform current SOTA in predicting neuronal activity)
- The method to generate MEI is: ingenious, simple but sound, and backed up by positive results (better generalization than GA and faster)
Weaknesses: - (1) compares two models with a different core, a different readout, and a different training strategy; hence it is hard to isolate the benefit of the new readout mechanism alone. This part would be strengthened with additional baselines, such as the core model with a Gaussian readout and the pre-trained ResNet50 with an attention readout
- While the new model seems to be a better encoding model of neuronal activity --according to the results of (1)--, MEIs generated with it seem overall worse at driving neuron activation (Fig. 5b) compared to the baseline ResNet50 (both EGG and GA). In fact, the MENI results reported in Fig. 5c are obtained with the ResNet50 model, and the image-reconstruction examples put forward in the paper all come from the ResNet50 model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Did the author consider performing a CKA between models in the within condition and between models in the cross condition, to quantify the generalization/transferability of the MEIs?
- Why is the ResNet50, and not the ACNN, considered for comparing MENIs to the top-1 most-activating ImageNet images? At least this is what I infer from the color used in Fig. 5c, as the text does not state which model is finally used.
- It would be interesting to add GA as a baseline for Fig. 5b. I assume that the intuition behind the plateau/decrease is that if we increase the energy scale "too much", we fall back to "overfitting to the idiosyncrasies" of the encoding model; this baseline would give a bit of context w.r.t. this hypothesis.
- How are the images shown in the paper selected?
- My understanding of the motivation behind the proposed MENIs is speed: getting more natural-looking stimuli (e.g., for control stimuli) without having to go through millions of images. While I agree with the motivation, given the random nature of the generative process, I was wondering how much sampling is necessary before landing on satisfactory MENIs? While I trust that EGG will still be faster, I worry about the meaningfulness of some of the MENIs generated (in Dhariwal et al. 2021, samples generated with a lower guidance scale are of significantly lower quality, e.g., $\lambda=1$ leads to FID=33)
- For image reconstruction, it would be interesting to see examples generated with the ACNN model, only results from ResNet50 are shown or discussed in the paper.
Clarification comments:
- Slight lack of consistency w.r.t. naming: the encoding model is called throughout the paper: data-driven with attention readout, Attention CNN, ACNN, and Attention.
- Within: l.201: "task-driven ResNet with Gaussian readout or data-driven with attention readout" & Cross: l.203: "ResNet and data-driven with attention readout". I assume in both cases the ResNet uses a Gaussian readout
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - It is important to note that, while the trick used to make the classifier guidance works on models not trained on noise does allow for more flexibility, the encoding model still has to be trained on images from the same dataset as the diffusion model, as the approximate clean sample $x_0$ falls within the data distribution of the diffusion model and not necessarily of the encoding model. If not, the $x_0$ will be o.o.d to the encoding model and its gradient will be significantly less meaningful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your in-depth review.
Please find our responses to your questions below.
### RE Ablation study
See general response.
### RE: Apparent decrease in driving neural response
We apologize for the confusion in Fig. 5b. The values there are shown normalized to the max responses of each neuron across lambdas. Thus, when normalizing, the values become smaller than 1. The result indicates that there is less consistency in the *Attention* model as to which value of lambda is most activating. We replotted the figure normalizing to the neural activation of the GA-optimized MEI (Fig. G) and will replace Fig. 5b with it.
### RE: using the Attention Model for reconstructions
We attach *Attention* model reconstructions (Fig. H), showing that they perform similarly to the *Gaussian* model (Fig. J). We will add these figures to the appendix. We chose to reconstruct using the *Gaussian* model, such that the cross-architecture evaluation is performed on the *Attention* model. This is because the *Attention* model is better correlated with the brain responses and thus provides a better proxy for the brain, thereby better mimicking the setup of testing reconstructions in the brain.
### RE: Using CKA
Centered Kernel Alignment compares representational similarity, which would be applicable to the core representations. However, investigating the core representations was not the focus of our paper since we focused on finding the readout that would best predict the neural responses. As we show in the ablation experiment (see our general response), the attention readout shows performance superior to the Gaussian readout even when applied on top of exactly the same core. However, we might have misunderstood your suggestion and are happy to comment if you clarify it.
### RE: Using *Attention* model for top-1 ImageNet comparison
We will include the *Attention* model comparison (Fig. I) to the top-1 ImageNet images in Fig. 5. The *Attention* MEIs at $\lambda = 1$ slightly outperform the top-1 ImageNet images.
### RE: add GA as a baseline for Fig 5.b
Please see **RE: Apparent decrease in driving neural response**
### RE: How are the images shown in the paper selected?
The neurons used for the study are randomly chosen from the neurons with test correlation > 0.5. The images shown in the main text are selected to show the performance of various neuron properties (eyes, fur, edges, curves). The complete set of images is shown in the appendix.
### RE: motivation behind the MENI and sampling required
For the MENIs presented, we generate them from 3 seeds and choose the highest-activating of the 3. Our main objective was to show that controlling $\lambda$ allows us to move closer to the natural-image manifold. We do not consider it a replacement tool for searching natural images, but rather an additional tool for better interpretability of MEIs, as at times (e.g., top row, ResNet, Fig. 5a) it is difficult to interpret the MEI ($\lambda=10$) and traversing $\lambda$ can help to interpret the function. For FID, we include the comparison of FIDs across $\lambda$.
### RE: Naming consistency
Thank you for pointing out the inconsistency. We will unify the naming to generally use *Attention model* for the data-driven with attention readout model. For the task-driven ResNet model + Gaussian readout we will refer to it generally as *Gaussian model*. We will also add a statement that defines these two terms to avoid confusion.
### RE: l.201 and l.203
As in the previous section, we will unify the naming to remove confusion. Yes, the *task-driven* (*Gaussian*) model always uses the Gaussian readout (except for the new ablation study).
### RE: Encoding model needs to be trained on the same distribution
You raise an interesting point. However, we do not entirely agree. Mainly, we consider the problem the other way around: The goal for encoding models is to faithfully represent the visual system as well as possible, no matter what kind of image is shown. The encoding model is mainly determined by the choice of images shown by the experimenters. The diffusion model, on the other hand, determines the space (prior) of images within which the most exciting images are generated. For example, it would be possible to use a diffusion model trained on sketches to generate the most exciting images in the space of sketches. Since the visual system has evolved to encode natural images, we focus on diffusion models for natural images here.
We will include this discussion in the Discussion section.
---
Rebuttal Comment 1.1:
Comment: The authors have successfully addressed my 2 main concerns; hence I update my rating from 6 to 7.
I will also reply to the few minor concerns left below:
### RE: Apparent decrease in driving neural response
Thank you for clarifying figure 5b. I agree that figure G better communicates that MEIs generated from the Attention model are better drivers of neural response than the ones from the ResNet model. That being said, figure 5b highlights the fact that MEIs generated with EGG for different neurons can be pretty sensitive to the energy-scale hyperparameter, which is the case with the Attention model but not with the ResNet model. Hence, I believe that this figure still has its place in the paper to discuss such limitations of the method, albeit in the appendix.
Do the authors have a hypothesis as to why their model is more sensitive to the energy scale?
### RE: Using CKA
I will clarify my question. The paper cares to evaluate the generalizability of the MEI generated from one architecture to another as a proxy for its application on the brain. To quantify how much downstream generalizability we can expect, it seems reasonable and informative to perform a CKA between the representations (which are driving the generation of MEIs through the gradient) of the 2 architectures tested.
### RE: Encoding model needs to be trained on the same distribution
While I understand the logic in theory, in practice, if the images used to train the diffusion model and the ones used to train the encoder are too dissimilar, I am not too clear how meaningful the gradient from the encoder will be to drive the generation of the MEIs.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer GN1s
Comment: Thank you for increasing your score. We are happy to see that we have addressed your main concerns. We would like to briefly address your remaining concerns.
### RE: Attention model is more sensitive to the energy scale
The Attention data-driven model is not pretrained on natural images. We thus hypothesize that it has less of the natural image bias than the task-driven Gaussian model. This can be seen when comparing the different MEIs:
- **Attention Model GA vs Gaussian Model GA**: GA MEIs for the Attention model retain fewer of the naturalistic features (e.g., the eye neuron in column 5 of Fig. 3a).
- **Attention Model EGG vs Gaussian Model MEIs**: The EGG generated MEIs show more naturalistic features similar to the MEIs obtained for the Gaussian Model.
- **Attention Model EGG, different lambdas (5 & 10)**: By increasing the lambda from 5 to 10 (less naturalness) we observe MEIs that are more similar to the Attention Model GA MEIs. We will include examples of Attention model MEIs with lambda 10 in the appendix in the revised manuscript.
We hypothesize that, since the data-driven model is trained directly on neuronal data, it has a smaller bias towards natural images, which allows the Attention model to better predict neural responses. However, this could make it more difficult to generate generalizable MEIs, because the optimization might overfit to the idiosyncrasies of the model. Using EGG therefore lets us control the amount of naturalness bias in the MEIs.
### RE: CKA
We now better understand your point with CKA. We computed CKA of the neural encodings *across* architectures between the Attention model and Gaussian model and *within* architecture between different seeds (e.g. Attention 1 and Attention 2 are models with the same architecture, but trained with different seeds). The CKA is computed between the predicted neuronal responses.
| Model | Attention 1 | Attention 2 | Gaussian 1 | Gaussian 2 |
| ----------- | ----------- | ----------- | ---------- | ---------- |
| Attention 1 | 1 | 0.9949 | 0.9133 | 0.9116 |
| Attention 2 | 0.9949 | 1 | 0.9145 | 0.9129 |
| Gaussian 1 | 0.9133 | 0.9145 | 1 | 0.9994 |
| Gaussian 2 | 0.9116 | 0.9129 | 0.9994 | 1 |
We observe that the within architecture similarity is very high (> 0.99) for both architectures and the cross architecture similarity is slightly lower, but also high (> 0.9). We expect such an outcome, since both architectures were trained to model the same neural representation.
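For concreteness, the linear CKA used here can be sketched as follows (a minimal, generic linear-CKA implementation; the response matrices below are random stand-ins, not our models' actual predictions):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices of shape
    (n_stimuli, n_units); 1.0 means identical representations
    up to linear transformation."""
    Xc = X - X.mean(axis=0)  # center each unit's responses
    Yc = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    return hsic / (np.linalg.norm(Xc.T @ Xc, "fro")
                   * np.linalg.norm(Yc.T @ Yc, "fro"))

rng = np.random.default_rng(0)
resp_a = rng.standard_normal((200, 80))                  # e.g. one model's predicted responses
resp_b = resp_a + 0.1 * rng.standard_normal((200, 80))   # a near-identical retraining
```

A model compared with itself yields exactly 1, and two near-identical retrainings score close to 1, mirroring the high within-architecture values in the table.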
### RE: Same training dataset between encoding model and diffusion model.
We agree that in practice the encoding model needs to generalize to the manifold of the diffusion model. Prior work has shown that in mice encoding models do generalize to some extent outside their training manifold [1]. However, how far they generalize is an empirical question. Nevertheless, we will add the discussion point, that “the encoding model needs to generalize to the manifold of the diffusion model” to the limitations.
**References**
[1] Wang et al. “Towards a Foundation Model of the Mouse Visual Cortex” (2023) bioRxiv | Summary: This paper proposes using the prior implicit in a diffusion model as a regularization term when generating images that maximally excite neurons. The authors refer to this as “Energy Guided Diffusion” and compare this to a standard gradient ascent procedure with a smoothness prior. The authors also introduce a new model consisting of a CNN with an attentional readout that is trained to predict the responses of biological neurons, which (unlike the comparison model) allows for spatial components to be weighted differently for each input stimulus. The authors conduct experiments on all combinations of these two model changes to see how the generated stimuli change, and analyze how the stimuli transfer across models.
Strengths: Overall, I enjoyed this paper and found it accessible, interesting and a good combination of many ideas that are currently relevant for the NeurIPS community.
* The authors' incorporation of an attention module into the final layer of a model for neural predictivity is novel as a way to capture stimulus-dependent changes
* Showing the MEIs replicate for the chosen neurons and have very similar properties even with different models (i.e. Figure 3) is a nice validation of the MEI technique as a way of interpreting neural tuning
* I was excited to see the prior in a diffusion model as a way to improve some of the neural synthesis techniques, as this is perhaps a bit more principled than some previous methods (i.e. smoothness priors, or GANs).
Weaknesses: Overall, I think that the contribution of the paper is novel, however there are some weaknesses in the experimental design and the interpretation/presentation of the results.
A) It is not possible to determine which change in the proposed model with an attention readout leads to better predictivity in Figure 2. This makes the rest of the results comparing the models difficult to interpret. Specifically, compared to the “Task-Driven” ResNet, the authors have changed (1) the core model architecture from a ResNet50 to a 4 layer CNN (2) the readout from the “Gaussian” readout to an attention readout and (3) the training task/dataset (the “task-driven” core of the ResNet50 is pre-trained on image recognition, while the entirety of the “data-driven” attention CNN is trained explicitly to predict the responses of V4 neurons). Which of these changes are critical for the performance increase? Are the changes due to the utilization of the attention readout, the fact that more parameters can be learned in the attention CNN, or some other change?
B) Along the lines of the above, the authors talk about the attention change as being novel and providing a way to incorporate shifts in location of receptive fields, but there is no experiment showing that the learned attention readout *actually* learns to do this (it could be using the same location for every image).
C) Perhaps my biggest concern – the authors discuss generating “most naturally exciting images” however the generated images are not natural. Calling these images “natural” is incredibly misleading, as there is an underlying diffusion model that has learned a specific distribution of image statistics. The generated images may be high-probability images given the prior learned by that diffusion model, but unless the diffusion model is a perfect model of the world these are not fully “natural” (and indeed, from the presented images they do not look natural). It is interesting to present images with different regularizations applied, however, in my opinion, these images should not be presented as “natural” and fully removing the paragraph on lines 262-268 (and also any associated references to “MENIs”) would only strengthen the paper.
D) In the discussion the authors note that the MEIs generated for the attention-readout model are more “concentrated” however, as far as I can tell, this seems to be just an observation and not quantified in any way.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Some of my major questions were listed in the weaknesses section, but additional things would improve the paper.
1) In Figure 5b, I’m a little confused about how the increase in the energy scale (so a higher importance on the “maximization” component of the synthesis) starts to decrease for large values of lambda. Naively, it seems like this component of the loss should only get better when it is weighted more strongly. Is there a problem with the optimization (i.e. too large of steps from the energy gradients?) causing this decrease?
2) The authors should potentially discuss other ways that people have “regularized” generated images and the limitations. For instance, in Bashivan, Kar, DiCarlo (2019) a TV loss was incorporated into the gradient ascent procedure to encourage smoothness (similar to the gaussian blur on the gradients used here). Some of these other methods have directly trained models that incorporate some statistics of the “natural” world, such as GANs (Ponce et al. 2019 and follow-up work). Limitations of including priors into synthesis techniques have been discussed in Engstrom et al. 2019 and Feather et al. 2022 (specifically, the inclusion of a prior for generation can hide some of the model biases).
3) With regard to the limitations of priors mentioned above, the authors mention that the MEIs “look more complex and natural” and that EGG improves the “quality” of the generated images. Can we gain understanding from having things that “look” better? Or are these potentially just misleading us to finding things that *seem* interpretable? (This is maybe a bit more of a philosophical question beyond the scope of the work, but it is something I am sometimes puzzled by with regard to these neural generation procedures, especially when neural data on the generated stimuli is not present).
4) What sort of biases do we expect from the fact that these models are trained to only handle clean images? The authors state as a fact that neural encoding models are trained on responses to “clean” images, but this seems like it would bias the generation in specific ways.
5) How much spread is there in the generation across multiple seeds? Specifically, how does the spread in Figure 3 compare to what is observed from multiple initializations within the same model?
Minor clarifications:
a) What are the training details for the pre-trained robust ResNet50 that is used (ie $l_p$ norm, $\epsilon$ step etc)? And are multiple pre-trained versions used in the ensemble or is it just a different readout?
b) How is the feature space for this model 1024 dimensional? Are the activations spatially averaged?
c) The paper “Solving linear inverse problems using the prior implicit in a denoiser” by Kadkhodaie and Simoncelli came to mind as very relevant previous work, but I didn’t see it cited.
d) It might be helpful to have an additional line/equation around line 166 explicitly stating the generation steps in terms of $\bar{x}_0$.
e) In Figure 6 it states that the distance is compared in “unit activations space”. Is this the same as the predicted neural response, or is it measured at an internal model stage?
Typos/confusions:
i) Line 160 says that a constant value of lambda is used for the study, however this was searched over (I think) based on Figure 5?
ii) Should equation (5) and associated text be $\epsilon_\theta$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have a section on limitations, however there are a few things along these lines that might be worth further addressing.
* The authors do not validate the generated stimuli with neural recordings and the improvement in “quality” is only judged qualitatively or by looking at the predicted responses from other models. It is possible the generated stimuli with the diffusion generation or the attention model do not perform as well at controlling the neural data due to underlying biases in the sampling etc. I understand that this would require additional neural experiments, however it is worth mentioning this as a limitation and I didn’t see it discussed.
* Are there possible negative societal impacts of the work?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your in-depth review and we are glad you enjoyed the paper.
Please find the responses to your questions below:
### RE A: Identifying what contributes to the improved performance
We performed an ablation study showing that the *Attention readout* is critical for improving performance. For more details please see the general response.
### RE B: Experiment to show that the attention readout exhibits stimuli-driven receptive field shifts
We analyzed the attention maps (i.e. receptive fields) of the attention model. We show that across the test stimuli for the majority of units the receptive field shifts to different locations (Fig. E). See the general response for details.
### RE C: Generated "Natural" Images
We realize that there might be different definitions of what constitutes a natural image. To avoid any confusion, we will refrain from calling the images “natural”. We will move the top-1 ImageNet comparison to the appendix, and we will show that decreasing $\lambda$ results in lower FID with the top-5 ImageNet dataset (Fig. D), indicating that the images become more similar to natural images and thus are close to the "naturalness" manifold.
### RE D: Concentrated Attention MEIs
To show that the MEIs from the *Attention* model are more concentrated, we fit an isotropic Gaussian envelope to each MEI. We find that the Gaussian envelopes of the *Attention* model's MEIs are on average smaller than those of the *Gaussian* model's MEIs ($\sigma_{At}$ = 49.62 vs $\sigma_{Ga}$ = 55.36, Wilcoxon signed rank test p-value: 0.0078).
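One way to fit such an isotropic Gaussian envelope is via the intensity-weighted second moment of the image; the sketch below is illustrative only (the exact fitting procedure is not reproduced here, and the test image is a synthetic blob rather than a real MEI):

```python
import numpy as np

def envelope_sigma(mei):
    """Width (sigma) of an isotropic Gaussian envelope fitted to a
    2D image via its intensity-weighted second moment."""
    w = np.abs(mei)
    w = w / w.sum()
    ys, xs = np.indices(mei.shape)
    cy, cx = (w * ys).sum(), (w * xs).sum()    # intensity center of mass
    r2 = (ys - cy) ** 2 + (xs - cx) ** 2
    # for an isotropic 2D Gaussian, the mean squared radius is 2 * sigma^2
    return float(np.sqrt((w * r2).sum() / 2.0))

# synthetic sanity check: a Gaussian blob with known sigma = 20
ys, xs = np.indices((201, 201))
blob = np.exp(-((ys - 100) ** 2 + (xs - 100) ** 2) / (2 * 20.0 ** 2))
```

On the synthetic blob the estimator recovers the true width to within the discretization error, so a smaller fitted sigma corresponds directly to a more spatially concentrated image.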
### RE Q1: Increase in energy scale results in lower performance
Thank you for pointing this out. The reason is that under the formulation $\hat{\varepsilon} = \varepsilon_\theta(x_t, t) + \lambda \nabla_{x_t} E(x_t)$, the $\lambda$ parameter is related to the step size of each diffusion step. Thus, for very large $\lambda$ the step becomes too big, resulting in lower performance. We cannot instead put $\lambda$ on $\varepsilon_\theta(x_t, t)$, as this results in a noise level that is out-of-distribution for the diffusion model, and the diffusion process would fail.
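The asymmetry can be illustrated with a toy guided-noise computation (purely illustrative; the noise predictor and energy gradient below are random stand-ins, not the actual trained models):

```python
import numpy as np

def guided_noise(eps_pred, energy_grad, lam):
    """Energy guidance: a scaled energy gradient is added to the
    predicted noise. Scaling eps_pred itself would instead change
    the noise magnitude the diffusion model was trained for."""
    return eps_pred + lam * energy_grad

rng = np.random.default_rng(0)
x_t = rng.standard_normal(16)            # current noisy sample (stand-in)
eps_pred = rng.standard_normal(16)       # stand-in for eps_theta(x_t, t)
energy_grad = 0.1 * x_t                  # stand-in for grad_x E(x_t)

eps_small = guided_noise(eps_pred, energy_grad, lam=1.0)
eps_large = guided_noise(eps_pred, energy_grad, lam=10.0)
```

The deviation from the model's predicted noise grows linearly with $\lambda$, i.e. the effective step in image space grows with it, which is consistent with performance degrading once $\lambda$ becomes too large.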
### RE Q2: Other regularization techniques
See general comment: **Re 4: Additional Related Work**
### RE Q3: Can we gain understanding from having things that “look” better?
If enhanced visual representations (MEIs) were solely about aesthetics, they might be deceptive. However, EGG-generated MEIs not only look better, but they also enhance predicted neural responses and exhibit better generalization across architectures. Importantly, their improved visual quality aligns with improved performance, making their enhanced appearance a valuable aspect of interpretability.
### RE Q4: Biases on clean images
We consider the proposed method as a way to understand what image features neurons are encoding. Since the visual system has evolved for natural images many experiments use them as stimuli. Thus also the encoding models are trained on these “clean” images. Our method needs to be able to deal with this and generate images that drive the modeled neurons well. For that reason, we use the clean sample prediction trick to circumvent the requirement for an encoding model trained on noisy images. This could potentially introduce some bias, but it’s hard to quantify it, in particular since we don’t have neuronal responses to noisy images.
### RE Q5: Spread across seeds
Spread between seeds in Macaque V4 is expected [1] due to single-cell invariances. To show this we attach a figure of the MEIs from different seeds (Fig. A) (limited examples for the rebuttal due to space limitation, we will include more in the revised version of the manuscript).
### RE Clr a) Pretraining details for ResNet50 and ensemble
The pretrained network is the L2, $\epsilon = 0.1$ ResNet50 obtained from [2]. For the ensemble model, we use the same core but separately trained readouts (same architecture, different weights).
### RE Clr b) 1024-dimensional feature space
The ResNet50 at layer 3 has 1024 channels, as the readout selects a single point in that layer and performs a dot product between the readout weight vector and the 1024 channels. Therefore, this results in a 1024-dimensional feature space.
### RE Clr c) Missing Citation
We will include the citation for "Solving linear inverse problems using the prior implicit in a denoiser".
### Re Clr d) Additional line for $\hat{x}_0$
We will include the equation for computing the predicted $x_0$
### Re Clr e) Units activation space
Yes, the unit activation space is the same as the predicted neural response. To avoid confusion between the predicted neural responses (model outputs) and real neural responses, we refer to predicted neural responses as unit responses.
### RE Typos/Confusions
i) Constant value of lambda - For generating all of the MEIs we use a single value of lambda, i.e. we do not select the best lambda for each neuron. We choose the overall lambda based on the validation model, as shown in Figure 5. The goal is to simulate a paradigm where we select lambda within our model, while the end goal is to optimize the best MEI for the brain (not for the model itself).
ii) $\epsilon_\theta$ typo - that is correct, thank you for spotting this typo, we will fix it in the revised manuscript.
### RE: Limitations
- Not tested in the brain - that is a good point, we will make sure to discuss it in the limitations section.
- Negative societal impact - We have thought about the negative societal impact, however, this is a method for fundamental neuroscience research. All the scenarios we were able to come up with seemed far-fetched. If you have a particular scenario in mind, we are happy to discuss it.
**References**
[1] Willeke et al. “Deep learning-driven characterization of single cell tuning in primate visual area V4 unveils topological organization” (2023)
[2] Salman et al. “Do Adversarially Robust ImageNet Models Transfer Better?” (2020)
---
Rebuttal Comment 1.1:
Comment: Thank you for providing these clarifications and updated results! I've read through the responses to other reviewers, looked at the updated PDF, and went back through the related parts of the paper.
The ablation study is a nice contribution to clarify that the attention readout is the more important part of the model. I also appreciate the discussion about "naturalness" and the various experiments the authors have performed to quantify their claims. I've updated my score based on this.
One concern that I still have is that the optimization that is occurring and the use of constraints may lead the diffusion to "fail". This seems to be the general idea of the discussion with Reviewer H4Az, and I share similar concerns about whether the different models are tuned correctly to result in a fair comparison (i.e. how increasing the energy scale eventually decreases the performance). I think the authors should think deeply about this, and, in addition to including some of the references provided by H4Az, also list such concerns as a limitation. Nevertheless, I think that the overall idea is a nice contribution to the line of work generating MEIs.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer YaDz
Comment: Thank you for increasing your score. We are happy to address your remaining concerns.
### RE: Backbones for ablation study
You are correct, we use the Robust ResNet as the backbone for the Task-Driven setup and the CNN for the Data-Driven setup. We did not consider alternative architectures because the CNN was picked from previous work that optimized the architecture to be fitted to neuronal data directly, while the ResNet was designed to be trained on large-scale image datasets like ImageNet. To show this, we additionally conducted a test where we trained the ResNet model directly on the neuronal data, like the CNN. In this case, the model achieves a test correlation of 0.25 with the Gaussian readout and 0.26 with the Attention readout, which is not as good as when using the Data-Driven core. We will make sure the backbones are clear when adding the ablation study to the manuscript.
### RE: Optimization failure
We agree that the case of increasing the energy scale eventually decreasing the performance is a limitation of the way energy guidance operates. We agree that it is important for this to be clear and will make sure to discuss it as a limitation. However, one should note that while the performance decreases at higher lambdas, it still performs better than GA (Fig. G, where 1 on the y-axis refers to the activation achieved by GA MEIs). Since increasing the energy scale brings the generation process closer to GA, this decrease is thus expected.
Rebuttal: We appreciate the thorough and constructive reviews and are glad to see that the reviewers found our work to be **ingenious** (**GN1s**), **novel** (**YaDz**, **GN1s**, **H4Az**, **BnXF**), **well-written** (**H4Az**) and **clear** (**GN1s**, **H4Az**). We are happy to see that Reviewer **YaDz** **enjoyed** the paper and is **excited** by our method.
However, the reviewers also raised some concerns including:
1. It is not possible to determine which change in the proposed model leads to better predictivity (**YaDz**, **GN1s**)
2. Request for additional quantitative evaluations (**YaDz**, **H4Az**)
3. Can the method be applied to other neural experimental techniques like two-photon, single-photon microscopy? (**BnXF**)
4. More comprehensive discussion of prior work (**YaDz**, **H4Az**)
We discuss the above points here, and any remaining points in the individual comments below. We are confident that we can address these concerns and are happy to clarify any remaining issues in the discussion. In the comments, we refer to the figures in the uploaded PDF document.
### Re 1: Ablation Study
We performed an ablation study comparing the effects of transitioning from the *Task-Driven ResNet + Gaussian Readout* model to the *Data-Driven CNN + Attention Readout* model as requested by Reviewers **YaDz**, **GN1s**. We measure performance in terms of test correlation.
| Core \ Readout | Gaussian | Attention |
| -------------------- | ------------------- | ------------------- |
| Task-Driven | 0.262 | 0.276 (+5%) |
| Data-Driven | 0.229 (-13%) | 0.294 (+12%) |
The percentages in the parentheses denote the change in performance relative to the *Task-Driven model with Gaussian readout*. This shows that shifting to the *attention readout* improves the performance for both *Task-Driven* and *Data-Driven* cores.
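The relative changes in parentheses follow directly from the raw test correlations; as a quick arithmetic check:

```python
BASELINE = 0.262  # Task-Driven core + Gaussian readout

def rel_change(score):
    """Percentage change in test correlation relative to the
    Task-Driven + Gaussian baseline, rounded to a whole percent."""
    return round(100 * (score - BASELINE) / BASELINE)

task_attention = rel_change(0.276)   # Task-Driven + Attention
data_gaussian = rel_change(0.229)    # Data-Driven + Gaussian
data_attention = rel_change(0.294)   # Data-Driven + Attention
```

This reproduces the +5%, -13%, and +12% figures in the table.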
### Re 2: Quantitative Evaluations
**YaDz** asked for experimental proof that the attention readout does indeed use its ability to shift receptive fields based on the input image. We show this by inspecting the attention mask of the attention model and computing, for each neuron, the average distance between the centers of mass of the top 5% of the mask values across different images. We plot this against the test correlation of each neuron and observe that the attention readout does perform shifts (Fig. E). We also include qualitative examples of the masks and the means in Fig. E.
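This shift metric can be sketched as follows (illustrative only; the masks here are synthetic stand-ins for the model's per-image attention maps):

```python
import numpy as np

def mask_center(mask, pct=95):
    """Center of mass of the top 5% of attention mask values
    (values at or above the 95th percentile)."""
    thr = np.percentile(mask, pct)
    top = np.where(mask >= thr, mask, 0.0)
    ys, xs = np.indices(mask.shape)
    total = top.sum()
    return np.array([(top * ys).sum() / total, (top * xs).sum() / total])

def mean_center_shift(masks):
    """Average pairwise distance between per-image mask centers
    for one neuron across a set of images."""
    centers = [mask_center(m) for m in masks]
    return float(np.mean([np.linalg.norm(a - b)
                          for i, a in enumerate(centers)
                          for b in centers[i + 1:]]))

# two synthetic masks peaked at different locations
m1 = np.zeros((20, 20)); m1[5, 5] = 1.0
m2 = np.zeros((20, 20)); m2[5, 15] = 1.0
```

A neuron whose attention mask stays put across images gets a shift near zero; masks that move (as in the two synthetic examples, 10 pixels apart) get a correspondingly large value.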
**H4Az** suggested characterizing the distributional similarity of the EGG-generated most exciting natural inputs and the ImageNet top-k natural images using standard image metrics like Fréchet inception distance (FID). We measured the FID score between the generated images at different $\lambda$ values and the top-5 ImageNet images (Fig. D). Our results show that by changing $\lambda$ we approach the natural images manifold (lower FID).
Furthermore **H4Az** suggested using SSIM and VGG perceptual loss to additionally evaluate the performance of reconstructing images from predicted neural responses. We did not include that previously because, as shown in [1], metrics like SSIM are not necessarily a good predictor of how well neuronal responses are reproduced *in vivo*. Similarly, we observe that SSIM and VGG perceptual loss show no improvement (Fig. D). However, to strengthen our claim on the improvement of our reconstructed images we conducted a two-alternative forced choice task with 45 voluntary participants on 50 test images (Fig. C). The participants were instructed to choose which image (GD optimized or EGG generated) was more similar to the ground truth image. Results show an 82.22% average preference for EGG-generated images (95% confidence interval [80.59%, 83.75%]; Wilson score interval).
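The reported interval can be reproduced with the standard Wilson score formula, under the assumption that each of the 45 participants judged all 50 test images (2250 trials in total; the exact trial count is an assumption of this sketch):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# 1850 / 2250 = 82.22% preference for EGG-generated images
lo, hi = wilson_interval(1850, 2250)
```

Under this assumption the interval comes out to approximately (0.8059, 0.8375), matching the reported [80.59%, 83.75%].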
### Re 3: Applicability to other Neural Experimental Techniques
Reviewer **BnXF** asked whether our method can be used with experimental techniques other than electrophysiology. While the dataset we use for this study was recorded in the macaque visual cortex, it is in principle possible to use EGG for MEI generation and reconstructions with calcium imaging, similar to the GA method on two-photon data in [2]. In fact, EGG can be applied to any modality that yields an encoding model. Demonstrating this on other experimental techniques, however, is out of scope for this paper and for one week of response time.
### Re 4: Additional Related Work
Thank you for pointing out additional references. We will include and discuss the following references in the revised manuscript.
- Nichol, Alex, et al. "Glide: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).
- Ramesh, Aditya, et al. "Hierarchical text-conditional image generation with clip latents." arXiv preprint arXiv:2204.06125 (2022).
- crowsonkb's open-source work: https://github.com/afiaka87/clip-guided-diffusion
- Li, Wei, et al. "UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance." arXiv preprint arXiv:2210.16031 (2022).
- Bashivan, Kar, DiCarlo "Neural population control via deep image synthesis" (2019)
- Ponce et al. "Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences" (2019) and follow-up work
- Engstrom et al. "Adversarial Robustness as a Prior for Learned Representations" (2019)
- Feather et al. "Model metamers illuminate divergences between biological and artificial neural networks" (2022)
- Kadkhodaie & Simoncelli "Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser" (2020)
**References**
[1] Cobos et al. “It takes neurons to understand neurons: Digital twins of visual cortex synthesize neural metamers” (2022)
[2] Walker et al. “Inception loops discover what excites neurons most using deep predictive models” (2019)
Pdf: /pdf/e71edd7a301e3369a49746f5361e4a9e8e497251.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adjustable Robust Reinforcement Learning for Online 3D Bin Packing | Accept (poster) | Summary: This work proposes a novel adjustable robust reinforcement learning framework for the online 3D bin packing task. The proposed method can strike a balance between average-case performance and worst-case robustness.
Strengths: The writing is clear, and the experimental results demonstrate that the proposed method works well compared with other RL methods for the targeted 3D Bin Packing problem.
Weaknesses: - In the section of “Training and Evaluation Setting”, the authors mention that they use two different settings, discrete and continuous. However, I did not find the table showing the experimental results for the continuous setting.
- The authors mention that they use three metrics (U_ti, Std, and Num) to evaluate the performance, but only the best U_ti values are highlighted in Table 2. I think the analysis of Table 2 needs more detail, especially for Std and Num.
- Computational complexity is also crucial for RL algorithms. A comparison of computational complexity should be included and analyzed.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Table 2 is the main result, where larger N_B and larger beta represent harder problems, and we can see that throughout the table the baseline RARL often achieves the best results, while the proposed Exact AR2L and Approx AR2L are inferior. What is your interpretation?
- How do you ensure that Approx AR2L can approximate AR2L in a "real-world" setting, given that you do not perform experiments in the real world? I mean, can your added randomness correspond to a real-world robot arm use case (in some industry), and if so, what kind of distribution can characterize such randomness? I believe this must be addressed by data-driven experiments; however, I do not see this in your paper.
- Attacker: I understand the "attacker" is used to generate proper samples to train your DRL algorithm for policy improvement. However, do you have a better name for this adversarial component than "attacker", since it is used in a normal (non-adversarial) setting? A true attacker of your DRL training would probably attack elsewhere to bring the framework down. I understand this term may already be used in the literature; I just want to avoid confusing the reader about technical terms.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: There is no such limitation in this aspect for this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate the valuable feedback provided by the reviewer. We have carefully considered these concerns, and we would like to address them in the following responses.
Q1: Continuous setting results
Due to the space limit in the main text, the experimental results under the continuous setting, where item sizes follow a continuous distribution, are reported in Appendix 1.1.2 of our supplementary material.
Q2: Experiment results clarification
We thank the reviewer for looking into the details of our algorithm's performance. As illustrated in Table 2, the ExactAR2L algorithm demonstrates its superiority over PCT, achieving a smaller standard deviation (Std) in 17 tasks while producing a slightly larger Std in only 3 tasks, which shows that ExactAR2L can indeed improve the robustness of the packing policy. Since ExactAR2L is trained on instances from both the nominal and worst-case environments, while RARL is trained only on the worst-case dynamics, the resulting ExactAR2L policy is less conservative than the RARL policy. While the conservativeness of RARL may result in a smaller Std in most tasks, it produces worse results than AR2L when given nominal instances. It is worth noting that, compared to the other methods, the Std of ExactAR2L is the closest to that of RARL. This observation tells us that our ExactAR2L framework can trade off between conservative and risky behavior, as the Std of ExactAR2L lies between that of RARL and PCT. Compared to the baseline method RfMDP, ApproxAR2L with $\alpha=0.5$ has a smaller Std in half of the tasks. This is because, similar to ExactAR2L, ApproxAR2L is less conservative than RfMDP, which is why ApproxAR2L does not achieve a smaller Std in all tasks.
As illustrated in Table 2, the ExactAR2L algorithm packs more items than PCT in 17 tasks and shows a slight drop in only 3 tasks, where the average drop is 0.2. We found that to achieve a higher score in terms of Num, ExactAR2L consistently favors $\alpha=1.0$ across tasks with different $N_B$ and $\beta$. Compared to RARL, the ExactAR2L algorithm with $\alpha=1$ packs at least the same number of items in 16 tasks and shows a slight drop in 4 tasks. We thus conclude that ExactAR2L with $\alpha=1$ produces competitive results compared to both RARL and PCT in terms of Num. Compared to the baseline method RfMDP, ApproxAR2L with $\alpha=0.5$ packs more items in 16 tasks and shows a slight drop in 4 tasks, where the average drop is 0.25. In the revised paper, we will follow the reviewer's suggestions and give a more detailed and comprehensive discussion of the simulation results.
Q3: Computation complexity of AR2L
In our conclusion and limitation section, we highlighted that the ExactAR2L algorithm introduces additional computational complexity in the training phase due to the mixture-dynamics attacker. However, in the bin packing research community and industrial applications, researchers are more concerned about the computational complexity during the inference phase, as we require the packing strategy to efficiently determine the location for each item. From this perspective, AR2L does not introduce additional complexity during inference compared to RARL and PCT.
Q4: AR2L's performance compared to RARL
The AR2L policy is trained on instances from both the worst-case and the nominal dynamics. When evaluating the scenario of $\beta=100$, due to the deviation between the data distributions used for training and testing, AR2L may not consistently outperform RARL. However, the objective of our paper is to *strike a balance between the policy's performance in average and worst-case environments*. As such, for $\beta=0, 25, 50, 75$, AR2L outperforms RARL, demonstrating its superiority in achieving this balance.
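Schematically, the adjustable objective balances the two returns as follows (the notation below is illustrative and may differ from the exact weighting in Equation (2) of the paper):

```latex
J_{\alpha}(\pi) \;=\; \mathbb{E}_{\tau \sim P^{o}}\!\left[ R(\tau) \right] \;+\; \alpha \, \mathbb{E}_{\tau \sim P^{w}}\!\left[ R(\tau) \right]
```

where $P^{o}$ and $P^{w}$ denote the nominal and worst-case dynamics, $R$ is the return over the space utilization rate, and $\alpha$ adjusts how much weight is placed on worst-case robustness.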
Q5: Real world setting
We would like to clarify that our work focuses specifically on the bin packing problem. Our goal is to develop a packing policy that assigns a position to each item in a container so as to ensure high space utilization. Our focus is not on the manipulation or locomotion of robots, which is a separate module from the location assignment problem of bin packing. The location assignment problem is commonly viewed as a combinatorial optimization (CO) problem, while robotics planning and control typically address the manipulation and locomotion of robots. In the bin packing research community, real-world data is well simulated by the item sequence environment, rather than by MuJoCo-like robotic environments for dynamics modeling, and we follow the exact setup of the bin packing research community to *set up the environment to cover real-world cases. Furthermore, our study is specifically designed to improve the robustness of the RL-based policy in a CO problem against the common randomness in the permutation of item sequences, rather than against perturbations added to a robot arm*. We hope this clarifies the scope of our research and the specific problem we are addressing. In addition, to validate the practicality of AR2L in real-world scenarios, we directly evaluated it on the Mixed-item Dataset (MI Dataset) [1], which follows the generation scheme for realistic 3D-BPP instances. Due to limited space, we have included the results on the MI dataset in Table 1 of the submitted PDF file of the global response. Upon analyzing the results, we observe that our AR2L approach outperforms both PCT and RARL across various metrics on the real-world MI dataset.
Q6: Naming of adversarial policy
Thank you for your suggestions regarding the name of the "attacker". We are considering using a more appropriate name to avoid any confusion.
[1] Samir Elhedhli, Fatma Gzara, and Burak Yildiz. Three-dimensional bin packing and mixed-case palletization. INFORMS Journal on Optimization 2019.
---
Rebuttal Comment 1.1:
Title: Comments after rebuttal
Comment: I acknowledge that I have read the authors' rebuttal. I believe it has answered my questions. Overall, I think this is a paper with a sufficient amount of effort.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for taking a careful review of our work, and we will incorporate the suggestions into the revised paper. We would appreciate it if the reviewer could re-evaluate the review score. | Summary: To solve the online 3D Bin Packing Problem (BPP), the authors employ an iterative procedure to search for relevant hybrid dynamics and refine the corresponding policies. By optimizing a weighted sum of returns, the AR2L algorithm, which improves the robustness of the packing policy, achieves a balance between nominal performance and worst-case performance.
Strengths: In the experiments, the authors evaluate the robustness of six heuristic methods. Moreover, this paper presents the exact AR2L algorithm and the approximate AR2L algorithm, both of which show positive results. These two approaches to solving the online 3D Bin Packing Problem have been shown, both theoretically and experimentally, to achieve relatively good results.
Weaknesses: Experimentation is not sufficient. Continuously valued perturbations may not correspond to distributions in the physical world. Is there any difference between how the baselines and the proposed method respond to such continuously valued disturbances? Is there a demo, either in simulation or with a robotic arm, for dealing with extreme situations?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Figure 2, the positions of the same modules should remain unchanged between the two diagrams, and the differences should be highlighted, so as to make the comparison between the exact and approximate AR2L algorithms clear.
2. The relationship between offline and online BPP introduced in the introduction should be clarified; if it is not needed, the discussion of offline BPP should be deleted, because the article does not focus on offline BPP.
3. Please also supplement the experiments mentioned in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Due to the mixture-dynamics attacker, the method introduces additional computation, increasing complexity. The latency of the whole process will also be increased.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewer for providing valuable feedback. We appreciate the reviewer's insights, and we would like to address the concerns raised.
Q1: Illustration of schematic figure
Thank you for your valuable suggestions. We agree that this can make the comparison in Figure 2 clearer, and we appreciate your feedback. We will refine the two schematic diagrams in Figure 2 to better align with your suggestions.
Q2: Offline and online BPP
Thank you for your feedback regarding the introduction of the relationship between offline BPP and online BPP in our paper. We appreciate your input and understand your point that our focus is on the online BPP rather than the offline BPP.
We included a brief introduction to the background of offline BPP as it is the foundation of the online BPP. However, we understand that it may not be directly relevant to the main focus of our paper. Therefore, we will refine this section as per your suggestion to streamline the paper and better align with our focus on the online BPP.
Q3: Experiments on continuously valued perturbations
Thank you for your comments on the experiment section. The main focus of our paper is the *perturbation from the permutation of item sequences in the online 3D-BPP*, and we also considered that adding continuous-valued noise may not correspond to real-world perturbations, given that the state is defined by the bin configuration and the currently observed item sequence. Therefore, we did not conduct experiments on continuously valued perturbations in our paper. However, we recognize the value of such experiments in extending our framework and better understanding the robustness of learning-based policies. As such, we supplement our paper with additional experiments on continuously valued perturbations, as per your suggestion. Please refer to the global response for detailed simulation results.
In this experiment, we examine the performance of three different approaches, namely PPO, RARL, and our AR2L method. We conduct this study in the CartPole environment, where we introduce a continuously valued perturbation to the 'gravity' parameter to investigate robustness. Robustness to changes in the 'gravity' parameter is a well-established aspect of robust RL [1]. During the training phase, RARL trains a worst-case attacker to select a continuously valued perturbation to add to the 'gravity' parameter of the environment. In contrast, AR2L trains a mixture-dynamics attacker to balance the worst-case dynamics derived from the worst-case attacker and the nominal dynamics. In the evaluation phase, we introduce various perturbations to the environment to test the performance of the different approaches across different settings. Please refer to Figure 1 of our submitted PDF file in the global response for the detailed experiment results. Upon analyzing the results, we observe that when larger perturbations (larger absolute x-axis values) are introduced, both AR2L and RARL consistently outperform PPO. However, as the perturbation decreases, RARL's performance shows a significant decline. On the other hand, AR2L's performance lies between that of PPO and RARL, indicating a more balanced and robust performance across varying perturbation levels.
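The evaluation protocol above can be sketched as follows. This is a minimal illustration, not our actual experiment code: `rollout_return` is a hypothetical stand-in for running episodes in the perturbed environment, and all return and sensitivity numbers are made up for demonstration.

```python
def robustness_sweep(policies, perturbations, rollout_return):
    """Evaluate each policy over a grid of continuous perturbations to a
    dynamics parameter (e.g. CartPole's 'gravity'), returning one
    performance curve per policy."""
    return {name: [rollout_return(name, delta) for delta in perturbations]
            for name in policies}

# Hypothetical stand-in for environment rollouts: each policy has a made-up
# nominal return and a made-up sensitivity to the perturbation magnitude
# (a conservative policy degrades slowly but starts lower).
base = {"PPO": 500.0, "RARL": 420.0, "AR2L": 480.0}
sensitivity = {"PPO": 40.0, "RARL": 5.0, "AR2L": 15.0}
curves = robustness_sweep(
    policies=["PPO", "RARL", "AR2L"],
    perturbations=[-4.0, -2.0, 0.0, 2.0, 4.0],
    rollout_return=lambda name, delta: base[name] - sensitivity[name] * abs(delta),
)
```

With these illustrative numbers, both robust policies beat PPO at large perturbations, while at zero perturbation the conservative RARL trails both PPO and AR2L, mirroring the qualitative pattern reported above.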
To further validate the practicality of the AR2L algorithm in real-world scenarios, we directly evaluated our model on the Mixed-item Dataset (MI Dataset), which follows the generation scheme proposed by Elhedhli et al. [2] for a realistic 3D-BPP instance generator. The MI dataset has 10 thousand items of 4668 species, with occurrences varying from 1 to 15. The bin dimensions are set to a size often used in practice: L = 120, W = 100, and H = 100. The results are presented in the table below (or Table 1 in the submitted PDF file of the global response). It is evident that in both settings of $N_B=15$ and $N_B=20$, our AR2L approach demonstrates superior performance compared to PCT and RARL across various metrics on the real-world dataset.
**Table 1**: Algorithm evaluation on real-world MI Dataset
| | - | $N_B=15$ | - | | - | $N_B=20$ | - |
|:----:|:-----:|:--------:|:-----:|:-:|:-----:|:--------:|:-----:|
| | $Uti$ | $Std$ | $Num$ | | $Uti$ | $Std$ | $Num$ |
| PCT | 48.3 | 8.5 | 16.8 | | 48.7 | 10.1 | 16.9 |
| RARL | 48.8 | 8.8 | 16.9 | | 48.8 | 8.3 | 16.9 |
| AR2L | 50.2 | 8.5 | 17.4 | | 52.9 | 6.5 | 18.3 |
Q4: Demo of packing policy
Thanks for your concern regarding the demo of the packing policies. We have created a video that showcases the packing processes of the PCT policy, RARL policy, and AR2L policy in both the worst-case dynamics and the nominal dynamics. To adhere to the double-blind reviewing policy, we have submitted the video link to AC. Additionally, we have included visualized packing results in our Appendix.
Q5: Additional calculations and latency of the algorithm
We thank the reviewer for bringing this up. In our conclusion and limitation section, we noted that the ExactAR2L algorithm introduces additional computational complexity *in the training phase due to the mixture-dynamics attacker*; the resulting robustness comes from this additional training of the attacker. However, in the bin packing research community and in industrial applications, the computational complexity during the inference phase is more important, as we require the packing strategy to efficiently determine the location for each item. From this perspective, AR2L does not introduce additional complexity during inference compared to RARL and PCT.
[1] Panaganti, K., Xu, Z., Kalathil, D., Ghavamzadeh, M. (2022). Robust reinforcement learning using offline data. Neurips 2022
[2] Samir Elhedhli, Fatma Gzara, and Burak Yildiz. Three-dimensional bin packing and mixed-case palletization. INFORMS Journal on Optimization 2019. | Summary: This paper investigates the online three-dimensional bin packing problem (3D-BPP) and extends the PCT algorithm by proposing an adjustable robust reinforcement learning (AR2L) framework that balances the performance of policies in average and worst-case scenarios. The paper designs a permutation-based adversary and introduces an objective function that combines expected return and worst-case return with weights; a lower bound is derived to guide policy learning. The paper presents two algorithms to implement the AR2L framework: an exact one that requires training a hybrid-dynamics adversary, and an approximate one that samples from the original dynamics and the permutation-based adversary. Experiments are conducted in both discrete and continuous settings, demonstrating the effectiveness and superiority of the AR2L framework.
Strengths: 1. The problem addressed in the paper is highly significant. The authors extend the state-of-the-art PCT algorithm by optimizing its worst-case performance through an adversarial approach. I also have a strong impression of the PCT work, as it intuitively and effectively solves the important online bin packing problem. It is great to see the functionality of PCT being further enhanced through this work.
2. I appreciate the organization of the paper. The authors demonstrate a strong understanding of packing and reinforcement learning, supporting each argument with data or theory. The paper is clear, understandable, and reasonable, making it enjoyable to read.
3. The proposed approach is clever. The authors introduce a novel robust reinforcement learning framework, where a permutation-based adversary is designed to generate worst-case problem instances. This is an innovative and practical method. Additionally, the paper derives a lower bound for an objective function and utilizes it to guide policy learning, providing a theoretical contribution.
4. The paper presents two algorithms to implement the AR2L framework, one exact and one approximate, catering to different scenarios and requirements.
5. The paper conducts extensive experiments, comparing the AR2L framework with multiple packing and benchmark methods, demonstrating its advantages in terms of average performance and robustness.
Weaknesses: The paper does not provide guidance or suggestions on how to choose the hyperparameters α and ρ. I noticed that the optimal values of α vary when β changes in the best-performing AR2L algorithm. Perhaps this could be considered as future work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. If I understand correctly, in Table 2 of the main text and Table 2 in the appendix, each trained policy is still tested using data generated by the attacker (as the performance of the baseline algorithms varies with different beta values, indicating a change in the tested data). So, if we remove the attacker and everyone uses the same data for testing, what would be the performance of AR2L? I am very interested in the practical implications of packing policies and would like to know if the packing policy learned through adversarial training still has advantages when tested on normal data (e.g., object sizes following a uniform distribution). This inquiry is purely out of curiosity and will not affect my scoring.
2. In the appendix, the authors mention that they failed to reproduce some PCT results. Based on my own attempt, I believe PCT is quite reliable. I suggest that the authors contact the PCT authors as they may be able to assist in successfully reproducing the results. It is possible that there are some issues related to the usage or implementation, as packing involves various complex settings that require consideration of multiple aspects, such as discrete domain, continuous domain, stability checks, and data types. The functionality of the code can be quite complex. I believe the authors of PCT will be willing to help you successfully reproduce the results, and I have sent them a message to notify them and draw their attention to possible coming inquiry emails.
3. A quick question: I would like to reproduce the results mentioned in the paper. In the code provided, what does "bal_policy" represent? Does the code refer to the code for ExactAR2L or ApproxAR2L, or is there an option to switch between the two algorithms?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper only considers one type of permutation-based attacker and does not explore other possible attack methods, such as adding or removing items, modifying the size or shape of items, etc.
----------------------After Rebuttal----------------------
The reviewer appreciates the authors' response. However, the reviewer is still considering whether it is necessary to propose a new candidate generation method. During the rebuttal period, the reviewer downloaded the official implementation of PCT; the final test results were even higher than those reported in the PCT paper. Since the reviewer does not see a direct response to this in the rebuttal and still holds this doubt, the reviewer intends to slightly lower the rating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for recognizing our contribution in developing the permutation-based attack method and the adjustable robust reinforcement learning algorithm. We are grateful for the reviewer's valuable feedback, and we would like to address their concerns as follows.
Q1: The settings of hyperparameters
Thank you for raising the concern about the hyperparameters in our AR2L framework.
Based on observations from Table 2, the larger value of $\alpha$ (i.e., $\alpha=1$) is the best choice for ExactAR2L across different test settings. In tasks where $\beta=50, 75$, ExactAR2L with $\alpha=1$ performs the best compared to the baseline methods and ExactAR2L with other values of $\alpha$ (with a slight drop compared to ExactAR2L with $\alpha=0.7$ in the setting of $\beta=50, N_B=15$). If $\beta=100$, ExactAR2L with $\alpha=1$ can still produce competitive results compared to RARL and significantly outperforms PCT. In the task of $\beta=25$, although $\alpha=1$ is not the optimal choice for ExactAR2L, ExactAR2L with $\alpha=1$ can still outperform other baselines. In the task of $\beta=0$, ExactAR2L with $\alpha=1$ significantly outperforms RARL, and the slight drop of ExactAR2L with $\alpha=1$ compared to PCT is acceptable, as our goal is to improve the robustness while maintaining average performance at an acceptable level.
The parameter $\rho$ is only used in ApproxAR2L. As shown in Figures 3(c) and 3(d), we chose different values of $\rho = 0.1, 0.2, 0.3, 0.4$ in different settings of $N_B$ and $\beta$. We found that $\rho=0.1$ is a trustworthy choice for ApproxAR2L. Based on the observations from Table 2 and Figures 3(c) and 3(d), we conclude that $\rho=0.1$ and $\alpha=0.5$ is the best choice for ApproxAR2L, as it outperforms its corresponding baseline (RfMDP) in almost all the tasks. In the main text, we will justify more on the choices of hyperparameters, and will also investigate a more integrated way for adjusting these parameters in robust reinforcement learning settings.
Q2: The setting of testing data and testing on normal data
Thank you for your insightful feedback regarding the experiment section. In Table 2, the column for $\beta=0$ indicates that all the algorithms were exposed to the same dataset from the nominal dynamics, while for other $\beta$ values, we construct mixture datasets by randomly selecting $\beta\%$ nominal box sequences and reordering them using the learned permutation-based attacker for each packing policy. We observed that in all tasks where $\beta=0$, ExactAR2L with $\alpha=1$ outperformed RARL, as RARL was too conservative to handle instances from the nominal dynamics. When compared to PCT, ExactAR2L with $\alpha=1$ showed a slight drop in space utilization in the $\beta=0$ tasks. This was because ExactAR2L had to consider instances from both the nominal and worst-case dynamics. If the deviation between the perturbed problem instance distribution and the nominal distribution was too large, the performance of ExactAR2L in the nominal dynamics would be degraded. However, when such deviation was constrained in an acceptable range, the training instances were diversified due to the mixture-dynamics attacker, improving the generalizability of the packing policy and resulting in better performance compared to PCT.
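The construction of the mixture test sets described above can be sketched as follows. This is a simplified illustration: `attacker_reorder` stands in for the learned permutation-based attacker, which in the toy usage below is replaced by a simple sequence reversal.

```python
import random

def make_mixture_testset(nominal_sequences, attacker_reorder, beta, seed=0):
    """Build a test set in which beta% of the nominal box sequences are
    reordered by the permutation-based attacker, and the rest are kept
    as drawn from the nominal dynamics."""
    rng = random.Random(seed)
    sequences = [list(seq) for seq in nominal_sequences]
    n_attacked = round(len(sequences) * beta / 100)
    attacked = set(rng.sample(range(len(sequences)), n_attacked))
    return [attacker_reorder(seq) if i in attacked else seq
            for i, seq in enumerate(sequences)]

# Toy usage: integer item ids, with reversal playing the attacker's role.
nominal = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
mixture = make_mixture_testset(nominal, attacker_reorder=lambda s: s[::-1],
                               beta=50)
```

With `beta=50`, exactly half of the sequences are reordered; with `beta=0`, the test set is identical to the nominal one, matching the $\beta=0$ column of Table 2.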
Q3: Regarding the setting and implementation of PCT
Thank you for expressing your concerns regarding the implementation of the baseline method PCT. In our attempts to reproduce it, we found that we could only obtain good results in the discrete setting with the default hyperparameter settings. Despite trying different hyperparameter configurations, we failed to reproduce the results reported in PCT for the continuous setting. Therefore, we decided to propose a new heuristic method, the Intersection Point heuristic, to generate candidate locations in the continuous setting; please refer to Appendix 1.4 for more details. This approach provided satisfactory results for us. We appreciate your suggestions and will contact the authors of PCT to reproduce their results.
Q4: Code naming
We apologize for the confusion caused by the naming conventions used in our code. To clarify, in our code, "BPP_policy" refers to the packing policy, "adv_policy" corresponds to the permutation-based attacker used for the worst-case dynamics, and "bal_policy" corresponds to the mixture-dynamics attacker. Thank you for asking about reproducing our work.
Q5: Implementation of other possible attacks
Thank you for expressing your interest in extending our framework. The main focus of our paper is robustness against permutation-based attackers; in practice, such permutations do occur and must be considered to improve algorithm robustness. Other types of attack methods, as you suggested, could be explored in future work to study the extensibility of our method.
---
Rebuttal 2:
Comment: We appreciate the feedback provided in the review. In fact, we found that the official implementation of the baseline method PCT was just updated 3 weeks ago. Consequently, we were only able to download the older version of PCT before the Neurips deadline. When attempting to reproduce PCT following the official instructions, we did not achieve satisfactory performance in the continuous setting. As a result, we made the decision to use a new heuristic method for candidate generation in our experiments.
In addition, we would like to clarify that our study specifically focuses on the robustness of the packing policy, rather than exploring alternative candidate generation methods. Therefore, to maintain a fair comparison in the continuous setting, we ensured that the same heuristic method was used for generating all candidate positions across all approaches.
We appreciate the reviewer’s thorough understanding and insightful comments regarding our method. We are grateful for the valuable feedback provided, and will emphasize in the revised paper about the settings of the candidate generation method. If there are any additional concerns or questions, we are more than happy to provide further explanations and address them to the best of our abilities.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. As I am interested in the packing problem, I hope to avoid unnecessary complexities that might confuse the community. I took the initiative to reach out to the authors of the PCT paper in hopes that they could provide assistance and update their usage documentation. However, in the authors' response to this article, I did not observe them presenting any persuasive experimental results, which is why I lowered my rating.
Nonetheless, because I still find great value in this paper, ensuring the worst-case performance for the packing problem is meaningful. I acknowledge the authors' efforts; however, I urge them to thoroughly replicate the performance of existing methods in the revision. Based on these conditions, I am inclined to raise my recommendation to a score of 7.
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate the reviewer’s insightful comments and valuable feedback on our work. Your concerns and suggestions are highly valued and contribute to the packing problem research community. We acknowledge the importance of thoroughly reproducing the performance of the baseline methods using the latest official implementation, as you have recommended. In the revision, we will carefully address this concern and ensure accurate and up-to-date comparisons with the baseline methods. We are grateful for your valuable feedback, which has helped us improve the quality of our work. | Summary: This work addresses the problem of 3D bin packing problem (3D-BPP). Specifically, it develops a permutation-based attacker and subsequently proposes an adjustable robust reinforcement learning (AR2L) framework. This allows an algorithm to consider both the average and worst-case performance with the attacker, where the packing objective is a weighted sum of expected and worst-case returns over the space utilization rate. The proposed framework is integrated with prior work to develop the exact and approximate variant of the proposed AR2L.
--------------------------
I acknowledge the author's effort in the rebuttal and have made changes to the review accordingly.
Strengths: + This work proposes to train a novel permutation-based attacker (an RL-based policy) that can reorder the sequence of observed items to reduce the space utilization of bin packing tasks. The permutation-based attacker is used to quantitatively evaluate a given algorithm's performance under the worst-case scenario. The empirical results offer new insights into the robustness of existing works.
+ This work develops an adjustable robust reinforcement learning (AR2L) approach, which learns a packing policy based on both average and worst-case scenarios. This is achieved via a surrogate problem in which an optimal policy can be identified through the maximal lower bound.
+ Two implementations of the AR2L algorithm are presented. The exact AR2L algorithm achieves better worst-case performance under the worst-case attack, at the cost of additional computation for the entire framework. The approximate AR2L algorithm, on the other hand, introduces estimation errors at the cost of a performance drop. Both variants outperform their corresponding baseline methods.
Weaknesses: - This work shows a comprehensive evaluation with different combinations of the $\alpha$, $\beta$, and $N_B$ parameters. While this provides valuable insights, it is unclear what the most optimal configuration would be in real-world deployment, since conditions could vary across tasks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In eqn 2, should the last term be \alpha d( P^m || P^w )?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No immediate limitations or impact raised from the novelties from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to the reviewer for acknowledging our contribution in developing the permutation-based attack method and the adjustable robust reinforcement learning algorithm. We are grateful for the feedback provided by the reviewer, and below, we address these concerns in detail. We also provide additional experimental validations and further explanations of the settings in the global response.
Q1: Configurations of parameters and variables
The variables $\beta$ and $N_B$ are used to indicate and evaluate the varying difficulty levels of problem instances used in the testing phase. Therefore, $\beta$ and $N_B$ do not need to be tuned for best performance, and here we detail how $\alpha$ affects the policy's performance. Based on observations from Table 2, the larger value of $\alpha$ (i.e., $\alpha=1$) is the best choice for ExactAR2L across different test settings. In tasks where $\beta=50, 75$, ExactAR2L with $\alpha=1$ performs the best compared to the baseline methods and ExactAR2L with other values of $\alpha$ (with a slight drop compared to ExactAR2L with $\alpha=0.7$ in the setting of $\beta=50, N_B=15$). If $\beta=100$, ExactAR2L with $\alpha=1$ can still produce competitive results compared to RARL and significantly outperforms PCT. In the task of $\beta=25$, although $\alpha=1$ is not the optimal choice for ExactAR2L, ExactAR2L with $\alpha=1$ can still outperform other baselines. In the task of $\beta=0$, ExactAR2L with $\alpha=1$ significantly outperforms RARL, and the slight drop of ExactAR2L with $\alpha=1$ compared to PCT is acceptable, as our goal is to improve the robustness while maintaining average performance at an acceptable level.
The parameter $\rho$ is only used in ApproxAR2L. As shown in Figures 3(c) and 3(d), we chose different values of $\rho = 0.1, 0.2, 0.3, 0.4$ in different settings of $N_B$ and $\beta$. We found that $\rho=0.1$ is a trustworthy choice for ApproxAR2L. Based on the observations from Table 2 and Figures 3(c) and 3(d), we conclude that $\rho=0.1$ and $\alpha=0.5$ is the best choice for ApproxAR2L, as it outperforms its corresponding baseline (RfMDP) in almost all the tasks.
We would also like to note that the AR2L algorithm is practical in real-world scenarios. We directly evaluated our model on the Mixed-item Dataset (MI Dataset), which follows the generation scheme proposed by Elhedhli et al. [1] for a realistic 3D-BPP instance generator. The MI dataset has 10 thousand items of 4668 species, with occurrences varying from 1 to 15. The pallet dimensions are set to a size often used in practice: L = 120, W = 100, and H = 100. The results are presented in the table below (or Table 1 in the submitted PDF file of the global response). It is evident that in both settings of $N_B=15$ and $N_B=20$, our AR2L approach demonstrates superior performance compared to PCT and RARL across various metrics on the real-world dataset.
**Table 1**: Algorithm evaluation on real-world MI Dataset
| | $N_B=15$: $Uti$ | $Std$ | $Num$ | $N_B=20$: $Uti$ | $Std$ | $Num$ |
|:----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| PCT | 48.3 | 8.5 | 16.8 | 48.7 | 10.1 | 16.9 |
| RARL | 48.8 | 8.8 | 16.9 | 48.8 | 8.3 | 16.9 |
| AR2L | 50.2 | 8.5 | 17.4 | 52.9 | 6.5 | 18.3 |
Q2: Typo in Equation
We apologize for the typo in Equation (2), where the second term on the RHS should be $(d(P^{m}||P^{o}) + \alpha d(P^{m}||P^{w}))$. We will also correct the references to equations in our paper. We thank the reviewer for bringing up these concerns and questions, and we hope our explanations have addressed them.
[1] Samir Elhedhli, Fatma Gzara, and Burak Yildiz. Three-dimensional bin packing and mixed-case palletization. INFORMS Journal on Optimization 2019.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the rebuttal and the authors have responded to my concerns. | Rebuttal 1:
Rebuttal: Dear Reviewers:
We appreciate your valuable comments and have made clarifications to all of your questions and concerns in our response. Below are some shared concerns among reviewers.
Q1: Practicability and generalizability of AR2L
To validate the practicality of AR2L in real-world scenarios, we directly evaluated our model on the Mixed-item Dataset which generates the realistic 3D-BPP instance. To assess the generalizability of AR2L, we applied AR2L to the CartPole environment. The results are presented in the submitted PDF file of the global response.
In addition, to adhere to the double-blind reviewing policy, we have submitted the link of the video demo of the packing process to AC.
Q2: Contributions of our work
In our work, we address the challenges posed by the randomness in the permutation of item sequences, which is widespread in the online bin packing problem (BPP). To tackle this issue, we introduce a permutation-based attacker with limited capabilities. This attacker aligns with practical considerations and our research goals. To enhance the robustness of the packing policy while maintaining performance in nominal cases, we propose the AR2L algorithm based on the general theorems we derived. AR2L avoids over-prioritizing worst-case scenarios while still accounting for robustness. Our approach not only solves the BPP but can also extend to other, more general RL problems.
Q3: Performance under $\beta=0$ and $\beta=100$
The AR2L policy is trained on instances from both the worst-case and nominal dynamics. When evaluating the scenario of $\beta=100$, due to the deviation between the data distributions used for training and testing, AR2L may not consistently outperform RARL. However, the objective of our paper is to *strike a balance between the policy's performance in average and worst-case environments*. As such, for $\beta=0, 25, 50, 75$, AR2L outperforms RARL, demonstrating its superiority in achieving such a balancing goal.
The PCT policy is trained on instances from nominal dynamics, but it is important to note that random item permutation in the nominal dynamics can naturally produce some challenging instances. AR2L can diversify the training data by providing more challenging instances with various patterns. If instances are only generated from the nominal dynamics, challenging instances may be overwhelmed by moderate instances. By training on the diversified instances from AR2L, the policy can improve generalization, and can effectively handle both moderate and challenging instances from the nominal dynamics ($\beta=0$). However, we also recognize that larger $\alpha$ and $N_B$ may cause a large deviation between the data distributions. But AR2L still demonstrates its superiority in the cases of $\beta=25, 50, 75, 100$. It is important to keep in mind the "No Free Lunch Rule".
Q4: Discussions about $Std$ and $Num$.
As shown in Table 2, ExactAR2L demonstrates its superiority over PCT with smaller Std in 17 tasks, while producing a slightly larger Std in 3 tasks. Thus, ExactAR2L can indeed improve the robustness. Additionally, we observe that when $N_B=5, 10$, ExactAR2L tends to choose $\alpha=0.7$ for smaller Std, and when $N_B=15, 20$, ExactAR2L favors $\alpha=1$. Since ExactAR2L is trained on both nominal and worst-case dynamics, while RARL is trained only on the worst-case dynamics, the ExactAR2L policy is less conservative than the RARL policy. While this conservativeness may result in smaller Std in most tasks, it produces worse Uti results under nominal dynamics. It is worth noting that the value of Std from ExactAR2L is the closest to that of RARL. This observation shows ExactAR2L can trade off between conservative and risky behavior, as the Std from ExactAR2L is between that of RARL and PCT. Similarly, ApproxAR2L is less conservative than RfMDP, which is why ApproxAR2L cannot achieve a smaller Std in all tasks.
We use ExactAR2L(1) to denote ExactAR2L with $\alpha=1$. The ExactAR2L algorithm can pack more items in 17 tasks compared to PCT, and shows a slight drop in 3 tasks, where the average drop is 0.2. We found that to pack more items, ExactAR2L consistently favors $\alpha=1$ across various tasks. Compared to RARL, ExactAR2L(1) can pack at least the same number of items in 16 tasks. Thus ExactAR2L(1) can produce competitive results compared to RARL and PCT in terms of Num. Compared to the baseline method RfMDP, ApproxAR2L(0.5) can pack more items in 16 tasks, and shows a slight drop in only 4 tasks, where the average drop is 0.25. In the revised paper, we will take the reviewer's suggestions and provide more detailed and comprehensive discussions.
Q5: Hyperparameter selection
$\beta$ and $N_B$ are used to indicate the varying difficulty levels of data used in the evaluation. Thus, they do not need to be tuned for best performance. Based on observations from Table 2, $\alpha=1$ is the best choice for ExactAR2L across different test settings. When $\beta=50, 75$, ExactAR2L(1) performs the best compared to baselines and ExactAR2L with other values of $\alpha$ (with a slight drop compared to ExactAR2L(0.7) when $\beta=50, N_B=15$). If $\beta=100$, ExactAR2L(1) can still produce competitive results compared to RARL and significantly outperforms PCT. When $\beta=25$, although $\alpha=1$ is not the optimal choice, ExactAR2L(1) can still outperform other baselines. When $\beta=0$, ExactAR2L(1) significantly outperforms RARL, and the slight drop compared to PCT is acceptable, as our goal is to improve the robustness while maintaining average performance at an acceptable level.
$\rho$ is only used in ApproxAR2L. As shown in Figures 3(c), 3(d), we chose different values of $\rho$ in different settings. We found that $\rho=0.1$ is a trustworthy choice for ApproxAR2L. Based on the observations from Table 2 and Figures 3(c), 3(d), we conclude that $\rho=0.1$ and $\alpha=0.5$ are the best choice for ApproxAR2L, as it outperforms its corresponding baseline in almost all the tasks.
Pdf: /pdf/767bb5fd4ca13a6349ec2a98d38972e0449d2c0b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper is in the category of work that uses reinforcement learning for combinatorial optimization problems. More specifically, it focuses on the online version of the 3d-bin-packing problem (3D-BPP), and proposes a robust RL solution to address the uncertainty that arises from the permutation of an adversarial nature. At the core of their solution, Adjustable Robust Reinforcement Learning (AR2L), is 1) a new attacker that is based on permutation of the observable sequence of the items, 2) a new objective function that adjusts weights between nominal performance and worst-case performance, and 3) theories and algorithms that solve the new optimization.
Strengths: + RL for combinatorial optimization is an interesting and trending topic. It can potentially draw interest from the RL/OR community.
+ Robustness in 3D-BPP is a practical and important issue.
+ Permutation-based attacker makes sense.
+ The adjustable objective makes sense.
+ The evaluation of robustness for existing methods is laudable.
+ Connections of AR2L to RARL and RfMDP are interesting.
Weaknesses: - The attacker capacity seems questionable. Because of the following reasons, the problem setting seems rather impractical. These make the formulation, theorems, and evaluation somewhat trivial.
1) If you formulate the nature as an adversary, then the nature should have control over the permutation of the entire sequence of items that are not presented to the packing policy, not just the observable ones.
2) Also, it seems that the evaluation assumes the packing policy only observes one item. This is a bit weird because in this case the attacker does not have any capacity.
3) Moreover, the attacker's policy is to change only one item to the most preceding item, but the attacker should be able to change the entire permutation. This is really how the attacker makes the training robust, and evaluations should be conducted against such an attacker.
4) Finally, the setting that the sequence of the observable items can be changed is also weird. This is because the packing agent/policy has already seen the set of observable items; you cannot suddenly change them without notifying the packing agent.
- The paper, along with its formulation, seems to be too much restricted to robustness to permutation-based attack in 3D-BPP. Do the theories/results generalize to other settings? There is a lack of discussion from that aspect. This makes the technical contribution rather limited. The paper would be stronger if it can shed light on how the algorithms are generally applicable to more combinatorial optimization problems with uncertainty.
- Theorem 1: $\gamma$ is not explained -- is it the discount factor? Isn't it assumed to be 1 in the Preliminaries section? If so, the denominator is 0. At least $1-\gamma$ is a very small value, making the second term on the RHS of Equation (2) dominate the entire RHS. This significantly restricts the meaningfulness of Theorem 1 (a major theoretical contribution). It seems that the objective in Equation (3) makes sense by itself, but without Theorem 1 this is only a heuristic. Also, why the coefficients are thrown out in Equation (2) and turned into Equation (3) was not explained.
- The empirical results are not convincing
1) it looks like that the average performance of AR2L is outperformed by the baselines (esp. RARL) in many cases (esp. in the most challenging setting where $\beta=100$. It is surprising that PCT is not the best even for the nominal setting ($\beta=0$). These have not been discussed and explained.
2) Also, although the paper is about robustness, it primarily focuses only on average performance (Uti); discussions of Std are limited even though the results are shown.
3) I was expecting to see some guidelines/lessons-learned on what values I should take for the hyperparameters $\alpha$ and $\rho$, but did not see any. If I were to use the algorithms, how do I choose them? It seems that if I choose the wrong ones, I might end up with a solution that is worse than the baselines. The mysterious performances regarding hyperparameters are very unconvincing.
Minor issues:
* References to equations should be Equation (1), instead of 1. See e.g., Line 205.
* A typo at the end of Equation (2). $P^0$ -> $P^w$.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See questions in the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We appreciate your valuable feedback. However, we would like to clarify that there may be some misunderstandings regarding our settings. We would like to take this opportunity to address your concerns. Some shared concerns about AR2L's empirical performance are additionally addressed in the global author rebuttal, with additional results on broader RL tasks.
Q1: The settings of attacker
**Adversary setup**
We thank the reviewer for asking about the attacker capability. In the online bin packing, we assume two crucial components can control the entire item permutation, namely the item distribution and the permutation of the observable items. Since these two components are included in the environment for the packing policy, the attacker aims to change the environment parameter/setting by permuting the observable items. This setting aligns with RARL, which investigates the robustness against environment parameter changes.
Most importantly, we want to emphasize that the permutation-based attacker's objective is to identify challenging instances by solely reordering the item sequence without selectively favoring specific types of items. But upon analyzing Figure 1 in the Appendix, we observed that as the number of observable items increases, the attacker appears to prefer smaller items to make the instance harder. So if we do not limit the attacker's capabilities, it may trickily select certain types of items to construct harder instances, which goes against our intended goal.
**Attacker capacity**
In section 5.1, it is important to note that the evaluated packing methods except PCT, can only observe the first item at each step. To ensure a fair comparison for the robustness evaluation, we set the number of observable items for packing policies to 1. To make the attacker effectively perturb the packing policy, *we allow the attacker to observe more items (i.e. $N_B=5, 10, 15, 20$)*, which is also aligned to changing the environment parameter/setting. Our evaluation ensures a fair comparison for the robustness studies, while also recognizing the attacker's need for more information about the item sequence to effectively perturb the packing policy in this scenario.
In section 5.2, we use PCT as the packing policy in all the algorithms, which allows the *packing policy to observe multiple items at each step*. Therefore, we set the number of observed items for both the attacker and the packing policy to be the same. It still provides a fair comparison for the robustness evaluation. In addition, although the attacker does not obtain more information compared to the packing policy, they can still effectively perturb the packing policy.
**Attacker's permutation**
We thank the reviewer for bringing up this setup issue. In our designed attacker, the entire permutation is progressively changed as the time step increases. At each step, the attacker chooses to move one item from the observed items to the most preceding position. Since the attacker aims to choose the item with the highest potential to decrease the space utilization, *the entire permutation is actually altered throughout the attack*. By contrast, in the unperturbed item sequence, each item is randomly sampled and does not reduce the space utilization adversarially. Thus, the packing policy can learn to improve its robustness in the presence of item sequences progressively generated by the attacker.
**Sequence of observed items**
As we mentioned earlier, the environment consists of both the item distribution and the permutation of observable items. The attacker first permutes the item sequence to change the environment, aligned to RARL. Then the packing policy packs the first item in the reordered item sequence.
Based on these justifications, our setting of the permutation-based attacker with limited observable items aligns with practical considerations and our intended objectives. This ensures that our formulation and theorems are reasonable. By conducting a fair comparison in our evaluation, we verify the value and effectiveness of our work. It is important to note that our study provides the first comprehensive analysis of adjustable robustness by attacker training and integration, which is a nontrivial setup.
Q2: The applicability of AR2L.
Theorem 1 is general and derived from the standard MDP, and the adjustable Bellman operator in Theorem 2, is also a general operator like the robust Bellman operator. We are confident that our framework can be extended to other combinatorial optimization problems with uncertainty and applied to various RL problems beyond the scope of this study (please see global response).
Q3: Theorem 1's parameter.
We thank the reviewer for pointing this out. $\gamma$ is the discount factor in Theorem 1. And we want to emphasize that Theorem 1 is a general theorem. In our paper, it provides insight and guidance for estimating the lower bound of the general objective in Equation 1. Specifically, Theorem 1 suggests that to increase the lower bound of our objective, it is an option to increase $\eta(\pi, P^{m})$ and $-(d(P^{m}||P^{o}) + \alpha d(P^{m}||P^{w}))$. Based on this analysis, we propose the new objective in Equation 3. We believe that this operation and objective transformation, guided by the insight from Theorem 1, is reasonable and commonly used in the field of RL. For example, in TRPO, the use of a similar coefficient can result in very small update step sizes, which is why they use the trust region constraint and discard the coefficient for practical implementation.
Q4: Performance evaluation.
We apologize that due to the limited space we include the discussions about performance comparisons, metrics of $Std$ and $Num$, and the selection of hyperparameters to the global response. Please refer to the global response for details.
### Minor Issues
We apologize for the typos and incorrect equation references in the draft, and will revise them accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the responses. However, I am not entirely convinced on the issues around the attacker's capacity, the empirical evaluation, and the hyperparameters.
In terms of the attacker's capacity, e.g., why doesn't the attacker permute the entire sequence of observable items instead of just putting one item ahead of the others? Although eventually the attacker will change the order of the entire sequence via multiple steps, this limits the power of the attacker. Also, why doesn't the attacker have the capacity of changing the item distribution (like Kong et al.)?
In terms of the empirical evaluation. E.g., it is not convincing why PCT is outperformed by AR2L even when $\beta=0$. Does this imply that the baselines are weak in the first place? Also, the std of AR2L(1) is larger than RARL in many cases -- this is counter-intuitive since RARL focuses on the worst-case, it should have higher stds.
In terms of the hyperparameters, how about $\alpha>1$? $\alpha=1$ is the largest value being tested, but it seems that larger $\alpha$'s may yield even better performances. But this may indicate that the "robustness" does not matter that much and the algorithm reduces to maximizing the nominal objective.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We sincerely appreciate your valuable feedback. We would like to provide further clarification and address these concerns.
Q1: The settings of attacker
**Moving one item**
From the perspective of the practical setting of online BPP, the packing policy can observe multiple items, but it can only pack the first item into the container. Let's consider a scenario where the attacker permutes the entire sequence of observable items at timestep $t$. In this case, regardless of the permutation applied to the remaining items at timestep $t$, at timestep $t+1$, the remaining items will combine with a new incoming item to form a new sequence that will be further permuted. Therefore, any permutation of the remaining items at timestep $t$ will be ignored and subsequently permuted with the new incoming items.
From the perspective of the influence on the packing policy, let's consider a simplified value function $V(C_{t}, B_{t}) = r_t + V(C_{t+1}, B_{t+1})$, where $C_{t}$ and $B_{t}$ represent the bin configuration and the permuted observable item sequence, and $r_t$ is the reward. It tells us that the value of the packing policy at timestep $t$ depends on both the reward $r_t$ and the value function $V(C_{t+1}, B_{t+1})$. And $r_t$ and $C_{t+1}$ are influenced by the first item in $B_{t}$. The remaining items in $B_{t}$ will combine with a new item to form a new sequence, which will undergo further permutations as $B_{t+1}$. Consequently, the permutation of the remaining items at timestep $t$ will be disregarded in the attack process.
From the perspective of the implementation, if we allow the attacker to permute the entire sequence of observable items, the action space will grow exponentially as the number of observable items increases. This large search space may have a detrimental effect on the convergence of the RL-based attacker.
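The move-one-to-front action and the action-space comparison above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the names `attacker_step` and `buffer` are assumptions:

```python
import math

def attacker_step(observable, action):
    """Move the item at index `action` to the front of the observable buffer.

    This mirrors the one-item-to-front action described in the rebuttal: the
    attacker's per-step action space has size N_B, versus N_B! if it could
    apply an arbitrary permutation to the whole observable sequence.
    """
    items = list(observable)
    chosen = items.pop(action)
    return [chosen] + items

# Toy example: the attacker fronts one item; applied over many timesteps,
# the ordering of the whole sequence is altered even though each action is local.
buffer = ["a", "b", "c", "d", "e"]      # N_B = 5 observable items
reordered = attacker_step(buffer, 3)    # "d" is packed first at this step

# Action-space sizes for N_B observable items
n_b = 5
local_actions = n_b                       # move-one-to-front
full_permutations = math.factorial(n_b)   # grows factorially with N_B
```

The factorial gap is the implementation argument above: for $N_B=20$ a full-permutation attacker would face an action space of $20! \approx 2.4 \times 10^{18}$, while the move-one-to-front attacker keeps it at 20.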
**Item distribution**
In our paper, we specifically focus on the impact of item permutation on the packing policy, rather than the item distribution. To ensure that our study remains focused on the randomness introduced by item permutation, we design the permutation-based attacker, which allows us to avoid any influence stemming from changes in the item distribution. Furthermore, as mentioned in PCT, larger items can simplify the scenario, while smaller items can trickily make the task more challenging. Therefore, we aim to prevent the attacker from simply learning to selectively favor certain types of items to create challenging instances. Instead, we want the attacker to genuinely learn how to permute the item sequence in order to identify and create challenging scenarios. Hence, we have designed an attacker that is unable to alter the item distribution in order to maintain the integrity of our study.
Q2: The evaluation
**The performance of $\beta=0$**
We would like to emphasize that there are inherently challenging instances present in the nominal dynamics ($\beta=0$). Therefore, one approach to improving performance when $\beta=0$ is to enhance the performance specifically on those challenging instances while maintaining performance on others. When using a smaller value of $N_B$, the distribution of training data (mixture instances) and testing data ($\beta=0$) exhibits less deviation. This enables the policy to effectively handle both challenging instances and moderate instances, thereby improving generalization. However, when $N_B=20$, the substantial deviation between the distributions can negatively impact performance when testing on $\beta=0$. And we have confidence in the strength of the baseline methods chosen due to the fair experimental comparison settings.
**Standard Deviation**
We would like to clarify that in a high uncertainty environment, such as online 3D BPP, robustness specifically refers to the ability of a policy to consistently perform well despite the presence of uncertainties. It is intuitive that the RARL policy will consistently exhibit a conservative behavior, regardless of the problem instances provided. This conservative behavior ensures a consistent performance and reduces the variance of the policy. On the other hand, the AR2L(1) policy demonstrates a less conservative behavior, resulting in relatively larger variance.
Q3: The hyperparameter
We have the flexibility to set $\alpha > 1$. However, based on observations from our empirical study, we found that larger $\alpha$ can actually lead to a degradation in performance in the nominal dynamics. This suggests that AR2L with larger $\alpha$ tends to overprioritize worst-case scenarios, similar to previous robust methods. Given that our objective is to achieve a desired balance between performance in the nominal and worst-case dynamics, it is not recommended to set $\alpha > 1$. Doing so may result in an excessive focus on worst-case performance, compromising the overall performance in the nominal dynamics. | null | null | null | null | null | null |
Exploring the Optimal Choice for Generative Processes in Diffusion Models: Ordinary vs Stochastic Differential Equations | Accept (poster) | Summary: The paper studies diffusion models, which comprise a class of generative models based on stochastic differential equations. In diffusion models, data is first transformed into Gaussian noise via a stochastic differential equation (usually an Ornstein-Uhlenbeck process), and then a backward process, which turns noise into synthetic data, is learned via the score function. In most works, the diffusion coefficient of the backward process is either chosen to be zero, or to be a constant matching the diffusion term of the forward. The first case is an ODE implementation, while the second case is an SDE corresponding to the exact time reversal of the forward process. However, any function of time can be chosen for the diffusion coefficient of the backward process, and in this work the authors explore the consequences of different choices of diffusion coefficient. In particular, they ask how the optimal choice of diffusion coefficient depends on the error induced by the estimated score function.
The main results of the paper are as follows. Let $h$ denote the diffusion coefficient of the backward process. Assume that the score function estimator has the form $s_t(x)= \nabla \log p_t(x) + \epsilon E_t(x)$, where the first term is the exact score function and the function $E_t(x)$ is a bounded error function. Let $\hat{q}_T$ denote the law of the backward process at time $T$ with $s_t(x)$ in the place of the true score function. Then under certain assumptions on $p_0$, it holds that $$ KL(p_0 \parallel \hat{q}_T) = L(h) \epsilon^2 + O(\epsilon^3),$$ where the leading order term $L(h)$ depends on $h$, $p_0$, $E_t(x)$ and $T$. The leading order term $L(h)$ is computed explicitly, which allows the authors to analyze several specific cases when $h$ is constant. In particular, they show that if the error $E_t(x)$ is time-localized (i.e., only nonzero at $t=s$ for some $s > 0$) in the middle of the time interval, the leading order term $L(h)$ decays exponentially fast to zero as $h \rightarrow \infty$. This suggests that the SDE implementation can significantly outperform the ODE implementation in some cases. They also show that for certain forms of the error function, the opposite is true: the ODE implementation has better KL divergence error than the SDE implementation.
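The family of generative processes parameterized by $h$ can be illustrated with a toy 1D sanity check. A common parameterization (assumed here, not necessarily the paper's exact convention) is $dY_s = \big[-f(Y) + \tfrac{g^2 + h^2}{2}\,\nabla\log p(Y)\big]\, ds + h\, dW_s$: $h=0$ gives the probability-flow ODE and $h=g$ the exact time reversal. For the stationary OU forward process $f(x)=-x$, $g=\sqrt{2}$ with $X_0 \sim N(0,1)$, the marginal is $N(0,1)$ at all times and the exact score is $-y$, so any $h$ should leave the Gaussian invariant. A minimal sketch with illustrative names:

```python
import random
import statistics

def backward_sampler(h, n_samples=2000, n_steps=400, dt=0.005, seed=1):
    """Euler-Maruyama simulation of the h-parameterized generative process
    dY_s = [-f(Y) + ((g^2 + h^2)/2) * score(Y)] ds + h dW_s
    with f(y) = -y, g = sqrt(2), and exact score(y) = -y (stationary OU).
    """
    rng = random.Random(seed)
    ys = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]  # start from the prior
    for _ in range(n_steps):
        ys = [y + (y + ((2.0 + h * h) / 2.0) * (-y)) * dt
                + h * (dt ** 0.5) * rng.gauss(0.0, 1.0)
              for y in ys]
    return ys

var_ode = statistics.pvariance(backward_sampler(h=0.0))    # ODE sampler
var_sde = statistics.pvariance(backward_sampler(h=1.414))  # ~exact reversal
```

Both ensembles keep variance close to 1, which is the "any $h$ gives a valid sampler with exact scores" baseline; the paper's question is how the optimal $h$ shifts once the score carries an error $\epsilon E_t(x)$.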
--------------
After rebuttal: Thank the authors for the clarifications and comments, which addressed my earlier concerns. I will keep my rating unchanged.
Strengths: Overall, the paper presents a novel and interesting question with some practical implications. The question of choosing the diffusion coefficient in the backward SDE seems largely unstudied because, as the authors mention, the ODE and reverse-time SDE approaches are standard conventions. The fact that they show that different choices of the diffusion coefficient can lead to vastly different performances suggests that the problem is worth studying further.
The authors provide a nice explicit characterization of the leading order term in the KL divergence error, the derivation of which requires a lot of work. Namely, they show that as $h \rightarrow \infty$, the term $L(h)$ converges exponentially fast to a constant $\mathcal{T}$ which is upper bounded by $$ \mathcal{T} \lesssim \frac{1}{2m_0^2} \int_{\mathbb{R}^d} \frac{(\nabla \cdot (p_0 E_t))^2}{p_0} dx.$$ Here, $m_0$ is the second moment of $p_0$. This suggests the quantity on the right-hand side above plays a significant role in the error analysis of diffusion models, and thus it could be insightful to understand it for different classes of probability distributions.
The authors also leave a lot of room for interesting future work. Most interestingly, the authors pose the question of how to infer the optimal choice of $h$ from data, which could lead to a lot of subsequent research.
Weaknesses: One weakness of the paper is that the upper bound on the constant $\mathcal{T}$ is not further explained through discussion or examples. It seems like the upper bound plays a big role in bounding the KL divergence error for diffusion models. It would be very interesting to discuss explicit examples of distributions $p_0$ where the upper bound on $\mathcal{T}$ can be explicitly computed or controlled, other than Gaussian distributions. This could be mentioned as a direction for future work. It would also be helpful to describe intuitively what the bound represents about a distribution (what does a large/small $\mathcal{T}$ say about a distribution?).
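In the Gaussian case the integrand of the quoted upper bound is easy to evaluate numerically. A minimal 1D sketch, assuming $p_0 = N(0, \sigma^2)$ and a constant error function $E \equiv 1$ (both choices are illustrative, not from the paper); with those choices $\nabla \cdot (p_0 E) = p_0'$ and the integral reduces to $\int x^2 p_0\, dx = \sigma^2$:

```python
import math

def gauss_pdf(x, sigma=1.0):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def bound_integral(E, dE, sigma=1.0, lo=-8.0, hi=8.0, n=4000):
    """Trapezoidal estimate of the 1D integral  int ((p0 * E)')^2 / p0 dx,
    the integrand appearing in the upper bound on the constant T."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        p = gauss_pdf(x, sigma)
        dp = -x / (sigma * sigma) * p              # derivative of the Gaussian pdf
        val = (dp * E(x) + p * dE(x)) ** 2 / p     # ((p0 E)')^2 / p0
        total += val * (0.5 if i in (0, n) else 1.0)
    return total * h

# Constant error E = 1: the integral equals sigma^2 analytically.
integral = bound_integral(E=lambda x: 1.0, dE=lambda x: 0.0)  # ~ 1.0 for sigma = 1
m0 = 1.0                      # second moment of N(0, 1)
bound = integral / (2 * m0 * m0)
```

For a standard Gaussian this gives a bound of $1/(2m_0^2) \cdot \sigma^2 = 0.5$; plugging in other $p_0$ or non-constant $E$ is a one-line change, which is the kind of explicit example the review asks for.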
In addition, the numerical experiment on the 2D 4-mode Gaussian mixture could benefit from additional explanation. It doesn’t seem like a machine learning problem, since you already know the score of the distribution from which you are trying to sample. Perhaps the point is simply to illustrate the convergence of $L(h)$ to a constant as $h \rightarrow \infty$, in which case the purpose of the experiment should be stated more clearly.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1. Can you explain in what setting, if any, one would ever encounter a score estimator where the error term was localized in time? Or was the purpose of this choice of error term simply to illustrate that different choices of diffusion coefficient can lead to very different performance? It seems like a very strong assumption that would never be realized in practice.
2. It would be good to state in the main result any dependence on the sub-Gaussian constant $c_U$. As it is stated in the paper, one often works with a mollified version of the empirical distribution, which leads to a tradeoff between bias from smoothing and faster convergence for learning smoother distributions. Thus, rather than treat the constant $c_U$ as hidden, it could be beneficial to discuss how it balances with other terms.
3. In this paper, only constant choices of the diffusion term $h$ are discussed. Is this just for simplicity? Could it be of interest to consider certain parameterizations of nonconstant functions for $h$? If so, this could be mentioned for future work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation and valuable feedback of our paper.
### Q1: More understanding about the upper bound.
This suggestion is insightful. We are also interested in exploring the structure of this upper bound, but we are afraid that a simple, heuristic interpretation of the formula for the upper bound is not available. Quantifying the upper bound's structure poses challenges due to the dependence on the score function error, which relies on multiple hyper-parameters and is problem-dependent. We have observed some incomplete but interesting aspects of this question, and we will discuss these findings in the revised manuscript.
Consider employing constrained score models, where we parameterize the log probability $\log p_t(x)$ during training, rather than the score function $\nabla\log p_t(x)$. The training process remains unchanged, differing only in how the score function is parameterized via a neural network. In this scenario, the error $\mathcal{E}^\leftarrow_{T} = \nabla\varphi$ for a scalar-valued function $\varphi$ (a result derived from the nature of energy-based models), leading to the following modified upper bound expression:
$$
\frac{1}{2m_0^2} \int_{\mathbb{R}^d} (\Delta \varphi - \nabla U_0 \cdot \nabla\varphi)^2 e^{-U_0} = \frac{1}{2m_0^2} \int_{\mathbb{R}^d} (\mathcal{L}^{*}\varphi)^2 e^{-U_0},\qquad p_0 := e^{-U_0}
$$
where $\mathcal{L}^{*}(\varphi) := \Delta \varphi - \nabla U_0 \cdot \nabla\varphi$. We can readily verify that its adjoint operator $\mathcal{L}(\mu) = \nabla\cdot(\nabla U_0 \mu) + \Delta \mu$ corresponds to the Fokker-Planck generator of the following Langevin dynamics $d X_t = -\nabla U_0(X_t)\ d t + \sqrt{2}\ d W_t$ where $W_t$ is the Brownian motion.
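As an illustrative sanity check (a toy construction for this response; the grid, the test functions, and the choice $U_0(x) = x^2/2$ are our own, not from the manuscript), the adjoint relation between $\mathcal{L}$ and $\mathcal{L}^{*}$ can be verified numerically in 1D:

```python
import numpy as np

# 1D grid; all integrands below decay fast enough that boundary terms vanish
x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]

U0_prime = x                            # U0(x) = x^2/2, so grad(U0) = x
mu = np.exp(-(x - 1.0) ** 2)            # smooth, rapidly decaying test density (unnormalized)
phi = np.cos(x) * np.exp(-x ** 2 / 4)   # smooth, decaying test function

d = lambda f: np.gradient(f, x)         # second-order finite-difference derivative

L_mu = d(U0_prime * mu) + d(d(mu))          # L(mu)   = div(grad(U0) * mu) + Laplacian(mu)
Lstar_phi = d(d(phi)) - U0_prime * d(phi)   # L*(phi) = Laplacian(phi) - grad(U0) . grad(phi)

I1 = np.sum(L_mu * phi) * dx      # <L mu, phi>
I2 = np.sum(mu * Lstar_phi) * dx  # <mu, L* phi>
print(I1, I2)
```

Both pairings approximate the same value, confirming $\langle \mathcal{L}\mu, \varphi\rangle = \langle \mu, \mathcal{L}^{*}\varphi\rangle$ up to finite-difference error.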
Given that the error $\mathcal{E}^\leftarrow_{T} = \nabla \varphi$ originates from approximating $\nabla\log p_0$, it is logical to consider the scenario where $\varphi = \log p_0$. In this context, the aforementioned upper bound can be expressed as follows:
$$\mathcal{T} \lesssim \frac{1}{2 m_0^2} \int_{\mathbb{R}^d} \big(|\nabla U_0|^2 - \Delta U_0\big)^2 e^{-U_0}.$$
The above two expressions appear to be slightly more informative than the formula in the manuscript.
### Q2: Gaussian mixture experiment
Thank you for the suggestion. Indeed, the primary use of GMM is for verifying the theory. We will provide clearer clarification of its purpose in the revised version.
### Q3: About the practicality of pulse shape error.
Thank you for your question. A true pulse-shape error might not manifest in real-world scenarios; it is, in essence, a theoretical construct. Nevertheless, we find value in studying it, and it also holds practical significance:
- The time-localized ansatz is a foundational element of our broader findings. It serves as a crucial component in establishing the general result presented in Prop. 3.6, which addresses a generic score error function.
- The pulse-shape error emerges from analyzing the impact of score estimation errors at each time on the final sample generation error, rather than examining an averaged score-matching loss over time. This concept contributes to a deeper comprehension of the significance of both the overall score-matching loss and the distribution of score function errors across time.
- To briefly elucidate the origin of the pulse-shape error, we would like to recall Prop. 3.2, where the leading-order term $L$ has a quadratic dependence on $v_T$, which in turn has a linear dependence on $\mathcal{E}^{\leftarrow}_t$. Since $\mathcal{E}^{\leftarrow}_t$ can be decomposed as a linear combination of step functions without loss of generality, it is effectively equivalent to investigating the influence of individual values of $\mathcal{E}^{\leftarrow}_t$ for a fixed $t$ on the final leading-order term $L(h)$, which is essentially examining a pulse-shape error. We will provide the full technical details of this argument in the revision.
### Q4: Scaling of error with respect to $c_U$.
Our understanding is as follows. We shall discuss three instances where the role of $c_U$ becomes apparent:
- In Prop. 3.4, $c_U$ explicitly appears in the main result. It influences the lower bound of $\mathsf{h}$ required to ensure exponential decay.
- In Propositions 3.5 and 3.6, where we address the asymptotic case, the scaling involving $c_U$ becomes harder to track. A lot more effort is required to develop non-asymptotic results and we will defer this to the next stage of research.
- Another potential context in which $c_U$ could play a role is in determining the upper bound $\mathcal{T}$ as mentioned by the reviewer. Characterizing how $\mathcal{T}$ relates to $p_0$ (and consequently $c_U$) could offer valuable insights. However, one challenge lies in understanding the specific form of the score function error (which is problem-dependent). We acknowledge this intriguing question and plan to explore it further in future research endeavors.
### Q5: discussion on the constant $h$
The primary reason for the time-independent choice is its simplicity. Furthermore, the complexity increases significantly for non-asymptotic analysis; when we resort to asymptotic cases ($h_t = 0$ and $h_t=\infty$), time-dependence becomes less significant. In the case of large diffusion ($h_t \gg 1$), little distinction arises from using time-independent $h_t = \mathsf{h}$. Practicality also guides our choice, as a time-independent $h_t$ is easier to tune: only a single scalar parameter needs adjustment for potential sample generation error reduction. While acknowledging the intrigue of time-dependent cases, we plan to address this in future work, which will be noted and discussed in the revision.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper.
Thanks, Area Chair | Summary: In this work the authors try to understand the impact of noise in the reverse process of diffusion models in the presence of an approximate score network. In particular, they look at how the Kullback-Leibler divergence between the true data distribution and the denoised distribution evolves w.r.t. the diffusion term $h$ of the backward (generative) process, with an asymptotic expansion of this KL divergence and studying the first non-zero term.
They then show that if the score has a pulse (delta) error early in the generative sampling, higher values of $h$ recover the density trajectory better, whilst with an error in the late stage of the denoising process the opposite holds.
They empirically back these results with synthetic experiments (with heuristic errors) and on the swiss-roll and MNIST datasets with trained score networks.
Strengths: - I believe this paper tackles a really important question, namely the role of noise in the generative process, particularly given the success of flow matching, which shows that continuous normalising flows (deterministic, ODE-based generative models) can be trained akin to denoising score-matching models.
- The analysis seems sound and rigorous, and the assumptions reasonable.
- The take-home message (if I understood correctly) is that unless the score error is concentrated at the end, the higher the level of noise the better (given infinite computational budget to actually discretise the backward process).
- They link their theoretical findings with empirical results, both in simple settings, which nicely allows unambiguous conclusions, and in more realistic settings where the score network is trained.
Weaknesses: - No major weakness, but some of the interpretation of the theoretical results could perhaps be enhanced (cf following questions).
- The fact that higher values of $h$ require more discretisation steps is something that should be further stressed, I believe.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Eq 5: missing the score? Or this is the process of the error?
- Section 3.3: typo in title 'Placn'
- Equation 9: This does not assuming $h \gg 0$ right? Worth moving it above in Section 3.2 to avoid any ambiguity perhaps?
- line 194: 'By convection [...]' what does this mean? Doesn't this simply come from the definition of the time-reversal?
- line 196-200: So when $h \rightarrow \infty$, not only does $V_t \rightarrow U_t$, but because of the prefactor $h_t^2/2$ in Eq 9 (left) the distribution $\mu_t$ converges extremely fast to $\mu_0$? Not sure what '“almost quasi-static” thermodynamics' refers to. Likely worth expanding a bit more on this in the main paper?
- Can this be seen through the perspective of Langevin dynamics as a corrector, which would 'project' $\rho_t$ back to $p_t$ at $t>s$?
- line 233: '$L(h)$ will decay to zero exponentially fast' with $h$?
- Section 3.5: Here the setting is that the error accumulates over time, in contrast with the 'pulse' of Section 3.4. The results of Section 3.5 are that with error at the end of the generative process, the smaller $h$ the better; yet if the pulse is also near the end, wouldn't large $h$ help (as the larger $h$, the faster it would correct this error)?
- MNIST: This section is not really self-contained as the weighting functions are in the appendix. What is the motivation for this cubic 'noise' weighting?
- appendix: 'The computational cost of SDE-based models with large diffusion (h > g) is relatively high due to the necessity of a larger number of time steps.' What is the reason for this? That is quite key and worthy of being further developed (in the main text).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - Figure 4: It shows that indeed higher values of $h^2/g^2$ can lead to smaller error yet it also requires a lot more step. At $10^3$ steps, which is quite a reasonable number of steps already, the probability flow ODE ($h=0$) perhaps best and the curves starts overlapping around ~2000-3000 steps. So although the theoretical conclusion is that larger values of $h^2/g^2$ is better, for practical implementations there is a trade-off between larger number of discretisation steps (with larger values of $h$) and larger score network architecture.
- The authors look at different types of errors on the score network, both theoretically with the pulse-shape error to simplify the analysis, and experimentally with simple errors (weighting the true score over some time range), which makes them easier to interpret. It would be interesting to look at the empirical error of a trained score network (e.g. on an image dataset), depending on the weighting $w(t)$ in the loss, but also the scheduling $g(t)$, and in particular its (average) evolution through the denoising time. Is the error spread uniformly (w.r.t. time)? Writing the score as $\epsilon / \sigma(t)$ (with $\sigma(t)$ the marginal variance at time $t$), couldn't one argue that with uniform error in the estimation of the noise $\epsilon$, the error in the score will blow up near $t=0$, so the setting where the error is large close to the end of the generative process is the most likely one (and therefore $h=0$ should work best)? Perhaps this is related to the fact that practitioners stop the denoising process at time $t=\eta$ with $\eta$ a small hyperparameter (e.g. $\eta = 10^{-3}$)?
- Would suggest increasing the fontsize of Figures (e.g. Fig 4)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide detailed and valuable feedback on our paper. The reviewer's summary of the take-home message is accurate. For a detailed illustration of the error distribution in the score-matching loss over time, see the supplementary PDF. We will enhance the presentation in the revised version based on your suggestions. In the following, we address the technical queries and concerns.
### Q1: The fact that higher values of $h$ requires more discretisation steps is something that should be further stressed.
Indeed, if one uses a large $h$ in the generative model, it holds crucial significance for time discretization. We will give a more explicit statement about this practical limitation of large $h$ in the revised manuscript. We would like to highlight that this work does not say a large $h$ is favored in practice: it discusses both small and large $h$, but under different cases of score error. Our analysis also indicates under what circumstances the ODE model is preferred, which seems more efficient with fewer NFEs.
### Q2: Missing of score in Eq. 5
Thanks for pointing out this typo. In the second line below eq (5), ``$\mathfrak{S}^{\leftarrow}_t = \epsilon\mathcal{E}_t^{\leftarrow}$'' should be $\mathfrak{S}^{\leftarrow}_t = \nabla\log p_t^{\leftarrow} + \epsilon\mathcal{E}_t^{\leftarrow}$. We will correct this.
### Q3: About eq. (9)
We don't assume that $h^{\leftarrow}_t \gg 1$ (we only need $h^{\leftarrow}_t > 0$) in Eq. (9). We will remove $h^{\leftarrow}_t \gg 1$ in the title of Sec. 3.3 to avoid confusion.
### Q4: In the line 194
We mean "by notational convention". We'll rephrase it to "by the notation of time-reversal".
### Q5: About Lines 196-200 and quasi-static dynamics
Indeed, rapid convergence is unattainable unless the prefactor $\frac{(h^{\leftarrow}_t)^2}{2}\to\infty$.
We use "quasi-static" to indicate that the system will remain in proximity to the equilibrium $\rho_t^{\leftarrow}$ at time $t$, regardless of pulse perturbation. We will incorporate the following explanation in the revision: For any distribution ${\mu}_t^{\leftarrow}$ (even quite deviating from $\rho_t^{\leftarrow}$), over a brief time $\Delta t$ slightly exceeding $\mathcal{O}(1/(h_t^{\leftarrow})^{2})$, we can anticipate that
$\mu_{t+\Delta t}^{\leftarrow}$ is close to $\rho_{t+\Delta t}^{\leftarrow},$
assuming the evolution of $\mu_s^{\leftarrow}$ aligns with $\mathcal{L}_{s}^{(h^{\leftarrow})}$.
### Q6: Relation to the Langevin corrector
This resembles a Langevin corrector: both our findings and the Langevin corrector are blessed by the same convergence property. However, nuanced distinctions exist: in the literature, the Langevin corrector addresses errors from discretization schemes without referring to the score estimation errors, whereas our work deals with the diminishing effect of score error.
### Q7: In line 233
Indeed, it decays exponentially fast with respect to $\mathsf{h}$.
### Q8: Regarding Sec 3.5
When the pulse occurs near (but not at) the generative process's end, larger $h$ still leads to smaller sampling errors (as in Sec. 3.4). For instance, an error like $\mathbb{I}_{[T-2\delta, T-\delta]}(t) E(x)$, where $\delta>0$ is fixed and $E:\mathbb{R}^d\to\mathbb{R}^d$, still belongs to the case of Sec. 3.4. However, the findings of Sec. 3.4 become inapplicable if $\mathcal{E}_T^{\leftarrow} \neq 0$, which is the subtlety addressed in Sec. 3.5.
We isolate the key aspect of $\mathcal{E}^{\leftarrow}$ as $\mathbb{I}_{[T-a, T]}(t)E(x)$ with a small $a \ll 1$. Larger $h$ may not be advantageous here, and the tricky part is the complex scaling relationship between $a$ and $\mathsf{h}$. For simplicity, let us say $a = \frac{1}{\mathsf{h}^2}$.
Examining the score function error $\mathcal{E}^\leftarrow_t$ within $t\in [T-a, T]$, the effective time of the Langevin dynamics is only $\mathcal{O}\big(a \frac{\mathsf{h}^2}{2}\big) = \mathcal{O}(1)$, insufficient to return to equilibrium and reduce the score estimation error. The $\frac{\mathsf{h}^2}{2}$ scaling comes from the prefactor in Eq. (9). Heuristically, the key distinction is that the SDE, with constant diffusion $\mathsf{h}$, can address an error $\mathcal{E}_{t}^\leftarrow$ supported in the interval $[0,T-\frac{c}{\mathsf{h}^2}]$ (Section 3.4), but not in $[T-\frac{c}{\mathsf{h}^2}, T]$ (Section 3.5), where $c$ is a constant whose value is irrelevant here.
### Q9: The cubic training weight
Please refer to our global discussion regarding the relationship between training weight, score error, dynamics selection, and numerical schemes. Our analysis suggests that one might enhance the ODE by improving its score training near the noise end; the cubic weight function, increasing monotonically from 0 to 1 as $t$ goes from $0$ to $T$, reduces the data-side loss contribution compared to the quadratic default in the literature (i.e., it prioritizes noise-side effects in the training loss). For more details, see the supplementary PDF.
### Q10: Computational cost of large $h$
The rationale for introducing additional discretization steps stems from the amplified magnitude of both drift and diffusion terms (see Eq. (6)) when using larger $h$. Enhancing the differential equation's magnitude is akin to prolonging dynamics, inevitably resulting in larger errors in most cases. This necessitates the choice of an improved scheme or a reduction in step size.
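As a minimal sketch of this effect (our own toy example, not from the manuscript): for an OU process whose drift and diffusion are both scaled by a factor $s$, so that the invariant law $N(0,1)$ is unchanged, the stationary variance of the Euler-Maruyama chain can be tracked exactly, and its bias grows roughly linearly in $s\,\Delta t$; larger magnitude therefore requires proportionally smaller steps.

```python
def em_stationary_variance(s, dt, n_steps=200_000):
    """Variance recursion of Euler-Maruyama applied to
    dX = -s*X dt + sqrt(2*s) dW  (invariant law N(0,1) for every s > 0):
    X_{n+1} = (1 - s*dt) X_n + sqrt(2*s*dt) * xi,
    so the variance obeys  v <- (1 - s*dt)^2 * v + 2*s*dt."""
    v = 1.0
    for _ in range(n_steps):
        v = (1.0 - s * dt) ** 2 * v + 2.0 * s * dt
    return v

dt = 0.01
bias_small = em_stationary_variance(1.0, dt) - 1.0    # ~ 0.005
bias_large = em_stationary_variance(10.0, dt) - 1.0   # ~ 0.053: roughly 10x larger at the same dt
print(bias_small, bias_large)
```

The closed-form fixed point of the recursion is $1/(1 - s\,\Delta t/2)$, so matching the $s=1$ bias at $s=10$ requires roughly ten times smaller steps, i.e., ten times more steps over the same horizon.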
### Q11: Distribution of loss over time and connection to the training weight
Supplementary figures depict score-matching loss over time. We explored loss distribution with varying training weights $\omega_t$. Investigating the impact of $g$ practically remains an interesting future endeavor. Errors are unevenly distributed across time, especially pronounced at the data side under conventional training loss. It appears that the current training weight scheme favors ODE over SDE. Indeed, such a score error distribution over time also connects to the adoption of the early-stopping techniques.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thanks for the detailed response and clarifications! | Summary: The authors focus on reverse diffusion process in the presence of non-negligible error in the score function, and estimate KL divergence between the data distribution and the distribution generated by reverse process. They analyze how the KL divergence varies with the diffusion coefficient and demonstrate that a large diffusion coefficient is beneficial when the error of score function is concentrated near beginning of generation process, while a small diffusion coefficient is beneficial in the opposite case. The findings are validated through numerical experiments on Gaussian mixture model and MNIST data set.
Strengths: The paper introduces a novel analysis of how error accumulation occurs in diffusion models when the error in the score function is not negligible. This is in contrast to conventional analysis that bound error between target and generated distributions when the error in the score function is small.
The analysis is only employed in exploring the diffusion coefficient in reverse generative process. However, the analysis has potential implications for various applications where the correction or modification of score function is important. For instance, more theoretical treatments could be possible on image editing(classifier-free-guidance), fair generation(discriminator-guidance), or so on.
Weaknesses: In Line 178, relation between the assumption and low-dimensionality of data distribution require further explanation to enhance understanding.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Regarding the scaling hyperparameter in the loss function of conventional diffusion model, which aims to balance the error at different time of the generative process, can the practical choice of the scaling be associated with the theoretical results presented in this work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors acknowledge limitation of their work. One additional limitation is lack of large-scale experiments to further validate the findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide valuable feedback on our paper.
### Q1: Need to clarify low-dimensional data distribution.
We acknowledge that the previous description lacks informative detail, and we aim to provide further elucidation as follows:
Given that many realistic datasets exhibit effective compact support, it becomes possible to identify a constant denoted as $\widetilde{C}$, such that the distribution follows the inequality:
$$
\rho_0(x) \le \widetilde{C} \exp(-|x|^2/2),\qquad \forall x.
$$
Consequently, we can deduce:
$$
e^{-U_0(x)} \le \widetilde{C} \exp(-|x|^2/2)\ \implies U_0(x) \ge \frac{|x|^2}{2} - \log{\widetilde{C}}.
$$
This assertion corresponds to the third condition in Assumption 3.1.
To delve deeper, in situations where $\rho_0$ is predominantly supported within $B_0(R)$, it is feasible to find a sufficiently large $\widetilde{C}$ such that $\rho_0(x) \le \widetilde{C} e^{-|x|^2/2}$ within $B_0(R)$. Outside of this domain, the above assumption requires that the decay surpass that of a standard Gaussian distribution. In instances where the data conforms to a low-dimensional subspace, a common practice is to inject some noise into the degenerate coordinates, e.g., to approximate the distribution in a degenerate coordinate via $N(0, \sigma^2)$ with $\sigma$ small but NOT infinitesimally small. The above inequality then holds by selecting an appropriate $\widetilde{C}$. We will incorporate these clarifications in our manuscript to enhance accuracy and clarity.
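For concreteness (a hypothetical numerical check of our own, with $\rho_0$ taken uniform on $[-R, R]$), the choice $\widetilde{C} = e^{R^2/2}/(2R)$ makes the inequality hold, with equality exactly at $|x| = R$:

```python
import numpy as np

R = 3.0
C_tilde = np.exp(R ** 2 / 2) / (2 * R)   # chosen so equality holds at |x| = R

x = np.linspace(-10.0, 10.0, 100001)
rho0 = np.where(np.abs(x) <= R, 1.0 / (2 * R), 0.0)   # uniform density on [-R, R]
envelope = C_tilde * np.exp(-x ** 2 / 2)

dominated = bool(np.all(rho0 <= envelope + 1e-12))    # tiny slack for floating point
print(dominated)
```

Inside $[-R, R]$ the ratio $\rho_0(x)/e^{-|x|^2/2}$ is maximized at the boundary, which is why this $\widetilde{C}$ suffices; outside, $\rho_0 = 0$ trivially satisfies the bound.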
### Q2: The practical implication of the score-training weight hyperparameter based on our analysis
Our theoretical analysis indicates that for a large-diffusion SDE, the sample quality is much less affected by score error occurring near the noise end, while it is quite sensitive to score error near the data side ($p_0$). In contrast, for the ODE model, the conclusion is reversed. This finding immediately suggests a practical hint for enhancing ODE flows, namely how to train the score function by adjusting the weight to favor ODEs: one would like less error at the noise end, and the noise-driven weight (stated in the Appendix of the manuscript, which uses larger weights near the noise end for score training) will likely enhance ODE flows.
Our previous submission clearly demonstrated the effectiveness of this practical strategy for MNIST (in Fig 5). The supplementary PDF features loss function distributions under various training weight functions, verifying our conjecture. We've also included initial CIFAR-10 results, subject to further exploration and incorporation in the revised version.
### Q3: about limitations for larger datasets
We have conducted preliminary assessments on CIFAR-10, and the findings are documented in the supplementary PDF. Our numerical outcomes on CIFAR-10 echo those observed on the MNIST dataset. We will undertake a more comprehensive exploration toward a broader array of results to be incorporated in the forthcoming revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper.
Thanks, Area Chair
---
Rebuttal 2:
Comment: I have chosen to maintain my original evaluation.
While I acknowledge the valid concerns by reviewer LAed, I believe this work offers a valuable theoretical contribution. I suggest that the authors include a discussion with reviewer LAed to address their concerns and acknowledge the limitation of this work.
Additional suggestions:
- I do not think CIFAR-10 alone constitutes large-scale experimentation. Therefore, I strongly urge the authors to include more experiments in the revised version. Even if some of the experimental results do not align perfectly with the theory, they would not diminish the value of this work and could instead serve as limitations that open up future research directions.
- To enhance the accessibility of your paper to a wider range of readers, consider adding intuitive figures that visually elucidate the implications of your theorems. | Summary: The paper provides a theoretical analysis of the estimation error of SDE and ODE methods along with some numerical experiments.
Strengths: By perturbing the score function, the authors study how the estimation error changes in ODE and SDE methods.
Weaknesses: It is commonly known that the sample generation error consists of three parts: discretization error, estimation error, and initialization error; see e.g. [1, 2]. It is also commonly known that usually the ODE works better with fewer NFE and the SDE works better with more NFE due to the interplay between discretization error and estimation error. The estimation error might not even dominate the total error. I don't think only studying the continuous-time model provides very useful insight into how to choose between those two methods optimally. There is a significant lack of connection between this theory and what happens in reality. Please comment on this.
[1] Chen et al., Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
[2] Lee et al., Convergence of score-based generative modeling for general data distributions
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide feedback on our paper.
It's noteworthy that our findings align with certain factual aspects you raised. In the ensuing discussion, we will predominantly focus on two pivotal matters:
- the significance of refining score function estimation, with the intention to surpass existing conventions;
- the inherent naturalness and illuminative quality of the continuous-time model within our framework.
### Clarification of our results.
We agree with the reviewer's observation that ODE models tend to perform better with fewer function evaluations (NFE), while SDE models excel with higher NFE—a crucial finding. This prompts the question: why does increased NFE potentially boost SDE models over ODE models **whose framework lies in the continuous-time model**, and under what specific conditions? This forms the core of our study.
Importantly, there is no definitive explanation for this phenomenon according to current literature. Our research is fueled by this recognized but mathematically unverified insight. Our objective is to bridge this gap by delving into underlying dynamics, illuminating the conditions that empower SDE models to surpass ODE models with heightened NFE.
### Score function estimation is important
It's crucial to emphasize that the estimation error in score training holds comparable, if not greater, significance than the design of an accurate discretization scheme for the inference process. While the reviewer correctly notes that the estimation error may not always dominate the total error, it does dominate in numerous scenarios. Notably, situations with limited training data or a preference for lighter architectures (e.g., for memory conservation) fall within this category. Score function estimation forms the nucleus of score-based diffusion models, and we believe it's essential to refrain from assuming that the training error can be easily minimized or disregarded in most circumstances.
### The continuous-time model is useful.
We acknowledge the reviewer's valid point that a continuous-time model does not offer a finalized solution. Nonetheless, despite these acknowledged gaps, the continuous-time model retains its utility:
- **Many seminal works are essentially based on continuous-time models.** It's notable that prevailing theoretical studies heavily focus on the continuous-time model or are built upon it. The references highlighted by the reviewer also lean toward continuous-time analysis rather than discrete-time mappings. When we mention the continuous-time model, we encompass both a fully continuous-time representation and a discretized model using a highly accurate numerical scheme. This contrasts with a fully discrete model like DDPM with $10$-NFE or optimal transport maps. Furthermore, we spotlight the seminal work [Score-Based Generative Modeling through Stochastic Differential Equations, ICLR 2021] focusing on continuous-time models. This exemplifies the continued relevance and influence of this framework within the research community.
- **Continuous-time model is still useful.** If the reviewer acknowledges the significance of comprehending the mentioned wisdom—namely, the interplay between time-discretization error and score estimation error—we believe that a comprehensive response necessitates an understanding of two pedagogical scenarios: (1) absence of score estimate error, addressed in numerous numerical analysis textbooks; (2) absence of discretization error, requiring an exploration of continuous-time models.
Given the limited analyses exploring the score error's impact on the inference process, our study initiates this inquiry, as far as we know, and validates it through numerical experiments. It's crucial to clarify that we don't directly extend findings from continuous-time models to real-world scenarios. Nonetheless, these findings hold significance. For instance, our numerical experiments indicate that for ODE models, the score error during the initial inference stages significantly affects final sample errors, which is typically not a big challenge for SDE models (proved in Sec. 3.4). Prioritizing score error refinement at the noise end could potentially enhance sample generation quality. This insight is illustrated with examples like MNIST and supported by preliminary CIFAR-10 experiments (see the additional PDF).
- **The perception of discretization error being uncontrollable perhaps has evolved.** Efficient numerical schemes for inference have been thoroughly explored through collaborative research efforts. Notable examples encompass DDIM, gDDIM, and several others. The control of numerical discretization error has evolved from a formidable challenge to a manageable one. Our raised question yet still lacks comprehensive systematic exploration, as far as we know.
### Future direction.
We have indeed undertaken some efforts to employ analogous techniques utilized in this paper to comprehend a discrete-time model, characterized by a modest number of function evaluations (e.g., around 10-20 NFE).
---
Rebuttal Comment 1.1:
Comment: I understand the authors are trying to justify why studying the continuous time model is necessary. However, the reasons are not convincing.
1. "(1) absence of score estimate error, addressed in numerous numerical analysis textbooks; (2) absence of discretization error, requiring an exploration of continuous-time models"
The authors seem to suggest that it is common to assume one error is absent and study another one solely. That might be the case in numerical analysis textbooks, but it is not what people do at conferences like this, as far as I know. I'd be really interested to see some references that solely study one of the three errors: score function estimation error, discretization error, or initialization error. In fact, for example, [1] does not neglect the score function estimation error.
2. "The perception of discretization error being uncontrollable perhaps has evolved"
My point was not the discretization error being uncontrollable. What I was trying to say was there was no evidence suggesting the discretization error is negligible, compared to the score function estimation error. For example, let's assume the total error is $E=A+B+C$, where $A$ the is score function estimation error, $B$ is the discretization error, and $C$ is the initialization error. I agree with the authors that $B$ is bounded under certain conditions, but this doesn't mean $B\leq A$. To make this study useful, we need some results like $B\ll A$.
3. I think it might be more appropriate to compare this work with, for example, [2], instead of Song's work at ICLR. In [2], they provide a unified framework for flow models and diffusion models using continuous time stochastic processes. Notably, the framework they proposed is new. Given the high similarities in both techniques and results between those two works, and considering this work is only a small subset of [2], I decided to keep the rating for now. I am glad to have more discussions if there are still misunderstandings or confusion.
[1] Chen et al., Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions, ICLR, 2023.
[2] Albergo et al., Stochastic Interpolants: A Unifying Framework for Flows and Diffusions, 2023
---
Reply to Comment 1.1.1:
Comment: ### About comment #1 and #2
These two comments again concern whether a continuous-time model, studied without directly working in the discrete-time setting, still constitutes a sufficient scientific contribution. In fact, the work of (Albergo et al., Stochastic Interpolants 2023) that the reviewer praised in the feedback is entirely based on the continuous-time setting.
We believe that it is more beneficial to focus on the *contributions* of the continuous-time model, especially when the conclusions are both rigorous in theory and backed up by substantial numerical examples, as we did in our original submission and the new supplementary PDF.
Additionally, considering feedback from other reviewers, we maintain that utilizing a continuous-time model is not inherently a defect of our paper.
### About comment #3
Thanks for presenting this argument, which did not appear in the initial review report. We are glad to respond and make clarifications, although this comment is somewhat difficult for us to comprehend. Please allow us to explain based on our understanding.
*“Given the high similarities in both techniques and results between those two works, and considering this work is only a small subset of [2]”*
+ “the high similarities in both techniques and results between those two works”.
We are quite puzzled as to why those two works by others (Albergo et al., Stochastic Interpolants 2023) and (Song et al., Score-based generative modeling through SDEs, ICLR 2021) matter to our own work here and have become one of the reasons our work is rated low.
In our appraisal, both are excellent and insightful works that utilize continuous-time models, and each has its own perspective on generative models: one uses the time-reversed SDE, while the other directly parametrizes a path between two random variables.
+ “considering this work is only a small subset of [2]”
Here [2] refers to (Albergo et al., Stochastic Interpolants 2023), which first appeared on arXiv on 15 Mar 2023 (two months before our submission to NeurIPS). Our original submission already mentioned this reference on page 2. Even though these contemporaneous works belong to the large field of diffusion and generative models, they clearly focus on different problems and adopt entirely different techniques. We strongly disagree that our work is only a small subset of [2].
+ **The problems solved in our work and [2] are different:** Stochastic Interpolants (2023) derives the continuity equations governing a stochastic interpolant by identifying the underlying velocity field. Our problem here is how, within the standard score-based diffusion model, the KL accuracy is impacted by the score error under different diffusion coefficients.
+ **The techniques are different:** Theorem 2.21 in (Stochastic Interpolants 2023) is to directly estimate the KL error by two loss functions for their velocity field and score function. Our approach works on the first variation (functional derivative) of the KL error w.r.t. the perturbation of score. The detailed mathematical techniques and skills behind these works are also completely different.
+ **Our main result is not covered by (Stochastic Interpolants 2023):** In Stochastic Interpolants (2023), even though Section 2.4 shows an upper bound on the sampling error in the setting of a stochastic interpolant minimizing two vector fields, and Section 3.2 has some discussion on deterministic vs. stochastic generative models, there is indeed no conclusion for the score-based diffusion model similar to our main results here. Our results indicate that it is not only the loss of the score function that matters: *the distribution (in time) of the score errors also matters for the choice of the diffusion coefficient*. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their comments and suggestions. We are glad about the positive reception of the proposed question, and we hold respect and appreciation for the critical and diverse perspectives provided.
### About practicality and motivation:
A concern or question is about the discretization error (or practicality). It's well-acknowledged that the interplay between score function estimation error and discretization error collectively shapes the ultimate quality of sample generation.
Our overarching perspective encompasses four sequential facets:
1. We choose hyper-parameters for training.
2. Training scheme introduces score estimation error.
3. Score error guides dynamics selection.
4. Subsequently, a numerical scheme needs to be determined.
Rapid advances in inference solvers and diverse algorithmic forms pose many challenges to directly integrating discretization error into the study of optimal dynamics. Our current focus centers on the interplay between score training error and the noise level within the SDE (ODE or large diffusion), i.e., the relation between (2) and (3) above. This is necessary in order to comprehend all four facets. Our objective is not to favor specific dynamics but to reveal their relationship and underlying mechanism. This approach may offer a practical avenue for exploring optimal dynamics selection under finite NFE and a fixed numerical scheme, which awaits future research to integrate more factors.
### Explanation of supplementary PDF:
Regarding the supplementary PDF, we highlight the following: our theoretical analysis indicates that the large-diffusion SDE is unaffected by score error near the noise end (the start of the inference process) and is sensitive to score error near the data side ($p_0$), with the conclusion reversed for ODEs. This suggests the potential for enhancing the ODE flow through training-weight adjustment: less error at the noise end (namely, $N(0,I_d)$) is beneficial for ODEs. Our previous submission demonstrated this on MNIST. Additionally, the supplementary PDF shows how the score-matching loss is distributed with respect to time under various training weight functions, in line with the above theoretical conjecture. We have also included initial CIFAR-10 results, subject to further exploration and incorporation in the revised version.
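The weight-adjustment idea can be made concrete with a small sketch (our illustration, not the paper's training code; the function name, Gaussian perturbation kernel, and argument conventions are all assumptions): a time-weighted denoising score-matching loss, where the weight $w(t)$ redistributes score accuracy between the data end and the noise end.

```python
import numpy as np

def weighted_dsm_loss(score, eps, t, w, sigma):
    """Illustrative time-weighted denoising score-matching loss for a
    Gaussian kernel x_t = x_0 + sigma(t) * eps. Emphasizing times near
    the noise end (larger w there) trains a score that is more accurate
    at that end, which the analysis above suggests benefits the ODE flow."""
    target = -eps / sigma(t)          # true score of the perturbation kernel
    return w(t) * np.mean((score - target) ** 2)
```

For instance, a perfect prediction `score == -eps / sigma(t)` gives zero loss at every `t`, while an imperfect one is penalized in proportion to `w(t)`.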
Pdf: /pdf/4c0ed332cb38a80ec9dd834251b38ff0efc0835e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper explores the difference between ODE-based probability flow and SDE-based diffusion models when score training errors are present. Specifically, they investigate how setting the generative diffusion coefficient h impacts sample quality.
Strengths: - The question the authors are trying to answer seems to be a good research question.
- It's nice to see experiments validating some of their assumptions (Fig 1.).
Weaknesses: - As is, the paper is very difficult to follow.
- Many equations are presented, but few of them are explained.
- With so many variables presented in the paper, it is difficult to keep track of all of them, making the paper challenging to comprehend.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - The authors briefly address potential negative societal impacts of their work.
- I did not see any mention of limitations of their analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide feedback on our paper. We value your insights regarding the presentation of equations and variables. We are dedicated to enhancing the comprehensibility of the revised manuscript by incorporating more detailed explanations and motivations. It is important to note that this paper serves a technical purpose, primarily centered around mathematical and asymptotic analyses. As such, a substantial number of equations and variables are often unavoidable to precisely convey our intended message, which may require a high level of understanding of mathematical analysis. We note that we have already dedicated Appendix A of the last submitted manuscript to clearly defining the most important variables, to hopefully facilitate more streamlined tracking of the variables.
Regrettably, we cannot concur with the limitations outlined in the review report:
+ We have indeed addressed the potential societal impact in our paper, as we believe it aligns with the submission requirements, particularly when considering the potential enhancements to training generative models. As a result, we want to clarify that negative societal impact should not be regarded as a limitation of our study.
+ We want to emphasize that we have taken explicit steps to address certain potential limitations. This includes clearly outlining our assumptions, highlighting the interest of incorporating low-dimensional manifold information in the summary section (a step we had not previously taken in this work), and explicitly indicating that certain theoretical results hold under asymptotic conditions. We believe that these aspects could be construed as limitations of our analysis, and we have already presented and discussed them.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper.
Thanks, Area Chair
---
Rebuttal Comment 1.2:
Comment: I understand that the nature of the paper requires a large amount of equations and variables and I agree that adding more explanations/motivations will enhance the readability of the paper. I appreciate the authors revisiting this. I thank the authors for pointing out areas in which they addressed the limitations and potential societal impacts of their work.
With these, I am comfortable moving my rating to a 5. | null | null | null | null | null | null |
Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding | Accept (poster) | Summary: The authors address the problem of expressiveness in graph neural networks. Positional encodings using spectral approaches have suffered from sign and basis invariance. The authors propose Laplacian canonization which finds unique representations, and they analyze what properties the canonization should preserve and in practice what is the ratio of eigenvectors that fulfill these conditions. They propose a new simple canonization algorithm for sign and basis invariance—Minimal Axis Projection (MAP)---that uses axis projection functions to identify canonical directions. They also present conditions under which MAP can guarantee sign and basis canonization. Experimental results evaluate MAP on graph classification benchmarks and show improved results with lowest computational runtime.
Strengths: Originality: the approach appears to be novel.
Significance: Designing proper architectures that preserve symmetries and are more expressive is an important and open topic currently in GNN literature.
Quality: the approach is both simple and fast. Experiments show pairing MAP with multiple architectures to obtain gains in performance. The ablation study demonstrates the contributions of the different parts of the approach. Full implementation details are provided in the supplement.
Clarity: the paper is well-written.
Weaknesses: Experimental results show incremental / modest improvement compared to BasisNet. The main contribution is in reduced runtime. Furthermore, the BasisNet model with k=all in their paper (Table 1) achieves better performance than MAP in the current paper (Table 3). The authors should explain the decision not to include these results from the BasisNet paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. RSE: the eigenvectors should be unit-normalized (or normalized to same length)
2. If using the low-frequency eigenvectors, the corresponding eigenvalues are the smallest, and thus the values are very small (especially if also unit-normalized and n is large). How is this useful in practice? Are these concatenated or scaled in some way at the input to the network?
3. Are the uncanonizable eigenvectors in the datasets concentrated in low or high frequency, or spread uniformly?
4. How would this method work for a ring graph (cycle)?
5. Why are result using SignNet not included for tables 4 and 5?
6. Figures 4 and 5 in the appendix are very useful toy illustrations that should be moved to the main paper
7. What if all eigenvectors are used for MAP, as in the BasisNet paper (k=all)?
Minor comments:
* line 176: “question that which…”
* line 190 - can reference Table 2
* 3.3.3 - heading should be “summary” not “summarization”
* line 273: eigendecomposition complexity is cubic for dense Laplacians, whereas most real world graphs are sparse, so that is a worst case complexity. In practice it is lower.
Line 336: why does increasing k (8–>16) make performance worse? Shouldn’t the network just utilize the lower freq vectors if they are more important?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: limitations aren't discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer R8b6 for appreciating the simplicity and significance of our paper. We address your concerns as follows.
---
**Q1.** SignNet model with k=all in their paper (Table 1) achieve better performance than MAP in the current paper (Table 3). Authors should explain decision to not include these results from the BasisNet paper.
**A1**. Among all the baselines of our paper (LapPE, RS, SignNet) and other papers that also use LapPE [1-2], we find that SignNet seems to be the only one that incorporates all eigenvectors in the PE, which could be problematic for larger graphs (e.g., MalNet-Tiny has 1410 nodes on average). For a consistent comparison, we adopt $k=8$ in our comparison.
**References:**
[1] Kreuzer et al. (2021). Rethinking graph transformers with spectral attention. *NeurIPS.*
[2] Rampášek et al. (2022). Recipe for a general, powerful, scalable graph transformer. *NeurIPS.*
---
**Q2**. RSE: the eigenvectors should be unit-normalized (or normalized to same length).
**A2**. Here, the eigenvectors in $\mathbf{U}$ are already unit-normalized, and we reweight them with $\mathbf{\Lambda}^{1/2}$ to add eigenvalue information. We will state this more clearly in the revision.
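A minimal NumPy sketch of the reweighting described above (the function name is ours, not the paper's): `np.linalg.eigh` returns unit-normalized eigenvectors, and column $i$ is then scaled by $\sqrt{\lambda_i}$.

```python
import numpy as np

def rse_encoding(L, k):
    """Sketch of an eigenvalue-reweighted spectral encoding: take the k
    eigenvectors of L with the smallest eigenvalues (unit-normalized by
    eigh) and scale column i by sqrt(lambda_i)."""
    lam, U = np.linalg.eigh(L)  # ascending eigenvalues, orthonormal columns
    return U[:, :k] * np.sqrt(np.clip(lam[:k], 0.0, None))
```

Because each column of `U` has unit norm, the norm of column $i$ of the encoding equals $\sqrt{\lambda_i}$, so eigenvalue information is carried by the feature magnitudes.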
---
**Q3**. If using the low frequency eigenvectors, the corresponding eigenvalues are the smallest, and thus the values are very small (especially if also unit normalized and n is large). How is this useful in practice? Are these concatenated scaled in some way at input to the network?
**A3**. In practice, we used the eigen-decomposition of the adjacency matrix $\hat{\mathbf{A}}=\mathbf{I}-\mathbf{L}$, which has the same eigenvectors as the Laplacian while avoiding the small-eigenvalue issue (whose eigenvalues are $1-\lambda_i$ instead).
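This trick can be checked in a few lines of NumPy (our illustration; the toy path graph and variable names are assumptions, not the paper's code):

```python
import numpy as np

# Toy 4-node path graph and its normalized Laplacian
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
A_hat = np.eye(4) - L                          # \hat{A} = I - L

lam, U = np.linalg.eigh(L)                     # eigenpairs of L

# Every eigenvector u_i of L is an eigenvector of A_hat,
# with eigenvalue 1 - lam_i instead of the (possibly tiny) lam_i.
assert np.allclose(A_hat @ U, U @ np.diag(1 - lam))
```

So the small low-frequency Laplacian eigenvalues $\lambda_i$ become the large eigenvalues $1-\lambda_i$ of $\hat{\mathbf{A}}$, while the eigenvectors are untouched.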
---
**Q4**. Are the uncanonizable eigenvectors in the datasets concentrated in low or high frequency, or spread uniformly?
**A4**. We measured the number of sign-uncanonizable eigenvectors with low, mid, and high frequency on 3 datasets. It appears that uncanonizable eigenvectors are distributed more in the high frequencies. This is good news since, as studied in the LapPE paper, low-frequency components usually matter the most for model performance.
| Dataset | Low | Mid | High |
| --- | --- | --- | --- |
| MOLTOX21 | 130 | 1271 | 4017 |
| MOLTOXCAST | 170 | 1413 | 4449 |
| MOLPCBA | 8723 | 54004 | 280361 |
---
**Q5**. How would this method work for a ring graph (cycle)?
**A5**. This is an interesting question. As shown below, we find that a larger node count $n$ leads to more uncanonizable features, suggesting a close correlation between automorphism and uncanonizable features. This is understandable: because an automorphic graph like a ring can be rotated onto itself, we cannot specify a unique location for a node. We believe that the theoretical relationship between automorphism and sign/basis canonizability would be an interesting open problem to explore in the future.
| #Nodes | Ratio of sign-canonizable eigenvectors | Ratio of basis-canonizable eigenspaces |
| --- | --- | --- |
| 5 | 3/5 | 0/2 |
| 6 | 2/6 | 0/2 |
| 7 | 4/7 | 0/3 |
| 8 | 1/8 | 0/3 |
| 9 | 5/9 | 0/4 |
| 10 | 3/10 | 0/4 |
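As a quick check of this intuition (our sketch, not the paper's code), the cycle-graph Laplacian indeed has repeated eigenvalues, i.e., multi-dimensional eigenspaces for which no canonical basis exists:

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplacian of an n-node ring (cycle) graph."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Eigenvalues of the n-cycle are 2 - 2*cos(2*pi*k/n), so most come in
# pairs: rotational symmetry forces 2-dimensional eigenspaces.
lam = np.linalg.eigvalsh(cycle_laplacian(6))
_, counts = np.unique(np.round(lam, 8) + 0.0, return_counts=True)
print(sorted(counts.tolist()))  # eigenvalue multiplicities: [1, 1, 2, 2]
```

For $n=6$ the spectrum is $\{0, 1, 1, 3, 3, 4\}$: two of the four distinct eigenvalues are shared, matching the many basis-uncanonizable eigenspaces in the table above.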
---
**Q6**. Why are results using SignNet not included for tables 4 and 5?
**A6**. All accuracy results of the baselines are taken from the original papers of LapPE and SignNet. Some models were not evaluated with SignNet because the original paper did not provide them: the SignNet authors only provided code for ZINC and did not implement it on SAN and GraphiT. To be more complete, here we also reproduce some newly tuned SignNet results on MOLTOX21, where MAP still outperforms SignNet under different backbones.
| Model | PE | $k$ | #Param | ROCAUC |
| --- | --- | --- | --- | --- |
| GatedGCN | None | 0 | 1004K | 0.772 ± 0.006 |
| GatedGCN | SignNet | 3 | 1754K | 0.782 ± 0.004 |
| GatedGCN | MAP | 3 | 1505K | **0.784 ± 0.005** |
| PNA | None | 0 | 5245K | 0.755 ± 0.008 |
| PNA | SignNet | 16 | 1367K | 0.745 ± 0.008 |
| PNA | MAP | 16 | 1951K | **0.761 ± 0.002** |
---
**Q7**. Figures 4 and 5 in the appendix are very useful toy illustration that should be moved to the main paper.
**A7**. Thanks for your suggestion. We will move Figures 4 and 5 to the main paper.
---
**Q8**. What if all eigenvectors are used for MAP, as in the BasisNet paper (k=all)?
**A8**. We tried using all eigenvectors with GatedGCN on ZINC; the results are provided below. We can see that MAP shows no improvement with k=all, and thus underperforms SignNet. We believe this is attributed to the uncanonizable eigenvectors being more densely distributed in the high frequencies, as shown in **A4**. In comparison, SignNet achieves sign-invariance w.r.t. these uncanonizable eigenvectors (although with more computational cost and reduced expressive power) and attains better performance. To validate this hypothesis, we masked the uncanonizable eigenvectors of MAP and conducted the experiments again; we call this variant **MAP-mask**. The result shows that masking uncanonizable eigenvectors significantly improves the performance of MAP, reaching performance comparable with SignNet. We believe that developing better canonization algorithms could further close this gap in the future.
| Model | Test MAE |
| --- | --- |
| SignNet (k=all) | 0.100 ± 0.007 |
| MAP (k=8) | 0.120 ± 0.002 |
| MAP (k=all) | 0.121 ± 0.003 |
| MAP-mask (k=all) | 0.106 ± 0.001 |
---
**Q9**. Why does increasing k (8–>16) make performance worse? Shouldn’t the network just utilize the lower freq vectors if they are more important?
**A9**. Here, for efficiency, we only tune hyperparameters on $k=8$ and deploy them to other $k$’s. When further tuned under $k=16$, we can see that they attain the same performance.
| $k$ | 8 | 16 |
| --- | --- | --- |
| Test MAE | 0.120 ± 0.002 | 0.120 ± 0.004 |
---
Hope our elaborations and new experiments above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for the detailed response and am satisfied with the additional results. I am keeping my score. | Summary: This paper introduces a new approach called Laplacian Canonization (LC) for ensuring the sign and basis invariance of spectral embeddings. This is done by determining the canonical direction of eigenvectors in the pre-processing stage. They propose to perform the Laplacian Canonization via Maximal Axis Projection (MAP) algorithm that is guaranteed to canonize all sign canonizable features. Experiments are performed on molecular benchmarks.
Strengths: - The paper is well written, contains clear rigorous definitions and is easy to follow.
- Experiments show consistent improvements.
Weaknesses: With my limited knowledge of the topic, I could not point out any obvious weaknesses of the method. I do have two questions regarding the guarantees of the method:
- The authors claim that the method can canonize more than 90% of all eigenvectors, however, this has only been tested on molecular graphs. Are there any guarantees for other graphs?
- You observe that your assumptions are violated for some small percentage of eigenvectors on real-world datasets. I was wondering if these eigenvectors are evenly spread out over the eigenvalue spectrum or could it be the case that these are all for the lowest eigenvalues? This could be problematic since these are the eigenvectors one would like to use.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Besides the questions, listed in the weaknesses section, I have a few more minor questions:
- In the axis projection step, why do you need c?
- I suppose that in practice alpha_i will not be exactly the same. Do you then consider all values or do you apply some binning algorithm to get B_i? I also wonder what is k in practice?
- in Tables 4 and 5, why is it that MAP increases the number of parameters?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I have no limitations to point out and the code is provided in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our paper. We address your questions in the following points.
---
**Q1**. The authors claim that the method can canonize more than 90% of all eigenvectors, however, this has only been tested on molecular graphs. Are there any guarantees for other graphs?
**A1**. Following your suggestions, we also evaluated the proposed method on other kinds of graphs. From the table below, we can see that we can also canonize >90% of features on other kinds of graphs (such as networks and databases), with *even fewer* uncanonizable features, e.g., 2.59% on COLLAB. This is because molecular graphs are already more symmetric than other kinds of real-world graphs.
| Dataset | Ratio of sign-uncanonizable eigenvectors | Ratio of basis-uncanonizable eigenspaces | Total |
| --- | --- | --- | --- |
| MOLPCBA (molecular) | 2.24% | 7.37 % | 9.61% |
| COLLAB (network) | 0.88 % | 1.71 % | 2.59% |
| IMDB-BINARY (movie database) | 2.29 % | 6.32 % | 8.61% |
| github-stargazers (network) | 2.81 % | 3.44 % | 6.25% |
---
**Q2**. You observe that your assumptions are violated for some small percentage of eigenvectors on real-world datasets. I was wondering if these eigenvectors are evenly spread out over the eigenvalue spectrum or could it be the case that these are all for the lowest eigenvalues? This could be problematic since these are the eigenvectors one would like to use.
**A2**. We measured the number of sign-uncanonizable eigenvectors with low, mid, and high frequency on 3 datasets. It appears that uncanonizable eigenvectors are distributed more in high frequency, which is good news as studied in the LapPE paper, as low frequency components usually matter the most for model performance.
| Dataset | Low | Mid | High |
| --- | --- | --- | --- |
| MOLTOX21 | 130 | 1271 | 4017 |
| MOLTOXCAST | 170 | 1413 | 4449 |
| MOLPCBA | 8723 | 54004 | 280361 |
---
**Q3**. In the axis projection step, why do you need c?
**A3**. Here, adding a constant $c$ does not affect the correctness of our theorem. But with different $c$’s, we can get different numbers of uncanonizable eigenvectors. Therefore, we can tune $c$ to attain better canonization. In practice, we set $c$ to 10 for all cases as a good default choice.
---
**Q4**. I suppose that in practice alpha_i will not be exactly the same. Do you then consider all values or do you apply some binning algorithm to get B_i? I also wonder what is k in practice?
**A4**. That is a good point. We do not use binning here. Instead, we judge two floating-point numbers `(a, b)` to be the same if they are close enough; specifically, we use the PyTorch function `torch.allclose(a, b)`, which checks whether two floating-point numbers are equal up to some tolerance.
We also measured the average values of $n$ (#nodes) and $k$ (#distinct $\alpha$ values) on 3 datasets. From the table below, we can see that on average $k$ is slightly smaller than but close to $n$, because there are rather few repeated $\alpha_i$'s.
| Dataset | $n$ | $k$ |
| --- | --- | --- |
| MOLTOX21 | 19.1 | 17.3 |
| MOLTOXCAST | 19.2 | 16.8 |
| MOLPCBA | 26.0 | 20.2 |
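A minimal sketch of this tolerance-based grouping (our illustration; the function name is hypothetical, and the tolerances mirror the `torch.allclose` defaults of `rtol=1e-5`, `atol=1e-8`):

```python
import math

def group_alphas(alphas, rel_tol=1e-5, abs_tol=1e-8):
    """Group nearly-equal projection values alpha_i into bins B_i, treating
    two floats as equal when math.isclose holds (analogous to the
    torch.allclose check described above)."""
    groups = []
    for a in sorted(alphas):
        if groups and math.isclose(groups[-1][-1], a,
                                   rel_tol=rel_tol, abs_tol=abs_tol):
            groups[-1].append(a)   # close enough: same bin
        else:
            groups.append([a])     # new distinct alpha value
    return groups

# k = number of distinct alpha values after tolerance-based grouping
print(len(group_alphas([0.30, 0.30000001, 0.75, 0.12, 0.75])))  # 3
```

This also illustrates why $k$ is close to but slightly smaller than $n$: only exactly-repeated (or numerically indistinguishable) $\alpha_i$'s are merged.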
---
**Q5**. In Tables 4 and 5, why is it that MAP increases the number of parameters?
**A5**. In our experiments, we allow a flexible choice of model architectures (layers, hidden dimensions) to achieve the best performance. This setting is consistent with prior works, specifically [1] and [2]. When further ensuring a similar model size, as shown below, the results are still consistent with the original ones as our MAP still outperforms LapPE+RS.
*New Results on MOLPCBA under similar model size.*
| Model | PE | $k$ | #Param | AP |
| --- | --- | --- | --- | --- |
| GatedGCN | None | 0 | 2641K | 0.265 ± 0.003 |
| GatedGCN | LapPE + RS | 3 | 2642K | 0.266 ± 0.002 |
| GatedGCN | MAP | 3 | 2658K | **0.268 ± 0.002** |
**References:**
[1] Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., & Bresson, X. (2021). Graph neural networks with learnable structural and positional representations. *arXiv preprint arXiv:2110.07875*.
[2] Lim, D., Robinson, J., Zhao, L., Smidt, T., Sra, S., Maron, H., & Jegelka, S. (2022). Sign and basis invariant networks for spectral graph representation learning. *arXiv preprint arXiv:2202.13013*.
---
Hope our new results above could address your concerns. We are very happy to take your further questions.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: Thanks for taking the time to thoroughly answer my questions. I am happy with the clarifications and provided extra evaluations, and I recommend the paper to be accepted. I increased my score accordingly. | Summary: Recently emerging graph transformers use spectral embeddings.
Spectral embeddings have two empirically known problems: (i) sign invariance and (ii) basis invariance.
The existing remedies for these problems come at a cost.
This paper addresses the problem with a proposed method called Laplacian Canonization (LC).
The paper provides theoretical analyses of LC.
Also, the experimental results show that LC outperforms existing methods.
Strengths: - The simplicity of the proposed algorithm.
- Theoretical guarantee of canonization for the MAP algorithm.
Weaknesses: - We do not know **why** sign and basis invariance hinder the performance of PE.
Thus, I feel a little anxious that readers cannot judge whether work built on top of this series of studies is headed in the right direction to address the fundamental problems. See also the limitations section.
- Looking at the experiments, there is only a little improvement over SignNet, even though LC deals with not only sign but also basis invariance.
Why were some methods not evaluated with SignNet? Does basis invariance not matter? Or does LC show weaker accuracy regarding sign ambiguity than SignNet?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Regarding Table 2.
Intuitively, basis ambiguity should be quite rare, while sign ambiguity always happens, since basis ambiguity requires "shared eigenvalues."
But looking at Table 2, sign-uncanonizable features are fewer than basis-uncanonizable ones.
Thus, how do you justify the definition of canonization? Or does basis invariance matter more?
Also, if basis ambiguity is more frequent than sign ambiguity, how do you defend the small improvement over SignNet?
Basis invariance vs. sign invariance.
As I raised in the weaknesses section, LC improves the accuracy over SignNet only marginally. But from Table 2, there are more basis-uncanonizable features.
Is this indirect evidence that LC does not handle the sign case well? Or does basis invariance actually not matter?
It would be interesting if the authors conducted experiments using "separate" MAP for basis and sign.
Thus, I believe that at least one of the following is happening:
i) something counter-intuitive is happening in Table 2 regarding the number of non-canonizable sign vs. basis features;
ii) even if basis non-canonization is frequent, basis does not matter;
iii) the proposed method deals with the sign case less well than SignNet.
As I wrote in this section, I am not clear on how much sign and basis matter and how MAP addresses each invariance, and therefore I am giving a 4.
---
POST REBUTTAL: Given the synthetic data experiment regarding basis, I slightly increase my score from 4 to 5.
Also, the discussion above on Table 2 contains my misunderstanding. Please see the discussion below for the details.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As far as I understand, the whole community does not know why sign and basis invariance hamper the performance of PE, as I raised in the weaknesses section, and this study is built on those prior studies. Although I read Appendix A, where the authors summarize the point of sign and basis ambiguity, I am less confident on this point.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer NB7V for appreciating the simplicity and theoretical guarantees of our approach. We address your main concerns below, especially those on the meaning of studying sign and basis invariance.
---
**Q1**. We do not know **why** the sign and basis invariance hinder the performance of PE.
**A1**. Indeed this is a good question. First, it is a well-known principle (and practice) that we need to obey the symmetry properties of graphs when designing GNNs. For example, graph convolution and aggregation operators preserve the permutation invariance property of graphs. Similarly, **sign and basis invariance is also an intrinsic symmetry property of graph data (the graph structure is invariant under different sign/basis choices of eigenvectors),** so we also need to preserve this symmetry when designing GNNs. As noted by Reviewer R8b6, *“designing proper architectures that preserve symmetries and are more expressive is an important and open topic currently in GNN literature.”* As also mentioned by Reviewer zwuu, sign/basis ambiguities are important problems not only in the GNN community, but in other fields as well.
| Model | Method for Sign/Basis Invariance | MAE on ZINC |
| --- | --- | --- |
| GatedGCN + LapPE | None | 0.319 ± 0.010 |
| | RandomSign (RS) | 0.202 ± 0.006 |
| | SignNet ($\phi(v)$ only) | 0.148 ± 0.007 |
| | SignNet | 0.121 ± 0.005 |
| | MAP (ours) | 0.120 ± 0.002 |
**Reasons.** Intuitively, GNNs without built-in symmetries can waste a lot of model capacity fitting the exponentially many permutation/sign/basis ambiguities of the same graph, while GNNs obeying symmetries do not need to. Concretely, the benefits of this principle are twofold:
- **Empirically**, as shown in the table above (quoted from Table 3), **encouraging sign/basis invariance by RS, SignNet, and our MAP can all bring significant improvements on real-world datasets**.
- **Theoretically**, a recent paper [1] rigorously shows that we can **gain sample complexity by obeying data symmetry**. The upper bound of the generalization error in Theorem 3.1 [1] contains a term $\mathop{\mathrm{vol}}(M/G)$, the volume of the quotient space that is invariant to group $G$ on the manifold $M$. As the group size $G$ grows (more symmetries), we observe a decreasing generalization error (or equivalently, reduced sample size to attain the same error).
Thus, sign/basis ambiguity is a well-established problem, and solving it brings concrete benefits both empirically and theoretically. We will add these discussions in the revision.
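To make the sign and basis ambiguities concrete, here is a small NumPy sketch (our own illustration, not part of the paper): for a 4-cycle graph, every Laplacian eigenvector is only defined up to sign, and the repeated eigenvalue 2 admits a whole rotated family of eigenbases.

```python
import numpy as np

# Laplacian of a 4-cycle, which has eigenvalues 0, 2, 2, 4 (2 is repeated).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

evals, evecs = np.linalg.eigh(L)

# Sign ambiguity: for any eigenvector v, -v is an equally valid eigenvector.
v = evecs[:, 1]
assert np.allclose(L @ v, evals[1] * v)
assert np.allclose(L @ (-v), evals[1] * (-v))

# Basis ambiguity: any rotation of the eigenvectors of the repeated
# eigenvalue 2 is an equally valid eigenbasis for that eigenspace.
V = evecs[:, np.isclose(evals, 2.0)]   # n x 2 block for eigenvalue 2
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(L @ (V @ Q), 2.0 * (V @ Q))
```

Any positional encoding built directly from `evecs` therefore depends on an arbitrary choice, which is exactly the symmetry a GNN would otherwise have to learn away.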
**References:**
[1] Tahmasebi, B., & Jegelka, S. (2023). The Exact Sample Complexity Gain from Invariances for Kernel Regression on Manifolds. *ICML 2023 Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)*.
---
**Q2**. Why are some methods not evaluated with SignNet? Does basis invariance not matter? Or does LC show weaker accuracy regarding sign ambiguity than SignNet?
**A2**. Here, for a fair comparison, we compare with the scores reported in the original papers of LapPE and SignNet. In the SignNet paper and their official code, for graph tasks, **they only provide results on ZINC,** which we included in Table 3. We also find that directly porting their code to other datasets leads to much worse performance and requires costly tuning. To be more complete, here we reproduce a carefully tuned SignNet result on MOLTOX21, and we can see that MAP still outperforms SignNet.
| Model | PE | $k$ | #Param | ROCAUC |
| --- | --- | --- | --- | --- |
| GatedGCN | None | 0 | 1004K | 0.772 ± 0.006 |
| GatedGCN | SignNet | 3 | 1754K | 0.782 ± 0.004 |
| GatedGCN | MAP | 3 | 1505K | **0.784 ± 0.005** |
| PNA | None | 0 | 5245K | 0.755 ± 0.008 |
| PNA | SignNet | 16 | 1367K | 0.745 ± 0.008 |
| PNA | MAP | 16 | 1951K | **0.761 ± 0.002** |
---
**Q3**. Intuitively, basis ambiguity should be quite rare, while sign ambiguity always occurs, since basis ambiguity requires “shared eigenvalues.” Yet looking at Table 2, there appear to be more basis ambiguities than sign ambiguities. [Several questions are further raised based on this observation.]
**A3**. We are afraid there are some misunderstandings of Table 2 here. As you noted, there are indeed far fewer basis ambiguities (only around **5-6%**, see **Table 9**) than sign ambiguities (**100%**) among all eigenvectors. Clearly, **sign ambiguity is indeed more frequent and more important than basis ambiguity.** Table 2 lists the proportion of MAP-uncanonizable eigenvectors (i.e., **all the others can be canonized by MAP**), e.g., 2.5% for sign and 1.6% for basis on ZINC. Thus, **our MAP algorithm can successfully resolve ~97.5% of sign ambiguities and ~3.4% of basis ambiguities** (70% within basis itself) among all eigenvectors. So MAP indeed does better at resolving sign ambiguity, which also contributes most to its improvements (see **Table 8**). We will explain this relationship more clearly in the revision to avoid possible confusion.
We believe that this clarification could also help address your sequential concerns on the importance of sign and basis invariance. Please let us know if there is more to clarify.
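As a rough illustration of what canonization means, the sketch below picks one canonical representative of {v, -v} with a simple sum-of-entries rule. This is our simplified stand-in, not the actual MAP algorithm (which uses maximal axis projection); notably, the rule fails exactly on vectors too symmetric to break the tie, mirroring the uncanonizable eigenvectors discussed above.

```python
import numpy as np

def canonize_sign(v, tol=1e-8):
    """Pick a canonical representative of {v, -v}: flip so the entry sum
    is positive. Returns (vector, success); when sum(v) is numerically
    zero, the rule cannot break the tie, i.e. v is uncanonizable by it."""
    s = v.sum()
    if abs(s) < tol:
        return v, False
    return (v if s > 0 else -v), True

v = np.array([0.3, -0.7, 0.2, 0.9])
cv, ok = canonize_sign(v)
cw, ok2 = canonize_sign(-v)
assert ok and ok2 and np.allclose(cv, cw)  # both sign choices map to one form

# A vector whose positive and negative parts mirror each other defeats the
# rule, echoing the highly symmetric uncanonizable eigenvectors of Corollary 1.
u = np.array([1.0, -1.0, 2.0, -2.0])
_, ok3 = canonize_sign(u)
assert not ok3
```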
---
**Q4**. It will be interesting if the authors conduct the experiments using the “separate” MAP regarding basis and sign.
**A4**. We note that in **Table 8**, we have included an ablation study on the MAP-sign and MAP-basis methods. We can see that while both methods contribute to the final performance, removing MAP-sign hurts the performance more than removing MAP-basis, which also confirms our explanations above.
---
Hope our elaboration on the meaning of the problem, and our clarification on the quantities in Table 2 could address your concerns. We are very happy to take your further questions.
---
Rebuttal Comment 1.1:
Title: Thank you for clarifying!
Comment: Thank you very much for rebuttal.
First of all, thank you very much for clarifying Table 2. I must admit that I had a misunderstanding of the table.
Also, thank you very much for the detailed discussion on why the sign and basis invariance hinder the performance of PE. The theory part is particularly interesting.
At the same time, if I am not wrong, experimentally we still do not observe improvements of MAP over SignNet, except for PNA on MOLTOX21 (the additional experiment for my comment). Although the averages seem to be slightly better, the values are within the deviations of multiple runs. For example, for GatedGCN + LapPE on ZINC, 0.121 $\pm$ 0.005 for SignNet and 0.120 $\pm$ 0.002 for MAP (yours) seem almost the same, and for PNA on ZINC, 0.105 $\pm$ 0.007 for SignNet and 0.101 $\pm$ 0.005 for MAP (yours) are also almost the same. Do you have any other advantages of MAP over SignNet, since the performance alone is somewhat weak to appeal to?
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer NB7V
Comment: Thanks for the prompt response and for appreciating our explanations! We will certainly add these explanations in the revision.
As for the comparable performance between MAP and SignNet, we note that we have mentioned and explained this phenomenon in the experiment section (L314-319), as quoted below:
> Third, we also observe that **MAP and SignNet achieve comparable performance**. This is because **both methods aim at the same goal—eliminating ambiguity**. However, SignNet does so in the training stage while MAP does so in the pre-processing stage, thus **the latter is more computationally efficient**. Lastly, we would also like to highlight that as a kind of positional encoding, **MAP can be easily incorporated with any GNN architecture by passing the ```pre_transform``` function to the dataset class with a single line of code**.
To summarize, the comparable performance is **expected** because they can both address the sign ambiguity problem well. However, the two adopt quite different approaches to achieve this goal: SignNet uses a dual-branch NN, while ours only uses a **learning-free preprocessing algorithm**. As a new approach, our Laplacian canonization (MAP) has the following advantages:
- **Efficiency.** As a preprocessing method, MAP only needs to process the graphs once before training, while SignNet needs to propagate through and update the dual-branch NN during training, which incurs considerably more computational cost. As shown in Table 6, GatedGCN + MAP takes 64.72h while GatedGCN + SignNet takes 108.78h during training, i.e., SignNet costs 68% more training time than MAP to attain comparable performance.
- **Simplicity and Generality.** As a learning-free algorithm, MAP does not have hyperparameters and module designs to tune ($c$ is the only hyperparameter and we use $c=10$ in all cases). As a preprocessing method, it can be applied to any existing GNNs using Laplacian embedding **with NO change on the model architecture and training process**. In comparison, SignNet introduces new modules and requires specific tuning on each model/dataset to work well. Thus, MAP is more generally applicable and easy to use than SignNet.
- **Applicable to basis invariance.** We note that the basis version of SignNet, i.e., BasisNet, is computationally prohibitive ($O(n^m)$ where $m=O(n^2)$), which makes it inapplicable to real-world data (see the SignNet paper). Thus, on real-world data, SignNet can actually only address sign invariance, while our MAP algorithm can also efficiently resolve basis invariance to some extent (~70% of such features are canonizable by MAP, Table 2).
Given the above advantages, we believe that MAP offers a new and promising alternative to SignNet for addressing the sign/basis ambiguity problem.
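The one-time preprocessing integration described above can be mimicked framework-free; `pre_transform` below follows the PyTorch Geometric naming the quote refers to, and the per-column sign rule is a simplified placeholder for MAP, not the paper's algorithm.

```python
import numpy as np

def sign_canonize(pe):
    """Per-column sign canonization of a Laplacian positional-encoding
    matrix (a simplified placeholder for MAP's actual rule)."""
    signs = np.where(pe.sum(axis=0) >= 0, 1.0, -1.0)
    return pe * signs

def pre_transform(graph):
    # One-time preprocessing hook, analogous to PyG's `pre_transform`
    # argument: it runs once when the dataset is built, never during training.
    out = dict(graph)
    out["lap_pe"] = sign_canonize(out["lap_pe"])
    return out

rng = np.random.default_rng(1)
pe = rng.normal(size=(5, 3))
g1 = {"lap_pe": pe}                                # one sign choice
g2 = {"lap_pe": pe * np.array([1.0, -1.0, -1.0])}  # same graph, other signs

dataset = [pre_transform(g) for g in (g1, g2)]
# After preprocessing, both sign choices yield identical encodings,
# so the downstream GNN never sees the ambiguity.
assert np.allclose(dataset[0]["lap_pe"], dataset[1]["lap_pe"])
```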
Hope the explanation above could address your concerns! We are happy to take your further questions during the discussion stage. | Summary: The authors propose Laplacian canonization, a way to select canonical Laplacian embeddings that resolve the sign and basis ambiguities often present in graph embeddings. The proposed method is a preprocessing step that is relatively fast. The authors perform experiments to evaluate the performance of Laplacian canonization.
Strengths: - Originality: Laplacian canonization is an original idea. To the best of my knowledge, this is a novel contribution.
- Quality: I see this paper as mainly making a methodological contribution. As such, the quality of the experiments are sufficiently extensive to be convincing, and the arguments/theoretical derivations are sound, to the best of my knowledge.
- Clarity: The presentation of the paper and the motivations are clear.
- Significance: It is certainly very relevant and important to study graph embeddings for GNNs these days. The approach is novel and works well. It is significant enough to warrant publication at a venue like NeurIPS.
Weaknesses: - One potential weakness is the theoretical portion of the paper, whose results are rather marginal and unsurprising. However, I see this as a methods paper, and the empirical good performance of the proposed method more than makes up for it.
- One potential limitation of this approach is that not all eigenvectors can be canonized (even though in the datasets 90% of them can be). The authors seem to regard the 90% canonizable rate as a good feature of their approach, rather than a shortcoming. This is a fair perspective. But to provide a more balanced discussion, I would like to see the authors discuss more on this potential limitation, especially since 1. other methods for resolving ambiguities do not suffer from this issue and 2. it is unclear to me whether it is possible that the non-canonizable eigenvectors share any common patterns/structures in real datasets that might bias the results.
- There is a literature (beyond GNN) that also considers spectral embeddings of graphs and going around the basis/sign ambiguity problems. For example, in point cloud registration, Lai and Zhao's "Multiscale Nonrigid Point Cloud Registration Using Rotation-Invariant Sliced-Wasserstein Distance via Laplace-Beltrami Eigenmap. SIAM J. Imaging Sci. 10(2): 449-483 (2017)", and in graph comparison Tam and Dunson's "Multiscale graph comparison via the embedded laplacian distance. arXiv preprint arXiv:2201.12064 (2022)." I suggest incorporating these references and others to round out the prior work section in the appendix.
- Typos:
line 347: practical
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - See above section
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - See above section on weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer zwuu for appreciating the originality and effectiveness of the proposed canonization method. We address your concerns as follows.
---
**Q1**. One potential weakness is the theoretical portion of the paper, whose results are rather marginal and unsurprising. However, I see this as a methods paper, and the empirical good performance of the proposed method more than makes up for it.
**A1**. Thank you for appreciating the good performance of our method. However, we are a bit confused by your comment that the theory part is “marginal and unsurprising”. As far as we can see, our theoretical results establish the theoretical formulation and prove some important properties of Laplacian canonization, making it a theoretically rigorous approach for resolving the ambiguities. Reviewer pkjp comments that this theoretical formulation is “important for this community”. Here, we highlight some ***new and valuable results that have not been observed by prior works***:
1. We constructed a theoretical framework for canonical forms with different invariances and equivariances. **Prior works only consider canonical forms with one kind of invariance or equivariance, and do not face the issue of uncanonizability**, thus their theories do not apply to the sign/basis ambiguity of Laplacian eigenvectors.
2. Using this theoretical framework, we found that universality, permutation equivariance, and sign/basis invariance cannot be achieved at the same time (last point in Appendix A). **Past works on sign/basis ambiguity do not make this observation**, and SignNet actually tries to propose a “universal” sign-invariant network. However, as we discussed in Appendix A, their “universality” does not take permutation equivariance into account, and it is impossible to be universal once it is taken into account.
3. We gave the necessary and sufficient condition of uncanonizability and showed their ratio on real datasets. **Previously we knew that sign/basis ambiguity is harmful for LapPE, but didn’t know the extent, and we were not aware why LapPE still underperforms RWPE even after removing sign ambiguity in some experiments [1].** The characterization of these uncanonizable eigenvectors can help us better understand the harm brought by sign/basis ambiguities.
Hope the explanations above could ease your concerns. If there are more specific questions on the theory part, we are happy to address them in the discussion stage.
**References:**
[1] Rampášek et al. (2022). Recipe for a general, powerful, scalable graph transformer. *NeurIPS*.
---
**Q2**. I would like to see the authors discuss more on this potential limitation, especially since 1. other methods for resolving ambiguities do not suffer from this issue and 2. it is unclear to me whether it is possible that the non-canonizable eigenvectors share any common patterns/structures in real datasets that might bias the results.
**A2**. Thanks for your suggestions. We highlight that, as elaborated in **A1**, three desirable properties of GNNs, namely universality (U), permutation equivariance (P), and sign invariance (S), **cannot be achieved at the same time**. So when preserving permutation equivariance, there is **a fundamental tradeoff between universality (expressive power) and sign invariance**. Accordingly, methods that attain both P and S (like SignNet) cannot attain universal expressive power. Instead, our method attains universality and permutation equivariance, while preserving S as much as possible via Laplacian canonization.
As for the common patterns of these non-canonizable eigenvectors, **Corollary 1** suggests that they are highly symmetric (having identical positive and negative parts up to a permutation). Intuitively, these non-canonizable eigenvectors would appear more often in graphs that are more symmetric (e.g., having high-order automorphisms), and predictions for these graphs might be more negatively affected. Of course, more rigorous research is still needed on how structural symmetries relate to non-canonizable eigenvectors and how they might bias the results.
---
**Q3**. There is a literature (beyond GNN) that also considers spectral embeddings of graphs and going around the basis/sign ambiguity problems. For example, in point cloud registration, [1] Lai and Zhao’s “Multiscale Nonrigid Point Cloud Registration Using Rotation-Invariant Sliced-Wasserstein Distance via Laplace-Beltrami Eigenmap. *SIAM J. Imaging Sci. 10(2)*: 449-483 (2017)” and in graph comparison [2] Tam and Dunson’s “Multiscale graph comparison via the embedded laplacian distance. *arXiv preprint arXiv:2201.12064* (2022).”
**A3**. Thanks for your suggestion; we will incorporate these works into the related work. Both papers propose ways to address the sign/basis ambiguity of Laplacian eigenvectors. Paper [1] addresses sign/basis ambiguities using optimal transport theory, which involves solving a non-convex optimization problem and could thus be less efficient than our approach. Paper [2] symmetrizes the embedding using a heuristic measure called ELD that is quite similar in form to SignNet, while our MAP algorithm offers an axis-projection approach and establishes theoretical guarantees for it.
---
Hope our elaborations above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal Comment 1.1:
Title: reply to authors
Comment: I thank the authors for replying to my comments. I think that the proposed modifications and the clarifications from the authors have sufficiently addressed my concerns. I am satisfied with the authors' response. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper explores the Laplacian canonization approach to address the sign and basis ambiguities of eigenvectors. Previous sign- and basis-invariant methods suffer from high complexity and the proposed canonization method is light-weighted and can be used for any graph neural networks. Since the Laplacian canonization algorithm only runs in the pre-processing stage, it significantly reduces the forward and backward overhead of the neural networks. Experimental results on various graph classification datasets validate the effectiveness and efficiency of the proposed method.
Strengths: 1. The proposed canonization method is effective and efficient. Existing sign- and basis-invariant models suffer from high complexity. And this paper addresses this issue by proposing a new pre-processing algorithm, which not only reduces the training computation costs but also makes the model suitable for any graph neural network.
2. This paper gives a theoretical framework for Laplacian canonization and discusses in detail the conditions for canonizing the sign and basis ambiguities of eigenvectors, which I think is important for this community. Based on the theoretical results, this paper proposes an efficient canonization algorithm that can heuristically determine the signs of eigenvectors.
3. This paper is well-written and easy to follow.
Weaknesses: In the experiments, this paper tests the performance of different positional encoding methods with the same base model. To ensure fairness, the authors should ensure that different methods have similar numbers of model parameters. For example, on the PCBA dataset, the model parameters of GatedGCN-MAP are 2.5 times those of GatedGCN-LapPE, but the performance improvement is negligible. Besides, it is confusing that the parameters of PNA-None and GraphiT-None are fewer than those of PNA-MAP and GraphiT-MAP. Why would removing the positional encoding increase the number of parameters?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper discusses the limitation of the proposed algorithm that not all eigenvectors can be canonized and shows that this situation does not have a potential impact on the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer pkjp for appreciating our method and theoretical results. We address your concerns on parameter sizes.
---
**Q1**. To ensure fairness, the authors should ensure that different methods have similar model parameters.
**A1**. In our experiments, we allow a flexible choice of model architectures (layers, hidden dimensions) to achieve the best performance. This setting is consistent with prior works, specifically [1] and [2]. Following your suggestion, we further ensure the same model size for baseline methods. As shown below, the results are consistent with the original ones as our MAP still outperforms LapPE+RS.
*New Results on MOLPCBA under similar model size.*
| Model | PE | $k$ | #Param | AP |
| --- | --- | --- | --- | --- |
| GatedGCN | None | 0 | 2641K | 0.265 ± 0.003 |
| GatedGCN | LapPE + RS | 3 | 2642K | 0.266 ± 0.002 |
| GatedGCN | MAP | 3 | 2658K | **0.268 ± 0.002** |
---
**Q2**. Why are the parameters of PNA-None and GraphiT-None fewer than those of PNA-MAP and GraphiT-MAP? Why would removing the positional encoding increase the number of parameters?
**A2**. We note that removing the PE alone does not increase the number of parameters of a fixed model; rather, it slightly decreases them because of the smaller input size. Similar to **A1**, we also allow a flexible choice of model size to achieve the best performance, which may select an even smaller model than the baseline. In these cases, MAP-based models can outperform baselines with even fewer parameters, showing their effectiveness. Here, we further rerun MAP with roughly the same model size as the baseline methods (PNA/GraphiT) on MOLTOX21. It can be seen that MAP-based models still consistently outperform their baselines.
*New Results on MOLTOX21 under similar model size.*
| Model | PE | $k$ | #Param | ROCAUC |
| --- | --- | --- | --- | --- |
| PNA | None | 0 | 5245K | 0.755 ± 0.008 |
| PNA | MAP | 16 | 4716K | **0.758 ± 0.003** |
| GraphiT | None | 0 | 958K | 0.743 ± 0.003 |
| GraphiT | MAP | 16 | 916K | **0.755 ± 0.005** |
**References:**
[1] Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., & Bresson, X. (2021). Graph neural networks with learnable structural and positional representations. *arXiv preprint arXiv:2110.07875*.
[2] Lim, D., Robinson, J., Zhao, L., Smidt, T., Sra, S., Maron, H., & Jegelka, S. (2022). Sign and basis invariant networks for spectral graph representation learning. *arXiv preprint arXiv:2202.13013*.
---
Hope our explanations and new experiments above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Hi, thanks for your rebuttal. The new experiments convince me a lot. I have no further concerns. | null | null | null | null | null | null |
Exponentially Convergent Algorithms for Supervised Matrix Factorization | Accept (poster) | Summary: This paper proposes a novel supervised dictionary learning model with two variations, one feature-based and one filter-based. The problem setting is classification tasks with both high- and low-dimensional features, where the high-dimensional features are learned through dictionary learning and then integrated into multinomial logistic regression.
Estimation is formulated as a low-rank optimization problem solved with a projected gradient descent algorithm. An exponential convergence guarantee is provided for the proposed algorithm.
Strengths: The paper, overall, is written clearly and it is a pleasure to read.
1. The proposed algorithm provides theoretical guarantees in terms of convergence.
2. The proposed algorithm demonstrates improved performance compared to prior algorithms, as well as selected standard classification algorithms. It additionally exhibits good interpretability.
Weaknesses: It would be beneficial to conduct benchmarking against deep neural networks in order to gain insights into gaps, if any, in performance and to understand potential trade-offs between interpretability and classification accuracy.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: A few typos:
- Line 151, "is scalar and continuous"
- Line 156, the equation should be the one mentioned on line 154 instead of (6).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have sufficiently addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback on our submission. The reviewer has provided the following comments:
**`Q1. It would be beneficial to conduct benchmarking against deep neural networks in order to gain insights into gaps, if any, in performance and to understand potential trade-offs between interpretability and classification accuracy.`**
**Response**: Thank you very much for the suggestion. In the revision, we have included an additional benchmarking analysis involving deep convolutional neural networks (CNN) and feedforward neural networks (FFNN) to provide deeper insights into the performance of our proposed methods. (We report this result in the supplementary 1-page author response PDF).
For the task of classifying microarray data into cancer classes, we compared the performance of our method with both CNN and FFNN. Specifically, the CNN architecture was designed with a convolutional layer with 32 filters and a kernel size of 3, followed by an average pooling layer with a pool size of 2. Subsequently, a second convolutional layer with 64 filters and a kernel size of 3 was integrated, followed by another average pooling layer with the same pool size. The architecture was finalized with a flatten layer, a fully connected layer of 128 neurons activated by ReLU, a dropout layer with a rate of 0.5, and finally a fully connected layer with a sigmoid activation. The FFNN consists of a fully connected layer of 64 neurons with ReLU activation, followed by a dropout layer with a regularization rate of 0.5, a subsequent fully connected layer with 32 neurons activated by ReLU, and a final fully connected layer with a sigmoid function. This comparative analysis was repeated five times, consistent with the procedure outlined in the main paper.
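For concreteness, a PyTorch rendition of the two baselines as described (layer sizes follow the text; the framework, the ReLU activations after the convolutional layers, and the toy input size of 100 features are our assumptions, since the rebuttal's actual implementation is not given):

```python
import torch
import torch.nn as nn

n_features, n_classes = 100, 2  # toy sizes; the real data has >30,000 gene features

# CNN baseline as described: conv(32, k=3) -> avgpool(2) -> conv(64, k=3)
# -> avgpool(2) -> flatten -> dense(128, ReLU) -> dropout(0.5) -> sigmoid head.
cnn = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=3), nn.ReLU(), nn.AvgPool1d(2),
    nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(), nn.AvgPool1d(2),
    nn.Flatten(),
    nn.Linear(64 * 23, 128), nn.ReLU(),  # 23 = sequence length left for n_features=100
    nn.Dropout(0.5),
    nn.Linear(128, n_classes), nn.Sigmoid(),
)

# FFNN baseline as described: dense(64, ReLU) -> dropout(0.5) -> dense(32, ReLU)
# -> sigmoid head.
ffnn = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_classes), nn.Sigmoid(),
)

x = torch.randn(4, 1, n_features)  # batch of 4 "expression profiles"
assert cnn(x).shape == (4, n_classes)
assert ffnn(x.squeeze(1)).shape == (4, n_classes)
```

With >30,000 input features and only 25-145 training samples, both models are heavily over-parameterized relative to the data, which is consistent with the performance gap reported below.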
In our experiment, the CNN achieved an average accuracy of 0.769 (0.07) (here 0.07 is the standard deviation, and we use this notation hereafter) on the pancreatic cancer dataset, and an average accuracy of 0.854 (0.06) on the breast cancer dataset. In contrast, the FFNN yielded an average accuracy of 0.816 (0.04) for pancreatic cancer and 0.890 (0.02) for breast cancer. We also included the revised table with all methods in the one-page supplementary PDF for rebuttal so that the reviewer can check.
An intriguing observation emerges from our benchmarking analysis. While the FFNN's performance on the breast cancer dataset is comparable to ours (The LPGD algorithms for SDL), the overall performance of CNN is notably inferior to ours. This disparity can primarily be attributed to the small sample size of the training set (145 samples for breast cancer and 25 samples for pancreatic cancer) in comparison to the substantial dimensionality of gene features (exceeding 30,000 features). We note that obtaining a substantial volume of biomedical data for cancer research is very expensive, making it challenging to feasibly train complex models such as deep neural networks. The significance of our approach becomes evident in its ability to retain robust performance even when facing the challenges posed by a restricted sample size and a complex high-dimensional feature landscape. Moreover, our method augments this resilience with the advantage of interpretability. We will include the comparison with CNN and FFNN, as well as the above discussion, in the revision.
**`Q2. A few typos: Line 151, "is scalar and continuous"`**
**Response**: Thank you for this comment. We fixed this in the revision.
**`Q3. Line 156, the equation should be the one mentioned on line 154 instead of (6)."`**
**Response**: Thank you for this comment. We fixed this in the revision. | Summary: The work focuses on a generic supervised dictionary learning formulation, which is convex in each of the variable-blocks but not overall. The idea is to stack up the matrix-variables and obtain an equivalent low-rank optimisation problem with a convex objective. And, then this is solved using a projected gradient descent algorithm (algo1).
Interestingly, under mild conditions, exponentially fast convergence to the global minimizer is proven for algo1. This is a far stronger guarantee than for the popular plain block-coordinate descent (BCD), which (in the general case) guarantees only stationary-point convergence.
As a side result, assuming a generative model for the data and under low-noise conditions (<O(1/n)), statistical consistency of the sample-based optimal solution is also presented.
The convergence rates are empirically verified on synthetic data. Interestingly, Fig. 3a shows that simply using the proposed PGD solver instead of the popular BCD can yield considerable improvement in application performance.
Strengths: 1. While BCD-style algorithms are popularly studied in this context, I think studying the lifted PGD is interesting.
2. Theorem 3.5 result is interesting and potentially can be applied elsewhere.
3. The improvement over BCD in Fig. 3a is considerable, highlighting the impact of the study.
Weaknesses: 1. Though theoretical guarantees are proven, from the simulations it is not clear how large the improvement of PGD over the baseline (BCD) is in terms of time. For example, in Figure 2 it would have been very helpful if BCD were also included.
Very minor comment:
1. The presentation of Fig. 2 & Eqn. 9 could be adjusted so that they do not interfere with the text.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Following points may improve the presentation of this already good submission
1. I guess the order in which we perform the projections $\pi_\Theta$ and $\pi_r$ matters for the rate of convergence. Also, as noted in the paper, works like [12] alternate between projections. Any discussion on this may help the reader appreciate the algorithm better. For example, if one does $\pi_r$ followed by $\pi_\Theta$, what may happen? Why not alternate between projections? Do these choices make the convergence analysis difficult, or does convergence actually slow down if the order of projection is tampered with?
2. Would a distribution-free analysis in Section 4 be more interesting? With the feasibility set being bounded and the objective Lipschitz-continuous in terms of the data, this should be possible.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are clearly mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s overall positive comments.
> Though theoretical guarantees are proven, from simulations it is not clear how much is the improvement of pgd vs baseline (BCD) in terms of time. For e.g. in figure 2, it would have been very helpful if BCD is also included.
**Response:** We sincerely thank you for this suggestion! We added a comparison with BCD in Figure 2 in the revision. We also included the revised Figure 2 in the one-page supplementary PDF for the rebuttal so that the reviewer can check.
> Very minor comment: The presentation of Fig 2 & Eqn 9 could be adjusted to not interfere with the text.
**Response:** Fixed.
>Questions: I guess the order in which we perform the projection $\Pi_{\Theta}$ and $\Pi_r$ matters for the rate of convergence. Also, as noted in the paper, works like [12] alternate between projections. Any discussion on this may help the reader appreciate the algorithm better. For example if one does $\Pi_r$ followed by $\Pi_\Theta$, then what may happen? Why not alternate between projections? do these make the convergence analysis difficult or is it that the convergence actually slows down if the order of projection is tampered with ?
**Response:** Thanks for these great questions. First, in principle, one could alternate between the two projections $\Pi_{r}$ and $\Pi_{\Theta}$ at every iteration after a gradient descent step until convergence, similarly to the alternating projection in [12]. However, this would make each iteration of the algorithm prohibitively expensive, as it requires performing rank-$r$ SVDs until convergence at *every iteration*. The problem in [12] is much simpler than ours, as its objective function is simply the Frobenius norm between the target and the estimated low-rank constrained matrix.
Second, it is a great insight to switch the order of two projections $\Pi_{r}$ and $\Pi_{\Theta}$. Our proposed LPGD algorithm performs the convex projection $\Pi_{\Theta}$ first and then applies the low-rank projection $\Pi_{r}$. The key inequality we derive in the proof of Thm. C.2 is (eq. (60) in the submission)
$$
\lVert \mathbf{Z}_{t} - \mathbf{Z}^{\star} \rVert_F \le 2\eta \lVert \mathbf{Z}_{t-1} - \mathbf{Z}^{\star}\rVert_F + \lVert \Pi_t (\tau \Delta_{\Theta} \mathbf{Z}^{\star}) \rVert_F,
$$
where $\tau \Delta_{\Theta}\mathbf{Z}^{\star}:=\mathbf{Z}^{\star} - \Pi_{\Theta}(\mathbf{Z}^{\star}-\tau \nabla f(\mathbf{Z}^{\star}))$ denotes the gradient mapping at $\mathbf{Z}^{\star}$ w.r.t. the convex constraint $\Theta$, and $\Pi_{t}$ is a linear projection onto a $3r$-dimensional linear subspace that depends on $\mathbf{Z}^{\star}$, $\mathbf{Z}_{t}$, and $\mathbf{Z}_{t-1}$. The last error term above can be bounded above uniformly in $t$ using $\lVert \Pi_{t}(A)\rVert_{F}\le \sqrt{3r} \lVert A \rVert_{2}$. So we can apply the above inequality recursively to obtain the desired result.
Now if we consider an alternative algorithm that uses the low-rank projections $\Pi_{r}$ first and then the convex projection $\Pi_{\Theta}$, then we can derive a corresponding key inequality:
$$
(*)\qquad \lVert \mathbf{Z}_{t} - \mathbf{Z}^{\star} \rVert_F \le 2\eta \, \lVert \mathbf{Z}_{t-1} - \mathbf{Z}^{\star}\rVert_F + \lVert \tau\Delta^{t} \mathbf{Z}^{\star} \rVert_F,
$$
where $\tau \Delta^{t}\mathbf{Z}^{\star}:=\mathbf{Z}^{\star} - \Pi _{t}(\mathbf{Z}^{\star}-\tau \nabla f(\mathbf{Z}^{\star}))$ denotes the gradient mapping at $\mathbf{Z}^{\star}$ w.r.t. the *virtual* linear constraint that we constructed during the proof to approximate the low-rank constraint. Indeed, the above inequality can be derived by modifying the argument in (42)-(47) in the submission assuming the reverse order of projections. We omit the details due to the space constraint.
Then by recursively applying the inequality $(*)$, we can obtain
$$
\lVert \mathbf{Z}_{t} - \mathbf{Z}^{\star} \rVert_F \le (2\eta)^{t} \lVert \mathbf{Z}_{0} - \mathbf{Z}^{\star}\rVert_F + \sum_{k=1}^{t} (2\eta)^{t-k} \lVert \tau\Delta^{k} \mathbf{Z}^{\star} \rVert_F.
$$
Hence the rate of convergence we would get is the same as for the original algorithm, but the additive error takes a different form. Since the "low-rank gradient mapping" $\Delta^{k}\mathbf{Z}^{\star}$ depends on the iterates $\mathbf{Z}_{k}, \mathbf{Z}_{k-1}$, we find it easier to control the gradient mapping with respect to the convex projection, which comes out of the analysis of the original algorithm.
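The recursive step above can be sanity-checked numerically: a scalar recursion $x_t \le a\,x_{t-1} + b_t$ unrolls to $x_t \le a^t x_0 + \sum_{k=1}^{t} a^{t-k} b_k$. A small Python check with illustrative values (run with equality in place of the inequality, so both sides must coincide):

```python
# Verify that unrolling the recursion x_t = a * x_{t-1} + b_t gives the
# closed form x_t = a^t * x_0 + sum_{k=1}^t a^{t-k} * b_k, mirroring the
# recursive application of inequality (*).  All values are illustrative.
a = 0.6                              # plays the role of 2*eta
b = [0.3, 0.1, 0.25, 0.05, 0.2]      # plays the role of the additive errors
x0 = 1.0

# Iterate the recursion directly.
x = x0
for bk in b:
    x = a * x + bk

# Evaluate the unrolled closed form.
t = len(b)
closed = a ** t * x0 + sum(a ** (t - k) * b[k - 1] for k in range(1, t + 1))

assert abs(x - closed) < 1e-12       # both equal approx. 0.45824
```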
We absolutely agree with the reviewer that this discussion will be helpful for the readers. We will add this discussion as a remark in the revision.
> Question: Would a distribution-free analysis in Section 4 be more interesting? With the feasibility set being bounded and the objective Lipschitz-continuous in terms of the data, this should be possible.
**Response:** This is an excellent suggestion. The only distributional assumption we have made in Section 4 is that the noise terms $\epsilon_{i}$ and $\epsilon_{i}'$ follow Gaussian distributions with mean zero. In fact, our current analysis holds when assuming only a sub-Gaussian distribution, as the only place we used the Gaussian distributional assumption was in the concentration inequalities in Lemmas D.2 and D.3 (in order to obtain the tail bounds in eq. (162) and (174)-(175)), which are already stated for the sub-Gaussian case. If we restrict the support of the noise to be bounded, then similar concentration inequalities hold without any distributional assumption (e.g., by the matrix Bernstein inequality). Such a bounded-noise assumption might be reasonable when the feasibility set is bounded, as the reviewer pointed out. We will add this discussion in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. I have read it and the other reviews. My concerns have been addressed and I feel it's a nice idea worth publishing. I would like to keep my previous score. Thanks.
---
Reply to Comment 1.1.1:
Comment: Thank you very much. We again appreciate your thoughtful comments and your effort in reviewing our submission.
Strengths: Optimizing non-convex structured problems poses intriguing and challenging tasks. The problem addressed in this paper represents a classic learning paradigm in the machine learning field, making the authors' contribution well-motivated. Additionally, they demonstrate the effectiveness of their new PGD-variant through classification tasks using medical data. The paper is well-written, presenting its ideas coherently.
Weaknesses: Overall, the paper is of a standard quality. Considering the extensive research on optimization problems with low-rank matrix factorization structure, the newly proposed techniques appear nontrivial yet not entirely surprising. My primary concern lies with the claim in the abstract (and other sections of the paper, e.g., L55) that the proposed method "provably converges exponentially fast to a global minimizer of the objective." Here are my questions:
* [Clarification question A] It appears that the global convergence result (Theorem 3.5) holds only when a low-rank stationary point Z^* exists, where "stationary" is defined as first-order optimality with respect to F under the convex constraint \Theta. In such a case, the objective function seems to be already strongly convex, and the mentioned existence might imply that the low-rank constraint could be eliminated (please correct me if I am mistaken). This assumption is quite strong and essentially transforms a non-convex problem into a convex one. I believe this could be a significant limitation of the analysis.
* [Clarification question B] I acknowledge that the authors present a general convergence result in Theorem D.1. My second question pertains to Theorem D.1(ii). It seems that, under the "possibly misspecified case," Theorem D.1(ii) cannot even guarantee the sequential convergence of {Z_t} since the residual term (second term on the right-hand side of Equation (91)) might not be zero even at the global optimal Z^*. I fail to see (please correct me if I am mistaken) why the gradient mapping would be zero at optimal Z^*, as this mapping does not consider the normal cone (defined in a certain generalized sense) of the rank constraint.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **`Clarification question A`**
**Response:** We would like to express our gratitude for the thoughtful comments provided by the reviewer. To begin, we wish to emphasize that our theoretical analysis comprises three parts:
> 1. (Thm. C.2) Establishing exponential convergence results for the general low-rank projected gradient descent (LPGD) algorithm for $r$-restricted strongly convex (RSC) and $r$-restricted smooth (RSM) objectives.
> 2. (Thm. 3.5, Thm. D.1) Conducting an extensive second-order analysis of the SDL objective to verify that the reformulated SDL problems satisfy the hypotheses of Thm. C.2.
> 3. (Thm 4.1) Establishing statistical estimation guarantee for SDL under generative models by using previous computational guarantees (Thm D.1).
The reviewer's concern appears to stem from mixing the two statements in Thm 3.5 (on SDL) and Thm C.2 (on LPGD). Indeed, the objective functions in Thm 3.5 correspond to the lifted SDL objectives in eq. (8) and (9). Notably, these objectives are not inherently ($r$-restricted) strongly convex. In fact, verifying that the lifted SDL objectives do satisfy the RSC/RSM properties is the content of contribution 2 mentioned above.
Furthermore, we emphasize that the proof of Thm. C.2.(i) for the correctly specified case, where the assumed RSC/RSM objective has a rank-$r$ stationary point, is nontrivial, since strong convexity holds only when we restrict the objective to matrices with rank at most $r$. Thus one cannot eliminate the low-rank constraint, contrary to the reviewer's comment.
More specifically, even if the objective function admits a low-rank stationary point $\mathbf{Z}^{*}$, there could be many other stationary points among matrices with rank $>r$. Moreover, since our LPGD algorithm in general goes in (after low-rank projection) and out (after gradient descent in the ambient space) of the low-rank space, one needs to carefully address that the possibly wild landscape outside of the low-rank space does not significantly impact the convergence.
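For intuition, the iteration just described (a gradient descent step in the ambient space followed by projections back into the constraint sets) can be sketched in a few lines of numpy. The quadratic objective and Frobenius-ball constraint below are placeholder assumptions for illustration only, not the paper's actual $f$ and $\Theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, r, tau = 8, 10, 2, 0.5

Z_target = rng.standard_normal((p, n))   # placeholder data defining f
Z = rng.standard_normal((p, n))          # arbitrary (random) initialization

def grad_f(Z):
    # Gradient of the placeholder quadratic f(Z) = 0.5 * ||Z - Z_target||_F^2.
    return Z - Z_target

def proj_theta(Z, radius=5.0):
    # Stand-in convex projection Pi_Theta: a Frobenius-norm ball (assumption).
    nrm = np.linalg.norm(Z)
    return Z if nrm <= radius else Z * (radius / nrm)

def proj_rank(Z, r):
    # Low-rank projection Pi_r via truncated SVD: keep the top-r singular triplets.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# LPGD iteration: ambient gradient step, convex projection first, then rank projection.
for _ in range(50):
    Z = proj_rank(proj_theta(Z - tau * grad_f(Z)), r)

assert np.linalg.matrix_rank(Z, tol=1e-8) <= r   # the iterate ends in the low-rank set
```

Each iterate leaves the low-rank set after the gradient step and re-enters it after `proj_rank`, which is the in-and-out dynamic described above.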
In addition, Theorem 3 of Zhu et al. '18 (see below) shows that if an objective function with a matrix input is both RSC and RSM, with a condition number $L/\mu<3/2$, and if it admits a low-rank critical point (without additional convex constraint), then there are no spurious local minima in the factored parameter space:
> [1] Zhihui Zhu, Qiuwei Li, Gongguo Tang, and Michael B Wakin. *Global optimality in low-rank matrix optimization.* IEEE Transactions on Signal Processing, 66(13):3614–3628, 2018.
This result also indicates that minimizing an RSC function with a low-rank critical point is a nontrivial problem. To put this in context, our Thm C.2(i) shows that, under the hypothesis of a condition number $L/\mu<3$ (weaker than in Zhu et al.) and with a convex constraint, the assumed low-rank stationary point $\mathbf{Z}^{\star}$ is the global minimizer of the objective among the low-rank matrices and the LPGD algorithm converges to $\mathbf{Z}^{\star}$.
**`Clarification question B`**
**Response:** Thank you for your thoughtful question.
We do see the reviewer's concern regarding the claimed global exponential convergence to the global minimizer of the SDL problem. The exponential convergence technically holds for the correctly specified case, and for the misspecified case, the exponential convergence is up to an additive error that depends on the extent of misspecification. We will revise the statement in the abstract and throughout the paper to clarify this point. We would also like to bring the Reviewer’s attention to two relevant points below.
First, even in the presence of a nonzero misspecification error (which bounds the unnormalized estimation error $\lVert \mathbf{Z}^{\star}-\mathbf{Z}_{\infty} \rVert_F$, $\mathbf{Z}^{\star}\in \mathbb{R}^{p\times n}\times \mathbb{R}^{q\times n}$), our Thm. 4.1 demonstrates that, under natural generative models for SDL, this error becomes vanishingly small with high probability when the noise variance is $\sigma^{2}=O(1/n)$ for SDL-$\mathbf{W}$ and $\sigma^{2}=o(1/\sqrt{n})$ for SDL-$\mathbf{H}$. Roughly speaking, these results indicate that the generative SDL models are nearly correctly specified with high probability. As a result, our algorithm achieves exponential convergence to the correct parameters of the generative SDL model up to a statistical error that vanishes as the sample size $n$ tends to infinity.
Secondly, it seems to be challenging to exactly recover the global minimizer of an RSC function under a low-rank constraint even when there is no additional convex constraint. To the best of our knowledge, all existing works on similar low-rank matrix estimation problems without additional assumptions (e.g., incoherence assumption for matrix sensing problems) recover global optimum up to a misspecification error, which is zero when there is a low-rank stationary point (correctly specified) but is nonzero otherwise. This is the case, for example, in the following references:
>[2] Lingxiao Wang, Xiao Zhang, and Quanquan Gu. *A unified computational and statistical framework for nonconvex low-rank matrix estimation.* In Artificial Intelligence and Statistics, pages 981–990. PMLR, 2017.
>[3] Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, and Sujay Sanghavi. *Finding low-rank solutions via nonconvex matrix factorization, efficiently and provably.* SIAM Journal on Imaging Sciences, 11(4):2165–2204, 2018.
>[4] Sahand Negahban and Martin J Wainwright. *Estimation of (near) low-rank matrices with noise and high-dimensional scaling.* The Annals of Statistics, 39(2):1069–1097, 2011.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the clarification, which has addressed my previous concerns. I have adjusted my score accordingly. | Summary: This paper proposes a variant of supervised dictionary learning (SDL), provides some theoretical guarantee on finding the global minimizer of the problem with arbitrary initialization, and showcase its application in pancreatic cancer.
Strengths: Theoretical analysis of dictionary learning (DL) has a rich literature. However, theoretical analysis of the convergence of supervised DL is scarce, so this is of interest. The paper is written clearly.
Weaknesses: The paper lacks a proper literature review, introduction, and formulation of the dictionary learning problem. Several statements in this paper are not properly claimed, and the formulation is not rigorous. Here are some indications of this.
1. Abstract:
- SDL is not properly defined. There is no info on its key property which is trying to represent data with a sparse combination of columns of a dictionary, which itself is learned.
- The global convergence with random initialization is a bold statement. Indeed, this is misleading, as this paper (I explain below) is solving a different problem than DL or SDL. I note that all theoretical analyses of DL work under the assumption that the initial estimate of the dictionary must be close to the true one [1,2,3,4] (to name a few).
- There is no reference to the sparsity characteristic of sparse coding/dictionary learning. So, then what is the motivation behind using SDL and not another method?
2. Line 24, DL is not introduced properly. DL is a method that tries to learn a sparse representation that can explain the data through a dictionary in a generative fashion.
3. Line 26, This is confusing. DL is a specific model with sparsity; NMF, PCA, ... are different models. However, the statement mixes them all up.
4. Line 26, These citations are proper but old (there are many other recent works on DL).
5. Line 41, DL extracts high-dimensional sparse features (D is overcomplete), not low-dimensional features.
6. Line 48, Literature on theoretical analysis on convergence and recovery for DL is missing.
7. Line 50, Theoretical analyses of DL take a generative perspective: there is a sparse representation that has generated the data (or, in the SDL case, that is also mapped to a class label), and the goal is, given the data, to learn the dictionary and recover that sparse representation. DL is not about the global minimizer of an objective (as the solution with an overcomplete dictionary may not be unique without sparsity).
8. Why is there no notion of sparsity on H in (3)? The same applies to line 117.
9. Related work is missing DL literature. Moreover, DL is a bi-convex problem.
10. The objective in line 154 is bi-convex in (beta, W) and (H). The objective in (6) is exactly the same as the one above it. Moreover, the objective is bi-convex, not convex.
11. For section 3, theoretical analysis on SDL may aim to recover the sparse representation that has generated the data, and has produced the label through a probabilistic generative model. This paper is solving another problem, hence their convergence analysis does not address the SDL problem.
Overall, this paper's analysis is not on recovery of the sparse representation that has generated (explained) the data. Hence, statements are confusing in this regard. The paper provides convergence on another parameter theta.
[1] Agarwal, A., Anandkumar, A., Jain, P., & Netrapalli, P. (2016). Learning sparsely used overcomplete dictionaries via alternating minimization. SIAM Journal on Optimization, 26(4), 2775-2799.
[2] Chatterji, N. S., & Bartlett, P. L. (2017). Alternating minimization for dictionary learning: Local convergence guarantees. arXiv preprint arXiv:1711.03634.
[3] Rambhatla, S., Li, X., & Haupt, J. (2018, September). NOODL: Provable Online Dictionary Learning and Sparse Coding. In International Conference on Learning Representations.
[4] Arora, S., Ge, R., Ma, T., & Moitra, A. (2015, June). Simple, efficient, and neural algorithms for sparse coding. In Conference on learning theory (pp. 113-149). PMLR.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your evaluation of our work, which greatly contributes to its enhancement. We value your expertise in the field and understand your concern about the term DL being closely linked to sparse representation. While we acknowledge the prevalent association of DL with recovering overcomplete dictionaries and sparse representations, our primary focus is on *undercomplete/low-rank* SDL in the *high-dimensional* setting.
We would like to request the opportunity to delve deeper into our approach's motivations and implications either during the rebuttal process or through subsequent discussions. At a minimum, we are committed to clearly distinguishing our high-dimensional SDL via undercomplete DL from the conventional overcomplete DL. To facilitate reader comprehension, we are open to altering our terminology. Terms like "supervised undercomplete dictionary learning" or "supervised matrix factorization" could aptly encapsulate our approach's essence and better distinguish it from the supervised version of the standard overcomplete DL with sparse representation.
In response to your feedback:
**(1)-(2)**
**Motivation for Undercomplete SDL**: Our motivation for undercomplete dictionaries in SDL stems from the classification of high-dimensional data (e.g., genomic data analysis for cancer classification). We aim to learn a low-rank basis that offers interpretable, data-reconstructive, and class-discriminative features, addressing challenges posed by high-dimensional data. The overcomplete dictionary approach, with its associated sparse representation, proves computationally infeasible and complex to interpret. Our preference for undercomplete dictionaries aligns with our goals. Additionally, we find that assuming sparse representation over an undercomplete dictionary for high-dimensional data isn't always reasonable. As a result, we've excluded the sparsity regularizer for the code matrix $H$.
**Related Literature on Undercomplete/Low-rank DL**: Although most DL literature emphasizes overcomplete dictionaries and sparse representation, notable works explore undercomplete or low-rank dictionaries for various purposes (example references below). Notably, [1] introduces undercomplete dictionaries for efficient SDL tasks on high-dimensional data, resembling our approach (without theoretical guarantees). In [2], the identifiability of sparse component analysis is examined under an undercomplete dictionary. [3] employs undercomplete dictionary learning for low-rank dictionary learning, leading to unsupervised low-rank feature extraction.
[1] Mohseni-Sehdeh et al. "A Fast Dictionary-Learning-based Classification Scheme Using Undercomplete Dictionaries." Signal Processing (2023): 109124.
[2] Cohen et al. "Identifiability of complete dictionary learning." SIAM Journal on Mathematics of Data Science 1.3 (2019): 518-536.
[3] Parsa et al. "Low-rank dictionary learning for unsupervised feature selection." Expert Systems with Applications 202 (2022): 117149.
**(3)**
We'll revise this to "matrix factorization" to better encompass a broader spectrum of basis-learning problems.
**(4)**
We'll enhance our reference section with more recent DL sources.
**(5)**
DL with an overcomplete dictionary does extract high-dimensional features, but DL with a low-rank or undercomplete dictionary extracts low-dimensional features. We will clarify this point.
**(6)**
Acknowledging the importance of theoretical analysis on DL, we'll incorporate the references you suggested, which delve into aspects like local convergence and recovery guarantees.
**(7)**
Thank you for your insightful comment. The classical references [30, 33] define the SDL problem as an optimization problem, with the possibility of an undercomplete ($r<p$) dictionary not excluded. Our formulation of SDL (eq. (3)) mirrors [30] (eq. (4)), without a sparseness regularizer and with $r<p$. Our work guarantees that one can find *some* global minimizer of the non-convex SDL objective exponentially fast from any initialization (under some conditions). This is by no means about recovering the ground-truth dictionary and representation. In fact, as we noted in the first paragraph of Sec. 3 (and also as the reviewer correctly pointed out), our SDL optimization problem does not have a unique global minimizer. However, if we transform the separate matrix factors into a combined low-rank matrix (denoted $\theta$), then uniqueness is guaranteed. Under this setting, we also obtain a statistical estimation guarantee under generative SDL models; see Sec. 4.
**(8)**
See our response for 1 and 2.
**(9)**
While we're constrained by space, we concur on the importance of showcasing recent DL advances. These references would fortify our work's positioning.
**(10)**
Acknowledging the non-convexity of (6), if we introduce the combined low-rank matrix factor $\theta$ (in line 162), then the objective function in (6) is quadratic $\theta\mapsto \lVert \theta_{0} - \theta \rVert_{F}^{2}$ for a fixed $\theta_{0}$. One can then minimize this convex function under a low-rank constraint on $\theta$, yielding solutions for $\beta$, $W$, and $H$. These points are stated in lines 160-163. We will further clarify our discussion in lines 156-163.
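The change of variable can be checked in a few lines: for any factors $\beta, W, H$, the stacked matrix $\theta = [\beta^{T}H \parallel WH]$ factors through an $r$-dimensional space and hence has rank at most $r$. A quick numpy check with illustrative dimensions (the sizes are assumptions for demonstration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, r = 12, 20, 3                 # illustrative sizes with r < p

beta = rng.standard_normal(r)       # classifier weights in factor space
W = rng.standard_normal((p, r))     # undercomplete dictionary
H = rng.standard_normal((r, n))     # code matrix

# theta stacks beta^T H (1 x n) on top of W H (p x n): shape (1+p) x n.
theta = np.vstack([beta @ H, W @ H])

# theta = [[beta^T], [W]] @ H, so rank(theta) <= rank(H) <= r.
assert theta.shape == (1 + p, n)
assert np.linalg.matrix_rank(theta) <= r
```

This is why a quadratic function of $\theta$ minimized under a rank-$\le r$ constraint recovers solutions for $\beta$, $W$, and $H$.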
**(11)**
While recovery of the ground truth is a natural goal for SDL by direct analogy with the overcomplete DL literature, it is not what we aim for in this work. Our underlying assumption is that the observed high-dimensional labeled data is generated by a linear combination of unknown low-rank features. In this setting, there are infinitely many, equally effective undercomplete dictionaries and representations (not necessarily sparse) that could have generated the observed labeled data. Hence our goal is to find an optimally effective pair of class-discriminative dictionary and data representation by solving an optimization problem.
We sincerely hope our clarifications assuage your concerns, encouraging your reconsideration.
---
Rebuttal Comment 1.1:
Title: Reviewer's Comment after Rebuttal
Comment: I thank the authors for their rebuttal, and appreciate their explanation. My concern remains on the following: this paper is applying a matrix factorization (a step very similar to PCA) but refers to the method as dictionary learning (which is unfamiliar to the general reader of the literature). I found the text to have several statements (which I noted in my original review, e.g., the discussion around Eq. (6)) that are not fully correct; it's hard to evaluate if those will be fully addressed upon acceptance (see below for more explanation).
- Overcomplete/undercomplete: I now understand that your focus is on learning a low-dimensional representation and provide a low-rank matrix factorization. I strongly suggest improving the literature on including sparse (overcomplete) dictionary learning, as this is a most well-known case; when one talks about dictionary learning. Moreover, I note that one may still want to apply sparsity on the low-dimensional representation (see [1] which uses an undercomplete dictionary and apply sparsity on high-dimensional gene perturbation data).
- On the usage of dictionary learning: this paper is performing a matrix factorization, or the method (in line 174) is explaining an approach similar to PCA through SVD (if not exactly the same). Applying least-squares optimization to the data while enforcing orthogonality on the basis recovers the PCs (up to permutations). I suspect that applying PCA to the data, following a procedure similar to this paper's, should result in very similar performance. I would appreciate the authors providing such a comparison and an explanation of how this approach differs from using PCs.
[1] Pan, J., Kwon, J. J., Talamas, J. A., Borah, A. A., Vazquez, F., Boehm, J. S., ... & Hahn, W. C. (2022). Sparse dictionary learning recovers pleiotropy from human cell fitness screens. Cell systems, 13(4), 286-303.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's valuable feedback on our rebuttal.
>*I found the text to have several statements (which I noted in my original review, e.g., the discussion around Eq. (6)) that are not fully correct*..
**Response.** We have elaborated on this point in item 10 in the rebuttal, but here take another chance for further clarification. In the submission lines 155-162, we wrote
>>Instead, consider reformulating the above nonconvex problem into a problem with a convex objective function by suitably stacking up the matrices using the following matrix factorization: .. eq. (6) .. Proceeding one step further, another important observation we make is that it is also equivalent to finding a *single* matrix $\theta:=[ \beta^{T} H \parallel W H ]\in \mathbb{R}^{(1+p)\times n}$ of rank at most $r$ that minimizes the function $f$ in (6), which is convex (specifically, quadratic) in $\theta$.
The first sentence reads as if (6) itself has a convex objective function, which is indeed not correct. Our claimed reformulation of the nonconvex problem is via the matrix stacking in (6) AND the change of variable using $\theta$ after "proceeding one step further" above. We fully agree that our phrasing was not very clear, and we will clarify this point in the revision.
>Overcomplete/undercomplete: I now understand that your focus ..
**Response.** We very much appreciate that the reviewer now understands that we seek for low-dimensional representation of labeled data. As we have written in our previous rebuttal, as the reviewer suggests strongly, we will improve on reviewing the literature of sparse (overcomplete) dictionary learning.
>Moreover, I note that one may still want to apply sparsity on..
**Response.** We appreciate the reviewer providing this valuable reference. We will add a discussion on low-dimensional and sparse representation with the suggested reference.
>On the usage of dictionary learning: this paper is performing a matrix factorization or the method (in line 174) is explaining an approach similar to PCA through SVD (if not exactly the same). .. I appreciate the authors providing such a comparison and an explanation of how this approach differs from using PC.
**Response.** The reviewer's suspicion that our method, which effectively solves a supervised low-rank matrix factorization problem, is essentially (if not exactly) the same as applying standard PCA via SVD is not true. We have already discussed thoroughly why supervision makes a great difference compared to unsupervised matrix factorization. Our experiment in Section 6 is devoted to demonstrating this point, which the reviewer might have missed. We elaborate on the difference in experimental and theoretical aspects below.
As we demonstrated in Figure 3 of the original submission, unsupervised PCs can result in poor performance on classification tasks. In Figure 3 **a**, the benchmark method "MF-LR" refers to applying standard PCA first and then using the resulting low-dimensional representation for logistic regression. This method significantly underperforms our SDL method, especially on the breast cancer dataset. Also in Figure 3 **b**, we visualize the PCs learned by standard PCA from the pancreatic cancer dataset. Not only does this method underperform our SDL (73% vs. 96%), but the PCs also do not detect any clinically known prognostic markers. On the contrary, our SDL method (shown in Fig 3 **c**) achieves much higher accuracy (96%), and the detected *supervised* dictionary contains several clinically known prognostic markers (panel **d**). We have also provided extended experiments in the rebuttal on breast cancer, where our method even detected BRCA1, the well-known breast cancer oncogene (provided in our 1-page supplementary rebuttal). There have also been previous works on supervised PCA (e.g., [45]), experimentally validating the need to supervise PCs.
Theoretically, the objective function of SDL in (3) combines the matrix factorization and classification loss. For instance, for the filter-based SDL (SDL-$W$), the objective function is
\begin{align}
\min_{W, H, \beta, \gamma} \sum_{i=1}^{n} \ell\left(y_i, \beta^{T} W^{T} x_i + \gamma^{T} x'_i\right) + \xi \lVert X_{\text{data}} - W H \rVert_{F}^{2}.
\end{align}
The above optimization problem cannot be solved by a single application of SVD, due to the coupling between the classification and matrix factorization losses. This is why we propose the low-rank projected gradient descent (LPGD) algorithm, which iteratively alternates gradient descent steps with low-rank SVD projections to solve the above non-convex problem in the combined factor space. To the best of our knowledge, this work is the first to provide an algorithm with a convergence guarantee to an optimal solution of the joint optimization problem above, and more generally of (3).
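A minimal sketch of the generic LPGD template referenced above: a gradient step on a smooth loss, followed by projection onto the set of rank-r matrices via truncated SVD. For illustration it is applied to plain low-rank matrix approximation; the paper's actual objective additionally couples the classification loss, which this toy omits.

```python
import numpy as np

def svd_truncate(M, r):
    # Best rank-r approximation of M in Frobenius norm (Eckart-Young).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lpgd(grad, Z0, r, lr, n_iter=200):
    # Low-rank projected gradient descent: alternate a gradient step on a
    # smooth loss with an SVD projection back onto the rank-r set.
    Z = svd_truncate(Z0, r)
    for _ in range(n_iter):
        Z = svd_truncate(Z - lr * grad(Z), r)
    return Z

# Toy problem: recover a rank-2 matrix by minimizing ||Z - X||_F^2
# subject to rank(Z) <= 2.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
grad = lambda Z: 2.0 * (Z - X)
Z_hat = lpgd(grad, np.zeros_like(X), r=2, lr=0.2)
print(np.linalg.norm(Z_hat - X) / np.linalg.norm(X))  # essentially zero
```

Because the toy target is itself rank 2, the iterates converge to it exactly; in the coupled SDL objective the classification term pulls the factors away from the plain PCA solution.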
We eagerly await further discussions and value any additional insights you may have. | Rebuttal 1:
Rebuttal: We submit an optional 1-page PDF to show the revised Figures 2 and 3 with captions. In Figure 2, we present a comprehensive comparison between our LPGD algorithm and BCD. In Figure 3, we include additional experimental details, focusing on breast cancer classification, which successfully identifies well-known oncogene and cancer-associated genes (prognostic markers). Moreover, Figure 3 now contains further benchmarks that compare our approach to convolutional neural networks and feed-forward neural networks.
Pdf: /pdf/234b2d6f63d33a950fdebb3070148227bf9e9e05.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Implicit Transfer Operator Learning: Multiple Time-Resolution Models for Molecular Dynamics | Accept (poster) | Summary: The authors propose Implicit Transfer Operator (ITO) learning, which aims to learn a surrogate of a molecular dynamics (MD) simulation process. Since standard MD simulations integrate Newton's equations of motion numerically, small integration time-steps are necessary, making simulations costly when processes on long time-scales (requiring many integration steps) are of interest. The proposed ITO method learns to simulate the dynamics across multiple time-scales, allowing to study processes on long time-scales more efficiently by using large time steps. It is also shown that ITO models are able to learn surrogate dynamics using partial observations, which is useful e.g. in the context of coarse-graining.
Strengths: The presented method has strong theoretical foundations and the underlying theory is well-described.
Weaknesses: While the premise of the paper is interesting, the results and contributions are not particularly impressive:
1. ChiroPaiNN is an extremely simple modification of the existing PaiNN architecture. So simple, in fact, that I find it hard to argue this is a separate contribution (the only modification seems to be an added cross product between vector features, followed by a scalar product).
2. The authors write that their models show quantitative agreement with dynamic and stationary observables, which is not backed by the results shown in Table 2. Free energy differences of folding have relative errors in excess of 400% and also absolute errors are extremely large with over 3 kT in some cases. Mean first passage times have similarly large errors. These results cast doubt on the practical usefulness of the proposed method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - It is unclear to me why symmetry with respect to parity inversion needs to be broken for the task at hand. Can the authors explain why this is necessary? What happens when an E(3)-equivariant model is used instead of an SE(3)-equivariant one? Additional ablation studies in this direction would be insightful.
- Recent work (see arXiv:2302.00600) has pointed out an interesting connection between the score field $\nabla \log p$ (see Eq.7) in diffusion models and force-fields. Are the authors aware of this connection? Have the authors tried to "extract" the effective force-field learned by their model? My gut feeling is that it might be possible to run ordinary MD simulations with the learned force-field, but possibly using much larger time steps than would typically be feasible. I encourage the authors to investigate and run additional experiments in this direction.
- Can the authors think of ways to improve their method to achieve better quantitative agreement for observables like free energy differences with conventional MD?
- Direct efficiency comparisons to conventional MD would be interesting. What is the expected effective speed up achievable with the ITO method? Considering that training data for ITO needs to be generated, the models needs to be trained, and finally evaluated, I wonder how big the advantage is compared to directly running MD simulations.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations with respect to transferability and scalability of the proposed method and the accuracy of the surrogate dynamics model are mentioned. In some cases it is unclear how the limitations can be reconciled in the future (e.g. the requirement of a closed-form expression for target path probabilities). Potential societal impacts of the work are not discussed, which however, I find acceptable considering the topic of the manuscript. A discussion of societal impacts would probably appear contrived.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript and providing insightful comments and questions. We believe your input will help us shape a much improved camera ready version. Please find commentary and replies to your concerns and questions below.
_Comments on weaknesses:_
1 and 2. We believe that the modification, while indeed “simple”, is still a contribution, as it solves a specific problem we had: we needed a highly efficient architecture with SE(3) equivariance (see clarification below). The values in Table 2 are generally very good considering that no other methods offer estimates of these values at all. Nevertheless, after submission, we found that training the models longer led to better agreement between the MD and ITO values, suggesting that we were systematically underfitting our models due to time constraints. We now provide updated values for Chignolin and show a plot of the convergence of observables as a function of training time in the global reply (Global reply Fig. 2 and Table 2). We will update all values for the camera-ready version and provide convergence plots in the updated appendix.
_Replies to questions:_
1. E(3)-equivariant models do not have the capacity to distinguish between chiral configurations. As a consequence, if we do not use an SE(3)-equivariant score model, i.e., if the symmetry under orientation inversion is not broken, half of the predicted configurations will have the opposite chirality of the conditioning configuration. This is a structural problem because the model cannot tell a configuration and its mirror image apart. In the global response we illustrate this problem in Figs. 3 and 4. We will elaborate on this discussion in the camera-ready version.
2. This is indeed interesting work ("Two for One", Arts et al.). We expect that for large N, $-\nabla_{x_{N\tau}} \log p(x_{N\tau} \mid x_0)$ will approach the effective force field, since the density converges to the Boltzmann distribution (see our Appendix A.1). We have not tried to extract the force field from our current models, as learning a machine-learned potential is not the focus of this work, and we believe it would distract attention from the main contributions. We also do not believe the learned force field would necessarily permit simulation steps as large as those ITO enables; it is unclear to us what the reasoning here is. Since our focus is on sampling and avoiding MD simulations, we leave these experiments for future work.
3. As outlined above, we found that we had systematically underfitted our models due to time constraints. After submission we found much better agreement, and we show the convergence of observables in our global response.
4. In the manuscript we do provide timings of sampling. The sampling speed of the current implementation scales quadratically in the number of particles (Appendix C.3 and the limitations section), whereas MD scales from N log(N) to N^3 depending on various factors. Even so, with ITO we can stably simulate at a rate six orders of magnitude faster than MD. Nevertheless, current timings are not directly comparable, as MD is subject to other degrees of freedom which we ignore for now. Specifically, since the model is not yet transferable, we need MD data before we can sample for each system. In the strict sense, this means that in its current instance there is no gain compared to MD for an arbitrary new case. Nevertheless, we argue that through systematic data curation and parameter sharing we can achieve a transferable ITO model in the near future, which would capitalize on the multiple-orders-of-magnitude speedups presented here. We transparently discuss this limitation in our manuscript and believe that, in spite of it, the advances presented in this paper are extremely interesting for the broader AI4Science community.
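The chirality argument in reply 1 can be checked numerically: the scalar triple product of three vector features is a pseudoscalar, invariant under proper rotations but sign-flipping under reflection, so a model that consumes it can tell mirror images apart. This is only a sketch of the underlying symmetry fact, not the ChiroPaiNN implementation.

```python
import numpy as np

def random_rotation(rng):
    # Draw a proper rotation (det = +1) by QR-orthogonalizing a random matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0   # flip one axis to make the determinant +1
    return Q

rng = np.random.default_rng(1)
v1, v2, v3 = rng.standard_normal((3, 3))   # three vector features

# Scalar triple product: a cross product between vector features,
# followed by a scalar product, yields a pseudoscalar.
chi = np.dot(v1, np.cross(v2, v3))

R = random_rotation(rng)
chi_rot = np.dot(R @ v1, np.cross(R @ v2, R @ v3))   # rotation: unchanged

P = np.diag([1.0, 1.0, -1.0])                        # reflection (mirror)
chi_ref = np.dot(P @ v1, np.cross(P @ v2, P @ v3))   # reflection: sign flips

print(np.isclose(chi, chi_rot), np.isclose(chi, -chi_ref))
```

A purely E(3)-equivariant scalar channel would be blind to the sign of chi, which is exactly why both mirror images get sampled with equal probability in that setting.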
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed reply and clarifications.
1. It is not true that E(3)-equivariant models lack the capacity to distinguish between chiral configurations. For example, libraries like e3nn separate features into irreps of O(3), so they have "separate channels" for scalars (not able to distinguish mirror images) and pseudoscalars (able to distinguish mirror images). The proposed modification to the PaiNN architecture mixes pseudoscalars and scalars into one channel (breaking the symmetry w.r.t. parity inversion). It is still unclear to me why that is necessary when pseudoscalar channels from an E(3)-equivariant model could also be used to distinguish mirror images, without needing to break symmetry.
2. Regarding my reasoning about extracting the learned force field from the model (point 2): I simply find this is an interesting test to consider, which could illuminate what the model has learned. As I explained in my original comment, the learned force field might allow much larger time steps than conventional MD, so there could still be a substantial speedup compared to conventional simulations. Of course, it is up to the authors whether they want to explore this direction or not.
3. I am glad to see you get improved results when training for a longer time!
4. I thank the authors for clarifying and I agree that a transferable ITO model that provides actual speedup over MD would be much more interesting than the current model.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal. A few comments on your replies.
1. Including a pseudo-scalar in an E(3)-equivariant network would make it SE(3)-equivariant; we achieve the same thing through different means, as the reviewer outlines. We are not using e3nn, since it did not have the performance we needed for our application. We did use it initially, but had to leave it behind due to speed issues. We recognize e3nn is under heavy development and may in the future be suitable for applications like ITO.
2. We would be grateful if the reviewer could clarify why and how they expect a Langevin-type MD simulation with a machine-learned potential to enable larger time steps. We have limited resources at our disposal and have to choose our experiments very carefully. We emphasize that ITO does not rely on MD (integration of an SDE), and as such we are not sensitive to the discretization errors which limit the time step accessible in classical MD.
4. We are transparent about the limitations in the original manuscript, and yes, we agree the model would be much more interesting if it were transferable at this point. However, keep in mind that this is a big ask: currently no ML coarse-grained models are transferable, and no existing methods allow for long-time-scale simulations like ITO. | Summary: The paper develops a conditional diffusion model to sample unnormalized densities, e.g., the Boltzmann distribution of molecules, to replace molecular dynamics simulations. Given a molecular structure, noise is added to it and the diffusion model denoises this distribution to the distribution generated by MD. The diffusion model is conditioned not only on diffusion time but also on MD time, such that a single model can be used to make larger or smaller MD steps.
The method is not generalizable - training data needs to be collected via the expensive MD that the method aims to replace. The paper is transparent about this. Experiments are performed on a toy system, alanine dipeptide, and 5 fast folding proteins.
The paper has strong contributions but is very unpolished. I think it can be substantially improved in the rebuttal, especially by addressing my first two content concerns in the weaknesses.
I thank the authors for any time they take to answer my questions!
Strengths: ### Strengths
1. The authors tackle an important and impactful task (sampling Boltzmann distributions of molecules).
2. The idea is clever. Since it is hard to sample the Boltzmann distribution of a molecule unconditionally, the authors instead construct a noisy distribution that still shares structure with the Boltzmann distribution, so that the generative model has an easier job producing a sample from the less complicated conditional distribution instead of the unconditional one. **Notably, this was already done by TimeWarp**, but the authors bring diffusion models to this task, which seem better suited for it (if no reweighting is done).
3. It is an interesting hypothesis that training with multiple MD times improves stability and would perform better than a single MD timestep size. Furthermore, this is justified and illustrated with interesting transfer operator theory. I think there is great value in bringing this theory to the attention of the ML-for-MD and Boltzmann generator community.
4. The model is self consistent. The experiments set up for this are ideal and illustrate the self consistency very well.
Weaknesses: The first three content concerns are by far the most relevant concerns
### Content concerns (approximately in order of importance)
1. The main claim (as I see it), that training with multiple MD times helps, is not sufficiently investigated:
1. Why do you only show results for the Muller Potential for this? I do not see why not to evaluate it for all systems or at least for some molecule instead of only the toy system.
2. Why do you only show VAMP2 score gaps and not kl divergences?
3. What about 1000 lag in Table 1?
2. What is the comparison in wall-clock time for sampling the whole distribution compared to MD? I would expect to see, e.g., a plot of how the KL divergence of some observable goes down in MD vs. with your method in wall-clock time. Running diffusion models also takes quite some time, I imagine, so is this not one of the main things that need to be investigated? Am I missing this information somewhere in the paper?
3. The paper claims to be reproducible, but there is no code provided.
4. Is there related work on ML for transfer operators (e.g., by Anima Anandkumar) that should be explained in the related work? I am not familiar with the field - just a potential pointer.
### Presentation Concerns
I think the presentation has to be improved and currently suffers from missing information, information relegated to the appendix, distraction with irrelevant material, and unnecessarily complicated explanations. The first two concerns are the most relevant.
1. Are we obfuscating the method unnecessarily with math to make it seem more sophisticated? I think it would be tremendously helpful if the transfer operator theory is explained and justifies correctness, but the explanation of what is actually done in the end can be vastly simplified and made much more transparent by just stating that it is now a diffusion model that is conditioned on the MD timestep as well and can therefore make larger or smaller MD timesteps during inference.
2. The “novel” architecture explanation is unnecessary, and there is really no need to claim it as a contribution instead of just focusing on the clearly good contributions and the point of the paper (to me at least): using multiple MD times and a conditional diffusion model to replace MD. These are the two **actual** novel contributions. How about just saying you use PaiNN with a little tweak, similar to what you can do with e.g. e3nn to deal with chirality, that the reader can look up in the appendix?
3. (would take quite some rewriting I suppose): I think the presentation might benefit a lot from distinguishing between e.g., an “MD SDE” and a “Diffusion SDE”. Then you can distinguish between an MD time and a diffusion time and say that in the text next to your superscript, subscript notation.
4. Figure 3 does not show any visible difference between fixed and stochastic lag? My recommendation would be to put it in the appendix and use the space for something useful.
5. I would recommend not talking about a special coarse grained approach in the contributions introduction and abstract. I went into the paper thinking you do something interesting with coarse graining.
6. You talk about nested sampling vs. direct sampling. While it is clear to me what is meant, this is not explained anywhere. A simple sentence would suffice.
7. More explicitly point out the dependence of the added noise on the size of the MD timestep size.
8. It should be mentioned in the main text that the number of diffusion ODE steps is the same, independent of the size of N.
9. Maybe fix all the typos especially in the related works section for the next/camera-ready version :sweatsmile:
10. The appendix pointer for Algorithm 1 is wrong.
Technical Quality: 4 excellent
Clarity: 1 poor
Questions for Authors: ### Questions
1. For the dependence of the diffusion/the amount of added noise on the size N: What is the connection between increasing the maximum diffusion time T and making beta_i dpend on i instead and controlling the variance with that? Is one correct and the other is not or are they equivalent?
2. Did you try an SDE solver for the diffusion SDE?
3. How are the standard deviations in Table 2 calculated?
4. Why do you use the same number of diffusion steps/discretization steps if you have a higher or lower N?
5. How many samples do you draw in all your experiments? How long do you run the “MD” with the diffusion model?
6. Why does the same i index both t and N? Wouldn't that mean that a particular step in the trajectory always uses the same N?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 1 poor
Contribution: 2 fair
Limitations: The paper is transparent about the main limitation, that there is currently no capability to generalize. I think the extent to which this is made clear could go further though: point out that the model is only **overfitting** on a distribution and that there is **no "real-world use value" (e.g., speeding up MD) in its current form.**
The paper also points out the important difference to e.g. Timewarp that there is no direct way to do reweighting and get exact convergence to the boltzmann distribution in the limit. However, in the discussion it is made to seem like obtaining exact likelihoods from a diffusion model is very much feasible as well and would not come with a large amount of difficulty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking time to read our manuscript thoroughly, and providing their insightful comments. We in particular appreciate the comments about the presentation, which will help us prepare a much improved camera ready version. We are addressing their highlighted weaknesses and questions in a point-by-point reply below.
_Comments on weaknesses:_
1. We show this for the Müller-Brown (MB) potential, as it is the system which can be extensively sampled using conventional methods and for which statistics can be computed with confidence. We have trained models for alanine dipeptide with fixed lags and include VAMP2 gaps calculated for these models as well for completeness; preliminary figures are attached in the global response (Fig. 1), and they echo the results reported for MB. Final figures will be included in the camera-ready version. KL divergences are by construction insensitive to low-density regions, whereas capturing slow dynamics means capturing low-density regions well. VAMP2 gaps compare the singular values of a Koopman operator and directly measure meta-stability, i.e., how well we capture the gaps between high- and low-density regions of configuration space. We used trajectories of length 1000\tau to generate the statistics in the table, to ensure the same statistics across lag times. The estimation of the Koopman operator is not stable when the lag is equal to the length of the trajectory, so we cannot compute the VAMP2 gap reliably at 1000\tau. We will redo these calculations for the camera-ready version and include all lags.
2. We provide estimates of wall-clock time per conditional sample for the systems in Appendix C.2, and we briefly discuss one example (Chignolin) in Section 4.3 of the main text. The main difference is that with MD we need to run a simulation for N\tau steps to draw one sample for the Monte Carlo estimator (2. Dynamic observables), whereas with ITO we can draw one sample at N\tau in a one-shot fashion; for coarse-grained Chignolin, drawing one sample takes ~4 ms. If we very conservatively assume we can simulate coarse-grained Chignolin at 100 microseconds per day with similar GPU resources, and want to evaluate a correlation function at N\tau = 1 microsecond, MD yields 100 independent samples per day on a TitanX, while ITO yields on the order of 10^7 independent samples. We realize that this discussion has been lackluster in the manuscript and will improve it for the camera-ready version.
3. We will provide code when the paper is accepted for publication.
4. We have been in active conversation with Prof. Anandkumar and solicited feedback from her group about ITO, and while we got pointers to work on ODEs and PDEs using their Neural Operators approaches, they did not so far investigate transfer operators to our knowledge. If the reviewer can point us to specific references we are happy to expand this discussion.
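The throughput comparison in reply 2 above can be reproduced with back-of-the-envelope arithmetic. The rates used here (100 microseconds of simulated time per day for coarse-grained MD, ~4 ms of wall-clock time per one-shot ITO sample) are taken directly from the reply and are stated assumptions, not measurements made here.

```python
SECONDS_PER_DAY = 86_400

md_us_per_day = 100.0        # assumed CG-MD throughput: simulated us per day
lag_us = 1.0                 # correlation lag N*tau in microseconds
md_samples_per_day = md_us_per_day / lag_us        # one sample per lag window

ito_s_per_sample = 4e-3      # ~4 ms wall-clock per one-shot ITO sample
ito_samples_per_day = SECONDS_PER_DAY / ito_s_per_sample

print(md_samples_per_day)             # 100.0
print(f"{ito_samples_per_day:.1e}")   # 2.2e+07, i.e. on the order of 10^7
```

This recovers the roughly five-orders-of-magnitude gap in independent samples per day quoted in the reply.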
_Replies to questions:_
1. Interesting idea; we have not explored this. We encode the physical time (N) using a positional encoding, which then defines the conditional diffusion model and, in turn, the distribution of the random variable x_{N\tau} | x_0.
2. No. However, we are using ODE solvers to solve the probability flow ODE, for efficient inference of the trained model. We discuss this in the appendix and provide timings for inference.
3. The uncertainties are estimated using Bayesian posterior sampling of Markov models consistent with the data (using procedure described in Trendelkamp-Schroer and Noe JCP 2013). We will clarify this in the camera ready version.
4. We have not experimented with varying the number of diffusion steps for different N. However, the motivation for keeping it the same is that the conditional densities at different N are combinations of the same functions, weighted differently (see Eq. 8 and Appendix A.1). Since we do not know the weights a priori, we delegate this to a positional encoding. The hope was, and what we see is, that ITO is indeed able to generalize across time horizons (N), even if we do not represent the operator explicitly.
5. Each plot in Figure 4 is calculated using 15,000 trajectories. For direct sampling, this means that we had to take 15,000 samples in total. In the case of ancestral sampling, we had to sample 4/64/512 steps for 15,000 trajectories, depending on the value of \Delta t.
The contour lines of the fast-folding proteins in Figures 5-8 were calculated using 10,000 trajectories. The density in the background consists of samples from the reference trajectories. The simulation step is 200 ps and the simulations were performed with ancestral sampling. This means that in order to plot the contour lines for \Delta t = 200 ns we had to perform 1000 steps. We will clarify these details in the camera-ready version of the manuscript.
6. In the context of mini-batch training, the index i indicates a tuple B_i = (x_{t_i}, x_{t_i+N_i\tau}, N_i), where t_i and N_i are sampled randomly for each batch element. The index i thus runs over the batch, whose elements are composed of a conditioning state, a time-lagged state, and a time lag (horizon). We realize this notation is not standard and might be confusing; we are revising it for the camera-ready version.
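Reply 1 above mentions encoding the physical time N with a positional encoding; a transformer-style sinusoidal encoding is one plausible realization. The function below is a hypothetical stand-in (the exact encoder used by ITO is not specified in this thread), shown only to make the conditioning mechanism concrete.

```python
import numpy as np

def encode_lag(N, d_model=16, base=10_000.0):
    # Sinusoidal (transformer-style) encoding of the physical lag N.
    i = np.arange(d_model // 2)
    freqs = base ** (-2.0 * i / d_model)
    return np.concatenate([np.sin(N * freqs), np.cos(N * freqs)])

# Distinct horizons N yield distinct conditioning vectors that a score
# network can consume alongside its diffusion-time embedding.
e1, e100 = encode_lag(1), encode_lag(100)
print(e1.shape, np.allclose(e1, e100))
```

The point is simply that one network, fed different encodings of N, can represent the family of conditionals p(x_{N\tau} | x_0) without an explicit operator for each lag.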
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses and thoughtful explanations.
1. Thank you for the VAMP2 clarification. Regarding Figure 1 in the new PDF: the average over these scattered performances across epochs, with the average being worse for lag 100, is not very convincing as an argument that training with multiple lags will likely give a better model than a fixed lag. What if we had epochs 50 to 60? Is 40 to 50 cherry-picked?
2. Is my understanding incorrect that this does still not provide the same information as, e.g., a KL divergence decrease over time would provide? If the samples of ITO are bad, it will not matter how fast they are generated - the KL divergence (or better metrics) will not decrease.
3. If you do not provide code, I think it is incorrect to claim that the results are reproducible.
4. I should not have listed this as a concern, it is a mere question whether neural operator learning is relevant here and I cannot provide specific points that should be discussed in the references.
The time taken to answer my questions is highly appreciated.
---
Reply to Comment 1.1.1:
Comment: Thanks for engaging in the discussion!
1. No, these are not cherry-picked; they are the last 10 epochs of training, where the loss has stabilized. The fluctuations are large, but the averages are converged. In all cases but lag 100 ITO is better, and in that case ITO and fixed lag are statistically indistinguishable. We would call this a clear outperformance. Consider that the ITO model is a single model predicting all time scales, whereas fixed-lag models require a new model for every time resolution to be estimated.
2. Thanks for the clarification. Making such a comparison is challenging for any of the molecular systems due to the computational requirements incurred. But we will include such a plot for MB in the appendix for the camera ready version. Thanks for the suggestion.
3. We agree, and consequently we will release the code once the paper is published to comply with the NeurIPS recommendations. | Summary: This paper presents implicit transfer operator (ITO) learning for the simulation in molecular dynamics. Their approach adopts the SE3 equivariant MPNN architecture (ChiroPaiNN) to parameterize the transition kernels in the denoising diffusion probabilistic model. The method displays a decent performance on several all-atom molecular simulation tasks with only coarse-grained representations.
Strengths: - The problem is very relevant and the motivation is clear. Most notably, the method takes SE3-equivariance into consideration, which I think is a very good practice to combine state-of-the-art generative models with the simulation of molecular dynamics.
- The paper presents a good balance between the introduction, methodology, and experiments. It is an interesting paper to read.
- Illustrative examples are shown to demonstrate the effectiveness of their approach with detailed implementation details.
Weaknesses: - The theory part concerning transfer operators is chaotic. Eq. (3) has a typo in it ($\rho$ should be $f$). And in Appendix A.2, I don't think Eq. (17) should be correct if the transfer operator is defined as in (16). It seems they mix up the definition in Eq. (16) with that defined in Eq. (5) of [1].
- The experiments seem sound but it is very hard for me to evaluate the effectiveness of their approach since there is a lack of comparisons. As listed in the related works section by the authors, the idea of transfer operator surrogates has already been commonly used in molecular modeling as well as deep generative Markov state models. I believe there would be state-of-the-art methods other than all-atom MD simulations, to which they are comparing.
- As I mentioned before, the use of an SE3 equivariant message-passing neural network seems one of the main contributions of this work. Following the last point, I would suggest at least one experiment that could explicitly demonstrate how and why this architecture is beneficial to the diffusion model. Will the results be much worsened if we remove this SE3-equivariance from the architecture, given a similar size of the NN?
[1] Prinz, Jan-Hendrik, et al. "Markov models of molecular kinetics: Generation and validation." The Journal of chemical physics 134.17 (2011).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I don't see a direct connection between Eq. (4) and (5) and their method. Would you explain more about the motivation of the algorithmic design from this "relaxation"?
- Following the discussion in Appendix A.1, it seems that when $N$ is very large, the spectrum of the transfer operator $T_\Omega$ will become very ill-posed (with one eigenvalue being 1 and all others converging to 0). Will this affect the performance of the method? Or does this mean that the $N_{\rm max}$ in Algorithm 1 should not be too large?
- I don't quite understand the results in Figure 5. There are two dashed lines in the figure, both labeled "Folded".
- Following the previous point, there seems to be a large discrepancy between the value of $\langle \tau_f \rangle$ for Trp-Cage, BBA, and Villin in Table 2. Would you explain how should we interpret this? Is the MD result generally accurate or it is also just a reference? If so, how could we know that the simulation yields reasonable results?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our manuscript and their comments, in particular regarding the presentation of the maths and the comparison to previous work. We believe this input will help us prepare a much improved camera-ready version.
_Comments on weaknesses:_
1. There was indeed a typo in eq. 3: \rho and f were mixed up in that equation; we will fix this in the camera-ready version. Great catch! The reviewer here refers to the propagator, which is mathematically different from the transfer operator; the difference is the scaling by \mu, which was missed. See our eq. 17 and, for a more verbose derivation in bra-ket notation, Appendix A.1.
2. There are no other methods available which allow for sampling on these time-scales and permit an apples-to-apples comparison. DeepGenMSMs are essentially deep Markov models, which encode configurations as state memberships and then model transition densities between discrete states in a latent space with a Koopman operator. As shown in their original paper, these methods introduce severe distortions in the local structures when long time-scales are queried. So while DeepGenMSMs have realistic latent-space dynamics, the real-space dynamics cannot be faithfully reconstructed. ITO does not suffer from this drawback. The only methods which allow for faithful sampling of long time-scales are conventional MD simulations and Timewarp (arXiv:2302.01170); the latter is a preprint with no code or data publicly available, and it suffers from other severe drawbacks. Nevertheless, we recognize that expanding the section on prior work will help clarify the ITO contributions to the readership.
3. We provide a figure in the global reply showing how a E(3) equivariant model samples both mirror images of alanine dipeptide with equal probabilities, whereas only one mirror image is accessible in the physical dynamics used to train the model. Introducing SE(3) equivariance overcomes this problem.
_Replies to questions:_
1. Indeed, we do not directly use eq 4, 5 and 8, as they would involve optimization on the Stiefel manifold, and would require us to choose the number of eigenfunctions to estimate. We therefore model these expressions implicitly, by modeling the transition density. We use these equations to motivate the connection between the time-horizon N\tau and transition probability between two configurations x and y. We conjecture that we can leverage this underlying mathematical structure to instead train a single model which models the transition density at multiple different time horizons [1, N_max], since the proportions of different functions vary exponentially in N. Indeed, we see that training a model with multiple N is an advantage: we get better prediction of meta-stability when training with multiple time horizons.
2. No, we have not experienced any negative impact of large time-scales in our experiments. For large N we converge to the Boltzmann distribution, i.e. the state $x_{N\tau} \sim \mu(x)$ independently of $x_0$. This is the hallmark of the ill-posedness, but it is also what we expect from the dynamics: at sufficiently long time-scales we expect our Markov chain to mix instantaneously.
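The spectral picture in this reply (one unit eigenvalue carrying the stationary/Boltzmann distribution, all other eigenvalues decaying away with N) can be illustrated with a toy Markov chain; the 3-state transition matrix below is hypothetical and chosen only for illustration:

```python
import numpy as np

# Toy 3-state Markov chain: the transfer operator has one eigenvalue equal
# to 1 (the stationary/Boltzmann distribution) and all others strictly
# inside the unit circle, so T^N collapses onto the stationary distribution
# as N grows -- the "instantaneous mixing" regime described in the reply.
T = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.85, 0.05],
    [0.05, 0.15, 0.80],
])

# Eigenvalue magnitudes, sorted descending: 1.0 first, the rest < 1.
eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]

# Stationary distribution pi: left eigenvector of T for eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# For large N, every row of T^N approaches pi, independently of the start
# state x_0 -- the analogue of x_{N tau} ~ mu(x) independently of x_0.
T_N = np.linalg.matrix_power(T, 500)
max_dev = float(np.max(np.abs(T_N - pi)))
```

Here `max_dev` is numerically zero: after 500 steps the sub-unit eigenmodes have decayed completely, so the chain has forgotten its initial condition, exactly as the reply argues for large time horizons.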
3. We are illustrating two collective variables, tIC1 and tIC2, in our analyses. The dashed lines correspond to the tIC1 and tIC2 coordinates of the folded state. For clarity, we will change the dash style to distinguish the two tICs in the camera-ready version.
4. We expect these values to align within an order of magnitude. However, after submission, we found that training the models longer led to better agreement between the MD and ITO values. We now provide updated values and, in the global reply, show a plot of the convergence of observables as a function of training time for Chignolin. We will include these updated values for all four proteins in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I am raising my score to (5) but with less confidence (2).
---
Reply to Comment 1.1.1:
Comment: Thank you! | Summary: The proposed approach aims to learn molecular dynamics via an implicit transfer operator framework that can perform modeling at multiple time scales. The framework uses a diffusion model with an SE(3)-equivariant architecture. The approach has shown the capability of stable and self-consistent modeling at multiple time and space resolutions for molecular dynamics modeling.
Strengths: - a novel framework for multi-scale simulation of molecular dynamics, combined with the SE3 network for stable, long-term results
- demonstrates good experimental results on multiple MD datasets and achieves an order-of-magnitude speedup
- works on the important task of molecular dynamics modeling
- uses a diffusion network for exact likelihood evaluation
- honest and clear notes on limitations - appreciated
Weaknesses: - in terms of presentation, it would be nice to make each figure self-contained in terms of the points it makes and the meaning of its notation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - how is the number of time scales chosen during training?
- could you explain, in the caption or here, all the inputs to the ITO networks in Figure 2?
- what's the difference between the diffused time and the actual time?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - this paper may be intense for people not working on MD. It'd be nice to include more descriptions in captions.
- I am curious how it can be generalized to other dynamics like some other physics system that is also multiscale.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their favorable and supportive evaluation and helpful comments and questions on our paper. We reply to the questions below and welcome a discussion about any outstanding doubts.
_Questions:_
1. The number of time-scales chosen during training can be understood as how many linear combinations of the eigenfunctions we want the model to see during training (eq. 8). We initially set this number (N_max) to 1000, and we did not experiment with varying this value. We leave optimization of this hyperparameter to future work.
2. z is an atom embedding (nominal embedding), N is the multiple of the physical time-step \tau (positional embedding), x_0 is the Cartesian coordinates of the conditioning state, t_diff is the diffusion time, and x_{0+N\tau} is the partially denoised state of the time-lagged configuration given x_0. Note that a hat is missing on x_{0+N\tau}; to be consistent with the main text, we will fix this for the camera-ready version. In B), variable names have the same interpretation, and a hat is also missing on x_{0+N\tau}.
3. The diffusion time is the progress along the diffusion process which models the conditional probability density p(x_N\tau | x_0); the physical time (“actual time”) is the time-step N\tau, which is connected to the physical process we are modeling. | Rebuttal 1:
Rebuttal: Global response; see the attached PDF.
Pdf: /pdf/58b23013f0958cdfff6349cdacbe3b91b34d4a89.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: In this work, the authors proposed a framework that combines an SE(3)-equivariant MPNN and a conditional DDPM, called Implicit Transfer Operator (ITO) learning, as a method to efficiently sample observables from MD simulation trajectories. Such a framework is validated on Muller-Brown potential data generated by the authors, as well as the commonly used alanine dipeptide and fast-folding protein datasets generated using MD simulation. The ITO learning framework is shown to have the potential to accelerate or serve as a surrogate for MD simulation.
Strengths: 1. It is a nice effort to formulate the MD trajectory using the ITO such that the sampling can be learned by the conditional DDPM.
2. Proposed a SE(3) version of the PaiNN.
Weaknesses: #### Major
My major concerns are related to the experiment and results sections of the manuscript.
1. In Figure 4, the $\phi$-$\psi$ plot or Ramachandran plot is often colored by the free energy of the system (e.g. [Köhler et al. 2023](https://pubs.acs.org/doi/epdf/10.1021/acs.jctc.3c00016), [Marloes et al. 2023](https://arxiv.org/abs/2302.00600) and many more), which is directly related to the probability of the state given Boltzmann distribution. From the current Figure 4, the physical property of the system, energy, is not directly visualized.
2. Another issue with Figure 4 is the large bin size of the 2D histogram. The current bin size of the 2D histogram is about 0.4 rad, which makes it really hard for readers to understand the performance of the SE3-ITO model. Moreover, the MD simulation data are represented in the form of a 2D histogram while the model-sampled data are in the form of contours. I would highly recommend that the authors show the MD data in separate subfigures in the same form as the model-sampled data. If a 2D histogram is to be used, a much smaller bin size should be used for clarity.
3. Although the effort of proposing an SE(3)-equivariant version of PaiNN (ChiroPaiNN) should be recognized, the necessity of CPaiNN is not well established in the manuscript. As the authors mention in the manuscript, there is no parity change during MD simulation. I am wondering if there would be a significant difference in accuracy if the original PaiNN were used in the ITO learning framework. Assuming an SE(3)-equivariant model is absolutely necessary, the authors have not shown any comparison between CPaiNN and other established SE(3)-equivariant models such as the [SE(3)-Transformer](https://arxiv.org/abs/2006.10503). Such a benchmark would definitely help to improve the manuscript.
4. For the fast-folding protein experiments, the CG-SE3-ITO model is compared with MD data. In Table 2, the error of the model is significantly higher for proteins with more residues or $C_{\alpha}$ atoms. Yet, the discussion of this high error is limited in the manuscript. The purpose of coarse graining is to achieve relatively high accuracy on large systems. Underwhelming accuracy on large systems impairs the applicability of the SE3-ITO model for coarse graining.
#### Minor
5. Line 65, the probability is written as $p_{\tau} (x_{t+\tau} | x_{t})$, which is inconsistent with the notation in the dynamics observables equation ($p_{\tau} (x_{t+\Delta t} | x_{t})$) in line 64.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In Figure 3, I assume that the green curve ($N=1000$) is not sampled from nested conditional probability. Can you verify my understanding?
2. In Figure 3, the model with $N=10$ and $\Delta t=1000\tau$ is lower compared to higher $N$ values. In Figure 4, $\Delta t=4$ps seems to lead to less accurate samples from the model. My interpretation of these results is that the ITO learning framework suffers higher error when predicting more immediate conformational changes of the molecule in MD simulation. If so, the dynamics of the molecule when moving between high-probability states might be missed when using the ITO framework. Can the authors discuss this further?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for taking the time to carefully read our manuscript and providing constructive input and criticism as to how we can improve it. Below, you will find a point-by-point reply to your concerns and questions. We are looking forward to continue the discussion, please do not hesitate to ask if things remain unclear.
*Comments to major weaknesses*
Points 1-2: To clarify, Fig 4 does not show Boltzmann distributions but transition densities. Specifically, we show the log transition probabilities at three given time-lags (or time-horizons) from MD (coarse 2D and 1D histograms). SE3-ITO samples are overlaid with contour lines. As we are amongst the first to present work on models able to do this, we have experimented with visualization to make it as informative as possible. To clarify, the low resolution of the MD histograms is due to the high dimensionality of the transition density; it scales quadratically in the number of bins. Recall that we need to compute the probability from any bin to any other bin. For these 16x16 histograms we divide the data into 65,536 transition counts; with 750,000 MD samples in our training data, doubling the resolution would give us less than one sample per bin on average. In the plots we show only the transition statistics from the bin to which the initial condition is assigned.
We would like to emphasize that we also compare SE3-ITO samples (blue and orange) and MD samples (black) in the marginal histograms.
Nevertheless, we recognize the reviewers' concern about the comparison of statistics shown with different resolutions, and will include a 1- and 2-dimensional histogram of the model and training samples for more direct comparison, for the camera-ready version. 2D histograms will be separated into individual plots.
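For concreteness, the bin-count arithmetic in the first paragraph of this reply can be checked with a few lines (the sample count is taken from the reply; everything else is plain arithmetic):

```python
# Why 2D transition-density histograms must stay coarse: the number of
# (from-bin, to-bin) pairs grows quadratically with the number of bins.
n_samples = 750_000  # MD samples in the training data (from the reply)

results = {}
for bins_per_axis in (16, 32):
    n_bins = bins_per_axis ** 2   # bins in one 2D (phi, psi) histogram
    n_pairs = n_bins ** 2         # transition counts: any bin -> any other bin
    results[bins_per_axis] = (n_pairs, n_samples / n_pairs)

# 16x16 histogram: 65,536 transition counts (~11 samples per pair on average)
# 32x32 histogram: 1,048,576 transition counts (<1 sample per pair on average)
```

Doubling the per-axis resolution from 16 to 32 multiplies the number of transition counts by 16, which is exactly why the averaged occupancy drops below one sample per bin.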
Point 3: In the global response we show why an SE(3)-equivariant model is necessary for the molecular applications pursued here (figs 3-4). Briefly, if we use a parity-invariant model, we will sample mirror images of molecules with equal probability a priori, even if one of the mirror images is inaccessible in the physical dynamics used to simulate the system. This is also a shortcoming observed in Timewarp (arXiv:2302.01170), which uses a permutation-equivariant architecture. While it is interesting to investigate and compare different SE(3)-equivariant architectures for SE3-ITO models, we are confident that these are best addressed in future work, as such a benchmark would add little scientific value to the current manuscript.
Point 4: After submission we found our models were slightly underfitted, and after training longer we found improved agreements with observables across all proteins. We are providing new values in the global response table 2, for Chignolin, along with convergence plots. We will update all values for the camera ready version.
_Questions:_
1. Yes this is correct. It is sampled directly with \Delta t = 1000\tau.
2. We have to be careful comparing N across different data-sets, with different resolutions in space and time. In general, we cannot make a statement as to whether fast (‘immediate’) dynamics are captured poorly by our ITO models. Our preliminary investigations of this (Fig 10, in the supplement) suggest that we indeed do not capture very fast dynamics perfectly. However, we argue that these fast dynamics do not need a model like ITO. We can already study fast dynamics very well with conventional MD simulations. Their limitation is the mixing between modes interconnected by low probability barriers. On that task ITO does extremely well.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns and answering my questions regarding the manuscript. I gained a deeper understanding of your transition-density visualization (Fig. 4) after reading your explanation. I would highly recommend that the authors add a more detailed explanation of Figure 4 to the camera-ready version of the manuscript. Also, Figs. 3 and 4 are sufficient to show the necessity of the SE(3)-equivariant model in this case. I suggest briefly mentioning that in the manuscript.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad to hear that our more in-depth explanation of our figures helped the reviewer understand the presented results better. For the camera-ready version we will explain what is shown in Fig. 4 in more detail. We will tone down the discussion of the ChiroPaiNN architecture, but include figures 3-4 (global response) in the appendix to illustrate the need for the modification we introduce in PaiNN. | null | null | null | null | null | null
Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis | Accept (poster) | Summary: When conventionally trained, neural networks do not demonstrate structural properties like input separable functions or reusability of sub-modules. The authors investigated this phenomenon and proposed iterative pruning to enhance structural properties of neural networks. They demonstrate the effectiveness of their method on boolean functions and a modified MNIST dataset.
Strengths: * The paper is well-written with nice motivations and clear logical flow.
* The paper investigates this interesting scientific question "whether conventionally trained neural networks display structural properties", which is an important question for network interpretability
* The paper proposes iterative pruning to enhance structural properties, which is a technical contribution.
Weaknesses: * The scope is a bit limited. The paper only discusses two properties, input-separable functions and reuse of sub-modules. The examples are a bit too simple. That said, maybe this is not too much of a problem for a scientific paper whose goal is to understand something. Still, I'd love to see your method applied to larger-scale experiments.
* The novelty is not very clear to me. I'm glad to see that iterative pruning works pretty well for module reuse, but there do not seem to be many technical contributions. I suggest the authors highlight the technical contributions and comparisons to previous works.
* Regarding the method, I feel encouraging modularity directly in training (e.g., adding Eq. (2) as a penalty in training) may further enhance structural properties. I'd love to see how this trick changes the outcomes (especially for the failed cases).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * Figure 3 and Figure 4 seem to have the exact same titles and captions. Should distinguish them more clearly.
* I'm not very sure about your definition of reusability (your definition sounds more like "shared features" to me). An example I would count as reuse of a submodule: consider input (x1, x2) and output ((x1-x2)^2, (x1+x2)^2); the squared function is applied twice (reused). However, there is probably no shared feature in a trained network. A trained network (e.g., a fully-connected network) can only learn the squared function twice and independently, even with your pruning strategy, I guess.
* Do you expect your method to generalize to larger models, e.g., large language models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments. We provide responses to the identified weaknesses and questions raised next:
**W1: The scope is a bit limited. The paper only discusses two properties, input separable functions and reuse of sub-modules. The examples are a bit too simple. This is not too much of a problem for a scientific paper whose goal is to understand something.**
As the reviewer also pointed out, the goal of the paper is to understand under which training constraints neural networks (NNs) acquire the hierarchically modular structure reflecting that of the given task – and it proposes a methodology to uncover that structure. The use of simple Boolean function graphs and sub-function properties allows for a principled analysis, making it easier to gain insights into the resulting NN structure. Further, the evaluation of the proposed training and network analysis methodology requires the knowledge of the task’s structure (ground truth). Boolean functions allow us to construct a wide range of tasks with different (yet known) hierarchical structures for detailed evaluation as demonstrated in section 5. Whether properties other than input separability and module reuse are similarly discoverable through our proposed methodology remains to be investigated in future work. However, we believe that these two properties are both quite central and general.
**W2: The novelty is not very clear to me. I'm glad to see that iterative pruning works pretty well for module reuse, but it seems there is not too many technical contributions. I suggest authors should highlight the technical contributions/comparisons to previous works.**
We would like to highlight the technical contributions in our work:
1. Propose a novel training methodology based on iterative pruning of units and then edges that results in NNs with a hierarchically modular structure that reflects the corresponding structure of the given task.
2. Develop a method based solely on unit connectivity to organize units remaining after pruning into modules and infer the hierarchical structure learned. The method utilizes path-based unit features, clustering, and cluster merging to uncover the underlying hierarchy of modules.
3. Design experiments and analysis tools to detect whether NNs, after training, acquire structural properties resembling the properties of the task’s sub-functions.
4. While individual components of our proposed training (pruning) and module detection methodology have been previously explored, our main contribution is to synthesize these components into a coherent and empirically evaluated pipeline.
To the best of our knowledge, our paper is the first work that presents a combined training (pruning) and network analysis tool (module detection) to uncover hierarchical modularity. Previous works in modularity and NNs have only worked on the latter. Due to page limitations, we combined comparisons with previous works in the introduction section (paragraph 2). That paragraph provides a high level overview of prior work and differences with our paper. We will include a more detailed comparison at a technical level in the camera-ready version, using the additional page provided.
**W3: I feel encouraging modularity directly in training (e.g., adding Eq. (2) as a penalty in training) may further enhance structural properties.**
Thank you for this very interesting suggestion. The concept of hierarchical modularity naturally incorporates sparsity and reusability, resulting in more efficient task/function representations. Our pruning approach revolves around the idea of first restricting the number of units to promote module reuse and then the number of edges to reveal the sparse connectivity, thus promoting the emergence of hierarchical modularity.
Adding a penalty term to the loss function to restrict the number of units and edges is a promising idea. However, it requires careful consideration, including the sequential nature of unit and edge penalty application (refer to section 3) to capture densely connected reused modules effectively. We acknowledge the potential of this approach to reduce training costs. It is a future research direction that is definitely worth exploring.
**Q1: Figures 3 and 4 seem to have the exact same titles and captions.**
We will make the figure titles and captions more distinguishable in the final version.
**Q2: I'm not very sure about your definition of reusability. An example I would count as reuse is: consider input (x1, x2), output ((x1-x2)^2, (x1+x2)^2), the squared function should be reused. A trained network can only learn the squared function twice and independently, even with your pruning strategy.**
We want to clarify the distinction between function reuse and operation reuse, as we define them in this work. In the example provided, squaring is an operation that is independent of its input variables. On the other hand, a function is a combination of such operations along with specific ordered input variables to it. As correctly pointed out, specific operations have to be relearned if applied to different inputs due to the fixed data flow in NNs. In systems with dynamic routing, it would be possible to learn squaring only once, qualifying as a reused operation. However, in fixed graphs like NNs, this is not feasible, as also highlighted in prior works (Gref et al. 2020, Csordas et al. 2021). We will clarify this distinction and elaborate on it in the next version of the paper.
**Q3: Do you expect your method to generalize to larger models?**
We expect the overall idea of restricting the number of units and edges (see W3) to remain valid. We do hope to apply our method to larger models and analyze the resulting structure. However, this remains out of scope for this work as the hierarchical structure of those tasks is unknown (see W1) and any identified structure couldn’t be compared against a ground truth.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarification! I will raise my score from 5 to 6. | Summary: This paper conducts an investigation of hierarchical modularity in neural networks by studying boolean networks. The paper studies simple hierarchically modular boolean functions and learns them using MLPs. It then proposes metrics to discover this modularity by examining 1) input separability and 2) reusability of sub-functions. Finally, the paper provides a clustering-based method to identify the modules in neural networks applied to arbitrary tasks.
Strengths: 1) Clear and scientifically principled investigation into an important yet understudied facet of neural networks
2) Interesting results showing that hierarchical modularity in neural networks often doesn't emerge with standard training but with pruning (both edge and neuron), the sparsity forces the networks to become hierarchically modular.
3) Novel method to find modules, with some empirical backing
Weaknesses: 1) The experiments on more realistic datasets e.g. MNIST can be expanded to link the findings of the paper more concretely to what is observed in practice.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Currently, the experiments on MNIST to verify the method to discover modularity do serve as a proof of concept but more extensive experiments would be needed to confirm the effectiveness of the proposed approach to identify modules in neural networks in general.
In particular, extensions to CNNs on more realistic image datasets might be very interesting. It might be useful to conduct experiments with a subset of classes from a dataset like CIFAR100 or ImageNet where the superclasses and subclasses often offer some natural opportunities for hierarchical modularity.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments about this work.
**Weakness: Currently, the experiments on MNIST to verify the method to discover modularity do serve as a proof of concept but more extensive experiments would be needed to confirm the effectiveness of the proposed approach to identify modules in neural networks in general. In particular, extensions to CNNs on more realistic image datasets might be very interesting. It might be useful to conduct experiments with a subset of classes from a dataset like CIFAR100 or ImageNet where the superclasses and subclasses often offer some natural opportunities for hierarchical modularity**
The primary aim of this paper is to understand the conditions under which a neural network will acquire a hierarchically modular structure that reflects the functional structure of the given task – and to design a methodology for uncovering that structure. To validate this approach, tasks with known hierarchical structures are needed. Boolean functions offered a diverse range of tasks with distinct structures, allowing for an effective evaluation of the proposed methodology. Additionally, as the reviewer points out, the MNIST experiments provide a proof of concept for relatively larger-scale applicability.
While we acknowledge the potential for broader experiments involving larger models, such as CNNs or transformers, and more complex tasks, that would be a highly intriguing future research direction and a natural follow-up to this first study.
We hope this perspective resonates with the reviewer's understanding.
---
Rebuttal Comment 1.1:
Comment: I have read the response and I stand by my original assessment of the paper. | Summary: The paper proposes a methodology for uncovering hierarchical modularity in neural networks (NNs). It combines iterative pruning and network analysis to reveal the underlying hierarchy of sub-functions in tasks. The paper demonstrates the effectiveness of the method on both Boolean functions and vision tasks using the MNIST dataset.
The main contribution of the paper lies in providing a novel approach to uncover hierarchical modularity without prior knowledge of the task's hierarchy.
The methodology offers insights into efficient and interpretable learning systems and showcases the potential of pruning and network analysis methods in revealing and utilizing structural properties in NNs.
Strengths: The paper demonstrates a significant strength through its comprehensive experimental evaluation, encompassing modular and hierarchical Boolean function graphs, as well as tasks utilizing the MNIST digits dataset. The authors meticulously conduct numerous trials, systematically varying network parameters like depth, width, and seed values to thoroughly validate the efficacy of their proposed methodology. The experimental results substantiate the approach's ability to precisely uncover the hierarchical and modular structures within the tasks, serving as empirical evidence of the methodology's robustness and broad applicability.
Weaknesses: One potential weakness of the paper is the limited discussion and analysis of the results regarding the failures or limitations of the proposed methodology. While the experiments highlight the success rates in detecting modules and uncovering hierarchical structures, there is less exploration of cases where the methodology might not perform as well or situations where the detected modules do not align perfectly with the expected sub-functions. A more in-depth analysis of the challenges and limitations of the approach could provide valuable insights for further improvement and understanding.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How does the proposed iterative pruning process affect the overall performance of the neural networks in terms of learning efficiency and accuracy?
2. Are there any limitations or challenges encountered when applying the proposed methodology to more complex tasks beyond Boolean functions and MNIST digit classification?
3. Could the approach be extended to tasks with dynamic or evolving hierarchical structures, where the sub-functions change over time?
4. How does the proposed methodology compare to existing approaches in terms of accuracy, efficiency, and scalability when uncovering hierarchical modularity in neural networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Minor comments:
- Figure 5B1 -> the text is hard to read on printed paper due to the small fonts in the figure. This is a minor comment, because if one is reading the PDF digitally, one can zoom in.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and valuable feedback. Below we provide responses to the weaknesses and questions:
**W1: One potential weakness of the paper is the limited discussion and analysis of the results regarding the failures or limitations of the proposed methodology.**
Please note that the paper clearly identifies some failure cases in the experiments section and appendix sections. However, we acknowledge the importance of providing more in-depth analysis of those cases. To improve the paper, we will extend the appendix sections and update the experiments section in the camera-ready version, offering a more comprehensive examination of the limitations and challenges encountered.
**Q1: How does the proposed iterative pruning process affect the overall performance of the neural networks in terms of learning efficiency and accuracy?**
The iterative pruning process, combined with cyclic learning rates, is computationally demanding compared to other pruning methods. During each iteration of the pruning process, units and edges are eliminated, which reduces the operation count for each iteration, yet the overall cost is still larger than training the dense neural network (NN). However, iterative pruning has been shown to produce highly sparse NNs that generalize well compared to other pruning methods (Leslie N. Smith et al. 2017, Alex Renda et al. 2020). Our objective is to constrain the NNs to utilize as few units and edges as possible while still learning the task well. Further, the algorithm must operate without any prior knowledge of the final NN configurations (width, density). The iterative pruning algorithm is well-suited for this purpose. Improving computational efficiency is a future research direction worth exploring. (Additionally, see response to reviewer znid, W3.)
Throughout the pruning process, the validation accuracy is consistently maintained at the same level as the dense NN. When the NN can no longer achieve the desired accuracy, the pruning process is halted, and the algorithm reverts to the previous sparse NN. Despite pruning, the test accuracy remains largely unchanged, primarily due to longer training time (Tian Jin et al. 2022) and the structure learned by the NNs.
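The prune-retrain-revert loop described in the two answers above can be sketched in a few lines. This toy sketch is purely illustrative: `toy_accuracy`, the 20-weight threshold, and the 20% pruning fraction are all assumptions standing in for an actual training and validation pipeline, not the paper's setup.

```python
import random

def toy_accuracy(weights):
    # Hypothetical stand-in for validation accuracy: it degrades once
    # fewer than 20 non-zero weights remain. Purely illustrative.
    n_alive = sum(1 for w in weights if w != 0.0)
    return 1.0 if n_alive >= 20 else 0.5

def prune_smallest(weights, fraction):
    # Zero out the smallest-magnitude `fraction` of the remaining weights.
    alive = sorted((abs(w), i) for i, w in enumerate(weights) if w != 0.0)
    n_prune = int(len(alive) * fraction)
    pruned = list(weights)
    for _, i in alive[:n_prune]:
        pruned[i] = 0.0
    return pruned

def iterative_prune(weights, target_acc, fraction=0.2):
    # Prune, (conceptually) retrain, and measure accuracy; once accuracy
    # drops below target, revert to the previous sparse network.
    current = list(weights)
    while True:
        candidate = prune_smallest(current, fraction)
        # A real pipeline would retrain `candidate` here, e.g. with a
        # cyclic learning rate schedule, before measuring accuracy.
        if toy_accuracy(candidate) < target_acc:
            return current  # revert to the last acceptable sparse NN
        current = candidate

random.seed(0)
dense = [random.gauss(0.0, 1.0) for _ in range(100)]
sparse = iterative_prune(dense, target_acc=1.0)
n_alive = sum(1 for w in sparse if w != 0.0)
```

The loop never commits to a pruning step that would break the accuracy constraint, mirroring the revert behaviour the rebuttal describes.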
**Q2: Are there any limitations or challenges encountered when applying the proposed methodology to more complex tasks beyond Boolean functions and MNIST digit classification?**
We focus on Boolean tasks because it is easier to know their correct hierarchical structure (ground truth), which is required for evaluating our methodology. Analyzing the resulting structure of larger models on more complex tasks could be a follow-up work.
For larger networks and complex tasks, it’s possible that NNs may learn numerous functional decompositions for the same task. The proposed pipeline can uncover only one of these decompositions, which could significantly differ from other possible ones.
We anticipate good structures to be extracted for tasks learned using MLP-based NNs (e.g., simple MLPs, transformers). For CNNs, adapting unit pruning and clustering features for convolutional layers may pose additional challenges, but the methodology is expected to work well for CNNs after appropriate adjustments.
**Q3: Could the approach be extended to tasks with dynamic or evolving hierarchical structures, where the sub-functions change over time?**
If our interpretation of this question deviates from your intended one, please let us know. In scenarios where tasks vary, leading to evolving hierarchical structures and sub-functions over time, our current method may not be directly applicable. Our approach requires NNs to first learn a task well before pruning, making it less suitable for dynamic task settings. However, exploring this direction in future research could be promising. Limiting the number of units and edges under evolving tasks may naturally promote the reuse of sub-functions (Kashtan et al. 2007). Dynamic sparse training algorithms (Mostafa et al. 2019, Evci et al. 2019) with growing and pruning of edges during training may facilitate such adaptability.
**Q4: How does the proposed methodology compare to existing approaches in terms of accuracy, efficiency, and scalability when uncovering hierarchical modularity in neural networks?**
To the best of our knowledge, this work is the first to propose a combined training (pruning) and network analysis tool to uncover hierarchical modularity. However, previous works have proposed methods to detect modules in trained NNs without requiring knowledge of sub-functions or data (structural decompositions).
Daniel Filan, Shlomi Hod, and colleagues (Filan et al. 2021, Hod et al. 2022, Casper et al. 2022) employed normalized spectral clustering to globally extract unit clusters and analyze them. Spectral clustering optimizes for N-cuts, measuring internal connectivity against external connectivity of unit clusters. Our experiments with that method suggest that it often does not uncover the expected modules in sparse NNs. This could be attributed to its global nature and the absence of edges between NN units at the same layer.
Watanabe and colleagues (Watanabe et al., 2018; Watanabe, 2019) published a sequence of interesting papers where they utilized layer-wise clustering of units based on incoming and outgoing connectivity. Our method aligns with this class of previous methods. Although we have not directly tested the latter on the pruned NNs, it is worth noting that they were designed for conventionally trained NNs. In contrast, our method is simpler and more tailored to the pruned NNs we obtain. Due to the limited time available, we were unable to make such quantitative comparisons during the rebuttal phase.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I have read the response. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their comments, time and effort. We have carefully thought about and responded to individual reviews, focusing on the weaknesses pointed out and the questions asked. If additional details, explanations, or clarifications are needed, we will be happy to provide them. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Efficient Image Compression Without Autoregressive Models | Accept (poster) | Summary: This paper aims at providing an efficient and effective entropy model to achieve a better trade-off between performance and complexity for learned image compression. The authors introduce a correlation loss to force the latents to be spatially decorrelated so that they better fit the independent probability model.
Strengths: 1. While most existing methods primarily focus on the context model, this paper attempts to decorrelate latents with the proposed correlation loss. The correlation loss can act as a general plug-in for hyperprior-based methods, which is flexible.
2. It is shown that when the proposed method is combined with the checkerboard method, it achieves about 85% of the performance gain of the auto-regressive model with only 1/50th of the inference time, which is efficient and effective.
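For readers unfamiliar with the checkerboard method mentioned above: it partitions latent positions by spatial parity so that half of them ("anchors") can be entropy-decoded in parallel from the hyperprior alone, and the other half decoded afterwards conditioned on their anchor neighbours. A minimal sketch of the mask (an illustration of the parity pattern, not the compared method's code):

```python
def checkerboard_mask(h, w):
    # 1 marks one parity class (the "anchor" positions, decoded first);
    # 0 marks the complementary class, decoded afterwards conditioned on
    # already-decoded anchor neighbours. The two classes interleave like
    # a checkerboard, so each class is decodable in a single parallel pass.
    return [[(i + j) % 2 for j in range(w)] for i in range(h)]

mask = checkerboard_mask(4, 4)
n_anchor = sum(sum(row) for row in mask)  # exactly half the positions
```

Every position's four spatial neighbours belong to the opposite class, which is what lets the second pass condition on fully decoded context.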
Weaknesses: 1. As shown in the RD curves, the BD-rate gains are obvious in the low bit-rate regime, however, there are less or even no gains in the high bit-rate regime. I think these results require further analysis, such as why it works better in the low bit-rate regime. It is one of the core problems that should be answered to help readers better understand the method.
2. It is just a small problem. There are many references to the figures and equations in the paper, but they are somewhat orderless. It would be better to place the figures closer to their references.
3. I am somewhat confused about the analysis of Figure 4. It would be better to provide the input image, which can help readers see the correspondence between the visualization and the original input.
4. I am not sure about the claim in line 307 "The fourth column of Figure 4 shows that for our method, latent space has significantly reduced correlation compared to the baseline indicating the correlation loss’s efficacy". I can only see that compared with Cheng's Hyperprior, the normalized latent of the proposed method has lower energy. I do not think it can provide evidence to show the proposed approach will reduce the correlation. Maybe it is because the proposed method just gives more accurate mean and variance. I think you can also provide the visualization of using auto-regressive context model but without the proposed approach. It may provide similar visualization results.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: No
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: see the weakness part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
> As shown in the RD curves, the BD-rate gains are obvious in the low bit-rate regime, however, there are less or even no gains in the high bit-rate regime. I think these results require further analysis, such as why it works better in the low bit-rate regime. It is one of the core problems that should be answered to help readers better understand the method.
Please refer to the general response for a detailed answer to this question
> It is just a little problem. There are many references of the figures and equations in the paper, but they are somewhat orderless. It is better to put the figures not so far away from their references.
We appreciate the valuable feedback provided by the reviewer. We will address and correct these errors in the camera-ready version of the paper. Thank you for bringing these to our attention.
> I am somewhat confused about the analysis about Figure 4. It is better to provide the input image, which can help readers know the correspondence between the visualization and the original input.
Please refer to the Figure 1 in the attached PDF, where we have included the input image for a better understanding of the correspondence between the visualization and the original input. We shall update all the Figures, including Figure 4 in the main manuscript and Figure 5 in the Supplementary, in the camera-ready version of our paper to ensure clarity and accuracy. Your feedback is greatly appreciated.
> I am not sure about the claim in line 307 "The fourth column of Figure 4 shows that for our method, latent space has significantly reduced correlation compared to the baseline indicating the correlation loss’s efficacy". I can only see that compared with Cheng's Hyperprior, the normalized latent of the proposed method has lower energy. I do not think it can provide evidence to show the proposed approach will reduce the correlation. Maybe it is because the proposed method just gives more accurate mean and variance. I think you can also provide the visualization of using auto-regressive context model but without the proposed approach. It may provide similar visualization results.
We appreciate the reviewer's feedback regarding the interpretation of Figure 4 and the need for improved clarity, particularly concerning the representation of reduced correlation of the latent variable y in the last column of our figures. We have given thoughtful attention to this matter and have taken measures to enhance the clarity of the Figure, as demonstrated in Figure 1 of the attached PDF. We aim to offer a more straightforward and comprehensive visualization of the effects of correlation loss. For a concise and coherent explanation, we kindly direct the reviewer to the general response section. | Summary: This paper proposes a correlation loss to decrease the correlation among spatially neighboring elements in the latent space. By only modifying the loss function, the method acts as a plug-in for existing neural compression methods with no increase in complexity. Experiments show improvements in compression performance over several baseline models.
Strengths: The main contribution of the paper is the proposed correlation loss, which decreases the correlation among spatially neighboring elements in the latent features.
Experiments show improvements in compression performance over several baseline models, which provides some insights into designing better neural image compression networks from the perspective of feature-map decorrelation.
Weaknesses: 1) For SwinT and Cheng with checkerboard, the correlation loss seems to work only at lower bitrates; the authors may provide some explanation. Besides, the currently tested bitrate range is relatively low. RD performance at higher bitrates (>1 bpp) should also be provided.
2) The paper's writing can be improved. The citation format in the article looks messy. For equations, some are referenced as 'eq' and some as 'Equation'.
3) Experiments on the factorized Ballé method [1] and HP+AR+correlation loss can be provided to make the experiments and evaluation more complete.
[1] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Minor issues:
1) Some experiments in the supplementary materials are kind of important and can be put into the main paper.
2) Some citations are still preprint version (e.g. [1][2][3] in reference). Their officially published version should be cited.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
> For SwinT and Cheng with checkerboard, the correlation loss seems to only work for lower bitrates, the author may provide some explanation
Please refer to our general response for a detailed explanation.
> Besides, The current tested bitrate range is relatively low. RD performance at higher bitrates (>1bpp) should also be provided.
Our experimentation focused on a bits per pixel (bpp) range spanning from 0.05 to 0.6, corresponding approximately to a peak signal-to-noise ratio (PSNR) range of 25 to 35. In alignment with this approach, Cheng's and Checkerboard methods also adhere to a comparable range, operating within 0.1 to 0.8 bpp, which roughly corresponds to a PSNR range of 27 to 37.
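For context on the bpp-PSNR operating points quoted above, PSNR relates to mean squared error via the standard definition for 8-bit images (a generic formula, not code from this paper):

```python
import math

def psnr(mse, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE).
    # For 8-bit images, max_val is 255.
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, an MSE of about 65.0 on 8-bit pixels corresponds to roughly 30 dB PSNR, in the middle of the 25-35 dB range discussed above.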
We also observe in recent works that as the compression rate increases, lossy compression approaches the realm of lossless compression, where there is limited potential for coding gain through improved predictions, as the codec must encode the inherent, unpredictable noise within the data [5]. This notion aligns with the modest enhancements observed in generative modeling, such as the slight improvement in negative log-likelihood depicted in Figure 1 (a) of [3], or the minor bit-per-dim discrepancy in recent learned lossless image compression, as shown in Table 1 of [4].
To align with the trends observed in recent research, we concentrated our experiments within widely adopted ranges where significant improvements have been reported.
However, we are committed to incorporating more bpp points for our experiments, particularly for SwinT and Minnen's hyperprior-based methods, in the camera-ready version of the paper. This will allow us to provide a comprehensive evaluation of these methods across a broader range of bpp values.
[1] Cheng, Z., Sun, H., Takeuchi, M., & Katto, J. (2020). Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7939-7948).
[2] He, D., Zheng, Y., Sun, B., Wang, Y., & Qin, H. (2021). Checkerboard context model for efficient learned image compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14771-14780).
[3] Kingma, D., Salimans, T., Poole, B., & Ho, J. (2021). Variational diffusion models. Advances in neural information processing systems, 34, 21696-21707.
[4] Berg, R. V. D., Gritsenko, A. A., Dehghani, M., Sønderby, C. K., & Salimans, T. (2020). Idf++: Analyzing and improving integer discrete flows for lossless compression. arXiv preprint arXiv:2006.12459.
[5] Zhu, Y., Yang, Y., & Cohen, T. (2021, October). Transformer-based transform coding. In International Conference on Learning Representations.
> Experiments on the factorized Ballé method [1] and HP+AR+correlation loss can be provided to make the experiments and evaluation more complete.
As recommended by the reviewer, we will incorporate Balle's Factorized prior [1] and Balle's Hyperprior (HP) [2], along with HP + Correlation Loss and HP + AR, in the camera-ready version of our paper. This addition will provide a comprehensive analysis and further insights into our proposed approach.
[1] Ballé, Johannes, Valero Laparra, and Eero P. Simoncelli. "End-to-end optimized image compression." arXiv preprint arXiv:1611.01704 (2016).
[2] Ballé, Johannes, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. "Variational image compression with a scale hyperprior." arXiv preprint arXiv:1802.01436 (2018).
> Some experiments in the supplementary materials are kind of important and can be put into the main paper.
> Some citations are still preprint version (e.g. [1][2][3] in reference). Their officially published version should be cited.
> The paper writing can be improved. The citation format in the article looks messy. For equations, Some are ‘eq’, but some are ‘Equation.’
We thank the reviewer for the valuable comments, we shall fix these errors in the camera-ready version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will stick to my rating. | Summary: This paper presents a new loss to improve neural image compression models. The authors identify a potential issue with typical neural image compression models where the hyperprior, which predicts entropy parameters over a quantized latent representation, assumes conditional independence between latents, but that may not be true in practice. Previous methods improved the entropy model through context modeling (e.g., using a spatially autoregressive model or a "checkerboard" decomposition) but this leads to higher compute and slower runtimes.
Instead, the authors add a local correlation loss averaged spatially over the latents. This loss directly encourages the encoder to generate decorrelated latents, which is a better match for the conditional independence assumption built into the hyperprior. In particular, the loss affects training (and isn't very expensive) and adds no additional computation at inference time, which leads to significant runtime improvements.
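A rough sketch of such a spatial decorrelation penalty is shown below. The exact form here, an absolute Pearson correlation between each latent element and its right-hand neighbour, is an illustrative assumption, not the authors' implementation, which averages over a spatial mask of offsets.

```python
import math
import random

def _mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(var) + 1e-8  # epsilon guards against zero std

def correlation_loss(latent):
    # `latent` is a 2-D feature map (list of rows). Penalise the absolute
    # Pearson correlation between each element and its right-hand
    # neighbour; a full version would average over several offsets within
    # a small spatial mask, and over channels.
    a = [v for row in latent for v in row[:-1]]   # all but last column
    b = [v for row in latent for v in row[1:]]    # all but first column
    ma, sa = _mean_std(a)
    mb, sb = _mean_std(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return abs(cov / (sa * sb))

# A smooth ramp has strongly correlated neighbours; i.i.d. noise does not.
smooth = [[float(i + j) for j in range(16)] for i in range(16)]
random.seed(1)
vals = [random.gauss(0.0, 1.0) for _ in range(256)]
decorrelated = [vals[i * 16:(i + 1) * 16] for i in range(16)]
```

Minimising this quantity during training pushes the encoder toward latents whose neighbours carry little mutual information, matching the factorized entropy model.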
The paper shows that the new correlation loss improves rate-distortion (RD) performance when applied to several popular models (see Fig. 1 and Fig. 7).
Strengths: The primary strength of the paper is the simplicity and effectiveness of the main idea.
Most papers on neural compression boost RD performance by making the model more complex, which typically leads to slower decode times. The authors discuss this problem and present a well-motivated loss that leads to significant RD gains without affecting runtime. Their correlation loss is also quite general and can be applied to many different compression models.
As far as I know, the correlation loss has not been proposed elsewhere in the neural compression literature. There are some similarities to a diversity loss in VQ or clustering, though I don't have a specific reference for this. So I think the originality, especially for the neural compression subfield, is high.
The quality and clarity of the writing is also high.
Weaknesses: Ideally, the correlation loss presented in the paper would be applied to a SOTA compression model, leading to a new SOTA. For instance, the paper cites (He 2021), which introduced the checkerboard decomposition for entropy modeling, but they don't build on top of (He 2022), which presents a more powerful model that combines the checkerboard with CHARM.
ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding
Dailan He, Ziming Yang, Weikun Peng, Rui Ma, Hongwei Qin, Yan Wang
https://arxiv.org/abs/2203.10886
Figure 1 implies that the benefit of the correlation loss shrinks as models become more powerful, so it may be that the benefit for ELIC is negligible?
The visualization in Fig. 4 is great (and is common for compression papers) but it's not obvious to me what I should be looking at to see the impact of the correlation loss. Is it a sharper scale image? The goal is an i.i.d. Gaussian normalized image (far right column) but it's not visually obvious to me that "our approach" is closer to i.i.d. Gaussian than the baseline.
Mask patterns are mentioned and shown in Fig. 6 but results are only in the supplemental material. Presumably that's because the mask shape did not have a large impact. That's fine, but maybe cut Fig. 6 or at least add a sentence to the main paper summarizing the findings.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Analysis exploring why the correlation loss boosts performance would strengthen the paper. Specifically, why is it needed when the existing rate-distortion loss is minimized by decorrelated latents? The generic answer is "the network is stuck in a (bad) local minimum, and the correlation loss changes the loss landscape such that the optimizer doesn't get stuck".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
> Ideally, the correlation loss presented in the paper would be applied to a SOTA compression model leading to a new SOTA. For instance, the paper cites (He 2021), which introduced the checkerboard decomposition for entropy modeling, but they don't build on top of (He 2022), which presents a more powerful model that combines the checkerboard with CHARM.
ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding Dailan He, Ziming Yang, Weikun Peng, Rui Ma, Hongwei Qin, Yan Wang https://arxiv.org/abs/2203.10886
Please refer to the “Performance on ELIC: Efficient Learned Image Compression” in General Response for Common Comments Section
> Figure 1 implies that the benefit of the correlation loss shrinks as models become more powerful so it may be that the benefit for ELIC is negligible?
The primary advantage of the correlation loss lies in its capacity to minimize the discrepancy between the actual and presumed probability distribution within the entropy model. Figure 2 in the attached PDF illustrates that for a given encoder and decoder architecture, which is Cheng's Hyperprior (CH) in this case, the choice of entropy model determines the performance of the resulting model. Note that CH signifies the lower bound with the given encoder-decoder architecture, whereas Cheng's AR defines the upper limit with the same encoder-decoder architecture. When correlation loss is applied to CH, significant improvements can be achieved due to the large performance gap between the CH baseline and the upper limit. On the other hand, since the gaps are diminished in models like CH + CKBD and CH + ChARM, the potential for improvement from applying correlation loss is also limited compared to the CH baseline. In summary, the introduction of correlation loss to these models would result in a closer approximation to the full AR performance, albeit with gains that are comparatively smaller.
We expect that a similar pattern might be observed if we change the encoder-decoder architecture from CH's to ELIC's. Note that ELIC not only improves the entropy model via the proposed space-channel context model (SCCTX), which combines CKBD and ChARM, but also introduces architectural modifications to the encoder and decoder, which contribute an additional BD-rate gain of approximately 8-12% [1]. While we expect that the performance of Cheng's with SCCTX might be lower than ChARM with correlation loss, the inclusion of correlation loss has the potential to elevate it to a level comparable to the full AR performance, all while costing only about 1/15th of the computational expense (2x that of ChARM).
[1] He, D., Yang, Z., Peng, W., Ma, R., Qin, H., & Wang, Y. (2022). Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5718-5727).
> The visualization in Fig. 4 is great (and is common for compression papers) but it's not obvious to me what I should be looking at to see the impact of the correlation loss.
We acknowledge the reviewer's observation that the interpretation of Figure 4 lacks clarity, particularly in relation to the depiction of the reduced correlation of the latent variable y in the last column of our Figures (main manuscript [Figure 4], supplementary material [Figure 5]). We have given careful consideration to this concern and have taken steps to enhance the clarity of the Figure (please see Figure 1 in the attached PDF). Our goal is to provide a more straightforward and comprehensive illustration of the impact of correlation loss. For a more concise and coherent explanation, we kindly refer the reviewer to the general response section.
> Mask patterns are mentioned and shown in Fig. 6 but results are only in the supplemental material.
We highly value the constructive feedback provided by the reviewer and are committed to incorporating these suggested changes into the camera-ready version of our paper.
> Analysis exploring why the correlation loss boosts performance would strengthen the paper.
> Specifically, why is it needed when the existing rate-distortion loss is minimized by decorrelated latents? The generic answer is "the network is stuck in a (bad) local minimum, and the correlation loss changes the loss landscape such that the optimizer doesn't get stuck".
Our correlation loss is aimed at diminishing the correlation present among neighboring elements within the latent space, as illustrated in Figure 1 of the attached PDF. This reduction in correlation plays a critical role in mitigating the disparities between the assumed probability distribution of the hyperprior entropy model and the actual distribution of the latent variables. The effect of the decreased correlation is illustrated in the figure, and a detailed analysis can be found in the text of our main rebuttal section.
We agree there is a high possibility that the correlation loss introduces alterations to the loss landscape. However, this remains uncertain due to the current lack of concrete theoretical evidence about the exact interplay between the correlation loss, the rate loss, and the distortion loss. Thus, under the current circumstances, we cannot definitively assert whether this is indeed the case. | Summary: This paper focuses on efficient learned image compression. Different from existing methods, which generally aim to parallelize the autoregressive operations, this paper proposes to speed up the framework by removing the whole autoregressive model and introducing a correlation loss that decorrelates the latent features. The introduced correlation loss can act as a plug-in for existing learned image compression methods to achieve superior RD performance while also reducing inference time without an autoregressive entropy model.
Strengths: 1. The paper is well-motivated and easy to follow.
2. The introduced correlation loss is statistically sound and empirically proven to be effective. The proposed loss is novel to me and can act as a plug-in for existing learned image compression methods to achieve consistent improvements, especially in low-bit-rate ranges.
3. By removing the correlation, the proposed method can ease the requirement of an autoregressive entropy model, so as to speed up the whole compression framework.
Weaknesses: 1. The citation in Figure 3 is wrong.
2. The caption of Figure 4 states that the correlation loss can provide more flexible parameterized distribution models with significant spatial redundancy reduction. However, from my perspective, the plots shown in Figure 4 cannot showcase both "more flexible parameterized distribution models" and "significant" spatial redundancy reduction. I hope the authors can comment further on this; otherwise such claims are unsupported or inaccurate.
3. The introduction in Section 3.1 should be shortened, as it covers general, well-known concepts in LIC; the main focus, with more words, should be put on the contributions introduced in Section 3.2.
4. The paper should also report the performance of the proposed method on top of some more recent efficient LIC methods like [1].
[1] Dailan He, Ziming Yang, Weikun Peng, Rui Ma, Hongwei Qin, and Yan Wang. ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding. CVPR 2022
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Is the hyperprior shown in Figure 4 GMM-based (K=3) or a simple mean-scale hyperprior? If it is a GMM, are the plots shown in Figure 4 only for one of the K=3 mixture components? I think this should be specified to be clearer to readers.
2. Will the code be released to the general public?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
> The caption of Figure 4 states that the correlation loss can provide more flexible parameterized distribution models with significant spatial redundancy reduction. However, from my perspective, the plots shown in Figure 4 do not showcase either "more flexible parameterized distribution models" or "significant" spatial redundancy reduction. I hope the authors can comment further on this; otherwise, such claims are unsupported or inaccurate.
We acknowledge the reviewer's observation regarding the clarity of the caption of Figure 4 and the accompanying text, and we appreciate the feedback. To rectify this concern and provide a clearer presentation of our intended message, we have undertaken significant revisions to the Figure itself, which can be referred to in the attached PDF document (Figure 1). Moreover, we have taken comprehensive steps in our general response to thoroughly address all the concerns raised by the reviewers concerning Figure 4.
> The paper should also report the performance of the proposed method on top of some more recent efficient LIC methods like [1].
Please refer to “Performance on ELIC: Efficient Learned Image Compression” in the General Response for Common Comments section.
> Is the hyperprior shown in Figure 4 a GMM-based (K=3) or a simple mean-scale hyperprior? If it is GMM, the plots shown in Figure 4 is only one latent feature of the K=3? I think this should be specified to be clearer to readers.
The hyperprior shown in Figure 4 is a simple mean-scale hyperprior.
> Will the code be released to the general public?
We shall release the code with the camera-ready version of the paper.
> The citation in Figure 3 is wrong.
> The introduction in Section 3.1 should be shortened, as it covers general, well-known concepts in LIC; more words should be devoted to the contribution introduced in Section 3.2.
We express our gratitude to the reviewer for their valuable insights, comments, and suggestions, which have greatly contributed to the improvement of our paper. We are committed to incorporating these recommended changes and enhancements in the final version of the paper to ensure its quality and accuracy.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses from the authors, which address my concerns and the clarity issue. I decide to raise my rating. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments and suggestions.
# Additional Experiment Results
During the rebuttal period, we have performed additional experiments as shown in Figure 2 of the attached PDF, which will be the updated version of the Figure 1 of the main manuscript. The updated results include enhancements to the performance of the Checkerboard (CKBD) model with the integration of correlation loss, as well as the application of correlation loss to the ChARM model.
With the incorporation of correlation loss, the CKBD model now exhibits a BD rate gain of 16.5% when compared to Cheng’s Hyperprior (CH), representing around 90% of the performance achieved by a full AR model, all at approximately 1/50th of the computational cost.
Furthermore, the utilization of correlation loss with the ChARM model results in a significant BD rate gain of 18% over the baseline Cheng’s Hyperprior (CH). This gain corresponds to approximately 98% of the improvement obtained through the use of a full AR model. Notably, this progress is achieved with a considerably reduced computational cost, amounting to roughly 1/30th of the full AR cost.
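As a quick consistency check, the two statements above imply roughly the same full-AR BD-rate gain over CH (assuming the reported percentages are simple fractions of the full-AR gain, which is our reading, not something stated explicitly):

```python
# CKBD + correlation loss: 16.5% BD-rate gain, stated as ~90% of the full-AR gain
full_ar_from_ckbd = 16.5 / 0.90

# ChARM + correlation loss: 18% BD-rate gain, stated as ~98% of the full-AR gain
full_ar_from_charm = 18.0 / 0.98

# Both imply a full-AR gain of about 18.3-18.4% over CH
print(round(full_ar_from_ckbd, 1), round(full_ar_from_charm, 1))  # 18.3 18.4
```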
Figure 2 in the attached PDF also serves to visually underline the implications of these findings. It demonstrates that a fixed encoder and decoder architecture lays down performance boundaries defined by the chosen entropy model. At one end, Cheng's Hyperprior (CH) signifies the lower performance limit, while at the other, Cheng's AR defines the upper threshold, given CH's encoder-decoder architecture. With correlation loss incorporated into CH, significant improvements are attained. Note that CH + CKBD and CH + ChARM offer performance gains over CH that are closer to the upper limit. Consequently, the room for enhancement by adding correlation loss in these cases is relatively constrained when compared to the CH model. Nonetheless, applying correlation loss to these models leads to better approximations of full AR performance.
# General Response for Common Comments:
## Analysis on less gains in the high bit-rate regime is required. {Reviewers: 7bvo, qpdX}
One limitation of our proposed method, as mentioned in Section 5 of the main manuscript, is that the efficacy of the proposed loss function tends to be less pronounced at higher bit rates, despite the considerable performance gains at lower bitrates. This behavior can be attributed to the characteristics of learned image compression (LIC) models, which exhibit higher correlation among latent variables at lower bit-per-pixel (bpp) values compared to higher bpp values, as reported by Zhu et al. [1] and also evident from the correlation maps in Figure 4 of the supplementary material.
We conducted a comprehensive analysis and present the findings in Figure 3 of the attached PDF. This Figure showcases the relationship between PSNR gains, bpp, and correlation for different models, including Cheng's Hyperprior (CH), CH + CKBD, SwinT Hyperprior, and Minnen's Hyperprior. From the graphs in Figure 3, a clear pattern can be observed: as the bpp decreases, the correlation of the latents increases, resulting in a higher gain in PSNR. However, as we move towards higher bpps, the correlation becomes notably reduced, resulting in decreased PSNR gain. This trend explains why the efficacy of the correlation loss is larger in the low-bpp range and diminishes in the high-bpp range.
[1] Zhu et al. Transformer-based Transform Coding. ICLR 2022.
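One simple way to quantify the trend described above is to measure the correlation between spatially adjacent latent positions. A minimal sketch of such a diagnostic follows; this is one plausible form of the measure, not necessarily the exact metric used in the Figures:

```python
import numpy as np

def neighbor_correlation(latent):
    """Pearson correlation between horizontally adjacent latent positions,
    averaged over channels. latent: (C, H, W) array."""
    c = latent.shape[0]
    left = latent[:, :, :-1].reshape(c, -1)
    right = latent[:, :, 1:].reshape(c, -1)
    corrs = [np.corrcoef(l, r)[0, 1] for l, r in zip(left, right)]
    return float(np.mean(corrs))

# A smooth (spatially redundant) latent scores high; white noise scores near 0.
rng = np.random.default_rng(0)
noise = rng.normal(size=(2, 16, 16))   # decorrelated latent
smooth = np.cumsum(noise, axis=2)      # strongly correlated latent
```

Tracking this value across bpp operating points would reproduce the reported pattern: higher neighbor correlation at low bpp, hence more room for the correlation loss to help.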
## The visualization of Figure 4 does not explain the impact of Correlation loss. {Reviewers: xepj,7bvo, qpdX}
The reviewers have pointed out that the last column in all our Figures (Figure 4 in the main manuscript and Figure 5 in the supplementary material) may not clearly illustrate the reduced correlation of the latent variable y. While we acknowledge this limitation, the bpp values and correlation maps in Figure 4 (supplementary material) provide compelling evidence of the reduced correlation in the latent space achieved through the application of correlation loss. To address this concern and provide a better visual demonstration, we have included the correlation map and the corresponding total correlation value of the latent space in Figure 1 of the attached PDF.
Figure 1 illustrates that the improved mean μ and scale σ capture image structures notably better, at a minimal extra cost of 0.001 bpp. Similarly, the latent variable y undergoes a reduction of around 0.002 bpp, stemming from a correlation decrease of roughly 3.5 times. These combined enhancements manifest in a PSNR gain of approximately 0.38 dB compared to the baseline, while ensuring a total bpp that remains lower than the baseline's.
## Performance on ELIC: Efficient Learned Image Compression: {Reviewer: xepj, 7bvo}
We conducted experiments on ELIC (using the unofficial implementation available on Github), investigating its performance both with and without the integration of the correlation loss. A comparison between ELIC and Cheng's Hyperprior revealed a significant BD rate gain of 28.85% for ELIC. Intriguingly, when the correlation loss was introduced to ELIC, the BD rate gain was further elevated to approximately 30.36%.
It is worth highlighting that the complete inference process for the ELIC on Kodak dataset took approximately 17.77 seconds which is about 1/15th of the total inference time of full AR. These findings strongly underscore the effectiveness of our proposed approach, showcasing its potential to achieve enhanced image compression performance.
### If given the opportunity to revise our original manuscript, we will update the Figures and text to clearly convey our new results and findings in this rebuttal and improve the overall clarity of our message. We appreciate the reviewers' feedback and are committed to presenting our research in the best possible way in the final version of the paper.
Pdf: /pdf/2fcf4b8046955a6a9ff4a5b9f00e51db0d321999.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation | Accept (poster) | Summary: This paper proposes Demand-driven Navigation (DDN), which leverages the user’s demand as the task instruction and prompts the agent to find an object which matches the specified demand. This paper also proposes a method of first acquiring textual attribute features of objects by extracting common sense knowledge from a large language model. These textual attribute features are subsequently aligned with visual attribute features using Contrastive Language-Image Pre-training (CLIP). They experimented on AI2Thor with the ProcThor dataset and demonstrated that the visual attribute features improve the agent’s navigation performance and outperform the baseline methods.
Strengths: The paper flows nicely with good motivation of the novel task and its challenges clearly stated.
The proposed method makes sense and comparison is good.
Weaknesses: 1. L40, the second condition is a bit questionable. I don’t think restricting the search to objects that are in the scene is a necessary requirement. It is probably the case in the current benchmark setup; however, it is not a constraint of the research task itself. Subsequently, it might also impact the definition of “navigation failure” at L48. If the robot is asked to search for an object that does not exist in the scene and cannot find it, this should not be defined as a failure from my point of view. Rather, if you ask a robot to look for some non-existent object and it reports a finding, that should be counted as a failure, as it is a clear false-positive detection. Nevertheless, the new DDN task does not necessarily remove ‘requirement’ 2), as one might anyway ask for a demand that no object in the scene can satisfy.
2. From L171, my impression is that the problem is constrained by only one demand d at a time. In reality, one could have multiple demands to further constrain which object they might want to search for. For example, something that both “quenches thirst” and “contains caffeine”.
3. It will be more convincing with more benchmark datasets. In addition to AI2Thor, there is Habitat Challenge on ObjectNav which can be exploited, plus it is based on 3D scan of real scenes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Can we handle a search composed with multiple demands, with the current method?
2) Why are other benchmark datasets not used for evaluation? Is there any justification?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed both limitations and the societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: REPLY: Thank you! We have clarified some questions and addressed your concerns, and we hope to hear back from you if you have further questions!
**Q1**: L40, the second condition is a bit questionable.
A1: Thank you very much for your insightful suggestion. We understand the concern about the second condition in L40. In the current VON task, the robot is typically asked to find an object that is guaranteed to exist in the current scene. However, considering real-world applications, users may request the robot to find an object that they are unsure exists in the scene. In terms of satisfying the user, the robot does fail to find it, which is considered a failure. But we also agree with you that it's not really a navigation failure because the object doesn't exist in the scene; we'll revise the statement here to say "failure to satisfy the user's demand" rather than "navigation failure".
**Q2** : From L171, my impression is that the problem is constrained by only one demand d at a time. Can we handle a search composed with multiple demands, with the current method?
A2: Thank you very much for your valuable suggestion. What you are describing is a more refined instruction, and we have some similar examples in our dataset, such as "I need a place to rest" and "I need a soft place to rest". The former can be "bed, wooden chair, sofa" while the latter cannot be "wooden chair" but "bed, sofa". The latter contains two demands in its instruction: "soft" and "available for rest".
In future work, we will add some explicit personal preferences as well as more complex demand instructions, including negation of object samples, combining multiple demands, and prioritising between objects. Theoretically our method works for any demand expression including multiple demands, since any demand corresponds to some attributes that match the demand.
**Q3**:It will be more convincing with more benchmark datasets.
A3: Thank you very much for your reminder. Our dataset generation process and navigation methods can be easily migrated to any other dataset, including the scene dataset used in the Habitat Challenge. We will later generate some DDN dataset (e.g., Matterport3D) for the current mainstream scene datasets for training and testing.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I thank authors for the response, and I would sincerely suggest to add these clarifications in the paper. As I mentioned in the original comments, the new task makes sense and the method is convincing with the supported comparison. I'd appreciate a good clarity in the statement in terms of the assumptions and limitations.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable suggestion! We will add these clarifications about assumptions and limitations in the main paper.
Many thanks again for your response! We hope to hear back if you have further questions! | Summary: This paper presents a new visual navigation setting, where the goal is not specified by objects or images but described by a sentence. The sentence encodes the essential information to search for specific objects during navigation. Different from VLN, the task is able to analyze the demand within each sentence rather than step-by-step language guidance. Common sense and knowledge will be explicitly extracted from a large language model. To align the visual and attribute features, CLIP is employed. Overall, the motivation of this work is reasonable and interesting.
# Post rebuttal
The authors have addressed most of my concerns. If the authors can provide some visualization results, that would be more convincing. Therefore I changed my score to accept.
Strengths: The motivation of this work is interesting. Providing a demand description enables the agent to search among multiple candidate objects, rather than a single fixed one, in order to complete the specified goal.
The authors also provide semi-automatically generated data for this new task. This would be complementary to the existing VLN or VN tasks.
Weaknesses: The introduction part is overly lengthy. The authors exert three pages to describe the motivations of this work, making reading quite tedious. I highly suggest the authors could trim the introduction part a bit.
A natural question for this task is whether the proposed method can perform object-goal navigation after training. For example, after the network is trained on demand-driven sentences, can it be used as an object-goal navigation agent?
The common knowledge or sense is pre-defined. In the illustrative figures, it seems a demand may correspond to three different objects. Would this restrict the options?
In L189, the WG mappings differ depending on the environment. I am not sure whether this implies that these WG mappings need to be specified manually. If so, this may contradict the original motivation of this work, where humans may not know the environment in advance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: My questions mainly focus on two tasks:
In the comparisons, the results of some baseline methods are significantly lower than the results reported in their original papers. Therefore, I am wondering how the authors adapted these methods to this setting?
A few works leverage CLIP for VLN or VN tasks.
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation
Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments
These works can be adopted for comparison.
As the searched objects have a strong association with the demand, what if some demands cannot be processed properly? For example, “I am thirsty but I cannot drink anything cold.” Simply providing demands and their corresponding objects could lead to overfitting.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: REPLY: Thank you! We have clarified some questions and addressed your concerns, and we hope to hear back from you if you have further questions!
**Q1**: The introduction part is overly lengthy.
A1: Thank you for your valuable feedback. We understand your concern, and we agree that the introduction may have become lengthy due to the detailed descriptions of the new task (DDN) and the comparison with the old task (VON). To streamline the introduction, we will remove redundancies and avoid duplicating information that will be discussed in detail in later sections. Additionally, we will consider moving some of the descriptions of the DDN task to a separate section to maintain a better flow in the paper.
**Q2**: The natural questions come to this task is whether the proposed method can complete object-goal navigation after training?
A2: Thank you for raising this question. Theoretically, our proposed method can be utilized for object-goal navigation, by translating the target object into a demand instruction. We use the template "I want a $object$" to transform an object name into the form of a demand instruction, and then test our trained DDN model in the Object Navigation setting (consistent with the setting of the -object suffix in our paper). The results of the experiment are 9.0/9.0 (SR/SPL) in seen scene and 8.5/8.3 (SR/SPL) in unseen scene. Compared to VTN-object and ZSON-object, the results demonstrate that our method also performs well on object navigation.
**Q3** : The common knowledge or sense is pre-defined. In the illustrative figures, it seems a demand may correspond to three different objects. Whether this would restrict the options?
A3: Thank you so much for pointing that out. We apologize for any confusion. To clarify, common knowledge or sense is not pre-defined; rather, it arises from human consensus and understanding of the world. In the illustrative figures, we provided an example explanation of the DDN task and our method, showing that a demand may correspond to three different objects. However, this example is not intended to limit the number of objects that can satisfy a demand. In real-life scenarios, a demand can indeed be fulfilled by numerous objects, and the number of possible options can vary significantly depending on the environment. To demonstrate the flexibility and diversity of the DDN task, we use GPT-3 to generate 10 object categories for each demand instruction in a DDN dataset, showcasing the multiple possibilities that can exist for a single demand (please refer to the description of LG mapping in the Supplementary Material 8.2.1).
**Q4**: If so, this may contradict with the original motivation of this work, where humans may not know the environment in advance.
A4: Thanks for pointing that out. During the training process in a simulator, we require an "expert" to assess whether the object found by the agent satisfies the demand or not, which allows the "expert" to provide the agent with correct training signals. The WG mapping serves as this "expert" during training.
Since different environments can have distinct categories of objects, different WG mappings are needed for training in different environments. However, it is crucial to note that when the agent is fully trained and deployed in a real-world environment, it no longer requires WG mapping. Once a user gives a demand instruction, the agent will locate an object and present it to the user. In this case, the user does not need to know the details of the scene or the objects present; they only need to judge whether the object found by the agent satisfies their demand or not.
**Q5**: Therefore, I am wondering how the authors adapt their methods to this setting?
A5: Thank you for raising this question. To ensure a fair and comprehensive benchmark of the baseline methods, we employed different adaptation protocols for each one. For the -demand suffix baselines, we directly replaced the original VON input with the BERT feature of the demand instruction. The -GPT suffix baselines involve asking GPT-3 what objects satisfy a given demand instruction and then providing the answered object as input to the VON baselines. We have described the training and testing protocols for each baseline in detail in the Supplementary Material 8.3.2, providing a comprehensive explanation of how we adapted the VON method to suit the DDN task.
**Q6**: VLN can be adopted for comparison.
A6: Thank you for your insightful suggestion. We have taken your advice into account and included two additional baselines in our experiments. These baselines utilize CLIP-Nav as the navigation policy, with GPT-3 and MiniGPT-4 as recognition policies, respectively. It is important to note that while the task instructions for vision-language navigation (VLN) are typically step-by-step, the instructions for the Demand-Driven Navigation (DDN) task revolve around the concept of "demand" for describing an object. As a result, we have adapted CLIP-Nav for the DDN task without utilizing its instruction breakdown. **Due to rebuttal character limitations, the results of the experiment are shown in the attached PDF in the Common Response**.
**Q7** : As the searched objects have strong association with the demand, what if some demands cannot be processed properly?
A7: Thank you very much for your valuable suggestion. What you are describing is a more refined instruction, and we have some examples in our dataset, such as "I need a place to rest" and "I need a soft place to rest". The former can be "bed, wooden chair, sofa" while the latter cannot be "wooden chair" but "bed, sofa". In future work, we will add some explicit personal preferences as well as more complex demand instructions, including negation of object samples, combining multiple demands, and prioritising between objects.
---
Rebuttal Comment 1.1:
Title: More explanations about baselines' low performance
Comment: Thank you so much for reading the following. We explain the question in Q5 in more detail, especially in response to why the results of some baseline methods are significantly lower than the results reported by their original papers.
**Q8**: In this comparisons, the results of some baseline methods are significantly lower than the results reported by their original papers.
A8: There are several factors contributing to this observation.
(1) It's important to highlight that the scope of object categories within the DDN task has expanded significantly, encompassing a total of 109 categories. In contrast, the original VON papers focused on a narrower range of objects: 22 categories in the case of VTN, and 6-21 categories for ZSON. This broader object category coverage inherently introduces greater complexity.
(2) The DDN task involves the utilization of natural language instructions, resulting in a considerably wider description space than the VON task. This expanded description space inherently escalates the level of task complexity.
(3) The VON methods do not take into account the many-to-many object-instruction mapping phenomenon present in the DDN task, resulting in their lack of reasoning about the combination of instructions and scenes.
These are the reasons why the VON methods perform worse on the DDN task than reported in their original papers.
Our method uses GPT-3 to generate numerous language-grounding mappings for learning demand-conditioned object attribute features. By leveraging CLIP's capability to align visual and textual information, our method integrates instruction and scene details.
We hope to hear back if you have further questions!
---
Rebuttal 2:
Comment: Dear reviewer #i2zg
Thank you very much for your time and effort spent in reviewing our paper. We appreciate your valuable suggestions for our papers.
**As the discussion period is ending soon, we would like to kindly request that you take into consideration the possibility of adjusting your score.** Please let us know whether you have further concerns. We are sincerely waiting for your response!
Best wishes,
Authors of 2423 | Summary: This paper introduces a new task called Demand-Driven Navigation (DDN) that, unlike previous Visual Object Navigation (VON) tasks that evaluate the ability of an agent to find a specific object in an unknown environment, considers fulfilling the demand of a human. This new task is motivated by the lack of real-world grounding of current VON tasks that either require an agent to find an instance of an object category from a pre-defined fixed vocabulary or a language-specified object in an open-vocabulary fashion. However, in a real environment, a specific object might not be present or, if thinking from the point of view of fulfilling a human’s demand, other objects might be equally as good as the targeted one. As a result, authors suggest querying an autonomous agent with a language demand instruction. This allows one to be more flexible with respect to environments but also requires common sense knowledge, an understanding of how objects can be used, and where they are likely to be located in a scene.
The paper evaluates several baselines inspired by previous literature to show the introduced task is hard and cannot be solved with currently known methods. A new approach to solving the DDN task is thus introduced in this paper. The main goal is to learn a mapping between human demands and attributes of objects that can fulfill them. At training time, a GPT-3 model is thus used to generate a series of demands and objects that can meet each of them. Each demand is encoded by a BERT model and each object is encoded by a CLIP text encoder. For each demand, a demand-object vector is created by concatenating the demand representation and an object representation. An attribute module takes the demand-object vector as input and is trained with contrastive learning to extract representations that are as close as possible for different pairs sharing the same demand. This attribute module is then used in the final policy that is composed of a Transformer model and is trained with imitation learning. The paper shows this new method outperforms other baselines.
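The contrastive training of the attribute module described above could be sketched roughly as follows. Feature dimensions, the InfoNCE-style loss form, and the temperature are illustrative assumptions rather than the paper's exact implementation; the `features` here stand in for the attribute module's outputs on concatenated demand-object vectors:

```python
import numpy as np

def attribute_contrastive_loss(features, demand_ids, temperature=0.1):
    """InfoNCE-style loss: attribute features of demand-object pairs that
    share the same demand are treated as positives of one another.
    features: (N, D) attribute-module outputs, demand_ids: (N,) demand labels."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                    # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = demand_ids[:, None] == demand_ids[None, :]
    np.fill_diagonal(same, False)
    # average log-probability assigned to positives (pairs sharing a demand)
    per_row = [logp[i, same[i]].mean() for i in range(len(f)) if same[i].any()]
    return -float(np.mean(per_row))
```

Minimizing this loss pulls together the attribute features of different objects fulfilling the same demand, which is the behavior the paper attributes to the module.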
Strengths: 1. The introduced task is very interesting: querying an agent with a human demand seems more aligned with what is needed when deploying a robot in a real environment to help humans. Authors have properly motivated the caveats of current VON tasks in the literature.
2. The extraction of object attributes seems very relevant and well-motivated. Learning attributes is indeed a way to guide the learning of common sense knowledge and experiments showcase the gain in performance it allows to reach.
3. Authors propose diverse baselines to evaluate the performance of current language models in robotics/navigation tasks.
Weaknesses: 1. [Major] The introduced method is composed of many different modules and pre-trained models (GPT-3, BERT, CLIP text encoder, CLIP vision encoder, Attribute Module, DETR, Image Encoder, Policy Transformer, Visual Grounding model). It is not clear what parts are the most important and whether the overall method could maintain the same performance without some of these building blocks. Additional ablation studies would be very interesting.
2. [Major] Most baselines except *Ours* showcase very low and close performance. However, section 6.3 discussing the experimental results is quite long and detailed. I am not convinced authors can draw as many conclusions as they do when we consider how close all baselines are in terms of average performance (even more when considering the standard deviation).
3. [Major] This remark is very related to the previous one. All concurrent baselines reach very low performance. Could it be that these methods were not trained to convergence or would simply require much more training? The authors mention 1.8M training frames in the paper, which seems rather small compared with the number of frames required to train baselines in other navigation tasks (generally closer to 100M training frames).
4. [Minor] When presenting baselines in section 6.2, a lot of information is missing. The paper refers to details given in the supplementary material. When reading this supplementary information, we can understand the introduced baselines. I would suggest including this information in the main paper directly.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Four questions (1.-4.) were already asked in the "Weaknesses" section. I would like authors to address these concerns. I also add two additional questions (5., 6.), that do not appear to me as paper weaknesses but rather required clarifications:
1.-4. See "Weaknesses" section.
5. [Major] When describing the *Random* baseline, authors say it is about randomly selecting an action in the action space. I thus do not understand the difference between *Random-object* and *Random-demand*. Further clarifications are needed.
6. [Major] Performance for *GPT-3+Prompt** and *MiniGPT-4* in Table 1 is always the same independently from the scene (seen/unseen) and instructions (seen/unseen). This should either be corrected or explained.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper mentions limitations regarding the drawn conclusions about the performance of current language models on the DDN tasks. Authors explain they did not have access to the code for recent methods such as GPT-4 with visual inputs or PaLM-E, and thus were not able to evaluate these methods. It is a good thing to mention this limitation, but their experiments were already conducted with many recent models, which might still allow them to draw relevant conclusions (see “Weaknesses” section for remarks about the drawn conclusions, which are however orthogonal to the mentioned limitations in this section).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: REPLY: Thank you! We have clarified some questions, addressed your concerns, and hope to hear back from you if you have further questions!
**Q1**: Additional ablation.
A1: Thank you very much for your advice. We show ablation experiments for pre-training of attribute modules in the main paper. We now add two further ablations on BERT and the Attribute Module's network: replacing BERT with an MLP, and replacing the Attribute Transformer with a 4-layer MLP; the results are shown **in the attached PDF file in Common Response**. The results reveal that BERT features are essential for learning a robust navigation policy, especially on unseen instructions. The ablation results on the Attribute Module's network show that the transformer outperforms the MLP in learning attribute features.
Since we need to rely on CLIP's ability to align text and vision, we cannot perform ablation experiments on CLIP. DETR, the VG model, and GPT-3 are simply tools we use to make the pipeline complete; they can be replaced by any model with similar functionality. For example, DETR can be replaced by Faster R-CNN, and the VG model can be replaced by SAM. We regret not being able to provide ablation experiments on the policy transformer and image encoder due to computational and time constraints during the rebuttal period. However, our network follows the structure of VTN, where you may find some related ablation experiments.
**Q2**: I am not convinced authors can draw as many conclusions.
A2: Thank you for bringing up this concern. Our conclusions were derived from a comprehensive analysis of the agent's trajectory and decision-making data. We acknowledge that we fell short in providing this crucial information and apologize for the oversight.
To explain the claim in L334-335, the difference between ZSON-object and ZSON-demand needs to be explained. Both ZSON-object and ZSON-demand were trained using Image Navigation (the task input is the CLIP vision feature of the target image), but for testing, ZSON-object was tested with Object Navigation (with the task input being CLIP's text feature of the target object's name), while ZSON-demand was tested with the DDN task (with the task input being CLIP's text feature of the demand instruction). ZSON's motivation is to rely on CLIP's ability to align text and vision to accomplish zero-shot object navigation, and this alignment ability is reflected in the cosine similarity of the features. The average cosine similarity between object image features and object name features in ZSON-object is 0.28, whereas the average cosine similarity between object image features and instruction features in ZSON-demand is only 0.22. So we argue that the alignment between instructions and objects is not good.
Our contention in L336-338 is rooted in rigorous statistical analysis. Our findings indicate a 54.4% probability that the object suggested by GPT-3 may not even exist within the current environment.
Our claim in L343-345 is based on counting the distribution of GPT-3+Prompt's actions and the number of episodes that exceeded the step limit: 66.41% of its steps were spent rotating in place or adjusting the camera, whereas these actions accounted for only 32.41% of the expert data in our trajectory dataset; moreover, 80% of its failed episodes were due to exceeding the 100-step limit.
Regarding MiniGPT-4, we found that "MoveAhead" accounts for only 15% of its actions, while rotating and adjusting the camera in place account for 66% (in the expert trajectories, "MoveAhead" accounts for 66.41% and rotating and adjusting the camera in place for 32.41%); the average episode length of MiniGPT-4 is 4.38, compared with 27.24 for the expert trajectories. This suggests that MiniGPT-4 does not tend to move around looking for objects, but rather observes in place and then quickly decides on a target object to end the episode.
**Q3**: Could it be that these methods were not trained to convergence or would simply require much more training?
A3: We set up a validation set for model selection (picking the model with the highest NSR on validation). We found that long before the 1.8M step, the baselines' performance on the validation set had already begun to decrease. After trading off training time and computational resources, we chose 1.8M as the number of RL training steps. While further training might yield improvements, we argue that the chosen step budget provides a reasonable balance among computing resources, time, and performance.
**Q4**: I would suggest including this information in the main paper directly.
A4: Thank you for your suggestion. We will make the necessary changes to the main paper to include a more detailed baseline description. Additionally, we will provide a comprehensive account of the training and testing processes, with other essential information, in the supplementary material.
**Q5**: the difference between Random-object and Random-demand.
A5: Due to the character limit, please see **Common Response 2**.
**Q6**: Performance for GPT-3+Prompt* and MiniGPT-4 in Table 1
A6: Thanks for pointing that out. The results for GPT-3+Prompt* and MiniGPT-4 in Table 1 being identical across all scene and instruction settings can be attributed to several factors. Firstly, neither GPT-3 nor MiniGPT-4 was trained on any scene or instruction, so there is no distinction between seen and unseen scenes/instructions for these models. Secondly, to ensure the reliability of the results, we conducted extensive testing with thousands of episodes using MiniGPT-4 and GPT-3; over the course of testing, the results stabilized across all four settings, leading to consistent performance values. Lastly, when presenting the results in the table, we rounded the values, which might lead to entries that appear identical in the table but differ at later decimal places.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their efforts in trying to address my concerns and the ones of other reviewers. It should particularly be noted that many additional experiments were conducted which is highly appreciated.
The authors gave reasonable answers to most of my concerns. I would just like to come back to **Q5**: As mentioned by the authors, several reviewers were confused regarding the difference between *Random-object* and *Random-demand* baselines. After reading the answer from reviewers, it seems that this is not a difference in the baseline itself but rather in the task at hand. I feel like this should be made clearer in the paper, and even in Table 1. Indeed, comparing different methods on the same task and the same method on different tasks is very different, and it looks to me like the authors are doing both simultaneously in Table 1, which makes it harder to draw clear conclusions from experimental results in my opinion.
However, I still feel like this paper asks interesting questions, is well-written, and involves an important amount of experiments.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses help alleviate your concerns. We also thank you for appreciating our additional experiments.
**Q7**: I feel like this should be made clearer in the paper, and even in Table 1. Indeed, comparing different methods on the same task and the same method on different tasks is very different, and it looks to me like the authors are doing both simultaneously in Table 1, which makes it harder to draw clear conclusions from experimental results in my opinion.
A7: Thank you sincerely for your insightful suggestion. We highly appreciate your feedback. We will implement your suggestion by separating the contents of object navigation and DDN in Table 1 into two distinct tables. Additionally, we will provide a more comprehensive and detailed explanation of these two different tasks.
Many thanks again for your response! We hope to hear back if you have further questions!
---
Reply to Comment 1.1.2:
Comment: Dear reviewer #7yMw
Thank you very much for your time and effort spent in reviewing our paper. We appreciate your valuable suggestions for our papers.
**As the discussion period is ending soon, we would like to kindly request that you take into consideration the possibility of adjusting your score.** Please let us know whether you have further concerns. We are sincerely waiting for your response!
Best wishes,
Authors of 2423 | Summary: This paper proposes a Demand-driven Navigation (DDN) problem to leverages the user’s demand as the task instruction and prompts the agent to find an object which matches the specified demand. Then the authors proposed a method by learning demand-conditioned object attribute features from LLMs and align them to visual navigation via CLIP. The experiment shows the efficiency of the proposed method. However, I have some concerns about this paper. My detailed comments are as follows.
Strengths: 1. This paper proposes a novel Demand-Driven Navigation task to explore the navigation with only the user’s demand as the task instruction. This task is practical and worth more research, especially with the development of open-vocabulary foundation models.
2. The proposed attribute module is interesting and helpful for extracting attributes of objects.
3. This paper provides a method to tackle the DDN task by extracting common sense from LLMs to learn textual attribute features and uses CLIP to align the textual and visual attribute features. The results obviously outperform the baselines.
Weaknesses: 1. One important baseline is missing. The agent could explore the environment using a heuristic algorithm like FBE. At each time step, the agent detects all objects in its observation and asks an LLM whether these objects can satisfy the human demand.
2. What are the differences between common sense knowledge and human preferences mentioned in the paper?
3. It is not clear why the results of random-object differ from random-demand. Do they both execute random actions? More explanations are needed.
4. In Table 1, ZSON-demand performs better than ZSON-object. Does it indicate that CLIP performs better at understanding high-level abstract demands than concrete objects? This result seems to conflict with the claim that “CLIP does not perform well on alignment between instructions and objects” in Line 335.
5. Some related works that try to solve open-vocabulary navigation [1,2] or scene understanding [3] are missing. It would be better to add and discuss them in the related work part for the sake of completeness.
[1] Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation, NeurIPS 2022.
[2] Visual Language Maps for Robot Navigation, ICRA 2023.
[3] LERF: Language Embedded Radiance Fields, ArXiv 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My main concerns are the lack of an important baseline and the analysis of the experimental results.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: REPLY: Thank you! We have clarified some questions, addressed your concerns, and hope to hear back from you if you have further questions!
**Q1**: One important baseline is missing. The agent could explore the environment using a heuristic algorithm like FBE. At each time step, the agent detects all objects in its observation and asks an LLM whether these objects can satisfy the human demand.
A1: Thank you very much for your suggestion. We supplement two experiments with FBE as the exploration module and MiniGPT-4 / GPT-3 as recognition modules. Due to rebuttal character limits, the results are provided in the attached PDF file in **Common Response**. Since ProcThor consists of large scenes with multiple rooms, the results show that heuristic search is not efficient. Our model structure mimics VTN, a structure that can learn associations between objects, and learns the attribute features of objects with contrastive learning, which improves search efficiency by exploiting the semantics of the objects.
**Q2**: What are the differences between common sense knowledge and human preferences mentioned in the paper?
A2: Thank you for your valuable comment. In the context of the paper, common sense knowledge refers to the general knowledge and understanding that humans possess about the world and its functioning. This knowledge includes basic facts and principles that are commonly accepted and expected in everyday life. On the other hand, human preferences refer to the individual preferences, desires, and choices that vary from person to person. These preferences can be influenced by personal experiences, cultural background, and subjective judgments, leading to variations in how individuals perceive and prioritize different options. In our DDN dataset, we did not explicitly express personal preferences, but instead added some modifiers to further specify the target category and reflect personal preferences, such as "I want a place to rest" vs "I want a soft place to rest."
**Q3**: It is not clear why the results of random-object are different from random-demand. Are they both execute random actions? More explanations are needed.
A3: Thank you for pointing that out. Indeed, both random-object and random-demand baselines execute actions randomly from the action space provided. However, the key difference lies in the tasks assigned to them and the criteria used to evaluate success, leading to varied results. For random-object, the baseline is given **a specific category of objects** and is tasked with finding an object of that category within the environment. On the other hand, random-demand is given **a demand instruction** and is asked to find an object that satisfies that particular demand. In some scenarios, a single demand instruction may be satisfied by **multiple categories of objects** present in the environment. Due to this broader range of possible successful outcomes, the success criteria for random-demand are more relaxed compared to random-object, resulting in higher success rates for random-demand.
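The difference in success criteria described above can be sketched as follows. This is a hypothetical illustration of the two evaluation rules; the function names and example categories are ours, not the paper's:

```python
def object_nav_success(found_category: str, target_category: str) -> bool:
    """Object navigation: success only if the exact target category is found."""
    return found_category == target_category

def demand_nav_success(found_category: str, satisfying_categories: set) -> bool:
    """Demand-driven navigation: success if ANY category that satisfies the
    demand instruction is found (e.g. "I am thirsty" -> {water, juice, tea}).
    """
    return found_category in satisfying_categories

# A random policy that stumbles upon "juice" fails object navigation with
# target "water", but succeeds on the demand "I am thirsty":
thirst_categories = {"water", "juice", "tea"}
print(object_nav_success("juice", "water"))          # False
print(demand_nav_success("juice", thirst_categories))  # True
```

Because any member of the (possibly large) satisfying set counts as success, a random walker has strictly more winning outcomes under the demand criterion, which is why Random-demand posts higher success rates.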
**Q4**: In Table 1, ZSON-demand performs better than ZSON-object. Does it indicate that CLIP performs better at understanding high-level abstract demands than concrete objects? This result seems to conflict with the claim that “CLIP does not perform well on alignment between instructions and objects” in Line 335.
A4: Thank you for raising this concern. It is essential to note that ZSON-demand and ZSON-object represent different task settings. In ZSON-demand, the objective is to find the category of objects that fulfills a given demand instruction, where multiple object categories in the scenes may satisfy the demand. In contrast, ZSON-object entails locating a specific object category. The difference in task settings makes it challenging to draw direct comparisons between ZSON-demand and ZSON-object performance.
Regarding why we claim that CLIP does not perform well on alignment between instructions and objects, we need to first explain the difference in testing and task input between ZSON-object and ZSON-demand. Both ZSON-object and ZSON-demand were trained using Image Navigation (the task input is the CLIP vision feature of the target image), but for testing, ZSON-object was tested with Object Navigation (with the task input being CLIP's text feature of the target object's name), while ZSON-demand was tested with the DDN task (with the task input being CLIP's text feature of the demand instruction). ZSON's motivation is to rely on CLIP's ability to align text and vision to accomplish zero-shot object navigation, and this alignment ability is reflected in the cosine similarity of the features. The average cosine similarity between **object image features** and **object name features** in ZSON-object is 0.28, whereas the average cosine similarity between **object image features** and **instruction features** in ZSON-demand is only 0.22. So we argue that the alignment between instructions and objects in the DDN task is not as good as in the Object Navigation task.
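To make the alignment measurement concrete, here is a minimal sketch of how such an average cosine-similarity score could be computed. In the actual setup the vectors would come from CLIP's image and text encoders; the function names and plain-Python vectors here are purely illustrative:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_alignment(image_feats, text_feats):
    """Average cosine similarity over paired (image, text) feature vectors,
    i.e. the alignment score reported in the rebuttal (0.28 vs 0.22)."""
    sims = [cosine_similarity(img, txt) for img, txt in zip(image_feats, text_feats)]
    return sum(sims) / len(sims)
```

A lower mean over (object image, instruction) pairs than over (object image, object name) pairs would indicate exactly the weaker instruction-to-object alignment the response describes.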
**Q5**: Related work
A5: Many thanks for the papers you have provided. We think these papers are very relevant to our work, so we will add them all to the related work section and discuss them.
---
Rebuttal Comment 1.1:
Title: Thanks for Response
Comment: Thanks for your detailed response, which solved all my concerns about experimental settings and results analysis. I am happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses help alleviate your concerns. Thank you for raising your score! We greatly appreciate your valuable suggestions on our paper. | Rebuttal 1:
Rebuttal: ## Common Response ##
We thank all reviewers for appreciating our DDN task, method and experiments. "The idea of demand-driven navigation is interesting and novel." (JLnR) "The proposed attribute module is interesting and helpful for extracting attributes of objects." (nwVS) "Authors propose diverse baselines to evaluate the performance of current language models in robotics/navigation tasks." (7yMw) "The motivation of this work is interesting." (i2zg) "The paper flows nicely with good motivation of the novel task and its challenges clearly stated. The proposed method makes sense and comparison is good." (FyRT)
**(Common Response 1)** However, we notice that some reviewers (JLnR, nwVS, 7yMw, i2zg) have some suggestions about our experiments, such as missing some important baselines and ablation experiments. We supplement with five baselines (FBE+MiniGPT-4, FBE+GPT-3, CLIP-Nav+MiniGPT-4, CLIP-Nav+GPT-3, VTN+CLIP) and two ablation experiments (Ours_w/o_BERT, Ours_w/o_Attribute_Transformer) as they suggested. The original ablation Ours_w/o_attr in the main paper is renamed Ours_w/o_Attribute_Pretrain. Due to time and computational resource constraints, we conducted only one round of each experiment. We will conduct more rounds of experiments using different random seeds if the paper is accepted. **The supplementary experimental results are shown in the attached PDF file in Common Response**.
**(Common Response 2)** We also notice that some reviewers (JLnR, nwVS, 7yMw) have some questions on our Random baselines. The inclusion of variants of "Random" in our baselines is motivated by previous research [1,2,3], where "Random" is used as a baseline to reflect the task's difficulty.
Random-object and Random-demand differ in task setting. Random-object's task is to find **a given object category**, whereas Random-demand's task is to find an object that satisfies a given demand instruction. In some scenarios, a single demand instruction may be satisfied by **multiple object categories** present in the environment. Due to this broader range of possible successful outcomes, the success criteria for Random-demand are more relaxed than for Random-object, resulting in higher success rates for Random-demand. Therefore, theoretically, the result of Random-demand should be better. However, because these two baselines have different task settings, it is difficult to draw valuable conclusions from a comparison between them.
**(Common Response 3)** Some reviewers (JLnR, i2zg, FyRT) have valuable suggestions on the content of our DDN dataset. They suggested that we could add more fine-grained demand instructions such as "I am thirsty but I cannot drink cold", "something quench thirst and contains caffeine". We have actually included some similar examples in our DDN dataset, such as "I need a place to rest" and "I need a soft place to rest". The former can be "bed, wooden chair, sofa" while the latter cannot be "wooden chair" but "bed, sofa". The latter contains two demands in its instruction: "soft" and "available for rest". In future work, we'll take their suggestions into serious consideration and add some explicit personal preferences as well as more complex demand instructions, including negation of object samples, combining multiple demands, and prioritising between objects.
We truly appreciate the time all the reviewers, AC and SAC have taken to carefully review our work. What follows are the revision plan and point-to-point responses, and we hope that our responses address your concerns. In the attached PDF file, we summarise the results of all experiments (both baseline and ablation experiments), including the experiments we supplemented in the rebuttal period. Thanks again for all valuable comments and suggestions.
## Revision Plan ##
As suggested by reviewer i2zg, we will reduce some of the redundant expressions in the Introduction section to make it more concise and focused.
In the Related Work section, we will add some papers recommended by reviewer nwVS and discuss them.
In the Experiment section, we will add the results of our supplementary baseline and ablation experiments to Table 1. We will also add a more detailed description of the baselines.
### References ###
[1] Du, H., Yu, X., & Zheng, L. (2021). VTNet: Visual transformer network for object goal navigation. ICLR 2021
[2] Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., ... & Van Den Hengel, A. (2018). Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. CVPR 2018
[3] Chen, C., Jain, U., Schissler, C., Gari, S. V. A., Al-Halah, Z., Ithapu, V. K., ... & Grauman, K. (2020). Soundspaces: Audio-visual navigation in 3d environments. ECCV 2020
Pdf: /pdf/44069dacd91295a2e5c4e934582337a88e9d21ba.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes the novel task of demand-driven navigation, where a robot must navigate towards a goal object that satisfies the human user's demand (e.g., for the demand "I am thirsty", the robot has to find water/juice/tea, etc.) Citing limitations of navigation methods used for other object-goal navigation variants, the paper also proposes a novel architecture that relies on extracting attribute features conditioned on the demand text features and object features for visible objects. The attribute features extracted are indicative of what physical / semantic properties are fulfilled by a given object to satisfy the demand (e.g., the object "water bottle" has the property to "quench thirst"). These features are learned using a contrastive learning objective on prior-knowledge extracted from a large-language model (GPT-3). Experiments on the ProcThor dataset demonstrate the difficulty of the demand-driven navigation task and the superiority of the proposed method over prior navigation methods.
Strengths: * The idea of demand-driven navigation is interesting and novel. While the goal specification is natural language (similar to prior work), it focuses on a very different concept of "demand" where multiple functionally equivalent objects can satisfy a given demand.
* The problem statement is well motivated and the paper writing clarity is largely good (see weaknesses for some questions).
* The supplementary material provides necessary implementation details and details about the dataset for reproducibility.
* The experimental design is good. Recent baselines for open and closed vocabulary navigation are considered. Multiple experimental trials have been performed to show statistical significance. The proposed method significantly outperforms the baselines.
Weaknesses: # Post-rebuttal update
I thank the authors for addressing my questions and concerns in the rebuttal. It is clear to me that this paper presents a valuable new direction in this space of navigation tasks and the experiments are sufficiently strong to recommend acceptance. The motivation can be clarified further and the rebuttal responses need to be reflected in the final paper. With the understanding that this will be done, I am increasing my rating to accept (7).
---------------------------------------------------------------
## Task motivation good, but practical implications are not clear
I liked the task itself, but a key question that concerns me is *"how often does an object demanded not be present in the scene, and therefore, a functionally equivalent object was needed to satisfy the demand?"*. That is, how often do demands for objects become infeasible because the object was missing in the scene? E.g., if the demand is "Get me a water bottle because I am thirsty", I would expect most scenes would contain water bottles. The task definition avoids addressing this issue by having a generic demand itself as the input.
Of course, it is possible that there are other objects that can be functionally equivalent **in addition** to the object demanded, but that's not the scenario motivated in the introduction. Additionally, as a user, if I request a water bottle, that's exactly what I'd want unless it is nowhere to be found. So the proposed task makes more sense in the absence of the primary object.
## Dataset is underwhelming
L63 - 64 - "mapping between demands and objects is many-to-many" ---This is only partly true. Based on statistics in Figure 4 supp., only 2.3 objects, on average, correspond to a given instruction. Calling this "many" is underwhelming. The other direction is still true though, i.e., there are many instructions satisfied by a given object (Figure 5 supp).
## Task definition fails to consider whether an object instance satisfies the demand
L89 - "Both a bottle of water and a cup of tea" can "quench thirst" --- this is only true if the bottle has water and cup is filled with tea. Does the dataset / task differentiate between bottle/cup instances that contain liquids vs. those that are empty?
## Approach clarifications needed
* L175 - "only take RGB images as sensor inputs" --- is the GPS+compass information also included here or does the model learn to localize on its own?
* L185 - why is the time step limit only 100? That seems very short for navigation in large environments.
* L257 - why is the attribute module a transformer (e.g., why not just an MLP)? Is self-attention across demand-object features needed?
* L281 - why use only imitation learning and not reinforcement learning?
## Experiment clarifications needed
* Table 1 - why is the performance on seen scene, seen instruction so low? I'd expect close to 100% success due to overfitting.
* L306 - how are the object categories derived from demand inputs for the *-object methods?
* L317 - why are there variants of "Random" if the policy only selects actions randomly?
* L336 - 338 - "likely to be due to the fact that ... have a high likelihood not to be present ... meaningless search" --- can we empirically quantify this? How often is it the case that objects predicted by GPT-3 are missing in the scene?
* L371 - 373 - "surpasses all baselines ... CLIP-visual features helps ..." - it seems to me an unfair advantage to use CLIP visual features only for the proposed method and not the baselines. How do baselines perform when equipped with CLIP features?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I'd appreciate it if the authors can clarify the questions raised in weaknesses. It will help me arrive at my final decision.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: REPLY: Thank you! We have clarified some questions, addressed your concerns, and hope to hear back from you if you have further questions!
**Q1**: how often does an object demanded not be present in the scene, and therefore, a functionally equivalent object was needed to satisfy the demand?
A1: Thank you for pointing this out! We carefully analyzed the scenario you pointed out, wherein the Procthor contains multiple objects (e.g., objects A, B, and C) that fulfill the demand instruction, but not all of them are simultaneously present within a specific scene. Our statistical calculations revealed a probability of 54.4% for this situation to occur. Correspondingly, there is a 45.6% chance that all objects satisfying the demand instruction are indeed present in the scene.
**Q2**: Additionally, as a user, if I request a water bottle, that's exactly what I'd want unless it is nowhere to be found.
A2: Thank you very much for your valuable suggestion. Due to rebuttal character limits, we put the relevant discussion into **Common Response 3**. We would appreciate it if you could read it.
**Q3**: Dataset is underwhelming. only 2.3 objects, on average, correspond to a given instruction
A3: Thank you very much for pointing this out. We argue that the nature of DDN tasks is many-to-many. We acknowledge that the limited number of object categories available in the ProcThor dataset might have influenced the presentation of this nature, resulting in an average of only 2.3 object categories per instruction being fulfilled. In real life, a demand can be satisfied by a much larger number of object categories; for example, we can easily generate 10 different categories of objects for each instruction using GPT-3 (please see the language-grounding mappings in supp 8.2.1). In future work, we will focus on more diverse scenes and objects to generate a more complex DDN dataset.
**Q4**: Task definition fails to consider whether an object instance satisfies the demand. Does the dataset / task differentiate between bottle/cup instances that contain liquids vs. those that are empty?
A4: Thank you for raising this important point. Yes, the dataset differentiates them. For instance, in the demand "I am thirsty," the required object is "water," rather than specifying a particular container like "bottle" or "cup." On the other hand, in the demand "I need a container to hold water," the required object is explicitly defined as "bottle." We can identify them using the simulator's metadata easily.
**Q5**: Approach clarifications
L175:
No. Because many previous visual navigation works have used depth and GPS inputs, we want to emphasize that we only used RGB.
L185:
Our decision to set the time step limit to 100 is based on an analysis of our trajectory dataset. The dataset, collected using the A* algorithm, revealed that 90% of the trajectory lengths are less than 50 and the average length is 27.24. Consequently, we opted for a time step limit of twice 50.
L257:
Thank you for your insightful question and valuable suggestions. We chose to implement the attribute module as a transformer because transformers have demonstrated remarkable effectiveness in various domains, such as NLP and computer vision.
To show the transformer's ability to learn attribute features, we added experiments replacing the transformer with a 4-layer MLP in the attribute module. **Due to rebuttal character limitations, the results of the experiment are shown in the attached PDF in the Common Response.** The results clearly showed that the MLP was not superior to the transformer in learning attribute features. Thus, we found that self-attention across demand-object features, provided by the transformer, is essential for achieving optimal performance in our task.
L281:
Thank you for your insightful suggestion! We agree that integrating our method with reinforcement learning (RL) could potentially lead to further performance improvements, conceptually. However, due to the size of our model (even several times larger than VTN), the reward signal from RL turns out to be weak, rendering RL less effective for our method. Consequently, we initially focused on exploring an imitation learning (IL)-based method, which yielded significant performance gains over baselines. Nonetheless, we recognize the value and promise of integrating RL into our method as a valuable future direction for further research.
**Q6**:Experiment clarifications
Table 1:
The performance below 100% on seen scenes and instructions can be attributed to the limited size of our trajectory dataset. We only collect up to 3 trajectories with different initial positions for each instruction and each room. Since each room contains hundreds or even thousands of initial positions, each corresponding to different final objects found, the trajectory dataset we collect is small relative to the vast number of all possible trajectories (even less than 1%). This limited dataset size makes it difficult for our model to overfit to the specific seen scenes and instructions, resulting in a success rate below 100%.
L306:
The object categories are obtained by asking GPT-3 for the current demand instruction and letting GPT-3 provide the categories of objects that fulfill the demand. Subsequently, we convert the GPT-3 generated answers into formats acceptable by VTN and ZSON, respectively, using different methods. For detailed information on the conversion process, please refer to the supplementary materials Section 8.3.2.
L317:
Please see **Common Response 2**.
L336-338:
As we described in Q1, there is a 54.4% probability that the object given by GPT-3 does not exist in the current scene, and there are other objects that satisfy the given demand instruction.
L371-373:
Thank you for your suggestion. We added the experiments VTN+CLIP-demand. The results are in the attached PDF file in **Common Response**.
---
Rebuttal Comment 1.1:
Title: Some clarifications on Q6 L306
Comment: We apologize that we misunderstood your question on the Q6 L306 at first. Here are some clarifications.
Methods with the -object suffix are trained in **visual object navigation** and tested in **visual object navigation**. Instead of deriving it from demand inputs, we directly inform the robot of the object category that it needs to find. We apologize for putting them together with the demand-driven navigation results, which caused some misunderstanding. We will make a separate table for methods with the -object suffix later.
Methods with the -GPT suffix use models trained in **visual object navigation** and tested in **demand-driven navigation**. We obtain a target object category that can satisfy the demand instruction by asking GPT-3, and then inform the robot of the target object category given by GPT-3.
We will also explain the different suffixes in more detail in the main paper. | null | null | null | null | null | null |
Momentum Provably Improves Error Feedback! | Accept (poster) | Summary: This paper introduces a modification to the EF21-SGD algorithm by incorporating momentum, resulting in a new algorithm named EF21-SGDM. The innovative analysis accompanying this new method successfully addresses the challenges associated with EF21-SGD, reducing the sample complexity from $\Omega(\sigma^2/\epsilon^2)$ to $\mathcal{O}(\sigma^2/(L\delta_0))$. Importantly, EF21-SGDM operates without requiring an assumption of bounded gradient.
Strengths: The paper proposes a new algorithm for distributed settings with compressed gradient. The authors also provide an original analysis that delivers improved results, contributing substantially to the existing work.
Weaknesses: Although the paper's sample complexity in each iteration number is independent of $\varepsilon$, it still depends on the variance term, $\sigma$. This dependence should be explicitly stated to ensure a comprehensive understanding of the algorithm. Furthermore, Algorithm 1 uses $B_{init}$, but the batch size remains consistent throughout the iterations. It may be more appropriate to avoid this term, considering the batch size does not vary.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The theoretical results suggest that $\eta=\mathcal{O}(1/T)$, but the experiment utilizes $\eta=0.1$. Could the authors clarify the reasoning behind this discrepancy between theoretical and experimental parameters?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Although the paper's sample complexity in each iteration number is independent of $\varepsilon$ , it still depends on the variance term, $\sigma$. This dependence should be explicitly stated to ensure a comprehensive understanding of the algorithm.
The sample complexity of each iteration of our algorithms (EF21-SGDM and EF21-SGD2M) is $1$, since using one stochastic gradient is sufficient at each iteration at every node, see Theorems 2, 3 and 5. Our total sample complexities (see Corollaries 2 and 3) naturally depend on both $\varepsilon$ and $\sigma$. We report this dependence in all theorems, corollaries, tables and carefully compare the dependence on each quantity to the previous work, see lines 281-288 after Corollary 2.
> Furthermore, Algorithm 1 uses $B_{init}$, but the batch size remains consistent throughout the iterations. It may be more appropriate to avoid this term, considering the batch size does not vary.
Our equation (9) in the main Theorem 3 holds for any initial batch size including $B_{init} = 1$, thus a large initial batch size is not necessary for convergence. However, to make a fair comparison in the total sample complexities of EF21-SGD, EF14-SGD, EF21-SGDM and EF21-SGD2M, we distinguish between $\delta_0$ and $\Lambda_0$ (it could be that $\delta_0 \ll \Lambda_0$) and use a large initial batch size to make them of the same order.
> *Question*. The theoretical results suggest that $\eta = \mathcal{O}(1/T)$, but the experiment utilizes $\eta = 0.1$. Could the authors clarify the reasoning behind this discrepancy between theoretical and experimental parameters?
According to Theorem 3, the best theoretical choice of $\eta$ depends not only on the number of iterations $T$, but also on some unknown parameters such as $\sigma$, $L$ and $\delta_0$. Therefore, we resort to tuning the value of $\eta$. We fine-tuned $\eta$ from the set $\{0.01, 0.1\}$ on an independent dataset (w8a) before all our main experiments. We describe this procedure in Lines 312-313. We also include an additional discussion regarding this in Appendix J.
---
We believe we addressed all criticism raised. As we have shown, some of it was just based on a misunderstanding. We hope this might lead to a better score; thanks! We are ready to answer any further questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttals. I do not have more questions. | Summary: This paper proves that momentum helps EF21-SGD. It makes several non-trivial contributions. First, it theoretically shows that EF21-SGD cannot converge when batch-size is small. Second, it proposes a simple remedy to this issue, i.e., incorporating momentum to EF21-SGD. Third, it proves that EF21-SGDM can converge well even if with small batch-size. Fourth, it extends EF21-SGDM can achieve linear speedup in distributed non-convex optimization without any assumptions on bounded gradient similarity. All these contributions are solid and novel.
Strengths: - It theoretically shows that EF21-SGD cannot converge when batch-size is small.
- It proposes a simple remedy to this issue, i.e., incorporating momentum to EF21-SGD.
- It proves that EF21-SGDM can converge well even if with small batch-size.
- It extends EF21-SGDM can achieve linear speedup in distributed non-convex optimization without any assumptions on bounded gradient similarity.
- The paper is well-written and easy to follow.
Weaknesses: The paper is well-written. I have a few minor questions.
1. In table 1, the authors claim that NEOLITHIC uses a large mini-batch, which may not be correct. While NEOLITHIC uses R times larger batch-size than EF21-SGD per iteration, it runs R times fewer iterations than EF21-SGD (see the NEOLITHIC algorithm in Huang et. al., 2022). On average, NEOLITHIC uses a normal O(1) mini-batch as EF21-SGD.
2. Typo: In line 176, "such" should be "such as".
3. In line 182, the authors claim that the proof technique can help establish linear speedup for Scaffold without relying on data similarity assumption. But Scaffold is not relying on this assumption, right?
4. In Figure 2, NEOLITHIC is far slower than EF21-SGD, which somehow contradicts with the results shown in Huang et. al., 2022. Can the authors provide more details on the experimental settings on NEOLITHIC? How many iterations do NEOLITHIC run? Is accumulated gradient used? How does the total communication cost be counted in NEOLITHIC? Does NEOLITHIC converge slower than the other algorithms in terms of iterations (not in communication cost)?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Question 1.* In table 1, the authors claim that NEOLITHIC uses a large mini-batch, which may not be correct. While NEOLITHIC uses R times larger batch-size than EF21-SGD per iteration, it runs R times fewer iterations than EF21-SGD (see the NEOLITHIC algorithm in Huang et. al., 2022). On average, NEOLITHIC uses a normal O(1) mini-batch as EF21-SGD.
According to Theorem 3 in [Huang et al., 2022], NEOLITHIC requires at each iteration a batch size of order $\frac{1}{\alpha} \log(G/\varepsilon)$ (even in the deterministic case) in our notation. While the dependence on $G$ and $1/\varepsilon$ is logarithmic, the dependence on $\alpha = K / d$ can still result in a very large batch size, especially when $K$ is small and the dimension $d$ is large. We are not aware if the analysis of NEOLITHIC can achieve $\mathcal{O}\left(\frac{1}{\alpha \varepsilon^2} \log(\frac{G}{\varepsilon}) + \frac{\sigma^2}{n \varepsilon^4}\right)$ sample complexity using a batch size equal to one at each iteration (i.e., $R = 1$).
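For intuition about the contraction parameter $\alpha = K/d$, here is a minimal NumPy sketch of the Top-K compressor, the standard example of a contractive compressor (the function name and test values are ours, purely illustrative):

```python
import numpy as np

def top_k(x, k):
    """Top-K compressor: keep the k largest-magnitude entries, zero the rest.

    Top-K is contractive with alpha = k/d, i.e. it satisfies
        ||C(x) - x||^2 <= (1 - k/d) * ||x||^2
    deterministically, since the residual consists of the d - k
    smallest-magnitude entries.
    """
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest-magnitude entries
    out[idx] = x[idx]
    return out

# Check the contraction inequality on a random vector.
rng = np.random.default_rng(0)
d, k = 10, 3
x = rng.standard_normal(d)
lhs = np.linalg.norm(top_k(x, k) - x) ** 2
rhs = (1 - k / d) * np.linalg.norm(x) ** 2
assert lhs <= rhs + 1e-12
```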
> *Question 3.* In line 182, the authors claim that the proof technique can help establish linear speedup for Scaffold without relying on data similarity assumption. But Scaffold is not relying on this assumption, right?
We thank the reviewer for pointing this out and admit that this sentence might be confusing. In fact, we meant to mention only Scaffnew/ProxSkip algorithm by Mishchenko et al, 2022 here. This remark relates to the corresponding limitation of Scaffnew in the stochastic setting, which is mentioned in the end of Section 5.1 of their work. That is although Scaffnew is provably faster than Scaffold in the deterministic case, their sample complexity in the stochastic setting does not have the linear speedup. We will edit this sentence accordingly in the revision.
> *Question 4.* In Figure 2, NEOLITHIC is far slower than EF21-SGD, which somehow contradicts with the results shown in Huang et. al., 2022. Can the authors provide more details on the experimental settings on NEOLITHIC? How many iterations do NEOLITHIC run? Is accumulated gradient used? How does the total communication cost be counted in NEOLITHIC? Does NEOLITHIC converge slower than the other algorithms in terms of iterations (not in communication cost)?
It does not contradict the results from [Huang et. al., 2022] because they take the parameter $R = 4$ in their experiments. While we follow the theory from [Huang et. al., 2022, Theorem 3] and take $R \approx 1 / \alpha$ because the choice of $R = 4$ is not well justified. Although we admit that $R = 4$ can be a good practical choice, we want to avoid heuristics as much as possible.
All methods (including Neolithic) run the same number of iterations. In all methods, we calculate the number of sent bits in each iteration. This procedure is independent of the methods.
We have checked the convergence of Neolithic in terms of iterations. Neolithic converges **not** slower than other methods and has good performance w.r.t. # of iterations. This is the expected result because Neolithic sends $R \approx 1 / \alpha$ compressed vectors in each iteration.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for the rebuttal. They have addressed my concerns. I will keep my current rating and vote to accept the paper in the discussion. Please clarify in the camera-ready version that your experimental results are not contradictory with [Huang et. al. 2022] because of the choices of the $R$ value. | Summary: The authors present a new method called EF21-SGDM by combining EF21 and Polyak's momentum SGD. The theoretical contribution is that , it improves the communication and sample complexities of previous error feedback algorithms under standard smoothness and bounded variance assumptions. They also propose a double momentum variant to further improve the complexity. The experiments are conducted with non-convex logistic regression problems.
Strengths: - only standard smoothness and bounded variance assumptions are needed.
- identifies the issue that EF21 with stochastic gradients has weak sample complexity guarantees, and fixes it via a new Lyapunov function construction and new analysis.
- sample complexity is free of $\alpha$ and batch-free, the best from table 1.
- The theoretical analysis is comprehensive and seems solid.
Weaknesses: - Compared with theoretical contribution, the algorithm itself is straightforward, i.e., combining two existing methods EF21 and SGDM.
- The experiments are validated on only logistic regression, where large-scale distributed training is not as crucial as in larger models.
- EF21-SGD2M is not implemented in experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Does EF21-SGD2M have practical benefits over EF21-SGDM? If not, how do you verify that the complexity is improved over EF21-SGDM?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not discuss societal impact. I believe this is a theoretical paper and do not see much negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. Compared with theoretical contribution, the algorithm itself is straightforward, i.e., combining two existing methods EF21 and SGDM.
In our opinion, the simplicity of the algorithms should be viewed as a strength rather than weakness of our work.
**Multiple ways to combine EF21 and SGDM.** When our work is put into the context of the literature on compressed gradient methods, one can notice that several attempts were made before to combine EF with momentum. There are a number of ways one can combine EF with momentum, and it is not clear a priori which combination "works", i.e., gives any provable benefit over the non-momentum variant. For instance, [Xie et al., 2020] analyze the combination of EF14-SGD with Nesterov's momentum (M-CSER) and derive the convergence rate, which matches the one for EF14-SGD without any improvement (!). They also make a strong assumption on bounded gradients (BG) as for EF14-SGD.
Another approach to combine EF with momentum was proposed in [Fatkhullin et al., 2021]. In their work, it is suggested to look at the following scheme (EF21-HB):
$$
\text{Master: } \quad x^{t+1} = x^t - \gamma v^t ,
$$
$$
\text{Nodes: } \quad g_i^{t+1} = g_i^t + \mathcal{C} (\nabla f_i(x^t) - g_i^t ) ,
$$
$$
\text{Master: } \quad g^{t+1} = \frac{1}{n} \sum_{i=1}^n g_i^{t+1}, \qquad v^{t+1} = v^{t} + \eta g^{t+1} .
$$
As with our EF21-SGDM, the above method is also a combination of EF21 and Polyak's momentum. However, the algorithm is very different: first the EF21 mechanism is applied, then the gradient estimators are aggregated by the Master, and finally momentum is applied at the server level. In contrast, our EF21-SGDM applies momentum at each node followed by the EF21 mechanism. Convergence analysis of EF21-HB is given only in the deterministic case and again merely matches the rate of the non-momentum variant. It is unclear if such a variant would even converge when stochastic gradients (without mini-batching) are used.
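For concreteness, one iteration of our EF21-SGDM (momentum at each node, then the EF21 update) can be sketched in NumPy as follows; this is an illustrative sketch in our notation, not the actual experimental code:

```python
import numpy as np

def ef21_sgdm_step(x, g, v, stoch_grads, compressor, gamma, eta):
    """One iteration of EF21-SGDM (illustrative sketch).

    Per node i (momentum first, then the EF21 mechanism):
        v_i <- (1 - eta) * v_i + eta * grad_i      (Polyak momentum)
        g_i <- g_i + C(v_i - g_i)                  (EF21 update, compressed)
    Master:
        x  <- x - gamma * mean_i(g_i)

    Note the contrast with EF21-HB above, where momentum is applied
    once at the server *after* aggregation.
    """
    n = len(stoch_grads)
    for i in range(n):
        v[i] = (1 - eta) * v[i] + eta * stoch_grads[i]
        g[i] = g[i] + compressor(v[i] - g[i])
    x = x - gamma * np.mean(g, axis=0)
    return x, g, v
```

With the identity compressor ($\alpha = 1$) the scheme reduces to distributed SGDM, which is why our analysis is informative even without compression.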
In summary, there are many ways to combine EF and momentum, and it is unclear if any other combination can show the provable benefit. A few previous works tried, but failed. That is why we believe it is *not straightforward* to find the combination that *works*. Our work is the first to demonstrate the provable advantage of our combination over all non-momentum variants of EF in the non-convex case. We manage to do so by proposing a new Lyapunov function analysis, which appears to be novel even when EF and the compressor ($\alpha = 1$) are removed from our method.
**Differences even in case of no compression.** We would like to point out that even in case of no compression ($\alpha = 1$) the choice of the *step-size* and *momentum* parameters (in our Theorems 2, 3 and 8) are completely different from those proposed in the earlier works on SGDM, e.g., by Liu et al (2020). Due to a different order of momentum parameters, our proof technique is also completely different from [Liu et al., 2020]. Namely, we use the Lyapunov function presented in the equation (8), while the analysis of Liu et al. [2020] and the majority of other works on momentum (including EF21-HB in [Fatkhullin et al., 2021]) relies on (11), which has a completely different interpretation. We elaborate more on these differences in Appendix A (momentum).
From the algorithmic side, this analysis with different momentum parameters can be viewed as an encouragement to use smaller momentum parameters (of order $\eta_t = 1/\sqrt{T}$ instead of constant) or even time varying momentum ($\eta_t = \eta/\sqrt{t+1}$) as we describe in Appendix J.
**New double momentum variant.** Additionally, we analyze a double momentum variant of EF21-SGDM (Section 3.4 and Appendix G), which further improves the sample complexity of EF21-SGDM. We are not aware if such an algorithm was proposed or analyzed before in the literature even in case of no compression ($\alpha = 1$).
> 2. The experiments are validated on only logistic regression, where large-scale distributed training is not as crucial as in larger models.
As suggested by the reviewer, we include additional experiments with a larger model (ResNet-18 deep neural network with CIFAR10 image dataset), see attached PDF file above in the general response (Author Rebuttal). In summary, our previous observations based on non-convex logistic regression (with MNIST dataset) translate into this large scale experiment.
> 3. EF21-SGD2M is not implemented in experiments.
In the above PDF file, we also provide experiments with the double momentum variant of our algorithm (EF21-SGD2M). Our simulations show that the performance of EF21-SGD2M is comparable to that of EF21-SGDM and also improves convergence of the previously known algorithms under this setting.
> Question: Does EF21-SGD2M has practical benefits over EF21-SGDM? If not, how do you verify the complexity is improved over EF21-SGDM?
By "EF21-SGD2M further improves the sample complexity of EF21-SGDM", we mean the *theoretical improvement* when comparing the upper bounds in Corollaries 2 and 3. Notice that the iteration/sample complexity reported in Corollary 3 (for EF21-SGD2M) is better than the one in Corollary 2 (for EF21-SGDM) because the additive term with $1/\varepsilon^3$ disappears for the double momentum variant. We provide some intuition behind this in the beginning of Appendix G. In our work, we do not claim any practical benefits of EF21-SGD2M over EF21-SGDM.
---
We believe we addressed all criticism raised. We hope this might lead to a better score; thanks in advance! We are ready to answer any further questions! | Summary: The authors propose a new version of EF-SGD which uses momentum. The authors show that under standard assumptions the proposed method has a better convergence rate. The authors claim that in several cases it is hard to perform large batch sampling like when performing RL training. To overcome this problem they propose a momentum based method, which can overcome the small batch issue. The perform additional experiments on Mnist and real sim datasets and compare there effectiveness.
Strengths: 1. The proof indeed looks novel the assumptions are reasonable and the improvement in rates for small batch size is very encouraging.
2. The problem is well motivated.
3. The authors perform experiments and compare with different version of EF-SGD.
Weaknesses:
1. EF21-SGD, although theoretically motivated, is not practical due to only one-way compression and often high overhead; the authors should comment on the real-world implications of EF21-SGD and EF21-SGDM.
2. The authors are motivating their problem using examples from Medical Literature and Federated RL, but the actual experiments are performed on MNIST. In 2023, work like https://www.mosaicml.com/blog/mosaic-resnet trains Imagenet in 27 minutes using just 8 GPU, shouldn’t the paper at least have experiments using Cifar.
3. The only comparison is with other EF21-SGD variants, can you please have additional comparison with other communication efficient methods. And the comparisons should be on wall clock time rather than bits communicated.
4. Authors seems to be not accounting the memory consumption needed because of momentum, it would be great to have a discussion on that.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please comment on the concerns raised in weakness sections.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors in the opinion of this reviewer have not addressed limitations of their methods and their applicability to problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. EF21-SGD, although theoretically motivated, is not practical due to only one-way compression and often high overhead; the authors should comment on the real-world implications of EF21-SGD and EF21-SGDM.

In our work, we specifically focus on the uplink communication (from clients to the server), since it is often the key bottleneck in distributed systems with many clients (many clients try to communicate with the server simultaneously), and it is crucial to tackle this problem before considering downlink compression. It is common in this line of work to focus on uplink compression [Stich et al., 2018], [Beznosikov et al., 2020], [Richtarik et al., 2021]. On the other hand, we agree with the reviewer that the downlink communication can also be important. One way to tackle this problem is to use compression with error feedback for both uplink and downlink communication. In this direction, Fatkhullin et al., 2021 propose a modification of the EF21 algorithm which supports bidirectional compression (Algorithm 5, EF21-BC), and analyze this method in the deterministic setting. Our momentum variant can be combined with EF21-BC to achieve (batch-free) convergence of the combined algorithm with stochastic gradients. Due to the space limitation, we describe the pseudocode of such a combination above in the "global" response (Author Rebuttal).
> 2. The authors are motivating their problem using examples from Medical Literature and Federated RL, but the actual experiments are performed on MNIST. In 2023, work like https://www.mosaicml.com/blog/mosaic-resnet trains Imagenet in 27 minutes using just 8 GPU, shouldn’t the paper at least have experiments using Cifar.
As requested by the reviewer, we additionally test the algorithms on image recognition task CIFAR10 with ResNet-18 deep neural network, see the attached PDF file. In summary, our observations based on non-convex logistic regression (with MNIST dataset) translate into this large scale experiment.
> 3. The only comparison is with other EF21-SGD variants, can you please have additional comparison with other communication efficient methods. And the comparisons should be on wall clock time rather than bits communicated.
We respectfully disagree that the only comparison is with other EF21-SGD variants. Table 1 shows *theoretical comparisons* with Neolithic and EF14-SGD, which are based on the classical EF14 (EF) mechanism (Seide et al. [2014]). These two methods are not related to the EF21 mechanism. In experiments, we also compare our new method with Neolithic and EF14-SGD. The methods from Table 1 provide the current SOTA theoretical guarantees in our setting, which is why we only consider them.
Our paper is not a systems/software work, where comparison using run time would be appropriate and expected. Instead, we address a specific algorithmic and theory issue present in existing SGD methods with error feedback (the methods require large minibatches both in theory and practice), and thus our contributions are in designing an algorithmic fix and associated theory which proves that the fix indeed works. A theorem is worth a thousand experiments. Our experiments are meant to illustrate that our method solves this issue, and that our theory has predictive power. We believe that our experiments do exactly that. Combined with our theory, we believe this is conclusive evidence. Note that to show what we set out to show we do not need to rely on any particular computer system - indeed, our aim is to underline the system/architecture independent nature of our improvements.
For this reason, we aim to capture the dependence between the number of sent bits and convergence rates in experiments; these are system/architecture/runtime independent quantities. This way of comparing methods is standard in the literature on communication compression [Gorbunov et al, 2021], [Richtarik et al, 2021], [Huang et al, 2022], [Zhao et al, 2022], since such a measure is independent of specific implementation and computing system / architecture. It is also a way of comparison which ages much more gracefully - systems change quickly, but our plots are independent of these changes.
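To make the bit accounting concrete, here is a plausible sketch of the per-message bit count, assuming a Top-K-style sparsifier that sends k float values plus k indices per compressed vector (illustrative; our exact convention is described in the paper and code):

```python
import math

def topk_message_bits(d, k, float_bits=32):
    """Bits sent per Top-K-compressed vector of dimension d:
    k floating-point values plus k indices of ceil(log2(d)) bits each.
    (Illustrative accounting, not necessarily our exact convention.)
    """
    index_bits = math.ceil(math.log2(d))
    return k * (float_bits + index_bits)

# e.g. d = 1024, k = 10: 10 * (32 + 10) = 420 bits per message
```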
Notice that compared algorithms calculate the same number of stochastic gradients and communicate the same number of bits in each communication round, so the comparison of methods will not change if we plot the convergence on the number of epochs. To support our argument, we re-run the experiment from Figure 3 (c) and measure the wall-clock time to get $\varepsilon$-solution with $\varepsilon = 0.1.$ One can see that the wall-clock times are strongly correlated with the communication complexity results from Figure 3 (c).
| Alg. | Wall-Clock Time |
| ----------- | ----------- |
| EF14-SGD | 1179.96 sec. |
| EF21-SGD | 682.28 sec. |
| EF21-SGDM | 314.47 sec. |
> 4. Authors seems to be not accounting the memory consumption needed because of momentum, it would be great to have a discussion on that.
Compared to EF21-SGD, our momentum variant requires storing one more vector $v_i^t$ at each node, so the memory requirement of the proposed algorithm is indeed larger than that of EF21-SGD by a small numerical constant (< 2). This seems a relatively small price to pay considering that EF21-SGD may fail to converge (at least without a large batch size). On the other hand, compared to EF14-SGD, our EF21-SGDM algorithm stores the same number of vectors ($e_i^t \leftrightarrow v_i^t$), which means that our improvement over EF14-SGD in sample complexity and in the strength of the assumptions comes without resorting to additional memory. We will include a brief discussion of this in the next revision; this is a good suggestion.
---
We believe we addressed all criticism raised, which seems very minor to us. We hope this might lead to a better score. Thanks in advance! We are ready to answer any further questions!
---
Rebuttal Comment 1.1:
Title: Additional Clarifications
Comment: 1. I am curious why the authors turned off all the optimizations needed to achieve SOTA accuracies. These optimizations, like LR decay and data augmentation, are standard and widely used.
2. "A theorem is worth a thousand experiments" and "Our paper is not a systems/software work, where comparison using run time would be appropriate and expected": unfortunately, I as a reviewer strongly disagree with these statements. However, I understand the authors have a different opinion. I personally believe that to have a meaningful impact, both theoretical and experimental validation should be provided on real metrics. For example, work like PowerSGD (Vogels et al.), which ended up having a meaningful impact on improving accuracy, had extensive experiments. I understand from these statements that this might not be the goal of the authors, but for a top-tier conference like NeurIPS it would be expected to provide real-world impact, especially in a case where previous methods have actually been compared on runtime.
Given these issues I can not convincingly recommend for the acceptance of the paper.
---
Reply to Comment 1.1.1:
Title: Re: Additional Clarifications
Comment: > I am curious why the authors turned off all the optimizations to achieve SOTA accuracies? These optimizations, like LR decay and data augmentation, are standard and widely used.
We do this for several reasons:
- First, our paper's aim is *not* to compete with methods/papers/systems whose goal is to achieve SOTA generalization accuracies on selected benchmarks - we agree that in such work such optimizations should be used. Our work is of a completely different variety: our paper is not about generalization at all. Our work isolates an open theoretical question (can error feedback provably work with small minibatches?) and proposes an algorithmic fix (the use of momentum), together with theory which conclusively shows that this trick works as advertised in the class of smooth nonconvex functions.
- Second, our experiments are designed to test the predictive power of our theory. Heuristics such as data augmentation and LR decay are orthogonal considerations which are entirely irrelevant in our study. They are important as far as actual generalization performance of various optimizers is concerned, but as explained above, this is not the subject of our paper. For this reason, if we included these heuristics in our experiments, it would actually make the experiments and conclusions one can draw from them more confusing.
> "A theorem is worth a thousand experiments" and "Our paper is not a systems/software work, where comparison using run time would be appropriate and expected". Unfortunately I as a reviewer strongly disagree with these statements. However, I understand the authors have a different opinion.
Yes, we are of a different opinion. We strongly believe that theory and empirics have equal value in ML research. One feeds into the other and vice versa. We believe that the ML field needs to stand on both its feet (theory and empirics) to advance and to be truly useful. State of the art empirics typically stands on the shoulders of strong theory, and uses additional tricks, heuristics and ideas to push things further. Such tricks are then studied by theoreticians, uncovering their robustness or brittleness, improving and modifying them, or replacing them with more theoretically well grounded tricks. The actual interplay between theory and practice is much more complicated and intricate than this, of course.
Once you have the belief that theory and empirics have equal intrinsic value (and we actually believe this is what the community ideally/hopefully *should* believe), then it becomes clear that the community should be able to equally appreciate strong theory and empirical works. Fortunately, as the record of papers accepted to NeurIPS in the past clearly shows, this is the case. We believe it is in fact dangerous to use a double standard. For example,
- We believe that a strong practical/empirical work should stand on its own, and be perfectly acceptable to NeurIPS without the requirement that it contains any theory whatsoever. Of course, some theory is welcome, and can make that paper even stronger (say an 8/9/10), but it should not be a requirement for acceptance.
- Likewise, a strong theory work should stand on its own, and be perfectly acceptable to NeurIPS without the requirement that it contains any experiments whatsoever. Of course, some empirics is welcome, and can make that paper even stronger (say an 8/9/10), but it should not be a requirement for acceptance.
It seems to us you do not subscribe to this philosophy.
> I personally believe to have a meaningful impact both theoretical and experimental validation should be provided on real metrics.
This can't possibly be the way to evaluate NeurIPS papers, since otherwise no purely empirical or purely theoretical work would ever get published - and there are many examples of immensely influential works in these categories.
> For example - Work like Powersgd (Vogels et al. ) which end up having meaningful impact on improving accuracy had extensive experiments.
This paper is not a theory paper - it does not include a single theorem in the main body of the paper. The one theorem in the appendix is minor, and not central to the paper. This is a very good example of an empirical work which we believe should be accepted to a top venue. In the same manner, pure theory papers with perhaps just one experiment in the appendix should also be perfectly acceptable to a top conference. As an example, consider the 2012 NeurIPS paper by Nicolas Le Roux, Mark Schmidt and Francis Bach: A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. This work is of a theoretical nature just like ours, isolating an important theory problem and proposing a solution. The experiments are designed to test the theory. Yet, this work won the Lagrange Prize in Continuous Optimization, and had an enormous impact.
---
Reply to Comment 1.1.2:
Title: Re: Additional Clarifications (part 2)
Comment: > I understand from the statements that this might not be the goal of the reviewers, but for a top-tier conference like NeurIPS it would be expected to provide real-world impact, especially in the case where previous methods have actually been compared on runtime.
We disagree.
We believe that each work needs to be judged based on its own merits, and by the standards of the subfield/field it belongs to. Theory works need to be judged based on their theoretical breakthroughs and contributions, works that build systems should be judged on the real-world efficiency of those systems, network architecture works on the benefits the architecture brings, and so on. If we used a single parameter to judge all works (e.g., SOTA generalization performance), we would be doing a massive disservice to the community, and would effectively narrow down the scope of the field to the detriment of everybody.
NeurIPS papers are like the Olympics. Different fields have different quality standards. We can't judge all sports by the standard of one. We can't evaluate a marathon runner by the standards of a 100m sprint. If we did so, we would disqualify the marathon as a discipline, and would not be able to appreciate even a world-record-breaking marathon run. | Rebuttal 1:
Rebuttal:
We thank the reviewers for their feedback and the overall positive evaluation of our work. We are glad that the reviewers appreciate that our studied problem is “**well motivated**”, the paper is “**well-written and easy to follow**”, the analysis is “**novel, comprehensive, solid**”, the assumptions are “**reasonable and standard**”, the improvement in rates is “**very encouraging and contributes substantially to the existing work**”. Reviewers hi3u and 5FHY also appreciate our **lower bound construction** for EF21-SGD and the **linear speedup** property of the proposed algorithms.
At the same time, we took all the criticism seriously, and will soon upload a detailed response to each comment. As requested by several reviewers, we conducted additional experiments with deep neural networks and also tested the performance of the EF21-SGD2M method. We attach these results as a separate one-page PDF file below.
---
Due to the space limitation in the response to Reviewer Pi6q, we include here the description of the combination of our EF21-SGDM with the EF21-BC algorithm (from [Fatkhullin et al., 2021]) to achieve sparse communication in both directions: from the server (master) to the clients (nodes) and from the clients (nodes) to the server (master). We denote by $\mathcal C_W$ and $\mathcal C_M$ the contractive compressors at the clients and the master, respectively.
*Nodes:*
$$
x^{t+1} = x^t - \gamma g^t ,
$$
$$
v_i^{t+1} = (1-\eta) v_i^{t} + \eta \nabla f_i(x^{t+1}, \xi_i^{t+1}) ,
$$
$$
c_i^{t+1} = \mathcal C_W ( v_i^{t+1} - \widetilde g_i^{t} ) , \quad \text{send } c_i^{t+1} \text{ to master, }
$$
$$
\widetilde g_i^{t+1} = \widetilde g_i^{t} + c_i^{t+1} ,
$$
*Master:*
$$
\widetilde g^{t+1} = \widetilde g^{t} + \frac{1}{n} \sum_{i=1}^{n} c_i^{t+1} ,
$$
$$
b^{t+1} = \mathcal C_M (\widetilde g^{t+1} - g^t ) , \quad \text{send } b^{t+1} \text{ to nodes, }
$$
*Master and Nodes:*
$$
g^{t+1} = g^t + b^{t+1} .
$$
One can extend the convergence analysis of our EF21-SGDM variant to the above bidirectional scheme. For this, one should combine Lemmas 7 and 8 from [Fatkhullin et al., 2021] with Lemmas 2 and 3 in our work.
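As a sanity check on how these updates fit together, here is a minimal numerical sketch (our own illustration, not the authors' code) using top-$k$ as both contractive compressors $\mathcal C_W$ and $\mathcal C_M$, on toy quadratics $f_i(x) = \frac{1}{2}\|x - a_i\|^2$:

```python
import numpy as np

def top_k(v, k):
    """Contractive top-k compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
n, d, k = 4, 10, 3            # nodes, dimension, compressor sparsity
gamma, eta = 0.05, 0.2        # step size, momentum parameter

a = rng.normal(size=(n, d))   # node i holds f_i(x) = 0.5 * ||x - a_i||^2
x = np.zeros(d)               # model (kept in sync on master and nodes)
v = np.zeros((n, d))          # momentum estimators v_i^t
g_i = np.zeros((n, d))        # node-side shifts  \tilde g_i^t
g_agg = np.zeros(d)           # master aggregate   \tilde g^t
g = np.zeros(d)               # broadcast gradient estimate g^t

for t in range(500):
    x = x - gamma * g                                # x^{t+1} = x^t - gamma g^t
    for i in range(n):
        grad = x - a[i] + 0.01 * rng.normal(size=d)  # stochastic gradient of f_i
        v[i] = (1 - eta) * v[i] + eta * grad         # momentum update
        c = top_k(v[i] - g_i[i], k)                  # node -> master message
        g_i[i] += c
        g_agg += c / n                               # master aggregation
    b = top_k(g_agg - g, k)                          # master -> node message
    g = g + b                                        # synchronized estimate

# x should approach the minimizer of (1/n) sum_i f_i, i.e. the mean of the a_i
print(np.linalg.norm(x - a.mean(axis=0)))
```

Despite both links sending only 3 of 10 coordinates per round, the error-feedback shifts let $g^t$ track the average momentum vector, so $x^t$ should drift to a small neighborhood of the minimizer, with the neighborhood size governed by the stochastic noise and the step size.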
Pdf: /pdf/2730d4d190b11db1dd27dc340d00b48214a21813.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Reliable learning in challenging environments | Accept (poster) | Summary: The paper develops learning methods that provide theoretical guarantees for point-wise predictions in challenging scenarios at test-time. In particular, the methods presented address situations affected by adversarial attacks and distribution shifts. The paper provides a theoretical contribution to a relevant line of work on reliable learning.
Strengths: The paper presents a significant theoretical contribution on reliable learning, addressing important scenarios such as those affected by adversarial attacks and distribution shifts. In addition, the paper covers several different notions of losses and reliability. The results presented can lead to a better understanding of the possibilities and limitations of such scenarios.
Weaknesses: The main limitation I can see in the paper is the reliance on the realizable case. I guess that reliance cannot be easily avoided, but it may be worth further discussing and emphasizing that issue. For instance, the guarantees in the paper are contingent on the fact that we are in a realizable setting, which is not possible to know in most practical cases. Such an assumption is even more significant for the distribution shift case, since it constrains the possible shifts.
The paper contains many theoretical results for multiple losses, scenarios, etc. In general, it is good to have many interesting results, but the paper is quite dense, and it is difficult to get the main ideas. For instance, in Section 4, the authors mention "The optimal robustly-reliable learner described above may be implemented", but the authors hardly describe such a learner before Section 4 besides stating that it exists (it is described in the proofs). The paper would be significantly more readable (at least for readers not very familiar with the topic) if the authors first described the main ideas and results for the simplest scenario and then generalized them to the other cases.
It is not clear that the probabilities lower bounded in Theorem 3.3 are not very small or that the abstention probabilities are not very high. I think this point is important because the learners would not be very useful otherwise. It would be good if the authors could quantify those probabilities numerically. The bounds provided show that the probabilities increase when the number of samples increases and the dimension decreases, but it is not clear in what scenarios those probabilities are sizable, or even when the lower bound is larger than 0.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
Why is it that the robustly-reliable region does not contain the instances x such that r(x)=0?
I guess there is a typo in Theorem 3.1 and the subscript CA in $RR^{\mathcal{L}}_{\text{CA}}(S,\eta)$ should be omitted as in Definition 4, or be a generic W?
English usage can be improved in a few places, e.g., "Given a 0-1 loss function" should read "Given the 0-1 loss function", and "provide bounds the probability mass" should read "provide bounds for the probability mass".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors correctly describe the limitations of the results in the paper. In that sense, it would be good if the reliance on the realizable case is further discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
> 1. The main limitation I can see in the paper is the reliance on the realizable case. I guess that reliance cannot be easily avoided, but it may be worth further discussing and emphasizing that issue. For instance, the guarantees in the paper are contingent on the fact that we are in a realizable setting, which is not possible to know in most practical cases. Such an assumption is even more significant for the distribution shift case, since it constrains the possible shifts.
We believe that it is always good to first address the realizable case to build the intuition for the definitions and results. This follows a well-established pattern of developing theories in the learning theory literature. As we discuss below, extension to the non-realizable (agnostic) setting is possible, building from the definitions and concepts we have developed in the realizable case.
In fact, it is possible to relax the realizability assumption for the robustly-reliable learner. We will provide a discussion in the camera-ready version. We include a definition of an agnostic robustly-reliable learner below, along with the statement of a theorem bounding the robustly-reliable region for the true label (TL) robust loss function. Our result implies that the robustly-reliable region in the agnostic case is a slightly smaller agreement region than in the realizable case, depending on the error rate $\nu$ of the best classifier in $\mathcal{H}$.
**Definition**: A learner $\mathcal{L}$ is *$\nu$-tolerably* robustly-reliable w.r.t. $\mathcal{M}$-ball attacks for sample $S$, hypothesis space $\mathcal{H}$ and robust loss function $\ell$ if, for every concept $h^*\in\mathcal{H}$ with $\mathrm{err}\_{S}(h^*)\le\nu$, the learner outputs functions $h^\mathcal{L}\_{S}:\mathcal{X} \to \mathcal{Y}$ and $r^\mathcal{L}\_{S}:\mathcal{X} \to [0,\infty)\cup\\{-1\\}$ such that for all $x,z\in \mathcal{X}$ if $r^\mathcal{L}\_{S}(z)=\eta > 0$ and $z\in \mathbf{B}^o\_\mathcal{M}(x,\eta)$ then $\ell^{h^*}(h^\mathcal{L}\_{S},x,z)=0$. Further, if $r^\mathcal{L}_{S}(z) = 0$, then $h^*(z)=h^\mathcal{L}\_{S}(z)$.
Given sample $S$ such that some concept $h^*\in\mathcal{H}$ satisfies $\mathrm{err}\_S(h^*)\le \nu$, the robustly-reliable region of $\mathcal{L}$ is defined as $RR^{\mathcal{L}}(S,\nu,\eta)=\\{x\in\mathcal{X}\mid r^\mathcal{L}\_{S}(x) \ge \eta\\}$ for $\nu,\eta\ge 0$.
We can prove the following theorem (stated below for the true label loss $\ell\_{\text{TL}}$) which gives pointwise optimal bounds on the robustly-reliable region.
**Theorem**: Let $\mathcal{H}$ be any hypothesis class with respect to $\mathcal{M}$-ball attacks and robust loss function $\ell\_{\text{TL}}$, for $\eta\ge 0$,
- There exists a robustly-reliable learner $\mathcal{L}$ such that $RR^{\mathcal{L}}\_{\text{TL}}(S,\nu,\eta)\supseteq \text{Agree}(\mathcal{H}\_\nu(S))$,
- For any robustly-reliable learner $\mathcal{L}$, $RR^{\mathcal{L}}\_{\text{TL}}(S,\nu,\eta)\subseteq \text{Agree}(\mathcal{H}\_\nu(S))$
where $\mathcal{H}\_\nu(S) = \\{h \in \mathcal{H} \mid \mathrm{err}\_S(h)\le \nu\\}$. Furthermore, the safely-reliable region for robust loss function $TL$ is defined as $SR\_{TL}^{\mathcal{L}}(S,\nu,\eta_1,\eta_2)=\\{x\in\mathcal{X}\mid \mathbf{B}\_\mathcal{M}(x,\eta_1)\subseteq RR\_{TL}^{\mathcal{L}}(S,\nu,\eta_2)\\}$. Thus, the above theorem implies bounds on the safely-reliable region as well.
We will include the above result, along with formal proof and discussion in the camera-ready version.
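To make the agreement-region characterization concrete, here is a small 1D toy sketch (our own illustration, not from the paper): for threshold classifiers $h_t(x) = \mathbb{1}[x \ge t]$, a test point is robustly reliable exactly when all hypotheses with empirical error at most $\nu$ on $S$ agree on it.

```python
import numpy as np

# Sample S, realizable with the threshold t* = 2.0, so we take nu = 0.
X = np.array([0.5, 1.0, 1.8, 2.2, 3.0, 3.5])
y = (X >= 2.0).astype(int)
nu = 0

# H_nu(S): candidate thresholds with empirical error <= nu on S.
candidates = np.linspace(0, 4, 401)
def emp_err(t):
    return np.mean((X >= t).astype(int) != y)

H_nu = [t for t in candidates if emp_err(t) <= nu]

# A point lies in Agree(H_nu(S)) iff every surviving threshold
# assigns it the same label.
def in_agreement(x):
    return len({int(x >= t) for t in H_nu}) == 1

print(in_agreement(0.2), in_agreement(2.0), in_agreement(3.9))
```

Here the surviving thresholds lie between the innermost negative sample (1.8) and the innermost positive sample (2.2), so a point in that gap (like x = 2.0) falls outside the agreement region and would trigger abstention, while points like 0.2 and 3.9 are reliably labeled.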
> 2. The paper contains many theoretical results for multiple losses, scenarios, etc. In general, it is good to have many interesting results, but the paper is quite dense, and it is difficult to get the main ideas. For instance, in Section 4, the authors mention "The optimal robustly-reliable learner described above may be implemented", but the authors hardly describe such a learner before Section 4 besides stating that it exists (it is described in the proofs).
Thank you for your point. Indeed, the optimal learner is provided in the proof of Theorem 3.1 in Appendix. We will describe the learner in Section 3 in the camera-ready version.
> 3. It is not clear that the probabilities lower bounded in Theorem 3.3 are not very small or that the abstention probabilities are not very high. I think this point is important because the learners would not be very useful otherwise. It would be good if the authors could quantify those probabilities numerically. The bounds provided show that the probabilities increase when the number of samples increases and the dimension decreases, but it is not clear in what scenarios those probabilities are sizable, or even when the lower bound is larger than 0.
The probability $1 - \delta$ in the lower bound in Theorem 3.3 can be made arbitrarily close to 1 as the number of samples $m$ grows. Likewise, the non-abstention probability is arbitrarily close to 1 for large sample size $m$. For the safely-reliable region, this non-abstention rate depends on the robust loss, but is generally large for small $\eta_1,\eta_2$; for example, it can be close to $1 - 2(\eta_1 + \eta_2)$ for the stability loss.
Since our main purpose for the paper is developing theory, we would defer the numerical quantification of those probabilities in this simple setting and the real-world setting to future application-oriented research.
**Questions**
1. The robustly-reliable region also includes $r(x) = 0$, which is when we know that the prediction is correct. In Definition 3, we have the two cases $r(x) = \eta > 0$ and $r(x) = 0$ for the mathematical rigor of the definition.
2. Yes, this is a typo. Thank you for pointing it out.
3. Thank you for your point, we will correct the grammar accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers. I believe the paper deserves to be published, and it will be improved in the camera-ready version. Just an additional comment. I agree with the authors that starting with the realizable case is in general good to get the main intuition. However, for this paper I think it would be useful if the authors discussed such an assumption in relation to the distribution shifts, since the realizable assumption somehow constrains the shifts addressed in the paper. | Summary: The authors explore adversarial test-time attacks and distribution shift. They propose a learning algorithm with performance guarantees.
Strengths: The paper is well written with many intuitive illustrations.
The authors use refusal as a means to guarantee reliability. As noted by the authors, the trivial classifier that always refuses is reliable. Therefore, I particularly liked section 3.1, which characterizes the probability of refusal (or non-refusal). The concept of reliability radius is readily understandable.
The algorithms provided in section 4 can be efficient enough for many real-world applications.
Weaknesses: The authors' analysis is limited to the realizable case, i.e., that the true target function is a member of the hypothesis class. While this simplifies the analysis, it limits the applicability of the authors' results, since in most real-world applications we do not know whether this assumption is valid.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Can slack be added to the quadratic program in section 4 in the same manner as it is done for SVMs?
For the regularized objective in section 4, what good is a lower bound on the reliability radius? It can guarantee an unreliable prediction is unreliable, but cannot guarantee a reliable prediction is reliable. Yet it's the latter case that is needed to not refuse for reliable test points.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
> The authors' analysis is limited to the realizable case, i.e., that the true target function is a member of the hypothesis class. While this simplifies the analysis, it limits the applicability of the authors' results, since in most real-world applications we do not know whether this assumption is valid.
Thank you for your feedback. We agree that the realizable assumption may not hold in most real-world applications. Starting by studying the realizable case helps to build intuition for what definitions and results are appropriate and possible in general. This follows a well-established pattern of developing theories in the learning theory literature. Fortunately, as we discuss below, extension to the non-realizable (agnostic) setting is possible, building from the definitions and concepts we have developed in the realizable case.
We illustrate this for the true label loss $\ell_{\text{TL}}$ loss below, and remark this can be similarly done for the other loss functions studied in our work.
**Definition**: A learner $\mathcal{L}$ is *$\nu$-tolerably* robustly-reliable w.r.t. $\mathcal{M}$-ball attacks for sample $S$, hypothesis space $\mathcal{H}$ and robust loss function $\ell$ if, for every concept $h^*\in\mathcal{H}$ with $\mathrm{err}\_{S}(h^*)\le\nu$, the learner outputs functions $h^\mathcal{L}\_{S}:\mathcal{X} \to \mathcal{Y}$ and $r^\mathcal{L}\_{S}:\mathcal{X} \to [0,\infty)\cup\\{-1\\}$ such that for all $x,z\in \mathcal{X}$ if $r^\mathcal{L}\_{S}(z)=\eta > 0$ and $z\in \mathbf{B}^o\_\mathcal{M}(x,\eta)$ then $\ell^{h^*}(h^\mathcal{L}\_{S},x,z)=0$. Further, if $r^\mathcal{L}_{S}(z) = 0$, then $h^*(z)=h^\mathcal{L}\_{S}(z)$.
Given sample $S$ such that some concept $h^*\in\mathcal{H}$ satisfies $\mathrm{err}\_S(h^*)\le \nu$, the robustly-reliable region of $\mathcal{L}$ is defined as $RR^{\mathcal{L}}(S,\nu,\eta)=\\{x\in\mathcal{X}\mid r^\mathcal{L}\_{S}(x) \ge \eta\\}$ for $\nu,\eta\ge 0$.
We can prove the following theorem (stated below for the true label loss $\ell\_{\text{TL}}$) which gives pointwise optimal bounds on the robustly-reliable region.
**Theorem**: Let $\mathcal{H}$ be any hypothesis class with respect to $\mathcal{M}$-ball attacks and robust loss function $\ell\_{\text{TL}}$, for $\eta\ge 0$,
- There exists a robustly-reliable learner $\mathcal{L}$ such that $RR^{\mathcal{L}}\_{\text{TL}}(S,\nu,\eta)\supseteq \text{Agree}(\mathcal{H}\_\nu(S))$,
- For any robustly-reliable learner $\mathcal{L}$, $RR^{\mathcal{L}}\_{\text{TL}}(S,\nu,\eta)\subseteq \text{Agree}(\mathcal{H}\_\nu(S))$
where $\mathcal{H}\_\nu(S) = \\{h \in \mathcal{H} \mid \mathrm{err}\_S(h)\le \nu\\}$. Furthermore, the safely-reliable region for robust loss function $TL$ is defined as $SR\_{TL}^{\mathcal{L}}(S,\nu,\eta_1,\eta_2)=\\{x\in\mathcal{X}\mid \mathbf{B}\_\mathcal{M}(x,\eta_1)\subseteq RR\_{TL}^{\mathcal{L}}(S,\nu,\eta_2)\\}$. Thus, the above theorem implies bounds on the safely-reliable region as well.
We will include the above result, along with formal proof and discussion in the camera-ready version.
**Questions:**
> Can slack be added to the quadratic program in section 4 in the same manner as it is done for SVMs?
Yes, but with the slack, the optimal solution might not lie in the agreement region.
> For the regularized objective in section 4, what good is a lower bound on the reliability radius? It can guarantee an unreliable prediction is unreliable, but cannot guarantee a reliable prediction is reliable. Yet it's the later case that is needed to not refuse for reliable test points.
We can use the lower bound for the reliability guarantee. However, providing the bound on the difference between the lower bound and the actual reliability radius would be an interesting future research direction.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: I have read the authors' rebuttal. I feel they have adequately addressed my questions and concerns. I am leaning toward leaving my score as it is. | Summary: This paper presents robustly-reliable learners with optimal guarantees for environments where the training and test data are not drawn from the same distribution, e.g., natural distribution shift and adversarial attacks during test time. The main idea is that for a given point, the robustly-reliable learner either outputs a prediction and a reliability region, or abstains from prediction. The prediction is guaranteed to be correct as long as the test-time perturbation is constrained to this reliability region.
Strengths: * Designing reliable learners with guarantees for challenging environments is highly important and relevant to both the theoretical and applied machine learning community.
* It seems likely that other researchers will find relevant the reliability criterions and tools developed in the paper.
* The approach appears to be novel and technically sound. The claims of the paper are well-supported by extensive proofs.
* The submission is well organized and clearly written overall.
Weaknesses:
* A computationally efficient method is presented only for the case of linear separators. It is not clear how easily the presented tools can be used to obtain practical algorithms for more general cases (e.g., neural networks) in practice.
* An evaluation on a simple synthetic scenario to demonstrate the empirical effectiveness of the approach would make the paper more compelling. This is a minor point given that this is clearly a theory paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Is there an efficient method for other types of classifiers and loss functions, such as neural networks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > A computationally efficient method is presented only for the case of linear separators. It is not clear how easily the presented tools can be used to obtain practical algorithms for more general cases (e.g., neural networks) in practice.
An evaluation on a simple synthetic scenario to demonstrate the empirical effectiveness of the approach would make the paper more compelling. This is a minor point given that this is clearly a theory paper.
Thank you for your comment. We agree that a computationally efficient method for a general hypothesis class such as neural networks is an interesting direction. Since our main purpose for the paper is developing theory, we would defer the empirical evaluation of the methods and extension to neural networks to future application-oriented research.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I read the other reviews and your responses to them. I lean towards keeping my score unchanged. | Summary: This paper studies the problem of classification under four different kinds of adversarial loss functions:
1) ST which I think is by far the most popular adversarial loss. This is same as the expected sup loss of [Madry et al. 2018](https://arxiv.org/abs/1706.06083).
2) TL, which is equivalent to the "exact in the ball" risk of [Gourdeau et al](https://www.jmlr.org/papers/volume22/20-285/20-285.pdf) or the "error region risk" of [Diochnos et al](https://proceedings.neurips.cc/paper_files/paper/2018/file/3483e5ec0489e5c394b028ec4e81f3e1-Paper.pdf)
3) IA which is the same as ST but where the adversary only perturbs points belonging to one of the two labels
4) CA loss which is the same as TL with the additional constraint that the true label does not change at the perturbed point.
For each type of loss, the paper establishes the "optimal robustly reliable region". Here, the classification is deemed reliable if the classification is correct on the _perturbed point_. The paper shows that these regions can be computed efficiently for simple hypothesis classes like linear classifiers. The paper also proves lower bounds on the "safely-reliable region" for linear separators under log-concave distributions, where a point is classified safely reliably at $\eta_1, \eta_2$ levels if it can be reliably classified at the $\eta_2$ level even when perturbed in a ball of radius $\eta_1$.
Finally, the paper also lower bounds the amount by which the reliability region can change under a distribution shift from P to Q in terms of a $P\to Q$ disagreement coefficient.
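The difference between the ST and TL losses summarized above can be seen in a tiny 1D sketch (our own illustration, not code from the paper under review): with threshold classifiers and interval perturbations of radius $\eta$, ST compares the predictions in the ball against the original point's true label, while TL penalizes any disagreement with $h^*$ inside the ball.

```python
import numpy as np

h     = lambda z: int(z >= 1.0)   # learned threshold classifier
hstar = lambda z: int(z >= 1.2)   # ground-truth concept

def ball(x, eta, step=0.01):
    return np.arange(x - eta, x + eta + step / 2, step)

def loss_ST(x, eta):
    # sup over the ball of the 0-1 loss against the ORIGINAL true label
    y = hstar(x)
    return int(any(h(z) != y for z in ball(x, eta)))

def loss_TL(x, eta):
    # "exact in the ball" / error-region loss: does h disagree with h*
    # anywhere in the ball?
    return int(any(h(z) != hstar(z) for z in ball(x, eta)))

# At x = 1.25 with eta = 0.1: h matches the true label of x everywhere
# in the ball, yet disagrees with h* near z = 1.15.
print(loss_ST(1.25, 0.1), loss_TL(1.25, 0.1))
```

TL is the stricter notion here: it flags the misclassification of the perturbed point near z = 1.15 relative to $h^*$, while ST, which anchors to the original point's label, does not.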
Strengths: - Generality: The results of the paper hold for 4 kinds of adversarial loss functions, each with different use-cases.
- Nice result on reliable learning under distribution shift: Theorem 5.3 proves a connection between reliable learning under distribution shift, and a previously studied notion called the disagreement coefficient. This connection appears interesting and non-trivial.
Weaknesses: - Writing: The paper is very difficult to read. There are many new definitions but very few illustrations / examples. The paper seems to have been written in a hurry. Some of the results that are listed under main contributions in section 1.1 only appear in the appendix. Section 6 is super short - it only presents one new definition, followed by an example satisfying the definition, the details of which are relegated to the appendix.
- Limited contributions: The paper makes limited contributions on the "safely-reliable" notion of classification (Definition 6), which I think is much more important than the notion of "reliable" classification (Definitions 3, 4). Theorem 3.3 on the probability mass of the reliable region only holds for the special case of linear separators with isotropic log-concave distributions. Section 6 on safely-reliable correctness under distribution shift establishes the safely-reliable correctness again only for the special case of linear separators with isotropic log-concave distributions. Further, the shifted distribution is also assumed to shift only in mean, while the covariance matrix remains the identity matrix. Section 4 also focuses on simple settings like linear separators.
- Limited applicability of the notion of "reliability": I want to stress again that the "safely-reliable" notion is more important than the "reliable" notion. The importance of the safely-reliable region over the reliable region is stated by the authors themselves in lines 216-218: "...we note that the probability mass of the robustly-reliable region may not be a meaningful way to quantify the overall reliability of a learner because a perturbation may lie outside of the support of the natural data distribution and have zero probability mass." Basically, a classifier can be robustly reliable at z with level $\eta$ even if it predicts a different label at a slightly perturbed point x such that d(x, z) < $\eta$. Hence, reliability does not ensure robustness against adversarial attacks. Literature on certified robustness (for example, randomized smoothing, interval bound propagation) all focuses on something like the "safely-reliable" notion rather than the reliable one. In lines 91-94, the paper says "Prior works have examined pointwise consistency guarantees [SKL17, CRK19, WLF22], i.e. the classifier’s prediction is guaranteed not to have changed under an attack. In contrast, we study a much more desirable property of reliability—guaranteeing that the prediction of the algorithm is correct." I would like to see the paper make a stronger case advocating for the study of reliability than the one given here.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Definition 7 is unclear: What is z in line 296? I think it should be any x' in U(x).
- I would recommend cutting down on the size of section 1.1: summary of contributions.
- It would be good to have more examples to accompany the definitions, for instance contrasting the reliable vs safely-reliable notions.
I would also like the authors to comment on my view of the paper in the "weaknesses" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness**
> 1. Writing:
We will use the extra page in the camera-ready version to bring into the main body some of the results that currently appear in the Appendix. Concerning illustrations, we already have four illustrations in the main body, and in fact the other reviewers appreciated the clarity and the many examples: “The submission is well organized and clearly written overall.” (Reviewer uQQ5) and “The paper is well written with many intuitive illustrations.” (Reviewer 8X7Z).
> 2. Limited contributions:
We respectfully disagree on the limited contribution of our paper. Our main contributions are numerous, and we are happy to clarify this in the camera-ready version. First, we come up with the right definitions to express our concepts, in particular, the robustly-reliable learner (Definition 3), the robustly-reliable region (Definition 4) and the safely-reliable region (Definition 6). Given these definitions, we work out the abstract results: Theorem 3.1 provides an optimal characterization of the robustly-reliable region for any hypothesis class (which further implies optimality of the safely-reliable region), and Theorem 5.1 for safely-reliable correctness is also applicable to general settings. Additionally, we have numerous examples. Theorem 3.3 in the main paper is one concrete example showing that the safely-reliable region can be large. We also have additional results for a much more general class of classifiers with smooth boundaries in Appendices E and G, for both the safely-reliable region (which implies a bound on safely-reliable correctness) and reliability under distribution shift.
In particular, we address individual points below.
- “The paper makes limited contributions on the "safely-reliable" notion of classification (Definition 6), which I think is much more important than the notion of "reliable" classification (Definition 3, 4).”
We disagree with this characterization. Definition 6 implies that our tight characterization of the optimal robustly-reliable region extends to the safely-reliable region, as they are closely related for each robust loss function. Moreover, our examples that compute the probability masses focus on the safely-reliable region instead of the robustly-reliable region (Theorem 3.3, Appendix E).
- “Theorem 3.3 on probability mass on reliable region only holds for the special case of linear separators with isotropic log-concave distributions.”
We also provide bounds on the probability mass of the safely-reliable region for general smooth classifiers (Lines 263-264, Appendix E, Theorem E.3).
- “Section 6 on safely-reliable correctness under distribution shift establishes the safely-reliable correctness again only for the special case of linear separators with isotropic log-concave distributions.”
By Definition 10, our previous results on the safely-reliable region of smooth boundary classifiers also extend to the “safely-reliable correctness” in Section 6. We mention the mean-shift example to highlight the advantage over previously studied measures like $\mathcal{H}$-divergence and the discrepancy distance.
- “Section 4 also focuses on simple settings like linear separators.”
We extend beyond the linear separator to a wide range of hypothesis classes in Appendix F.
> 3. Limited applicability of the notion of "reliability":
We would like to clarify this major misunderstanding between the robustness and reliability guarantee. The prior work on certified robustness [SKL17, CRK19, WLF22] is different from our proposed notion of safely-reliable. The certified robustness guarantee is only that a prediction does not change with an adversarial perturbation, but it does not guarantee that the prediction is correct (neither for the original point nor the perturbation); in particular, a constant function is always certified robust but it may not be useful. In contrast, a robustly-reliable learner guarantees that, for any test point $x$ and perturbation $z$, if $z$ has distance less than $\eta$ to $x$ ($\eta$ = reliability radius), then the prediction will be “correct” (robust loss zero) in a sense informed by which robust loss we are addressing; we discuss this idea for several different losses, leading to different interpretations of this guarantee. For the stability loss, the prediction being “correct” means that it predicts the true label of the original point $x$; in particular, this implies certified robustness, but is even stronger, since it also guarantees the correct label. For the “true label” loss, being “correct” means that it predicts the true label of the perturbation $z$. For the “constrained adversary” loss, being “correct” means predicting the true label for both $x$ and $z$ assuming the adversary would only perturb $x$ to $z$ if they have the same true label.
To summarize, for a given robust loss function, a robustly-reliable region (Definition 4) is a region where points in this region are guaranteed to have a zero robust loss (to be “correct”). A safely-reliable region further guarantees that even after a perturbation of some distance $\eta_1$, the point is still in the robustly reliable region for some radius $\eta_2$. So being safely-reliable means the algorithm is robust against adversaries trying to perturb a point $x$ to a $z$ where the learner is less “confident” (i.e., outputs a small reliability radius).
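To make the contrast concrete, the two regions can be written out as follows. This is a paraphrase in simplified notation, not the exact statements of Definitions 4 and 6 in the paper, which take precedence:

```latex
% Robustly-reliable region at radius \eta (paraphrasing Definition 4):
% points whose robust loss is guaranteed to be zero under perturbations
% of magnitude at most \eta.
\mathrm{RR}(\eta) = \bigl\{\, x : \ell_{\mathrm{rob}}(h, x, z) = 0
    \ \ \forall z \text{ with } d(x, z) \le \eta \,\bigr\}

% Safely-reliable region (paraphrasing Definition 6): points that remain
% in the robustly-reliable region of radius \eta_2 even after an
% adversarial perturbation of magnitude at most \eta_1.
\mathrm{SR}(\eta_1, \eta_2) = \bigl\{\, x : z \in \mathrm{RR}(\eta_2)
    \ \ \forall z \text{ with } d(x, z) \le \eta_1 \,\bigr\}
```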
**Questions**
1. Thank you for pointing this out, this is a typo. It is supposed to be $h(x) = h^*(x)$.
2. We will address this in the camera-ready version.
3. Thank you for your comment. The safely-reliable region is indeed the same as the reliable region when there is no adversarial perturbation. We will make this clearer in the final version.
---
Rebuttal 2:
Title: please take a look at the author response
Comment: Thank you. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Exact Verification of ReLU Neural Control Barrier Functions | Accept (poster) | Summary: This paper studies the exact conditions under which a learned CBF $b$ with ReLU activations yields the positive invariance property. Even though the set of inputs for which the CBF is non-differentiable has measure zero, the paper shows by example that safety can be still violated. This is because the slope of $b(x)$ can be discontinuous at the boundary $b(x) = 0$. To address this, the paper derives a theoretical result (Proposition 1) that provides a necessary and sufficient condition for a valid ReLU CBF. Based on this central result, the paper presents a verification algorithm that consists of state space discretization and search, activation set enumeration, and solving nonlinear programs. The experiments show faster safety verification for low-dimensional systems than SMT-based approaches.
Strengths: * ReLU is a popular activation function that is often used for constructing feedforward neural networks. The paper addresses the important theoretical problem of verifying learned CBFs with ReLU activations.
* The paper provides a novel, central theoretical result (Proposition 1) that gives necessary and sufficient conditions for a given ReLU neural CBF to be valid.
* As shown in Section 5, the proposed verification algorithm yields more efficient run-time than SMT-based approaches.
Weaknesses: * From a practical point of view, an engineer or a researcher can always choose a differentiable activation function to design a neural CBF, such as tanh or softplus, so that the CBF remains differentiable. This raises the motivational question of why we should use ReLU activations for modeling CBFs in the first place. Specifically, the proposed approach seems to possess several disadvantages compared to learning differentiable CBFs and verifying them. 1) The verification algorithm involves state-space discretization and activation set enumeration, which scale poorly with the state-space dimensionality and the size of the neural network, respectively. In particular, the enumeration time grows substantially as the network becomes more complex, as observed in Table 2. 2) The online optimization problem for control (i.e. equation 10) requires solving possibly multiple quadratic programs, adding complexity to the standard CBF-QP.
* The presentation can be improved so that the mathematical descriptions are easier to follow. In particular, it is recommended that the paper 1) clearly define the dimensionality of every vector- and matrix-valued variable introduced in the paper, such as $W_i$, $W_{ij}$; 2) clearly define any non-trivial variables in the main statement of mathematical results. For instance, the definitions of $\bar{W}_0$ and $\bar{r}_0$ should be provided in the main statement of Lemma 1, not in the proofs in the appendix.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * What are the theoretical and practical advantages of using ReLU activations over differentiable activations (as done in [1], for example) for modeling neural CBFs?
* Shouldn't the last formula in the proof of Lemma 1 use $W_{Lj}$, not $W_{ij}$?
[1] Dawson, Charles, Zengyi Qin, Sicun Gao, and Chuchu Fan. "Safe nonlinear control using robust neural lyapunov-barrier functions." In Conference on Robot Learning, pp. 1724-1735. PMLR, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: * The most concerning limitation is the scalability of the verification method. Even if the state space has a low-dimensionality, the usage of more layers or neurons can lead to combinatorial explosion of the runtime complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We conducted an experimental study to compare NCBFs with different activation functions, including ReLU, sigmoid, and tanh. The results are shown in the pdf attachment to the general rebuttal, and are discussed in detail in the rebuttal to Reviewer wvvT above. In particular, we find that our proposed algorithm for verifying ReLU-activated NCBFs is more computationally tractable compared to SOTA algorithms for verifying NCBFs with differentiable activation functions.
The reviewer is correct that the online optimization-based control requires solving multiple quadratic programs in the worst-case. However, there will only be multiple quadratic programs at points where the NCBF is nondifferentiable, or equivalently, where the pre-activation input to one of the neurons in the NCBF is exactly zero. In order to evaluate the complexity of the CBF-QP, we compared the computation time of (i) Eq. (10) with a NCBF consisting of one hidden layer of 32 neurons and (ii) a CBF-QP using a degree-two polynomial CBF. Averaging over 300 iterations, we found that the NCBF-based controller had a runtime of 0.0015s while the polynomial CBF-based controller had a mean runtime of 0.0012s.
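For reference, the standard single-constraint CBF-QP mentioned above admits a closed-form solution, which is why the per-step cost stays small. The following is a hypothetical minimal sketch for a scalar control input (it is not the paper's Eq. (10), which may require one QP per activation set at non-differentiable points); the class-K gain `alpha` is an assumed parameter:

```python
import numpy as np

def cbf_qp_filter(u_nom, grad_b, f_x, g_x, b_x, alpha=1.0):
    """Single-constraint CBF-QP safety filter for a scalar control input.

    Solves  min_u (u - u_nom)^2  s.t.  grad_b . (f(x) + g(x) u) + alpha * b(x) >= 0
    in closed form by projecting u_nom onto the safe half-space. Hypothetical
    sketch of the standard CBF-QP, not the paper's Eq. (10).
    """
    a = float(grad_b @ f_x) + alpha * b_x  # constant term of the affine constraint
    c = float(grad_b @ g_x)                # coefficient multiplying u
    if a + c * u_nom >= 0:                 # nominal input already satisfies the CBF condition
        return u_nom
    if abs(c) < 1e-12:                     # constraint does not depend on u: infeasible
        raise ValueError("CBF constraint cannot be enforced by the input")
    return -a / c                          # boundary of the half-space: closest safe input
```

The filter returns `u_nom` whenever the constraint already holds and otherwise clips it to the constraint boundary, so the online cost is one inner product and one division per differentiable point.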
$W_{ij}$ has dimensionality $n \times 1$ for $i=1$ and $M_{i-1} \times 1$ for $i > 1$, so that $W_{i}$ is an $n \times M_{i}$ matrix for $i=1$ and an $M_{i-1} \times M_{i}$ matrix for $i>1$. We will clearly define matrix and vector dimensions throughout the paper, as well as provide definitions of $\overline{W}\_{0}$ and $\overline{r}\_{0}$ in the main statement of the lemma. The reviewer is correct regarding the last formula in the proof of Lemma 1. It should be $W_{Lj}$.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for your convincing response. The additional experiments are interesting in that the ReLU activation results in faster learning and verification than other commonly-used functions, such as tanh. This indeed motivates the use for ReLU activations for NCBFs.
Based on Table 2 and 3 in the new PDF, it seems that the proposed algorithm has better scalability than existing methods, which is an encouraging result.
Thank you also for clarifying the condition under which the worst-case solve time for QP occurs.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and effort providing feedback on the manuscript and rebuttal. | Summary: Neural control barrier functions offer wider expressive power but do not satisfy the continuously differentiable assumption with ReLU activation functions. In this paper, the authors propose a method to verify that a neural ReLU CBF is a valid CBF and is hence capable of rendering a set forward-invariant. This method derives its foundations by extending the conditions of Nagumo’s theorem. These observations provide a set of necessary and sufficient conditions for verification. Algorithmically, these conditions are verified by a combination of linear relaxation to do a coarse analysis followed by interval bound propagation to fine-tune the regions of violation. Then, two non-linear programs are used to check for the CBF conditions. The final verification engine runs faster than existing SMT methods and works correctly on three benchmarks: Darboux, Obstacle avoidance and Spacecraft.
Strengths: The theoretical conditions for verification and the proposed algorithm are novel, principled, creative and non-trivial. Previous works typically gloss over the points of non-differentiability in energy functions such as CLF/CBF. Hence, I believe this is an important contribution which can be extended further in future work.
Weaknesses: 1) The presentation needs to be improved and there are several minor mistakes which put off the reader.
- Line 243: missing reference
- Line 181: mistake in the definition of complete collection
- Line 439: what definition of distance is used in the definition of the tangent cone?
- Multiple typos in Lemma 5 in the appendix
2) If the set of non-differentiable points has measure zero, it will not actually affect the practical safety filtering application as the system can shoot through those points.
3) For the experiments section, it would be interesting to also plot the points of non-differentiability in Figure 2. The current result comparing to an improperly trained CBF is rather expected. Showing the enhancement of safety verification/safety filtering by virtue of considering non-differentiable points would enhance the contribution.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) In proposition 1, equation (11), is the set difference explicitly needed given that the collection of sets $\{ S_1, \dots, S_r \}$ is defined as complete.
2) What is the meaning of equation (15) in line 186? Why can that not just be written as $\mathcal{D} \subset \mathcal{C}$ ?
3) For the procedure discussed in figure 1, I did not find a lot of background material in the appendix on exactly how IBP is used to enumerate activated sets and which linear program is used for pruning. I recommend a discussion on this in greater detail in the Appendix similar to Section 7.7 for the non-linear program. This is the phase that is seen to be time-consuming from Table 2 and needs to be justified.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The societal impact of this work is likely positive.
Limitations on the scalability of the approach to higher dimensional systems and deeper networks is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for identifying some minor mistakes in the manuscript. The missing reference refers to Eq. (27) from the supplementary material and will be corrected. The definition of complete collection will be modified to $\cdots \cap \overline{\mathcal{X}}(\mathbf{S}^{\prime})$. Distance is defined by $\mbox{dist}(x,\mathcal{C}) = \min{\\{||x-z|| : z \in \mathcal{C}\\}}$, where $||\cdot||$ is a $p$-norm for some $p \in [1,\ldots,\infty]$. In Lemma 5, the definition of $J(x)$ should be $J(x) = \\{k: q_k(x) = 0\\}$, while the following equation should be $\mathcal{T}\_{\mathcal{A}}(x) = \\{z : z^{T}\nabla q_{k}(x) \leq 0 \ \forall k \in J(x)\\}.$
The reviewer is correct that the set of non-differentiable points will have measure zero. As shown in the example on Page 4, although this set has measure zero, safety of the CBF cannot be guaranteed if the non-differentiable points fail the conditions (7)--(9). When applying the safety filter at runtime, the optimization problem (10) reduces to a standard CBF-based quadratic program, except at the measure-zero set of non-differentiable points. If non-differentiable points occur in the interior of the region $\mathcal{D} = \\{x : b(x) \geq 0\\}$, we conjecture that it may be possible to shoot through them as suggested by the reviewer. Safety violations may occur if the controller attempts to shoot through non-differentiable points at the boundary, as shown in the following example.
Consider the setting of the example on Page 4 of the manuscript. Let $b_{c}$ denote the NCBF defined in the example, which fails our defined safety conditions. For comparison, we trained an NCBF $b_{\theta}$ and verified it using our proposed approach. We then constructed a nominal controller $\mu_{nom}$ as a Linear Quadratic Regulator (LQR) controller that drives the system from the initial point $(0,0.1)$ to the origin. We compared the trajectories arising from the optimization-based controller defined by Eq. (10) using $b_{\theta}$ and $b_{c}$. For the unsafe NCBF $b_{c}$, the optimization-based controller is unable to satisfy the safety constraints at the boundary point $(0,1)$, resulting in a safety violation as described in the manuscript. On the other hand, while the NCBF $b_{\theta}$ contained multiple non-differentiable points, it is possible to choose $u$ to ensure safety at these points. For example, the point $(-0.19, 2.91)$ is a non-differentiable point on the boundary $b_{\theta} = 0$. There are four activation sets intersecting at this point, with corresponding values of $\frac{\partial b_{\theta}}{\partial x}g(x)$ given by $\\{-0.0455, -0.053, -0.025, -0.033\\}$. Since any control input $u$ with negative sign and sufficiently large magnitude will satisfy $\frac{\partial b_{\theta}}{\partial x}(f(x)+g(x)u) \geq 0$ for all of these values, this non-differentiable point does not compromise safety of the system, and the trajectory of the system constrained by $b_{\theta}$ remains in the safe region for all time.
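The feasibility check at such a point can be illustrated with a small helper: for a scalar input $u$, each activation set $\mathbf{S}$ contributes a half-line constraint $\nabla b_{\mathbf{S}}(x) \cdot f(x) + \nabla b_{\mathbf{S}}(x) \cdot g(x)\,u \geq 0$, and a safe $u$ exists if and only if the intersection of these half-lines is non-empty. This is a hypothetical sketch, not code from the paper, and the drift terms in `a_list` below are made-up values (the example reports only the $\frac{\partial b}{\partial x}g(x)$ coefficients):

```python
import math

def safe_input_interval(a_list, c_list):
    """Intersect the half-line constraints a_S + c_S * u >= 0 over all activation
    sets S meeting at a non-differentiable boundary point (scalar input u).

    Here a_S stands for grad b_S(x) . f(x) and c_S for grad b_S(x) . g(x).
    Returns (lo, hi), the interval of safe inputs, or None if it is empty.
    Hypothetical illustration, not the paper's verification code.
    """
    lo, hi = -math.inf, math.inf
    for a, c in zip(a_list, c_list):
        if c > 0:
            lo = max(lo, -a / c)   # constraint reads u >= -a/c
        elif c < 0:
            hi = min(hi, -a / c)   # constraint reads u <= -a/c
        elif a < 0:                # c == 0: constraint reduces to a >= 0, which fails
            return None
    return (lo, hi) if lo <= hi else None
```

With the four negative coefficients from the example and small assumed drift terms, the returned interval is unbounded below, matching the observation that any sufficiently negative $u$ is safe.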
The reviewer is correct that the set difference is not needed in Eq. (11). It will be removed. Eq. (15) is presented to specify the conditions that must hold for all activation set $\mathbf{S}$. After taking the union over $\mathbf{S}$, the statement is equivalent to $\mathcal{D} \subset \mathcal{C}$ as pointed out by the reviewer.
Interval bound propagation aims to compute an interval of possible output values by propagating a range of inputs layer-by-layer, and is integrated into our approach as follows. We first partition the state space into cells and, for each cell, use CROWN to derive upper and lower bounds on the value of $b(x)$ when $x$ takes values in that cell. When the interval of possible $b(x)$ values in a cell contains zero, we conclude that the cell may intersect the boundary $b(x) = 0$. For each neuron, we use IBP to compute the pre-activation input interval for values of $x$ within the cell. When the pre-activation input has a positive upper bound and a negative lower bound, we identify the neuron as unstable, i.e., it may be either positive or negative for values of $x$ within the cell. Using this approach, we enumerate a collection of activation sets $\tilde{\mathcal{S}}$. We then identify the activation sets $\mathbf{S} \in \tilde{\mathcal{S}}$ such that $b(x) = 0$ for some $x \in \overline{\mathcal{X}}(\mathbf{S})$ by searching for an $x$ that satisfies the linear constraints in (16). This approach uses CROWN and IBP to identify the activation regions that intersect the boundary $\\{x: b(x) = 0\\}$ without enumerating and checking all possible activation sets, which would have exponential runtime in the number of neurons in the network. We will add a section to the appendix that elaborates on IBP and its use in our verification algorithm.
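A minimal sketch of the IBP step described above, written for a hypothetical list of layer weight matrices (the actual implementation additionally uses CROWN for tighter linear-relaxation bounds on $b(x)$):

```python
import numpy as np

def ibp_unstable_neurons(weights, biases, x_lo, x_hi):
    """Flag 'unstable' ReLU neurons via interval bound propagation (IBP):
    neurons whose pre-activation interval straddles zero for inputs in the
    box [x_lo, x_hi]. Minimal illustrative sketch of the enumeration step.
    """
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    unstable = []
    for W, bias in zip(weights, biases):
        # Interval arithmetic for the affine map W @ x + bias:
        # positive weights pair lower with lower; negative weights swap bounds.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        pre_lo = Wp @ lo + Wn @ hi + bias
        pre_hi = Wp @ hi + Wn @ lo + bias
        unstable.append(np.flatnonzero((pre_lo < 0) & (pre_hi > 0)))
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)  # apply ReLU
    return unstable
```

Each entry of the returned list gives the indices of neurons in that layer whose activation pattern is undetermined over the cell; the candidate activation sets for the cell are then the sign assignments over exactly these neurons.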
---
Rebuttal Comment 1.1:
Title: Read author response
Comment: I have read the author's response to my comments. The response is convincing. Further, additional supplementary experiments have been provided to show that ReLU activations are more beneficial and can potentially give a larger RoA. These results are encouraging. The only concern that remains for me are the tractable relaxations for verification that have recently come to my knowledge through the other reviewers.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the detailed feedback and for considering our rebuttal. We emphasize that the relaxations considered in the related works highlighted by Reviewer U7BK were developed for discrete-time systems and are not applicable to the continuous-time setting of our manuscript. Verification of continuous-time NCBFs raises new technical issues, for example, the non-differentiability of the NCBF with ReLU activation, that we address in the paper. | Summary: The authors consider the problem of synthesising control barrier functions parametrised as ReLU neural networks for non-linear deterministic dynamical systems. The authors first extend standard approaches for synthesising control barrier functions to the case where the barrier function is non-differentiable in a zero-measure set. Then, they encode the synthesis of a neural control barrier function as a nonlinear optimisation problem and illustrate the effectiveness of the approach on three benchmarks
Strengths: - The theory is sound and the extension of standard results for synthesizing control barrier functions to non-differentiable barriers is surely of interest
- Paper is overall well written and the problem considered of interest
Weaknesses: - The main weakness is the scalability of the approach with respect to the complexity of the neural barrier function. While I acknowledge that the algorithm is superior in terms of scalability to SMT-based approaches, the experimental results are still limited to neural networks of 1 hidden layer and 20 neurons at most
- Some of the statements in the related works are not precise. In fact, the authors claim that it is not possible to use convex programming to synthesize/verify neural barrier functions. This is not accurate, in fact recent approaches rely on piecewise linear uncertain relaxations of neural networks to encode the problem of verifying neural barrier functions using linear programming [Mathiesen, Frederik Baymler, et al. "Safety certification for stochastic systems via neural barrier functions." IEEE Control Systems Letters 7 (2022): 973-978.] or SDP [Mazouz, Rayan, et al. "Safety guarantees for neural network dynamic systems via stochastic barrier functions." Advances in Neural Information Processing Systems 35 (2022): 9672-9686.]. Of course, the resulting approach will be more conservative compared to the one proposed by the authors, but more scalable. Also, please expand the discussion of why approaches employed to synthesize neural Lyapunov functions, e.g. [Abate, Alessandro, et al. "Formal synthesis of Lyapunov neural networks." IEEE Control Systems Letters 5.3 (2020): 773-778], cannot be employed in the setting of this paper
- To demonstrate the importance of using neural control barrier functions, I believe that in the experiments there should be at least one experiment where the authors compare with the standard approaches commonly employed to synthesise control barrier functions, e.g. parametrising them as a SoS polynomial. This should serve to empirically demonstrate the advantages in being able to make use of the flexibility of neural networks in the context of barrier functions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please, see Weaknesses Section and in addition consider the following points:
- Equation (7) and (8) quantifies over pairs $(i, j)$ of the set $\mathbf{T}(x) \cap \mathbf{S}$. However, $\mathbf{T}(x)$ is the set of unstable neurons produced by input $x$, hence it is unclear what pairs the quantification is referring to. Is it any two unstable neurons in the set $\mathbf{T}(x) \cap \mathbf{S}$?
- Set $\bar{\mathcal{X}}(\mathbf{S})$ is defined both inside Lemma 1 and right after the Lemma. Please, be consistent
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please, see Weaknesses Section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: In order to evaluate our approach on more complex neural networks, we conducted additional experiments, as shown in Tables 2 and 3 of the pdf attachment to the general rebuttal. We verified an NCBF for the two-dimensional Darboux example with two hidden layers of 512 neurons each; the verification terminated successfully in 620 seconds. We verified an NCBF with two hidden layers of 64 neurons each for the three-dimensional obstacle avoidance example in 3 hours, 15 minutes. Finally, we verified an NCBF with three hidden layers, each containing 32 neurons, for a six-dimensional spacecraft rendezvous problem within four hours. For each of these networks, the state-of-the-art verification algorithms dReal and Z3 did not terminate within three hours.
The reviewer is correct that prior works have developed computationally tractable relaxations of the problem of verifying neural Lyapunov and barrier functions, whereas the aim of our paper is to develop exact conditions and algorithms. Moreover, we note the following key differences between our problem setting and the problems studied in the works identified by the reviewer. First, the prior works consider a discrete-time setting. Developing analogous conditions in continuous time requires addressing the non-differentiability of the gradient of the neural network activation functions, which is a contribution of the present paper. Second, the prior works assume that a control law has been given, while our proposed approach can be applied as a safety filter to an arbitrary given controller via the optimization problem (10).
To reduce confusion, we will change the last sentence of the first paragraph of the related work to “However, SOS-based approaches for polynomial CBFs cannot be applied directly to NCBF verification, since activation functions used in neural networks are not polynomial and may be non-differentiable.” We will also add a sentence to the second paragraph of the related work that reads “Piecewise linear approximations of ReLU neural networks have been used to develop tractable safety verification algorithms using linear and SOS programming. This approach leads to sound and incomplete verification algorithms, whereas the present paper proposes exact verification algorithms. Moreover, these existing works apply to discrete-time systems with a priori given controllers.” We will add citations to the papers suggested by the reviewer.
We compared the NCBF with traditional SOS-synthesized polynomial CBFs for the obstacle avoidance case study in two respects, namely, training time $T_t$ and the volume $V$ of the guaranteed safe region. In order to synthesize the polynomial CBFs, we adopted the procedure of “SOSTOOLS and its Control Applications” (Prajna et al.). This procedure first constructs a nominal controller $\mu(x)$, and then uses SOS programming to construct a barrier certificate for the system $\dot{x}(t) = f(x) + g(x)\mu(x)$. We chose $\mu(x)=-x_{3}$ as the nominal controller and synthesized CBFs of degrees 2, 4, 6, 8, and 10 using the Matlab SOSTOOLS toolbox. We compared the results with an NCBF with one hidden layer of 32 neurons trained using the method proposed in [34] with the same nominal controller.
The experiment results are shown below. The time for SOS CBF synthesis generally grows with the degree of the barrier function; the degree-10 CBF takes roughly twice as long to synthesize as the NCBF. On the other hand, the NCBF outperforms all SOS-synthesized CBFs by having the largest safe-region volume.
| CBF Type | $T_t \ (s)$ | $V \ (m^2\times deg)$ |
|----------------------|-------------|-----------------------|
| NCBF 3-32-$\sigma$-1 | 262.89s | 37.76 |
| SOS Degree 2 | 7.36s | 16.14 |
| SOS Degree 4 | 6.65s | 13.44 |
| SOS Degree 6 | 19.88s | 31.36 |
| SOS Degree 8 | 125.10s | 25.93 |
| SOS Degree 10 | 551.31s | 19.99 |
In Eqs. (7) and (8), $(i,j)$ refers to the $j$-th neuron at the $i$-th layer. We will remove redundant definitions from the final manuscript.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for the rebuttal and the additional experiments, which clarified some of my doubt and the contributions. Consequently, I increase my score to weak accept.
- We will also add a sentence to the second paragraph of the related work that reads “Piecewise linear approximations of ReLU neural networks have been used to develop tractable safety verification algorithms using linear and SOS programming. This approach leads to sound and incomplete verification algorithms, whereas the present paper proposes exact verification algorithms. Moreover, these existing works apply to discrete-time systems with a priori given controllers.
I would put the emphasis especially on the fact that you focus on exact verification algorithms, while these relaxations will only produce sound and incomplete results and they have only be applied for discrete-time systems. In fact, controller synthesis for these alternative approaches has been considered, see e.g., Section 5 in [Mazouz, Rayan, et al. "Safety guarantees for neural network dynamic systems via stochastic barrier functions." Advances in Neural Information Processing Systems 35 (2022): 9672-9686.]
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. We will follow the reviewer's recommendation when revising the related work and presenting the contributions of the paper. | Summary: In this paper, the authors present a new strategy for verifying neural control barrier functions (NCBFs) to ensure safe control of nonlinear systems. Specifically, the authors address the challenge posed by NCBFs when ReLU activation functions are employed. This renders existing verification strategies inapplicable due to the non-differentiability of resulting NCBFs. The authors tackle this issue by introducing conditions for a ReLU NCBF to satisfy \emph{positive invariance}, which forms the basis for constructing safe control policies. The proposed verification algorithm enumerates \emph{activation sets} and individually verifies their positive invariance defined based on the positivity of the activation outputs. The paper demonstrates the effectiveness of the proposed approach by comparing it with Satisfiability Modulo Theory (SMT) based methods across three control problems.
Strengths: - The paper contributes to the field by introducing a new verification algorithm and control strategies that apply to NCBFs using ReLU activation functions. Existing verification approaches are not directly applicable, and the authors address this limitation by introducing a new characterization of active sets and a corresponding verification algorithm. This approach has the potential to broaden the application domain of neural control barrier functions.
- The paper appears to be technically sound.
Weaknesses: - While the comparison with SMT-based methods is informative, it would be beneficial to include a more comprehensive performance evaluation of ReLU NCBFs against classical NCBFs with differentiable activations, such as sigmoid. Assessing factors like NCBF training time, verification time, and the expressive capabilities of the resulting NCBFs in real-world control problems would add depth to the analysis. If ReLU NCBFs consistently outperform sigmoid NCBFs, the proposed verification strategy's utility would be significantly enhanced.
- The authors could provide a thorough analysis of the time complexity of the proposed verification strategy. As the algorithm discretizes the state space into hyper-cubes and calculates corresponding bounds of $b(x)$ individually, it is important to understand how the complexity scales in high-dimensional problems. Also, the authors should provide experiments on more complex, potentially higher-dimensional control problems.
- (Minor) The manuscript contains several typos that require careful proofreading.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should consider the possibility of clearly outlining the limitations of their study and potential negative societal impacts.
After the rebuttal phase (the same as the post-rebuttal comments):
I thank the authors for their response and conducting the additional experiments. The majority of the concerns raised in my initial evaluation have been effectively attended to. While I acknowledge the efforts undertaken by the authors to conduct an additional round of experiments, I still have reservations about how well the proposed algorithm would work for problems with high dimensions. The additional experiments provided are still confined to low-dimensional scenarios. I recommend that the authors explore this aspect in their future research endeavors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: In order to compare with NCBFs that have differentiable activation functions, we conducted the following additional simulation study. We considered three test systems, namely, the Darboux, obstacle avoidance, and spacecraft rendezvous test cases. For the Darboux and obstacle avoidance systems, we trained three NCBFs with the same architecture, i.e., 2 hidden layers and 32 neurons in each hidden layer, but different activation functions. The chosen activation functions were ReLU, Sigmoid, and tanh. We compared the performance based on three metrics: (i) training time, defined as the time required for the running loss to converge to 0, (ii) volume of the safe region $\{x: b(x) \geq 0\}$, and (iii) time required to verify the trained NCBF. We verified the ReLU-activated NCBF using our proposed method and verified the Sigmoid and tanh-activated NCBFs using dReal and Z3.
We trained NCBFs for spacecraft rendezvous using the NCBF training approach proposed in “Safe Control with Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction Methods for Robotics and Control” (Dawson et al, ref. [13] of the manuscript). We compared NCBFs with ReLU and tanh activation functions using the metrics (i)-(iii) above.
The results are summarized in Table 1 of the pdf attachment to the general rebuttal. We found that, for the Darboux and obstacle avoidance case studies, the ReLU NCBF completed training faster than both sigmoid and tanh NCBFs. The volume of the safe region was comparable for all three activation functions, with tanh outperforming the ReLU NCBF on Darboux and the ReLU NCBF providing the largest volume for obstacle avoidance. The most significant difference between the three activation functions was at the verification stage. Our proposed method for verifying ReLU NCBFs terminated within 15 and 274 seconds for the Darboux and obstacle avoidance test cases, respectively, while SMT-based methods did not terminate within three hours for either test case. In the spacecraft rendezvous example, the ReLU NCBF completed training before the tanh NCBF. Moreover, while our approach verified the correctness of the ReLU NCBF within 4 hours, the tanh NCBF exhibited a safety violation.
Finally, to address the concern regarding high-dimensional problems, we evaluated our approach on an eight-dimensional system first defined in “Fossil: A Software Tool for the Formal Synthesis of Lyapunov Functions and Barrier Certificates Using Neural Networks” (Abate et al, ref. [1] in the manuscript). The results are summarized in Table 2 of the pdf attached to the general rebuttal. Our approach verified an NCBF with a single hidden layer containing 16 neurons in 35 seconds. The SOTA algorithms dReal and Z3 did not terminate within six hours.
To address the concern on scalability to complex neural networks, we conducted additional experiments as shown in Tables 2 and 3 of the pdf attachment to the general rebuttal. We verified an NCBF for Darboux example with one hidden layer of 1024 neurons and an NCBF with two hidden layers of 512 neurons each. The verification terminated successfully in 108 and 620 seconds, respectively. The computational complexity of our approach will be determined by several factors including the dimension of the state, the number of layers, the number of neurons in each layer, and the geometry of the 0-level set of the NCBF. As shown in Table 3, the dimension of the system plays the most important role, which is a widely-shared issue in traditional SOS verification of CBFs as well as neural network verification algorithms. While the focus of this paper is on developing exact safety conditions, improving scalability is a direction of future work that we will pursue.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their response and conducting the additional experiments.
The majority of the concerns raised in my initial evaluation have been effectively attended to.
While I acknowledge the efforts undertaken by the authors to conduct an additional round of experiments, I still have reservations about how well the proposed algorithm would work for problems with high dimensions. The additional experiments provided are still confined to low-dimensional scenarios.
I recommend that the authors explore this aspect in their future research endeavors.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to give detailed feedback. We agree that scalability is a key challenge with neural network verification in general, and with verification of NCBFs in particular. Our current approach shows a significant improvement in runtime compared to the state of the art. Moreover, we believe that our approach can lead to future approaches to safety verification, e.g., by developing tractable sufficient but not necessary relaxations of the conditions derived in this paper. We plan to explore this aspect in future work as suggested by the reviewer. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for providing detailed comments that have helped to improve the quality of our manuscript. We have provided rebuttals to the comments of each reviewer. We have also attached a pdf file containing figures and tables from additional simulations that were requested by the reviewers. In this general rebuttal, we briefly summarize these additional figures and tables. Figure 1 is in response to a comment from Reviewer yuYJ. The reviewer asked about the impact of non-differentiable points on safety filtering. We considered the example from Page 4 of the manuscript, and compared the unsafe NCBF from the manuscript (which we denote $b_{c}$) with a trained NCBF $b_{\theta}$ that we verified using our approach. We compared optimization-based controllers based on Eq. (10) of the manuscript using the NCBFs $b_{c}$ and $b_{\theta}$, where the nominal controller was a linear quadratic regulator that drives the system to the origin. The NCBF $b_{c}$ resulted in a safety violation due to the non-differentiable point identified in the manuscript, while the NCBF $b_{\theta}$ ensured that the system state remained within the safe region. Additional descriptions can be found in the rebuttal to Reviewer yuYJ.
Table 1 is in response to comments from Reviewers wvvT and hbj4. Both reviewers asked why the ReLU activation function would be used for NCBFs instead of differentiable activation functions. To address this question, we considered three case studies, namely, Darboux, obstacle avoidance, and spacecraft rendezvous. Descriptions of each of these systems can be found in the manuscript. For each case study, we trained and verified three NCBFs with the same architecture (2 hidden layers of 32 neurons each) but different activation functions, namely, ReLU, sigmoid, and tanh. We found that the NCBF with ReLU activation function had the shortest training time for all test cases and resulted in the largest safe region volume for the obstacle avoidance test case. Moreover, our proposed verification algorithm for the ReLU NCBF terminated within 15, 274, and 13907 seconds for Darboux, obstacle avoidance, and spacecraft rendezvous, respectively, while the SOTA verification algorithms (dReal and Z3) did not terminate within three hours for any of the test cases. Additional descriptions of these experiments can be found in the rebuttal to Reviewer wvvT.
Tables 2 and 3 are in response to comments from Reviewers 5o49, wvvT, U7BK, yuYJ, and hbj4 regarding the scalability of the proposed approach to higher-dimensional systems as well as neural networks with larger numbers of neurons. We trained NCBFs for an additional eight-dimensional system that first appeared in “Fossil: A Software Tool for the Formal Synthesis of Lyapunov Functions and Barrier Certificates Using Neural Networks” (Abate et al, ref. [1] in the manuscript), as well as more complex neural networks for the three case studies already considered in the paper. Our approach verified a 1024-neuron NCBF for the Darboux system in 620 seconds, a 128-neuron NCBF with two hidden layers for obstacle avoidance in 11749 seconds, a 96-neuron NCBF for spacecraft rendezvous in 13907 seconds, and a 16-neuron NCBF for the eight-dimensional test case in 35 seconds. Neither of the SOTA verification algorithms (dReal and Z3) terminated within three, six, and six hours for the Darboux, obstacle avoidance, and eight-dimensional systems, respectively. Additional details can be found in the rebuttals to Reviewers 5o49 and wvvT.
Pdf: /pdf/6ce50181c3ab9e18c89946b6cfb5e35bca48d1b0.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a technique to verify the safety of NCBF-based control policies. Verification usually requires the CBF to be continuously differentiable, which is not the case for ReLU NNs; however, using NCBFs is beneficial as it allows encoding more complex safety constraints.
Their approach identifies the piecewise linear segments of the NCBF. The number of segments is reduced by focusing only on those at the boundary of the safe region. The remaining linear segments are overapproximated and verified using nonlinear programs.
The authors compare their proposed technique with SOTA SMT based methods and demonstrate that they are able to verify NCBFs that previously resulted in a timeout.
Strengths: The paper proposes a new, original technique to prove the safety of NCBFs.
The authors do a good job of motivating their new approach by demonstrating the shortcomings of techniques that expect $b$ to be continuously differentiable.
The paper is mostly clear in its explanation and formulas, and gives intuitive explanations for many of them.
In their experimental evaluation, they use one benchmark to compare their performance against two other SOTA techniques, and demonstrate that they are able to verify instances that would otherwise lead to a timeout. This indicates the significance of their proposed technique, should the results carry over to other benchmarks.
Weaknesses: The equations in the paper get increasingly complicated to follow, even with the provided explanations. Especially Lemma 4 is hard to follow.
In the experimental evaluation, the paper would strongly benefit from exploring more benchmarks. The comparison to SOTA techniques dReal and Z3 is limited to one benchmark, with two more benchmarks that do not include a comparison to those tools. Also, a comparison to an approach based on neural-network-verification (e.g. "A Hybrid Partitioning Strategy for Backward Reachability of Neural Feedback Loops" by Nicholas Rober, Michael Everett, Songan Zhang, and Jonathan P. How) would help to demonstrate that the proposed technique can solve previously hard problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Main question: Why can't one use regular verification of safety properties of neural networks? If $D$ is convex (or can be split into a reasonable number of convex subsets), then one could verify that no input in $D$ is mapped to an output outside of $D$. When $f$, $g$ and $\mu$ in Equation 1 are known, that equation could most probably be encoded as a NN, so all common tools (compare e.g. VNN-COMP 2022 or 2023) should be able to verify this (or time out). You state the main benefit of your approach is that it does not depend on the specific choice of $\mu$ (line 288). However, I do not know if this is often a requirement. If $\mu$ changes, the verification using a technique that depends on $\mu$ could be repeated. Did you do an experimental comparison of your approach with a NN-verification-tool-based approach to see how costly this would be?
Other questions:
1) In the text above Proposition 1, is $\overline{X}(S_1) \cap \ldots \cap \overline{X}(S_r) \cap S'$ well-defined? $\overline{X}(S)$ is a set of inputs $x$, but $S'$ is a set of neurons. Should this be $\overline{X}(S')$?
2) In Equation 11, is the term to the right of $\setminus$ obsolete? If $\{S_1, \ldots, S_r\}$ is complete, then no input that activates all of $\{S_1, \ldots, S_r\}$ also activates any other $S'$ (based on the text above Proposition 1). So what is removed by the term to the right of $\setminus$?
3) What's the significance of the three different lines in Figure 2? They represent different "set boundaries", but I do not know what that visualization achieves compared to one with just set boundary 0.
Minor: What is the missing reference in line 243?
Minor note: Line 268: "with in" -> "within"
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not provide a list of limitations or potential negative societal impact. However, the potential negative societal impact is probably small, as this technique is developed to increase the provable safety of NCBFs. So the potential negative societal impact is identical to that of any AI-based technology.
The paper would benefit from a description of the limitations of their proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Regarding the benchmarks considered in this work, we have evaluated our approach on additional test systems and compared with dReal and Z3. Our results are summarized in Tables 2 and 3 of the pdf attachment to the general rebuttal. Our approach verified a three-dimensional system (obstacle avoidance) with a 128-neuron NCBF in roughly 3 minutes, a six-dimensional system with an NCBF consisting of 102 neurons with three hidden layers within four hours, and an eight-dimensional system with an 18-neuron NCBF within 35 seconds. In contrast, the SOTA algorithms dReal and Z3 did not terminate within six hours for the obstacle avoidance and eight hours for the eight-dimensional system. All experiments were performed on the desktop PC environment described in our manuscript.
We thank the reviewer for suggesting comparisons with the VNN-COMP benchmarks. VNN-COMP is primarily concerned with verifying input/output relationships of neural networks, i.e., proving that, given a neural network $b$ and a set of inputs $\mathcal{X}$, we have $b(\mathcal{X}) \subseteq \mathcal{Y}$ for some set $\mathcal{Y}$. In principle, it would be possible to train a neural network $\phi(x)$ to approximate $f(x)+g(x)\mu(x)$ as suggested by the reviewer, and then use the methodologies of VNN-COMP to check whether there exists $x$ with $b(x) \geq 0$ and $b(x+\phi(x)dt) < 0$, where $dt$ is a discrete-step size that approximates the evolution of the ODE (1). However, in order to achieve an exact verification algorithm, the errors introduced by the NN approximation $\phi(x)$ and the discrete-time approximation of (1) would need to be characterized and incorporated into the verification. Furthermore, as mentioned by the reviewer, our approach does not depend on the control policy $\mu(x)$. We believe that this is an advantage because we can ensure safety under any nominal control policy $\mu(x)$ by incorporating the policy into the optimization-based control (10). This provides the system designer with an additional degree of flexibility, which has been highlighted in recent works on CBF-based safety filters (e.g., A.D. Ames et al, “Control Barrier Functions: Theory and Applications”) and safe shield policies in reinforcement learning (e.g., I. ElSayed-Aly et al, “Safe Multi-Agent Reinforcement Learning via Shielding”).
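For intuition, the discretized condition mentioned above — searching for $x$ with $b(x) \geq 0$ and $b(x + \phi(x)dt) < 0$ — could be probed with a naive sampling-based falsifier. The sketch below is our own illustration under assumed names (`find_counterexample`, `phi`); it can only find counterexamples, not prove their absence, unlike the exact verification developed in the paper.

```python
# Naive sampling-based falsifier for the discretized safety condition
# b(x) >= 0 and b(x + dt*phi(x)) < 0 described above.
# This sketch can only FIND violations; it cannot certify safety.

def find_counterexample(b, phi, samples, dt=0.01):
    """b: candidate barrier function, phi: closed-loop dynamics
    approximating f(x) + g(x)mu(x); samples: candidate states."""
    for x in samples:
        if b(x) >= 0:  # state currently classified as safe
            x_next = [xi + dt * di for xi, di in zip(x, phi(x))]
            if b(x_next) < 0:  # one discrete step leaves the safe set
                return x
    return None

# Toy 1-D example: safe set {x : 1 - x^2 >= 0} with strongly
# destabilizing dynamics; a near-boundary state is flagged.
b = lambda x: 1.0 - x[0] ** 2
phi = lambda x: [100.0 * x[0]]
print(find_counterexample(b, phi, [[0.0], [0.5], [0.999]]))  # [0.999]
```

Note that soundness of such a check would still require bounding the error of the discrete-time approximation, which is exactly the issue raised in the rebuttal.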
Similarly, we would like to highlight the following distinction with the related work “A Hybrid Partitioning Strategy for Backward Reachability of Neural Feedback Loops” by Rober et al. The main goal of this related work is to verify safety of a given neural network controller using backwards reachability analysis. In contrast, the goal of our work is to construct an NCBF b and prove that any control policy $\mu$ satisfying Eqs. (7)--(9) is safe. Our approach could be considered complementary to Rober et al in the following two ways. First, a given neural network controller $\mu$ could be modified to provide verifiable safety guarantees by following the quadratic program-based policy defined by Eq. (10) with $\mu_{nom} = \mu$. Second, one could attempt to prove that a NN feedback control policy satisfies Eqs. (7)--(9) for a given $b$ and all x, which would prove that the policy is safe. This latter approach to NN safety verification would be an alternative to backward reachability analysis, and is a direction of future research.
Finally, we thank the reviewer for identifying several typos in the paper. Proposition 1 should be
$\cdots \cap \overline{\mathcal{X}}(\mathbf{S}_{r}) \cap \overline{\mathcal{X}}(\mathbf{S}^{\prime})$
as suggested by the reviewer. The term to the right of \ can indeed be omitted in Eq. (11). We have revised the visualization of Fig. 2 to remove the additional lines and thus improve readability. The missing reference in line 243 refers to Eq. (27) of the supplement and will be fixed. We will revise the paper carefully including Lemma 4 to simplify and clarify the notations.
---
Rebuttal Comment 1.1:
Comment: Why would the construction of $\phi$ require a training? I'm not an expert in this field, so maybe I misunderstand what $\mu$ typically looks like. Could it easily be encoded by a neural network? E.g. $\mu(x) = \max(0, Mx)$ for some matrix $M$ could be encoded as one linear layer followed by a ReLU, without the need for any training. Then, $g(x)\phi(x)$ is a NN as well, and $f(x) + g(x)\phi(x)$ would be as well, using a residual connection. This would ensure that no errors are introduced.
I also do not understand the argument concerning the discrete-time approximation. Assuming a NN encoding $\phi$ can be constructed, shouldn't $x(t)$ be simply the output of the NN? I do not see how this introduces an additional error.
Thank you for the additional experiments!
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the comment. We would like to respond and clarify the model of Eq. (1) of the paper. First, there may be a straightforward neural network encoding of the right-hand side of (1) under certain cases, e.g., when $f$ is represented as a NN (or linear as a special case), $g$ is constant, and $\mu$ is linear or represented as a NN. However, for general nonlinear $f$, $g$, and $\mu$, e.g., when $f$ and $g$ contain polynomial or trigonometric functions of $x$ or the control $\mu$ is nonlinear, representing the right hand side of (1) as a NN will be nontrivial and may require training a NN $\phi$ as described in our rebuttal.
Second, the left hand side of (1) is the time derivative $\dot{x}(t) = \frac{dx}{dt}$, not the state $x(t)$. Hence, applying the methods of VNN-COMP would involve checking whether there exists $x$ with $b(x) \geq 0$ and $b(\bar{x}) < 0$, where $\bar{x}$ is the forward integration of (1) over a time interval of length $dt > 0$ from initial state $x$. This forward integration operator would then need to be included in the formulation, for example, through the approximation $\bar{x} \approx x + dt\cdot\phi(x)$ described in the rebuttal. | null | null | null | null | null | null |
Learning Unseen Modality Interaction | Accept (poster) | Summary: This work studies the problem of learning interactions of unseen modality combinations. Specifically, all training data is modality-incomplete, and the model must learn to perform inference on modality-complete data. The paper claims to be the first to study inference under such settings, and proposes two novel improvements to tackle this challenge: (1) a feature projection layer that projects all encoded modalities into the same dimensionality, constrained by an alignment loss, and (2) a dual-branch prediction layer that predicts a pseudo-label in addition to the real label. The paper performed experiments on 3 datasets that contain a diverse set of modalities, domains and tasks (including classification, retrieval and regression). The paper included thorough ablation studies on their methods to justify all of their design choices, and showed that under their new setting, their method significantly outperforms previous modality-complete and modality-incomplete approaches.
Strengths: 1. This paper proposed a new setting: learning to infer from unseen combinations of modalities, and proposed a new method to tackle this challenge.
2. The paper's experiments and evaluations are very comprehensive. The experiments involved 3 datasets that contain a very diverse set of domains (kitchen videos, robotics, YouTube videos), modalities (14 total modalities), and tasks (classification, regression, retrieval), and this diversity shows that their method generalizes well. There is a comprehensive ablation study that justifies the design choice of each model component as well as the loss function. Since this is a new setting, the authors re-implemented or re-ran several existing approaches on the new setting and showed that the new method outperforms all of them.
Weaknesses: The presentation/clarity of the paper needs improvement, especially in some parts of the methods section. For example, the use of different variable names in the feature projection is inconsistent (e.g. $F'_m$ in line 96 and line 102 have different dimensions); the notation of the alignment process is confusing, and the intuition behind the whole alignment process is unclear; and the description of how exactly the pseudo-labels are obtained is very vague and unclear. See Questions section for more details.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: (1) I think figure 1 can be a bit misleading, since in this paper, we focused on the setting where the test set is always the union of all modalities.
(2) The dimension of F'm on line 96 and 102 are inconsistent.
(3) For feature projection, is it correct that we simply do a linear combination to change the sequence dimension? When we change $d_m$ to $d^*$, do we have a fully connected layer that maps each $d_m$-dimensional vector to a $d^*$-dimensional vector individually, or does it map the entire $k^* \times d_m$ matrix to a $k^* \times d^*$ matrix?
(4) On line 111, why do we average the features within each modality? It seems to me that the alignment loss is sort of like a vector-quantization process, and it is difficult for me to understand why quantizing each modality separately (as opposed to, for example, quantizing each d-dimensional vector in the sequence, or the average of the vectors in the same corresponding locations across different modalities) helps features from different modalities occupy a common space. Wouldn't the current approach prevent the projected features from occupying a common space? For example, we could have modality 1 always close to the first few u in the dictionary, and modality 2 always close to the next few u, etc.
(5) In equation 1, the notation of u_m is a bit confusing. I think it is supposed to mean the closest vector in [u1, u2,... ] to fm, but it could be confused with the mth vector in [u1,u2,...]
(6) What exactly does "average across training epochs" mean? Since we need the pseudo-labels during training, do we just average the first-branch prediction on this data point from each previous epoch? How do we obtain this for the first epoch? Also, is it correct that the pseudo-labels are probability-distributions across all labels in the classification case? I also have a hard time understanding the intuition behind dual-branch prediction helping with overfitting problem. Perhaps a more clear and detailed description on how the pseudo-labels are obtained could make it more clear.
(7) In line 167, since there are 3 modalities in this dataset, how exactly are they partitioned into the two training set partitions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitation discussion is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***We thank the reviewer for their time and effort. We are glad that the review appreciated the new setting of unseen modality interaction, the new method proposed for this and found the experiments and evaluations comprehensive***.
**Figure 1.** We will make Figure 1 clearer by presenting two scenarios where all the modalities are available and where subsets of the modalities are available during inference. While our experiments in Tables 1-3 do focus on the test set having the union of all modalities, in Figure 3 we show that our method is beneficial to subsets of modalities. We will also expand the analysis in Figure 3 in the final version with the comparison to previous multimodal learning methods provided in the response to reviewer 2qSh.
**Dimension of $F'_m$.** The reviewer is correct. We made a mistake here and $F'_m$ on line 96 should be modified to $\hat{F}_m$.
**Feature projection.** We have a fully connected layer that maps each $d_m$-dimensional vector to a $d$-dimensional vector individually.
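To make the per-vector projection concrete, here is a minimal plain-Python sketch (our own illustration, not the authors' code; `project`, `W` and `bias` are hypothetical names) of applying the same dm-to-d linear map to every vector in a feature sequence:

```python
# Illustrative sketch of the per-vector feature projection described
# above: the same fully connected (d_m -> d) map is applied to each
# vector in the k x d_m feature sequence independently.
# All names here are ours, not taken from the paper's code.

def project(features, W, bias):
    """features: list of k vectors of length d_m.
    W: d x d_m weight matrix, bias: length-d vector.
    Returns a list of k vectors of length d."""
    def matvec(v):
        return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) + b
                for row, b in zip(W, bias)]
    return [matvec(f) for f in features]

# Example: project 2-dimensional vectors to 3 dimensions.
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bias = [0.0, 0.0, 1.0]
print(project([[2.0, 3.0]], W, bias))  # [[2.0, 3.0, 6.0]]
```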
**Averaging the features within each modality.** We average the modality specific features for the alignment loss instead of aligning each individual feature vector as many feature vectors may be uninformative to the target problem. We demonstrate that averaging is better than aligning individual features with an experiment on EPIC-Kitchens. When encouraging each d-dimensional vector to be close to one of the learnable tokens, we obtain 20.7%. When using the individual features averaged across modalities, we get 19.8%. Both are worse than our 23.7%. Thus, our alignment strategy is more effective.
The reviewer is correct that it could be possible to have one modality always close to the first few u in the dictionary and another modality always close to other u in the dictionary. However, we observed that this didn’t happen and therefore didn’t find it necessary to discourage such cases in the alignment loss. Instead, we observe that each modality covers the majority of the tokens across the training samples, allowing the predictions to be diverse and satisfy the groundtruth supervision.
**Notation $u_m$.** Yes, we mean the closest vector in [u1, u2, …] to $\bar{f_m}$. We agree that the notation of $u_m$ is confusing and modify it as:
$$L_{align} = \sum_{m \in M_1} ||\bar{f_m} - u_{n_m}||^2_2,$$
where $u_{n_m}$ is the learnable token from $[u_1, …, u_{n_u}]$ selected for feature $\bar{f}_m$.
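For concreteness, the alignment loss above can be sketched in plain Python (an illustrative re-implementation under our own naming, not the authors' code): each modality's projected features are averaged and pulled toward the nearest learnable token.

```python
# Sketch of L_align as described above: for each modality, average its
# projected features and penalize the squared L2 distance to the
# closest learnable token. Names (align_loss, tokens) are ours.

def mean_feature(features):
    """Average a list of d-dimensional feature vectors."""
    d = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(d)]

def sq_l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def align_loss(modality_features, tokens):
    """Sum over modalities of the distance from the mean feature
    to its nearest learnable token u."""
    loss = 0.0
    for features in modality_features:
        f_bar = mean_feature(features)
        loss += min(sq_l2(f_bar, u) for u in tokens)
    return loss

# Example: two modalities, two learnable tokens.
tokens = [[0.0, 0.0], [1.0, 1.0]]
m1 = [[0.2, 0.0], [0.0, 0.2]]   # mean [0.1, 0.1] -> nearest [0, 0]
m2 = [[1.0, 0.8]]               # mean [1.0, 0.8] -> nearest [1, 1]
print(align_loss([m1, m2], tokens))  # approximately 0.06
```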
**Average across training epochs.** We obtain the pseudo-labels by averaging the predictions from the last e epochs of the pretrained unimodal encoders. For video classification e=10, for robot state regression e=20 and for multimedia retrieval e=20. We will add these details to the paper. Yes, the pseudo-labels are probability distributions across all labels in the classification case.
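As a concrete illustration of this averaging (our own sketch following the description above, not released code; `pseudo_label` is a hypothetical name):

```python
# Sketch of pseudo-label construction as described above: average the
# class-probability predictions of the pretrained unimodal encoder
# over the last e training epochs.

def pseudo_label(epoch_predictions, e):
    """epoch_predictions: per-epoch class-probability vectors
    (oldest first). Returns the mean over the last e epochs."""
    recent = epoch_predictions[-e:]
    n_classes = len(recent[0])
    return [sum(p[c] for p in recent) / len(recent)
            for c in range(n_classes)]

# Example: three epochs of 2-class predictions, average the last two.
preds = [[1.0, 0.0], [0.5, 0.5], [0.2, 0.8]]
print(pseudo_label(preds, 2))  # approximately [0.35, 0.65]
```

The result stays a probability distribution over classes, which is what lets the second branch express uncertainty between multiple labels.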
**Intuition behind dual-branch prediction.** Our intuition for the pseudo-labeling strategy is inspired by the observation that a single modality alone often cannot provide enough information for accurate prediction. Take the example of conducting activity recognition with audio and video modalities, the audio is often less discriminative than video. For instance, the audio modality can be crucial in distinguishing that the activity is one of *swimming*, *surfing* or *water skiing*, but cannot make fine-grained distinctions. By forcing the model to predict the ground-truth activity *swimming*, it may overfit to some unrelated features such as background noise. By using average predictions as pseudo-labels to provide a distribution over classes, the model is able to incorporate the important distinguishing information while avoiding such overfitting as it allows uncertainty between multiple classes. We will make this clearer in the final version.
**Training set partitions.** We have three settings on video classification with the EPIC-Kitchens dataset each using two of the three modalities. Therefore, we divide the training set into two splits with each containing only one modality.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! My review remains positive. | Summary: The paper is about multimodal learning and deals in particular with the mismatch between modality combinations at training and inference time.
The authors propose to project the multimodal features in a shared space, apply an alignment, and enforce the discriminative ability of the method with a dual branch prediction with full and pseudo supervision.
The method is evaluated on three scenarios, ablation studies and comparative analysis are reported.
Strengths: Overall, the paper is well-written, with some originality with respect to previous approaches. What is reported in the document is clear, figures are appropriate and descriptive of the concepts.
Weaknesses: My main concerns on the paper are the following.
On the main goal: while I understand the benefit of learning from multiple modalities while being able to use a single modality at inference time, I am not sure I fully understand the benefit of learning according to the set defined in Sect. 2, where at training time the method learns from a group of modalities which is a subset of what is seen at inference time.
I expected to see in the empirical evaluation a quantitative justification that using at inference time a superset of the training modalities helps to improve the results. For instance, what if at inference time I use the combination used at training time, with no additional modalities? Is it really helpful to have these extra modalities at inference time? This is not clear to me.
On the objectives: in my opinion, some rather strong statements of the authors on the abilities of their method are not appropriately justified with an empirical evaluation. For instance, the authors state that their method is more robust to scenarios where some modalities are corrupted by severe noise. More robust with respect to who and what? There is only one experiment tackling this challenge where only the proposed method is used. Moreover, the authors state that one of their challenges "...is reducing the overfitting to the specific modality combinations from the modality-incomplete training data." Again, I think this would deserve more attention in the comments on the experimental analysis.
I am not sure I fully understood how the architecture is structured: I assume the architecture is accommodating a maximum number of input modalities while disregarding (or having learnable tokens for) the ones that are not present. From what I understand this requires the input modalities to be provided always in the same order. Is this interpretation correct?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - I find it confusing having a separation of the considerations on the existing literature at the beginning and at the end of the manuscript; I would group them
- On the same topic, the authors mention existing approaches and how their method is different. However (at least in the first part) only for some of them ([25, 30]) they explicitly mention limitations, what about the others?
- As reported above: the statement "Our approach enables effective unseen modality interaction and is more robust to scenarios where some modalities are corrupted by severe noise." would require a deeper investigation I think
- What about the ability to generalize to new tasks/settings, different from the ones of the training set? What I mean is: if you have in training data collected with a certain combination of sensors from a certain platform to solve an action classification problem for instance, could the method be able to generalize to test data for the very same problem and possibly different sensors combinations acquired with a different platform?
- The section on method lacks details in the description that are probably then reported at the beginning of the section on experimental analysis. For instance, it is said in Section 3 that the attention matrix Om is obtained through several transformer layers. Be sure all the details are properly reported. This is very important for the reproducibility of the results
- I don’t understand when the alignment is applied, before or after the integration via the sum.
- Eq. (2) appears without the appropriate context, I find.
- “… we propose to generate pseudo-supervision which reflects the discriminability a modality combination” Sentence to be rephrased
- Are the splittings in the training-validation-test used for the experiments provided with the datasets? If yes it should be stated, if not it should be justified
- How many epochs for the training stage? Learning rate? Again, this is important to share as many implementation details as possible to favour the reproducibility
- References to tables 7 and 8 should be 2 and 3 instead
- “While our model learns from modality-incomplete data, previous experiments use modality-complete data in testing” It would be better to cite the methods
- In the section “Benefit for Modality-Incomplete Testing” it is not quite clear how the experiment has been designed. Are the combinations in Figure 3 used at inference time? What was the training? I think some details are missing. The same considerations can be done for other sections, for instance “Benefit for Noisy Modalities“
- In the experiments: “We use publicly available implementations where available, otherwise re-implementing ourselves.” Please be more specific: for which ones you used public implementations?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors explicitly mention some limitations of their method in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***We thank the reviewer for their time and effort and are glad the reviewer found our paper well-written***.
**Benefit of supersetting training modalities.** We show the benefit in Table 2 where we compare our model and multimodal baselines, which use all available modalities, to unimodal results which use the single modalities seen in training. For example, our method achieves 25.7% accuracy with RGB & audio as opposed to 18.2% using only RGB and 10.9% using only audio. We will highlight this. Furthermore, when using only RGB or audio, our method gives 19.6% and 12.3%, worse than using both modalities (25.7%). Thus, using a superset at inference is beneficial.
**Benefit for Modality-Incomplete Testing.** We use the same model as in Table 4: trained on the modality-incomplete data splits as described in Section 4. In Fig 3 we test the model on different modality combinations at inference. For “Benefit for Noisy Modalities”, we also use the same trained model but add noise to some modalities during inference. We will add these clarifications.
**Robustness to noise.** In our paper, we show that when applying Gaussian noise $N(0,1)$ on one modality for all test samples in video classification, our method is more robust to the noise than late fusion or a vanilla multimodal transformer. We further compare our approach with more methods under different amounts of noise:
|Model|N(0,0.5)|N(0,1)|N(0,2)|
|-|-|-|-|
|Late fusion|14.1|11.2 |10.0|
|Gabeur et al.|13.2|10.1|9.3|
|Nagrani et al.|16.4|15.5|14.4|
|Wang et al.|14.9|12.3|10.8|
|Shvetsova et al.|15.5|13.4|11.3|
|Recasens et al.|15.3|13.1|12.0|
|Ours|**20.5**|**18.0**|**16.2**|
We conclude that our model achieves better results when modalities are corrupted by severe noise than these prior works and will add this table.
**Reducing overfitting to specific modality combinations.** In Fig 3, we show our model improves robustness to various unseen modality combinations over a vanilla multimodal transformer. To further demonstrate our generalizability to different modality combinations and demonstrate that prior works indeed overfit to the combinations in training, we expand this experiment. Specifically, we compare our method on both seen and unseen modality combinations with previous fusion works and report the results in the tables below. Note for both mean rank (multimedia retrieval) and MAE (robot state regression), lower is better.
|Model|Gabeur et al.|Nagrani et al.|Wang et al.|Shvetsova et al.|Recasens et al.|Ours|
|-|-|-|-|-|-|-|
|RGB, Audio, OCR, Speech|90.6|89.7|90.0|90.6|90.5|**79.4**|
|RGB, Object, Scene, Face|89.1|88.8|88.9| 89.3|89.4|**80.3**|
|RGB, Object, Speech, OCR|92.4|90.2|91.5|78.1|77.4|**70.2**|
|RGB, Scene, Audio, OCR|91.0|89.9|90.7|75.3|74.2|**70.3**|
|RGB, Scene, Speech|95.6|94.5|95.1|80.2|79.3|**74.3**|
|RGB, Object, Audio|96.1|96.0|96.3|82.1|80.3|**76.9**|
|RGB, Speech|98.3|98.0|98.8|84.6|83.0|**81.4**|
|RGB, Audio|98.0|97.3|98.4|85.3|84.5|**79.8**|
Table: Multimedia retrieval
|Model|Gabeur et al.|Nagrani et al.|Wang et al.|Shvetsova et al.|Recasens et al.|Ours|
|-|-|-|-|-|-|-|
|Image, Depth|1.40|1.39|1.41|1.38|1.41|**1.29**|
|Force, Proprioception|1.39|1.37|1.39|1.37|1.39|**1.27**|
|Depth, Proprioception, Force|1.48|1.45|1.44|1.37|1.34|**1.19**|
|Image, Proprioception, Force|1.47|1.43|1.45|1.32|1.30|**1.18**|
|Depth, Force|1.77|1.72|1.79|1.62|1.58|**1.47**|
|Depth, Proprioception|1.60|1.52|1.55|1.49|1.43|**1.38**|
|Image, Force|1.71|1.64|1.70|1.58|1.52|**1.44**|
|Image, Proprioception|1.54|1.50|1.52|1.38|1.35|**1.27**|
Table: Robotic State Regression
Since Gabeur et al., Nagrani et al. and Wang et al. assume modality-complete data, they obtain worse performance with unseen combinations than with seen combinations (top two rows), even with additional modalities. Thus we conclude these methods overfit to seen combinations. Since Shvetsova et al. and Recasens et al. aim to be robust to some modality-incomplete training data, they benefit from some modality combinations. However, these methods are not generalizable to all combinations. In contrast, our method is the most effective on all seen and unseen modality combinations.
**Generalizability.** The tables above also show the generalizability of our method to different sensor combinations.
**Input modality order.** We do not require input modalities to be provided in a set order since we project the features of each modality into a shared space before summation (L105-107).
**Literature grouping.** We will move the related work to Section 2 to group the literature discussion.
**Limitations of prior work.** The limitations in [30] also exist in [21,22,40] as these works assume modality-complete data is available like [30]. We will add clarification.
**Method Details.** We will add more context to Eq. 2 and add other requested details to the method. We train our method with 120 epochs on video classification with an lr of $10^{-4}$, reduced to $10^{-5}$ for the last 50 epochs. On robot state regression and multimedia retrieval, we train with 50 epochs and an lr of $10^{-2}$. We will release the code provided in the supplementary on publication to ensure reproducibility.
**Alignment.** We project the features of different modalities into the same space by the alignment loss before fusing them via a sum.
**Dataset splits.** We use the same validation and test splits as provided with the datasets. To facilitate our research on unseen modality interaction, we divide the original training set of each dataset into multiple splits where each split contains different modalities. We will release our training set division.
**Table numbering.** We will correct table indexes.
**Modality-complete data in testing.** We mean we use modality-complete data at inference in Tables 1-3. We will make this clearer.
**Implementations.** We re-implement Recasens et al. since their code is not available. We use released code for all others. We will clarify this.
---
Rebuttal Comment 1.1:
Title: Still on Table 2
Comment: I thank the authors for the detailed responses to my concerns. They clarify a number of points raised in my first review.
However, I still do not fully understand Table 2 and how I should interpret it. Considering the proposed approach focuses on settings where some of the modalities at inference time might not be available at training time, I do not quite understand how I should read Table 2. In particular, is each column referring to a specific combination at inference time? If yes, what happens in the Multimodal approach? What modalities are employed at training time in such cases? Considering the way the setting is defined, I would have said only one, but then where is the multimodality?
Thanks in advance for your clarifications
---
Reply to Comment 1.1.1:
Title: Clarification on Table 2
Comment: ***We thank the reviewer for engagement and encouragement***.
The reviewer is correct that in this ablation we only have one modality for each training sample (e.g., either RGB only or audio only). However, we have two modalities (the unseen modality combination) during inference. For each column, we consider two different modalities to study the unseen modality combination at inference. While each video sample in EPIC-Kitchens contains all three modalities, we divide the original training set into two splits and let only one modality be available in each split during training. We leave the test set as is. For example, for the RGB & Audio column, one training split has only RGB available and the other training split has only audio available. During inference, both modalities are available.
We train a unimodal encoder on each split for the video classification task. The performance of these unimodal encoders is reported in the three rows of the 'unimodal' part in Table 2. For the multimodal part in Table 2, 'late fusion' indicates that we directly average the predictions from the unimodal encoders of the two modalities during inference. For the rest of the multimodal approaches, we send a single modality into each variant of our multimodal model during training, while we send both modalities into the model at inference time.
We will expand the setting description to make this clearer. | Summary: This paper introduces a method that can enhance the performance of multimodal models in scenarios involving unseen modality interactions.
Strengths: 1, The issue of "unseen modality interaction" explored in this paper is quite novel.
2, This paper maps the features of different modalities into the same space and merges different modalities by simple addition. As the number of modalities increases, the number of parameters required for fusion does not significantly increase.
Weaknesses: 1, The performance of the baseline in this paper is so low that it makes the entire method proposed by the paper unconvincing. Specifically, although the method of this paper surpasses its own set baseline methods on EPIC-Kitchens, it only achieves an accuracy of 23.7%. Meanwhile, the most naive baseline in the paper [1] also has an accuracy of 23.7%. Therefore, it's hard for me to be convinced by the experimental conclusions of this paper.
2, In Table 2, the authors compare the performance of different methods on different datasets, but why can't many of these methods outperform a simple late-fusion? Is it because there are significant differences in certain settings? If the author cannot clarify the situation here, I would consider the experimental comparison to be very unfair.
3, Also in Table 2, the performance of DEQ Fusion on MM-IMDB is 61.52/53.38, while the performance of MMBT [2] is 66.8/61.6 (Micro F1/Macro F1). Why choose not to report this method? It is a well-known classic approach that already has over one hundred citations.
[1] What Makes Training Multi-modal Classification Networks Hard?
[2] Supervised Multimodal Bitransformers for Classifying Images and Text
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I don't think the experimental results of this paper convince me, so I tend to reject this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***We thank the reviewer for their time and effort. We are glad to hear the reviewer found our proposed problem of “unseen modality interaction” novel and that the reviewer appreciates that the number of parameters required for fusion does not significantly increase as the number of modalities increases***.
**Clarification on the experimental setups.** Our results on EPIC-Kitchens are not comparable to the results of Wang et al. [1] as we aim for unseen modality interaction where we do not have access to data with all modalities present. The A/V baseline from Table 8 in [1] instead assumes all samples have all modalities present.
Specifically, we modify the training set of EPIC-Kitchens by dividing the training data into two splits with each split having a different set of modalities. The A/V baseline is trained with the full training set. Training the naive A/V from [1] with our unseen modality training splits, we get 19.0%, much lower than our result (23.7%). We will add this baseline to the final version.
**Late fusion vs. recent methods.** Our late-fusion baseline outperforms recent multi-modal fusion methods as late fusion simply averages the predictions from unimodal models. Previous multi-modal fusion models learn correspondences under the assumption that some or all of the data is modality-complete. When there is no modality-complete data for model learning, these models cannot learn cross-modal correspondences and instead overfit to the modality combinations seen during training. We realize the term ‘late-fusion’ can be ambiguous and will clarify it refers to averaging the final predictions from unimodal encoders in our results.
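A minimal sketch of this late-fusion baseline, assuming each independently trained unimodal encoder outputs class probabilities (names are illustrative):

```python
import numpy as np

# Sketch of late fusion as clarified above: average the final class
# predictions of unimodal encoders, with no learned cross-modal fusion.

def late_fusion(unimodal_probs):
    """unimodal_probs: list of (num_samples, num_classes) arrays,
    one per modality available at inference."""
    return np.mean(np.stack(unimodal_probs, axis=0), axis=0)

# Stand-in predictions from two unimodal encoders for one sample.
rgb_probs = np.array([[0.7, 0.2, 0.1]])
audio_probs = np.array([[0.3, 0.5, 0.2]])
fused = late_fusion([rgb_probs, audio_probs])  # [[0.5, 0.35, 0.15]]
```

Because no fusion parameters are trained on joint data, this baseline cannot overfit to modality combinations seen in training, which is why it can outperform learned fusion methods in the modality-incomplete setting.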
**Comparison with MMBT.** Even though this comment seems to be for another paper, we further compare our paper with MMBT on EPIC-Kitchens, and report the performance in the table below:
| | Top-1 (%) |
|-------|---|
| MMBT | 17.4 |
| This paper| 23.7 |
While MMBT is an effective method for modality-complete multimodal fusion, we conclude that our method is more effective in unseen modality interaction.
[1] Wang et al. "What makes training multi-modal classification networks hard?." In CVPR 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you to the author for responding to my questions and pointing out my mistakes. I have raised my score to borderline accept. | Summary: This paper tackles the issue of unseen modality interaction, which challenges the conventional assumption of modality completion during training. The approach taken in this study involves formulating a training setting that accounts for modality incompleteness. Subsequently, the proposed method focuses on projecting multiple modalities into a shared feature space through an alignment objective and leveraging a pseudo-labeling strategy to alleviate the model's tendency to overfit to unreliable modalities. The experimental results on multiple benchmarks demonstrate the efficacy of the proposed framework.
Strengths: - The paper addresses a practical problem, since incompleteness of modality occurs often in reality.
- The authors conduct their experiments on the datasets with various modalities.
Weaknesses: - There are some ambiguous parts in the manuscript.
- The authors mention that L_{align} ensures the projection of features from different modality spaces into the same feature space. However, based on Eq (1), it appears that the loss is computed as the summation of the difference between modality-specific features and modality-specific learnable tokens. In light of this, how can we ensure that the features from all modalities are projected into the same feature space?
- Regarding the dual branch prediction part, please provide further elaboration on the intuition behind the pseudo-labeling strategy to address less discriminative modalities. Can we consider less discriminative modalities as unreliable modalities? Even if a modality is less discriminative than others, it may still contain important information for the model. I am curious about the purpose of using pseudo labels to suppress such less discriminative modalities or encourage them.
- The numbering of tables is incorrect in the experimental section. For instance, the table referenced in line 160 should be Table 1, not Table 6. Most of the tables are misnumbered.
- Line 273 needs to be ended with ‘.’
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - In a similar context, is there any reason to split $\hat{F}$ into exactly half? Since the contribution of $L_{pseudo}$ is extremely far from $L_{supervised}$, maybe dividing $\hat{F}$ in half is not the optimal solution.
- In line 146, the paper mentions selecting the modality-specific pseudo-label that is closest to the ground truth annotation. How is the distance measured between the label and ground truth annotation when using a one-hot vector label?
- Why does the video classification accuracy differ between Tables 2 and 3 (23.7%) and Table 4 (23.8%)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper sufficiently deals with the limitation of the paper in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***We thank the reviewer for their time and effort and are glad that the reviewer found our paper addresses a practical problem and appreciate the experiments conducted on various modalities***.
**Alignment Loss.** We apologize for the unclear text. The learnable tokens are not modality-specific but are shared across modalities. We make this clearer by modifying Eq.1 as:
\begin{aligned}
L_{align} &= \sum_{m \in M_1} || \bar{f_m} - u_{n_m} ||^2_2,
\end{aligned}
where $u_{n_m}$ is the learnable token from $[u_1, …, u_{n_u}]$ selected for feature $\bar{f}_m$. With the shared learnable tokens, the modality-specific features are encouraged to be projected into a common feature space.
**Intuition behind pseudo-labeling strategy.** We agree with the reviewer that even if the modality is less discriminative, it can still contain important information. Our intuition for the pseudo-labeling strategy is inspired by the observation that a single modality alone often cannot provide enough information for accurate prediction. Take the example of activity recognition with audio and video modalities: the audio is often less discriminative than video. For instance, the audio modality can be crucial in distinguishing that the activity is one of *swimming*, *surfing* or *water skiing*, but cannot make fine-grained distinctions. By forcing the model to predict the ground-truth activity *swimming*, it may overfit to some unrelated features such as background noise. By using average predictions as pseudo-labels to provide a distribution over classes, the model is able to incorporate the important distinguishing information while avoiding such overfitting, as it allows uncertainty between multiple classes. We will make this motivation clearer in the final version.
**Typos.** We thank the reviewer for highlighting the incorrect table indexes and the typo. We will correct them.
**Dividing $\hat{F}$ in half.** We add experiments to test the model’s performance with different partition strategies on EPIC-Kitchens and report the results in the table below.
| Ratio of $L_{pseudo}$ to $L_{supervised}$ | Top-1 (%) |
|-------|---|
| 30:70 | 22.9 |
| 50:50 | **23.7** |
| 70:30 | 22.5 |
Dividing the tokens into half delivers the best performance. While the pseudo-labels can refine the overconfident predictions trained by groundtruth labels, they may also be noisy for some difficult samples while the groundtruth labels provide the correct supervision. Thus, making the pseudo supervision and the groundtruth supervision equally important is beneficial.
**Distance measurement for one-hot labels.** When using one-hot vector labels, the distance is measured by the cosine similarity between the modality-specific prediction and the groundtruth one-hot vector label. We will add this to the final version.
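A minimal sketch of this selection rule, assuming per-modality predicted distributions and a one-hot ground-truth label (names are illustrative):

```python
import numpy as np

# Sketch of the rule above: among modality-specific pseudo-label
# distributions, pick the one with the highest cosine similarity to the
# one-hot ground-truth vector.

def select_pseudo_label(candidates, onehot):
    """candidates: (M, C) per-modality predicted distributions;
    onehot: (C,) ground-truth one-hot vector."""
    c = np.asarray(candidates, dtype=float)
    sims = (c @ onehot) / (np.linalg.norm(c, axis=1) * np.linalg.norm(onehot))
    return c[np.argmax(sims)]

candidates = [[0.6, 0.3, 0.1],   # modality A: peaked on the true class
              [0.2, 0.7, 0.1]]   # modality B: peaked on a wrong class
onehot = np.array([1.0, 0.0, 0.0])
chosen = select_pseudo_label(candidates, onehot)  # modality A's distribution
```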
**Number inconsistency.** We apologize for this mistake; the accuracy in Table 4 should be 23.7%.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks for the responses. I still want to discuss the following points.
Regarding $L_{align}$, is it correct to say that $u_{n_{m}}$ is not uniquely determined for each modality? In other words, within the set $[u_{1}, …, u_{n_{u}}]$, is it possible for features from different modalities $(f_i, f_j)$ to be assigned to the same $u_{k}$? This scenario would occur if $u_{k}$ is the nearest token for both modalities. Consequently, does the proposed objective facilitate the projection of features from distinct modalities into a shared space? It’s the authors’ claim, right? Please kindly correct me if there are any misconceptions in my understanding.
For the pseudo-labeling, I now understand what the authors tried to do after reading the response. Within this context, I believe that the paper could benefit from a more comprehensive exploration of the pseudo-labeling process. Specifically, the inclusion of analysis or qualitative results of the pseudo-labels generated for various modalities would support the authors' claim and enhance the overall robustness of the paper.
One more thing to point out is that I think the sentence “the RGB modality is more discriminative than optical flow or audio” in line 124 should be revised. For video classification [A, B], optical flow often performs better than RGB, so the statement is not always correct.
[A] : Carreira et. al, Quo vadis, action recognition? a new model and the kinetics dataset, CVPR 2017,
[B] : Wang et. al, Temporal segment networks: Towards good practices for deep action recognition, ECCV 2016
---
Reply to Comment 1.1.1:
Title: Further Clarifications
Comment: **We thank the reviewer for engagement and the opportunity for further clarification**.
**$L_{align}$**. The reviewer is correct that $u_{n_{m}}$ is not uniquely determined for each modality. It is possible for features from different modalities $f_i, f_j$ to be assigned to the same $u_k$ when $u_k$ is the nearest token for both modalities. The proposed objective does facilitate the projection of features from distinct modalities into a shared space. We further verify this claim by computing the feature distance between modalities on EPIC-Kitchens with the variants of our model used in Table 2 of the main paper (we provide more explanations for Table 2 in the response to reviewer 2qSh). Specifically, we first obtain the average feature before the multimodal transformer per class for each modality and then compute the average Euclidean distance between modalities across classes. For RGB & Audio, after adding the feature projection to the vanilla transformer, the average Euclidean distance reduces from 84.1 to 75.4. For RGB & Flow, the distance reduces from 81.3 to 72.5 and for Audio & Flow, it drops from 83.9 to 76.2. We will add this into the paper.
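A minimal sketch of this diagnostic, assuming paired features of the same samples from two modalities with class labels (data and names are illustrative):

```python
import numpy as np

# Sketch of the measurement above: compute the per-class average feature
# for each modality, then the mean Euclidean distance between the two
# modalities' class centroids across classes.

def modality_gap(feats_a, feats_b, labels, num_classes):
    """feats_*: (N, D) features of the same samples from two modalities;
    labels: (N,) class ids. Returns the average centroid distance."""
    dists = []
    for c in range(num_classes):
        mask = labels == c
        mu_a = feats_a[mask].mean(axis=0)   # class centroid, modality A
        mu_b = feats_b[mask].mean(axis=0)   # class centroid, modality B
        dists.append(np.linalg.norm(mu_a - mu_b))
    return float(np.mean(dists))

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=60)
feats_a = rng.normal(size=(60, 8))
feats_b = feats_a + 0.1 * rng.normal(size=(60, 8))  # nearly aligned modality
gap = modality_gap(feats_a, feats_b, labels, num_classes=3)
```

A smaller gap after adding the feature projection (e.g., 84.1 to 75.4 for RGB & Audio in the response above) indicates that the modalities are being projected closer together in the shared space.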
**Comprehensive Exploration of Pseudo-Labeling**. In the second row of Table 3, we show that when using the same supervised loss with groundtruth for both branches instead of the pseudo-labeling, our multimodal model suffers a 2.3% accuracy decrease. This is because the pseudo-labeling eliminates the overfitting to groundtruth labels, which can be harmful when a particular modality combination cannot give a reliable prediction. We also observe that when using the pseudo-labeling, the validation accuracy becomes higher than the groundtruth-supervision only, with the same number of epochs. Since we cannot provide figures in the discussion phase, we describe several examples for various modalities here and will provide the qualitative examples in the appendix.
For the RGB modality, given a video sample of taking a bowl, the pseudo-label has a probability of 0.60 for *take bowl* and 0.30 for *take cup*, since the activity happens out of view and it is hard to judge whether there is a bowl or a cup. For the audio modality, given an audio track of opening a cupboard, the pseudo-label has a probability of 0.50 for *open cupboard* and 0.45 for *open fridge*, as the sounds are similar. For the optical flow modality, given a video sample of taking soy milk, the pseudo-label has a probability of 0.35 for *take soy milk* and 0.65 for *take milk*, since the objects have a similar motion appearance. As a result, forcing our multimodal model to be far away from any of the similar activity classes would result in overfitting. We will add the clarifications into the paper.
**Inappropriate Statement**. The reviewer is quite right that our statement on RGB discriminability is far too general. We will modify the sentence “the RGB modality is more discriminative than optical flow or audio” into “for activity recognition in EPIC-Kitchens videos, the RGB modality is often more discriminative than audio”. Thank you. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null
Monte Carlo Neural PDE Solver | Reject | Summary: The authors propose an unsupervised neural technique for solving PDEs based on the classical correspondence between (parabolic) partial differential equations (PDE) and stochastic differential equations (SDE) as given by the Feynman-Kac formula. Specifically, they propose minimizing the error between the neural approximation's deterministic prediction at timestep t+1 and the expected prediction of the neural approximation over particles that have evolved up to time t stochastically according to the Feynman-Kac SDE representation of the given PDE.
Strengths: Incorporating Feynman-Kac into the PINN framework is an interesting idea.
Weaknesses: # Poor numerics
The experiments do not justify the claims, e.g. that long rollouts are more stable using this method. To prove this, at the very least the authors need to present results where the trajectories actually exhibit turbulence. Then, an ablation with the multi-scale framework is required.
# Poor presentation
The authors cannot expect the reader to be familiar with Feynman-Kac and need to give explanations in plain English of the significance of this result. Then, the equations in the main paper should aim to clarify this further, not give a comprehensive mathematical presentation. For example, the inclusion of the forcing function distracts from the main result which is using time reversal to obtain an SDE that moves in the right time direction for equations that are most often solved using neural networks, e.g. ones with an initial condition, not a final condition.
Furthermore, only the most experienced readers will walk away from this paper with a clear idea of how to implement the proposed algorithm. The emphasis in the paper should be to give the reader something to implement and try, not a theoretical proof.
Finally, there are a number of tricks that are not included in the initial idea and are not sufficiently explained. For example, the Fourier interpolation. The multi-scale framework makes the model non-parametric which is a significant departure from previous work and from the presentation of this paper as a Monte-Carlo approximation to PDEs.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: Please explain the Fourier interpolation, that would be very helpful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and feedback! Based on your comments, we provide some replies to address your concerns as follows:
**The trajectories of NSE.**
Thanks for your suggestion! We add figures in the PDF of the **Global Response B** part to show the variation of trajectories and errors for each neural solver over time.
**An ablation with the multi-scale framework.**
Thanks for your comments! The ablation studies can be seen in Sec. 5.3, where the effects of all tricks are evaluated.
**The equations should aim to clarify this further, not give a comprehensive mathematical presentation.**
Thank you for your feedback! We follow the common description of the Feynman-Kac formula, as most of the related literature does [1, 2, 3]. Since PDEs are a mathematical subject, leveraging advanced mathematical tools is unavoidable, and we believe that NeurIPS readers in this research field can follow it if they are interested in neural PDE solvers or neural operators. Compared to other Feynman-Kac-based papers [1, 2, 3], we have tried to simplify the mathematical formulas in this paper, and we give an intuitive explanation in Lines 36-37: _the probabilistic expression of the PDE regards macroscopic phenomena as ensembles of random movements of microscopic particles._ Furthermore, considering that most PDEs are given in forward form, we introduce how to use time reversal to transform the initial value problem into a final value problem, rather than directly giving the final value form [1, 2, 3]. Readers who are not familiar with the Feynman-Kac formula can take the equivalence between the PDE (Eq. 3) and the corresponding SDE (Eq. 4) as given; the subsequent reading should not be affected much.
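The equivalence can also be sanity-checked numerically. The following minimal sketch (illustrative constants, not our actual implementation) verifies the Feynman-Kac identity $u(x,t)=\mathbb{E}[u_0(x+\sqrt{2\kappa t}\,Z)]$ for a 1D heat equation with a Gaussian initial condition, for which a closed-form solution is available:

```python
import numpy as np

# Feynman-Kac sanity check for the heat equation u_t = kappa * u_xx:
# u(x, t) = E[u0(x + sqrt(2*kappa*t) * Z)],  Z ~ N(0, 1).
# All constants below are illustrative.
rng = np.random.default_rng(0)
kappa, t, x = 0.1, 1.0, 0.5
u0 = lambda y: np.exp(-y**2 / 2)                 # Gaussian initial condition

M = 200_000                                      # number of Monte Carlo particles
Z = rng.standard_normal(M)
u_mc = u0(x + np.sqrt(2 * kappa * t) * Z).mean()

# Closed form: a Gaussian convolved with the heat kernel stays Gaussian.
s2 = 1 + 2 * kappa * t
u_exact = np.exp(-x**2 / (2 * s2)) / np.sqrt(s2)
assert abs(u_mc - u_exact) < 1e-2
```

With $M=2\times 10^5$ particles the Monte Carlo estimate matches the analytical solution to roughly $10^{-3}$, which illustrates the equivalence that Eqs. 3 and 4 state in general form.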
We will seriously consider your suggestions and try to make corresponding modifications in the final version. Any further detailed comments on improving the presentation would be welcome and helpful.
**Authors should give the reader something to implement and try, not theoretical proof.**
Thank you for your suggestions! We show the overall algorithm framework in Appendix A and have uploaded the code (please refer to the **Global Response C** part). We believe readers can understand the implementation details of MCNP from these materials. Furthermore, we hope the theoretical results can help readers understand the advantages of MCNP when handling PDEs with large spatiotemporal variations. However, due to the page limit, we cannot show both the theoretical results and the algorithm framework in the main body. If you think replacing the theorem with the algorithm framework would be better, we will do so in the final version.
**The Fourier interpolation trick.**
Thanks for your question! We use the Fourier transform to map the $N$ low-resolution PDE fields (e.g., $N \times 64$, where $N$ is the batch size) to the frequency domain, and the inverse Fourier transform to remap them to the high-resolution space (e.g., $N \times 256$). The Fourier interpolation trick can be implemented in one line with PyTorch as follows:
```python
# u.shape = (N, 64)
# The factor 4 (= 256/64) compensates for irfft's 1/n normalization, so
# amplitudes are preserved after zero-padding in the frequency domain.
u_super = 4 * torch.fft.irfft(torch.fft.rfft(u), n=256)  # u_super.shape = (N, 256)
```
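As a quick check (illustrative, not from the paper), the upsampling is exact for band-limited inputs: a low-frequency sine sampled at 64 points is reproduced exactly on the 256-point grid.

```python
import torch

# Band-limited signal: a sine with 3 periods on [0, 1).
x64 = torch.arange(64) / 64
x256 = torch.arange(256) / 256
u = torch.sin(2 * torch.pi * 3 * x64).unsqueeze(0)       # shape (1, 64)

# Fourier interpolation from 64 to 256 points (4 = 256/64 restores amplitude).
u_super = 4 * torch.fft.irfft(torch.fft.rfft(u), n=256)  # shape (1, 256)

expected = torch.sin(2 * torch.pi * 3 * x256).unsqueeze(0)
assert torch.allclose(u_super, expected, atol=1e-4)
```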
**The multi-scale framework.**
In this paper, we aim to train neural PDE solvers in an unsupervised manner, especially in scenarios with high-frequency components. In such scenarios, stably simulating PDE fields over long horizons is challenging due to the absence of supervised data and the large deformation of the PDE fields. Therefore, we propose a multi-scale framework to enhance the robustness of MCNP, which is indeed less relevant to the Feynman-Kac formula but plays an important role in the tasks targeted in this paper (please refer to Sec. 5.3 for ablation studies).
We hope our rebuttal can address your concerns. Also, we would like to know whether there are any other questions, and we are happy to answer and discuss them. If the major concerns have been addressed, could you please kindly raise the rating?
[1] Han J, Jentzen A, E W. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 2018.
[2] Richter L, Berner J. Robust SDE-based variational formulations for solving linear PDEs via deep learning. ICML, 2022.
[3] Berner J, Dablander M, Grohs P. Numerically solving parametric families of high-dimensional Kolmogorov partial differential equations via deep learning. NeurIPS, 2020. | Summary: The authors present MCNP, a new unsupervised training loss for surrogate simulation networks. This loss is based on the link between stochastic processes and PDEs, sampling one-step Brownian motion to estimate the PDE solution. The learned network takes an initial state and the target simulation time to compute the state at that time in one pass. For longer periods, multiple NNs are trained, one for each sub-interval.
Strengths: The paper is generally well-written and relatively easy to understand. It includes a good overview over related work.
The paper includes both theoretical and numerical results. The method is derived using the Feynman-Kac formula and the authors show how the errors of PSM and MCM scale when given an incorrect input state, such as predicted by a neural network.
A total of five numerical experiments are performed, covering a large range of simulation configurations. The paper contains an ablation study, giving some insight into the impacts of various parts of the MCNP method.
Weaknesses: While the experiments are varied in the tested configurations, all but one experiment considers simple diffusion equations, some of which can be solved analytically.
The paper does not show any simulation trajectories and gives no insight into how the various tested methods behave in their experiments. Instead, only the final losses are reported. This makes it hard to determine the cause of improvement from the numerical results. I strongly recommend documenting your observations in the appendix.
The Navier-Stokes experiment seems to be mostly forcing-driven with all initial states ending in a similar configuration. A different forcing, such as Kolmogorov flow, would result in a much wider range of trajectories.
The paper does not give details as to how the numerical simulation (PSM) of the Navier-Stokes experiment was performed. Please describe the simulator in more detail if you implemented it yourself.
The source code is not part of the submission and the authors have not declared their intent to make it public. I strongly recommend doing so, especially if the employed CFD solver was implemented from scratch.
Minor:
* Eq. 3: Please indicate the x-dependence of xi
* Fig 1 caption: The formulas are missing a factor of 1/M
* L184: You mention that cutting the gradients prevents numerical instabilities but ignore the positive effects that gradient backpropagation can have.
* You refer to Eq.13 as a convection-diffusion equation but it does not contain a convection term.
* L199: The notation Δ ₜ u is confusing since Δ is already in use.
* L216: The argument that label noise helps MCNP perform coarser time steps seems unfounded to me. This requires an explanation.
* L237: By lattices, you probably mean frames or time steps?
* L268 and Eq. 17: You can simplify the forcing to a single sine term.
* Figure 2 is never referenced.
* Figure 2 shows the vorticity, correct? Please specify in the caption.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: You state that the random walk of particles should stop when they hit a Dirichlet boundary. Is this still true when performing multiple steps for the time integration of the stochastic process? I think this would over-value the boundary condition, as particles are denied the chance to re-enter the domain.
Using the Feynman-Kac formula, you could also derive a formulation where the stochastic process is simulated forward in time instead of backward. Have you tried this?
In your periodic diffusion experiments, PSM should be equal to a purely spectral method, right?
Why did you choose a vorticity formulation for the Navier-Stokes experiment instead of a velocity formulation?
How did you choose the temporal discretization (like 100 for PSM, 2000 for PSM+) for the tested methods?
The Fourier trick seems to emulate interpolation on the grid. Have you tried linear interpolation or Fourier-upsampling + linear interpolation?
You say that you run some baselines with PyTorch and others with Jax. Is JIT compilation enabled on all examples? How big do you think the performance difference would be when switching all to one library?
It looks like the PSM methods fail due to the time increment being too large. Have you looked at the CFL numbers? Would this problem be resolved by dynamic time steps?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! According to your constructive comments, we make some replies as follows:
**W1 & W3. The experiments.**
To address your concerns, we added an experiment simulating Kolmogorov flow (see **Global Response B**). Furthermore, we chose PDEs with analytical solutions to avoid the bias of numerical solvers. For example, PINO is based on PSM, and it would be unfair to MCNP if the ground truth were generated via PSM. Using PDEs with analytical solutions is also a common approach to evaluating performance in computational mathematics [1, 2].
**W2. Simulation trajectories.**
Thanks for your suggestions! We add figures in the PDF of **Global Response** to show the variation of trajectories and errors over time.
**W4, Q3, Q5 & Q8. Implementation of PSM.**
We use the code from the original neural PDE paper [3], whose implementation is a standard benchmark in neural PDE papers. For the diffusion equation, PSM is equal to the purely spectral method. For temporal discretization, we set PSM to align with the time step of PINO, hoping to reveal why PINO fails. For PSM+, we refine its time step so that it succeeds on most problems. We acknowledge that PSM can outperform all ML-based methods if we further refine its grid, and advanced tricks such as dynamic time steps could enhance its performance. In this paper, we only consider the standard implementation used in most operator learning papers [3, 4]. Even compared with PSM on the coarse grid, neural solvers still enjoy a 20x speed-up.
**W5. Source code.**
We have uploaded the code (please refer to **Global Response. C**).
**Minor Problems**
Please note that we have updated the theorem parts in **Global Response A**. By lattices we mean the time steps in this paper. Fig. 2 shows the vorticity.
For label noise, consider learning the network with the noisy label $y=y_g+e_d+e_r$, where $y_g$ is the ground truth, $e_d$ is a deterministic bias, and $e_r$ is random noise that can change per epoch during training. Some papers have discovered that $e_{r}$ can help training and even counteract $e_d$ [5]. We regard the deterministic error and the random error introduced by MCM as $e_d$ and $e_r$, respectively. However, we acknowledge that a rigorous proof still requires further analysis, and the main challenge is the non-convexity of neural networks. To demonstrate this empirically, we add an experiment revealing the effect of M on MCNP (settings align with Sec. 5.1, $\kappa$=0.02).
|M|32|64|128|256|
|-|-|-|-|-|
|Error, N=6|3.467$\pm$ 0.470|3.727$\pm$ 1.587|3.543$\pm$ 1.633|3.648$\pm$ 1.222|
|Error, N=12|6.322$\pm$ 0.991|6.575$\pm$ 1.948|6.564$\pm$ 1.902|8.731$\pm$ 2.738|
We can see that:
- The results of MCNP are relatively robust to M.
- A larger M is not necessarily better, which indicates that keeping the noise within a certain level can help generalization.
Thanks for your feedback! We will clarify them in the final version.
**Q1. Dirichlet boundary.**
We stop the random walk when the particle hits the boundary, which is consistent with the mathematical principle of the Feynman-Kac theorem. A more detailed mathematical explanation can be found in the lecture notes [6] (Theorem 4.2.1).
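For intuition, here is a minimal sketch (illustrative constants, not our actual code) of the stopped random walk on the unit interval with zero Dirichlet boundaries; particles frozen at the boundary contribute the boundary value (here 0) to the estimate:

```python
import numpy as np

# Heat equation u_t = kappa * u_xx on [0, 1], u = 0 on the boundary.
# Particles perform a random walk and are frozen once they exit the domain;
# frozen particles contribute the boundary value 0. All constants illustrative.
rng = np.random.default_rng(1)
kappa, dt, n_steps, M = 0.1, 1e-3, 100, 50_000
u0 = lambda y: np.sin(np.pi * y)         # eigenfunction: decays as exp(-kappa*pi^2*t)

x = np.full(M, 0.5)                      # all particles start at x = 0.5
alive = np.ones(M, dtype=bool)
for _ in range(n_steps):
    x[alive] += np.sqrt(2 * kappa * dt) * rng.standard_normal(alive.sum())
    alive &= (x > 0) & (x < 1)           # stop particles at the Dirichlet boundary

u_mc = np.where(alive, u0(x), 0.0).mean()
u_exact = np.exp(-kappa * np.pi**2 * n_steps * dt) * u0(0.5)
assert abs(u_mc - u_exact) < 0.02
```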
**Q2. Simulation of SDE.**
In theory, the SDE is given in a **backward** form by the Feynman-Kac formula [6] (Theorem 4.1.2). In practice, we tried to simulate the SDE with a forward formula but failed to obtain meaningful results.
**Q4. Vorticity formulation.**
We choose this formulation to align with the settings in other neural PDE papers [3, 4].
**Q6. Fourier Interpolation.**
Before writing this paper, we tried other interpolation tricks, including linear and bilinear interpolation. However, Fourier interpolation obtained the best performance. One potential reason is that the Fourier transform is more compatible with the spectral characteristics of PDEs.
**Q7. PyTorch v.s. Jax.**
In this paper, we only use Jax to conduct the experiments of PI-DeepONet due to the absence of PyTorch code. In the code of PI-DeepONet, JIT compilation is enabled, while the other, PyTorch-based methods do not use it. According to the literature, the main differences between deep learning frameworks are speed and memory [7]. As reported in [8], _the JAX implementation is about 2.5-3.4x faster than PyTorch! However, with larger models, larger batch sizes, or smaller GPUs, the speed-up is expected to become considerably smaller._ We use PyTorch as the main framework because it is the most popular in operator learning papers, and we hope to align our experimental settings with other papers.
We hope our rebuttal can address your concerns. Also, we would like to know whether there are any other questions, and we are happy to answer and discuss them. If the major concerns have been addressed, could you please kindly raise the rating?
[1] Labovsky A E. Approximate deconvolution with correction: A member of a new class of models for high Reynolds number flows. SIAM Journal on Numerical Analysis, 2020.
[2] Li B, Zhang J, et al. Stability and error analysis for a second-order fast approximation of the one-dimensional Schrodinger equation under absorbing boundary conditions. SIAM Journal on Scientific Computing, 2018.
[3] Li Z, Kovachki N, et al. Fourier neural operator for parametric partial differential equations. ICLR, 2020.
[4] Wu T, Maruyama T, et al. Learning to accelerate partial differential equations via latent global evolution. NeurIPS, 2022.
[5] Chen P, Chen G, et al. Noise against noise: stochastic label noise helps combat inherent label noise. ICLR, 2020.
[6] Nolen J. Partial differential equations and diffusion processes. Lecture Notes, Stanford University, 2009.
[7] Paszke A, Gross S, et al. Pytorch: An imperative style, high-performance deep learning library. NeurIPS, 2019.
[8] Tutorial 5 (JAX): Inception, ResNet and DenseNet.
---
Rebuttal Comment 1.1:
Title: Some further questions
Comment: Thank you for answering my questions and providing additional experiments.
The inclusion of Kolmogorov flow and example trajectories strengthens the paper. I hope you can also provide trajectories for the other experiments in the appendix and plot more than just one example per experiment.
Your discussion still does not explain in what way your method leads to more stable inferred trajectories. I realize that a detailed analysis of the advantages and disadvantages is not something you can do in a week but still I’d appreciate any insight you can give into why and how your method performs better than each of the baselines. What kind of mistakes do the different methods tend to make?
In Fig. 2 of the attached PDF page, what is going on with FNO in the case $\nu=10^{-3}$? Also, the error metric does not seem to match Table 2 from the main paper. E.g. for $\nu=10^{-4}$ and $T=15$, Table 2 claims a relative error of 6.553% for MCNP while the diagram shows about 14%.
I will update my rating after discussing with the other reviewers and the AC.
*Side note:* As a reviewer, I assign my ratings as objectively as I can. Of course, I will take the rebuttal into account. However, I do not appreciate being asked by the authors to raise my rating. I’d hate for OpenReview to turn into a platform where authors must beg for their scores to be raised.
---
Reply to Comment 1.1.1:
Title: Response to further questions
Comment: Thank you for the further comments. Here are our responses:
**1. Trajectories for the other experiments.**
Thanks for your suggestion! We will add additional examples in the final version.
**2. Why and how your method performs better than each of the baselines?**
Compared to FNO, which uses pre-simulated fixed data for training, MCNP can sample new initial fields per epoch, increasing the diversity of training data. As a result, MCNP can achieve similar or better results, especially when the PDE fields vary at the final time for different initial fields, where more data are required by supervised methods. This can be seen in cases like the diffusion equation with $N=12$ (as discussed in Lines 253-255).
Compared to PINO, MCNP is more robust against spatiotemporal variations due to the benefits of MCM (proved in Theorem 4.1 for convection-diffusion equation). Moreover, we propose the multi-scale framework to enhance the long-time simulation ability. Therefore, MCNP can outperform PINO significantly if PSM cannot simulate the PDE fields accurately with a relatively coarse time step, as in the cases with large spatiotemporal variations (such as diffusion equation with $\kappa=0.2$ and the Kolmogorov flow, discussed in Lines 255-256). Please note that we cannot make the time step in PINO sufficiently small due to the training cost.
**3. FNO in the case $\nu=10^{-3}$.**
For NSE with $\nu=10^{-3}$, the external forces make different initializations converge to the same final vorticity fields, which makes the final vorticity fields easier to learn. Moreover, the spatiotemporal variations are small in the low Reynolds number case. Therefore, the relative error for FNO is decreasing over time. On the other hand, unsupervised methods such as MCNP and PINO have an increasing error over time due to the numerical errors accumulated in each iteration.
**4. The error metric in Table 2.**
In Sec. 5.2, we conduct two kinds of experiments with different time ranges: $T\in[0,10]$ and $T\in[0,15]$. For each time range, we train neural PDE solvers separately and report the average relative error over time in Table 2. Therefore, the value of 6.553% in Table 2 corresponds to the average error from $t=0$ to $t=15$, while the value of ~14% in the figure is the relative error at $t=15$.
Thanks for your question! We will clarify it in the final version.
---
Reply to Comment 1.1.2:
Title: Any Further Questions are Welcomed!
Comment: Dear Reviewer urJf,
Considering the deadline of the current stage is approaching, we hope to know if there are any other concerns we haven’t addressed or any flaws in our rebuttal. We would be happy to discuss them with you and address them in future versions of the paper, which will also help us improve the quality of our work.
Many thanks for your time and constructive comments!
Sincerely,
Paper 2692 Authors | Summary: The authors propose Monte Carlo Neural PDE Solver (MCNP Solver) which leverages the Feynman-Kac formula to train neural PDE solvers in an unsupervised manner.
I'm willing to revise my score based on the rebuttal from the authors to the questions that I raised below.
Strengths: * The paper addresses an interesting problem: learning the neural operator in an unsupervised way.
* The authors propose practical enhancements to their method such as one-step rollout, Fourier Interpolation and the use of a multi-scale framework.
Weaknesses: * Limited set of experiments, only two cases: 1d diffusion and 2d Navier-Stokes.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Why aren't some other equations, such as the Poisson equation, Schrodinger equation, or Allen-Cahn, run?
* To my understanding, you use the multi-scale framework to achieve longer simulations. Do you have an experiment that shows the benefit of this? Especially an experiment that shows how alternative methods break at certain points whereas MCNP lasts longer.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: * The limitations that I see are encapsulated on the questions that I raised above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback! Based on your constructive comments, we provide some replies to address the weaknesses and questions:
**Weakness & Q1. The PDE types in the experiments of this paper**
Thank you for your question! Besides the 1D diffusion equation and 2D NSE, we also conduct additional experiments in the supplementary material (Sec. 5.4, Appendix C), including heat diffusion on a circular ring, fractional diffusion equations, and fractional diffusion equations with irregular grids. We choose the experiments according to the criterion that each experiment should convey new information to the readers. To clearly summarize our experiments, we list each experiment and corresponding insight as follows:
1. Diffusion equation (Sec. 5.1): echo our main motivation and theoretical result.
2. NSE (Sec. 5.2): a standard benchmark for neural PDE. We evaluate the ability of MCNP for handling large spatiotemporal variations and long-time simulation.
3. Heat diffusion on a circular ring (Appendix C.1): demonstrate the ability of MCNP to handle different boundary conditions.
4. Fractional PDEs (Appendix C.2): demonstrate the ability of MCNP to handle fractional Laplacian.
5. PDEs with irregular grids (Appendix C.3): demonstrate the ability of MCNP to handle irregular grids.
6. Kolmogorov flow (please refer to **Global Response B**): demonstrate the ability of MCNP to handle chaotic systems.
The Poisson and Schrodinger equations do not fall within the scope of MCNP (Eq. 1), and some mathematical transformations would be required. Therefore, we do not include these PDEs in the current version. Compared to the NSE, the Allen-Cahn equation is generally simpler and less representative in low-dimensional cases, and it may not provide new information about the characteristics of MCNP. Therefore, we choose NSE to demonstrate the performance of MCNP Solver, which is also a standard benchmark in most neural operator learning papers [1, 2].
**Q2. The multi-scale framework**
Thank you for your question! The ablation study of the multi-scale framework can be seen in Sec. 5.4, where MCNP-~~MS~~ denotes the MCNP Solver without the MS trick. We also add figures in the PDF file of the **Global Response B** part to show the variation of trajectories and errors for each neural PDE solver over time.
We hope that our rebuttal can address your concerns. Also, we would like to know whether there are any other questions about our work, and we are happy to answer and discuss them. If the major weaknesses and questions have been addressed, could you please kindly raise the rating?
[1] Li Z, Kovachki N, Azizzadenesheli K, et al. Fourier neural operator for parametric partial differential equations. ICLR, 2020.
[2] Wu T, Maruyama T, Leskovec J. Learning to accelerate partial differential equations via latent global evolution. NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Title: A further explanation to the effects of multi-scale framework
Comment: Dear Reviewer R6Cs,
We would like to add a further explanation to clarify the effects of the multi-scale (MS) framework in our paper.
In Section 5.4, we conducted the ablation study of the multi-scale framework to evaluate its effect, where MCNP-~~MS~~ denotes the MCNP without using the MS trick. To better address your concerns, we show the relative error of each step with MCNP and MCNP-~~MS~~ as follows:
|Time|2|4|6|8|10|12|14|
| - | - | - | - | - |-|- |-|
|Error, MCNP|5.940$\pm$ 0.467|5.983$\pm$ 0.330|5.910$\pm$ 0.232|6.873$\pm$ 0.274|7.333$\pm$ 0.147|10.363$\pm$ 0.266|15.367$\pm$ 0.428|
|Error, MCNP-~~MS~~ |24.941$\pm$ 2.323|24.876$\pm$ 2.280|21.985$\pm$ 1.946|20.809$\pm$ 1.632|22.325$\pm$ 1.392|26.269$\pm$ 1.188|32.465$\pm$ 0.851|
According to the above table, MCNP significantly outperforms MCNP-~~MS~~ throughout the entire time range, rather than after a certain time point. Therefore, the multi-scale framework can help the neural PDE solver obtain a more robust simulation during the whole time range for long-time simulation tasks. The potential reason for this has been discussed in our paper, _due to the independent parameterization and stop-gradient operator, the proposed multi-scale framework can prevent the prediction at time $t^{\prime}$ from producing harmful effects on the former time $t \leq t^{\prime}$ in the optimization stage_ (Lines 182-184).
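A minimal sketch of this stop-gradient mechanism as described above (placeholder linear layers stand in for the actual solver networks, so this is illustrative rather than the MCNP architecture): each sub-interval has its own network, and `detach()` blocks gradients from later sub-intervals from reaching earlier ones.

```python
import torch

# One network per sub-interval; detach() is the stop-gradient operator, so the
# loss at a later time cannot affect the parameters of earlier sub-intervals.
# Linear layers are placeholders for the actual solver networks.
nets = [torch.nn.Linear(8, 8) for _ in range(3)]

def rollout(u0):
    states, u = [], u0
    for net in nets:
        u = net(u.detach())              # stop-gradient between sub-intervals
        states.append(u)
    return states

states = rollout(torch.randn(2, 8))
states[1].sum().backward()               # loss on the second sub-interval only
assert nets[0].weight.grad is None       # earlier network receives no gradient
assert nets[1].weight.grad is not None
```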
We hope our rebuttal and this new message can fully address your concerns. Also, if you have any other questions about our work, please do not hesitate to contact us.
Sincerely,
Paper 2692 Authors | Summary: Designing neural PDE sovler using deep neural networks is a challenging task for which several solutions have been proposed in the literature using for instance networks that encode the initial conditions or physics informed neural networks.
The authors propose to use Monte Carlo methods to train neural PDE solver for the solution of a general convection-diffusion equation.
Using the Feynman-Kac formula, the authors derive a loss function that can be used to learn a mapping that simulates the target fields from the input parameters and the initial condition.
They propose a theoretical guarantee on the solution provided by the Monte Carlo solver, and the paper illustrates the performance of the proposed method with a one-dimensional differential equation and a two-dimensional Navier-Stokes equation.
Strengths: The paper proposes a Monte Carlo-based PDE solver, trained via Monte Carlo approximation, which can handle coarse time steps better than existing alternatives.
The proposed method does not require many particles in the numerical experiment and is computationally efficient in the settings explored.
Under some assumptions, the authors propose an upper bound on the error, explicit in some hyperparameters of the approach, in the case of a convection-diffusion equation.
Weaknesses: Theorem 1 is obtained under several assumptions.
These assumptions should be discussed more: are they restrictive, or common in the PDE literature?
The claim that the approach is efficient even with few samples seems correct in the proposed experiments. However, these experiments are in dimensions 1 and 2, and Monte Carlo methods can be cumbersome in high-dimensional settings without tuning.
The authors could detail the explicit guidance or theoretical guarantees available for the error with respect to M.
The authors only explore one discretization scheme (Euler); the scheme used could have an impact on the performance of the method. Can this be discussed?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The algorithm relies on many hyperparameters which appear in the upper bound of Th.1. The dependency on these hyperparameters could be discussed more. For instance, the authors claim that the third term in (15) can be controlled by the number of samples M, and that an excessive number of particles is not required in practice.
Is it possible to provide an explicit way to balance each term in the upper bound to guarantee a given precision? How to choose M with respect to N or $\Delta t$, for instance?
Section 4 focuses on a specific case where parameters are constant. Can the authors elaborate on the difficulties (practical and theoretical) when these parameters are not constant?
In Section 3.3, Eq. (11), the authors propose to use an Euler scheme to sample the stochastic process. Can the results be improved by choosing another discretization scheme? Is this a sensitive step of the implementation?
In comparison with PSM, which requires decreasing $\Delta t$ to improve the upper bound (which is costly), the proposed method seems less sensitive, and increasing M is enough to reduce the impact of the additional term. In terms of computational complexity, is increasing M not too intensive?
The simulation study provides applications in dimensions 1 and 2; do you have any insight into how the method scales with d?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide several research perspectives for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and valuable feedback! According to your constructive comments, we make some replies to the weaknesses and questions:
**W1. The assumption in Theorem.**
The assumptions are reasonable for most cases and common in the PDE literature:
- The solution can be expressed via Fourier basis.
_Fourier series were historically developed in the analysis of classical PDEs in mathematical physics; these series were used to express the solution of such equations_ [1]. Moreover, many numerical methods assume the solution of PDE can be expressed by the Fourier basis, e.g., the algorithm and theoretical analysis in [2] and Theorem 2.1 in [3].
- The solution and its derivatives are Lipschitz functions.
Lipschitz assumptions are common in PDE literature. For example, paper [4] uses the property that the solution and derivatives are Lipschitz bounded in Lemma 9. Furthermore, most theorems in [5] rely on the Lipschitz assumption.
We appreciate your feedback and will discuss these in the final version.
**W2 & Q4. Experiments are in 1D and 2D.**
We follow the common experimental setting in the *Neural Operator* research field, where most papers consider 1D and 2D PDEs. To the best of our knowledge, only very few papers generalize supervised operator learning methods to 3D scenarios [6], and no paper considers an unsupervised neural operator in 3D. Generalizing MCNP to high-dimensional problems would require more powerful tools (such as transformers), and we regard this as important future work.
**W2, Q1 & Q3. Effect of M.**
The theorem for MCM has another equivalent expression:
- With probability at least $1-\gamma$, we have:
$$
|u_{t+\Delta t}^{MCM}(x) - u_{t+\Delta t}(x)| \leq \frac{1}{2H}\sum_{n=1}^N |na_n| + \sum_{n=1}^N |\delta_n| + \frac{\sqrt{4\kappa\Delta t}L_u^x}{\sqrt{M\gamma}}.
$$
Thank you for your suggestion. We will adopt this form in our final version. The theorem shows that the error term $\frac{\sqrt{4\kappa\Delta t}L_u^x}{\sqrt{M\gamma}}$ ($E_3$) can be controlled by the sampling number M. For MCM, we have to increase M to lower the error as $\Delta t$ and $\kappa$ increase. Please note this theorem is proved for MCM, and things are different for MCNP due to the use of neural networks. As discussed in the paper, $E_3$ _stems from the variance of random processes and can be regarded as a type of stochastic label noise. Some studies have found that such noise can aid generalization and counteract inherent biases._ To demonstrate this empirically, we add an experiment to reveal the effects of M on MCNP (the settings align with Sec. 5.1, $\kappa=0.02$).
|M|32|64|128|256|
|-|-|-|-|-|
|Error, N=6|3.467$\pm$ 0.470|3.727$\pm$ 1.587|3.543$\pm$ 1.633|3.648$\pm$ 1.222|
|Error, N=12|6.322$\pm$ 0.991|6.575$\pm$ 1.948|6.564$\pm$ 1.902|8.731$\pm$ 2.738|
According to the table, we can see:
1. The results of MCNP are relatively robust to M.
2. A larger M is not necessarily better, which indicates that keeping the noise within a certain level can help generalization.
Theoretically, analyzing the effects of M on MCNP more precisely requires considering the gradient flow during the training stage, and the main challenge is the non-convexity of neural networks. Therefore, we have acknowledged this limitation in the paper and regard it as future work.
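As a purely illustrative sketch of why the sampling number $M$ controls the statistical error term $E_3$ (for the plain MCM estimator, not MCNP, whose training dynamics are harder to analyze), consider the Feynman–Kac estimate of a toy heat equation, whose error shrinks like $1/\sqrt{M}$:

```python
import numpy as np

# Toy Feynman-Kac / Monte Carlo estimate for u_t = kappa * u_xx with u0 = sin:
#   u(x, t) ~= mean_i u0(x + sqrt(2 * kappa * t) * Z_i),  Z_i ~ N(0, 1).
# The statistical error of this plain MCM shrinks at the 1 / sqrt(M) rate.
rng = np.random.default_rng(0)
kappa, t, x = 0.02, 1.0, 0.3
u_exact = np.exp(-kappa * t) * np.sin(x)          # exact solution for this mode

def mcm_error(M, reps=200):
    z = rng.standard_normal((reps, M))
    est = np.sin(x + np.sqrt(2.0 * kappa * t) * z).mean(axis=1)
    return float(np.abs(est - u_exact).mean())    # mean absolute error over reps

errs = {M: mcm_error(M) for M in (32, 128, 512)}
# Quadrupling M roughly halves the error.
assert errs[32] > errs[128] > errs[512]
```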
**W3 & Q3. Discretization scheme.**
In this paper, we utilize the Euler–Maruyama method to approximate the SDE in Eq. 11. Before that, we tested other discretization schemes for the SDE, including the Runge–Kutta and Heun methods. However, these methods do not yield a significant improvement for MCNP while introducing considerable computational cost. The results of these different discretization schemes are listed as follows (NSE data with $\nu=10^{-5}, T=15$):
||EM|RK|Heun|
|-|-|-|-|
|Error|8.667$\pm$ 0.350|8.648$\pm$ 0.266|8.621$\pm$ 0.318|
|Time|1.458|2.162|1.971|
To the best of our knowledge, other Feynman–Kac-based PINNs also do not use high-order discretization schemes. One potential reason is that these schemes may introduce extra optimization difficulties for the neural network and thus cannot work as expected.
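The Euler–Maruyama step itself is straightforward; here is a minimal sketch (assuming a drift-free diffusion $dX_s = \sqrt{2\kappa}\,dW_s$ purely for illustration; the SDE in Eq. 11 may contain additional terms):

```python
import numpy as np

# Euler-Maruyama simulation of dX_s = sqrt(2 * kappa) dW_s over [0, t]:
# each step adds an independent Gaussian increment of variance 2 * kappa * dt.
rng = np.random.default_rng(1)
kappa, t, n_steps, M = 0.02, 1.0, 50, 10_000
dt = t / n_steps

x = np.zeros(M)                                   # M sample paths starting at 0
for _ in range(n_steps):
    x = x + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(M)

# The endpoint variance should match the exact value 2 * kappa * t = 0.04.
assert abs(x.var() - 2.0 * kappa * t) < 5e-3
```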
**Q2. When parameters are not constant.**
We have updated our theorem results in the **Global Response. A**. The main difficulty lies in the theoretical analysis. When $\beta$ is dependent on $x$, we need to estimate the error bound between the stochastic integral of ground-truth $\beta(x)$ and the simulated one $\beta(\hat{x})$ from $t$ to $t+\Delta t$. Please note that $\beta$ is involved in the random walk of $x$, and thus a composite structure arises. A rigorous proof requires more refined analysis and advanced mathematical tools in stochastic analysis.
We hope our rebuttal can address your concerns. Also, we would like to know whether there are any other questions, and we are happy to answer and discuss them. If your major concerns have been addressed, could you please kindly raise the rating?
[1] Plonka G, Potts D, et al. Numerical Fourier analysis. Basel: Birkhäuser, 2018.
[2] Burns K J, Vasil G M, et al. Dedalus: A flexible framework for numerical simulations with spectral methods. Physical Review Research, 2020.
[3] Gu Y, Shen J. An efficient spectral method for elliptic PDEs in complex domains with circular embedding. SIAM Journal on Scientific Computing, 2021.
[4] Chassagneux J F, Crisan D, et al. Numerical method for FBSDEs of McKean–Vlasov type. The Annals of Applied Probability, 2019.
[5] Kovachki N, Li Z, et al. Neural operator: Learning maps between function spaces with applications to PDEs. JMLR, 2023.
[6] Peng W, Yuan Z, et al. Linear attention coupled Fourier neural operator for simulation of three-dimensional turbulence. PoF, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications and for the answers to all reviewers, in particular for the additional experiments.
I will update my ratings after discussions with other reviewers. | Rebuttal 1:
Rebuttal: ## **Global Response**
**A. Errata of the main Theorem**
We found a typo in the main theorem, and we fix it as follows:
- The convection-diffusion equation in Eq. 13 should be:
$$
\frac{\partial u}{\partial t} = \kappa \Delta u + \beta \frac{\partial u}{\partial x},
$$
where $\beta \frac{\partial u}{\partial x}$ and $\kappa \Delta u$ denote the convection and the diffusion term, respectively.
- The bound and proof of MCM don't need to be corrected, and the error bound of PSM should be corrected as follows due to the involvement of the convection term:
$$
\left|u_{t+\Delta t}^{\operatorname{PSM}}(x) - u_{t+\Delta t}(x)\right| \leq \sum_{n=1}^N \frac{(|\kappa L_{\Delta u}^{t}| + |\beta L^t_{\partial_x u}|) {\Delta t}^2}{2}+\sum_{n=1}^N (|\delta_n(\kappa n^2 \Delta t - 1)| + |\beta n \Delta t\delta_n|).
$$
The additional error term arises from the convection term.
**B. New experimental results**
In response to the reviewers’ request, we add some new numerical results as follows:
**B.1. NSE with Kolmogorov forcing.**
Apart from the force term introduced in Eq. 17, we also add an experiment to simulate NSE with Kolmogorov forcing [1]. We set the external forcing as $f(x) = 0.1\cos(8\pi x_1)$ and the viscosity term as $10^{-4}$. Other settings are in line with the ones in Sec. 5.2. The performances of all methods are presented in the following table:
|Method|PSM|PSM+|FNO|PINO|MCNP|
| - | - | - | - | - |-|
|Error, T=10|NAN|0.222|5.050$\pm$ 0.081|8.806$\pm$ 0.240|7.232$\pm$ 0.100|
|Error, T=15|NAN|0.319|9.738$\pm$ 0.219|26.250$\pm$ 0.608|10.747$\pm$ 0.346|
Please note that PSM and PSM+ are traditional numerical solvers, FNO is the supervised neural operator, and PINO and MCNP are trained in an unsupervised manner. We will integrate this new result into Table 2 of the original paper.
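For concreteness, the Kolmogorov forcing above can be laid out on a periodic grid as follows (the $64\times64$ resolution over $[0,1]^2$ is our illustrative assumption; the rebuttal does not state the grid size):

```python
import numpy as np

# Kolmogorov forcing f(x) = 0.1 * cos(8 * pi * x_1) on a periodic 2D grid.
n = 64
x1 = np.linspace(0.0, 1.0, n, endpoint=False)
X1, _X2 = np.meshgrid(x1, x1, indexing="ij")      # X1 varies along axis 0
f = 0.1 * np.cos(8.0 * np.pi * X1)                # constant along x_2

assert f.shape == (n, n)
assert np.allclose(f[:, 0], f[:, -1])             # forcing depends on x_1 only
```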
**B.2. Simulated trajectories for each neural PDE solver.**
In Fig. 1 of the attached PDF file, we show the ground-truth vorticity versus the predictions of the learned neural solvers for an example in the test set from $t=3$ to $t=15$, with viscosity $\nu=10^{-4}$. For both forcings, the unsupervised MCNP obtains simulation results comparable to those of the supervised method FNO. Furthermore, PINO fails to capture the details and trends of the fluid fields when $T\geq9$.
**B.3. Variation of errors for each neural PDE solver.**
In Fig. 2 of the attached PDF file, we compare the relative error of each time step with different neural PDE solvers. We summarize our observations as follows:
1. PINO fails to simulate NSE with $\nu \leq 10^{-4}$, where the fluid field changes drastically; in this regime, learning the later vorticity fields can adversely affect the earlier ones for PINO.
2. MCNP obtains results comparable to the supervised method FNO on most tasks when $t\leq10$, while FNO gives more precise predictions for $t>10$. As discussed in our paper, _FNO directly uses the ground-truth data as training labels for all $t\in[0,T]$, thus avoiding accumulated errors arising from the calls of the solver during the training stage like other unsupervised methods._ Moreover, when FNO simulates NSE with $\nu=10^{-3}$, we notice that the relative error even decreases over time. A possible reason is that in the low-Reynolds-number case, due to the presence of external forces, different initializations tend to reach the same final vorticity fields, which makes learning easier for the supervised method.
3. For Kolmogorov flow, MCNP and FNO have similar performance over $t\in[0,15]$. The reason is that the final fields differ substantially across the datasets due to the Kolmogorov forcing, so the supervised method needs more data to achieve good performance.
**C. Code**
In response to the reviewers’ request, we decided to publish our code immediately. According to the NeurIPS review policy, _If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment (make sure the code itself and all related files and file names are also completely anonymized)_.
We have submitted the code link to the AC and the code will be public once the paper is accepted.
[1] Smaoui N, El-Kadri A, Zribi M. On the Control of the 2D Navier–Stokes Equations with Kolmogorov Forcing. Complexity, 2021.
Pdf: /pdf/d8925a679372f2fc1e358234b4532b03a4443685.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a new physics-informed neural-network-based solver that utilizes the connection between PDEs and SDEs. This is achieved through the Feynman–Kac formula and applies to a large class of PDEs. It comes with a bound on the error at each step in the rollout. The results are compared with multiple supervised and unsupervised PDE solvers on a number of 1D and 2D equations.
Strengths: Originality: To the best of my knowledge, the paper is original in combining the Feynman-Kac-based approach with a neural operator architecture. The connections to existing work that relies on the Feynman-Kac formula as well as other PINN/NO approaches are discussed in detail in the main paper and the appendix.
Quality: The work is thorough in discussing existing literature and presenting the methodology. The theoretical result gives some intuition relating to how the proposed method scales as compared to the classical solver.
Significance: The methodology combines a number of existing techniques (Feynman-Kac formulation, neural operators, Fourier interpolation) to achieve some improvement in specific setups, e.g. where the solution of the PDE is rapidly varying in space and/or time.
Weaknesses: Clarity: The quality of the writing could be improved, particularly in the abstract/introduction. The rest of the paper is detailed enough in describing the experiments and discussing the results.
Significance: The proposed approach seems to give an advantage only in specific situations. While this is acknowledged in the paper, it could be beneficial to discuss specific applications where such oscillatory conditions occur.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What are FDM and PSM as mentioned in the abstract?
2. Is PSM a spectral solver?
3. Line 119: Can you explain in more detail what is meant by inversion of $\xi$?
4. What is the number of dimensions in the latent space (i.e. the number of terms in the Fourier expansion) for the FNO method in your experiments? I assume increasing the number of terms in the expansion would improve the performance of FNO for high-frequency initial states and might not affect the computational cost hugely.
5. To clarify, I assume the computational times given in Tables 1 and 2 do not include the generation of data for data-driven methods such as FNO?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations and extent to which this method gives an advantage over existing approaches are discussed in detail in the final section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and valuable feedback! In response to your constructive comments, we address the weaknesses and questions below:
**W1: The quality of the writing could be improved.**
Thank you for the valuable comments. We will take your suggestions on polishing the final version.
**W2: The proposed approach seems to give an advantage only in specific situations.**
Thank you for your comments. We need to clarify that each solver has its strengths and weaknesses, and we do not intend to claim that MCNP is superior to all baseline methods in all scenarios. Instead, we aim to comprehensively compare and show each method's advantages, disadvantages, and suitable scenarios. Furthermore, MCNP is trained in an unsupervised manner, while still obtaining comparable or even better results compared to the supervised FNO.
In this paper, we tackle the challenging and crucial problem of solving PDEs with large spatiotemporal variations, which occur in many computational physics applications. For example, turbulence occurs when an airplane encounters unstable air currents [1]. Turbulent flow involves rapid multi-scale changes, is closely tied to a well-known Millennium Prize Problem, and has significant implications for fluid physics and weather forecasting [2]. Therefore, developing robust numerical methods that can handle large spatiotemporal variations is a hot topic in both the machine learning and computational physics communities [3,4,5].
We appreciate your feedback and will highlight the significance of our problem setting in the final version.
**Q1 & Q2. The meaning of FDM and PSM.**
FDM and PSM stand for the finite difference and pseudo-spectral methods, respectively. We will explain them in more detail in the final version. Thank you for your suggestions!
**Q3. Inversion of $\xi$ (Line 119).**
Thanks for your question. Formally, $\xi_{s} = \tilde{\xi}_{T-s}$ denotes the time-reversed random process of $\tilde{\xi}$.
**Q4. The number of terms in the Fourier expansion for FNO.**
Thanks for your question! We provide the details of the experimental settings in Appendix E. When running FNO, we choose the number of terms in the Fourier expansion (i.e., the modes) from the set {12, 16, 20, 24} on the most challenging (highest-frequency) task. For the diffusion equation and NSE, we select 20 and 16 modes, respectively. We then align the modes in PINO and MCNP with those in FNO.
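As a rough illustration of what the `modes` hyperparameter controls (a sketch of spectral truncation written by us, not the actual FNO implementation):

```python
import numpy as np

# A spectral layer's "modes" keeps only the lowest-frequency Fourier
# coefficients of its input; everything above is discarded.
def truncate_modes(u, modes):
    c = np.fft.rfft(u)
    c[modes:] = 0.0                    # drop all frequencies >= `modes`
    return np.fft.irfft(c, n=u.size)

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(2.0 * x) + 0.3 * np.sin(40.0 * x)      # low + high frequency content
u_low = truncate_modes(u, 16)                     # keeps sin(2x), drops sin(40x)
assert np.allclose(u_low, np.sin(2.0 * x), atol=1e-8)
```

This is why, as Q4 suggests, a larger number of modes helps on high-frequency initial states: fewer Fourier components of the input are discarded.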
**Q5. The computational times given in the Tables do not include the generation of data for FNO?**
Yes, you are right. We will clarify it in the final version.
We hope that our rebuttal can address your concerns. Also, we would like to know whether there are any other questions about our work, and we are happy to answer and discuss them. If the major weaknesses and questions have been addressed, could you please kindly raise the rating?
[1] Gerogiannis V T, Feidas H. An 11-year analysis of in situ records of aviation-scale turbulence over Europe. Theoretical and Applied Climatology, 2021.
[2] Wilcox D C. Turbulence modeling for CFD. La Canada, CA: DCW industries, 1998.
[3] Krishnapriyan A, Gholami A, Zhe S, et al. Characterizing possible failure modes in physics-informed neural networks. NeurIPS, 2021.
[4] Li X A, Xu Z Q J, Zhang L. A multi-scale DNN algorithm for nonlinear elliptic equations with multiple scales. Communications in Computational Physics, 2020.
[5] Schaeffer H, Caflisch R, Hauck C D, et al. Sparse dynamics for partial differential equations. Proceedings of the National Academy of Sciences, 2013.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. | null | null | null | null | null | null |
A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes | Accept (poster) | Summary: In this paper, the authors extend the theory of proximal policy optimization-based methods in the linear mixture MDPs and propose an optimistic variant PPO algorithm (OPPO+) for stochastic linear MDPs and adversarial linear MDPs with full information.
The proposed algorithm adopts a multi-batched updating rule from the bandit literature, where the policy is updated once per batch of episodes instead of every episode. The policy improvement step uses proximal policy optimization, while the policy evaluation step estimates the value via least-squares value iteration with the average reward from the previous batch.
The proposed algorithm enjoys a regret guarantee of $\tilde{O}(d^{3/4} H^2 K^{3/4})$ and a sample complexity of $\tilde{O}(d^3H^8/\epsilon^4 + d^5H^4/\epsilon^2)$. This regret bound is tighter than those of existing PPO-based algorithms for stochastic linear MDPs.
Strengths: - The proposed algorithm can be applied not only to stochastic linear MDPs but also to adversarial linear MDPs with full-information feedback.
To cover the case of adversarial linear MDPs, the average-reward technique is used, and for the analysis, a bound on the gap between the value of the policy from the previous batch and that of the policy from the current batch is established.
- When applying the existing covering-number theory for linear MDPs, the regret may depend on the size of the action space. To address this, a novel covering argument is proposed to reduce this dependence to a logarithmic scale.
- Compared to existing policy optimization algorithms, it provides a tighter regret guarantee in both the stochastic linear MDP and adversarial linear MDP with full-information feedback settings.
Weaknesses: - The computational cost of policy improvement is not discussed in detail.
- It remains unclear what advantages the proposed policy optimization algorithms offer compared to value-based algorithms, apart from their theoretical purposes.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. What does it mean for the proposed algorithm to be "optimistic"? What I mean is that for example, in value-based algorithms, optimism refers to the estimated values of the algorithm being more optimistic than the true optimal value (Lemma B.5 in [17]).
2. Can you explain the computational cost of policy improvement in OPPO+ compared to the policy improvement in value-based algorithms [17]?
3. Besides the ability to learn stochastic policies, what other advantages does OPPO+ have compared to value-based algorithms?
4. In line 203, the author argued that value-based algorithms cannot handle adversarial rewards. Could you provide more details about this?
5. By performing policy optimization infrequently, the algorithm achieves a regret of order $O(K^{3/4})$. What are the drawbacks or trade-offs associated with this approach? In other words, in line 81, it was mentioned that if policy optimization is performed every episode, the regret would have a linear dependence on $K$. What benefits can be obtained from this approach?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As this is a theoretical paper, it does not seem to have any negative societal impact. However, the authors have not mentioned the limitations of this algorithm. (If they have mentioned any, please let me know, and I will check.) Nonetheless, they have provided an introduction to future work in Remark 3.3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and positive feedback. We will try to address your concerns in the following.
**Q1:** What does it mean for the proposed algorithm to be "optimistic"? What I mean is that for example, in value-based algorithms, optimism refers to the estimated values of the algorithm being more optimistic than the true optimal value (Lemma B.5 in [17]).
**A1:** Thanks for your good question, this is indeed one of the most important differences between policy-based algorithms and value-based algorithms.
- For the value-based algorithm LSVI-UCB in [17], they can obtain that $V_1^* \le V_1^k$. Furthermore,
$$\mathrm{Regret}(K) = \sum_{k=1}^K V_1^* - V_1^{\pi^k} \le \sum_{k=1}^K V_1^k - V_{1}^{\pi^k} \lesssim \text{sum of bonus function} \le \tilde{\mathcal{O}}(\sqrt{K}).$$
- For policy-based algorithms, we cannot obtain the guarantee that $V_1^* \le V_1^k$ as in Lemma B.5 of [17]. Instead, we can obtain that $V_1^{\pi^k} \le V_1^k$ for stochastic linear MDPs. In other words, $V_1^k$ is an optimistic estimate of the value of policy $\pi^k$. Hence, we cannot follow the analysis of [17]. Instead, we use the regret decomposition lemma (Lemma 4.1) to decompose the regret into two terms --- the policy optimization error and the estimation error --- and then tackle these two terms separately.
In terms of algorithm design, our algorithm shares the same spirit with LSVI-UCB [17]. In both [17] and our work, the bonus function $\Gamma_h^k$ serves to quantify estimation error (cf. Lemma 4.4 in our paper and Lemma B.4 in [17]). Furthermore, both OPPO+ and LSVI-UCB calculate the "optimistic" estimation by adding bonus functions to the estimated value function (cf. Line 15 of OPPO+ and Line 6 in LSVI-UCB). As a result, OPPO+ and LSVI-UCB can be regarded as the optimistic variant of PPO and LSVI, respectively.
**Q2:** Can you explain the computational cost of policy improvement in OPPO+ compared to the policy improvement in value-based algorithms [17]?
**A2:** OPPO+ updates the policy by solving a proximal policy optimization problem, while LSVI-UCB in [17] simply executes the greedy policy with respect to the estimated value function. Hence, it is hard to argue that the computational cost of a single policy optimization step in OPPO+ is lower than that of a single policy improvement step in LSVI-UCB. However, it is worth noting that OPPO+ adopts the multi-batched updating rule, which leads to better computational efficiency than LSVI-UCB, which updates its policy in every episode.
**Q3:** Besides the ability to learn stochastic policies, what other advantages does OPPO+ have compared to value-based algorithms?
**A3:** OPPO+ can tackle adversarial rewards, while value-based algorithms (e.g., LSVI-UCB) cannot. Further elaboration on this point is provided in **A4**. Moreover, PPO is one of the most widely recognized RL algorithms, and linear MDPs are arguably the most fundamental RL model with function approximation. Consequently, understanding the theoretical performance of PPO in linear MDPs is important.
**Q4:** In line 203, the author argued that value-based algorithms cannot handle adversarial rewards. Could you provide more details about this?
**A4:** In value-based algorithms, the learner estimates the Q-function and then follows the greedy policy with respect to the estimated Q-function (see, e.g., LSVI-UCB). Meanwhile, it is known that deterministic policies incur linear regret even in adversarial linear bandits (a simplified version of adversarial linear MDPs). Please see Chapter 11 (Exercise 11.2) in [1] for more details.
[1] Lattimore, T. and Szepesvari, C. (2020). Bandit algorithms. Cambridge University Press.
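The bandit argument above can be made concrete with a toy simulation (our own construction, not taken from [1] or the paper): an adversary that observes a deterministic learner's next action can always zero out its reward, forcing linear regret against the best fixed arm in hindsight.

```python
import numpy as np

# 2-armed adversarial bandit vs. a deterministic (greedy) learner.
# The adversary reacts to the learner's known choice: the pulled arm gets 0,
# the other arm gets 1, so some fixed arm earns >= K/2 in hindsight.
K = 1000
arm_totals = np.zeros(2)                     # cumulative reward of each arm
learner_total = 0.0
est = np.zeros(2)                            # learner's empirical means
counts = np.zeros(2)
for _ in range(K):
    a = int(np.argmax(est))                  # any deterministic rule works here
    r = np.ones(2)
    r[a] = 0.0                               # adversary zeroes the chosen arm
    learner_total += r[a]
    arm_totals += r
    counts[a] += 1
    est[a] += (r[a] - est[a]) / counts[a]

regret = arm_totals.max() - learner_total    # linear in K by pigeonhole
assert learner_total == 0.0 and regret >= K / 2
```

A stochastic policy avoids this: if the learner randomizes, the adversary can no longer predict the pulled arm, and the learner collects reward on a constant fraction of rounds.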
**Q5:** By performing policy optimization infrequently, the algorithm achieves a regret of order $O(K^{3/4})$. What are the drawbacks or trade-offs associated with this approach? In other words, in line 81, it was mentioned that if policy optimization is performed every episode, the regret would have a linear dependence on $K$. What benefits can be obtained from this approach?
**A5:** If OPPO+ performs policy optimization $L$ times, then by Lemma C.3, the complexity (log covering number) of the policy class is roughly $\tilde{\mathcal{O}}(L)$ (ignoring the dependency on $d$ and $H$). Furthermore, by the new self-normalized analysis in Appendix C, we have $\beta = \tilde{\mathcal{O}}(\sqrt{L})$, which further implies that the model estimation error is bounded by $\beta \cdot \sum_{k=1}^K\sum_{h=1}^H\sqrt{\phi(x_h^k, a_h^k)^\top(\Lambda_h^k)^{-1}\phi(x_h^k, a_h^k)} \le \tilde{\mathcal{O}}(\sqrt{L K})$ (cf. Lemma 4.5). Meanwhile, by Lemma 4.2, the policy optimization error is bounded by $\tilde{\mathcal{O}}(K/\sqrt{L})$. By choosing $L = \Theta(\sqrt{K})$, both terms become $\tilde{\mathcal{O}}(K^{3/4})$, yielding a regret of order $\tilde{\mathcal{O}}(K^{3/4})$.
In summary, the multi-batched updating mechanism is crucial to obtain a sublinear regret and the choice of the number of batches is optimal based on our current analysis.
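The trade-off described above can be checked numerically (with illustrative unit constants in place of the $d$ and $H$ factors):

```python
import numpy as np

# Balance estimation error ~ sqrt(L * K) against optimization error ~ K / sqrt(L).
K = 10_000
L_grid = np.arange(1, K + 1)
total = np.sqrt(L_grid * K) + K / np.sqrt(L_grid)

L_best = int(L_grid[np.argmin(total)])
# The minimizer is L = sqrt(K) (here 100), giving total regret 2 * K**(3/4).
assert L_best == int(np.sqrt(K))
assert np.isclose(total.min(), 2 * K**0.75)
```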
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the detailed clarification. It is helpful in understanding the paper. As a result, I have adjusted the scores accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for dedicating your time and providing your valuable support. We will further polish our paper according to your valuable suggestion. | Summary: The paper proposes a theoretical analysis of the PPO algorithm. Some novel techniques such as batch-wise update are proposed so the algorithm can work on the adversarial setting of linear MDPs. A regret bound is given, which is better then or comparable to previous results.
Strengths: The paper is well written, with a clear presentation and is easy to follow. The topic studied in the paper is important. The proposed technique is novel and the theoretical result is solid.
Weaknesses: 1. See Question 1. One concern is the similarity between the proposed method and NPG. And additional comparison is needed so the readers could understand the novelty and contribution of the paper more easily.
2. The algorithm design of OPPO+ is similar to OPPO. The difference is OPPO+ replaces the step-wise update with a multi-batched update, and considers the linear MDP setting so the state-action feature $\phi$ can be directly obtained instead of doing integration. The authors claimed the weakness of OPPO, or linear mixture MDP, is that the integration is computationally expensive. But I would rather just regard it as a separate model setting, instead of a weakness. Therefore, when doing comparisons with previous work, besides the literature on linear MDP, it could be great if the authors can also give a more detailed explanation on the setting of linear mixture MDP. In particular, given that the algorithm formulations are so similar, it might be helpful if the author can explain why it's not straightforward to migrate the proof of OPPO to the linear MDP setting. For example, it can be added to the challenge & novelty section, which will greatly strengthen their argument.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The updating rule of PPO has a very similar formula to Natural Policy Gradient (NPG). There also have been literatures that apply NPG to linear MDPs, can the authors make a comparison between their approach and these literatures?
For example, in section 4.2 of [1], the authors applied NPG to linear MDPs, and obtained a convergence rate of order $d^2H^6/\varepsilon^3$, which should imply a regret bound of order $d^{1/2}H^{3/2}K^{3/4}$.
[1] Liu et al., Optimistic Natural Policy Gradient: a Simple Policy Optimization Algorithm for Online Learning in Linear MDPs, https://arxiv.org/pdf/2305.11032.pdf
2. Is it possible to extend the analysis to the general function approximation setting?
3. One novelty of the paper is its ability to handle adversarial rewards. Can the authors explain which part of their algorithm is crucial to achieving this goal? Still take [1] as an example, I think the algorithms have similar formulation, and the difference of [1] is it doesn't use a multi-batched update. Does that mean the multi-batched updating rule is the crucial part for the adversarial setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Since the paper is focused on a theoretical side, it's unlikely to have potential negative social impact. And some limitations and future research directions are mentioned in the Conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and positive feedback. We will try to address your concerns in the following.
**Q1:** Comparison with OPPO for linear mixture MDPs.
**A1:** We agree that linear mixture MDPs and linear MDPs are two different types of MDPs with linear function approximation. We have discussed some related works on linear mixture MDPs in Appendix A. Here we provide a more detailed comparison between linear MDPs and linear mixture MDPs from the perspective of parameter size. For linear mixture MDPs, the transition kernel takes the form $\mathcal{P}_h(s' \mid s, a) = \psi(s, a, s')^\top \beta_h$ with some $\beta_h \in \mathbb{R}^d$, so the model is characterized by $d$ parameters. In contrast, the number of model parameters of linear MDPs scales with the number of states (it is $d \cdot |\mathcal{S}|$, since $\mu_h(s) \in \mathbb{R}^d$ for all $s\in\mathcal{S}$). Therefore, from our perspective, learning linear MDPs is harder than learning linear mixture MDPs.
Regarding the challenges involved in extending OPPO to linear MDPs, we provided a brief explanation in Challenge 1 of Section 1.1. Now, let's delve into a more detailed explanation of these technical challenges.
Technically, for linear mixture MDPs (Equation (B.20) in OPPO paper (Cai et al., 2020)), they need to analyze
$$\bigg\| \sum\_{\tau = 1}^{k - 1} \phi\_h^\tau(x\_h^\tau, a\_h^\tau) \cdot \big( V\_{h+1}^{\color{red}{\tau}}(x\_{h+1}^\tau) - (\mathbb{P}\_h V\_{h+1}^{\color{red}{\tau}})(x\_h^\tau, a\_h^\tau) \big) \bigg\|\_{(\Lambda\_h^{k})^{-1}}.$$
Since $V_{h+1}^\tau$ is adapted to $\mathcal{F}\_{k,h,1} = \{(x\_{i}^{\tau}, a\_{i}^{\tau})\}\_{(\tau, i) \in[k-1] \times[H]} \cup\{r^{\tau}\}\_{\tau \in [k]} \cup\{(x\_{i}^{k}, a\_{i}^{k})\}\_{i \in[h]}$, they can bound this term with classical self-normalized process analysis directly (see Lemma D.1 in (Cai et al., 2020)).
In contrast, for linear MDPs (see e.g., (B.26) in our paper or Lemma B.3 in [1]), we need to bound the term
$$\bigg\| \sum\_{\tau = 1}^{k - 1} \phi(x\_h^\tau, a\_h^\tau) \cdot \big( V\_{h+1}^{\color{red}{k}}(x\_{h+1}^\tau) - (\mathbb{P}\_h V\_{h+1}^{\color{red}{k}})(x\_h^\tau, a\_h^\tau) \big) \bigg\|\_{(\Lambda\_h^{k})^{-1}}.$$
Since $V\_{h+1}^\tau$ is NOT adapted to $\mathcal{F}\_{k,h,1} = \{(x_{i}^{\tau}, a_{i}^{\tau})\}\_{(\tau, i) \in[k-1] \times[H]} \cup \{r^{\tau}\}\_{\tau \in[k]} \cup\{(x_{i}^{k}, a_{i}^{k})\}\_{i \in[h]}$, we need to perform the uniform concentration on the function class of $V\_{h+1}^k$. The challenge of calculating the covering number of this function class has been elaborated in Challenge 1 of Section 1.1 (Lines 58-73).
Thanks for your question, and we will add these discussions in the revision.
[1] Provably Efficient Reinforcement Learning with Linear Function Approximation. Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan
**Q2:** Comparison with NPG algorithm [2].
[2] Liu et al., Optimistic Natural Policy Gradient: a Simple Policy Optimization Algorithm for Online Learning in Linear MDPs, https://arxiv.org/pdf/2305.11032.pdf
**A2:** Thank you for pointing out this work. Firstly, it is important to note that this work was released after the NeurIPS submission deadline. While we have discussed prior policy-based algorithms in our related work, we remain open to discussing the distinctions from [2].
- Regarding the updating rule, both PPO and NPG share a similar policy updating rule. However, we additionally introduce the multi-batched updating and average reward policy evaluation mechanisms, which are crucial to handle adversarial rewards.
- Regarding the results. Yes, their result implies an $\tilde{\mathcal{O}}(d^{1/2}H^{3/2}K^{3/4})$ regret for *stochastic* linear MDPs. However, their algorithm cannot tackle adversarial linear MDPs, which is the central focus of our paper.
**Q3:** Is it possible to extend the analysis to the general function approximation setting?
**A3:** Yes, our analysis readily extends to the kernel and neural settings [3]. We also believe it can handle settings with low eluder dimension, as in [2] and [4]. Thanks for your question; we will add more discussion in the revision.
[3] Yang, Z., Jin, C., Wang, Z., Wang, M. and Jordan, M. I. (2020). On function approximation in reinforcement learning: Optimism in the face of large state spaces. arXiv preprint arXiv:2011.04622.
[4] Wang, R., Salakhutdinov, R. and Yang, L. F. Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension.
**Q4:** One novelty of the paper is its ability to handle adversarial rewards. Can the authors explain which part of their algorithm is crucial to achieving this goal? Still taking [2] as an example, I think the algorithms have similar formulations, and the difference is that [2] doesn't use a multi-batched update. Does that mean the multi-batched updating rule is the crucial part for the adversarial setting?
**A4:** Compared with [2], we think the average reward policy evaluation and the corresponding analysis are the key to handling adversarial rewards (cf. Challenge 2 and Novelty 2 in Lines 87-102), though the multi-batched updating rule is also important to obtain a sublinear regret. If we use the instantaneous reward to perform policy evaluation like OPPO (Cai et al., 2020) or optimistic NPG in [2], we will suffer the linear regret (cf. Challenge 2 in Lines 87-94). To this end, we use the average reward to evaluate the policies (cf. Lines 95-97). This mechanism will introduce additional errors, which require a new smoothness analysis (cf. Lines 98-102 and Lemma 4.3).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have raised the score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and support. We will further polish our paper according to your valuable suggestion. | Summary: This paper studies the theoretical performance of an optimistic variant of PPO in episodic adversarial linear MDPs with full-information feedback (i.e., without assuming the reward functions are linear in the feature map), and establishes a regret bound of O(d^3/4 H^2 K^3/4) that matches the optimal regret bound in both stochastic linear MDPs and adversarial linear MDPs. The authors also introduce a new multi-batched updating mechanism to enable a new covering number argument of value and policy classes in their theoretical analysis.
Strengths: Existing theoretical studies of PPO mainly focus on linear mixture MDPs with full-information feedback, which are implemented in a model-based manner and require integrating the individual base models. In comparison, this work studies a different class, linear MDPs, which admit low-rank representations, and proposes a new optimistic variant of PPO that is provably efficient in both stochastic and adversarial linear MDPs.
In particular, this work exhibits several promising results:
1. From the algorithmic perspective, the proposed algorithm involves the novel design of a multi-batched updating mechanism and a policy evaluation step via average rewards.
2. Regarding the performance guarantee, the authors establish the optimal regret bound of O(d^3/4 H^2 K^3/4) with two fundamental findings. Instead of using the existing covering argument in linear MDPs, this work presents a new covering number argument for the value and policy classes. In addition, to ensure the sublinear regret, careful analysis has been done to analyze the drift between adjacent policies to control the error arising from the policy evaluation step.
3. Apart from the regret guarantee, it also provides a PAC guarantee in terms of sample complexity, which allows fair comparison with existing works in this line.
This work is well-organized and clearly articulates each part with the corresponding motivation, challenges, as well as technical novelties in its solutions. Overall, this paper is technically sound and demonstrates considerable technical novelties. It provides a better understanding of PPO in a class of MDPs with function approximation, which potentially benefits policy optimization in practice.
Weaknesses: While this paper provides insights into the PPO algorithm in linear MDPs, it does have several drawbacks:
1. One of the claimed novelties is the multi-batched updating mechanism, which coincides with the similar idea of "policy switch" in literature. However, there is no discussion of the existing works that involve "policy switch", and thus it is not clear whether the computational efficiency is solely brought by the policy switch scheme.
2. Right now, the regret bound requires the batch size to be O(\sqrt{d^3 K}), which leads to the number of batches being O(\sqrt{K}). But whether this choice of batch size and number of batches is optimal remains unknown. It would be worthwhile to study and discuss the balance between the batch size and the number of batches, the optimal choice, and how the balance affects the regret.
3. Right now, this is solely theory-based work. As PPO is applied widely in practice, it will be beneficial to include simple benchmark empirical studies to demonstrate its effectiveness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the introduction, the authors mentioned that both Cai et al. and He et al., which study linear mixture MDPs, are implemented in a model-based manner, whereas existing theoretical studies in linear MDPs are typically value-based methods. However, from the algorithmic perspective, the algorithms in Cai et al. and He et al. also perform regularized least-squares approximation on value functions in the policy evaluation step. Do you treat all algorithms that try to explicitly learn/approximate the transition model (i.e., \hat{p}) as model-based methods in MDPs with function approximation? Could you explain whether your approach falls into the model-based category? If not, which step makes the difference? Is the multi-batched update the main reason that makes the proposed algorithm more computationally efficient compared to the direct extension of the existing OPPO method to linear MDPs from the algorithmic perspective?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theory paper with no potential negative social impact under the discussed context.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and positive feedback. We will try to address your concerns in the following.
**Q1:** One of the claimed novelties is the multi-batched updating mechanism, which coincides with the similar idea of "policy switch" in literature. However, there is no discussion of the existing works that involve "policy switch", and thus it is not clear whether the computational efficiency is solely brought by the policy switch scheme.
**A1:** We have briefly discussed previous works that involve multi-batched updating (low policy switch) in Lines 201-203. The algorithms in these works are value-based and cannot tackle adversarial rewards. Furthermore, the corresponding covering number argument is completely new and crucial for the analysis of policy-based algorithms.
Our algorithm is computationally efficient since we can show that the running time is polynomial in all parameters (e.g., $d, H, K$), akin to the computational efficiency showcased in LSVI-UCB [1]. Moreover, compared with existing algorithms that update policies in each episode, such as LSVI-UCB, our method enjoys better computational efficiency due to the multi-batched update.
[1] Jin, C., Yang, Z., Wang, Z. and Jordan, M. I. (2020). Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory. PMLR.
**Q2:** Right now, the regret bound requires the batch size to be O(\sqrt{d^3 K}), which leads to the number of batches being O(\sqrt{K}). But whether this choice of batch size and number of batches is optimal remains unknown. It would be worthwhile to study and discuss the balance between the batch size and the number of batches, the optimal choice, and how the balance affects the regret.
**A2:** Based on our current analysis, the choice of batch size is optimal. If we choose the batch size as $B$ (and ignore the dependence on $d$ and $H$), then
- By the new self-normalized analysis in Appendix C, we have $\beta = \tilde{\mathcal{O}}(\sqrt{K/B})$, which further implies that the model estimation error is bounded by $\beta \cdot \sum_{k=1}^K\sum_{h=1}^H\sqrt{\phi(x_h^k, a_h^k)^\top(\Lambda_h^k)^{-1}\phi(x_h^k, a_h^k)} \le \tilde{\mathcal{O}}(K/\sqrt{B})$ (cf. Lemma 4.5).
- By Lemma 4.2, the policy optimization error is bounded by $\tilde{\mathcal{O}}(\sqrt{KB})$.
Balancing these two terms, we know the optimal choice of $B$ is $\Theta(\sqrt{K})$. We appreciate your valuable suggestion and intend to clarify this in the revision.
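As a brief editorial note, the balancing step can be written out explicitly. Suppressing $d$ and $H$ factors as in the two error bounds stated in the response:

```latex
\underbrace{\tilde{\mathcal{O}}\!\left(\frac{K}{\sqrt{B}}\right)}_{\text{model estimation}}
+ \underbrace{\tilde{\mathcal{O}}\!\left(\sqrt{KB}\right)}_{\text{policy optimization}},
\qquad
\frac{K}{\sqrt{B}} = \sqrt{KB}
\;\Longrightarrow\;
B = \Theta(\sqrt{K}),
\qquad
\text{Regret} = \tilde{\mathcal{O}}\!\left(K^{3/4}\right).
```

Indeed, with $B = \sqrt{K}$, both terms equal $K/K^{1/4} = \sqrt{K \cdot \sqrt{K}} = K^{3/4}$, matching the $K^{3/4}$ dependence in the paper's regret bound.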
**Q3:** Right now, this is solely theory-based work. As PPO is applied widely in practice, it will be beneficial to include simple benchmark empirical studies to demonstrate its effectiveness.
**A3:** Thanks for your suggestions. We will consider adding some empirical results in the future version.
**Q4:** In the introduction, the authors mentioned that both Cai et al. and He et al., which study linear mixture MDPs, are implemented in a model-based manner, whereas existing theoretical studies in linear MDPs are typically value-based methods. However, from the algorithmic perspective, the algorithms in Cai et al. and He et al. also perform regularized least-squares approximation on value functions in the policy evaluation step. Do you treat all algorithms that try to explicitly learn/approximate the transition model (i.e., \hat{p}) as model-based methods in MDPs with function approximation? Could you explain whether your approach falls into the model-based category? If not, which step makes the difference? Is the multi-batched update the main reason that makes the proposed algorithm more computationally efficient compared to the direct extension of the existing OPPO method to linear MDPs from the algorithmic perspective?
**A4:** For linear mixture MDPs, the transition kernel takes the form $\mathcal{P}_h(s' \mid s, a) = \psi(s, a, s')^\top \beta_h$ for some $\beta_h \in \mathbb{R}^d$. This means that the model of a linear mixture MDP is characterized by $d$ parameters. The algorithms in Cai et al. and He et al. perform regularized least-squares regression to directly estimate the *model parameter* $\beta_h$. In contrast, for linear MDPs, where $Q_h^\pi(s, a) = \phi(s, a)^\top \theta_h$, we use regularized least-squares regression to estimate the value function. Therefore, our algorithm does NOT fall into the model-based category. In fact, the number of model parameters of a linear MDP scales with the number of states (it is $d \cdot |\mathcal{S}|$ since $\mu_h(s) \in \mathbb{R}^d$ for all $s\in\mathcal{S}$), which makes it difficult to perform model-based learning efficiently. This is one of the major differences between linear mixture MDPs and linear MDPs. See also [1] and [2] for more discussions.
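For concreteness, the two transition parameterizations contrasted in the response can be written side by side (these are the standard forms from the linear MDP literature, using the notation of the response):

```latex
% Linear mixture MDP: O(d) model parameters per step h
\mathcal{P}_h(s' \mid s, a) = \psi(s, a, s')^\top \beta_h,
\qquad \beta_h \in \mathbb{R}^d;
% Linear MDP: O(d \cdot |S|) model parameters per step h
\mathcal{P}_h(s' \mid s, a) = \phi(s, a)^\top \mu_h(s'),
\qquad \mu_h(s') \in \mathbb{R}^d \ \ \forall s' \in \mathcal{S}.
```

The contrast makes the parameter-count argument explicit: estimating $\beta_h$ is a $d$-dimensional regression, whereas estimating $\mu_h$ requires $d \cdot |\mathcal{S}|$ parameters, motivating value-based rather than model-based estimation in linear MDPs.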
We want to emphasize that the existing OPPO method (Cai et al.) is restricted to linear mixture MDPs, and its extension to linear MDPs is highly nontrivial (see Challenge 1 in Section 1.1). The incorporation of multi-batched updating, along with its corresponding analysis, plays a pivotal role in achieving sample efficiency. Furthermore, if we directly extend the existing OPPO method to linear MDPs from the algorithmic perspective (without any modifications):
- In terms of statistical efficiency, it is hard to provide a theoretical guarantee due to Challenge 1 in Section 1.1.
- In terms of computational efficiency, you are right --- the utilization of multi-batched update is the main reason that makes the proposed algorithm more computationally efficient.
[1] Jin, C., Yang, Z., Wang, Z. and Jordan, M. I. (2020). Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory. PMLR.
[2] Ayoub, A., Jia, Z., Szepesvari, C., Wang, M. and Yang, L. (2020). Model-based reinforcement learning with value-targeted regression. In International Conference on Machine Learning. PMLR.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed reply and the further information to provide a better understanding. I am keeping my score and vote for accept. Good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and support. We will further polish our paper according to your valuable suggestion. | Summary: This work resolves the known issue in generalizing the policy-based algorithm proposed in [Cai et al.] for linear mixture MDPs to linear MDPs, via multi-batched updating and a new covering number argument. The proposed model-free policy optimization algorithm advances the theoretical study of PPO in adversarial linear MDPs with full-information feedback.
Strengths: 1 This work resolves the issue in generalizing the policy-based algorithm in [Cai et al.] for linear mixture MDPs to linear MDPs.
2 Besides stochastic linear MDPs, the proposed algorithm can handle adversarial rewards with full-information feedback.
Weaknesses: 1 Full-information feedback instead of bandit feedback is considered.
2 The proposed novel techniques developed in this work are insufficient for the policy-based algorithm to achieve the minimax optimal regret.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1 In Remark 3.3, the authors claim that they achieve the state-of-the-art regret bound for adversarial linear MDPs with full-information feedback. Does it mean their result matches or improves the best existing result in linear MDPs? It would be better to give a reference to the best known existing result here.
2 Equipped with the new proposed techniques, is it possible to improve the result by refined analysis?
3 It seems the proposed novel techniques are specialized for policy-based algorithm for linear MDPs. Can the authors comment on the impact of the new techniques? For example, by applying those techniques, there is any existing result that can be improved or any hard problem that now can be solved.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and positive feedback. We will try to address your concerns in the following.
**Q1:** In Remark 3.3, the authors claim that they achieve the state-of-the-art regret bound for adversarial linear MDPs with full-information feedback. Does it mean their result matches or improves the best existing result in linear MDPs? It would be better to give a reference to the best known existing result here.
**A1:** To the best of our knowledge, no previous work focuses mainly on adversarial linear MDPs with full-information feedback. However, two recent works ([1] and [2]) study the more challenging setting of adversarial linear MDPs with bandit feedback, and their results directly imply $\tilde{\mathcal{O}}(d^{2/3}A^{1/9}H^{20/9}K^{8/9})$ and $\tilde{\mathcal{O}}(dH^{2}K^{6/7})$ regrets, respectively. We have discussed these two related works in the introduction and related works. We will add more discussion in Remark 3.3. Thanks for your suggestion.
[1] Dai, Y., Luo, H., Wei, C.-Y. and Zimmert, J. (2023). Refined regret for adversarial mdps with linear function approximation. arXiv preprint arXiv:2301.12942
[2] Sherman, U., Koren, T. and Mansour, Y. (2023). Improved regret for efficient online reinforcement learning with linear function approximation. arXiv preprint arXiv:2301.13087
**Q2:** Equipped with the new proposed techniques, is it possible to improve the result by refined analysis?
**A2:** We think it is possible to improve the result by refined analysis. For instance, if we perform the policy update only $\log K$ times via the doubling trick on the covariance matrix, the model estimation error is on the order of $\tilde{\mathcal{O}}(\sqrt{K})$. However, based on our current analysis, the policy optimization error term will then be linear in $K$. If we can make a refined analysis for this term, then we can derive the desired $\tilde{\mathcal{O}}(\sqrt{K})$ regret. To progress towards a minimax regret bound, techniques in [3] and [4] might be helpful. Thanks for your question, and we intend to delve deeper into these challenges in our future explorations.
[3] Agarwal, A., Jin, Y. and Zhang, T. (2022). VOQL: Towards optimal regret in model-free RL with nonlinear function approximation. arXiv preprint arXiv:2212.06069.
[4] He, J., Zhao, H., Zhou, D. and Gu, Q. (2022). Nearly minimax optimal reinforcement learning for linear Markov decision processes. arXiv preprint arXiv:2212.06132.
**Q3:** It seems the proposed novel techniques are specialized for policy-based algorithm for linear MDPs. Can the authors comment on the impact of the new techniques? For example, by applying those techniques, there is any existing result that can be improved or any hard problem that now can be solved.
**A3:** Based on our new techniques, we improve existing regret bound for adversarial linear MDPs with full-information feedback. Also, we achieve the SOTA regret bound compared with existing policy-based algorithms for stochastic linear MDPs. In our view, these two problems are important and hard. Moreover, our techniques may have several potential applications:
- Application to multi-agent RL:
- In linear Markov games, the covering number issue of the value function class still exists and is even more severe, since the Nash equilibrium may be stochastic. Our algorithm design and accompanying covering number analysis offer a potential avenue for understanding policy optimization in unknown Markov games. Here we remark that previous works (e.g., [5]) mainly study policy optimization in Markov games with known transitions and rewards.
- The statistical hardness of learning Markov games with adversarial opponents is well-established [6]. Our smoothness analysis could potentially shed light on this complex challenge, especially under the assumption of "smooth" policy changes by the adversary.
- Application to adversarial decision making:
- Our algorithm design (especially policy evaluation via average rewards) seems new and may motivate further adversarial decision making problems and algorithms. For example, an interesting finding is that our algorithm does not need to know the reward functions at the end of each episode (i.e., full-information feedback). Instead, OPPO+ only requires the average reward function at the end of each batch. This paves the way for tackling a novel class of adversarial decision making problems, where the learner interacts with an adversary and receives the average reward function (or the sum of reward functions) after a set number of episodes (e.g., 100 episodes). Finding more motivating examples of this type of problem and providing more efficient algorithms will be interesting.
- Notably, our algorithm is a low switching (multi-batched updating) algorithm for adversarial linear MDPs. We hope algorithm design and analysis can motivate further understanding of low switching algorithms in adversarial decision making.
[5] Zhang, R., Liu, Q., Wang, H., Xiong, C., Li, N. and Bai, Y. Policy optimization for Markov games: Unified framework and faster convergence.
[6] Liu, Q., Wang, Y. and Jin, C. Learning Markov games with adversarial opponents: Efficient algorithms and fundamental limits.
**Q4:** Weakness: (i) Full-information feedback instead of bandit feedback is considered; and (ii) the regret bound is not minimax optimal.
**A4:** Yes, you are right. As we have discussed in our paper, our work cannot tackle these two challenges. These are important open questions and we hope these problems can be addressed in future work. Thanks for your questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and I am keeping my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and support. We will further polish our paper according to your valuable suggestion. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Incentivized Communication for Federated Bandits | Accept (poster) | Summary: The authors study a new problem in federated bandits that involves incentivizing clients to share data. They propose a solution called Inc-FedUCB, which offers incentives in a linear contextual bandit setting. They demonstrate that Inc-FedUCB can achieve near-optimal regret levels with guarantees on communication and payment costs. They also conduct extensive experiments to validate the effectiveness of their incentive designs in various environments.
Strengths: 1. The paper is the first work that formulates and proves theoretical guarantees for the incentive design in federated bandit learning.
2. Designing incentives to encourage collaborations among clients is important in federated bandit learning.
3. The proposed algorithms are supported theoretically and numerically.
Weaknesses: The paper's focus is on designing incentivized communication protocols. The contribution is meaningful, but, as the authors mention in Line 51, a well-defined metric to measure the utility of data sharing is important. The current presentation of the utility design is not clear. For example,
- Line 126: Did the authors assume that all the clients share the same $\theta_{\star}$? If true, the assumption might be strong.
- Eqn. (4): It seems that the value of new data is independent of $\theta_{\star}$. Again, does this need the assumption that all clients share the same $\theta_{\star}$? If not, the value should also depend on different $\theta$ because some high-value data for client 1 may be useless for other clients.
- Lemmas 5 and 7 seem to be important in designing the incentive but are deferred to the Appendix.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please address the questions in weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive suggestions to clarify problem formulation and data valuation design, as well as for helping us improve the overall organization of the paper.
**[Q1]**: Line 126: Did the authors assume that all the clients share the same $\theta_\star$? If true, the assumption might be strong.
**[A1]**: Thanks for pointing out the place that could cause unnecessary confusion. We want to clarify that in our work all the clients share the same $\theta_\star$, and this is actually the standard assumption and widely adopted in the federated bandits literature [1,2,3,4]. For a detailed discussion of our data utility design, please refer to our response to CQ2 in the general feedback provided to all reviewers.
**[Q2]**: Eqn. (4): It seems that the value of new data is independent of $\theta_\star$. Again, does this need the assumption that all clients share the same $\theta_\star$? If not, the value should also depend on different $\theta$ because some high-value data for client 1 may be useless for other clients.
**[A2]**: As the first work of this new incentivized federated bandit problem, we start with the standard homogeneous clients setting [1,2,3,4], where clients share the same unknown reward parameter $\theta_\star$. In this way, the data valuation design does **NOT** require to be dependent on $\theta_\star$ since all clients are estimating the same $\theta_\star$, and this design aligns well with the client’s objective of regret minimization as explained in Section 4.2.
We absolutely agree that it is interesting to explore the heterogeneous setting where clients are associated with different $\theta_\star$. And depending on the problem assumption, our current data valuation design of Eq(4) may or may not apply to the heterogeneous setting. Please see a detailed answer to CQ2 in the general response to all reviewers.
**[Q3]**: Lemmas 5 and 7 seem to be important in designing the incentive but are deferred to Appendix.
**[A3]**: Thanks for the great suggestion! Due to space limit, after presenting our original contribution in the main paper, we had to make such compromises and leave existing important lemmas to the appendix. In the updated version (w/ more space), we will ensure better organization of the paper and address this concern adequately.
**References**
[1] Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, and Liwei Wang. Distributed bandit learning: Near-optimal regret with efficient communication. ICLR 2020
[2] Ruiquan Huang, Weiqiang Wu, Jing Yang, and Cong Shen. Federated linear contextual bandits. NeurIPS 2021.
[3] Chuanhao Li and Hongning Wang. Asynchronous upper confidence bound algorithms for federated linear bandits. AISTATS 2022.
[4] Chuanhao Li, Huazheng Wang, Mengdi Wang, and Hongning Wang. Learning kernelized contextual bandits in a distributed and asynchronous environment. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I keep my original rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s constructive feedback on our submission, and we are glad that the reviewer is satisfied with our submission and rebuttal. As the first work investigating and facilitating incentives in federated bandit research, we are very excited to share our results and findings in improving the efficiency and practical operability of federated bandit learning with the community, from a fresh yet realistic perspective that every client must be motivated to participate. With this in mind, we are truly enthusiastic about integrating any additional suggestions the reviewer might have that would further improve our work and increase the chance of publication of our work.
We look forward to the possibility of your updated evaluation, which will undoubtedly contribute to the advancement of our work and open up an interesting field in this line of research. | Summary: This paper introduces a novel federated learning protocol, so that 1) the setting is online instead of the more common offline setting, and 2) during each iteration, each client only chooses to participate (sharing information with the central server) if the client is gaining a sufficient amount of utility via participation through an incentive mechanism. The second property is particularly interesting because in the traditional setting, every client unconditionally participates and exchanges information with the central server, while in reality they may be reluctant because of low potential benefits. To ensure sufficient number of participants during every iteration, the central server may also provide extra support to motivate some clients. Finally, the protocol obtains near-optimal regrets and reasonable communication and incentive costs.
Strengths: 1. The topic is important and the mechanism based upon monetary and non-monetary incentives is novel.
2. The paper is generally well-written, with convincing support for the motivation. It also covers an extensive literature as references to earlier works in this area.
3. Such incentive mechanism may be well applied to other related areas.
4. The theoretical argument is extensive. However, please take a look in the weaknesses and questions section.
Weaknesses: 1. We may need a more organized guide to notations in such a paper with a large number of variables. In particular, I didn’t find the definition of $g$ in $V_{g,0}$ and $b_{g,0}$ in the line 1 of algorithm 1. The same is in line 15: what do the negative subscripts $-j$ mean?
2. The bar of utility for the participation standard did not have sufficient support: in line 9 of algorithm 1, why are the determinants of those matrices a good standard for a client to decide whether to participate or not?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The questions are listed in the weaknesses section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: I did not find any concerns in this regard.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive suggestions to clarify data valuation design and improve the organization of notations.
**[Q1]**: We may need a more organized guide to notations in such a paper with a large number of variables. In particular, I didn’t find the definition of $g$ in $V_{g,0}$ and $b_{g,0}$ in the line 1 of algorithm 1. The same is in line 15: what do the negative subscripts $-j$ mean?
**[A1]**: Thanks for the great suggestion! Following the standard notation in the literature [1], $V_{g,t}, b_{g,t}$ represent the global (g) sufficient statistics stored at the server at time step $t$. The negative subscripts $V_{-j, t}, b_{-j, t}$ represent the aggregated updates stored at the server that have not yet been sent to client $j$. As suggested by the reviewer, we have added a notation table for the main technical notations. Please refer to the supplementary PDF for rebuttal (in our general response to all reviewers).
**[Q2]**: The bar of utility for the participation standard did not have sufficient support: in line 9 of algorithm 1, why are the determinants of those matrices a good standard for a client to decide whether to participate or not?
**[A2]**: We would like to clarify the confusion. A client decides to participate only if the incentive offered by the server exceeds its cost, as defined in Eq(1). Line 9 of algorithm 1 does **NOT** imply the decision of participation but rather serves as an event trigger for communication (see detailed description at Line 201 in Section 4.1). As highlighted in Algorithm 1, once a communication event is triggered, all clients will upload their $\Delta V_{i,t}$ to the server (Line 10 of Algorithm 1) so as to compute potential incentives for clients. After that, the client will upload its corresponding $\Delta b_{i,t}$ only if the incentive offered by the server is deemed to exceed its data sharing cost (Line 13 of Algorithm 1). Note that this design does not compromise clients’ privacy because only having $V_{i,t}$ is insufficient to update the model and the clients’ secret essentially lies in $\Delta b_{i,t}$, as explained in Section 4.1 (Line 212). For a further discussion on the data valuation design, please find our answer to CQ2 in the general response to all reviewers.
**References**
[1] Chuanhao Li and Hongning Wang. Asynchronous upper confidence bound algorithms for federated linear bandits. AISTATS 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and justification, which solve my concerns. I keep the original rating.
---
Reply to Comment 1.1.1:
Comment: It is great to know that our responses properly addressed the reviewer's concerns, and we are grateful for the reviewer’s constructive feedback and favorable evaluation of our submission. As this is the first work to investigate and facilitate incentives in federated bandit research, we are very excited to share our results and findings in this new direction with the community, which we believe will significantly enhance the efficiency and practical operability of federated bandit learning. Therefore, we are truly enthusiastic about integrating any additional suggestions the reviewer might have that would further improve our work and increase its chances of publication.
We look forward to the possibility of your updated evaluation, which will undoubtedly contribute to the advancement of our work and open up an interesting field in this line of research. | Summary: The paper studied the problem of incentivizing data sharing in federated learning under the linear contextual federated bandit model with self-interested clients. While most previous works in federated bandit assume that all clients are willing participants in model sharing, this assumption is often unrealistic and neglects the inherent cost of data sharing for each client. The author proposed a general framework INC-FEDUCB that achieves near-optimal regret and provided upper bounds on the monetary and communication cost. Finally, the author provided empirical experiments to evaluate their mechanisms in different environments.
Strengths: - The studied problem of incentives in federated bandit learning is interesting and novel.
- The theoretical claims are strong contributions.
- The payment-efficient mechanism, while using a heuristic method, provides an improvement over the naive method.
- In general, the paper is easy to read.
- The provided empirical experiments support the theoretical claims.
Weaknesses: - The paper made an important assumption that the server knows the vector of cost values exactly as an input to the incentive mechanism. This assumption relies on clients truthfully reporting their personal costs, which can be leveraged by adversarial clients who purposefully misreport their costs in order to game the incentive system. Also, since an objective of federated learning is to protect the privacy of participating clients, relying on clients to disclose their personal costs seems unrealistic.
- Figures 1 and 2 do not have error bars.
- There are some minor typos in the paper.
- The monetary incentives might also come with some negative societal consequences, where clients are ranked by their potential contributions. There could be a scenario where clients are not selected due to their background and raise a fairness issue (e.g.: less-fortunate hospitals in a federated network that have fewer samples and thus do not contribute much are also less likely to be paid).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the current assumption on the cost function be relaxed by making a more explicit assumption on the form of the cost function for each client that account for the data-generating cost, communication cost, privacy cost, etc?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: - The authors have addressed the limitation of the public cost vector. However, the author did not address the potential fairness issue that can come up with their monetary payment mechanism.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive suggestions to clarify the important problem formulations, improve the presentation of results, and strengthen the discussion on the broader societal impact on fairness.
**[Q1]**: The paper made an important assumption that the server knows the vector of cost values exactly as an input to the incentive mechanism. This assumption relies on clients truthfully reporting their personal costs, which can be leveraged by adversarial clients who purposefully misreport their costs in order to game the incentive system. Also, since an objective of federated learning is to protect the privacy of participating clients, relying on clients to disclose their personal costs seems unrealistic.
**[A1]**: Thanks for pointing out the place that could cause unnecessary confusion. Please find our answer to CQ1 in the general response to all reviewers.
**[Q2]**: Figures 1 and 2 do not have error bars.
**[A2]**: Thanks for the great suggestion! We have updated the figures with error bars. Please refer to the supplementary PDF for rebuttal.
**[Q3]**: There are some minor typos in the paper.
**[A3]**: Thanks for your careful review! We have thoroughly examined the paper and corrected the typos in the updated version.
**[Q4]**: The monetary incentives might also come with some negative societal consequences, where clients are ranked by their potential contributions. There could be a scenario where clients are not selected due to their background and raise a fairness issue (e.g.: less-fortunate hospitals in a federated network that have fewer samples and thus do not contribute much are also less likely to be paid).
**[A4]**: Thanks for the valuable comments! We would like to clarify that in the federated bandit problems, the clients’ primary goal is to minimize their regret [1,2,3,4]. According to our Algorithm 1, all clients (no matter whether they get paid or not) will receive the same amount of data after each communication for better arm selection (thus minimizing regret). Therefore, there is **NO** fairness issue in the sense of regret minimization. In fact, our incentive design is reasonably fair for both sides of the problem: 1. From a rational individual’s perspective, only those who suffer from data sharing (e.g., potential cost of privacy) will get monetary compensation; 2. From the system designer’s perspective, all clients are helped equally by the system in terms of regret minimization.
Following the reviewer’s hospital example, a client (hospital) minimizing its regret can be interpreted as providing better treatment to its patients. Based on our incentive mechanism design, all hospitals will receive the same improvement of treatment quality by our federated bandit learning solution, therefore ensuring fairness in terms of helping patients. Although it may result in different hospital revenues, as some hospitals get more payments from the server, revenue distribution is beyond the scope of our work.
With that being said, we absolutely agree that fairness could be a concern in some application problems, e.g., when maximizing monetary utility is also part of the client’s objective. Additional treatment is then needed from the system side to ensure fairness in this regard, e.g., providing extra monetary incentive even when the data incentive is enough to motivate the client to participate. As also mentioned in the ethical review, enforcing fairness is undoubtedly an important issue not only in our problem setup, but more generally in modern machine learning. The present work primarily focuses on developing methods to incentivize communication for more efficient federated learning. We plan to add a discussion on the fairness issues our methods may potentially cause in specific applications, though developing systematic methodologies to address this issue is beyond the scope of our work.
**[Q5]**: Can the current assumption on the cost function be relaxed by making a more explicit assumption on the form of the cost function for each client that account for the data-generating cost, communication cost, privacy cost, etc?
**[A5]**: Thanks for the insightful suggestion! We would like to clarify that our current cost function is not limited to any specific cost form, such as communication resource consumption, data production cost, or potential privacy loss. Instead, it is a general design that jointly considers multiple aspects and is represented by a scalar (as introduced in Section 3.2). As the reviewer suggested, we completely agree that it would be more flexible to further customize the cost function by making assumptions on fine-grained cost forms, especially when our method is applied to scenarios where certain aspects are prioritized over others. For example, one could study the case where each client’s data sharing cost depends on the amount of local data it possesses, or where the cost varies with different clients’ productivity. Unlike the fixed-cost setting, such dynamic costs may introduce more difficulties in the incentive cost analysis, and we believe exploring more cost function designs is a worthwhile future direction.
**[Q6]**: The authors have addressed the limitation of the public cost vector. However, the author did not address the potential fairness issue that can come up with their monetary payment mechanism.
**[A6]**: Please find our answer to Q4 in the above response.
**References**
[1] Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, and Liwei Wang. Distributed bandit learning: Near-optimal regret with efficient communication. ICLR 2020
[2] Ruiquan Huang, Weiqiang Wu, Jing Yang, and Cong Shen. Federated linear contextual bandits. NeurIPS 2021.
[3] Chuanhao Li and Hongning Wang. Asynchronous upper confidence bound algorithms for federated linear bandits. AISTATS 2022.
[4] Chuanhao Li, Huazheng Wang, Mengdi Wang, and Hongning Wang. Learning kernelized contextual bandits in a distributed and asynchronous environment. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer rmwW,
We would appreciate it if you could acknowledge the author's rebuttal and see if they have addressed your concerns. Thank you!
AC
---
Rebuttal Comment 1.2:
Comment: Thank you for the response and justification. I have updated my rating.
---
Reply to Comment 1.2.1:
Comment: We would like to express our gratitude to the reviewer for the positive response and the updated rating. As this is the first work to investigate and facilitate incentives in federated bandit research, we are very excited to share the results and findings of our research in this new direction with the community, and we firmly believe this will greatly enhance the efficiency and practicality of federated bandit learning.
Therefore, we are more than happy to incorporate any additional suggestions the reviewer might have that could further enhance our work and lead to a more favorable evaluation. We have no doubt that the reviewer's valuable advocacy will significantly contribute to the advancement of our work and open up an interesting field in this line of research. | Summary: This paper introduces an incentivized communication problem for federated bandits. They study the contextual linear bandit setting and propose the first incentivized communication protocol, namely, INC-FEDUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
Strengths: The paper is well-organized. The problem studied in the paper is novel and important. They provide both technical results and empirical experiments on both synthetic and real-world datasets.
Weaknesses: The paper lacks intuitive explanations for the technical results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It might be great to add some high-level idea of the proofs for the theorems.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: They assume all clients truthfully reveal their costs of data sharing to the server.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work, and the constructive suggestions to enhance the presentation of our theoretical results.
**[Q1]**: The paper lacks intuitive explanations for the technical results. It might be great to add some high-level idea of the proofs for the theorems.
**[A1]**: Thanks for pointing out the places where we can make our paper friendlier to readers with diverse backgrounds. Below, we comprehensively summarize the intuition behind our algorithm design and provide explanations of the main theoretical results.
**Payment-free vs. Payment-efficient Mechanism Design**:
As we explained in Section 4.2, maximizing the determinant of the client’s local data $V_{i, t}$ directly corresponds to reducing its regret in this federated bandit problem. This principle drives the design of our metric defined in Eq(4): the more the server’s offer (denoted as $D_{i,t}(S_t)$) increases the determinant of the client’s local data, the higher the incentive it provides. But as we proved in Theorem 3, the payment-free mechanism might not motivate any client to participate under specific circumstances. To address this issue, the payment-efficient incentive mechanism introduces additional monetary incentives to motivate clients. To avoid trivially paying everyone more than enough, we look for the minimum incentive cost that achieves the desired level of regret, controlled by the hyperparameter $\beta$. This poses a challenging optimization problem, as a brute-force search can yield a time complexity of up to $O(2^N)$; we therefore implement a heuristic-based search (Algorithm 3) to minimize the incentive cost, with a time complexity of only $O(N)$.
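To convey the determinant-based incentive idea and the greedy alternative to exhaustive subset search, here is a toy sketch (our own simplification in the spirit of Algorithm 3; the target ratio, matrix sizes, and variable names are hypothetical and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 4
V_local = np.eye(d)  # a client's local Gram matrix (toy)
# candidate rank-one updates Delta V_j contributed by N other clients (toy)
deltas = [x[:, None] @ x[None, :] for x in rng.normal(size=(N, d))]

def det_ratio(V, offer):
    # Data incentive as a determinant ratio: how much the offered data
    # increases the determinant of the client's local statistics.
    return np.linalg.det(V + offer) / np.linalg.det(V)

# Greedy heuristic: add updates by marginal gain until the desired
# determinant ratio (playing the role of beta) is reached, instead of
# searching all 2^N subsets.
target = 3.0
offer, chosen = np.zeros((d, d)), []
while det_ratio(V_local, offer) < target and len(chosen) < N:
    gains = [(det_ratio(V_local, offer + dV), j)
             for j, dV in enumerate(deltas) if j not in chosen]
    _, best_j = max(gains)
    offer = offer + deltas[best_j]
    chosen.append(best_j)
print(det_ratio(V_local, offer), chosen)
```

The greedy pass avoids the exponential blow-up of brute-force subset search while still prioritizing the updates with the largest marginal determinant gain.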
**Theorem 3**: As detailed in Appendix C, the data incentive is bounded by the environment configuration. Thus, when client $i$’s cost $D^p_i$ exceeds this bound, it becomes impossible for the server to provide enough incentive to motivate client $i$ to participate in the communication. Theorem 3 implies that when the number of participating clients is less than the threshold $\frac{c}{2C} \cdot \frac{N}{\log(T/N)}$, a sub-optimal regret of the order $\Omega(d\sqrt{NT})$ (i.e., no regret reduction) is inevitable.
The proof of Theorem 3 relies on Lemma 8, which indicates that once we encounter a situation where the number of participants in the payment-free mechanism falls below a threshold, a sub-optimal regret is inevitable. Therefore, we first establish a data incentive bound, and then create a situation specified in Lemma 8, which completes the proof.
**Communication Cost**: As detailed in Appendix D (Line 519), the communication cost is $C_T = P \cdot O(Nd^2) = O(N^2 d^3 \log T)$, where $O(Nd^2)$ is the communication cost per epoch, and $P = O(Nd\log T)$ is the number of communication epochs, given the communication threshold $D_c = \frac{T}{N^2d\log T} - \sqrt{\frac{T^2}{N^2dR\log T} }\log \beta$. This result suggests that a lower value for $\beta$ results in a higher communication threshold $D_c$, thus leading to reduced communication frequency and cost, which is also supported by our numerical experiments (Appendix E.3).
To bound the communication cost, we first analyze the communication cost per epoch, then establish an upper bound for the total number of epochs, thereby completing the proof.
**Regret**: As explained in Appendix D (Line 567), the regret is analyzed under good epochs and bad epochs of communication. Our result $R_T = R_{good} + R_{bad} = O\left ( \frac{d }{\sqrt{\beta}}\cdot \sqrt{T} \cdot \sqrt{\log\frac{T}{\delta} \cdot \log T}\right) + O\left(Nd^{1.5}\sqrt{D_c \cdot \log\frac{T}{\delta}}\log T \right)$ shows that a larger value of $\beta$ leads to smaller regret in good epochs (first term), as the client’s local data gets closer to the global oracle. Meanwhile, a larger communication threshold $D_c$ results in worse regret in bad epochs (second term). This is because a higher communication threshold causes less frequent communications, ultimately resulting in worse regret.
**Incentive Cost**: As detailed in Appendix D (Line 527), the incentive cost $M_T \leq \max_{i\in[N]}\{D_i^p\} \sum\limits_{p = 1}^P N_p - \sum\limits_{i=1}^N\sum\limits_{p \in \mathcal{\bar{P_i}}} \mathcal{I}^{d}_{i, t_p}$ consists of two parts. The first part represents the case where the server only uses money to motivate clients in each communication epoch, disregarding the data incentive. The second part represents how much monetary incentive the server could have saved by motivating clients with data incentive. With the proper $D_c$, a lower value of $\beta$ not only leads to less communication frequency but also decreases the demand for the server to collect data from the clients in each epoch, jointly resulting in reduced incentive cost, which is verified in our numerical experiments (Appendix E.3).
To derive this bound, we first analyze the monetary incentive cost by associating it with the client’s data sharing cost and the data incentive already provided by the server. Then, by establishing a lower bound for the data incentive, we can upper bound the incentive cost.
**[Q2]**: They assume all clients truthfully reveal their costs of data sharing to the server.
**[A2]**: Please find our answer to CQ1 in the general response to all reviewers.
---
Rebuttal Comment 1.1:
Comment: Thanks for the quick response and explanation. I keep the original rating.
---
Reply to Comment 1.1.1:
Comment: We are glad that the reviewer found our responses helpful, and we genuinely thank the reviewer for recognizing our work. Indeed, we are very excited to introduce this novel incentivized setting into federated bandit research and share our results and findings in this new direction with the community.
With the reviewer's valuable advocacy, we firmly believe that our work will pioneer an interesting and important field in this line of research, significantly enhancing the efficiency and practical operability of federated bandit learning. | Rebuttal 1:
Rebuttal: # General response to the reviewers:
We sincerely thank all the reviewers for their thoughtful comments and constructive suggestions, which will significantly help us strengthen our paper. It is encouraging that all reviewers appreciate the novelty and importance of the studied problem and our proposed solution, with solid theoretical analysis (Reviewer zWH8, rmwW, NbUe, U6am), extensive numerical validation (Reviewer zWH8, rmwW, NbUe, U6am), and potential broader impact to other related areas where incentivized federated learning is needed (Reviewer NbUe, U6am).
There are also shared comments regarding truthful cost revealing and data utility design. Indeed, one of the intended goals of our work is to inspire further investigations into this new direction with more diverse settings, such as learning with strategic clients where truthful mechanism design is needed, and heterogeneous client settings where relevant data utility design is needed. And we agree with the reviewers that those are all important future directions. Next, we first provide our responses to these common questions (CQs), and endeavor to provide individual responses to each reviewer.
**[CQ1]**: Truthful cost revealing (Reviewer zWH8, rmwW)
**[CA1]**: In this work, we introduce the incentivized communication problem for federated bandits. As the first work of its kind, we aim to establish a foundation and initiate the study with a simplified setting, where clients truthfully reveal their data sharing costs. But as we discussed in the conclusion section, the study of truthful mechanism design is a very interesting and important future work. Specifically, to prevent clients from strategically misreporting their costs to gain more utility (either monetary or data incentive) from the server, one potentially promising direction is to investigate truthful (also known as incentive-compatible) mechanisms like the Vickrey–Clarke–Groves (VCG) mechanism, under which being truthful is the best response for all clients. We firmly believe this will open up an interesting new field of studies in federated bandits and beyond.
On the other hand, as Reviewer rmwW suggested, explicitly sharing individual costs may expose a side channel of privacy breaches. In practice, one promising solution is to avoid this direct revelation via secure computation. As the cost values are simple scalars, like prices, and the associated operations are simply comparisons, it will not incur high overhead in secure computation.
**[CQ2]**: Data utility design (Reviewer NbUe, U6am)
**[CA2]**: In federated bandit problems, the clients typically share the same unknown reward parameter $\theta_\star$, which is a standard setting in this line of research [1,2,3,4]. We followed this setting and refer to it as the homogeneous client setting. As illustrated in Eq(3) of our paper, the determinant ratio $\frac{\det(\widetilde{V}_{t-1})}{\det(V_{i_t, t-1})}$ reflects the additional regret due to the delayed synchronization between client $i_t$’s local sufficient statistics $V_{i_t, t-1}$ and the global statistics $\widetilde{V}_{t-1}$. This argument has been recognized in prior works, e.g., Section 3.2 of [3], and is also theoretically supported by Lemmas 5 and 7 of our paper. In other words, if all clients participate in data sharing in every time step, the ratio will be kept at 1, and thus every client essentially enjoys the optimal regret (discussed in Section 3.1). **Therefore, minimizing this ratio directly corresponds to reducing client $i_t$'s regret**. As a result, the proposed Eq(4) is a natural design of data valuation for homogeneous clients. In essence, if we denote the data valuation function as $f(x)$, where $x$ is the metric defined in Eq(4) that directly measures the value of data for regret reduction, our current design can be regarded as $f(x)=x$. For different application scenarios, we can further generalize it to any monotonic function $f(x)$ on top of this metric.
Furthermore, we should emphasize that Line 9 of Algorithm 1 is **NOT** where the clients decide whether to participate. Instead, it is the “communication trigger” - to ensure communication efficiency during the federated learning process. An event trigger is introduced to control the communication frequency as detailed in Section 4.1.
We also acknowledge that there are studies [3] that explore the heterogeneous client setting in federated bandits, where the unknown parameter $\theta_\star$ for each client consists of a globally shared component $\theta_\star^g$ and a unique local component $\theta_\star^l$. In this case, the data valuation design should be different from our current solution, because data valuation now depends on each client’s own $\theta_\star$, which is unknown to the server and clients. For example, we may need to assume the availability of additional knowledge about the relation among the set $\{\theta_\star\}$, or design strategies that estimate such relations on the fly. Alternatively, one simplified solution is to assume clients value the data only based on the shared component $\theta_\star^g$, and then the data valuation design will be essentially the same as our current choice.
**References**
[1] Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, and Liwei Wang. Distributed bandit learning: Near-optimal regret with efficient communication. ICLR 2020
[2] Ruiquan Huang, Weiqiang Wu, Jing Yang, and Cong Shen. Federated linear contextual bandits. NeurIPS 2021.
[3] Chuanhao Li and Hongning Wang. Asynchronous upper confidence bound algorithms for federated linear bandits. AISTATS 2022.
[4] Chuanhao Li, Huazheng Wang, Mengdi Wang, and Hongning Wang. Learning kernelized contextual bandits in a distributed and asynchronous environment. ICLR 2023.
Pdf: /pdf/1c4fc123cb17b10588d89f8f798ba83a89799976.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Unified Lower Bounds for Interactive High-dimensional Estimation under Information Constraints | Accept (poster) | Summary: This paper discusses parametric estimation under a communication setup. This setup adds variation to classic parametric estimation and focuses on the setup $\theta \to X^n \to Y^n$ where the goal is to estimate $\theta$ given $Y^n$, which is generated in an interactive, sequential manner with $Y_i$ possibly dependent on other signals.
The main result of this paper is Theorem 1, which gives an information contraction bound. With additional assumptions on orthogonality and subgaussianity, the paper's next result is Theorem 2, which gives two corollary information contraction bounds expressed in terms of variance, mean and mutual information terms relating to channels.
The authors then provide several examples of tightness (and near tightness) for these bounds by looking at minimax rates for examples such as product Bernoulli, sparse Gaussian, and discrete distributions under information or communication constraints.
Strengths: **Originality**: The paper builds upon the classic framework of parametric estimation under an interactive framework. While the problem setup itself is not original, this paper contributes new findings to minimax theory for these setups, particularly with communication and privacy constraints.
**Quality**: The paper is nicely written, and the authors explain the setting and results well. No glaring issues were found with regards to technicalities.
**Clarity**: Overall, the paper is cleanly written, bar some comments stated in below sections.
**Significance**: The interactive parametric estimation problem considered by the paper is an interesting one and is directly relevant to the community.
Weaknesses: Overall, the paper itself is quite nicely written and the topic is interesting. Some comments regarding weaknesses include:
1. In several places the paper is structured in reverse order, referencing equations that only appear later in the paper. This may cause some confusion for readers.
2. It may be helpful to highlight the difference in technical contributions (for example, proof techniques) between this work, [4], and [16], considering the problem setup is quite similar and both this paper and [16] "build upon the framework presented for the discrete setting in [4]". What are some scenarios where the minimax results in this paper can significantly improve upon results in [16]?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. While the paper focuses on *interactive* protocols, it is curious whether similar techniques can be applied to the general parametric estimation model (i.e., in the setting where Y = X) and achieve tighter lower bounds than previous works, considering that the techniques used in this paper do not depend on "bounded ratio" type assumptions, as claimed in the paper. Would it be possible to use the results in the paper to significantly improve classic minimax bounds for sparse/non-sparse Gaussians with non-identity covariance (i.e., covariance other than $\mathbb{I}$), such as cases where the covariance matrix eigenvalues vary widely (i.e., are on different orders)?
2. How much are we sacrificing by using the inequality in (30) within the supplementary material? Would not dropping the second term in the denominator give improved results?
3. How limiting is the assumption of Assumption 4 and its corresponding Lemma 1? In other words, how easy is it to find $\theta$'s that satisfy this assumption?
4. It may be beneficial to add some references to the parts of the paper where it is claimed that previous works regarding bounded ratio type assumptions are "flawed" (for example, maybe previous papers that analyze this flaw).
Some additional suggestions/typos/confusions:
a. The term $S$ is not clearly defined in Equation (2). It may help to add some description of what this means.
b. Line 193 seems to be incorrectly formatted.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There does not appear to be any negative societal impact associated with this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading and acknowledging the novelty and significance of the paper. Below we address technical questions raised in the review.
#### ***It may be helpful to highlight the difference in technical contributions (for example, proof techniques) between this work, [4], and [16], ...***
For comparison of our result to [4], see response to Reviewer DETb. While [16] does build on [4] as well, our goal and results are significantly more general: our objective was to obtain results under minimal assumptions, and in particular without any “bounded ratio assumption”. This is in particular apparent in the results of [16]:
- Their Assumption 2 (“Likelihood ratio condition”) is exactly such a bounded-ratio assumption.
- As a result, their bound for the sparse Gaussian case (Theorem 3 in their arXiv version: https://arxiv.org/abs/1802.08417) cannot use their general theorem (as the sparse case does not satisfy that assumption), and so they must use a completely different argument (Appendix D of their paper) which is restricted to the *non-interactive* case (termed simultaneous message passing protocols in their paper).
This is to be contrasted with our general approach, which, free of that assumption, directly applies to sparse Gaussians in the interactive case as well (see lines 353-354, and Theorem 5 (Section G.2) of the supplemental of our paper).
#### ***Would it be possible to use results in the paper to achieve significant improvement to classic minimax bounds ...***
Our result can be viewed as an extension of classic methods for proving lower bounds of general parametric estimation problems to the case under information constraints in the interactive setting. Without information constraints, our technique will reduce back to these classic methods, e.g. Assouad’s method. In particular, the “bounded ratio” assumption we get rid of in this work is not needed without these information constraints since the divergence measures will have explicit forms when Y = X and the distribution of X is known.
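For readers less familiar with the classical tool mentioned above, one standard form of Assouad's lemma (a sketch; constants and normalizations vary across sources) reads: if a family $\{p_z : z \in \{-1,+1\}^k\}$ has parameters satisfying $\ell(\theta(z), \theta(z')) \ge 2\tau\, d_{\mathrm{Ham}}(z, z')$, then

$$\inf_{\hat{\theta}} \max_{z \in \{-1,+1\}^k} \mathbb{E}_z\big[\ell(\hat{\theta}, \theta(z))\big] \;\ge\; k\,\tau \min_{d_{\mathrm{Ham}}(z,z')=1} \big(1 - d_{\mathrm{TV}}(p_z^{\otimes n}, p_{z'}^{\otimes n})\big).$$

Roughly, the information constraints enter through the total variation term, which is the quantity the contraction bounds control.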
#### ***How much are we sacrificing by using the inequality in (30) within the supplementary material?***
We drop that term to relate Hellinger distance to chi-square distance. It is critical for us, since we exploit the bilinear form of chi-square distance to get a handle over contraction in distances due to information constraints. In all the examples we have seen, this weakening does not hurt us. However, it is of course conceivable that there could be examples where this weakening of Hellinger distance to chi-square distance becomes the limitation.
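For reference, the bilinear structure alluded to above: with densities $p, q$ relative to a dominating measure $\mu$,

$$\chi^2(p, q) \;=\; \int \frac{(p - q)^2}{q}\, d\mu \;=\; \int \frac{p^2}{q}\, d\mu \;-\; 1,$$

so $1 + \chi^2(p, q)$ is a quadratic (bilinear) form in $p$, which is what makes tracking contraction under constrained channels tractable. The weakening itself rests on the standard comparison $d_H^2 \le \chi^2$ (up to the normalization convention chosen for the squared Hellinger distance).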
#### ***How limiting is the assumption of Assumption 4 and its corresponding Lemma 1?***
This is a good question, with two (complementary) answers.
- The first is that it is indeed limiting, analogously to the widespread Assouad’s lemma in standard minimax lower bounds: i.e., Assumption 4 does not really restrict further our results than it would using Assouad’s lemma in the “non-constrained” estimation literature.
- The second is that it is not *really* limiting, in that this assumption (and corresponding lemma) is completely independent of Assumptions 1-3 and our main theorems. Namely, Theorems 1 and 2 (the key contribution of our work) allow one to upper bound a given quantity, the average discrepancy (AD), and do not rely on Assumption 4 in any way. Then, Lemma 1 provides a lower bound on this same quantity (AD) (and does not rely on Assumptions 1-3): putting the two together gives the minimax lower bounds. But if one found another, different way to lower bound (AD) that doesn’t need Assumption 4 (e.g., via a Fano-type argument instead of Assouad-type), or anything else really, then the same template would go through and one would readily get a minimax lower bound by combining this with Theorem 1 or 2. While we do not provide an alternative to Assumption 4/Lemma 1 in our paper (as this Assouad-like bound suffices for our purposes), such an alternative is entirely conceivable.
#### ***It may be beneficial to add some references to the parts of the paper where it is claimed that previous works regarding bounded ratio type assumptions are "flawed"***
This is a good suggestion, and we will do our best to incorporate it in the final version. Yet, as several of these flaws were confirmed via personal communication with the authors, we felt it would be awkward (and potentially in violation of the double-blind policy) to include them in the submission.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thank you for addressing my questions. Please add the discussions here into further revisions as appropriate. | Summary: This work investigates the distributed parameter estimation problem under local information constraints such as communication constraints, local privacy constraints, and restricted measurements. The authors focus on information-theoretic lower bounds for the minimax error rates of these problems and present "plug-and-play" lower bounds that can be applied to various estimation problems. In addition, for most cases, the lower bounds are complemented with matching upper bounds.
Strengths: This work presents a unified framework for deriving a variety of minimax lower bounds for various families of distributions under local information constraints for the distributed parameter estimation problem. Additionally, matching upper bounds have been provided to demonstrate that the lower bounds are tight for most settings.
Weaknesses: It would be beneficial to see a table displaying the lower/upper bounds for the various cases considered in this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: no
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments. We will improve our paper based on the suggestions in future revisions. | Summary: This paper studies the parameter estimation problem in a setting with the following two important assumptions:
1. Raw samples X_i's are generated from a certain parametric distribution, however, they are not directly observed. Instead, Y_i's which are generated from channels subject to certain constraints are observed. These constraints can encapsulate various problems of interest, notably communication constraints and local privacy.
2. Interaction is allowed. That is, Y_i's are generated with memory.
The main result is a general abstract upper bound on the sensitivity of the transcript (i.e., public view Y_i's and common randomness if any) of the interactive protocol.
This, upon nontrivial specification, yields a suite of tight or nearly tight impossibility results for problems mentioned above.
Strengths: I'm not an expert in this field.
Though the main text is somewhat dense, I found it pretty accessible.
Most importantly, the results seem pretty satisfactory.
Weaknesses: I don't see obvious weaknesses.
Please instead see my questions below (most of which are very minor).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major questions:
1. Is it possible to explain why in the main bounds (Theorem 1 & 2), TV distance was used? Is this the only reasonable choice? If so, why?
If not, is using other distance/info measure beneficial in any way, e.g., simplifying the proof, improving leading constant factors, etc.?
2. I saw that most examples in this paper are regarding product (or "product-like") distributions (e.g., product binary distribution, isotropic Gaussian). And my shallow understanding is that this seems to be a limitation of the technique.
This question may go beyond the scope of the paper, but I'm wondering if it is possible to obtain similar results when certain structured correlation is present.
E.g., Gaussian with general covariance, or simple temporal correlation such as Markovian structure.
Other comments:
1. Line 28, "local information constraint". I'm not familiar with this line of work and this may be a silly question. Could the authors offer some examples of non-local/global constraints?
2. I feel like already in line 44, the buzz word "Assouad" should have been mentioned. This is a rather standard tool and a general audience can smell that when reading upon line 44.
3. Mysteriously the line breaks at 193.
4. In Equation (2), the Gothic letter under the first sup was defined in footnote 2. It took me a while to find it. Perhaps upgrade this footnote to the main text?
5. To information theorists the notation y^t (perhaps more commonly: y_1^t) for (y_1, ..., y_t) is common but I'm not sure how standard it is to a general reader, especially given that this can be confused with the t-th power of a number.
I encourage the authors define this notation somewhere.
6. The notation p_Z^{Y^n} denotes the density of (Y^n, U), not just Y^n. Somehow U is dropped. Is there a reason for doing so?
7. In Equation (3), the integral is against y w.r.t. mu. Sometimes it might be slightly more clear to explicitly write $\mathrm{d}\,\mu(y)$.
Also, this measure mu was defined in footnote 2, which is somewhat hidden.
8. A typo on line 359, double commas following "i.e.".
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed reading and suggestions on improving the presentation of the paper. Below we address technical concerns raised by the reviewer.
#### ***Is it possible to explain why in the main bounds (Theorem 1 & 2), TV distance was used? Is this the only reasonable choice?***
Thanks for the great question. Here we discuss three commonly used discrepancy measures, including TV distance, Hellinger squared distance ($d_H$), and mutual information (expected KL divergence). In general, it holds that $d_{TV}^2 \le d_H^2 \le d_{KL}$. Our main bounds also hold if we switch to $d_H^2$, which is also how Theorem 1 is proved (see Appendix E.1). Further relaxing discrepancy measures to mutual information (or KL) is trickier. In particular, [16] (as referenced in our paper) uses mutual information as their discrepancy measure, and as a result, their bound would require an additional “bounded ratio” assumption on the distribution family, which fails for the sparse Gaussian family (see details in the response to reviewer Zd1o regarding the comparison to [16]). To summarize, both $d_{TV}^2$ and $d_H^2$ could be used, and would lead to similar results, comparable both in terms of proof length and resulting constants (which, for the sake of this paper, we didn’t focus on optimizing). TV distance was chosen here mainly because of its convenience and more widespread use.
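As a quick numerical sanity check of the chain $d_{TV}^2 \le d_H^2 \le d_{KL}$, here is a small self-contained Python sketch on discrete distributions (our own illustration; note the chain requires taking the squared Hellinger distance as $\sum_i(\sqrt{p_i}-\sqrt{q_i})^2$, i.e. without the common $1/2$ factor, and KL in nats; the distributions below are arbitrary examples):

```python
import math

def tv(p, q):
    # total variation distance: (1/2) * sum_i |p_i - q_i|
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger_sq(p, q):
    # squared Hellinger distance WITHOUT the 1/2 factor:
    # sum_i (sqrt(p_i) - sqrt(q_i))^2 -- the normalization under
    # which d_TV^2 <= d_H^2 <= d_KL holds
    return sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))

def kl(p, q):
    # KL divergence in nats; assumes q_i > 0 wherever p_i > 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# check the chain on a few strictly positive distributions
pairs = [
    ([0.5, 0.5], [0.9, 0.1]),
    ([0.2, 0.3, 0.5], [0.4, 0.4, 0.2]),
    ([0.01, 0.99], [0.5, 0.5]),
]
for p, q in pairs:
    t2, h2, k = tv(p, q) ** 2, hellinger_sq(p, q), kl(p, q)
    assert t2 <= h2 <= k, (t2, h2, k)
    print(f"TV^2={t2:.4f} <= H^2={h2:.4f} <= KL={k:.4f}")
```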
#### ***Examples of "non-local constraints."***
The central version of differential privacy [Dwork and Roth 14] requires that a private algorithm produce similar outputs for neighboring datasets (those differing in at most one data point). To the best of our knowledge, this constraint cannot be expressed as a local constraint.
Another type of “non-local constraint” would be adversarial corruptions of data (see, e.g., https://sites.google.com/view/ars-book/): while one can model the Huber contamination model as a local constraint (as each data point is corrupted in an i.i.d. fashion), the common “strong contamination model” which allows an adversary to corrupt say 10% of the samples (of its choice) in an arbitrary fashion is not a local constraint, as it allows for arbitrary correlations among corrupted samples.
#### ***... if it is possible to obtain similar results when certain structured correlation is present. E.g., Gaussian with general covariance ...***
See answer in the global response.
[Dwork and Roth 14] Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends in Theoretical Computer Science 9.3–4 (2014): 211-407. | Summary: This paper studies distributed parameter estimation under local information constraints. In this problem, independent samples $X_1,..., X_n$ are generated from an unknown distribution $p_\theta$ from a parametric family $P_\Theta$ of distributions. The samples are not accessible directly, rather available is only partial inofmration $Y_1, ..., Y_n$ generated by passing the original samples through classical channels $W_i$. The goal is to estimate the underlying parameter $\theta$ by accessing $Y_1, ..., Y_n$. The channels can be interactive, dependig on the past observations. They also local information constraints, including communication constraints, and privacy constraints.
This paper derives lower bounds on the sample compelxity of this problem in a general framework. Moreover, this work explores applications of the general bound to problems of high-dimensional mean estimation and distribution estimation, under privacy and communication constraints, for the entire family of $l_p$ loss functions for $p \geq 1$.
Strengths: The problem formulation and the related results are substantially general and can be used in various problems involving different constraints.
The paper is rigorous and well written.
Weaknesses: It is not clear how significant the generalization is compared to previous works (i.e., reference [4]). Is the paper generalizing [4] to $l_p$ loss functions? If that is the case, I am not sure how important that is beyond a mathematical curiosity.
Minor comment: it looks like that the symbol $\wedge$ is not defined.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As another point, the current approach focuses on a finite perturbation subset of $\Theta$. How does this approach work for continuous parameter space $\Theta$ ? Do we need to make sure that the Z space is a good representation of $\Theta$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed reading and acknowledging the generality of our technique. Below we address the questions raised by the reviewer.
#### ***The significance of the generalization compared to [4]***
We stress that the results and techniques of [4] are specific to discrete distributions: extending these techniques to handle general distribution classes (e.g., high dimensional distributions) is a non-trivial task. For example, for high-dimensional distributions, to get the correct dependence of $d/\ell$ under communication constraints, we use a measure change bound (Lemma 3 in the appendix) to prove an upper bound of $\log|\mathcal{Y}|$ instead of $|\mathcal{Y}|$ in the information contraction bound (see Corollary 2). For discrete distributions, the latter will be sufficient. Moreover, the proof of the result in [4] relies on the sum of mutual information as a key step, which will inherently cause a “bounded ratio” assumption on the distribution family similar to [16] (see details in the response to reviewer Zd1o regarding the comparison to [16]). We resort to average TV distance as the discrepancy measure to remove this assumption. This being said, [4] also provides bounds for *testing* (and not just estimation), which our paper doesn’t include.
As such, our paper is a considerable generalization of [4], going way beyond the simple extension to other $L_p$ norms.
#### ***How does this approach work for continuous parameter space? Do we need to make sure that the $Z$ space is a good representation of $\Theta$?***
Yes, indeed. Making sure that the $Z$ space captures the hardness of $\Theta$ is an important step for using our framework to prove information-theoretic lower bounds. This typically requires an understanding of the structure of the parametric estimation problem, which is also necessary for proving lower bounds for parameter estimation in classic settings without local information constraints. Our framework captures the additional cost imposed by local information constraints on top of this.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. Some of my concerns are addressed, and I increase my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading of our submission, and are delighted to see their very positive assessments of our work. We will take into account their comments and suggestions in the final version of our paper, and respond individually to their questions below. We first address a question mentioned by several reviewers.
#### ***Whether our techniques could apply beyond identity-covariance matrix or independent coordinates.***
This is, indeed, a very natural question: while we believe our techniques can provide lower bounds for this case (for a suitable family of hard instances, such as the one described in Lemma 6.11 of [https://arxiv.org/abs/1805.00216 ], which is indeed parameterized by a vector of $\binom{d}{2}$ independent $\pm 1$ parameters), such a lower bound would most likely be quite technical, and lengthen the paper significantly. As our main focus here was to establish the general lower bound framework and demonstrate its applicability via a few notable examples, we left these additional applications for future work. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The work considers the problem of proving sample complexity lower bounds for parameter/density estimation in a distributed setting with information constraints (local privacy, communication constraints etc). What distinguishes this paper from prior work is that it focuses on the scenario where there is interaction between the users who share their data. Thus, while the data-points are initially generated in an i.i.d. fashion, when the time comes for the data-points to be subjected to the information constraints, the results depend on previous users' actions.
The authors develop general tools for obtaining lower bounds in this setting, where the main challenge involves dealing with the dependencies implied by the interactions between users. The main tool developed is a lemma, essentially a version of Assouad's method adapted to the setting of sequentially interactive distributed protocols with information constraints. The resulting framework manages to capture various existing techniques such as strong data-processing inequalities and bounds derived using the distributed version of the Bayesian Cramér-Rao bound known as the van Trees inequality. The paper also includes applications of these techniques to a number of estimation problems, while the derived lower bounds are in most cases complemented by nearly-matching upper bounds. Finally, there's a description of how the techniques could be adapted to work in the fully interactive setting as well.
Strengths: This is a quite strong paper with technical contributions that improve our understanding of statistical estimation tasks in the distributed setting. The work provides both techniques (which are presented in a quite general way) and applications (which confirm the effectiveness of said techniques). The applications include the fundamental tasks of mean estimation for binary product distributions and Gaussian distributions under the $\ell_p$-norm, as well as density estimation for discrete distributions. The work is also notable for identifying mistakes in previous papers where some of these results were claimed and using the developed framework to fix the proofs. Finally, the writing is very clear and the work is clearly positioned within the overall literature.
Weaknesses: No obvious weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have one question. The applications given in this paper within the context of parameter estimation for high-dimensional distributions all involve independent marginals (binary product distributions, spherical Gaussians). Additionally, the lemma the proofs of these statements rely on (Lemma 3) is an analogue of Assouad's method. Assouad's method assumes that the family of distributions considered as part of proving the lower bound is parameterized by the vertices of the Boolean hypercube, with each coordinate drawn independently. This frequently ends up limiting the applicability of the method to problems involving distributions with independent marginals. Is this a limitation of the work, or is there evidence the method can lead to tight lower bounds even with non-product distributions (e.g., for Gaussian covariance estimation)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: This is a purely theoretical work with a focus on lower bounds. Thus, there is no apparent societal impact (positive or negative).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive assessment of our work. We refer the reviewer to the global response for our discussion on generalizing the technique beyond distributions with independent marginals.
---
Rebuttal Comment 1.1:
Comment: Thank you very much! My score remains unchanged. | null | null | null | null | null | null |
Convolutional State Space Models for Long-Range Spatiotemporal Modeling | Accept (poster) | Summary: This paper presents ConvS5, a convolutional state-space model that aims to model long-range spatiotemporal dependencies in video data. They extend the prior S5 model to operate on a convolutional state space, retaining the long-range benefits of state-space models by using a pointwise convolutional state kernel, which keeps the dynamics diagonalizable and compatible with the S5 HiPPO initialization. The resulting model can then be trained efficiently using parallel scans due to linearity in the recurrent mechanism. They show strong results on several complex long-range video benchmarks in Moving MNIST, DMLab, Minecraft, and Habitat.
Strengths: - The paper is generally clear and well written
- Comprehensive experiments on a range of datasets focused on long-range understanding in videos, showing that their proposed method is better than or competitive with SOTA methods
- Experiments show that better scaling in sequence length allows for training on longer sequences, which results in better / more stable generation
- Faster inference speed compared to existing methods while retaining high generation fidelity
Weaknesses: - Overall technical novelty is a little low, as the model itself is a straightforward extension of the S5 model to a convolutional setting.
- A primary concern I have about recurrent-based models for long-range understanding in video generation is the limited capacity of the recurrent state: these models are required to store all fine-grained details about the scene in the recurrent state (since potentially every part of the scene will be visited / need to be reconstructed at a future timestep), whereas transformer-based models can dynamically retrieve from all prior states at each timestep. A rigorous study on this aspect would benefit the paper greatly, as the issue becomes more pronounced for visually complex videos (Minecraft -> Habitat -> real videos) and could be a potential bottleneck to the scalability of these models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - For Moving MNIST, how would ConvS5 potentially compare against temporally hierarchical methods in modeling long-range dependencies (e.g. Clockwork-VAE)? Or would an analogous hierarchical version of ConvS5 be better than a hierarchical GRU model?
- How would ConvS5 compare on Minecraft / Habitat? I believe the primary difference is the absence of MaskGit / iterative decoding on the prior, so would ConvS5 struggle with more stochastic environments?
- What is the potential cause of the difference in performance between FVD and PSNR / SSIM / LPIPS in Minecraft? Is FVD lower due to better frame fidelity of TECO-ConvS5, and TECO-transformer has better long-term consistency? Or perhaps other factors?
- Are generated video samples available anywhere to view? It is hard to evaluate temporal coherence in the rows of frames provided in the appendix.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback. We are glad the reviewer appreciated the comprehensive experiments as well as strong results.
**--Re. technical novelty**: We respectfully disagree that the proposed approach is a straightforward extension of S5. Please see the Technical Contributions section of the General Response for a detailed explanation of the contributions required to ensure the convolutional recurrence could be efficiently parallelized and model long-range spatiotemporal dependencies.
**--Re. recurrent vs attention architectures**: Thank you for the question. This is an interesting discussion. While it is true that recurrent methods must store the history in their recurrent states, this capacity can be increased by adding more hidden units or layers. On the other hand, attention mechanisms have bounded context by design, which limits the amount of history they can retrieve. Therefore, it is not necessarily true that transformer-based models can retrieve from all prior states at each timestep, or that recurrent models are a bottleneck to scalability.
In addition, recent architecture improvements have been proposed for SSMs in language modeling, such as multiplicative gating/routing operators (e.g. in H3 [1]) to allow for recall operations similar to those you have suggested. These ideas can be applicable to ConvSSMs as well. Another possibility is to apply chunked/bounded context attention mechanisms to the outputs of an SSM/ConvSSM as suggested in MEGA [2].
However, the main focus of this paper was to develop the base ConvSSM architecture. Our evaluation on the long-range video benchmarks (the most challenging ones that currently exist) show that ConvS5 performs comparably or better than Transformers on these tasks. We believe this provides some evidence that this type of method is capable of scaling to more complex long-range datasets.
We would be happy to include these discussions and potential modifications in the final version. Please let us know if the reviewer has suggestions for an additional study they would like to see in this regard.
### Questions:
**--Re. comparing against temporally hierarchical methods:** Thank you for your suggestion! We added the CW-VAE comparison to the MNIST table for models trained on 600 sequence length below. We will add a table for models trained on 300 sequence lengths for the final version (we could not finish both before the rebuttal deadline). We found that ConvS5 outperforms CW-VAE in this setting across the metrics with both models having a similar number of parameters. This result indicates that ConvS5 itself compares well to temporally hierarchical methods in modeling long-range dependencies. However, we also agree that it could be interesting to combine temporally hierarchical methods with ConvSSM methods. We can include a discussion of this in the final version.
**New CW-VAE baseline trained on 600 frames:**

*100 → 800:*

| Model | FVD ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|-------------|------:|-------:|-------:|--------:|
| Transformer | **42** | 13.7 | 0.672 | 0.207 |
| ConvLSTM | 91 | 15.5 | 0.757 | 0.149 |
| CW-VAE | 93 | 12.5 | 0.599 | 0.268 |
| ConvS5 | 47 | **16.4** | **0.788** | **0.134** |

*100 → 1200:*

| Model | FVD ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|-------------|------:|-------:|-------:|--------:|
| Transformer | 91 | 13.1 | 0.631 | 0.252 |
| ConvLSTM | 137 | 14.6 | 0.727 | 0.180 |
| CW-VAE | 109 | 12.4 | 0.590 | 0.280 |
| ConvS5 | **71** | **15.6** | **0.763** | **0.162** |
**--Re. convS5 comparison for Minecraft/Habitat**:
We were not able to add the comparisons of (non-TECO) Transformer, S5, and ConvS5, because they are much more expensive to run than the TECO variants. Note that both the dataset sizes and the model sizes used for these datasets are much larger than for DMLab. However, we believe the trend would be similar to the DMLab non-TECO comparisons.
Nevertheless, we may be able to try the comparison with smaller model sizes for these datasets if the reviewer would like to see the relative comparisons of the models for the final version. Please let us know.
**--Re. performance difference between metrics on Minecraft**:
We are not confident enough to draw any strong conclusions about the difference in performance between FVD and PSNR/SSIM/LPIPS. However, the gap between TECO-Transformer and TECO-ConvS5 is small for all metrics (especially relative to the baselines), and the visual differences in the predictions between these models were not significant. Our conclusion is that the performance of these models is comparable on this dataset, but our model has a faster sampling speed than TECO-Transformer (1.7x at this sequence generation length; note that the TECO-ConvS5 speed will stay constant for generating longer sequences while the TECO-Transformer speed will decrease).
**--Re. link to videos**:
In the appendix of the original submission (Line 746 in Appendix C), we included a link to an anonymized website that includes the video samples. We will move this link to the main paper to make this clear. Please see the link in the appendix. Note that we cannot include any external link in our response due to NeurIPS rebuttal policy.
**We thank the reviewer again** for the time they have taken to review our paper and read this rebuttal. We hope we have addressed the reviewer's questions and the reviewer is willing to increase their score. Please let us know if we can provide additional clarification or information.
**References:**
[1] Hungry Hungry Hippos: Towards Language Modeling with State Space Models. [2] Mega: Moving Average Equipped Gated Attention.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your rebuttal, and the extra experiments.
**re: ConvS5 in Minecraft / Habitat**
Could the authors clarify on the ConvS5 architecture? Is it autoregressive per token, or autoregressive over time and predicts all tokens of the next image simulatenously, or something else?
**re: novelty**
Please correct me if I am misunderstanding something, but the proposed method / architecture seems very similar to a simple baseline of a standard S5 layer that is vmapped across space with shared parameters for each spatial token - i.e. (`jax.vmap(s5_layer, in_axes=1)(x)  # x is of shape [T, H * W, C]`) - equivalent to ConvS5 with $1\times 1$ kernels for both $A$ and $B$. The only difference I see is that $B$ is replaced by a $k\times k$ kernel, motivated by a technical contribution of using the connection between convolutions and matrix multiplications applied to S5 / ConvS5.
If my understanding above is correct, the contribution over just using S5 this way does not seem too significant, or is there large gain in the extra $B$ being some $k\times k$ instead of $1\times 1$ convolution?
That being said, I would be happy to raise my score to a 6. My current main concern preventing further increase is the degree of impact / novelty of the contribution.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading our rebuttal and increasing their score.
--In these experiments, ConvS5 is autoregressive over time and predicts all tokens of the next image simultaneously. But it can also be plugged into other settings as you mentioned (e.g. autoregressive per token at each timestep, or using MaskGit as in the TECO version). Please let us know if we can provide further clarification regarding this.
--The reviewer's understanding regarding the $B$ kernel is correct. To clarify, if all kernels (both $B$ and $C$) are replaced with $1 \times 1$ kernels, then it is equivalent to vmapping S5 across the (reshaped) pixels/tokens of the image. You can consider ConvS5 to be a generalization of simply vmapping S5. It allows for modulating the amount of convolution/weight sharing and shared spatial information that is fed to the dynamical system.
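For concreteness, this equivalence can be checked numerically. The following is a minimal numpy sketch (our own illustrative code, not the paper's implementation; all names and shapes are assumptions) showing that one ConvSSM step with $1\times1$ state/input kernels reduces to applying the same linear SSM step independently at every spatial location:

```python
import numpy as np

P, C_in, H, W = 4, 3, 8, 8           # state size, input channels, spatial dims
rng = np.random.default_rng(0)
A = rng.normal(size=(P, P))          # state matrix (a 1x1 "kernel")
B = rng.normal(size=(P, C_in))       # input matrix (a 1x1 "kernel")
x = rng.normal(size=(P, H, W))       # tensor-valued state
u = rng.normal(size=(C_in, H, W))    # input frame

# ConvSSM step with 1x1 kernels: the convolution degenerates to a
# per-pixel matrix-vector multiplication.
x_conv = np.einsum('pq,qhw->phw', A, x) + np.einsum('pc,chw->phw', B, u)

# Equivalent "vmapped S5": flatten the pixels and apply the SSM step to each.
x_flat = x.reshape(P, -1)
u_flat = u.reshape(C_in, -1)
x_vmap = (A @ x_flat + B @ u_flat).reshape(P, H, W)

assert np.allclose(x_conv, x_vmap)
```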
However, our novel contribution is not only this generalization but also the development of parallel scans for convolutional recurrences and the parameterization scheme that retains SSM's properties. This provides a practical extension of S5 for spatiotemporal data. We are working to include additional ablations that address this.
An additional contribution is developing the connection of ConvRNNs with recent SSM methods, and providing a parallelizable and efficient ConvRNN-like model which overcomes difficulties of both Transformers and traditional ConvRNNs. We think this can open up a new line of research to consider these alternative architectures. | Summary: This paper proposes a new state space model for spatiotemporal modeling by introducing the inductive bias of spatial locality. The core idea is to extend SSMs to ConvSSMs (just like extending FC-LSTM to ConvLSTM), which have an inherent convolutional structure. The new model also establishes an equivalence between the dynamics of particularly structured ConvSSMs and SSMs. Based on the recent state space method S5, a model instantiation ConvS5 combines stateful autoregressive generation with the ability to be parallelized across the sequence. An experimental evaluation shows that the proposed method captures spatiotemporal information better.
Strengths: * The idea is simple but effective as it combines fast autoregressive generation and parallelized processing.
* The proposed method achieves better performance than Transformer and ConvRNN on Moving-MNIST prediction task. It also strikes a balance on computational complexity between the two methods.
Weaknesses: * The lack of different ConvSSM variants. The paper proposes ConvSSMs to address long-range spatiotemporal modeling. However, only one variant, ConvS5, is presented and evaluated. This is not sufficient to support the general idea of ConvSSMs; more variants are needed.
* The comparison experiments with Transformer and ConvLSTM could be more extensive. In Table 2, we see that ConvS5 outperforms the other methods across the metrics. We notice that ConvLSTM achieves slightly lower performance, yet shows a slightly better sampling speed than ConvS5. The evidence that ConvS5 has a better quality-speed tradeoff than ConvLSTM is not sufficiently significant.
* There is a lack of explanation and experiments for the design of the convolutional operator on top of SSMs. Comparison results with and without convolution are missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The idea of a parallel scan for SSMs is not new. This design overlaps considerably with S5, which also leverages parallel scans to efficiently compute the states. The authors are encouraged to discuss this further.
* In Table 3, ConvS5 shows a significant improvement over S5 without TECO. When trained using TECO, ConvS5 only achieves a result comparable with S5. It makes me wonder whether the proposed method is consistently effective across various settings (e.g., framework, SSM backbone, etc.).
* As in Weakness 3, I wonder whether the proposed method is applicable to other SSMs. The authors are encouraged to discuss this further.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We are glad the reviewer appreciated the simplicity and effectiveness of the proposed method.
--**Re. convSSM variants:** We present the general ideas of ConvSSMs (tensor-valued states, linear transitions and continuous-time parameterization) in Section 3.1, similar to how RNNs and ConvRNNs are general model classes with different variants. We note here that the general ConvSSM formulation of Section 3.1 could be run sequentially as is, similar to ConvRNNs. This could be considered a Vanilla ConvSSM variant. The downside of course is that this would be slow to train.
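As a rough illustration of this vanilla variant, the discretized recurrence $x_t = A \circledast x_{t-1} + B \circledast u_t$ can be run sequentially like a ConvRNN. The numpy sketch below is our own illustrative code, not the paper's implementation; kernel sizes, shapes, and the zero-padded cross-correlation (kernel flip omitted, which is immaterial for learned kernels) are arbitrary choices:

```python
import numpy as np

def conv2d_same(k2, img):
    """2-D cross-correlation with zero 'same' padding."""
    kh, kw = k2.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out = out + k2[i, j] * padded[i:i + H, j:j + W]
    return out

def conv(kernel, x):
    """Apply a (C_out, C_in, k, k) kernel to a (C_in, H, W) tensor."""
    return np.stack([
        sum(conv2d_same(kernel[o, c], x[c]) for c in range(x.shape[0]))
        for o in range(kernel.shape[0])
    ])

rng = np.random.default_rng(0)
P, U, H, W, T, k = 2, 3, 8, 8, 5, 3
A = 0.1 * rng.normal(size=(P, P, k, k))   # state-transition kernel
B = 0.1 * rng.normal(size=(P, U, k, k))   # input kernel

x = np.zeros((P, H, W))                    # tensor-valued state
for t in range(T):                         # sequential scan, like a ConvRNN
    u_t = rng.normal(size=(U, H, W))
    x = conv(A, x) + conv(B, u_t)          # linear convolutional recurrence
```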
We then work through the steps required to get a practical, effective, and modern version of a ConvSSM that allows efficient parallelization (Section 3.2), enables modeling long-range dependencies (Sections 3.3 and 3.4), while retaining the fast autoregressive generation abilities of ConvRNNs. The result is the ConvS5 method that we propose.
Note that various recent extensions to SSMs such as data-dependent gating (H3 [1]) and data-dependent dynamics (Liquid SSMs [2]) could be applied to provide different extensions of ConvS5 (and thus different ConvSSM variants). We can add a brief discussion of these potential extensions and variants to the paper's discussion section. However, our main focus in this work is to show the base ConvS5 architecture is effective.
While the ConvS5 formulation we develop is simple and effective, we hope our work inspires others to consider the possibilities for this type of linear SSM and ConvRNN-inspired architecture, as well as other alternatives.
If the reviewer has suggestions to improve the exposition of the general idea of ConvSSMs and the particular formulation of ConvS5, we are happy to incorporate these changes to make these distinctions more clear.
--**Re. more comparison with Transformer and ConvLSTM:** We compare 3 aspects between these models. We believe they illustrate the main benefits of our method.
1. Performance: The experiments on Moving-MNIST show that
ConvS5 is able to significantly outperform ConvLSTM across the metrics (cutting FVD nearly in half while also outperforming by a fair margin on PSNR/SSIM/LPIPS) when trained on the longer context (600 frames). This indicates that ConvS5 is able to take better advantage of the longer context and better capture the long-range dependencies than ConvLSTM.
2. Faster training: Because ConvS5 can be parallelized across the sequence, like Transformer, it trains much faster than the sequential ConvLSTM (>3X faster for these experiments, see Table 6 in the Appendix).
3. Fast autoregressive generation: Like ConvLSTM, ConvS5 has a much faster autoregressive generation speed than the Transformer. It is true that ConvLSTM had a slight edge (557x vs 427x faster than the Transformer) in these experiments, but the main point is that both methods are orders of magnitude faster than the Transformer.
Also, ConvLSTM and ConvS5 are both stateful recurrent-based methods, so there is nothing that makes ConvLSTM inherently faster at autoregressive generation than ConvS5. Since both ConvS5 and ConvLSTM were much faster than the Transformer, and ConvS5 trains much faster than ConvLSTM, we did not spend time optimizing the speed of the generation process. Nonetheless, given the results from this experiment, there are likely many applications in which halving the FVD for a slightly slower inference speed (though still much faster than Transformer) would be worth it.
We welcome suggestions from the reviewer for any additional comparisons to add here.
--**Re. design and ablation of convolutions**: We discuss in detail the design of the convolution operators of ConvS5 in Section 3.2-3.4. Please let us know if we can provide further clarification on this point.
In terms of ablating the convolutions, while we did not explicitly label it as an ablation, the S5 baselines in the DMLab experiments serve as this ablation since S5 is the closest possible variant to ConvS5 without the convolution structure. We will make a separate ablations table and discussion of these points to make this more clear. Please see the new ablations table and discussion in the General Response above.
### Questions:
--**Re. the parallel scan design:** We propose a parallel scan for convolutional recurrences which is different from the one in S5. S5's parallel scan is for a linear recurrence while ConvS5's is for a convolutional recurrence. There were several challenges for applying a parallel scan to a convolutional recurrence. We discuss the details of this in Section 3.2. Section 2.3 explains the parallel scan used in S5. We would also be happy to incorporate suggestions the reviewer has on this point to help provide further clarification.
--**Re. the effect of the TECO Framework:** We agree with the reviewer that the TECO framework provides S5 a large performance improvement on FVD (though still slightly below ConvS5). However, we note that even with the TECO framework, ConvS5 still significantly outperforms S5 on the other long-range consistency metrics (PSNR/SSIM/LPIPS), so we respectfully disagree on the point that TECO-S5 achieves a comparable overall result to TECO-ConvS5. Therefore, ConvS5 performs well with and without the TECO framework, while S5 only performs well with the TECO framework. This suggests ConvS5 can be expected to generalize well to other settings, since its performance in these experiments does not depend on the framework.
--**Re. other SSMs methods**: Please see the response of ConvSSM variants above.
**We again thank the reviewer** for taking the time to review our paper. We hope we have answered the reviewer's questions and the reviewer is willing to increase their score. Please let us know if we can provide additional clarification or information.
**References:** [1] Hungry Hungry Hippos: Towards Language Modeling with State Space Models [2] Liquid Structural State-Space Models
---
Rebuttal 2:
Title: Response to authors
Comment: I thank the authors for providing the response and addressing my concerns. The motivation of this work looks much clearer to me now. My current main concern preventing further increase is the lack of different ConvSSM variants (not only the particular formulation of ConvS5). That being said, I would be happy to raise my score to 5.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for taking the time to read our rebuttal and for increasing their score!
Re. ConvSSM variants:
Since the reviewer requested to introduce more ConvSSM variants, we can add another possible variant.
Here, motivated by the SSM to ConvSSM connection made in Section 3.3, we can consider a "ConvS4" variant related to S4 [3]. S4 uses single-input, single-output SSMs (one for each feature), so this would require applying a single-input, single-output SSM to every pixel and feature channel independently. From the ConvSSM point of view, this would require restricting the input/output kernels (B and C) and dynamics kernel A to $1\times1$ kernels and applying a different single-input, single-output ConvS4 to every channel independently. This stack of ConvS4s could be shared/convolved across each pixel of the frame. For efficient sequence parallelization, FFTs would need to be used with this structure similar to S4 rather than using a parallel scan as used in ConvS5. In addition, an additional mixing operation would be required to mix the information of all the independent channels and pixels. It can be viewed as a depthwise-separable ConvSSM that could potentially reduce the number of parameters and operations. However, this approach has restricted kernel sizes, independent dynamical evolution of features, and a requirement of using FFTs and time-invariant dynamical systems.
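A minimal numerical sketch of this hypothetical ConvS4 variant (our own illustrative code and shapes, not an implementation from the paper): each channel and pixel evolves under an independent single-input, single-output scalar SSM, so the sequence-to-sequence map is a 1-D convolution in time whose kernel is $K_\ell = c\, a^\ell b$, applied efficiently with FFTs as in S4:

```python
import numpy as np

rng = np.random.default_rng(0)
T, C, H, W = 16, 3, 4, 4
a = rng.uniform(0.5, 0.9, size=C)     # per-channel scalar dynamics
b = rng.normal(size=C)
c = rng.normal(size=C)
u = rng.normal(size=(T, C, H, W))

# SSM convolution kernel K[l] = c * a**l * b, one per channel
l = np.arange(T)
K = c[:, None] * (a[:, None] ** l) * b[:, None]          # shape (C, T)

# Causal convolution along time via FFT (zero-padded to avoid wraparound)
n = 2 * T
y = np.fft.irfft(
    np.fft.rfft(u, n=n, axis=0) * np.fft.rfft(K.T[:, :, None, None], n=n, axis=0),
    n=n, axis=0)[:T]

# Sanity check against the sequential SSM for one channel/pixel
x, ys = 0.0, []
for t in range(T):
    x = a[0] * x + b[0] * u[t, 0, 0, 0]
    ys.append(c[0] * x)
assert np.allclose(y[:, 0, 0, 0], ys)
```

Note the restrictions this illustrates: the dynamics are time-invariant (required for the FFT), and channels evolve independently, so a separate mixing operation would be needed afterward.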
We can add a separate discussion section with "ConvSSM Variants" to the final paper which includes this ConvS4 variant, in addition to H3 [1] and Liquid SSM [2] variants that we mentioned in our rebuttal. We can also include an experiment for one of these variants.
In addition to the proposed variants, we think the connection we make in this work between ConvRNNs and recent SSM methods can open up a new line of research to consider these alternative ConvSSM architectures. We hope this addresses the Reviewer's concerns. Please let us know if we can provide further clarification or information regarding ConvSSM variants and help to further increase the score.
References: [1] Hungry Hungry Hippos: Towards Language Modeling with State Space Models. [2] Liquid Structural State-Space Models. [3] Efficiently Modeling Long Sequences with Structured State Spaces. | Summary: The paper builds upon ConvRNNs and proposes the use of SSM (State Space Models) as a replacement for RNNs. This allows for efficient computation using parallel scan. The proposed method is evaluated on the Long Horizon Moving-MNIST Generation and Long-range 3D Environment Benchmarks datasets, where it achieves promising results.
Strengths: The organization of this paper is clear and easy to understand. It starts from ConvRNN and replaces RNN with SSM, leading to the instantiation of ConvSSM known as ConvS5.
Weaknesses: I think the main issue with this paper is the lack of motivation for the proposed approach, as replacing the matrix multiplication of SSM with convolution seems trivial.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The motivation for the proposed approach should be made more explicit, whether it is based on intuitive explanations or considerations of efficiency, among other factors.
2. The experiments conducted in the paper are not comprehensive enough. It would be beneficial to compare the proposed method with other efficient attention-based approaches, such as kernel-based linear attention, 1+elu, performer, and cosformer, on long sequence tasks.
3. More ablation studies are needed to validate the rationale behind the design, specifically regarding the convolutional aspect. The existing ablation studies primarily focus on validating the initialization, but it is equally important to investigate and verify the effectiveness of the convolutional components.
References:
[1] Fast Autoregressive Transformers with Linear Attention
[2] Rethinking Attention with Performers
[3] cosFormer: Rethinking Softmax in Attention
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper and provide feedback.
--**Re. motivation:** Thank you for this feedback. We were a little surprised since several other reviewers found the design well-motivated and clear (e.g. see Reviewer Kz49). Here we will walk through the motivation as presented in the paper. Please let us know where we can improve the presentation or provide further clarification.
Spatiotemporal prediction methods need to scale efficiently with sequence length and effectively capture long-range dependencies in the spatiotemporal data. ConvRNNs have been historically popular due to their spatiotemporal modeling strengths and fast inference abilities. However, they are slow to train and suffer from vanishing/exploding gradients. On the other hand, Transformer methods provide strong performance but poor scaling in sequence length and slow inference. The recent SSM methods show subquadratic scaling in sequence length, ability to capture long-range temporal dependencies, and fast inference. Our method combines the best of ConvRNNs (strong spatiotemporal modeling abilities and fast inference) and SSMs (parallelizable and favorable scaling, long-range dependencies, and fast inference).
We hope this answer provides clarification, but if the reviewer has concrete suggestions for how we can further improve this exposition of the motivation we would love to incorporate their recommended changes!
--**Re. contributions**:
We also note here that the move from SSMs to an effective version of a ConvSSM is not as trivial as simply replacing the SSM matrix-vector multiplications. Please see our detailed response regarding Technical Contributions in the General Response above where we outline the technical contributions required to provide a modern and scalable method.
--**Re. experiments**: We were surprised by the feedback that our experiments were not comprehensive enough. We chose the most challenging and well-thought out long-range video prediction benchmarks that exist (the 3D Environment Benchmarks from TECO) and compare to state-of-the-art methods, many of which have been specifically designed for long-range video prediction. We would like to note that reviewers Kz49 and xT6A complimented our paper for the broad benchmarking, comprehensive experiments and strong results of our paper and method. But we are also happy to include any additional baselines.
As requested, we add the Performer baseline on DMLab below. We see that ConvS5 also outperforms the Performer. We also note that one of our original baseline methods, Perceiver AR, is a modern efficient attention alternative that was published more recently than linear attention or Performer and concurrently with CosFormer. We also include S5 as a baseline, which has been shown to significantly outperform linear attention, Performer, CosFormer, and a host of other efficient attention alternatives on long-range sequence tasks (see Table 10 in [1] and Table F.1 in [2], and also compare these to the results in Table 4 of the CosFormer paper [3]).
**New Performer baseline on DMLab:**
| | FVD ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|-------------|------------|-------------|-------------|--------------|
| Perceiver-AR | 96 | 11.2 | 0.304 | 0.487 |
| Performer | 78 | 17.3 | 0.513 | 0.203 |
| Transformer | 97 | 19.9 | 0.619 | 0.123 |
| S5 | 221 | 19.3 | 0.641 | 0.162 |
| ConvS5 | **53** | **23.6** | **0.782** | **0.074** |
We are also in the process of training Performer on the Moving MNIST experiments and will add this baseline to the final version as well.
--**Re. ablations**: While we did not explicitly label it as an ablation, the S5 baselines also serve as the best ablation of the ConvS5 convolutional approach, as it is the closest possible variant that does not include ConvS5's convolutions. We have also added an ablation of the nonlinearity choice that connects the ConvS5 layers. Please see the new ablation table and the discussion in the General Response. We will add this separate Ablation Table and discussion to make the ablations more clear.
**We thank the reviewer again** for taking the time to review our paper. We hope we have addressed the reviewer's questions and the reviewer is willing to increase their score. Please let us know if we can provide additional clarification or information.
**References:** [1] Efficiently Modeling Long Sequences with Structured State Spaces. [2] Simplified State Space Layers for Sequence Modeling. [3] CosFormer: Rethinking Softmax in Attention.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer R5qx
Comment: Thanks for the authors' response. Most of my questions have been addressed, and I am upgrading my rating from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking the time to read our rebuttal and for increasing our rating. Please let us know if we can address any additional questions/concerns to further increase the score. | Summary: This paper proposes convolutional state space models (ConvSSMs), that combine ConvLSTM with the long sequence modeling approaches like S4/S5. In particular, authors propose ConvS5, that allow parallelizing the stateful autoregression of convRNNs across the sequential direction. Naively applying parallel scan speedup tricks like in S4/S5, leads to convolutional kernels of exploding kernel size. Hence, the authors propose an interesting way of structuring modeling complexity, choosing to have simple linear dynamics in the sequence direction that are learnt with 1x1 conv kernels, and complex non-linear operations in depth that can be learnt with entire resnet blocks. The authors benchmark the proposed ConvS5 model on the Moving-MNIST dataset by training on 300 and 600 frames and predicting 400/800/1200 frames conditioned on the initial 100 frames. Also benchmarking results are reported on the DMLab, Minecraft, and Habitat long-range benchmarks. Across the axes of image quality such as FVD, PSNR, SSIM and LPIPS as well as efficiency axes such as sampling speed.
Strengths: + **Well motivated and principled design** : The design considerations in ConvS5 is very well motivated with both theoretical (most of which are directly inspired from Deep SSMs) and practical arguments that make an interesting and worthwhile contribution to the community.
+ **Broad benchmarking and strong results** : The authors perform benchmarking across a number of datasets and benchmarks across all of which ConvS5 outperforms prior works such as ConvLSTM and a vanilla transformer on image quality as well as sample speed.
+ Paper is also well written with good exposition of the prior work ideas that are necessary to grasp convSSM which is difficult to balance given the large amount of history in S5 model family development.
Weaknesses: - **Model Size** such as FLOPs / parameters and **Training Efficiency** such as training speed / number of epochs need to be reported for both ConvS5 and all the baselines being compared to. Without these details, quality results on their own are meaningless, since it is unclear whether the baselines were given the same footing as the proposed method. It is mentioned as an offhand remark that ConvS5 uses ResNet blocks as intermediate nonlinearities, but that can lead to major slowdowns and burdens (as also alluded to by the authors in the limitations). If so, this should be made clear in the tables through the metrics mentioned before.
- **Broader Ablations**: The paper is quite weak on the ablation studies presented. Since the authors propose a new architecture, broader ablations beyond just initialization are required to be convincing of the proposed choices.
- Experiments on non-synthetic real data/video. While the authors do experiment on a number of datasets, they are all either toy, like Moving-MNIST, or synthetic, like DMLab/Minecraft/Habitat. For ConvS5 to be successful we need to make sure that it does well on real-world benchmarks as well.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - What hyperparameter sweeps and effort went into optimizing the Transformer performance on the proposed benchmarks? Was it commensurate with the proposed convS5? Also see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time the reviewer took to review our paper and thank the reviewer for their positive feedback. We are glad the reviewer found the design of our method well-motivated and principled and also recognized the broad benchmarking of our paper and strong results of our approach.
-- **Re. model size and training efficiency**: Please see the discussion of the computational cost and the other comparisons including FLOPs between Transformer, ConvLSTM, and ConvS5 in the General Response above. We also note that parameters, training speed, and max number of training steps were included for all experiments in Appendix C and D of the original submission. Parameter counts were included in the expanded Tables 5, 7, and 8, and train step speeds were included in Tables 6 and 9. Parameters, max number of training steps, and V100 days are listed in Tables 10-19. As the reviewer can see, the training footprints for the methods were commensurate. We will include a full summary of the tables in the final version to make this more clear.
-- **Re. ablations:** Please see the discussion of ablations in the General Response above. Even though we did not explicitly label it as an ablation, the S5 baseline was meant to be an ablation on the convolutional structure of ConvS5, since it is the closest possible model without the convolutional structure. We have added a separate ablation table and discussion that makes the ablation of the convolutional structure (using S5) more clear and also added new ablations of the nonlinearity choice between layers.
As our model sticks to a pretty basic architecture, there were not many other obvious design choices that can be ablated. If the reviewer has suggestions for other ablations we should run to improve the paper, please let us know and we will be happy to include!
-- **Re. datasets:** We agree that real-world long-range spatiotemporal datasets are important to test and improve long-range spatiotemporal models. Unfortunately, strong benchmarks in this area do not currently exist, and the 3D Environment tasks from TECO are the most challenging and well-thought-out long-range video benchmarks we are aware of. We hope these recent works on long-range video prediction help to inspire the creation of better real-world long-range video benchmarks.
--**Re. hyperparameter sweeps:** All of this information is included in Appendix D of the supplement of the original submission.
Here is the summary:
- Moving-Mnist: For the Transformer, we swept over 2 model sizes (hidden dimension of 512 and 1024) and 3 learning rates and chose the best model. For ConvS5 and ConvLSTM, we chose a single model size (less than the Transformer) and swept over the same learning rates as the Transformer.
- DMLab: For each of the methods we chose model sizes and hyperparameters very close to those used in the TECO paper and swept over 3 learning rates for the Transformer, S5, ConvS5, TECO-Transformer, TECO-S5, and TECO-ConvS5, and chose the best run with the best learning rate for each model.
- Minecraft: TECO-ConvS5 was run with two different learning rates with no other tuning due to the cost of this experiment.
- Habitat: We only did 1 run with no further tuning due to the cost of this experiment.
**Thank you again for your review.** We hope we have addressed the reviewer's questions and they are willing to increase their score. Please let us know if we can provide any additional clarification or information. | Rebuttal 1:
Rebuttal: # General Response
We thank the reviewers for reviewing our submission and providing constructive feedback. We provide a general response here and respond to each reviewer individually. We presented the ConvS5 spatiotemporal sequence model which has parallelizable training, fast autoregressive inference, and effectively captures long-range spatiotemporal dependencies.
We were pleased all reviewers agree that the paper is **1)** easy to follow. Also, reviewers Kz49, rhqo, and xT6A appreciated **2)** the effective model design for fast autoregressive generation and the parallelization process, and **3)** broad benchmarking and strong empirical results.
## Technical Contribution
There was some concern from reviewers P1WH and xT6A regarding technical novelty (though some reviewers such as Kz49 praised ConvS5's well motivated and principled design based on theoretical and practical arguments). We agree that the basic idea of going from SSM to ConvSSM by replacing the SSM's matrix-vector multiplications with convolutions is relatively straightforward. However, there are challenges to make this approach scalable and effective for modeling long-range spatiotemporal data.
**1. Computational efficiency, parallelization across the sequence for fast training and inference.** For the linear recurrence of SSMs, there are different ways to parallelize the model across the sequence (e.g. FFTs or parallel scans). However, a parallel scan for convolutional recurrences has not been studied before. In this paper, we are the first to introduce the parallelization of convolutional recurrences using a binary associative operator. In Section 3.2, we show both theoretical (Proposition 1) and practical results required to make this feasible and efficient.
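The core idea can be illustrated with a toy numpy sketch (our own illustrative code, not the paper's implementation; restricting the state kernel to a pointwise one, so that composing two steps stays pointwise, is an illustrative simplification). Each step of the recurrence $x_t = a \circledast x_{t-1} + b_t$ is represented as a pair $(a, b)$, two steps compose under a binary associative operator, and all prefix states can then be computed with a parallel scan:

```python
import numpy as np

def combine(e1, e2):
    """Compose step e1 followed by e2; with pointwise kernels this stays
    elementwise, so kernel sizes do not grow under composition."""
    a1, b1 = e1
    a2, b2 = e2
    return (a2 * a1, a2 * b1 + b2)

def hillis_steele_scan(elems):
    """Inclusive all-prefix scan; the inner combines at each level are
    independent and could run in parallel."""
    elems = list(elems)
    n, d = len(elems), 1
    while d < n:
        new = list(elems)
        for i in range(d, n):
            new[i] = combine(elems[i - d], elems[i])
        elems, d = new, 2 * d
    return elems

rng = np.random.default_rng(0)
T, C, H, W = 8, 2, 4, 4
a = rng.uniform(0.5, 0.9, size=(C, H, W))      # shared pointwise state kernel
bs = [rng.normal(size=(C, H, W)) for _ in range(T)]

prefix = hillis_steele_scan([(a, b) for b in bs])
states_scan = [b for (_, b) in prefix]          # x_t for every t

x = np.zeros((C, H, W))                         # sequential reference
for t in range(T):
    x = a * x + bs[t]
assert np.allclose(states_scan[-1], x)
```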
**2. Capture long-range spatiotemporal dependencies.** We developed a theoretical connection between the dynamics of SSMs and ConvSSMs (Proposition 3, Section 3.3). Based on this result, in Section 3.4, we are able to introduce a parameterization and initialization design that allows ConvS5's convolutional recurrence to capture long-range spatiotemporal dependencies.
**Result:** The result of these contributions is ConvS5 which is parallelizable and overcomes difficulties during training (e.g., vanishing/exploding gradient problems) that traditional ConvRNN approaches experience. It also provides fast (constant time and memory per step) autoregressive generation compared to Transformers. Furthermore, our empirical results validate these design choices for modeling long-range spatiotemporal dependencies.
We argue that these contributions are not trivial, and our paper provides a rigorous framework that ensures both computational efficiency and modeling performance for spatiotemporal sequence modeling.
## Computational Cost
The appendix of the paper includes the experimental details including the number of parameters, training and inference times for all experiments (Tables 5-19). As a couple of reviewers asked us to provide FLOPs, we provide the comparison table below for Moving-MNIST trained on 600 frames. We will include the full summary in the final paper.
Although ConvS5 requires a few more FLOPs due to the convolution computations and some architecture choices (ResNet blocks for nonlinearity), our model is parallelizable during training (unlike ConvLSTM) and has fast autoregressive generation (unlike Transformer) --- training 3x faster than ConvLSTM and generating samples 400x faster than Transformers. Note that the parameter counts are comparable.
| | GFLOPS ↓ | Parallelizable | Train Step Time (s) ↓ | Train cost (V100 days) ↓ | Sampling Speed (frames/s) ↑ |
|-------------|:------------:|:------------:|:--------------:|:---------------------------:|:---------------------------------:|
| Transformer | 70.0 | o | 0.77 (1.0x) | 50 | 0.21 (1.0x) |
| ConvLSTM | 64.9 | x | 3.0 (3.9x) | 150 | 117 (557x) |
| ConvS5 | 96.8 | o | 0.93 (1.2x) | 50 | 90 (429x) |
## Ablations
A common request was the ablation of the convolutional structure of ConvS5. We note that the S5 baseline also serves as an ablation of ConvS5 since it is the closest possible model without ConvS5's convolutional structure, even though we did not explicitly label this as an ablation. We will add a separate ablation table in the final version and make this more clear.
We additionally include ablations of the nonlinear connections used between the ConvS5 layers. We use ResNet blocks but also consider elementwise GELU and GLU activations (used in the S5 paper). Due to time constraints, we only show the nonlinearity ablation for non-TECO models here. We will include the rest in the final version.
**DMLAB Ablations**
| | conv. | nonlinearity | FVD ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------|---|----------|------------|-------------|-------------|--------------|
| S5 | x | GLU | 221 | 19.3 | 0.641 | 0.162 |
| ConvS5 | o | GELU | 129 | 21.4 | 0.709 | 0.110 |
| ConvS5 | o | GLU | 112 | 21.6 | 0.720 | 0.098 |
| ConvS5 (ours) | o | ResNet | 53 | 23.6 | 0.782 | 0.074 |
| TECO-S5 | x | GLU | 35 | 20.1 | 0.687 | 0.143 |
| TECO-ConvSSM (random init.) | o | ResNet | 44 | 21.0 | 0.691 | 0.010 |
| TECO-ConvS5 (ours) | o | ResNet | 31 | 23.8 | 0.803 | 0.085 |
**We once again thank the reviewers for their time and positive and constructive feedback**. We will now respond to individual comments.
--The ConvS5 authors | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a method for long-term sequential modeling. The authors extend prior RNN-based work for sequential modeling, i.e., S5 [20], by substituting the linear operations with convolutions. The authors show the superiority of their method over transformers on the future prediction task. A favorable property of RNN methods in general, compared to transformers, is that they scale linearly with respect to the sequence length, while the transformer's complexity is quadratic with respect to time.
Strengths: 1) The proposed method makes sense and is a natural extension of prior work.
2) The paper is well-written and easy to follow.
Weaknesses: 1) Compared to transformers, the linear complexity with respect to the sequence length is clearly favorable. However, it would be informative to compare the actual computational cost, e.g., in terms of FLOPs.
2) Unlike transformers, RNNs are notorious for being difficult to train. It would be great to compare the training of the two architectures as well. This can be a potential blocker for scaling the method to more complex datasets.
3) I am not an expert in this domain, so I cannot properly evaluate the impact of the experiments, but the contributions of this paper sound marginal by NeurIPS standards. Also, it looks like the paper is missing a comparison to some prior work [1].
[1] Gao et al, Simvp: Simpler yet better video prediction, CVPR 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No, they have not. However, I do not see a particular negative social impact associated with this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time they took to review our paper and for their feedback. We are glad the reviewer found the proposed method easy to follow.
-- **Re. FLOPs:** Please see the FLOPs comparison in the Computational Cost section of the General Response above. Note that the appendix of the original submission included the computational cost, including training and inference runtimes, parameter counts, etc.
-- **Re. training RNNs:** We agree RNNs are notoriously difficult to train, generally due to the vanishing/exploding gradient problem. We note the SSM line of work (e.g. S4/S5, etc.) that ConvS5 extends to the spatiotemporal domain effectively mitigates the vanishing/exploding gradient problem through its special parameterizations and initialization schemes. One of the major contributions of our work is to ensure the convolutional recurrence formulation retains these same favorable properties (See Section 3.3). We will add further discussion in the paper to make this more clear.
In practice, we did not observe any stability issues when training ConvS5. However, we found that Transformer and ConvLSTM can be unstable with higher learning rates during the hyperparameter search (discussed in Appendix D), especially with Moving-MNIST trained on 600-length context. In addition, random initialization of the ConvSSM kernel instead of careful design of the kernel parameterization/initialization proposed in our paper (Section 3.3) was also unstable with higher learning rates.
We do not think the recurrent structure of ConvS5 is a blocker to scaling the method, and the empirical results on complex, large scale datasets also support this. Please let us know if you have any other suggestions for comparing the training of the two architectures.
-- **Re. contribution:** We respectfully disagree with the reviewer that the contributions are marginal. Please see the Technical Contributions section of the General Response above for a detailed explanation of the contributions.
In short, we have introduced a method that can be parallelized during training like a transformer, provides fast autoregressive generation like an RNN, and provides high performance on complex video generation tasks requiring long-range reasoning and high-quality frames. Achieving these three aspects required contributions to ensure both computational efficiency and high performance.
We show how convolutional recurrences can be parallelized both theoretically and practically (Section 3.2). To our knowledge, this has not been previously considered. Also, to capture long-range spatiotemporal dependencies, we developed a connection between the dynamics of SSMs and ConvSSMs (Proposition 3), which informs our parameterization and initialization design as discussed in Sections 3.3 and 3.4.
Finally, our empirical results validate these design choices.
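As a toy illustration of why such recurrences admit parallel evaluation, the sketch below evaluates a linear recurrence with an associative combine on affine maps and a prefix scan. Scalars stand in for ConvS5's convolution kernels here; this is a hedged simplification of the principle, not the paper's actual implementation.

```python
# Toy sketch: the linear recurrence x_k = a_k * x_{k-1} + b_k can be evaluated
# with an associative "combine" on affine maps (a, b): x -> a*x + b. This
# associativity is what makes scan-based state-space layers parallelizable.

def combine(f, g):
    # Compose affine maps: apply f first, then g, i.e. g(f(x)).
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def sequential(a, b, x0):
    # Plain O(n) recurrence, one step at a time.
    x, out = x0, []
    for ak, bk in zip(a, b):
        x = ak * x + bk
        out.append(x)
    return out

def scan(a, b, x0):
    # Hillis-Steele inclusive prefix scan: O(log n) parallel depth.
    pref = list(zip(a, b))
    shift = 1
    while shift < len(pref):
        pref = [
            pref[i] if i < shift else combine(pref[i - shift], pref[i])
            for i in range(len(pref))
        ]
        shift *= 2
    return [ak * x0 + bk for ak, bk in pref]
```

The same composition rule carries over (in spirit) when the maps act on feature maps via convolution kernels, which is what the parallelization argument in Section 3.2 formalizes.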
-- **Re. SimVP:** We are happy to include a citation of SimVP in the related work. However, this work did not consider long-term prediction tasks (they only considered up to 40 frames). Since the SimVP model considers the sequence length as an input channel of the convolution performed by its Translator module, the SimVP model size has to scale with the sequence length. This may affect the scaling of this model to long-sequence video modeling. In our work, we compare against numerous state-of-the-art video prediction baselines designed for modeling long video sequences and evaluate on the most challenging existing video prediction benchmarks (training on hundreds of frames and predicting hundreds to thousands of frames). Nonetheless, we are working to include this method as an additional baseline. There was not enough time to complete the training and evaluation prior to the rebuttal deadline, but we will include it in the final version.
**Thank you again** for taking the time to review our paper. We hope we have addressed the reviewer's questions and the reviewer is willing to raise their score. Please let us know if we can provide any additional clarification or information.
---
Rebuttal Comment 1.1:
Title: response to the authors
Comment: I thank the authors for their rebuttal, specifically for the comparisons in terms of FLOPs. However, I still believe the technical contributions of this paper do not match NeurIPS standards. I'll keep my rating as it is. | null | null | null | null | null | null |
Langevin Quasi-Monte Carlo | Accept (poster) | Summary: This paper analyzes the effect of using quasi-random numbers in place of the usual IID Gaussians for the driving noise of a Langevin algorithm. Assuming that the loss is strongly convex and the quasi-random numbers are completely uniformly distributed, a bound on the Monte Carlo estimation error is derived. This bound is substantially better than what is currently achieved for IID Gaussian driving noise.
Strengths: The paper makes a nice contribution to the theory of sampling with Langevin dynamics, by bringing in the use of quasi-Monte Carlo methods. The proof is quite clean, and it is very clear where the power of the proposed method becomes useful.
The experiments are promising.
Weaknesses: My only real concern is that this paper focuses on a relatively easy case, albeit an important one. It would be interesting to see how easy or difficult it is to extend the results to more complex scenarios, such as with non-convex losses or external random variables.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Can the result be extended to common measures between distributions? For example, the result looks almost like a bound on the 1-Wasserstein distance between the distribution of the iterates and the stationary distribution. The main difference is that $f$ is assumed to be both bounded and Lipschitz, as opposed to just Lipschitz.
* Are there general classes of strongly convex functions for which you can check that the Hardy-Krause variation of $\bar f_{\ell}$ is finite?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: These are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your questions as follows:
- **Extension to non-convex losses or external random variables** In the "global response", we provided a comprehensive discussion about the strong convexity and smoothness assumptions and discussed to what extent these assumptions might be relaxed. For instance, the strong convexity might be relaxed to functional inequalities, and the Lipschitz gradient might be relaxed to weaker Hölder-continuous conditions. In general non-convex settings, however, the improvement of LQMC might not be as significant, since the advantage of LQMC is more pronounced when the Markov chain mixes well. Nevertheless, in the optimization literature, QMC has demonstrated better exploration capability than traditional Monte Carlo. Hence, it remains an interesting open question to investigate whether LQMC exhibits better exploration than LMC in non-log-concave sampling scenarios. \
Regarding external variables, our current approach treats the dataset as deterministic, and the posterior distribution is treated no differently from any other target distribution. However, if the dataset is modeled as i.i.d. or a mixing Markov chain, it would be interesting to understand how the sampling of the dataset interacts with the sampling of the Gaussian variables in the LMC algorithm. In the traditional QMC setting, there are studies on combining QMC samples with external samples, such as those obtained by acceptance-rejection algorithms or MCMC [1]. How to incorporate external samples into the LMC algorithm is an interesting direction for future work.
- **Extension to common measures between distributions** To clarify, the question asks whether we can improve the convergence in a certain metric, such as the KL divergence, between the law of $x_T$ and the target distribution $\pi$.\
We do not anticipate the distance between the law of $x_T$ and the target distribution $\pi$ to be smaller by using LQMC as opposed to LMC. In fact, in a well-mixing scenario, the law of $x_T$ obtained by LQMC should be roughly the same as that of LMC.
However, the key advantage of QMC lies in the collective behavior of the sequence of samples $\{x_1,x_2,\ldots,x_T\}$. Specifically, we expect the empirical distribution $\frac{1}{T}\sum_{k=1}^T\delta_{x_k}$ to be closer to the target $\pi$ in the Kolmogorov-Smirnov distance (also called the star discrepancy). In other words, while each individual sample is not closer to the target, the ensemble of all samples collectively provides a better approximation of the target distribution. As a result, the ergodic average $\frac{1}{T}\sum_{k=1}^T f(X_k)$ provides a more accurate estimate of $\pi(f)$ with LQMC. Furthermore, we believe the ergodic average is of more practical importance since researchers commonly utilize all the samples generated by the Markov chain (excluding potential burn-in or thinning) rather than relying solely on the last individual sample. Thus, the empirical distribution $\frac{1}{T}\sum_{k=1}^T\delta_{x_k}$ is more closely aligned with reality than the distribution of $x_T$.
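A minimal sketch of the ergodic-average estimator described above, assuming a standard normal target ($U(x) = x^2/2$) and i.i.d. Gaussian driving noise; the LQMC variant would instead derive the Gaussians from a CUD uniform sequence via the inverse CDF. Step size and iteration count are illustrative choices.

```python
import math
import random

# Unadjusted Langevin algorithm targeting N(0, 1): grad U(x) = x.
# The estimator averages f over ALL iterates, i.e., the ergodic average
# (1/T) * sum_k f(x_k) discussed above.
def ula_ergodic_average(f, n_steps=20000, h=0.05, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, total = x0, 0.0
    for _ in range(n_steps):
        x = x - h * x + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)
        total += f(x)
    return total / n_steps

# E[X^2] = 1 under the target; the ergodic average approaches it
# (up to O(h) discretization bias and Monte Carlo error).
estimate = ula_ergodic_average(lambda x: x * x)
```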
- **Check bounded variation condition** In response to your question about the bounded Hardy-Krause variation condition, we provided some sufficient conditions and discussions in the "global response".
References:
[1] Hintz, E., Hofert, M., & Lemieux, C. (2022). Quasi-random sampling with black box or acceptance-rejection inputs. In Advances in Modeling and Simulation. Cham: Springer International Publishing.
---
Rebuttal Comment 1.1:
Title: Response
Comment: The rebuttal answers my questions adequately. Probably, the most relevant concern of mine was the verification of Hardy-Krause veriation condition, and the global response does a good job of addressing this. I will raise my score by 1. | Summary: For suitably smooth functions defined on a bounded support, it is well known that quasi Monte Carlo (QMC) can achieve faster rates of convergence than standard Monte Carlo when the goal is to integrate the function. This paper studies whether techniques from QMC can be beneficial for Langevin Monte Carlo (LMC), a sampling procedure based on the discretization of stochastic differential equations. The paper shows both theoretically and empirically that when the function $f$ is suitably smooth, one can augment the standard Gaussian perturbations in LMC with a specialized sequence such that the error rates converge at a faster rate than standard LMC.
Strengths: The paper is very easy to follow and understand. The authors do a great job of laying out the appropriate context, defining the necessary notions, and laying out their new procedure.
The idea, while relatively simple to implement, seems quite powerful. Its simplicity seems like a feature rather than a bug, and should make it relatively easy to incorporate into open-source implementations.
The main result and empirical work seem sound. And while QMC is not new, incorporating it into LMC would be a nice extension of work that has previously been confined to relatively small use cases.
Weaknesses: The main critiques of this paper are associated with its overall applicability. On the theoretical side, the authors make some assumptions to prove their result. How strong are these assumptions? Can they apply generally as the number of samples increases?
In terms of the empirical work, all the examples are done on toy models where the final distributions are known or can be estimated for long chains. Can the authors show LQMC is useful for problems with real data? That would bolster the case for using this as a drop-in replacement for vanilla LMC.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: [Q1] The main result requires $\bar{f}_l$ to have bounded variation. Is this possible to show this for any of your examples, i.e. if f is Lipschitz and the target distribution is Gaussian? It would be great to know whether this assumption is weak or strong.
[Q2] Given the main results assumptions on d, m and n, can this result always be applicable for values of n going to infinity?
[Q3] The empirical examples only demonstrate the utility of LQMC on toy models. Can the authors demonstrate its usefulness in a Bayesian context on real data, say for improving the posterior predictions associated with some application?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: It would be great if the authors mentioned the importance of doing proper inference in ML.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your questions as follows:
- **Strong assumptions/bounded variation:** In the "global response", we have provided a comprehensive discussion on the smoothness, strong convexity, and bounded Hardy-Krause variation assumptions.
In particular, to address your question regarding the case where $f$ is Lipschitz and $\pi$ is Gaussian, we consider integrands of the form $f\circ \Phi^{-1}:[0,1]^d\to \mathbb R$, where $\Phi^{-1}$ is the component-wise inverse CDF of the standard Gaussian distribution. According to [1], a sufficient condition for QMC to achieve the rate $O(n^{-1+\delta})$ for the integrand $f\circ \Phi^{-1}$ is that, for arbitrarily small $B_i> 0$, there exists $C$ such that
$$
|\partial^u f(z)|\leq C\prod_{i=1}^d [1 - \Phi(|z_i|)]^{-B_i}
$$
for any $u\subset [d]$.
This condition implies that as $|z_i|$ tends to infinity, the first-order mixed partial derivatives of $f(z)$ should not grow faster than $1/(1-\Phi(|z_i|))^{B_i}$ for any $B_i>0$. Therefore, when $d>1$, Lipschitz continuity of $f$ is neither sufficient nor necessary for this sufficient condition; all first-order mixed partial derivatives $|\partial^u f(z)|$ need to grow relatively slowly.
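As a one-dimensional numerical illustration of this setting, the sketch below pushes a van der Corput sequence through the standard-library inverse normal CDF (both stand-ins; the paper's CUD construction is different) for the integrand $f \circ \Phi^{-1}$ with $f(z) = z^2$, whose true mean is $\mathbb{E}[Z^2] = 1$.

```python
import random
from statistics import NormalDist

def van_der_corput(i, base=2):
    # Radical inverse of i: a classic one-dimensional low-discrepancy sequence.
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

phi_inv = NormalDist().inv_cdf  # inverse CDF of the standard Gaussian
f = lambda z: z * z             # E[f(Z)] = 1 for Z ~ N(0, 1)

# QMC: push low-discrepancy points through Phi^{-1}; indices start at 1,
# so u = 0 (where phi_inv diverges) is never hit.
n = 1023
qmc_est = sum(f(phi_inv(van_der_corput(i))) for i in range(1, n + 1)) / n

# Plain Monte Carlo with the same budget, for comparison.
rng = random.Random(0)
mc_est = sum(f(rng.gauss(0.0, 1.0)) for _ in range(n)) / n
```

This $f$ has slowly growing (mixed) derivatives, so it falls under the sufficient condition above; for $d > 1$ the same inverse-CDF mapping would be applied componentwise.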
- **Can they apply generally as the number of samples increases?** With the increase in the number of samples, the central limit theorem (Bernstein–von Mises theorem) comes into effect, bringing the posterior distribution closer to a normal distribution, where smoothness and strong convexity assumptions hold. However, a challenge arises when utilizing a subsample to estimate the gradient, as the error of the gradient estimator usually scales with the sample size. In such cases, as highlighted by [2], the stochastic gradient's noise dominates, potentially diminishing the applicability of the paper's theorem.
- **Can the result be applicable for $n$ going to infinity?** In our notation, $n$ is the number of iterations rather than the dataset size. However, we interpret your question as being about the dataset size approaching infinity, and this aspect has been addressed above.
- **Real example:** We appreciate your interest in real examples and have taken your suggestion into account. We have included new experiments using realistic data and tasks. In particular, we have introduced an example of sparse regression (commonly used for Bayesian variable selection) and several prediction tasks on real datasets. We kindly direct you to the "global response" and the uploaded PDF for detailed descriptions and results of these new experiments.
References
[1] He, Z., Zheng, Z., & Wang, X. (2023). On the error rate of importance sampling with randomized quasi-Monte Carlo. SIAM Journal on Numerical Analysis, 61(2), 515-538.\
[2] Dalalyan, A. S., & Karagulyan, A. (2019). User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stochastic Processes and their Applications, 129(12), 5278-5311.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments. I have adjusted my score upwards to an "Accept." | Summary: In this paper, the authors proposed the Langevin quasi Monte Carlo algorithm (LQMC), that is, to use a completely uniformly distributed series (CUD quasi random number) in the Langevin Monte Carlo instead of iid pseudo random numbers. The quasi-random number is generated by the LFSR method. For a smooth and strongly convex potential, the estimation error scales as $O(n^{-1 + \delta})$, where $\delta$ encodes the dependence on $\log n$ and $d$. The numerical experiment is performed for various $d$, using gradient or stochastic gradient. In all the cases, LQMC has better performance than the Langevin MC.
Strengths: * Originality: It appears to me that the combination of CUD and Langevin MC is new.
* Quality: This paper contains rigorous derivations of error bounds and numerical experiments covering a wide range of cases. Both the derivations and the experiments seem convincing to me.
* Clarity: This is a well-written paper in general. This paper demonstrates the backbone idea well by using examples and pseudocode. This paper also provides a good context for the discussion. The setup for the experiments is stated clearly, and the numerical results are interpreted. I also appreciate that the code is attached with the paper.
* Significance: This paper shows that, without making any change to the Langevin MC, changing the random number in the algorithm can lead to vast performance gain. If the performance gain shown in the paper migrates to realistic problems, the precision of a lot of computational work could be improved without a lot of software development.
Weaknesses: * The numerical experiments are performed with synthesized data instead of real ones.
* Unlike classic Monte Carlo, the scaling of the error of quasi-Monte Carlo depends on the dimension of the system $d$. As a result, when $n$ is small and $d$ is large, quasi-Monte Carlo may perform worse than classic Monte Carlo in theory. In this paper, the error still depends on $d$, if I am not mistaken. The $d$ dependence is encoded in $\delta$ and not shown explicitly in the main text. Although the numerical experiments show that LQMC performs very well for $d=100$, I still think it would be nice to let the reader know that, in theory, the error of LQMC depends on $d$.
* The error bound is given in a rather constraint setting.
The weaknesses and questions are addressed by the authors in their rebuttal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I think this is already a very solid paper. However, I think it would be nice if the authors:
* State what this algorithm could do in reality. New experiments using realistic data are welcomed. Discussions on the time scale of the realistic tasks are also interesting (for example, is there any task that was impossible and made possible by LQMC).
* Write the error bound in the format of equation (4), that is, express the $d$ dependence explicitly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: My only concern regarding the limitation is that the dependence on $d$ of estimation error is not well studied. After all, $d$ is the criterion of when we should use lattice rules, quasi MC, or MC. However, I understand that the dimension of quasi MC is itself a tricky problem, and, in my humble opinion some ambiguity in this regard could be forgiven.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your valuable feedback. We address your questions as follows:
- **Numerical experiments are performed with synthesized data:** We understand the importance of real data experiments and have taken your suggestion into account. We have conducted new experiments using realistic data and tasks. Please refer to the "global response" and the uploaded PDF for the detailed description and results of these new experiments.
- **Dimension dependence error for QMC:** We acknowledge that the standard QMC error bound of $O(n^{-1}(\log n)^{d})$ may grow larger than $O(n^{-1/2})$ for large dimensions with insufficient $n$. In practice, however, QMC often performs better than the theoretical bounds suggest. The success of QMC in high-dimensional integration can be attributed to the concept of *effective dimension* [1]. Functions with low effective dimensions can be well approximated by a sum of low dimensional functions, which is favorable for QMC. Moreover, there exist established methods to reduce the effective dimension in the traditional QMC setting, e.g. [2-4]. How to extend these techniques to the Markov chain setting to reduce the effective dimension is an interesting future direction.
We appreciate your advice, and to highlight the dependence on dimension, we have updated the term $O(n^{-1+\delta})$ to $O(n^{-1}(\log n)^{d})$ in our paper.
- **Error bound is given in a constraint setting:** We acknowledge that the assumptions of strong convexity, smoothness, and bounded variation on the integrand are restrictive. We have discussed these assumptions in the "global response" and mentioned possibilities to relax them. We hope that the theoretical analysis, despite its constraints, provides valuable insights into the performance of our method and its potential for more general scenarios.
References:
[1] Wang, X., \& Fang, K. T. (2003). The effective dimension and quasi-Monte Carlo integration. Journal of Complexity, 19(2), 101-124.\
[2] Moskowitz, B., & Caflisch, R. E. (1996). Smoothness and dimension reduction in quasi-Monte Carlo methods. Mathematical and Computer Modelling, 23(8-9), 37-54.\
[3] Imai, J., \& Tan, K. S. (2004). Minimizing effective dimension using linear transformation. In Monte Carlo and Quasi-Monte Carlo Methods 2002 (pp. 275-292).\
[4] Xiao, Y., & Wang, X. (2019). Enhancing quasi-Monte Carlo simulation by minimizing effective dimension for derivative pricing. Computational Economics, 54, 343-366.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments and modifications. I would like to keep my rating as "Accept". | Summary: The paper analyses a variant of Langevin Monte Carlo which, instead of using standard Gaussian random variables (using standard random number generation), instead uses correlated random variables following the quasi-MCMC literature. The paper demonstrates that this provably improves the efficiency of certain statistical tests, such as e.g. mean estimation.
Strengths: The usage of non-random Gaussians (using an LSFR sequence instead of standard Mersenne twister) is a novelty for Langevin Monte Carlo. Consequently, the Algorithm is entirely novel, and is provably better in the sense shown in Theorem 4.1. This improvement is a significant technical innovation and is of use to both theoreticians and practitioners.
The experiments are thorough, in that many settings are considered, both in terms of convexity, dimensionality and type of gradient. The improvement for LQMC is clear and significant, which therefore lends credence to the theoretical claims.
The paper is very well written and the proofs are cleanly presented. I could not find any issues with the results.
Weaknesses: The improvement of this method is weak, in the sense that only the expectation of certain test functions converges. In contrast, standard Langevin MCMC results will hold in “stronger” measures of convergence, such as KL divergence, total variation, etc. This is expected since quasi-MCMC methods typically can only outperform in these specific circumstances, but it is nonetheless a disadvantage of the work.
The analysis seems to be rather trivial and combines the standard inequalities in previous works on quasi-MCMC, with the standard contractivity results of LMC-type algorithms under strong convexity.
Overall, while the analytic simplicity of this paper makes the result seem obvious in hindsight, I would still argue that this result is novel and impactful enough to merit publication.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Are the assumptions of smoothness, strong convexity and bounded variation (of f) necessary? I would recommend a deeper exploration of these conditions in order to enrich the paper.
Although stochastic gradient LMC appears in the experiments, it is not analysed. Could similar results be established in this setting?
Could this approach be combined with e.g. Metropolis/HMC algorithms, or other MCMC methods based on similar walks? It seems there may be some significant theoretical barriers to this end.
Typos:
Some citations are not properly formatted, e.g. in L. 22, L. 126, L. 151 (using citet instead of citep or vice versa).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: None beyond those raised in my earlier comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your valuable feedback.
We appreciate your recognition regarding the differences in our analysis compared to the standard literature on Langevin Monte Carlo (LMC). Traditional results in LMC primarily focus on studying convergence through metrics like the KL divergence between the law of $x_T$ and the target distribution $\pi$, thus investigating the distributional properties of the sample drawn at the $T$-th iteration in the LMC algorithm. In contrast, our analysis centers on the convergence of the ergodic average $\frac{1}{T}\sum_{k=1}^T f(x_k)$ to the target $\pi(f)$. This perspective directly examines the behavior of the average function evaluations over the LMC samples, providing a more direct evaluation of the quality of LMC in estimating expectations. While our approach differs from the conventional ones, we believe that our result is not inherently weaker for the following reasons:
1. Firstly, the convergence rate of the law of $x_T$ does not necessarily imply the convergence rate of the ergodic average $\frac{1}{T}\sum_{k=1}^T f(x_k)$.
2. Furthermore, our result, while stated as the convergence of expectation, also implies the convergence of the empirical distribution $\frac{1}{T}\sum_{k=1}^T\delta_{x_k}$ to the target distribution in the sense of the Kolmogorov-Smirnov distance (i.e., star discrepancy). In fact, this convergence of star discrepancy is the reason why we can achieve smaller errors for integrands of bounded variation in the sense of Hardy and Krause.
3. Lastly, in practical applications, researchers often utilize all the LMC samples obtained (excluding possible burn-in and thinning) rather than focusing solely on the last single sample. Consequently, the quality of the empirical distribution $\frac{1}{T}\sum_{k=1}^T\delta_{x_k}$ is of greater practical relevance than the law of $x_T$.
While we acknowledge that our analysis introduces a nonstandard technical condition of bounded variation and establishes a different sense of convergence, we do not perceive this as a weakness. On the contrary, we believe this nonstandard approach enriches the standard literature and offers a new perspective to the understanding of LMC.
Below is a point-by-point response to your questions:
- **Are the assumptions necessary?** Please refer to the "global response" for a discussion on the assumptions of smoothness, strong convexity, and bounded variation. In particular, we discussed the possibilities to relax the assumptions of smoothness and strong convexity, and provided some sufficient conditions to check bounded HK variation.
- **Analysis for stochastic gradients:** We appreciate your suggestion. If we use a noisy gradient $\hat g(\theta_k)=\nabla U(\theta_k)+e_k$ where $e_k$ is the noise with mean zero and bounded variance such that $\mathbb{E}(||e_k||_2^2)\leq\sigma^2$, then an extra term $2h\sigma$ will appear in Lemma 2 in the proof. As $\sigma^2$ is usually expected to be proportional to the dimension $d$, this additional term is of the same order as the other term. If the stochastic gradient $\hat g$ is estimated by a subsample, then $\sigma^2$ might grow with the sample size of the dataset, making this extra term dominate, as pointed out in [1]. We have included this analysis in the revision.
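A self-contained sketch of this noisy-gradient setting on a toy Gaussian target (the noise level, names, and parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

def ula_average(grad, f, x0, h, T, rng):
    # Unadjusted Langevin recursion with a (possibly noisy) gradient oracle.
    x = np.asarray(x0, dtype=float)
    total = 0.0
    for _ in range(T):
        x = x - h * grad(x, rng) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        total += f(x)
    return total / T

d, h, T, sigma = 2, 0.05, 20000, 3.0
rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))  # pi(f) = d for pi = N(0, I_d)
exact = lambda x, rng: x             # grad U(x) = x for U(x) = ||x||^2 / 2
noisy = lambda x, rng: x + sigma * rng.standard_normal(d)  # per-coordinate mean-zero noise, std sigma

est_exact = ula_average(exact, f, np.zeros(d), h, T, rng)
est_noisy = ula_average(noisy, f, np.zeros(d), h, T, rng)
# The gradient noise inflates the chain's stationary variance, so the
# noisy-gradient estimate is biased upward relative to the exact-gradient one.
print(est_exact, est_noisy)
```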
- **Combining with Metropolis/HMC algorithms:** Algorithmically, incorporating a rejection step is straightforward by generating an additional uniform random variable. However, one crucial consideration that prevents us from adopting a rejection step in this work is that the rejection step introduces discontinuities, which is not QMC-friendly. In contrast, the unadjusted Langevin algorithm (ULA) is continuous in the underlying uniform random variables, making it more favorable for QMC. Beyond the issue of discontinuities, the rejection step may lead to slow mixing when the rejection rate is high. Thus, while we acknowledge the straightforward implementation of a rejection step, we agree with you that there may be theoretical barriers to this end.
- Thank you for bringing the typos to our attention. We have addressed them in the revision.
[1] Dalalyan, A. S., & Karagulyan, A. (2019). User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stochastic Processes and their Applications, 129(12), 5278-5311.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed responses. The authors presented a fair justification for the chosen measure of convergence, which was one of my primary criticisms. I am choosing to maintain my current score, which was already relatively high. | Rebuttal 1:
Rebuttal: # Global response
We thank the reviewers for their insightful and constructive feedback. The positive evaluations of our work as "novel, impactful, significant, promising, and important" are encouraging, and we appreciate the recognition of the clarity and cleanliness of the paper. In this global response, we address the common aspects raised by multiple reviewers regarding the strong assumptions in the theoretical analysis and the absence of real data examples. Specific responses to individual reviewers are provided separately.
## Strong convexity and smoothness
- We acknowledge that these are restrictive assumptions. However, starting with these assumptions is a necessary and natural first step to study the proposed LQMC algorithm. Evaluating its performance in the simplest setting allows us to gauge its potential for more complicated scenarios.
- The simplicity of the analysis yields valuable insights into how the driving uniform variables impact LMC. The error decomposition provides insights regarding which aspect of the algorithm might benefit from QMC. Despite its simplicity, the theoretical analysis guides the design of uniform numbers and sets expectations for the algorithm's performance, which might not be as clear in a more general setting.
- Next, we recognize the opportunity to relax these assumptions. Techniques similar to [6] can be adopted in non-strongly convex settings by introducing a quadratic penalty. A broader class of measures satisfying log-Sobolev or Poincaré inequalities also presents convergence guarantees [7]. Relaxing the Lipschitz gradient to a weaker Hölder-continuous gradient might be possible [5]. Growth conditions like dissipativity might also ensure convergence [9]. Quantifying the improvement of LQMC in these more general yet tractable settings is an interesting future direction, albeit beyond the scope of this work.
- While the improvement of LQMC might not be significant in general non-convex settings, where only first-order stationarity guarantees can be expected, the question of whether LQMC enhances exploration in non-convex settings remains open. Encouragingly, studies in optimization demonstrate QMC's superior exploration properties, such as in reinforcement learning, where QMC outperforms iid Gaussian for parameter exploration [10], and in variational inference, where QMC converges faster compared to traditional Monte Carlo [11]. These findings highlight QMC's potential benefit for exploration.
## Bounded Hardy-Krause variation
The condition of bounded HK variation is indeed a standard condition in QMC theory, ensuring the error rate of $O(n^{-1+\delta})$ for any $\delta>0$, with log terms hidden. While verifying this condition is challenging in general, there exist sufficient conditions [1,2]:
$$V_{\mathrm{HK}}(f)\leq\sum_{\emptyset\neq u\subseteq\{1,\dots,d\}}\int_{[0,1]^{|u|}}\left|\partial^u f(x_u,1_{-u})\right|\,dx_u.$$
In our analysis, we require $\bar f_{\ell}=f\circ g$ to have bounded HK, where $g=\psi_{\ell}:[0,1]^d\to G$ is the $\ell$-step transition. Consider two cases:
- **$G$ is compact:** If all the mixed partial derivatives of $g$ are square integrable, then $f\circ g$ has bounded HK for any $f\in C^d(G)$ [1]. Hence, if LMC samples are constrained within a compact set and the transition is sufficiently smooth, then $\bar f_\ell$ has bounded HK for any $f\in C^d$.
- **Otherwise:** If the function grows not too rapidly on the boundary of the unit cube, QMC can still achieve the error rate of $O(n^{-1+\delta})$. Some boundary growth conditions are introduced in [8] to ensure this rate.
Verification of these sufficient conditions is easier if such mixed partial derivatives are bounded. In other cases, one needs to examine the function's growth at singularities. It is essential to note that the bounded variation condition is a technical requirement for QMC theory. Empirically, QMC can still outperform Monte Carlo without this condition, as its performance largely depends on factors like smoothness and effective dimension.
## Real data experiments
Real data experiments were conducted in response to reviewers' suggestions. We first emphasize that the primary contribution of this work is to *improve LMC as a Monte Carlo sampling algorithm, not as an optimization algorithm*. Therefore, our main focus is on providing a better estimation of $\pi(f)$ for some function of interest. Downstream tasks relying on such expectations can also benefit from LQMC. For posterior prediction, it is essential to recognize that the prediction error is not solely determined by the sampling method. Even with infinite perfect samples from the posterior, the prediction error can still arise due to model misspecification, noisy data, biased sampling, etc. So the improvement achieved by LQMC might be less pronounced when assessing the prediction error.
The results are presented in the uploaded PDF. The additional experiments provide a more comprehensive evaluation of LQMC's effectiveness in realistic tasks.
[1] Basu et al. (2016). Transformations and HK variation. SINUM.\
[2] Owen. (2005). Multidimensional variation for QMC. Contemporary Multivariate Analysis And Design Of Experiments.\
[3] Dalalyan et al. (2012). Sparse regression learning by aggregation and LMC. Journal of Computer and System Sciences.\
[4] Dubey et al. (2016). Variance reduction in SGLD. NeurIPS.\
[5] Chatterji et al. (2020). LMC without smoothness. AISTATS.\
[6] Dalalyan et al. (2022). Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets. JMLR.\
[7] Chewi et al. (2022). Analysis of LMC from Poincare to Log-Sobolev. COLT.\
[8] Owen. (2006). Halton sequences avoid the origin. SIAM Review.\
[9] Erdogdu et al. (2021). On the convergence of LMC: The interplay between tail growth and smoothness. COLT.\
[10] Choromanski et al. (2018). Structured evolution with compact architectures for scalable policy optimization. ICML.\
[11] Buchholz et al. (2018). QMC variational inference. ICML.
Pdf: /pdf/3e73cfec3e8a9d7aa94987efcf9d6f7dd1776d30.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Tanh Works Better with Asymmetry | Accept (poster) | Summary: This paper proves the hypothesis that asymmetric saturation benefits network performance by swapping the position of Batch Normalization and Tanh activation functions. The Swap model generates high sparsity and asymmetric saturation, which enables Tanh to behave like a one-sided activation function. Experimental results show the asymmetric distributions consistently outperform the symmetric ones. However, because the BN and Tanh combination hardly appears in modern networks and there is no convincing theory facilitating network architecture design, the contribution of this paper is of less significance.
Strengths: 1) This paper is easy to follow.
2) This paper provides a comprehensive experimental demonstration to validate that asymmetric activation functions are superior to symmetric ones. This implies that ReLU-like activation functions are better than Tanh activation functions.
Weaknesses: 1) One major concern is the lack of prevalence of the BN and Tanh combination in modern networks. As a result, the analysis and experiments conducted in this study hold limited empirical significance.
2) The benefits of asymmetric activation functions have previously been demonstrated from the perspectives of gradient [1] and expressivity [2]. However, this paper fails to contribute new explanations or adequately discuss the limitations of existing works.
3) The experiments in this paper are restricted to a limited range of network architectures. For example, even the widely used ResNet18 and ResNet50 models have not been evaluated on the ImageNet dataset.
4) A noticeable error can be identified in Figure 1, where the right bottom subplot indicates that 'BN' exhibits lower sparsity compared to 'Tanh,' contradicting the accompanying label stating 'High Sparsity.'
[1] Maas, A. L., Hannun, A. Y., & Ng, A. Y. Rectifier nonlinearities improve neural network acoustic models. 2013.
[2] Hanin, Boris, and David Rolnick. Complexity of linear regions in deep networks. 2019.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1) The paper discusses sparsity and asymmetry, which have distinct mathematical definitions. However, it remains unclear which factor is the key determinant of network performance.
2) Whether the proposed swap model effectively addresses the issue of vanishing gradients?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: The authors did not discuss Limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable suggestions.
**Weakness 1.**
Our main contribution lies beyond the combination of BN and Tanh. We have delved into understanding the significance of asymmetry in the activation functions, using Tanh as a base. This analysis can offer insights for future research on activation function design and interpretation.
The derived insights can be applied to other functions, like ELU, which shares structural similarities with Tanh. ELU has an asymptote away from the x-axis. Thus, when ELU is used in the Convention order, its ability to generate asymmetry is hindered due to BN's zero-centered output. This imposed limitation on asymmetry drops the model accuracy. The accuracy of VGG16_11 trained on CIFAR-100 with ELU and other activation functions (Tanh, ReLU, leaky ReLU) can be seen in Table 1 in the PDF.
Notably, Tanh and ELU outperform in the Swap order compared to the Convention order. This supports our observation that restricted asymmetry degrades accuracy. This enhanced performance can be seen in different models, not just VGG. Table 2 in the PDF file shows the accuracy of the ELU model in various settings.
Moreover, shifted Tanh remains effective even in non-BN settings. Examining the Convention and Swap orders, we discerned the importance of asymmetry and sparsity. Such understanding holds potential for performance enhancements regardless of BN's presence. Notably, in the BN-free VGG16_11 model, shifted Tanh outperforms Tanh, with an accuracy of 92.27% against Tanh's 89.68%.
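The qualitative difference between the two orders can be seen in a toy numpy forward pass (hypothetical data and shapes; BN simplified to standardization without affine parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((10000, 1)) * 3.0 + 1.0  # hypothetical pre-activation features

def bn(z):
    # Batch normalization without learnable affine parameters, for illustration.
    return (z - z.mean(axis=0)) / z.std(axis=0)

convention = np.tanh(bn(x))  # Conv -> BN -> Tanh: zero-centered input, symmetric saturation
swap = bn(np.tanh(x))        # Conv -> Tanh -> BN: Tanh saturates asymmetrically first

def skewness(z):
    z = z.ravel()
    return float(np.mean((z - z.mean()) ** 3) / z.std() ** 3)

# The Convention output is roughly symmetric (skewness near 0), while the Swap
# output is noticeably skewed: the off-center input drives Tanh mostly into one
# saturation regime before BN re-centers it.
print(abs(skewness(convention)), abs(skewness(swap)))
```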
**Weakness 2.**
As far as we know, no previous work discusses the advantages of asymmetry in activation functions.
Our work differs in a critical aspect: asymmetric saturation. Specifically:
- [1] investigates gradient flow in the linear region of the ReLU in deep networks.
- [2] delves into the expressivity stemming from the linear regions formed by piecewise linear functions.
These studies primarily focus on the linear aspects and not on asymmetric saturation.
To illustrate further, [1] values the "sparse-dispersed code", where a few coding units are active at a given moment for an image, but all units contribute equally to coding over their lifetime. While this shares the sparsity perspective with ours, it is distinct from our main finding: asymmetric saturation helps improve performance.
Thus, our paper sheds light on the benefits of asymmetric saturation, offering a novel perspective on the activation function.
**Weakness 3.**
We recognize the widespread use of ResNet architectures. However, they aren't suitable for our study, which focuses on analyzing the performance difference in the layer order.
In certain residual blocks of ResNet, BN exists in the skip connection. Thus, the numbers of BN layers and activation functions differ. As a result, the Swap model would need activation functions to match the BNs in the skip connections, complicating a direct comparison with the Convention model.
Nevertheless, the results of ResNet models with Tanh on ImageNet and CIFAR-100 are below. ResNet-18 was trained with a single hyper-parameter setting (lr 0.1, wd 0.0001) due to limited time, while the ResNet-20 result on CIFAR-100 uses the best hyper-parameters.
| | Convention | Swap |
|:------------:|:----------:|:--------:|
| ResNet-18 (ImageNet) | 63.08 | 69.96 |
|ResNet-20 (CIFAR-100)| 68.97 | 69.06 |
Swap shows improved performance compared to Convention.
Additionally, we extended our evaluation to PreAct-ResNet50 with Tanh on ImageNet. PreAct-ResNet, unlike ResNet, lacks BN in its skip connection, so the Swap order does not introduce the challenges seen in ResNet. Our results with PreAct-ResNet50 at the best hyper-parameters are Convention: 62.95 and Swap: 72.82. The Swap model again outperforms the Convention model.
**Weakness 4.**
The sparse distribution refers to a state in which a small number of samples have large values.
The right bottom subplot shows a state where most have a value of 0 and a small number of samples have a large value (e.g., -5).
The averaged Sparsity over layers for Convention and Swap on VGG11 is as follows: Convention 0.718, Swap 0.849.
The lines in the subplot are thin and may not be clearly visible.
We will update the figure to indicate the range of values.
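To make this notion concrete, one common quantitative measure with exactly this flavor is the Hoyer sparsity, which is 1 for a one-hot vector and 0 for a constant one; note this is a stand-in for illustration and may differ from the metric used in the paper:

```python
import numpy as np

def hoyer_sparsity(v):
    """Hoyer sparsity in [0, 1]: 1 for a one-hot vector, 0 for a constant
    vector. A stand-in illustration; the paper's own metric may differ."""
    v = np.abs(np.ravel(v)).astype(float)
    n = v.size
    l1, l2 = v.sum(), np.sqrt((v ** 2).sum())
    return float((np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1.0))

dense = np.ones(100)                       # every unit active equally
sparse = np.zeros(100); sparse[:5] = -5.0  # few units with large magnitude
assert hoyer_sparsity(sparse) > hoyer_sparsity(dense)
print(hoyer_sparsity(dense), hoyer_sparsity(sparse))
```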
**Question 1.**
It is not easy to control asymmetry and sparsity completely independently. However, we assume that asymmetry is more important in general. This can be confirmed by comparing the asymmetry and sparsity between the NWDBN and Convention models.
We did not report the sparsity of the NWDBN model because the sparsity metric had not yet been introduced at that point in the paper. While Convention shows an average layer Skewness of 0.254 and Sparsity of 0.718, NWDBN presents a pronounced Skewness of 0.718 and a lower Sparsity of 0.288. Even with reduced Sparsity, NWDBN surpasses Convention as asymmetry increases.
**Question 2.**
The vanishing gradients problem is inevitable when using Tanh with excessive saturation. However, BN in the Swap order somewhat alleviates the vanishing gradients caused by asymmetric saturation in two aspects.
1. The scaling effect on the gradient through normalization, specifically the $\sigma$ in BN. When asymmetric saturation occurs in Tanh, $\sigma$ becomes smaller. The forward pass normalizes as $z=(x-\mu)/\sigma$, so the backward pass gives $\frac{\partial L}{\partial x}=\frac{1}{\sigma}\frac{\partial L}{\partial z}$ (treating $\mu$ and $\sigma$ as constants), which scales the gradient by $1/\sigma$.
2. The unboundedness of the convolution input, which can produce large weight gradients. In the Swap model, the convolution layer receives its input from BN, unlike the Convention model, where the input is Tanh's bounded output. When computing the convolution weight gradient, this unbounded, potentially large input can create a larger gradient than in the Convention model.
In this respect, the Swap model can perform well across various settings even with asymmetric saturation.
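The $1/\sigma$ scaling in point 1 can be checked numerically under the same simplification, treating $\mu$ and $\sigma$ as constants (a toy example with hypothetical values):

```python
import numpy as np

mu, sigma = 0.8, 0.1   # hypothetical statistics of an asymmetrically saturated Tanh output
x = 0.95
z = (x - mu) / sigma   # forward normalization

dL_dz = 2.0            # arbitrary upstream gradient
dL_dx = dL_dz / sigma  # analytic backward pass: dL/dx = (1/sigma) * dL/dz

# Finite-difference check of the analytic gradient.
eps = 1e-6
numeric = dL_dz * (((x + eps - mu) / sigma) - z) / eps
assert abs(dL_dx - numeric) < 1e-4
print(dL_dx)  # ~20: a small sigma amplifies the gradient flowing into BN
```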
---
Rebuttal Comment 1.1:
Comment: Thanks for the careful response. I increase my rating to borderline reject considering that my concerns are partially addressed. I acknowledge the design of swapping the order of Tanh and BN but still have concerns about whether the main insight provided by this paper, that asymmetric saturation helps improve performance, is significant. As recognized by the authors, asymmetry is highly associated with sparsity. It is commonly accepted that sparsity is the key to achieving good performance (references like [5] in text). The benefits from asymmetry could be explained by sparsity under the intuition that sparse activations filter out some feature values at each layer and help distinguish in-class data and out-of-class data. In this way, the insight about asymmetric activation is less important. To emphasize asymmetry, the authors need to provide new findings on top of existing explanations of sparsity. I also suggest the authors introduce the definition of the sparsity metric at the beginning of the paper, since it is different from the usual definition referring to the number of non-zero values.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer hcmY
Comment: Thank you for your thoughtful response and for raising your score.
We do agree that sparsity and asymmetry often appear together. However, the NWDBN model shows accuracy improved with increased asymmetry despite decreased sparsity. This observation holds for the LeCun Tanh and Softsign activation functions as well. Our findings suggest that asymmetry itself can boost accuracy, even if sparsity is not induced.
We compared the Convention and NWDBN of VGG16_11 with the Softsign and LeCun Tanh activation functions. All measurements were obtained using the optimal hyper-parameters, and the results were averaged across three different seeds. Please see the below Table for details on accuracy, average Skewness, and average Sparsity.
| | Convention with SoftSign | NWDBN with SoftSign | Convention with LeCun Tanh | NWDBN with LeCun Tanh |
|:-----------------:|:-----------------------------:|:--------------------------:|:--------------------------------:|:-------------------------------:|
| accuracy | 70.01 | 72.26 | 67.82 | 72.04 |
| avg. Skewness | 0.407 | 1.393 | 0.430 | 1.377 |
| avg. Sparsity | 0.664 | 0.385 | 0.623 | 0.354 |
Results show that NWDBN models with higher asymmetry and lower sparsity outperform the conventional models. These results support that asymmetric saturation plays a crucial role in enhancing accuracy.
Regarding your comment on the definition of sparsity, we will introduce the sparsity definition earlier in the final submission, as you suggested. | Summary: This paper investigates the performance of different activation function orders in deep learning models with batch normalization. The authors focus on the conventional order, where batch normalization is placed before the activation function, and the swapped order, where batch normalization is placed after the activation function. Surprisingly, they find that the swapped order achieves significantly better performance than the conventional order when using bounded activation functions like Tanh. The paper provides a thorough analysis of the underlying mechanisms and presents empirical evidence to support their findings.
Strengths:
- Novelty: The paper explores an interesting, and up to my knowledge previously overlooked aspect of activation function order in the context of batch normalization. The findings challenge the conventional wisdom and offer a new perspective on designing deep learning models.
- Comprehensive experiments: The authors conduct extensive experiments and carefully examine the output distributions of individual activation functions. Their investigation into the asymmetric saturation phenomenon provides lots of intuitions into the behavior of bounded activation functions that make intuitive sense.
- While focusing on Tanh as the primary activation function, the authors demonstrate that their findings are applicable to similar antisymmetric and bounded activation functions.
- Performance Improvement: The swapped order, combined with bounded activation functions and batch normalization, consistently outperforms the conventional order across various benchmarks and architectures. The results highlight the potential for achieving superior performance by exploiting the benefits of asymmetry and sparsity.
- Quality of presentation of ideas: the paper is very well written and gives the ideas in a clear and easy to follow manner. The contributions are clearly stated and there are no over claims in the text.
Weaknesses: *Limited scope*
My main concern with this paper is the limited scope of the network configurations considered in the experiments. For example, the results presented in Table 1 show that conventional models with ReLU are mostly equal to or better than all swap models, as well as the ReLU activation with the swap order. Can the authors make any further comments on possible reasons for this? For example, one might discount the strength of the main empirical evidence presented (the differential performance between swapped & conventional models for Tanh & shifted Tanh), since it is only observed for the sub-optimal models and not the original model. While this does not directly contradict the main message of the paper, it significantly weakens its potential impact and scope. If the authors can address these questions by further experiments, this could potentially strengthen the main conclusions made in the paper.
On a similar note, given the central hypothesis that asserts "The experiments designed to induce a different degree of asymmetric saturation support the hypothesis that asymmetric saturation helps improve performance," can this hypothesis explain the differential performance between ReLU & Tanh too? Can this hypothesis explain the difference & benefit of having BN layers at all (if you remove them, you don't have the asymmetric saturation)? There seem to be plenty of adjacent configurations that could be added to expand on the generalizability of this central idea of the paper. These types of additional experiments could strengthen the study's conclusions.
*Lack of Theoretical Analysis* Although the paper presents compelling empirical evidence, a deeper theoretical analysis would enhance the understanding of why and how the swapped order, asymmetry, and sparsity contribute to performance improvement. Incorporating theoretical insights could strengthen the paper's contribution to the field.
*Figures & tables:* One minor but important issue: the tables and figures currently lack confidence intervals or standard deviations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: no questions
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable suggestions.
**Weakness: Limited scope**
- Possible reasons for the Convention ReLU performance
As the original authors of BN [1] suggested, Convention seems to be fundamentally better than Swap. The good performance of Convention ReLU derives from these advantages. However, in the case of Tanh, the asymmetry created by the Swap order outweighs the BN advantages that are lost by leaving the Convention order. This is confirmed by shifted Tanh, which can produce asymmetry even without a Swap: there, Convention is better.
- Differential performance between swapped & conventional models
Would you be kind enough to clarify what you mean by "sub-optimal model"? For reference, the VGG used in Table 1 is not VGG11 but VGG16. In that table, VGG16 is the only model with BN added to its original version, while all other models remain in their original versions.
- Explanation of the differential performance between ReLU & Tanh
We believe it's possible. We assume that the primary reason Tanh underperforms compared to ReLU in general cases is its lack of asymmetry. Therefore, when we tested with Swap or shifted Tanh to induce asymmetry, we observed performance similar to ReLU's. From this, we deduce that the reason for the lower performance of Tanh compared to ReLU was its lack of asymmetry.
- The Difference and benefits of having BN
There are various roles of BN, and there have been many studies on it. This paper focuses on analyzing the correlation between Tanh and BN rather than showing the general advantages of BN. Nevertheless, we assume that BN is needed to complement the low sparsity caused by the asymmetry of Tanh.
In the case of Tanh, BN increases the sparsity of the block output in the Swap structure, which helps improve the performance of the model, especially when it is combined with asymmetry.
The layer-wise Skewness and Sparsity of the VGG16_11 without BN, which is called the NoBN model, and the Swap model using Tanh can be seen in Figure 2 in the PDF file.
In the absence of BN, increasing the weight size can indeed produce asymmetry. However, the degree of skewness observed isn't as prominent as that in the Swap model. We interpret this subdued asymmetry in the NoBN model as a result of reduced sparsity. This limited generation of strong asymmetry could potentially contribute to a decline in performance.
On the other hand, in the Swap structure, the BN located after the activation can create high sparsity even when high asymmetry is generated.
**Weakness: Figures & tables**
We have added confidence intervals for Figure 1 to the PDF. We will update the standard deviation for Table 1 in the main paper.
[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. pmlr, 2015.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors to respond to my queries and questions. I do find the answers to be satisfactory and thus have decided to increase my score to weak accept. | Summary: This paper investigates neural network classifiers with bounded activation functions. The authors first observe that swapping the batch norm and activation order improves performance with bounded activation functions. They then observe that asymmetric saturation and sparsity occurs in the swap model compared to convention, and show how it correlates with accuracy. They then propose a modified activation function that promotes asymmetric saturation and shows that even in convention order, it benefits performance.
Strengths: - The paper notes an interesting observation regarding the swap model with bounded activation
- It supports the hypothesis that asymmetric saturation and sparsity benefit performance. This is achieved by varying the weight decay on the batch norm layers.
- The comparable performance of the modified tanh activation with ReLU is quite interesting. I wonder if ReLU-like behavior is what is best for performance or if there is room for improvement past this.
Weaknesses: - The evaluation is somewhat limited, since it is only done on VGG on image classification tasks. It’s not clear if these results would also apply to different architectures, e.g. transformer-based ones, or on different tasks such as segmentation or text classification.
- Comparison with different ReLUs, such as LeakyRELU or ELU.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable suggestions.
**Weakness 2**
The table below presents the accuracy of VGG16 models trained on CIFAR-100. The accuracy of both leaky ReLU and ELU is the best hyperparameter, and the accuracies are averaged over three seeds.
| | Convention | Swap |
|:------------:|:----------:|:--------:|
| ReLU | 73.68 | 71.79 |
| leaky ReLU | 74.75 | 73.61 |
| ELU | 69.85 | 72.27 |
| Tanh | 64.84 | 72.17 |
| Shifted Tanh | 73.87 | 73.21 |
The shifted Tanh model with Convention outperforms the others except for the leaky ReLU model with Convention.
Interestingly, the ELU model, like Tanh, exhibits enhanced performance in the Swap order, reinforcing our assertion that "asymmetry enhances performance."
Given ELU's asymptote away from the x-axis, its capability to establish asymmetry is constrained in the Convention order due to BN's zero-mean output. However, the Swap order can amplify its asymmetry, leading to performance boosts in multiple models, not just VGG.
| | Convention | Swap |
|:------------:|:----------:|:--------:|
| MobileNet | 68.72 | 70.26 |
|PreAct-ResNet18| 74.64 | 75.6 |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, and will keep my score as weak accept. | Summary: The paper investigates the order of Batch Normalization and activation functions, and finds that bounded activation functions like Tanh work better in the swapped order, unlike unbounded ones like ReLU. To explain this, the authors analyze the asymmetric saturation levels at both the layer and channel levels and find that the Swap model has higher saturation levels, especially in layers with excessive saturation. They also introduce a new model, NWDBN, that encourages asymmetric saturation and show that it improves accuracy compared to the Convention model. The paper concludes that asymmetric saturation can help improve performance in neural networks.
Strengths: - The paper revolves around the hypothesis that "asymmetric saturation helps improve performance" and analyzes how ReLU and Tanh bring different asymmetric saturation levels; this sounds interesting to me.
- The paper identifies the high sparsity induced by Batch Normalization placed after bounded activation functions and validates that the higher induced sparsity boosts performance.
Weaknesses: - The logic chain of the story in this paper need some improvement, see more in Questions part.
- The input of convolution layer in Figure 1 upper and lower part seem different, can the authoer explain why?
- Why is x-axis in Figure 1 lower-right BN figure of range (-5, 2.5), while I didn't see any value in the range of (-5, -1), this plot could be misleading.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The paper did not explain well why the higher sparsity induced by asymmetric saturation would boost performance.
- Is there a way to control asymmetric saturation to achieve a trade-off between sparsity and performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: As stated in Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable suggestions.
**Weakness 1.**
**Question 1.**
Robustness to noise, obtained through sparsity, results in improved performance. A sparse representation means that relatively few units represent a data sample. Thus, even if a perturbation is added to the input, the noise affects sparse features less.
The Swap model has an advantage in sparsity over the Convention model. In Section 3.3, we verified the robustness of the Swap model on the corrupted dataset. Owing to its increased sparsity, the Swap model outperforms the Convention model.
**Question 2.**
Thank you for the insightful question. However, it is not easy to completely isolate only asymmetry to achieve trade-offs between sparsity and performance.
**Weakness 2.**
For the Swap model, the mean of the convolution layer output needs to shift substantially away from 0 to achieve asymmetry with Tanh. Consequently, this results in a distribution that is biased in one direction.
Conversely, in the Convention model, there is no reason to shift the mean of the convolution output. The normalization of BN adjusts the mean of the convolution layer output to zero, which leaves the output of the convolution layer unbiased.
**Weakness 3.**
The subplot shows most samples near 0, with a few outliers around -5, reflecting the Swap model's high sparsity. Due to the thinness of the line, such values may be hard to discern.
We will enhance Figure 1 for a clearer presentation. | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive feedback and valuable suggestions.
Pdf: /pdf/e3cbcc2d105648f17375fadcc64c8192f2a6d4af.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Robust Knowledge Transfer in Tiered Reinforcement Learning | Accept (poster) | Summary: The paper presents an extension of "Tiered-RL", a multi-fidelity RL framework where a "low-fidelity" environment is executed in parallel with the "high-fidelity" environment, with the purpose of training faster while keeping near-optimal regret. The paper is a theoretical exploration without empirical evaluation.
Strengths: - The paper presents a deep theoretical evaluation of a setting that is very relevant yet not very methodologically explored: when we have related tasks of varied importance to be solved (such as sim2real), and one task can be leveraged to learn another faster.
- An algorithm able to guarantee near-optimal regret while training in multiple tasks might be of use in security-critical applications such as robotics or medical domains.
Weaknesses: - While the theoretical results sound exciting, I would expect at least a simple empirical evaluation of the proposed algorithm to be provided to show how hard it is to actually implement the algorithm in a practical domain.
- The "Tiered RL" setting sounds very similar (if not exactly the same) as the multi-fidelity RL modeling, that wasn't even cited by the authors. While there is still a novelty in the exact way the problem is solved by running all "tiers" (or fidelities) in parallel, the problem formulation seems exactly like multi-fidelity MDPs to me, and I would suggest to use the same multi-fidelity MDP formulation to keep consistency in the literature. Multi-fidelity MDPs are explicitly modeled in this paper:
Silva, Felipe Leno, et al. "Toward Multi-Fidelity Reinforcement Learning for Symbolic Optimization.", Adaptive and Learning Agents (ALA) workshop, 2023.
And the multi-fidelity RL problem has been explored in a similar way in the following papers
Sami Khairy and Prasanna Balaprakash. 2022. Multifidelity reinforcement learning with control variates. arXiv preprint arXiv:2206.05165 (2022).
Mark Cutler, Thomas J Walsh, and Jonathan P How. 2014. Reinforcement learning with multi-fidelity simulators. In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 3888–3895.
At the very least all of those papers should have been included to your related works section.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - A doubt regarding the setting. Would the agent care about the performance in all tiers? Or is it only important to optimize the performance in the highest tier? If only the highest tier matters, the setting is exactly the same as in multi-fidelity RL, as said in the "weaknesses" section; otherwise, I would need more clarification on which practical applications this modeling would be useful for.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No foreseeable negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. Below we address the concerns raised by the reviewer.
### Experiment
Thanks for the suggestion. We conduct some empirical evaluation of our algorithm on toy examples, which well validates our theory. Please refer to our global response for more details.
### Difference with Multi-fidelity RL and Questions about the setting
Thanks for pointing out the references on multi-fidelity RL. We will properly cite them and add discussions about it. We believe our Tiered RL setting has fundamental differences from multi-fidelity RL setting.
Multi-fidelity RL considers the case when both low-fidelity, cheap data and high-fidelity, expensive data are available, and aims at solving the task with the fewest queries to high-fidelity simulators by leveraging low-fidelity data.
In contrast, as we motivated in our introduction, the Tiered RL setting sits in between transfer RL and multi-task RL, where source and target tasks are solved in parallel. With close inspection, we can see:
(1) In multi-fidelity RL, although there are simulators at different levels, there is only one task to be solved, while in Tiered RL, we have to solve both source and target tasks.
More importantly, as we clearly mentioned in the introduction section (lines 45-48), although we want to benefit target tasks via knowledge transfer, we still expect source tasks not to be sacrificed for that and to retain near-optimal regret. This objective is inherited from [1], and it is reasonable for user-interacting applications with a “tiered customer” structure. Please refer to [1] for more concrete examples.
(2) In multi-fidelity RL, it is assumed that low-fidelity data is cheap and abundant. However, in Tiered RL, as in multi-task RL, the tasks are solved in parallel, so the samples from low-tier tasks are still limited (in each iteration, either source or target tasks can only collect one trajectory). The data scarcity in source tasks also makes our setting more challenging.
[1] Huang et al., Tiered reinforcement learning: Pessimism in the face of uncertainty and constant regret.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, I will keep my already-positive score. | Summary: The authors look at the tiered reinforcement learning setting, which is a parallel transfer learning framework where the goal is to transfer knowledge from a low-tier source task to a high-tier target task in order to reduce the exploration risk of the high-tier task. Additionally, these tasks are solved in parallel. Contrary to previous related work, the authors do not assume the low-tier and high-tier tasks share dynamics or reward functions and focus on robust knowledge transfer without prior knowledge on task similarity. The authors use a condition called the “Optimal Value Dominance” to propose novel online learning algorithms that can achieve constant regret on partial states depending on the task similarity and near-optimal regret when the two tasks are dissimilar. Furthermore, for low-tier tasks, these algorithms keep near-optimal regret at very little cost. The authors also study the scenario when multiple low-tier tasks are present and propose a novel transfer source selection algorithm that can gather knowledge from all low-tier tasks and produce benefits on a much larger state-action space.
Strengths: * The regret analysis of the robust tiered multi-armed bandit (MAB) models was very thorough.
Weaknesses: * No experiments that compared performance with other tiered RL algorithms
* It would have been nice to have the related work in the manuscript instead of as a supplement.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * What type of data have others used on this problem or similar RL problems? Can it be leveraged in this setting as well?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: * The authors did not address the fact that they did not perform any experiments with either synthetic or real data on their model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. Below we address the concerns raised by the reviewer.
### Experiments that compared performance with other tiered RL algorithms
We highlight that the general setting of tiered RL without prior knowledge of task similarity and/or with multiple source tasks has not been studied in the literature (which is precisely the gap our work aims to fill); thus there exist no benchmarks or state-of-the-art baselines in this case for us to compare against.
Following the reviewer’s comment, we have now conducted some empirical evaluation of our algorithm on toy examples, which well verifies our theory. Please refer to our global response and PDF attached there for more details.
### Related work
Thanks for the suggestion. We are happy to move part of the related work (currently in Appx A.2 and A.3) from the supplementary material to the main paper in the revision for ease of reading and completeness as long as the space allows. We apologize for the inconvenience and invite the reviewer to check these discussions in the Appendix for now.
### “What type of data …”
To our knowledge, there is no prior work studying the theory of parallel transfer RL with single or multiple source tasks.
For the normal transfer/multi-task RL setting, prior work studies transferring fixed, well-learned value functions [2], or data collected from other tasks [3].
In our setting, the source tasks are not solved in advance, and the target task should leverage information from them as soon as it becomes available, so we transfer the trajectory data from source tasks directly, which contains all the raw information and can be used to further construct value or transition estimates.
[1] Huang et. al., Tiered reinforcement learning: Pessimism in the face of uncertainty and constant regret.
[2] Noah Golowich and Ankur Moitra. Can q-learning be improved with advice?
[3] Chicheng Zhang and Zhi Wang. Provably efficient multi-task reinforcement learning with model transfer.
---
Rebuttal Comment 1.1:
Comment: Yes, please strongly consider moving your related works into the paper. I believe this will help provide more clarity to the paper and show the gaps you are filling with your work.
I am also glad to hear that you have used simulated data to at least show some empirical evaluation of your algorithm.
As a result of these revisions, I have decided to increase my rating. | Summary: The authors propose a robust parallel knowledge-transfer reinforcement learning algorithm for single or multiple source tasks, without knowledge of model similarity, using the previously defined Tiered Reinforcement Learning framework. The paper removes the limitation of prior knowledge about task similarity to generalize the framework. The main contribution is three-fold: 1) establish necessary conditions for the lower regret bound, 2) propose robust parallel transfer algorithms for reinforcement learning and its special case, multi-armed bandits, and 3) describe a new source-task selection mechanism that guarantees constant regret on a larger space of state-action pairs using multiple low-tier tasks.
Strengths: Originality. Removing the assumption on task similarity poses new, non-trivial challenges in guaranteeing lower regret bounds. The new algorithm for general knowledge transfer requires learning from observed data whether the low-tier and high-tier tasks are similar while simultaneously balancing exploration against exploitation of the low-tier model. The lower bounds for the general case reduce to those of the prior work under the similarity conditions between low- and high-tier tasks. Therefore, the lower bounds are sound. To extend the approach so that the high-tier task can leverage multiple low-tier tasks, the authors describe a method called “trust till failure” that still guarantees lower bounds on regret.
Quality & Clarity. The contribution and novelty are well defined. The definitions and equations are sound.
Significance. The proposed approach generalizes the previous work to a larger class of problems while still maintaining lower bounds on regret minimization.
Weaknesses: Originality & Significance. Removing the limitation of task similarity for generalization is not trivial. However, the motivation for the generalization is lacking, with few references to potential robotics applications. The paper could benefit from providing an illustrative example or toy problems to motivate the new class of problems the proposed method can be applied to.
Quality. I am assuming the authors made a mistake and uploaded the incomplete paper. The related work is supposed to be in the Appendix but there are no Appendices attached to the submitted paper. Without these Appendices, it is difficult to compare how the new method compared to prior work. There are several grammatical errors and the conclusion section does not talk about the limitations of the current approach. The future work is also in Appendix which is not included in the paper.
Clarity. It is not clear how parallel knowledge transfer differs from meta-learning and/or why it is not a special case of meta-learning. It would be good to discuss under which conditions the learning methods overlap or differ, to better motivate the approach, specifically for the use case of multiple low-tier learners with a single high-tier learner.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please provide the appendices to complete the evaluation of the work on soundness and its relation to prior work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not provide limitations of the proposed approach. There are no societal issues related to this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable feedback. Below we address the concerns raised by the reviewer.
### About the appendix
We are afraid the reviewer might have overlooked the **Supplementary Material** part of the submission, where we include the full paper with appendix in a zip file.
If space allows, we are happy to move part of the related work (currently in Appx A.2 and A.3) and future work (currently in Appx A.5) from the supplementary material to the main paper in the revision, for ease of reading and completeness. We kindly invite the reviewer to check these discussions in the Appendix.
### Lack of motivation and illustration on toy problems
Thanks for the suggestion. We will strengthen our motivation by highlighting other potential applications and theoretical gaps in the existing work. Following the suggestion, we have now included some numerical illustration on a toy example which showcases the performance of our proposed algorithm and validates our theory. Please refer to our global response and the attached PDF for more details.
### Difference with Meta-Learning
In general, meta-learning is about learning to learn from **metadata**. Here, learning to learn means learning an algorithm instead of outputting a policy, and metadata refers to data about data, for example, properties of the algorithm used, the learning task itself, etc.
In contrast, in our Tiered RL setting, as in transfer RL, we distinguish the importance of tasks and directly use the **data from other source tasks** to accelerate policy learning in the "high-tier" tasks.
In other words, meta-learning uses “higher-order” data to learn an algorithm, while we target using normal data from source tasks to benefit “high-tier” tasks.
### Limitations of the proposed approach
We have discussed some limitations of our results and open problems in Appendix A.5, for instance, the dependency gap in the lower bound for the RL case (see also lines 164-166) and the OVD assumptions. Following the reviewer's suggestion, we will assemble these limitations in an explicit section in the main paper.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer again for the valuable feedback. Given that the discussion period is ending, it would be appreciated if you could review our response and notify us whether your concerns are addressed or you have further questions. | Summary: This paper studies the tiered reinforcement learning setting, a transfer learning setting where the goal is to transfer information from a low-tier task to a higher-tier one while learning to solve both tasks in parallel.
Contrary to prior work, the authors do not assume that both tasks share the same rewards and dynamics; they show it is still possible to benefit from the low-tier task when learning the higher-tier one. The authors also extend their work by considering multiple low-tier tasks and present a selection mechanism which can gather information from the different tasks.
Strengths: Overall this is a solid paper, while technical the paper is well written and properly organized.
Removing the assumption that the low- and high-tier sources share the same dynamics and rewards makes this setting much more applicable and interesting.
The setup with multiple low-tier tasks is also valuable and could have many applications.
Weaknesses: It would have been interesting to add empirical evaluations of the proposed algorithms to understand their actual performance.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: In the multiple source tier setting is there a point where adding more tasks with a low similarity with the high tier tasks can hurt performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive and valuable feedback. Below we address the questions raised by the reviewer.
### For the experiments
We conduct some empirical evaluation on toy examples, which well verifies our theory. Please refer to our global response for more details.
### More tasks could be harmful
Thanks for the interesting question. One direct observation is that, for a fixed failure rate, we need to adjust the bonus term by a $\log W$ factor, where $W$ is the number of source tasks. Therefore, it could be harmful if one continuously adds source tasks which cannot enlarge the set of transferable states. It might be interesting to study how to select tasks which have more potential to introduce new transferable states, but this is out of the scope of this paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their hard work and valuable feedback.
## General Remarks on Experiments
Several reviewers (Reviewer 5dMC, Reviewer 9F2D, Reviewer BLAr, Reviewer MBdo) pointed out the lack of empirical evaluations as a main weakness. In our humble opinion, as a theory paper, our results provide a solid understanding of the provable benefits and fundamental limits of robust knowledge transfer in tiered RL. We believe the results are already interesting on their own, both from theoretical and technical perspectives. Note that the majority of the references (e.g., [9, 36]) that we cited in the paper (many also published at ICML/NeurIPS) do not contain any numerical results.
Nevertheless, we agree with the reviewer that some empirical validation can still be valuable. Note that the general setting of tiered RL without prior knowledge of task similarity and/or with multiple source tasks has not been studied in the literature (which is precisely the gap our work aims to fill); thus there exist no benchmarks or state-of-the-art baselines in this case. A reasonable experiment would be to validate the performance of our own algorithms. To this end, we select the most representative Tiered RL algorithm, Alg. 7 in the multiple-source-tasks setting, and evaluate it on a toy tabular MDP task. In the PDF attached to this response, we report the numerical results.
As we can see from Figure 1,
* After the transfer is activated, the regret in the target task suddenly increases for a while, because the target task has to make some mistakes and learn from them as a result of model uncertainty. However, because of our algorithm design, the negative transfer terminates after a very short period.
* If we add more source tasks which can introduce new transferable states, the target task will suffer less regret.
We believe this experiment well verifies our theory prediction and we are happy to add these and more experimental results in the revision if the reviewers find them necessary.
Below we provide details on the experimental setups.
### Construction of Source and Target Tasks
We set $S=A=3$ and $H=5$. We first randomly construct the transition function of the high-tier task $M\_{\text{Hi}}$ (i.e., $\mathbb{P}\_{\text{Hi}}$ is randomly sampled and normalized to ensure validity).
Then, similarly, we randomly construct the reward function of $M_{\text{Hi}}$ and shift the reward function to ensure $M_{\text{Hi}}$ has unique optimal policy and $\Delta_{\min, \text{Hi}} = 0.1$.
Next, we construct the source tasks by randomly permuting the transition matrix of $M_{\text{Hi}}$. In other words, for any $s_h$, we randomly permute $a_1,a_2,a_3$ to $a_1',a_2',a_3'$ and assign $\mathbb{P}\_{h,\text{Lo}}(\cdot|s_h,a_i') \gets \mathbb{P}\_{h,\text{Hi}}(\cdot|s_h,a_i)$ for $i\in[3]$. In this way, the Optimal Value Dominance (OVD) condition is ensured, and we can expect some $s_h$ to be transferable when $\pi^*_{\text{Lo}}(s_h) = \pi^*_{\text{Hi}}(s_h)$.
When the number of source tasks $W > 1$, we repeat the above process and construct $W$ different source tasks.
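The construction above can be sketched as follows (a simplified NumPy illustration of our own, not the actual experiment code; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 3, 3, 5  # sizes used in the toy experiment

# High-tier transitions: P_hi[h, s, a] is a distribution over next states
P_hi = rng.random((H, S, A, S))
P_hi /= P_hi.sum(axis=-1, keepdims=True)

# Source task: at every (h, s), permute which action gets which transition,
# so the source offers the same "menu" of next-state distributions per state
# while its optimal action may differ from the high-tier one.
P_lo = np.empty_like(P_hi)
for h in range(H):
    for s in range(S):
        P_lo[h, s] = P_hi[h, s, rng.permutation(A)]
```

Because the permutation only relabels actions, every `P_lo[h, s]` contains exactly the same next-state distributions as `P_hi[h, s]`, which is what makes a state transferable whenever the optimal actions happen to coincide.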
### Experiments Setting
We adapt StrongEuler [1] as the online learning algorithm to solve source tasks, and use the bonus function in [1] as the bonus function in our Alg. 7.
We evaluate our algorithm when $W = 0, 1, 2, 5$, where $W = 0$ means the high-tier task $M_{\text{Hi}}$ is simply solved by normal online learning method (StrongEuler) without any parallel knowledge transfer.
We choose $\lambda = 0.3 \approx 1/S$ in Alg. 7, and in the MDP instance we test, **among all $S\cdot H=15$ states, for $W=1,2,5$, the number of transferable states would be 6, 9 and 13, respectively**.
We evaluate for $K = 10^7$ iterations, starting the transfer from $k = 5\times 10^5$, in order to avoid large "burn-in" terms caused by the large uncertainty in source tasks in the early stage. Each curve is averaged over 20 runs, and the shaded regions indicate 96% confidence intervals.
[1] Max Simchowitz and Kevin G Jamieson. Non-asymptotic gap-dependent regret bounds for tabular mdps. (NeurIPS 2019)
Pdf: /pdf/414db85de760f67ef89c1fd9153f63d925a686bc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper extends the work of Huang et al (2022) on Tiered Reinforcement Learning (where the objective is to transfer knowledge from the low-tier (risk-tolerant) source task to a high-tier (risk-averse) target task while solving the two tasks in parallel) by relaxing the assumptions of identical reward and transition functions in the source and target tasks.
The paper first identifies ‘Optimal Value Dominance’ a necessary condition to keep the source algorithm near-optimal while achieving provable benefits for the regret in the high-tier algorithm when Low and High tier tasks are similar.
The paper then proposes algorithms for tiered multiarmed bandit and tiered reinforcement learning with regret lower bounds depending on the similarity between the low- and high-tier tasks, with an improved lower bound for the case when the tasks are the same compared to Huang et al (2022).
Finally, the paper also considers a case when there are multiple low-tier tasks, in which the most similar states to the high-tier task are chosen from any of the low tier tasks for an additional log W factor in regret.
Strengths: Originality
The paper extends the framework (multiple low-tier tasks) of and relaxes some of the assumptions (non-identical reward and transition function among the low- and high-tier tasks) of a previous paper on Tiered RL.
Quality
The paper is mostly well-written, clear and concise. The paper motivates the problem setting well in the introduction, and places it well in the literature. The concepts are introduced and explained in a logical order.
Clarity
The paper has a similar structure to Huang et al (2022). It is mostly clear and easy to follow, although certain parts do feel a bit rushed. (I will mention them in the Weaknesses part.)
Significance
The paper contributes a few findings in Tiered RL that could be useful to the community: it proves a tighter lower bound for the case when the lower and higher tier tasks are the same, and proves lower bounds for the case when the reward and transition functions differ depending on the difference in the gaps. It also introduces to setting when there are multiple low-tier tasks.
Weaknesses: The work in this paper results from a minor relaxation of certain assumptions in Huang et al (2022), and a lot of possible extensions and improvements are left for future work.
The RL results depend on a hyperparameter, lambda that needs to be chosen wisely.
Minor typos/grammar/style issues that did not affect the rating of this paper:
Lines 37, 41 and 52: “In [13]” is considered bad style, either write Author et al (year) [13], or better yet just cite the claim.
Lines 62: conceptions -> findings
Lines 67-68, starting with while: that subsentence makes no sense, please rephrase
The sentence on lines 125-127 seems like it belongs to the Frequently Used Notations subsection, not to 2.1
Line 165: no need for ‘actually’
Line 166: ‘leave it to the future work’ -> ‘leave it for future work’
Line 176: remove ‘kind of’ or replace with ‘may [explain]’
Line 177 is -> are, literatures -> papers/works/publications
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What do you mean by ‘back-propagation process of value iteration’ (line 255)? I am familiar with the concepts of backpropagation and value iteration individually, but their combination is rather ambiguous to me.
How would one go about choosing the lambda hyperparameter before beginning to solve the tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors are upfront about the limitations, such as the unique optimal policy assumption, and mention in the appendix that the avoidance of the lower bound knowledge about the minimal gap would be beneficial. Concerns for negative societal impact in this theoretical paper are not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable feedback. We will fix the typos as mentioned. Below we address the main questions raised by the reviewer.
### “this paper results from a minor relaxation of…”
We humbly disagree to credit our contributions as “minor relaxation” of the assumptions.
First of all, removing the assumption that the low-tier and high-tier tasks share the transition and reward functions is highly non-trivial and of great practical value, which is also noted by Reviewer 5dMC and Reviewer 9F2D . The sharing model assumption in Huang et al (2022) requires very strong prior knowledge (and often unrealistic), which allows simple algorithm design based on PVI. Moreover, without such prior knowledge, the problem becomes much more challenging as we need a carefully designed strategy to identify and avoid negative transfer. We introduce several novel components, including (1) a branching condition to decide transfer or not (noted in our Algs. 1,2), (2) value adjustment to re-ensure overestimation, etc, which are substantially different from PVI in [1].
Secondly, we generalize the results of tiered RL from a single source task to multiple different source tasks, which is another substantial extension. This requires a novel source-task selection mechanism to ensemble information from low-tier tasks, which has not been studied in the existing literature and can be of independent interest for transfer learning.
### The choice of hyperparameter $\lambda$
We want to highlight that we do not treat $\lambda$ as a parameter to be optimized.
Instead, we are interested in studying the effect of this parameter on the regret.
Moreover, our main results will not be undermined if the choice of $\lambda$ is not ideal (as long as it is kept away from 0): as we proved, the regret is always at least near-optimal, and $\lambda$ only affects the size of the set of transferable states.
In practice, without prior knowledge about $\max_s d^*_{\text{Lo}}(s)$, one can choose $\lambda$ to be around $\Theta(1/S)$, since for each layer $h$ there exists at least one state such that $d^*_{\text{Lo}}(s_h)\geq 1/S$; alternatively, one can choose it to be at a constant level to avoid large “burn-in” terms.
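The pigeonhole fact behind the $\Theta(1/S)$ suggestion (any occupancy distribution over $S$ states sums to 1, so its maximum entry is at least $1/S$) can be checked with a tiny numerical sketch; this is purely illustrative and not part of the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 10
# Any occupancy distribution d over S states sums to 1, so by
# pigeonhole max_s d(s) >= 1/S. Hence lambda ~ 1/S never exceeds
# the largest occupancy probability at any layer.
for _ in range(1000):
    d = rng.dirichlet(np.ones(S))       # random distribution over S states
    assert d.max() >= 1.0 / S - 1e-12
```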
### “back-propagation process of value iteration”
It refers to the backward update $V_h(\cdot) \gets \max_{a_h} \big[ r_h(\cdot,a_h) + \mathbb{P}_h V_{h+1}(\cdot) + (\text{bonus}) \big]$, which computes value functions from the back to the front. We will revise the wording.
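For a tabular finite-horizon MDP, the backward update above can be sketched as follows (a minimal illustration with hypothetical array shapes, not the paper's actual algorithm):

```python
import numpy as np

def backward_value_iteration(r, P, bonus):
    """Backward ("back-propagating") value iteration for a finite-horizon MDP.

    r[h, s, a]      : reward at step h
    P[h, s, a, s2]  : transition probabilities at step h
    bonus[h, s, a]  : optimism bonus added at step h
    Returns Q of shape (H, S, A) and V of shape (H+1, S); values are
    computed from the last layer h = H-1 back to the front.
    """
    H, S, A = r.shape
    V = np.zeros((H + 1, S))            # V[H] = 0: terminal values
    Q = np.zeros((H, S, A))
    for h in range(H - 1, -1, -1):      # backward over layers
        Q[h] = r[h] + P[h] @ V[h + 1] + bonus[h]   # (S, A, S') @ (S',) -> (S, A)
        V[h] = Q[h].max(axis=1)         # greedy maximization over actions
    return Q, V
```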
[1] Huang et al., Tiered reinforcement learning: Pessimism in the face of uncertainty and constant regret.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and answering my questions.
``We humbly disagree to credit our contributions as “minor relaxation” of the assumptions.``
I would like to emphasize that I did not mean the expression “minor relaxation” to be belittling at all. However, it can hardly be disputed that the paper indeed relaxes certain assumptions of the work of Huang et al. (2022) and extends it in other directions, so it is not an original paper per se in the sense of, for example, proposing a novel framework to be investigated. There is nothing wrong with this: most research is incremental, and that is how it should be.
`We want to highlight that we do not treat $\lambda$ as a parameter to be optimized.`
It was clear from the paper that it is not a parameter to be optimized, but the correct way of choosing it was unclear to me. An additional discussion of the role of $\lambda$ akin to your response above (even in the appendix) would make the paper a bit more complete in my opinion.
---
Reply to Comment 1.1.1:
Comment: Thanks for the quick response and suggestion. We will add more discussion for $\lambda$ in the paper as suggested. | null | null | null | null | null | null |
Adaptive Topological Feature via Persistent Homology: Filtration Learning for Point Clouds | Accept (poster) | Summary: In this manuscript, the authors develop a special module that can adaptively learn a suitable filtration function for persistent homology (PH) and its downstream tasks. In particular, their learning module is specially designed, so that resulting persistent homology is isometry invariant. Their model has been tested on two datasets for protein classification and CAD data classification. The model is novel and very interesting!
Strengths: It is a novel approach to use machine learning to learn a suitable filtration function! It has also demonstrated the advantage over traditional approaches.
Weaknesses: Missing important references for related works. The test examples have limited data points. The model may suffer from over-fitting issues.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. One of the key points of the submission is to "propose a novel framework to obtain adaptive topological features for point clouds", as mentioned in Page 3, line 71. However, the submission seems to only focus on learning a "weight function" using "different radii"! In fact, many other works have designed special filtration processes based on data properties, such as local homology, weighted homology, element-specific homology, and cohomology (incorporated with special weights). These works can all be viewed as learning (unsupervisedly) "adaptive topological features". For instance,
(a) local homology: Bendich P, Cohen-Steiner D, Edelsbrunner H, Harer J, Morozov D (2007) Inferring local homology from sampled stratifed spaces. In foundations of computer science, 2007. FOCS’07. 48th Annual IEEE symposium on, IEEE, pp. 536–546
(b) element-specific homology & special distance matrices: Cang ZX, Mu L, Wei GW (2018) Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening. PLoS Comput Biol 14:e1005929
Cang ZX, Wei GW (2017) TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions. PLoS Comput Biol 13:e1005690
(c) Weighted simplicial complexes: Zhenyu Meng, D Vijay Anand, Yunpeng Lu, Jie Wu, and Kelin Xia, "Weighted persistent homology for biomolecular data analysis." Scientific Report, 10 (1), 1-15 (2020)
(d) Topological antoencoders: Moor, Michael, Max Horn, Bastian Rieck, and Karsten Borgwardt. "Topological autoencoders." In International conference on machine learning, pp. 7045-7054. PMLR, 2020.
Further, "adaptive topological features for point clouds" can also be achieved through different types of simplicial complexes or hypergraphs. For instance, the topological features of Rips complexes are dramatically different from those of Dowker complexes, Neighborhood complexes, Hom-complexes, etc.
Xiang Liu, Huitao Feng, Jie Wu, and Kelin Xia, "Dowker complex based machine learning (DCML) models for protein-ligand binding affinity prediction." PLOS Computational Biology, 18(4), e1009943
Xiang Liu, and Kelin Xia, "Neighborhood complex based machine learning (NCML) models for drug design." In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data, pp. 87-97. Springer, Cham (2021).
Xiang Liu, Huitao Feng, Jie Wu, and Kelin Xia, "Hom-complex-based machine learning (HCML) for the prediction of protein–protein binding affinity changes upon mutation", Journal of Chemical Information and Modeling, 62 (17), 3961-3969 (2022)
2. The discussion of the vectorization of PH is not proper. In particular, “The idea to learn vectorization of persistent homology was pioneered by Hofer et al. (2017)” is incorrect. Learning statistical or combinatorial properties (unsupervised) from persistence diagrams or persistence barcodes is a common approach. For instance,
Bubenik, Peter, and Peter T. Kim. "A statistical approach to persistent homology." Homology, homotopy and Applications 9, no. 2 (2007): 337-362.
Chung, Moo K., Peter Bubenik, and Peter T. Kim. "Persistence diagrams of cortical surface data." In Information Processing in Medical Imaging: 21st International Conference, IPMI 2009, Williamsburg, VA, USA, July 5-10, 2009. Proceedings 21, pp. 386-397. Springer Berlin Heidelberg, 2009.
Bubenik P (2015) Statistical topological data analysis using persistence landscapes. J Mach Learn Res 16:77–102
Chi Seng Pun, Si Xian Lee, and Kelin Xia, "Persistent-homology-based machine learning: a survey and a comparative study." Artificial Intelligence Review, (2022)
Dey, Tamal Krishna, and Yusu Wang. Computational topology for data analysis. Cambridge University Press, 2022.
3. In the test examples, the data sizes seem to be relatively small, i.e., less than 2000 datapoints. But the learning module has multiple fully connected layers. In this way, overfitting can easily be a problem. The authors are suggested to do some ablation studies to address the issue. Further, some more details for PH analysis are needed. For instance, filtration sizes for the data, vectorization of persistent images, etc.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It will be great if the authors can use more realistic examples and compare with state-of-the-art models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed comments and suggestions for improving our paper. We will reflect all the comments and suggestions in our final version. In the following, we respond to specific concerns and questions raised by the reviewer.
> 1. One of the key points of the submission is to "propose a novel framework to obtain adaptive topological features for point clouds", as mentioned in Page 3, line 71. However, the submission seems to only focus on learning a "weight function" using "different radii"! In fact, many other works have been done for designing special filtration processes based on data properties, such as local homology, weighted homology, element-specific homology, cohomology (incorporated with special weights), etc. These works can all be viewed as learning (unsupervisedly) "adaptive topological features".
Thank you for pointing out that the expression “adaptive topological feature” might be confusing for readers, because "adaptive topological features" does not necessarily mean that the extraction of topological features is done in a supervised way.
In this paper, we tried to learn a filtration in a data-driven and supervised way, which is our main contribution. This is not completely in line with previous studies that design special filtrations in an unsupervised way based on data properties. We missed some of the important references you cited and will add them in the final version.
> Further, "adaptive topological features for point clouds" can also been achieved through different types of simplicial complexes or hypergraphs. For instance, the topological features of Rips complexes are dramatically different from Dowker complexes, Neighborhood complexes, Hom-complexes, etc.
While we concentrated on determining the weight function of the weighted Rips filtration in this paper, we can also consider other kinds of filtrations as you pointed out. We will clearly state this remark in the limitation section in the final version.
> 2. The discussions of vectorization of PH are not proper. In particular, “The idea to learn vectorization of persistent homology was pioneered by Hofer et al. (2017)” is incorrect. To learn statistic or combinatorial properties (unsupervised) from persistent diagram or persistent barcodes is a common approach.
While we intended to state that Hofer et al. (2017) first proposed to determine vectorization in an end-to-end/data-driven/supervised/automatic way, the sentence you pointed out was imprecise. In the final version, we will revise this expression and add references related to vectorization methods.
> 3. In the test examples, the data sizes seem to be relatively small, i.e., less than 2000 datapoints. But the learning module has multiple fully connected layers. In this way, overfitting can easily be a problem. The authors are suggested to do some ablation studies to address the issue.
Thank you for your important suggestion. We will conduct additional experiments to investigate whether we can reduce the number of network parameters, and add the results in the appendix.
> Further, some more details for PH analysis are needed. For instance, filtration sizes for the data, vectorization of persistent images, etc.
Since we used (weighted) Rips filtration, the number of simplices can be computed from the number of points in the point clouds, which is stated in lines 273 and 302. To make it clear, we will add this in the appendix in the final version.
We used PersLay (Carriere et al., 2019) as a vectorization method, and the detailed settings for this are described in Appendix B.3.
We hope that we addressed all your questions and concerns adequately. In light of our clarifications, please consider increasing your score to accept. Please let us know if we can provide any further details and/or clarifications.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! I have no further comments. | Summary: The paper develops a neural network to learn weights of given points in addition to other internal parameters to classify 3D clouds of unlabeled points on several public datasets.
Strengths: The authors should be highly praised for a rigorous approach to an important problem of point cloud classification by using isometry invariants from persistent homology.
The paper is generally well-written.
Weaknesses: Questions arise already when reading the abstract in line 11: "to make the resulting persistent homology isometry-invariant, we develop a neural network". For standard filtrations such as Vietoris-Rips, Cech, or Delaunay complexes on a point cloud, the persistent homology is already an isometry invariant of a given cloud of unlabeled points because constructions of all complexes above depend only on inter-point distances.
The main drawback of persistent homology is its weakness as an isometry invariant, which should have been clear to all experts in computational geometry many years ago but was demonstrated only recently. The paper by Smith et al (arxiv:2202.00577) explicitly constructs generic families of point clouds in Euclidean and metric spaces that are indistinguishable by persistence and even have empty persistence in dimension 1.
Though Topological Data Analysis was largely developed by mathematicians, the huge effort over many years was invested into speeding up computations, rather surprisingly, instead of trying to understand the strengths and weaknesses of persistent homology, especially in comparison with the much simpler, faster, and stronger invariants of clouds under isometry.
Persistence in dimension 0 was actually extended to a strictly stronger invariant mergegram by Elkin et al in MFCS 2020 and Mathematics 2021, which has the same asymptotic time as the classical 0D persistence and is also stable under perturbations of points.
A SoCG 2022 workshop included a frank discussion concluding that there was no high-level problem that persistent homology solves. In fact, persistence as an isometry invariant essentially tries to distinguish clouds up to isometry, not up to continuous deformations since even non-uniform scaling changes persistence in a non-controllable way.
On the other hand, the isometry classification problem for point clouds was nearly solved by Boutin and Kemper (2004), who proved that the total distribution of pairwise distances is a generically complete invariant of point clouds in any Euclidean space. The remaining singular cases were recently covered by Widdowson et al in NeurIPS 2022 and CVPR 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Are there any theoretical guarantees that the output of the proposed neural network distinguishes infinitely many (almost all?) point clouds as pairwise distances do?
What theoretical results in the paper are stronger than the past work by Widdowson et al in CVPR 2023?
In subsection 5.1, what is the dataset size relative to the Protein Data Bank?
In subsection 5.2, how representative are the "subsampled 128 points" (line 302) for objects such as "beds, chairs, and desks" (line 298)? For example, can humans distinguish between a bed and a desk by looking at 128 randomly sampled points?
What is the asymptotic complexity (in the number of points in a given cloud) of the algorithm described in section 3? What was the actual running time for training and testing, and what are the technical specifications of the used machine?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The very last paragraph on limitations only talks about future work, for example about "generalizing our framework to include a wider class of weighted filtrations" (line 342). Yes, there are numerous papers that go even further and take a given point cloud as raw input without computing any justified invariants.
Since the authors have written detailed and accurate mathematical proofs in the appendix, they could probably agree that *examples prove nothing* because counter-examples can still exist, especially when all possible data (as for point clouds) fill a continuous space.
For continuous data, a continuous parametrization or metric can be more suitable than a discrete classification, which practically cuts a continuous space into disjoint pieces.
Since all tables of experimental results include accuracies of at most 84% in Table 1 (maximum 75% in Table 2), the best and certainly publishable contribution seems to be Theorem 4.1. Could a mathematical venue be more suitable for this important result?
To help the authors with future submissions, the key insight from "AlphaFold2 one year on" https://www.nature.com/articles/s41592-021-01365-3 exposes the major limitation of all brute-force predictions, not only for AlphaFold. What resources (money, people, even electricity and water) are needed not only to get the first predictions but to annually update predictions with new training data? Is it really sustainable in the long term?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the detailed comments and suggestions for improving our paper. First, let us explain the problem setting and our motivation for this study to resolve your misunderstanding.
In this paper, we deal with the classification of *labeled point clouds.* In this setting, *point clouds are labeled beforehand* (for example, in the experiment on protein dataset, each point cloud has a label open or close), and two point clouds with the same label are not necessarily isometric. Our aim is NOT to distinguish the point clouds up to isometry.
In solving such classification tasks, topological features extracted by persistent homology (PH) would be useful. In fact, PH has been shown to be *effective for point cloud analysis and classification* for material science [1, 2], biology [3, 4], and medical science [5, 6]. Although it is isometry-invariant if the filtration is chosen to be isometry-invariant, the use of PH is NOT limited to distinguishing the point clouds up to isometry.
[1] T. Nakamura et al. Persistent homology and many-body atomic structure for medium-range order in the glass. *Nanotechnology*, 26(30):304001, 2015.
[2] A. Hirata et al. Structural changes during glass formation extracted by computational homology with machine learning. *Communications Materials*, 1(1):98, 2020.
[3] V. Kovacev-Nikolic et al. Using persistent homology and dynamical distances to analyze protein binding. *Statistical Applications in Genetics and Molecular Biology,* 15(1):19–38, 2016.
[4] Z. Cang and G. Wei. "TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions." *PLoS computational biology* 13(7), e1005690, 2017
[5] X. Zhu et al. Stochastic Multiresolution Persistent Homology Kernel. In *IJCAI 2016* (pp. 2449-2457).
[6] N. Singh et al. Topological descriptors of histology images. In *Machine Learning in Medical Imaging: 5th International Workshop, MLMI 2014. Proceedings 5* (pp. 231-239).
> the persistent homology is already an isometry invariant
We consider a weighted Rips filtration whose weight function is implemented by a neural network. If we do not force this network to be isometry-invariant, the resulting PH can be non-isometry-invariant. This is why we make the network isometry-invariant.
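A minimal sketch of this idea (a hypothetical toy weight, not the paper's actual network): if a point's weight is computed only from its distances to the other points, the weights, and hence the resulting weighted Rips PH, are unchanged under any isometry:

```python
import numpy as np

def point_weights(X, k=3):
    """Toy isometry-invariant weight: mean distance from each point to its
    k nearest neighbors (a DTM-like weight, for illustration only)."""
    # Pairwise distance matrix; depends only on inter-point distances,
    # so it is invariant under rotations, reflections, and translations.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D_sorted = np.sort(D, axis=1)             # column 0 is the zero self-distance
    return D_sorted[:, 1:k + 1].mean(axis=1)  # average over k nearest neighbors
```

Because the output depends only on the distance matrix, applying any rigid motion to `X` leaves the weights, and therefore the weighted filtration values, unchanged.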
> What theoretical results in the paper are stronger than the past work by Widdowson et al … ?
The result by Widdowson et al. (2023) is not a competitor of our study but rather one that could potentially improve our results by being incorporated into it. Currently, we do not have strict guarantees on the approximation ability of our network based on distance matrices. Widdowson et al. (2023) showed that the Simplexwise Centered Distribution (SCD) is a complete and continuous isometry invariant. This could help us obtain theoretical guarantees for our network or propose a new network architecture with stronger theoretical guarantees. That is future work, and we are grateful for your insight. We will mention this in the final version.
> In subsection 5.1, what is the dataset size relative to the Protein Data Bank?
As explained in lines 266—276, we created a protein dataset of 1,000 samples by subsampling from 14 types of proteins. This is much smaller than the datasets in the Protein Data Bank. We did not apply our method to larger datasets since we need to compute PH repeatedly during training, which makes the computational cost high. We will add this limitation in the final version.
> In subsection 5.2, how representative are the "subsampled 128 points" …, can humans distinguish between a bed and a desk … ?
Thank you for your important comment. Subsampling 128 points sometimes makes it hard for humans to judge the label of each point cloud. We subsampled a relatively small number of points due to the high computational cost, as described above. Conducting experiments on point clouds with a larger number of points is future work.
> What is the asymptotic complexity … ?
The computational complexity can be upper-bounded by $O(N^9)$, where $N$ is the number of points. However, the actual computational cost is expected to be smaller, since the computational cost of PH is empirically known to be less than cubic in the number of simplices.
Regarding the actual running time, it takes about seven hours to train the neural networks in our method. The bottleneck of computational time would be the computation of PH. While this is one of the limitations of our study, we believe our method is meaningful since it sometimes improves classification accuracy.
We will include this information related to the computational cost in the final version.
The technical specifications of the used machine are described in lines 261—264.
> The very last paragraph on limitations only discusses talks about future work
Thank you for pointing out the lack of a description in the limitation. We will add some explanation about the limitation of our method such as the high computational cost.
> Are there any theoretical guarantees …
> Since the authors have written …
Our method aims to estimate labels for meaningful labeled point clouds, rather than distinguishing between all possible point configurations up to isometry. Our theory supports the validity of the architecture to solve such a classification task. We believe our method is effective for this task.
> Since all tables …
Due to the high computational cost of our method, we could not apply our method to large point clouds, which makes the accuracy lower. If we improve our method so that we can apply it to larger point clouds, the accuracy will be much higher. We will state this fact in the limitation section of the final version.
We hope that we addressed your questions and concerns. In light of our clarifications, please consider increasing your score to accept. Please let us know if we can provide any further details and/or clarifications.
---
Rebuttal Comment 1.1:
Title: further questions
Comment: Thank you for the reply.
>Our aim is NOT to distinguish the point clouds up to isometry.
Have you checked that all your input clouds from different classes are distinguished by persistent homology? If not, all further outputs for these indistinguishable clouds will be identical.
> topological features extracted by persistent homology (PH) would be useful
How can these features be called topological if PH changes even under non-uniform scaling, much worse under more flexible topological transformations?
>PH has been shown to be effective for point cloud analysis and classification for material science [1, 2], biology [3, 4], and medical science [5, 6].
Could you please give exact references to rigorously proved theorems in the cited papers that show the effectiveness of PH?
>Although it is isometry-invariant if the filtration is chosen to be isometry-invariant, the use of PH is NOT limited to distinguishing the point clouds up to isometry.
If PH is not limited to distinguishing the point clouds up to isometry, under what other equivalence relation is PH an invariant?
> If we do not force this network to be isometry-invariant, the resulting PH can be non-isometry-invariant.
If the output is not an isometry invariant, does it have other theoretical guarantees?
>This is why we make the network to be isometry-invariant.
Have you compared the output with the much simpler and faster isometry invariants such as the total distribution of pairwise distances, which was proved to be complete for Euclidean clouds in general position by Boutin and Kemper in Adv. Appl. Math (2004)?
> we created the protein dataset composed of 1,000 data by subsampling from 14 types of proteins. This is much smaller than the datasets in Protein Data Bank.
Yes, the PDB started in 1971 with 7 structures and now has more than 200 thousands.
>The computational complexity can be upper-bounded by O(N^9), where is the number of points.
Where is this claim proved in the submission?
Is it possible to explain the conflicting quotes from the rebuttal below and justify any claimed beliefs by rigorous arguments?
Quote 1. "Due to the high computational cost of our method, we could not apply our method to large point clouds, which makes the accuracy lower."
Quote 2. "We believe our method is effective for this task."
Quote 3. "We did not claim that we had proposed a new and efficient method to analyze protein datasets."
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments.
> Have you checked that all your input clouds from different classes are distinguished by persistent homology? If not, all further outputs for these indistinguishable clouds will be identical.
There is no need to completely classify point clouds using only PH. If the classification based solely on PH information is insufficient, it can be combined with other features, such as features from DNN. In this case, PH is not used to perform classification on its own, but rather plays a supportive role in the classification. We have discussed this in Section 3.2 of our paper.
> Could you please give exact references to rigorously proved theorems in the cited papers that show the effectiveness of PH?
In our paper, we address the machine learning problem, and we do not aim to solve any mathematical problem. Our study is based on previous research showing the empirical effectiveness of PH.
> If PH is not limited to distinguishing the point clouds up to isometry, under what other equivalence relation is PH an invariant?
The fact that PH is isometry-invariant does not mean it can only be used to distinguish point clouds up to isometry. In our study, we attempt to use features obtained from PH for classification. It is not directly relevant to our study whether there are any other equivalence relations for which PH is invariant.
> If the output is not an isometry invariant, does it have other theoretical guarantees?
Our output is isometry-invariant.
> Have you compared the output with the much simpler and faster isometry invariants such as the total distribution of pairwise distances, which was proved to be complete for Euclidean clouds in general position by Boutin and Kemper in Adv. Appl. Math (2004)?
As with the result by Widdowson et al. (2023), the work of Boutin and Kemper (2004) is not a competitor of our study but rather one that could potentially improve our results by being incorporated into it. We will also mention this in the final version.
> Where is this claim proved in the submission?
This would be well-known in the field of computational topology. In the rebuttal, we wrote, “We will include this information related to the computational cost in the final version.”
> Is it possible to explain the conflicting quotes from the rebuttal below and justify any claimed beliefs by rigorous arguments?
>
> Quote 1. "Due to the high computational cost of our method, we could not apply our method to large point clouds, which makes the accuracy lower."
>
> Quote 2. "We believe our method is effective for this task."
>
> Quote 3. "We did not claim that we had proposed a new and efficient method to analyze protein datasets."
The three quotes you mentioned are not contradictory. We provide further details as follows.
Quote 1: We proposed the idea of learning filtration to achieve higher classification accuracy. We demonstrated its effectiveness for datasets with a small number of points. However, applying this to large datasets is currently challenging due to computational costs. This is our future work.
Quote 2: We believe that learning filtration makes it possible to extract more suitable information for classification by PH, which leads to an improvement in classification accuracy. We indeed demonstrated its validity in our experiments.
Quote 3: The experiments on the protein dataset were presented as an example to demonstrate the effectiveness of learning filtration. We do not aim to propose an "efficient" method from a resource consumption perspective. This quote is meant to address your review query regarding the sustainability of our research in terms of resources.
---
Rebuttal 2:
Comment: - The second to fourth paragraphs in the weaknesses are not directly related to our paper. They read as mere disparagement of the persistent homology community, based on the misunderstanding that “there was no high-level problem that persistent homology solves”. Please see the first part of our rebuttal.
- We did not really understand the sentence “Since the authors have written detailed and accurate mathematical proofs in the appendix, they could probably agree that *examples prove nothing* because counter-examples can still exist, especially when all possible data (as for point clouds) fill a continuous space.” in Limitations. We tried to answer it within our understanding, but could you explain it in more detail?
- The question “What resources (money, people, even electricity and water) are needed not only to get the first predictions but to annually update predictions with new training data? Is it really sustainable in the long term?” in Limitations is not related to our study. We did not claim that we had proposed a new and efficient method to analyze protein datasets. Could you explain what you meant by this question? | Summary: A neural network that learns the filtration for persistent homology on given point cloud data is introduced, theoretically justified, and evaluated experimentally on 2 data sets.
Strengths: (S1) If this is indeed the first work that considers learning filtrations on point clouds, I find the idea very relevant.
(S2) The filtration learning approach is very nicely motivated and described (Lines 185 – 196).
Weaknesses: (W1) The need for learned filtrations should be better motivated. Think of an example point cloud where some other learnable filtration is more meaningful than Rips or DTM, visualize all three filtrations and their PDs. For example, we know that DTM is more suitable than Rips in the presence of outliers, but when is another filtration better than Rips and DTM? This would provide guidance to readers when it would make sense to use your approach, instead of relying simply on the Rips and DTM filtration. From Table 1 results, it seems that some answers might lie in the protein data set you consider.
(W2) [1] and [2] seem to be related work, but are not referenced?
(W3) The improvement with a learned filtration is good for protein classification, but it is certainly not convincing for the 3D CAD data (even though the latter is much more complicated and computationally demanding). It would therefore be useful to provide more information about the data (including visualizations), and more detailed insights. I took a look at Appendix C, but this does not provide answers to these questions. Negative insights are also meaningful, i.e., that learning a filtration might not be useful for a lot of problems, and that relying on the Rips or DTM filtration is good enough.
(W4) You write: “… the classification accuracy is better when our method was concatenated with DeepSets/PointNet compared to using DeepSets/PointNet alone. The accuracy when we combine our method and PointMLP is not higher than that of PointMLP. This would mean that concatenating the topological feature is effective when the number of parameters of a DNN-based method is relatively small.” I find this argument extremely flawed, since the number of parameters for PointNet and PointMLP is very comparable?
[1] Zhang, Simon, Soham Mukherjee, and Tamal K. Dey. "GEFL: Extended Filtration Learning for Graph Classification." Learning on Graphs Conference. PMLR, 2022.
[2] Horn, Max, et al. "Topological graph neural networks." arXiv preprint arXiv:2102.07835 (2021).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: (Q1) PH wrt some filtrations (e.g., height, useful to distinguish MNIST digit 6 from digit 9) is not isometry-invariant, so why do you impose this condition?
(Q2) Figure 1: Comment more why this approach is meaningful, since 2 holes are not recognized with 1-dim PD for any of the two point clouds? Consider plotting (next to) more reasonable weights.
(Q3) You never mention simplicial complexes, so that it remains unclear why the filtration discussed on lines 143-144 is Rips, and not Čech?
(Q4) How does your weighted Rips filtration compare to the weighted Rips filtration discussed in [3] (Proposition 3.5), where point cloud point x appears according to its filtration function value (weight) f(x), and an edge (x, y) appears when f(x), f(y) and distance d(x, y) satisfy certain properties?
(Q4) Related to (Q3) and (Q4), do all point cloud points immediately appear in your filtration? In particular, if a point has a really large weight, the balls centered at this point will expand very late in the filtration, but will the point (ball with radius 0) be there from the beginning? This is important e.g. if the point is an outlier, since we commonly want to ignore such a point.
(Q5) “Although (a) and (b) can be learned together since the output of the resulting feature is differentiable with all parameters, it would make the optimization unstable.” Why, can you explain more?
(Q6) In Section 5, can you provide more intuition on what is captured with DistMatrixNet?
Other minor comments:
- The homology is persistent (not persistence homology), but we talk about persistence landscapes and images (not persistent landscape or image), rephrase throughout the paper.
- Line 86: Explicitly mention DeepSets.
- Line 142: “one can take function S defined by” -> “one can take S to be the distance to point cloud, defined by”
- Line 156: “which can also [add: be] computed only by distance matrices”. Do you not need the weights too?
- Line 168: What is function u, where is it used?
- Line 169: “we can vectorize persistent a persistent diagram”. Rephrase.
- Line 234: Do topological features and DNN features have to have the same dimension L?
- Line 251: “we can approximate any continuous function on X x [0,1]^m can be approximated”. Rephrase.
- Line 253: Cite specific Appendix.
- Line 283: “we replaced our topological featured replaced”. Rephrase.
- Line 285: “Note that we use DistMatrixNet not for the computation of filtration weights.” This sentence seems weird, what do you mean?
- Table 2 caption: “concatenated the feature” -> “concatenated with the feature”?
- Mention explicitly that the code is made publicly available.
- References: Check capitalization of acronyms in paper titles (e.g., Dtm, Perslay, Ripsnet, Toposeg, Homcloud, Pointnet, 3d, Pointet++, Pi-net, 3d shapenets, Sgmnet, …), and be consistent between capital case vs. lower case for journal names.
[3] Anai, Hirokazu, et al. "DTM-based filtrations." Topological Data Analysis: The Abel Symposium 2018. Springer International Publishing, 2020.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations and corresponding future research directions are mentioned, but they do not address the lack of insights on when a learned filtration can be expected to be beneficial (compared to e.g. Rips and DTM filtrations).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed comments and suggestions for improving our paper. We will reflect all the comments and suggestions in our final version. In the following, we respond to specific concerns and questions raised by the reviewer.
> (W1) The need for learned …
Thank you for your important suggestion. To clarify the validity of learning the filtration with our method, in Section 1 we will add the sentence “We will show experimental results demonstrating that our method improves classification accuracy compared to using the Rips or DTM filtration”.
One example that motivates learning the filtration is the point cloud in Figure A in the global response. In this example, the trained weight function gives large values to the outliers. Although this is similar to the DTM function, we remark that such a weight function was obtained in a data-driven and supervised way, without any prior information (other than the data). We believe that the fact that our method can learn suitable filtrations for classification without any prior specification, as in this example, motivates the use of our method.
> (W2) [1] and [2] seem …
We are grateful to you for pointing out the lack of references. We will add these references in the final version.
> (W3) The improvement with …
Thank you for your constructive comments. We attached some visualizations of the learned weights for the 3D CAD data (Figure B). We can observe that the points in some of the point clouds are assigned appropriate weights to be classified correctly, while there exist point clouds in which all points are assigned a weight of 0. This suggests that learning the filtration with our method contributes to improving classification accuracy for some of the data, while it is not effective for others.
On the other hand, as you pointed out, the accuracy improvement by the proposed method on the 3D CAD data is not remarkable. Based on this, in the final version we will describe that the Rips or DTM filtration is effective enough for some data (such as the furniture surface data), so that our method does not lead to further improvements in accuracy there, while learning the filtration with our method is beneficial for other data (such as the protein data).
> (W4) You write: “…” I find this …
Thank you for your essential remark. After reviewing the results, we now believe that the lack of accuracy is due to incompatibility with PointMLP, rather than to the large number of parameters. We hypothesize that PointMLP has already captured enough information, including topological features, during the 1st phase. If so, the information obtained by persistent homology may be redundant, potentially negatively impacting the classification. We will appropriately replace the current observation with the hypothesis above in the final version.
> (Q1) PH wrt some …
In this paper, we focus on the classification of point clouds. In this setting, it is natural to impose isometry-invariance.
> (Q2) Figure 1: Comment more …
Thank you for your great idea on the image explaining the procedure of our method. We will replace Figure 1 with Figure A in the PDF as we stated in the global response.
> (Q3) You never mention …
> (Q4) How does your …
> Related to (Q3) and (Q4), …
We appreciate your comments that we should add some explanation of simplicial complexes and the Vietoris-Rips filtration. Due to the page constraints, we could not include them in the main text. We instead described the increasing family of balls, which gives an intuitive understanding, but this was not fully rigorous. We will modify the sentence from line 145 and create a new section in the appendix giving detailed explanations of simplicial complexes and the Čech and Rips filtrations. More concretely, we will change the sentences from line 145 as follows: “The persistent homology of this filtration can be captured by the filtration called the Čech or Vietoris-Rips filtration (Rips filtration for short). In this paper, we use the Rips filtration for computational efficiency. See Appendix … for details”. In the same section of the appendix, we will also describe the weighted Rips filtration we used, which is exactly the same as the one defined in Section 3.3 of Anai et al. (2020).
> (Q5) “Although …” Why, can you explain more?
The loss function is differentiable with respect to all of the parameters in the networks (a) and (b), but it is not smooth with respect to the parameters in (a). Because of this, if we optimize the networks in (a) and (b) jointly, neither the resulting network of (a) nor that of (b) is (empirically) optimized well. This is why we optimized the networks in (a) and (b) separately. We will add these details in the final version.
> (Q6) In Section 5, …
We believe that DistMatrixNet can extract information from point clouds using relative distance information. The experimental results show that DistMatrixNet is effective when used as a weight function in the proposed method, but not when used directly for classification. This might mean that DistMatrixNet can effectively capture the role of each point within the point cloud, but is not suitable for extracting information that distinguishes its global shape. Further investigation of the role of DistMatrixNet is left as future work.
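To make this intuition concrete, here is a heavily hedged numpy sketch of how a network acting on a distance matrix can produce per-point (rather than global) information. We do not know the actual DistMatrixNet architecture; the pooled statistics and the layer matrix `A` below are purely illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 8))  # made-up layer; maps pooled row statistics to features

def per_point_features(dist_matrix):
    """Hypothetical sketch of a network acting on a distance matrix:
    each point's feature is computed from permutation-invariant
    statistics of its own row of distances, so relabeling the other
    points does not change the output for that point."""
    D = np.asarray(dist_matrix, dtype=float)
    stats = np.stack([D.mean(axis=1), D.max(axis=1)], axis=1)  # shape (n, 2)
    return np.tanh(stats @ A)                                  # shape (n, 8)
```

Because each point's feature depends only on order-invariant statistics of its distance row, relabeling the points permutes the output rows consistently, which is the kind of "role of each point" information described above.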
> Other minor comments:
We appreciate your detailed comments and suggested fixes. We will fix every mistake in the final version.
> **Limitations:** … they do not address …
Thank you for your important remarks that the description in the limitation section is not enough. In the final version, we will add some insights on the learned filtration compared to Rips and DTM filtration, including negative ones.
We hope that we addressed all your questions and concerns adequately. In light of our clarifications, please consider increasing your score to accept. Please let us know if we can provide any further details and/or clarifications.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I am happy to see that you agree with most of the suggestions, but I do worry whether and to what extent the improvements will be included in the revised version of the manuscript, since the new general .pdf consists only of two additional figures.
Some final comments:
- I don’t think you answered (Q4), about the filtration function value on the simplices that are vertices?
- I definitely agree that Figure A much better describes your motivation than the previous Figure 1. Reference the specific plots (a)-(d) in the caption, or remove the labels.
- Figure B is very nice too. I wonder though, it would be much more interesting to see similar plots for the protein data where learning the filtration is yielding better results than Rips and DTM? It would be particularly nice to also add plots here with the value of the DTM filtration function on the point cloud points (the average distance from a number of nearest neighbors), to see how your learned weights differ from the DTM filtration, and gain some insights on what was essential for the good performance of your method. Plotting some sets in the filtration, and the resulting persistence diagrams, for some interesting point clouds (where the learned weight is different from DTM) would also be very interesting to see.
- Why do you include the first row in Figure B, where the color represents the x coordinate; what information does this give us? For the second row, I don’t understand how you see that “points in some of the point clouds would have appropriate weight to be classified correctly”. You should also include the color legends in both figures, to make it clear which points end up having lower weight.
- I would suggest not undermining your method too much, and rewrite “while is not effective for some data” in the Figure B caption to “while the learning of the filtration does not have an added value compared to the standard Rips filtration for some other data”, also to improve clarity.
- It would be interesting to try to gain some understanding on *why* learning the filtration is beneficial for the protein data (and not for the furniture data). Experiments on more data sets would be very helpful in this direction. In any case, visualizing the two data sets has now become even more important (e.g., at least by including an analogous figure to Figure B for the protein data, as already suggested above).
- To improve the impact of the paper, I would suggest including a small Jupyter notebook as a part of your code, allowing the user to visualize the learned weights for their problem at hand. This will not influence my final rating of the paper, and is obviously up to you to decide if it makes sense.
---
Reply to Comment 1.1.1:
Comment: We appreciate your beneficial comments.
> I don’t think you answered (Q4), about the filtration function value on the simplices that are vertices?
Each vertex $x$ appears at $t=f(x)$, so it is not present at the beginning if its weight is greater than zero. This is the same as the weighted Rips filtration defined in Section 3.3 of Anai et al. (2020).
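To illustrate, here is a minimal numpy sketch of this convention (our reading of the $p=1$ weighted Rips of Anai et al. (2020), in the ball-growth convention: a ball at $x$ starts growing at time $f(x)$, and an edge appears when the two balls meet). The function names are ours, for illustration only:

```python
import numpy as np

def edge_filtration_value(fx, fy, dxy):
    # p = 1 weighted Rips convention: balls start growing at times
    # f(x) and f(y); the edge {x, y} appears when they meet.
    if dxy <= abs(fx - fy):
        return max(fx, fy)        # one ball reaches the other point first
    return (fx + fy + dxy) / 2.0  # balls meet in between

def weighted_rips_edges(points, weights):
    """Filtration values: vertex i appears at weights[i],
    edge {i, j} at edge_filtration_value(...)."""
    n = len(points)
    vals = {}
    for i in range(n):
        vals[(i,)] = weights[i]
        for j in range(i + 1, n):
            d = float(np.linalg.norm(points[i] - points[j]))
            vals[(i, j)] = edge_filtration_value(weights[i], weights[j], d)
    return vals
```

With all weights equal to zero this reduces to the ordinary Rips construction in the ball-radius convention, where an edge appears at half the pairwise distance.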
> I definitely agree that Figure A much better describes your motivation than the previous Figure 1. Reference the specific plots (a)-(d) in the caption, or remove the labels.
Thank you for your favorable feedback on Figure A. We will add a precise description in the caption, referring to (a)-(d). (We omitted some of the descriptions in the caption that are the same as in the original Figure 1.)
> I wonder though, it would be much more interesting to see similar plots for the protein data where learning the filtration is yielding better results than Rips and DTM? It would be particularly nice to also add plots here with the value of the DTM filtration function on the point cloud points (the average distance from a number of nearest neighbors), to see how your learned weights differ from the DTM filtration, and gain some insights on what was essential for the good performance of your method. Plotting some sets in the filtration, and the resulting persistence diagrams, for some interesting point clouds (where the learned weight is different from DTM) would also be very interesting to see.
> It would be interesting to try to gain some understanding on why learning the filtration is beneficial for the protein data (and not for the furniture data). Experiments on more data sets would be very helpful in this direction. In any case, visualizing the two data sets has now become even more important (e.g., at least by including an analogous figure to Figure B for the protein data, as already suggested above).
Thank you for your constructive comments. We will additionally visualize the weight function for the protein dataset, compare them to the DTM filtration and observe the persistence diagrams for some of the point clouds to demonstrate the effectiveness of our method.
We will include these results in the paper as far as the page constraints allow. (If there is not enough space, we will add them to the Appendix.)
> Why do you include the first row in Figure B, where the color represents the x coordinate, what information does this give us? For the second row, I don’t understand how do you see that “points in some of the point clouds would have appropriate weight to be classified correctly”? You should also include the color legends in both figures, to make it clear which points end up having lower weight.
We appreciate your important questions and comments.
We included a colored point cloud based on the x-coordinates in order to enhance the visibility of the surface shape when plotting a 3D point cloud on a 2D plane.
Regarding the weight function learned by our method, for example, in the rightmost point cloud, it appears that holes are formed at the upper and lower parts of the point cloud by the points with smaller weights.
It might not be clear how this weight function is effective in the classification from these figures, so we will also present the associated persistence diagrams.
Furthermore, we will include the color legends, as you suggested.
> I would suggest not undermining your method too much, and rewrite “while is not effective for some data” in the Figure B caption to “while the learning of the filtration does not have an added value compared to the standard Rips filtration for some other data”, also to improve clarity.
Thank you for your helpful comment. We will replace the expression in the caption of Figure B.
> To improve the impact of the paper, I would suggest to include a small Jupyter notebook as a part of your code, allowing the user to visualize the learned weights for their problem at hand. This will not influence my final rating of the paper, and is obviously up to you to decide if it makes sense.
Thank you for your constructive suggestion.
We will publish the source code that we used to visualize the weights of each point as a Jupyter notebook. | Summary: This paper investigates the extraction of global topological features using the framework of persistent homology. The authors have proposed a neural network architecture to learn the filtration weights for each point in an end-to-end and data-driven manner, which is later supported by an approximability theorem. Additionally, a two-phase training procedure is introduced to further improve the performance of the proposed architecture. The proposed framework is then applied to different tasks such as protein 3D CAD classifications.
Strengths: 1. [Originality] The filtration weight is usually constructed without considering the label information, e.g., a constant for VR complex and k-NN info for DTM. The authors proposed a framework to learn the weights for each point from the label using a neural network.
1. [Quality] Being able to provide an approximability theorem to support/motivate the choice of the neural network architecture is nice.
1. [Clarity] Great overview on the persistent homology as well as the limitations of the current framework. The author laid out the contribution in the very beginning of the introduction which can help present the novelty of this work. Overall, a well-written paper.
1. [Significance] The TDA framework has long suffered from the issue of topological noise. Being able to propose a way to automatically learn a weight function that suppresses this noise is an important contribution to the field.
Weaknesses: 1. It looks like there are two contributions in this paper: 1) propose a way to learn the weighted filtration as per Section 3.1, with theoretical guarantee in Section 4; and 2) Section 3.2, on the ability to approximate with a composite function. The contribution in 1) is more related to filtration learning as it learns a weighting function, but 2) is a more general use case. However, by looking at the experimental results, it is not clear which contribution is more significant, i.e., the performance gain in Table 1 comes from 1), but the gain in Table 2 comes from 2). It would be nice to consolidate the comparisons between different experiments and provide some discussions. That way, the readers can get a better understanding of the interplay between different contributions.
1. It is not clear to me from reading the main text on how you choose the hyper-parameters for DTM. From reading the appendix, it looks like the parameters are not chosen by cross-validation (Section C.2). Is there any specific reason to not choose this parameter end-to-end using cross-validation? How will a different $q$ and $k$ affect the prediction performance in the protein datasets?
1. It looks like Theorem 4.1 suggests the function can be approximated, but it does not mention whether it can be recovered. Will it be possible to get some results regarding the convergence (i.e., with $n \to \infty$, will the $\epsilon$ shrink?) and/or whether $\psi_1$ and $\psi_2$ can be estimated by the proposed architecture (i.e., how close $\psi_1$ and $\psi_2$ to $h$ and $\phi^{(6)}$, respectively).
1. Related to #4, can we support Theorem 4.1 by running a synthetic example? Specifically, can we show that the true weighting function $f$ can be recovered by the $f_\theta$ in Figure 2?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. TDA is also used as data-analysis/unsupervised purposes (e.g., in finding enclosing holes) [A-C], I am curious whether the learned filtration weight $f(X, \cdot)$ can reveal the true topological structures or if one can design some sort of loss function and learn the weight function accordingly?
1. How to correctly understand the connection between the architecture (in Figure 2) and Theorem 4.1? The weighting function $f_\theta(X, x_1)$ is a concatenation of $h(X)$ and $g_1(x)$, but in Theorem 4.1, the $f(X, x)$ can be approximated by a concatenation of $\varphi_1(X)$ and $x$ itself.
1. [Minor language issue] When I first read the paper, it is not clear what the “architecture” and “approximation result” in L12-13 meaning (original sentence: “Additionally, we theoretically show a finite-dimensional approximation result that justifies our architecture.”). Consider adding some details there to improve clarification. For instance, you might want to change it to something like this: “Additionally, we theoretically show a finite-dimensional approximation \textbf{of any filtration function}, which \textbf{justifies (or motivates) the proposed neural network architecture}.”
---
[A] Wasserman, Larry. “Topological Data Analysis.” Annual Review of Statistics and Its Application 5 (2018): 501–32.
[B] Chen, Yu-Chia, and Marina Meila. “The Decomposition of the Higher-Order Homology Embedding Constructed from the k-Laplacian.” Advances in Neural Information Processing Systems 34 (2021).
[C] Wu, Pengxiang, Chao Chen, Yusu Wang, Shaoting Zhang, Changhe Yuan, Zhen Qian, Dimitris Metaxas, and Leon Axel. “Optimal Topological Cycles and Their Application in Cardiac Trabeculae Restoration.” In International Conference on Information Processing in Medical Imaging, 80–92. Springer, 2017.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have addressed the limitations of their work. Negative social impact statement is not necessary in this work, as the primary focus of this manuscript lies in its theoretical contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the detailed comments and suggestions for improving our paper. We will reflect all the comments and suggestions in our final version. In the following, we respond to specific concerns and questions raised by the reviewer.
> Weakness 1. It looks like there are two contributions in this paper: 1) propose a way to learn the weighted filtration as per Section 3.1, with theoretical guarantee in Section 4; and 2) Section 3.2 in the ability to approximation with composite function. The contribution in 1) is more related to filtration learning as it learns a weighting function, but 2) is a more general use case. However, by looking at the experimental results, it is not clear which contribution is more significant, ...
Section 3.2 describes how to combine topological features with more general features obtained by DNNs to solve classification tasks; it does not describe an ability to approximate with a composite function. The first experiment in Section 5 was conducted to show the validity of our method using only the topological feature, without DNN features. In the second experiment, on the other hand, we combine the topological features with features from DNNs using the method described in Section 3.2. In the final version, we will clearly state the difference between the two experiments at the beginning of Section 5.
> Weakness 2. It is not clear to me from reading the main text on how you choose the hyper-parameters for DTM. From reading the appendix, it looks like the parameters are not chosen by cross-validation (Section C.2). ...
Thank you for your important remark. As you pointed out, the experimental results in the main text were not selected by cross-validation, although we reported experiments with several hyper-parameters in the Appendix. In the final version, we will fix them as shown in the global response.
> Weakness 3. It looks like Theorem 4.1 suggests the function can be approximated, but it does not mention whether it can be recovered. Will it be possible to get some results regarding the convergence ... and/or whether $\psi_1$ and $\psi_2$ can be estimated by the proposed architecture...
> Question 2. How to correctly understand the connection between the architecture (in Figure 2) and Theorem 4.1? ...
Thank you for your essential remark. Since we proved Theorem 4.1 to demonstrate the validity of our idea of concatenating the (finite-dimensional) global feature $\psi_1(X)$ (which corresponds to $h(X)$) with the local feature $x$ (which corresponds to $g(x)$), it does not clarify whether $\psi_1$ or $\psi_2$ can indeed be recovered by the network architecture proposed in this paper. We leave it as future work to provide stronger theoretical results for our network or to propose a new architecture with a complete approximation guarantee.
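For intuition, here is a minimal numpy sketch of the concatenation idea discussed here. The layer matrices and sizes are made up for illustration and are not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))  # hypothetical per-point encoder (global branch h)
W2 = rng.normal(size=(3, 8))  # hypothetical local branch g
V = rng.normal(size=16)       # hypothetical head on the concatenation [h(X); g(x)]

def weight_function(X, x):
    """DeepSets-style f_theta(X, x): a permutation-invariant global
    feature h(X) (sum-pooled per-point encodings) is concatenated with
    a local feature g(x) and mapped to a scalar filtration weight."""
    hX = np.tanh(X @ W1).sum(axis=0)  # invariant to the order of points in X
    gx = np.tanh(x @ W2)
    return float(np.concatenate([hX, gx]) @ V)
```

Sum pooling makes $h(X)$ invariant to reordering the points, so the resulting filtration, and hence the persistence diagram, is well defined on the point cloud as a set.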
> Weakness 4. Related to #4, can we support Theorem 4.1 by running a synthetic example? Specifically, can we show that the true weighting function $f$ can be recovered by the $f_\theta$ in Figure 2?
We appreciate your constructive suggestion that we should support Theorem 4.1 by running a synthetic example.
In this setting, we do not have a “true weight function”; we simply search for a filtration that achieves high classification accuracy.
We have an experimental result supporting Theorem 4.1, which shows that our network can recover the DTM function, as shown in the table below. The table reports the error when our network is trained on the regression task of approximating DTM functions. This result means that our method can choose filtrations from a space that includes the Rips and DTM filtrations if trained appropriately. We will add this result in the final version.
| value of k | error |
|:----------:|:---------------:|
| 0 (Rips) | 0.0000 ± 0.0000 |
| 2 | 0.0015 ± 0.0000 |
| 3 | 0.0016 ± 0.0000 |
| 4 | 0.0017 ± 0.0000 |
| 5 | 0.0018 ± 0.0000 |
| 10 | 0.0022 ± 0.0001 |
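For reference, the DTM-style weight that the network is regressing onto in this experiment can be computed as follows. This is a hedged numpy sketch using the common power-mean convention, treating k = 0 as plain Rips (all weights zero, as in the table); it is not the authors' code:

```python
import numpy as np

def dtm_weights(points, k, q=2):
    """Empirical distance-to-measure weight for each point: the q-th
    power mean of its distances to its k nearest neighbors.  Here k = 0
    is treated as the plain Rips case, i.e., all weights are zero."""
    points = np.asarray(points, dtype=float)
    if k == 0:
        return np.zeros(len(points))
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]  # drop each point's own 0 distance
    return (np.mean(knn ** q, axis=1)) ** (1.0 / q)
```

As in the motivating example discussed above, outliers receive large DTM weights, so their balls enter the filtration late and the topological noise they would create is suppressed.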
> Question 1. TDA is also used as data-analysis/unsupervised purposes (e.g., in finding enclosing holes) [A-C], I am curious whether the learned filtration weight $f_\theta(X, \cdot)$ can reveal the true topological structures or ...
Thank you for your interesting comments. We believe that we can sometimes get some insights on the topological structure of the point cloud data $X$ from the learned filtration weight $f_\theta(X, \cdot)$. For instance, for the point clouds shown in Figure A (in the PDF of the global response), one can find that the points with large weights are outliers. We remark that these weights were automatically learned in a data-driven way without any prior information.
As for the loss function, we are currently using the classification loss as a loss function, but we can also consider other types of loss functions.
For example, if we consider the reconstruction task and its loss, one might obtain the filtration weight that is effective in extracting all of the topological information in the point clouds.
In fact, we have conducted such an experiment and obtained appropriate weights in some cases. We leave it as future work to investigate further how using other types of loss functions affects the filtration weights.
> Question 3. [Minor language issue] When I first read the paper, it is not clear what the “architecture” and “approximation result” in L12-13 meaning … Consider adding some details there to improve clarification. For instance, …
Thank you for pointing out this issue and suggesting an alternative sentence. In the final version, we will change the expression to clarify how our theoretical result contributes.
We hope that we addressed all your questions and concerns adequately. In light of our clarifications, please consider increasing your score to accept. Please let us know if we can provide any further details and/or clarifications.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which answers some of the questions I had! Given that TDA/PH is also used for data analysis, I highly suggest you discuss those works in the final version of the paper.
I think the paper is still borderline, but I am now leaning toward accepting it. I raised my score to 5 as a result.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We will include the discussion in the final version. | Rebuttal 1:
Rebuttal: We appreciate the detailed comments and suggestions for improving our paper. We will reflect all the comments and suggestions in our final version. In the following, we show you some information that we would like to share with all of the reviewers.
1. A reviewer pointed out that Figure 1 is confusing. To address this issue, we would like to replace Figure 1 with Figure A in the attached PDF.
2. A reviewer pointed out that the hyper-parameter $k$ and $q$ in DTM filtration should be chosen to maximize the average classification accuracy with cross-validation. While the experimental results for DTM with different parameters were included in the Appendix, the results in the main text were not for optimal parameters, so we will replace this with the following table in the final version.
For protein data:
| DistMatrixNet | Rips | DTM | Ours |
| --- | --- | --- | --- |
| 65.0 ± 12.0 | 79.9 ± 3.0 | 78.0 ± 1.6 | 81.9 ± 2.1 |
For 3D CAD data:
| | | DeepSets | PointNet | PointMLP |
| --- | --- | --- | --- | --- |
| 1st Phase | | 65.7 ± 1.4 | 64.3 ± 4.4 | 68.8 ± 6.3 |
| 2nd Phase | DistMatrixNet | 65.7 ± 4.8 | 55.7 ± 13.9 | 53.8 ± 7.4 |
| | Rips | 67.0 ± 2.6 | 68.4 ± 2.4 | 57.8 ± 12.4 |
| | DTM | 68.0 ± 2.5 | 68.7 ± 2.3 | 57.2 ± 6.8 |
| | Ours | 67.5 ± 2.5 | 68.8 ± 2.0 | 60.0 ± 6.3 |
3. A reviewer pointed out we should visualize the resulting weight function learned by our method. We showed some examples of the weight function for 3D CAD data in Figure B in the attached PDF. We will add this figure in the appendix.
We also respond to the comments from each reviewer. If there are further questions/comments/suggestions, we would be happy to address them in the discussion period.
Pdf: /pdf/76d42cd95ad7176b736b02256ecc7e362f4ec200.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a novel framework to obtain adaptive topological features for point clouds based on persistent homology by introducing an isometry-invariant network architecture for a weight function and proposes a way to learn a weighted filtration. The work theoretically proves that any continuous weight function can be approximated by the composite of two continuous functions which factor through a finite-dimensional space. The experiments on public datasets show that the proposed method improves the accuracy in classification tasks.
All of my questions are carefully addressed in the rebuttal. The work is theoretically solid and has potential.
Strengths: The idea of learning the filtration via the weight function is novel. The approximation-ability theorem is important. The architecture design is based on this theorem and on isometry-invariance, which is rigorous and interpretable. The paper is well written; all the concepts, math symbols, and theorems are thoroughly explained. The derivations are clear and easy to follow. The experimental results are convincing.
Weaknesses: It is not clear why different weight functions affect the quality of the topological features, or what the criteria for the persistence diagrams are. Some theoretical explanation and numerical demonstration would be helpful.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What are the criteria for good weight functions? What are the theoretical explanations for them?
2. For the purpose of classification, the extracted topological features improve the results. It is unclear how the loss functions are differentiable with respect to the topological features and, in turn, the weight functions. Homology is intrinsically discrete, so the differentiability is not obvious. Some explanation would be helpful.
3. Are the weight functions affected by the point cloud quality? For example, if the scanning quality is improved, how does that affect the weight function?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The work could be further improved by using more realistic examples and comparing with state-of-the-art models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback for improving our paper. We will reflect all the comments and suggestions in our final version. In the following, we respond to specific concerns and questions raised by the reviewer.
> It is not clear why different weight functions affect the qualities of the topological features, and what are the criteria for the persistent diagrams. Some theoretical explanations and numerical demonstration will be helpful.
> Question 1. What are the criteria for good weight functions? What are the theoretical explanations for them?
Persistent homology depends on the weight function; the Rips filtration is the case where the weights are all zero, and some studies (for example, Anai et al. (2020)) have used the DTM function to deal with outliers. In this study, we took the approach of learning these weights in a data-driven, supervised manner in order to increase classification accuracy. Therefore, we can say that classification accuracy is the criterion for the weights in this case.
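As context for the DTM function mentioned above, here is a minimal illustrative sketch (not the authors' code) of a k-nearest-neighbor form of the distance-to-measure weight; the function name `dtm_weight` and the exact normalization are assumptions for illustration:

```python
from math import dist, sqrt

def dtm_weight(x, points, k):
    """Illustrative DTM weight of a point x: root-mean-squared distance
    from x to its k nearest neighbors in the point cloud."""
    dists = sorted(dist(x, p) for p in points)
    return sqrt(sum(d * d for d in dists[:k]) / k)

# An outlier far from the cluster gets a larger weight, so it enters the
# weighted filtration later and contributes less topological noise.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
w_in = dtm_weight((0.05, 0.05), cloud, k=3)
w_out = dtm_weight((5.0, 5.0), cloud, k=3)
assert w_out > w_in
```

The learned weight function discussed in the rebuttal replaces such a fixed rule with a trainable network optimized for classification accuracy.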
> Question 2. For the purpose of classification, the extracted topological features improves the results. It is unclear how the loss functions are differentiable with respect to the topological features and in turn the weight functions. The homology is intrinsically discrete, the differentiability is not obvious. Some explanation will be helpful.
Thank you for your important suggestion. Although homology is intrinsically discrete, the differentiability of persistent homology has already been discussed in previous studies such as [1], [2], [3], and [4]. The differentiability of the loss function in our method can be directly derived from these results. We believe that such a differentiability argument is now standard and is not needed in the main text; we will add it to the appendix if necessary.
[1] M. Gameiro et al. A topological measurement of protein compressibility. *Japan Journal of Industrial and Applied Mathematics*, 32:1–17, 2015.
[2] C. Chen et al. A topological regularizer for classifiers via persistent homology. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pages 2573–2582. PMLR, 2019.
[3] M. Carrière et al. Optimizing persistent homology based functions. In *International Conference on Machine Learning*, pages 1294–1303. PMLR, 2021.
[4] J. Leygonie et al. A framework for differential calculus on persistence barcodes. *Foundations of Computational Mathematics*, pages 1–63, 2021.
> Question 3. Are the weight functions affected by the point cloud quality? For example, if the scanning quality is improved, how does that affect the weight function?
We are grateful for your intriguing question. We believe that the weight functions would be affected by the quality of the point clouds. However, we are currently not sure what that effect would be. To clarify this, in future work we will conduct experiments to investigate how our trained filtration changes when the scale of the noise in the dataset (for example, the variance of Gaussian noise) is varied.
We hope that we have addressed all your questions and concerns adequately. In light of our clarifications, please consider increasing your score toward acceptance. Please let us know if we can provide any further details and/or clarifications.
---
Rebuttal 2:
Comment: Dear reviewer,
Please **briefly acknowledge the rebuttal** by the authors and consider updating your score—we want to avoid borderline scores for reviews, and the discussion phase will close soon. If you have any additional questions to the authors please ask them **now**.
Thanks,\
Your AC | null | null | null | null | null | null |
CommonScenes: Generating Commonsense 3D Indoor Scenes with Scene Graph Diffusion | Accept (poster) | Summary: The paper looks at the problem of learning a generative model for sampling 3D environments from a description based on a graph and natural language. The graph corresponds to objects that participate in the scene, and textual descriptions attached to nodes and edges further specify the scene. Textual descriptions are encoded using CLIP. A variational autoencoder conditioned on these inputs is trained to generate the bounding boxes of the 3D objects. A latent diffusion model is trained in parallel to generate the 3D shape of the object in each bounding box. The existing 3D-FRONT dataset is augmented with textual descriptions for training this model. Experiments show the advantages and limitations compared to ablations and prior works.
Strengths: * Scene generation is an important problem and this paper addresses it with a responsible model using modern components in a competent manner.
* The paper contributes a large amount of additional labels for 3D-FRONT.
Weaknesses: * The presentation, especially of the technical parts, can be improved. It was difficult to understand how the layout autoencoder is set up. It would be useful to first give a high-level overview of what the method is trying to achieve (e.g., encoder Z = E(P, O, B), decoder \hat B = D(P, O, Z), Z Gaussian). Section 4.1 presents only half of the story, and we need to get to Section 4.2 before the autoencoder materialises. Line 143 literally states that the encoder is trained by minimising Eq. (4), which is clearly not enough -- Eq. (5) is also needed.
* You may need to revisit the formalism at lines 90 to 99. $c_i^{node}$ and the corresponding terms for the edges are very poorly defined. Furthermore, it also seems that more than one such attribute can be attached to each node or edge; the formalism does not allow for that.
* Some aspects of the model are unclear. See questions below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Please clarify how the vectors o in Figure 2 and line 99 are obtained. The paper only states that they are "learnable".
* The two shape and layout branches are said to be trained jointly (Section 4.4). However, the learning dynamics of auto encoder and diffusion processes (one per object) seem to me to be very different. Is it really that trivial to mix them?
* Line 172: the shape diffusion model connects to the *entire* graph, and so to all objects, via cross-attention. How does each instantiation of the diffusion process know *which* of the several objects it is meant to generate?
* Line 214: I don't know what it means to share the same CAD model source. Nor do I understand why it is sensible to compare a generated sample to a specific CAD instance -- I would not expect the two to match.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: There is sufficient discussion of limitations, but no discussion of ethics. The latter, however, is not particularly relevant for this paper. It may be worth at least discussing the licensing of 3D-FRONT.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Overview paragraph on optimization
Thanks for the useful advice! We now extend our overview section (L111-116) by explicitly cross-referencing the figures and subsections. Further, we rephrase the sentence in L143 as: *“We guide the distribution of the embedding space by employing the KL loss.”* Note that in Section 4.4 we already explained that we use the combination of all these losses to optimize the network, but we will clarify this necessity up front via a cross-reference.
## 2. The formalism in the preliminary
We have fixed L93 as follows to clarify the formalism:
*Each vertex $v_{i}$ is categorized through an object class $c^{node}_{i} \in \mathcal{C}^{node}$, where $\mathcal{C}^{node}$ denotes the set of object classes.* Note that in terms of notations, we follow the prior work [58].
## 3. Clarification about vectors $o$
The vectors $o$ in Figure 2 and L99 are learnable object embeddings: each object class is identified with one fixed-size embedding, initialized randomly and optimized over training. This is in the same vein as how Large Language Models use a learnable token per word in a dictionary.
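The mechanism described above can be sketched minimally as a class-to-vector lookup table; this is an illustrative sketch under assumed names (`EMB_DIM`, `embeddings`), not the authors' implementation (in practice these would be framework-level learnable parameters, e.g. a `torch.nn.Embedding`, updated by backpropagation):

```python
import random

# One fixed-size embedding per object class, initialized randomly;
# during training these vectors would be updated by the optimizer.
random.seed(0)
EMB_DIM = 8
classes = ["bed", "chair", "table"]
embeddings = {c: [random.gauss(0.0, 0.02) for _ in range(EMB_DIM)] for c in classes}

o_chair = embeddings["chair"]  # the vector o attached to a "chair" node
assert len(o_chair) == EMB_DIM
```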
## 4. Joint training of two branches
Joint training can be achieved: we train the layout VAE and the latent shape-diffusion branches together with the combination of losses (Eq. 7). It was indeed not trivial to mix them, which is why we discussed the potential implications in the Supp. Mat. (L155-170) and ensured synchronized training between the two branches with uniform sampling (Supp. Mat. Figure 9).
## 5. Cross-attention feature conditioning
The shape branch is conditioned on per-node relation embedding via cross-attention to generate per-node shapes, not the entire graph. We then run the diffusion for every node in the graph to populate the scene. We will clarify this in the final version.
## 6. CAD model comparison
The intention is to measure generation consistency in dining rooms, where dining chairs (and likewise dining tables) usually appear together as a matching suite (p.8 Figure 8). To collect the consistency ground truth in a dining room, for example, we collect the chairs that are decorated using the same textured CAD model; this tells us which chairs should form a suite after generation. We then calculate the CD between these chairs; the smaller the CD, the better. This metric is mainly used to test object-object consistency (p.8 Table 2).
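For reference, the CD (Chamfer distance) used in this consistency metric can be sketched as follows; this is one common symmetric, squared-distance variant under assumed names (`chamfer_distance`), and the paper's exact formulation may differ:

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets (lists of tuples):
    average nearest-neighbor squared distance in both directions.
    One common variant; not necessarily the paper's exact formulation."""
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    ba = sum(min(sq(q, p) for p in a) for q in b) / len(b)
    return ab + ba

# Two identical "chairs" (as point sets) have CD 0; a deformed copy has CD > 0.
chair = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
deformed = [(0.0, 0.0, 0.2), (0.0, 1.1, 0.0), (0.9, 0.0, 0.0)]
assert chamfer_distance(chair, chair) == 0.0
assert chamfer_distance(chair, deformed) > 0.0
```

A low CD between chairs generated for the same dining room then indicates they form a consistent suite.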
## 7. License discussion
We have discussed the ethics in Supp. Mat. L145-149. In terms of licensing, we will follow CC BY-NC-SA.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and encourage them to incorporate these clarifications in the final version of the paper. | Summary: Summary: This paper presents a generative model to generate 3D scenes from scene graphs. Their model is fully generative without the need of any shape database or embeddings. The 3D scenes generation model is pipelined into finding the scene layout and the construction the shapes of the nodes using a diffusion model. The authors constructed a scene graph dataset SG-FRONT from 3D-FRONT dataset and trained end to end to generate 3D scenes from the scene graph input. The paper reported qualitative, quantitative results with comparison to other sota models on the SG-FRONT dataset.
Strengths: 1. The paper curates a dataset (enriching an existing dataset with scene graph annotations) for 3D scene construction from scene graphs.
2. Code and dataset will be publicly available.
3. The generated 3D scenes and the quantitative numbers show the potential of the scene-graph-based approach for 3D scene reconstruction using a diffusion model.
Weaknesses: 1. In section 6, Compared Baselines, 'a fully generative method a text-to-shape generation model that follows a layout generation' does not have citations. Is it something the authors modeled for experimentation?
2. In Table 1, the authors need to specify which of the methods use a fully generative approach with text only. Also, clarifying in the caption of Table 1 how it is segmented into two main rows, and what 'Ours w/o SB' means, would help.
3. Sharing more details on how the authors curated the dataset from 3D-FRONT in the main paper might be helpful for the readers
4. Some possible typos:
line 47 : 'cues and fine local inter-object relationships.'
Lines 65 and 66: Incomplete sentence: 'Quickly after this progress, the 3D [53, 1, 58, 26], dynamic [44], robotic grounding [19, 45], spatio-temporal 4D [66], and controllable scene synthesis [29, 65, 54, 36, 13].'
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In section 6, Compared Baselines, 'a fully generative method a text-to-shape generation model that follows a layout generation' does not have citations. Is it something the authors modeled for experimentation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No negative societal impact.
No limitations addressed.
All the 3D scene generation is done on a synthetic dataset. How would the whole model perform on real scenes?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. The baseline based on the text-to-shape model
It is a method we modeled for experimentation; the motivation is introduced in L36-40 of the main paper. We also explain the implementation of this baseline in detail in Supp. Mat. Section 7. This baseline consists of a layout generator (the same as ours) and a single text-to-shape generator (SDFusion [8]). Given a scene graph describing the target scene, this baseline uses the layout branch to generate the bounding boxes, while concurrently feeding the category name of each node in the scene graph into the text-to-shape generator to generate object shapes, e.g., using the word “bed” with SDFusion to generate a plausible bed. Finally, the entire scene is synthesized by populating each shape within its corresponding bounding box. Here, we train the layout branch using the same settings as ours and train the text-to-shape generator following SDFusion.
## 2. Clarification on Table 1
Among the baselines evaluated, only “Layout+txt2shape” is a “fully generative approach with text only”. The two main rows are separated with respect to reliance on an external shape database for retrieval. “Ours w/o SB” refers to ours without the shape branch. We will clarify these aspects in the main paper to ensure that readers can fully understand the distinctions.
## 3. More dataset collection details in the main paper
We plan to extend Section 3 of the main manuscript with the details from Supp. Mat. Section 3, Dataset Details (L82-86), upon acceptance. At its core, we adopt a semi-automatic approach, detecting the spatial layout through bounding box extensions followed by human-annotator inspection (as in the prior works 3DSSG [53] and 4D-OR [66]). For annotating semantic-level edges, we resort to the 3D-FRONT [15] annotations, from which we extract object-level attributes (e.g., “same material as”, “same style as”).
## 4. Limitations, societal impact, and real-world performance
In Supp. Mat. Sec. 6, Discussion and Limitations, we addressed the negative societal impact (L145-149) as well as the limitations (L132-144). We plan to carry the essential parts over to the main manuscript.
Furthermore, for real-scene performance, we also provide a discussion and analysis in Supp. Mat. Section 4, Results on the 3DSSG dataset [53].
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors. All my concerns have been addressed. Also, thanks to the author for pointing out the limitation stated in the review, which I missed from the supplementary materials. | Summary: This paper addresses the task of controllable scene synthesis of indoor rooms, conditioned on a semantic scene graph that captures spatial, style and support relationships between objects in a scene. In particular, they introduce CommonScenes, a generative model capable of converting scene graphs into 3D scenes using diffusion models. Given a scene graph of the scene to be generated, they first enhance it using CLIP features computed on the inter-object relationships, as well as using the ground-truth bounding box annotations. Next, they leverage a triplet-GCN based relation network to propagate information among the objects in the scene and produce the Box-enhanced Contextual Graph (BCG). Given a BCG, they then utilize a dual-branch network that generates the final scene. The first branch (layout branch) is another triplet-GCN that generates 3D layout predictions and the second branch (shape branch) generates shapes for each node represented as SDF. The shape branch is simply a latent diffusion model. During training, CommonScenes considers supervision in the form of per-object bounding-box annotations (size, location, rotation), as well as per-object 3D meshes in the form of SDFs. To evaluate their model, the authors enrich the 3D-FRONT dataset with scene graph labels, by annotating per-object spatial, style and support relationships. The authors compare their model to several baselines and show that their approach yields more realistic scenes in terms of FID and KID scores.
Overall, I think this is a nice work that alleviates the need to rely on a library of assets to replace the generated bounding boxes with 3D objects. The proposed architecture is novel and seems able to consistently produce plausible scenes. That being said, I think the proposed pipeline is quite complex, as it consists of multiple submodules, and it requires conditioning in the form of semantic graphs that are relatively hard to acquire, hence the authors had to enrich 3D-FRONT. However, since the authors show that their model outperforms prior research, I am in favor of accepting this paper.
Strengths: 1. To the best of my knowledge, the proposed model is novel, and the authors clearly demonstrate that it consistently produces plausible scenes. I think this work is an important step towards alleviating the need for large libraries of assets when generating novel scenes. Relying on such libraries naturally restricts the diversity of the generated scenes to the diversity of the objects in the library. Although the proposed model is relatively complex, I believe it is a valuable work that could potentially inspire other works in this direction.
2. I think that the development of the SG-FRONT dataset, which extends 3D-FRONT with scene graph labels, is an important contribution of this work that can potentially facilitate other research projects; therefore, I strongly encourage the authors to make this dataset variant available upon the paper's acceptance.
3. I really appreciated the supplementary video that the authors provided. I particularly liked the intuitive explanation of the proposed method as well as the additional results. In the future, I hope that more authors will provide such high quality supplementary videos alongside with the paper submission.
4. Although the proposed model is quite complex, I think the paper is nicely written and easy to follow. I really liked the provided figures that provide a more high-level pictorial overview of the various components of the proposed pipeline.
Weaknesses: 1. The main weakness of this work is that it is relatively complex, as it consists of multiple sub-modules. One thing that was not 100% clear from the text was whether the authors perform two-stage training, namely first producing the BCG, as discussed in Sec 4.1, and then jointly training the shape and layout branches of CommonScenes, as discussed in Sec 4.2. It might be good to clarify this for the final version of the paper. In addition, for the Shape Decoding module (L164-179), I am not 100% sure whether the authors use some sort of class conditioning to train one LDM for all object types or use separately trained LDMs per object class. This might be good to clarify.
2. Although the paper focuses on scene synthesis, I think an important evaluation that is missing is measuring the quality of the generated shapes. In particular, the authors could generate a couple of rooms and then take the beds, chairs, nightstands, etc. and compute the COV and MMD of the generated objects w.r.t. the ground-truth objects. Although object generation is not the main focus of this work, I think this analysis would be valuable, since it is an integral component of the proposed model. I noticed that the authors tried to do this type of analysis in Sec. 1.1 of their supplementary, but I found it a bit odd that they report Chamfer Distance instead of MMD/COV. Is there any reason for this?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. For the ablations in Sec 6.3, for the model variant trained without context, do the authors omit both the CLIP features and the bounding-box features, or only the former? I think they should be ablated separately. Personally, I am not 100% convinced about the significance of the CLIP features. Can the authors please clarify?
2. I am wondering whether the authors tried to condition their scene generation also on the floor plan together with the semantic scene graph. Unless I am missing something, this could easily be done, e.g., by extracting features from the floor plan and concatenating them with the per-object features while creating the BCG. Do the authors think something as simple as that would work?
3. One thing that is a bit unclear to me is whether the first component of the pipeline, which generates the BCG using the triplet-GCN, is really necessary. Have the authors tried to directly use the scene graph enhanced with the CLIP features and the bounding box features? This might be an interesting ablation to provide for the final version of the paper.
4. One concurrent work that I think the authors should add to their reference list is Learning 3D Scene Priors with 2D Supervision, CVPR 2023. I think this work could also be an interesting baseline, as it can generate both the scene layout and the 3D shapes.
5. A minor comment/suggestion: In the caption of Figure 1 in the supplementary material the authors state "shows a huge diversity". I think huge might be slightly overclaiming. It might be better to tone it down a bit.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations of their work and the potential negative societal impacts in their supplementary material. I think they could also have provided some qualitative examples of the failure cases of their model to help the reader better understand the limits of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Training details
We perform one-stage training. As mentioned in the L129, the BCG is created and encoded on-the-fly by the contextual encoder $E_c$. We agree that the overview needs to indicate one-stage training clearly. As for LDM, we train a single LDM for all object types conditioned on the learned relation embeddings. We will clarify both aspects in the final version.
## 2. More object-level evaluation
Since our objective is scene generation, we use FID/KID as the main metrics for evaluating the scene-level generation quality. **On the other hand, we reported the CD as a metric to evaluate generation diversity, following the previous state-of-the-art method Graph-to-3D to enable a fair comparison** (more details in Supp. Mat. L16-17).
As requested, we report the MMD (x0.01) and COV (%) for evaluating per-object generation. We collect ground truth objects in each category within the test set and use the evaluation script from PointFlow [72].
**Table 1.** MMD(↓) comparison
| Method | Bed | Nightstand | Wardrobe | Chair | Table | Cabinet | Lamp | Shelf | Sofa | TV stand |
|:---------------:|:------:|:----------:|:--------:|:------:|:------:|:-------:|:------:|:------:|:------:|:--------:|
| Graph-to-3D | 1.56 | 3.91 | 1.66 | 2.68 | 5.77 | 3.67 | 6.53 | 6.66 | 1.30 | 1.08 |
| Ours | **0.49** | **0.92** | **0.54** | **0.99** | **1.91** | **0.96** | **1.50** | **2.73** | **0.57** | **0.29** |
**Table 2.** COV(↑) comparison
| Method | Bed | Nightstand | Wardrobe | Chair | Table | Cabinet | Lamp | Shelf | Sofa | TV stand |
|:---------------:|:------:|:----------:|:--------:|:------:|:------:|:-------:|:------:|:------:|:------:|:--------:|
| Graph-to-3D | 4.32 | 1.42 | 5.04 | 6.90 | 6.03 | 3.45 | 2.59 | 13.33 | 0.86 | 1.86 |
| Ours | **24.07** | **24.17** | **26.62** | **26.72** | **40.52** | **28.45** | **36.21** | **40.00** | **28.45** | **33.62** |
As shown in Tables 1 and 2, our method shows better performance in both MMD and COV, which highlights the object-level shape generation ability of CommonScenes.
We also calculate 1-nearest-neighbor accuracy (1-NNA, %), which directly measures distributional similarity in terms of both diversity and quality; this motivated us to include it in this rebuttal for reviewers' and readers' reference. The closer the 1-NNA is to 50%, the better the shape distribution is captured.
**Table 3.** 1-NNA(↓) comparison
| Method | Bed | Nightstand | Wardrobe | Chair | Table | Cabinet | Lamp | Shelf | Sofa | TV stand |
|:----------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| Graph-to-3D | 98.15 | 99.76 | 98.20 | 97.84 | 98.28 | 98.71 | 99.14 | 93.33 | 99.14 | 99.57 |
| Ours | **85.49** | **95.26** | **88.13** | **86.21** | **75.00** | **80.17** | **71.55** | **66.67** | **85.34** | **78.88** |
It can be observed that our method surpasses Graph-to-3D in the evaluation of distributional similarity. Coupled with the results in Tables 1 and 2, CommonScenes exhibits more plausible object-level generation than the previous state-of-the-art. We will extend our Supp. Mat. with these additional experiments. Please also check our summarized table in the attached PDF.
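As background on how 1-NNA is typically computed, here is a minimal illustrative sketch (hypothetical names `one_nna`, `euclid`; the actual evaluation uses the PointFlow script with point-cloud distances such as CD/EMD): the generated and reference sets are pooled, and each sample is classified by the set label of its leave-one-out nearest neighbor.

```python
def one_nna(gen, ref, d):
    """Leave-one-out 1-nearest-neighbor accuracy between a generated set
    and a reference set. ~50% means the two distributions are hard to
    tell apart (good); ~100% means they are easily separated (bad)."""
    labeled = [(s, 0) for s in gen] + [(s, 1) for s in ref]
    correct = 0
    for i, (s, lab) in enumerate(labeled):
        nn_lab = min(
            ((d(s, t), tl) for j, (t, tl) in enumerate(labeled) if j != i),
            key=lambda x: x[0],
        )[1]
        correct += nn_lab == lab
    return 100.0 * correct / len(labeled)

euclid = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
# Well-separated sets are trivially classified -> 100% (a poor score).
gen = [(0.0,), (0.1,), (0.2,)]
ref = [(5.0,), (5.1,), (5.2,)]
assert one_nna(gen, ref, euclid) == 100.0
```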
## 3. Ablations on the CLIP features
Exactly as the reviewer indicated, we ablate only the CLIP features in Section 6.3, *“Ours w/o context”*. By omitting these features, the BCG degrades to a box-enhanced scene graph, lacking strong semantic cues to guide shape and consistent layout generation. Table 5 shows this is vital for the quality of the 3D scene generation results, both in terms of synthesis quality (FID/KID) and relation correctness (mSG). Additionally, compared to layout generation in Tables 1, 3, and 4, our method without the shape branch (*"Ours w/o SB"*) highlights the effectiveness of the BCG. On the other hand, the ground-truth bounding-box parameterization cannot be ablated, since it is essential to the VAE modeling and also serves as the supervision labels for the layout.
## 4. BCG encoding
We did not use the triplet-GCN to "generate" a BCG. Instead, the BCG is encoded by the triplet-GCN-based $E_c$ during training, and we actually followed the suggested approach, as mentioned in L128-131. For even more clarity, we will rephrase the relevant part in the final version.
## 5. Floor plan involvement
No, we have not tried that. Our motivation is to use as simple a condition as possible, i.e., only objects and their relationships, to condition scene generation. However, the approach mentioned by the reviewer may mean removing the floor node, since the floor plan would be used to enhance the per-object features, i.e., by concatenation. Instead, replacing the floor's CLIP feature with a floor plan embedding and treating it as a node could potentially generate user-defined floor shapes. We will investigate this direction in future work.
## 6. A related paper from CVPR 2023
Indeed, this is a relevant work that learns 3D shape priors from 2D images and generates shapes and layouts from a hypersphere space. We now cite this paper, which was published after our work's submission. However, it is worth noting that this method is trained and conditioned on RGB-related information, e.g., 2D bounding boxes and masks. Therefore, a direct comparison between our method and this one would be inappropriate, since the inputs are in different modalities; ours take the form of a scene graph.
## 7. Qualitatives for limitations
We will extend Supp. Mat. Section 6, Discussion and Limitations, with illustrations. We provide some examples in the attached PDF.
[72] Yang et al. "Pointflow: 3d point cloud generation with continuous normalizing flows." ICCV 2019.
[73] Nie et al. "Learning 3D Scene Priors with 2D Supervision". CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: I would like to thank the authors for taking the time to address my questions and concerns. As I already mentioned in my review I think this is an interesting work that alleviates the need for having a library of 3D assets, moreover as the authors show that their approach outperforms prior approaches, I think that this paper should be accepted. That being said, I would like to urge the authors to incorporate the additional experiments provided in the rebuttal period in the final version of their paper. | Summary: Gist:
The paper presents a framework, called CommonScenes, for generating 3D indoor scenes given scene graphs as inputs. CommonScenes is a dual-branch framework where one branch generates the scene layout using a VAE and the second one generates what the authors call "compatible" 3D shapes using latent diffusion. The claim is that having this second branch (where compatible 3D shapes are generated for populating the generated layout from the first branch) allows capturing global scene-object and local inter-object relationships, something that prior works cannot capture (I am not convinced about this, but more on this later).
The generated scenes can be manipulated by editing the input scene graph, as well as sampling noise in the diffusion process.
The paper also constructs a scene graph dataset using an off-the-shelf 3D scene dataset.
Dataset Used:
3D-FRONT is the base dataset used, which is augmented with scene graph labels and this augmented dataset is termed in the paper as "SG-FRONT" dataset.
Training Mechanism:
Supervised, in the form of a triplet network setting and latent diffusion models
Evaluation Metrics:
To measure the fidelity and diversity of generated scenes, FID and KID scores at 256x256 pixel resolution between top-down renderings of the generated and real scenes are used.
To measure shape diversity, each scene is generated 10 times, and the changes in shapes are evaluated using Chamfer Distance (CD).
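For reference, CD here is the symmetric Chamfer Distance between sampled point sets; a minimal sketch of the metric (an illustration only, not the paper's implementation, which would sample point clouds from the generated shapes):

```python
# Minimal sketch of the symmetric Chamfer Distance (CD) between two
# 3D point sets; an illustration only, not the paper's implementation.
def chamfer_distance(pts_a, pts_b):
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    # Average squared distance from each point to its nearest neighbor
    # in the other set, summed over both directions.
    a_to_b = sum(min(sq_dist(p, q) for q in pts_b) for p in pts_a) / len(pts_a)
    b_to_a = sum(min(sq_dist(q, p) for p in pts_a) for q in pts_b) / len(pts_b)
    return a_to_b + b_to_a
```

A larger CD across the 10 generations of the same scene indicates higher shape diversity.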
Baselines and Comparisons:
Three kinds of baselines are compared against:
1) First, a retrieval-based method, namely, 3D-SLN from CVPR 2020
2) Second, a semi-generative SOTA method, Graph-to-3D from ICCV 2021, and
3) Third, a text-to-shape generation method that follows layout generation (this is not cited, so I am not informed by the paper whether this was implemented by the authors on their own or whether any specific algorithm was re-implemented)
Strengths: + Conceptualizing layout generation using graphs is a nice concept, although this is not the first time it has been addressed. A structured input modality gives rise to many applications, such as scene editing and modification, as demonstrated in the paper.
+ As seen from Figure 4, the proposed method seems to produce plausible outputs given a scene graph as input. This is also validated quantitatively, although I would only consider Table 1 to be more representative of such quantification than other tables.
Weaknesses: - Not really a concern but this is something that people will find about this paper: the paper is trying to do too many things at once. While this may also be a positive aspect in the era of today's models, a reader cannot clearly discern what design aspect leads to layout improvement and what leads to shape improvements. One may even argue: why not use the shape generation scheme employed as an independent approach and submit a paper if enough novelty exists? You get my point.
- L96-97: How are the edge predicates (like spatial relations "left/right", "closeby" etc.) obtained? Is the dataset manually annotated with semantic scene graph information? If that is the case, then, the problem formulation is weak. What would have been interesting is to automatically extract meaningful semantic scene graphs (especially that ground spatial relations to a reasonable extent) and then use these graphs to generate a 3D scene.
- L3-6: I do not understand the message in these lines. Do you mean to say that existing methods use retrieval-based mechanisms to populate the generated layout, because of which scene-object and object-object relationships are inconsistent? It is not true. So, first, I think it is important to rephrase this sentence. It is conveying an altogether different meaning.
- There is mention of a triplet graph network, triplet-GCN in L46, 129, 148, 159. However, there is not much detail about how the positive and negative examples to train this triplet network are obtained. This is quite important since the training data plays a key role in obtaining meaningful and discriminative embedding spaces in the context of contrastive learning setups.
- I fail to understand why the initial node embeddings, c_i^{node}, which are obtained from one-hot semantic encodings (as per info in L93), should be passed through a pre-trained and frozen CLIP text encoder. It makes sense to pass the initial __edge__ embeddings through it as spatial information needs to be captured and the CLIP text encoder does a good job of mapping the initial English text to a meaningful embedding space. But I cannot understand why the node embeddings c_i^{node} need to be passed through the text encoder from CLIP.
- L37: It is not trivial to obtain text information from input scene graphs. This alternative solution is not so straightforward, unlike what is mentioned in this line.
- L65-66, the sentence is incomplete
- L87-88: LEGO-Net from CVPR 2023 is the first work to leverage diffusion models for scene generation. Even though LEGO-Net is designed for scene rearrangement, I would still place it in the generative model category since it is inherently doing denoising to provide a plausible output. So, these two lines are not correct.
- Figure 4: Again, my question is how are the input scene graphs obtained? I am interested in knowing how the spatial relations on the graph edges are obtained. If this is done in a heuristic manner, it is prone to errors, and I do not think this is trivial to obtain in the presence of multiple objects. Obtaining such spatial relations is a challenge, as widely acknowledged in the community (see pre-LLM works on 3D indoor scene generation using language/text input, such as from Chang 2014, 2015, Ma 2018).
- There are quite a few typos and punctuation errors in the paper (one such example is pointed out below). Need to be corrected.
One missing period/full-stop symbol in line 24.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the Weaknesses section above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper does not discuss the limitations of the proposed approach.
There exist many questions (please see the Weaknesses section above) that can critically limit the application of the proposed approach, starting from the way the input scene graphs are obtained. At the least, a discussion on how this work can address or alleviate such challenges using additional processing should have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## 1. Text-to-shape baseline
We have explained this baseline, named "Layout+txt2shape", in more detail in Supp. Mat. Section 7. This baseline solely considers shape generation from text input, which we establish through the text-to-shape model SDFusion [8].
## 2. Complexity, contribution, and motivation
Notably, our main contribution is a method that enables a fully generative model of the entire 3D scene from scene graphs, encompassing its layout and 3D geometries holistically. In our experiments, we systematically show that using only layout and retrieval-based shapes (3D-SLN [31]) underperforms and that the semi-generative shape model of Graph-to-3D [13] also fares poorly (p. 8, Table 1). We show that it is possible to generate shapes together with scene layouts in the proposed framework. Contextual information improves the coarse inter-object relationships (p. 9, Table 5, row 1 vs. row 4), and the GCN further propagates information among objects, learning global cues and finer local inter-object relationships (p. 9, Table 5, row 2 vs. row 4). Besides, the optimization from the diffusion-based shape branch not only assists layout generation during training but also brings better shape generation quality than previous work (p. 7, Figure 4).
## 3. Dataset annotation and scene graph sources
We obtain the edge predicates (e.g., "left/right", "front/behind", "above/standing on") in a semi-automatic manner, similar to prior work (3DSSG [53], 4D-OR [66]), as explained in the main paper L190-192, and in Supp. Mat. Section 3 L75-78. Essentially, we use three methods: 1) For spatial relations (e.g., "left/right", "bigger/smaller"), we initiate the process by applying relationship checks on the bounding box extents, followed by collision checks; 2) For support relations (e.g., "standing on", "above"), we apply a set of thresholds for each object type identified by human annotators; 3) For stylistic cues, we refer to 3D-FRONT object annotations [15] to identify meaningful information, generating semantic edge labels. Since we use the synthetic and high-quality 3D-FRONT [15], any potential errors occur systematically, in contrast to real scenes obtained by depth cameras [66,67]. These can then be easily corrected.
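As a purely illustrative sketch of the kind of bounding-box heuristics described above (hypothetical code with assumed thresholds and box format, not the actual SG-FRONT annotation pipeline):

```python
# Hypothetical illustration of bounding-box relationship checks; the box
# format (center + full extents) and all thresholds are assumptions, not
# the actual SG-FRONT annotation code.

def spatial_relation(a, b):
    """Return a coarse predicate for 'a <predicate> b' from two
    axis-aligned boxes given as (cx, cy, cz, sx, sy, sz)."""
    acx, acy, acz, asx, asy, asz = a
    bcx, bcy, bcz, bsx, bsy, bsz = b
    # Support check: footprints overlap and a's bottom sits near b's top.
    overlap_x = abs(acx - bcx) < (asx + bsx) / 2
    overlap_z = abs(acz - bcz) < (asz + bsz) / 2
    a_bottom, b_top = acy - asy / 2, bcy + bsy / 2
    if overlap_x and overlap_z and 0 <= a_bottom - b_top < 0.05:
        return "standing on"
    # Size check: volumes differing by a clear margin.
    va, vb = asx * asy * asz, bsx * bsy * bsz
    if va > 1.5 * vb:
        return "bigger"
    if vb > 1.5 * va:
        return "smaller"
    # Fallback: left/right along the x-axis.
    return "left" if acx < bcx else "right"

bed = (0.0, 0.3, 0.0, 2.0, 0.6, 1.6)
lamp = (0.5, 0.7, 0.2, 0.2, 0.2, 0.2)
print(spatial_relation(lamp, bed))  # standing on
print(spatial_relation(bed, lamp))  # bigger
```

Such checks are error-prone on cluttered real scans, which is exactly why errors on synthetic 3D-FRONT occurring systematically (and thus being easy to correct) matters.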
The input scene graphs in Figure 4 are the annotated scene graphs from the test scenes of 3D-FRONT. For general acquisition of scene graphs, there are various ways, such as from text (Chang et al. [68, 69], Ma et al. [32]), single image (Chen et al. [70], Dhamo et al. [12]), or video (Wu et al. [58]). However, our work aligns with 3D-SLN [31] and Graph-to-3D [13] in using the scene graph to enable controllable scene analysis. Therefore, our scene graphs are obtained in a manner akin to previous approaches [13]. We recognize the challenge in this task and have added the related work [68,69], to our paper, further discussing this aspect in the Supp. Mat.
## 4. The explanation and methodology of retrieval-based methods
Retrieval-based methods, like 3D-SLN [31], Graph-to-Box [13] and ATISS [39], typically determine retrievals based on the sizes of bounding boxes, as shown in Figure 1. In this case, even a slight variation in the estimated bounding box size can lead to significant and undesired variations in the generated shapes, resulting in the generation of inconsistent scenes (p.8, Table 2). In contrast, our generated shapes and appearances are conditioned on the relations among the objects. This leads to more consistent shape generation (p.8 Figure 5, L244-248).
## 5. The meaning of triplet-GCN
In this context, the triplet graph network uses a triplet of “subject-predicate-object” [31]. We don’t use any contrastive learning or triplet loss. We will clarify this aspect for readers more familiar with triplet loss concepts.
## 6. CLIP embeddings for graph nodes
We do not pass one-hot embeddings to the CLIP text encoder. Instead, we input the object class names for nodes (e.g., “Bed” and “Table”), as well as the edge class names (e.g., “left”, “right”) as illustrated in Figure 2, exactly because of the intuition explained by the reviewer.
## 7. Text information acquisition in the scene graph
In L36-39, we explained that simply replacing the shape retrieval baseline with a text-to-shape generator, where the text is acquired from the scene graph, does not yield good results (i.e., the baseline "Layout+txt2shape"). **The textual input from the scene graph is obtained by means of class names**. We will add a cross-reference to this experiment on L39 to prevent misunderstanding.
## 8. LEGO-Net from CVPR 2023
We thank the reviewer for pointing out the very recent work [71]. We will cite this work and refine our statement in related work accordingly.
## 9. Limitation Discussion
We discussed the limitations in the Supp. Mat., on Section Discussion and Limitations L131-149. We will carry the essential parts to the main manuscript.
[67] Wald et al. "Rio: 3d object instance re-localization in changing indoor environments." ICCV 2019.
[68] Chang et al. "Learning spatial knowledge for text to 3D scene generation." EMNLP 2014.
[69] Chang et al. "Text to 3D Scene Generation with Rich Lexical Grounding." ACL 2015.
[70] Chen et al. "Scene graph prediction with limited labels." ICCV 2019.
[71] Wei et al. "LEGO-Net: Learning Regular Rearrangements of Objects in Rooms". CVPR 2023.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: Thanks for the responses to my questions.
I have a few questions still.
1) Complexity and Motivation -- First, a prior work from CVPR 2020 (called Total3D) and a follow-up work from ICCV 2021 (called Implicit3DUnderstanding), both perform scene reconstruction. Both of them target the Layout-to-Scene problem while also generating the constituent shapes. The latter, however, uses a graph representation for the scene. I would consider this work to be similar to the latter, but now in the generation paradigm. So, this is not the first work to generate shapes as a part of the layout generation/recon goal. Second, let me assume that incorporating contextual inter-object relations with the layout information helps generate better 3D shapes. This is a really interesting claim that needs a lot of investigation. The questions I would ask are: How can I be sure that this is due to contextual relations rather than the knowledge base of the pre-trained latent diffusion model employed? Under what scenarios of the proposed framework will the generated shapes look incompatible? I do not see an effort in educating the readers about these questions or an effort in investigating these really interesting questions. Answering these questions would make the paper more informative.
Again, I am not convinced as to why combine Layout and Shape generation in one work if enough novelty exists in the Text-to-Shape part of the work. There is enough interest in the community to see frameworks that do a good job of shape generation from text inputs, and solving this sub-problem effectively should be a novel project on its own. If the idea is to make a generative framework, I argue that the paper's contribution is mixing up many ideas into one single paper, with no real *takeaway* message. I currently see the paper as presenting a complex approach to solving the problem of 3D scene generation from scene graphs. Talking about the input (scene graphs), I will move on to my next point, Dataset Annotation.
2) Dataset Annotation: Getting textual annotations on scene graphs is a laborious and expensive process. While I understand that this is necessary for training neural networks in general, I don't see this paradigm of using annotated scene graphs for learning a generative model of 3D scenes to be useful/practical compared to a paradigm where the generation process is conditioned on a collection of reconstructed scenes from scene images, which are abundantly available and require no annotations. In my opinion, this is a limitation of this work, one that relies on textually annotated scene graphs.
3) Limitations: Right, one should always discuss the limitations of the work in the main paper rather than the Supplementary. Now, I went through the limitations sections (Section 6) in the supplementary material. This section lacks a serious discussion of the proposed approach's limitations. What it currently describes is the fallacies of the dataset and the information about object attributes (textures) that were not included as a part of object encoding. This is highly superficial, to say the least. It will be valuable to the readers if the paper discusses aspects of the proposed approach that make narrowed assumptions, if any, about the weaknesses of the technique, and aspects of *technical* details that could be improved. Then, there should be some high-level thoughts on how these technical weaknesses could be addressed. While I am writing this, I cannot help but think of the motivation of the work, as well as the dataset constraints needed to get this done.
With all the above, I am not persuaded to change my opinion of the paper.
---
Reply to Comment 1.1.1:
Title: Thanks for letting us know your further concerns
Comment: We conclude and answer your concerns as follows:
## 1. About scene graphs
>*The latter, however, uses a graph representation for the scene.*
>*In my opinion, this is a limitation of this work, one that relies on textually annotated scene graphs.*
The scene graphs in our paper are semantic scene graphs [1,2,3], which model the semantic relationships between objects as written in the Preliminary (p.3, Section 3). This differentiates our work from Implicit3DUnderstanding, which only models the relative geometry in their graphs.
>*I don't see this paradigm of using annotated scene graphs [...], which are abundantly available and require no annotations.*
There are more benefits of using a scene graph over sentences, as mentioned in the caption of Figure 1 in [2]. Our work provides an alternative to using such a compact, symbolic, yet structured input as a condition to generate 3D scenes. As noted in our rebuttal, one can easily and explicitly obtain such a scene graph from other modalities like images, exactly as the reviewer mentioned, or even use a GUI to provide such information. Once the scene graph is extracted, our framework can generate plausible scenes.
## 2. The claim of our method
>*[...] a follow-up work from ICCV 2021 (called Implicit3DUnderstanding) [...] this is not the first work to generate shapes as a part of the layout generation/recon goal*
We want to point out that we did not claim that this is the first work that generates shapes together with layout generation.
The focus of this work is modeling a continuous latent manifold which allows us to sample (potentially) multiple plausible scenes that are semantically and contextually coherent given scene graph conditions. This differentiates us from the Implicit3DUnderstanding, which aims for exact reconstruction matched to the shape information conveyed by RGB input.
>*let me assume that incorporating contextual inter-object relations with the layout information helps generate better 3D shapes. [...] How can I be sure that this is due to contextual relations rather than the knowledge base of the pre-trained latent diffusion model employed?*
We also did not claim that the consideration of relations brings "better 3D shapes". We explicitly rebutted that it is the diffusion process bringing better qualities compared to prior work Graph-to-3D:
*"Besides, the optimization from the diffusion-based shape branch not only assists layout generation during training but also brings better shape generation quality than previous work."*
We evaluated the scene-level appearance with FID/KID and provided detailed experiments (See p.8. Table 1). Compared with Graph-to-3D, our model shows better results benefiting from the diffusion process, which is now also supported by object-level MMD/COV/1-NNA. With the same reliance on the diffusion process as Layout+txt2shape, ours still yields more coherent results by considering the inter-object relations.
## 3. Clarification of our goal and contributions
>*I argue that the paper's contribution is mixing up many ideas into one single paper [...]*
>*Getting textual annotations on scene graphs is a laborious and expensive process.*
**The task aspect:** Good shape-generation methods from text inputs are different from what we want to achieve. As stated in the paper and answered in our rebuttal, our goal is to achieve semantically and contextually coherent scene generation.
**The method aspect:** To achieve the goal, we enrich the original scene graph with contextual information and leverage diffusion models conditioned on the inter-object relations to generate scenes by joint training and optimizing the layout and shape branches. As also mentioned by Reviewer zttF, the joint training of VAE and diffusion together is non-trivial.
**The data aspect:** The annotation is not part of the application limitation. The laborious annotation is exactly one of our contributions to the community.
## 4. Limitations and potential improvement
>*It will be valuable to the readers [...] about the weaknesses of the technique, and aspects of technical details that could be improved.*
We discussed that the interpenetrating phenomena in 3D-FRONT prohibit our framework from achieving fully collision-free generation. We have also provided a PDF showing qualitative examples. However, one possible direction is to introduce an additional IoU loss to alleviate such problems, supported by training on mostly clean scenes. Second, the texture renderer can be leveraged from the related part of CC3D [4]. We will move the relevant discussion to the main paper in the final version.
[1] Chang et al. "A comprehensive survey of scene graphs: Generation and application." T-PAMI 2021.
[2] Johnson et al. "Image generation from scene graphs." CVPR 2018.
[3] Wu et al. "Incremental 3D Semantic Scene Graph Prediction from RGB Sequences." CVPR 2023.
[4] Bahmani et al. "Cc3d: Layout-conditioned generation of compositional 3d scenes." ICCV 2023. | Rebuttal 1:
Rebuttal: # Thank you for your insightful comments!
We would like to thank all reviewers for their insightful and valuable comments. In summary, they highlighted the significance of the work (*“scene generation is an important problem”* (xttF), *“structured input modality gives rise to many applications”* (wi3K), and *“this is a nice work”* (ANPx)), indicated that the *“architecture is novel”* (ANPx), *“this paper addresses it with a responsible model using modern components in a competent manner”* (xttF), and that *“the generated scenes seem to produce plausible outputs”* (wi3K). They recognized the value of SG-FRONT, *“enriching an existing dataset with scene graph annotations”* (NT6V), a *“large amount of additional labels for 3D-FRONT”* (xttF), and acknowledged the public release of our code and dataset (NT6V). Furthermore, they appreciated our supplementary video and *“particularly liked the intuitive explanation”* (ANPx).
This rebuttal addresses each reviewer’s concerns. **We also appreciate the pointers for the grammar errors and typos, which we have since corrected. We further attach a PDF document in this rebuttal for reference.**
Pdf: /pdf/b7f95753ad27b197d7a683a35ca55ca7269918c8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Single-Stage Visual Query Localization in Egocentric Videos | Accept (poster) | Summary: The paper proposes VQLoC, an end-to-end trainable framework for Visual Query Localization (VQL) on long-form egocentric videos. Compared with Ego4D's multi-stage approaches, VQLoC proposes a single-stage process that efficiently localizes visually specified objects. It establishes both query-to-frame and frame-to-frame relationships for spatial-temporal localization. The proposed method achieves new SOTA performance on the public leaderboard with a notable improvement.
Strengths: VQLoC can streamline the VQ2D localization process. The method is end-to-end trainable and achieves faster inference speed, as shown in Figure 4.
The results are verified on the public server. Furthermore, from the quantitative ablation study, it seems like the boost comes from the model design rather than parameter tuning or training tricks, which is good.
The related work covers the field well. And the authors made comparisons to previous baselines in terms of size, speed, and performance.
Weaknesses: My major concern is the completeness of the ablation study. This paper can be stronger if more ablations can be done.
For the spatio-temporal transformer, Tab. 4 only shows that window size = 5 gives the best performance among the tested sizes of 5 and greater. It's not convincing that 5 is the best choice. How about a window size smaller than 5? It's possible that doing no frame-to-frame correspondence at all, i.e., a window size of 1 or 0, gives even better performance.
Also, there are multiple thresholds in the methodology. Are the results sensitive to the selection of those hyperparameters? For example, the peak response plot in Figure 5 looks noisy, so it's unclear whether an empirical choice of φ makes sense. The same also applies to the anchor box threshold θ.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (P6 L207) Why not trim complete negative video clips to augment the data for the training?
(P6 L228) Are the top K anchors selected from the entire video clip n' and mixed with anchors from n? I need clarification here since the loss is frame level, but the text is about video level.
Ego4D Episodic Memory also has similar challenges like VQ3D, NLQ, and MQ. Is it possible to adopt similar strategies for these tasks by changing the query encoder and predictor? It will be quite interesting to develop a unified framework for video query tasks.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't think the authors can provide more ablations for a large benchmark like VQ2D within the rebuttal date. Overall, this work has pushed VQ2D task forward with a remarkable step. However, more complete ablations will make this paper stronger in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and helpful suggestions, and we are glad to see the highly positive comments. We will address your questions below.
1. **[Local window size ablation]** We experiment with smaller window sizes 3 and 1, where 1 means there is no temporal reasoning within the model. As shown below, we observe that window size 5 is better than size 3. Moreover, without temporal reasoning, the performance drops dramatically, showing the effectiveness of the spatial-temporal transformer module we proposed.
| Window Size | tAP | stAP | rec% | succ |
|:---------:|:---------:|:---------:|:---------:|:---------:|
| Size 5 | **0.31** | **0.22** | **47.05** | **55.89** |
| Size 3 | 0.27 | 0.17 | 42.26 | 48.14 |
| Size 1 | 0.17 | 0.11 | 33.56 | 42.83 |
---
2. **[Selection of hyperparameters - anchor assignment threshold]** Among the hyperparameters, the anchor box threshold influences training the most. The reason is that it is highly related to the assignment of positive and negative anchors. As objects are usually **small** in egocentric videos (1/7 of the image resolution on average), the anchor box IoU threshold of 0.5 commonly used in detection papers does not work well, since it is harder for anchor boxes to intersect such small objects at so high an IoU. This is the reason for picking a small IoU threshold of 0.2. We will discuss this in a later version.
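To illustrate why the lower threshold matters for small objects, here is a toy sketch of IoU-based anchor labeling (hypothetical code, not the actual training implementation; the box coordinates below are made up):

```python
# Toy illustration of IoU-based positive/negative anchor assignment;
# not the actual VQLoC training code.

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def assign_anchors(anchors, gt_box, pos_thresh=0.2):
    """Label an anchor positive (1) if its IoU with the ground-truth
    box exceeds pos_thresh, negative (0) otherwise."""
    return [1 if iou(a, gt_box) > pos_thresh else 0 for a in anchors]

gt = (10, 10, 30, 30)  # a small 20x20 query object
anchors = [(0, 0, 40, 40), (20, 20, 60, 60), (50, 50, 90, 90)]
print(assign_anchors(anchors, gt))       # [1, 0, 0]: the loose match survives
print(assign_anchors(anchors, gt, 0.5))  # [0, 0, 0]: 0.5 rejects every anchor
```

With a small object, even a well-placed anchor only reaches IoU around 0.25 here, so a 0.5 threshold would leave no positive anchors to train on.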
---
3. **["Noisy" temporal plot in Fig. 5]** The peak response in Fig. 5 looks “noisy” because it shows results on very long videos, e.g., a video of 1000 frames. Note: VQL aims to identify the ‘*latest*’ appearance track of the query object, and the ground truth is annotated accordingly (ignoring earlier appearances of the visual query in the video). Thus, there is only a single peak in the visualized ground truth. However, the query object itself **may appear multiple times earlier** in the video, and our model will attempt to identify all of them (not just the latest); these correspond to the other peaks, and this *is the expected and correct behavior* of the model. To better demonstrate the point, we include an additional visualization (in the rebuttal PDF) showing all identified peaks. The results indicate that our method can identify the query object consistently.
---
4. **[Selection of hyperparameters - post-processing threshold]** We select the threshold based on the validation performance following the approach from the baseline methods (SiamRCNN and CoCoFormer). We agree that this is a limiting factor and can be improved in the future, but this is orthogonal to our core contributions. Moreover, our results demonstrate that this threshold determined on the validation set can generalize to the test set without a performance gap (see Table 1).
---
5. **[Use negative clips]** We generally believe trimming negative clips can be a promising technique to avoid false positives. However, this is currently not possible on the Ego4D VQL dataset since *it annotates only the latest object appearance* (i.e., to answer “where did I *last* see [query]?”), while ignoring the earlier appearance of the object in the video. This means that the clips before the ground-truth response track may not all be negatives.
---
6. **[Anchor selection]** The loss $L_{img}$ (L210) is computed per anchor of the image, not over an image as a whole. We will better clarify it in the next version. The top K hard negative anchors are chosen from frames in video $n$ as well as frames from every other video $n’$ (see L229-231).
---
7. **[Unified framework]** Yes, we completely agree that these egocentric vision tasks can share similar challenges, caused by the characteristics of egocentric videos, and it is likely that these tasks have synergy. For example, NLQ and MQ require similar localization capabilities. VQ3D requires both 2D and 3D understanding of objects and scenes. Joint training with multiple tasks might improve the performance of each task. The key to the unified framework is how we handle inputs with different modalities and propose a jointly trainable end-to-end approach. It is also necessary to handle the unique properties of each task, e.g. camera pose estimation is the bottleneck of VQ3D [1]. This is an interesting and promising research direction. However, note that this is orthogonal to our core contribution of efficiently localizing a specific query object instance within the video.
[1] Mai, Jinjie et al. “Estimating more camera poses for ego-centric videos is essential for VQ3D.” ArXiv 2022.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks to the authors for the detailed explanation and additional info on the rebuttal. The table on local window size ablation is helpful to me. A unified framework for other tasks like NLQ, VQ3D, MQ, etc, could be interesting following this work. The authors have addressed my concerns and questions I raised. I have no further questions at this point.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We are glad to hear that the rebuttal addressed your concerns. | Summary: The author proposes a new single-stage end-to-end method for visual query localization. The proposed method builds a holistic understanding of the query-video relationship, and then performs spatial-temporal localization. The proposed method achieves major performance and speed gains over previous methods on major benchmark datasets.
Strengths: In general, the proposed method is simple and performs well on the major egocentric Ego4D dataset. The proposed method utilizes simple pipelines with cross attention/Transformer, however, achieves state-of-the-art results. In general, I am satisfied with the paper.
Weaknesses: However, I have some concerns about the paper.
1. The author only uses the Ego4D dataset as the benchmark for egocentric data. Since it is a brand new task, it is OK to only have a few baselines/datasets. However, it would be better if the author's method could also be applied to other tasks/sub-tasks like visual tracking datasets (or to modified versions of the tracking datasets).
2. More experiment details should be reported, because it is an end-to-end framework. Though the author reports the FPS of the model, I think the author could report more detailed metrics like FPS/GPU memory. I think the proposed method could have larger GPU memory consumption/larger FLOPs while still having fast execution time.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please mainly see the weaknesses section for details.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think the author has adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your helpful feedback, and we will address your concerns as follows.
1. **[Benchmark]** Thank you for your valuable suggestion. In this paper, we focus on **the special properties of the visual query localization task on egocentric videos** (L33-43), e.g. drastic head motion, large variation between the query object and its appearances in the video, open-set queries, and long videos. Visual tracking, on the other hand, is a different task compared with VQL. Specifically, visual tracking usually requires the model to work on shorter videos with slower camera viewpoint changes, as well as small differences between the visual query and their future appearance in the video. Thus, your idea to apply or adapt our method to visual tracking datasets is intriguing, but we are unaware of appropriate datasets that satisfy the above properties. To the best of our knowledge, Ego4D is the only dataset that has these properties to test on. We will add these considerations to our discussion to highlight potential applications and adaptations.
---
2. **[More details]** Thank you for your suggestion to provide the FPS/GPU memory. In detail, our method achieves about 2 FPS per GB of GPU memory. We note that it is difficult to compute this FPS/GPU-memory metric for the baselines, as their GPU usage differs across the multiple stages of the baselines, i.e. detection, query comparison, and tracking. For an apples-to-apples comparison, we verified that the current FPS is evaluated with similar “*maximum*” GPU usage between our method and the baselines. Moreover, our method is *more parallelizable* than the baselines. If we are provided with larger GPU memory, our FPS will also increase accordingly for a given (video, query) pair. This is not the case for the baselines, as some stages like tracking cannot be parallelized (i.e., tracking has to happen sequentially from one frame to the next).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing the rebuttal. I think your comments have addressed my concerns; thus, I will keep my weak accept rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! It's great to hear that our rebuttal has addressed your concerns. | Summary: The paper introduces a new approach to address the visual query localization problem in egocentric videos. The major contribution is that the proposed method, a single-stage model, simplifies the previous multi-stage frameworks and eliminates the need for off-the-shelf object detectors, tracking, and similar components by utilizing an end-to-end trainable model. Notably, the proposed method has achieved the top position in the Ego4D VQ2D challenge as of the submission date, demonstrating its effectiveness. Besides, the method demonstrates a considerable enhancement in inference speed, further highlighting its advantages.
Strengths: 1. The paper introduces a novel and technically sound method that formulates the visual query localization task as a unified framework. By taking a video and a template query image as input, the proposed method can directly produce the track of bounding boxes, eliminating the need for off-the-shelf object detectors and tracking methods. This is achieved by decoupling the task into spatial reasoning (finding per-frame responses) and temporal reasoning (consultation within the clip), followed by a prediction head (box&score) upon the learned features. In summary, the idea behind the method is both original and rational.
2. The proposed method demonstrates notable computational efficiency, a crucial aspect for real-world applications. By simplifying previous multi-stage frameworks into a one-stage process, the method achieves a remarkable 10x improvement in speed while maintaining a satisfactory level of performance. This advancement represents a significant stride towards practical applications.
3. The paper is well-written. Readers with a relevant background will have no difficulty following and comprehending the content.
4. The paper presents solid experimental results, achieving the top position on the Ego4D VQ2D challenge leaderboard at the date of submission.
Weaknesses: 1. L12-L13: "Our experiments demonstrate that our approach outperforms prior VQL methods 12 by 20% accuracy while obtaining a 10× improvement in inference speed". Without delving into the entire paper, a glance at Table 1 and Figure 4 might cause confusion regarding the reported results, as the speed does not appear to be 10x faster than STARK [51] (if my understanding is correct, this is an adapted tracking approach upon the proposed framework). It would be beneficial to include some explanation and details in the caption to prevent such confusion.
2. It is unclear how the number of frames (T=30) per clip is determined. Is this choice based on the limit of computational resources or driven by experimental performance considerations? If the latter, it would be helpful to know if any ablation studies have been conducted to investigate this aspect.
3. Discussion on long videos. The input is fixed to a maximum of 30 frames (~6 seconds at a frame rate of 5 fps). To handle longer videos, the proposed solution is to chunk the untrimmed video into fixed-length clips. In this case, is there an additional step to ensure smooth predictions between these clips, such as incorporating tracking between them? Alternatively, another solution to address long videos is to use a streaming mode. Suppose we have processed the past 30 frames $\{ t_{i-29}, …, t_{i-1}, t_i \}$; we would only need to process the new incoming frame $t_{i+1}$. The spatial transformer naturally lends itself to this scenario since it is computed at the frame level. The spatio-temporal transformer might be more complex, but it operates within a bounded window, which suggests that it might not require significant re-computation or adjustments for the streaming mode. It would be valuable to have the authors' insights and comments on this matter.
4. L177-L193: The symbol $w$ is used ambiguously, representing both the spatial width and the temporal window.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the previous section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your insightful comments and your suggestions on the paper details. We will address your questions as follows.
1. **[Speed comparison]** We apologize for the confusion. The speed improvement was not in comparison to STARK but the prior state-of-the-art VQL methods (SiamRCNN / NFM / CoCoFormer). We will clarify this in our writing.
---
2. **[Clip length]** The clip length (T=30) was chosen based on computational resource limits and the average response track length (~15 frames). This selection offers a *balanced ratio* between positives and negatives within each clip during training, ensuring stable training. Since we use local-windowed attention to handle fast head motion in egocentric videos, extending the clip length beyond T=30 would not theoretically enhance performance. For example, with a window size of 5 and 3 layers of local-windowed attention, frames more than approximately 7 frames away will not be attended to. We will clarify this in a later version.
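As a side note, the receptive-field arithmetic in this reply can be checked in a few lines (an illustrative calculation under the assumption of symmetric windows, not the authors' code):

```python
# Illustration (not the authors' code): temporal receptive field of
# stacked local-windowed self-attention. With window size w, each layer
# lets a frame attend to frames up to (w - 1) // 2 steps away, so L
# layers give a total radius of L * (w - 1) // 2.

def temporal_receptive_radius(window_size: int, num_layers: int) -> int:
    """Max temporal distance (in frames) that information can travel."""
    per_layer = (window_size - 1) // 2
    return num_layers * per_layer

# The reply's setting: window 5, 3 layers -> radius 6, consistent with
# the "approximately 7 frames" figure, so clips much longer than T = 30
# add little.
print(temporal_receptive_radius(5, 3))  # -> 6
```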
---
3. **[Long videos]** We appreciate the insightful comments. As we use local-windowed attention to perform spatial-temporal reasoning, the smoothness of predictions across consecutive frames is implicitly learned. As you suggested, it is still possible that the prediction is not smooth enough between two clips, e.g. between the last frame of a clip and the first frame of the following clip. The streaming mode you suggested is interesting and may further improve the smoothness at the intersection of clips. Further, the local-windowed attention can be used in the streaming mode with slight modification. It would be interesting to explore this in future work. Besides, we think inference in a streaming mode is especially important for egocentric vision, as it usually requires real-time inference. We will add more discussion in a later version.
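To make the streaming-mode idea concrete, here is a minimal sketch (our own illustration of how it could work, not part of the paper; `spatial_fn` and `temporal_fn` are hypothetical stand-ins for the per-frame spatial transformer and the bounded-window temporal pass):

```python
# Sketch of streaming inference: per-frame spatial features are computed
# once per incoming frame, and the local temporal window only needs the
# last few feature maps, so each new frame requires bounded work.
from collections import deque

def stream(frames, spatial_fn, temporal_fn, window=5):
    """Yield one prediction per incoming frame using a bounded buffer."""
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(spatial_fn(frame))   # frame-level: computed once
        yield temporal_fn(list(buf))    # bounded-window temporal pass

# Toy usage with trivial stand-in functions.
preds = list(stream(range(8), spatial_fn=lambda f: f,
                    temporal_fn=lambda feats: sum(feats) / len(feats)))
print(len(preds))  # -> 8, one prediction per incoming frame
```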
---
4. **[Writing]** Thanks for pointing this out. We will fix it.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: I appreciate your response. I'm pleased to hear that you agree with my points (especially your intention to incorporate the streaming functionality into future work); their inclusion will undoubtedly enhance the clarity and quality of the work.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! It's great to have a discussion on future work directions with you. | Summary: This paper tackles the problem of query localization of visually specified objects in egocentric human-activity videos. The main technical contribution for the solution is to design a single-stage transformer-based architecture to model the query-to-frame correspondence matching and frame-to-frame correspondence propagation. Ablation studies verify the effectiveness of the proposed components and comparisons show the superiority of the method in performance and speed with respect to prior works.
Strengths: 1) The network is single-stage and end-to-end trainable, which alleviates the issues in existing multi-stage approaches that different stages are separate from each other for conducting non-differentiable stage-wise predictions.
2) Model designs, including the frame-wise spatial transformer and the subsequent spatio-temporal transformer, are technically sound and easy to follow.
3) Good performance is achieved by the proposed method and speed-accuracy trade-off is also taken into consideration.
Weaknesses: 1) The technical contributions in terms of the model designs are short of novelty, and most of the building blocks can be viewed as simple adaptations from existing techniques used in methods for similar tasks (e.g., visual tracking, temporal sentence grounding, spatio-temporal video grounding, etc., can refer to STARK[1], LGI[2], STCAT[3], QD-DETR[4]). Apart from that, the unique challenges in the task of VQL, such as the egocentric characteristics presented in the videos, are less considered and are scarcely reflected by the model designs.
2) A powerful pre-trained model DINOv2 is adopted as the visual backbone, which plays a crucial role in ensuring the success of the query-to-frame correspondence matching. However, no related discussions are included in the manuscript. What if the method got rid of the pre-trained DINOv2 model (by replacing it with a weaker but more common one)? Would it simply fail or drop significantly in performance? It is also unclear how much of the gains are brought by the proposed architecture as opposed to the utilization of a stronger feature backbone.
3) According to the ablation studies, the proposed model heavily relies on a high input resolution to achieve decent localization results. This could be a negative factor hindering the VQL model from scaling up to a larger amount of training data.
4) Some literal expressions, such as the writing of Section 3.3 could be reorganized to make it clearer.
5) As shown by the temporal plot of predicted object-occurrence probabilities in Figure 5, there are still a lot of spurious responses with large magnitude (some are even larger than responses within GT) outside the target temporal region, which implies the current model’s deficiency in accurate temporal localization to some extent.
[1] Bin Yan, et al. "Learning spatio-temporal transformer for visual tracking." In ICCV, 2021.
[2] Mun, Jonghwan, et al. "Local-global video-text interactions for temporal grounding." In CVPR, 2020.
[3] Jin, Yang, et al. "Embracing consistency: A one-stage approach for spatio-temporal video grounding." In NeurIPS, 2022.
[4] Moon, WonJun, et al. "Query-dependent video representation for moment retrieval and highlight detection." In CVPR, 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: YES
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable comments. We will address your concerns as follows.
1. **[Novelty and related work]** We respectfully disagree with the comment. Our method is not a simple adaptation of existing methods. Its novelty is *appreciated by other reviewers*, and we reiterate our contributions here. We *take the challenges and unique characteristics of egocentric videos into consideration* and propose three ways to address them: i) The **spatial transformer** finds the correspondence between the frame and the query to identify regions that are similar to the query. This is designed to **handle the large appearance variation** between the query object and its appearance in the egocentric video; ii) The **spatial-temporal transformer** performs temporal reasoning within locally windowed consecutive frames to propagate the spatial correspondences identified earlier. This local window is necessary to **handle the fast head motion** of egocentric videos. iii) The **anchor-based bounding box prediction with hard negative mining** during training is designed to **handle the small scale and rapid appearance of the objects** in egocentric videos. These egocentric characteristics and proposed techniques differentiate our task/model from previous visual tracking/grounding tasks and solutions. Our performance without these components drops by 35%, 80%, and 40%, respectively. In contrast, for frame-query feature fusion, STARK and STCAT use concatenation; LGI uses the Hadamard product. For temporal reasoning, STARK updates the template; LGI, STCAT, and QD-DETR perform reasoning on the entire clip. For bounding box (or target frame) prediction, all mentioned works use a non-anchor-based loss without hard negative mining. Since STARK is a SOTA tracker and can be directly used to localize the visual queries, it is tested as a baseline and demonstrates degraded performance on the VQL task (Table 1). We will add more discussion and cite these papers in a future version, as you suggested.
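As an aside, the locally-windowed temporal attention in (ii) can be pictured as ordinary self-attention under a banded mask over frames (a sketch under our own assumptions, not the actual implementation):

```python
# Illustration (not the authors' code): build the boolean attention mask
# for locally-windowed temporal self-attention over a clip of frames.
import numpy as np

def local_window_mask(num_frames: int, window_size: int) -> np.ndarray:
    """True where frame i may attend to frame j, i.e. |i - j| <= w // 2."""
    idx = np.arange(num_frames)
    radius = window_size // 2
    return np.abs(idx[:, None] - idx[None, :]) <= radius

mask = local_window_mask(num_frames=30, window_size=5)
# Each frame attends to at most 5 frames: itself plus 2 on each side,
# which is what bounds the temporal reach under fast head motion.
print(mask.sum(axis=1).max())  # -> 5
```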
---
2. **[Backbone]** Good question. Our method **does not fail** when we replace DINOv2 with a common CLIP backbone, which is widely used in video-related [1] and non-video [2] tasks. The performance with the CLIP backbone is considerably better than the baseline models and demonstrates that our model still works with alternate backbones. Note that large-scale pre-training is a valuable component of most existing methods [3] (whether it is CLIP or DINOv2), and it is not surprising that it plays an important role here. However, the choice of DINOv2 as the particular backbone for this task is also a part of our contribution (see L155-157). As contrastive-learning-based backbones, including DINOv2, demonstrate good *semantic correspondence properties* [4], they are useful for performing the visual query task, especially for handling *large visual differences* between the query and its appearance in the videos. Besides, the backbone is not the only important aspect of our model. Even with a strong DINOv2 backbone, we demonstrate in our ablations that the performance deteriorates significantly without our other proposed components, as illustrated in the reply to your first question. In short, each component of our model is designed to work well for the VQL task, and i) the performance cannot be solely attributed to the backbone and ii) our CLIP test demonstrates some versatility with respect to the backbone.
| Backbone | Resolution | tAP | stAP | rec% | succ |
|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
|DINOv2|448| 0.31 | 0.22 | 47.05 | 55.89|
| CLIP | 336 | 0.28 | 0.20 | 44.67 | 53.80 |
*Note that the highest resolution that CLIP supports is 336.
[1] Lin, Ziyi et al. “Frozen CLIP Models are Efficient Video Learners.” ECCV 2022.
[2] Shen, Sheng, et al. "How much can clip benefit vision-and-language tasks?." ICLR 2022.
[3] Wang, Yi et al. “InternVideo: General Video Foundation Models via Generative and Discriminative Learning.” ArXiv 2022.
[4] Hu, Yingdong et al. “Semantic-Aware Fine-Grained Correspondence.” ECCV 2022.
---
3. **[Input resolution]** We would like to clarify that our model is **not dependent on high-resolution inputs**. For comparison, we note that the baselines, i.e. SiamRCNN, NFM, and CocoFormer, work on the original video resolution, which is 1200 pixels. In contrast, **our method works on a downsized resolution** of 448 pixels, yet outperforms the baselines. This indicates that our method is effective even though it uses a lower resolution and is not over-reliant on high-resolution inputs. Besides, the performance drop when using a smaller resolution (i.e. 224) is expected and is not unique to our method [5], as the resolution of the visual features is also halved. Specifically, at 224 resolution, the average object bounding box size is about 35 pixels (i.e., 15% of the image length), which is challenging to identify even for humans. We will better emphasize this point in our revised version.
[5] Bertasius, Gedas et al. “Is Space-Time Attention All You Need for Video Understanding?” ICML 2021.
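A back-of-the-envelope check of this resolution argument (illustrative only; the patch size of 14 is our assumption based on standard DINOv2 ViTs and is not stated in the reply):

```python
# How the spatial feature resolution scales with input resolution for a
# ViT-style backbone (patch size 14 is an assumed value for DINOv2).
def feature_grid(image_size: int, patch_size: int = 14) -> int:
    """Side length of the feature grid produced from a square image."""
    return image_size // patch_size

print(feature_grid(448))  # -> 32, i.e. a 32x32 feature grid
print(feature_grid(224))  # -> 16, the feature resolution halves
# An average box of ~35 px at 224 resolution covers 35 / 224 of the
# image side, i.e. only a couple of feature cells across.
print(round(35 / 224 * 100))  # -> 16 (roughly the 15% quoted above)
```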
---
4. **[Writing]** We will revise Sec.3.3 to improve clarity.
---
5. **[Spurious responses in Fig.5]** It is crucial to note that VQL aims to identify the ‘*latest*’ appearance track of the query object. This means that the ground-truth is annotated only with the latest appearance of the query object and ignores all prior occurrences in the video. Thus, there is only a single peak in the visualized ground-truth. However, since the query object itself may appear multiple times earlier in the video, our model will attempt to identify all of them (not just the latest) — these correspond to the other peaks (or “spurious responses” as noted) and is *the expected and correct behavior* of the model. To better demonstrate the point, we include additional visualization (in the rebuttal pdf), which shows all identified peaks. The results indicate that our method can identify the query object consistently.
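The "latest appearance" convention can be made concrete with a toy sketch (our own illustration, not the paper's evaluation code): given per-frame above-threshold flags, only the last contiguous run is ground truth, while earlier peaks are genuine earlier occurrences of the object.

```python
def latest_response_track(above):
    """Return (start, end) of the last contiguous True run, or None."""
    end = None
    for i in range(len(above) - 1, -1, -1):
        if above[i] and end is None:
            end = i                      # last frame of the latest run
        elif not above[i] and end is not None:
            return (i + 1, end)          # run ended; report its span
    return (0, end) if end is not None else None

# Two occurrences of the object; only the latest one is ground truth,
# so the earlier peak is expected model behavior, not a spurious response.
print(latest_response_track([True, True, False, False, True, True, True]))
# -> (4, 6)
```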
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing the rebuttal. Your comments have addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response. We appreciate your valuable review and we will incorporate your suggestions to the later version accordingly.
---
Reply to Comment 1.1.2:
Comment: Dear reviewer, thank you for acknowledging that the rebuttal addressed your concerns. We noticed the rating remains unchanged; are there any further aspects we should address? Your feedback is highly valued.
Rebuttal: We appreciate the insightful feedback and detailed suggestions from the reviewers. It’s great to see the highly positive comments, including “positively impacts the egocentric community” (R-59Vz), “The paper introduces a novel and technically sound method” (R-YaTy), “is simple and performs well” (R-8x9A), and “VQLoC can streamline the VQ2D localization process” (R-4CTN). We will address the questions of each reviewer separately. Besides, we add additional visualization in the rebuttal pdf to answer the "noisy" peak problem.
Pdf: /pdf/d015bd7159c2e3112e2e44f4a127805111c08a2e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes VQLoC, an end-to-end method for egocentric visual query localization based on a holistic understanding of the query-video relationship. The key component is a spatio-temporal transformer, which can effectively model the relationships between the query and frames. The presented approach can perform single-stage inference and is the winning entry on the leaderboard. Moreover, the method produces a set of models to balance speed and accuracy, while the top-performing one surpasses the state-of-the-art from both perspectives. Furthermore, the visualization shows that VQLoC is excellent at the visual query 2D detection task and can even deal with challenging cases such as clutter, occlusions, and motion blur.
Strengths: 1. This paper positively impacts the egocentric community, especially in episodic memory tasks. Unlike Moment Query and Language Query, the visual query task is more complicated in spatial-temporal localization, and the existing solutions are redundant. The paper provides a single-stage solution, and the chosen model is more accurate and efficient than the state-of-the-art. This will encourage more researchers to work on the valued research problem.
2. The proposed architecture for the spatio-temporal transformer is able to effectively model the relationships between the query to each frame as well as between the frames. Therefore, the model can leverage rich semantics during feature embedding. Locally-windowed temporal self-attention is applied to improve the model efficiency without losing too much information.
3. Code and the architecture details are attached to the paper, which helps people to re-implement the code. Also, the high-quality video in the supplementary material makes the paper clear and easy to follow.
Weaknesses: 1. My main concern is that although the paper aims to solve the Visual Query Localization task, all the experiments are conducted in the VQ2D setting. According to the definition of the Ego4D [13] paper, visual query localization can be done in the video domain (VQ2D) or the real-world coordinate (VQ3D). Therefore, it is not precise to only validate this method on VQ2D, and evaluating the proposed method on the 3D setup is highly recommended. Otherwise, the paper should revise the task as visual queries 2D localization.
2. Further experiments could be conducted to improve the model. For instance, the authors find that a local window of size 5 works best in the set {5, 7, 9}; they should then experiment with smaller windows to find the local optimum of their hyper-parameters.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In the ablation study L299, using high-resolution frames (448 × 448) leads to way better results than the low-resolution ones (224 × 224). Is it possible to further increase the resolution for better performance, even if we can slightly sacrifice the clip length?
2. It seems that the prediction head only gives framewise bounding box locations and confidence scores, but the VQ2D task requires a response track of the query object. Is there any mechanics in the head to make the predicted boxes consistent with the predictions from the temporal neighboring frames?
3. According to Table 1, the recovery ratio and success rate improved significantly, but the AP only raised a little. Is it because the single-stage detection pipeline is weak in predicting precise bounding boxes?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitation is already discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive comments, and your acknowledgment of our potential impact to the community! We will address your questions as follows.
1. **[Weakness Q1]** Our study intentionally focuses on VQ2D due to its unique challenges, e.g. the ‘needle-in-the-haystack’ problem, and the large variation between visual queries and their appearance in the video, as we discussed in L33-43 of the submission. While VQ3D is interesting, it poses unique challenges that *are orthogonal to* VQ2D [1], i.e. difficulties in camera pose estimation, which should be solved separately and is beyond the scope of our work. However, we expect our improvements in VQ2D to propagate to VQ3D since 2D localization is a precursor to 3D localization. We will revise the task as visual query 2D localization as you suggested to make it more clear.
[1] Mai, Jinjie et al. “Estimating more camera poses for ego-centric videos is essential for VQ3D.” ArXiv 2022.
---
2. **[Weakness Q2]** We experiment with smaller window sizes 3 and 1, where 1 means there is no temporal reasoning within the model. As shown below, we observe that window size 5 works the best. Moreover, without temporal reasoning (i.e., window size = 1), the performance drops dramatically, showing the effectiveness of the spatial-temporal transformer module we proposed.
| Window Size | tAP | stAP | rec% | succ |
|:---------:|:---------:|:---------:|:---------:|:---------:|
| Size 5 | **0.31** | **0.22** | **47.05** | **55.89** |
| Size 3 | 0.27 | 0.17 | 42.26 | 48.14 |
| Size 1 | 0.17 | 0.11 | 33.56 | 42.83 |
---
3. **[Questions 1]** While higher resolution could theoretically enhance the performance, our computation resources restrict us to the 448x448 resolution. We are currently training with batch size 3 on each GPU. If we intend to increase the resolution further, e.g. to 896x896, the batch size should be decreased to one-fourth, as the memory requirement of attention-based models grows quadratically. And in this case, we cannot even run on a batch size of 1. If there are GPUs with larger memory available, we believe it is possible to further increase the resolution. Moreover, we are running on a clip length of 30 frames. As the average response track is 3 seconds (15 frames), this clip length leads to a balanced ratio between positive and negative frames during training. Thus, decreasing the clip length in an effort to accommodate higher-resolution images may make training unbalanced, and the smaller batch size may also negatively influence the performance.
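The memory arithmetic here can be sketched as follows (illustrative only; it assumes activation memory grows quadratically in the image side length, as stated in this reply):

```python
# Relative memory cost of changing the input resolution, under the
# quadratic-in-side-length assumption from the reply above.
def relative_memory(side: int, base_side: int = 448) -> float:
    """Memory multiplier relative to the base resolution."""
    return (side / base_side) ** 2

# Doubling 448 -> 896 quadruples memory, which matches the reply's
# estimate that the batch size would shrink to one-fourth.
print(relative_memory(896))  # -> 4.0
```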
---
4. **[Questions 2]** Note that the prediction of each frame *is not isolated from nearby frames*. The proposed spatial-temporal transformer with local-windowed attention allows the information to be propagated across consecutive frames by establishing frame-to-frame correspondence, which implicitly makes the prediction smooth (L177-179 in the paper). Our experiments (Table 4 and our response to WQ2) and visualization in the original supplementary video demonstrate the effectiveness of using the spatial-temporal transformer to make the performance better and smooth.
---
5. **[Questions 3]** We note that our method demonstrates significant and consistent *relative* improvement in all metrics: 19% tAP, 16% stAP, 24% rec%, and 17% succ., when compared to the best baseline method. Among all the metrics, the stAP is the most challenging one, as it requires high precision in both temporal and spatial understanding. Thus, the absolute value of stAP is relatively low for all methods, as are the absolute differences in stAP between methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for providing the supplementary experiments and answering the extra questions. My concerns have been well addressed. Please don't forget to revise the paper accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for the response. It's great to hear that our rebuttal addressed your concerns. We will revise the paper accordingly. | null | null | null | null | null | null |
Gaussian Partial Information Decomposition: Bias Correction and Application to High-dimensional Data | Accept (spotlight) | Summary: The paper proposed a new method for partial information decomposition (PID) on multivariate Gaussian distributions. The issue of bias was discussed, and a correction method was provided. The method was tested on synthetic canonical examples and real data.
Strengths: 1. The introduction clearly lays out the problem.
2. The paper is extremely well written with notations clearly defined.
3. The method extends prior work on PID with new properties.
4. The method is rigorously tested on simulated data.
5. The method solves an important problem, namely “the extent to which one region’s activity uniquely explains that of another, while excluding information corresponding to spontaneous behaviors” as stimulus could make two regions seem correlated.
Weaknesses: 1. The paper does not provide enough real data to show effectiveness in real applications.
2. Testing on higher dimensional settings would be important.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Would non-modeled confounding factors affect the results?
Summary after rebuttal
The added simulations for testing higher dimensional data further justify the high score I gave.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper does not have negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
> 1. The paper does not provide enough real data to show effectiveness in real applications.
We thank the reviewer for raising this point. We have tried to ameliorate this concern by demonstrating how the method works with a larger number of neurons (i.e., PCA components) in the same dataset (Fig. 8 and 9 in the attached PDF). We have also demonstrated our method on more simulated non-Gaussian examples, to understand the extent to which our method applies in non-Gaussian settings (Figs. 1-4 in the attached PDF). The focus of our paper was showing that our method works well on examples where the ground truth was known, as well as showing a proof of concept on real data. A more extensive evaluation of our method on real datasets is certainly necessary, but we feel it is beyond the scope of this paper.
> 2. Testing on higher dimensional settings would be important.
We have increased the dimensionality of the simulated high-dimensional example with known ground truth (Example 10), and we find that our method begins to depart from the ground truth at a dimensionality of 512 (please see the overall rebuttal and Fig. 5 in the attached PDF).
**Questions**
> Would non-modeled confounding factors affect the results?
Our Gaussian PID depends only on the joint covariance matrix between M, X and Y. Different models of external confounding that result in the same covariance matrix will not affect the PID values obtained. However, if one or more of these confounding variables are included in M, X or Y, and the PID is then re-computed, that may change the PID values completely compared to the case where the confounding variable is not included.
Confounders are also important to consider when interpreting PID values. Our ~_G-PID method is based on the covariance matrix between M, X and Y, and is thus a correlational quantity. We cannot make causal claims based on observed PID values. Many of the same caveats that apply to interpreting correlations would also apply to interpreting observed PID values. A detailed analysis of the connections between different structural causal models (e.g., see Peters et al. 2017, “Elements of Causal Inference”) and their PID profiles has not been undertaken in the literature, to our knowledge, but could be the subject of future work (which is also better enabled by this paper).
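To make the covariance-only point above concrete: for jointly Gaussian variables, mutual information has a closed form in terms of covariance blocks alone, so any quantity built from it (such as the ~_G-PID) is a function of the joint covariance matrix. A minimal sketch (hypothetical helper names, not the authors' implementation):

```python
import numpy as np

def gaussian_mi(cov, idx_x, idx_y):
    """Mutual information I(X;Y) in nats between jointly Gaussian
    variables, computed from the joint covariance matrix alone:
    I(X;Y) = 0.5 * [log det(C_XX) + log det(C_YY) - log det(C_joint)]."""
    cov = np.asarray(cov, dtype=float)
    cxx = cov[np.ix_(idx_x, idx_x)]
    cyy = cov[np.ix_(idx_y, idx_y)]
    joint = cov[np.ix_(idx_x + idx_y, idx_x + idx_y)]
    return 0.5 * (np.linalg.slogdet(cxx)[1]
                  + np.linalg.slogdet(cyy)[1]
                  - np.linalg.slogdet(joint)[1])

# Two scalar variables with correlation 0.5:
# I(X;Y) = -0.5 * log(1 - 0.5^2) = 0.5 * log(4/3)
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
mi = gaussian_mi(cov, [0], [1])
```

Any model of confounding that leaves this covariance matrix unchanged leaves `mi` (and, by the same token, the PID values) unchanged.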
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: The added simulations for testing higher dimensional data further justify the high score I gave. | Summary: Partial Information Decompositions (PIDs) play an essential role in neuroscience research. One constraint on the broader usability of PIDs is the computational difficulty of computing PIDs for high-dimensional neural recordings. To address this concern, the authors propose a method to compute and estimate a PID efficiently. More specifically, they restrict the optimization space of PIDs to jointly Gaussian distributions, which reduces the number of optimization variables, allowing them to compute PIDs for much higher dimensionalities of neural data. The authors then show that their method can be written in closed form and solved by projected gradient descent, and they use nine examples of increasing complexity to show that their method recovers the ground truth and remains stable as dimensionality increases. The authors also claim to be the first to heuristically correct the bias and variance of the estimates. Finally, the authors evaluate the performance of their method on both synthetic and real neural data.
Strengths: * Computational scalability of the $\sim$-PID, which satisfies a basic property called additivity.
* The authors' exposition of their method and experiments is clear.
* The first work to raise the issue of bias in PID estimates.
Weaknesses: * No analysis is done to show the stability of $\delta$-PID over increasing dimensionality. According to the paper, there are two differences between $\delta$-PID and $\sim$-PID. The first one is the "additivity" property, which is clearly shown by Examples 8-9 in Section 4. The second one is that $\sim$-PID uses an exact upper bound, while $\delta$-PID uses an approximate upper bound.
However, no analysis is performed to show how the second difference affects performance. In other words, people may be curious about the stability of $\delta$-PID over increasing dimensionality if we use examples without the "additivity" property (where $\delta$-PID and $\sim$-PID both agree with the ground truth).
Currently, it's hard to see the differences between $\delta$-PID and $\sim$-PID when applied to high-dimensional neural data. I think this point may determine whether the proposed contribution is timely and impactful, or a solid technical upgrade without practical consequences in neuroscience.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Does $\delta$-PID also agree with the ground truth even when distributions are not Gaussian?
* Could you please discuss how the "additivity" property would affect the analysis of communication among brain regions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I have highlighted technical limitations and weaknesses above. I have nothing further to add here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
> No analysis is done to show the stability of delta-PID over increasing dimensionality. According to the paper, there are two differences between delta-PID and ~-PID. The first one is the "additivity" property, which is clearly shown by Examples 8-9 in Section 4. The second one is that ~-PID uses an exact upper bound, while delta-PID uses an approximate upper bound.
>
> However, no analysis is performed to show how the second difference will affect the performance. In other words, people may be curious about the stability of delta-PID over increasing dimensionality if we use examples with no "additivity" property (delta-PID and ~-PID both agree with the ground truth).
>
> Currently, it's hard to see the differences between delta-PID and ~-PID when applying to high dimensional neural data. I think this point may determine whether the proposed contribution is timely and impactful or a solid technical upgrade without practical consequences in neuroscience.
We thank the reviewer for raising this point. We have added a comparison with the delta-PID for a larger dimensionality (please see Fig. 6,7 in the attached PDF).
Just as the delta-PID fails at a dimensionality of 2, it also fails at 64 (i.e., the error due to additivity extends to higher dimensions). Furthermore, we found that the delta-PID method begins to take a very large amount of time to compute (> 2 hours) at $d_M = d_X = d_Y = 128$, and completely exceeds the memory capacity of a 48 GiB workstation at a dimensionality of 256. We have included a timing analysis (please see Fig. 7 in the attached PDF) to show the difference in speed of the two methods; our method runs over 1000 times faster at a dimensionality of 64. We will add this analysis to the paper.
It is much harder to measure the difference between the delta- and ~-PIDs that arises from the delta-PID recovering only an approximate upper bound. The cases where the delta-PID agrees with the ~-PID are points where the solution is known in closed form due to a theorem from [12]. In other words, these are "easy" cases. All non-trivial cases are the ones that are constructed using the additivity property. The upshot of this analysis is that the degree to which the ~-PID outperforms the delta-PID because the latter's upper bound is approximate remains unknown. We will mention this point in the revised paper.
However, we still believe our method will be impactful and have practical consequences for neuroscience, because it works accurately at much higher dimensions, runs much faster, and due to the importance of the additivity property (as described below).
**Questions:**
> 1. Does delta-PID also agree with the ground truth even when distributions are not Gaussian?
To the extent that we consider Banerjee et al.’s (ISIT 2018) method to provide ground truth, we show in the paper in Fig. 5 that the delta-PID does not agree with the ground truth to the same degree as the ~-PID for a non-Gaussian distribution. However, it should be noted that Banerjee et al.’s method uses the ~-PID definition. Thus any difference between the delta_G-PID (dashed line) and Banerjee et al.’s method (“ground truth”) could be a result of the difference in the two definitions. Unless we have a more accurate way to compute the delta-PID for discrete distributions, we would not be able to say whether the difference was purely due to the difference in definitions, or due to a difference in accuracy. For additional clarity, we will add this discussion to the paper.
> 2. Could you please discuss how the "additivity" property would affect the analysis of communication among brain regions?
Additivity is an extremely fundamental property: effectively, it states that the PID values of an isolated system should not depend on the PID values of another isolated system. Without additivity, it is not possible to examine systems in isolation, since broadening your view to include a different isolated system could change the PID values of the first system.
Hypothetically, if we take two separate individuals receiving completely independent stimuli, and examine the PID between the activity in brain regions M, X and Y in each of their brains, then the effective unique information that X1 and X2 have about M1 and M2 with respect to Y1 and Y2 should be equal to the sum of the unique information in each of the individuals taken separately.
As another idea, suppose we are trying to examine visual information flow and auditory information flow in a multi-sensory integration task. We may want to understand the degree to which the activity in the sub-regions of the visual and auditory systems depend on each other. A reasonable null model of independence between the two systems would be that the PID values of the joint system will be equal to the sum of the PID values in the two individual systems. Then, measuring the actual degree to which the joint PID value is not equal to the sum will be a meaningful measure of dependence between the systems. This would only be possible with a PID definition that _guaranteed_ additivity of independent sub-systems. However, since the delta-PID does not satisfy the additivity property, we cannot guarantee that the aforementioned null would be the correct null model, and we would not be able to perform such an analysis. We will add this example to the supplementary material.
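The additivity property invoked in both examples above can be written schematically (notation assumed for illustration: $UI(M : X \setminus Y)$ denotes the unique information that $X$ carries about $M$ with respect to $Y$, and the two systems are mutually independent):

```latex
% For independent systems (M_1, X_1, Y_1) \perp (M_2, X_2, Y_2):
UI\big((M_1, M_2) : (X_1, X_2) \setminus (Y_1, Y_2)\big)
  = UI(M_1 : X_1 \setminus Y_1) + UI(M_2 : X_2 \setminus Y_2)
```

and analogously for the redundant and synergistic components; the proposed null model of independence is exactly the right-hand side.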
---
Rebuttal Comment 1.1:
Title: Follow up comment to Reviewer DDWz
Comment: We request Reviewer DDWz to let us know if our responses have addressed their concerns, and to kindly consider whether our paper is now worthy of a better score.
We also request the reviewer to get back to us soon, since author responses will be closed after August 21st, 1pm EDT.
In addition to the overall rebuttal and the reviewer-specific rebuttal, we would also like to draw the reviewer's attention to the analysis posted in an official comment at the top of this page, which extends our simulation in Example 10 to 1024 dimensions in each of M, X and Y. This further shows how our method is more capable and applicable than the $\delta$-PID of [12], which fails at a dimensionality of 256 in the same experiment. | Summary: The authors propose a new, efficient method for computing Partial Information Decompositions (PIDs) on multivariate Gaussians. They build their approach around the $\sim$-PID approach, as this allows them to preserve an additivity property (allowing PIDs to be computed on independent systems separately and then added later). They present a number of canonical examples for Gaussians, shaw that their Gaussian PID works even when distributions are non-Gaussian, and show an example of its use on real neural data from the Allen Institute. Finally, they address the issue of bias in PID estimates, propose a bias-correction method, and evaluate it empirically.
Strengths: The paper is very clearly written and the topics well explained. The literature review seems comprehensive, and although of somewhat specific topical interest, the work seems original and useful.
Weaknesses: Given one of the major motivating factors given in the introduction is the need for efficient estimators of PID that can accommodate higher dimensional neural data, I was disappointed that in the end the authors chose to apply their method to a PCA-reduced version of the Allen Institute neuropixel probe data... wasn't the point to be able to address higher dimensional problems? It's thus not completely clear to me that this method, as is, has delivered on the promise of providing a PID approach more applicable to data with thousands of neurons than other PID methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors method run on thousands of neurons, consistent with the original motivation they provide?
What are the limitations on dimensionality more explicitly for this method?
What is the computational complexity?
Does the author's method require using PCA or having high firing rate neurons?
How does it deal with low firing rate neurons or highly inhomogenous Poisson variables?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address limitations, although it would be good to hear more related to the questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
> Given one of the major motivating factors given in the introduction is the need for efficient estimators of PID that can accommodate higher dimensional neural data, I was disappointed that in the end the authors chose to apply their method to a PCA-reduced version of the Allen Institute neuropixel probe data... wasn't the point to be able to address higher dimensional problems? It's thus not completely clear to me that this method, as is, has delivered on the promise of providing a PID approach more applicable to data with thousands of neurons than other PID methods.
We thank the reviewer for raising this concern and providing us with an opportunity to address it. We have now performed the PID analysis on the Allen Institute data again, using a larger number of PCA components (as described below; Figs. 8, 9 in the attached PDF). We were forced to use PCA because there were often more neurons in each region than trials available for computing the covariance matrix (in other words, if we had directly computed the covariance of the neural activity, the covariance matrix would have been rank-deficient). We wanted to ensure that we obtained a reasonable estimate of the covariance, and to minimize the error in our PID estimate, which would naturally be higher for higher PCA dimensions.
In the new analysis, for each mouse in the dataset, we used as many PCA components as possible, based on the number of neurons in each region and the number of trials available for computing the covariance matrix. We used the maximum number of PCA components subject to these constraints, while using the same number of components across all regions and across change and non-change conditions (within each mouse), so as to perform a fair comparison. This gave us on average $53 \pm 16$ PCA components, with a minimum of 22 and a maximum of 84 PCA components. We find that the results of higher and more sustained redundancy on change flashes continue to hold in the new analysis (Figs. 8,9 in the attached PDF). However, as expected, there is much greater variance across mice, possibly due to variability in the number of PCA components chosen across mice, and due to greater errors in our PID estimates. We will add these new results to the revised supplementary material.
For consistency across mice, we used a common basis of 10 or 20 PCA components across all mice (and across all regions and conditions). Fig. 6 in the paper had 10 PCA components; Figs. 13 and 14 in the supplementary material of the paper had 20 PCA components. We will also include the rationale for selecting up to 20 PCA components in the revised version of the paper.
Apart from this, to show that we can go to higher dimensions, we have also extended our high-dimensional example (Example 10) to a dimensionality of 256 (equivalent to a total of 768 neurons across regions). It should also be noted that other PID methods do not come close (please see the overall rebuttal for comparisons): the discrete PID estimators are limited to low single-digit dimensions, while other methods do not show ground truth validation at high dimensions.
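The component-selection rule described above (as many PCA components as the neuron and trial counts allow, shared across regions and conditions) might look roughly like the following sketch (hypothetical function, not the authors' code):

```python
def n_pca_components(n_neurons_per_region, n_trials):
    """Pick a common number of PCA components across regions/conditions:
    bounded by the smallest region's neuron count, and kept below the
    trial count so the estimated covariance stays full-rank."""
    return min(min(n_neurons_per_region), n_trials - 1)

# Toy example: three regions with 120, 85 and 60 neurons, 23 trials
k = n_pca_components([120, 85, 60], n_trials=23)
```

Using the same `k` for every region and condition within a mouse keeps the comparison fair, at the cost of discarding dimensions in the larger regions.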
**Questions**
> Can the authors method run on thousands of neurons, consistent with the original motivation they provide? What are the limitations on dimensionality more explicitly for this method? What is the computational complexity? Does the author's method require using PCA or having high firing rate neurons? How does it deal with low firing rate neurons or highly inhomogenous Poisson variables?
1. We have re-run our analysis on as many dimensions as possible in the Allen Institute dataset, as described above. We have also increased the dimensionality in our simulated data experiment and shown that our method works well up to 256 dimensions in M, X and Y each (equivalent to 768 total neurons).
2. The method departs from the ground truth at 512 dimensions; we will identify the reason and provide an update.
3. The computational complexity is O(ND^3) for the various matrix operations, where D=d_M+d_X+d_Y, and N is the number of iterations until convergence. We also provide a timing analysis in Fig 7 (attached PDF).
4. Our method does not require the use of PCA, however, we used PCA to obtain more stable estimates of the covariance matrix, and to minimize bias and error, as described above.
5. Our method also does not require particularly high firing rates: this is demonstrated in the multivariate Poisson simulation, where M1 and M2 had mean “spike-counts” of 2 (see Fig. 4a in the attached PDF). However, we expect that as firing rates increase, the data will appear more Gaussian, as a result of which our method will become more accurate. We will include a mention of this in the revised paper.
6. Inhomogeneous Poisson processes are those whose mean firing- (or “emission-”) rates change with time. Our method for estimating the PID does not consider the temporal characteristics of the original signal. Rather, the data analyst can choose the random variables M, X and Y to span some time range (or different time ranges) as they please. In our example, we counted the number of spikes in a 50-125 ms window after stimulus onset. Even if the underlying spiking process was an inhomogeneous Poisson process, this spike count would be Poisson distributed, with a mean given by the integral of the emission rate over the fixed window. In general, while analyzing data and computing PID values, one will have to be aware of severe inhomogeneities (e.g., if the distribution changes not just in the same way in each trial, but changes across trials), and account for them separately. For example, in our analysis, we excluded time periods when the mice were not “engaged” in the task, as defined by not actively consuming rewards at a rate of at least 2 rewards per minute. We will add a discussion of inhomogeneity, as well as how to choose M, X and Y, to the paper or to the supplementary material.
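The windowed spike-count construction described in point 6 can be sketched as follows (hypothetical data layout; only the 50–125 ms post-onset window matches the analysis described above):

```python
import numpy as np

def spike_counts(spike_times, stim_onsets, window=(0.050, 0.125)):
    """Count spikes per trial in a fixed window after each stimulus onset.

    spike_times : 1-D array of spike times (s) for one neuron
    stim_onsets : 1-D array of stimulus-onset times (s), one per trial

    Even if the underlying process is inhomogeneous Poisson, each count
    is Poisson with mean equal to the integral of the rate over the window.
    """
    spike_times = np.asarray(spike_times)
    counts = []
    for t0 in stim_onsets:
        lo, hi = t0 + window[0], t0 + window[1]
        counts.append(np.count_nonzero((spike_times >= lo) & (spike_times < hi)))
    return np.array(counts)

# Toy example: two trials with stimulus onsets at 0 s and 1 s
spikes = np.array([0.06, 0.10, 0.30, 1.055, 1.20])
counts = spike_counts(spikes, np.array([0.0, 1.0]))
```

The per-trial counts (one vector per region) then play the roles of M, X and Y in the covariance estimate.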
---
Rebuttal Comment 1.1:
Title: Follow up comment to Reviewer wULj
Comment: We request Reviewer wULj to let us know if our responses have addressed their concerns, and to kindly consider whether our paper is now worthy of a better score.
We also request the reviewer to get back to us soon, since author responses will be closed after August 21st, 1pm EDT.
In addition to the overall rebuttal and the reviewer-specific rebuttal, we would also like to draw the reviewer's attention to the analysis posted in an official comment at the top of this page, which extends our simulation in Example 10 to 1024 dimensions in each of M, X and Y (equivalent to 3072 total neurons). We believe this addresses the reviewer's central point about extending our method to thousands of neurons, as explained in the official comment. | Summary: This article proposes an upper bound on mutual information (more precisely: on the "unique information" / "union information" that appear in "Partial Information Decomposition") that is easier to compute. This is done by replacing an infimum over all distributions matching given marginals by an infimum over only the ones that are Gaussian.
Toy examples (with Gaussian distributions) are given, and an experiment on real data is performed.
Further theory also includes considerations about biases arising from covariance estimators.
Strengths: The paper is mostly clearly written, with simple examples to help the reader follow.
The paper is self-complete, with reminders of definitions.
Considering covariance estimator biases is a plus.
Weaknesses: EDIT: after discussion with the authors, I see that I had misunderstood the scope and contributions of the paper. The weaknesses below no longer hold (or not as strongly).
-----------
The main issue is that the definition of the upper bound actually supposes that the distribution MXY that is studied is Gaussian, at least marginally. Indeed the infimum is performed over all possible distributions Q_MXY that are Gaussian and whose marginals satisfy Q_MX = P_MX and Q_MY = P_MY, which implies that P_MX and P_MY have to be Gaussian (otherwise the infimum is performed over an empty set).
As a consequence, this upper bound definition cannot be applied to datasets that are not Gaussian (at least marginally). This crucial point is not discussed in the paper.
Actually, from Sections 3.2 and 5, it seems that the upper bound is estimated based on covariance matrices only, without using distribution Gaussianity. If this is indeed the case, then the definition could be changed to rely directly on these covariance matrices, removing the Gaussianity assumption. This, however, would probably require significantly rewriting the paper.
Another issue is the lack of sufficient validation, either theoretically or experimentally, in particular regarding distributions that are not Gaussian (i.e., really not Gaussian). This comment could have been tempered if there had been a discussion about general approximative Gaussianity of distributions in the field of study, but there is none.
Also, the impact of the introduction of such a Gaussianity constraint in the upper bound estimation should be studied closely: how realistic is this assumption, how far are real optimal Q_MXY from their Gaussian versions, how tight is the upper bound thus obtained, etc.
If this was a theoretical paper, I would expect theoretical results (guarantees such as a bound on the error made by the upper bound, or proving the conjectures, etc.). If this was an experimental paper, I would expect an extensive validation on many distributions for which the actual mutual informations are estimated (brute force or known solutions) in settings as varied as possible (not just normal distributions + a single real dataset). But the paper, in its current state, does not meet these expectations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would be willing to revise my score if the points in the section above were significantly addressed.
EDIT: they were.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Cf the validation issue in the weakness section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
> The main issue is that the definition of the upper bound actually supposes that the distribution MXY that is studied is Gaussian, at least marginally … This crucial point is not discussed in the paper.
It is true that our definition applies only to Gaussian P_MXY (in fact, it must be jointly Gaussian, not just marginally), and that the definition technically does not apply to non-Gaussian distributions. This is why we include the word “Gaussian” in the title of the paper. We mention “we compute/estimate the PID on Gaussian distributions”, in the abstract (lines 9-11) and in the introduction (lines 48-49). We also make it clear that M, X and Y are jointly Gaussian just above Definition 2 (lines 112-113), and again in lines 132-133. Overall, we believe we have tried to convey the point that P_MXY is assumed to be jointly Gaussian.
Nevertheless, we understand that there is a potential for confusion between the Gaussianity of P_MXY and the assumption restricting the optimization variable, Q_MXY, to be Gaussian. To make the Gaussianity of P_MXY more explicit, we will state this assumption within Definition 2 itself, rephrase line 120, and we will also explicitly make the point that the definition does not apply to non-Gaussian distributions, strictly speaking. We will also make small edits as necessary throughout the whole paper to ensure that this distinction is clear. We welcome any suggestions that may help make this point more clearly.
Even though our method is designed only for Gaussian distributions, we believe it is an important contribution. Gaussian distributions have historically been a starting point for many estimators (e.g., correlation is used as a measure of dependence, but is exact only in the Gaussian case). Our ~-PID estimator is essentially the best in the field in terms of providing ground-truth validation at high dimensionalities; this was made possible because of Gaussianity. While the central claims and results of the study focus on Gaussian distributions, we showed that when distributions are close to Gaussian (e.g., Poisson) our method for computing PIDs does not immediately break down.
> Actually, from Sections 3.2 and 5, it seems that the upper bound is estimated based on covariance matrices only … definition could be changed … would probably require to significantly rewrite the paper.
As the reviewer correctly notes, our definition depends only on the covariance matrix, and can thus be applied to any data, even if the data itself is non-Gaussian. We do not believe this requires a rewrite of the paper; rather, we just apply an estimator that was designed for Gaussian distributions to data that is non-Gaussian. This is what we do in section 6: while Gaussian P_MXY forms the main scope of the paper in Sections 1-5 (please see the overall rebuttal), we justify the applicability of our method to spiking data using a Poisson simulation.
> Another issue is the lack of sufficient validation, either theoretically or experimentally, in particular regarding distributions that are not Gaussian …
Given that the main scope of the paper in Sections 1-5 is Gaussian distributions, we believe we have demonstrated sufficient empirical validation on Gaussian P_MXY. This has been shown through Examples 5-10 and Figures 1-4. Compared to other papers, we believe our paper provides one of the most comprehensive ground-truth validations (please see the overall rebuttal for a detailed comparison with [17] and [21]).
However, we agree that it is important to justify that our method can be applied to non-Gaussian neuroscientific data, which we do in Section 6 and in the additional analyses (please see overall rebuttal). However, we do not make claims in general non-Gaussian cases, which could be very far from Gaussian. We will make this explicit in the revised paper. While a broader and more rigorous analysis of non-Gaussian distributions is important, we believe it is beyond the scope of the current paper.
> … If this was an experimental paper, I would expect an extensive validation on many distributions for which the actual mutual informations are estimated (brute force or known solutions) …
This is an empirically-focused paper, and we do have extensive validation for Gaussian distributions, which is the main scope of the paper.
As the reviewer states, comparisons on a wider variety of Gaussian and non-Gaussian examples are limited by the availability of ground truth. This is why we rely on the MMI-PID (Definition 3) and Barrett’s theorem [13] (lines 177-178), which provides a closed form expression for the ~-PID for Gaussian P_MXY, _for scalar M_. Using this, we show that we can recover the ground truth by restricting Q_MXY to be Gaussian in Examples 5-7 (Fig 1). We then construct more complex examples using additivity (Property 1), and show that even then, the restriction to Gaussian Q_MXY can recover the ground truth (Examples 8-10; Figs 2, 3). To go beyond these examples, we do not know of brute-force methods for computing the ~-PID for Gaussian P_MXY, except for [17], which applies to general distributions P_MXY with scalar M, X and Y (which is trivial for Gaussian P_MXY due to Barrett’s result), and [21], which is not sufficiently well-tested to consider as “ground truth”, and was also published too recently for us to perform comparisons. For the same reasons, comparisons with non-Gaussian distributions at higher dimensions (beyond small Poisson/Binomial) are also limited by the availability of ground truth.
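For intuition on the ground truth invoked here: for scalar M, the MMI-PID takes redundancy to be the minimum of the two single-source mutual informations, and the remaining components follow from the usual bookkeeping. A schematic sketch (inputs are assumed precomputed mutual informations in nats; not the authors' implementation):

```python
def mmi_pid(i_mx, i_my, i_mxy):
    """Minimum-mutual-information PID: redundancy = min(I(M;X), I(M;Y)),
    with unique and synergistic terms defined so the four components
    sum to I(M;(X,Y))."""
    redundancy = min(i_mx, i_my)
    unique_x = i_mx - redundancy
    unique_y = i_my - redundancy
    synergy = i_mxy - i_mx - i_my + redundancy  # remainder of I(M;(X,Y))
    return {"red": redundancy, "unq_x": unique_x,
            "unq_y": unique_y, "syn": synergy}

# Toy values: I(M;X)=0.4, I(M;Y)=0.3, I(M;(X,Y))=0.9
parts = mmi_pid(i_mx=0.4, i_my=0.3, i_mxy=0.9)
```

By construction, `red + unq_x + unq_y + syn` recovers `i_mxy`, which is the consistency check used when comparing against this ground truth.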
> I would be willing to revise my score...
To summarize:
- We will make the distinction between the Gaussianity of P and Q clear
- For the scope of Gaussians, we believe our evaluations are comprehensive
- We have performed additional analyses on non-Gaussian distributions, and we will be explicit that our method does not extend in general
Please let us know if our arguments and updated results are convincing, and how we may improve our score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I had misunderstood the scope of the paper indeed. I thought the Gaussian case was trivial and that the point was to reduce non-Gaussian examples to Gaussian ones as a first approximation (through Definition 2). Now I understand that even the Gaussian case is complex.
I thought that for Gaussian distributions, the minimizer Q in Definition 1 would necessarily be Gaussian, and thus introducing Definition 2 was relevant only for non-Gaussian distributions (hence the misunderstanding on the scope).
If the conjecture is true, i.e. that minimizers Q in the Gaussian case are Gaussian indeed, then the two definitions are identical. If I understand well, the contribution of the paper in that case is not to bring a new definition, but to make explicit use of the knowledge that the minimizer is Gaussian, to help the optimization process, while this was not done in previous papers. Am I right?
---
Reply to Comment 1.1.1:
Title: Response to Comment by Reviewer Tp5P
Comment: We are very grateful to the reviewer for their response.
> I had misunderstood the scope of the paper indeed. I thought the Gaussian case was trivial and that the point was to reduce non-Gaussian examples to Gaussian ones as a first approximation (through Definition 2). Now I understand that even the Gaussian case is complex.
>
> I thought that for Gaussian distributions, the minimizer Q in Definition 1 would necessarily be Gaussian, and thus introducing Definition 2 was relevant only for non-Gaussian distributions (hence the misunderstanding on the scope).
The reviewer is absolutely correct that the Gaussian case is also non-trivial in general. They also correctly recognize that it is _not known_ whether the minimizer Q is guaranteed to be Gaussian if P_MXY is jointly Gaussian. Even for Gaussian P_MXY, this is only known in a few specific cases such as when M is scalar, or if P_MXY satisfies a condition shown in [12].
> If the conjecture is true, i.e. that minimizers Q in the Gaussian case are Gaussian indeed, then the two definitions are identical. If I understand well, the contribution of the paper in that case is not to bring a new definition, but to make explicit use of the knowledge that the minimizer is Gaussian, to help the optimization process, …
Indeed, we do _not_ provide a new definition. Rather, the central contribution of our paper is to develop and evaluate an efficient method to estimate the PID for Gaussian P_MXY, by restricting the space of the minimizer Q to Gaussian variables. As the reviewer states, this would result in the correct solution if our conjecture is true. By assuming Q to be Gaussian, we can make use of the properties of Gaussian distributions (e.g., closed-form expressions for mutual information) to develop an efficient projected gradient descent optimizer, and then check how well we perform (please see _”Evaluating the Gaussianity assumption”_ below).
> … while this was not done in previous papers. Am I right?
A previous paper by Venkatesh and Schamberg [12] used a similar technique of restricting Q to be Gaussian for computing a different PID definition called the delta-PID. Our work makes significant advances over [12], on multiple fronts:
1. We compute a different PID definition, the ~-PID, which satisfies better properties than the delta-PID used in [12]. Most importantly, the ~-PID satisfies a very fundamental property called **additivity** (please see our response to reviewer DDWz on why additivity is fundamental).
2. We examine several examples (Examples 5-10), increasing in complexity and dimensionality, to show that restricting Q to be Gaussian is reasonable, and allows us to recover the ground truth. [12] only tested this for the simplest case of Gaussians with scalar M, and fails in more complex cases (Example 8; Fig 2 in the paper).
3. Our method is faster (please see Fig 7 in the PDF attached to the overall rebuttal).
4. Our method is capable of computing higher dimensionalities (as described in our response to reviewer DDWz).
5. Finally, we consider the problem of _estimation_, i.e., computing the PID from a covariance matrix that is estimated from real data, and show how bias in PID estimates can be corrected. This is not addressed in [12].
These points are summarized in the overall rebuttal near the top of this page.
**Evaluating the Gaussianity assumption**
Since we do not have a guarantee that the minimizer Q is actually Gaussian for all Gaussian P_MXY, we test what the reviewer asked in their original review, but for _Gaussian_ P_MXY. That is, we test: “how realistic is this assumption [of restricting Q to be Gaussian], how far are real optimal Q_MXY from their Gaussian versions”. We consider a number of examples of increasing complexity (Examples 5-10), where the ground truth PID values are known (starting with scalar M [13], and then using the additivity property). We show that our method is able to recover the ground truth, proving that for all the examples we consider, the optimal Q _is_ in fact Gaussian, and hence our conjecture still stands.
Incidentally, our method is also applicable to non-Gaussian P_MXY, because it only relies on the covariance matrix, as noted by the reviewer in their original review. So we further tested whether it was reasonable to apply our method to Poisson distributions (now extended to more non-Gaussian distributions; see overall rebuttal). Here, we do not have ground truth, so we instead compared our results with another ~-PID estimator [20] (which works only for discrete distributions with limited support). We showed that our PID estimator with Q-restricted-to-be-Gaussian comes very close to the discrete estimator of [20] on a multivariate Poisson distribution P_MXY (Fig 5 in the paper). We then used this as a basis to apply our PID estimator to real neural data, and demonstrated its utility in providing new insights about interactions between brain regions (Fig 6 in the paper). | Rebuttal 1:
Rebuttal: We sincerely thank all of the reviewers for taking the time and effort to provide thoughtful and constructive reviews of our work.
We use this space to address a few comments that were common across reviewers, and to explain the additional analyses we perform in response to these comments. We also reiterate the scope and the central contributions of our paper, and describe how these are laid out in the paper.
**Summary of new analyses (and figure numbers in the attached PDF)**
1. Increased dimensionality in Example 10 (Fig 5)
2. Increased number of PCA components in neural data analysis (Fig 8,9). Please see response to reviewer wULj for details.
3. Comparison with delta-PID at higher dimensions (Fig 6,7)
4. More non-Gaussian examples: Binomial, and zero-inflated Binomial and Poisson (Figs. 1-4)
**Scope of the paper**
Our paper provides a new method to compute and estimate the ~-PID definition for multivariate, jointly Gaussian distributions P_MXY. This is done by assuming Gaussian optimality in the optimization problem used to compute the PID, which helps reduce the number of optimization variables. To justify the Gaussian optimality assumption, we consider a number of Gaussian examples with known ground truth, and show that the output of our method agrees with ground truth, even at high dimensionalities. The aforementioned are the subject of Sections 1-5, which deal strictly with Gaussian P_MXY, and endeavor to demonstrate the efficacy of our method on Gaussian distributions. However, since most real data is non-Gaussian, in Section 6, we showed using a low-dimensional simulation, that our method also gives reasonable results on non-Gaussian (Poisson) data. We then apply it to a publicly available neuroscientific dataset to investigate the amount of redundancy between different visual cortical brain regions.
**Extending our method to higher dimensionalities**
In the submitted paper, we presented evidence of agreement with ground truth only up to the dimensions of M, X and Y each being 128 (i.e., a total of 384 dimensions for a covariance matrix of size 384x384). We have now extended this analysis to higher dimensions (Fig. 5 in the attached PDF), showing that our method continues to agree with ground truth up to M, X and Y each being 256-dimensional, i.e., a total of 768 dimensions, for a covariance matrix of size 768x768. Our method begins to deviate from the ground truth at 512 dimensions each (at which point the covariance matrix has 2.36 million elements). We will investigate the root cause of this deviation and provide an update within a week.
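The matrix sizes quoted above follow directly from stacking M, X and Y; a quick sanity check (a sketch, using only the dimensions from the text):

```python
# Sanity check of the covariance-matrix sizes quoted above: stacking
# M, X and Y, each of dimension d, gives a joint covariance of size
# (3d) x (3d).
for d in (128, 256, 512):
    side = 3 * d
    print(f"d={d}: covariance is {side}x{side}, {side * side} elements")

# d=512 gives 1536x1536 = 2,359,296 elements, the "2.36 million" figure.
assert 3 * 512 * (3 * 512) == 2_359_296
```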
It should be noted that our method is the first to provide PID estimates that agree with ground truth at such high dimensions, and that most other works do not come close (see comparisons below).
**More non-Gaussian examples**
In the submitted paper, we only tested our method on a single non-Gaussian, multivariate Poisson example. We have now extended this to include a multivariate Binomial example, as well as zero-inflated versions of the multivariate Poisson and Binomial setups (Fig 1-4 in the attached PDF). We find that our method performs well on the original Poisson and Binomial versions, agreeing with the discrete PID estimator of Banerjee et al. (ISIT 2018).
In the zero-inflated versions, when we have a bimodal distribution with a point-mass at zero, our Gaussian PID estimator no longer recovers the same _absolute_ PID values as the estimator by Banerjee et al. However, when normalized by the total mutual information, the _relative_ PID values are still close to the values computed by the Banerjee et al. method and follow the correct trends in every case. Thus, for such a distribution, an improved estimate of the mutual information (e.g., Kraskov et al. 2004; Belghazi et al. ICML 2018) could potentially be used to correct the absolute PID values, after a more thorough evaluation.
The primary challenge with testing non-Gaussian distributions is the absence of good ground truth. We currently rely on a method from Banerjee et al. (2018), designed for discrete distributions, as ground truth. We cannot be sure if the errors seen in the final plot are a result of errors in our method, or of errors in computing the ground truth itself.
**Improvements over the delta-PID & other PID measures**
Many reviewers asked us to expand upon how our work improves on prior art. We provide a summary below, which we will include in the Related Work section of our paper. Also, we have now compared our method with the delta-PID at higher dimensions (Fig 6,7 in the attached PDF).
We divide previous works into 3 categories:
1. Compared to Venkatesh et al. (ISIT 2022), here we have:
- Better ground truth validation
- Ability to compute at higher dimensionality (256, rather than 64 or 128)
- Faster computation (>1000x faster at d_M=d_X=d_Y=64)
- Satisfy additivity (which is important; see response to reviewer DDWz)
- Correct for bias in estimates
2. Compared to discrete PID estimators (Banerjee et al., ISIT 2018; Makkeh et al., Entropy 2018), we have
- Much higher dimensionality: both discrete PID estimators were demonstrated for discrete M, X and Y with support sizes of at most 18 in Makkeh et al., which corresponds to a total support of 18^3=5832 for Q_MXY. If we assume a single neuron can have at most 10 spikes (i.e., support of 10), then these methods can handle ~4 neurons, since log_10(5832) < 4.
- We discuss estimation and how bias can be corrected
3. Compared to the continuous PID estimators of Pakman et al. [17] and Liang et al. [21], we have:
- An estimator that matches ground truth at high dimensionalities. [17] shows ground truth validation only in scalar cases, while [21] matches ground truth only for scalar binary cases and deviates from ground truth in other examples.
- We demonstrate our method on higher dimensionalities compared to [17].
- We show bias correction
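The support-size arithmetic in point 2 above can be checked in a few lines (a sketch using only the numbers quoted in that point):

```python
import math

# Check of the support-size argument: a per-variable support of 18
# gives a joint support of 18^3 for Q_MXY; at <= 10 spikes per neuron
# (support 10), that joint support fits fewer than 4 neurons.
joint_support = 18 ** 3
print(joint_support)              # 5832
print(math.log10(joint_support))  # ~3.77, i.e. < 4
assert joint_support == 5832 and math.log10(joint_support) < 4
```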
Pdf: /pdf/eec58125c6a1071fb45ede3c9a0e17627e546358.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: First, I would like to thank the authors for the work they put into contributing to the field.
In their paper, the authors provide a new method to estimating a well-known measure, the Partial Information Decomposition (PID). Within the topic of PIDs, their method computes a specific version of PID, the ~-PID.
While methods to compute the ~-PID already exist, the authors' method is advantageous in that
1) it is more efficient to compute (quadratic in number of variables instead of exponential) and
2) in that it corrects for the bias.
The author's then go on to show with simulations and examples that
1) their choice of PID method, the ~-PID, outperforms other PID methods
2) the bias correction works empirically
3) their method is stable even for increasing dimensionality
4) their method can be applied on data which is not Gaussian distributed such as simulated Poisson data and real neural recordings (neuropixel)
Strengths: - The paper addresses a topic which is becoming more and more relevant as the number of neurons that can be recorded from simultaneously is increasing: The question of how information is distributed among neurons and brain areas.
- The paper is written in a very clear and structured way which successfully guides the reader through every logical step.
- The mathematical formulation is sound
- The quality of figures is excellent
Weaknesses: - The paper is an improvement of an already existing method which makes it important and publishable yet disqualifies it for a very high grade.
- The claim of generalizability to non-Gaussian data is not backed up strongly enough empirically or mathematically (refer to "Questions")
- Small mistakes/typos in lines: 177, 206, 322
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In the list of contributions, the authors state in line 50 that they are able to reduce the number of optimization variables from exponential to quadratic. Where can I see this in the equations and derivations?
- The authors show that their method works on non-Gaussian data by applying it to simulated Poisson data and neuropixel spike count data. They explain this by stating that the Gaussian distribution is a good-enough approximation to the Poisson distribution. It is known, however, that neural firing does not follow a Poisson distribution. This is especially pronounced when recorded via 2photon imaging, for which the distribution is strongly inflated at zero. Xue-Xin Wei et al. 2020 propose a zero-inflated gamma model to accurately capture calcium imaging traces. Is it feasible for the authors to test their method on such bimodal distributions which might be harder to approximate with a Gaussian distribution? If the rebuttal time for such experiments is too short, can the authors comment on the expected outcome?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors discuss and point out most limitations. The claim that their method works on non-Gaussian data is only backed by an experiment with Poisson data, however. In my opinion, there is still room for the method to fail for other relevant distributions (refer to point "Questions") which would be worth mentioning as a (possible) limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses:**
> The paper is an improvement of an already existing method which makes it important and publishable yet disqualifies it for a very high grade.
We thank the reviewer for raising this concern, and providing us with an opportunity to explain why we believe our advance is significant:
1. In the overall rebuttal, we explain how our advance is significant with respect to other works in the field, and in particular the work of Venkatesh et al. (2022). In particular, our method works at higher dimensions, runs much faster and corrects for bias.
2. Additivity is an extremely fundamental property: effectively, it states that the PID values of an isolated system should not depend on the PID values of another isolated system (please see our response to reviewer DDWz for an explanation). The ~-PID satisfies the additivity, whereas the delta-PID does not. This makes our method much more practically applicable than the method based on the delta-PID. Apart from additivity, several works have shown that the ~-PID also has other strong operational motivations (e.g., Kolchinsky, Entropy 2022), which has not been shown to the same degree for the delta-PID. These points (additivity in particular), make it extremely important to have a method capable of computing the ~-PID. We will add an explanation of the importance of additivity to the revised version of our paper.
3. Bias correction is extremely important for obtaining correct PID values, as evidenced by the sheer amount of bias seen at small sample sizes (Fig. 4, and Figs. 9 and 10 in the appendix). For example, we find that synergy is often highly over-estimated. To our knowledge, ours is the first paper to demonstrate the severity of not correcting for bias when estimating PIDs from real data, and we are also the first to propose and evaluate a method to correct this bias. We will re-emphasize the importance of bias correction in the revised paper.
> The claim of generalizability to non-Gaussian data is not backed up strongly enough empirically or mathematically (refer to "Questions")
Thank you for raising this point. We did not mean to claim generalizability to non-Gaussian distributions in general. Rather, since our Gaussian PID method uses only a covariance matrix, we tried to justify its application to non-Gaussian spiking neural data before demonstrating its utility in a practical neuroscientific setting. Accordingly, we presented a small multivariate Poisson simulation (which was the largest possible example in which we could compare with known estimators), which we have now extended to include a few more non-Gaussian cases. We hope our new simulations highlight the limitations of the applicability of our method to non-Gaussian data, and show that care needs to be exercised when drawing interpretations from the results of our method. A more detailed response is given below.
> Small typos ...
Thank you, we will correct these.
**Questions:**
> In the list of contributions, the authors state in line 50 that they are able to reduce the number of optimization variables from exponential to quadratic. Where can I see this in the equations and derivations?
We thank the reviewer for raising this question, and we will add the following explanation to the paper:
The general form of the ~-PID involves optimizing over Q_MXY. If we consider M, X and Y to be discrete, with a support of size K in each dimension, then the number of degrees of freedom of Q will be O(K^(d_M + d_X + d_Y)). For general continuous distributions, if we discretize the distribution, the same analysis would hold. In the case of our Gaussian ~-PID method, the optimization variable is \Sigma_{X,Y|M}, which has a dimensionality of just d_X * d_Y.
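As an illustration of this reduction, the two variable counts can be compared directly (a sketch with hypothetical sizes; `discrete_dof` and `gaussian_dof` are our own helper names, not from the paper):

```python
# Illustrative comparison of optimization-variable counts: a discrete
# joint pmf over (M, X, Y) vs. the Gaussian parameterization, whose
# optimization variable is the cross-covariance Sigma_{X,Y|M}.

def discrete_dof(K, d_M, d_X, d_Y):
    """Free parameters of a discrete joint pmf with support K per dim."""
    return K ** (d_M + d_X + d_Y) - 1  # -1 for the normalization constraint

def gaussian_dof(d_X, d_Y):
    """Free entries of the d_X x d_Y matrix Sigma_{X,Y|M}."""
    return d_X * d_Y

print(discrete_dof(10, 4, 4, 4))  # ~10^12: exponential in dimension
print(gaussian_dof(4, 4))         # 16: quadratic in dimension
```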
> The authors ... applying it to simulated Poisson data ... It is known, however, that neural firing does not follow a Poisson distribution. This is especially pronounced when recorded via 2photon imaging for which the distribution is strongly inflated at zero. ... Is it feasible for the authors to test their method on such bimodal distributions ...?
We agree that further validation on non-Gaussian data to understand the impact of non-Gaussianity is important. As described in the overall rebuttal, we have now included an example on a multivariate Binomial distribution (which is also close to Gaussian), as well as zero-inflated versions of both the multivariate Poisson and multivariate Binomial distributions. Our observations are outlined in the overall rebuttal.
We note that a more extensive evaluation of non-Gaussian distributions (especially at higher dimensionalities) is made difficult by the unavailability of ground truth. The “ground truth” in our multivariate Poisson example is obtained using the discrete ~-PID estimator of Banerjee et al. (which we assume is more accurate for discrete variables). Our method provides a better result than the delta-PID and the MMI-PID, at least part of which could be attributed to the difference in definition. Nevertheless, we believe it demonstrates that applying our method to non-Gaussian data need not be ruled out, although care should always be taken in interpreting results.
We will add both the new simulations, as well as this discussion on careful interpretation to the revised version of our paper.
With the addition of new results showing the extent to which our method applies to non-Gaussian data, we request the reviewer to reconsider whether our paper meets the criteria for a higher score.
---
Rebuttal Comment 1.1:
Title: Clarifications and extra experiments
Comment: I thank the authors for their clarifications concerning the impact of the work and for their additional experiments showing the (limited) generalizability to non-Gaussian data. I have raised the score from 6 to 7. | null | null | null | null | null | null |
Optical Transformers | Reject | Summary: The authors analyze the performance, efficiency, and robustness of free-space optical dot-product engines for Transformer acceleration. Measurement results on an SLM-based optical system are demonstrated on some layers in a GPT-like model. System performance/efficiency are estimated and compared to digital computers. Scaling of optical processors is discussed to show the scalability of optical computing platforms.
Strengths: 1. Experimental results on SLM-based free-space optical system has been demonstrated for matrix multiplication.
2. Scalability with future technologies is discussed to show the benefit of optical computing in the future.
Weaknesses: 1. The novelty of the paper raises some concerns as no new hardware design or algorithm innovations have been shown. The SLM-based system and its experimental demonstration are not new. No customized hardware is shown for Transformer. The claimed optical hardware is designed for CNNs/MLPs. On the algorithm part, device quantization, the LUT-based training method, noise analysis, and 4-pass multiplication are standard methods for analog computing. NeurIPS community usually requires certain machine learning contributions. What is the main ML contribution? Probably other venues in the optics community are more suitable for this paper.
2. The demonstrated system is weight-in-place, which needs a large number of parallel MVMs to amortize the weight programming/encoding cost. However, the dynamic attention operations in Transformers and fully-connected layers usually have low arithmetic intensity, especially in GPT-like architectures with a KV cache, which cannot provide a large enough batch dimension to amortize such cost for weight-in-place systems. The usage of a weight-in-place system needs further justification and discussion. A weight-streaming system might be the more suitable architecture for Transformers.
3. In Fig. 2, only a small part of layers in the Transformer block are implemented by optics, while other operations are on all digital computers. In this hybrid case, how large are the efficiency/performance benefits, or is it worthwhile to use optics?
4. For the noise analysis, only shot noise is emphasized, which is much smaller compared to other variations in the system both on the electrical and optics sides. A simple Gaussian added to the output results might be oversimplified as system error modeling.
5. In Line 291, the system assumes a 10 GHz light modulator array. If I understand correctly, the spatial light modulator typically has high resolution but very low switching frequency. This 10 GHz modulation speed needs further justification. How fast is the switching freq for weights and input feature maps? The modulation energy cost is based on thin-film lithium niobate modulators, which are fairly large. How many such large modulators are required to modulate a million pixels?
6. Also, the light source/TIA/ADC power consumption in the camera will be very large if working at such a high frequency. The incoming data fetched from memory will also be a bottleneck, which might not be able to fast enough to feed the 10 GHz optical core. In Line 297, the memory part is completely ignored when compared with digital computers, which might not be a fair comparison even in the near future. Only multiplications are done in optics, the partial product summation is done digitally, especially when it requires 4-pass, which raises concerns about the benefit of this SLM system for Transformer acceleration. More discussion on the system performance/efficiency is recommended.
7. The paper title, Optical Transformers, is very broad; however, the current paper focuses only on SLM-based free-space optics, which suffers from bulky optical setups (low compactness) and noise/alignment/sensitivity issues in practical deployment. More discussion of, and comparison with, other integrated photonic/diffractive hardware is important and necessary. Otherwise, the scope/title of the paper would be better narrowed down to an SLM-based Transformer acceleration platform.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My questions are listed in the above weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Re: Novelty and Contributions to ML**
We are not aware of any previous study on how ONNs would behave in the regime of LLMs, which are at least 100 − 1000× larger than any model simulated for ONN hardware so far. Due to the unavoidable noise in analog physical computing, the fact that ONNs work well for small-scale ML models (mostly developed for computer vision) does not imply they would work equally well for much larger ML models made for language processing. This work contributes to our understanding of whether current developments in LLMs could be suitable (and what the consequences would be) in the context of optical hardware, and vice-versa.
**Re: LLM Inference Caching, Weights-in-Place vs Streaming Weights**
In the case of attention, one might imagine only updating the in-place k/v data incrementally with new tokens and computing attention heads one-at-a-time. This way, the data re-use for attention operations is recovered. This kind of incremental writing and re-use may give the advantage to weight-stationary systems over streaming weights, where they would need to be reloaded. But it does require a lot of weight "memory".
We note that caching has its own memory issues --- it uses an enormous amount of memory which in LLMs may require offloading to off-chip memory (which is an incredibly expensive data-access overhead [1]). These issues with large caches also suggest that a naively implemented LLM on ONN platforms can be thought of as a way to save memory rather than energy: In a scenario where caching is the default, one might imagine replacing it with a fixed-size, fast accelerator system that can just recompute the data nearly for free. In general, and even for GPU, compute is cheap while data access is expensive.
Many Transformer architectures can or must perform the full attention/MLP computations in parallel. Some examples include: vision [2], language [3], and transfer learning on downstream tasks. We are interested in Transformers as multi-purpose models that achieve state-of-the-art performance in tasks beyond language generation.
**Re: The Amount of Optical vs Digital Operations**
Most operations in LLMs are linear, and can be readily implemented by optics. In our experiment, only part of these linear operations were run optically on our experimental setup (marked with the laser icon in main text Fig. 2) for the purpose of error characterization. In our simulation, all linear operations were simulated as running optically with the experimentally derived error/noise model described above. We apologize for any confusion this might have caused. We only subsampled layers for the experiment because the available hardware we had for our prototype system ran at limited speeds.
**Re: Shot Noise vs Other System Variations**
As for our studies of energy scaling, it was necessary to consider shot noise independently from systematic error, because, in the low-light regime, shot noise eventually dominates over other sources of noise or error.
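A small simulation illustrates why shot noise dominates in the low-light regime (a sketch assuming ideal Poisson photon statistics; the photon counts and sample sizes are arbitrary):

```python
import numpy as np

# Sketch of the low-light argument: a detection with mean N photons is
# Poisson-distributed with standard deviation sqrt(N), so the SNR is
# N / sqrt(N) = sqrt(N), shrinking as the photon budget is reduced.
rng = np.random.default_rng(0)

for n in (10_000, 100, 1):
    counts = rng.poisson(n, size=200_000)
    snr = counts.mean() / counts.std()
    print(n, round(snr, 1))  # empirically close to sqrt(n)
```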
**Re: Suitability of Gaussian Error Profile**
We chose the Gaussian error profile because it very closely matched our experimental measurements (see Section 4/Fig. 3 for results, Appendices B, C for further discussion of the experimental procedure, data sampling, and analysis of precision). That is, the Gaussian profile was observed, not assumed.
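For intuition, the observed error model amounts to adding zero-mean Gaussian noise to an otherwise ideal linear operation (a minimal sketch; `sigma` here is a hypothetical noise scale, not our measured value):

```python
import numpy as np

# Minimal sketch of the error model: an ideal matrix-vector product
# plus zero-mean Gaussian noise on each output element.
rng = np.random.default_rng(0)

def noisy_optical_matvec(W, x, sigma=0.01):
    y = W @ x                                   # ideal dot products
    return y + rng.normal(0.0, sigma, y.shape)  # additive Gaussian error

W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
err = noisy_optical_matvec(W, x) - W @ x
assert np.all(np.abs(err) < 0.1)  # deviations stay at the noise scale
```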
**Re: Efficiency and Feasibility at High Speeds**
Our energy estimates already account for components (ADC, DAC, modulators, etc.) running at the 10 GHz speed, which can be readily achieved by using integrated electro-optic modulators. However, in weight-stationary systems, only inputs need to be updated at this speed, but the weights need to be updated significantly less often if at all.
We acknowledged in the discussion section that a whole system to achieve real speedup/efficiency advantages has yet to be realized. This is still an open challenge, and figuring out how to supply enough data to run at those speeds is definitely another big challenge.
**Re: Data Access Cost (line 297 main text)**
The statement made on line 297 applies to weight loading only; all other memory costs are considered. The energy cost of one-time weight loading can be ignored as long as each loaded weight matrix is reused sufficiently, e.g., by working on large batches.
**Re: Cost of Summation**
Summing the multiplication results happens as the light is fanned in to produce the final magnitude of current corresponding to the value of the dot product, so the operation is not performed digitally, and happens as part of detection.
The four-pass method for dealing with non-negative values in the setup does introduce extra data access costs, but they are insignificant for energy scaling. This is because the four-pass summation happens only once for each dot product, regardless of vector dimension, while the number of optical multiply-and-accumulate scales with vector dimension. Also, ONN systems using coherent light avoid this entirely.
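The four-pass decomposition itself can be sketched as follows (an illustration of the standard positive/negative split, not our hardware pipeline):

```python
import numpy as np

# Sketch of the four-pass trick for hardware restricted to non-negative
# values: split each signed operand into positive and negative parts,
# perform four non-negative multiplications, and recombine with three
# additions per output, independent of the vector dimension.

def four_pass_matmul(A, B):
    Ap, An = np.maximum(A, 0), np.maximum(-A, 0)  # A = Ap - An
    Bp, Bn = np.maximum(B, 0), np.maximum(-B, 0)  # B = Bp - Bn
    # Each of the four products involves only non-negative values.
    return (Ap @ Bp) - (Ap @ Bn) - (An @ Bp) + (An @ Bn)

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 5)), rng.standard_normal((5, 3))
assert np.allclose(four_pass_matmul(A, B), A @ B)
```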
**Re: Discussion on Integrated Photonic Platforms**
We agree with the referee that the optical energy scaling law in this work should be discussed in the context of other experimental platforms, since they are all promising contenders for achieving an optical advantage. We plan to revise our manuscript by adding discussion according to the outline provided in the section of **Comparison to Other Experimental Platforms** in the general rebuttal.
[1] Pope et al. Efficiently Scaling Transformer Inference. arXiv:2211.05102. (2022)
[2] Dosovitskiy et al. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR. (2021)
[3] Devlin et al. BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL. (2019)
---
Rebuttal Comment 1.1:
Title: Further Comments
Comment: Thanks for the responses.
1. LLM Inference Caching, Weights-in-Place vs Streaming Weights
Even though the authors claim a 10 GHz SLM, for most weight-in-place integrated photonic accelerators to have low power, low loss, and small area, the switching speed is much lower than the computing speed (1 us vs 100 ps). It is not clear which workload, even for the cited Transformers that perform the full attention/MLP computations, can amortize such a high reprogramming cost. I don't think the batch/token count of any vision/language task can bridge this gap. So the claimed advantage of the weight-in-place design over the streaming one is not convincing.
2. Shot Noise vs Other System Variations.
The claim holds for free-space optics (some prior work indeed claims low light, like sub-photon per MAC), but not for current integrated photonics. Given the current PD sensitivity, responsivity, and OSNR limits, it is not clear why shot noise will dominate. More discussion is needed if this paper is meant for general optical architectures.
3. Data Access Cost (line 297 main text)
Similar to the above comments, reuse alone is hard-pressed to amortize the memory loading and device programming latency of most optical hardware, even with large batches. Specialized computer architecture designs are necessary to hide the latency, which should be pointed out. This is the fundamental bottleneck for most analog hardware accelerators. It definitely requires cross-layer solutions to mitigate, not just reuse.
4. Cost of Summation
The four-pass method will introduce ~4x the latency and energy cost compared to one-shot computing. I still don't quite understand why this is insignificant in the overall efficiency/performance.
5. Discussion on Integrated Photonic Platforms
This is the critical limitation of this paper, which claims "optical transformer" but lacks systematic analysis/comparison across different optical hardware platforms, and it requires significant modification/major revision of the paper.
---
Reply to Comment 1.1.1:
Title: Re: Further Comments
Comment: We thank the reviewer again for the additional feedback.
**Amortized Loading Costs**
For a fully weights-in-place system, the weights do not need to be switched. In other cases, Transformers' weight reuse can be sufficient to amortize loading costs. Typically, recent LLMs have sequence length $L \sim 10^4$. Without batching, at 10-GHz input, this only requires ~MHz-regime switching of weights, ~1us latency. For energy, we discussed this scenario in appendix G. For example, with 10G weights memory, ONNs enjoy a > 100x energy-efficiency advantage for the largest models, with models smaller than FUTURE-129T retaining nearly their full advantage.
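As a sanity check of the figures above (the 10 GHz input rate and ~10^4-token sequence length are the assumed figures from the discussion, not measurements), the implied weight-switching requirement can be computed directly:

```python
# Back-of-the-envelope check: how fast must weights switch when a
# weights-in-place ONN streams one token per clock at 10 GHz?
f_clock = 10e9      # input modulation rate (Hz), as assumed above
seq_len = 1e4       # typical LLM sequence length, ~10^4 tokens

# Each weight matrix is held while the whole sequence streams through
# it, so the required weight-update rate and hold time are:
f_weights = f_clock / seq_len    # ~MHz-regime switching
hold_time = seq_len / f_clock    # ~1 us per layer use

assert f_weights == 1e6
assert hold_time == 1e-6
```

This is consistent with the claim that without batching, only ~MHz-regime weight switching (~1 us latency) is needed; batching relaxes the requirement further.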
For attention, weights-in-place may not be worthwhile, as frequent switching is necessary. In all of our calculations we already assume that attention operations are computed with a streaming-weights approach, in which the weights are streamed by a light modulator in a similar fashion to the input data. We also acknowledge this at the end of Section 4 (line 313). Consequently, implementing attention efficiently is hard, but MLPs constitute the majority of Transformers' compute, so ONNs still achieve an advantage in total energy versus total MACs (Appendix E).
**Clarification About Noise/Error Analysis**
We did not wish to claim that shot noise dominates in most ONN systems, but rather to point out that a certain number of photons is necessary to meet a particular SNR requirement. Testing this requires examining the shot-noise-dominant regime. There have also been ONNs operating at low light, <100 photons per photodetection [2, 4]; this translates to an SNR of ~10, which is greater than the hardware error.
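The photon-count-to-SNR relation assumed in the reply follows from Poisson statistics (an assumption stated here, not a figure from the paper): detecting $N$ photons gives signal $N$ with standard deviation $\sqrt{N}$.

```python
import math

# Shot noise is Poisson-distributed: a photodetection of N photons has
# mean N and standard deviation sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
def shot_noise_snr(n_photons):
    return math.sqrt(n_photons)

# ~100 photons per photodetection, as in the low-light ONNs cited above:
assert shot_noise_snr(100) == 10.0
```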
**Latency**
Proposing a detailed specialized computer architecture is beyond the scope of our work. We acknowledge that the design of a suitably fast and efficient ONN system is still an open question. We hope that future ONNs continue to mature in this direction.
**Cost of Summation**
Altering energy costs by small constant factors (<10x) does not affect the main conclusions of this work:
- **At the large scale, ONNs could achieve orders-of-magnitude energy-efficiency advantages over GPUs.** As long as the overhead cost is not also orders of magnitude more expensive, it will not affect this claim.
- **The ONN advantage scales with the model size.** An increase in data-access cost only shifts the energy curve, which remains asymptotically different.
- **The energy calculations are estimates.** We intended our energy calculations to offer a discussion of ONNs' efficiency, not to predict specific numbers exactly; the discussions about the scale and energy use of Transformers would not change.
Also, we remark that there would be *less than a 4x penalty* --- there are other considerations that would be unaffected but were a significant portion of the costs of digital ops. The presence of activations like ReLU and softmax means only two passes are needed in certain scenarios. Coherent-light ONNs can natively process real numbers (see systems comparison in Author Rebuttal).
**Comparison to Other Platforms**
Our work concludes that Transformers *can* run and are worth running in the presence of common analog-optical-system behaviors and pathologies. This suggests that in general, the use of optics to accelerate these large-scale models is worth pursuing. These central claims stand regardless of our limited discussion about the specifics of other ONN systems. In pursuit of these higher-level claims, we considered aspects that are fairly general:
- Optical fan-out/in and related energy advantages are common to many ONNs
- Other ONNs do have similar error profiles to ours [1, 2]
- The scale-relative behavior (rescaling of operands) is common in analog computing
- We emulated imprecision via the use of real LUTs
- We tested Transformers at different precision levels, not just those in experiment (Appendix B.4, main text Fig. 3)
- We assumed the use of memory, DAC, etc., which ONN accelerators require
- We deliberately used commonplace techniques to make conclusions that do *not* rely on the details of a bespoke software or hardware approach.
- Our energy estimations and assumptions are similar to the approach used for other ONN architectures in the literature [3, 4]
These encompass much of the designs of ONNs, sufficient to paint the general picture of ONNs and Transformers. Nevertheless, we agree that this discussion would be helpful to include in the article.
[1] Feldmann et al. Parallel convolution processing using an integrated photonic tensor core. arXiv:2002.00281. (2020)
[2] Sludds et al. Delocalized photonic deep learning on the internet's edge. Science (Vol. 378, Issue 6617, pp. 270–276). (2022)
[3] Hamerly et al. Large-scale optical neural networks based on photoelectric multiplication. Physical Review X, 9(2):021032. (2019)
[4] Wang et al. An optical neural network using less than 1 photon per multiplication. Nature Communications, 13 (1). (2022) | Summary: This paper proposes a photonic hardware accelerator that processes inference for large language models, i.e., Transformers, using optical multiply-accumulate (MAC) operations. Optical MACs are well suited to computations with large operands, thereby leading to asymptotic energy advantages over digital hardware accelerators.
Strengths: 1. The paper is well-organized.
2. The paper works on an important problem.
Weaknesses: 1. The paper does NOT consider the energy consumption of analog-to-digital converters, digital-to-analog converters, and various memories such as on-chip SRAM and off-chip DRAM. I totally agree with the cornerstone of this paper, which is that optical MACs or matrix multiplications are super energy-efficient. However, gaining this energy advantage is not easy. Reading the 530B parameters of a transformer, converting these many digital parameters to analog optical signals, and converting the analog optical result signals back to digital values may dominate the energy consumption of an inference. As a result, the energy-efficiency improvement may not be very large.
Please check the comparison in this paper:
W. Liu, W. Liu, Y. Ye, Q. Lou, Y. Xie and L. Jiang, "HolyLight: A Nanophotonic Accelerator for Deep Learning in Data Centers," 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 2019, pp. 1483-1488, doi: 10.23919/DATE.2019.8715195.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please comment on the energy consumption of analog-to-digital converters, digital-to-analog converters, and various memories such as on-chip SRAM and off-chip DRAM during optical transformer inference.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hello and thank you for your feedback. The overheads related to data access (RAM, DAC/ADC) are indeed very important in considering whether an ONN platform may have any energy advantage. All energy values reported in this work did take those into account. We acknowledge that explicitly mentioning in the main text the details of what costs are included would have been more clear.
These data access costs are also important to consider because they shed light on how optics-based platforms work differently from digital ones. We explained our approach in Section 2 (see line 107) and summarize it as follows:
- Using the effective model bit precision we found in a small ablation study (Appendix C), we estimated the per-use cost of DAC, ADC, modulators, TIA, DRAM, and SRAM at these precisions. We assumed energy quantities found from existing products' datasheets or reported in other research (Appendix D).
- We used these energy quantities to compute the total energy cost of all data access in the models (Figure 2, Appendices D, E, F). Each access of a single tensor element counts as a single use of the relevant DAC/ADC/memory.
- We showed that even with ADC/DAC and other data-access related costs accounted for, there is still a significant and scaling energy advantage for larger models, but that the overhead makes running smaller ones optically less worthwhile (Section 4.3).
- We broke down the estimated energy costs for Transformer models to see the contribution of each component (Appendix E), and indeed found that memory access, DAC, and ADC are the overwhelming majority of energy costs in our estimates.
- We provided our code for producing our estimates, where the user can reproduce the values calculated and change the energy quantities.
If these data access overheads were correctly accounted for and are very expensive, then a reasonable question to ask is how ONNs obtain such large energy advantages when running Transformers. A focus of our work is to highlight the idea that even if these data-access costs are expensive, they may be amortized by re-use of data in the optical domain. This has been investigated in previous works [1, 2], but we are aware of no existing study that discusses how this is affected by model architecture, at what model scales the asymptotic advantage of optics overcomes these additional overheads versus digital systems, and what makes existing popular architectures (such as Transformers) well or poorly suited.

As an example, consider a matrix-matrix product with operands of shape ($m \times n$), ($n \times d$). The total number of MACs is $mnd$. This operation requires loading ($mn + nd$) elements and storing (ADC+memory, $E_\mathrm{store}$) the resulting matrix's $md$ elements. Each element of the loaded matrices is reused. In a weights-in-place system, the cost of loading the $nd$ elements is ignored, but all other calculations are identical. The rows of the ($m \times n$) matrix each get fanned out optically (free) to create $d$ copies, and each column of the ($n \times d$) matrix is reused $m$ times. So the total cost of the data access (we address optical energy scaling in Section 4.2) is $E_\mathrm{load}(mn + nd) + E_\mathrm{store}(md)$. Unlike for digital computers, this is not proportional to the number of MACs, and therefore results in an asymptotic advantage: the energy per MAC is $O(\frac{1}{m} + \frac{1}{n} + \frac{1}{d})$ [1, 2]. It follows that models with large weight/activation matrices are best suited for achieving an optical advantage, hence our interest in Transformers, their large MLP blocks, and their parallel processing of many tokens with the same weights.
We discussed how design decisions like these are critical in creating DNN architectures that can be run on an ONN advantageously (Section 5).
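The data-access accounting in the reply above can be sketched numerically; the `e_load`/`e_store` values below are illustrative placeholders, not the paper's estimates:

```python
# Data-access energy per MAC for an (m x n) @ (n x d) matrix product,
# following the accounting in the rebuttal: load (mn + nd) elements,
# store md results, and divide by the mnd MACs the product performs.
def data_access_energy_per_mac(m, n, d, e_load=1.0, e_store=1.0):
    total = e_load * (m * n + n * d) + e_store * (m * d)
    # equals e_load * (1/d + 1/m) + e_store * (1/n): O(1/m + 1/n + 1/d)
    return total / (m * n * d)

# Per-MAC data-access cost falls as the operands grow:
small = data_access_energy_per_mac(64, 64, 64)
large = data_access_energy_per_mac(4096, 4096, 4096)
assert large < small / 10
```

This is the asymptotic-advantage argument in miniature: digital per-MAC cost is roughly constant, whereas here it shrinks with operand size.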
We thank the reviewer for bringing to our attention [3] --- we believe that the prospect of designing ONNs without ADCs entirely is exciting. The energy usage for the ADC (in Table 1 of the article) also appears to be roughly in agreement with our estimate: assuming 8 bits of precision, 2048 mW for 1024 uses at a time, and 1.28 GHz speed, the energy usage appears to be 1.56 pJ per 8-bit sample. Our estimate in this work was (appendix D) 3.17 pJ per 7-bit sample.
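The per-sample figure quoted above follows directly from the cited Table 1 numbers (as read by the authors; the 1024-way parallelism is their interpretation of that table):

```python
# ADC energy per sample from the quoted HolyLight figures:
# 2048 mW shared by 1024 simultaneous conversions at 1.28 GHz each.
power_w = 2048e-3
n_parallel = 1024
rate_hz = 1.28e9

energy_per_sample_pj = power_w / (n_parallel * rate_hz) * 1e12
assert abs(energy_per_sample_pj - 1.5625) < 1e-6   # ~1.56 pJ / 8-bit sample
```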
We wish to reiterate that **all ONN energy cost estimates in this work did include the costs of DAC, ADC, and memory access**.
[1] Hamerly et al. Large-scale optical neural networks based on photoelectric multiplication. Physical Review X, 9(2):021032. (2019)
[2] Wang et al. An optical neural network using less than 1 photon per multiplication. Nature Communications, 13 (1). (2022)
[3] Liu et al. HolyLight: A Nanophotonic Accelerator for Deep Learning in Data Centers. 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy. (2019)
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I have increased my score. | Summary: This paper explores the feasibility and benefits of employing optical computing techniques for machine learning and specifically focusing on large language models (LLMs). The paper builds upon earlier work on optical neural networks, primarily [61] (Wang et al., Nature, 2022), which experimentally demonstrated the feasibility of performing dot products optically in a two layer neural network applied to MNIST while achieving around 90% accuracy at about one photon per multiplication optical energy. As LLM computations involved a large and rapidly growing number of multiply accumulate operations, the objective of the paper is to explore whether optical techniques can yield benefits over existing CMOS-based accelerators (GPUs, TPUs). The paper tackles this by employing a simulation based methodology where the simulator attempts to model the various noise sources (systemic, shot noise) along with limited precision of a potential electro-optical system. The evaluation shows that as LLMs continue to scale such optical systems may potentially yield many order of magnitude benefits in terms of energy efficiency over current approaches.
Strengths: Makes a reasonably strong case for further exploration of optical computing for large language models.
Weaknesses: Not enough discussion of the remaining challenges that need to be overcome to make such systems competitive in reality.
Somewhat limited contributions insofar as earlier works have already explored using optics for ML.
Unclear how accurate the simulation methodology is.
Some aspects unclear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How far are current optical systems from the 10GHz operating frequency assumed in this paper (Line 291) and what would Figure 5 look like and to what extent would the main conclusions about the benefits of optical be undermined if more modest (or perhaps realistic) rates were assumed? How much improvement is required in this direction to achieve the hoped for benefits? What frequency do current modulators operate at?
How do you separate out shot noise from systematic noise in the measurements in Figure 3?
Does the y-axis in Figure 4 of the main paper include the electrical energy overheads like those in Figure 3 and 4 in the supplemental? Figure 3 in the supplemental seems to show optical energy is entirely negligible so in this case the accuracy of Figure 4 in the main paper would be entirely dependent upon a full accounting of all other energy sources. What assurances can you give that all significant sources of non-optical energy are properly accounted for?
The assumption of weights-in-place seems quite unrealistic and while the supplemental discusses (Section G) a "chunking" technique to scale up to larger numbers of parameters it was unclear whether the assumptions used to plot Figure 6 in the supplemental section are optimistic or conservative and to what extent. Please comment.
Transformers are having a lot of impact, which makes the focus of this paper on them make sense, but can you comment on how much optics may help other network architectures? Accelerators like GPUs and TPUs try to be relatively general purpose and so were able to be quickly retargeted to transformer based models without needing much in the way of hardware changes. Can your simulation based study approach yield insights about whether optics will help networks other than transformers and/or under which circumstances?
How accurate is the simulation methodology in Figure 2? Can you attempt to compare your simulators predictions versus your actual hardware for a small network like the one studied in [61]?
I did not follow what the lookup-table (LUT) in the simulator does. What values do the LUTs in the simulator contain and what quantity is used to index into the LUT?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is some discussion of limitations. If the paper is meant to rally others to work on optical techniques for machine learning it would be helpful if the authors could more systematically highlight the remaining challenges to making such systems practical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Re: "How far are current optical systems from the 10GHz operating frequency", "...benefits of optical be undermined if more modest rates were assumed?**
If the system were run at a lower frequency, the energy estimates would not change significantly for smaller models, but they can for the hypothetical future models. The main concern about the relationship between speed and energy cost is the cost of maintaining the weights in a weight-stationary system. In our estimates we modeled this after the power consumption of a simple LCD display (Appendix D). If the frequency is slower, the weight-maintenance energy expended per processed sample is therefore higher. For example, for FUTURE-4q, the largest hypothetical future model, changing the update rate from 10 GHz to 1 GHz would change the energy-advantage factor from ~8000x to ~6000x (reproducible by multiplying the weight-maintenance cost value by 10 in the provided source code). For smaller models the difference is negligible. Also, at lower speeds ADC/DAC and other components may become significantly cheaper, which we did not account for in this calculation. Our estimate for the weight-maintenance cost is quite conservative and based on existing components, not those that future models would be run on.
10 GHz is already achieved by transceivers in telecom applications. Other components can also run more quickly:
- DAC/ADC can exceed 10 GHz [2]
- Current modulators can run in the regime of 100 GHz [3]
- 30 GHz Si-Ge photodiodes have been demonstrated in ONNs [4]
We wish to clarify that only the input data needs to be modulated at 10 GHz. Weight-stationary systems do not need to be updated at a high speed or at all.
**Re: limitations of weights-in-place**
The scalability of the weights-in-place approach is indeed a concern, but so is the requirement of more GPUs for more VRAM when weights/activations become large. Hence the approach in Appendix G, which is conservative because GPU-GPU communication is estimated as being only as expensive as DRAM. We remark that Fig. 6 mostly concerns the cost of input reloading for multiple chunks in cases where the processing cannot be parallelized across devices. All comparisons were against a single, "ideal" GPU (300 fJ/MAC).
We wish to point out that even in streaming-weight systems there can still be a significant energy advantage from data re-use, so the foundational principle of the energy-advantage argument for optics still applies: when the weights are streamed, they may still be reused by a factor of the batch size times the number of vectors in the input token sequence to be processed.
**Re: "somewhat limited contributions"**
While previous studies have shown ONNs running small-scale computer vision tasks such as MNIST, this is the first to show ONNs' performance on reasonably large models that include optical shot noise and errors. This is important because the energy advantages for ONNs only become large at enormous scales. We wish to clarify here what concrete findings and concepts we believe are valuable:
- We presented a simple method and techniques that proved that a Transformer model at sufficient scale to be used in practical tasks *can* run on an ONN accelerator despite real-world hardware errors and shot noise. This is a new, concrete piece of information that could not be inferred from previous works on ONNs that considered smaller models with different architectures. The largest such *simulated* model we are aware of is AlexNet [1].
- We documented how scaling of models is related to energy usage.
- That Transformers achieve the efficient scaling (Fig. 4) is a nontrivial result --- the scaling is highly dependent on model statistics. Thus, our findings could not be assumed a priori simply because previous ONN literature investigated the effect in other architectures and tasks.
- Our energy estimates provide a perspective on what might be achievable on future ONN hardware if their creation is to be pursued. ONNs are still an emerging technology, so there was an open question of whether further development should be pursued at scale.
- We provide a list of specifications for what a theoretical accelerator might need to accomplish based on our findings.
**Re: Separating Shot Noise From Systematic Errors in Experiment**
We define systematic errors as corrupted output due to deficiencies intrinsic to the hardware. Thus, it is identical across runs of an experiment. We use this among other approaches to isolate them:
- We use high photon counts in our experiments. This directly reduces shot noise.
- We average the results of 10 trials in experiments. Because of how systematic error works, it will remain persistent. This also eliminates noise from other sources than shot noise.
- We fit a calibration curve to the averaged results. Any deviation is systematic error.
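A toy numpy sketch of the separation procedure described in these bullets, using entirely synthetic data and an assumed linear calibration model (the real analysis uses lab measurements): systematic error repeats across trials, so trial averaging suppresses zero-mean noise while leaving the systematic component for the calibration fit to expose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic device response: a fixed (systematic) defect plus
# zero-mean per-trial noise standing in for residual shot noise.
true_out = np.linspace(0.0, 1.0, 200)
systematic = 0.05 * np.sin(8 * true_out)       # identical every run
trials = [true_out + systematic + rng.normal(0.0, 0.02, 200)
          for _ in range(10)]

# Averaging 10 trials suppresses the random noise but not the defect.
averaged = np.mean(trials, axis=0)

# Fit a calibration curve (here: linear) to the averaged response;
# deviation from the fit is attributed to systematic error.
coeffs = np.polyfit(true_out, averaged, 1)
residual = averaged - np.polyval(coeffs, true_out)

# The recovered residual tracks the injected systematic defect.
assert np.corrcoef(residual, systematic)[0, 1] > 0.9
```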
**Re: Viability of other networks**
While we did not simulate other models, we do not believe that all architectures would be as suitable for ONNs. We chose Transformers because if ONNs were only good at running one model, the model should be useful for many different tasks. We emphasize that the purpose of our simulations was to emulate the behavior of the actual hardware, and so simulating other models is possible. Their performance is a separate issue.
[1] Hamerly et al. Large-scale optical neural networks based on photoelectric multiplication. Physical Review X, 9(2):021032. (2019)
[2] Liu et al. A 10GS/s 8b 25fJ/c-s 2850um2 Two-Step time-Domain ADC using delay-tracking Pipelined-SAR TDC with 500fs time step in 14nm CMOS technology, IEEE (ISSCC). (2022)
[3] Wang et al. Achieving beyond-100-GHz large-signal modulation bandwidth in hybrid silicon photonics Mach Zehnder modulators using thin film lithium niobate. APL Photonics; 4 (9): 096101. (2019)
[4] Ashtiani et al. An on-chip photonic deep neural network for image classification. Nature 606, 501–506. (2022) | Summary: The authors perform experimental analysis with a spatial light modulator to optically perform the computations of the linear components of the Transformer architecture. These measurements allow them to create a noise-model that is then used to simulate a GPT-2 like model and measure performance (validation perplexity) in function of model parameters (system noise, not including optical shot noise). These optical systems are physically constrained by "optical shot noise" that dictates the minimum number of photons to achieve a target precision. This optical shot noise scales favorably with larger models, and the authors then extrapolate their observations to existing large Transformer models (like PaLM), and further to even larger hypothetical future models, showing a substantial advantage of a large scale hypothetical optical system over existing electrical systems (the energy advantage scaling with the width of the model).
Strengths: 1. While there have been previous works that examine smaller scale optical neural networks and the optical shot noise scaling behavior seems to be well established (I'm inferring this from the works cited in the submission, I'm not familiar with the area), the authors highlight the importance of the scaling behavior in context of the Transformer architectures that are typically used with large language models. I think the combination of innovative research in the field of optical computation with the computation requirements of applied machine learning models at the largest scales is particularly interesting.
2. The authors used small scale experiments to establish realistic system properties and then simulated an entire Transformer to check how the physical properties would influence overall (validation) performance of the model. They did this at a scale (GPT-2 like quantized model with 15M-416M parameters) that seems large enough to gather realistic predictions.
3. The interpolation of the simulated data to much larger models is necessarily based on many assumptions, both in scaling from the simulation to larger models, and with respect to the hypothetical hardware that would run very large models on optical hardware. The appendix goes into some detail on the different assumptions that led to the conclusions summarized in the main part of the article.
4. The overall presentation of the work seems adequate for a public that is knowledgeable in machine learning, but probably has much less experience with physical properties of optical computational hardware, which is no easy task given the relatively large gap between these domains.
Weaknesses: 1. All the discussion of the optical vs. electrical implementation of the computation is in the lens of energy consumption. After reading the paper I'm not sure how the proposed architecture would fare with respect to latency/speed, or other constraints (e.g. if there is a theoretical limitation to the size – or cost – of different components that scales very differently between the currently used IC technology of purely digital microchips vs. optical components). Latency is mentioned in Appendix G, but there is no mention of these other dimensions in the main text.
2. The authors highlight the scaling behavior of optical shot noise, but other sources of noise (called "systematic errors" in the paper) are simply measured for the system at hand, and no information is given how these other sources of noise would scale when the system is scaled. For example, I assume that any realistic scaling would require miniaturization of the optical components, and I would assume that this miniaturization also causes the systematic errors to change in relative magnitude. It would be interesting to at least briefly discuss the scaling behavior of these other sources of noise.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Small things I noticed while reading the paper:
a) 155-157: not clear what this means: “We collected lookup tables (LUTs)—mappings of the available discrete levels in both the display and SLM devices—and used them to train a “LUT-aware” optical Transformer model to run on the setup.” why are the LUTs needed? what does “collected” refer to? (measurements used for downstream analysis? used for the computation itself?) … maybe referencing what is exactly meant with LUTs (is it a hardware component?) would also help here
b) 171, 179: the abbreviation QAT is used without being introduced (it’s spelled out on line 194)
c) Table 1 is a bit hard to read because the “Setting” column spans multiple rows, but the text neatly aligns (Hardware->QAT, Simulation->Eval, etc) with the individual rows (one could e.g. add more horizontal lines in columns 2+ to make this clearer)
d) 196: when saying “int8” is it actually “uint8” ?
e) Figure 5 I find it surprising to see TPU int8 use 800fJ/MAC vs. NVIDIA A100 fp16 300 fJ/MAC – I could not readily find these numbers in [47] – how exactly were they computed?
f) 329: either “study” or “illustrate” (but not “our studies illustrates”)
g) Appendix, Figure 4 (left): “Digital Ops” are not visible – why? If they are zero (e.g. “not doing any compute”) then that should be mentioned.
h) Appendix, Figure 4 (right): the colors are hard to tell apart (especially it would be interesting to clearly see optical vs. electrical). It’s also not very clear what are “Ele FF” and “Ele Attn FF” (maybe ReLU6, LayerNorm, and Add from Figure 2?)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I don't see a negative societal impact of the presented work, other than making Transformers more energy efficient would allow to scale them further than otherwise possible, and accordingly accentuate any potential opportunities and risks of models such as LLMs. I don't think it's required to point these out explicitly in the paper.
There seem to be a number of technical limitations with respect to the predictions in the sense that there is a lot of uncertainty whether the hardware to run optical neural networks could be scaled up as much as state of the art digital circuits. But the authors make it clear already in the abstract that their main interest is in establishing a scaling law and not concrete predictions about the future of the implementation of large scale optical neural networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's helpful comments. We provide here further explanation on the reviewer's questions and concerns in a point-by-point manner.
**Re: "All the discussion ... in the lens of energy consumption"**
Latency and speed are important factors in determining the viability of ONN platforms, and we agree that not providing in-depth analysis is a limitation of our work. One important reason why we chose to focus on the energy efficiency of ONNs is that the energy consumption of different ONN architectures can be analyzed in a universal framework based on the common physical principles they share (such as data re-use, optical transmission advantage, and photon detection noise). Speed and latency, on the other hand, can vary drastically from architecture to architecture and many ONNs are optimized towards particular operations such as convolutions [1] or for black-box reservoir [2] or physical-neural-network computing [3] where the equivalent number of "flops" is likely high but undefined/incalculable.
The throughput advantage is often related to optics' high bandwidth and parallelism. Presently, though, ONNs are still an emerging technology and many implementations, such as ours, are proofs-of-principle that have not been optimized for speed. But we do not see any fundamental limitation for ONNs to update at 10 GHz speeds or higher, and ONN technology itself is progressing, with promising demonstrations at significant speeds [4].
**Re: sources of error only measured for the system at hand**
We acknowledge that the systematic error analysis we reported in this work is specific to our system and not entirely generalizable to ONNs as a whole. That said, the systematic error of our experimental setup is typical among ONNs [4, 5], and therefore is representative of the numerical precision that can be achieved by macroscopic or integrated systems in many cases. It is also possible to reduce systematic error via more sophisticated calibration techniques. By definition, systematic errors are consistent defects that exhibit the exact same behavior from run to run. Thus, some ONN platforms, such as Mach-Zehnder Interferometer (MZI) arrays [7], are designed with more configurable components and systematic errors can be eliminated by a calibration procedure that accounts for device defects.
**Re: "how these other sources of noise would scale"**
While we agree that miniaturization of ONN devices *may* lead to higher systematic error or device-to-device variation, we think the continued scaling of ONNs does not hinge on miniaturization. We believe that systematic error behavior has more to do with the particular ONN implementation than with scale. One could also imagine scaling up a system by fanning out data to multiple copies of the same system, in which case the error would, at worst, stay the same.
**Re: "Why are the LUTs needed? what does “collected” refer to?"**
The LUTs, which we collected for the OLED display and SLM, are mappings from the indices of the discrete values supported by the devices to actual measured intensity values in the lab. They are collected by measuring the level of transmitted light for each possible display setting (with the SLM at full transmission) and each possible SLM setting (with the display fixed). These differ from simple quantization in that the intensity values are not linearly spaced with respect to the input index, and the smallest SLM weight value is 0.02 instead of 0. By incorporating these measured LUTs directly into our simulation instead of a naive 8-bit quantization scheme, we ensure that the model, both during quantization-aware training (QAT) and at inference, operates only within the hardware's actual capabilities/deficiencies. Backpropagation is carried out using the straight-through estimator just as in QAT, but unlike in QAT, once the rounding operation produces the quantized int8 activations, they directly index the LUTs to produce the floating-point activations instead of being dequantized.
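As a concrete illustration of how this differs from naive quantization, the sketch below uses a *synthetic* 256-entry LUT with a nonlinear response curve and a 0.02 floor; the real LUTs were measured on the hardware, and this curve is purely illustrative:

```python
import numpy as np

# Synthetic stand-in for a measured LUT: 256 monotone but unevenly spaced
# intensity levels with a 0.02 floor (the real LUTs were measured in the lab).
lut = 0.02 + 0.98 * np.linspace(0.0, 1.0, 256) ** 1.2

def lut_quantize(x):
    """Round activations in [0, 1] to the nearest int8 index, then look up
    the measured intensity rather than dequantizing (naive 8-bit QAT would
    return idx / 255 instead). With a straight-through estimator, the
    backward pass treats this whole function as the identity."""
    idx = np.clip(np.round(x * 255), 0, 255).astype(np.int64)
    return lut[idx]
```

Note that `lut_quantize(0.0)` returns 0.02, not 0, mirroring the finite minimum weight value discussed above.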
**Re: "...TPU int8 use 800fJ/MAC... how were they computed?"**
For the GPU, we used the peak performance numbers specified by NVIDIA for the device, as well as the TDP to estimate power. For TPU, the comparison was made against the older TPUv1. This is easier to discern in Fig. 3 of the revised version of that article [7] where the int8 TPUv1 is shown as achieving roughly $9 * 10^4$ GOps/s at roughly 70W of peak power, ~778 fJ/Op.
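For readers who want to check these figures, the arithmetic is simple division of power by throughput. The sketch below reproduces the ~778 fJ/Op TPUv1 estimate from the numbers above, and the ~320 fJ/Op A100 ballpark from the numbers quoted later in this thread (200 W at ~624 TFLOPS); the function name is ours:

```python
def fj_per_op(power_watts, ops_per_second):
    """Energy per operation in femtojoules: power divided by throughput."""
    return power_watts / ops_per_second / 1e-15

tpu_v1 = fj_per_op(70, 9e4 * 1e9)   # 70 W at ~9e4 GOps/s  -> ~778 fJ/Op
a100 = fj_per_op(200, 624e12)       # 200 W at ~624 TFLOPS -> ~320 fJ/Op
```

Whether this is quoted per Op or per MAC depends on whether a MAC is counted as one or two operations, as noted in the thread.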
**Re: Digital ops not visible**
The digital operations are present in the chart in Fig. 4, but we recognize that it is difficult to see. For all but the smallest models, there are digital operations but the fraction is so small that it is effectively zero.
The "Ele \*" ops refer to the electrical overhead costs of a particular part of the model. For example, "Ele Attn QKT Ld" refers to the **electronics** cost of **loading** the operands for the part of the attention operation where the $Q$ and $K$ matrices are loaded to perform the product $QK^T$. "Attn FF" refers to the linear layers in the attention part of the Transformer where the $Q$, $K$, and $V$ tensors are derived; it also includes the final linear mapping at the end of attention. The other "FF" labels refer to the layers in the MLP blocks.
[1] Xu et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature, 589(7840):44–51, (2021)
[2] Lupo et al. Fully Analog Photonic Deep Reservoir Computer Based on Frequency Multiplexing. arXiv:2305.08892 (2023)
[3] Wright et al. Deep physical neural networks trained with backpropagation. Nature 601, 549–555 (2022)
[4] Feldmann et al. Parallel convolution processing using an integrated photonic tensor core. arXiv preprint arXiv:2002.00281 (2020)
[5] W. Zhang et al. Silicon microring synapses enable photonic deep learning beyond 9-bit precision. Optica 9, 579-584 (2022)
[6] Shen et al. Deep learning with coherent nanophotonic circuits. Nature Photonics, 11(7):441, (2017)
[7] Reuther et al. AI and ML Accelerator Survey and Trends. 2022 IEEE High Performance Extreme Computing Conference (HPEC) (2022)
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their excellent rebuttal, both in the common part, as well as in the individual follow-ups.
This has reinforced the confidence in my rating, and it will be interesting to see if other reviewers adapt their scores accordingly.
Some follow-up comments from my side:
1. I'm looking forward to the inclusion of additional optical systems in the paper ("Comparison to Other Experimental Platforms" from the common rebuttal). In particular, this would give a good opportunity to include some of the discussion above about speed/latency, errors from other systems and how they likely scale with miniaturization. I'm somewhat skeptical of the authors' claim "One could also imagine scaling up a system by fanning out data to multiple copies of the same system", as this only allows for very limited scaling.
2. Looking at Figure 4 in (Reuther, 2022) still leaves me wondering why Figure 5 in the submission has the efficiency ordering ASIC (100 fJ/MAC) > A100 (300 fJ/MAC) > TPUv1 (800 fJ/MAC), since that same Figure 4 in (Reuther, 2022) shows (approximately) 400W for 3e5 GOps/sec (which would be 1333 fJ/Op, and not 300 fJ/Op, and reverse the ordering A100/TPUv1).
3. I found the additional explanations as to the specific use of LUTs very insightful. I would appreciate it if the authors updated the main text in the submission to make this point clearer.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for their additional feedback.
Regarding the GPU estimate: a general approach we took throughout this work was that, in cases of ambiguity, we preferred to overestimate the GPU's efficiency and underestimate the ONN's advantages. We therefore aimed to provide as generous an estimate as possible for the NVIDIA A100, because finding consistent information about its real-world, FP16, inference-mode performance was difficult. For example, the value reported by Reuther et al. appeared to be for training, which is more data-bottlenecked than inference. Also, the 400W figure appears to be for the SXM variant of the A100, which consumes an additional 100W of power. To build an estimate, we divided the rated power consumption of a typical GPU by the peak performance reported on NVIDIA's datasheet for the card; we assumed a typical GPU power consumption of 200W for ~624 TFLOPS of compute. In the best case (200W; a more typical power consumption for an A100 would be ~250-300W), this yields roughly 300 fJ/MAC (depending on how MACs/FLOPs are counted, however). This way, despite ambiguous information about the A100's real-world characteristics, we could be reasonably confident that our reported ONN energy advantages were not overestimated. We agree with the reviewer that in practice the cost could be higher than our GPU estimate.
We also agree that the work could benefit from additional discussion about related ONN works/hardware, miniaturization/scaling, and the LUTs. We hope to add these to a revised version if possible. | Rebuttal 1:
Rebuttal: We wish to thank the reviewers for their time and dedication in providing valuable feedback for this work. We hope that the additional explanations we provided here serve to clarify the scope of this work and address some of the common concerns of the reviewers about our energy calculation assumptions and the applicability of our results to other ONN platforms. We have collected our responses to those concerns here so that they are available to everybody:
**Scope of This Work**
The scope of this work was to study what potential benefits could be had if future optics-based hardware were designed for accelerating large-scale neural networks. In essence, we aimed for results that could generalize well to the field of ONNs by selecting an existing popular DNN architecture and ONN platform. In this sense, while some reviewers have expressed concern about the novelty of our ONN design, we wish to emphasize that we intentionally chose an already well-understood platform because of its many features that are common to ONNs in general. This is why we make claims about optical Transformers in general, and not just our particular SLM-based system. We discussed broadly how these advantages could be achieved via an ONN implementation. The specifications we give (Appendix G) are intended to be general: any platform capable of a certain amount of compute performance counts as a "core", any device that can emit or detect a scalar value is suitable for processing input/output vectors, and so on. These are a set of high-level requirements that we believe *any* ONN platform should target if aiming to achieve an energy-efficiency advantage, regardless of implementation details.
**Energy Calculations Always Include Electronics Overheads**
Several reviewers pointed out the importance of counting the overhead energy costs of loading/encoding/decoding/storing the data to be used in an optical accelerator. We apologize for any confusion, and we would like to clarify that **all energy estimates include the overhead costs of electronics for data access**. This includes digital-analog conversion, analog-digital conversion, memory read/write, modulation, weight maintenance costs, detection, and transimpedance amplification. These costs were a critical part of our analysis, since the energy advantage we discuss hinges on the abilities of optics to amortize these costs through data re-use and shot-noise-limited photon scaling. The details of our calculations, and all assumed values are in Appendices D, E, and F. We have also provided our source code for reproducing the calculations, where the user can change various energy quantities to experiment with how they affect the energy usage.
**Clarification and Motivation For Using Lookup Tables (LUTs)**
We included LUTs to model a kind of hardware error that is common to many optoelectronic devices. There are differences between the precision limitations of real devices and linearly-spaced quantization schemes often used for DNNs. The LUTs were collected for the organic LED display and spatial light modulators (SLMs). While these devices are commonly controlled by digital signals with evenly spaced discrete levels, the resultant output of these devices tends to be unevenly spaced because of their intrinsic nonlinear response or finite extinction ratios.
We incorporate these LUTs into both training and simulation. Backpropagation is carried out using the straight-through estimator just as for QAT but, unlike in QAT, once the rounding operation produces the quantized uint8 representations, the numbers are used directly to index the LUTs to produce the activations instead of being dequantized.
**Comparison to Other Experimental Platforms**
We recognize that further discussion about other platforms and their similarities/differences to ours would provide useful background information. We plan to revise our manuscript to discuss how all of these platforms are good contenders for optical LLMs, as long as they possess specific properties, such as optical data re-use, that support the optical scaling law. A summary of representative works is as follows:
- Wavelength-division-multiplexed Modulator Array [1, 2, 3]: Data is fed into a grid-like structure with resonators or phase-change materials to modulate the light field according to weights.
- Mach-Zehnder Interferometer (MZI) meshes [4]: These devices use cascaded networks of MZIs (which store weights) to implement matrix-vector multiplication.
- EOM-based convolution engines [5]: These leverage EOMs' Toeplitz-matrix coupling of modes in the synthetic frequency dimension to implement convolutions.
- Coherent, SLM-based free-space ONNs [6]: A scheme very similar to ours, but supports real-number data.
- Coherent, free-space diffractive ONNs [7]: A scheme that uses optical depth in 3D space to encode a large amount of parameters for ONN layers.
[1] Mesaritakis et al. Micro ring resonators as building blocks for an all-optical high-speed reservoir-computing bit-pattern-recognition system. J. Opt. Soc. Am. B, 30(11):3048–3055 (2013).
[2] Feldmann et al. Parallel convolutional processing using an integrated photonic tensor core. Nature 589:52–58 (2021)
[3] Tait et al. Microring weight banks. IEEE Journal of Selected Topics in Quantum Electronics 22.6: 312-325 (2016)
[4] Shen et al. Deep learning with coherent nanophotonic circuits. Nature Photonics 11(7):441 (2017)
[5] Fan et al. Multidimensional Convolution Operation with Synthetic Frequency Dimensions in Photonics. Physical Review Applied 18 (2022)
[6] Spall et al. Fully reconfigurable coherent optical vector–matrix multiplication. Optics Letters 45(20): 5752–5755 (2020)
[7] Lin et al. All-optical machine learning using diffractive deep neural networks. Science 361.6406: 1004-1008 (2018) | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Budgeting Counterfactual for Offline RL | Accept (poster) | Summary: This paper proposes a novel offline RL algorithm, BCOL, that builds on the idea of limiting the number of counterfactual decisions. Instead of enforcing policy or value regularization, BCOL follows the decisions of the behavioral policy in the majority of states, and only makes counterfactual decisions a limited number of times. The strategy for spending the fixed budget of counterfactual decisions is learned through a dynamic programming algorithm. Experimental results using two different implementations of the proposed algorithm on a wide range of offline RL tasks are presented.
Strengths: It is great that the authors present all details about their experiments and the implementation details of their proposed algorithm.
The authors present two implementations based on different state-of-the-art RL algorithms (TD3 and SAC).
The presentation about the empirical results (figures, tables, etc.) is very clear. The ablation study on the hyper-parameters is great.
Weaknesses: ### Method
It is well-known in the offline RL community that extrapolation needs to be done carefully. Limiting the level of extrapolation is itself not challenging: one can simply copy the behavioral policy, leading to zero extrapolation. The challenging part is finding out where to extrapolate. The authors have argued many times in the paper that assigning an upper bound on the number of counterfactual decisions can effectively constrain the level of extrapolation, leading to a balance between the gain of counterfactual decisions and the risk of extrapolation. However, this only explains how the level of extrapolation is limited, not the more important question: why the proposed BCOL algorithm can learn where to extrapolate.
On the intuitive level, the explanation is not enough and not clear. On the formal side, it would be great if the authors can provide some theoretical guarantees for the proposed algorithm so that the benefits of BCOL become more clear. In the current version of the paper, I fail to see enough support, either intuitive or theoretical, for the efficacy of the proposed algorithm.
### Algorithm
The proposed algorithm induces an extra learning burden because Q(s,b,a), unlike the regular state-action value function, needs to approximate the value well for each budget b. This at least linearly increases the difficulty of the learning problem, because Q(s,b,a) needs to be approximated well for every b so that the proposed counterfactual-budgeting Bellman operator can work well.
### Experiments
It is appreciated that the details about the experiments are presented well.
However, the performance of BCOL doesn't seem that impressive given the increased training cost. For example, although BCOL(SAC) has the highest total score, it is outperformed by CQL and CDC on about half of the tasks.
Moreover, the baseline methods, as diverse as they are, are dated algorithms. It may be better to include more latest baselines that also report SOTA performance in their papers (see for example, [1] and [2]) .
[1] Bhardwaj, Mohak, et al. "Adversarial model for offline reinforcement learning." arXiv preprint arXiv:2302.11048 (2023).
[2] Kang, Bingyi, et al. "Improving and Benchmarking Offline Reinforcement Learning Algorithms." arXiv preprint arXiv:2306.00972 (2023).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Is the algorithm still working if there is more than one behavior policy for collecting the offline data?
The sentence “Thus, the choice of policy on each step needs to balance between the Q value gain from the current step and the potential benefit from future counterfactual decisions.” is very confusing. Could the authors please elaborate more on it?
What is the formal definition of Q(s,b,a) in Eq. (4)? It looks like the exact definition of Q(s,b,a) doesn’t matter, and the only thing that matters is the fixed point of $\mathcal{T}_{CB}$.
I understand that by definition of $\mathcal{T}_{CB}$ that are at most B backup steps taking the max operation as the maximum value. But why does this “intuitively upper bounds the amount of extrapolation that the Q function can take”?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes. The authors have discussed the limitations of this work and pointed out future directions for continuing research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We hope you will consider increasing your score after reading our responses. Please let us know if there are more questions
> “why BCOL can learn where to extrapolate"
> “not enough intuitive or theoretical support”.
Indeed, this is the very central question that motivates our algorithm design, and we would like to clarify it and avoid misunderstanding. In short, the dynamic programming formula, Eq (4), in BCOL is naturally derived from the constrained objective, Eq (2), to find where to extrapolate under the budget. Formally, the fixed point of Eq (4) is guaranteed by Theorem 2 to be the optimal allocation of the counterfactual decisions, or extrapolation, across time steps. BCOL is the approximate dynamic programming algorithm based on Eq (4).
Here we will give a more intuitive summarization of the formal discussion in paper. To solve the problem about “where to extrapolate”, we need to balance the trade-off between spending the budget now to get better action and keeping the budget for the future to get a better future value. With BCOL, we solve the trade-off by dynamic programming. Mathematically, it is described by the selection between $\max_{a’} Q(s’,a’,b-1)$ and $E_{a’ \sim \mu} Q(s’,a’,b)$ in Eq (4) in our paper. Theorem 2 says, by using such a modified Bellman backup, the fixed point is guaranteed to be the constrained optimal value function.
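To make the dynamic programming concrete, here is a minimal tabular sketch of the budgeted backup described by Eq (4). All names and shapes (`R`, `P`, `mu`) are illustrative rather than the authors' implementation; `Q` is indexed as `Q[s, b, a]`, and budget `b = 0` forces the behavior policy:

```python
import numpy as np

def cb_backup(Q, R, P, mu, gamma=0.9):
    """One application of the counterfactual-budgeting Bellman operator.

    Q:  (S, B+1, A) value table, indexed Q[s, b, a]
    R:  (S, A) rewards;  P: (S, A, S) transitions;  mu: (S, A) behavior policy
    At each next state s', we choose between spending one unit of budget
    (max over actions, budget drops to b-1) and following mu (budget kept).
    """
    V_mu = np.einsum('na,nba->nb', mu, Q)   # keep budget: follow behavior
    V_max = Q.max(axis=2)                   # spend budget: greedy action
    W = V_mu.copy()                         # b = 0: no budget left, follow mu
    W[:, 1:] = np.maximum(V_max[:, :-1], V_mu[:, 1:])
    return R[:, None, :] + gamma * np.einsum('san,nb->sba', P, W)
```

Iterating this backup converges to a fixed point; one sanity check is that the fixed point is monotonically non-decreasing in the budget `b`, since extra budget can always be left unspent.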
In summary, we respectfully disagree that we do not have enough intuitive or theoretical support in the paper. We would like to refer to the description of our algorithm from reviewer wQce, who said that our algorithm is “a solution to solve the allocation problem by dynamic programming, which is also solid and theoretical justificated.”
> “Computation cost”
We acknowledge that BCOL increases the amount of computation compared with SAC and TD3. In our paper, we discussed a stochastic approximation that reduces the linear cost in $B$ at the end of Section 3.3. We remark that computation is often not the main bottleneck in offline RL settings where the data is fixed, as is commonly noted in prior offline RL work. It is more important to consider how to use the available data and leverage more information.
> “comparison with CQL and CDC”
To answer your concern, we would like to briefly explain the difference between the Mujoco and AntMaze tasks, and why we think our algorithm shows a significant improvement. Mujoco datasets in D4RL have a significant fraction of near-optimal trajectories and are considered easier tasks by prior work, e.g., IQL and CDC. AntMaze tasks require stitching parts of suboptimal trajectories that travel between different states to find a path from the start to the goal of the maze, and are more challenging and meaningful. (More discussion in Section 4 of the paper.) Thus we argue that the improvements on the AntMaze tasks are more significant and valuable. On AntMaze tasks, we outperform CQL and CDC by a much larger margin.
We would like to refer to our response to Reviewer 4X4f for a remark on how CQL results are reported in literature.
> “latest baselines”
We thank the reviewer for the pointers and we will include more baselines in an updated version. In the response to all reviewers, we compared against the two works pointed out by the reviewer, and baseline algorithms that these two works compared with. These baselines either do not report their performance, or are significantly worse than BCOL in the harder AntMaze tasks.
Additionally, it is important to note Kang, Bingyi, et al. was only available online after the NeurIPS submission deadline, so it was not possible for us to consider it.
> “Is the algorithm still working if there is more than one behavior policy?”
Yes. Most D4RL datasets contain data from more than one behavior policy. Please see Page 5 in the D4RL paper for further details. Our results show that our method outperforms others on this benchmark. Also BCOL does not need to know the behavior policy as well, and it will fit a behavior policy as $\mu$.
> “the confusing sentence”
We will clarify this in the updated version; however, let us explain it here. The sentence means that if we are under the constraint of taking at most B counterfactual decisions, we face the problem of where to allocate them. We expect counterfactual decisions to provide improvement on top of behavior values, but taking too many of them can result in extrapolation errors. At a given time step, we can take a counterfactual decision immediately and enjoy the value improvement; or we can take the action from the behavior policy, which saves the budget for the future, where it may be spent on more influential decisions.
> “formal definition of Q in Eq 4”
Equation 4 defines an operator on any Q function in the function space. Thus $Q(s,b,a)$ can be any function in the space $\mathcal{S} \times [B] \times \mathcal{A} \to \mathbb{R}$.
> “why does this intuitively upper bounds the amount of extrapolation”
Good question. The bounded extrapolation is from the comparison with standard Bellman update and RL methods based on it. In a standard Bellman backup we will update the Q function by:
$$ Q(s,a) \leftarrow r(s,a) + \gamma \max_{a’} Q(s’,a’) $$
Practical algorithms fitting this target keep updating the Q values with the extrapolated value $\max_{a'} Q(s',a')$ recursively (unless the argmax happens to coincide with the behavior policy). Such a recursive update is only terminated by the episode end or damped by the discount $\gamma$. Thus there are effectively $\frac{1}{1-\gamma}$ (or $H$, the horizon) applications of the $\max$ operator on the optimization path, and that many extrapolated queries to the Q function.
In contrast, our algorithm queries the extrapolated Q values at most $B$ times along one optimization path. As $B \ll \frac{1}{1-\gamma}$ and $B \ll H$, the amount of extrapolation with our Bellman backup is much smaller than with the standard Bellman operator.
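As a quick back-of-the-envelope check of this comparison (the discount value and budget below are arbitrary examples of ours, not values from the paper):

```python
def effective_max_queries(gamma):
    """Effective number of max-operator applications in a standard
    discounted Bellman recursion (the discounted horizon 1/(1-gamma))."""
    return 1.0 / (1.0 - gamma)

# e.g. gamma = 0.99 gives ~100 extrapolated queries per optimization
# path, versus a small budget such as B = 5 under the counterfactual
# constraint.
```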
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's effort in addressing my concerns. However, two of my biggest concerns are still not addressed.
### Theoretical Guarantee
The authors have repeated their motivations many times in the paper: limiting the number of times to deviate from the behavior policy can effectively improve extrapolation. The authors have accordingly defined a new operator and proved in Theorem 2 that the fixed point of the proposed operator is optimal value function of interest. However, Theorem 2, as the only theory contribution in this work, is not involved at all. More importantly, although the benefits of the proposed approach are underscored many times in words, there is no formal justification of the performance. **There is no rigorous understanding of the benefits brought by counterfactual balancing.** For example, what is the regret guarantee of BCOL? Under what condition does BCOL have an edge over other offline RL methods? These questions shadow the value of this work.
### Computation Cost
While the authors discuss very briefly that they tensorized the loop over $b$ and can use sampling over $b$ to further reduce the computation cost, a short formal analysis shows that the number of operations required for $\widehat Q_{\theta}(s,b,a)$ to uniformly reach a desired accuracy $\epsilon$ over $b$ is at least $\Omega(B)$. Moreover, the demand for high accuracy of the proposed $\widehat Q_{\theta}(s,b,a)$ should be higher than that of a regular $Q(s,a)$, because the error can accumulate over backups across $b$.
I would like to note that this is only the computational complexity; it is possible that the sample complexity also increases with the introduction of $b$ into the value function, which is more challenging to address.
Hence, I incline to keep my score for now.
Thanks!
---
Reply to Comment 1.1.1:
Title: Response to the reviewer's further questions
Comment: We are glad to hear that our responses addressed most of the reviewer's concerns. We have addressed the remaining questions below. We encourage the reviewer to revise their score after reading our responses. We believe that we have addressed all of the reviewer's concerns.
### Theoretical Guarantee
We respectfully disagree with the judgment on "lack of rigorous understanding" in this paper, from two perspectives.
“The rigorous understanding of the benefits brought by counterfactual balancing (our method)” is provided by the constrained optimality (property of our solution) in Theorem 2. As for why the constrained optimality is desirable, we would like to quote the reviewer’s own initial review: “**It is well-known in the offline RL community that the extrapolation needs to be careful... The challenging thing is to find out where to extrapolate.**” Theorem 2 provides a computationally feasible solution exactly to the question of “**where to extrapolate**”. Thus we are confused by the reviewer’s new comment that “Theorem 2 is not involved at all.”
As for the reviewer’s request regarding “what is the regret guarantee of BCOL?”: while we acknowledge that theoretical guarantees are very insightful and can guide online/offline RL algorithm design, we would argue that they are not the sole standard by which to judge RL algorithms and papers. We refer the reviewer to our experiment section, where our well-designed and comprehensive experiments clearly show the benefits of our proposed method.
There are many offline RL works with regret or sample complexity guarantees that are limited to linear function approximation settings [e.g. 3, and many other related works] without any empirical study. There are also a few offline RL algorithms with theoretical guarantees but without experiments on the challenging tasks in the D4RL benchmark [1,4,5]. Thus we believe this paper still brings new contributions to the community.
### Computation cost
We respectfully disagree with the reviewer that computation cost is a major measure of offline RL algorithms. Almost all offline RL algorithms use one or more of the following tricks that increase computation cost: introducing new regularizer terms, computing uncertainty sets, sampling actions multiple times from the behavior policy, or taking conservative backups from multiple target Q networks. Beyond these common tricks, as more recent examples, the work cited in the review [1] and its prior work [5] introduced min-max objectives, which add a whole new level of optimization problem and require more computation. Another recent offline RL work [2] introduced learning Q values, $Q(s,a,\delta)$, conditioned on a confidence level $\delta \in [0, 1]$. Offline RL algorithms are motivated by scenarios where samples are costly and thus limited. These algorithms are therefore often designed to use samples more efficiently and thoroughly, at an additional cost in computation.
We implement the Q function and policy with $B$ output heads in the last layer and vectorize the loop over $b$. Thus, the additional $O(B)$ computation cost only happens in the computation of loss and forward and backward pass of the last layer, rather than the whole network. Thus the additional $B$ factor does not apply to the whole amount of computation. (This is an exact implementation of Eq (6) and (7) without considering the sampling $b$ approach.)
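As an illustrative sketch (not the authors' code) of why the extra cost is confined to the final layer, a Q network can expose $(B{+}1) \times A$ output heads and produce $Q(s,b,a)$ for every budget in a single forward pass; all names and shapes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, hidden, B, A = 10, 16, 4, 3

trunk_W = rng.normal(size=(obs_dim, hidden))     # shared trunk: cost independent of B
head_W = rng.normal(size=(hidden, (B + 1) * A))  # only this layer scales with B

def q_all_budgets(s):
    """Return Q[s, b, a] for every budget b in one pass; the loop over b
    is vectorized into the (B+1)*A output heads of the final layer."""
    h = np.tanh(s @ trunk_W)
    return (h @ head_W).reshape(-1, B + 1, A)
```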
While our computation cost is comparable to other offline RL methods, we believe that it is not a major factor to consider when evaluating a new offline RL algorithm. In fact, we have not seen any work in offline RL that suggests that computation cost is a critical factor. Therefore, we do not believe that computation cost is relevant to the evaluation of our algorithm.
Regarding the sample complexity introduced by $b$: There is no uncertainty or unknown transitions related to the variable $b$. It is unclear what the reviewer means by the possible sample complexity (in order to estimate any new random variable introduced by $b$).
### Response to the overall judgement
**While the reviewer acknowledged the contribution of our algorithm, the main concerns are about why we did not make "other" contributions in the paper.** We found this unfair, as the reviewer did not identify any technical flaws, weak evaluation, inadequate reproducibility, or ethical considerations in our paper. These are the types of issues that would warrant a score of 3 per NeurIPS's definition of score 3. However, the reviewer assigned us a very low score without identifying any of these issues.
Finally, it is important to note that while most offline RL methods rely on policy or value regularization, our work takes a fresh approach to offline RL that is very different from previous works; to the best of our knowledge, we are the first to propose this perspective in offline RL. This in itself demonstrates the significance of our work. | Summary: This paper proposes a TD approach to induce counterfactual decision-making in offline RL agents. Basically, the approach uses a count-based budget that gives the agent scope for making decisions that are not exhibited by the behaviour policy. The paper implements this approach on various standard benchmarks and shows that the method performs favourably compared to state-of-the-art approaches. It also gives a theoretical justification for why budgeting should work.
Strengths: The strength of the paper lies in the simplicity of the proposed budgeting for counterfactual decision making. The paper identifies the gap in the existing approaches which fall short of inducing counterfactuals. The authors propose to put a hard budget on the number of counterfactual decisions taken by the agent. Building upon the constrained optimization formulation for solving the MDP, the work shows how it converges to a fixed point and argues about the optimality of the same. In addition, it gives a function approximation version for using deep learning based approaches in conjugation.
Weaknesses: I find the following weaknesses in the approach:
**Tuning of the B**: It is unclear how the budget would be tuned for different environments: a high B amounts to applying off-the-shelf RL algorithms without considering the nuances of the offline RL setting, while a low B makes the method similar to imitation learning techniques.
**Lack of Imitation Learning baseline**: It would be helpful to see how the proposed BCOL compares with state-of-the-art approaches for imitation learning. This will highlight the importance of counterfactual decision-making learnt from offline dataset. I suggest authors include a competitive baseline for IL, too.
**Comment on the novelty of budgeting**: Safe RL approaches have count-based constraints on agent's violations of safety. Therefore, the theory given in the present work has a very high overlap with the prior works, reducing the novelty in the optimization or the fixed point derived.
**Motivation behind the counterfactual budgeting**: I am myself not aware of regularized techniques for increasing counterfactuals in offline RL; however, from first principles, it seems that inducing counterfactuals might not be a good idea, as it might lead to safety issues when the agent is deployed. The aim of offline RL (at least to me) is finding connections in the offline data that can help improve returns, while taking care that the agent doesn't infer unrealistic novel behaviours from the offline data that could turn harmful once deployed in the actual environment. Can the authors please comment on the safety of BCOL agents?
**Why would such budgeting work**: From Equation 4 and the Select() given on line 185, it looks like BCOL would spend the budget in the initial steps of decision-making, and after that it would follow the behaviour policy. Put differently, BCOL's budgeting seems myopic in nature; it is unclear to me how the algorithm will induce the agent to use the budget later in decision-making, when it matters the most. Can the authors please comment on how the budget would be used pragmatically by the agent? It would help to see an empirical analysis of budget expenditure against time steps.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have listed my questions and corresponding suggestions in the above section.
Overall, I like the paper's straightforward approach to increasing counterfactuals in offline RL and enjoyed reading the derivations provided. However, the paper has many unaddressed weaknesses, as pointed out above. I am inclined towards borderline rejecting the work in its current form. With my questions answered, I would love to increase my rating for this work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do point out the limitations around using a count-based approach for budgeting and the theoretical analysis of fewer counterfactuals. In addition, I urge the authors to also discuss the safety considerations involved in BCOL.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We hope you will consider increasing your score after reading our responses. Please let us know if there are more questions.
> “Tuning of the B”
We agree with the reviewer on the algorithm’s behavior with high and low B. Such behavior (spanning the spectrum from imitation learning to vanilla RL as the hyper-parameter varies) is common and desirable for offline RL algorithms. We would like to restate that we use a single value of B across all Mujoco tasks, and another value of B across all AntMaze tasks. This shows the algorithm is less sensitive to hyperparameters than many offline RL algorithms that tune hyperparameters per task. It also shows that it is feasible to tune B on one simpler validation task that shares similar properties with the target task. We would also like to remark that, compared with the coefficient of the pessimism/uncertainty regularization in other algorithms, B has a clear physical meaning, is more intuitive, and is easier to set using prior knowledge about the environment.
> “Lack of Imitation Learning baseline”
As the reviewer requested, we will include more imitation learning baselines in the paper. Here we include results from BC on the top 10% of data, advantage-weighted regression, and the decision transformer, as common imitation learning baselines in offline RL work. Please see the results in the rebuttal to all reviewers.
> “the novelty of budgeting”
We thank the reviewer for pointing out the connection between our work and count-based constraints violation in safe online RL. However, there is a key difference between our setting and safe online RL: in online RL, the agent can interact with the environment to collect more data, while in offline RL, only limited data is available and exploration is not an option. This difference has a significant impact on the way that we design methods for offline RL. Additionally, while count-based constraints violation has been studied in safe RL, to the best of our knowledge, our approach of using budget for counterfactual decisions in the context of offline RL is novel (also reviewer wQce pointed that out). Specifically, our proposed algorithmic idea of using dynamic programming to solve a constrained optimization problem in an offline setting has not been proposed before. We will include a discussion of the connections with safe RL in our paper. If the reviewer can provide a more specific reference that raises concerns about the novelty of our algorithm, we would be happy to include a more detailed discussion in the paper.
> “Motivation behind the counterfactual budgeting”
We would like to clarify that our algorithm is not a “regularized technique for increasing counterfactuals in offline RL”. On the contrary, our algorithm can be viewed as *decreasing counterfactuals in offline RL*, as most offline RL methods do, but in a simpler, more direct, and more explainable way. In short, RL methods intrinsically learn over counterfactuals, and our budgeting idea upper bounds them.
More specifically, vanilla Q learning (or any off-the-shelf online RL method based on that), without any constraints, will update the Q values from the max Q values in the next time step:
$$ Q(s,a) \leftarrow r(s,a) + \gamma \max_{a’} Q(s’,a’) $$
This backup implicitly updates the greedy policy with a counterfactual action $\arg\max_{a’} Q(s’,a’)$ unless the argmax is the behavior action. Online RL algorithms such as SAC, DDPG, and TD3 are all based on learning these counterfactual actions. In the offline setting, a popular approach is adding a regularization term on top of the Bellman error on the Q function, such as:
$$ - (\max_{a’} Q(s,a’) - Q(s,a)) $$
where $a$ is drawn from the behavior policy. Such a regularization term prevents the Q values of counterfactual actions from becoming too large, and thus decreases the number of counterfactuals in the resulting Q function and its greedy policies. However, unlike our method, this type of popular regularized method cannot yield an absolute test-time upper bound on counterfactuals. In the sense of counterfactual decisions, our method is safer than other offline RL methods, except behavior cloning, which follows the behavior policy (factual decisions) at all times.
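To make the contrast concrete, here is a minimal numpy sketch (our own illustration, not code from the paper; the function names are hypothetical):

```python
import numpy as np

# Vanilla Q-learning target: bootstraps from max_{a'} Q(s', a'), which is a
# counterfactual action whenever the argmax differs from the behavior action.
def q_learning_target(r, gamma, q_next):
    return r + gamma * q_next.max()

# The gap max_{a'} Q(s, a') - Q(s, a) for a behavior action a: regularized
# offline methods of the kind described above penalize this gap so that
# counterfactual Q values cannot grow too large relative to behavior actions.
def counterfactual_gap(q_s, a_behavior):
    return q_s.max() - q_s[a_behavior]
```

The penalty shrinks counterfactual Q values on average, but nothing in it caps the *number* of counterfactual actions the greedy policy ends up taking at test time.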
Please see Section 3 in our paper for more discussion.
> “Why would such budgeting work”
We emphasize that our algorithm does not simply spend the budget in the initial steps of decision-making. The reason is that we use dynamic programming to solve the trade-off between spending the budget now to get a better action and keeping the budget to get a better future value. Mathematically, this is the trade-off between $\max_{a’} Q(s’, b-1, a’)$ and $E_{a’ \sim \mu} Q(s’,b,a’)$ in Eq (4) in our paper. The maximization in the first term represents the immediate benefit of a greedy counterfactual action, while the second term, evaluated at a higher remaining budget and thus higher Q values, captures the benefit of taking a factual action now in order to achieve a higher value from future counterfactual actions. Always taking the greedy action at the beginning is myopic and is not the optimal solution to this dynamic programming problem.
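The trade-off can be sketched as a small numpy routine (our own hypothetical illustration of the described backup, not the paper's implementation; array shapes and names are assumptions):

```python
import numpy as np

def budgeted_backup(Q, r, gamma, mu, s_next):
    """Sketch of the Eq. (4)-style trade-off described above.

    Q: array [num_states, B + 1, num_actions], where Q[s, b, a] is the value
       of action a in state s with b counterfactual decisions remaining.
    mu: behavior policy, array [num_states, num_actions].
    Returns one backup target per remaining budget b, for next state s_next.
    """
    B = Q.shape[1] - 1
    targets = np.empty(B + 1)
    for b in range(B + 1):
        # keep the budget: follow the behavior policy at budget level b
        factual = mu[s_next] @ Q[s_next, b]
        if b > 0:
            # spend one unit of budget on a greedy counterfactual action
            counterfactual = Q[s_next, b - 1].max()
            targets[b] = r + gamma * max(counterfactual, factual)
        else:
            targets[b] = r + gamma * factual  # budget exhausted: behavior only
    return targets
```

Note that with budget left, the counterfactual branch only wins when its value at the *reduced* budget `b - 1` beats the behavior expectation at budget `b`, which is exactly the planning trade-off described above.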
In fact, our experiments show there is still budget left at the end of episodes during testing. We attach the total budget BCOL spent at test time, averaged over seeds and the last 10 steps, for the AntMaze tasks.
Task | Average budget spent in test (max budget is 50) | Standard deviation of budget spent
---|---|---
Antmaze-umaze-v0 | 41.00 | 0.91
Antmaze-umaze-diverse-v0 | 20.02 | 6.44
Antmaze-medium-play-v0 | 45.94 | 1.13
Antmaze-medium-diverse-v0 | 46.04 | 0.92
Antmaze-large-play-v0 | 43.33 | 1.81
Antmaze-large-diverse-v0 | 43.75 | 1.77
This result shows the budget spent is less than 50, so there is budget left at the end of episodes. Thus the algorithm is not forced to take behavior actions; instead, it plans how to spend the budget via line 1 in Select().
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response to my review.
Tuning of B:
The authors acknowledge that high and low B values will have the effects pointed out in the main review. However, I fail to understand why such behaviors are _desired_ in the context of the present work. The present work aims to induce counterfactuals in offline RL with count-based budgets. In that case, it would be great if authors shed more light on how B is chosen for a given environment; otherwise, issues with the value of B arise.
Imitation Learning baseline:
Thanks for including the BC baseline. It answers my query.
Novelty of budgeting:
The reason for pointing out the connection here is that neither the present work nor the safe online RL works restrict themselves to the offline or online setting, respectively, while deriving the fixed point theoretically. So, the authors can at least acknowledge the count-based optimization in safe RL. However, I acknowledge the novelty of applying count-based optimization for counterfactual induction in offline RL.
Motivation behind the counterfactual budgeting:
Yes, the present work is not a "regularized technique." When I refer to them, I mean the previous works that try to induce counterfactual decision-making in offline RL. In this context, I request the authors to comment on the safety of inducing such counterfactuals, which might lead to unrealistic extrapolation and cause safety hazards when the offline-trained RL agent is deployed in the real world.
Why would such budgeting work?:
I feel the table attached here answers the question of whether BCOL avoids myopic use of budget to a limited extent. I do agree with the authors that BCOL will limit the counterfactuals within the provided budget. However, my question is regarding the optimal use of the budget, i.e., empirically confirming whether the budget is used in the best way possible. I do get the argument that it is DP's task to ensure such strategic use of budget. But, it would still be helpful to provide at least a proof of concept on a small grid world. For simplicity, authors can verbally describe a 5x5 or some such grid world with few trajectories and concretely present a case for BCOL's budgeting.
I am keeping the score for now, but I will be happy to continue the discussion on the above points.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer FoC4's further questions (part 1)
Comment: We are pleased that you are on board with the idea and merit of the paper. We hope our responses clarify the remaining issues and that the reviewer will increase their score, as they have already acknowledged the significance of our work.
### A meta clarification note about inducing counterfactuals
> The present work aims to induce counterfactuals in offline RL
> safety of inducing such counterfactuals
We see this was repeated many times in the review and this response. Thus we want to clarify this to make sure we understand the reviewer’s comments correctly.
All offline RL methods introduce counterfactual decision-making by the very nature of RL; a method that never plans on any counterfactual decision is imitation learning. BCOL upper bounds these counterfactuals in offline learning. We are not aware of any other offline RL work that directly upper bounds the induced counterfactual decision-making as we do. Thus the goal of our work is not really **inducing counterfactuals** but **reducing counterfactuals**. This is the key insight and contribution of the whole paper, and we hope the paper expresses it clearly.
### Response to specific questions
>Tuning of B: The authors acknowledge that high and low B values will have the effects pointed out in the main review. However, I fail to understand why such behaviors are desired in the context of the present work.
That is a good question. The reason is that offline RL requirements vary across environments and dataset conditions. For example, the agent should act more conservatively while learning from a near-expert dataset, and it will require more exploration with a dataset collected by a random policy. Thus an offline RL algorithm needs to be flexible enough to cover the spectrum from imitation learning (no exploration) to vanilla online RL (proactive exploration). Many existing offline RL algorithms share a similar property; e.g., the IQL paper says “We also emphasize that our value learning method defines the entire spectrum of methods between SARSA ($\tau = 0.5$) and Q-Learning ($\tau \to 1$).”
>The present work aims to induce counterfactuals in offline RL with count-based budgets. In that case, it would be great if authors shed more light on how B is chosen for a given environment; otherwise, issues with the value of B arise.
We thank the reviewer for this good suggestion. We discussed how we tuned B in our experiments on page 8 of the paper. As a recommendation for readers selecting B, we suggest starting from B=0 or 1 as an imitation-style baseline, then increasing or doubling the value of B, which adds more of an RL component on top of the imitation learning baseline, and stopping once variance and instability are observed during offline training. In this way we select budgets that improve on the imitation baseline while avoiding over-extrapolation in offline RL. We will include more discussion in an updated version of our paper.
> Imitation Learning baseline:Thanks for including the BC baseline. It answers my query.
We thank the reviewer’s acknowledgement of our effort.
> Novelty of budgeting
We thank the reviewer for the good suggestion. We will include a discussion of count-based optimization in an updated version of our paper. Could the reviewer point us to specific references in safe RL, to make sure we do not miss the important ones?
> In this context, I request the authors to comment on the safety of inducing such counterfactuals, which might lead to unrealistic extrapolation and cause safety hazards when the offline-trained RL agent is deployed in the real world.
This is a good question. First, we must clarify that BCOL is not “inducing such counterfactuals”, and we hope it is not the underlying assumption behind the reviewer’s request. (See our meta clarification note). We want to make sure there is no misunderstanding on this point.
We agree that safety concerns when an offline-trained RL agent is deployed in the real world are very important. We will include more discussion about this in the introduction and discussion sections. We also want to mention that safe RL has a very different setup from offline RL, though the two overlap. Most existing offline RL methods have not been formally tested against safety criteria, and it is not clear how they would perform. Our method is no exception.
However, our algorithm's design shares a similar motivation with the reviewer’s comments. Previous work, no matter how it regularizes the value or policy function, cannot provide a guarantee on the distance between the deployed policy and a safe baseline (the behavior policy). Our algorithm, by contrast, has an absolute upper bound on deviations from the behavior policy at test time, if we can assume the behavior policy is always safe to execute. We would be happy to address further specific concerns about our algorithm in this respect.
---
Reply to Comment 1.1.2:
Title: Response to Reviewer FoC4's further questions (part 2)
Comment: > Why would such budgeting work?
We are glad that our responses addressed your questions.
Here we provide a 3 x 4 grid-world example to illustrate how the budget and the BCOL algorithm work. We describe the map below. S is the starting state, F is a failure state with reward -1, and G is the goal state with reward 1. We refer to the position of a state by (y, x), where (1, 1) is the top-left corner (e.g., the starting state S is (2, 1)). A star marks an empty grid cell (OpenReview uses markdown, which does not allow empty cells).
— — — — — —
| * | * | * | * |
— — — — — —
|S | * | * | G |
— — — — — —
| * | F | * | * |
— — — — — —
There are two types of trajectories in the dataset: type 1 and type 2 where most of the trajectories in the dataset are type 1, and there are only few type 2 trajectories. The trajectories are described below, with L, R, U, D denoting the four actions left, right, up, and down. E means the trajectories end.
Trajectory type 1:
— — — — — —
|R |D | * | * |
— — — — — —
|U | R | R | E |
— — — — — —
| * | * | * | * |
— — — — — —
Trajectory type 2:
— — — — — —
| * | * | * | * |
— — — — — —
|R | D | * | * |
— — — — — —
| * | E | * | * |
— — — — — —
Without any budget for counterfactuals, imitation learning agents follow the first type of trajectory and take a longer path to the goal. With budget = 1, BCOL selects between following the empirical behavior policy (“U”) and an alternative action (“R”) in the starting state. Notice that because there are a few type-2 trajectories, the high value of (2, 2) is reflected in the backed-up value of action “R” in (2, 1). Thus taking action “R” and spending the budget results in a higher value. This is also because, in the later states along trajectory type 1, no alternative action leads to any state with a higher value than the goal state. The budget also prevents the agent from taking too many counterfactual actions, e.g., in states (2,2) or (2,3), unless these counterfactual actions provide a higher reward gain than the optimal path.
With this type of grid-world example, it is also easy to see that we can change the state positions and trajectories so that the optimal way of spending the budget is in the middle of a trajectory.
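The grid-world argument above can be checked mechanically. The sketch below is our own illustration (not code from the paper): it assumes a 99%/1% mix of type-1/type-2 trajectories for the empirical behavior policy, a discount of 0.95, terminal rewards paid on entering G or F, and value 0 for states outside the data support. Under these assumptions, budgeted value iteration with B = 1 spends its single unit of budget at the start state on action "R":

```python
# Budgeted value iteration on the 3 x 4 grid world described above.
# The behavior mix, gamma, and the value-0 treatment of out-of-support
# states are our assumptions, not details from the paper.
GAMMA = 0.95
GOAL, FAIL = (2, 4), (3, 2)                         # reward +1 / -1, terminal
SUPPORT = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)]  # non-terminal states in data
MU = {                                              # empirical behavior policy
    (2, 1): {'U': 0.99, 'R': 0.01},
    (1, 1): {'R': 1.0},
    (1, 2): {'D': 1.0},
    (2, 2): {'R': 0.99, 'D': 0.01},
    (2, 3): {'R': 1.0},
}
MOVES = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}
B = 1

def step(s, a):
    s2 = (s[0] + MOVES[a][0], s[1] + MOVES[a][1])
    if s2 == GOAL:
        return None, 1.0
    if s2 == FAIL:
        return None, -1.0
    if s2 not in SUPPORT:            # off-grid or never visited in the data
        return None, 0.0
    return s2, 0.0

V = {(s, b): 0.0 for s in SUPPORT for b in range(B + 1)}

def q(s, b, a):
    s2, r = step(s, a)
    return r + (GAMMA * V[(s2, b)] if s2 else 0.0)

for _ in range(200):                 # Eq. (4)-style backup, run to a fixed point
    for s in SUPPORT:
        for b in range(B + 1):
            keep = sum(p * q(s, b, a) for a, p in MU[s].items())
            spend = max(q(s, b - 1, a) for a in MOVES) if b > 0 else float('-inf')
            V[(s, b)] = max(keep, spend)

start = (2, 1)
spend_vals = {a: q(start, 0, a) for a in MOVES}   # spend the budget right now
keep_val = sum(p * q(start, 1, a) for a, p in MU[start].items())
best = max(spend_vals, key=spend_vals.get)
print(best, spend_vals[best] > keep_val)          # prints: R True
```

Under these assumed numbers, spending at the start (value ≈ 0.884) beats saving the budget (≈ 0.815). Making the risky type-2 action at (2, 2) more frequent flips the comparison, so the DP instead saves the budget to override "D" at (2, 2) later, which illustrates the point above that the optimal spending position depends on the trajectories.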
> I am keeping the score for now, but I will be happy to continue the discussion on the above points.
We are actually puzzled by the reviewer’s overall score, as the reviewer's major questions are about clarification and the connection with safe RL work. We hope the responses above address the remaining questions, and we are happy to address any concerns that prevent the reviewer from increasing the score. | Summary: This paper gives a totally new solution to offline RL: instead of introducing pessimism via a behavior constraint or value regularization, it budgets the number of counterfactual decisions, which naturally reduces overestimation. The paper also gives a good formulation of the problem and provides a nice solution to the allocation problem via dynamic programming. It shows good empirical performance on offline RL benchmarks and proves optimality of the fixed-point solution as theoretical justification.
Strengths: - The paper is very well written.
- The idea of avoiding distributional shift by budgeting the number of counterfactual decisions is very novel and promising.
- This paper gives a nice formulation of the problem and provides a solution to the allocation problem via dynamic programming, which is solid and theoretically justified.
- This paper shows strong performance on various D4RL datasets, though not SOTA on some datasets when compared to more recent works.
Weaknesses: - This method introduces two hyperparameters to tune: $\omega$ and $B$.
- It is better to have some visualization on which critical steps does the algorithm tends to make counterfactual decisions and whether it has some physical meanings. For example, on antmaze tasks, which location does the algorithm make counterfactual decisions.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Why is adding the gap_penalization term (Equation 9) necessary? Does it still do some kind of behavior regularization implicitly? Could you elaborate more on the usage of Equation 9?
- Why can SAC+BCOL work well on antmaze tasks while TD3+BCOL can't?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and positive feedback. Please see our responses to your comments and let us know if there are more questions.
> This method introduces two hyperparameters to tune
That is a fair point. Our method does have two hyperparameters, but this does not diminish its efficacy or applicability as a practical offline RL algorithm. In fact, even without the hyperparameter $\omega$ ($\omega=0$), our method (BCOL with the SAC implementation) shows strong performance (and sometimes remains state-of-the-art) on Mujoco and AntMaze tasks, as shown in Figure 1 and Table 6 of our paper. Moreover, hyperparameter tuning is an open problem across offline RL: other methods such as IQL, CQL, and CDC also have multiple hyperparameters.
> More visualization on critical steps and counterfactual decisions
That is a good suggestion, thank you. We will work on creating visualizations for the camera-ready version of our paper to show which critical steps the algorithm tends to make counterfactual decisions in AntMaze tasks.
>Why adding the gap_penalization term (Equation 9) is necessary:
That is a good question. $Q(s,b+1,a) \ge Q(s,b,a)$ is a property that must be satisfied by our desired function $Q$, but it does not always hold when we simply optimize the TD errors. Thus enforcing this constraint helps learn the desired Q function. One way to enforce it is to design a neural network architecture that outputs a monotonically increasing sequence of real numbers as $Q(s, \cdot, a)$, but that adds complexity to the network structure and training. We therefore choose the simpler route of adding a regularization term that penalizes violations of the property $Q(s,b+1,a) \ge Q(s,b,a)$. Such a regularizer does not directly enforce a behavior constraint on the policy, because it constrains Q gaps for the same action across different budgets rather than gaps between different actions. But it helps the dynamic programming over counterfactual actions find a self-consistent solution. We will include more discussion in the corresponding section of the updated version.
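A minimal sketch of such a regularizer (our own numpy illustration of a hinge-style penalty on monotonicity violations; the paper's Equation 9 may differ in detail):

```python
import numpy as np

def gap_penalty(q):
    """Penalize violations of the monotonicity property Q(s, b+1, a) >= Q(s, b, a).

    q: array [batch, B + 1, num_actions], with q[:, b, :] = Q(s, b, a).
    Only the positive part of Q(s, b, a) - Q(s, b+1, a) is penalized, so the
    penalty is zero whenever Q is already non-decreasing in the budget b.
    """
    violation = q[:, :-1, :] - q[:, 1:, :]   # Q(s, b, a) - Q(s, b+1, a)
    return np.maximum(violation, 0.0).mean()
```

Because the penalty compares the same action at adjacent budget levels, it leaves the ordering of different actions untouched, which is why it does not act as an implicit behavior constraint.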
>Why SAC+BCOL can work well on AntMaze tasks while TD3+BCOL can't?
That is a great question. We believe this is an issue with the TD3 algorithm, and with TD3-based offline RL in general. For example, the original TD3+BC paper does not report results on AntMaze tasks, and our reproduced results show TD3+BC performs poorly on AntMaze. Similarly, prior work [1] observed that BCQ (another TD3-based offline RL algorithm) also performs poorly on AntMaze. Meanwhile, we notice there are fewer TD3-based offline RL algorithms than SAC-based ones. Our hypothesis is therefore that TD3-based algorithms face larger challenges in offline RL, especially in tasks like AntMaze where the offline algorithm needs to stitch together multiple behaviors to find the optimal policy.
[1] Fakoor, Rasool, et al. "Continuous doubly constrained batch reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 11260-11273. | Summary: This paper designs a new algorithm for offline RL and conducts experiments to validate it.
Strengths: The algorithm proposed is new. The intuition behind is presented clearly.
Weaknesses: 1. The performance of the proposed algorithm does not demonstrate substantial superiority compared to the Conservative Q-Learning (CQL) algorithm. It would be beneficial to provide more persuasive evidence of the proposed algorithm's efficacy.
2. The use of the "budget" concept presents limitations, as it appears applicable primarily in tabular settings. Its adaptability to continuous cases is challenging, which curtails its generalizability. The authors may want to consider elucidating the feasibility of this concept in a broader context or proposing alternative approaches for continuous settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The performance of the proposed algorithm does not demonstrate substantial superiority compared to the Conservative Q-Learning (CQL) algorithm. It would be beneficial to provide more persuasive evidence of the proposed algorithm's efficacy.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The use of the "budget" concept presents limitations, as it appears applicable primarily in tabular settings. Its adaptability to continuous cases is challenging, which curtails its generalizability. The authors may want to consider elucidating the feasibility of this concept in a broader context or proposing alternative approaches for continuous settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We hope you will consider increasing your score after reading our responses. Please let us know if there are more questions.
> The performance of the proposed algorithm does not demonstrate substantial superiority compared to the Conservative Q-Learning (CQL) algorithm.
Contrary to what the reviewer suggested, our results are better than the reported CQL results. In particular, on the total score for the Mujoco control tasks (746 vs 713) and, more importantly, the harder AntMaze tasks (396 vs 352.9), our algorithm BCOL surpasses CQL by a large margin. (Please see the discussion about why AntMaze is a more meaningful benchmark in our response to Reviewer bnLz about “comparison with CQL and CDC”.) We consider this a substantial improvement given the merits of CQL and how well it performs on Mujoco tasks (e.g., a contemporaneous work [1] summarizes the comparison between CQL and several more recent offline RL algorithms).
We would like to bring the reviewer's attention to two observations about how CQL results are mainly reported in related works. In short, we tried our best to be as fair as possible to CQL, which is different from the way CQL is reported in literature. Thus the improvement on top of CQL does not look as large as in prior work.
1. The CQL website [2] keeps updating its reported performance, which has improved significantly since the publication of the paper, especially after the release of the D4RL Mujoco-v2 environments. Meanwhile, many prior works [3,4,5,6] only compare to the results (on Mujoco-v0) in the original CQL paper. That explains why the gap with CQL in these papers is so large.
2. In our submission, we reported CQL’s results on the AntMaze environments from the CQL paper. However, many recent papers [1,7,8,9] compare proposed methods against their own reproduced CQL results, not the original numbers, and these reproductions are often worse than the originals. E.g., regarding the offline results of CQL in Table 2, Appendix C in [7] says “Our reproduced results offline are worse than the reported results, particularly on medium and large AntMaze environments.” Table 1 in [1], Table 1 in [8], and Footnote 4 on page 8 in [9] report results similarly. That explains why the gap with CQL in other papers is very large.
> The use of the "budget" concept presents limitations, as it appears applicable primarily in tabular settings
We believe there is a misunderstanding. We would like to clarify that we derive a practical Bellman operator designed for function approximation in Section 3.1. Our method works with continuous state and continuous action spaces, as evidenced by our comprehensive experiments with gym MuJoCo and AntMaze in the D4RL benchmark. Both of these environments have continuous state and action spaces. Therefore, we are unsure of what the term "tabular" refers to in our context.
If the reviewer means that the budget space is discrete, we acknowledge this for our current method. However, it is not a fundamental limit of the general “budget” approach. We could define a continuous distribution-distance metric as the budget variable and as an input to the Q function $Q(s,a,b)$; the sum over $b$ in the training loss could then be replaced with sampling $b$ uniformly. We emphasize that our method as is already shows the effectiveness of the budgeting idea, and we leave a continuous-budget implementation as future work. We will include more discussion about this in the Discussion section of an updated version.
[1] Improving and Benchmarking Offline Reinforcement Learning Algorithms. Kang et al. 2023
[2] https://sites.google.com/view/cql-offline-rl
[3] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." Advances in neural information processing systems 34 (2021): 15084-15097.
[4] Brandfonbrener, David, et al. "Offline rl without off-policy evaluation." Advances in neural information processing systems 34 (2021): 4933-4946.
[5] Cheng, Ching-An, et al. "Adversarially trained actor critic for offline reinforcement learning." International Conference on Machine Learning. PMLR, 2022.
[6] Bhardwaj, Mohak, et al. "Adversarial model for offline reinforcement learning." arXiv preprint arXiv:2302.11048 (2023).
[7] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv preprint arXiv:2110.06169 (2021).
[8] Fakoor, Rasool, et al. "Continuous doubly constrained batch reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 11260-11273.
[9] Fujimoto, Scott, and Shixiang Shane Gu. "A minimalist approach to offline reinforcement learning." Advances in neural information processing systems 34 (2021): 20132-20145.
---
Rebuttal 2:
Title: Thank you
Comment: We thank the reviewer for their time and effort. As the discussion period goes, we would be happy to explain anything further or address more questions. If we were able to address your questions and concerns, then we would appreciate if you can update your review. Thanks again for putting the time into reviewing our paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback that we've used to greatly improve the paper. We have responded to the concerns of the reviewers as individual comments below.
We are glad that the reviewers found that our method is simple and straightforward (FoC4) and novel (4X4f, wQce, bnLz), and that it is a solid, theoretically justified approach (wQce). They also said that the paper is generally well-written, enjoyable to read, presents a clear message, and presents the intuition behind the method clearly (4X4f, wQce, bnLz). They found our experiments clear and comprehensive (wQce, bnLz), with strong performance (wQce).
We would like to clarify some key facts and summarize our main contributions here:
1) To the best of our knowledge, this is the first work to upper bound the number of counterfactual decisions in the context of offline RL. Such an objective is more straightforward and explainable than those of many other offline RL methods.
2) We propose a dynamic programming approach to optimize this objective. By planning over the benefits of different extrapolation steps, our algorithm balances taking the immediately greedy action with respect to the Q values against the potential benefit of taking a more advantageous action in the future, at an opportunity cost.
3) Theoretically, we prove that our dynamic programming algorithm asymptotically finds the optimal way to allocate the counterfactual budget, given an upper bound on the number of counterfactual decisions.
4) Through our comprehensive experimental results, we show that our algorithm outperforms most of the offline RL baselines. It verifies the effectiveness of bounding the counterfactual decisions in offline RL.
We have included new experimental results in response to the reviewer comments from recent papers, and these results show that our method outperforms the latest SOTA by a large margin on the AntMaze task (one of the harder tasks in the D4RL benchmark) and is comparable to the latest SOTA on Mujoco tasks. These latest results further strengthen our claims about the effectiveness and applicability of our method.
We attach the comparison against imitation learning baselines here.
Task | 10% BC | AWR | DT | BCOL
---|---|---|---|---
Mujoco total | 666.2 | 308.5 | 672.6 | 746.0
AntMaze total | 134.2 | 126.3 | 112.2 | 396.0
For more recent offline RL baselines, we compare BCOL against them here as well.
Tasks | ARMOR | MoRel | MOPO | RAMBO | COMBO | ATAC | MuZero | CRR+ | CQL+ | BCOL
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Mujoco total | 788.6 | 656.5 | 379.3 | 753.1 | 738.3 | 792.4 | 140.2 | 703.9 | 717.9 | 746.0
AntMaze total | - | 0 | 0 | 37.8 | 137.6 | - | 0 | 41.9 | 89 | 396.0
As these results show, BCOL outperforms IL baselines by large margins. BCOL is comparable to the latest SOTA on Mujoco tasks and is substantially better on AntMaze tasks. For simplicity, we only list the total score here for all new results in the rebuttal, but we will include the full results in the paper. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals | Accept (poster) | Summary: This paper focuses on devising a new inductive bias for cutting-edge graph applications and presents a general framework through the lens of variational analysis. To this end, the authors first introduce a new selective mechanism that can be easily integrated into existing GNNs and effectively address the trade-off between model depth and over-smoothing, and then devise a novel generative adversarial network to predict the spreading flows in the graph through a neural transport equation. Extensive experiments show that the proposed GNN models can achieve state-of-the-art performance on graph learning benchmarks such as Cora, Citeseer, and Pubmed.
Strengths: 1. The perspective of the article is innovative, as it creatively utilizes heuristic problems from traditional physics to inspire the design of GNN structures.
2. The description of the proposed GNN architecture is clear, and the experimental setup is well-defined.
3. The experimental results validate the superiority of the method proposed by the author and demonstrate the rationality of the proposed theory.
Weaknesses: 1. The majority of the content in the article is based on the Euler-Lagrange (E-L) equation of the heat kernel. However, I don't believe that detailed knowledge of the E-L equation is familiar to every graph neural network researcher. Therefore, I think it is important to briefly introduce it when it first appears (Line 74) or mention it in the appendix. This is crucial for maintaining the readability and coherence of the article. In fact, it took me a lot of time consulting relevant materials on the Euler-Lagrange (E-L) equation and Lagrangian mechanics.
2. The issue of over-smoothing in GNNs has been extensively studied since 2020, but the theoretical reasons behind this problem have not been conclusively determined [1, 2, 3]. In lines 135-136, the authors claim that after connecting the GNN inductive bias to the function of the graph diffusion process, we can postulate that the root cause of over-smoothing is the isotropic regularization mechanism encoded by the ℓ2-norm. Treating this theory as a conclusion is certainly not a problem. However, since many subsequent formula derivations are based on this theory, I have to consider whether it is somewhat hasty and overclaimed. I think the authors should provide some deductions before presenting this theory to maintain logical rigor.
3. In line 166, the authors claim that Eq. 1 is the dual formulation with min-max property for the TV distillation problem. It is well known that the dual problem is strictly mathematically defined. I do not fully understand how the duality problem here is derived. Please provide further explanation.
4. In lines 185-186, I am confused about how the recursive min-max solution for Eq. 1 is obtained by disentangling the building block in vanilla GNN into feature representation learning and graph diffusion underlying TV. I do not intuitively perceive the connection between the two. Please explain.
reference:
1. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. ICLR 2020
2. Towards Deeper Graph Neural Networks. KDD 2020
3. Beyond Low-frequency Information in Graph Convolutional Networks. AAAI 2021
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weak points.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see the weak points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate this reviewer’s insightful comments. We provide answers to the four major questions below.
1. $\bf Q$: Lack of detailed knowledge of the EL equation for GNN researchers.
$\bf A$: We apologize for this oversight. We will add more details and relevant work in the Supplementary. Specifically, we plan to open the background introduction by discussing the Neural ODE work [1], which offers an ODE-based interpretation of ResNet. Additionally, we will provide a brief summary of the pioneering contributions [2-4] that establish connections between GNNs and heat equations on graph data. Our work draws significant inspiration from these early studies.
[1] Neural ordinary differential equations. NeurIPS 2018.
[2] Grand: Graph neural diffusion. ICML 2021.
[3] GRAND++: Graph neural diffusion with a source term. ICLR 2021.
[4] PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. NeurIPS 2021.
2. $\bf Q$: Should provide some deduction regarding the cause of over-smoothing to maintain logical rigor.
$\bf A$: We appreciate this constructive comment. We will make it clear in the final version by the following means.
$\bf First$, we will introduce the GNN models in [1-3] (shown below) in the final version (placed in the last paragraph of section 2.1), as part of relevant works. Specifically, [1] showed an interesting engineering solution to alleviate the over-smoothing issue by trimming graph nodes/edges. [2] proposed to disentangle feature learning and propagation steps which is similar to our idea of adding the FC layer and DC layer (in Fig. 3 of the main manuscript). The approach in [3] presented an adaptive information aggregation approach by treating low and high-frequency information differently.
$\bf Second$, we will follow the approach in [2] to explain the intuition of why deeper GNNs fail with the $l_2$-norm regularization term, from the perspective of (1) t-SNE visualization of node feature representations as the number of GNN layers increases, (2) the evolution curve of the $l_2$-norm, and (3) the evolution curve of the TV term, as the number of layers increases. Preliminary results on the Cora dataset are shown in Fig. 2-3 of the 1-page PDF. We will show the same results for other datasets (such as PubMed and Citeseer) in the Supplementary of the final version. It is clear that (1) the topological community structure produced by isotropic diffusion is much less consistent with the label distributions than that produced by TV-based adaptive diffusion (Fig. 2), and (2) the trajectory of the $l_2$-norm drops much faster than the counterpart curve of the TV term (Fig. 3), indicating the effectiveness of the TV-based GNN in alleviating the vanishing of graph gradients (an effect of the over-smoothing issue).
$\bf Third$, we will strengthen the rigor by linking the classic TV-based work (such as Merriman-Bence-Osher (MBO) scheme [4]) in image processing with our TV-based GNN solution for graph data learning. Despite the distinct mathematical frameworks used to define diffusion processes on grid coordinates and graph structures, they both exhibit a common cause for the over-smoothing issue, which can be attributed to the isotropic regularization mechanism encoded by the $l_2$-norm.
[1] DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. ICLR 2020.
[2] Towards Deeper Graph Neural Networks. KDD 2020.
[3] Beyond Low-frequency Information in Graph Convolutional Networks. AAAI 2021.
[4] An MBO scheme on graphs for classification and image processing. SIAM Journal on Imaging Sciences. 2013;6(4):1903-30.
3. $\bf Q$: How is the duality problem in Eq. 1 (line 166) derived?
$\bf A$: We apologize for the confusion. Since the TV term is not differentiable at 0, there are two common approaches to circumvent this issue: (1) replace $|.|$ by a function which behaves almost like the magnitude function while being differentiable, e.g., $|x| \approx x^2/\sqrt{x^2+\epsilon^2}$ where $\epsilon$ is a small perturbation; (2) rewrite the minimization problem into the min-max scheme (lines 158-159) by introducing a dual variable $z$. We choose to employ the dual formulation (specifically the Lagrange dual form), as commonly used in TV-based image processing [1]. We will make it clear in the final version.
[1] A simple primal–dual method for total variation image restoration. Journal of Visual Communication and Image Representation, 2016, 38, 814-823.
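For intuition, the first workaround (the smoothed magnitude function) can be sketched in a few lines of NumPy. This is our own illustrative snippet, not code from the paper; the function name is ours:

```python
import numpy as np

def smoothed_abs(x, eps=1e-3):
    # Differentiable surrogate for |x|: x^2 / sqrt(x^2 + eps^2).
    # It behaves almost like the magnitude function away from 0,
    # yet is smooth at 0 (its derivative there is 0 rather than undefined).
    return x ** 2 / np.sqrt(x ** 2 + eps ** 2)

vals = smoothed_abs(np.array([-3.0, 0.0, 2.0]))
```

For $|x| \gg \epsilon$ the surrogate is within roughly $\epsilon^2/(2|x|)$ of the true magnitude, so a small $\epsilon$ leaves the TV term nearly unchanged while making it differentiable.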
4. $\bf Q$: How is the recursive min-max solution for Eq. 1 obtained by disentangling the building block in vanilla GNN into feature representation learning and graph diffusion underlying TV?
$\bf A$: In general, the min-max optimization consists of two alternating steps: (1) solving for $x$ (while $z$ is fixed) by minimizing the $l_2$-norm graph smoothness term (Eq. 2 in line 172), and (2) solving for $z$ (while $x$ is fixed), where we employ the majorization-minimization (MM) method (lines 178-181). To solve for $x$, following early work (such as GRAND) linking PDEs and GNNs, we note that Eq. 2 essentially describes a heat-kernel diffusion process; thus, the solution for $x$ can be achieved by the discrete GNN model. Meanwhile, the solution for $z$ boils down to a node-wise clip operation (Eq. 3 in line 180). Since the optimization of $x$ and $z$ has been decoupled into two alternating steps, we encapsulate them into a building block consisting of an FC-layer (for solving $x$) and a DC-layer (for solving $z$), and then cascade a collection of building blocks into a deep GNN model. We will make it clear in the final version.
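To make the dual step concrete, here is our own toy sketch of a clipped dual update in the spirit of Eq. 3. The function name, step size, and toy graph are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def dc_step(x, edges, z, sigma=2.0):
    # Dual update followed by an element-wise clip, in the spirit of Eq. 3:
    # the dual variable z lives on edges; a large graph gradient (typically
    # an inter-community edge) drives z to saturate at +/-1, which damps
    # further smoothing across that edge.
    grad = np.array([x[j] - x[i] for i, j in edges])
    return np.clip(z + sigma * grad, -1.0, 1.0)

# Two tight communities {0, 1} and {2, 3} joined by one bridge edge (1, 2).
x = np.array([0.0, 0.05, 1.0, 0.95])
edges = [(0, 1), (1, 2), (2, 3)]
z = dc_step(x, edges, np.zeros(len(edges)))
# The bridge edge saturates at 1.0, while intra-community entries stay small.
```

The saturation on the bridge edge is what penalizes inter-community exchange while leaving within-community heat diffusion essentially untouched.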
---
Rebuttal 2:
Title: Please provide additional feedback
Comment: Hi,
You seem to have a low score for this paper. Could you please acknowledge that you have read the rebuttal and let us know if you still have concerns or not? If not, then I would encourage you to increase your score.
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors for the detailed rebuttals to Q1, Q2, and Q4.
The authors have added more details and relevant work in the Supplementary and provided a brief summary of the pioneering contributions.
I notice that Eq. 1 is important for the coherence of the article. However, for Q3, what I am curious about is the derivation process of the duality problem in Eq. 1 (line 166), since the authors claim that Eq. 1 is the dual formulation with the min-max property for the TV distillation problem. I would like to emphasize that the dual problem is strictly mathematically defined. Hence the derivation process is necessary. The authors' response is just the motivation.
So I would keep my score.
(Sorry for putting my previous reply in the wrong position.)
---
Reply to Comment 2.1.1:
Title: Response to Reviewer HWVs
Comment: Dear Reviewer HWVs,
We are glad that we have addressed the concerns in Q1, Q2, and Q4. We also appreciate reviewer HWVs giving us another chance to further clarify the confusion in Q3.
The derivation process of the duality problem in Eq. (1) is as follows:
The Rudin-Osher-Fatemi (ROF) model solves the minimization problem:
$\mathop {\min }\limits_x \left\| x - x^0 \right\|_2^2 + \lambda \int |\nabla_{\mathcal{G}} x| \, dx$.
To derive the dual formulation, we recall that the TV-norm can be reformulated as:
$\int |\nabla_{\mathcal{G}} x| \, dx = \mathop {\max }\limits_{|z| \le 1} \int \nabla_{\mathcal{G}} x \cdot z \, dx$.
(please see for instance, Zhu, M., Wright, S. J., & Chan, T. F. (2010). Duality-based algorithms for total-variation-regularized image restoration. Computational Optimization and Applications).
With this definition, the ROF model becomes:
$\mathop {\min }\limits_x \mathop {\max }\limits_{|z| \le 1} \left\| x - x^0 \right\|_2^2 + \lambda \int \nabla_{\mathcal{G}} x \cdot z \, dx$,
where $x$ and $z$ are primal and dual variables, respectively.
The min-max theorem (Chapter VI, Proposition 2.4 in Ekeland, I., Témam, R.: Convex Analysis and Variational Problems. SIAM Classics in Applied Mathematics. SIAM, Philadelphia, 1999) allows us to interchange the min and max, to obtain
$\mathop {\max }\limits_{|z| \le 1} \mathop {\min }\limits_x \left\| x - x^0 \right\|_2^2 + \lambda \int \nabla_{\mathcal{G}} x \cdot z \, dx$.
Therefore, Eq. (1) is proved.
The derivation of dual optimization has been widely studied in the literature [1-4]. Since it is not our contribution, we will include the step-by-step derivation in the Supplementary of the final version.
[1] Carter, J.L.: Dual method for total variation-based image restoration. Report 02-13, UCLA CAM (2002)
[2] Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)
[3] Chan, T.F., Golub, G.H., Mulet, P.: A nonlinear primal-dual method for total variation based image restoration. SIAM J. Sci. Comput. 20, 1964–1977 (1999)
[4] Zhu, M., Wright, S.J. Chan, T.F.: Duality-based algorithm for total-variation-regularized image restoration, Computational Optimization and Applications, 47, 377-400 (2010)
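The key identity above — that the TV of a gradient field equals $\max_{|z| \le 1} \int \nabla_{\mathcal{G}} x \cdot z \, dx$, attained at $z = \mathrm{sign}(\nabla_{\mathcal{G}} x)$ — can also be sanity-checked numerically on a discrete gradient vector. This is our own illustrative snippet; the sample values are arbitrary:

```python
import numpy as np

g = np.array([0.3, -1.2, 0.0, 2.5])  # a sample discrete graph gradient
tv = np.abs(g).sum()                 # total variation: sum of |gradient|
z_star = np.sign(g)                  # feasible dual variable, |z| <= 1
dual_value = (g * z_star).sum()      # <g, z*> attains the TV value
```

Any feasible $z$ (each entry in $[-1, 1]$) gives $\langle g, z \rangle \le \sum_e |g_e|$, and $z^* = \mathrm{sign}(g)$ achieves equality, which is exactly the max used in the dual formulation.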
Hope we have addressed your question, thank you so much.
Best,
Authors | Summary: This work studies a novel design of GNN leveraging the connection with graph diffusion. Specifically, this work considers the discrete GNN model in view of continuous graph diffusion functional formulated as the Euler-Lagrange equation. This work proposes a new design of GNN with selective inductive bias which alleviates the over-smoothing in GNNs. Further, this work proposes a new approach for predicting the flow dynamics in the graph via a neural transport equation using the GAN model to predict the spreading flow.
Strengths: - The paper is well-written with clear motivation and justification. The writing is easy to follow with summarized questions and the paper's approach (re-think and re-design) helps to understand this work.
- The proposed architecture is novel yet simple based on theoretical justifications leveraging the connection with the graph diffusion (although this was studied widely in other papers). Further, the GAN model for flow prediction is also new, using neural transport equations.
- The experimental results show superior performance compared to the baselines in node classification, and the results in node classification demonstrate that the proposed architecture mitigates the over-smoothing issue.
Weaknesses: - A related work section could greatly help readers understand this work, as it deals with a new GNN as well as flow prediction in graphs. I assume that the related work section was omitted due to the page limit, and I recommend adding it if possible.
- The reason for the superior performance of the GAN model is not clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the purpose of using total variation? Is it simply from the inspiration explained in line 151?
- Although the extra training time is described in section S2.4, comparing it with the training time without the DC layer would show that the computational burden is not significant.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The limitations are discussed in the supplementary file section S2.4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments from this reviewer. We will incorporate all suggestions into the final version.
We will add a “Related Work” section in the Supplementary which includes the relevant GNN work in solving over-smoothing issues (suggested by Reviewer HWVs) and neuroimaging background on predicting pathology spreading flows.
In general, the performance gain of the GAN largely comes from the min-max optimization scheme. Specifically, mounting neuroscience evidence shows that the propagation of disease-specific pathologies follows the community structure of the brain's wiring diagrams [1, 2]. Following this assumption, we formulate the flow estimation problem as a TV-constrained objective function, where we minimize the graph smoothness constraint (using the graph heat kernels) while maximizing the inter-community flux ($\alpha$ in Eq. 6). After that, we devise the equivalent discrete deep model using a GAN. At a higher level, the promising result of flow estimation showcases the effectiveness of our GNN-PDE-COV framework in designing a less “black-box” GNN model for real-world machine learning problems.
[1] Deborah N Schoonhoven and others, Tau protein spreads through functionally connected neurons in Alzheimer’s disease: a combined MEG/PET study, Brain, 2023.
[2] Steward, A., Biel, D., Luan, Y., Brendel, M., Dewenter, A., Roemer, S.N., Rubinski, A., Dichgans, M., Ewers, M. and Franzmeier, N. (2022), Brain network segregation attenuates tau spreading in Alzheimer’s disease. Alzheimer's Dement., 18: e061626.
Regarding the purpose of using total variation, TV has demonstrated its effectiveness in mitigating the problem of excessive smoothing in image denoising and reconstruction. In this work, we tackle a similar challenge related to over-smoothing in graph data. Upon identifying that the root cause of over-smoothing in existing GNN models is linked to the $l_2$-norm graph smoothness term, we conjecture that, similarly to image processing, TV might be an effective solution for over-smoothing in GNN.
Regarding the extra computational cost due to the min-max schema, we have summarized the comparison of running time with other GNN models (such as GCN, GAT, and GRAND) in Table 3 of the attached 1-page pdf file.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response.
I do not find other major concerns. I believe further adding the intuitions related to GAN as well as the backgrounds (including the citations mentioned in other reviewers' comments) would strengthen the presentation of this work.
Thereby I would like to keep my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer sJUp
Comment: Thank you for your time and careful response.
Best regards,
Authors | Summary: The authors connect graph neural networks with discretizations of PDEs and connect their limiting behavior with E-L equations. By changing the regularization/smoothing term from second order to first order, the authors design a new E-L equation and its corresponding GNN. The authors then present numerical experiments to show the improvement in performance.
Strengths: - The authors tackle the problem of oversmoothing in GNNs, an essential reason GNNs cannot get as deep as other neural networks.
- The authors provide solid mathematical support for their method and great experimental performance.
- It is an original work connecting TV-minimization from old-school machine learning with the SOTA design of GNNs.
Weaknesses: - The authors should add a table outlining the steps of their algorithm. It would make it easier for readers who prefer testing the model before going through the math.
- The information in Figure 5 is too dense. It would be better if the authors could split it into two figures.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Equation 3 sounds similar to the MBO scheme, which is basically diffusion + clip. Are there any connections in between?
- Does your model transfer to other types of neural networks like convolutional neural networks?
- Does your model require more computation time? How much more compared to the original run time of GCN, GAT, and GRAND?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer is enthusiastic about this work. We will outline the steps of our algorithm in a table (likely in the Supplementary material). Also, we appreciate the comment regarding the layout of Fig. 5. We will split it into two figures in the final version.
We agree with this reviewer that our min-max optimization is very close to the classic Merriman-Bence-Osher (MBO) scheme for TV-based image filtering. Actually, our work is greatly inspired by these pioneering works.
Our current work only applies to graph data. Recalling the connection between ResNet and Neural ODEs, it is possible to devise a new CNN-based backbone using a similar framework, that is, designing new functionals in a continuous domain and developing the equivalent discrete deep models.
Compared to GCN and GAT, there is no significant increase in running time with our method, since the operations in the DC-layer are element-wise multiplication and clipping. However, our method is almost 5 times faster than GRAND, which relies on a PDE solver. We have summarized the comparison of running time (in text) in the Supplementary (lines 161-165). We will show the detailed computation time in a table (as shown in Table 3 of the 1-page pdf file in the rebuttal) in the final version. | Summary: The authors propose a framework based on the Euler-Lagrange equation to derive specialized GNNs by discretizing continuous diffusion functionals. By deriving a new GNN layer from the Total Variation functional, they manage to control the oversmoothing problem in existing GNN architectures and improve node classification performance of six existing architectures with up to 128 layers. Additionally, the authors introduce a new GAN to learn flow problems on graphs and evaluate it on longitudinal data from Alzheimer's disease.
Strengths: The prominent strengths of this paper are the thorough evaluation and the great results on node classification showing that the proposed DC layer can improve different architectures and enable deeper graph neural networks. The proposed GAN also performs very well on flow prediction in an Alzheimer's disease dataset though I cannot judge the subject-specific interpretation of those results. In the derivation of their method, the authors combine many techniques though it is in parts difficult to follow (see Weaknesses).
Weaknesses: The one major weakness of this paper is its (lack of) clarity in writing. Many ideas are insufficiently explained and notation is often inconsistent and confusing. In particular, I refer to the following:
1. Line 90: The numbers for Cora and Citeseer are not in Table 1.
2. Line 110: The graph divergence operator is never defined and no reference is given.
3. Lines 111-121: Unclear writing that mixes two different ideas. The first two sentences talk about an often-used (citations missing) regularization term and its effect, but the remainder of the paragraph is about interpreting GNN structures as neural ODEs on graphs.
4. Lines 127-130: Drawing the conclusion that you have established this mapping requires either further explanation or at least a citation.
5. Line 136: Which L2 norm does this refer to? The one in the functional in line 127?
6. Lines 158-159: The definition of $J_{\mathrm{TV}}$ is completely unclear. $x$ is triple bound (parameter of $J_{\mathrm{TV}}$, parameter of the $\min$ operator, and integrated over) and $z$ is double bound.
7. Lines 160-161: What exactly does this "trick" consist of, and in which sense is "degree" used in this sentence?
8. Equation (1): Same issue as lines 158-159.
9. Equation (3): Is the clipping elementwise?
10. Line 190: Why are the $x_i$ being clipped when Equation (3) refers to the $z_i$?
11. Line 191: How does a large degree increase $z_i$? An arbitrary number of edges to identical nodes would leave $z_i$ unchanged, wouldn't it?
12. Lines 192-193: What does it mean to "shift the diffusion pattern"? Which cases of the case distinction in Equation (3) correspond to "heat-diffusion within community" and "penalizing inter-community exchange"?
13. Line 211: How is future predictability related to the Brachistochrone problem, which asks about the shortest path?
14. Section 2.2.2: The whole section is difficult to follow because it introduces yet another problem and solution in little space. While I understand the importance of this section as an example of an alternative GNN architecture derived from the proposed method, for the overall clarity of the paper the space might be better used to clarify the main method.
15. Overall, the importance of the Brachistochrone problem is overemphasized and diverts attention from the main contribution of the paper. It is a nice example application of the E-L equation, but even in Figure 1 the analogy between the mechanics and ML cases is weak.
This paper leaves the impression of solid work, though the opaque presentation prevents me from saying so with any certainty. A thorough rewrite of Section 2 with a focus on the reader could make this a great paper.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. To what extent does GNN-PDE-COV rely on the homophily assumption? Part of the problem illustrated in Figure 1 is that the strong connection between nodes of separate classes breaks the homophily assumption of many GNN models.
2. How does the runtime of your model scale in the number of nodes $N$ and the number of edges $E$?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The appendix mentions briefly limitations of the mode of evaluation. I would be interested in a point on how feasible it is to derive specialized architectures based on the GNN-PDE-COV framework given that the optimization problem in the proposed GAN is completely different from the optimization problem in Section 2.2.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive comments from this reviewer. We answer the general question first and then clarify each specific comment.
General questions:
We appreciate the insightful question regarding the homophily assumption. In our GNN-PDE-COV framework, similar to current GNN models, we do adopt the homophily assumption, which implies that the graph embeddings of two nodes that are topologically connected are expected to be similar. As we explained in lines 191-196, we introduce the total variation (TV) on graph gradients to incorporate a high-level heuristic of community structure into the information exchange on the graph. Thus, the homophily assumption remains a fundamental driving factor in our GNN models. The inclusion of the TV term in the DC layer (Eq. 3) provides a flexible mechanism to adaptively control the information exchange based on the global topology of the network community. In the 1-page PDF attached with this rebuttal, we have shown the new benchmark results on six heterophilic graph datasets (Texas, Wisconsin, Actor, Squirrel, Cornell, and Chameleon) in Table 2. The results indicate that our TV-based GNN model also performs well on heterophilic data.
Regarding the running time (summarized in Table 3 of the attached 1-page PDF), the driving factor is the number of nodes $N$. This is because the majority of operations in our GNN model, as well as in other GNN methods, are primarily applied to nodes rather than edges.
Specific comments in “Weaknesses” session:
1. The accuracy numbers for Cora and Citeseer are in Table 1 (128 layers, last column and last row of each dataset). Since the numbers in Table 1 are densely packed, this reviewer might have missed them. We will highlight them in the final version.
2. We will explain the graph divergence operator in the final version.
3. We will smooth out the write-up to transition from the GNN regularizer to the PDE. Thanks for this constructive comment.
4. We will add a reference since it is not our major contribution. Thanks.
5. Correct. We will make it clear in the final version.
6. We apologize for this notation issue. We will fix it in the final version by replacing “$J_{TV}(x,z)$” with “$\mathop {\min }\limits_{x} \mathop {\max}\limits_{z} J_{TV}(x,z)$, where $J_{TV}=\ldots$”.
7. We will include a reference to explain the approximation used to handle the absolute value operator on $z$ during optimization. Due to the page limit, this information will be provided in the Supplementary material.
8. We have explained this in #6.
9. Correct, the clip operation is element-wise.
10. We apologize for the confusion. Actually, $z$ is the intermediate result of $x$. Precisely speaking, we update $z$ based on $x$ and then apply clip operation on $z$. After that, we replace $x$ with the updated $z$ for graph diffusion. We will make it clear in the final version.
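As a minimal sketch of this update order (our own illustration with made-up names `dc_layer_step`, `W`, `beta`, and `step`; not the paper's actual DC-layer code):

```python
import numpy as np

def dc_layer_step(x, W, beta=1.0, step=0.5):
    # Sketch of the order described above: z is computed from x via one
    # diffusion move, clipped element-wise, and the clipped z then
    # replaces x for the next graph-diffusion step.
    z = x + step * (W @ x)        # update z based on x
    z = np.clip(z, -beta, beta)   # element-wise clip on z
    return z                      # z becomes the new x
```

With `W` set to zero this reduces to a plain element-wise clip of `x`, which makes the ordering (update, then clip, then replace) easy to check in isolation.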
11. Following the homophily assumption that nodes within the same community share group-specific characteristics in their graph embeddings, inter-community links are expected to have large graph gradients, which leads to large values of $z$.
12. We have specifically discussed the TV-based diffusion patterns in Supplement S2.2. Please also refer to Fig. S3. Specifically, we find that (1) more than 70% of nodes are actually associated with inter-class links, which confirms the hypothesis of over-smoothing in Fig. 1 of our manuscript; and (2) our novel GNN models have the ability to learn feature representations that better preserve the discriminative power for node classification (as indicated by the distribution of node-to-node similarity shifting towards the sign of anti-correlation).
13. We used the shortest-time path in the Brachistochrone problem as a motivating example. For each learning problem, the underlying variational question may vary. For example, the FlowNet (in 2.2.2) seeks a set of max-flows that underlie the network community structure.
14. Due to the page limit, we moved the background of the neuroimaging application to the Supplement. We will make it clear in the final version (by adding more background details to the Supplementary).
15. We would like to emphasize that the Brachistochrone problem in Fig. 1 is only used to help readers understand the motivation of linking GNN to a calculus of variation problem. Our major contribution is to address the over-smoothing issue in GNN from the perspective of designing application-specific graph diffusion patterns.
Thank you for your valuable comments, we will incorporate all the comments and suggestions in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. Two more points from my side:
1. As the text in line 90 refers to state-of-the-art results, I only checked the red numbers in the table. It should be clarified in the text that you refer to the 128-layer numbers.
---
4. Which paper will you cite here?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer paGB
Comment: Thank you for your careful reply.
For 1:
OK, we will make it clear in the final version.
For 4:
We apologize for the short answer. In the final version, we will add one sentence in line 130 as:
“…we have established a mapping between the mechanics of GNN models and the functional of graph diffusion patterns in a continuous domain. Note, similar works can be found in [1,2]. ”
[1] GRAND: Graph neural diffusion. In: International Conference on Machine Learning (ICML), 2021.
[2] PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations, Advances in Neural Information Processing Systems (NeurIPS), 2021
---
Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful feedback.
We are thrilled that the reviewers consider our paper original (Reviewer CRUA), creative (Reviewer HWVs), novel yet simple (Reviewers VF1D, sJUp, HWVs), clear (Reviewers sJUp, HWVs), great (Reviewer CRUA), and very well written (Reviewers paGB, VF1D).
We are glad they found that our study provides solid mathematical support (Reviewer CRUA) and a rational theory (Reviewer HWVs), that our analyses are thorough (Reviewer paGB) and well-defined (Reviewer HWVs), and that the superior experimental results (Reviewers paGB, CRUA, sJUp) strongly support the claims.
We agree with Reviewer HWVs, who recognizes that detailed knowledge of the Euler-Lagrange (EL) equation is not familiar to every graph neural network researcher. We were constrained by space; therefore, we will add more details and relevant work to the Supplementary of the final version.
One primary concern is the limited evaluation of the experiments regarding comparison methods and datasets (Reviewer VF1D). We have added three comparison methods (mentioned by Reviewer VF1D), six benchmark datasets of heterophilic graphs, and two traffic-flow benchmark datasets. All the results are provided in the 1-page PDF.
$ \bf Table 1$ (for Reviewer VF1D): The performance of different data splitting schemas.
$ \bf Table 2$ (for Reviewer VF1D): Node classification results on heterophilic graphs (including six datasets) for four methods.
$ \bf Table 3$ (for Reviewers VF1D, CRUA, paGB, sJUp): The running time on different methods.
$ \bf Fig 1$ (for Reviewers VF1D): Traffic flow prediction accuracy in terms of mean absolute error (MAE) in PEMS-BAY and METR-LA benchmark datasets.
$ \bf Fig 2$ (for Reviewers HWVs): The t-SNE visualization of node feature representations.
$ \bf Fig 3$ (for Reviewers HWVs): The evolution of $l_2$-norm graph smoothness term and $l_1$-norm TV term as the number of GNN layers increases.
We have answered all the specific questions for every reviewer as below and will incorporate all feedback in the final version.
Pdf: /pdf/a56018cde933994140fe8e4571ac346f038ea625.pdf
---
Review (NeurIPS 2023):
Summary: The authors develop a framework for linking discrete (message passing) GNNs to continuous graph diffusion functionals using Euler-Lagrange equations of heat kernels. Via this framework, they analyze the causes of oversmoothing in current GNNs. By noting that the main cause of oversmoothing is the minimization of the quadratic graph smoothness term in the diffusion equation, their main contribution is to replace this by a Total Variation (TV) term (as used in image reconstruction, restoration, etc.), yielding a new objective function. Further, since the $l_1$ norm term of TV is non-differentiable at 0, they develop a dual min-max approach for solving this objective function iteratively by first minimizing for X and then maximizing for Z. This solution technique then results in a new GNN architecture with a separate feature learning layer (Fully Connected layer) for $l^{th}$-layer embeddings $X^l$, followed by a diffusion clip layer for generating the Z terms. They then apply their method to a specific application of predicting spreading flow dynamics. Experimental results are presented on 3 benchmark citation datasets and 6 GNN models, including vanilla GCN and one other diffusion GCN (GRAND).
TL;DR takeaway of the problem setup/motivation is how to preserve community/contextual embeddings of adjacent (dissimilar) boundary nodes in different labeled communities by preventing oversmoothing through localized message passing. The proposed solution is via the Total Variation parameter Z that should penalize inter community information diffusion.
Strengths: 1. Avoiding local oversmoothing by replacing the quadratic graph smoothness term in the diffusion equation with an $l_1$-norm regularizer, the TV term. This seems to be a novel application of TV to diffusion GNNs.
2. Formal derivation of an iterative dual min-max method for solving the non-differentiable objective function with the TV term. While this type of iterative technique has gained increasing popularity starting with ADMM, the detailed derivation is a good contribution.
3. For small layers (2 layers), their method shows performance improvement over the other baselines. This might be due to ameliorating the oversmoothing process due to the TV Z parameter.
Weaknesses: 1. I am surprised the authors do not have references to classic papers that analyze the root causes of oversmoothing in GNNs, for example, the DropEdge paper [1] “Tackling Over-Smoothing for General Graph Convolutional Networks”, W. Huang∗, Yu Rong∗ et al., IEEE TPAMI. This paper analyzes oversmoothing via a spectral analysis of the underlying adjacency matrix. It would be interesting to see if there is a connection between the $\beta$ parameter and degree connectivity of equation (3) and the results from [1].
2. The baseline model comparisons are too limited - vanilla GCN, GAT, one diffusion GNN (GRAND).
a. There have been several papers that either explicitly focus on deep sampling while mitigating oversmoothing e.g., [2] “Decoupling the Depth and Scope of Graph Neural Networks”, H. Zeng, M. Zhang et al. Neurips 21.
b. They test their model on citation networks which are considered highly homophilic. However, dissimilar nodes (as they illustrate in fig. 1) that would ideally prove their oversmoothing claims are adjacent to each other primarily in heterophilic networks. It would be useful if they can show their results on heterophilic graphs.
There needs to be an extended comparison with GNN models such as [2] as well as others that look at heterophilic graphs e.g., $H_2GCN$ [3] “Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs,” J. Zhu, Y. Yan et al, Arxiv. As they claim their method prevents over-smoothing, it would be very valuable to see comparative results on heterophilic graphs and if they can show improvements over other models.
3. Experimental Results: [3] $H_2GCN$, [4] GeomGCN, and [5] GPRGNN seem to show better results. For instance, the accuracy of this paper on Pubmed is 80%, while H2GCN, GeomGCN, and GPRGNN show 90% accuracy. Similar results hold for Cora and Citeseer.
[4] Geom-GCN: “Geometric Graph Convolutional Networks. In International Conference on Learning Representations”, Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. ICLR 2020.
[5] GPRGNN: “Adaptive Universal Generalized PageRank Graph Neural Network”, ICLR2021.
4. There may be some typos in experimental results - published results are not consistent with published tabular data in the previous works. E.g., GCNII results for Pubmed are 90% in Table 1 in the following NeurIPS 22 paper: https://papers.nips.cc/paper_files/paper/2022/file/75c45fca2aa416ada062b26cc4fb7641-Paper-Conference.pdf but Table 1 in this paper only shows around 79% for GCNII on Pubmed. Similarly, please check Table 5 in the following NeurIPS 20 paper: https://arxiv.org/pdf/2006.11468.pdf for other discrepancies. I suggest rerunning experiments and checking for typos or otherwise understanding why the numbers are different. In general, the margin of improvement is around 2-4 points over vanilla GCNs for $H_2GCN$ and GPRGNN, while the margin is also in that range for this paper, even though they show lower scores for vanilla GCN and GAT. It would definitely improve this paper if you added these papers as baselines for direct comparison.
5. The diffusion flow application that is used in the paper to validate the diffusion model seems to be a very niche application. It would greatly add value to the paper if you validated the model on a realistic flow model such as a traffic flow problem.
6. The authors note that their iterative solution technique is not computationally intensive (appendix). However, the method was tried only on 4 limited datasets that are all citation networks. In general, for any iterative solution process the tradeoff between convergence time and accuracy requires more sophisticated and detailed evaluation. Can you improve accuracy on Pubmed (see above) at the cost of increased computation time?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please consider different graph setups to validate your oversmoothing diffusion models such as heterophilic graphs in which dissimilar nodes are close by and similar nodes can be far away.
More extensive evaluation of tradeoffs between convergence time and accuracy would be helpful, especially to improve accuracy on the 4 datasets as compared to the newer baselines suggested above.
Consider a more realistic diffusion flow application to validate your diffusion model such as a traffic flow problem.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have addressed the limitations of this work in the appendix, primarily the lack of diversity in evaluation datasets. In general, as pointed earlier, paper could be improved with more extensive comparisons on diverse datasets and models.
The authors have stated that the proposed topic doesn't have any negative societal impacts. To the contrary, they state that "From the application perspective, the new deep model for uncovering the in-vivo propagation flows has great potential to establish new underpinning of disease progression and disentangle the heterogeneity of diverse neurodegeneration trajectories." While technically correct, this comment in general applies to any work on sensitive datasets in the medical field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We are sincerely thankful for all constructive comments provided by this reviewer. We are pleased that the reviewer recognizes the merits of our new GNN technique, as the major concerns are centered on more extensive comparisons on diverse datasets and models. We have prepared all necessary results in the attached pdf file, which includes comparisons with more GNN models (such as ShaDowGNN [2], $H_2$GCN[3], and GPRGNN [5]) on six new datasets and two new benchmark experiments on traffic flow data. These results are ready to be included in the final version.
$\textbf {(1)}$ Regarding the relatively low accuracy on Cora, Citeseer, and PubMed shown in our paper compared to other published works (weaknesses #3, 4), we would like to emphasize that such discrepancy is due to the use of different data splitting strategies. In Table 1 of the 1-page PDF, we show the classification accuracies of ShaDowGNN, $H_2$GCN, and GPRGNN under a 3:1:1 split (i.e., we randomly split the nodes of each class into 60%, 20%, and 20% for training, validation, and testing, and measure the performance of the models on the test sets over 10 random splits), often used for fully-supervised learning [4] (which you mentioned), and a 1:25:50 split (with 20 nodes per class for training, 500 nodes for validation, and 1000 nodes for testing), often used for semi-supervised learning [11] (as shown in the Supplementary file). The relatively higher accuracy referenced by the reviewer pertains to models trained and tested on the 3:1:1 split, where the training data significantly outnumbers the validation and testing sets. In contrast, the results presented in the manuscript are based on the more challenging 1:25:50 split of the dataset.
[11] Revisiting semi-supervised learning with graph embeddings, ICML, 2016
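For concreteness, the 3:1:1 schema above can be sketched as follows (our own illustrative code; the helper name `random_split_60_20_20` is hypothetical, not from the paper):

```python
import numpy as np

def random_split_60_20_20(labels, rng):
    # Illustrative sketch of the 3:1:1 schema: per class, 60% of nodes go
    # to training, 20% to validation, and the remainder to testing.
    train, val, test = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        n_tr, n_va = int(0.6 * len(idx)), int(0.2 * len(idx))
        train += idx[:n_tr].tolist()
        val += idx[n_tr:n_tr + n_va].tolist()
        test += idx[n_tr + n_va:].tolist()
    return train, val, test
```

By contrast, the 1:25:50 schema fixes only 20 training nodes per class (plus 500 validation and 1000 test nodes), which leaves far less supervision and explains the lower absolute accuracies.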
$\textbf {(2)}$ Regarding the additional evaluations on heterophilic graphs and more baseline models (weaknesses #2, 6), we have included the Texas, Wisconsin, Actor, Squirrel, Chameleon, and Cornell datasets, where the homophily ratios are less than 0.3. In Table 2 of the 1-page PDF, we have shown the node classification results as the number of GNN layers increases from 16 to 128 (we will add the experimental results for all layer counts (2 to 128) on all methods in the final version). It is clear that our TV-based GNN model (GCNII+, obtained by adding the DC layer on top of the GCNII backbone) outperforms $H_2$GCN, GPRGNN, and ShaDowGNN in all testing scenarios. Note, we observed a consistent 1-3% enhancement over GCNII, which is consistent with the benchmark results we have presented in Table 1 of the manuscript. Also, we were unable to run Geom-GCN in our GPU environment (with a newer version of the deep graph library) since Geom-GCN reportedly requires CUDA 9.2. Given the short turnaround time, we did not include Geom-GCN in Table 2 of the attached 1-page PDF. We have released all the code and data on GitHub (please use the same anonymous link shown in the manuscript).
$\textbf {(3)}$ Regarding the validation of the flow experiment on traffic data (weakness #5), we have evaluated our FlowNet on two benchmark traffic flow datasets: METR-LA (arXiv:1707.01926) and PEMS-BAY (arXiv:2108.09091). The MAE of our FlowNet is 3.411 on METR-LA (3.229 by the best model) and 1.814 on PEMS-BAY (1.790 by the best model), respectively. Compared with the results published on the paperswithcode website (please refer to Fig. 1 in the 1-page PDF), our proposed method achieves competitive results relative to the current state-of-the-art models. Thank you for this constructive suggestion; we will add these experimental results to the Supplementary in the final version.
$\textbf {(4)}$ Regarding the methodology comparison with DropEdge [1] (weakness #1), we appreciate this valuable information. We will include this work in the final version as part of the related work. Since the journal paper this reviewer is referring to has not yet been published in TPAMI, we will cite their conference paper (DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR 2020). Although both works aim to address the over-smoothing issue in GNNs, the two approaches are completely different. DropEdge focuses on preventing over-smoothing by reducing the number of network edges, which might undermine the graph topology. Our work provides a top-down mathematical framework to regulate the message exchange through a min-max optimization schema. From a methodological perspective, we would like to argue that there is NO DIRECT connection between the $\beta$ parameter in Eq. 3 and the DropEdge operation in their work.
$\textbf {(5)}$ Regarding the computational cost (weakness #6), we have summarized the running time for various GNN models in Table 3 of 1-page PDF.
$\textbf {(6)}$ Regarding the concern on societal impact, we mainly use neuroimaging data and clinical outcomes from public databases. Note there are no biological/chemical resources included in this paper. The human subject information of all imaging and demographic data has been completely removed from the public databases.
---
Rebuttal Comment 1.1:
Title: Comments
Comment: Thank the authors for the detailed rebuttals about Q1, Q2, and Q4.
The authors have added more details and relevant work in the Supplementary and provided a brief summary of the pioneering contributions.
I notice that Eq. 1 is important for the coherence of the article. However, for Q3, what I am curious about is the derivation process of the duality problem in Eq. 1 (line 166), since the authors claim that Eq. 1 is the dual formulation with the min-max property for the TV distillation problem. I would like to emphasize that the dual problem has a strict and clear mathematical definition; hence the derivation process is necessary. The authors' response is just the motivation.
So I would keep my score.
---
Rebuttal Comment 1.2:
Title: Requires deeper analysis than in the 1-page addendum
Comment: I sincerely appreciate the additional information provided by the authors in their rebuttal. They have definitely put in major efforts in response to my and other reviewers' comments. However, the nature of the results provided in the 1-page addendum raises several deeper questions, as outlined below.
Papers along the lines of [1][2] (DropEdge, ShaDowGNN, etc.) attempt to provide some theoretical basis for their oversmoothing claims in terms of analyzing variance in embeddings. Eq. (3) in this paper is quite crucial in terms of intuition for preventing oversmoothing; however, unlike [1][2] and similar others, results based on Eq. (3) are primarily heuristic and intuitive. Since reduction of oversmoothing is the major claim in this paper, an important question for readers will be whether the TV method adopted in this paper leading to (3) can be analyzed in a similar manner to provide some theoretical basis for the reduction in oversmoothing. The new results presented in the 1-page addendum show a rather remarkable outperformance in node classification on heterophilic datasets compared to ${\bf every}$ other baseline, which makes some theoretical basis for oversmoothing even more needed. The new table of results in the 1-page addendum is appreciated but too succinct in terms of experimental details, and it raises several deep questions on how this outperformance is achieved (not the authors' fault, since the rebuttal is page-limited, but too important since it concerns a completely independent area of heterophilic graphs; it would have been great if the authors had given this full treatment with the proper analysis in the expanded version, not just an additional table). An important exceptional result like this should be analyzed comprehensively with explanations, ablation studies, and comparative analysis; the current submission does not provide the means to do so.
I have a similar comment for the new graph on traffic flow prediction. What was the input setup? Were there any simplifying assumptions or specific engineering of meta-parameters? What are the limitations of using the proposed flow-diffusion method for such an important problem space? How does the TV method work so well for traffic flow? I think the community will be better served if the paper is resubmitted with the proper treatment of these important applications.
---
Reply to Comment 1.2.1:
Title: Response to Reviewer VF1D
Comment: Dear Reviewer VF1D,
Thank you for the follow-up comments.
(1) Existing works such as DropEdge [1] and ShaDowGNN [2] have achieved great success in addressing the over-smoothing issue using graph theory. However, we study the over-smoothing issue from a completely different perspective by formulating graph learning as an ill-posed optimization problem. In general, the energy function is defined to transform the initial graph embeddings (via a graph diffusion process) to the extent that the diffused graph embeddings reach the largest correlation with the outcomes (labels), constrained by a pre-defined regularization term (such as the $l_2$-norm graph smoothness term). Following the notion of total variation (TV), we introduce a selective gating mechanism that adaptively controls the smoothness based on a learnable threshold (Eq. 3). The step-by-step derivation of Eq. 3 in the framework of variational calculus is detailed in Supplementary S1.1 (please check lines 16-31). Since our work is built upon the well-studied framework of variational calculus, we paid more attention to interpreting the insights of the selective smoothing mechanism from the TV term (lines 191-197 and lines 235-243 in the main manuscript), rather than the theoretical basis for the reduction in over-smoothing.
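Schematically (in our own notation, as a sketch rather than the manuscript's exact Eq. 3), the difference between the two regularizers on graph gradients can be written as:

```latex
E_{\ell_2}(x) = \sum_{(i,j) \in \mathcal{E}} w_{ij}\,(x_i - x_j)^2
\qquad \text{vs.} \qquad
E_{TV}(x) = \sum_{(i,j) \in \mathcal{E}} w_{ij}\,\lvert x_i - x_j \rvert
```

Minimizing the quadratic $\ell_2$ energy penalizes every gradient and drives all connected embeddings together (over-smoothing), while the $\ell_1$-type TV term tolerates a few large jumps across inter-community links and smooths mainly within communities.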
(2) The calculus of variations and partial differential equations were extensively employed in the field of image processing several decades ago. It is important to note that we are not reinventing the wheel of existing theoretical concepts. A majority of the theoretical proofs related to TV-based optimization can be located in the textbook "Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations" by Aubert and Kornprobst, published by Springer. However, what makes our work unique is our pilot application of the variational calculus framework to graph neural networks. As supported by the state-of-the-art performance (on most public graph datasets) in comparison to existing benchmark GNN models, the integration of these mathematical principles into the realm of graph-based learning could be a new recipe for designing novel application-specific GNNs.
(3) We appreciate Reviewer VF1D giving us another chance to explain the important details that we could not put in the 1-page addendum. We were actually planning to show the improvement of each GNN model in Table 2 of the addendum after plugging in the DC layer (marked '+'). The results are summarized as follows.
Texas (a) – Wisconsin (b) – Actor (c) – Squirrel (d) – Chameleon (e) – Cornell (f)
GPRGNN+: 0.5762 ($a$) – 0.5375 ($b$) – 0.2873 ($c$) – 0.2936 ($d$) – 0.4417 ($e$) – 0.4746 ($f$)
H2GCN+: 0.2881 ($a$) – 0.2584 ($b$) – 0.1954 ($c$) – 0.2745 ($d$) – 0.2237 ($e$) – 0.2162 ($f$)
ShaDowGNN+: 0.2140 ($a$) – 0.2367 ($b$) – 0.2112 ($c$) – 0.2010 ($d$) – 0.2303 ($e$) – 0.2328 ($f$)
Note, we only conducted the experiments with 128 layers for the involved methods; we will add all the experiments on different network layers in the final version.
We further conducted an ablation study on GCNII [1] without the DC layer; the performance using 128 layers is as follows.
GCNII: 0.7390 ($a$) – 0.7575 ($b$) – 0.3393 ($c$) – 0.3017 ($d$) – 0.4513 ($e$) – 0.7119 ($f$)
[1] Simple and Deep Graph Convolutional Network, ICML 2020
It is worth noting that we have released the code and data on the anonymous GitHub. We are committed to showing these new results in the Supplementary of the final version.
(4) We have a clear neuroscience motivation for characterizing the toxic protein flows from neuroimages. As mounting evidence shows that the spreading of disease pathology underlines the topology of the wiring diagram in the brain (Franzmeier et al. “Functional Brain Architecture is Associated with the Rate of Tau accumulation in Alzheimer’s Disease”, Nature Communication, 2020) to the extent that a large portion of spreading flows occur between strongly interconnected nodes such as hubs. Since the topologically critical nodes (such as hubs) often have high degree of connections, it is reasonable to use TV-based regularization term to avoid vanishing flows along strong links by selectively suppressing the potential over-smoothing of embedding vectors between nodes with dense connections.
Due to character restrictions, please refer to the next page's response.
---
Rebuttal 2:
Title: Please provide additional feedback
Comment: Hi,
You seem to have the lowest score for this paper. Could you please acknowledge that you have read the rebuttal and let us know if you still have concerns or not? If not, then I would encourage you to increase your score. | null | null | null | null | null | null |
---
Blurred-Dilated Method for Adversarial Attacks | Accept (poster)
Review:
Summary: The authors propose the Blurred-Dilated method (BD), which applies BlurPools and dilated convolutions to the source model when crafting an adversarial attack, to increase the transferability of transfer-based attacks. The method replaces MaxPool layers with MaxBlurPool, strided Conv with ConvBlurPool, and AveragePool with BlurPool, in both forward and backward computation. The authors conduct experiments and find that BD can outperform multiple SOTA methods. On top of that, combining BD with existing black-box attacks can further improve the attack success rate.
Strengths: 1. The paper is written very clearly, with a detailed introduction to the preliminary works. The actual modifications to the models are reported in detail in tables.
2. A large variety of experiments are covered, such as comparison with SOTA, success rates against robustly trained models, ablation study, hyper-parameters, etc. A lot of potential concerns can be addressed with the reported results.
3. The proposed method is simple yet effective. Replacing model layers does not introduce extra computation time compared with methods like GhostNet.
Weaknesses: #### 1. Datasets with lower resolution are not tested with
In the paper, all the experiments are performed on ImageNet. Since BD modifies the downsampling operation, it can behave quite differently at different image resolutions. ImageNet has a relatively high resolution (3x299x299), enabling dilated convolution to be applied without much issue. However, I wonder if BD can still generate transferable attacks for datasets with lower resolution, such as CIFAR-10 and CIFAR-100.
#### 2. The proposed method is specific to some model components
This limitation is also brought up by the authors, "*Our proposed model modification is based on domain knowledge and empiricism.*" Besides, there is no general guideline on the "early stop" strategy on how many downsampling layers to be removed. Summing up these points, it can be difficult to extend the proposed methods to new models.
For the same reason, claims like line 176 might be too broad. "*BD (Blurred-Dilated method) is a universal technique that can be easily implemented in any DNN.*" As BD requires specific CNN layers like Convolution with stride and pooling layers to work, it cannot be directly applied to non-CNN models like vision transformers and MLP mixers. This is also another limitation that is not discussed in the paper.
#### 3. Minor formatting recommendations
- I recommend inserting a space between the square bracket for citation and the previous word (e.g. word [1] instead of word[1])
- The formatting of the norm is inconsistent in the main text (l$\infty$) and the appendix ($l_\infty$).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. If we consider replacing max pooling with BlurPool, can we interpret it as replacing a non-linear function with a linear function? Thus, does the Linearity Hypothesis in LinBP apply to BD as well?
2. I suppose the replacement of model layers takes place at test time (it would be better to clarify this in the paper), when the source models are already pre-trained, similar to SGM and LinBP. However, unlike SGM and LinBP, BD also modifies forward propagation. As the architecture is changed, I wonder if we can retrain/finetune the modified model and use the new one as the source model instead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors admit Weakness #2 in the paper. However, can the authors advise some guidelines on the 'early stop' strategy and what is the proportion of the low-level features to be discarded/retained?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Q1. If we consider replacing max pooling with BlurPool, can we interpret it as replacing a non-linear function with a linear function? Thus, does the Linearity Hypothesis in LinBP apply to BD as well?
Thank you for providing a new perspective to explain the effectiveness of our method. However, we actually replace max pooling with MaxBlurPool, which consists of max-pooling with a stride of 1 and blur-pooling (Line 161 of our main paper). Therefore, MaxBlurPool is still a non-linear function.
Furthermore, LinBP proposes that the linear structure in the network is helpful for attack transferability, and removes the ReLU function in the network. Maybe this is because the ReLU function causes information loss when $x<0$?
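As a concrete single-channel sketch of the MaxBlurPool described above (our own NumPy illustration under simplifying assumptions, such as a 2x2 max window and a normalized 3x3 binomial blur kernel; not the authors' implementation):

```python
import numpy as np

def max_blur_pool(x, stride=2):
    # Stride-1 max-pooling over 2x2 windows, then a normalized 3x3
    # binomial blur applied with the given stride (the anti-aliased
    # downsampling of BlurPool). The max step keeps the operator non-linear.
    m = np.maximum.reduce([x[:-1, :-1], x[:-1, 1:], x[1:, :-1], x[1:, 1:]])
    k1 = np.array([1.0, 2.0, 1.0])
    k = np.outer(k1, k1) / 16.0
    h, w = m.shape
    return np.array([
        [(m[i:i + 3, j:j + 3] * k).sum() for j in range(0, w - 2, stride)]
        for i in range(0, h - 2, stride)
    ])
```

Because the first stage still takes a maximum over each window, the composition is not a linear map, which is the point made in the reply above.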
Q2. Can we retrain/finetune the modified model and uses the new one as the source model instead?
Sorry, we didn't explain this clearly. In fact, we fine-tuned the model after modifying it with the BD method. Otherwise, the accuracy of the modified model would decrease, degrading the attack performance as well. We will make this clearer in our final version.
Q3. Experiments on lower-resolution datasets.
We experimented with lower-resolution datasets: CIFAR-10 and CIFAR-100. The experimental results are shown in Table R1 of the uploaded PDF file. The original source models for CIFAR-10 and CIFAR-100 are ResNet-20 and ResNet-56, respectively. We can see that our method can still be applied to lower-resolution datasets to generate transferable adversarial samples. Besides, we still consistently outperform the baseline method by about 4% on average.
Q4. Can the authors advise some guidelines on the 'early stop' strategy and what is the proportion of the low-level features to be discarded/retained?
(1) Through experiments, we found that retaining half of the downsampling operations in the model can be a guideline.
(2) The input size of ResNet is $224\times224$, and a $7\times7$ feature map is obtained finally after 5 downsampling operations. Therefore, the proportion of the low-level features to be discarded/retained for the original ResNet is $(224^2-7^2)/7^2$. In contrast, in our BD ResNet, we only keep 2 or 3 downsampling operations, obtaining a $4^2$ times larger feature map ($28\times28$) finally. Therefore, the proportion of the low-level features to be discarded/retained is $(224^2-28^2)/28^2$.
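As a quick sanity check of this arithmetic (a standalone Python snippet; the helper name is our own):

```python
def discard_retain_ratio(input_size: int, output_size: int) -> float:
    """Ratio of discarded to retained spatial positions for a square input."""
    return (input_size ** 2 - output_size ** 2) / output_size ** 2

# Original ResNet: 224x224 input -> 7x7 final feature map (5 downsamplings).
print(discard_retain_ratio(224, 7))   # 1023.0
# BD ResNet: 28x28 final feature map (only 2-3 downsamplings kept).
print(discard_retain_ratio(224, 28))  # 63.0
```

So the modified model discards roughly 16x fewer low-level features per retained feature than the original ResNet.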
Q5. Fixing the claims like line 176.
Our claim is not accurate; we will change it to "BD (Blurred-Dilated method) is a general technique that can be easily implemented in popular CNNs."
Q6. Minor formatting recommendations.
Thanks for your careful review. We will fix this typo and thoroughly proofread the paper again.
---
Rebuttal Comment 1.1:
Title: Thanks for the Rebuttal
Comment: Weakness #1: The experiments in Table R1 resolve my concerns. The authors successfully show that their method (BD) can also work well on datasets with lower resolution such as CIFAR-10 and CIFAR-100.
Weakness #2: This is undeniably the limitation of the proposed method. Nevertheless, in my opinion, this alone is insufficient to lead to a "reject". Some of the broad claims need to be fixed, which is acknowledged and promised by the authors.
Questions: Q2, 5, 6 show that there is room for improvement in the paper in clarity and formatting. The authors also promised to improve the clarity in these specific areas.
For the reasons above, I will keep my rating. However, I would like to point out that the design choice of the number of features to be discarded/retained seems to be obtained mainly from experiments. The motivation/justification can be made stronger if the authors also include a more detailed discussion similar to the rebuttal to reviewer hwet and their new findings from CIFAR-10 and CIFAR-100 in the paper.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that our response addresses your concerns. We sincerely appreciate the positive comments on the impact and contribution of this work. We will revise our manuscript to correct the formatting issues and make it more clear. We will also include a more detailed discussion to make our motivation and justification stronger based on your suggestions. We are grateful for your careful consideration of this work and insightful comments!
Title: Thank you for the comment! | Summary: In this work, the authors generate adversarial examples to attack other models by adding BlurPools and dilated convolutions to the source model. The results show that augmenting the model with BlurPools and dilated convolutions can generate more transferable adversarial examples.
Strengths: The work is well-written and easy to follow. The work performs comprehensive experiments and validates the effectiveness of BlurPools and dilated convolutions regarding improving the transferability of adversarial examples.
Weaknesses: Although the author has demonstrated the effectiveness of the proposed method through experiments, my main point is still that this work lacks sufficient novelty. In my opinion, adding a blur layer inside the network and methods based on input augmentation (such as padding and resizing) are not fundamentally different. Moreover, the former method actually has more model dependence and cannot be used as plug-and-play as the latter.
Additionally, some other concerns include:
1. "They mostly focus on backpropagation while neglecting forward propagation." This limitation is not entirely correct. According to the author's classification, input augmentation can also be considered an improvement from forward propagation. I believe this statement should be further explained in more detail.
2. "It is crucial to retain as many comprehensive features as possible." This statement may not be entirely rigorous. The best transferability comes from category-related features, and background features sometimes provide false information.
3. In terms of experimental results, there is still some gap compared to some existing sota methods.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. The novelty of the proposed method.
Please see Q2 of Reviewer hwet.
Q2. "In my opinion, adding a blur layer inside the network and methods based on input augmentation (such as padding and resizing) are not fundamentally different."
(1) Our method can help to preserve more low-level features during forward propagation of the modified source model. Therefore, employing such a source model can guide adversarial samples to more thoroughly and precisely destroy the low-level features of a clean image. Such adversarial samples are more transferable, since different models all use low-level features to extract high-level semantics and then make predictions (Please see Q1 of Reviewer hwet).
(2) In contrast, input augmentation improves adversarial transferability by offering less noisy gradients to escape from the poor local optima of a source model. Therefore, our method and attacks based on input augmentation are fundamentally different. Besides, we can combine our method with input-augmentation based attacks to further improve the performance.
Q3. Explaining "They mostly focus on backpropagation while neglecting forward propagation."
We are sorry about the confusion. "They" here refers to the attack methods based on model modification. Many existing transfer attacks based on model modification mainly focus on modifying the back propagation.
Q4. Fixing "It is crucial to retain as many comprehensive features as possible.''
We will change it to "It is crucial to retain as many low-level features as possible.'' Please see Q1 of Reviewer hwet for explanations.
Q5. Comparing with existing SOTA methods in terms of the experimental results.
(1) Our work is focused on model-modification based attacks. LinBP and ILA family are SOTA methods in this category, and we have compared with them.
(2) Perhaps the SOTA method you mentioned is based on other mechanisms such as input augmentation. To address your concerns, we compared our method with the state-of-the-art input-augmentation based method SSA [R6] with a maximum perturbation of 16, using MI as the attack method. Table R2 of the uploaded PDF file shows the attack results. We can see that our method can achieve an average attack success rate of 95.3\%, outperforming SSA by 1.6\%. Moreover, our method can be combined with SSA to further improve the attack success rates of SSA by 4.7\% on average.
[R6] Long, Y., Zhang, Q., Zeng, B., Gao, L., Liu, X., Zhang, J., & Song, J. (2022, October). Frequency domain model augmentation for adversarial attack. In European Conference on Computer Vision (pp. 549-566). Cham: Springer Nature Switzerland.
---
Rebuttal 2:
Title: Follow-up response
Comment: Dear Reviewer,
Considering that the discussion phase is nearing its end, we are looking forward to your further feedback on our latest response. Do our responses fully address your concerns? Do you have any other comments? We would be happy to discuss with you in more detail. We greatly appreciate your time and feedback.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks a lot for your response. Most of my concerns have been addressed. However, I still have a slight question regarding the comparison with state-of-the-art (SOTA) attacks. I believe that the ultimate goal of this work should be to achieve state-of-the-art transferability results, considering that the concept of __low-level features improving transferability__ has been mentioned to some extent in other works (ILA etc.). If we follow the authors' statement and compare it with model-modification based attacks like [1], what would be the advantages of this work?
[1] Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective.
---
Reply to Comment 2.1.1:
Comment: Q1. Comparing with DRA [1].
(1) Due to the limited time left, we can only conduct some preliminary experiments to compare our method with DRA [1], when using ResNet-50 as the source model and MI-FGSM as the attack algorithm. The attack success rates of both methods on ImageNet are shown below. We can see that our method can still outperform DRA [1].
| ε | Target Model | DRA | BD (Ours) |
|:-----:|:---------:|:-----:|:-----:|
| 16 | RN50 | 99.6% | **99.9%** |
| 16 | PNAS | 93.8% | **96.4%** |
| 8 | RN50 | 95.9% | **99.8%** |
| 8 | PNAS | 63.8% | **81.0%** |
| 4 | RN50 | 71.7% | **99.5%** |
| 4 | PNAS | 23.6% | **52.9%** |
(2) DRA does not modify the structure of the source model to make it able to preserve more low-level features of an image. Instead, DRA proposes to improve adversarial transferability from the data distribution perspective. It defines a new loss function to fine-tune a source model so that the gradient of the fine-tuned source model can approximate the gradient of the ground-truth data distribution. As a result, using the fine-tuned source model can better push the image away from its original distribution, which helps to improve adversarial transferability.
(3) We can combine our method with DRA [1] to achieve better performance, since these two methods attempt to improve adversarial transferability from different perspectives.
Q2. "the ultimate goal of this work should be to achieve state-of-the-art transferability results, considering that the concept of low-level features improving transferability has been mentioned to some extent in other works (ILA etc.)."
We do not agree that ILA has mentioned the concept of low-level features improving transferability. The motivation of ILA is to increase the perturbation of an adversarial sample on a pre-specified layer of the source model, which the authors hope will be conducive to greater transferability. Therefore, in addition to achieving state-of-the-art transferability results, our work contributes to providing a new perspective to study the transferability of adversarial samples. | Summary: This paper proposes a new Blurred-Dilated method for generating transfer attacks. The authors focus on generating transfer attacks as a more realistic attack model by looking at how the substitute model's architecture can be changed to increase the transferability of adversarial attacks. By introducing blurred downsampling and dilated convolutions in the substitute network, the authors try to focus on preserving important features to increase transferability. The authors evaluate transferability on ImageNet across several naturally trained architectures.
======= POST REBUTTAL =========
After the rebuttal, I have raised my score due to experiments on transfer-based defenses.
Strengths: 1. Interesting idea. The paper proposes an interesting approach, which is whether or not transfer attacks can be carried out on the substitute model side. If there are techniques we can do on the substitute model side, this makes the attacks more possible and practical.
2. Compared to ILA, ILA++, and LinBP, the BD attacks seem to do quite well, transferring across naturally trained architectures at a higher rate than these other attacks.
3. Examining the attention maps in the experiments is interesting, and helps add some insight into what would otherwise be an empirical paper.
Weaknesses: 1. Some design choices could be defended stronger. For example, why are the dilated convolutions only applied at the later layers? Why is blurred-downsampling better than convolving with higher stride? Also, the blurred filters would still lose information. What formally is being preserved with the BD filters and not with the regular convolutions or max pooling or average pooling? More formal one-to-one comparisons between each choice may be helpful. I am still a bit unsure about what fundamentally is important about these operations.
2. Unstated practicality concerns. While the results are good against naturally trained models, what if the defense is trained to stop transfer attacks? E.g., "Ensemble Adversarial Training: Attacks and Defenses" (Tramer et al. 2018) or "TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness" (Yang et al. 2021). The introduction of such defenses may render these attacks ineffective. I wonder how effective these defenses would be in stopping BD attacks. In addition, the assumption is that ImageNet as the domain is known but the architecture may be different. How realistic is this in the real world? One might not know the dataset that a machine learning model was trained on, and then would have to extract a substitute with model extraction. Then, the practical cost of extracting a model and then performing a transfer attack would have to be compared with the cost of performing query-based black-box attacks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why are dilated convolutions only applied at the later layers?
2. Why are blurred-downsampling filters better than convolving with higher stride?
3. How does BD perform against defenses trained to stop transfer attacks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper is an attack paper and does not extensively talk about the implications of such a method. A discussion on possible countermeasures and how this impacts the overall aim of creating robust and reliable machine learning models would be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. Why are dilated convolutions only applied at the later layers?
(1) We introduce dilated convolutions to reduce downsampling in the model, since we want to preserve more low-level features of an image during forward propagation (Please see the answer to Q1 of Reviewer hwet for the motivation of our method). However, merely removing downsampling can degrade the model's performance on capturing the global information of an image, since downsampling can enlarge the model's receptive field. Therefore, in addition to removing downsampling, we add dilated convolutions to enlarge the model's receptive field while maintaining more low-level features.
(2) We do not apply dilated convolutions to reduce downsampling at the earlier layers, since applying dilated convolutions at the earlier layers will maintain the large dimensionality of earlier features, which makes the forward computation too expensive to complete.
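For reference, the effective span of a dilated 1-D convolution kernel grows linearly with the dilation rate while the spatial resolution is preserved (a small illustrative snippet, not from the paper):

```python
def effective_span(kernel_size: int, dilation: int) -> int:
    # Span of input positions covered by one dilated kernel application:
    # dilation inserts (dilation - 1) gaps between adjacent kernel taps.
    return dilation * (kernel_size - 1) + 1

print(effective_span(3, 1))  # 3: standard convolution
print(effective_span(3, 2))  # 5
print(effective_span(3, 4))  # 9: larger receptive field, no downsampling
```

This is why dilation can substitute for the receptive-field growth that downsampling would otherwise provide.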
Q2. Why are blurred-downsampling filters better than convolving with higher strides?
In addition to adding blurred-downsampling, we change convolving with higher strides to convolving with stride 1. Such a combination (ConvBlurPool, please see Line 163 of the main paper) is better than convolving with higher strides in terms of preserving more low-level and low-frequency features. The reasons are:
(1) Convolving with higher strides would skip pixels and potentially miss features in the original image. Therefore, we first change the stride to 1, which can preserve more low-level features.
(2) We then add blurred-downsampling. Although it reduces the dimensionality of features, the output of blurred-downsampling still considers all features in its receptive field. Therefore, it will not cause more information loss than convolving with higher strides. Besides, since blurred-downsampling is a Gaussian filter, it can keep more low-frequency features, which can help to generate more transferable adversarial samples [6].
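The anti-aliasing effect can be seen in a toy 1-D example (a NumPy sketch under our own simplifying assumptions, using a binomial [1, 2, 1]/4 blur filter):

```python
import numpy as np

x = np.array([0., 1., 0., 1., 0., 1., 0., 1.])  # high-frequency detail

# Plain stride-2 subsampling skips every other pixel: the oscillation
# aliases away and the signal looks constant.
naive = x[::2]
print(naive.tolist())    # [0.0, 0.0, 0.0, 0.0]

# Low-pass filtering (blurring) before subsampling retains the signal's
# low-frequency content (its mean energy) instead of discarding it.
blurred = np.convolve(x, np.array([1., 2., 1.]) / 4, mode="same")[::2]
print(blurred.tolist())  # [0.25, 0.5, 0.5, 0.5]
```

The strided path loses the detail entirely, whereas the blurred path still reflects it in the retained samples.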
Q3. How does BD perform against defenses trained to stop transfer attacks?
We have tested three adversarially trained models: IncV3\textsubscript{ens3}, IncV3\textsubscript{ens4}, and IncRes\textsubscript{ens} (Tramer et al. 2018). The detailed results are shown in Table 9 of our appendix due to the space limit. TRS (Yang et al. 2021) does not provide a pre-trained defense model, and we could not complete the model training for TRS within the limited rebuttal period. Therefore, we tested more widely-used defenses against transfer attacks: JPEG [R1], FD [R2], FAT [R3], RS [R4] and NRP [R5]. The results are shown in Table R3 of the uploaded PDF file. We can see that these defenses cannot effectively stop our BD attacks, and we outperform the state-of-the-art baselines by a large margin of 14.7\% on average.
[R1] Guo, C., Rana, M., Cisse, M., & Van Der Maaten, L. (2017). Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117.
[R2] Liu, Z., Liu, Q., Liu, T., Xu, N., Lin, X., Wang, Y., & Wen, W. (2019, June). Feature distillation: Dnn-oriented jpeg compression against adversarial examples. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 860-868). IEEE.
[R3] Wong, E., Rice, L., & Kolter, J. Z. (2020). Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994.
[R4] Cohen, J., Rosenfeld, E., & Kolter, Z. (2019, May). Certified adversarial robustness via randomized smoothing. In international conference on machine learning (pp. 1310-1320). PMLR.
[R5] Naseer, M., Khan, S., Hayat, M., Khan, F. S., & Porikli, F. (2020). A self-supervised approach for adversarial robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 262-271).
Q4. What formally is being preserved with the BD filters?
(1) At the earlier layers, we change MaxPool, Conv, and AveragePool to MaxBlurPool, ConvBlurPool, and BlurPool, respectively (Lines 161-166 of the main paper). At the later layers, we introduce dilated convolutions in order to remove MaxPool or AveragePool.
(2) Compared to the original filters, our BD filters preserve more low-level and low-frequency features (Please see the answers to Q1 and Q2). Compared to max pooling, which keeps only one feature in its receptive field, MaxBlurPool considers all features in its receptive field. Besides, compared to AveragePool, since we use a BlurPool with a larger kernel size at the same stride, BlurPool can keep more low-level features.
Q5. Threat model.
(1) The threat model adopted in our work is consistent with the well-recognized work in this field (e.g., [6, 9]), which assumes that attackers know the training dataset of the target model.
(2) We plan to study transfer attacks in more realistic settings in future work.
Q6. Implication of our method.
Our method can be utilized to generate more transferable adversarial samples for adversarial training, which can better improve the robustness of a model.
---
Rebuttal 2:
Title: Follow-up response
Comment: Dear Reviewer,
Considering that the discussion phase is nearing its end, we are looking forward to your further feedback on our latest response. Do our responses fully address your concerns? Do you have any other comments? We would be happy to discuss with you in more detail. We greatly appreciate your time and feedback.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you to the authors for their thoughtful rebuttal. The simplicity of the approach and the results are compelling. Seeing results against various defenses to stop transfer attacks is also quite helpful. I agree with some of the other reviewers that some more thorough analysis of how BD is working so much better could be nice (hwet), and that there are some model specific / empirically guided settings (eif6), but given the strength of results I have increased my score nonetheless.
---
Reply to Comment 2.1.1:
Title: Thank you for the comment!
Comment: We are glad to hear that our response has addressed your concerns and you have increased your score. We will revise our manuscript according to the suggestions of all the reviewers. Thank you again for your time and encouraging comments! | Summary: The paper proposes a novel transfer-based black box adversarial attack called Blurred-Dilated method. They authors consider the model modification approach and propose to reduce downsampling operations on the source model for the attack. They conduct extensive experimenting and compare with the previously proposed methods to show the superior transferrability of their generated adversarial examples.
Strengths: 1. Originality. The paper proposes a novel approach for the black-box attacks. I am not aware of other black-box attacks that focus on the forward propagation in the model.
2. Quality. The paper provides reasonable experimental support and justification of their proposed methodology.
3. Clarity. The paper is well-written and easy to follow.
4. Significance. Improving the transferability of the adversarial examples allows to raise awareness for the vulnerabilities of the models deployed in safety-critical domains.
Weaknesses: The BD method proposed in this work seems to be specific to the CNN architecture and cannot be applied to other popular vision architectures such as the Vision Transformer [1].
[1] Dosovitskiy et al. “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”, ICLR 2021
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: What is the effect of the introduced architectural elements on the model inference time?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations and Broader Impact are adequately discussed in the Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Q1. Can the BD method be applied to other popular vision architectures such as Vision Transformer?
We can first add CNN blocks to Vision Transformers as in (Dosovitskiy et al. 2021). We can then apply our BD method to the CNN blocks in the hybrid model to improve the transferability of adversarial attacks. We plan to extend our method to more architectures in future work.
Q2. What is the effect of the introduced architectural elements on the model inference time?
We compared the inference time of ResNet-50 and BD-ResNet-50 over 300 iterations and report the average: ResNet-50 takes 7.85 ms per inference, while BD-ResNet-50 takes 15.69 ms. These results were measured on an NVIDIA Tesla T4.
Compared to other methods, our method can still quickly generate adversarial samples with a higher attack success rate.
---
Rebuttal Comment 1.1:
Title: Reviewer's Response to the Rebuttal
Comment: Thanks for your rebuttal.
Q1. Given the prevalence of Vision Transformers in the contemporary literature, it would be good to have quantitative results. I am not convinced that adding CNN blocks to the Vision Transformer and converting it into a hybrid model would improve the transferability of the attack. Looking forward to the results in your future work.
Q2. Thanks for the results. Since inference time doubles after introducing blurred dilations, I find that this aspect should be clearly stated in the paper. Given that ResNet-50 has rather fast inference compared to other popular architectures, the overhead can be even more drastic for other families such as Vision Transformers.
Given the rebuttal, I find that my original rating and confidence score are reasonable and I am not changing them. My confidence on the paper's impact and contribution remains not too high given the absence of results for the Vision Transformer architecture.
---
Reply to Comment 1.1.1:
Comment: We will refine the paper accordingly in our final version. Many thanks for your valuable feedback!
Title: Thank you for the comment! | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely appreciate all of your precious time and constructive comments. We are greatly encouraged by the positive comments on our work. We will carefully revise our manuscript by adding the suggested experimental comparisons, presenting more detailed explanations, and fixing the typos. We are looking forward to receiving your valuable feedback to further improve our work. Thank you for your time!
Sincerely,
Authors
Pdf: /pdf/5bdf47174222acb6b9fa5f2a3cfd8d8ee7842ab6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper investigates a modification to vision networks that causes adversarial examples generated by attacking them to be more transferable. In particular, the paper suggests using dilated convolutions + blurred downsampling, which the authors motivate as retaining a maximum amount of original feature information.
Strengths: Overall, the paper seems reasonable and solid. It proposes a new approach (albeit with a somewhat handwavy motivation), shows that it works well in a variety of settings, and performs ablations to ensure that the components of their approach are necessary.
Weaknesses: As alluded to above, although the paper is solid, it is not a particularly standout paper either. The motivation for why this approach is useful is quite vague and is not explored thoroughly, except for some light analysis of salience maps, and the method itself is not very novel. The results, however, are quite good. Overall, I feel somewhat lukewarm on this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1. The motivation of our approach.
(1) We observed that: 1) The adversarial samples generated by ResNet usually have better transferability than those generated by other models. We think the reason may be that the skip connections of ResNet connect the lower-layer feature maps to the higher-layer feature maps, which helps to transfer more low-level features and reduce the information loss caused by downsampling. 2) Due to the large resolutions of the images on ImageNet, models need to downsample multiple times, resulting in a large amount of information loss. Besides, due to the structural differences between models, the discarded features are different. Therefore, preserving more low-level features, as ResNet does, can improve adversarial transferability. 3) On CIFAR-10/100, the adversarial transferability between models is usually better than that on ImageNet. Since the resolutions of the images on CIFAR-10/100 are smaller, there are fewer downsampling operations in the CIFAR-10/100 models. Therefore, different CIFAR-10/100 models all maintain more low-level features, making the generated adversarial samples easy to transfer.
(2) Given the above observations, we think that employing a source model that can preserve more low-level features of an image during forward propagation can help to craft adversarial samples, which can more thoroughly and precisely destroy the low-level features of a clean image. Such adversarial samples are more transferable, since different models all use low-level features to extract high-level semantics and then make predictions. Since downsampling discards image features, which will reduce the details of an image (i.e., the low-level features) during forward propagation, we propose our Blurred-Dilated method to reduce downsampling in the original models, and preserve more low-level features of an image. Therefore, the proposed Blurred-Dilated method can generate more transferable adversarial samples.
Q2. The novelty of our method.
The novelty of our approach can be summarized as follows:
(1) We find that existing transfer attacks based on model modification only focus on modifying the back propagation. Different from them, we are the first to consider forward propagation.
(2) Our method is built upon a new perspective to improve adversarial transferability: keeping more low-level features during forward propagation. Besides, our method outperforms the state-of-the-art baselines by a significant margin. Therefore, our work sheds some light on studying the transferability of adversarial samples.
---
Rebuttal 2:
Title: Follow-up response
Comment: Dear Reviewer,
Considering that the discussion phase is nearing its end, we are looking forward to your further feedback on our latest response. Do our responses fully address your concerns? Do you have any other comments? We would be happy to discuss with you in more detail. We greatly appreciate your time and feedback.
Sincerely,
Authors | null | null | null | null | null | null |
Banana: Banach Fixed-Point Network for Pointcloud Segmentation with Inter-Part Equivariance | Accept (spotlight) | Summary: This paper considers an important problem in learning on point clouds -- the equivariance under the SE(3) group.
Namely, the authors address the requirement of inter-part equivariance, essential for handling real-world scenarios, where an object can consist of multiple moving parts, or a scene can contain multiple objects that undergo different rigid transformations.
The key observation is that segmentation is necessary to define such per-part equivariance, which the authors are the first to do in a strict manner.
Furthermore, the proposed fixed-point framework with one-step training and iterative inference is used to demonstrate that the per-step equivariance induces an overall equivariance upon convergence.
The developed inter-part equivariant message-passing network with stable convergence is experimentally shown to have strong generalization under different scene configurations, even those changing the point cloud geometry/topology.
Overall, the paper provides a sound theoretical framework for a complex practical problem.
---------------------------------------
POST REBUTTAL
The authors comprehensively addressed my questions and concerns.
Given the provided answers and rebuttal overall, I maintained my positive assessment of the paper.
Strengths: S1. The paper is very well-written, and the structure is sound. The illustrations are clear and instructive, which helps to comprehend the presented theory.
S2. The proposed view of inter-part equivariance as a co-evolving interplay between geometry and segmentation is novel and compelling.
S3. The experimental validation supports well the claims stated in the contribution.
Weaknesses: W1. A discussion on the (time-)complexity of the proposed method is missing.
- It is unclear how efficient the iterative inference is.
- Besides, a comparison of the proposed model complexity and the baselines is missing.
W2. The details of the hyperparameter choice, including the iterative inference part, are missing.
- Crucially, the optimal number of iterations $\textbf{k}$ is unspecified, nor is its effect on the complexity with regards to W1.
- The same applies to the motivation of the chosen size of the radius of the ball query, $r$, and the maximum number of points in the local neighborhood, $k$.
W3. Experiment Section 5.1: the selected Shape2Motion shapes have at most 3 parts with two different semantic labels.
- I wonder how the proposed method would perform on Shape2Motion shapes with more than 2 different part labels, e.g., motorbike or bicycle.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q1. The first part of the title is often associated with negative connotations in English; see, e.g., [0]. The authors might want to revise it so as not to cause unnecessary misunderstanding.
Q2. How can the proposed method be used to perform point cloud classification?
Q3. The declaration of $φ$ in (14) is missing.
Q4. What is the number of runs in the experiment statistics presented in the tables in Section 5? Especially important for Table 2.
Q5. Can the authors explain the large standard deviation of the results in Table 1? Why does the proposed model, in most cases, perform better on Unseen states + unseen instances than Unseen states (see Table 1)?
Q6. Ablation studies: Was the inherent ambiguity of PCA-based canonicalization (see, e.g., [1]) taken care of? Could authors elaborate on the details of this experiment?
Q7. In the theory of the VN framework [2], which the authors' method is based on, the pose-effect cancelation is achieved by means of the inner product of the equivariant features, which also cancels the effect of reflections. Could the authors run a simple experiment showing if their method is actually E(3)-equivariant and discuss it?
Q8. Further noise-stability analysis of the method, where the noise is applied to the input point cloud (not just the segmentation mask), would be beneficial.
Q9. It would make the paper more accessible to a broader audience if the authors included an informal motivation of using the Banach fixed-point theorem and iterations.
A non-exhaustive list of typos:
- Line 148 " and we" --> ", we"
- Line 277 "espite" --> "despite"
-------------------------------------------------
[0] "banana." Farlex Dictionary of Idioms. 2015. Farlex, Inc 14 Jun. 2023 https://idioms.thefreedictionary.com/banana
[1] Li et al. (2021), A Closer Look at Rotation-Invariant Deep Point Cloud Analysis
[2] Deng et al. (2021), Vector neurons: A general framework for SO(3)-equivariant networks
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and broader societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for your feedback and for appreciating our work! Here’re our responses to your questions and comments and we hope that they could address your concerns as well:
**W1. Time complexity of the proposed method is missing.**
In our experiments, we set k=20 iterations for evaluation. But in practice, we plot the IoU w.r.t. the number of iterations (PDF Figure 2) and observe that the network prediction converges within ~k=5 iterations. We will add this information to our paper.
The training time is the same as the standard training frameworks as it only takes a single-step prediction with the ground-truth labels. If one wants to further incorporate Lipschitz regularization losses into the training, it will take more time for the loss computation, especially if it uses adversarial sampling which needs to compute the network gradients and sample iterations. But currently we are not incorporating such regularizations.
**W2. Details of the hyperparameter choice.**
As mentioned above, for the experiments in the main paper, we set k=20 iterations for inference, but on average the network converges within ~k=5 iterations.
The input pointcloud contains 2048 points and is scaled to [-1, 1]. For the message passing layers, we set a neighborhood radius r=0.3 with a maximum of 40 points per neighborhood. An ablation study on the radius is shown in Table 4 of the attached PDF. When the radius is too small, the network fails to extract useful local features; when the radius is 0.3~0.4, the network performance is relatively consistent.
We will add these details to the paper.
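For concreteness, the ball-query neighborhood gathering mentioned in the hyperparameter details (radius r = 0.3, at most 40 neighbors, on 2048 points scaled to [-1, 1]) can be sketched in plain NumPy. This is an illustrative stand-in, not the authors' implementation; a real pipeline would typically use spatial indexing rather than a dense distance matrix:

```python
import numpy as np

def ball_query(points, r=0.3, k_max=40):
    # For each point, collect the indices of up to k_max neighbors that
    # lie within Euclidean distance r (the point itself is included).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return [np.nonzero(dists[i] <= r)[0][:k_max] for i in range(len(points))]

# Toy usage: 2048 points scaled to [-1, 1], matching the stated setup.
pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(2048, 3))
neighborhoods = ball_query(pts)
```

When r is very small, most neighborhoods collapse to the query point alone, which matches the observation that the network then fails to extract useful local features.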
**W3. Experiment Section 5.1: the selected Shape2Motion shapes have at most 3 parts with two different semantic labels.**
We demonstrate our method on these object categories mostly because their part motions are more obvious, which helps us to show our “inter-part equivariance”. For categories like motorbike or bicycle, the part motions are relatively small (e.g. wheels rotating). We believe our method can also work well for several parts. But if there are tens of parts, as in scene segmentation, the dimension of the iteration space $[0,1]^P$ greatly increases and it may cause difficulties.
**Q1. Acronym “Banana”.**
In fact, the acronym “Banana” comes from a meme in some math departments where students jokingly call “Banach space” the “Banana space”. But we are very sorry for not being aware of its negative connotations in English and we will consider changing or removing it.
**Q2. How can the proposed method be used to perform point cloud classification?**
As the fixed-point iteration must be always in the same domain $[0, 1]^P$, the direct input of our framework can only be the segmentation labels. However, if we don’t restrict the whole pipeline to be end-to-end, one can first do segmentation and apply another standard part-aware equivariant network to do other tasks like classification.
**Q3. The declaration of $\varphi$ in (14) is missing.**
Thanks for pointing it out! $\varphi$ is an MLP representing the edge function in the message passing. We will add the declaration to the paper.
**Q4. Number of runs.**
The error bars are not w.r.t. different random seeds, but are the standard deviation across different instances in the datasets.
**Q5. The large standard deviation in Table 1. Better results on Unseen states + unseen instances than Unseen states.**
For the large standard deviation, we notice that unlike many standard segmentation networks where prediction errors often exist locally (e.g. mislabeling a small region), our errors usually arise when the iteration converges to a totally different fixed point; e.g., for an oven with its door slightly open, the network converges to a state where another wall of the oven body is labeled as “door” and the door is labeled as “body”. In other words, our errors usually exist globally instead of locally.
The better performances on unseen instances seem to also happen with all the baseline methods. Our guess is that it may be due to the shape biases between the two sets. For example, in the washing machine category, the majority of the instances have oval doors and the minority have rectangular doors, and there’re more rectangular doors in the test split than in the training split.
**Q6. Inherent ambiguity of PCA-based canonicalization.**
Yes, the inherent ambiguity of PCA is eliminated in the canonicalization with sign-flipping based on the mean at each PCA direction. The infinite Lipschitz upper bound of PCA comes from the possibility that a small change in the part assignment $\mathbf{y}$ can result in a substantial change in the PCA directions. For example, imagine you have a part whose shape is a perfect sphere; now if any point around this part is added to this part (with weight $\mathbf{y}_n = \varepsilon$), the point’s direction will immediately become a principal direction of the part – such a direction change caused by a single point label change is an example of an unbounded Lipschitz constant.
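The unbounded-Lipschitz behavior described above is easy to reproduce numerically. The following sketch is our own toy construction (not from the paper): a symmetric part has x as its principal direction, and assigning a 1% soft weight to one far-away point flips the top PCA direction by 90°:

```python
import numpy as np

# An exactly constructed "part": six axis-aligned surface points, very
# slightly elongated along x so that x is the principal direction.
part = np.array([[ 1.01, 0.0, 0.0], [-1.01, 0.0, 0.0],
                 [ 0.0,  1.0, 0.0], [ 0.0, -1.0, 0.0],
                 [ 0.0,  0.0, 1.0], [ 0.0,  0.0, -1.0]])

def top_direction(points, weights):
    # Weighted (uncentered) covariance; the part is centered at the
    # origin, so mean subtraction is omitted for simplicity.
    cov = (points * weights[:, None]).T @ points / weights.sum()
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, -1]              # top principal direction

# One far-away point gets a *small* soft part-assignment weight eps.
outlier = np.array([[0.0, 0.0, 10.0]])
pts = np.vstack([part, outlier])
dirs = {}
for eps in (0.0, 0.01):
    w = np.concatenate([np.ones(len(part)), [eps]])
    dirs[eps] = top_direction(pts, w)

print(np.abs(dirs[0.0]))   # ~[1, 0, 0]: principal direction is x
print(np.abs(dirs[0.01]))  # ~[0, 0, 1]: a 1% weight change flips it 90°
```

A change of a single assignment weight from 0 to 0.01 rotates the PCA frame by 90°, which is exactly the kind of instability the rebuttal points to.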
**Q7. E(3)-equivariance.**
Yes, the VN framework we adopt is indeed E(3)-equivariant. A rough draft discussing the reflection-equivariance and potentially how to break such symmetry can be found at: https://arxiv.org/pdf/2210.16646.pdf
**Q8. Noise-stability analysis of the method.**
We show the noise-stability analysis of our method in PDF Figure 4. We apply Gaussian noise to the input pointclouds with different standard deviations ranging from 0 to 0.05 (the value 0.05 is adopted from [1]). In Figure 6 of the main paper, we also show some qualitative results of our method trained on clean synthetic pointclouds and tested on noisy real scans.
[1] Mescheder, Lars, et al. "Occupancy networks: Learning 3d reconstruction in function space." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
**Q9. Informal motivation for using the Banach fixed-point theorem and iterations.**
Thanks for the suggestion! We will add some motivations to the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for comprehensively answering my questions and addressing my concerns!
I find your additional experimental results convincing and recommend you include them in the paper.
Given the provided answers and rebuttal overall, I intend to maintain my positive assessment of the paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply! We'll add the additional results to the paper/supplementary material. | Summary: The authors introduce a method for object part segmentation of point clouds that is equivariant/invariant to SE(3) part transformations. The core of their method is a neural network with points and segmentation as input and segmentation as output. The network is assumed to be contractive and is then used to perform Banach fixed point iterations towards the correct segmentation during inference. The Banach fixed point network is embedded in a larger architecture using Vector Neurons. For implementation, a message passing network is used, which uses the input segmentation for determining message strength.
The proposed method is evaluated for articulated object part segmentation (on Shape2Motion) and for segmentation of multi-object scans (on DynLab), where it compares favorably against baselines.
Strengths: - The theoretical framework of Banach iterations is elegant and an interesting perspective on iterative networks during inference. In practice, similar ideas on point clouds have been there before, understanding such networks as reweighting functions for iterative reweighting least square schemes, EM iterations, etc. However, I think there is value in this specific perspective, especially since it comes with a convergence guarantee under Lipschitz constraints. To my knowledge, this is a novel contribution.
- The presented algorithm seems to solve the given tasks well, outperforming the given baselines clearly (however, there are some concerns regarding missing baselines, see below).
- The paper is nicely presented and easy to follow, given the slightly more complex nature.
- The result of generalization from synthetic to real chairs is strong
Weaknesses: - The first part of the paper is all about the consequences of a Lipschitz-constrained network, and then there is no Lipschitz constraint in practice. The authors claim that the local message passing serves as some type of Lipschitz regularization. However, there is no reference given for this claim and I don't know one either. I think it would make the paper stronger if this aspect were supported by experiments or theory, which it isn't. The authors only ablate on model performance under different noise levels, which does not give a full picture about convergence.
- In articulated object part segmentation, the paper seems to leave out important comparisons to previous work, e.g., [1,2,3]. All three papers are referenced in related work but not compared against. It would be good if the authors would provide a comparison or a discussion, why the comparison is not necessary.
I think this is a very interesting paper with a new perspective on iterative networks. However, the above points are a bit concerning, which is why I am not fully convinced to give a better score.
[1] Kawana et al.: Unsupervised pose-aware part decomposition for 3d articulated objects.
[2] Kawana et al.: Neural star domain as primitive representation
[3] Chen et al.: BAE-NET: Branched Autoencoder for Shape Co-Segmentation
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - I suspect that equation 1 wants to express linear blending of part transforms. However, I don't think it is correct: (1) sum of matrix $\mathbf{R}$ and vector $\mathbf{t}$, (2) $\mathbf{t}_p$ not weighted by $\mathbf{y}_{np}$. Could the authors clarify?
- Please provide further evidence that the network indeed behaves contractive or a theoretical justification for the Lipschitz regularization via local message passing.
- Please discuss or provide missing comparisons with previous methods for part segmentation on Shape2Motion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and for appreciating our work! Here’re our responses to your questions and comments and we hope that they could address your concerns as well:
**I suspect that equation 1 wants to express linear blending of part transforms. However, I don't think it is correct: Could the authors clarify?**
Yes, it is a typo. Thanks for pointing it out! It should be $\sum_{p=1}^P \mathbf{y}_{np}(\mathbf{x}_n\mathbf{R}_p+\mathbf{t}_p)$. We made this mistake because we were writing $(\mathbf{R}_p, \mathbf{t}_p)$ together as $\mathbf{T}_p$ in the beginning and changed it at the last minute but forgot to update the expressions.
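A quick numerical check of the corrected expression, with a hypothetical helper of our own (using the row-vector convention $\mathbf{x}_n\mathbf{R}_p$ as written in the formula):

```python
import numpy as np

def blend_transforms(x, y, Rs, ts):
    # Corrected Eq. 1:  x'_n = sum_p y_np (x_n R_p + t_p)
    # x: (N, 3) points (rows), y: (N, P) soft part assignments,
    # Rs: (P, 3, 3) rotations, ts: (P, 3) translations.
    per_part = np.einsum('ni,pij->npj', x, Rs) + ts   # (N, P, 3)
    return np.einsum('np,npj->nj', y, per_part)

# With hard one-hot assignments, each point is moved rigidly by its own
# part's transform, and t_p is now correctly weighted by y_np.
x = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y = np.eye(2)                                  # point n belongs to part n
Rz = np.array([[ 0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0],
               [ 0.0, 0.0, 1.0]])              # 90-degree rotation about z
Rs = np.stack([np.eye(3), Rz])
ts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
out = blend_transforms(x, y, Rs, ts)
```

Here point 0 is left in place by part 0's identity transform, while point 1 is rotated and translated by part 1's transform, as expected for a per-part rigid blend.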
**Please provide further evidence that the network indeed behaves contractive or a theoretical justification for the Lipschitz regularization via local message passing.**
We have added a paragraph of (informal) theoretical explanation of why the SE(3)-equivariant message passing is helpful to small network Lipschitz constants. The PDF file is directly sent to AC as it contains TeX equations that cannot be rendered by MathJax here in the text box.
In addition, the convergence itself actually doesn’t rely on the network being contractive with $L<1$, as $[0,1]^P$ is a compact convex space and the Brouwer fixed-point theorem guarantees the existence of fixed-points for any continuous functions, which can be found by Newton fixed-point iterations. However, we developed our theory on top of the Banach fixed-point theorem/iteration for the uniqueness of the fixed point, which is for the proof of equivariance.
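As a toy illustration of the Banach iteration discussed above (with a hand-picked contraction standing in for the network; in the paper the iterated map is the learned segmentation network itself):

```python
import numpy as np

def fixed_point_iterate(f, y0, k=50):
    # Iterate y <- f(y); for a contraction (L < 1) on a complete space,
    # Banach's theorem guarantees convergence to a unique fixed point.
    y = y0
    for _ in range(k):
        y = f(y)
    return y

# A contraction on [0, 1]^P with Lipschitz constant L = 0.4 < 1;
# its unique fixed point is y* = 0.5 in every coordinate.
f = lambda y: 0.4 * y + 0.3
for y0 in (np.zeros(3), np.ones(3), np.full(3, 0.123)):
    print(fixed_point_iterate(f, y0))   # -> [0.5 0.5 0.5] each time
```

The initialization-independence shown here is exactly what the uniqueness part of the Banach theorem buys for the equivariance proof.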
We also provide a study of different Lipschitz constraining methods under different norms (PDF Table 1 and Figure 1). In practice, none of these existing Lipschitz constraining methods are helpful to the network performance (PDF Table 1). We also plot the $l_2$-norm regularization losses (PDF Figure 1) and they are zero almost everywhere.
Directly evaluating the network Lipschitz constant is also impractical, as the space $[0, 1]^P$ is of very high dimension and sampling in it is very inefficient. This inefficient sampling also makes the behavior of the Lipschitz regularization losses less understandable and controllable, as they’re not regularizing the entire space but only some sparsely sampled points, which may introduce unexpected behaviors in the space.
Overall, based on our observations, not having an explicit Lipschitz constraint and using the SE(3)-equivariant message passing work best for the current situation. But we also agree that, as we discussed in our limitation section, Lipschitz bounds and regularizations for set-invariant networks (which norm to use and how to constrain) would be an interesting and important problem for future study.
**Please discuss or provide missing comparisons with previous methods for part segmentation on Shape2Motion.**
The assumptions and training/test setups of [1, 2, 3] are different from ours in the following aspects:
- [1, 2, 3] are unsupervised **co-segmentation**, ([1] uses a GAN loss, and [2, 3] follows the slot-attention-like techniques), but we are following the standard supervised training/test frameworks for semantic/part segmentations.
- Most importantly, [1, 2, 3] require the training data to have objects at **all articulation states**, but we train our method only on limited states (e.g. only the rest state) and show its generalization to unseen states.
- Another minor point is that [1, 2, 3] also needs watertight meshes/implicits for their training, but our method can work on pure pointcloud data.
[2, 3] do not apply directly to articulated objects, but similar ideas are incorporated in [4] with part-level SE(3)-equivariance for articulated object segmentation (also discussed in the related work section) – and the two arguments above hold for [4] as well. Also note that although [4] shows in their pipeline figure that they feed their segmentation and pose predictions back to their feature extractor, they only do a two-step coarse-to-fine prediction and are not actually using an iterative framework – and they only leverage per-part SE(3)-equivariance but don’t provide any theoretical insights for inter-part equivariance.
We will add these discussions to our paper.
[1] Kawana et al.: Unsupervised pose-aware part decomposition for 3d articulated objects.
[2] Kawana et al.: Neural star domain as primitive representation
[3] Chen et al.: BAE-NET: Branched Autoencoder for Shape Co-Segmentation
[4] Liu, Xueyi, et al. "Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE (3) Equivariance." arXiv preprint arXiv:2302.14268 (2023).
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for explaining the differences in setup with respect to related works [1 - 4]. I would encourage the authors to make this distinction also clear in the related work section of the paper. I guess one could still compare the methods in a setup where all articulation states are used as training data, but I agree that this might not be essential to support the main argument of the paper.
I appreciate the theoretical justification for message passing networks. It provides a good framework of how to think about it. Also, the additional results shown in the PDF are good additions to the paper. One concern I have regarding the Lipschitz losses: If they are zero everywhere when applied as an additional loss, doesn't this just mean they are weighted too strongly? If this is the case, it is also no wonder that they have a strong negative impact on the IoU. Wouldn't it be more interesting to show these plots in a scenario where they are not used in the loss, in order to show the behaviour of the network without additional regularization?
All in all, I am satisfied with the answers to my question and increase my score. I think even if this method is not directly relevant in practice, the theoretical framework is interesting and investigates iterative networks from a new perspective.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply and for raising your score! We'll add the additional discussions and results to the paper/supplementary material. | Summary: The paper proposes an equivariant network for part-based (or multi-object) point cloud segmentation. The approach is equivariant to separate SE(3) transformations of each part/object. This is ensured by introducing a Banach fixed-point network. The network takes the point-could and the current segmentation as input, and iterates till convergence. The proposed part-aware equivariant network employs the Vector Neurons method to achieve equivariance for the individual parts. A segmentation-weighted message passing then adds communication between the different parts. Experiments are performed on objects from the Shape2Motion and DynLab datasets.
Strengths: - Novel work based on very interesting and elegant ideas.
- Theoretically sound.
- Very well written and clear.
- Good illustrations.
- Comparison with several methods.
Weaknesses: 1. The experimental evaluation is limited to very small-scale datasets. It is not clear how the method would scale to larger datasets and more complex scenes, e.g. in an automotive setting, or from terrestrial Lidar scans. It would be good if the authors could discuss this in more detail.
2. It seems that the authors always train on the exact instance which is also encountered during inference. How would the method perform if it encounters a new type of, e.g., oven, after being trained on a dataset of different ovens.
3. I did not find how the other methods in Tables 1 and 2 were trained. Is data augmentation used? How would data augmentation impact the performance of e.g. PointNet++ or MeteorNet?
4. I did not find discussion on inference and training time.
5. It would be very interesting to see some plots of the IoU w.r.t. the number of iterations, to understand the convergence behavior of the model.
In summary, I think that this is a very interesting and solid work. I find no significant weaknesses, though it would be appreciated if the authors could answer and address my comments in a rebuttal to further strengthen the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Moreover:
Which other tasks could be suitable for this method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: They are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and for appreciating our work! Here’re our responses to your questions and comments and we hope that they could address your concerns as well:
**1. Generalization to larger datasets and more complex scenes.**
For larger scenes, we believe a major difficulty would be the network architecture. Currently, we are using the Vector Neuron framework for equivariance and the message passing for encouraging small network Lipschitz constants. But in larger scenes, more complex network structures are usually important, e.g. transformers, down/up-sampling, and voting schemes, and how to incorporate equivariance and Lipschitz properties into these complex networks remains underexplored.
**2. Inference on novel instances.**
In Table 1 right and Figure 5 right in the main paper, our method is tested on novel instances which are not seen in the training set. We also show in Figure 6 (main paper) that after training on the clean synthetic samples, our method can be applied to real scans with some noise. We visualize more paired training and test instances of different states only for a better illustration of the equivariance properties.
**3. Data augmentation.**
We further compare our methods to the baselines under different data augmentation settings: no augmentation, global pose augmentation $SE(3)$, and per-part pose augmentation $SE(3)^P$. The results are in Table 3 in the attached PDF file.
**4. Inference and training time. 5. Convergence behavior of the model.**
In our experiments, we set k=20 iterations for evaluation. But in practice, we plot the IoU w.r.t. the number of iterations (PDF Figure 2) and observe that the network prediction converges within ~k=5 iterations. We will add this information to our paper.
The training time is the same as the standard training frameworks as it only takes a single-step prediction with the ground-truth labels. If one wants to further incorporate Lipschitz regularization losses into the training, it will take more time for the loss computation, especially if it uses adversarial sampling which needs to compute the network gradients and sample iterations. But currently, we are not incorporating such regularizations.
**Moreover: Which other tasks could be suitable for this method?**
We think one task that may be very suitable for our method is tracking. As the changes between consecutive frames are small in tracking, one can probably use the segmentations from timestep $t$ as an initialization for the iterations at timestep $t+1$, which might be very helpful for the convergence. And the equivariance properties, on the other hand, may help the tracking network achieve better inter-frame consistency.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions. I will maintain my positive rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply! | Summary: The paper propose a Banach, an approach for part-based point-cloud segmentation. In particular, the authors propose an approach to enforce equivariance of the part-segmentation, by construction. They propose a fixed-point framework with one-step training and iterative inference. They propose a part-aware segmentation network.
Strengths: - The results are very convincing - the proposed approach seem to significantly outperform previous approaches
Weaknesses: I do not see any direct weaknesses but I have very little knowledge about this field.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - what is the run-time of the approach? How does it compare to previous approaches?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and for appreciating our work! Here’re our responses to your questions and comments and we hope that they could address your concerns as well:
**Inference and training time.**
In our experiments, we set k=20 iterations for evaluation. But in practice, we plot the IoU w.r.t. the number of iterations (PDF Figure 2) and observe that the network prediction converges within ~k=5 iterations. We will add this information to our paper.
The training time is the same as the standard training frameworks as it only takes a single-step prediction with the ground-truth labels. If one wants to further incorporate Lipschitz regularization losses into the training, it will take more time for the loss computation, especially if it uses adversarial sampling which needs to compute the network gradients and sample iterations. But currently we are not incorporating such regularizations.
---
Rebuttal 2:
Comment: Since I had no major concerns, I will update my score to 7
---
Rebuttal Comment 2.1:
Comment: Thanks for your reply and for raising your score! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback, and are glad that they find that our work presents novel and compelling ideas as well as convincing results.
Here we provide a brief summary of our response, including additional theoretical explanations and experimental evaluations. Detailed responses are given to each reviewer separately. We attach a PDF file presenting tables/figures with brief explanations (for more detailed discussions please refer to our text responses), and a PDF file directly sent to the AC containing the equations that cannot be rendered in the text boxes. We will also add these contents and fix the typos in our paper revision.
- Network Lipschitz (iMm7, T7gc)
- Theoretical explanations. (The file is directly sent to AC through an anonymous link as it contains equations that cannot be rendered by MathJax directly in the text boxes.)
- Network performance under different Lipschitz constraining methods. [PDF file Table 1 and Figure 1]
- Algorithm complexity (tamA, DYXN)
- Training and inference time compared to other methods (tamA, DYXN).
- IoU w.r.t. the number of iterations (tamA). [PDF file Figure 2]
- Evaluations and comparisons (iMm7, tamA, T7gc, 4Umu)
- Hyperparameters for neighborhood radius (4Umu). [PDF file Table 4]
- Discussions of equivariant convolutions. Comparison to an invariant message passing strategy adopted for human segmentation (iMm7). [PDF file Table 2]
- Discussions of intrinsic methods (iMm7). [PDF file Figure 3]
- Data augmentation for the baseline methods (tamA). [PDF file Table 3]
- Network stability w.r.t. different noise levels on the input pointclouds (4Umu). [PDF file Figure 4]
- Discussions of other articulated-object segmentation methods (T7gc).
- Other discussions, including answers to questions, clarifications, and the acronym of the paper title (iMm7, tamA, T7gc, 4Umu)
Pdf: /pdf/384edc37ac0a08eccadc7a4d723972e226430e13.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a neural network architecture that is equivariant to transformations in SE(3) for each object part independently. The network is part of a fixed-point framework where the network is trained with a single step but during testing an iterative approach is used that converges to the desired segmentation.
Strengths: - The paper presents a fresh idea to solve the problem of part segmentation that takes into consideration the different movable parts that compose an object and makes the network equivariant to transformations of those.
- The paper provides theoretical derivations that motivate the solution proposed in the paper.
- The paper shows the effectiveness of the solution by providing several experiments.
Weaknesses: Although I like the paper, I believe the evaluation could be improved by including other types of equivariant networks:
- For example, neural networks based on equivariant operations such as group convolutions or steerable convolutions have by construction equivariance wrt the object parts too. Although these methods would allow information flow between object parts, most of the object parts would remain equivariant to transformations of those.
- Moreover, network architectures that work only with the intrinsic information of the shape should also be included. Graph Convolution networks or equivariant mesh convolutions would also maintain equivariance of object parts. These networks are commonly used to segment people in different poses, which is a related problem to the one addressed in the paper.
- As the paper states, there is no guarantee that L < 1. The paper states that weight truncation could restrict the upper bound of L but harm the expressivity of the network. An experiment where this is studied would improve the paper.
- Lastly, I do not like the acronym Banana since it is not really an acronym of the title. There is no need to include an acronym on the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and for appreciating our work! Here are our responses to your questions and comments; we hope they address your concerns:
**Equivariant convolutions.**
The VNN baseline we compare to also uses a graph convolution network backbone. These types of methods face a dilemma of local neighborhood size: when the neighborhoods are too small, the features are more agnostic of part motions but less expressive, and the contrary holds when the neighborhoods are large. We also add a baseline comparison with invariant-feature message passing (PDF Table 2, also discussed below), which is an extreme case where local equivariant features are only extracted in the first layer.
**Intrinsic methods.**
We agree that intrinsic methods are highly relevant to our task and will add discussions of these works to the paper. Compared to intrinsic methods, our method has the following advantages:
- Rigid part motions preserve geometric distances in many cases; however, this does not hold when topological changes exist, e.g., an oven door going from closed to open, where intrinsic operators such as the Laplacian cannot stay constant (PDF Figure 3). And most intrinsic methods (either spectral methods like the functional map, or spatial methods like graph convolution) are developed based on the Laplacian operator.
- Another limitation of intrinsic methods is that they cannot do part segmentation with duplicated parts (like the chairs example in Figure 6, main paper), as the same local geometries can only be classified as the same labels. In other words, our method can give “equivariant” (order 1) outputs, but intrinsic methods can only give “invariant” (order 0) outputs.
- Also, implementing intrinsic operators on pointclouds is much less trivial than on meshes, especially in the presence of noise.
In addition, we also add a baseline network which first computes per-point local SE(3)-equivariant features, converts them to invariant features, and applies invariant message passing (PDF Table 2), which is a strategy adopted for human segmentation in [1].
[1] Feng, Haiwen, et al. "Generalizing Neural Human Fitting to Unseen Poses With Articulated SE (3) Equivariance." arXiv preprint arXiv:2304.10528 (2023).
**Network Lipschitz.**
We have added a paragraph of (informal) theoretical explanation of why the SE(3)-equivariant message passing is helpful to small network Lipschitz constants. The PDF file is directly sent to AC as it contains TeX equations that cannot be rendered by MathJax here in the text box.
In addition, the convergence itself actually doesn’t rely on the network being contractive with $L<1$, as $[0,1]^P$ is a compact convex space and the Brouwer fixed-point theorem guarantees the existence of fixed-points for any continuous functions, which can be found by Newton fixed-point iterations. However, we developed our theory on top of the Banach fixed-point theorem/iteration for the uniqueness of the fixed point, which is for the proof of equivariance.
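As a toy illustration of the Banach iteration the theory builds on, the sketch below runs fixed-point iteration on a scalar contraction with Lipschitz constant $L = 0.5 < 1$ (a generic, hypothetical example; `fixed_point` is not the paper's network or code):

```python
def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Banach fixed-point iteration: for a contraction f (Lipschitz L < 1)
    on a complete metric space, x_{k+1} = f(x_k) converges to the unique
    fixed point from any starting point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# A contraction on [0, 1] with L = 0.5: f(x) = 0.5 * x + 0.25,
# whose unique fixed point is x* = 0.5.
x_star = fixed_point(lambda x: 0.5 * x + 0.25, x0=0.0)
print(x_star)  # ≈ 0.5
```

The uniqueness guaranteed by the contraction property is what the scalar example shows: any starting point converges to the same $x^*$.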
We also provide a study of different Lipschitz constraining methods under different norms (PDF Table 1 and Figure 1). In practice, none of these existing Lipschitz constraining methods are helpful to the network performance (PDF Table 1). We also plot the $l_2$-norm regularization losses (PDF Figure 1) and they are zero almost everywhere.
Directly evaluating the network Lipschitz constant is also impractical, as the space $[0, 1]^P$ is of very high dimension and sampling in it is very inefficient. This inefficient sampling also makes the behavior of the Lipschitz regularization losses less understandable and controllable, as they regularize not the entire space but only some sparsely sampled points, which may introduce unexpected behavior into the space.
Overall, based on our observations, not having an explicit Lipschitz constraint and using the SE(3)-equivariant message passing work best for the current situation. But we also agree that, as we discussed in our limitation section, Lipschitz bounds and regularizations for set-invariant networks (which norm to use and how to constrain) would be an interesting and important problem for future study.
**Acronym “Banana”.**
In fact, the acronym “Banana” comes from a meme in some math departments where students jokingly call “Banach space” the “Banana space”. But we are very sorry for not being aware of its negative connotations in English and we will consider changing or removing it.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for the thorough rebuttal. All my concerns have been addressed and I would encourage the authors to include these discussions in the paper and/or supplementary material. Regarding the acronym, I was not aware of this since mathematics is not my background. I would not oppose leaving it in the paper. I keep my initial positive assessment of the paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply!
And thanks for your understanding of the paper title. In fact, we also have a backup plan to call it "La Banane" ("The Banana" in French), which is the acronym for **Bana**ch **N**etwork with **E**quivariance (an actual acronym!), and it won't have negative connotations in English as it is not English... We used "Banana" in this submission because we felt having a French word in the title may make it look a bit obscure.
And yes, we'll add the contents to the paper/supplementary material! | null | null | null | null | null | null |
Learning and Collusion in Multi-unit Auctions | Accept (poster) | Summary: This paper studies a setting where a single seller runs repeated multi-unit auctions. In each multi-unit auction, there are $K$ identical units of good for sale. Each buyer individually has a valuation for the good with decreasing marginal returns and submits a bid vector. The seller will allocate the units according to the ranking order of the bids, and use uniform pricing. Specifically, the seller will set the price to be the $K$-th highest bid (or $(K+1)$-st highest bid in another variant). Such an auction format takes place in many important real-world settings, such as license auctions for CO2 emissions, ad auctions on online platforms.
The authors first show how to efficiently compute an optimum bid vector in the offline setting, which also serves as a benchmark bidding strategy for performance evaluation in the online setting. In the online setting, they can design algorithms with polynomial running time and low regret under either full information feedback or bandit feedback. They also give a lower bound on the expected regret of this problem. Additionally, they analyze the equilibria in the two variants of the auction to study whether they are susceptible to bidder collusion.
Strengths: 1. This work contributes to the line of literature on multi-unit auctions by considering dynamic bidding strategies rather than focusing on the seller's side. The model is well built on previous work on mechanism design in combinatorial auctions and learning algorithm design in repeated auctions, and has significant implications for real-world issues such as license allocation.
2. The main insight that the bidder can compute an optimum bidding strategy in the offline setting by finding a maximum weight path in a DAG is quite novel. In particular, the weight of an edge appears to depend on the whole bid vector but turns out to rely only on two neighboring bids, which I find quite interesting. Moreover, it also requires non-trivial techniques to design unbiased estimators for the bandit feedback model.
3. Other characterizations about this problem are provided, including regret lower bounds and equilibrium analysis.
4. All results and proofs are well written.
Weaknesses: The organization of this paper looks weird to me. Section 1.2 covers more than two pages. In particular, "equilibrium analysis" is one of the contributions, but all the results about it are piled up in the Introduction without an independent section. The paper is titled with "online learning", so I assume the algorithms for the online setting should be the highlight of this work, but Subsubsection 1.2.3 now accounts for more than half of Subsection 1.2.
Actually, this part of the equilibrium analysis seems to have almost no connection to the $T$-round setting. Theorems 3 and 4 only give regret upper bounds and do not analyze the convergence of the algorithm. No-regret learning algorithms can converge to a coarse correlated equilibrium, but there is no guarantee of convergence to any pure Nash equilibrium. Therefore, this part in fact studies the pure equilibria of a static setting from the perspective of the seller, aiming to figure out which kind of uniform pricing is the better pricing rule. Maybe the authors should consider changing the title to enlarge the scope of this paper.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1. I don't see why the authors conclude with a summary that the $K$-th price auction format may be preferable to the $(K+1)$-st price auction. Theorem 6 is a negative result about the core-stable profile in the $(K+1)$-st price auction, while the $K$-th price auction does not even have a pure Nash equilibrium. Why are they comparable?
2. Minor comments.
- In line 65, $z$ is used to denote an allocation, but then in the following paragraph, $x$ is used to denote the allocation. And $z$ is later used to denote a vertex in DAG.
- I personally feel that "repeated auctions" is the only accurate expression, while "repeated mechanisms" (line 33), "repeated setting" (line 76), "repeated auction" (line 272) are not appropriate expressions.
- Line 195, the angle -> from the angle.
- Line 201, such problem -> such a problem.
- In the definition of $S_i$ (equation 1), no value for $t$ (the superscript) is given.
- The key evaluating metric, regret, is not defined in the model part.
- There are some meaningless repetitions. For example, lines 220-223 almost repeat lines 89-93, the footnote on page 6 repeats the meaning of $x_i$ in line 68.
- Example 1 can help readers to better understand the model, so I think it is better not to defer it to the appendix.
- When $K=1$, the problem then becomes bidding in repeated first-price ($K$-th price) or second-price ($(K+1)$-st price) auctions. Pointing out the special cases may help readers better understand why the $k$-th price auction does not have a pure Nash equilibrium, and why the $k+1$-st price auction is more susceptible to bidder collusion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- Regarding Section 1.2 and Coarse Correlated Equilibria (CCE) and K+1-th pricing rule: Let us first note that in essence, our contribution introduces a learning algorithm for bidding which can enhance comprehension of CCEs within uniform price auctions. This can be done by simulating our learning algorithm in various settings. This application holds practical significance.
Moreover, our equilibrium analysis results in Section 1.2 bear substantial value for decision-makers. Specifically, they spotlight a critical distinction: while uniform pricing auctions under the K+1-st pricing rule yield equilibria with zero prices, the same does not apply to the K-th pricing rule. This observation is important given the prevalent use of uniform price auctions.
In any case, we are open to changing the title of the paper to reflect the additional results we have in Section 1.2 for NEs.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I don't have any further questions. | Summary: This paper studies the setting in the multi-unit auction where there are $K$ items to allocate and buyer are not necessarily unit-demand and have quasi-linear valuations with decreasing marginal returns. The bidder set separate bid for each item, and each item goes to its highest bidder, where the price can be either the $K$-th bid, or the $K+1$-th bid. The goal is to design a low-regret algorithm from a bidder's perspective, with the goal of maximizing his utility defined as the value minus payment for the winning items. This paper gives a collection of interesting results:
- a DAG construction of the construction for one bidder, and a proof for the bijection between the bid vectors and paths in the offline setting
- Reduce the learning problem as an online maximum weighted path problem, and a weight-pushing algorithm as a solution to the online full information setting
- Regret lower bounds for the setting.
- Equilibrium analysis of this auction.
Strengths: - This paper provides several interesting results.
- The model is well-motivated and the intro is clean and informative.
Weaknesses: - No empirical experiments, but that's fine since this is a theoretical paper.
Here are some additional comments regarding the presentation:
- The abstract in the pdf version is different from the abstract in the OpenReview system.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is it possible to generalize this regret against an adaptive adversary to the policy regret?
- The equilibrium analysis seems to imply this auction doesn't guarantee any positive revenue; why is it then essential to study this auction in light of this result?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - no empirical experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- Expanding to policy regret: Achieving a deeper understanding of policy regret involves transforming our current setup into a contextual bandit framework with an exponential range of contexts. This challenge presents an exciting avenue for future research.
- Regarding the Importance of Uniform Price Auctions and Their Variants: It's essential to highlight the distinction between uniform pricing auctions under the K+1-st pricing rule—yielding zero-price equilibria—and the K-th pricing rule, lacking this property. This distinction holds significance when selecting an appropriate uniform pricing variant due to its wide adoption.
Furthermore, uniform pricing is foundational, achieving equilibrium by matching demand and supply. Its application spans electricity markets, carbon trading, and institutions like the Bank of England and Treasury. The pricing rule in this auction is perceived as fair, and fair pricing mechanisms are particularly relevant in high stakes settings like carbon trading.
Given these factors, studying uniform price auctions is well-justified. The interplay between pricing variants and their real-world implications underscores the importance of this investigation for informed decision-making.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I don't have any further questions. | Summary: The paper considers a repeated autcion setting where
the players submit their bids for an item of which $K$ units are avaialble.
An auctioneer computes a price $p$
and allocates the j-th unit to the
owner of the j-th highest bid, charging the price p$ for each unit.
(I did not understand this part completely, see "Questions" below)
The players have quasi-linear valuations with decreasing marginal returns.
The paper considers the problem in both an offline and online settings.
They derive upper boudns on regret for the full information
setting, where the bids are public, and the bandit feedback setting,
where each player only observes the price and their own allocation.
Strengths: The paper considers an interesting problem.
Repeated multi-unit auctions are widely used in practice.
The problem formulation is nice and could lead to interesting follow-up works.
Weaknesses: I found the writing to be somewhat confusing. In particular, since the results section does not end until page 4, the model should have been described in more detail earlier on.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The discussion of the model is pretty confusing to me. How does the auctioneer allocate the units? The paper repeatedly says "the j-th unit to the player that submitted the j-th highest bid". But the players have different bids for different units. Does the auctioneer simply allocate each item to the player who has the highest bid for that item given the previous allocations? This would be the most reasonable thing to do.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- Regarding the model: The model can be located on page 2 immediately after the introduction. We placed the model early within the paper to facilitate a comprehensive understanding of our contributions within the framework of our proposed model.
- Regarding the allocation rule: The formal depiction of the allocation rule can be found in Section 1.1, while an illustrative example is provided in Example 1 within the appendix. In particular, consider the labeling of units as $1, 2, ..., j, ..., K$. The sentence "the j-th unit to the player that submitted the j-th highest bid" precisely signifies the allocation rule's operation: participants submit their bids, the auctioneer arranges these bids in descending order $(c_1, ..., c_j, ..., c_{n * K})$, and then allocates unit 1 to the bidder with bid $c_1$, unit 2 to the bidder with bid $c_2$, and so on.
- Furthermore, it's important to note that the allocation at time t solely depends on the bids submitted by bidders in round t, excluding any prior rounds. As an example, let's consider K = 2 units and two bidders. In round 5, suppose bidder 1 submits the bid vector [4, 2], and bidder 2 submits [5, 3]. After sorting the bids – [5, 4, 3, 2] – the first unit is allocated to bidder 1, and the second unit goes to bidder 2 (this being the allocation in round 5). Notably, this allocation disregards any events from preceding rounds.
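The sorted-bids allocation rule in the example above can be sketched in a few lines (a hypothetical helper; `allocate` and the bidder names are made up for illustration, not the paper's code):

```python
from typing import Dict, List

def allocate(bids: Dict[str, List[float]], K: int) -> Dict[str, int]:
    """Allocate K identical units: pool all (bid, bidder) pairs, sort them
    in descending order, and award one unit per bid among the K highest."""
    pool = [(b, bidder) for bidder, vec in bids.items() for b in vec]
    pool.sort(key=lambda t: -t[0])          # descending order of bids
    units = {bidder: 0 for bidder in bids}
    for b, bidder in pool[:K]:              # top K bids each win one unit
        units[bidder] += 1
    return units

# The round-5 example from the rebuttal: K = 2 units, two bidders.
# Sorted bids are [5, 4, 3, 2], so each bidder wins one unit.
print(allocate({"bidder1": [4, 2], "bidder2": [5, 3]}, K=2))
# {'bidder1': 1, 'bidder2': 1}
```

As the rebuttal notes, the allocation in each round depends only on that round's bids, so this function needs no history argument.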
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications; I think the explanation you have written here for the allocation rule is much more clear and it would be good to incorporate it in the paper.
I don't have any additional questions. | Summary: This work systematically considers computational, learning-theoretic, and game-theoretic aspects of multi-unit auctions with uniform pricing. In such auctions, $K$ identical goods are sold to agents with quasilinear utility with a uniform pricing scheme set to either the $K$th or $(K+1)$th highest overall bid (note bidders submit a list of $K$ nonincreasing bids for receiving subsequent goods by diminishing returns). On the computational side, it is shown that the offline optimization problem of maximizing hindsight utility given a history of competing bids (subject to discretization) can be nontrivially and efficiently reduced to computing a max-weight path in a DAG; a similar transformation is then used to devise no-regret algorithms in the online setting in both full-information and bandit information settings. These are complemented by nontrivial regret lower bounds. Finally, this work characterizes the core-stable allocations and prices in these settings to suggest that $K$th price auctions may be more resilient than $(K+1)$th price auctions in practice.
Strengths: The considerations given in this work to this setting are fairly comprehensive on a multitude of axes as listed above. The paper is fairly well-written and provides nice discussion of the problem and related work.
Weaknesses: There remains a fairly large gap between the bandit upper and lower bounds; it's not entirely clear at present whether or not existing learning methods could resolve this gap easily.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ---In general, this paper is quite well-written --- the authors provide nice discussion of the relevance of this setting as opposed to other natural auctions with (implicitly) discriminatory pricing, like GSP, as well as other related work.
---I was not able to verify all the proofs in detail, but the general structure and ideas explained in the text made general sense.
---Are these results robust/naturally extendible in some way to the setting where the parameter $K$ instead is a time-varying parameter $K_t$?
---On the learning-theoretic side, is it clear that existing online shortest path learning methods, i.e. FTPL, cannot work ``out-of-box'' in this setting after applying the (new) reduction? Or does the analysis done here to invoke Hedge guarantees seem necessary to re-do?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- About lower and upper bounds: Despite our attempts to enhance these bounds, the ones presented in the paper remain the best results we could attain. Thus we leave this as a future work. Thank you for your understanding.
- Regarding $K_t$: When the number of units $K_t$ is varied over time, our offline algorithm, which leverages a Directed Acyclic Graph (DAG), can be extended by adjusting edge weights according to $K_t$. Furthermore, our learning algorithms, designed for both full information and bandit settings, can be expanded by integrating the modified DAG into an environment where the bidder does not possess knowledge of $K_t$ at the time of bidding in round $t$.
- Regarding FTPL: Yes you are right that FTPL can also achieve a sub-linear regret in $T$ for both full information and bandit settings. However, we found that the regret achieved by FTPL appears to be worse than that achieved by the Hedge algorithm in our problem. Take the full-information setting as an example: by adding independent exponential distributed perturbations with rate $\eta$ to each edge weight, the regret upper bound from the regularizer part scales as $K\log m/\eta$ (m is the number of discretization levels). For the sensitivity part, an upper bound is $K^2 m \eta T$ ($T$ rounds, per-step reward upper bounded by $K$, $Km$ edges in total, and total variation distance $\eta$ between two shifted exponential distributions). Choosing the optimal $\eta$ to balance these two terms gives us a worse regret upper bound than Hedge. Therefore, we decided to use Hedge in this work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response (and sorry for the delay in responding)! It may be worthwhile to put a sentence about this last point, that it may be possible to get sublinear regret using off-the-shelf algorithms but would attain a worse rate. But otherwise, I have no further questions and will leave my score as it is.
Thanks!
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We will incorporate a discussion regarding the final point into the camera-ready version. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of learning to bid in a multi-item auction. There are k identical item to be sold. Each bidder has decreasing value v_1, v_2, ..., v_k with value v_j for j'th item allocated to the bidder. The auction allocates to the k highest bidder and charge k'th or (k+1)'th price. It's worth noting that k'th and (k+1)'th price are neither truthful nor first price in this setting. It has the value of being fair - same price is charged to all bidders. However, it is not truthful as the (k+1)'th bid might belong a bidder obtaining some of the items and that bidder might have an incentive to mis-report.
The authors consider two problems in this setting: how does the bidder learn to bid with low-regret in this setting and what type of equilibria can be obtained that are core-stable (no subset of bidders can deviate to obtain a better outcome.
For the first, the authors provide an offline algorithm for computing best-response to a history of bids by the competitors, provide a low regret algorithm when in each round the full vector of bids is revealed and provide a low regret algorithm when in each round only the winning price and the allocation to the individual player is revealed. The authors also provide a lower bound example to bound the minimum regret that is unavoidable.
The authors also study equilibria that are core-stable - however even though this is an auction setting they only consider settings where the players other than the auctioneer deviate. They show that (k+1)'th price can only have zero price pure Nash equilibria that are core stable.
The paper is very well written. All the arguments are detailed and easy to follow. All the proofs are in the appendix but the statements in the main body are clear enough to give the sense of the result and a sketch of how the result is obtained.
For the learning to bid results, the authors map the problem of computing optimal bid to finding a maximum weight path in a directed acyclic graph. With this mapping, they use existing technology to instantiate the hedge algorithm to explore paths. For the bandit setting where only signal about the price and number of units won is available a slower update along the path that won is used.
I am not sure about the value added by the equilibrium analysis. It is not related to the other set of results. Authors show bad properties of the (K+1)'th price auction, and conclude that k'th price auction might be preferred - however that conclusion is not clear.
Update post rebuttal:
Thank you for the rebuttal. It seems fair to restrict to just uniform price auctions. Perhaps the authors can make this more explicit and include further justification in the paper.
For the preference between (K+1)'th, k'th price - I am not sure authors fully addressed this question. Agreed that zero revenue equilibria are bad and the auctioneer would like to know about that, but with the k'th price not even having a pure Nash equilibrium, how should the auctioneer choose? Perhaps the authors should explore non-pure Nash equilibria for k'th price and show that those guarantee non-zero revenue?
It would also be good if the authors could tie together the no-regret analysis with the pure Nash equilibrium analysis - may be by making statements about the coarse-correlated equilibria of the two auction formats.
Strengths: - The paper is well written. Explains all the results well and provides adequate details in the main body and the appendix.
- The algorithms for learning to bid are non-trivial, they build on existing technology but require ideas specific to the model.
Weaknesses: - The results for equilibrium analysis seem unrelated and don't add a lot of value. The conclusion about k'th price auction being preferred is not clear.
- The authors assume that uniform pricing auction is preferred. This could use more justification. Perhaps true first price or truthful payment rules are also worth studying. Would the regret analysis extend to those as well?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is the conclusion that k'th price is preferable clear? Is there any other connection to the rest of the paper of this equilibrium analysis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Authors discuss open questions remaining. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- Regarding equilibria, the possibility of learning algorithms promoting collusion is a significant worry. Thus, using an auction like the K+1-st, which features equilibria with zero prices, presents issues. These equilibria are not just resistant to coalition deviations but are also quickly identifiable by buyers. While participants might see this as beneficial, it does raise concerns about the auction's effectiveness.
Delving into the understanding of Nash Equilibria (NEs) further sheds light on worst-case scenarios, providing valuable insights for the auctioneer. An illustrative example of this importance lies in the context of the carbon market. Here, revenue generation stands as a pivotal objective, as the obtained revenue predominantly fuels investments in green technology.
- Regarding the K-th price auction being preferred: uniform price auctions utilizing the K-th pricing rule do not allow for an equilibrium with zero revenue. This observation holds significant relevance and should be taken into consideration when deciding upon the appropriate variant of uniform pricing to adopt.
- Regarding uniform pricing: Uniform pricing stands as a fundamental concept, on par with market equilibrium where the demand and supply get matched. Its practical application spans diverse sectors such as electricity, carbon trading, and institutions like the Bank of England and Treasury. Its pricing mechanism is perceived fair, which is important in settings such as carbon trading.
- Regarding regret in auctions beyond uniform pricing: Our analysis covers the special case of K=1, including the first price and second price auctions, using the K-th and (K+1)-st pricing rules respectively.
Additionally, our study establishes a strong link to graphical models, suggesting applicability in exploring other auction formats like GSP and GFP. However, more analysis is needed to fully address other formats, which we believe belongs to future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I do not have any more questions. | Summary: In this paper, the authors propose efficient algorithms that players can use for bidding, in both the offline and online settings. Furthermore, the paper shows regret lower bounds and then analyzes the quality of the equilibria in two main variants of the auction. It focuses on studying uniform price auctions from the angle of designing bidding algorithms for the players.
Strengths: This paper clearly points out several limitations of multi-unit uniform-price auctions that previous works have not focused on, and proposes novel algorithms addressing those limitations in both the online and offline bidding settings, supported by many theorems and their proofs.
Weaknesses: Regarding the carbon auction, the licenses for CO2 emissions mentioned in the abstract are not well addressed in the paper. Moreover, the abstract seems to be different from what I see on the review page. Please be consistent with the abstract that the authors deliver.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
- The uniform price auctions play a pivotal role in determining the allocation of CO2 emission licenses in the EU Emissions Trading System (EU ETS). See also the paper “Reducing Inefficiency in Carbon Auctions with Imperfect Competition, ITCS 2020”, which explains the auction model used for carbon emissions and gives additional references.
- In these auctions, learning effective bidding strategies is challenging. This is because of the nontruthful nature of the auction and the bid space being exponentially large, which the paper addresses.
- The relevance of revenue obtained from these auctions holds paramount importance in the context of the carbon market. This revenue can potentially drive investments in green technology and further sustainability initiatives. | null | null | null | null |
On Computing Pairwise Statistics with Local Differential Privacy | Accept (poster) | Summary: In this paper, the authors analyzed the problem of privately computing the quadratic form in the model of differential privacy, and provide a non-interactive local DP algorithm with MSE upper and lower bounds matching up to a log(k) gap. The paper further develops results for an interactive algorithm for the same problem and proves analogous bounds.
Strengths: The bound results for both non-interactive and interactive local DP algorithms are generic. The authors provided a complete analysis on the non-interactive algorithm along with its bound results.
Weaknesses: Compared to the linear queries result, the MSE bounds of the non-interactive local DP mechanism for estimating quadratic forms reveal a noticeable gap depending on $k$, the dimension of matrix W.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In addition to the discussions in the paper, can the author provide some intuitions on if the upper bound of the non-interactive local DP mechanism for estimating quadratic forms should depend on k?
2. Can the author provide some intuition on why the definition of $\gamma_2(W,\alpha)$ uses infinity norm instead of other norms? Is there any complexity results on other norms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Compared to the similar sample complexity results for computing the statistical queries in the models of differential privacy, the sample complexity results for computing the pairwise statistics in local DP reveals a gap that depends on the dimension of the matrix $W$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review & questions. Please find the answers to your questions below.
1. This is indeed a great question and is why we also pose it as an open question in our paper (lines 344-345). We conjecture this might be necessary since even the seemingly simpler heavy-hitter problem also requires a dependency on $k$ (e.g., [Bun et al., PODS’10]). However, this is just an intuition since we are not aware of a formal reduction between computing heavy-hitters and computing pairwise statistics.
Furthermore, note that the original work of [ENU20] on linear queries *also contained a $\\log k$ gap*. (In particular, $\\gamma_2(W, \\alpha)$ is required to be at least $\\log k$ for the result to hold; see Theorem 22 in their arxiv version.) It is only in our paper that this gap is closed, due to a simple tweak of the proof; see Appendix B for more details. We will make sure to emphasize this point more clearly in the revision.
2. Roughly speaking, the infinity norm is used due to the fact that we often want a worst case guarantee across all linear queries (i.e., $mMSE$ error). Recall that in the matrix mechanism we output $L(R h_x + z)$ where $z$ is the noise term. Even if the noise term $z$ is zero, our output is still $L R h_x$. Now, it turns out that we can find $h_x$ and $j$ such that the difference between $(LR h\_x)\_j$ (our answer for the $j$-th query) and $(W h\_x)\_j$ (the correct answer for the $j$-th query) is proportional to $\\|LR - W\\|\_{\\infty}$. (Namely, if $(LR - W)\_{ij}$ is the largest entry in absolute value, then we just let $x\_1 = \dots = x\_n = i$.) Since our reduction uses the $mMSE$ error measure, we consider the definition of $\\gamma_2(W, \\alpha)$ as stated in the paper.
If we want some average guarantee, then it seems plausible to change the infinity norm to, e.g., the Frobenius norm. However, we are not aware of previous results on this.
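To make the worst-case argument above concrete, here is a small numerical sketch (our own illustration, not code from the paper; the matrix sizes and names are hypothetical): placing all users on the column of the largest entry of $LR - W$ makes some query's error equal to the entrywise max norm.

```python
import numpy as np

# If (LR - W)_{j*, i*} is the largest entry in absolute value, then taking
# x_1 = ... = x_n = i* (i.e., h_x = e_{i*}) makes query j*'s error equal to it.
rng = np.random.default_rng(2)
k, r = 6, 3
W = rng.normal(size=(k, k))
L = rng.normal(size=(k, r))
R = rng.normal(size=(r, k))              # some (inexact) factorization L R ~ W
D = L @ R - W

j_star, i_star = np.unravel_index(np.abs(D).argmax(), D.shape)
h = np.zeros(k)
h[i_star] = 1.0                          # point-mass histogram: all users hold item i*

err_j = (L @ R @ h - W @ h)[j_star]      # error of the j*-th linear query (no noise)
assert np.isclose(abs(err_j), np.abs(D).max())
```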
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and I will keep my score. | Summary: This paper studies the computation of pairwise statistics with local differential privacy by considering the quadratic form computation. In order to obtain the lower and upper bounds, it proposes inter-reductions between quadratic forms and linear queries.
Strengths: Studying pairwise statistics with local differential privacy is interesting.
Weaknesses: (1) The presentation needs some improvement. There are grammatical mistakes in many places. The preliminary part is quite messy: a formal definition of central differential privacy is given, but the definition of local DP is informal. Randomized response mechanisms are not defined in the paper.
(2) I don't see the significance of the two reductions in Section 1.2. Either the reductions are trivial or the subtlety has not been made explicit yet.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Q1: Could you please explain the logic behind the sentence in Line 150?
Q2: Could you explain more the matrix mechanism especially the factorization? (Line 159)
Q3: I don't see the reasoning in Line 288 especially the second line there about "cross terms".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper highly depends on the results in ENU20. I don't see any significant technical contribution in this paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## High-level Response:
We completely agree with the reviewer that the reductions in Section 1.2 are simple, _but only in hindsight!_ On the other hand, we think of this as a significant contribution of our work for the following reason. While previous works [BBGK20 (AISTATS’20), CM22 (VLDB’23), BHBFG+22 (NeurIPS’22)] had elaborate and specific algorithms for _each_ specific kernel $f$, our reduction gives algorithms for *all* kernels that essentially match (and sometimes even improve upon) the guarantees in previous works. Similarly, none of the previous work showed any lower bounds for their problem, meanwhile our reduction gives nearly tight lower bounds for all possible kernels in the non-interactive setting (Theorem 6). In summary, our reductions give simple, tight, and extremely general results for the problem of computing pairwise statistics.
Regarding the comparison to [ENU20], we wish to point out that we study a *completely different problem* (computing pairwise statistics) compared to that paper (computing linear queries). Indeed, one of our main (and arguably surprising) contributions is to show that these two ostensibly unrelated problems are intimately related.
We hope the reviewer will consider these points in their (re)evaluation of the paper.
## Q1:
First, we apologize for the typos: on line 145 (and 131) $( \\hat{z}_1, \\dots, \\hat{z}_j )$ should be changed to $( \\hat{z}_1, \\dots, \\hat{z}_k )$.
Now, on to line 150: If we do not add noise at all, then we would set $\\kappa\_j = 0$ and the $( {\\hat{z}}\_1, {\\dots}, {\\hat{z}}\_k )$ on line 145 are exactly equal to the answer of the linear queries.
This means that $o_j = \\hat{z}\_{x\_j} = {{(W h\_x)}\_{x\_j}} + 0 = {{1_{x_j}}^T} W h_x$.
Averaging this over $j = 1, …, n$, we get $\\frac{1}{n}(o_1 + \\dots + o_n) = \\frac{1}{n}\\left(1_{x_1}^T + \\cdots + 1_{x_n}^T\\right) W h_x = h_x^T W h_x$, where the last equality simply follows from the definition of the normalized histogram $h_x$ (line 33).
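As a quick sanity check of this averaging step, a short numerical sketch (our own illustration, not code from the paper; all names are hypothetical):

```python
import numpy as np

# Illustrative check of the reduction identity, with no noise added:
# o_j = (W h_x)_{x_j}, and (1/n) * sum_j o_j = h_x^T W h_x.
rng = np.random.default_rng(0)
k, n = 5, 8
W = rng.normal(size=(k, k))             # an arbitrary query matrix
x = rng.integers(0, k, size=n)          # each user j holds an item x_j in [k]
h = np.bincount(x, minlength=k) / n     # normalized histogram h_x

o = (W @ h)[x]                          # per-user answers o_j = 1_{x_j}^T W h_x
assert np.isclose(o.mean(), h @ W @ h)  # averaging recovers the quadratic form
```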
## Q2:
The matrix mechanism is a standard tool in DP that dates back to (at least) the paper “Optimizing linear counting queries under differential privacy” from Li et al. in PODS'10. Given the long history of the topic, it is impossible to cover it thoroughly below; [ENU20] and the references therein contain a more detailed picture.
Nevertheless, we will try to explain it here. The matrix mechanism is based on the following idea. If the linear queries in $W$ are similar (or correlated in certain ways), it is not optimal to add independent noise to them. For example, let’s say that all linear queries in $W$ are identical. Namely, all the rows of $W$ are the same. The “trivial” algorithm here is to add independent noise to each answer. Due to (the advanced) composition of DP, this will mean that we will have to scale the noise by a factor of roughly $\\sqrt{k}$. Meanwhile, a much better algorithm is to compute the answer of just a single query, add noise to it, and then use this noisy value as the answer to all queries. This has the same noise as if we were to answer a single query, so we get a lot of savings by doing so! Here we can think of it as factoring $W = L \\cdot R$ where $L$ is the $(k \\times 1)$ all-one matrix and $R$ as the $(1 \\times k)$ matrix containing a single row of $W$. The aforementioned algorithm is thus to compute $y = R h_x + z$ where $z$ is the noise and answer $L y$. The matrix mechanism is the generalization of this, which allows arbitrary factorizations $L, R$. It turns out that $\\gamma_2(W)$ is exactly the error of the matrix mechanism (after some scaling) and that the optimal factorization can be done efficiently.
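The identical-queries example above can be sketched numerically as follows (illustrative only; the Gaussian noise, the noise scales, and all variable names are our own choices, not taken from the paper):

```python
import numpy as np

# All k rows of W identical => factor W = L R with L the all-ones column,
# so a single noisy value can answer every query.
rng = np.random.default_rng(1)
k = 100
row = rng.normal(size=k)
W = np.tile(row, (k, 1))                 # k identical linear queries
h = rng.dirichlet(np.ones(k))            # some normalized histogram

L = np.ones((k, 1))
R = row[None, :]
assert np.allclose(L @ R, W)             # a valid factorization W = L R

sigma = 0.1                              # baseline noise scale for one query
# Trivial mechanism: independent noise per query, scaled up by ~sqrt(k)
# (mimicking the composition cost described above).
trivial = W @ h + rng.normal(0.0, sigma * np.sqrt(k), size=k)
# Matrix mechanism: noise a single value R h_x + z, then reuse it via L.
matrix_mech = (L @ (R @ h + rng.normal(0.0, sigma, size=1))).ravel()

mse_trivial = np.mean((trivial - W @ h) ** 2)
mse_matrix = np.mean((matrix_mech - W @ h) ** 2)
assert mse_matrix < mse_trivial          # much smaller error in this illustration
```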
## Q3:
We are simply using the identity $(a + b)^2 \\leq 2a^2 + 2b^2$ here, so there is no cross term to be taken care of. | Summary: This paper studies the problem of computing Quadratic Forms $h_x^TWh_x$ under Local Differential Privacy, where $h_x$ is the normalized histogram representation of a vector $x\in [k]^n$. In particular, reductions to and from the problem of computing linear queries are established, through which algorithms and tight bound results (in mean squared error) consequently follow from those in linear queries. The reduction from linear queries is built on the observation that the $j$th entry $(Wh_x)_j$ of the linear query can be obtained from three quadratic forms:
($h_{x\cup 1_j}^TWh_{x\cup 1_j}$), ($h_{x}^TWh_{x}$) and ($h_{1_j}^TWh_{1_j}$).
The reduction to linear queries is built on two observations: 1) $h_x^T W h_x^T$ is a linear query on $h_x$ with weights $Wh_x$ (also a linear query); 2) $h_x^T W h_x^T$ is an inner product of $Lh_x$ and $Rh_x$, where $W=L^T R$. The second approach can be further refined, where using a JL projection on $L$ and $R$ allows the magnitudes of the noise terms associated with privatizing $Lh_x$ and $Rh_x$ to be reduced to $O(\log n)$ from $O(k)$.
Strengths: - This paper presents a generic framework for estimating pairwise statistics that can be expressed as quadratic forms (although utility in some statistics can be improved by exploiting their specific properties).
- The ideas used are interesting and natural, the presented analysis is solid.
- They discuss both interactive and non-interactive settings, and show that computing pairwise statistics separate interactive and non-interactive local DP.
Weaknesses: - In the proof of Theorem 5, the condition on $\alpha$ requires $\varepsilon \ge 1/\sqrt{n}$, but $l=O(\log(k) \varepsilon^2 n)$ requires $\varepsilon < 1$ for $l< n$? These place $\varepsilon$ in a restrictive range.
- Although the main paper focuses on presenting reductions in the non-interactive setting, there are places where the discussions of interactive and non-interactive appear together, which caused some distraction for me. I think all of the discussion on the interactive setting can be moved to its own section toward the end.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - In the proof of Theorem 14, line 286 requires $W$ to be symmetric?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review & questions. Please find the answers to your questions below.
## $\\epsilon$ value in Theorem 5:
We remark that the lower bound requirement $\\epsilon \\geq 1/\\sqrt{n}$ for non-trivial utility is present in essentially _all known_ local DP results and is supported by Theorem 6 (which implies that non-trivial utility is impossible when $\\epsilon \\ll \\tilde{O}(1/\\sqrt{n})$). In fact, to the best of our knowledge, this is required to get sub-constant error even for the very simple problem of averaging binary values.
On the other hand, there is no upper bound required on $\\epsilon$. Note that our proof of Theorem 5 works for arbitrarily large $\\epsilon$ and the condition $\\ell \\leq O(\\log(k)\\epsilon^2 n)$ does *not* enforce any upper bound on $\\epsilon$.
## Writing (Interactive vs Non-Interactive):
Thanks for your suggestion and apologies for causing the distraction. In the revision, we will consider separating the discussions on the interactive vs non-interactive in a more streamlined way.
## Theorem 14:
We implicitly assume that $W$ is symmetric, which is the case when we construct $W$ from a kernel $f$ as specified on line 293. Indeed, line 286 uses symmetry of $W$ as you said. We will make sure to clarify this in the revision. (Note that our algorithms work even for asymmetric $W$. However, the lower bound is harder to handle due to the cross terms.)
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarification. | Summary: The paper considers the computation of quadratic forms of histograms under local differential privacy (LDP). Previously, special cases of this problem have been studied, but this paper presents a general theory analogous to the existing theory for linear queries. The problem is studied both in the standard LDP setting and using "interactive", multi-round protocols where each user outputs an LDP value in every round. The 9-page submission focuses on the standard, non-interactive setting. Lower and upper bounds are presented that are tight up to polylogarithmic factors. A sketch of interactive protocols (with better utility) is presented, with details in supplementary material.
Strengths: - Identifies a natural problem for which special cases have been studied before and presents a general theory
- The results are tight (up to polylogarithmic factors) for *every* quadratic form
- The techniques for multi-round LDP protocols are particularly interesting, and separate single- and multi-round LDP for a natural problem
- The mechanisms are simple to describe
- The paper is very well-written
Weaknesses: - Though LDP has been deployed in practice, its general usefulness has sometimes been questioned, since it tends to require very large data sizes to get good utility
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Can you comment on potential practical use of your results? Are there large factors hidden in big-O notation, and in particular what kind of data size is needed to get good utility for the examples in Corollary 7?
- The shuffle model is sometimes used to amplify privacy of LDP protocols. What results in this model are implied by your results?
- In line 36, I suppose $\mathcal{X}$ should be $\mathcal{X}^2$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Since there are no experiments, I would have liked at least some discussion of the extent to which the protocols proposed might be practical, or whether the contribution is considered purely theoretical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review & questions. Please find the answers to your questions below.
- While we omitted the constants (following the precedence of not reporting exact constants in most previous work), it is not hard to compute them in Corollary 7. We will consider adding them in the revision. In terms of practicality, our algorithms subsume those in previous work such as [BBGK20, BHBFG+22]. Therefore, the practicality is similar, e.g., our protocol for Gini’s diversity index has less than 0.01 error for $\epsilon=3, k \leq 1000$ and $n \geq 100,000$.
- This is a great question. Our non-interactive algorithm only requires a vector-summation primitive (where each vector has norm at most $C$). Since known protocols for vector summation in the Shuffle model achieve an RMSE of $O\\left(C\\cdot\\sqrt{\\log(1/\\delta)}/\\epsilon\\right)$ [Balle et al., CCS 2020], we can simply replace Theorem 9 in our paper by this protocol and reduce MSE by a factor of $n$ in Theorem 5 and Corollary 7. Thank you for suggesting this; we will add this to the revision.
- Re line 36: Yes, $\mathcal{X}$ should be replaced by $\mathcal{X}^2$. Thank you for pointing this out; we will fix this in the revision.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the additional details. I think it would make sense to add something along those lines to the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering | Accept (poster) | Summary: This paper addresses the challenging task of answering complex queries on incomplete knowledge graphs, where missing knowledge introduces additional complexity. Previous approaches either employed end-to-end architectures with opaque reasoning processes or relied on simple neural link predictors, sacrificing information gain for computational efficiency. To overcome these limitations, the authors propose a parameter-efficient score adaptation model optimized for recalibrating neural link prediction scores in complex query answering. While the neural link predictor remains frozen, the adaptation component, with minimal additional parameters, is trained on the downstream task. The proposed method significantly improves accuracy compared to state-of-the-art approaches, achieving higher Mean Reciprocal Rank values while using only a fraction of the available training query types.
Strengths: ### Originality
The paper introduces a novel approach to addressing the task of answering complex queries on incomplete knowledge graphs. The use of a parameter-efficient score adaptation model, optimized for recalibrating neural link prediction scores, sets it apart from previous methods. This original approach demonstrates innovation in the field of complex query answering and contributes to advancing the understanding of addressing missing knowledge in knowledge graphs.
### Quality
The paper provides thorough evaluations and comparisons with state-of-the-art methods, demonstrating that the proposed approach produces significantly more accurate results. The proposed method significantly improves accuracy compared to state-of-the-art approaches, achieving higher Mean Reciprocal Rank values while using only a fraction of the available training query types.
### Clarity
The paper effectively communicates the proposed approach and its components, such as the parameter-efficient score adaptation model and the frozen neural link predictor. The writing is clear, concise, and well-structured.
### Significance
The paper addresses an important and challenging task in the field of complex query answering on incomplete knowledge graphs. By introducing the approach and demonstrating its superior performance compared to existing methods, the paper makes a significant contribution to the advancement of techniques for handling missing knowledge in knowledge graphs.
Weaknesses: The authors haven't discussed the limitation of this work in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a solid work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Regarding the following,
- The authors haven't discussed the limitation of this work in the paper.
We tried our best to run all possible ablations and analyses for this paper, and some additional experiments came up during this rebuttal phase. We will add results of CQD with the considered negation functions to the camera-ready version, along with a more detailed analysis of the explainability properties of CQD$^\mathcal{A}$ -- which we assume are inherited from CQD but do not discuss in the main paper, mainly for space reasons -- as well as potential future work directions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! | Summary: The paper proposes a score adaptation model called CQDA for efficient complex query answering on incomplete knowledge graphs. The authors address the problems in existing methods that are either hard to interpret or require intensive training. CQDA is a parameter-efficient model that recalibrates neural link prediction scores for the complex query answering task. The authors demonstrate that CQDA outperforms state-of-the-art methods in terms of accuracy, data efficiency, and robustness.
Strengths: - The paper addresses an important problem in the field of complex query answering on incomplete knowledge graphs. The proposed CQDA model provides a solution that improves accuracy while maintaining computational efficiency.
- The paper introduces a parameter-efficient score adaptation model that recalibrates neural link prediction scores. This approach reduces the need for extensive training data and resources.
- Experimental results show that CQDA achieves significantly better results than current state-of-the-art methods. The model is also shown to be data-efficient and robust in out-of-domain evaluations.
Weaknesses: I think there is a discrepancy between Figure 1 and the statement, and thus the paper lacks an important baseline: **normalizing scores then merging**.
In Figure 1, the paper shows that the score scale may be different for different subqueries, which makes me think that the original CQD may be without normalization, which maps the score from any scale to [0,1]. However, the paper also mentions that they follow previous work to add normalization, as below:
> To ensure that the output of the neural link predictor is always in [0, 1], following Arakelyan et al. [2021], Minervini et al. [2022], we use either a sigmoid function or min-max re-scaling.
So, what would happen if we do a merge after normalization on scores for all subqueries? Please correct me if I miss anything.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Figure 1 shows the scores over all entities but does not clearly present if the score is from a score function, or from a linear layer. If I understand correctly, the score before adaptation should come from the score function $\phi$, right?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review! Regarding your concerns –
- In Figure 1, the paper shows that the score scale may be different for different subqueries, [..] However, the paper also mentions that they follow previous work to add normalization as below [..]
In CQD, for the t-norm and t-conorms to be applicable, link prediction scores need to be mapped to the $[0, 1]$ range, which they achieve, for example, via the sigmoid function. However, we find that these transformations may be insufficient to solve the score calibration problem outlined in Figure 1. For instance, given two atoms $a_1$ and $a_2$ with scores in $[-10, -5]$ and $[5, 10]$, using the sigmoid will cause the scores of the former atom to be close to $0$ and the scores of the latter atom to be close to $1$, which will cause the latter scores always to be ignored when applying the minimum t-norm. In this work, we propose a solution to this problem by learning to adapt the scores using a simple transformation learned on the downstream complex query answering task by back-propagating through the complex query answering process. In our rebuttal PDF (Table 1), we included an additional experiment that evaluates previous work with normalization but without calibration, which we observe produces less accurate results than our proposed CQD$^{\mathcal{A}}$.
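To make the calibration issue concrete, a toy numerical sketch (the scores are hypothetical, not taken from the paper or its code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Atom 1 raw scores lie in [-10, -5]; atom 2 raw scores lie in [5, 10].
a1 = np.array([-10.0, -7.5, -5.0])
a2 = np.array([5.0, 7.5, 10.0])

s1, s2 = sigmoid(a1), sigmoid(a2)   # after normalization: s1 ~ 0, s2 ~ 1
conj = np.minimum(s1, s2)           # minimum t-norm for the conjunction
assert np.allclose(conj, s1)        # atom 2's scores never influence the result
```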
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks to the authors for the clarification. It makes me more confident that this paper should be accepted.
To sum up, the proposed method is simple yet efficient. I have no reason to reject.
Strengths: 1. Novel Approach: The paper introduces a unique and novel approach for answering complex queries on incomplete knowledge graphs. The use of neural link predictors and the adaptation component allows for efficient complex query answering while providing interpretable answers. Well motivation.
2. Improved Accuracy: The proposed model, CQD^A, outperforms current state-of-the-art methods. This improvement demonstrates the effectiveness of the re-calibrated neural link prediction scores in producing more accurate results.
3. Data Efficiency: CQD^A achieves competitive results even with only 1% of the training data. This data efficiency is advantageous as it reduces the computational and resource requirements for training the model.
4. Robustness: The model's robustness is demonstrated through out-of-domain evaluations. The ability to perform well in different domains further strengthens the applicability and effectiveness of CQD^A.
Weaknesses: 1. Interpretability of Results: While the paper mentions that the proposed approach provides interpretable answers, it would be beneficial to provide some concrete examples or explanations to illustrate this interpretability.
2. Further Analysis of Training Data Reduction: The paper mentions that CQD^A achieves competitive results with only 1% of the training data. It would be interesting to see a more detailed analysis of the impact of reducing the training data on the model's performance across different query types and datasets.
3. Future Directions: It would be valuable to discuss potential future directions for the research. This could include exploring the applicability of CQD^A in different domains or investigating the scalability of the approach to larger knowledge graphs.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: see weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are ignored in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and valuable feedback. We would like to address the following points:
- It would be beneficial to provide some concrete examples or explanations to illustrate this interpretability.
We would like to refer you to our global response, where we clarify why CQD$^\mathcal{A}$ remains as interpretable as CQD. We will provide an additional analysis for this aspect in the camera-ready version.
- Discussion of future work.
In our conclusion, we emphasise that an important direction of future work lies in fundamental enhancements to link prediction methods. With CQD$^\mathcal{A}$, these improvements can be easily adapted for complex query answering in a data-efficient and computationally efficient way. However, we also identify other promising avenues for future exploration. One such area is the significant gap across all methods between EPFO queries and queries containing negations, potentially revealing a core limitation in existing query-answering techniques. Additionally, we propose investigating the specific type of link predictor used in CQD$^\mathcal{A}$, which employs ComplEx+N3 as the default model. This exploration may pave the way for designing link prediction models that generalise better to complex query answering. We will include this discussion in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification! But I'm still interested in the impact of reducing the training data on the model's performance across different query types and datasets. The results needn't be included in the paper. You can comment after the discussion. I'm just curious whether some types of query data make different contributions to the performance, which may be similar to instruction fine-tuning in NLP. | Summary: This paper proposes an adaptation model on top of neural link predictors that learns to re-calibrate scores to suit the complex query answering task. It shows empirical evaluation on standard benchmark datasets to demonstrate its value. One of the benefits is that a simple calibration model with less training data on complex questions gives better results than the ones present in the literature.
Strengths: Paper is well written and easy to follow. Contributions are clearly spelled out.
Experimental results cover complex query answering benchmarks and show the value.
Weaknesses: Contribution seems incremental on top of the existing works. Seems like a small extension to CQD.
Overall results show benefits only on certain query cases compared to state of the art.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The model is fine-tuned with 2i and 3i queries, and test results on those two query patterns don't show any win on two datasets. Any comment on why this is happening?
The method seems to get a clear benefit on NELL but not on other datasets. Any comments on why this is happening?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and valuable feedback. We would like to address the following points:
- Contribution seems incremental.
Our analysis points to a fundamental limitation of CQD that is not necessarily trivial to solve if we want to maintain its favourable properties, such as data efficiency and interpretability. We directly tackle the fundamental limitation of uncalibrated scores with a lightweight function that maintains interpretability (due to its linearity), which we argue is not an obvious extension but rather an effective one compared to significantly more computationally demanding and less data-efficient methods.
- Method seems to get clear benefit on NELL but not on other datasets. any comments on why its happening.
We respectfully disagree with this conclusion. In the case of EPFO queries on FB15k and FB15k-237, CQD$^\mathcal{A}$ remains close to GNN-QE, but CQD$^\mathcal{A}$ outperforms all baselines on all datasets on queries including negations, all while using $10^3\times$ fewer parameters. In the low-data regime (described in the Data Efficiency subsection), the advantages of CQD$^\mathcal{A}$ over GNN-QE become more pronounced. In additional results (see Table 2 in the rebuttal PDF), we have included an additional comparison with BetaE in the low-data regime, where CQD$^\mathcal{A}$ still produces significantly more accurate results than the baselines. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and valuable feedback. We appreciate that the reviewers acknowledge that CQD$^\mathcal{A}$ proposes a novel (jfF5, 5Xze) and technically sound method (jjs4) for data-efficient complex query answering. We have incorporated the provided feedback into our work, including additional experiments (highlighted in red) in the rebuttal PDF.
**Effectiveness of CQD$^\mathcal{A}$ in additional settings**
In the rebuttal PDF, we include additional experiments motivated by the reviewers' feedback that further indicate the effectiveness of CQD$^{\mathcal{A}}$:
- Table 1 shows the result of vanilla CQD applied directly to queries with negations. We observe that CQD$^{\mathcal{A}}$ still yields more accurate results, indicating that simply normalizing via the sigmoid, or min-max re-scaling, may not be sufficient to solve issues caused by non-calibrated scores, and thus CQD$^{\mathcal{A}}$ significantly benefits from an additional fine-tuning step on the downstream complex query answering task.
- Table 2 includes an additional comparison with BetaE, showing how CQD$^{\mathcal{A}}$ yields significantly more accurate results on the downstream complex query answering tasks.
**Interpretability of CQD$^{\mathcal{A}}$**
Reviewers jfF5 and jjs4 indicated that it is not clear whether CQD$^{\mathcal{A}}$ maintains interpretability. One of the advantages of CQD is the ability to inspect link prediction scores at each step of the query-answering process. This allows us to explicitly access and assess the intermediate candidate assignments made by the model for each reasoning step. Our proposed calibration function is monotonic, so the order in the ranking produced by the link prediction scores at each step is preserved and can be interpreted in the same way as in CQD (note that calibration does affect how scores are combined during complex query answering). This would not necessarily be the case if the calibration function were non-monotonic. We will clarify this aspect in the camera-ready version.
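The order-preservation argument can be illustrated with a minimal sketch (the scores and calibration coefficients below are made up for illustration, not taken from the paper's trained model): a monotonic affine calibration $\rho(s) = \alpha s + \beta$ with $\alpha > 0$ rescales link prediction scores without reordering the candidate ranking.

```python
import numpy as np

# Hypothetical link prediction scores for five candidate entities.
scores = np.array([3.2, -1.5, 0.7, 5.9, 0.1])

# Affine calibration with positive slope (illustrative values):
# monotonic, so it can rescale scores without reordering them.
alpha, beta = 0.4, 2.0
calibrated = alpha * scores + beta

# The ranking of candidates is identical before and after calibration.
assert np.array_equal(np.argsort(scores), np.argsort(calibrated))
print(np.argsort(-calibrated))  # best-to-worst candidates → [3 0 2 4 1]
```

A non-monotonic calibration (e.g. one with a negative slope on part of its domain) would not satisfy this invariant, which is why the intermediate rankings would no longer be interpretable in the same way as in CQD.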
We provide answers to the remaining questions in individual responses.
Pdf: /pdf/979f25ca264eaa4ed07858cd35c000f350e68820.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes CQD$^\mathcal{A}$, an adaptive variant of the complex query decomposition (CQD) approach. The authors identify that the neural link predictors used in CQD produce uncalibrated scores that can lie in very different ranges from each other. The authors show that CQD$^\mathcal{A}$ can learn an adaptation function that alleviates this problem.
In an extensive evaluation the authors show that this adaptive calibration imbues CQD with favorable properties such as being more data efficient, robust to OOD, and able to support negation.
Strengths: The paper addresses an important limitation of CQD, wherein the neural link predictors can produce vastly different ranges of scores, leading to reduced performance. The authors provide a very simple fix, inspired by Platt scaling, that allows these scores to be calibrated.
The authors show that this allows the model to boast better results on three benchmarks. The authors also show that this adaptive setting allows the modelling for negations that was previously not feasible. (**Significance** and **originality**)
The paper is generally well written and easy to follow. (**clarity**)
The authors also provide extensive analysis with regards to their proposed approach which can give useful insights regarding what helps improve performance. (**quality**)
Weaknesses: I think one weakness is that the approach lacks a great deal of novelty. The proposed method is a minor augmentation to an existing approach. However, I still believe that the authors have identified an important problem with CQD and have provided a technically sound solution.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Perhaps the part about negation could be better explained in the paper. The authors say that negations are modelled by either a standard or a strictly cosine function. In theory this could be added to vanilla CQD as well. Was this experimented with? Maybe improved results by CQD$^\mathcal{A}$ over vanilla CQD would bolster the importance of calibration for supporting negations.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not addressed the limitations or potential societal impacts of their work.
Perhaps the authors could talk about how CQD$^\mathcal{A}$ can produce explainable solutions to complex queries, which could enable systems that can be verified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments and valuable feedback. Regarding your questions,
- Negations are modelled by either a standard or strictly cosine functions. In theory this could be added to vanilla CQD as well.
Even though the link prediction scores are not explicitly calibrated in the original formulation of CQD, we agree that negations can, in principle, be applied with similar functions. Based on your comments, we updated the results in Table 5 to include negations with CQD in our comparison (see Table 1 in the rebuttal PDF), and we find that the model still benefits from an additional calibration via fine-tuning step. This confirms the effectiveness of calibrating the scores for the complex query-answering task.
- How can CQD$^\mathcal{A}$ produce explainable solutions to complex questions which can create systems that can be verified?
We would like to refer you to our global response, where we clarify why CQD$^\mathcal{A}$ remains as interpretable as CQD. We will update the camera-ready version with additional analysis on this aspect.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks for the clarifications! | null | null | null | null | null | null |
How a Student becomes a Teacher: learning and forgetting through Spectral methods | Accept (poster) | Summary: The authors propose a novel technique that allows identifying an invariant subnetwork in a student model that mirrors the characteristics of the teacher in terms of computing neurons, path distribution, and topological attributes.
Strengths: - The manuscript is clearly structured, and the subject of research is relevant
- The authors have developed a novel technique to identify invariant characteristics of a student model mirroring key characteristics of the teacher network.
Weaknesses: - The authors have used a single synthetic dataset to perform the experiments.
- There is little reference to related work, and no baselines are considered when comparing the proposed approach
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: GENERAL COMMENTS:
- (1) The authors have used a single synthetic dataset to perform the experiment. The results could be strengthened by considering multiple datasets (different synthetic parameterizations and publicly available datasets).
- (2) We miss a more detailed related work. The authors provide a brief introduction, but references to related work are scarce. The authors may reference some works related to model distillation and pruning. In particular, we think the following works may be useful, given they mention some of the concepts and works related to those used in the manuscript:
- Liang, Tailin, et al. "Pruning and quantization for deep neural network acceleration: A survey." Neurocomputing 461 (2021): 370-403.
- (old, but relevant) Elizondo, David, and Emile Fiesler. "A survey of partially connected neural networks." International journal of neural systems 8.05n06 (1997): 535-558.
- Gou, Jianping, et al. "Knowledge distillation: A survey." International Journal of Computer Vision 129 (2021): 1789-1819.
- (3) The authors should acknowledge the limitations of their research, discuss whether these limitations impact the results, and provide some insights on how these limitations could be addressed in future work.
FIGURES:
- (4) Figure 1: enhance the wording of the caption. "Histogram for the quantities (10) (in blue) and (9) (in orange)" -> "Histogram for the quantities (Eq. 10) (in blue) and (Eq. 9) (in orange)"
SPELLING/WORDING:
- (5) "Remarkably several topological proprieties of the teacher" -> "Remarkably several topological properties of the teacher"
- (6) "These latter parameters are denoted λ(k) for reasons that will become clear in the following, " -> in the following?
- (7) "The dataset will be generated by choosing the probability distribution" -> "The dataset was generated by choosing the probability distribution"?
- (8) "values stay stably across all choices of h above 20" -> "values stay stable across all choices of h above 20"
- (9) "To assess feature localization within the network, we generated aggregated histograms of the scalars (9) and (10)." -> "To assess feature localization within the network, we generated aggregated histograms of the scalars (Eq. 9) and (Eq. 10)."
- (10) "To ensure that the achieved conclusions are not influenced by the size of the second hidden layer, various student configurations have been tested, yielding equivalent results." -> Please provide some details.
- (11) "The blue histograms stands for (9)," -> "The blue histograms stands for (Eq. 9),"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should acknowledge the limitations of their research and provide some insights on how these limitations could be addressed in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the referee for their positive review and insightful suggestions. In relation to the 'Figures' and 'Spelling/Wording' sections, we are prepared to implement the suggested corrections should the paper progress to the camera-ready stage.
Regarding the General Comments section:
- _1)_ We concur with the referee on this point. We have conducted additional experiments that we are eager to incorporate into the manuscript. Specifically, in Figure 1 of the PDF, we present an analysis for various datasets: Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and a more intricate Teacher structure. In the latter, the size of the first hidden layer differs from that of the second, removing the high degeneration of the original example and making it a more relevant case study. The student structure remains consistent, with the dimension of the second hidden layer set to 50. In this context, the performance metrics (in terms of accuracy and loss function) for both the spectral Student and the traditional student are comparable across all datasets. The findings align with the scenario described in our original paper: the spectral regularization is capable of finding a computational core (panels $a-d$) whose MSE behaviour, after perturbation, is independent of the initial layer size (panels $e-h$). Due to space limitations in the PDF, we opted against including supplementary plots, such as the loss function values and the eigenvalue histograms.
- _2)_ We are in agreement with the referee on this matter. We faced challenges in locating relevant references specific to node pruning, as the majority of existing literature appears to concentrate on weight pruning. We appreciate the valuable references provided by the referee and intend to incorporate them into our manuscript in conjunction with others we have found.
- _3)_ The referee's observation is on point. We will acknowledge the limitations of our study. In particular, we recognize that computational constraints prevented us from employing highly complex models where outcomes might vary. We hope to explore such scenarios in future research. Furthermore, we haven't fully explored the capabilities of the spectral decomposition. Indeed, it's feasible to conduct a feature relevance analysis in the input space. By setting $w_{ji} = (\lambda_j^{in}-\lambda_i^{out})\phi_{ji}$ in the first layer, the eigenvalues $\lambda^{in}$ could emphasize the relevance of the input's $j$-th component.
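The input-relevance idea sketched above can be illustrated with a small, hypothetical example (random stand-ins for trained values; the layer sizes and seed are made up, and this is not the paper's actual implementation): building a first-layer weight matrix as $w_{ji} = (\lambda_j^{in}-\lambda_i^{out})\phi_{ji}$ and ranking input components by $|\lambda^{in}_j|$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 10, 20  # illustrative layer sizes, not the paper's exact setup

# Random stand-ins for the trainable spectral parameters.
lam_in = rng.normal(size=n_in)        # input-side eigenvalues, lambda^{in}
lam_out = rng.normal(size=n_out)      # output-side eigenvalues, lambda^{out}
phi = rng.normal(size=(n_in, n_out))  # eigenvector entries, phi_{ji}

# w_{ji} = (lambda_j^{in} - lambda_i^{out}) * phi_{ji}, following the
# rebuttal's convention: j indexes inputs, i indexes the layer's outputs.
w = (lam_in[:, None] - lam_out[None, :]) * phi

# After training with the eigenvalue regularization, |lambda^{in}_j| would
# rank the relevance of input component j; here the ranking is arbitrary
# because the values are random.
relevance = np.abs(lam_in)
print(np.argsort(-relevance))  # input features, most to least relevant
```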
We hope the referee appreciates the aforementioned changes and we thank them again for the interesting feedback.
---
Rebuttal Comment 1.1:
Comment: We thank the authors for the rebuttal. We consider our observations have been addressed and we will keep the score. | Summary: This paper analyses the performance of a spectral parameterisation/regularisation scheme for neural networks. After first introducing the spectral approach, the authors describe student-teacher experiments where they attempt to distil a fixed teacher network’s behaviour into a student network. The authors show that the spectral parameterisation gives equivalently good predictive performance, but that the structure of the trained network is considerably different. In particular, they show that the spectral network shows significant sparsity when it is over-parameterised with respect to the teacher. Intriguingly, they show a way to measure the “size” of a dense computational core that seems to be invariant with respect to changing the size of the student network.
Strengths: The paper is on the whole well-written and easy to understand. The authors have done a very good job of taking a theoretical topic and making it accessible and understandable.
The idea they present is simple at its core, and the results are very interesting: both that their parameterisation/regularisation scheme leads to marked sparsity and the presence of a “core” with invariant size.
While I think the results are somewhat limited (see below) the results that are presented are clearly described and care has been taken with the experimentation, leading to a convincing presentation. EDIT: the authors have fully addressed my concern around limited evaluation.
I am unable to judge the significance or novelty of these results, as I am not an expert in the field of this paper. I would say that from the perspective of a general Neurips participant that I found the results interesting and exciting. So while I would defer to experts to place the work within the literature, I will comment that I think the results may be of interest to the general Neurips audience.
Weaknesses: I think the only real weakness of the paper is that the experiments are quite limited in scope. The authors use a single source of “teacher” data, which is a fully-connected network with a certain shape. I think the paper would be considerably stronger if the authors were to repeat their analysis with further sources of data. In particular, I think it would be an interesting complement to use a non-synthetic source of data as a training objective for the network, if a suitable source could be found. While this would limit the ability of the authors in making the exact correspondence between the size of the teacher and the effective size of the student, I think seeing the same behaviours - enhanced sparsity as compared to the direct parameterisation, and a “phase transition” like behaviour showing a “computational core” - would considerably strengthen the results of the paper. I think without some strengthening of the results it is difficult to accept the paper, but I think further results would change this assessment. EDIT: the authors have fully addressed my concern around limited evaluation.
While on the whole the paper displays good clarity, there are some places where the use of language leads to confusion. I have called out areas below in the questions section where I think there was particular confusion, and I think if these are addressed the paper will be sufficiently clear.
I didn’t understand the “path analysis” section, or what the significance of this result was. Perhaps being more precise with the description of the analysis would have helped (see questions).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I have two non-trivial questions, and then a lot of minor comments. The two questions:
I found it surprising that the number of “active” nodes in the spectrally parameterised student networks corresponded exactly with the number of hidden nodes in the teacher network. I’m not sure how to understand this. It in some sense suggests that the data emitted from the teacher network in some sense “maxes out” the computational capacity of the network. It’s not obvious to me at all that this would happen with a randomly initialised network. In fact, it’s not really obvious what I even mean by “computational capacity” of a network. It may just be that I’m not at all an expert in these matters, but I found this confusing. Either way, I think it would be very helpful for the authors to make some comment on why such a perfect numerical correspondence might be expected, and how it is to be interpreted.
The other question, as noted above is that I didn’t understand the path analysis section. In particular I wasn’t able to understand the transformation described in line 279-281, which I think then made it difficult to understand the rest of this section. Perhaps define the transformation explicitly in terms of the components?
Some minor comments. As noted above some of these are simply English-language suggestions that I think might make the paper a little easier to read:
Line 9 (and 55): I’m not familiar with the term “quenched teacher structure” and searching the internet didn’t help me. This might be a specific technical term that I’m not familiar with, but if not I wonder whether it might be more natural to use “frozen teacher” which I think would be the more usual English idiomatic form for a teacher whose weights are not modified during the experiment.
Line 28: “self-consistent elucidation of intertwined data correlations” sounds a bit confusing, I wonder if the authors could state more plainly what they mean here?
Line 33: “spatial sorting of the chosen reservoir of computing neurons” I’m not sure what this means … what does “spatial mean” in this context?
Line 46: ”thus more suitable for possible hardware deployment”. This is an interesting claim. I wouldn’t have (naively) guessed that a sparse computation in eigenvalue/eigenvector space would necessarily map well to the constraints of current ML accelerator hardware. I don’t think this point needs to be expanded upon in the paper, but perhaps the authors could add a suitable reference (if one exists) that the interested reader could follow up on?
Line 75: I think I’m to read equation 1 as a simple reparameterisation. In which case it’s a little bit confusing as the w_{ij}^{(k)} here is not the same as the one introduced above. Maybe it would be clearer if the initial weight matrix were given a slightly different symbol?
Line 105: notation a bit confusing, it looks like a lower-case phi has turned into an upper-case phi. This isn’t consistent with how matrices and matrix elements are handled previously in the manuscript. It then becomes lower-case bold in equation 6 which is consistent with the previous notation.
Line 202-204: I didn’t understand what the “alternative scenario” here was. Perhaps the authors can reword this part to be more clear?
Line 215: “mean squared error (MSE) of S(h)” -> “mean squared error (MSE) of the predictions of S(h)”?
Line 226: the identification between the histogram colour and the equation here is the opposite of that in the caption of Figure 1! I think the text is the correct one.
Line 224: I’ve never heard “vehiculate” used before, and the dictionary doesn’t suggest a definition that quite fits here. Can I suggest that “drives” might be a more standard English term for this?
Figure 3 caption: describe what the colours correspond to.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I think the only considerable limitation is what I mentioned in the questions section about the weakness of evaluating with only one source of data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for the insightful feedback and in the following we will address all of their raised points.
**TWO MAJOR QUESTIONS (number of active nodes and path analysis)**:
In the presented results, we propose that the data originates from a 10-dimensional space and is subsequently projected in a complex manner into a 20-dimensional space, corresponding to the second and third layers. This projection can be further understood as the union of a linear function and a non-linearity (specifically, the ReLu function), particularly when biases are turned off as in our setup. Thus, we posit that at least a 20-dimensional space is necessary to capture the complete variability of the function at this stage.
For clarity, the dimension of the student network's second hidden layer is kept fixed at 20. Given this design (with many dimensions all equal to 20), we believe it is logical to anticipate that a well-optimized first hidden layer of the student, also of dimension 20, would be competent in identifying the salient features from the teacher's first and second embedding layers. Our rationale is further substantiated by the paper *Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup* (Goldt et al.). This work, which influenced ours, points in the direction that in a teacher-student setup, the student's dimensionality should at least match that of the teacher for effective learning.
In this teacher-student framework, both networks share identical activation functions and biases. This allows the linear components of the two information transfers (from the input to the first layer and from the first layer to the second) to be isolated and compared. Specifically, by examining the weights represented by $W=w^{(2)}w^{(1)}$, we can understand how activation moves linearly from the input to the second hidden layer. Each matrix entry, $W_{km}$, is defined by the equation: $W_{km}=\sum_{j=1}^{N_1=20} w^{(2)}_{kj} w^{(1)}_{jm}$
Here, it becomes evident that we're aggregating all the pathways from neuron $k$ to neuron $m$. Our "Path analysis" section delves into the distribution of these pathways, which represent the cumulative effects of two linear operations. The term "path", which is quite unconventional, stems from the association between linear operations and bipartite graphs.
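The path-aggregation identity above can be checked numerically with a minimal sketch (random weights standing in for the trained networks; the layer sizes follow the 10-20-20 setup described in the rebuttal): each entry of $W = w^{(2)}w^{(1)}$ equals the explicit sum over all two-edge paths through the hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 20, 20      # sizes from the teacher-student setup
w1 = rng.normal(size=(n_hid, n_in))  # input -> first hidden layer
w2 = rng.normal(size=(n_out, n_hid)) # first hidden -> second hidden layer

# Linear composition of the two information transfers.
W = w2 @ w1

# Entry (k, m): explicit sum over all two-edge paths through neuron j.
k, m = 4, 7
paths = sum(w2[k, j] * w1[j, m] for j in range(n_hid))
assert np.isclose(W[k, m], paths)
```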
**MINOR QUESTIONS (sorted by line reference)**
- _line 9)_ Yes, the referee is right, the term is a substitute for "frozen" and we are willing to implement the correction in the following version of the paper.
- _line 28)_ We apologize for the lack of clarity; we are willing to rephrase this as "organizing complex data by clearly understanding how different data pieces relate to each other."
- _line 33)_ Again, we are sorry for the lack of clarity. "Spatial" in this context is inherited from jargon typical of dynamical systems on networks, where the nodes are intended as points of a discrete space. We will modify this sentence as: "These latter reflect the spatial sorting of the chosen reservoir of computing neurons" $\rightarrow$ "These latter represent how the selected group of computing brain-like cells are organized."
- _line 46)_ The referee is right, as conventionally the sparse implementation is the more popular one. However, there are applications where the advantage of having a smaller network structure with respect to a larger sparse one is appreciated. An example of these ML models is at the Large Hadron Collider at CERN, where the required inference time, on the order of nanoseconds, makes the smaller model preferable (see, for example, *Fast and resource-efficient Deep Neural Network on FPGA for the Phase…*, EPJ Web of Conferences 245, 01021 (2020)).
- _line 75)_ We will change slightly the notation in the following version of the paper.
- _line 105)_ We are sorry for the confusion; we will double-check the equations and make sure that with lowercase $\phi^{(k)}$ we refer to the off-diagonal block, of size $N_{k}\times N_{k-1}$, of the square matrix $\Phi^{(k)}$ of size $(N_{k-1} + N_k)\times (N_{k-1} + N_k)$.
- _line 202-204)_ The alternative scenario we are referring to, which will also be clarified, is something we plan to analyze in follow-up papers: the effect of adding the input eigenvalues, labelled $\lambda^{(in)}$, as trainable variables. This will also open (for the input layer) the possibility of finding the relevant input-space features by inspecting their after-training modulus (of course employing the regularization).
- _line 215)_ We agree that it would be better rephrased in the suggested way.
- _line 224)_ We can rephrase the word in the suggested way.
- _line 226)_ Again, the referee is right; indeed there is a mistake. The blue histogram, which gets more peaked at zero, is that of the spectral layer, eq. (10).
- _Figure 3)_ We are willing to implement the suggested modification in the following version.
Moreover, we have extended our analysis to four more relevant and complex datasets. Specifically, in Figure 1 of the PDF page we show the same analysis carried out for: Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and a more elaborate Teacher structure. In the latter, the size of the first hidden layer is different from that of the second. The student structure is left unchanged, and the performances (in terms of accuracy and loss function) of both the spectral Student and the classical student are the same on each dataset. The results shown are in line with the simplified scenario presented in the original version of the paper and, we believe, considerably extend the validity of our results.
We hope that, in light of the aforementioned changes, the referee will reconsider the rating given and we thank them again for the feedback.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding in detail to my comments and for taking the feedback into account when revising the manuscript.
I think the main weakness that I had identified - the use of a single, synthetic dataset - has been convincingly overcome by the authors' addition of several real-world datasets. I am pleased to see that the same qualitative "computational core" behaviour holds for these datasets, and I think the results section now appears very strong.
It strikes me that, as an additional optional change, the authors could perhaps show on the right-hand side of their revised Figure 1 (panels e-g) what happens when units are randomly removed from the "direct" network. This would first of all confirm that non-spectral neural networks do not show similar effects under (random) node deletion (very unlikely; I do not see how they possibly could), but it would also provide a stark contrast with the authors' method, further highlighting its strength.
Overall I think the addition of further results has considerably strengthened the paper, and I am happy to change my recommendation. | Summary: The authors consider a knowledge distillation setting involving a teacher and a student neural network.
The authors exploit a few interesting tricks (especially the use of a spectral parameterization of the network and special regularizers) to show that it is possible to enforce the learning of a submodule within a student network (as long as it is larger than the original teacher) that implements the teacher's behavior with a minimal number of neurons, and which can thus be used to estimate the effective size of the teacher.
The authors present thorough theoretical work, along with experimental verification; the results are convincing, but the topic is outside my area of expertise.
Strengths: - The paper is interesting and well written.
- The results are convincing.
Weaknesses: - The empirical experiments could be more extensive.
- The dataset used is extremely simple, and it is not clear whether the method would work in more typical scenarios.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - Evaluation is performed only on small networks and on toy data. It remains to be seen how well the approach would work on complex, modern datasets and network architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the referee for their feedback. We acknowledge the need for more comprehensive results. In response, we have conducted four additional, more complex experiments, with the outcomes presented in Figure 1 of the accompanying PDF file. The experimental framework remains consistent with the original paper, and the outcomes corroborate the preliminary findings we described. In more detail, our expanded analysis encompasses four distinct datasets: Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and a more nuanced Teacher structure. In this Teacher model, the size of the first hidden layer differs both from that of the second and from the fixed one of the student. We adhered to a uniform student structure, with the only variation being the second hidden layer's size, set to 50. Unlike in the showcase of the paper, however, we are not able to set a proper complexity threshold in the plots. The performance indicators, in terms of both accuracy and loss function, for the spectral Student and the classical Student remain consistent across all datasets.
We would like to stress that with these datasets, which are much closer to typical scenarios in terms of complexity and input dimension, both the capability of finding the computational core of the student (panels $a$-$d$) and the phase-transition-like behavior of the MSE (panels $e$-$h$) are validated.
We trust that in light of these additional findings, which in our view enhance the manuscript's robustness, the referee might reconsider the manuscript's rating.
Regarding the network architecture, we intend to address this in the _Limitations_ section of the paper and are keen to delve into more sophisticated architectures, such as Residual Networks and Transformers, in forthcoming publications.
---
Rebuttal Comment 1.1:
Title: Thanks for authors' rebuttal!
Comment: Reviewer Cz6u, did the authors address your concerns on simple dataset and experiments? Thanks. | Summary: This work focuses on the teacher-student paradigm in theoretical machine learning and shows that, for a unique optimization scheme that involves directly optimizing on the eigenvalues/eigenvectors of the data, a stable subnetwork in the student can be identified that can mirror the complexity of the teacher network. The area of this work is outside of my domain, so I am unable to comment further on the contributions.
Strengths: The approach seems theoretically-motivated and the result seems interesting. The area of this work is outside of my domain, so I am unable to comment further on the contributions.
Weaknesses: The experiments seem limited, but the area of this work is outside of my domain, so I am unable to discern what level of experimentation is normal for this kind of work.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How can the spectral method be scaled up for large datasets, like images?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I did not see a section dedicated to limitations of the authors' work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the referee's feedback and recognize that the work may not fall directly within their domain of expertise. Regarding the implementation with other data, we agree with the referee and therefore have extended our analysis to four more complex and realistic datasets. Delving deeper, our refined analysis covers: Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and an intricate Teacher structure. Within this Teacher design, the size of the first hidden layer differs from that of the second. We've maintained a consistent student framework, the only deviation being a second hidden layer size of 50. For both the spectral Student and the conventional student, performance metrics, encompassing accuracy and loss function, are uniform across all datasets.
Regarding the implementation on large image datasets, in the main paper we show how the spectral parametrization can be extended to a convolutional layer. In this setting, the eigenvalues can be mapped into weights that gauge the relevance of every filter, resulting in an already-analyzed algorithm of proven effectiveness.
We are, moreover, willing to insert a dedicated Limitations section where we acknowledge that, due to computational limitations, we have not tested more complex models where outcomes might differ. Nevertheless, we aspire to explore such models in forthcoming research. Furthermore, we have not fully tapped into the capabilities of the spectral decomposition. Indeed, a feature-relevance analysis can also be conducted in the input space: by setting $w_{ij}=(\lambda_j^{in}-\lambda_i^{out})\phi_{ij}$ in the initial layer, the eigenvalues $\lambda^{in}$ could highlight the significance of the input's $j$-th component.
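As an illustration of the weight formula quoted above, the relation $w_{ij}=(\lambda_j^{in}-\lambda_i^{out})\phi_{ij}$ can be checked numerically; this is an assumption-laden editorial sketch (it assumes the standard spectral construction $A=\Phi\Lambda\Phi^{-1}$ with identity diagonal blocks in $\Phi$), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 5, 3  # hypothetical input/output sizes of the first layer

lam_in = rng.normal(size=n_in)        # input eigenvalues, lambda^{in}
lam_out = rng.normal(size=n_out)      # output eigenvalues, lambda^{out}
phi = rng.normal(size=(n_out, n_in))  # eigenvector off-diagonal block

# Effective weights: w_ij = (lambda_j^{in} - lambda_i^{out}) * phi_ij
w = (lam_in[None, :] - lam_out[:, None]) * phi

# Consistency check against the full spectral construction
# A = Phi @ Lambda @ Phi^{-1}, with Phi = [[I, 0], [phi, I]]
Phi = np.eye(n_in + n_out)
Phi[n_in:, :n_in] = phi
Lam = np.diag(np.concatenate([lam_in, lam_out]))
A = Phi @ Lam @ np.linalg.inv(Phi)
assert np.allclose(A[n_in:, :n_in], w)
```

Under this construction, driving an eigenvalue $\lambda_j^{in}$ toward the matching $\lambda_i^{out}$ zeroes out the corresponding weights, which is what makes the eigenvalues usable as feature-relevance indicators.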
We hope that, given these supplemental results, which we believe bolster the paper's strength, the referee might re-evaluate the manuscript's current rating, which strikes us as somewhat unfair considering their self-assessed unfamiliarity with the domain.
---
Rebuttal Comment 1.1:
Comment: Thanks for adding the new experimental results. I believe this strengthens the work and I have raised my score (though I encourage the AC to give the other reviews more weight, since I have kept my confidence score as a 1). | Rebuttal 1:
Rebuttal: We express our gratitude to the chairs and referees for their valuable effort in providing constructive feedback on our manuscript. We have carefully considered each of the comments made and addressed them individually and in-depth in the replies to the referees. We have taken note that all the referees highlighted the need for more than one teacher and the use of more complex and realistic datasets. We acknowledge that this could be a weakness of our work. To address these concerns and enhance the quality of our paper, we conducted various new experiments on different datasets and with a more elaborate teacher structure. The results of this supplementary analysis are available in the attached PDF and demonstrate that all the spectral regularization effects still persist in these complex scenarios.
To this end, we have analyzed various datasets, including Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and a more complex Teacher structure. Both the spectral Student and the classical one achieved similar accuracy and losses across all datasets. Although we regret not being able to include additional plots, such as the loss function value and the eigenvalue histogram, due to space constraints in the PDF, the plots show the clear presence of a regularization effect in the spectral parametrization irrespective of the initial size of the Student and of the task assigned to it (see panels $a$-$d$). Moreover, the computational core found behaves in the same way when its neurons are removed (panels $e$-$h$).
These additional findings will surely strengthen the claims of our paper and will be incorporated into the main manuscript if it is accepted. Hoping that these new results and the point-by-point reply that we have given to each referee can convince them of the quality and robustness of our work, we remain.
Respectfully yours,
The authors
Pdf: /pdf/97a8cf89e9b2707da710e4cc0d7fd4fb50ca2a4e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper tried to understand and extend a new parametrization, spectral parametrization, for fully connected networks. They empirically show that in the teacher student setup, even when the student network is highly over parametrized, the student network under that parametrization will converge to a somehow "sparse" network that can be compressed, using standard optimizers. And they show that standard parametrization for the student network cannot do that.
Strengths: No.
Weaknesses: 1. Lacks novelty. Why this parametrization can lead to sparsity is almost well understood in the literature. For instance, it is well known that for such a model $h_{\theta}(x)=h(x;u\odot v)$, if we initialize $u,v$ to be small and use standard gradient-based algorithms, we will have a gradual rank increase for $u,v$, which leads to sparsity more easily. For references, we can look at Abbe's paper https://arxiv.org/abs/2306.07042 (the most recent reference) or Jason's paper https://arxiv.org/abs/2207.04036. Or we can simply compute the GD dynamics of a diagonal linear network learning a linear target with small initialization. In a word, this result is not surprising to me; I feel I already knew this / expected this to happen.
2. For the empirical results, the input dimension in the experiments is too low: it is only 10, whereas we have high-dimensional input in practice. Also, the experiments are insufficient; for instance, you could try different optimizers and different hyperparameters (width, initialization scheme), etc.
3. The authors didn't discuss the related work sufficiently. This phenomenon is clearly related to the training dynamics of fully connected networks, and there are many theory papers discussing the same things (even with similar implications/conclusions).
Update: since the authors update the experiments, I have changed my score.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: No.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I don't think they discuss their limitations adequately. Here are my suggestions for improvement.
Since this phenomenon on simple network structures is actually almost well understood, if you really want to show this kind of reparametrization works, you should do some larger scale experiments. If there are some larger scale experiments to show that that kind of parameterization really works in practice and helps practice problems, then I think it's a good paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the referee for pointing us to the insightful reference concerning implicit regularization resulting from the commuting nature of our parametrization. Unfortunately, the other mentioned reference came out after the submission of our work to the conference. However, we contend that the effect we describe extends beyond such implicit regularization. Specifically, we do not initialize our spectral weights to small values; instead, they are uniformly set to one in every trial. This suggests that, rather than a gradual increase in rank, we observe more of a decrease. While our parametrization may indeed exhibit an implicit bias towards sparse representations, it alone cannot account for the observed distribution of $\lambda_i$ and, consequently, of $\mathcal{L}_i$.
In the provided PDF, we present the distribution of $\mathcal{L}_i$ (very similar to that of $\lambda_i^{(out)}$) for Shuffled Fashion MNIST. Notably, the after-training distribution of the variable with explicit $L_2$ regularization is very different from the one we obtain without it, relying solely on the implicit bias. Visual inspection of the figure suggests that eigenvalue sparsification, which leads to feature-centric sparsification (involving node elimination rather than link elimination), is considerably more pronounced when the explicit bias is applied.
Regarding the dataset's simplicity, we concur with the referee's observation and have undertaken additional experiments that we are willing to include in the manuscript. Specifically, in Figure 1 of the PDF, we showcase our analysis on four distinct datasets: Shuffled MNIST, Shuffled Fashion MNIST, California Housing, and a more intricate Teacher structure. For this Teacher configuration, the size of the first hidden layer differs from that of the second. We maintained a consistent student structure, and performance metrics (in terms of accuracy and loss function) for both the spectral Student and the conventional student are consistent across all datasets. We point out that, in this case, we are not able to set an obvious complexity threshold for the teacher.
The results presented align with the simplified scenario depicted in the original version of our paper and we hope that, in light of those new findings, the referee will be keen on reconsidering the given 'Rating'.
---
Rebuttal Comment 1.1:
Title: Require Further Explanations
Comment: I appreciate you providing this feedback. Upon further consideration, I now understand that your initialization is outside of the small initialization regime, which is nice.
My primary concern at this point is regarding the scale of the dataset utilized. Have you previously experimented with substantially larger datasets such as CIFAR-100 or ImageNet? If your methodology achieves similar performance on those larger datasets, I would significantly increase my assessment score to at least 5.
---
Reply to Comment 1.1.1:
Title: Results with larger datasets
Comment: We are pleased to hear that the reviewer appreciates our results. We acknowledge the concerns raised about the behavior on larger datasets. To address this, we have evaluated our method on CIFAR-100, employing a pretrained ResNet50 backbone enhanced with three fully connected spectral layers and Batch Normalization. Even in this more complex setting, all the results concerning the presence of an invariant core of information are in line with those of the simpler scenarios presented in the provided PDF.
We are prepared to incorporate these supplementary results in our paper's subsequent version and hope that, with these clarifications, the reviewer might positively reconsider their evaluation of our work. | null | null | null | null | null | null |
Large language models transition from integrating across position-yoked, exponential windows to structure-yoked, power-law windows | Accept (poster) | Summary: The authors investigate the temporal integration window of several transformer language models (but focus on gpt2) by evaluating the effect of word swaps on the activations of individual units as a function of their distance from the swapped word. They then characterize these mean integration curves of the units in each layer as a convex combination of an exponential and power law function and highlight an evolving motif as longer context is utilized at later layers. They also explore specifically whether these windows are tied statically to only token proximity or whether they dynamically capture the boundaries of sentences (and noun phrases), and find that the integration windows of later layers are sensitive to these boundaries.
Strengths: Excellent problem presentation within context of related work. The core “word swap” approach is well designed and appropriately motivated as is the analysis of “structure-yoked integration.” All experiments appear carefully constructed and controlled. The submission is clearly written and well organized. The results are important, presented clearly, and discussed appropriately.
Weaknesses: The motivation of the proposed functional form is underspecified. Please include more detail as to how it was decided that the integration windows should be modeled as such. More specific question below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I would have liked to see a more principled approach starting from the integration window curves and arriving at the formulation of this convex combination of exponential and power law functions. What space of functional forms and combinations was explored? And how was this form deemed to be the best fit? Some notion of variance explained as a function of number of free parameters would be a useful datapoint to include. Just a few sentences elaborating on and clarifying this should be sufficient.
EDIT: this has now been addressed.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review.
We have addressed your comment about needing to better motivate the chosen functional forms in our general response. We show that the exponential-power law function provides substantially better prediction accuracy (measured using three different metrics) than a wide range of other functional forms including exponential and power law forms in isolation.
---
Rebuttal Comment 1.1:
Title: Comments addressed
Comment: Thanks for including these new analyses. The quantification of model fits is convincing and the other extensions will be interesting to see. Great work! | Summary: Transformer models have the potential to acquire essentially arbitrary patterns of attention during training. But what patterns do they acquire in practice? This is the question taken up in the present paper. The data for the paper are 40 word sequences from the classic Brown corpus. The language models examined are GPT-2, LLaMA, and BERT. The paper introduces a word-swap procedure for evaluating integration. It argues that the large language models exhibit a transition from exponential to power-law dynamics across the layers of the network. It describes the power-law windows as structure-yoked (in the context of the study, this means yoked to sentence boundaries) and the exponential windows as position-yoked.
Strengths: The paper raises a good question about what patterns of attention are actually acquired during the training of large language models. It undertakes to characterize the patterns in terms of functional forms. This sets up an important point of potential contact between machine learning and the study of scaling laws in physics.
Weaknesses: The author(s) assert that the integration windows are surprisingly well fit by a convex combination of an exponential and a power law. They do not rigorously evaluate any alternative fits. They do not appear to be aware of the substantial research literature on power laws, and appear to have overlooked the following points.
1) Power laws can themselves be generated as mixtures of exponentials.
2) To statistically distinguish power laws from other similar-looking distributions, it would be necessary to explore a very much greater range of time scales than appear in this study.
Here are a few of the very large number of references bearing on this issue:
MEJ Newman (2005) Power laws, Pareto distributions, and Zipf's law. Contemporary Physics
M Mitzenmacher (2004) A brief history of generative models for power law and lognormal distributions.
RD Malmgren et al. (2008) A Poissonian explanation for heavy tails in email communications. PNAS 105(47)
S Arbesman et al (2009) Superlinear scaling for innovation in cities. Phys. Rev. E 79, 016115
Altmann et al. (2009) Bursts, lulls, and scaling in the temporal distributions of words. PLoS ONE 4(11)
By equating structure-yoking with sentence boundaries, the author(s) disregard all other types of linguistic structures. These range from smaller components of syntactic structures (such as the structure of noun phrases) to larger structures that control discourse coherence. There seems to be no justification for thinking that all these other structures are "position yoked".
Finally, the decision to report results only on the 1979 Brown corpus is puzzling (the paper states that similar results were obtained using the BookCorpus, but provides no details about that). The Brown corpus contains only 1 million words, hence its vocabulary covers only the more frequent words of English. It does not contain examples of the various genres that figured in the training sets for the LLMs. It would be important to understand how LLMs work on the range and variety of material they were trained on.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: If revising this paper, it will be important to provide rigorous comparisons of different model fits, and to test the approach on a more complete range of linguistic material and a more carefully articulated set of structures.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The author(s) acknowledge that the relationship between natural language and the integration windows learned by the LLMs was not explored in depth. As mentioned under "Weaknesses", they do not appear to have a complete grasp of what is needed to justify the functional forms they selected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive critique.
We have addressed your critique about the need for more extensive quantitative comparisons in our general response since other reviewers raised similar points. We tested a much wider range of different parametric forms motivated in part by the papers cited in your response. We find that the exponential-power form outperforms all other forms tested across multiple metrics. If there are any other specific functional forms that you would like us to test, we would be happy to do so.
We are aware that power laws can be approximated as mixtures of exponentials, and will clarify this point in the manuscript. In the model comparisons described above we included a model that contains a mixture of two exponentials and thus has the same number of parameters as our exponential-power form. This alternative model performs substantially worse than our exponential-power function.
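For concreteness, the kind of parametric comparison described here (a convex exponential-power combination versus a two-exponential mixture with the same parameter count) can be sketched as follows. This is an illustrative reconstruction on synthetic data; the exact functional forms, parameterizations, and fitting procedure used in the paper may differ, and `c` is not explicitly constrained to $[0,1]$ in this sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_power(d, c, tau, alpha):
    # Convex combination of an exponential and a power-law window
    return c * np.exp(-d / tau) + (1 - c) * (1 + d) ** (-alpha)

def two_exp(d, c, tau1, tau2):
    # Alternative with the same parameter count: mixture of two exponentials
    return c * np.exp(-d / tau1) + (1 - c) * np.exp(-d / tau2)

# Synthetic "integration curve": exponential plus power-law tail, small noise
d = np.arange(40, dtype=float)
rng = np.random.default_rng(1)
y = exp_power(d, 0.4, 2.0, 0.7) + 0.01 * rng.normal(size=d.size)

p_ep, _ = curve_fit(exp_power, d, y, p0=[0.5, 1.0, 1.0], maxfev=10000)
p_2e, _ = curve_fit(two_exp, d, y, p0=[0.5, 1.0, 10.0], maxfev=10000)

mse_ep = np.mean((exp_power(d, *p_ep) - y) ** 2)
mse_2e = np.mean((two_exp(d, *p_2e) - y) ** 2)
print(f"exp-power MSE: {mse_ep:.2e}; two-exp MSE: {mse_2e:.2e}")
```

Over a limited range of lags a two-exponential mixture can mimic a power-law tail quite well, which is exactly why the head-to-head, equal-parameter-count comparison matters.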
Our original submission tested both sentences and noun phrases, and we find that the results are very similar across both. To address your comment and that of other reviewers we repeated our analyses using paragraphs, and again find very similar results (see general response). If there are other structures that you would consider it important to test, we would be happy to do so.
To address your comment, we will include the results of BookCorpus in our revised manuscript. If there is another corpus you would like us to test, please let us know. We note that we have run many experiments investigating different architectures, swap procedures, linguistic structures, structure durations, and corpora, all of which consistently yield the same core set of results.
---
Rebuttal Comment 1.1:
Title: New calculations are improvement
Comment: The new calculations represent a considerable improvement in the quality of the paper. | Summary: This paper studies the relationship between the length of context and its influence on language model outputs across layers and units. They propose a model-agnostic procedure that swaps words at a given distance and measuring the change in the activations. Doing this for a range of distances and many different sequences allows patterns to emerge across layers. Curve-fitting shows that these patterns are a convex combination of exponential and power law functions: largely exponential for lower layers, suggesting shorter temporal horizons, and gradually shifting to power-law for higher layers, suggesting growing temporal dependence. The parameters of the curves also suggest increasing temporal dependence within the functions. Experiments also show that these temporal horizons are bound to linguistic structure, i.e. sentence boundaries.
This paper provides empirical evidence for robustness of results by experimenting with different models and varying the details of the proposed procedures and research questions. They also provide evidence for the absence of such patterns in untrained models, which strongly suggests that the patterns discovered emerge from training on natural language.
Strengths: This is an interesting study that sheds light on how language models rely on context (temporally) by analyzing changes in activations caused by swapping words at increasing distances. It provides a different and valuable perspective on potential sources of the learning signal within the data and how that may translate to certain behaviors we observe during inference. The paper makes a strong effort to reinforce results by varying the experimental setup and measuring statistical validity of the findings.
Weaknesses: While the discussion is scientifically engaging, one of the frequently discussed weaknesses of analytic studies such as this one is: how can the findings of such studies provide concrete/actionable improvements and help advance the relevant fields, beyond speculation. Improving our mental models of how LMs work holds plenty of value, but if the authors have ideas or experiments on this it may be worth including them.
The experimental setup in the main experiments is a bit hard to follow. For example, what word replacement procedure is used? How much data is used, and how is it sourced? It might be worth extracting the details into a separate mini-section if possible. Highlighting the findings per experiment, perhaps via paragraph titles, may also improve the clarity of the paper.
Is the data used too synthetic or too much of a toy setting? After the initial experiments, is it possible to explore a variant on more naturally occurring data such as that in language modeling datasets? Would this be feasible?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Maybe I missed this in the experiments: is it hard to extend this analysis beyond sentences to contextually relevant paragraph-level analyses and demonstrate hierarchical dependence? Figure 3A includes sequences that are contextually unrelated, right?
The authors may find this older study on recurrent LMs and context interesting: https://arxiv.org/abs/1805.04623
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: As noted above, perhaps a discussion on the potential of this line of work to concretely influence the relevant fields could be useful. Other noted limitations seem pretty thoughtful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the supportive review.
We have addressed your question about the impact of our work on relevant fields in our general response because other reviewers raised similar questions.
Below we have clarified the word swap procedure and the method used to select the swapped word:
Word swap procedure. We first sample an N-word sequence from a corpus. We used the Brown Corpus, but the results were similar using the Book Corpus. For each word in the original sequence, we generate a new sequence with just that word swapped, yielding N sequences each with a single swapped word. We tested several methods for selecting the word to be swapped.
Word selection procedure. We investigated three procedures for choosing the swapped word. (1) Part-of-speech matched. The simplest procedure randomly selected the swapped word from the set of all words with the same part-of-speech tag. (2) Probable swaps. Probable swaps were randomly sampled from a list of the 100 most probable words given the context (excluding the actual word), as computed by BERT with the target word masked out. (3) Embedding-distance matched. The goal of this procedure was to ensure that the average embedding distance between the original and to-be-swapped word was the same for all positions, helping guarantee that any position-dependent effects (e.g., due to structure-yoked integration) could not be explained by the structure of swaps in the embedding layer. Specifically, for each swap, we sampled a desired embedding distance from a uniform distribution and then sampled a word whose distance from the original word was close to this target value when swapped in (the uniform distribution and distance tolerance were hand-selected to provide a feasible target for the vast majority of words needing swaps; in the rare case where there was no valid target, we sampled randomly). We found the results were very similar across all three procedures. We focused on the results from the simpler part-of-speech matching procedure for our overall integration window analyses (Figure 2). We used distance matching for our structure yoking analyses, since these analyses focused on position-specific effects (Figure 3), and we replicated both our overall integration window and structure yoking analyses using probable swaps (Figure 4I).
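The part-of-speech matched variant of the swap procedure could be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the tagged vocabulary here is a hypothetical stand-in for one built from the Brown Corpus with a real POS tagger.

```python
import random

# Hypothetical tagged vocabulary: POS tag -> candidate words.
# The actual procedure would build this from the Brown Corpus.
TAGGED_VOCAB = {
    "NOUN": ["dog", "river", "idea", "window"],
    "VERB": ["runs", "sees", "builds", "knows"],
    "DET": ["the", "a"],
}

def pos_matched_swaps(sequence, tags, rng=random):
    """For each position, return a copy of the sequence with just that
    word swapped for a random same-POS word (N sequences for N words)."""
    swapped_sequences = []
    for i, (word, tag) in enumerate(zip(sequence, tags)):
        # Exclude the original word so every swap actually changes the token.
        candidates = [w for w in TAGGED_VOCAB[tag] if w != word]
        new_seq = list(sequence)
        new_seq[i] = rng.choice(candidates)
        swapped_sequences.append(new_seq)
    return swapped_sequences

seq = ["the", "dog", "sees", "a", "river"]
tags = ["DET", "NOUN", "VERB", "DET", "NOUN"]
swaps = pos_matched_swaps(seq, tags)
```

Each returned sequence differs from the original in exactly one position, which is what lets per-position effects on model activations be isolated.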
We will follow your advice and clarify these methods in separate mini sections, and we will highlight our results using section and paragraph titles.
We have addressed your comments about more naturalistic and paragraph-level analyses in our general response, since a similar question was raised by other reviewers. Specifically, we repeated our analyses using paragraphs composed of three 6-word sentences, and as a consequence, the boundary between each sentence is entirely natural, unlike in our original experiments. The results from this analysis replicate our findings showing a clear change in the integration window at the boundary between sentences and also show hierarchical organization with a greater boundary change for paragraphs compared with sentences (see Figure 2). We are in the process of scaling this up to larger structures using stimuli generated by ChatGPT (e.g., 3 paragraphs each composed of 12-word sentences).
The only remaining unnatural part of this procedure is the use of sequences composed of multiple fixed duration structures (e.g., 6-word sentences). To address this, we are in the process of performing the following analysis. First, we select completely natural sequences that contain a structure of a given duration (e.g., 12-word sentence) in the middle of the sequence, thus aligning the start and end of this structure across all sequences. We then repeat all of our analyses and check if there is a change in the integration window at the start and end of this single structure.
Thank you for noting the Khandelwal et al. paper. We agree this is a relevant study and will reference it in our revised manuscript. Word shuffling might provide an interesting way to examine order-dependent integration in future research.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: The additional experiments and discussion are interesting and helped close some of the gaps in clarity and coverage of the initial analysis. I've raised my score to reflect this.
By the way, in the discussion on relevance to language modeling it seems like the authors are discussing inductive bias? It wasn't super clear but might be worth a proofread or some revision for adding to the draft. | Summary: The authors provide an experimental understanding of how LLMs have inherent integration windows that take account of the global meaning of a given sentence, by developing a novel, model-agnostic method called the “word-swap procedure.” The authors investigate the behavior of the integration window under various controls: changing the layer, changing the distance of tokens from the current position, or varying sentence structures. The finding is that trained LLMs' integration windows fit well with a convex combination of an exponential and a power-law function, with exponential-law, position-yoked windows at early layers and power-law, structure-yoked windows at later layers.
Strengths: - Suggests a metric for quantifying the variance of integration windows
- Proposes a model-agnostic analysis method to understand the integration windows inherent in LLMs
- Provides an explicit experimental understanding of how an LLM's integration window behaves across layers and with varying sentence structure.
Weaknesses: - It seems that the kinds of tested LLMs are quite restricted (only three), which bounds the extensiveness of the analyses, not to mention that GPT-2 is actually not that large with respect to current LLMs. In addition, for generalization across transformer architectures, it seems necessary to add results for an encoder-decoder transformer.
- The number of tokens per sequence seems to be fixed at at most 40, but the methods the authors propose can be applied to longer sequences. It would be helpful for understanding the properties of the integration window if the authors provided additional experiments with longer sequence lengths.
- Though the authors show that the integration window of an LLM behaves differently across layers (lower layers follow an exponential law and higher layers a power law), the contribution itself seems limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can you specify what kind of linguistic structure is used for the experiments of structure-yoked integration in Figure 3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The integration window may appear not only in simple text generation but could also be observed in other NLU tasks, such as summarization or text retrieval (though summarization and text retrieval belong to generation, these tasks would require a more detailed account of integration-window behavior). Could you provide experimental investigations beyond simple text generation?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To address your comments, we will test additional, larger models including LongT5. LongT5 is an encoder-decoder architecture that can accept very long sequences and has been trained on multiple tasks, addressing each of the issues that you raised in your review. The T5 model has also shown strong neural predictivity in the brain (Schrimpf et al. (2021) The neural architecture of language: Integrative modeling converges on predictive processing) (LongT5 has not been tested to our knowledge). We plan to extend our analyses to much longer sequences (e.g., 1000 tokens).
We note that our contribution extends beyond simply showing a transition from exponential to power law dynamics. In particular, a key result of our study is that late layers adapt their integration window to structural boundaries in language, while earlier layers do not (or do so weakly). This suggests there is a transition from position-yoked to structure-yoked integration in LLMs, a finding that we replicate across all models tested. Our study also introduces new methods for measuring both the overall integration window and structure yoking.
We used 12-word sentences for our analyses in Figure 3. Figure 4K shows the results for 8- and 36-word sentences and Figure 4L shows the results for 6-word noun phrases. In our general response, we describe the results of a new analysis using paragraphs composed of three 6-word sentences. All of these analyses qualitatively show the same effect and our analyses with paragraphs further reveal hierarchical structure yoking. We will clearly label the structure tested in the caption of all figures.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' detailed response.
After reading the author's reply and the comments from other reviewers, I will maintain my score. | Rebuttal 1:
Rebuttal: We were pleased the reviewers overall felt that our work addressed a timely question, that the methods were well-motivated and described, and that the results were interesting and robust across multiple experiments. We thank the reviewers for their constructive critiques. Below, we address comments shared across multiple reviewers.
Functional form of integration windows
Multiple reviewers requested additional motivation/quantification of the functional form used to model integration windows. To address this issue, we performed several analyses, making the following changes:
1. Tested additional functional forms motivated by prior literature (including papers noted by reviewer NDTr): (1) exponential (2) power (3) exponential-power (4) exponential-exponential (5) Zipf-Alekseev (6) log-normal, (7) log-Cauchy.
2. Quantified goodness-of-fit with multiple metrics: (1) cross-validated mean-squared error (MSE), (2) cross-validated Kolmogorov-Smirnov (KS) test statistic (3) Bayesian Information Criterion (BIC).
3. Extended the sequence length from 40 words to 150 words (we plan to extend it further, e.g. 1000 words).
4. We plot integration windows on a log-log scale to better visualize the tail.
We found that the exponential-power form provides the best fit across all GPT-2 layers using all 3 metrics (see Figure 1) (we will repeat this analysis for the other tested models).
Interestingly, when plotted on a log-log scale, integration windows appear to exhibit piecewise-linear decay with a single knot (Figure 1). We are testing whether a piecewise-linear form, corresponding to a transition between two power laws, shows even better fits. The results of this analysis will not change our finding that LLM integration windows can be approximated using a simple (3-parameter) functional form whose timescale substantially expands across layers. It also has no impact on our structure yoking findings or methodological contributions (word swap procedure, structure yoking paradigm, and boundary metric). We will revise the manuscript to incorporate all of these results.
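As an illustration of the model comparison described above, the 3-parameter exponential-power form (a convex combination of an exponential and a power-law decay) could be fit to a measured integration window by least squares. The sketch below uses synthetic data and a coarse grid search rather than the actual fitting pipeline; the parameter names and grids are illustrative assumptions.

```python
import numpy as np

def exp_power(d, a, tau, alpha):
    """Convex combination of an exponential and a power-law decay over lag d."""
    return a * np.exp(-d / tau) + (1.0 - a) * (d + 1.0) ** (-alpha)

# Synthetic "integration window" generated from known parameters.
d = np.arange(0, 150, dtype=float)  # word lags, up to 150 words
true = exp_power(d, a=0.6, tau=5.0, alpha=1.2)

# Coarse grid search minimizing mean-squared error (MSE).
grid_a = np.linspace(0.1, 0.9, 9)
grid_tau = np.linspace(1.0, 10.0, 10)
grid_alpha = np.linspace(0.5, 2.0, 16)
best = min(
    ((np.mean((exp_power(d, a, t, al) - true) ** 2), (a, t, al))
     for a in grid_a for t in grid_tau for al in grid_alpha),
    key=lambda x: x[0],
)
mse, (a_hat, tau_hat, alpha_hat) = best
```

In practice one would cross-validate the fit and compare against the other candidate forms (pure exponential, pure power, Zipf-Alekseev, etc.) with MSE, KS, and BIC as described above.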
Larger-scale hierarchical structure
Several reviewers (3ifa, hKex, NDTr) asked if our structure-yoking results would occur at supra-sentential scales and/or exhibit hierarchical organization, as well as whether structure yoking would be evident at natural boundaries between sentences (as opposed to between randomly selected sentences). To investigate these questions, we repeated our analyses using paragraphs composed of three 6-word sentences (the longest sentence length for which we could find enough paragraphs) (Fig 2). We observe structure yoking at the boundary between sentences, demonstrating that our effects generalize to natural boundaries. We also observe even stronger yoking to the boundary between paragraphs, suggesting hierarchical integration. We are working on scaling up these analyses by using ChatGPT (GPT-4) to craft larger-scale structures with stereotyped durations (e.g., 3 paragraphs each with 12-word sentences), as well as repeating these analyses with larger LLMs (e.g., LLaMA, LongT5). We will revise the manuscript to include these new results.
Significance to the NeurIPS community
Multiple reviewers asked us to better articulate the significance of our work for the NeurIPS community:
Relevance to neuroscience. Understanding how the human brain integrates linguistic information is an important question of active interest to the NeurIPS community (e.g., Jain et al. (2020) Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech). The brain must have mechanisms for integrating flexibly across multiple timescales and linguistic structures. Yet, it is largely unknown what functional form best describes human cortical integration windows and whether/how these windows vary with structural boundaries, in large part due to methodological limitations. LLMs are state-of-the-art in terms of predicting human brain responses to natural language, and there is considerable interest in whether the computations of these systems resemble those in the brain and in utilizing these systems to generate new scientific insights (Caucheteux et al (2023) Evidence of a predictive coding hierarchy in the human brain listening to speech, Tang et al. (2023) Semantic reconstruction of continuous language from non-invasive brain recordings). Because our methods are model-agnostic, they are directly applicable to measuring and modeling integration windows in biological neural systems. Our findings provide clear, testable predictions for how neural integration windows in the brain will be structured if LLMs integrate information in a brain-like manner.
Relevance to language modeling. The goal of our work was to understand existing language models, not advance the state-of-the-art. We agree that our paper does not provide specific guidance on how to improve language models, which we will note as a limitation. The impact of empirical insights on applied research is often difficult to predict, and we believe there are many potential applications of our work. For example, LLM integration windows show a stereotyped functional form that differs substantially from untrained networks and is robust across different architectures. Thus, a potentially interesting research direction would be to investigate weight initialization schemes (or architectural improvements) that impose this functional form so that the network only needs to learn variations on this form (e.g., structure-yoked integration), which might improve speed or performance. Our metrics might provide useful tools for diagnosing model limitations, such as an inability to yoke to larger-scale structures. We will briefly note these potential research directions/applications in the revised manuscript, flagging them as speculative.
Pdf: /pdf/b8498bfdc217ecbe7f9363908e9119e1a399a39b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper describes a novel method for measuring and characterizing integration behavior in large language models. This method is used first to look just at temporal integration windows—i.e. how do inputs at different lags affect model activations at a specific time—in GPT-2, revealing a transition from exponential-like to power-law-like behavior across model layers. Second, the authors investigate whether integration is “position-yoked” or “structure-yoked” by performing a similar analysis on strings comprising multiple concatenated sentences. These results show that the degree of structure-yoking, like integration window size, increases consistently across model layers. Finally, these results are replicated on other networks (LLaMA and roBERTa) and with some variations in language input.
Strengths: This is overall an interesting and well-executed paper, with many strengths:
* The procedures are well-motivated theoretically, clearly explained, and obviously well-suited to measuring the effects of interest.
* In particular, the structure-yoking experiment is cleverly constructed and tests a very interesting property of these models.
* Evaluation is fairly exhaustive, with tests of many networks and different variations on the procedure to ensure robustness.
* The paper makes contact with much of the relevant literature, linking the work both to machine learning and neuroscience research.
Weaknesses: * My only actual issue with the paper is that it does not do a terribly good job at answering the question, “so what?” This seems like timely and interesting work, so the authors should be able to say a little about why it’s useful to deeply understand and characterize integration behavior in these networks.
* Differences between exponential and power law decay are quite difficult to see with linear scales; the authors should try showing the relevant data (especially Figure 2C) using log-log plots.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * The analyses consider structure only at the scale of noun phrases (6 words) and sentences (8 to 36 words). What about higher-order structure? Do the same layers also “yoke” to structure at supra-sentential scales, like paragraphs or (maybe possible only with LLaMA) entire narratives?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The discussion of limitations is clear and complete.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the supportive review.
We chose to address your questions about the impact of this work and the generalization of our paradigm to supra-sentential scales in our general response since similar questions were raised by other reviewers.
We now plot integration windows on a log-log scale (see Figure 1 of included PDF).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed responses & new analyses. In particular, I think the authors did a good job addressing the question of significance. I already thought this paper was good, and I still think it's good. | null | null | null | null | null | null |
HeadSculpt: Crafting 3D Head Avatars with Text | Accept (poster) | Summary: This work proposes a text-to-avatar creation pipeline, building upon dreamfusion and magic3d. To alleviate the geometric ambiguity, the authors replace the vanilla stable diffusion with a landmark-conditioned diffusion model finetuned with controlnet. They also use textual inversion to obtain specific token for back view. Extensive experiments show that the proposed method achieves SOTA performance.
Strengths: - The generation results are impressive and compelling. As shown in Fig. 1, the method can generate avatars in different domains including photo-realistic, anime.
- The writing is easy to follow.
Weaknesses: - The generated avatars lack details and suffer some artifacts. The results in Fig. 1 suffer obvious noises especially when zoomed in. Hair and clothing lack details.
- How about the generation diversity? Given the same text prompt, can the model generate multiple avatars that align with the text?
- One key aspect of avatars is animation. How could the generated avatars be driven?
- Fig. 2 is a bit confusing for me. It would be better to clean it.
- As in Fig. 3, the editing performs well for frontal view, but fails on side and back views.
- Are results in the main paper cherry-picked? Could you show some randomly sampled results? and more visualization for the geometry?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The method used DMTet in the second stage, which is a mesh-based representation. But the mesh has difficulties in representing hair, beards, and so on. How do you solve these parts?
- How do you train landmark-conditioned controlnet? training data?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors have claimed the limitations.
Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 5. Response to reviewer `#Gcqm`
We thank reviewer `#Gcqm` for agreeing with our motivation and acknowledging the results as "impressive and compelling".
Below we respond to the doubts put forward by the reviewer - **we regularly refer to the general response above and the provided one-page pdf with figures:**
## 5.1. Noises and artifacts in generated avatars
Please refer to Sec. 1.3 of the above general response.
## 5.2. Generation diversity
Indeed, like existing text-to-3D methods (e.g., DreamFusion), our method also does not yield large amounts of diversity across random seeds (as shown in Fig. 11). This is likely due to the mode-seeking property of $\mathcal{L}_{\mathrm{SDS}}$, combined with the fact that at high noise levels the smoothed densities may not have many distinct modes. Understanding the interplay between guidance strength, diversity, and loss functions remains an important open direction for future research.
## 5.3. Avatar animation
Please refer to Sec. 1.1 of the above general response.
## 5.4. Cleaning Fig. 2
Thanks for pointing this out. We will reorganize the layout and clean it in the revised version.
## 5.5. Editing performs badly on side and back views
Because the front view of a head carries almost all of the information used for editing, the editing results for side and back views are sometimes inferior to those for front views. However, we respectfully argue that this is not always the case; e.g., most of the editing results shown in Fig. 1 are similarly satisfactory across all views.
## 5.6. Randomly sampled results
> Are results in the main paper cherry-picked? Could you show some randomly sampled results?
The results shown in the submitted manuscript are not cherry-picked. Please also check the newly added results across random seeds in Fig. 11, where we don't find obvious geometry and appearance differences among different runs.
> and more visualization for the geometry?
For geometry visualization, we have now provided additional normal-rendered images in Fig. 12. Please find more examples in the supplementary file.
## 5.7. Difficulties of mesh in representing hair and beards
In this work, we didn't design specific modules to handle hair and beards (outside our current focus and scope). We also note that, compared with pure NeRF-based methods, using a mesh as the second-stage representation can offer sharper and more structured hair and beards, since the initialization from NeRF's density can be further refined at a much higher resolution. To further improve the quality of hair and beards, one possible solution for future work is building high-quality hair/beard templates, which can be disentangled from the whole optimization process.
## 5.8. Details about landmark-conditioned ControlNet
As mentioned in L167, instead of training ControlNet by ourselves, we took an off-the-shelf version[1] trained on LAION-Face[2] dataset, including 50M diverse face images and the corresponding face landmarks predicted by Google MediaPipe.
[1] ControlNetMediaPipeFace. https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace.
[2] General facial representation learning in a visual-linguistic manner.
---
Rebuttal 2:
Comment: Thanks to the authors for their responses. After reading the rebuttal and comments from other reviewers, I would keep my original ratings. | Summary: Existing text-driven 3D generative models can have many problems, such as geometric artifacts (e.g., the Janus problem) and visual inconsistency. This work focuses on 3D head avatar generation and utilizes the FLAME model to incorporate human geometric priors into generation, hence resolving those problems. The proposed method adopts a coarse-to-fine framework: it first generates a coarse face via a neural radiance field (NeRF) and then performs refinement/editing using a tetrahedron mesh (DMTet). As a result, the proposed method can generate diverse 3D human faces, and it also enables identity-aware editing with the help of both ControlNet and InstructPix2Pix.
Strengths: (1) This work proposes a text-guided 3D generation, specifically for human heads. It could successfully generate various human heads by using or modifying many existing methods, such as FLAME, ControlNet, Instruct-Pix2Pix, and DMTet.
(2) This method also studies identity-aware editing by defining a trade-off between the original appearance and the desired editing.
Weaknesses: The proposed text-guided 3D generation method focuses on human heads, but it seems like combining 3D head priors with the existing general 3D generation or style-editing methods. A few problems associated with 3D human heads have not been deeply studied, and please refer to the following detailed weaknesses:
(1) This method effectively addresses the limitations encountered in current text-guided 3D generation approaches through the integration of 3D head priors with the FLAME model. By leveraging the FLAME model for 3D head generation, it is expected that this method could facilitate intricate expression variations. However, the current generated head avatars seem to be static only, and the manuscript did not provide many expressions except for the neutral expression. Enabling expressions is important for 3D head avatars, and a few examples could be found in the FLAME model, ICT-FaceKit, and the very relevant work DreamFace.
(2) The identity-aware editing has not been well-justified. Based on my understanding, the current method simply mixes the style of the original appearance and the desired editing to generate the facial texture based on the ControlNet-based InstructPix2Pix. I am wondering if it is possible to do a few specific editing of human heads, such as changing the hairstyle, wearing different glasses, and perhaps a few other geometric adjustments (e.g., getting fatter or getting thinner).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Besides the questions listed in the weakness, I would like to ask:
(1) Is the generated head avatar capable of performing various expressions? Or is it possible to perform different expressions by using simple post processes?
(2) Is there a clear definition of the so-called identity-awareness of the editing process? Merely blending styles appears to be an unsatisfactory solution.
(3) Is it possible to generate/edit human head avatars with different shapes?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 4. Response to reviewer `#thuE`
We are grateful the reviewer `#thuE` took the time to thoroughly review our work and found that our method "effectively addresses the limitations encountered in current text-guided 3D generation approaches".
We provide our responses regarding the raised concerns as follows - **we regularly refer to the general response above and the provided one-page pdf with figures:**
## 4.1. Expression variations
> the current generated head avatars seem to be static only, and the manuscript did not provide many expressions except for the neutral
We acknowledge our current method does not enable direct animation, as discussed in the limitations. This is a trade-off for increased generalization ability: instead of optimizing within the pre-trained parametric space of FLAME, we only use it as the density initialization for NeRF. However, expression edits are possible using IESD, as Fig. 9 shows. Despite lacking direct animation, our edited expressions demonstrate a reasonably consistent appearance and identity.
Please refer to Sec. 1.1 of the above general response for more details.
## 4.2. Specific editing
> I am wondering if it is possible to do a few specific editing of human heads, such as changing the hairstyle, wearing different glasses
In Fig. 10, we have now provided several additional editing results as suggested, including hairstyle change, beard change, and adding sunglasses. It is evident that our method can achieve satisfactory editing results.
> and perhaps a few other geometric adjustments (e.g., getting fatter or getting thinner)
> Is it possible to generate/edit human head avatars with different shapes?
We found it is generally difficult to do geometric adjustments via editing since it is challenging and ambiguous to describe a desired geometry via text prompts alone. However, thanks to our NeRF initialization from the FLAME model and corresponding landmark control in the diffusion prior, our method supports geometric variation in generation by changing the FLAME model. We only used the canonical FLAME model by default in the submitted manuscript. To demonstrate our method's ability for geometric adjustment, we have further provided several examples with different FLAME models as the initialization, as shown in Fig. 12. We will add this result in the revision.
## 4.3. Definition of IESD
> Is there a clear definition of the so-called identity awareness of the editing process?
We define identity awareness as the ability to make desired modifications to an avatar's appearance based on editing instructions while preserving key facial features and attributes that represent the avatar's core identity and preventing undesired changes unrelated to the edit.
As introduced in L202-213, the proposed IESD is a variant of SDS, characterized by blending two different scores for noise prediction.
> Based on my understanding, the current method simply mixes the style of the original appearance and the desired editing to generate the facial texture based on the ControlNet-based InstructPix2Pix.
We note that it is more than a style mix of the original appearance and the desired editing, because the gradient predicted by InstructPix2Pix for editing is dominant in the desired editing area, so it's capable of local region editing, as shown in Fig. 10.
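The core of this blending operation, a weighted average of two predicted noises, could be sketched as follows. The weight name `w_edit` and the dummy tensors are illustrative assumptions, not the paper's actual implementation; the precise formulation is Equation 7 in the manuscript.

```python
import numpy as np

def iesd_noise(eps_edit, eps_orig, w_edit):
    """Blend the editing score (e.g., from InstructPix2Pix) with the
    identity-preserving score (e.g., from the landmark ControlNet)."""
    assert 0.0 <= w_edit <= 1.0
    return w_edit * eps_edit + (1.0 - w_edit) * eps_orig

rng = np.random.default_rng(0)
eps_edit = rng.standard_normal((4, 64, 64))  # dummy noise prediction (edit branch)
eps_orig = rng.standard_normal((4, 64, 64))  # dummy noise prediction (identity branch)

blended = iesd_noise(eps_edit, eps_orig, w_edit=0.7)
```

With `w_edit = 0` the avatar's original appearance is fully preserved, while `w_edit = 1` applies the edit without identity constraints; intermediate values realize the trade-off described above.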
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal and additional questions
Comment: Thanks for the rebuttal. After reading the other reviewers' comments, I realize we have similar concerns about the method pipeline.
As for the method details, I have more questions, and I hope the authors can provide a thorough explanation.
(1) How is the "Mixing" in Figure 2 implemented? The manuscript does not seem to provide corresponding formulas or descriptions for it.
(2) In P5 L169, what are the criteria for selecting vertices? Is it random or algorithmic?
(3) In P6 L186, where was the data collected from? Is it public or private? Was the selection criteria completely random?
(4) In P6 L207, do I and C correspond to the two ControlNets in Figure 2(c) of the PSD? The authors claim they are identical, but based on what is mentioned in section 3.3 and the original ControlNets, they should represent different tasks.
(5) In P6 L211, according to Figure 2, the reference image should be renderable from both DMTET (high-resolution) and Coarse Nerf. Why is it rendered only from Coarse Nerf in this case?
I will raise my rating if all unclear aspects are adequately explained.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. We provide explanations with respect to these additional questions as follows:
1. Apologies for the confusion. The "Mixing" in Figure 2 refers to Equation 7, which is the weighted averaging of the two predicted noises for IESD.
2. These vertices are selected by a pre-defined index set, which includes the vertices that correspond to the contours of the face, eyes, nose, and mouth. This selection process is used to format the projected dense landmarks in the same style as the sparse landmark maps used for training ControlNet. The procedure is the same in all experiments.
3. The images in the tiny dataset are manually downloaded from Google search. Please find more details and example images in the submitted supplementary material.
4. Yes, I and C correspond to the two ControlNets in Figure 2(c) of the PSD. We apologize for any misunderstanding - we did not claim I and C are identical. We would be grateful if reviewer `#thuE` could elaborate so we can address this point fully.
5. As shown in Figure 2, we use a frozen coarse NeRF to render reference images and a fine DMTET to learn the newly edited 3D representation. This means that the frozen coarse NeRF-rendered images do not change, while the fine DMTET-rendered images do. If we used the DMTET-rendered images as the reference images, the reference image at iteration $i$ would be the edited result from iteration $i-1$, which would endlessly accumulate the editing appearance and thus lead to unsatisfactory results.
Please let us know if these explanations address your questions. We remain open to further discussions and warmly welcome any additional feedback or inquiries you might have. | Summary: The paper proposes a novel method for performing text-to-3D generation for specifically human (or humanoid) heads. These generated heads can be further edited and refined using more fine-grained detailed text prompts while still preserving the identity of the generated asset. This is accomplished with two main contributions: using a landmark-aware score distillation loss in order to ensure that the generated heads roughly align with a template head mesh (FLAME), and introducing a loss function which balances text-based editing with a new description with the original description used to generate the asset. The generated results fix a number of artifacts with existing text-to-3D generation methods specifically for heads. For example, the generated heads do not suffer from the multi-face problems, and the strong prior given by a parametric head model ensures that the generated results are geometrically correct. Additionally, the results can be edited while still maintaining the identity of the original generated results. This is shown in comparison to a number of baseline methods, which are outperformed.
AFTER REBUTTAL:
I have read the rebuttal and appreciate the detailed response. I appreciate the additional quantitative evaluations, as I believe this was a limitation of the method that has been addressed. However, I still think the quality is a bit limited, but this is the case for most text-to-3D methods. The additional-comparisons weakness has been addressed as well. However, I don't see justification for the more stable training; I would suggest removing this claim unless it can be verified more rigorously. While Fig. 11 is nice, I am not sure it is enough to base an entire claim on.
Strengths: In my opinion, the strengths of the paper are that:
1. The presentation of the paper is extremely high quality. The introduction and related work sections are comprehensive, motivate the problem well, and clearly delineate the contributions provided in the paper. The methods section is very clearly described and is simple to follow for those familiar with the field of generative 3D. I view this as very important because it is much easier to glean information from the paper, and future work in text-to-3D (potentially for avatar generation) would be much more likely to build off of this method.
2. The contributions are clearly stated, and to the best of my knowledge they are novel, and they seem like they could be relatively significant.
- The idea of integrating ControlNet together with a score distillation loss makes intuitive sense for gaining better control over generated 3D assets, and I have not, to the best of my knowledge, seen it proposed elsewhere. Maintaining control over the structures which are generated from text-to-3D, including ensuring they have higher quality and less noisy geometry, is a very important problem in text-to-3D generation and I view this solution as potentially having a large impact on those working in this field.
- While balancing losses between an editing prompt and original prompt seems simple, it does not seem to be used in other text-to-3D methods and seems like it could be useful for text-based editing of already fitted representations in other applications.
3. I find the ablation study to be high quality, and ablate all of the parts of the method which are relevant: introducing ControlNet into the SDS loss, the representation chosen and coarse-to-fine optimization, and the hyperparameters for balancing the editing losses. These are all clearly ablated, along with other smaller contributions such as textual inversion for generating the back of the head.
Weaknesses: In my opinion, there is only one main weakness with the paper, which is that the evaluations and comparisons to baseline methods do not seem complete.
1. There are no quantitative comparisons. While the user study is insightful, methods like DreamFusion proposed the CLIP-R metric. Is there a reason why this was not used in this paper? Does the metric do a poor job at quantifying what it is trying to measure? Additionally, some quantitative results for evaluating the editing quality would be useful: CLIP-R (or something like it, if CLIP-R is not useful) can be used to ensure that the identity is still preserved despite editing with an additional prompt (and perhaps also show that the rendered images are pushed closer to a desired fine-grained editing prompt).
2. The baseline methods are not evaluated entirely fairly. For example, I don't see any of the baseline methods evaluated for editing quality. What happens if one of the generated examples (e.g., those from Fig. 4) goes through a fine-grained text-based edit? How much will the identity be degraded, or the edit not be done sufficiently? I think this is extremely important to show because without it, I don't know what is the state-of-the-art that is currently being improved on and thus don't know if the proposed method is even better.
3. It is mentioned that the optimization of the proposed method is “more stable” (L244-245) than baseline methods. However, this claim is not justified anywhere. This would be an extremely important contribution for text-to-3D (where optimization is notoriously brittle), so some quantification of this would be very impactful if it could be shown. For example, ranges of hyperparameters trained with which still lead to a good solution, or error bars on generated results, or failure cases.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I do not have additional questions brought up (see weaknesses for evaluations questions). The paper is exceptionally clear in describing the method and contributions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper adequately addresses the limitations of the proposed method. The first limitation, which mentions that the results are “non-deformable” is insightful as this was a question which is natural to ask considering that the FLAME template is a deformable model for heads. Additionally, the generated results still are not completely photorealistic, such as those representations which have been fit from captured data. Some additional failure cases of the method would be interesting to see in order to understand better what the limitations are of the method and which piece would be the biggest bottleneck for someone who wanted to integrate this work into their application.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 3. Response to reviewer `#ztub`
We really appreciate the feedback provided by reviewer `#ztub`.
Thanks for finding our presentation clear and our contributions novel and regarding our method as a fundamental basis that future work could "build upon".
Below we address the proposed concerns about evaluations and comparisons - **we regularly refer to the general response above and the provided one-page pdf with figures:**
## 3.1. More quantitative evaluation
Please refer to Sec. 1.4 of the above general response.
## 3.2. Baseline methods evaluated for editing quality
As suggested, we have now provided several editing results produced by the baselines via prompt modification. As shown in Fig. 13, bias in editing is a common problem that all baselines suffer from. Despite their variations in representation and optimization, these methods share the same guidance function (i.e., $\mathcal{L}_{\mathrm{SDS}}$) of the diffusion prior, which is where the bias comes from. In this paper, we propose IESD, which allows the guidance function to incorporate information from two complementary sources: 1) the original image gradient that preserves identity, and 2) the editing gradient that captures desired modifications. By factoring in both terms, our IESD enables more explicit and direct control over the editing process compared to the conventional guidance derived from the input alone.
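As an illustration of the two-term guidance just described, the mixing can be sketched as a convex combination of the two predicted noises, plugged into the usual SDS-style gradient. The weight $w$ and the conditioning symbols $c_{\mathrm{orig}}$, $c_{\mathrm{edit}}$ are shorthand introduced here for illustration, not necessarily the exact notation of the paper's Equation 7:

$$\hat{\epsilon}_{\mathrm{IESD}} = w\,\epsilon_\phi(x_t;\, c_{\mathrm{edit}}) + (1-w)\,\epsilon_\phi(x_t;\, c_{\mathrm{orig}}), \qquad \nabla_\theta \mathcal{L}_{\mathrm{IESD}} \propto \big(\hat{\epsilon}_{\mathrm{IESD}} - \epsilon\big)\,\frac{\partial x_t}{\partial \theta}.$$

A larger $w$ favours the editing gradient (desired modification), while a smaller $w$ favours the original gradient (identity preservation).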
We will further clarify this in the revised version.
## 3.3. Stability comparisons
> It is mentioned that the optimization of the proposed method is “more stable” (L244-245) than baseline methods. However, this claim is not justified anywhere.
We observed that all baselines tend to have divergent training processes, as they do not integrate a 3D prior into the diffusion model. Taking two shape-guided prior methods (i.e., Latent-NeRF and Fantasia3D) as examples, we compare their generation results and ours across different random seeds. We conduct comparisons under the same default hyper-parameters and present the results in Fig. 11. We can observe that prior methods need several runs to get the best generation, while ours achieves consistent results across different runs. Our method thus features stable training, without the need for cherry-picking over many runs.
> For example, ranges of hyperparameters trained with which still lead to a good solution
All the experiments, for all different methods, presented in the submitted manuscript and rebuttal were conducted under the same default hyper-parameters.
## 3.4. Failure cases and bottlenecks
Thanks. We have now provided several failure cases in Fig. 14. As discussed in the limitation part of the manuscript, our method is not perfect; for example, 1) it inherits the bias from the diffusion model, e.g., it cannot correctly generate characters from Eastern culture like "Sun Wukong"; 2) it cannot handle highly detailed textures due to the mode-seeking property of $\mathcal{L}_{\mathrm{SDS}}$, e.g., "Freddy Krueger".
> which mentions that the results are “non-deformable” is insightful as this was a question which is natural to ask considering that the FLAME template is a deformable model for heads.
Please refer to Sec. 1.1 of the above general response.
> the generated results still are not completely photorealistic
Please refer to Sec. 1.2 of the above general response.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed clarification and additional quantitative and qualitative evaluations of the method and baselines. This has certainly improved my opinion on the evaluation quality of the method. I do not have additional questions on this.
Overall, I am now convinced the quality is an improvement over state-of-the-art, at the cost of limiting to a specific class of objects: heads. However, I do feel that the lack of being able to animate the result severely limits the applicability: a method specifically designed for heads should be able to include head-specific features such as expression editing. With both of these in mind, I still feel slightly positively about the paper in its current state.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply - we are pleased our responses addressed your questions.
We agree with you that animation is valuable for downstream tasks, but it should not compromise performance or generalization capabilities. At this stage, our framework supports expression editing, albeit not full animation. Enabling animation while retaining generalization ability remains an important challenge, as discussed above. Exploring optimal solutions to unlock animation without sacrificing quality or generalization will be a key focus of our future work. Two potential solutions we aim to investigate are: 1) finding improved head representations that retain mesh structure to enable animation while maintaining generalization capacity; and 2) utilizing auto-animation tools in the off-the-shelf graphics pipeline as post-processing on current outputs to add motion. Your insightful comments will help strengthen our work. We are grateful for your time and input. | Summary: The authors propose a solution for generating human heads based on a text description. The technology is based on the popularised concepts of a control signal for a pre-trained diffusion model and a statistical 3D model to guarantee a consistent head. Moreover, the authors present the concept of the "back" of the head with a textual inversion. The qualitative and quantitative evaluation shows the superiority of the method over existing competitors.
Strengths: - There are plenty of interesting ideas on how to build a domain-specific solution incorporating domain knowledge: 1) a statistical 3D model (e.g., FLAME), 2) improved score distillation with an additional special token, 3) editing directions
- The authors demonstrate a wide set of experiments to evaluate their method, including a small user study
Weaknesses: There is a huge limitation of the method compared to the baselines
- The user study can be biased: a small set of people, no description of the task, and based on Figure 4 it is extremely easy to pick out the introduced method.
- Images are not photorealistic; furthermore, the results have a disadvantage from the volumetric rendering (small noise is still visible on all contours)
- Results are biased in many directions: a strong emotion bias despite neutral prompts (Figure 1), colour issues for the shadows, and unrealistic light baked into the texture.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Most of the following questions can improve the manuscript:
- Why do you use DMTet for discretization? The motivation is missing at line 144+
- In the paragraph near L155 you elaborate on the source of the existing issues. Do you think the main problem is ambiguity, and that rendering each representation for uniformly sampled cameras will help disambiguate the gradients during denoising?
- Why is the back-view concept so important if you introduce the landmark-based ControlNet and the FLAME structure? Can you elaborate on why it is not enough to use just the full head with landmarks? Figure 6 slightly contradicts the original motivation of the textual inversion concept for the "back".
- Could you explain in more detail the limitation from InstructPix2Pix (L305)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - All faces are only in a neutral expression, which does not always correlate with the textual description (e.g., a neutral clown)
- The overall image quality is biased towards unrealistic colours
- Most of the ideas will work only for the presented setup of heads
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 2. Response to reviewer `#ooKH`
We thank reviewer `#ooKH` for recognizing that our solution is "interesting" and our qualitative and quantitative evaluation "shows superiority".
We address the proposed questions as follows - **we regularly refer to the general response above and the provided one-page pdf with figures:**
## 2.1. Details about the user study
We conducted the user studies in the form of a questionnaire supported by Google Forms. Each volunteer was presented with 20 randomly selected generated results as rotating videos, with the following task description:
```You will be presented with 20 sets of results with each generated by 5 text-to-3D methods. Your task is to evaluate and rank the results based on three dimensions: consistency with the provided prompt, texture quality, and geometry quality. For each set of results, please assign a score of 5 to the best candidate and 1 to the worst, with intermediate scores for the others.```
### 2.1.1. Updated results with additional volunteers
>The user study can be biased - small set of people
To reduce the potential bias in the user study, we have now expanded the evaluation to include an additional 22 volunteers, bringing the total number of participants to 42. The updated user study statistics are presented in Fig. 8, indicating that our method consistently achieves the highest ranks.
### 2.1.2. Additional quantitative evaluations
To present more comprehensive evaluations, we have also now conducted a quantitative evaluation using CLIP-R and CLIP-Score as objective metrics. The results provide further quantitative evidence that supports the subjective user study's findings. Please refer to Sec. 1.4 of the above general response for more details.
## 2.2. Noises on the textures
Please refer to Sec. 1.3 of the above general response.
## 2.3. Biased results in emotion and color
Please refer to Sec. 1.2 of the above general response.
## 2.4. Motivation of DMTet for discretization
We opted for a mesh-based approach to achieve enhanced resolution in the fine stage. Yet, when explicit meshes are directly derived from implicit representations like NeRF using marching cubes, they may yield lower-quality surfaces, especially for the boundary. Introducing DMTet, a differentiable mesh representation, into the optimization pipeline offers the potential to refine the geometry extracted from NeRF. We will elaborate this consideration in the forthcoming revised edition.
## 2.5. Source of the existing issues
The main reason for the Janus problem is the absence of a 3D prior in the diffusion model, since it is trained on 2D images without camera-pose conditioning.
Better sampling of camera poses might alleviate this problem to some extent, but a relatively large batch size is needed to sample as many poses as possible to better resolve the ambiguity (e.g., Fantasia3D uses a batch size of 24 over 8 Nvidia RTX 3090 GPUs to guarantee a satisfactory result).
Instead, we focus on addressing the core cause of the Janus problem by integrating a 3D head prior into the pre-trained diffusion model via the projected facial landmarks. This solution offers improved training efficiency and is more user-friendly.
We will further explain this in the forthcoming revised edition.
## 2.6. Importance of $\texttt{\<back-view\>}$ concept
Thanks for this question. Because the pre-trained diffusion model only supports 2D information conditions, we must project 3D landmarks to a 2D landmark map. However, the projection brings ambiguity over front and back views. Concretely, since the 2D image dataset used for training landmark ControlNet mostly contains only front or side face views, the model tends to generate a front-view face (i.e., a dataset bias), given such an ambiguous 2D landmark map. The proposed $\texttt{\<back-view\>}$ concept is designed exactly to resolve this bias.
> Can you elaborate why it is not enough to use just the full head with landmarks?
Using full-head landmarks and considering their self-occlusion might be a possible solution, but this would require more robust full-head landmark registration (hard to collect), along with the need for retraining the ControlNet on extra data. Instead, as a cheaper solution, we leverage available facial landmarks and ControlNet capabilities in this work. We will explore more elegant landmark usage in future work.
>The Figure 6 slightly contradicts the original motivation of the textual inversion concept for the “back”.
For the comment regarding Fig. 6, we intend to show that landmark control is relatively more important than the $\texttt{\<back-view\>}$ concept.
Without landmarks, it would be hard to distinguish different views, and the model would thus always tend to generate biased front views even with the help of the $\texttt{\<back-view\>}$ concept.
In summary, landmarks should play a more central role in conveying 3D pose information to the 2D diffusion process, whilst the $\texttt{\<back-view\>}$ concept is also indispensable for alleviating the ambiguity brought by the projection process. We will further clarify this in the revised manuscript.
## 2.7. Limitation from InstructPix2Pix
As mentioned in the original paper, InstructPix2Pix is limited by the visual quality of the Prompt2Prompt-generated dataset. Specifically, it struggles with viewpoint changes, makes excessive image alterations, fails to isolate objects in some cases, and cannot easily reorganize or swap objects. These limitations carry over to our pipeline since we use InstructPix2Pix as is.
We showcase an unsatisfactory editing example in Fig. 14. Importantly, our framework can easily incorporate any improvements made in the InstructPix2Pix and its follow-up works, e.g., MagicBrush[1].
[1] MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing.
---
Rebuttal Comment 1.1:
Title: Reply and final note.
Comment: Thanks to the authors for their responses, I hope most of the common unclear moments will be added to the final text. After reading the rebuttal and comments from reviewers, I would keep my original ratings. | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewers' thoughtful feedback recognizing the presentation, novelty, and performance of our method.
The reviewers suggested additional experiments and illustrations to highlight strengths, clarify limitations, and illustrate future directions. We are pleased to have conducted all of these valuable recommended experiments, as outlined in the general and per-reviewer responses. Next, we would like to first provide some general responses to address common questions. **To avoid any confusion with the figures in the original submission, we have numbered the new figures in the attached document starting from Fig. 8 onwards.**
# 1. General response
## 1.1. Avatar expressions [Reviewers `#ztub`, `#thuE`, `#Gcqm`]
In its current form, our method does not support direct animation of the generated avatars, though it enables modest facial expression changes through editing (as shown in Fig. 9). This limitation arises because we initialize the NeRF density using only the FLAME model, without preserving the mesh structure and point correspondences. Our motivation for this design choice is to maximize the generalization capability of the entire pipeline. In principle, we could employ any parametric head model (e.g., NPHM[1]) as the 3D representation and optimize an avatar within its parametric space (i.e., shape and expression parameters). However, constraining the optimization to a pre-trained parametric space in this manner would restrict the ability to generalize to out-of-distribution data (e.g., non-human-like avatars). We believe there is an inherent trade-off between generalization ability and structural consistency that warrants further investigation. As future work, we will explore solutions for the best compromise between these two competing objectives.
## 1.2. Biased results [Reviewers `#ooKH`, `#ztub`]
Indeed, the current pipeline exhibits biases in aspects like color, appearance, and emotion, inherited from two sources: 1) the diffusion prior, which tends to memorize training data during generation[2]; and 2) the mode-seeking nature of the SDS loss, which requires large CFG values to boost fidelity at the cost of high saturation and unrealistic colors. As demonstrated in the manuscript, these biases plague all existing text-to-3D methods and are not specific to our model, though IESD can partially mitigate them during editing. To our knowledge and in our experience, some concurrent works might be able to alleviate these biases: a stronger diffusion model[4], an image reconstruction loss[5], and more advanced score distillation[6].
## 1.3. Noises and artifacts in generated avatars [Reviewers `#ooKH`, `#Gcqm`]
We do acknowledge that the current results are far from perfect, even though they already outperform prior alternatives by a large margin. This remains an unsolved problem in heavy need of further investigation and innovation. More specifically, this is due to the challenging mode-seeking nature of this zero-shot task, where real albedo textures are not available for model optimization. Compared with NeRF-based methods rendered at a much smaller $64\times 64$ resolution, the noise in our setting becomes more apparent because we use a mesh-based representation at an $8\times$ higher resolution in the fine stage. We hypothesize that further fine-tuning the diffusion prior on a large dataset with manually collected albedo maps[3] could help mitigate these issues, which we leave for future work. While not yet perfect, we believe our method demonstrates strong zero-shot performance and establishes a promising direction for high-fidelity generative avatar modeling without albedo supervision. We argue this already makes a significant contribution to the community.
## 1.4. More quantitative evaluation [Reviewers `#ooKH`, `#ztub`]
### 1.4.1 Generation evaluation
We appreciate the insightful suggestion. Following the suggestion, we computed CLIP-R (CLIP-L/14) and CLIP-Score metrics for all methods using the 30 text prompts from the generative process. As shown in Tab. 1, our approach substantially outperforms the competitors on both metrics. This aligns with and further substantiates the subjective superiority demonstrated in the user study. We will include this evaluation in the revised version.
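For readers unfamiliar with the metric, CLIP-R precision (proposed alongside DreamFusion) counts a rendering as correct when its own prompt is the best-matching text among all evaluation prompts. A minimal, hedged sketch of the scoring step; the placeholder similarity matrix stands in for real CLIP image-text similarities:

```python
# Sketch of CLIP-R retrieval precision: rendering i is "correct" when its
# own prompt (column i) has the highest similarity in row i. The similarity
# values below are toy stand-ins, not real CLIP outputs.
import numpy as np

def clip_r_precision(sim):
    """sim[i, j] = similarity of rendering i to prompt j; row i's true prompt is j=i."""
    return float(np.mean(np.argmax(sim, axis=1) == np.arange(sim.shape[0])))

# 3 renderings x 3 prompts; renderings 0 and 2 retrieve their own prompt.
sim = np.array([[0.9, 0.1, 0.2],
                [0.4, 0.3, 0.5],
                [0.1, 0.2, 0.8]])
precision = clip_r_precision(sim)  # 2 of 3 correct -> 0.666...
```

With real models, `sim` would hold the cosine similarities between CLIP embeddings of each rendering and of all 30 evaluation prompts.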
### 1.4.2 Editing evaluation
We didn't provide a quantitative evaluation for editing because we consider editing to be fundamentally subjective, and there is no standard protocol to measure its performance. Regardless, as suggested, we have now adopted the CLIP Directional Score, as described in InstructPix2Pix and StyleGAN-NADA, to measure editing performance.
It measures how much the change in text captions agrees with the change in the images. Concretely, for the directional score, we encode a pair of images (the original and edited 3D models, rendered at a given viewpoint), as well as a pair of text prompts that describe the original and edited scenes, e.g., "a DSLR portrait of Saul Goodman" and "a DSLR portrait of Saul Goodman dressed like a clown". We compare our method with B3-B5 mentioned in Fig. 7 due to the absence of existing editing baselines, with the scores calculated on 10 editing results shown in Tab. 1. Please note that this metric might also inherit the bias from CLIP. We will keep looking for better evaluation metrics and improve this evaluation in the future.
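The directional score computation described above reduces to a cosine similarity between the image-space edit direction and the text-space edit direction. A hedged sketch on toy stand-in embeddings; in real usage, the four vectors would be CLIP encoder outputs for the two renderings and the two prompts:

```python
# Cosine similarity between the image-space and text-space edit directions,
# as used by the CLIP Directional Score. Toy unit vectors replace real
# CLIP embeddings so the arithmetic is runnable.
import numpy as np

def directional_score(img_orig, img_edit, txt_orig, txt_edit):
    """Cosine similarity between the image and text edit directions."""
    d_img = img_edit - img_orig
    d_txt = txt_edit - txt_orig
    denom = np.linalg.norm(d_img) * np.linalg.norm(d_txt) + 1e-8
    return float(np.dot(d_img, d_txt) / denom)

# Toy 4-D "embeddings": the edit moves both image and text along the same axis,
# so the directions agree and the score is close to 1.
img_orig = np.array([1.0, 0.0, 0.0, 0.0])
img_edit = np.array([1.0, 1.0, 0.0, 0.0])
txt_orig = np.array([0.0, 0.0, 1.0, 0.0])
txt_edit = np.array([0.0, 1.0, 1.0, 0.0])
score = directional_score(img_orig, img_edit, txt_orig, txt_edit)
```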
[1] Learning Neural Parametric Head Models.
[2] Extracting Training Data from Diffusion Models.
[3] Dreamface: Progressive generation of animatable 3d faces under text guidance.
[4] DeepFloyd IF. https://stability.ai/blog/deepfloyd-if-text-to-image-model.
[5] TextMesh: Generation of Realistic 3D Meshes From Text Prompts.
[6] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation.
Pdf: /pdf/de03e13a31336e3529243fcd449cffe187be6a6d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Cascading Contextual Assortment Bandits | Accept (poster) | Summary: This paper studies the contextual cascading assortment bandit problem, and proposes low regret algorithms for this problem.
Strengths: 1. This paper studies a novel problem by combining ideas from assortment bandits and cascading bandits.
2. The paper introduces some new algorithmic ideas.
Weaknesses: It seems to me the optimal cascading assortment should obey some structural properties, i.e., should similar items appear in the same assortment? Should different items appear in the same assortment? Should we pair desirable items together? Or should we pair high-weight and low-weight items together for an ‘anchoring effect’? It will be interesting to hear insights on this.
The writing contains noticeable grammatical errors and typos
The reward function in this work seems to greatly simplify the problem and removes the difficulty in prior work. See questions section below.
---------------
Update. Some issues in writing include:
1. Several sentences in the paragraph from lines 215-227.
2. Line 272: the $Ht^{-1}$, I assume, should have $t$ as a subscript (i.e., $H_t^{-1}$).
3. Not writing out the constants in lines 268 and 278 is strange.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The first claim the paper makes is that they are able to remove the dependence on K in the cascading bandit problem by using a swapping technique. However, I believe this depends on the specific loss function used; therefore, this claim is not entirely accurate. Specifically, the swapping technique only holds when the order does not matter, but this does not generally hold for cascading bandits.
2. The second claim is the regret removes $\kappa$ term. However, the regret definition is different from assortment bandits, therefore is this claim really accurate?
3. In section 2.4, is the description talking about the cascading bandit problem or cascading assortment bandit problem?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The author did not address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and for your valuable feedback. We truly hope that we can resolve any doubts or misunderstandings about our results, if there are any. Here are our responses to each comment and question.
**Structural Properties of Optimal Cascading Assortment?**
- The model is clearly defined in Section 2.2. Other than the MNL choice model structure stated in lines 127-128 and the cascading interaction model, we do not impose any other structure on the optimal assortment. With all due respect, we do not see why this has to be considered a weakness.
**"The reward function in this work seems to greatly simplify the problem and removes the difficulty in prior work"**
- No, it does not; we strongly disagree with this comment. As we show in our paper, our proposed model encompasses two prominent existing combinatorial bandit instances, cascading bandits ($K > 1, M=1$) and assortment bandits ($K=1, M>1$), as well as single-action selection bandits ($K=1, M=1$) such as logistic bandits and multi-armed bandits with binary feedback. We propose a more general and more complicated model that is more challenging for regret analysis than the existing combinatorial parametric bandits. On top of that, we show even tighter and stronger regret bounds, overcoming the longstanding sub-optimal dependence.
**Questions**
**Q1: Swapping Technique and Dependence on $K$?**
- You have stated that the swapping technique we propose has limitations in eliminating the dependency on $K$. Unfortunately, that is not correct. The swapping technique is intended to remove the worst-case scanning probability, denoted as $p^*$. The swapping technique has no relation to removing $K$ at all.
- Now, we will show why dependence on $p^*$ appears and how we can mitigate this. See Section 4.2.1 in [16]. For contextual cascading bandits (with no assortments), the authors in [16] show that the regret at round $t$ is upper bounded as follows:
\begin{equation} \mathcal{R}^\alpha (t, S_t ) \le 2B \sum_{i \in S_ t } \beta_t (\delta) \|x_{ti}\|_{V_t^{-1}}. \end{equation}
- Since $V_t$ contains information only on the observed base arms, $i \in \lbrack O_\tau \rbrack, \forall \tau\in \lbrack t\rbrack$, the summation over unobserved arms cannot be controlled. [16] copes with this issue by restricting to the event $O_{t} = |S_t|$ via the worst-case scanning probability $p^*$ as below, which then results in the $\frac{1}{p^*}$ dependence in the final regret bound of [16].
\begin{align}
\mathbb{E} \lbrack \mathcal{R}^\alpha (t, S_t ) \rbrack
\le \frac{1}{p^\star} \mathbb{E} \lbrack \mathcal{R}^\alpha (t, S_t ) \mathbb{1} \lbrace O_t =|S_t | \rbrace \rbrack
\le \frac{2B}{p^\star} \mathbb{E} \lbrack \sum_{i \in \lbrack O_t \rbrack } \beta_t (\delta) \|x_{ti}\|_{V_t^{-1}} \rbrack.
\end{align}
- However, in our case, the instantaneous regret at round $t$ is upper bounded as follows:
\begin{equation*}
\mathcal{R}^\alpha (t, S_t ) \le f(S_t , u_t ) - f(S_t , w_t^\star) \le 2\Big( \frac{K}{K+1}\Big)^{K+1} \max_{i \in S_t } \beta_t (\delta) \|x_{ti}\|_{V_t^{-1}}.
\end{equation*}
- As we mentioned in Section 4.1 (lines 242-244), the assortment containing the item with the largest $\|x_{ti}\|_{V_t^{-1}}$ is always examined, since our proposed swapping technique places it in the first position of $S_t$.
- Due to the max operator and swapping technique, we can obtain the below inequality:
\begin{equation*}
\mathcal{R}^\alpha (t, S_t ) \le 2\Big( \frac{K}{K+1}\Big)^{K+1} \max_{i \in \lbrack O_t \rbrack} \beta_t (\delta)\|x_{ti}\|_{V_t^{-1}}.
\end{equation*} Thus, $p^\star$ does not appear in our regret analysis.
* The swapping technique exploits the structure of the cascading model, in which changing the order of assortments within the cascade does not affect the expected reward. Such invariance is one of the key characteristics of the cascade model and is widely used in the existing literature [11, 12, 15, 21, 24].
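To make the role of the swap concrete, here is a minimal sketch (all function and variable names are our own, hypothetical choices, not taken from the paper) of reordering a cascade so that the assortment containing the item with the largest $\|x_{ti}\|_{V_t^{-1}}$ is examined first; by the order-invariance above, this leaves the expected reward unchanged.

```python
import numpy as np

def swap_most_uncertain_first(assortments, features, V_inv):
    """Hypothetical sketch: reorder a cascade so the assortment containing
    the item with the largest V^{-1}-weighted feature norm comes first.
    Under the cascade model the order does not change the expected reward,
    but this guarantees the most uncertain item is always examined."""
    def max_norm(assortment):
        # ||x||_{V^{-1}} = sqrt(x^T V^{-1} x) for each item in the assortment
        return max(np.sqrt(features[i] @ V_inv @ features[i]) for i in assortment)

    k_star = max(range(len(assortments)), key=lambda k: max_norm(assortments[k]))
    # swap the arg-max assortment into position 0
    reordered = list(assortments)
    reordered[0], reordered[k_star] = reordered[k_star], reordered[0]
    return reordered

# toy usage with made-up data
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))          # item feature vectors
V_inv = np.eye(3)                    # inverse Gram matrix (identity here)
cascade = [[0, 1], [2, 3], [4, 5]]   # three assortments of two items each
print(swap_most_uncertain_first(cascade, X, V_inv))
```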
**Q2: "the regret definition is different from assortment bandits, therefore is this claim really accurate?"**
- First, we would like to clarify the premise of your question that "the regret definition is different from assortment bandits." If you are referring to the use of $\alpha$-approximate regret in our work (and in most cascading bandit literature) and the regret using exact optimization in assortment bandits, then we can set $\alpha = 1$, which is the regret using an exact optimization without approximation. Hence, regrets in our setting and assortment bandits (without cascade) are comparable when $K = 1$ which is a special case of our problem setting. Therefore, the claim is accurate.
- Note that due to the complexity of computing the exact optimal solution in combinatorial optimization in general, it is very standard to use $\alpha$-approximate regret, denoted as $\mathcal{R}^{\alpha}$, in a wide range of the combinatorial bandit literature [7, 16, 22, 25, A, B]. To this end, in Appendix D, we even prove that an approximate solution using a greedy algorithm for the cascading assortment optimization problem gives a 0.5 approximation of the optimal solution, which we believe is an independent contribution.
**Q3: Is Section 2.4 about Cascading bandit problem or Cascading assortment bandit?**
* The notions of $\alpha$-approximation oracle and $\alpha$-regret apply to our problem setting, the cascading assortment bandit problem, which also includes both cascading bandits and assortment bandits. This is a very common notion in the cascading and combinatorial bandit literature, as seen in [7, 16, 22, 25, A, B] and many more.
---
**References**
[A] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and applications. ICML, pp.151–159. PMLR, 2013.
[B] Andi Nika, Sepehr Elahi, and Cem Tekin. Contextual combinatorial volatile multi-armed bandit with adaptive discretization. AISTATS, pp.1486–1496. PMLR, 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. However, I don't feel my questions were addressed. I will rephrase my questions.
Q1. The reward function you are using is specified in line 134. The related work [12; 16] and [11] were cited. [11;12] considers the non-contextual setting, and weights can only be 0/1. The reward is conjunctive / disjunctive; in either case, the order in the cascade does not matter. [16] considers the contextual setting with a more general reward function. And as [16] observed, using their more general reward function, the order does matter in the cascade, affecting both the feedback and the reward.
If I understand correctly, the reward function you are using follows more closely with [11;12], with the binary weight replaced by the probability of being clicked, and the reward being 1 if at least one item is clicked. Hence in your setting the order does not matter, and it is natural to move more uncertain items to the front as it aids parameter estimation (a similar phenomenon has been observed in experiments in [11], where low-preference items come early, as this helps learning).
Now, it is claimed previous bounds depend on $K$ and this paper has improved this. I don't think this is entirely accurate for the following two reasons:
1. In the non-contextual version, the complexity will depend on the number of arms $K$; this is unavoidable. In the contextual version, the parameter governing the complexity is $d$, the dimension, rather than the number of arms $K$. (As an analogy, $K$ appears in multi-armed bandit regret bounds and $d$ appears in linear bandit regret bounds.)
2. The paper is using a simpler reward function compared with [16].
Q2. I understand the paper does not impose structural constraints on the cascading assortment. My question was, given the weights (assume latent $\theta$ is known), is there anything interesting we can say about the optimal solution (would similar weights be grouped together into an assortment, or different weights be grouped together)? Hence, my original question was more of a characterization question, not a learning question.
To be clear, I do think the paper makes interesting contributions, I am just worried that the claims of improvement over prior work are not entirely accurate.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer jQVq,
There seem to be fundamental misunderstandings in your comments. We genuinely hope that these can be addressed and resolved, and we approach this situation with an open attitude.
To be candid, the communication with you has been particularly frustrating for us as authors. Apart from the fact that these comments have arisen at such a late stage, the fact that your reply came without an acknowledgment of any errors in your initial assessment, and with clear indications of misunderstanding about the basics of our problem setting, is disheartening. For instance, interpreting \(K\) as the total number of items (arms) rather than as the cascade length casts doubt on how effectively our work can be evaluated. We sincerely hope that these discrepancies are the result of unintentional mistakes.
Despite the limited time remaining, our aspiration is to establish a foundation of shared understanding.
Given the time constraints imposed by the impending deadline for the discussion period, our priority is to ensure that our dialogue rests on a common grasp of the basics. With this objective in mind, **we would greatly appreciate your input on the following straightforward *yes/no* questions** so that we both know we share some common grounds. This approach should facilitate a more productive discourse, given the brevity of time:
1. Do you acknowledge that **your initial assertion concerning the relationship between the swapping technique and the cascade length \(K\) is incorrect**? (Please note that the purpose of the swapping technique is to eliminate dependence on $p^*$. Please refer to our prior response for clarification.)
2. Is it clear to you that **\(K\) represents the cascade length in our work, and is not indicative of the total number of items**? (There seems to be confusion where \(K\) has been mistaken for the total number of items (arms), as indicated in your comment: “In the non-contextual version, the complexity will depend on the number of arms, this is unavoidable”)
3. Do you comprehend that **Li et al. 2016 [16] assume that the learning agent (algorithm) possesses precise knowledge of the position effect for each position**?
Upon a positive response to all of the above questions, we can proceed with our subsequent discussions.
---
**We respectfully disagree with your assertion that "the paper is using a simpler reward function compared with [16].**
We kindly ask you to consider the substantial body of literature in the cascade bandits field that does not hinge on the assumption of **known** position effects, as employed in [16]. To illustrate this, we can readily cite numerous recent works [21, 24, C, D, E], among many others, including state-of-the-art results [Vial et al., 21]. It's pivotal to note that assuming *known position effects* is not universally recognized as standard practice, nor does it inherently denote technical advancement. Our approach adheres to the most prevalent form of cascading feedback that does not rely on the assumption of *known position effects*. Just because we do not use the assumption of "known position effects", should our work be considered simpler? We strongly dispute your claim.
Whether or not one assumes *known position effects* as in [16], our model considers *cascades of assortments* (subsets of multiple items per cascade), distinct from [16]'s focus on *cascades of single items*. (See Figure 1: [16] corresponds to the second figure, "Cascading Bandit", and our setting to the fourth one.) Furthermore, in [16], the click probability for each cascade position (a single item) is governed by a simple linear model, independently for each item. In contrast, our work employs the more intricate MNL choice model to compute the click probability for each assortment-based cascade, which can accommodate substitution effects and correlation in click probabilities among items. As evident from the technical results and proofs presented in the supplementary material (which can be cross-referenced with the analysis in [16]), the regret analysis in our study is significantly more intricate. Notably, our work is the first to explore cascades of MNL models.
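To illustrate the substitution effect of the MNL choice model mentioned above, here is a toy sketch (item names and weights are hypothetical, chosen only for illustration): adding an item to an assortment lowers the click probabilities of the other items, unlike independent per-item click models.

```python
def mnl_probs(weights):
    """MNL choice probabilities for one assortment:
    P(choose i | A) = w_i / (1 + sum_j w_j), where the '1' in the
    denominator is the no-click (outside) option."""
    denom = 1.0 + sum(weights.values())
    probs = {i: w / denom for i, w in weights.items()}
    probs["no_click"] = 1.0 / denom
    return probs

p_small = mnl_probs({"a": 1.0, "b": 0.5})
p_large = mnl_probs({"a": 1.0, "b": 0.5, "c": 2.0})
# adding item "c" lowers the click probabilities of "a" and "b"
# (substitution effect), and probabilities always sum to one
print(p_small["a"], p_large["a"])
```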
---
[21] Vial, Sanghavi, Shakkottai, and Srikant. "Minimax regret for cascading bandits." Advances in Neural Information Processing Systems 35, 2022.
[24] Zhong, Chueng, and Tan. “Thompson sampling algorithms for cascading bandits.” The Journal of Machine Learning Research, 2021.
[C] Kveton, et al. “On the value of prior in online learning to rank.” International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
[D] Zhong, Cheung, and Tan. “Best arm identification for cascading bandits in the fixed confidence setting.” International Conference on Machine Learning, 2020.
[E] Wan, Ge, and Song. “Towards scalable and robust structured bandits: A meta-learning framework.” International Conference on Artificial Intelligence and Statistics, 2023. | Summary: This paper studies a new contextual combinatorial multi-armed bandit model, which generalizes the contextual cascading bandits and assortment bandits. For the offline problem when item parameters are known, the authors propose a 0.5-approximate solution. For the online problem where parameters are not known a priori, the authors first propose a UCB algorithm, called UCB-CCA, which yields a regret bound of $\tilde{O}(\kappa^{-1}d\sqrt{T})$. To remove the unsatisfying $\kappa$, which may be related to the cascade length $K$, the authors further leverage Bernstein-type concentration and propose a new algorithm, UCB-CCA+, which removes the $\kappa^{-1}$ dependence and achieves regret bounds that are independent of $K$. Finally, the authors conduct experiments to show the practical efficacy of the proposed methods.
Strengths: Overall, I feel this is a decent work that is suitable to put in the combinatorial MAB literature.
1. From the model perspective, the model is new and general, which covers contextual cascading bandits and MNL bandits as degenerate cases.
2. For the results, Table 1 gives a clear comparison with existing works and this paper gives the first regret bound for this new model. Interestingly, when the
3. For the analysis, this paper is not only grounded in existing work on MNL bandits, but also provides some new techniques.
4. For the writing, it is clear and intuitive supported by intuitive figures and tables.
Weaknesses: Overall, I do not have major concerns, yet I have some minor comments, which I hope to get some clarification to validate my understanding.
1. In line 8 of Algorithm UCB-CCA+, the algorithm uses the true parameter $w_t^*$ which is unknown, is it a typo?
2. In line 4 of Algorithm UCB-CCA+, the optimization is a combinatorial problem over a confidence set $B_t(\delta)$; is it NP-hard in general? Is there any computationally efficient method for this?
3. I cannot find a lower bound result for the current problem. Does the current result match the lower bound?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please comment point 1,2,3 in the above weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and there are no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time to review our paper and for your valuable feedback. Here are our responses to each comment and question:
**Typo in Algorithm UCB-CCA+**
* Yes, it is a typo. Thank you very much for catching it. It should be corrected to $\theta_t$.
**Combinatorial Optimization in UCB-CCA+**
- Thank you for your question. Yes, finding the optimal cascade is weakly NP-hard, as we show in Lemma D.1 in the appendix. To this end, in Appendix D, we prove that a greedy selection for the cascading assortment optimization problem gives a 0.5 approximation of the optimal solution. Such an approximation guarantee is rarely shown in the combinatorial bandit literature, and we believe this result serves as an independent contribution.
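As an illustration only (this is not the paper's Appendix D algorithm; the exact reward form and all names are our assumptions), a generic greedy heuristic for the offline cascading assortment problem under an MNL no-click model might look like:

```python
import itertools

def no_click_prob(assortment, w):
    # MNL: P(no click | A) = 1 / (1 + sum_{i in A} w_i)
    return 1.0 / (1.0 + sum(w[i] for i in assortment))

def cascade_reward(cascade, w):
    # P(at least one click across the cascade) = 1 - prod_k P(no click | A_k)
    p_none = 1.0
    for A in cascade:
        p_none *= no_click_prob(A, w)
    return 1.0 - p_none

def greedy_cascade(items, w, K, M):
    """Hypothetical greedy sketch: repeatedly add the item that gives the
    largest marginal reward gain to any assortment with spare capacity."""
    cascade = [[] for _ in range(K)]
    remaining = set(items)
    for _ in range(K * M):
        best = None
        for k, i in itertools.product(range(K), remaining):
            if len(cascade[k]) >= M:
                continue
            candidate = [A + [i] if j == k else A for j, A in enumerate(cascade)]
            gain = cascade_reward(candidate, w) - cascade_reward(cascade, w)
            if best is None or gain > best[0]:
                best = (gain, k, i)
        if best is None or best[0] <= 0:
            break
        _, k, i = best
        cascade[k].append(i)
        remaining.discard(i)
    return cascade
```

For tiny instances (e.g. $K=2, M=1$) this reduces to picking the highest-weight items; the actual hardness result and the 0.5-approximation guarantee are in the paper's Lemma D.1 and Appendix D.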
**On Lower Bounds**
- Thank you for your questions on possible lower bounds. For logistic bandits ($K=1, M=1$), which is a special case of our problem setting, [A] established a regret lower bound of $\Omega(d\sqrt{T})$. Also, [5] proved a regret lower bound of $\Omega(d\sqrt{T})$ for assortment MNL bandits ($K=1, M\geq1$). Thus, our regret upper bound matches these lower bounds in terms of the time horizon $T$ and dimensionality $d$ in these special cases.
- Lastly, for non-contextual cascading bandits, [21] derived a regret lower bound of $\Omega(\sqrt{LT})$, where $L$ is the total number of items, which does not depend on the cascade length $K$. Hence, the $K$-independence of our regret upper bound appears to be sound and tight in terms of $K$.
- For general contextual cascading assortment bandits ($K>1, M>1$), to our knowledge, proving a regret lower bound remains an open problem. We will include these discussions on lower bounds in a revised version of our paper.
---
**References**
[A] Marc Abeille, Louis Faury, and Clément Calauzènes. "Instance-wise minimax-optimal algorithms for logistic bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification from the authors. I am still a little confused about the computation problem of UCB-CCA+. I think your 0.5-approximate solution can only apply when you input the (optimistic) weights, e.g., line 3 of UCB-CCA has the $u_{t,i}$. However, for UCB-CCA+, the confidence radius is over $g_t(\theta)$, where you cannot have an explicit form of $u_{t,i}$ like line 3 of UCB-CCA. Therefore, in line 4 of UCB-CCA+, you need a double oracle that optimizes over $\theta$ and $S$, which should be NP-hard. Please correct me if I am wrong, e.g., if you are actually using Eq. (7) to compute $u_{t,i}$ for UCB-CCA+.
---
Reply to Comment 1.1.1:
Comment: Thank you for your question, and we are more than happy to elaborate further on the optimization in UCB-CCA+. As you noted, one would have to resort to a joint optimization oracle in general. However, there are also ways to utilize the approximate optimization result. For any given parameter $\theta$, we can compute a 0.5-approximate optimal cascade with respect to that parameter using the greedy algorithm. The analysis of the greedy algorithm (Lemma D6 in particular) also indicates that we can replace the objective $f(S,w_t)$ with a simpler proxy function. Also, the approximation guarantees apply not only to the UCB weights $u_t$ but also to any given weights $w_t$ -- the solution would be approximately optimal with respect to the particular weights being used. However, searching for the optimal $\theta$ is hard since the set $B_t(\delta)$ may be non-convex -- this is also evident in the previous literature on assortment bandits (e.g., Agrawal et al. 2023 [2]). Based on this, a possible way to compute an approximate assortment is as follows. We could use a grid-search heuristic over a grid of points in the set $B_t(\delta)$: for each point $\theta$ in the grid, we use the greedy algorithm to obtain a 0.5-approximate cascade, and finally we compare all the candidate cascades and choose the best one among them. Again, we appreciate your question and constructive feedback. If you have any further questions, please feel free to let us know. | Summary: This paper studies the Cascading Contextual MNL bandits problem. Two effective algorithms, UCB-CCA and UCB-CCA+, are proposed. Compared to existing cascading bandits and MNL bandits, the regrets of the two algorithms have better dependence on the length of cascades and $\kappa$. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
Strengths: S1. Combining cascading bandits and MNL bandits is interesting and can find real applications.
S2. The proposed UCB-CCA algorithm has a regret independent of K, the length of cascades.
S3. The results of numerical simulations are good.
Weaknesses: W1. This paper focuses on removing the dependence on $K$ and $\kappa$. It seems to me that improving the dependence on $d$ may be more helpful than removing the dependence on $K$, since in practice $K$ could be a small constant while the dimension of the contextual vectors could be large. Note that [18] has an algorithm for MNL contextual bandits with a $\sqrt{dT}$ regret.
W2. In Theorem 5.2, the dependence on $\kappa$ is removed by increasing $T$. Such a treatment explicitly assumes that $T$ is much larger than $1/\kappa$, which makes some sense in practice. However, I am not sure if this practical assumption is appropriate in the theoretical analysis of regret.
W3. In the numerical simulations, the authors report the cumulative regrets of algorithms. What about the curves of revenues of algorithms? Since all algorithms adopted in experiments have sublinear regrets, the cumulative regret is actually an insignificant term compared to the revenue. I wonder if the difference between the proposed algorithms and the baseline is still as large as reported in Fig 2 when reporting the curves of cumulative revenues.
W4. The submission file and the full version (Supplementary Material) are inconsistent with each other in some parts.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. For UCB-CCA+, as $\kappa$ is removed from the regret, does it mean that we only need the assumption that the Fisher information matrix is invertible?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the limitations of their work in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time to review our paper and for your valuable feedback. Here are our responses to each comment and question:
**improving dependence on $d$?**
- We believe that you are referring to $\tilde{\mathcal{O}}(\sqrt{dT})$ regret in Theorem 4 of [19]. Then, please note that the regret bound in Theorem 4 of [19] contains $\log(TN)$ dependence. That is, if the total number of items is very large, such that $N>\exp(d)$, then the regret bound would eventually be even worse than $\tilde{\mathcal{O}}(d\sqrt{T})$.
- Note that in both logistic bandits [A] and MNL bandits [5], which are special cases of cascading contextual assortment bandits, the regret lower bound is $\Omega(d\sqrt{T})$. This suggests that the $\mathcal{O}(d)$ dependence cannot be improved for arm-independent bounds. Our regret upper bounds in Theorems 4.1 and 5.2 are in the regime of arm-independent bounds. Hence, we do not see why this has to be considered a weakness.
**$T$ and $1/\kappa$**
- First of all, note that Theorem 5.2 holds true for all values of $T$, regardless of whether $T$ is larger than $1/\kappa$ or not. Hence, there is **no necessity for the assumption of $T \gg 1/\kappa$ at all** in order for Theorem 5.2 to hold. It is just that depending on the relationship between $T$ and $1/\kappa$, the leading term may be different.
- Now, suppose $T$ is small enough that the second term in Theorem 5.2 becomes dominant, as you stated. Then, the total regret would be $\mathcal{O}(\frac{1}{\kappa} d^2 \log T)$, which has $\frac{1}{\kappa}$ dependence but is only logarithmic in $T$; and since we have already assumed $T$ is small in this regime, $\log T$ is even smaller. Hence, such a case is not a concern, either theoretically or practically. In summary, for sufficiently large $T$ the regret is $\mathcal{O}(d \sqrt{T})$, and for small enough $T$ it is $\mathcal{O}(\frac{1}{\kappa} d^2 \log T)$.
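As a purely numerical illustration of the two regimes above (the constants $d$ and $\kappa$ below are made up; only the growth rates $d\sqrt{T}$ and $\frac{1}{\kappa}d^2\log T$ come from the shape of Theorem 5.2):

```python
import math

# Illustrative constants only, not taken from the paper's experiments.
d, kappa = 10, 0.05

def leading_term(T):
    # ~ d * sqrt(T): dominant for large T
    return d * math.sqrt(T)

def second_term(T):
    # ~ (1/kappa) * d^2 * log(T): dominant for small T
    return (1.0 / kappa) * d ** 2 * math.log(T)

# the sqrt(T) term overtakes the log(T) term as T grows
for T in (10 ** 3, 10 ** 6, 10 ** 9):
    print(T, round(leading_term(T)), round(second_term(T)))
```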
**Plotting curves of revenues?**
- Considering that regret is defined as the cumulative difference between the expected reward of the optimal action and the expected reward of the action chosen by the agent, we do not see any conceptual or numerical difference between a comparison based on regret and one based on revenue. Given that the standard metric in the bandit literature is regret, we do not see why this has to be considered a weakness. We would be more than happy to include plots in terms of revenue if required.
**Supplementary Material**
- When the supplementary material was submitted, the entire manuscript was uploaded, including the main text, so that the proofs and hyperlinks could be read conveniently within one document. The uploaded supplementary material includes minor revisions to the main text.
#### Questions
**Q1: Assumption on the Fisher information matrix only?**
* We still need Assumptions 2.2 and 2.3. What we claim is that the leading term of our regret upper bound no longer depends on $\kappa$ from Assumption 2.3.
We trust that our responses have sufficiently addressed your questions and alleviated any concerns. Should you need any clarification, we are more than happy to address them during the discussion period.
---
**References**
[A] Marc Abeille, Louis Faury, and Clément Calauzènes. "Instance-wise minimax-optimal algorithms for logistic bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2021
---
Rebuttal Comment 1.1:
Title: further comments to the authors' rebuttal?
Comment: Dear Reviewer tk8N,
Do you have further comments on the authors' rebuttal?
Area Chair | Summary: This paper studies a new combinatorial bandit problem that generalizes the existing cascading and assortment bandits. The authors first propose a UCB-based algorithm, UCB-CCA, that achieves a tighter regret bound than existing bounds for cascading contextual bandits by eliminating the dependence on cascade length $K$. They also introduce an improved algorithm, UCB-CCA+, and use a Bernstein-type concentration to prove a regret bound without $\kappa^{-1}$ dependence, where $\kappa$ is a problem-dependent constant in the regret bound of UCB-CCA. Numerical experiments validate the effectiveness of the proposed algorithms.
Strengths: 1) This paper is the first to study the combination of contextual cascading and assortment bandits. This new problem is well-motivated by real-world applications in recommender systems.
2) One of the main technical contributions is the new Lipschitz continuity of the expected reward function for contextual cascading assortment bandits in Lemma 4.2, which helps prove the regret bound of UCB-CCA is independent of $K$ and $M$. (However, I have a question about the proof of this Lipschitz continuity; see below.)
3) The proposed UCB-CCA+ algorithm achieves an improved regret bound than that of UCB-CCA, solving the two technical challenges (dependence on cascade length and $\kappa$) faced by contextual cascading and assortment bandits simultaneously.
Weaknesses: 1) For contextual combinatorial bandits, there is a recent result [A] that provides a regret bound independent of the cascade length $K$ using a variance adaptive algorithm. Moreover, its regret bound can get rid of $p^*$, which raises a concern that whether the optimistic exposure swapping in Section 3.3 is necessary or whether the $p^*$ issue can be resolved by a more involved analysis.
2) The Lipschitz continuity in Lemma 4.2 is a key component of the analysis. However, the proof in line 402-405 is unclear to me. I would appreciate it if the authors could add more details about the proof; a simple example of the non-contextual cascading bandit can also be helpful.
3) Although UCB-CCA+ achieves a good regret bound, there is no discussion on the lower bound of contextual assortment combinatorial bandits: would it be similar to that of the contextual combinatorial bandits or contextual assortment bandits?
[A] Xutong Liu, Jinhang Zuo, Siwei Wang, John CS Lui, Mohammad Hajiesmaili, Adam Wierman, and Wei Chen. Contextual Combinatorial Bandits with Probabilistically Triggered Arms. In International Conference on Machine Learning, 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Typo: line 207: In round (at every round) $t$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time to review our paper and for your valuable feedback. Here are our responses to each comment and question:
**Comparison with [A]**
* Thank you for introducing [A]. We are more than happy to compare our work with [A].
* First of all, we would like to point out the NeurIPS policy on "recent work" which states, *"What is the policy on comparisons to recent work? Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline."*
(https://neurips.cc/Conferences/2023/PaperInformation/NeurIPS-FAQ)
- [A] was published in ICML in July 2023, and its first arXiv version was posted on March 30th, 2023, which was about a month and a half before the NeurIPS submission deadline. Hence, we are not obliged to compare [A] with our work by the policy. Nevertheless, we are more than willing to offer a comparison.
* Upon reviewing [A], we observed that there are significant technical differences in the analysis between our work and [A]. To elaborate on the distinction, writing the expected reward function as $\sum_{i=1}^{K}p_{i}^{\mu, S}(\bar\mu_{i} - \mu_{i})$ in [A], where $p_{i}^{\mu, S} = \prod_{j=1}^{i-1}\mu_{j}$, allows them to remove $p^\star$. Then, the difference $\bar\mu_{i} - \mu_{i}$ is represented as a weighted norm of the feature vector, $\|x_{i}\|_{V_t^{-1}}$. To square a weighted norm of the feature vector, the Cauchy-Schwarz inequality can be applied, and a dependency on $K$ arises. That is, the regret bound in Theorem 1 of [A] is independent of $p^\star$ but dependent on $K$. Improving upon the $K$-dependent regret bound, [A] shows that contextual cascading bandits satisfy the triggering probability and variance modulated (TPVM) condition via Lemma 19 in [B] and eventually removes the $K$ dependence.
* On the other hand, in our work, to simultaneously eliminate dependencies on both $p^\star$ and $K$, we utilize the mean value theorem combined with the swapping technique. We observe that the techniques used in the two works are distinct. Also, we would like to clearly highlight that our model is based on the MNL choice model (and considers cascades of assortments), whereas [A] is based on a much simpler linear click model.
**Lipschitz continuity in Lemma 4.2**
* By the mean value theorem, \begin{align*}
f(S_t, u_t) - f(S_t, w_t^*) = \nabla_\theta f(S_t, \bar w)(\theta_t - \theta^*)
= \left\lbrace \prod_{A_{t\dot{k}}\in S_t} p_t(i_0|A_{t\dot{k}}, \bar w) \right\rbrace \sum_{A_{tk}\in S_t} \sum_{i \in A_{tk}} p_t(i|A_{tk}, \bar w)\, x_{ti}^\top (\theta_t - \theta^*)
\end{align*}
* For convenience, let $\sum_{i\in A_{tk}}p_{t}(i|A_{tk}, \bar{w}) := P_{tk}$. Then, we can simplify $\lbrace \prod_{A_{t\dot{k}}\in S_{t}} p_{t}(i_{0}|A_{t\dot{k}}, \bar{w}) \rbrace \sum_{A_{tk}\in S_{t}} \sum_{i\in A_{tk}}p_{t}(i|A_{tk}, \bar{w})$ as $\prod_{\dot{k}\in[K]}(1 -P_{t\dot{k}})\sum_{k\in[K]}P_{tk}$.
* Since $0 < P_{tk} < 1$, this expression attains its maximum value $\left(\frac{K}{K+1}\right)^{K+1}$ when $P_{tk}=\frac{1}{K+1}$ for all $k\in[K]$.
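As a quick sanity check, the maximizer above can be verified numerically by setting every $P_{tk}$ to a common value $p$, so the expression becomes $Kp(1-p)^K$ (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def lipschitz_factor(p, K):
    # prod_{k in [K]} (1 - P_tk) * sum_{k in [K]} P_tk, with P_tk = p for all k
    return (1.0 - p) ** K * K * p

for K in (1, 2, 5, 10):
    grid = np.linspace(1e-6, 1 - 1e-6, 200001)
    vals = lipschitz_factor(grid, K)
    # Maximum attained at p = 1/(K+1) with value (K/(K+1))^(K+1).
    assert abs(grid[np.argmax(vals)] - 1.0 / (K + 1)) < 1e-4
    assert abs(vals.max() - (K / (K + 1.0)) ** (K + 1)) < 1e-8
```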
**On Lower Bounds**
- Thank you for your questions on possible lower bounds. For logistic bandits ($K=1, M=1$), which is a special case of our problem setting, [C] established a regret lower bound of $\Omega(d\sqrt{T})$. Also, [5] proved a regret lower bound of $\Omega(d\sqrt{T})$ for assortment MNL bandits ($K=1, M\geq1$). Thus, our regret upper bound matches these lower bounds in terms of the time horizon $T$ and the dimensionality $d$ in these special cases.
- Lastly, for non-contextual cascade bandits, [21] derived a regret lower bound of $\Omega(\sqrt{LT})$, where $L$ is the total number of items, which does not depend on the cascade length $K$. Hence, the $K$-independence of the regret upper bound in our result appears to be sound and tight in terms of $K$.
- For general contextual cascading assortment bandits ($K>1, M>1$), to our knowledge, proving a regret lower bound remains an open problem. We will include these discussions on lower bounds in a revised version of our paper.
---
**References**
[A] Xutong Liu, Jinhang Zuo, Siwei Wang, John CS Lui, Mohammad Hajiesmaili, Adam Wierman, and Wei Chen. "Contextual Combinatorial Bandits with Probabilistically Triggered Arms." In International Conference on Machine Learning, 2023.
[B] Xutong Liu, Jinhang Zuo, Siwei Wang, Carlee Joe-Wong, John Lui, and Wei Chen. "Batch-size independent regret bounds for combinatorial semi-bandits with probabilistically triggered arms or independent arms." Advances in Neural Information Processing Systems, 35:14904–14916, 2022.
[C] Marc Abeille, Louis Faury, and Clément Calauzènes. "Instance-wise minimax-optimal algorithms for logistic bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. It addresses most of my concerns. One more note on the comparison with [A]: from my understanding, the algorithm in [A] can get rid of $p^*$ and $K$ simultaneously according to their Table 2 (both Disjunctive and Conjunctive Combinatorial Cascading Bandits satisfy the TPVM condition). I would like to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We are glad our responses have addressed your concerns. Thank you very much for your support and overall positive feedback. | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your overall positive feedback and for recognizing the significance of our contributions. We introduce a novel combinatorial bandit model, *the cascading contextual assortment bandit*, which generalizes two of the prominent existing combinatorial bandit models, cascading and assortment bandits, and also generalizes single-action selection bandits. Not only do we generalize these bandit problem settings, but we also tackle a longstanding open problem concerning suboptimal dependence on the cascade length. Hence, we take on a more general and more difficult problem and propose provably efficient algorithms with salient features and improved analysis. We strongly believe that the new model, the algorithms, the regret analysis, and the approximate optimization guarantees that we provide in this paper offer meaningful contributions to the community.
Incorporating the review of Reviewer nP45, we have also included an additional comparison to CombCascade in [12] and intend to include comparisons with additional methods in a revised version. For this experiment, we set (1) the total number of items $N=10$, the length of the cascade $K=2$, the size of the assortment $M=2$, and the dimension (for the feature vector and parameter) $d=5$ (see Figure 6), and (2) $N=15, K=2, M=2, d=10$ (see Figure 7). You can see the results in the attached pdf file. Our proposed algorithms UCB-CCA and UCB-CCA+ perform better than C$^3$-UCB in [16] and CombCascade in [12].
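For intuition, one round of the cascading assortment model in the small setting above can be simulated as follows. This is a hedged sketch of the environment as we understand it, not code from the paper: the user scans $K$ assortments of $M$ items in order, and at each stage either clicks an item with MNL probability or moves on via the outside option $i_0$ (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, d = 10, 2, 2, 5                     # matches the first setting above
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)               # ||theta*|| <= 1
X = rng.standard_normal((N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # ||x_i|| <= 1

items = rng.permutation(N)[: K * M].reshape(K, M)  # a cascade of K assortments
click = None
for k in range(K):
    u = X[items[k]] @ theta                  # utilities x_i^T theta in [-1, 1]
    expu = np.exp(u)
    p = expu / (1.0 + expu.sum())            # MNL click probabilities
    p_out = 1.0 - p.sum()                    # outside option i_0: move on
    choice = rng.choice(M + 1, p=np.append(p, p_out))
    if choice < M:
        click = int(items[k][choice])        # a click ends the cascade
        break
```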
Pdf: /pdf/10654b5e1fed5148498f4163434dab3dacc71a3b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces the cascading contextual assortment bandit problem and provides a UCB type algorithm. This problem is motivated by online content recommendation systems. They develop a UCB algorithm that is applicable to this problem setting, and prove that their algorithm improves upon existing regret bound rates in the cascading and assortment bandit problems respectively.
Strengths: - Based on the authors' discussion, the scaling rate of their regret bounds is both sharper and more interpretable/intuitive than that in existing literature. It seems their regret bound improves upon those both in the assortment and cascading bandit literatures respectively.
- The authors have a nice discussion about why we expect the regret to decrease with $K$ and then show how their result shows this type of dependence. Additionally, their result and discussion of how their regret bound scales with $\kappa$ in a way that is not worsening its dependency on $M$ seems nice.
- The overall writing and presentation in the paper pretty good. I think the table 1 is quite useful and Figure 1 was very helpful for understanding the problem.
Weaknesses: - In the evaluation, it was not clear to me why you only compared to C^3-UCB and not the other methods listed in table 1. While the algorithm has these regret guarantees, its not entirely clear if the algorithm really performs well in practice from the simulations in the evaluation section.
- Only applicable to environments in which the probability that a user clicks is determined by a generalized linear model. Additionally, the feature vector $x_{ti}$ captures both contextual information on the user and the item. It's not clear how this kind of vector could be chosen in practice.
- There is not a real related works section in the main paper. I see there is one in the Appendix.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - It is not clear to me that Assumption 2.3 as written will ever hold. Do you really need to take an infimum over all $\theta \in \mathbb{R}^d$? Based on my understanding, this means that if you take $\theta = \lambda \cdot x_i$ you can make $w_i = x_i^\top \theta$ arbitrarily small or large with the choice of $\lambda$. Then, based on the model from line 127, it seems you could make $p_t(i_m | A_{tk}, w)$ arbitrarily small. Can you explain whether there are any reasonable settings where Assumption 2.3 will hold? If there are, please include a discussion of this and also more information about how to interpret Assumption 2.3.
- You say below Assumption 2.2 that the regret bound is $c$ times larger if you allow the norms of $x_i$ and $\theta^\star$ respectively to be bounded by $c$. Is \emph{knowledge} of $c$ needed by the algorithm? In other words, does your algorithm currently implicitly take advantage of the assumption that $x_i$ and $\theta^\star$ are bounded by $1$? If so, this would severely limit the practical applicability of the approach, and this limitation should be discussed. If it is not, it should also be mentioned when discussing scaling by $c$.
- Could you add a sentence or two (or more) about how you suggest choosing the ridge penalty $\lambda$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: - I would like the authors to address the questions / limitations I list in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper and for your valuable feedback. Here are our responses to each comment and question:
**Numerical Evaluations**
- First of all, since our problem setting and proposed model, the cascading assortment bandit, are novel, no existing methods have been proposed under exactly this new model. Hence, we can only compare against special cases. While both assortment bandits and cascading bandits are special cases of our model, assortment bandits do not possess the cascading effect, for which removing the suboptimal dependence on the cascade length is one of the main objectives. Hence, we aim to compare with cascading bandit algorithms to see the effect. Among the contextual cascading bandit algorithms, C$^3$-UCB is one of the very few algorithms with an existing implementation. In this rebuttal (see the pdf file attached to the global rebuttal), we have included a comparison with another related method, CombCascade in [12], and we plan to incorporate further comparisons with other methods in a revision. We would be happy to include more results in the list. We appreciate your feedback in making our paper more persuasive to readers through additional experimental performance comparisons. But also, as this is the first theoretical work proposing the new cascading assortment bandit, new provably efficient algorithms, and improved regret bounds that remove suboptimal dependences present even in less general settings, we respectfully request that it be assessed mainly on its theoretical merit.
*"Only applicable to environments in which the probability..."*
- All parametric models for clicks in the bandit framework (whether it is a linear, logistic, or MNL model, along with their combinatorial adaptation, such as cascading, assortment, semi-bandit, etc.) have their own modeling assumption on click probability. Regret bounds are mostly derived under the realizability of each modeling assumption. Hence, we do not necessarily agree with the comment that "applicability to environments" with modeling assumptions should be considered a weakness. Rather, as we show in our paper, our proposed model encompasses two of the prominent existing combinatorial bandit instances, cascading bandits (K > 1, M=1) and assortment bandits (K=1, M>1), as well as single-action selection bandits (K=1, M=1), such as logistic bandits and multi-armed bandits with binary feedback. Under this more general model, we show even tighter and stronger regret bounds!
*"the feature vector $x_{ti}$ captures both contextual information on the user and the item..."*
- Suppose the user at round $t$ is characterized by a feature vector $u_t$ and item $i$ has a feature vector $v_{ti}$ (note that we can allow the item feature to vary over time). Then we can use $x_{ti} = \text{vec}(u_t v_{ti}^{\top})$, the vectorized outer product of $u_t$ and $v_{ti}$, as the combined context feature vector of item $i$ at round $t$. This is a common technique also used in [16, 18, 19]. If a user's information is not accessible (for example, due to privacy issues), then one can use item-dependent features only, say $x_{ti}=v_{ti}$.
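The vectorized outer-product construction can be sketched in a few lines; the key point, shown in the final assertion, is that a bilinear utility $u_t^\top \Theta v_{ti}$ becomes linear in $x_{ti}$ with parameter $\theta = \text{vec}(\Theta)$ (dimensions and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_u, d_v = 3, 4                              # illustrative dimensions
u_t = rng.standard_normal(d_u)               # user features at round t
v_ti = rng.standard_normal(d_v)              # features of item i at round t

# Combined context feature: x_ti = vec(u_t v_ti^T).
x_ti = np.outer(u_t, v_ti).ravel()

# A bilinear utility u_t^T Theta v_ti is linear in x_ti with theta = vec(Theta),
# which is what makes this reduction to a linear-in-context model work.
Theta = rng.standard_normal((d_u, d_v))
assert np.isclose(x_ti @ Theta.ravel(), u_t @ Theta @ v_ti)
```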
**Related Works Section**
- Due to the limited space in the main text, we deferred the "Related works" section to the appendix. We are more than happy to move the "Related works" section to the main text in the revised version.
**Questions**
**Q1 on Assumption 2.3**: This is a very good question. First of all, Assumption 2.3 is the standard regularity assumption in the MNL contextual bandit literature [5, 6, 18, 19, 20, 23]. Since the true $\theta^*$ is assumed to satisfy $\|\theta^*\| \leq 1$ (in Assumption 2.2), we can restrict Assumption 2.3 to $\theta \in \mathbb{R}^d$ with $\|\theta\| \leq 1$ (hence the modification in the subscript). Since $\|x_i\| \leq 1$ for all $i$, $x_i^\top \theta$ is bounded by $-1 \leq x_i^\top \theta \leq 1$ for all $i$. Hence, the probability $p_t(i | A_{tk}, w)$ cannot be arbitrarily small. One practical implication of this assumption is that, under any choice model we can possibly consider, we consider items that provide utility to users (i.e., items that have at least some probability of being clicked).
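For intuition, under the standard MNL form with an outside option, utilities confined to $[-1, 1]$ keep every click probability above an explicit floor $e^{-1}/(1 + M e)$; a small numeric sketch (our own construction, not code from the paper):

```python
import numpy as np

def mnl_prob(utils):
    # MNL choice probabilities over an assortment; the outside option i_0
    # implicitly receives probability 1 / (1 + sum exp(utils)).
    expu = np.exp(utils)
    return expu / (1.0 + expu.sum())

M = 5
rng = np.random.default_rng(1)
utils = rng.uniform(-1.0, 1.0, size=M)           # x_i^T theta in [-1, 1]
p = mnl_prob(utils)
floor = np.exp(-1.0) / (1.0 + M * np.exp(1.0))   # worst case over the box
assert p.min() >= floor                           # never arbitrarily small
```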
**Q2 on Assumption 2.2**: The boundedness assumption is a standard assumption in almost all of the parametric bandit literature [1,3,5,14,16,18,19,20,23], which includes linear, logistic, GLM, and MNL bandits. However, in practice and also in theory, one does not have to tune $c$ separately: all the unknown hyperparameters can be combined into $c' := c \cdot \frac{1}{\kappa} \cdot \sigma$ and tuned as a whole (where $\sigma$ is the sub-Gaussian parameter assumed in almost all parametric bandits; note that in MNL bandits it is known that $\sigma = \frac{1}{2}$, but in general parametric bandits $\sigma$ is not known). If this were considered a hindrance, the same argument would apply to LinUCB and LinTS, as well as to almost all existing parametric bandit algorithms.
**Q3 on Choosing $\lambda$**: Any value of $\lambda$ between $1$ and $d$ leaves the leading factor of the regret bound unchanged. Hence, a common choice is $\lambda = 1$ or $\lambda = d$.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response.
Regarding Assumption 2.3, will you revise Assumption 2.3 to only consider $\theta \in \mathbb{R}^d$ such that $\|\theta\| \leq 1$? Or are you saying that as stated in the paper, this is the assumption you need? My understanding is that currently Assumption 2.3 as stated is too strong and not likely to hold, but you actually don't need it to be so strong for your proofs, as you just need an infimum over $\theta \in \mathbb{R}^d$ such that $\|\theta\| \leq 1$.
Regarding Assumption 2.2, I think it is okay to have this limitation, but I think you should be up front about it. I suggest you add a statement about it and also mention that this limitation is also true of many other common algorithms in the literature. This will help the future readers of your paper, who may want to apply your approach understand the strengths and weaknesses of your method.
---
Reply to Comment 1.1.1:
Comment: Thank you. Yes, we will revise Assumption 2.3 to include $\lVert \theta \rVert \leq 1$. Of course, we are more than willing to be up front about all our assumptions as we already stated the scalability of the upper bound on norms. We can include more discussion on Assumption 2.2. Thank you for your responses and support!
Should I Stop or Should I Go: Early Stopping with Heterogeneous Populations | Accept (spotlight) | Summary: This work proposes a method for adapting stopping tests of randomized experiments in heterogeneous populations. Specifically, the authors motivate the problem, namely why heterogeneous treatment effects lead to late stopping of randomized experiments, for instance when a minority group is harmed. They then propose a two stage method which first predicts a weighting of the original test statistic components used in stopping decisions, then and then uses these statistics to make the stopping decision. The methodological contribution is well-motivated and supported by theoretical results which analyse the convergence behavior of these weights, and the probability of stopping under the assumption of knowing the group membership.
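The two-stage idea summarized above can be illustrated with a toy weighted stopping statistic. This is our own hedged sketch, not the paper's exact statistic: per-participant treatment contrasts are reweighted so the test concentrates on a suspected harmed subgroup (all names and the weight choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
harmed = rng.random(n) < 0.1                 # 10% minority subgroup
T = rng.integers(0, 2, n)                    # randomized treatment assignment
Y = T * np.where(harmed, -0.8, 0.0) + rng.standard_normal(n)

# Per-participant effect contrast (inverse-propensity form).
score = T * Y / T.mean() - (1 - T) * Y / (1 - T).mean()

def z_stat(w, s):
    # Weighted z-type statistic: the weights focus the test on a subgroup.
    return (w * s).sum() / np.sqrt((w ** 2 * s ** 2).sum())

w_hom = np.ones(n)                           # homogeneous test: equal weights
w_het = np.where(harmed, 1.0, 0.05)          # hypothetical stage-1 weights
z_hom, z_het = z_stat(w_hom, score), z_stat(w_het, score)
```

In draws like this one, the weighted statistic is typically far larger in magnitude than the unweighted one, so a stopping boundary is crossed at smaller sample sizes when only a minority is harmed.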
Strengths: * There is a lack of machine learning methods that addresses the heterogeneous early stopping problem, and this paper provides a possible, first solution while making few assumptions. This renders the work an original, well-motivated contribution.
* The paper is very clear and well written. In particular, the links between each of the sections are very clear.
* The experimental results, including the simulated scenarios, are convincing and interesting. For instance Figure 2 makes clear why the proposed approach is advantageous over homogeneous stopping tests.
Weaknesses: * The task is somewhat niche. It is furthermore unclear to what degree the stopping task in randomized experiments could be reformulated as a similar task in another domain, i.e. to what degree this or similar problems have been solved in other contexts.
* Overall, the work makes various idealised assumptions in both the theoretical results and in the (synthetic) experiments considered. It is an interesting proof of concept, but a lot of work would need to be done to make this method applicable in practical settings, for instance the real-world clinical trials which this method is motivated by but does not evaluate on. Two further points to note are, first, the lack of performance on high-dimensional data, which the authors themselves note and which is present in some clinical trials; second, the method crucially relies on treatment effect estimation methods in Stage 1, which themselves are far from widely applicable in practice.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * The problem setup of CLASH stops the entire experiment if a stopping decision has been made. However, in clinical trials for example, it may not be ethical to stop the experiment for a majority group which benefits from the treatment. Could CLASH be adapted to an online setting where the experiment is only stopped for the harmed subgroup? It would be interesting if the authors could comment on this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: * Prop. 3.1 assumes the group membership (and CATE) is known, which is never the case in practice. The authors state this, yet it is unclear how one would efficiently infer group membership in large-scale scenarios in practice, and whether the proposed solution in Stage 1 of the algorithm would work. It is consequently unclear how well the proposed tests would generalise if group membership is not known, or how the asymptotic behavior analysed in this Proposition would change in this case. It would be helpful if the authors could comment on this.
* The experimental settings are all limited to two groups. In many real-world settings such as clinical trials, we would expect more than two groups with somewhat homogeneous treatment effect. How this method would perform in such cases is unclear, and neither discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review: we appreciate your positive feedback and constructive suggestions. We address each of your comments in further detail below.
**R6: _Overall, the work makes various idealised assumptions in both the theoretical results, and in the (synthetic) experiments considered. It is an interesting proof of concept, but there is a lot of work which would need to be done to make this method applicable in practical settings, for instance real-world clinical trials which this method is motivated by, but does not evaluate on. Two further points to note is the lack of performance on high-dimensional data that the authors themselves notes, which is present in some clinical trials. Second, the method crucially relies on treatment effect estimation methods in Stage 1, which themselves are far from being widely applicable in practice._**
Thank you for raising this important point. Our global response considers several additional simulation settings, including on high-dimensional data (Fig 2e in the global response). We find that CLASH is able to perform well even in experiments with 100 covariates. Overall, we believe the additional simulations indicate that CLASH can be effective in situations closer to real-world clinical trials and A/B tests.
Further, we emphasize that CLASH does not require using machine learning-based methods for causal estimation. Practitioners can use much simpler techniques, including linear regression, to infer heterogeneous treatment effects in stage 1. CLASH is agnostic to the specific method used: practitioners can use the method with which they are most comfortable, as long as it yields reasonably accurate estimates of the effect heterogeneity.
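As a minimal illustration of this point, stage-1 effect heterogeneity can be estimated with nothing more than ordinary least squares on a treatment-covariate interaction (an S-learner-style sketch on synthetic data; all names and the data-generating process are ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.standard_normal((n, d))
T = rng.integers(0, 2, size=n)               # randomized assignment
tau_true = 0.5 - 1.5 * (X[:, 0] > 1.0)       # minority (x_0 > 1) is harmed
Y = X @ np.array([1.0, -0.5, 0.2]) + T * tau_true + rng.standard_normal(n)

# Linear regression on [1, X, T, T*X]; then CATE(x) = beta_T + x @ beta_TX.
D = np.column_stack([np.ones(n), X, T, T[:, None] * X])
beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
cate_hat = beta[d + 1] + X @ beta[d + 2:]
```

Even though the true effect here is a step function, in draws like this the linear fit tracks the harmed region well enough to be useful as a stage-1 signal.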
**R6: _The problem setup of CLASH stops the entire experiment if a stopping decision has been made. However, in clinical trials for example, it may not be ethical to stop the experiment for a majority group which benefits from the treatment. Could CLASH be adapted to an online setting where the experiment is only stopped for the harmed subgroup? It would be interesting if the authors could comment on this._**
This is an excellent point: we discuss this in detail in point (2) of our global response and have updated our manuscript accordingly to address this decision-making process. In short, yes – CLASH can be used to inform early stopping on only the harmed subgroup rather than the entire trial population.
**R6: _Prop. 3.1 assumes the group membership (and CATE) is known, which is never the case in practice. The authors state this, yet it is unclear how one would efficiently infer group membership in large-scale scenarios in practice, and whether the proposed solution in Stage 1 of the algorithm would work. It is consequently unclear how well the proposed tests would generalise if group membership is not known, or how the asymptotic behavior analysed in this Proposition would change in this case. It would be helpful if the authors could comment on this._**
Thank you for giving us the chance to clarify: group membership knowledge is not needed when using CLASH. You are correct to point out that the test in Prop 3.1 could never be used in practice, as it requires prior knowledge of the groups and treatment effects. However, it is an important result because it describes the statistical power of the test that CLASH converges to in large samples. Thm 3.2 establishes that in large samples, CLASH converges to the test described in Prop 3.1. Prop 3.1 indicates that this test has power-1 in large samples; thus CLASH must also have power-1 in large samples. We emphasize that CLASH does not require knowledge of group membership to operate: it infers this in stage 1 using the provided covariates and observed outcomes. Thus, CLASH is able to obtain the optimal power of the test described in Prop 3.1 despite not knowing group membership a priori. Our simulation experiments and empirical application demonstrate that CLASH efficiently infers group membership in practice.
**R6: _The experimental settings are all limited to two groups. In many real-world settings such as clinical trials, we would expect more than two groups with somewhat homogeneous treatment effect. How this method would perform in such cases is unclear, and neither discussed in the paper._**
Thank you for raising this important concern. Figure 1 in our global response considers situations with more than two groups, and demonstrates that CLASH can be effective in such settings.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: My points of concern are largely resolved, my questions answered. I have also read through the other reviews and their responses.
I think this paper particularly shines in tackling an interesting, understudied problem, for which no good solutions seem to exist. It is simple and straight-forward, yet solid. I vote for accepting the work. | Summary: The paper focuses on stopping tests for harm in clinical trials or A/B testing where heterogeneous treatment effect is involved. The proposed method contains two phases: First, the population harmed by the trials is identified via conditional treatment effect estimation. Second, weighted versions of widely-used test statistics are computed to determine if the trial needs to be stopped early. Theoretical analyses are carried out to show that the proposed method meets the desired requirements: producing high probability when a subgroup is harmed while limiting the unnecessary stopping. Experiments using simulation and real-world data show improvement over homogeneous stopping tests and the existing heterogeneous ones.
Strengths: + The paper is well-written and easy to follow. The research problem is clearly defined and the motivation for the proposed method is nicely presented.
+ Theoretical analyses show that the proposed method could achieve the desired properties. The convergence property (Thm. 3.2.) is particularly important and well-explained.
Weaknesses: - In lines 191 to 193, $(x_i, y_i)$ is excluded from the training set when estimating $\tau(x_i)$. Does it mean that the CATE estimation model needs to be trained $n$ times during stage 1? Are there any comparisons of the running time against baseline models, say SUBTLE?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In the lower part of Fig. 2, when the treatment has no effect on the majority, why CLASH has lower probabilities of stopping early than the homogeneous approach when the harmful treatment effect on the minority group is below 0.4?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the proposed method are discussed in the last section. It would be great if the authors could add a few sentences about how they plan to address the limitations in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review: we appreciate your positive feedback and constructive suggestions. We address each of your comments in further detail below.
**R5: _In lines 191 to 193, $(x_i,y_i)$ is excluded from training set when estimating $\tau(x_i)$ Does it mean that the CATE estimation model needs to be trained for n times during stage 1? Are there any comparisons of the running time between baseline models, say SUBTLE?_**
Our apologies for not being more clear. While it is possible to use leave-one-out cross-validation---in which case the model would need to be trained $n$ times---we recommend using k-fold cross-validation or progressive cross-validation instead. In this case, the model would only need to be trained $k$ times during stage 1. Our simulation experiments and empirical application use 5-fold cross-validation; we will make this more clear in the revised manuscript. In our empirical application, at an interim checkpoint with 40,000 participants (i.e., when CLASH indicates that the experiment should be stopped), all four methods take under a minute to run, though CLASH is the slowest (clocking in at 36 seconds).
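The k-fold scheme described above can be sketched as follows: each $\tau(x_i)$ is predicted by a model that never saw $(x_i, y_i)$, while only $k$ models are fit. The helper names and the simple linear learner are illustrative; any CATE learner can be substituted:

```python
import numpy as np

def fit_slearner(X, T, Y):
    # Illustrative CATE learner: linear regression on [1, X, T, T*X].
    D = np.column_stack([np.ones(len(Y)), X, T, T[:, None] * X])
    beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return beta

def predict_cate(beta, X):
    d = X.shape[1]
    return beta[d + 1] + X @ beta[d + 2:]

def crossfit_cate(X, T, Y, k=5, seed=0):
    # k-fold cross-fitting: tau(x_i) is predicted by a model never trained
    # on (x_i, y_i), while fitting only k models rather than n.
    n = len(Y)
    idx = np.random.default_rng(seed).permutation(n)
    tau_hat = np.empty(n)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta = fit_slearner(X[train], T[train], Y[train])
        tau_hat[fold] = predict_cate(beta, X[fold])
    return tau_hat
```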
**R5: _In the lower part of Fig. 2, when the treatment has no effect on the majority, why CLASH has lower probabilities of stopping early than the homogeneous approach when the harmful treatment effect on the minority group is below 0.4?_**
Thank you for raising this important point. Overall, the situation in which the majority group is unaffected and the minority group is only slightly harmed reflects both an easy case for the homogeneous baseline and a difficult case for CLASH. CLASH performs better when it is easy to differentiate the harmed and unharmed groups based on the covariates and outcomes. The more similar the effects on the minority and majority groups, the harder this task is. Meanwhile, the homogeneous baseline is the exact opposite: it performs better when the whole population ATE is more similar to the effect on the minority group. However, we emphasize that even in this case, the difference between the two methods is small and only exists for the Bayesian estimation-based stopping test.
---
Rebuttal Comment 1.1:
Title: My concerns have been addressed
Comment: Thank the authors for the response. I have read the responses, as well as the discussion between the authors and other reviewers. I think my main concerns have been clarified/addressed. Overall, this is a solid paper, so I am happy to raise the rating to 7. | Summary: The authors propose an approach to early stop clinical trials in order to prevent subgroup level harms. Their approach involves first estimation of sequential estimation of a an individualized treatment effect using machine learning methods followed by reweighting the test statistic at each iteration with the estimated mean and standard deviation of the CATE.
Furthermore, their choices are such that they do not need to make a priori assumptions on which groups represent the minority subgroups.
The authors present results around optimality of the metrics using the estimated CATE and perform extensive real world and synthetic experiments to demonstrate the effectiveness of their approach.
Strengths: * The problem of early stopping of a clinical trial to prevent aggregate and subgroup level harms is an important one. The contribution is timely and relevant.
* The demonstration of the proposed method on the time-to-event (survival) setting is welcome since outcomes in most real world clinical trials are censored time-to-events.
* The extensive theoretical insights and the real-world and synthetic experiments are thoughtful and welcome.
Weaknesses: * The paper requires estimates of the uncertainty in the CATE at an *individual* level. The CATE is itself hard to estimate, and its uncertainty is even harder to estimate in practice. The authors propose to use the "bootstrap" to compute this quantity. This seems practically impossible, especially with high-dimensional covariates.
* While the paper is strongly motivated, there is a fundamental question that remains to be addressed. The current method does not require a priori assumptions on which covariates specify the subgroups, and it is unclear how, in practice, the group would be specified implicitly by the model. For instance, in the case of a large RCT there would always exist trivially small subgroups that are harmed. However, these subgroups would not be generalizable and such results would not transport. Ideally this should be reflected in the uncertainty estimates up to generalization error. However, this may be violated in small-sample-size regimes.
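For concreteness, the bootstrap procedure referred to above can be sketched with a simple linear learner; this is our own minimal construction (the paper's exact procedure may differ), and the computational concern scales with the learner's fit time and the covariate dimension:

```python
import numpy as np

def fit_linear_cate(X, T, Y):
    # Simple stand-in learner: linear regression on [1, X, T, T*X].
    D = np.column_stack([np.ones(len(Y)), X, T, T[:, None] * X])
    beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return beta

def bootstrap_cate_sd(X, T, Y, B=200, seed=0):
    # Nonparametric bootstrap: resample rows with replacement, refit, and
    # take the spread of the per-unit estimates tau_hat(x_i) across refits.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    draws = np.empty((B, n))
    for b in range(B):
        i = rng.integers(0, n, n)
        beta = fit_linear_cate(X[i], T[i], Y[i])
        draws[b] = beta[d + 1] + X @ beta[d + 2:]
    return draws.std(axis=0)
```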
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The CDF for the chosen weights, $w$, corresponds to a normal distribution. I think this is because the estimated weights tend to a normal distribution by the central limit theorem, but I was unable to find a result for this in the paper; can the authors point to it?
* Can the authors address the second weakness pointed out in more detail? Specifically, does the current setup allow a practitioner to specify something along the lines of a "minimum" group size to prevent the model from raising false alarms with trivial subgroups?
* Finally, it seems the current setup does not allow any way of specifying how the covariates across different subgroups are related. For a subgroup to be actionable there should be similarities in their covariates. Is there a way to enforce or obtain such similarities from the experiment?
Overall I am willing to readjust my scores favorably based on answers to my questions as well as discussions and deliberations during the author rebuttal phase.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does raise an interesting concern over whether current trials should be stopped for the entire population if a single subgroup is found to be harmed from the intervention. As the authors point out, this is a question of medical ethics and is largely beyond the scope of this manuscript, and hence I am inclined to not consider it as a "limitation". I think however that perhaps some examples of where such decisions can lead to different outcomes should be included in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review: we appreciate your positive feedback and constructive suggestions. We discuss each of your comments in detail below, and incorporate our responses in the revised paper.
**R4: _The authors propose to use "bootstrap" to compute this quantity. This seems practically impossible especially with high dimensional covariates._**
Thank you for allowing us to clarify. You are correct: CLASH requires estimating uncertainty in individual CATE estimates. However, the bootstrap is merely one way to obtain such uncertainty estimates. For example, a causal forest estimates these uncertainties in the process of fitting, and thus requires little additional computation. We recommend using such methods with CLASH wherever possible. Bootstrapping is a last resort in cases where there are strong domain-specific reasons to use a certain CATE estimation method, and that method is unable to estimate the required uncertainties on its own. We have added this suggestion to the methods section of our revised paper.
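For concreteness, a minimal sketch of the bootstrap route on simulated data, using a toy stratified difference-in-means estimator rather than the machine learning CATE learners discussed in the paper:

```python
import random
import statistics

random.seed(0)

# Toy data: a binary covariate x defines two strata; t is treatment, y outcome.
# The treatment harms the x=1 stratum (effect +1) and is neutral for x=0.
data = [(x, t, t * x + random.gauss(0, 1))
        for x in (0, 1) for t in (0, 1) for _ in range(200)]

def cate(sample, x):
    """Crude stratum-level CATE estimate: difference in mean outcomes."""
    treated = [y for xi, t, y in sample if xi == x and t == 1]
    control = [y for xi, t, y in sample if xi == x and t == 0]
    return statistics.mean(treated) - statistics.mean(control)

# Bootstrap: resample rows with replacement and re-estimate the CATE each time;
# the spread of the bootstrap estimates approximates the estimator's uncertainty.
boot = [cate(random.choices(data, k=len(data)), 1) for _ in range(200)]
tau_hat = cate(data, 1)
se_hat = statistics.stdev(boot)
print(f"estimated CATE for x=1 stratum: {tau_hat:.2f} (se ~ {se_hat:.2f})")
```

With high-dimensional covariates, each of the 200 bootstrap iterations would require a full model refit, which is where the reviewer's computational concern arises; a causal forest sidesteps this by producing variance estimates from a single fit.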
**R4: _While the paper is motivated strongly there is a fundamental question that remains to be addressed…it is unclear as to how in practice the group would be specified implicitly by the model. For instance, in the case of a large RCT there would always exist trivially small subgroups that are harmed. However these subgroups would not be generalizable and such results would not transport._**
Thanks for raising this important point. As detailed in Part 2 of our global response, when CLASH indicates that an experiment should be stopped, practitioners can identify the specific harmed groups either by analyzing the distribution of the estimated CLASH weights or by using existing subgroup identification methods (see global response for more details). We also suggest ways to ensure that the identified harm subgroups are not trivially sized (e.g., limiting depth for tree-based methods). In general, domain expertise must play an important role in determining what is and what is not a meaningful group on which to stop.
Regarding inaccurate uncertainty estimates in small samples: in Part 1 of the global response, we find that CLASH rarely stops experiments with very low N (Fig 2d). Thus, CLASH displays the opposite problem with small samples: it does not stop experiments due to trivially-sized harmed groups, but rather is unable to detect harmful effects with low N (note that the homogeneous baseline is also unable to detect such effects). In general, we advise using CLASH with caution in experiments with small samples; we discuss this further in our limitations section.
**R4: _The cdf for chosen weights corresponds to a normal distribution. I think this is because the estimated weights tend to a normal distribution by central limit theorem, but I was unable to find a result for this in the paper can the authors point to this._**
Thank you for giving us the opportunity to clarify. The normal CDF is used primarily because it leads to fast convergence of the CLASH weights to the optimal weights (see Thm 3.2), not because the weights have an asymptotically normal distribution. It may be possible to use other CDFs and achieve similar results; however, the normal CDF proved to be an effective choice, as it yields provably fast weight convergence. We emphasize that CLASH does not require any central limit theorem-like assumptions on either the weights or the estimated CATEs.
**R4: _Can the authors address the second weakness pointed out in more details. Specifically does the current setup allow a practioner to specify something along the lines of a "minimum" group size to prevent the model from raising false alarms with trivial subgroups?_**
Per Part 2 of our global response, we recommend a tree-based heuristic that practitioners can use to identify the harmed groups. If this heuristic implies that only a trivially sized group is harmed, practitioners can ignore CLASH’s recommendation and continue the experiment. However, this decision depends heavily on the ethical and financial considerations of the practitioners.
**R4: _...For a subgroup to be actionable there should be similarities in their covariates. Is there a way to enforce or obtain such similarities from the experiment._**
Thank you for raising this concern. We agree that to stop an experiment only on one group, the group should have similar values for a few key covariates. This relates closely to our discussion above on identifying the harmed group (further detailed in point 2 of our global response). Practitioners can identify such harmed groups in an actionable way by using either our tree-based heuristic or existing subgroup identification techniques (e.g. [1]). Limiting the depth of the tree-based heuristic offers an easy way to ensure the identified group is actionable and of non-trivial size. For example, by limiting the tree-depth to 2, investigators can identify the two covariates that most drive harm and get a well-defined group on which to stop the experiment (if this is what they choose to do). Subgroup identification techniques provide analogous ways to get actionable groups (e.g. limiting tree-depth, variable selection, etc.).
[1] Zhang, et al, DOI: 10.21037/atm.2018.03.07
**R4: _Limitation: …perhaps some examples of where such decisions can lead to different outcomes should be included in the manuscript._**
We address this limitation in further detail in part (2) of our global response, including examples from both clinical and technology domains of whether / when to stop on a subgroup; we will also provide these examples in the revised paper.
**R4: _Overall I am willing to readjust my scores favorably based on answers to my questions as well as discussions and deliberations during the author rebuttal phase._**
Thank you for your detailed feedback. We hope our response has helped answer your comments, and look forward to further discussion this week. | Summary: The authors propose CLASH, a method for early stopping in RCTs and A/B tests on heterogeneous populations. They present some theoretical results showing that CLASH works, and provide simulations and one real-world experiment.
Strengths: - Clear writing
- Good motivation
- Excellent exposition of theoretical results for readers who may not be able to fully understand proof details
- Convincing (albeit limited) experimental evidence
Weaknesses: The biggest issue with this paper has to do with the experiments.
EDIT: I have raised my score to a 7 after the authors answered my questions.
- Unrealistic settings in the simulation - see the questions section.
- Your real-world data experiment is nice, but it's a single experiment with a single outcome - you could be doing as well as Oracle purely by chance. You could run it multiple times by randomizing the order of the data and reporting a distribution of stopping times against oracle, SUBTLE, and homogeneous.
- "Note that stopping the experiment just in one region would affect statistical inference at the end of the experiment, as the treatment would no longer be randomly assigned across regions. Practitioners can use covariate adjustment, inverse probability weighting, or adaptive sampling methods [13] to adjust for this selection." I would recommend actually performing this calculation and reporting the outcome.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In the simulation you have "X maps deterministically to G" -> is this not unrealistically simple? What happens if you use a stochastic function with varying degrees of noise?
- Similarly, "recruiting one treated and one untreated participant at each step" is unrealistic. Why not try to mimic real recruitment procedures and have batch recruitment? Does this affect your results at all?
- In your simulation "Y is normally distributed" with a fixed standard deviation of 1.0. What happens if the standard deviation is larger or smaller? Is it important to know this standard deviation in practice?
- You write "All performance increases are robust to an increase in the number of covariates" but also "CLASH works better with a relatively small number of covariates": these two statements conflict, and I did not see any experiments that demonstrate the latter. It's important to add experiments that quantify how your models perform (or don't) in higher dimensions. What happens when the dimensionality is 100 and 1000 in the simulation?
- "We only sample from Region and Regions 5-8; this gives us one harmed group (Region 1) that comprises 28% of the total population" - why do this? Why not sample uniformly at random from the entire dataset?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review: we greatly appreciate your insightful feedback. We address each of your constructive suggestions below, which we believe have further strengthened the paper.
**R3: _The biggest issue with this paper has to do with the experiments. I'll happily raise my score to a 7 once additional experimental results are included…_**
Our global response presents results from additional simulation experiments, many of which directly address your comments below. We include these results in the appendix of the revised paper.
**R3: _Your real-world data experiment is nice…you could run it multiple times by randomizing the order…_**
Thank you for raising this important point. We have included your suggested analysis in the revised paper. We find that CLASH stops the experiment at the same interim checkpoint as the Oracle in 62.6% of shuffled datasets. The mean (std. error) stopping times for each method across 1,000 shuffles are below.
CLASH: 57,200 (609)
Homogeneous: 64,420 (848)
Oracle: 48,500 (431)
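For reference, this shuffling analysis can be sketched as follows, substituting a deliberately simplified stopping rule (a Bonferroni-corrected one-sided z-test at five interim looks) for the sequential tests actually used in the paper; all data below are simulated:

```python
import random
import statistics

random.seed(4)

# Toy dataset: the treatment is harmful on average (effect +0.3, harm coded positive).
data = [(t, t * 0.3 + random.gauss(0, 1)) for t in (0, 1) for _ in range(500)]
checkpoints = [200, 400, 600, 800, 1000]
z_crit = 2.576  # roughly a Bonferroni-corrected one-sided alpha across 5 looks

def stopping_time(rows):
    """First checkpoint where a one-sided z-test for harm rejects, else None."""
    for n in checkpoints:
        seen = rows[:n]
        treated = [y for t, y in seen if t == 1]
        control = [y for t, y in seen if t == 0]
        if len(treated) < 2 or len(control) < 2:
            continue
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(treated) / len(treated)
              + statistics.variance(control) / len(control)) ** 0.5
        if diff / se > z_crit:
            return n
    return None

# Shuffle the arrival order many times and record the stopping-time distribution.
stops = []
for _ in range(200):
    random.shuffle(data)
    stops.append(stopping_time(data))
stopped = [s for s in stops if s is not None]
print(f"stopped in {len(stopped)}/200 shuffles; "
      f"mean stopping time {statistics.mean(stopped):.0f}")
```

Reporting the resulting distribution (mean and standard error of stopping times, as in the table above) guards against a single fortunate ordering driving the comparison with the Oracle.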
**R3: _‘Stopping the experiment just in one region would affect statistical inference’... I would recommend...performing this calculation…_**
Thank you for this suggestion. We report results from our empirical application below and in the revised paper. We assume practitioners choose to stop the experiment in the region with the largest harmful effect (Region 1). This choice leads to bias in the naive estimate of the full population ATE; however, inverse propensity weighting (IPW) corrects the bias and successfully recovers the ATE.
ATE estimate without stopping: 0.11 (0.006)
Naive ATE estimate with stopping: 0.03 (0.006)
Estimate using IPW: 0.10 (0.006)
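As an illustration of why IPW recovers the ATE in this situation, a self-contained toy example; the regions, effect sizes, and propensities below are hypothetical, not those of our empirical application:

```python
import random
import statistics

random.seed(1)

# Two equally sized regions: the treatment benefits region B (+2.0) but
# harms region A (-1.0), so the true full-population ATE is +0.5.
# Region A is "stopped" partway through, so only 25% of its units end up
# treated, while region B keeps a 50/50 split.
propensity = {"A": 0.25, "B": 0.50}
effect = {"A": -1.0, "B": 2.0}

rows = []
for region in ("A", "B"):
    for _ in range(20000):
        t = 1 if random.random() < propensity[region] else 0
        y = t * effect[region] + random.gauss(0, 1)
        rows.append((region, t, y))

# Naive difference in means is biased: treated units over-represent region B.
naive = (statistics.mean(y for _, t, y in rows if t == 1)
         - statistics.mean(y for _, t, y in rows if t == 0))

# Inverse propensity weighting (Horvitz-Thompson) corrects the imbalance.
n = len(rows)
ipw = (sum(t * y / propensity[r] for r, t, y in rows) / n
       - sum((1 - t) * y / (1 - propensity[r]) for r, t, y in rows) / n)

print(f"naive ATE: {naive:.2f}, IPW ATE: {ipw:.2f}, truth: 0.50")
```

The key requirement is that the (now covariate-dependent) assignment probabilities are known to the practitioner, which holds by construction when they decide which region to stop.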
**R3: _X maps deterministically to G -> is this not unrealistically simple? What happens if you use a stochastic function...?_**
Your comment is well-taken, especially for clinical trials. To address your question, Fig 2a in the global response demonstrates that CLASH still performs well even with stochastically determined groups. That said, we would like to note that deterministic mappings from covariates to harmed groups do occur in real-world settings. For example, a new product feature evaluated in an A/B test may increase system crashes for only certain device types (e.g., Android devices with a certain chip). In such cases, the harmed groups can be determined entirely from the covariates.
**R3: _'recruiting one treated and one untreated participant at each step' is unrealistic. Why not…have batch recruitment?_**
Thank you for giving us the opportunity to clarify. This assumption is actually not necessary for CLASH, but rather for certain stopping tests that CLASH can be used with. For example, the mSPRT (Johari et al., 2017) requires this assumption, as it considers an “observation” at each time step to be the difference in outcomes between a treated and untreated unit. Note that this is not as strong an assumption as it may initially seem: if the data is i.i.d. (as we assume), then at any interim checkpoint, the data collected thus far can be re-ordered to ensure that there is a treated and untreated observation at every step. That said, it does complicate estimation in cases where the treatment-control split is not 50-50.
However, we emphasize that CLASH does not require this assumption if used with non-SPRT techniques. For example, CLASH with the O’Brien-Fleming test can be used with batch recruitment with imbalanced treatment and control groups. This does not affect our results: for example, Figs 2b and 2c in our global response (which focus primarily on outcome variance) do not assume one treated and untreated participant per time step. We demonstrate that CLASH is still able to perform well in these settings.
**R3: _What happens if the standard deviation is larger or smaller? Is it important to know this standard deviation in practice?_**
Figs 2b and 2c in the global response consider these scenarios. We find that CLASH and the Oracle are both affected by increasing variance (Fig 2b). Notably, CLASH outperforms the homogeneous baseline for all settings. Needing to estimate the variance (instead of knowing it a priori) does have an effect on CLASH’s performance (Fig 2c); however, CLASH still outperforms the homogeneous baseline across all considered variances.
**R3: _What happens when the dimensionality is 100 and 1000 in the simulation?_**
Fig 2e in the global response considers a high-dimensional setting with 100 and 500 covariates. We find that CLASH is still able to perform well with 100 covariates, outperforming the homogeneous baseline for medium and large effect sizes. With 500 covariates, CLASH only stops the experiment slightly more often than the homogeneous baseline for large effects. This result thus illustrates CLASH’s limitations in dealing with very high-dimensional covariate sets. We note, however, that 500 covariates is a fairly extreme setting for an experiment with 4,000 participants. In practice, even if 500+ covariates are available, practitioners would be able to use their domain expertise or statistical methods (e.g., LASSO) to specify a subset of covariates on which to assess harm. We have updated our limitations section to reflect this discussion, and recommend feature selection before running CLASH in very high-dimensional settings.
**R3: _'We only sample from Region 1'... Why not sample uniformly at random…?_**
This is a great question – in fact, we do present results from uniform sampling in Fig S14 in App H of the submitted paper. CLASH is able to outperform the homogeneous approach in this setting, though only for larger experiments (80k+ participants). However, CLASH’s main focus is on experiments in which a minority group of participants is harmed. If we were to uniformly sample all regions, the harmed group (Regions 1-4) would form the majority (60% of all participants); we thus only sample from Region 1 to illustrate our main contribution.
---
Rebuttal Comment 1.1:
Title: Good response
Comment: Thank you for the complete response. I will raise my score to a 7. | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive comments and constructive feedback. In this global response, we focus on two themes raised by multiple reviewers: (1) additional experiments, and (2) the decision to stop only on the harmed group. We separately provide responses to individual reviewers.
### (1) Additional Experiments
The attached pdf contains figures showing CLASH’s performance in reviewer-suggested experiments. All figures use Gaussian outcomes and the O’Brien-Fleming stopping test; more experiments will be added to the paper supplement.
**Figure 1**: CLASH’s performance for trial populations with >2 groups. CLASH performs well: it stops more frequently than the homogeneous baseline across a range of effect sizes and as often as the Oracle for larger effects. We consider two settings:
a. Three groups of unequal size (group size and effect size, respectively, in parentheses): one weakly benefitted (87.5%, -0.1), one strongly harmed (6.25%, x-axis), and one weakly harmed (6.25%, x/2).
b. Four equally sized groups (effect sizes in parentheses): strongly benefitted (-x), weakly benefitted (-x/2), weakly harmed (x/2), and strongly harmed (x).
**Figure 2**: Specific reviewer-suggested experiment settings. In all figures, the treatment harms the minority group and weakly benefits the majority (effect size: -0.1). Unless specified, there are 5 covariates and minority group size is 12.5%.
a. Stochastic group membership: covariates map stochastically to the benefited and harmed groups. We construct a 25% minority group deterministically from covariates as before, but randomly assign p% of this group to be harmed (the remainder is benefitted). CLASH outperforms the homogeneous baseline both when p=0.5 and 0.75.
b. Smaller/larger outcome variance: smaller / larger variance in the observed outcomes. CLASH outperforms the homogeneous baseline in all settings. CLASH and the Oracle both perform better with lower variance.
c. Estimated outcome variance: variance is not known, but rather estimated from the observed outcomes. Estimating the variance has an effect on CLASH’s performance, but CLASH still outperforms the homogeneous baseline.
d. Small sample sizes: small sample sizes (N = 200, 400, 1000). CLASH outperforms the homogeneous baseline by a wide margin in experiments with moderately small samples (N=1000). While the gap between CLASH and the homogeneous baseline decreases as sample size decreases, CLASH still outperforms the baseline with as few as 400 participants. However, with very small samples (N=200), neither CLASH nor the homogeneous baseline stops the experiment. We have included this setting as a limitation in our revised paper.
e. Large number of covariates: high-dimensional covariates (d=100, 500). CLASH is robust to increasing dimensionality to a point, outperforming the homogeneous baseline even with 100 covariates. The extreme case with 500 covariates is more challenging: here, practitioners may need to perform feature selection before running CLASH. We discuss this in our revised limitations section.
### (2) Decision to Stop Only on Harmed Group
We now focus on what investigators can do once CLASH indicates that an experiment should be stopped. We have added this discussion to the Methods section of the revised paper.
*Stopping Decision*. When CLASH indicates a group is being harmed, investigators should make choices based on domain expertise. If the nature of harm is serious (e.g., mortality) they may decide to stop the experiment for all participants. For milder harms (e.g., crashes in an A/B test), they may decide to stop the experiment only for the harmed group. If the identified harm is much less consequential than the potential benefit (e.g., harm of increased headaches vs. benefit of curing cancer), they may decide to not stop the experiment at all. The specific choice made will depend heavily on the ethical and financial aspects of the experiment, as well as a thorough review of the interim data. CLASH is not intended to make this decision for investigators; however, it helps investigators realize that a group is being harmed and thus a stopping decision needs to be made.
*Tree-based heuristic for identifying the harmed group*: If investigators choose to stop the experiment only for harmed participants, they must first identify the harmed group. The distribution of CLASH weights at stopping time can help in this task: groups with estimated participant weights close to 1 are likely to be harmed. Fig S13 in Appendix H illustrates this in our empirical application: the estimated weights in Regions 1 and 2 are both close to 1, indicating that these are the groups on which to stop. With few covariates, investigators can manually inspect the weight distribution for each covariate combination to identify the harmed group. With many covariates, investigators can use a simple heuristic: a regression decision tree on the estimated CLASH weights can find the covariate values for which the weights are the largest. Limiting the depth of this tree can ensure that the identified group is actionable (i.e., it is possible to stop the experiment on it) and of non-trivial size.
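To illustrate the idea behind this heuristic, a toy sketch using a depth-1 "regression stump" in place of a full regression tree; the covariates and weight values below are simulated, not from the paper:

```python
import random
import statistics

random.seed(2)

# Toy setting: three binary covariates; the harmed group is x0 == 1, so its
# (estimated) CLASH weights cluster near 1 and everyone else's near 0.
participants = []
for _ in range(1000):
    x = [random.randint(0, 1) for _ in range(3)]
    w = random.uniform(0.8, 1.0) if x[0] == 1 else random.uniform(0.0, 0.2)
    participants.append((x, w))

def best_split(rows):
    """Depth-1 regression stump: pick the covariate split that minimizes
    the within-group sum of squared errors of the weights."""
    def sse(ws):
        m = statistics.mean(ws)
        return sum((w - m) ** 2 for w in ws)
    scores = {}
    for j in range(3):
        left = [w for x, w in rows if x[j] == 0]
        right = [w for x, w in rows if x[j] == 1]
        scores[j] = sse(left) + sse(right)
    return min(scores, key=scores.get)

j = best_split(participants)
harmed_mean = statistics.mean(w for x, w in participants if x[j] == 1)
print(f"split on covariate x{j}; mean weight when x{j}=1: {harmed_mean:.2f}")
```

A depth-2 tree would analogously surface the two covariates most associated with high weights, yielding a well-defined, non-trivially-sized group on which practitioners could act.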
There are alternative approaches to this harmed group identification task; for example, practitioners can use subgroup identification methods on the raw outcomes (e.g. [1]). We are agnostic to the choice of method: investigators can pick the approach most appropriate for their domain.
*Treatment effect estimation*. Stopping the experiment in only one group can affect inference at the end of the experiment, as the treatment is no longer randomly assigned across covariates. To estimate the ATE over the entire population, practitioners should use inverse propensity weights to correct for the induced selection bias. We illustrate this correction in our response to reviewer K2p1 below, and in our revised manuscript.
[1] Zhang, et al, DOI: 10.21037/atm.2018.03.07
Pdf: /pdf/682543ebde6599523b13d874d256a710aa05a4a0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a method to decide when to stop a clinical trial or an A/B study in cases where the treatment/intervention only causes harm to a minority group of participants, in which traditional methods can fail to detect the need for stopping an experiment. The paper includes a thorough theoretical analysis of the method, which suggests that it has desirable properties (stopping when a treatment is harmful and not stopping when it is not harmful) for large enough n. It also includes extensive simulations demonstrating that the method outperforms established methods in most cases and performs well in a real-world application.
Strengths: - The paper is well written and easy to follow.
- The research question is interesting and important also from a fair AI perspective as this research protects minority groups.
- The paper includes a very extensive analysis of the problem, including theory, simulations and an empirical application.
Weaknesses: - The simulations could include a wider range of sample sizes, i.e. sample sizes of 50-200 participants which are common in clinical trials.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: First of all, I want to say that this paper was a pleasure to read. I was impressed by the thoroughness and extent of the analyses. I only have a few suggestions. I hope you will find them helpful and constructive.
**Major points**
- I would like to see how the method holds up for much smaller sample sizes in simulations and on the empirical data, since there are many clinical trials with sample sizes around 50, 100 or 200 participants. Even if the method does not work well for this, it would be important information to have so that practitioners do not apply this method when it’s not appropriate.
- On p. 3, you write “For example, consider a situation in which there are two equally sized groups with equal but opposite treatment effects, that is, p(G = 0) = p(G = 1) and τ(0) = −τ(1). The ATE is zero, and so any stopping test with H0: ATE ≤ 0 is designed to continue to completion at least (1 − α)% of the time.” However, later you focus on Gaussian outcomes. Can your method handle the first scenario as well (i.e., bimodal distributions)? Or could it be extended to Gaussian mixtures for example?
- In the simulations shown in the supplement (Figure S6), there are certain scenarios for which the homogeneous baseline outperforms CLASH (e.g., panel A: Bayesian estimation maxSPRT, no treatment effect on the majority and small harmful effects, or panel B: maxSPRT for small effects). Can you speculate on why this is the case? I do not think it is a problem if your method is not the best in every scenario, but it is important to outline and understand the conditions under which it is outperformed by other methods.
**Minor points**
- How would this method be used to decide for whom a trial should be stopped early in practice? Would this be based on the probability of being harmed, does this require a threshold? Perhaps you can elaborate more on this.
- Out of curiosity, could this method be extended to multiple groups that show different harm (e.g., a strongly and a weakly harmed group)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors adequately discuss the limitations of their work. Depending on the performance for much smaller n in simulations, it might be worthwhile to expand on the required sample size as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your detailed review and positive feedback: we’re glad you enjoyed reading our paper and appreciate your encouragement. We address each of your constructive suggestions below, which we believe have further strengthened the paper.
**R2: _The simulations could include a wider range of sample sizes, i.e. sample sizes of 50-200 participants which are common in clinical trials…I would like to see how the method holds up for much smaller sample sizes in simulations and on the empirical data, since there are many clinical trials with sample sizes around 50, 100 or 200 participants. Even if the method does not work well for this, it would be important information to have so that practitioners do not apply this method when it’s not appropriate._**
Thank you for this important suggestion. Fig 2d in our global response considers an experiment with smaller sample sizes (N=200, 400, 1000). Overall, we find that CLASH can be effective in small samples: it outperforms the homogeneous baseline with as few as 400 participants, and with 1000 participants, the performance gap is wide. However, N=200 is a challenging scenario: neither CLASH nor the homogeneous baseline is able to stop experiments with such few participants. We have included this setting as a limitation in our revised paper.
**R2: _On p. 3, you write “For example, consider a situation in which there are two equally sized groups with equal but opposite treatment effects, that is, p(G = 0) = p(G = 1) and τ(0) = −τ(1). The ATE is zero, and so any stopping test with H0: ATE ≤ 0 is designed to continue to completion at least (1 − α)% of the time.” However, later you focus on Gaussian outcomes. Can your method handle the first scenario as well (i.e., bimodal distributions)? Or could it be extended to Gaussian mixtures for example?_**
Apologies for the confusion on this point. Our simulation experiments do in fact consider this setting: the outcomes are Gaussian within each group (i.e., majority and minority), not the population as a whole. Thus, viewed over the whole population, the outcomes are generated from a mixture of two Gaussians. We have clarified this point in the revised paper.
**R2: _In the simulations shown in the supplement (Figure S6), there are certain scenarios for which the homogeneous baseline outperforms CLASH (e.g., panel A: Bayesian estimation maxSPRT, no treatment effect on the majority and small harmful effects, or panel B: maxSPRT for small effects). Can you speculate on why this is the case? I do not think it is a problem if your method is not the best in every scenario, but it is important to outline and understand the conditions under which it is outperformed by other methods._**
Thank you for giving us an opportunity to address this point. In general, CLASH performs better when it is easy to differentiate between the harmed and unharmed groups from the covariates and outcomes. The more similar the effects on the minority and majority groups, the harder this task is; CLASH thus performs best when |majority effect - minority effect| is large. The homogeneous baseline is the exact opposite: it performs better when the whole population ATE is more similar to the effect on the minority group. Thus, the case in which the treatment has no effect on the majority but a small harmful effect on a large minority group (25% or 50%) is a near-ideal situation for the homogeneous baseline, but a more difficult situation for CLASH. However, it is worth noting that the difference in stopping probability between CLASH and the homogeneous baseline is relatively small, even in this difficult situation. We have added this discussion to the revised paper to provide more insight for practitioners.
**R2: _How would this method be used to decide for whom a trial should be stopped early in practice? Would this be based on the probability of being harmed, does this require a threshold? Perhaps you can elaborate more on this._**
Thank you for raising this important point. We have addressed this in point (2) of our global response. In short, the decision will heavily depend on domain specifics, but CLASH weights can provide heuristics that can help inform practitioner decisions.
**R2: _Out of curiosity, could this method be extended to multiple groups that show different harm (e.g., a strongly and a weakly harmed group)?_**
Yes, CLASH can be leveraged in this scenario: see Fig 1 in our global response for a demonstration.
---
Rebuttal Comment 1.1:
Title: All questions answered and thank you
Comment: Thank you very much for addressing my concerns and including these additional experiments. I stand by my first score highlighting that this is a strong paper. I am certain that this work will be a great contribution to the conference. I am looking forward to reading more about your work and wish you a great conference. | Summary: A two-stage approach CLASH was proposed to determine the early stopping time of a randomized experiment when the treatment is harmful to a subset of the population. The indicators of the harmed groups are estimated by causal machine learning methods in stage 1, and the early stopping time is determined using the weighted test statistic in stage 2. Theoretical properties of the proposed method were established. Simulation studies were conducted to show the existing homogenous method's failure and CLASH's success. Finally, an illustration of the proposed methods was presented by analyzing real data from a digital experiment. Overall, the proposed methodology is useful, and the paper is well-written.
Strengths: The method is solid, clearly described, and useful in different areas. With the accommodation of both Gaussian and time-to-event experimental outcomes and considering the limited prior work on this specific area, the proposed method can make a good impact. The results of the simulation studies and real data analysis are convincing and support their conclusions.
Weaknesses: The simulation settings seem relatively simple and difficult to interpret: only several binary covariates were included, and there was only 1 harmed group. It would be interesting to see some simulation results under settings closer to the real data analysis (multi-level categorical covariates and multiple harmed groups). Including several sentences to interpret the simulation settings would also be helpful.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The author provided a paragraph to discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and positive feedback: we’re glad you found our work useful and impactful. We appreciate your constructive comments, which we address below and in our revised paper.
**R1: _The simulation settings seem relatively simple and difficult to interpret: only several binary covariates were included, and there was only 1 harmed group. It would be interesting to see some simulation results under settings closer to the real data analysis (multi-level categorical covariates and multiple harmed groups). Including several sentences to interpret the simulation settings would also be helpful._**
In our global response, we summarize results from additional simulation experiments that evaluate CLASH in settings closer to real-data analysis, per your suggestions. Specifically, Fig. 1 presents results with multiple harmed groups, while Fig 2e presents results with 100 and 500 binary covariates (similar to many multi-level categorical covariates). We find that CLASH outperforms the homogeneous baseline in these more realistic simulation settings, and hope that this finding encourages use of CLASH in real-world experiments. We include these new results, as well as more detail on interpreting these simulation settings, in Section 4 of the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read the other reviews and rebuttals. I will keep my original score and agree to accept this paper. | null | null | null | null |
PoET: A generative model of protein families as sequences-of-sequences | Accept (poster) | Summary: This paper proposes an autoregressive, pre-trained generative model of protein families. The model is trained on sequences-of-sequences, each organized from a set of related protein sequences. It uses a shared in-sequence position encoder to capture conditioning among sequences in an order-independent manner, and is thus able to generalize to large context lengths. This paradigm helps exploit correlations among sub-sequences to improve generation, especially when sufficient multiple sequence alignment training data is lacking.
Experiments were conducted on DMS datasets to show the effectiveness of the proposed method.
Strengths: 1. Proposes an autoregressive generative model trained in a sequences-of-sequences manner, which makes it easy to append more protein sequences and exploit correlations among proteins to guide generation.
2. A shared absolute position encoder is employed in self-attention across different sub-sequences to relieve the impact of sub-sequence ordering on next-amino-acid generation.
Weaknesses: 1. Although the shared inter-sequence position encoder can relieve the impact of input sub-sequence order, the generation procedure is still order-dependent, since it models Pr(next amino acid | s1, s2, ...). For example, if all short sequences are arranged at the beginning of the combined sequence, it may be difficult to generate longer sequences, such as insertion-based variants. It would be better to explore this impact, or to validate whether generation is robust to the input order, through a careful experimental study.
2. The paper lacks deep insight into, or analysis of, the results, even when some results are counterintuitive, such as longer input sequences hurting performance.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. The Transformer’s attention mechanism still suffers from high time and memory consumption; please give the time and space complexity of the proposed Transformer structure.
2. How do you guarantee the diversity of generated sequences? In autoregressive models, maximum-probability-guided generation usually suffers from low-diversity results.
3. Generally, the hyper-parameters used in ensembling affect evaluation performance. Is the comparison in Table 1 still fair?
4. In the ablation study, increasing the context length even hurts performance, which is counterintuitive. Is this phenomenon due to the decoding strategy (see question 2), or do longer sequences weaken the effect of inverse count sampling?
5. How is the ensemble fusing PoET with the other baselines performed, and what is the motivation for ensembling? It seems unfair to compare with other baselines without ensembles.
Typos:
213: All combinations of these are parameters are used
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please refer to weakness.
The organization and presentation (typos) can be further improved. For example, it would be better to give a brief overview of the experimental study at the beginning of Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their consideration of our paper, and for their questions and comments. We note that there seems to be some misunderstanding regarding certain aspects of our method and experiments, which we hope to address below in addition to the reviewer’s questions. We will also clarify these points in the revised manuscript.
- Although the shared inter-sequence position encoder can relieve the impact of input sub-sequence order, the generation procedure is still order-dependent, since it models Pr(next amino acid | s1, s2, ...). For example, if all short sequences are arranged at the beginning of the combined sequence, it may be difficult to generate longer sequences, such as insertion-based variants. It would be better to explore this impact, or to validate whether generation is robust to the input order, through a careful experimental study.
The model is not sensitive to the lengths and ordering of sequences in the way suggested. The model is able to generate novel insertions and the relative positional encodings only act as a prior on the alignment between sequences in multihead attention. The sequence clusters the model is trained and evaluated on have large numbers of indels between sequences and have high length variability. Please see our global response comment for more details.
- The paper lacks deep insight into, or analysis of, the results, even when some results are counterintuitive, such as longer input sequences hurting performance.
As pointed out by the other reviewers, we provide extensive analysis of the model and prompt construction methodology, along with discussion of these results, in the appendix. That being said, we cannot explain every phenomenon found in these experiments. In particular, we suspect that the model capacity and context length results are both related to the misspecification problem of variant function prediction and density estimation as discussed in [1] and Appendix D.2 Lines 399-412. Hopefully these will prove fruitful for us or others to explore in future work.
- Transformer’s attention mechanism still suffer from high time and memory consumption, please give the time and space complexity of the proposed transformer structure.
We utilize flash attention [2], so the model requires O(N^2) time and O(N) memory where N is the length of the sequence.
- How do you guarantee the diversity of generated sequences? In autoregressive models, maximum-probability-guided generation usually suffers from low-diversity results.
Sampling from the model yields high diversity results and this diversity can be controlled using well known methods like nucleus sampling [3].
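To make this concrete, here is a minimal sketch of nucleus (top-p) sampling over a token distribution. The vocabulary and probabilities are made up for illustration; this is not PoET's decoding code, just the standard technique from [3]:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Top-p (nucleus) sampling: draw from the smallest set of tokens
    whose cumulative probability reaches p, after renormalizing."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(probs)[::-1]            # tokens by descending probability
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=renorm))

# Toy 4-token distribution: with p=0.75 only the top two tokens can be drawn.
probs = np.array([0.5, 0.3, 0.15, 0.05])
token = nucleus_sample(probs, p=0.75)
```

Lowering `p` sharpens the distribution (less diversity); raising it toward 1 approaches plain ancestral sampling.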
- Generally, the hyper-parameters used in ensembling affect evaluation performance. Is the comparison in Table 1 still fair?
There are no additional hyper-parameters in our ensemble. It is a simple average over each prompt. Other settings such as homolog retrieval method, sampling homologs, context length, etc, are tuned based on validation set performance (lines 204-208) and are not specific to the ensemble. Furthermore, several other methods use ensembles over large numbers of language models, whereas we only use one.
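As a concrete illustration, the prompt-level ensemble amounts to an unweighted mean of per-prompt scores; `score_fn` and the prompt identifiers below are hypothetical stand-ins, not PoET's actual interface:

```python
def ensemble_score(variant, prompts, score_fn):
    """Unweighted mean of a variant's score over several prompts;
    the ensemble itself introduces no extra hyper-parameters."""
    scores = [score_fn(variant, prompt) for prompt in prompts]
    return sum(scores) / len(scores)

# Toy usage with fixed, made-up per-prompt log-likelihood scores.
fake_scores = {("V1", "promptA"): -1.0, ("V1", "promptB"): -3.0}
avg = ensemble_score("V1", ["promptA", "promptB"],
                     lambda v, p: fake_scores[(v, p)])  # -> -2.0
```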
- In the ablation study, increasing the context length even hurts performance, which is counterintuitive. Is this phenomenon due to the decoding strategy (see question 2), or do longer sequences weaken the effect of inverse count sampling?
We think this is probably due to the misspecification problem between variant fitness prediction and density estimation. Increasing the context length does generally improve the generative performance of the model (as measured by perplexity on heldout sequence clusters), but hurts performance on variant effect prediction.
- How is the ensemble fusing PoET with the other baselines performed, and what is the motivation for ensembling? It seems unfair to compare with other baselines without ensembles.
We simply average them together (Appendix H). This is why we report the PoET + TranceptEVE ensemble in a separate section, consistent with the TranceptEVE hybrid model, and separate from the other non-multi-method ensembles.
- Typos: 213: All combinations of these are parameters are used
We will correct this in the revised manuscript.
[1] Weinstein, Eli, et al. "Non-identifiability and the Blessings of Misspecification in Models of Molecular Fitness." Advances in Neural Information Processing Systems 35 (2022): 5484-5497.
[2] Dao, T., Fu, D. Y., Ermon, S., Rudra, A., & Ré, C. (2022). FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. arXiv [Cs.LG]. Retrieved from http://arxiv.org/abs/2205.14135
[3] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygGQyrFvH.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for giving a detailed explanation and supplying more evidence to clarify my concerns, especially regarding the input order w.r.t. the shared absolute position embedding. Beyond the diversity of the length distribution, this question concerns a sensitivity analysis of the input order, i.e., the robustness of the proposed method. I think a better way to address it would be to compare the sequences generated from differently ordered inputs, e.g., inputs that concatenate shorter sequences first vs. inputs organized in random order, since causal attention is not symmetric and would still be order-dependent.
If the generated sequences are either better than the baselines or independent of the input sequence order, I will increase my score.
---
Reply to Comment 1.1.1:
Title: Additional experiment added
Comment: We thank the reviewer for their additional clarification and helpful suggestions. We have performed the requested experiment and report next sequence perplexities and joint log likelihoods of sequences-of-sequences with random, shortest-to-longest, and longest-to-shortest orderings in our addendum comment (Addendum 2: on sequence ordering). We find that sequence ordering has little-to-no effect on next sequence perplexity or joint log likelihood with PoET, and that the results are significantly better than the baselines regardless of ordering. We also note that, in practice, we always use random orderings and orderings can even be marginalized during inference by sampling multiple random orderings. We will include these additional findings in the manuscript and hope this has sufficiently addressed the reviewer’s concern. | Summary: This paper proposes an autoregressive generative model (protein evolutionary transformer, PoET) of whole protein families. Current generative protein language models are either difficult to direct to produce a protein from a specific family of interest or must be trained on a large multiple sequence alignment (MSA) from the specific family of interest, making them unable to benefit from transfer learning across families. This model can incorporate new sequence information without retraining, generalize to large context lengths, and avoid issues related to conditioning on MSAs. They propose a novel Transformer layer that models order-dependence between tokens within sequences and order-independence between sequences. The advantages are that PoET can 1) be used as a retrieval-augmented protein language model, 2) generate and score novel indels in addition to substitutions without depending on MSAs of the input family, and 3) extrapolate from short context lengths, allowing it to generalize well even for small protein families.
Strengths: - This paper proposes a novel Transformer layer that models order-dependence between tokens within sequences and order-independence between sequences.
- This paper provides a detailed analysis of experiments in the supplementary material.
- The proposed PoET outperforms existing protein language models and evolutionary sequence models for variant effect prediction in extensive experiments on the 94 deep mutational scanning datasets in ProteinGym.
Weaknesses: - As the title states, the core of this paper is a generative model of protein families, but only one downstream task, i.e., fitness prediction, is evaluated. I therefore suggest incorporating 'fitness prediction' into the title.
- Comparisons with some fitness prediction baselines are missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Line 64, the contribution 'PoET can be sampled from and can be used to calculate the likelihood of any sequence efficiently.' sampled from which? Moreover, I don't think this is a contribution.
2. Line 197, why the log-likelihood of the variant can be used as the fitness predictor? What's the intuition behind this? or can you provide the related reference?
3. Line 227, Comparison to baselines. There are many methods for fitness prediction in directed evolution, such as CLADE and CLADE2.0. Why not compare with them?
4. In Table 1, What's the result of PoET if no ensemble?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper presents fitness prediction as the only downstream task of PoET. It is better to show more tasks to verify the effectiveness of PoET.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and for their constructive comments and questions. These are addressed below.
- As the title states, the core of this paper is a generative model of protein families, but only one downstream task, i.e., fitness prediction, is evaluated. I therefore suggest incorporating 'fitness prediction' into the title.
We consider generative evaluations of the model in terms of the perplexity of generating heldout sequences (Figure 4, 10) as well as examine samples from the model (Appendix M.2). Furthermore, we have expanded this analysis as discussed in the global response. That said, we do focus primarily on the evaluation of our model in terms of its ability to predict variant fitness. We will revise the manuscript to better emphasize this point.
- Line 64, the contribution 'PoET can be sampled from and can be used to calculate the likelihood of any sequence efficiently.' sampled from which? Moreover, I don't think this is a contribution.
What we mean by this is that samples can be efficiently drawn from the distribution over sequences modeled by PoET. PoET can also be used to calculate closed form likelihoods of sequences. This is in contrast to, say, energy based models which are generative models but that can only provide un-normalized likelihoods and are inefficient to sample from, or to GANs which can be sampled from but do not offer any way to estimate likelihoods. Specifically in the variant effect prediction space, most models considered are not proper generative models offering the ability to draw samples and calculate likelihoods efficiently. Thus, it is an important property of our model as we mention in that section.
- Line 197, why the log-likelihood of the variant can be used as the fitness predictor? What's the intuition behind this? or can you provide the related reference?
The log-likelihood or some un-normalized or approximate version thereof is used by every unsupervised variant effect predictor that we are aware of. The idea is that natural protein sequences must be evolutionarily fit and, therefore, by learning the generative distribution of these sequences we are capturing this density. The probability of a sequence variant is then reflective of fitness, because we assume that more fit sequences are more likely to be observed in nature (examples include refs [3, 4, 9, 10, 11, 13, 23] in main text).
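As a toy illustration of this idea (the per-residue probabilities below are made up, not from any real model; an actual autoregressive model would sum conditionals log p(x_i | x_<i) rather than per-residue constants):

```python
import math

# Hypothetical per-residue log-probabilities standing in for a model's
# conditionals log p(x_i | x_<i); higher total log-likelihood is taken
# to indicate higher evolutionary fitness.
AA_LOGP = {"A": math.log(0.4), "L": math.log(0.3),
           "G": math.log(0.2), "V": math.log(0.1)}

def log_likelihood(seq):
    # With a real model, each term would be conditioned on the prefix.
    return sum(AA_LOGP[aa] for aa in seq)

wild_type = "ALGA"
variants = ["ALGA", "VLGA", "AAGA"]
# Score each variant by its log-likelihood ratio to the wild type.
scores = {v: log_likelihood(v) - log_likelihood(wild_type) for v in variants}
# "AAGA" (L->A, to a more probable residue) scores above the wild type;
# "VLGA" (A->V, to a less probable residue) scores below it.
```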
- Line 227, Comparison to baselines. There are many methods for fitness prediction in directed evolution, such as CLADE and CLADE2.0. Why not compare with them?
These methods appear to be primarily supervised variant effect predictors, whereas we only consider the unsupervised variant effect prediction problem here and compare with other likelihood-based methods. These methods also only report results for four datasets, not for ProteinGym, a far more comprehensive collection of DMS datasets used here. In fact, for its unsupervised component, it appears that CLADE2.0 uses evolutionary scores from DeepSequence VAE, MSA Transformer, profileHMMs, and ESM-1v, all of which we already compare with, or compare with a better version of, and dramatically outperform.
- In Table 1, What's the result of PoET if no ensemble?.
We report results without ensembling in Appendix Table 4.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed rebuttal. After carefully reading the rebuttal, I think some of my concerns have been addressed. However, for a thorough evaluation, I would like to see more downstream task experiments beyond variant fitness.
Strengths: This paper is very well-written, with clear explanations, great examples and figures, and a comprehensive Related Work section. The experiments are explained well, they align with the claims made in the paper, and the results are discussed adequately. Moreover, there's an abundance of extra information in the appendix.
Weaknesses: I enjoyed reading this paper and do not have any major concerns. However, I did miss a "Limitations" section or something equivalent, either in the main text or in the supplementary. Moreover, I think some results (mainly those in the "Ablation" section 5.2) could be made stronger, for example by averaging performance over multiple runs and showing error bars, because the observed trends are not always that convincing. Finally, and this is more of a general issue, even though the results show improved/competitive performance, the average correlation values to experimental data are relatively low, i.e., around 0.5, which is still a fairly weak correlation. The mismatch between density estimation and fitness prediction has been discussed before, for example in [1], and it might be worth discussing to some extent in this paper as well.
[1] Weinstein, Eli, et al. "Non-identifiability and the Blessings of Misspecification in Models of Molecular Fitness." Advances in Neural Information Processing Systems 35 (2022): 5484-5497.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. As mentioned in "Weaknesses": Limitations section missing.
2. As mentioned in "Weaknesses": perhaps include some discussion on the (mis)match between density estimation and fitness prediction.
3. As mentioned in "Weaknesses": if at all possible, it would be very informative to report averages over multiple runs, especially for the ablation results (Figure 3) but also for Table 1, to make the observations more convincing.
Minor comments:
4. The abstract states that PoET *outperforms* other models. However, from the number of boxed values in Table 1, this claim might be a bit too strong.
5. Do you report the dimensionality of embedding size $d$ somewhere? I might have missed it.
6. If I understand correctly, the relative positional encoding scheme would probably not be beneficial when there are big differences in sequence length amongst homologous proteins. In that case, there must be a better encoding possible (some MSA-like). Could you discuss this or perhaps mention it as a limitation?
7. The results on ensembling PoET with other models (Table 1 + Table 6 in the appendix) are interesting. Did you also try ensembling PoET with one or multiple other PoET model(s)?
8. Section 5.2.1:
* In general, it could be considered "cheating" to monitor the correlation during training since this is essentially your test data. If it's just for these experiments then it's fine, I'm just checking if it's not something you used as a stopping criterion in general?
* Context Length: apart from the earlier suggestion to do multiple runs here, it's also worth pointing out that the drawn conclusions depend quite strongly on when it was decided to stop training.
* Model Size: the stopping criterion of "when the performance seemed to plateau" looks a bit arbitrary. If we compare the lines in the right graph in Figure 3, the blue line was cut off quite early while similar "plateaus" can be found in the purple and brown lines, which were allowed to train longer.
9. Missing references:
* Figure 1a is never referenced.
* Paper reference missing for RoPE?
* Not all appendices are referenced in the main text (B, E, F, I, and O are missing).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Even though some limitations are touched upon in the main text, a thorough discussion of limitations is lacking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overwhelmingly positive response and comments. In answer to the reviewer’s questions:
- As mentioned in "Weaknesses": Limitations section missing.
We sought to discuss limitations throughout the paper as appropriate, but we will revise the paper to include an explicit “Limitations” section for the camera ready version.
- As mentioned in "Weaknesses": perhaps include some discussion on the (mis)match between density estimation and fitness prediction.
Yes, this is an extremely interesting point, which we mention briefly in Appendix D.2 Lines 399-412, but we will expand on this discussion in the revised manuscript.
- As mentioned in "Weaknesses": if at all possible, it would be very informative to report averages over multiple runs, especially for the ablation results (Figure 3) but also for Table 1, to make the observations more convincing.
We did not feel that multiple runs were necessary given the high cost of training these models and the fact that results are already averaged over a large number of datasets, which we think represents a more interesting source of variation, anyway, regarding expected performance on new variant effect prediction problems. We also do not have access to multi-run averages from other methods. Statistically significant claims relating to this source of variation are indicated with a box in Table 1.
- The abstract states that PoET outperforms other models. However, from the number of boxed values in Table 1, this claim might be a bit too strong.
We will adjust the text as suggested.
- Do you report the dimensionality of embedding size d somewhere? I might have missed it.
The dimensionality of the embedding is 1024. We report this in Appendix B.
- If I understand correctly, the relative positional encoding scheme would probably not be beneficial when there are big differences in sequence length amongst homologous proteins. In that case, there must be a better encoding possible (some MSA-like). Could you discuss this or perhaps mention it as a limitation?
The relative positional encoding scheme basically acts as a weak prior on the alignments between the sequences. We thought the same and have also trained a version of PoET without any positional information between sequences, which performs essentially identically to the version of the model we presented here. This suggests the model is able to learn the correct alignments and is not sensitive to this prior. We agree that there are other encodings which may be better, although, it is not-so-obvious how they can be applied correctly with an autoregressive decoder.
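To make the "weak prior on alignments" interpretation concrete, here is an illustrative sketch (not PoET's actual layer) of a relative-position bias added to attention logits, which softly favors keys whose in-sequence position is close to the query's:

```python
import numpy as np

def attention_with_relative_bias(q, k, v, pos_q, pos_k, strength=0.1):
    """Scaled dot-product attention plus a bias that decays with the
    distance between in-sequence positions (a soft alignment prior)."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    bias = -strength * np.abs(pos_q[:, None] - pos_k[None, :])
    weights = np.exp(logits + bias)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v

# With uninformative (zero) queries and keys, attention follows the prior
# alone: the key sharing the query's position gets the most weight.
q = np.zeros((1, 4)); k = np.zeros((3, 4)); v = np.eye(3)
w = attention_with_relative_bias(q, k, v, np.array([0]), np.array([0, 1, 2]))
```

Because the bias only shifts logits, content (the q·k term) can override the prior, which is consistent with the model learning the correct alignments rather than being locked to them.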
- The results on ensembling PoET with other models (Table 1 + Table 6 in the appendix) are interesting. Did you also try ensembling PoET with one or multiple other PoET model(s)?
We have not, but this would be interesting to try!
- Section 5.2.1:
- In general, it could be considered "cheating" to monitor the correlation during training since this is essentially your test data. If it's just for these experiments then it's fine, I'm just checking if it's not something you used as a stopping criterion in general?
All hyperparameters, including training time and model size (Figure 3 caption) are tuned based on validation set performance only. This is consistent with other methods evaluated on ProteinGym.
- Context Length: apart from the earlier suggestion to do multiple runs here, it's also worth pointing out that the drawn conclusions depend quite strongly on when it was decided to stop training.
- Model Size: the stopping criterion of "when the performance seemed to plateau" looks a bit arbitrary. If we compare the lines in the right graph in Figure 3, the blue line was cut off quite early while similar "plateaus" can be found in the purple and brown lines, which were allowed to train longer.
We agree that the stopping criterion can have a strong effect on results, and will note this as a limitation of the ablation. While we try to be as objective as possible, our ability to conduct ablations is ultimately constrained by compute resources.
- Missing references:
- Figure 1a is never referenced.
- Paper reference missing for RoPE?
- Not all appendices are referenced in the main text (B, E, F, I, and O are missing).
We will correct these in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions and minor concerns point-by-point. I'll happily stick to my high score if indeed all promised changes are made. I understand that certain experiments are limited by time and compute requirements, but if the paper does get accepted, I would still suggest adding results for an ensemble of multiple PoET models for the camera-ready version for completeness. | Summary: Current generative protein language models focus on generating individual protein sequences and are not specifically trained to generate sequences for an entire protein family. The paper introduces a new autoregressive generative model that tackles this limitation by generating Multiple Sequence Alignments (MSAs) for the entire family. The model utilizes a TieredTransformerDecoder to model the within-sequence interactions in an order-dependent manner, as well as the between-sequence interactions in an order-independent manner. The authors evaluate their models on protein variant fitness tasks by measuring the probability of variants conditioned on searched MSA of the parent sequence. The proposed method achieves large improvements on the ProteinGym benchmark. Importantly , the method addresses the problems faced by previous approaches, such as handling proteins with low-depth MSAs and accommodating indel mutations in addition to substitutions.
Strengths: 1. Protein variant fitness prediction tasks are very important for protein design. The approach, which achieves large improvements on this task, will definitely have large potential impact.
2. The related work section is well-documented, and it is relatively clear where the authors' contributions lie.
3. According to Table 1, the proposed method performs exceptionally well on proteins with low-depth MSAs, which supports the motivation behind generating MSA instead of individual sequences.
4. While the technique used in the paper is not entirely new, the idea of generating MSA instead of individual sequences is novel enough for an application-oriented paper.
Weaknesses: 1. The soundness of autoregressive decoding of MSA is questionable. Please see the questions below.
2. The authors claim their contribution as proposing a model for generating sequences for the entire family, yet few experiments are conducted to evaluate the generated MSAs.
3. Considering the significance and volume of this work, it would be disappointing if the code were not provided. The authors only promise to offer an accessible API, but it is unclear whether they will provide the training code and trained models.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Major points:
1. The author proposes to encode the MSA independently of the order between different sequences. But they still use an autoregressive decoding method that introduces order dependence in the output probabilities. For example, the probability equation $p(s_1)p(s_2|s_1)p(s_3|s_1,s_2)=p(s_1,s_2,s_3)=p(s_1,s_3,s_2)=p(s_1)p(s_3|s_1)p(s_2|s_1,s_3)$ may not hold when using the proposed autoregressive decoding. In the former case, generating tokens of $s_3$ is conditional on $s_1$ and $s_2$, whereas in the latter case, generating tokens of $s_3$ is conditional on only $s_1$.
Therefore, when measuring the fitness with $p(v,s_1,…,s_m)$, the results will differ using $p(v|s_1,…,s_m)p(s_1,…,s_m)$ and $p(s_{j+1},…,s_m|v,s_1,…,s_j)p(v|s_1,…,s_j)p(s_1,…,s_j)$. How is this problem addressed during inference? How is the order of MSA sequences determined during training?
2. A trivial baseline for generating MSA is to use an autoregressive language model, such as ProGen, conditioning the model on the parent sequence and continuing the generation with a start token afterward. If limited by the context window, we can only generate one sequence at each time. Although the model is not trained on MSA datasets, it is now well-known that language models trained with next-token prediction loss can excel at generating next sequences. How would this perform compared with the proposed approach?
3. The organization of the paper is problematic. It appears that the most significant contribution of this paper lies in the improvement of protein variant fitness prediction tasks. However, the title and introduction sections place more emphasis on the generation of MSAs, which lacks substantial support in their experiments (the main results and ablation study are all based on fitness prediction). I suggest the authors reconsider the introduction section to highlight the "proposal of a new approach for protein fitness prediction" rather than "proposal of a new approach for generating MSA."
Minor points:
1. In Sec. 5, it would be helpful to introduce the included baseline methods and provide the corresponding references before discussing the results.
2. In Table 1, the performance of TranceptEVE M on Indels is shown to be lower than that of TranceptEVE L. Could you provide an explanation for this observation?
3. Line 154 in Sec.3.1.2: while it may be well-known, it would be better to explain the abbreviation "RoPE" when it is first mentioned and include a reference at that point.
Overall, though with flaws, I think the paper makes a big contribution to the protein design community considering the amount of work in the paper. I’m willing to raise my score if the authors can address my concerns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors should add a paragraph to discuss the potential limitations and negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and excitement about the potential impact and performance of our method. In particular, we appreciate that the reviewer feels that our “...paper makes a big contribution to the protein design community considering the amount of work in the paper.” We respond to the reviewer's specific comments and questions below.
- The soundness of...
To be clear, we do not decode MSAs. We decode sets of related protein sequences, which are unaligned.
- The authors claim their contribution...
The model is a generative model of whole protein families as sequences-of-sequences, but this full family generative task is not our primary application of interest. In the same way that GPT is a generative model of text, but is only used conditioned on prompts in practice, we are primarily interested in using our model as a conditional generative model, which is permitted by the structure of the model. The conditional generative capabilities enable both scoring of variants for fitness prediction and efficient generation of novel sequences via sampling. We have expanded our evaluation of the generative capabilities of the model (see global response), and will update the manuscript to include these in the main text.
We have also explored unconditional generation of whole families with PoET, where we find that sampled families produce reasonable looking multiple sequence alignments, when aligned. We have also found that, interestingly, despite the lack of homology of these families to any natural proteins, conditioning AlphaFold2 on these multiple sequence alignments leads to plausible-looking structure predictions with higher pLDDTs than are given to structure predictions using a single sequence from the sampled family alone. Unfortunately, we do not have enough space in the review supplement to include these results, and we don’t want to read too much into what this means, aside from the fact that PoET generates protein families that follow reasonable family-level statistical constraints. We will add some discussion of this to the manuscript.
- Considering the significance...
We will release code and the trained models with the camera ready version of the manuscript.
- The author proposes...
This is true, only the individual decoder layers are invariant to the order of the sequences (discussed at lines 131-132); the full model, composed of multiple such layers, is not. When considering variant fitness prediction, we only ever consider the conditional likelihood of the variant given the homologues, p(v | s1 … sn). In practice, we do not find ordering of the sequences to be an issue when considering the full joint likelihood p(s1 … sn), because the model is trained on random orderings of sequences, which requires it to learn to generate sequences given any ordering of the prior sequences. We will emphasize this fact in Section 3.2 for the revised manuscript.
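The random-ordering trick described here can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's code; the function name, token budget, and stop-token accounting are assumptions. The key point is that the homolog order is reshuffled every time an example is built, so the model must learn to generate each sequence given any ordering of the prior sequences.

```python
import random

def make_training_example(family_sequences, max_context_tokens=8192):
    """Build one sequence-of-sequences training example (hypothetical sketch).

    Shuffling the homolog order on every call means the model sees each
    family under many different orderings during training, which is what
    makes the joint likelihood approximately order-invariant in practice.
    """
    seqs = list(family_sequences)
    random.shuffle(seqs)              # a fresh ordering for every example
    example, n_tokens = [], 0
    for s in seqs:
        if n_tokens + len(s) + 1 > max_context_tokens:
            break                     # stay within the context budget
        example.append(s)
        n_tokens += len(s) + 1        # +1 for an assumed per-sequence stop token
    return example
```

Because only the ordering changes, the same set of sequences contributes many distinct training examples.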
- A trivial baseline for generating ...
ProGen is not designed to condition on sequences or to generate multiple sequences. It only conditions on control tags, which are generally annotations from Uniprot. Please see Review Supplement Table 1 in the global response to see how ProGen performs in terms of perplexity when conditioning on a parent sequence. We have also compared PoET with a baseline model that simply autoregressively generates the whole sequence-of-sequences in (Section 5.2.2 Figure 4, Appendix K Figure 10), which shows that general language models do not perform well at this task as they are unable to generalize beyond their training context length. In contrast, our specialized transformer layer allows PoET to generalize well to much longer context lengths.
- The organization of the paper...
We will revise the manuscript to better emphasize the variant effect prediction task. Also, please see the global response for additional analysis of the generative capabilities of PoET.
- In Sec. 5, it ...
We will revise the manuscript as suggested.
- In Table 1, the performance of TranceptEVE M on Indels is shown to be lower than that of TranceptEVE L. Could you provide an explanation for this observation?
These results are reported directly from the TranceptEVE paper. We also aren’t sure why TranceptEVE M performs worse than TranceptEVE L on indels, but it could be related to the mis-specification of the variant effect prediction task, or some other source we aren’t aware of. The number of datasets with indels is also significantly smaller than the substitution-only DMS datasets, which could contribute (the difference between M and L is not statistically significant, see lines 256-257).
- Line 154 in Sec.3.1.2: ...
We will revise the manuscript accordingly.
- The authors should...
We believe an explicit “Limitations” section is optional at NeurIPS this year and sought to discuss limitations throughout the manuscript where appropriate. We will collate these limitations and others raised in this discussion in a clear “Limitations” section in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's detailed response! The additional experiments performed during rebuttal clearly add value to the paper and my concerns are addressed by the authors' response. Therefore, I decide to raise my score to 7. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all of you again for your positive comments and constructive suggestions and questions. Here, we present additional analyses addressing some common themes in the reviews. This discussion is intended to be viewed with the figures in the attached review supplement PDF. We will include these results in the revised manuscript.
Additional analysis shows that PoET generates high quality, high diversity sequences.
- We have performed additional analysis of the sequences generated in the chorismate example and include an additional example on lysozyme. We have calculated the maximum sequence identity between our generated sequences and any natural protein and also folded the generated sequences using AlphaFold2 to show predicted structural conservation and pLDDT. PoET generates high diversity sequences that are predicted to be structurally similar, supporting the quality of sequences generated by PoET (Figure 1a for chorismate, Figure 1b for lysozyme).
- We have added an additional analysis of lysozyme sequences generated by PoET, including a comparison with lysozyme sequences generated by the fine-tuned ProGen model from Madani et al [0]. PoET generates higher diversity sequences than ProGen and these sequences are predicted to be more structurally conserved than sequences sampled from ProGen at similar levels of diversity. These structures are also predicted with high confidence by AF2, having high pLDDTs (>90% for almost all PoET-generated sequences).
PoET was trained and evaluated on indel rich protein sequence clusters, has superior perplexities for these heldout sequences, and generates plausible indels in our chorismate mutase example.
- The uniref50 clusters in the validation set for perplexity evaluation are highly indel rich. An alignment for an exemplar cluster shows that there are large insertions and high diversity between sequences (Figure 2a). A histogram of the columns by percent gap shows that highly gappy columns are the most common (Figure 2b, left). Across all of the validation clusters (Figure 2b, right), this trend remains true, with alignments becoming more indel rich when more sequences (longer context lengths) are considered.
- PoET achieves low perplexities on these heldout sequences, showing that it is a good generative model of these indel rich families. It outperforms our baseline transformer, ProGen, and profileHMM, achieving better perplexities at each number of provided homologues (see Tables below and Figures 4 and 10 in the manuscript).
- In our chorismate mutase generative example, PoET generates sequences with novel indels and low sequence identity to natural chorismate mutases that are predicted to fold into the conserved chorismate structure with high pLDDT according to AlphaFold2 (Figure 3).
PoET achieves low perplexities on heldout Uniref50 clusters, outperforming ProGen and profileHMMs, even when conditioning on no or a very small number of homologs.
- We also evaluate heldout perplexity of ProGen continuing from a prompt sequence (as suggested by Reviewer jcwZ), where we find that PoET achieves better perplexity than ProGen without any conditioning, and when conditioned on the cluster seed sequences (Review Supplement Table 1). We note that our holdout set and ProGen’s holdout set are not the same, so sequences from our heldout Uniref50 clusters may occur in the ProGen training set and PoET still outperforms ProGen in this analysis.
| Model | # Sequences Conditioned On | Mean Perplexity | Std. Dev |
|--------|----------------------------|-----------------|----------|
| ProGen | 0 | 15.591333 | 5.524855 |
| ProGen | 1 | 14.788089 | 4.022846 |
| PoET | 0 | 14.545277 | 2.941460 |
| PoET | 1 | 10.907637 | 2.817105 |
Review Supplement Table 1: Perplexity evaluation on our heldout Uniref50 clusters. ProGen and PoET are either conditioned on no homologs (0 sequences conditioned on) or the seed sequence from the cluster (1 sequence conditioned on) and perplexities are calculated on the remaining sequences. This evaluation only includes sequences with length <512, due to length limits for ProGen.
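The perplexities reported in this table follow the standard per-token definition, which can be sketched in a few lines (assuming the summed negative log-likelihood in nats has already been computed by the model):

```python
import math

def perplexity(total_nll_nats, n_tokens):
    """Per-token perplexity from a summed negative log-likelihood (in nats).

    Lower is better: conditioning on homologs lowers the NLL a model
    assigns to heldout family members, which shows up directly as a
    lower perplexity in tables like the one above.
    """
    return math.exp(total_nll_nats / n_tokens)
```

For example, an average per-token NLL of ln(10) corresponds to a perplexity of exactly 10, i.e., the model is on average as uncertain as a uniform choice over 10 tokens.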
- PoET generalizes well from a very small number of homologs. Conditioning on only 5 homologs, PoET already achieves a perplexity of 7.9 on unseen family members, in contrast to 17.0 for the PSSM and 13.7 for the profileHMM (Review Supplement Table 2). This demonstrates that PoET is a significantly better generative modeling option for families with small numbers of homologs and generalizes remarkably well from small numbers of sequences.
| Model | # Sequences Conditioned On | Perplexity |
|-------|----------------------------|------------|
| PSSM | 1 | 18.541033 |
| PSSM | 5 | 16.982748 |
| PSSM | 10 | 15.912713 |
| HMM | 1 | 20.137338 |
| HMM | 5 | 13.716971 |
| HMM | 10 | 11.888966 |
| PoET | 1 | 10.173127 |
| PoET | 5 | 7.922363 |
| PoET | 10 | 7.252593 |
Review Supplement Table 2: Median perplexities of sequences from heldout Uniref50 clusters after conditioning on 1, 5, or 10 cluster members with a PSSM, HMM, or PoET. All sequences from all heldout clusters are included in this analysis.
We will incorporate these additional analyses into the revised manuscript to better support the generative modeling capabilities of PoET.
[0] Madani, A., Krause, B., Greene, E.R. et al. Large language models generate functional protein sequences across diverse families. Nat Biotechnol (2023). https://doi.org/10.1038/s41587-022-01618-2
Pdf: /pdf/d059ec0d693aee389edb367eaa5febd2318177ce.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: - This paper introduces PoET, a novel autoregressive transformer architecture that learns a distribution over protein families.
- More specifically, PoET takes as input a concatenated set of homologous sequences, for example retrieved with a MSA. The model architecture consists of a stack of several TieredTransformerDecoderLayers (the main innovation from a modeling standpoint), where each layer applies successively: 1) causal self-attention within each sequence 2) causal self-attention across sequences in the homologous set 3) a standard feedforward NN.
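- The two causal attention patterns in such a tiered layer can be illustrated as boolean masks. This is a hypothetical sketch of the pattern described above (True = attention allowed), not the paper's implementation; within-sequence attention is causal and block-diagonal, while between-sequence attention is causal over the full concatenation:

```python
import numpy as np

def tiered_masks(n_seqs, seq_len):
    """Attention masks for a tiered decoder layer (illustrative sketch).

    Step (1) of the layer uses `within`: each token attends only to
    earlier tokens of its own sequence. Step (2) uses `between`: each
    token attends to all earlier tokens in the concatenated
    sequence-of-sequences.
    """
    total = n_seqs * seq_len
    causal = np.tril(np.ones((total, total), dtype=bool))
    block = np.zeros((total, total), dtype=bool)
    for i in range(n_seqs):
        s = i * seq_len
        block[s:s + seq_len, s:s + seq_len] = True
    within = causal & block   # causal AND same-sequence
    between = causal          # causal over the whole concatenation
    return within, between
```

The within mask is by construction a subset of the between mask, so step (2) is what lets information flow across homologs.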
- From a practical standpoint, the model can be used at inference both to generate new proteins or to assess the effects of mutations on protein fitness
- The former (new sequence generation) is partially covered (one interesting yet limited example in appendix for chorismate mutase)
- The latter (fitness prediction performance) is the focus of experiments conducted in the paper. PoET achieves remarkable performance on the ProteinGym benchmarks using a 200M-param model
Strengths: - The performance of the architecture on the zero-shot fitness prediction tasks from ProteinGym is quite remarkable for a model of this size
- The model architecture is simple and effective
- The different ablations to curate the optimal prompt engineering strategy are thorough and packed with many interesting insights
Weaknesses: - The sequence generation abilities of the model are not really explored (besides one example for 1 protein family in appendix)
- This claim at the end of the introduction (lines 69-70) appears to be wrong: "improves prediction of variant effect in sequences with large numbers of mutations." Performance on deep mutants (5+ in Table S2) is lower than baselines (Tranception & TranceptEVE). Would suggest removing this sentence and adjust the second to last sentence of conclusion, as well as fixing the bolding in the appendix table.
- Some other claims are not properly substantiated. For example: "One advantage of PoET is that is able to not only score indels, but also generate sequences with indels" (lines 257-258). There does not seem any evidence in the paper that the quality of such "generated sequences with indels" would be any good, especially given the specifics of the position encoding chosen which relies on the assumption that "amino acids at similar absolute positions in homologous proteins are more likely to be drawn from the same distribution" (168-169). See questions for other claims where evidence seemed light.
- The performance seems to plateau as the number of parameters grow (the ~600M-param model performs on par with the ~200M-param version) and in terms of context length (12k better than 24k at inference) suggesting this architecture has perhaps already reached its limits
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - “Deep mutational scans and directed evolution experiments have been used to successfully design novel proteins [1, 2], but can be costly and difficult to implement, which makes these experimental methods inapplicable for many proteins and functions of interest.” -- please elaborate on which proteins / functions DMS and directed evolutions are "inapplicable" (reference paper would be great to add here).
- “PoET is a fully autoregressive generative model, able to generate and score novel indels in addition to substitutions, and does not depend on MSAs of the input family, removing problems caused by long insertions, gappy regions, and alignment errors.” -- isn't your method subject to these errors as well since you retrieve homologous sets at inference via a MSA?
- Any particular reason for using Diamond at training and then colabfold or jackhmmer at inference? Have you tried training on homologous sets retrieved with MSAs? (would make inference closer to training)
- Given your position encoding (which requires coordinate systems to be ~ equal across homologous sequences), isn't your method also not adapted to scoring sequences with long insertions/deletions?
- What are the limitations of your method? It seems very compelling for the majority of settings, except perhaps: 1) if one wants fast inference for good performance, GEMME is a better option 2) if one cares about deep mutants, Tranception / TranceptEVE (or ensembled with PoET) seem better 3) if one cares about disordered proteins / proteins with very few homologs (ie. less than 10), then it is not clear whether your model would handle well (since sets with fewer than 10 homologs were removed for training as per line 181). Would you agree with these limitations? How does the performance vary as the number of homologs goes to zero (note this would be a substantially different setting vs proteins in the "low MSA depth" bucket from ProteinGym)?
- Figure S1/S2: there is a reference to Prots2prot? Please also provide the performance at the DMS level for PoET alone
- Given the last point in the "weaknesses" box above, which avenue(s) do you see to further improve the performance of your model?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Not particularly discussed, would suggest adding a couple sentences about it, as per the questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and suggestions. In the manuscript, we strove to provide a comprehensive analysis of the prompt engineering aspects, which we hope will be relevant for using PoET and also for future work in retrieval-/homologue-augmented protein ML models. We’re glad the reviewer found this part interesting. We address the reviewer’s questions and concerns below. We will also expand the discussion of limitations to include points raised in this discussion.
On PoET as a generative model, especially relating to the reviewer’s concerns about the quality of sampled indels: we focused our analysis on the variant effect prediction problem, because we view it as a good surrogate for generation in the sense that assigning high likelihood to high functioning variants means that samples or other decodings from the model, which are high likelihood, are also likely to be functional. In addition to our chorismate mutase example in the appendix, we have expanded our analysis of indel generation and will show some additional sampling examples in the revised manuscript. Please see our global response for more details. As suggested by other reviewers, we will also revise the introduction and other sections to better emphasize the variant effect prediction task.
- This claim at the end of the introduction...
We will revise the manuscript as suggested.
- The performance seems ...
This was one of the most surprising findings to us in this work. We also expected larger models and longer context lengths to perform better. We suspect this is related to the fundamental mis-specification of the variant effect prediction problem as density modeling, as recently discussed by [1] and in our Appendix (lines 399-412). We think this is also supported by the fact that holdout perplexity does improve at longer contexts lengths, but variant effect prediction does not.
- “Deep mutational scans ...
Fundamentally, deep mutational scanning can only be applied in settings where a high throughput functional assay is available, which usually means that variants can be produced in a single pot and the function of those variants can be read out via a selection and sequencing assay. Low throughput, expensive, and time consuming assays are not amenable to deep mutational scanning due to cost and time limitations. These references [2,3] discuss some of the challenges and considerations for developing functional assays. We will add these references to the text.
- “PoET is a...
We do not retain the alignment, passing only the unaligned sequences into PoET as the prompt. The challenge with operating on the MSA is not retrieval, but rather that large gaps disrupt the continuity of the sequences as viewed by axial transformer style models. We also do not assume a specific correct alignment.
- Any particular reason...
We used Diamond because it was the only tool available to perform the all versus all homology search in a reasonable amount of time. It is substantially faster than alternative homology search programs while attaining highly competitive performance.
- Given your position...
The relative positional encodings only provide a prior on the alignment. The model is able to, and does, learn how to properly correspond sequence elements within the sequences. In fact, in newer experiments, we removed the between-sequence relative positional encoding altogether and the model performs nearly identically, suggesting that this does not hinder model performance on long sequences and indels. We also note that our validation uniref50 clusters are extremely gappy and have high variability in sequence length within each cluster (see global response). Thus it directly measures our ability to model such indels.
- What are the...
We examine the generative performance of PoET, in terms of perplexity on heldout sequence clusters, using different numbers of homologous sequences, where we see good performance. In fact, PoET performs dramatically better than profileHMMs or other viable protein sequence models in this very few homologs regime (See Appendix Figure 10 for evaluation in terms of number of tokens in MSA and Review Supplement Table 2 in the global response for evaluation in terms of number of sequences), so we view this as a major advantage of PoET. Given our current understanding, we generally agree with your other comments. Clearly, GEMME is much faster than any neural network-based model, but we don’t think this is a significant limitation, especially given that we can efficiently sample from PoET as an alternative. Any kind of exhaustive evaluation of high order variants is likely to be intractable regardless of method chosen simply due to the huge size of those spaces.
- Figure S1/S2:...
Prots2prot was an old working name for PoET. We will correct this in the manuscript.
- Given the last...
Given that we think this is a problem related to misspecification of the unsupervised variant effect prediction problem, as well as the challenge in directly linking DMS fitness measurements with specific protein properties (often, many factors contribute to fitness and it may not be clearly associated with one property), this is a challenging question to answer. We think the ability to include additional conditional information in the prompt, such as structures, or specific property specifications could help, and would provide additional mechanisms for controlling the generative distribution and sequence generation.
[1] Weinstein, Eli, et al. "Non-identifiability and the Blessings of Misspecification in Models of Molecular Fitness."
[2] Fowler, D., Fields, S. Deep mutational scanning: a new style of protein science. Nat Methods 11, 801–807 (2014). https://doi.org/10.1038/nmeth.3027
[3] Tiefenauer, L., & Demarche, S. (2012). Challenges in the Development of Functional Assays of Membrane Proteins. Materials, 5(11), 2205–2242. https://doi.org/10.3390/ma5112205
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Dear authors,
Thank you for the detailed responses and additional analyses provided during rebuttal. One point that was not addressed in your response is about the DMS-level performance of PoET alone (second-to-last question), ie., adding PoET alone to Fig S1/S2. This figure is quite insightful -- for instance, we see that ESM1v is performing fairly well across the board, but tanks on a handful of assays, bringing the overall average Spearman down. Since PoET is not shown there, I was thus curious about the variance of the performance across assays, relative to other baselines (eg., ESM1v being "high variance" and TranceptEVE "low variance"). Based on ablations in Tables S1-S3, performance seems relatively stable across settings, but perhaps you noticed something insightful at a more granular level?
---
Reply to Comment 1.1.1:
Title: Variance across DMS assays
Comment: We thank the reviewer for the clarification. We will add PoET alone to Fig S1/S2 in the revised manuscript.
On a per dataset level, PoET performs very similarly to the ensemble of PoET + TranceptEVE L, with similar variance and slightly worse performance across the datasets. We did not notice any particular trends in this difference. Since we can't share images at this point of the discussion, we provide the table below which shows the 5th, 25th, 50th, 75th, and 95th percentiles across the substitution only DMS datasets for the models in Fig S1 and PoET alone. The statistics reflect the observation that ESM1v performance has higher variance; the interquartile range for ESM1v is higher than that of other models and it has markedly worse performance in the lower quantiles.
| | 5th Percentile | 25th Percentile | 50th Percentile | 75th Percentile | 95th Percentile |
|:---------------------|---------------:|----------------:|----------------:|----------------:|----------------:|
| MSA Transformer | 0.10825 | 0.34406 | 0.43761 | 0.51985 | 0.65331 |
| ESM1v | 0.01776 | 0.26636 | 0.45941 | 0.54331 | 0.65768 |
| TranceptEVE L | 0.18136 | 0.40970 | 0.48699 | 0.54856 | 0.67994 |
| PoET | 0.18757 | 0.40236 | 0.50819 | 0.58754 | 0.69969 |
| PoET + TranceptEVE L | 0.18879 | 0.42292 | 0.51518 | 0.59493 | 0.70788 | | null | null | null | null | null | null |
De novo Drug Design using Reinforcement Learning with Multiple GPT Agents | Accept (poster) | Summary: This paper proposes a method named MolRL-MGPT for drug molecular generation.
Concretely, GPT-based agents are used to iteratively generate candidate compounds, and a special reward signal is adopted to encourage agents to explore in diverse directions.
The experiments on GuacaMol benchmark show the superiority of the method.
Strengths: 1. An effective method for De novo Drug Design is proposed.
2. A special reward signal is proposed to promote the diversity of the agents.
Weaknesses: It is not appropriate to call the proposed method an “RL-based” method, though the score function can be treated as the “reward”. RL follows a Markov decision process and aims to maximize the accumulated return, but it seems that the goal in this paper is to maximize the final score. Meanwhile, there is no RL objective function in this paper. (I suggest changing the RL-related statements; this will not affect my rating.)
1. Eq.3 aims to make each agent achieve a fixed improvement, which neglects the varying difficulty of different generation timesteps.
2. Eq.4 forces the different agents to obtain different scores, but obtaining different scores is not equivalent to exploring in different directions.
3. It seems that the agents are treated as independent units without sharing their experience. Since the different agents cooperate with each other toward a common goal, different kinds of correlations between them need to be considered.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Though there are some transformer-based models for chemical language, there still exists a gap between molecular structure and natural language. How will the model perform if the GPT is replaced with a graph network?
2. How to determine the suitable hyperparameters in real-world applications?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable review comments! We address your main concerns below:
**Q1 (in *weaknesses*)**: It is not appropriate to call the proposed method an “RL-based” method even though the score function can be treated as the “reward”. RL follows a Markov decision process and aims to maximize the accumulated return, but it seems that the goal in this paper is to maximize the final score. Meanwhile, there is no RL objective function in this paper.
**A1**: I concur with your opinion that our algorithm diverges somewhat from the conventional understanding of "reinforcement learning". However, we employ this terminology primarily because, in the context of molecular design for computer-aided drug discovery, "reinforcement learning" serves as the widely recognized term for denoting a category of multi-step generation algorithms featuring reward functions (https://www.sciencedirect.com/science/article/pii/S0928098722002093), and our algorithm falls within the scope of this methodological category. Moreover, in section 2 (related works), we allocate a paragraph to provide an overview of "RL-based drug design algorithms", which contains various previous works in this category.
**Q2 (in *weaknesses*)**: 1. Eq.3 aims to make each agent achieve a fixed improvement, which neglects the difficulty of different generation timesteps. 2. Eq.4 forces the different agents to obtain different scores, but different scores are not equivalent to different exploration directions. 3. It seems that the agents are treated as independent units without sharing their experience. Since the agents cooperate with each other toward a common goal, different kinds of correlation between them need to be considered.
**A2**: There seem to be some misunderstandings about our algorithm design, so we will clarify certain aspects of it in further detail.
1. Eq.3 is the loss function of the first agent in the RL process. Its purpose is to encourage the agent to learn the characteristics of high-scoring molecules as far as possible without deviating too much from the prior parameter. At each RL step, each term of the loss function is recalculated, so the improvement of the agent is not fixed. In addition, during the RL process, it does tend to be progressively more challenging for the agent to improve the molecular score, but this does not contradict the objective of the loss function to improve the molecular score.
2. Eq.4 is not intended to force different agents to generate molecules with different scores, but rather to motivate all agents to comprehend the characteristics of high-scoring molecules while simultaneously exploring the chemical space in diverse directions. We accomplish the second objective by rewarding the difference in the probability of generating the same SMILES between different agents, which mirrors the idea in Eq.3 of guaranteeing the effectiveness of agents by penalizing the difference in the probability of generating the same SMILES between each agent and the prior model.
3. Indeed, we do not treat agents as independent units in our algorithm. This is evident in Eq.4 where we explicitly reward the differences between agents. You also mention some ideas in the design of cooperative multi-agent reinforcement learning algorithms, but they might not be directly applicable to the specific framework of our algorithm design.
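To make the loss structure described in points 1 and 2 concrete, here is a minimal Python sketch of a per-molecule loss for agent $k$. This is an illustrative reconstruction, not the paper's exact Eqs. (3)–(4): the function name, the default $\sigma$ weights, and the use of log-likelihoods in the difference term are all assumptions.

```python
def agent_loss(log_p_agent, log_p_prior, score, log_p_prev_agents,
               sigma1=60.0, sigma2=10.0):
    """Illustrative per-SMILES loss for agent k (hedged sketch).

    log_p_agent       -- log-likelihood of the sampled SMILES under agent k
    log_p_prior       -- log-likelihood under the frozen pre-trained prior
    score             -- oracle score s(x)
    log_p_prev_agents -- log-likelihoods under agents 1..k-1 (empty for k=1)
    """
    # Eq. 3 style: pull the agent toward high-scoring molecules while
    # staying close to the prior (augmented-likelihood squared error,
    # as in Reinvent).
    base = (log_p_prior + sigma1 * score - log_p_agent) ** 2
    # Eq. 4 style: reward disagreement with the preceding agents only,
    # weighted by the score.
    diversity_bonus = sum(abs(log_p_agent - lp) for lp in log_p_prev_agents)
    return base - sigma2 * score * diversity_bonus
```

For the first agent the bonus term vanishes (the list of preceding agents is empty), so the loss reduces to the base term alone, consistent with Eq. (3) being a special case of Eq. (4).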
**Q3 (in *questions*)**: Though there are some transformer-based models for chemical language, there still exists a gap between molecular structure and natural language. How will the model perform when the GPT is replaced with a graph network?
**A3**: This question poses an important inquiry into the effectiveness of distinct molecular representations. Notably, the Simplified Molecular Input Line Entry System (SMILES) string we employ encapsulates identical structural information as the two-dimensional molecular graph. Hence, the potential of SMILES-based molecular design algorithms parallels that of their 2D molecular graph counterparts.
Furthermore, recent research has demonstrated the superior performance of SMILES-based reinforcement learning algorithms compared to graph-based models in molecular design tasks (https://openreview.net/forum?id=yCZRdI0Y7G). This outcome underscores our confidence that our algorithm using GPT to generate SMILES strings competes robustly with graph-based approaches.
It is worth acknowledging that both SMILES strings and molecular graphs inherently sacrifice spatial information in comparison to actual three-dimensional molecules. Nonetheless, the existing algorithms for generating *de novo* molecules in 3D have not reached a stage where they can supplant the effectiveness of string-based and graph-based methods. I hold the view that should a significantly advanced 3D generation model emerge, its integration to replace the GPT agent in our algorithm would hold substantial promise as a future direction.
**Q4 (in *questions*)**: How to determine the suitable hyperparameters in real-world applications?
**A4**: In the RL process of MolRL-MGPT, the pivotal hyper-parameters include the number of agents, the number of RL steps, the learning rate, $\sigma_1$, $\sigma_2$, and the number of samples for experience replay. Among them, our experiments in section 4.3 suggest that 4 is the best number of agents. Regarding the number of RL steps, it's advisable to extend this count as far as practicable, until the agents show no significant progress for many steps. Other hyper-parameters do need to be tuned specifically for each design task (by monitoring the changing curves of the agents throughout the RL process), where the learning rate and $\sigma_2$ are particularly sensitive. For our experiments, the setting of hyper-parameters can be found in the paper and code.
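The stopping heuristic above ("extend the step count until the agents show no significant progress for many steps") can be sketched as a simple patience loop. All names and thresholds here are illustrative assumptions, not the paper's implementation:

```python
def run_until_plateau(rl_step, patience=200, max_steps=5000, tol=1e-3):
    """Run RL steps until the best oracle score stops improving.

    rl_step(t) performs one RL update and returns the current best score;
    training stops after `patience` consecutive steps with no improvement
    larger than `tol`, or after `max_steps` steps.
    """
    best, stale = float("-inf"), 0
    for t in range(max_steps):
        score = rl_step(t)
        if score > best + tol:
            best, stale = score, 0  # progress: reset the patience counter
        else:
            stale += 1
        if stale >= patience:
            break
    return best
```

In practice one would monitor the score curve rather than rely on a single scalar, but the patience pattern captures the "no significant progress for many steps" criterion.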
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply, but some of the concerns still exist.
1. The first two terms of Eq.3 regularize the parameter deviation. However, the third term does not seem able to "encourage the agent to learn the characteristics of high-scoring molecules as far as possible", since the score function may be inaccurate and the first two terms are also unstable, especially at the beginning of training. Therefore, I doubt the effectiveness of Eq. 3. It looks strange and lacks theoretical guarantees. Please refer to PPO [1] for a possible way.
2. From the rebuttal, the first term of Eq. 4 "motivates all agents to comprehend the characteristics of high-scoring molecules". However, it is not related to exploring "the chemical space in diverse directions", and I cannot understand where the "diversity" comes from. Exploration in RL means choosing a non-optimal action to prevent the model from falling into sub-optimal results, and the "exploration" in Eq. 4 seems unable to try non-optimal actions and only encourages high scores. We have no evidence that different molecules with "multiple high scores" are equivalent to "diversity", since they may be similar.
3. "Explicitly rewarding the differences between agents" is not the same as building relationships among agents, since the rewards contain very little information.
4. Since large language models (LLMs) are trained on natural language, the effectiveness of LLMs on molecules still needs further evidence.
[1] Proximal Policy Optimization Algorithms
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! Our answers to your new questions are as follows:
> The first two terms of Eq.3 regularize the parameter deviation. However, the third term does not seem able to "encourage the agent to learn the characteristics of high-scoring molecules as far as possible", since the score function may be inaccurate and the first two terms are also unstable, especially at the beginning of training. Therefore, I doubt the effectiveness of Eq. 3. It looks strange and lacks theoretical guarantees. Please refer to PPO [1] for a possible way.
Our design of Eq.3 is similar to the RL loss function of Reinvent (https://jcheminf.biomedcentral.com/articles/10.1186/s13321-017-0235-x) and deliberately differs from the original PPO. It is true that the scoring function may be inaccurate, but the RL training is typically stable as long as well-trained models and reasonable hyper-parameters are used. You can verify the validity of our design by running our code.
> From the rebuttal, the first term of Eq. 4 "motivates all agents to comprehend the characteristics of high-scoring molecules". However, it is not related to exploring "the chemical space in diverse directions", and I cannot understand where the "diversity" comes from. Exploration in RL means choosing a non-optimal action to prevent the model from falling into sub-optimal results, and the "exploration" in Eq. 4 seems unable to try non-optimal actions and only encourages high scores. We have no evidence that different molecules with "multiple high scores" are equivalent to "diversity", since they may be similar.
The "diversity" mainly comes from the 2nd term of Eq.4, which encourages agents to search for diverse molecules. Moreover, to be clear, our algorithm does not use the exploration-exploitation paradigm, and by "exploration" we mean "searching".
> "Explicitly reward the differences between agents" is not equal to build the relationships among agents since the rewards only contain very little information.
You are right, but our objective is not to make agents share their full experience, just to avoid them falling into the same local optima. Our design is sufficient for this purpose.
> Since large language models (LLMs) are trained on natural language, the effectiveness of LLMs on molecules still needs further evidence.
Our agents use the GPT architecture, but they are pre-trained on a dataset of chemical molecules (SMILES strings), not natural language. In addition, we have presented previous related works in the original paper (line 92), including ChemFormer (https://iopscience.iop.org/article/10.1088/2632-2153/ac3ffb) and MolGPT (https://pubs.acs.org/doi/10.1021/acs.jcim.1c00600), which demonstrate that LLMs are also effective for chemical language. Our work aims to further explore the potential of LLMs for chemical language. | Summary: This paper creates a multi-agent reinforcement learning approach to promote diversity in the search space of small molecules during molecular optimization. Because of the complicated nature of early-stage drug discovery research, diversity is useful during early work in the identification and validation of small molecules; however, previous RL methods did not manage to address diversity in a satisfactory way. The algorithm is rather simple and employs pre-trained GPT agents that speak the language of molecular SMILES. Importantly, the algorithm manages to outperform previous state-of-the-art methods on a number of public benchmarks.
Strengths: The use of multiple agents in this small molecule optimization setting is original and proves to be very useful. The explicit diversification by equation (4) ends up creating a stronger model than previous attempts that used RL or other deep learning methodologies, and importantly, it outperforms (even if only barely) the graph Genetic Algorithm, a significantly simpler algorithm that puts other methods to shame in several benchmarks. The paper demonstrates the increase in diversity of the designs with the increasing number of agents. The paper is written in a clear fashion, and the main result is significant. The authors provide code.
Weaknesses: A minor weakness of this method is that the performance of the code is somewhat slow, which is somewhat understandable given the slow individual query of the GPT models and the challenges of the additional RL-related operations. I agree with the overall attitude of the authors that this particular concern is not very important. The performance of the graphGA algorithm is nearly identical to that of the new method, despite its extreme simplicity and efficiency. On the other hand the lack of pre-training is a limitation of graphGA, at least in principle.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are the radius and the bit length of the features in Eq. (2)? Is chirality included?
This may be a limitation of the notation used in the paper and perhaps the authors already tried to do the correct thing in the code (I didn't check), but it appears that equation (4) does not use a standardized SMILES representation of the molecule, so it is possible that the agents will try to learn to produce different textual representations of the same molecule (i.e. each agent could canonicalize in a different way). Of course the magnitude of such an effect probably depends on the details of the training protocol and on the amount of augmentation with randomized SMILES in the pre-training protocol. Would it be possible to instead use the canonical and certain randomly sampled representations of the same molecule as the inputs x to equation (4)? Could the authors test their final agents for their preference towards different textual representations of the same molecules?
The statement at the end of page 7 is very unclear: did the authors test the listed small molecules in tables 2 and 3 against the SARS CoV-2 protease and polymerase in a laboratory setting and show that they don't work? If so, it would be very useful to provide a brief description of these experiments in a supplemental material or cite a separate publication after anonymization is no longer a concern. At a minimum, they should clarify the language at the end of that section.
Regarding performance, did the authors monitor the GPU usage during training and do they achieve the maximal power draw on the GPU during the optimization experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work has no negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your approval and valuable review comments! We address your main concerns below:
**Q1 (in *weaknesses*)**: A minor weakness of this method is that the performance of the code is somewhat slow, which is somewhat understandable given the slow individual query of the GPT models and the challenges of the additional RL-related operations.
**A1**: While our algorithm does require a considerable amount of time to execute, the primary time consumption does not stem from the inference and updating of GPT agents. Instead, the bulk of the time is dedicated to the computation of oracles on CPUs. For instance, in a single docking task for a specific target, the entire reinforcement learning process demands around 100 hours when executed on a single NVIDIA A100 GPU alongside 64 CPU cores, during which more than 95% of the time is consumed by the docking process itself, even though the parallel computing of the Quick Vina software is enabled. Therefore, the issue of slow drug design isn't unique to our algorithm; other methods also struggle with low efficiency when dealing with time-intensive oracles like docking. Furthermore, as you rightly mentioned, the runtime of our algorithm remains significantly shorter when compared to the years that conventional drug discovery processes typically span.
**Q2 (in *weaknesses*)**: The performance of the GraphGA algorithm is nearly identical to that of the new method, despite its extreme simplicity and efficiency. On the other hand, the lack of pre-training is a limitation of GraphGA, at least in principle.
**A2**: Initially, it's important to acknowledge that although GraphGA does not leverage the deep learning techniques that have gained prominence in recent times, it has been proven to be a competitive approach for molecular design. It ranks second in a comprehensive benchmark test, trailing only behind another SMILES-based reinforcement learning method (https://openreview.net/forum?id=yCZRdI0Y7G). Nevertheless, as you mentioned, this approach may have approached the zenith of performance attainable through non-deep methods. This implies that for intricate molecular generation tasks, such as docking, GraphGA's performance is likely to be suboptimal, because it cannot effectively utilize the chemical information embedded within molecular datasets through the pre-training paradigm.
**Q3 (in *questions*)**: What is the radius and the bitlength in the features in Eq(2). Is chirality included?
**A3**: For the implementation of internal diversity, we adopt the function in the widely-used Therapeutics Data Commons (TDC) package (https://github.com/mims-harvard/TDC/blob/6af2a41679a0699446ad627be8051504548e86fa/tdc/chem_utils/evaluator.py#L99C31-L99C31). Specifically, in the Morgan fingerprints (ECFPs) the radius is 2, the bitlength is 2048, and chirality is not included.
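As a sanity check on what the internal diversity in Eq. (2) computes, here is a dependency-free sketch: the mean pairwise $(1 - \text{Tanimoto})$ over all molecule pairs, with fingerprints represented as sets of on-bits. (The TDC implementation cited above uses RDKit Morgan fingerprints with radius 2 and 2048 bits; the set representation here is an illustrative stand-in.)

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    if not a and not b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(a & b) / len(a | b)

def internal_diversity(fingerprints):
    """Mean pairwise (1 - Tanimoto) over all distinct molecule pairs."""
    pairs = list(combinations(fingerprints, 2))
    return sum(1.0 - tanimoto(a, b) for a, b in pairs) / len(pairs)
```

Identical fingerprints give a diversity of 0, fully disjoint ones give 1, so the metric rewards sets of structurally dissimilar molecules.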
**Q4 (in *questions*)**: It appears that equation (4) does not use a standardized SMILES representation of the molecule, so it is possible that the agents will try to learn to produce different textual representations of the same molecule. Would it be possible to instead use the canonical and certain randomly sampled representations of the same molecule as the inputs x to equation (4)? Could the authors test their final agents for their preference towards different textual representations of the same molecules?
**A4**: We refrain from employing the canonicalized SMILES representation when computing the generation probability in equations (3) and (4). This choice arises from the fact that GPT agents inherently grasp the grammar of SMILES strings rather than focusing on molecular structures, so the probability of an equivalent but not originally generated SMILES string lacks a direct meaning in the reinforcement learning optimization. As a result, it is indeed possible for the agents to produce different SMILES of the same molecule.
Nevertheless, when computing property scores, the variability in SMILES representation does not influence the outcome, since oracles take as input either the molecule items in RDKit or molecular fingerprints, both of which uniquely correspond to the molecular structure. Furthermore, in the process of updating the molecular memory, canonicalized SMILES are employed. This ensures that a given molecule does not exist in the memory in multiple forms, preventing duplicate entries.
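A minimal sketch of the canonicalization-based memory update just described. The `_CANONICAL` table is a hand-written stand-in for RDKit canonicalization (`Chem.MolToSmiles(Chem.MolFromSmiles(s))`) so the example runs without RDKit, and the tie-breaking rule (keep the higher-scoring entry) is an illustrative assumption:

```python
# Hand-written stand-in for RDKit canonicalization; e.g. both SMILES of
# benzene below map to the same canonical key, as do both SMILES of ethanol.
_CANONICAL = {"C1=CC=CC=C1": "c1ccccc1", "c1ccccc1": "c1ccccc1",
              "OCC": "CCO", "CCO": "CCO"}

def update_memory(memory, smiles, score):
    """Insert a generated molecule, keyed by canonical SMILES so the same
    molecule never appears in the memory in multiple textual forms."""
    key = _CANONICAL.get(smiles, smiles)
    if key not in memory or score > memory[key][1]:
        memory[key] = (smiles, score)
```

Because the key is canonical, two generations of "C1=CC=CC=C1" and "c1ccccc1" occupy a single memory slot, which is the duplicate-prevention behavior described above.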
**Q5 (in *questions*)**: The statement at the end of page 7 is very unclear: did the authors test the listed small molecules in tables 2 and 3 against the SARS CoV-2 protease and polymerase in a laboratory setting and show that they don't work?
**A5**: We sincerely regret any confusion caused. We did not conduct wet-lab experiments to validate the molecular properties outlined in tables 2 and 3, due to the absence of experimental conditions. The values presented in the tables are the results of in silico oracles. The intention of our statement at the end of page 7 is to emphasize that:
As the Quantitative Estimate of Drug-Likeness (QED) is calculated through a comparison with the distribution of compounds in the existing drug database – one that lacks molecules designed for SARS-CoV-2 – it's important to note that a low QED score doesn't necessarily imply inefficacy against the SARS-CoV-2 targets. On the contrary, molecules exhibiting effectiveness might well lie beyond the boundaries of the existing drug compound distribution.
**Q6 (in *questions*)**: Regarding performance, did the authors monitor the GPU usage during training and do they achieve the maximal power draw on the GPU during the optimization experiments?
**A6**: As we explained in **A1**, the predominant portion of our algorithm's execution time is consumed by the oracles running on the CPU, despite utilizing 64 CPU cores. Consequently, the GPU operated below its maximal capacity throughout the reinforcement learning process and is mostly idle in the docking tasks.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I acknowledge having read the response by the authors. I am a little skeptical that the answer A4 is sufficient to introduce substantial diversity given the easy choice of randomizing the string representation of the best molecule. I wonder if the diversity result is severely suboptimal compared to what such a method could achieve.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply! In response to your question, the randomness of SMILES itself does not affect the molecular diversity measurements, because internal diversity (equation (2)) is calculated for molecules, not SMILES strings. Even if multiple different SMILES strings of the same molecule are generated, it will only be counted once in internal diversity. | Summary: This paper proposes a novel multi-agent reinforcement learning algorithm with agents parameterized with a pre-trained GPT architecture for de novo drug design. The authors propose a modified objective function with an intrinsic reward inspired bonus to encourage diversity between agents and also propose to use a constraint to keep the fine-tuned agents close to the pre-trained agents. The authors evaluate their algorithm on Guacamol, by generating a number of inhibitors for SARS-CoV2 targets. They also perform ablations on their method with the GSK3$\beta$ and JNK3 maximization tasks.
Strengths: Some strengths of this paper are:
- An intrinsic reward like term added to the agents' loss which encourages diversity.
- The method performs favorably to the other methods compared against on Guacamol benchmark.
- It performs comprehensive ablations on the GSK3$\beta$ and JNK3 maximization tasks
Weaknesses: My main complaint about this paper is that I am not convinced of the algorithm's superiority over rivaling methods based upon the experiments section. I feel that the paper has both missed some necessary baseline methods and that the experiments as they stand are insufficient in demonstrating the paper's main claim that its method leads to improved diversity over other methods. I also have concerns about missing related work in multi-agent RL in which there is already a body of literature on encouraging diversity among agents, as well as other competing methods which have been applied to molecular drug generation such as diffusion models and GFlowNets. I also have concerns regarding this paper's reproducibility. I will go over these concerns one by one.
## Experiments section
1. The main claim of this paper is that its approach leads to superior diversity, but I did not see any experiments _comparing_ the diversity of molecules it generates to molecules generated by other methods. Indeed, there were experiments looking at the diversity of molecules generated by their method, but it was only their method. There isn't an indication whether their main claim of improved diversity is true if there is no comparison to other methods.
2. While the generated molecules do look diverse to my non-chemist eye, I would like to see more generated molecules and critically some measurement of diversity of the generated molecules to be convinced. Also, these results were on only one seed which is insufficient. At minimum there should be three seeds, ideally quite a bit more (see [here](https://ai.googleblog.com/2021/11/rliable-towards-reliable-evaluation.html)).
3. There are missing baselines in the experiments. The authors should have compared to an existing LLM molecule generation method such as MolGPT, but this was missing from their experiments. There are other methods for encouraging diversity for molecule design with RL-inspired machinery such as GFlowNets (https://arxiv.org/abs/2106.04399) that should be compared against. Also, it would have been nice to see a comparison with one of the recent works on diffusion for molecular design (e.g., https://arxiv.org/pdf/2203.17003.pdf, https://arxiv.org/pdf/2305.01140.pdf), or at minimum a compelling reason for why not to compare to these methods.
4. In the Guacamol experiments it seems experiments were run over one seed. This is insufficient.
5. A more minor comment: it's hard to understand the significance of the Guacamol task when the tasks are ordered 1-20 without context of what the tasks actually are.
## Missing references
1. I mentioned in the last point, but it would have been nice to see some discussion of other methods such as GFlowNets or diffusion models which also try to encourage diversity in molecular design.
2. There is already a body of literature on encouraging diversity in multi-agent RL, but I did not see references to this literature. Some representative papers may be https://arxiv.org/abs/2106.02195 and https://openreview.net/forum?id=H-6iczs__Ro.
## Reproducibility
1. All experiments seem to have been run with one seed. There is no way to know if the results would hold with more seeds.
2. There is no listing of the hyperparameters used or the hyperparameter tuning methods used (or values tried if a grid search).
3. There is no (anonymized) submitted code available to verify or reproduce the authors' claims.
## Clarity issues
1. In the section explaining the loss function, the indexing used is rather confusing. E.g., the authors use a loss $L_1$ which seems fixed, then also a loss $L_k$ which seems to index the different agents (so what about when $k=1$?).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In the loss function, why does the sum over agents only go up to $k-1$? Shouldn't the second loss term be something like $\sum_{j=1}^n s(x) \left|P_k(x) - P_j(x)\right|$? Why compare to only the agents before this index as the ordering seems arbitrary.
2. Did the authors consider using a method for encouraging diversity which already exists in the multi-agent RL literature? If so, why did they not use it and did they run any comparisons?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The main and most important limitation of this work is that I do not know whether the proposed method actually is competitive with rival methods due to some missing experiments, baselines, and insufficient reproducibility.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable review comments! We address your main concerns below, and we sincerely hope that you will reconsider and upgrade your rating:
**Q1 (diversity comparison, in *experiments 1, 2*)**
**A1**: To further demonstrate the advantages in diversity of our approach, we have added several baselines to the experiments on GSK3$\beta$ and JNK3 maximization (**Table 2 in pdf**) for comparison. Additionally, we have added an experiment on QED maximization (**Table 3 in pdf**) with diversity measurements. The results of these experiments indicate that our approach is indeed effective in promoting molecular diversity in drug design.
We did not provide diversity measurements in the experiments on the GuacaMol benchmark, where each task produces a single score that comprehensively represents the performance of a given method (following the official guidelines). The advantages of our approach in terms of diversity are therefore already reflected in our higher scores, so separately reporting molecular diversity on GuacaMol would be redundant.
**Q2 (missing baselines, in *experiments 3 / missing references 1*)**
**A2**:
1. We have listed the related LLM-based methods of molecular generation in our paper (line 92), but notably none of them are designed for *de novo* drug design tasks. It should be emphasized that *de novo* drug design aims to generate molecules with properties beyond the property distribution of the training set, while previous LLM-based methods cannot achieve this (https://arxiv.org/abs/2203.14500). Specifically, MolGPT, mentioned by the reviewer, is a pre-trained model aiming to learn existing datasets and generate molecules within the property distribution. Other LLM-based approaches, including ChemFormer, also do not target *de novo* drug design, so we did not include them in our experiments.
2. You also mention some diffusion-based methods for molecular generation, but at the moment diffusion models work only in 3D molecular generation, which mainly targets structure and quantum properties. In contrast, 1D/2D molecular generation (our aim) focuses on biochemical properties, which is a different direction of research from 3D molecular generation (https://arxiv.org/abs/2203.14500). Hence, we did not include diffusion models in our experiments.
3. GFlowNet is a baseline for *de novo* drug design, and we didn't include it in the original paper mainly because of its relatively poor performance. As a supplement, we have included its results in **Table 1, 2, 3 (in pdf)**.
**Q3 (multiple seeds & Guacamol tasks, in *experiments 2, 4, 5 / reproducibility 1*)**:
**A3**: First, we would like to emphasize that in section 4.3 we report the standard deviations across different seeds. In the GuacaMol benchmark experiment, we have also launched multiple runs to verify the robustness of the algorithm. However, because our algorithm is highly stable and most previous works on GuacaMol did not report standard deviations, we did not include them in the original paper.
However, to address any reservations, we have included the standard deviation values for our algorithm's performance on the GuacaMol task across 5 different seeds in **Table 1 (in pdf)**. Additionally, we have listed the names of the 20 GuacaMol tasks, the specifics of which can be found in the GuacaMol paper.
**Q4 (MARL references, in *missing references 2/questions 2*)**
**A4**: As you've mentioned, previous literature in cooperative multi-agent reinforcement learning has presented some approaches to encourage diversity. However, they can hardly be directly applied to *de novo* drug design because:
1. Typically, the objective of these techniques is to enhance the behavioral diversity of agents, leading to a collection of diverse agents at the end of the RL process. In contrast, our aim centers on enhancing the diversity of the objects (molecules in our context) that the agents are tasked with searching for.
2. These methods are typically designed for well-defined virtual spaces (like games) or real-world 3D spaces, leveraging specific space characteristics such as trajectories, which are difficult to define and utilize in the chemical space.
**Q5 (in *reproducibility 2, 3*)**
**A5**: Our codes are available at https://anonymous.4open.science/r/MolRL-MGPT-835E. We explain your question in detail in the author rebuttal.
**Q6 (in *clarity*)**
**A6**: We have accurately formulated the loss function in section 3.2. Specifically, Eq. (3) defines the loss function for the first agent, which is also the first term of the loss functions for other agents. Eq. (4) defines the loss functions for all agents, and it's noteworthy that when $k=1$, Eq. (4) simplifies to Eq. (3).
**Q7 (*questions 1*)**
**A7**: For the design of the loss functions, a seemingly natural approach is to reward the differences between all agents equally. However, this method exhibits relatively low robustness in experiments; that is, the validity of the agents' generated SMILES tends to collapse due to strong mutual interference. In contrast, our approach (Eq. (4)) organizes the agents in a sequence, and each agent is only rewarded for its differences from the agents preceding it, which enables the agents to improve more stably.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their work on improving their paper! I have a few comments to their response, which I list below.
### On Q1
I appreciate your adding diversity metrics to the JNK3 and GSK3 maximization tasks and agree that this betters the case for this paper. I note that some measure of diversity on the COVID target synthesis task would've been appreciated as well.
### On Q2
> We have listed the related LLM-based methods of molecular generation in our paper (line 92), but notably none of them are designed for de novo drug design tasks. It should be especially emphasized that de novo drug design aims to generate molecules with properties beyond the property distribution of the training set, while previous LLM-based methods cannot achieve this (https://arxiv.org/abs/2203.14500). Specifically, MolGPT, mentioned by the reviewer, is a pre-trained model aiming to learn the existing datasets and generate molecules within the property distribution. Other LLM-based approaches, including ChemFormer, also do not target de novo drug design, so we did not include them in our experiments.
The authors' argument here is fair, though I will note that the same argument could be made about JT-VAE which the authors do include as a baseline. There, the VAE is used with a Bayesian optimization procedure, and a similar setup could be applied to the other LLM-based approaches, though I agree that this may be outside the scope of this paper (but if the authors did this and showed their method outperforms it would certainly improve the case for their method!).
> You also mention some diffusion-based methods for molecular generation, but at the moment diffusion models work only in 3D molecular generation, which mainly targets structural and quantum properties. In contrast, 1D/2D molecular generation (our aim) focuses on biochemical properties, which is a different direction of research than 3D molecular generation (https://arxiv.org/abs/2203.14500). Hence, we did not include diffusion models in our experiments.
I'm not sure what is included in 1D/2D molecular generation, but there are graph based diffusion models for molecule generation, e.g., https://arxiv.org/abs/2209.14734. You can see https://arxiv.org/pdf/2304.01565.pdf for a survey on more of these sorts of methods. It's fair that the main focus of this paper is on LLM based approaches to de novo drug design, but since these 3D based methods seem to work well I think including one as a baseline would be helpful to illustrate the benefit of the authors' approach.
> GFlowNet is a baseline of de novo drug design, and we didn't include it in the original paper mainly because of its relatively poor performance. As supplementary, we have included its results in Table 1, 2, 3 (in pdf).
Great! :)
### On Q3
> First, we would like to emphasize that in section 4.3, we report the standard deviations across different seeds. In the GuacaMol benchmark experiment, we have also launched multiple runs to verify the robustness of the algorithm. However, because our algorithm is highly stable and most previous works on GuacaMol did not report standard deviations, we did not include them in the original paper.
It's indeed unfortunate that prior works did not run experiments over multiple seeds. It's good to see the authors running their experiments with more seeds on the GuacaMol experiment. I'm curious what the results across multiple seeds are for the baselines for GuacaMol as I only see multiple seeds for the MARL experiments.
I would also like to see performance vs other baselines on the COVID task, again across other seeds to consider raising my score (e.g., report values of top-k across 1k or some similar number of molecules generated per seed, or mean of the reported metrics across the 1k generated molecules per seed).
> Additionally, we have listed the names of the 20 GuacaMol tasks, the specifics of which can be found in the GuacaMol paper.
Great! :)
### On Q4
> Typically, the objectives of these techniques are enhancing the behavioral diversity of agents, leading to a collection of diverse agents at the end of the RL process. In contrast, our aim centers on enhancing the diversity of the objects (molecules in our context) that the agents are tasked with searching.
I don't see why behavioral diversity of agents wouldn't also lead to improved diversity of the objects generated. Shouldn't more diverse behavior policies generate more diverse objects? If not, why not?
> These methods are typically designed for well-defined virtual spaces (like games) or real-world 3D spaces, leveraging specific space characteristics such as trajectories, which are difficult to define and utilize in the chemical space.
Why are trajectories difficult to use in chemical space? In the case of this paper they should just be the generated SMILES string, no? | null | null | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to all the reviewers for your valuable feedback on our paper, and we have responded to all your questions (in the corresponding rebuttal sections). We also add some supplementary experimental results in the **pdf**, including more baselines on the GuacaMol benchmark, more baselines on GSK3$\beta$ and JNK3 maximization, new experiments on QED maximization and other details. We promise to present these contents in the final version of the paper.
In addition, we would like to explain the issue raised by reviewer xd9X that **our codes are not available**:
Indeed, in the abstract of our paper, we have included an Anonymous GitHub link to the MolRL-MGPT codebase (https://anonymous.4open.science/r/MolRL-MGPT-835E). Regrettably, it appears that during your review, the Anonymous GitHub service encountered an interruption, preventing you from reproducing our experiments and potentially tarnishing your impression of the paper. We sincerely apologize for this inconvenience and want to clarify that we do not consider it attributable to our actions. The Anonymous GitHub service has now been restored, and the timestamp verifies that we initially committed the code in May. We earnestly ask you to reevaluate the code, as it contains many details of our experiments, including hyper-parameter settings.
Pdf: /pdf/42d11b084e7c02f7b93c6fe1f906ff8352eef18f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Task-aware Distributed Source Coding under Dynamic Bandwidth | Accept (poster) | Summary: This paper proposed a task-aware distributed source coding framework called NDPCA (Neural Distributed Principal Component Analysis). This framework aimed to solve the problem of efficient compression of correlated data in multi-sensor networks. In section 2 and 3, the authors provided a formulation of the problem and solved the problem in a linear setting with their proposed method DPCA. They also described ways to determine bandwidth allocation and analyzed the bound of DPCA reconstruction loss in practical conditions. In section 4, the authors proposed NDPCA to generalize their DPCA method to nonlinear tasks by combining a neural autoencoder. They also discussed their training and inference methods to avoid re-training neural networks for different bandwidths. In section 5, the experiments showed performance of proposed framework in three tasks. All experiments assumed two data sources and compared the framework with three other baseline methods. The results showed that task-aware NDPCA performed similar to or better than all baselines in three tasks and had a graceful tradeoff between performance and bandwidth.
Strengths: 1.The idea of compressing data from distributed sources into different sizes according to their importance is novel and effective. The proposed formulation of the problem and its solution in linear setting serve as a good insight for nonlinear version of the problem.
2.The proposed training and inference methods of NDPCA do not require retraining for different bandwidth, which is convenient and saves computing and storage resources.
3.The experiments in this paper compare the proposed framework to three baseline methods in three different tasks. The authors comprehensively analyze the results to show the advantages of task-aware NDPCA in different practical situations.
Weaknesses: 1.The theoretical analysis in sections 2 and 3 only covers situations where the encoder, decoder, and tasks are linear, which does not strongly support NDPCA, which is meant to work in non-linear situations.
2.The experiments assume only two data sources, which is a heavy restriction. The performance of NDPCA when there are more than 2 (or a very large number of) data sources is not presented or analyzed.
3.As described in the last paragraph of section 5, the autoencoders are poor at generalizing to out-of-distribution data, which is also a weakness of NDPCA. This paper does not offer a solution to this problem, which will possibly limit the usefulness of NDPCA in practical scenarios where fresh data is generated and transmitted in real-time.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1.What is the performance of NDPCA when there are more than 2 data sources?
2.What is the performance of NDPCA when using testing dataset? If it is too low, is there an insight why or any possible solutions?
3.In section 5, it is very interesting that NDPCA outperforms JAE, but the reason is only simply explained in the paper. Could you please elaborate a bit?
4.Could you provide a simple theoretical analysis or insights explaining why difference of DPCA reconstruction losses does not harms the result of non-linear tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments.
Weakness:
1. The key focus of our study is to **harmonically combine linear DPCA modules and neural networks**, leveraging their different purposes. The linear DPCA module is designated to measure the importance of sources with singular values, and the neural networks are designed to process complicated real-world data and work harmonically with the DPCA modules. While there are no theoretical guarantees for their combination, we tested NDPCA on real-world datasets to demonstrate its capability.
2. (also question 1) Both our DPCA and NDPCA frameworks have been designed to accommodate multiple sources effectively. In particular, our DPCA linear formulation demonstrates its effectiveness with multiple sources, as outlined in lines 137-139 of the original submission. **We also add one additional experiment with NDPCA compressing 4 sources with different sizes of views**. Again, its performance was compared to that of a task-aware vanilla distributed autoencoder (DAE) in the same setting. The results, presented in the global review's attached PDF, reveal NDPCA's superiority over DAE in the Airbus dataset, and NDPCA with two sources slightly outperformed the one with four sources. The distribution of bandwidth to sources is unequal because each source has different importance to the task, which is related to the size of the view.
3. (and also question 2) **The results presented in the paper are based on testing sets**, emphasizing the case where fresh data is generated and transmitted in real time. We used data augmentation (using the albumentation package as described in the appendix) to aid the neural networks in adapting to unseen data better, bridging the gap of generalization.
Questions:
3. We find the results interesting as well. Two perspectives explain the performance difference between NDPCA and JAE. First, NDPCA has two encoders, making its model parameters slightly larger than JAE, which may contribute to better performance. Second, NDPCA uses DPCA modules to compress representations Z, which can effectively remove noise injected in the data, giving it an advantage over JAE in noise removal and performance.
4. The reason is that after DPCA reconstruction, NDPCA uses a decoder to decode the data back to its original space. During training, the encoders and decoder are exposed to the DPCA reconstruction, while the other methods we tried in the appendix section are not, so the DPCA reconstruction only slightly harms the result of non-linear tasks. Of course, it is impossible not to harm the result of non-linear tasks at all, as data is always compressed.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks for your clarification on the problems! The answers have provided some insights on technical details of the paper, along with the additional experiment on NDPCA compressing data from 4 sources. Although it's still theoretically unclear why neural networks play an important part in NDPCA and the experiments have room for improvement, it could be a solid technique in applications. I would increase my score.
---
Reply to Comment 1.1.1:
Title: Can you double-check the score?
Comment: A kind reminder to double-check whether the score is correctly saved. | Summary: This paper studies compression in a distributed computing setting, named neural distributed principal component analysis(NDPCA). The proposed NDPCA can adapt to available bandwidth and flexibly allocates bandwidth to multiple sources according to their contribution to the final task. Experiments demonstrate the effectiveness of NDPCA on bandwidth allocation.
Strengths: By dynamically distributing bandwidth among sensors, NDPCA implements a graceful trade-off between performance and bandwidth, enabling adaptive resource allocation. The experiments conducted in the paper demonstrate that NDPCA significantly improves the success rate of multi-view robotic arm manipulation by 9% and the accuracy of object detection tasks on satellite imagery by 14% compared to an autoencoder with uniform bandwidth allocation.
Weaknesses: The proposed method is mainly a control algorithm that can adapt to multiple streams of data, which sounds like a resource allocation algorithm. The neural processing of data is not essential here.
The experiments may be too simplistic to be convincing. For example, the three experiments are all in a two-view setting and may not provide a realistic assessment of bandwidth allocation capabilities. Besides, there is no comparison with prior works.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In lines 198-199, “We observed that the autoencoder 199 automatically learns representations with small correlation”. Does this observation still exist when there are more encoders (>2)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: No particular limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback.
Yes, our NDPCA can be interpreted as a resource allocation algorithm, but the key focus of our study is to **harmonically combine linear DPCA modules and neural networks**, leveraging their different purposes. The linear DPCA module measures the importance of sources using singular values, while the neural networks process complex, high-dimensional real-world data, making them essential for real-world applications.
As stated in the related work section, to the best of our knowledge, our work is the only one focusing on **designing a framework to compress multi-sourced data to different compression levels using the same model.** Other previous works have centered on designing new neural architectures for multi-view image compression, using different neural network layers. Given this distinction in focus and setting, we chose not to compare NDPCA with other works and only compared it with JAE and DAE. **We also add one additional experiment with NDPCA compressing 4 sources with different sizes of views.** Again, its performance was compared to that of a task-aware vanilla distributed autoencoder (DAE) in the same setting. The results, presented in the global review's attached PDF, reveal NDPCA's superiority over DAE in the Airbus dataset, and NDPCA with two sources slightly outperformed the one with four sources. The distribution of bandwidth to sources is unequal because each source has different importance to the task, which is related to the size of the view.
Questions:
Yes, similar trends of small correlations also exist in our additional experiments with 4 sources | Summary: This work targets compressing the correlated data to be communicated in a multi-sensor network. The multi-sensor network pipeline is defined as the following steps: (1) each edge sensor compresses the data and transmits it to a central node, and (2) the central node decompresses the data and passes it to a machine learning task for the final output. Specifically, the authors first formulate a task-aware distributed source coding problem based on the target application. Then, they provide a theoretical justification for the formulated problem and propose a task-aware distributed source coding framework. The experiments on CIFAR-10 denoising, multi-view robotic arm manipulation, and satellite image object detection validate that the proposed framework can achieve better accuracy as compared to an autoencoder baseline under the same compression ratio.
====Post Rebuttal===
Thanks for the rebuttal, I have read the author’s rebuttal. The rebuttal addressed my concern on "Scalability to different scenarios" by adding more explanation on Locate and Lift tasks. For the other two concerns (e.g., "Need more motiving applications" and "Lack of compression with recent neural compression methods for images"), I was not fully convinced by the rebuttal.
Strengths: > + Clear problem formulation: the problem formulation with visual-friendly figures can help the readers easily understand the target application,
> + Providing theoretical analysis and algorithm table: the provided theoretical analysis and algorithm table helps the readers understand the proposed method in a more structured way.
> + Comprehensive analysis of experiment results: the experiment settings are well described, and comprehensive analyses are conducted to better show figures and tables.
Weaknesses: > + Need more motivating applications: although pure theory work can also have a huge impact, it would be great to add more real-world motivating applications to justify that compressing data for multi-sensor networks is an important question and that the proposed method is the key enabler. For example, with more and more convenient Internet access, will varying and limited bandwidth still be a problem for existing edge devices? Why assume the data will not be decoded for humans but only for computers? What is the key metric for benchmarking whether the multi-sensor network achieves decent performance? >99% accuracy on some specific applications, or performing the machine learning task in a real-time manner? Why is the improvement by the proposed method meaningful? Will the whole pipeline with the proposed method achieve some specific commonly-agreed metric (e.g., 30 FPS real-time) that the pipeline without the proposed method cannot?
> + Lack of comparison with recent neural compression methods for images: although this work does not only target compression for images, at least two of the three benchmark datasets are pure image datasets. Since there exist some works that target encoding images into neural networks (e.g., https://arxiv.org/abs/2006.09661), it would be great to add a comparison and discussion with those works.
> + Scalability to different scenarios: In the task of “Locate and Lift”, the trends of different methods are not the same trend as the trends on the other two datasets. Specifically, the proposed NDPCA does not always achieve a higher success rate as compared to the baseline DAE. Is it caused by the task type being quite different from image denoising and object detection? Thus, the scalability of different scenarios is unclear.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: > + What is the cost of training the proposed NDPCA as compared to methods without training?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See "Weaknesses"
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness:
1. In our study, we consider a scenario where satellites send data to a mission control center on Earth, using independent encoders due to their distance apart, with the control center having a joint decoder. Similarly, as the IoT era nears, factories will use sensors in distant locations, demanding efficient machine-learning models to process compressed data from these sensors jointly.
As shown in Fig. 4 and the appendix, **our method can trade off human perception and computer features with a weighted loss, but in our main focus, we show results that are fully task-aware.**
Our primary emphasis stems from the limited communication bandwidth. **To effectively execute machine learning tasks, the bandwidth should prioritize transmitting task-aware features.**
The key metric in the multi-sensor network setting is the performance of the task with limited bandwidth as we argue that bandwidth is the bottleneck of such settings. In other settings, real-time inference may be essential, but it is not the focus of the paper. The NDPCA framework only adds negligible computation which we elaborate on later.
2. As stated in the related work section, to the best of our knowledge, our work is the only one **focusing on designing a framework to compress data to different compression levels with the same model.** Other previous works have centered on designing new neural architectures for multi-view image compression, using different neural network layers. Given this distinction in focus and setting, we chose not to compare NDPCA with other works and only compared it with JAE and DAE.
3. The Locate and Lift experiment is a reinforcement learning behavior cloning task that is **highly influenced by the initial environmental conditions during both the training and testing phases**. This is also why we showcase the performance of runs from multiple random seeds. The sensitivity is evident from the substantial variance in performance observed in various RL papers, such as the soft-actor-critic paper by Haarnoja et al. in 2018 (Figure 1). In the realm of RL research, researchers typically compare the average performance of different methods across multiple random seeds. In this paper, **the NDPCA method consistently outperforms DAE on average in our specific experimental setting.**
Questions:
NDPCA has additional DPCA modules compared to a vanilla DAE. The DPCA module performs singular value decomposition (SVD) on a batch of data. The memory and computational time overhead of using SVD during training is negligible compared to the backpropagation of deep neural networks. The reason is that SVD is only a series of matrix operations, and the dimension of the matrices in SVD is "batch size x dimension of representations", which makes it easy to compute.
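To make the shape argument above concrete, here is a small sketch of the thin SVD of a batch of latent representations. The batch size and latent dimension are illustrative, not the paper's actual settings.

```python
# Sanity check of the claim above: SVD of a (batch_size x latent_dim)
# matrix is a small dense operation, cheap relative to backprop.
# Shapes here are illustrative placeholders.
import numpy as np

batch_size, latent_dim = 256, 64
z = np.random.randn(batch_size, latent_dim)

# Thin SVD as a PCA-style module would use it; cost is O(batch * dim^2).
u, s, vt = np.linalg.svd(z, full_matrices=False)

print(u.shape, s.shape, vt.shape)  # (256, 64) (64,) (64, 64)

# NumPy returns singular values in descending order, so ranking latent
# dimensions by importance requires no extra work.
assert np.all(np.diff(s) <= 0)
```

The operand is only batch_size x latent_dim, independent of the image resolution or network size, which is why the overhead stays negligible as the encoders grow.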
Strengths: The problem this paper tries to solve is important. The multi-view distributed learning will improve over single-view learning and the communication can be a bottleneck for the system.
Weaknesses: The experimented model and data are somewhat outdated. For example, CIFAR10 is an old dataset, and the paper does not provide details on the classification network or the classification accuracy. The detection dataset (Airbus) and model (YoloV1) should be updated to larger and more modern ones, such as COCO and YoloV6, V7, or V8. Though interesting, the method seems, at least to me, to be working on toy examples and models.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Why for CIFAR10 is only the reconstruction PSNR reported, not the classification accuracy? If this is the case, we could use any image dataset, not necessarily a classification dataset, such as DIV2K for image reconstruction.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback.
Weakness:
Regarding the use of CIFAR10, we acknowledge its age but included it as a toy example for quick iteration and sanity checks due to its manageable size. Thus, we can try different methods to improve uncorrelatedness and linear compressibility, as presented in the ablation study in the appendix.
Regarding Airbus detection with YoloV1, we also experimented with YoloV8 but found no significant difference between the two. Additionally, we opted for YoloV1 due to its transparency and ease of modification for our setting, supported by abundant open-source resources available online. It is important to note that our paper primarily focuses on **the methodology of data compression for multiple sources**, rather than showcasing state-of-the-art computer vision models.
Questions:
In short, CIFAR10 PSNR was selected for its convenience in quick iteration and sanity checks during our research. We use CIFAR-10 as a toy example to demonstrate the use of NDPCA in the presence of sources with unequal importance to the task. Due to the **simplistic nature of the classification task, which only requires 4 bits (digit 0-9) as the information bottleneck**, we choose reconstruction as our “task”, making it more suitable to showcase the performance across a range of available bandwidth. Moreover, we utilized image reconstruction to showcase the ability of NDPCA in task-agnostic settings. While choosing other image datasets is feasible, we already demonstrated the results of NDPCA in the computer vision domain with our Airbus experiments. | Rebuttal 1:
Rebuttal: This is the attached file to show the latest results from our additional experiments with 4 data sources.
Pdf: /pdf/284df6158c9603435b28d55dfa962e9747d80c8d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents the neural distributed principal component analysis (NDPCA) method that compresses features from multiple sensor sources with a given total bandwidth limit. NDPCA carries the following novelties. First, it is task-aware. The algorithm trains the compression networks by minimizing the final task. Second, the single model can compress the features at different bandwidth limits. This is achieved by training the networks at the maximum available bandwidth and picking the largest components based on the available bandwidth at inference time.
The authors evaluated the NDPCA method on three different tasks: 1) denoising of CIFAR-10 images, 2) multi-view robotic arm manipulation, and 3) object detection from satellite imagery. They compared NDPCA against the following baselines: 1) vanilla distributed PCA with equal bandwidth allocation, 2) joint PCA that jointly encodes all sources, and 3) task-agnostic NDPCA. The evaluation result shows NDPCA achieves much better task performance than vanilla distributed PCA and can even outperform joint PCA.
Strengths: * This paper presents a sound method to compress features from multiple sources given limited bandwidth. The method outperforms the baselines by doing the compression in a task-aware manner.
* The evaluation used three real-world applications.
Weaknesses: * I have some doubts about the problem setup used in the paper. The paper assumes there is a total availabe bandwith that is shared by all sensor sources, and there is no per-source bandwidth limit (i.e., one can allocate all the bandwith to one single source). It will be useful to provide some references to support this assumption. My concern with this setup is that, to have the communication bottleneck shared by all sources, those are likely very close to each other. If the sensors are close to each other, one can build a joint encoder that encodes the features from all sources jointly, which makes joint PCA a realistic option.
* The evaluation contains only setups with two sources. It will be useful to provide results showing the method's performance with more sources.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: It will be useful to provide some references to support the assumption of bandwidth limit.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their valuable feedback, and we will address each point in detail.
Weakness:
1-1. Our study was inspired by the scenario presented in the introduction, involving multiple distant satellites transmitting data to an Earth-based mission control center. Employing independent encoders for the satellites and a joint decoder for the control center, we mimicked this setup. Given the upcoming IoT era where scattered sensors gather data, our research focused on a similar challenge: effectively utilizing compressed data from individual sensors for joint decoding and processing, which holds significance for future applications.
1-2. Regarding the reviewer's concern about the per-source bandwidth limit, **we would like to highlight that our proposed framework, NDPCA, is adaptable to this scenario.** NDPCA is capable of measuring the importance, represented by singular values, of each dimension from all sources. As a result, we can allocate bandwidth to each source effectively using a greedy allocation algorithm, prioritizing the most important available dimensions subject to the per-source constraints.
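The greedy allocation described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function name, the representation of singular values, and the optional per-source caps are all assumptions introduced for clarity.

```python
# Illustrative sketch of greedy bandwidth allocation: rank all latent
# dimensions across sources by singular value and grant one unit of
# bandwidth at a time, honoring optional per-source caps.
def allocate_bandwidth(singular_values, total_bw, per_source_cap=None):
    """singular_values: list of per-source singular-value lists.
    Returns the number of dimensions allocated to each source."""
    # Flatten to (value, source) pairs and sort by importance, descending.
    ranked = sorted(
        ((v, s) for s, vals in enumerate(singular_values) for v in vals),
        reverse=True,
    )
    alloc = [0] * len(singular_values)
    for value, s in ranked:
        if sum(alloc) == total_bw:
            break  # total uplink budget exhausted
        if per_source_cap is not None and alloc[s] >= per_source_cap[s]:
            continue  # this source hit its own limit; try the next dimension
        alloc[s] += 1
    return alloc
```

Without per-source caps this reduces to the shared-budget setting used in the paper's experiments; with caps it covers the reviewer's per-source scenario.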
2. Both our DPCA and NDPCA frameworks have been designed to accommodate multiple sources effectively. In particular, our DPCA linear formulation demonstrates its effectiveness with multiple sources, as outlined in lines 137-139 of the original submission. **We also add one additional experiment with NDPCA compressing 4 sources with views of different sizes.** Again, its performance was compared to that of a task-aware vanilla distributed autoencoder (DAE) in the same setting. The results, presented in the global review's attached PDF, reveal NDPCA's superiority over DAE on the Airbus dataset, and NDPCA with two sources slightly outperformed the one with four sources. The bandwidth distributed to the sources is unequal because each source has a different importance to the task, which is related to the size of its view.
Question:
We examine a scenario where there's an uplink bandwidth limitation, essentially a collective constraint shared by all sensors. For instance, in an IoT setup with 100 sensors and a single central node, this constraint translates to a reception bandwidth limitation for the central node, less than or equal to the combined bandwidth of all 100 nodes. Such a situation is typical in wireless sensor networks.
Reference: P. Liu *et al.*, "Training Time Minimization in Quantized Federated Edge Learning under Bandwidth Constraint," *2022 IEEE Wireless Communications and Networking Conference (WCNC)*.
Furthermore, in the satellite network scenario described in our introduction section and in our object detection experiments, the bandwidth available to satellites is not on par with that of 5G wireless networks. Satellite network providers like Starlink offer internet speeds ranging from 50 to 250 Mbps, whereas 5G networks can reach speeds on the order of Gbps.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I will keep my score. | Summary: This paper proposes a distributed task-specific compression method called NDPCA, composed of both a neural network autoencoder and a linear PCA reconstruction. Given multiple sources of data, NDPCA first compresses the information separately using different independent neural network encoders. Next, it applies a linear distributed encoder based on PCA, which further bottlenecks the information. For each neural encoded source, it projects it into a PCA subspace, such that the total number of dimensions used is equal to a predefined bandwidth $m$. Each source is allocated a different number of dimensions, based on the ranking in the top $m$ singular values. The information is decoded twice, first reprojected back from the PCA subspaces and secondly using a neural network decoder that goes back to the original sources space. The reconstructed sources are input to a task specific network. The authors propose to learn the autoencoders in NDPCA using task-aware losses, that is the task loss should be the same with the reconstructed sources as with the original sources. One of the benefits of the proposed approach is that, because of the use of PCA as an intermediate reconstruction step, one can dynamically choose the total number of eigenvectors used (and, consequently, the bandwidth) without a need for retraining the model. The authors provide a theoretical analysis of their approach and multiple experiments on a number of different tasks that support the methodological choices.
Strengths: 1. The paper is written clearly, well structured and easy to follow.
2. The proposed method, while simple, is novel and achieves good results. The random DPCA module is a nice idea.
3. The experimental evaluation is done on multiple different tasks, which demonstrates the applicability of the proposed framework.
4. I like the negative results discussion provided in the paper, relating to (1) uncorrelatedness and (2) linear compressibility.
Weaknesses: 1. While the framework presented is described to work for any number of input sources, all the experiments are conducted by considering only 2. I would have loved to see how the method behaves for a larger number of sources.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. By using SVD in training, how much overhead does it introduce? In terms of memory/time vs when not using random DPCA in training.
2. How important is the batch size? You state in the limitations that SVD can become unstable for small batch sizes, but do the results improve with a higher batch size?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our paper.
Weakness:
Both our DPCA and NDPCA frameworks have been designed to accommodate multiple sources effectively. In particular, our DPCA linear formulation demonstrates its effectiveness with multiple sources, as outlined in lines 137-139 of the original submission.
**We also add one additional experiment with NDPCA compressing 4 sources with views of different sizes.** Again, its performance was compared to that of a task-aware vanilla distributed autoencoder (DAE) in the same setting. The results, presented in the global review's attached PDF, reveal NDPCA's superiority over DAE on the Airbus dataset, and NDPCA with two sources slightly outperformed the one with four sources. The bandwidth distributed to the sources is unequal because each source has a different importance to the task, which is related to the size of its view.
Questions:
1. The memory and computational time overhead of using SVD during training is negligible compared to the backpropagation of deep neural networks. The reason is that SVD consists only of a series of matrix operations, and the matrices involved have dimensions "batch size x dimension of representations", which makes them inexpensive to compute with.
2. Yes, a small batch size may cause the SVD to be ill-conditioned and unstable as the matrix might have multiple 0 singular values. For a higher batch size, **we do not observe any significant improvement compared to a sufficiently large (stable) batch size**.
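To give intuition for both points, here is a small NumPy sketch (illustrative sizes, not the paper's code): the SVD operates on a batch-size x latent-dimension matrix, and the number of retained principal directions can be varied at test time without retraining the encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, latent_dim, keep = 256, 32, 8     # illustrative sizes

Z = rng.standard_normal((batch, latent_dim))   # neural-encoded batch
Z_centered = Z - Z.mean(axis=0)

# SVD of a (batch x latent_dim) matrix: cheap next to backprop.
U, S, Vt = np.linalg.svd(Z_centered, full_matrices=False)

# Project onto / reconstruct from the top-`keep` principal directions;
# `keep` (the bandwidth) can change at test time without retraining.
Z_compressed = Z_centered @ Vt[:keep].T        # (batch, keep)
Z_restored = Z_compressed @ Vt[:keep]          # back to latent space

# Keeping all directions reconstructs the batch exactly.
err_full = np.linalg.norm(Z_centered - (Z_centered @ Vt.T) @ Vt)
assert err_full < 1e-8
```

If `batch` is smaller than `latent_dim`, the matrix is rank-deficient and some singular values collapse to zero, which is exactly the instability for small batch sizes noted above.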
---
Rebuttal Comment 1.1:
Comment: After carefully reading the other reviews and the authors' rebuttal, I decided to maintain my initial rating. I think this paper has merits to be accepted and I do not agree with the sentiment of other reviewers that were, in my opinion, too dismissive of the paper. | null | null | null | null |
Instructing Goal-Conditioned Reinforcement Learning Agents with Temporal Logic Objectives | Accept (poster) | Summary: This paper considers the problem of instructing goal-conditioned RL agents to follow specifications expressed in Linear Temporal Logic (LTL) formulae. The proposed method works as follows. First, construct a Büchi automaton from the LTL specification, which is then converted to a directed graph representation. Then, use a weighted-graph search algorithm to solve for a high-level plan that satisfies the LTL specification, utilizing the value function of the goal-conditioned agent as a surrogate for the difficulty of achieving each goal. Finally, execute the high-level plan using the goal-conditioned agent. The proposed method is evaluated on three benchmark environments: LetterWorld, ZoneEnv, and Ant-16rooms. The method is compared with two baselines for learning LTL-satisfying policies. It is shown to outperform the two baselines, as well as generalize better on out-of-distribution tasks.
Strengths: This paper focuses on a promising direction and targets an important problem of the field: how to learn/search policies that can generalize to complex, compositional task specifications, while using less or no additional training on the new tasks. The problem formulation of LTL specifications is a fruitful step towards this general direction, therefore of great potential significance.
The proposed method of using Buchi automaton and weighted graph search (using value functions as weights to measure difficulty of achieving each goal) to solve high-level plans for LTL tasks is technically interesting and novel to my knowledge. It also makes sense intuitively and seems to be a good method for this problem.
Weaknesses: In my view, there are several improvements that need to be made to the experimental settings and the exposition of the paper before it is ready for publication here.
- The proposed method considers the setting where low-level policies to achieve each goal are given, and targets the problem of how to solve for a high-level plan that can satisfy the LTL task specification. In this case, the baselines should be alternative methods for computing a high-level plan, with the same assumption that the low-level policies are given. Then the experiments can show how good the proposed method is in solving the problem it targets. It seems that the current baselines do not operate on the same premise (i.e., given low-level policies for each goal, how to solve high-level plan).
- The writing of the paper could be improved to help with clarity. For example, it would be helpful to briefly introduce how the proposed method works and summarize the experimental results in the introduction section. There are also a few typos in the paper, e.g., line 70 “white zones”.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Are the baselines also given a goal-conditioned agent, and focusing on solving the high-level planning problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: I did not find a discussion on the limitations in this paper. One possible aspect for discussion is what to do when the task cannot be divided into high-level LTL solving and low-level goal achieving. For example, the case where how to achieve each goal is context dependent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful feedback and constructive comments! We present our response to each of your concerns and questions below.
**R1. Comparison with the baselines**: "The baselines should be alternative methods for computing a high-level plan, with the same assumption that the low-level policies are given."
We thank the reviewer for this valuable suggestion. In the supplementary material line 792 Sec E.3, we included a baseline for random high-level path selection, evaluated using the Ant 16-room environment and LTL specifications 1 to 8 (line 700 Sec E.1). Recall that our algorithm uses the goal value function $\mathcal{V}(s_0, g_1, g_2)$ to measure the capability of reaching goal $g_2$ from goal $g_1$ from the viewpoint of the agent at an initial state $s_0$. This information is crucial for performing high-level task planning over the graph representation of a Büchi automaton. The baseline randomly generates high-level paths from the Büchi automaton of an LTL specification, and the goal-conditioned agent executes toward the goals along these randomly selected paths. The success rates for the ablated version of our algorithm across all specifications are presented in the last column of Table 2, and the example trajectories of the agent for LTL specifications 1 to 8 in this ablated setting are depicted in Figure 15. This result reveals that without high-level path planning guided by the goal value function $\mathcal{V}$, the performance in satisfying LTL specifications 1 to 8 significantly deteriorates. This decline can be attributed to the randomly selected paths sometimes crossing obstacles in certain rooms, making it challenging for the Mujoco ant to navigate through them.
During the rebuttal period, we additionally compared our approach with the Logical Options Framework (LOF) [1] baseline. The LOF baseline learns a meta-policy for choosing amongst the options to reach subgoals in order to reach the final state of the finite-state-automaton representation of an LTL property. The learning algorithm is based on value iteration over the product of the finite automaton and an environment; that is, on every step in the environment, two transitions are applied: the option transition and the finite state automaton transition. The options can be recombined to fulfill new tasks. Compared to our technique, LOF only supports co-safe LTL [2], where the "always" operator is not allowed and the "next", "until", and "eventually" operators can only be used in positive normal form. We compared our technique with LOF, given the same goal-conditioned policy for subgoals, using the Ant 16-room environment and LTL specifications 1 to 7 (line 700 Sec E.1). We excluded specification 8 because it is not supported by LOF. We find that the two strategies learned the same high-level planning strategy, but LOF takes ∼50-100 retraining steps, while our technique generalizes zero-shot to these specifications and additionally supports the $\omega$-regular specification 8. We will include these additional results in a revised version of our paper.
We did not compare our approach with LOF on the colored ZoneEnv environment because it is a multi-task benchmark, which is beyond the capability of the meta-policy learning algorithm in LOF. We are unaware of any existing high-level planning algorithms for LTL that generalize to multi-task and out-of-training-distribution environments in a zero-shot manner. To the best of our knowledge, our technique is the first algorithm that can do so.
[1] Brandon Araki, Xiao Li, Kiran Vodrahalli, Jonathan A. DeCastro, Micah J. Fry, Daniela Rus. The Logical Options Framework. ICML 2021: 307-317
[2] Amit Bhatia, Lydia E. Kavraki, Moshe Y. Vardi. Sampling-based motion planning with temporal goals. ICRA 2010: 2689-2696
**R2. The writing of the paper could be improved to help with clarity**
We appreciate your suggestion for improving the writing of our paper. We will introduce how the proposed method works at a high level and summarize the experimental results in the introduction section, and we will correct the typos in the paper.
**R3. Are the baselines also given a goal-conditioned agent, and focusing on solving the high-level planning problem?**
Please see R1 above.
**R4. Limitations:**
We thank the reviewer for the suggestion to explicitly discuss the limitations of our approach. Indeed, we assume the atomic propositions in LTL properties *can only be goals* within the goal space of goal-conditioned policies, e.g., colored zones in the ZoneEnv navigation benchmark. We do not allow other sources of atomic propositions, e.g., external environment signals that are out of the agent's control. For example, our current algorithm does not apply when the agent needs to pursue different tasks based on an external signal. The reviewer is indeed correct in pointing out that our current strategy may not be sufficient when achieving each goal depends on context-dependent external environment signals. We will clarify this limitation in a revised version of the paper. Please also refer to our global response for a justification of our current strategy.
A potential solution to the aforementioned limitation is using a task monitor, which acts as an external memory, to maintain a record of completed sub-goals and past external environment signals. During task execution, when receiving a new environmental signal, our task planning algorithm can dynamically revise the high-level path for the remaining sub-goals that the goal-conditioned agent needs to achieve. We leave it for future work.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal
Comment: Thank you for the detailed rebuttal. The new experiments with baselines that learn high-level plans given low-level policies addressed my concerns and made the paper more complete and solid. I am happy to revise my score.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are grateful to the reviewer for taking into account our rebuttal and the new experimental results. We will integrate the results and the discussion into the main paper. | Summary: The paper proposes a new technique for multi-task RL when the tasks are specified using a high-level language (LTL in this case). The approach involves identifying a set of skills corresponding to a set of reachability and safety objectives and training policies for them. While training these policies (which are represented using a single goal-conditioned policy), a separate value function is trained to measure, for every pair of goals, the expected return for the task of reaching one goal from another. Then, given an LTL formula, a subtask graph structure is constructed which is used to compute a high-level plan for performing the task using the learned skills. Experimental results suggest that the proposed approach outperforms a state-of-the-art method for multi-task RL (for LTL tasks) and is better suited for multi-task performance than another compositional approach which is designed for single-task RL.
Strengths: - The ability to learn a set of skills that can be used to perform a wide range of long-horizon tasks specified using LTL is very useful. This proposed approach is a simple and natural way to achieve this.
- Although the idea of planning over a graph structure in order to perform a complex temporal task is not new, the paper provides a way to achieve this for all of LTL (rather than a subset of LTL considered in prior work) and furthermore applies the idea to multi-task RL.
- The experimental results look promising and show that the proposed approach can be used in a wide range of environments to solve complex tasks without further training (after training the goal-conditioned policy)
Weaknesses: - The main weakness, in my opinion, is that the approach doesn't seem to be general enough to handle all of LTL as claimed by the authors. For instance, the LTL task is eventually reduced to following a single path (with a loop at the end) in the automaton graph. But it might not be optimal to follow a single path and one might have to use different high-level strategies from different states of the MDP. Furthermore, the high-level plan is computed using the trained value function which does not consider the ability to stay safe and avoid triggering alternate transitions when measuring the ability to trigger a specific transition in the automaton. The heuristics for handling such avoidance constraints during test time seems reasonable but it is a bit ad-hoc and it is unclear why it is good in general (an ablation study might improve the paper).
- The clarity of the paper could be improved. Many assumptions are made throughout the paper (such as transitions using conjunctive predicates and goals being disjoint). It appears that some of these assumptions can be removed. A clearer presentation could help mitigate doubts about what assumptions are necessary.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - It is mentioned that disjunctions can be handled by adding separate transitions. But conjunctions seem to be restrictive too. For example, the assumption doesn't allow for a transition with predicate $a\land\lnot b$. How are such things handled?
- How is the cycle to loop within an accepting SCC picked after reaching an accepting state?
- How are the goals (predicates) chosen for training the goal-conditioned policy during an episode?
- Is there a class of MDPs or LTL formulas for which the proposed approach of computing a single path in the automaton can be justified?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: As mentioned in weaknesses, there are some limitations to the proposed approach which should be discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful feedback and constructive comments! We present our response to each of your concerns and questions below.
**R1. Single path problem:** "The LTL task is eventually reduced to following a single path in the automaton graph. But one might have to use different high-level strategies from different states of the MDP."
We clarify that our task planning algorithm may choose different high-level paths for the same LTL task, contingent upon the specific initial environment states. For example, consider the Ant 16-room navigation environments shown in Fig (m) and Fig (n) in the global rebuttal. Based on the learned value function (formalized in line 185), for different initial environment states, our task planning algorithm chooses distinct high-level paths tailored for these initial states.
**R2. What are our assumptions?**
We assume the atomic propositions in LTL properties *can only be goals* within the goal space of the underlying goal-conditioned policy e.g. colored zones in ZoneEnv navigation. We do not allow other sources of atomic propositions e.g. external environment signals that are out of the agent's control. For example, our current algorithm does not apply when the agent needs to pursue different tasks based on an external signal. We will clarify this assumption.
*This limitation is related to the *single path problem* raised in R1.* A potential solution is using a task monitor, which acts as an external memory, to maintain a record of completed sub-goals and past external environment signals. During task execution, when receiving a new environmental signal, our task planning algorithm can dynamically revise the high-level path for the remaining sub-goals that the goal-conditioned agent needs to achieve. We leave it for future work.
**R3. Handling avoidance constraints:** "The high-level plan is computed using the trained value function which does not consider the ability to stay safe and avoid triggering alternate transitions. The heuristics for handling such avoidance constraints during test time seems reasonable but it is a bit ad-hoc and it is unclear why it is good in general (an ablation study might improve the paper)."
We acknowledge the reviewer's valid point regarding the limitation of our task-planning approach, which does not consider the goal-conditioned agent's ability to stay safe before reaching a sub-goal. One solution is to learn an *extended* value function as described in [1]. A Boolean composition of the extended value functions for goal reaching and staying away from unsafe zones can provide a more accurate estimation of the agent's capability of triggering a specific transition in the automaton. We leave the integration with [1] for future work and will clarify this limitation.
We indeed provided an ablation study for handling avoidance at test time. The results were included in the supplementary material, line 924, Sec F.3. We investigated the impact of the value threshold $\sigma$ for avoidance. Given a goal-conditioned policy $\pi$, if the value function $V^\pi(s, g) \ge \sigma$ at an environment state $s$ (implying the agent is close to a region $g$ in the goal space to avoid), the agent must take safe actions to move away from $g$ (line 908). As we used sparse reward functions that yield only 0 or 1 for training $\pi$, $\sigma$ should be set somewhere close to (but less than) 1. The results were summarized in Table 3, where we observe that an excessively large value of $\sigma$ compromises the agent's ability for goal reaching, while an insufficiently small value negatively impacts the agent's ability to stay safe.
[1] Nangue Tasse, G., James, S. and Rosman, B. A boolean task algebra for reinforcement learning. NeurIPS 2020.
**R4. How do we handle conjunctive predicates for a transition?**
Our strategy supports environments with overlapping subgoal regions (line 919 in the supplementary material). Given a goal-conditioned policy $\pi$, we support $F(g_1 \wedge g_2)$ by taking, at any environment state $s$, the action $\arg\max_a \min(Q^\pi(s, g_1, a), Q^\pi(s, g_2, a))$. We support $F(g_1 \wedge \neg g_2)$ by reusing our avoidance strategy to avoid $g_2$ when the agent is deemed close to the goal region $g_1$, i.e., when the value function output $V^\pi(s, g_1)$ is above the threshold $\sigma$. Please see an extended discussion of our avoidance strategy in line 896, Sec F.3. Fig (g) and (h) in the global rebuttal provide examples demonstrating how conjunctive predicates are handled for transitioning. Compared with the path in Fig (g), the agent in Fig (h) takes a detour to avoid touching the red zone.
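The Boolean-AND composition above can be illustrated with a toy tabular Q-function (all names here are hypothetical, purely for illustration): an action that serves both conjuncts moderately well beats actions that serve only one of them.

```python
import numpy as np

def act_conjunction(Q, s, g1, g2, actions):
    """Pick the action maximizing min(Q(s, g1, a), Q(s, g2, a)),
    the Boolean-AND composition for F(g1 and g2). Illustrative only."""
    scores = [min(Q(s, g1, a), Q(s, g2, a)) for a in actions]
    return actions[int(np.argmax(scores))]

# Toy tabular Q: 'left' serves only g1, 'right' serves only g2,
# 'mid' serves both moderately -- the conjunction picks 'mid'.
table = {('g1', 'left'): 0.9, ('g2', 'left'): 0.1,
         ('g1', 'right'): 0.1, ('g2', 'right'): 0.9,
         ('g1', 'mid'): 0.6, ('g2', 'mid'): 0.6}
Q = lambda s, g, a: table[(g, a)]
print(act_conjunction(Q, None, 'g1', 'g2', ['left', 'right', 'mid']))  # mid
```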
**R5. How is the cycle to loop within an accepting SCC picked after reaching an accepting state?**
After reaching an accepting state, we apply Dijkstra's algorithm to find the shortest cycle in the SCC that contains the accepting state. Edge weights are determined by the trained value function that measures, for every pair of goals, the expected return of reaching one goal from another.
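A self-contained sketch of this step is below; the edge weights are arbitrary stand-ins for costs derived from the trained goal value function, and the function names are ours, not the implementation's.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances in graph {u: {v: weight}}, weights >= 0."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def shortest_cycle_through(graph, acc):
    """Cheapest cycle through accepting state `acc`: for each edge
    acc -> v, add the cheapest way back from v to acc."""
    best = float('inf')
    for v, w in graph.get(acc, {}).items():
        back = dijkstra(graph, v).get(acc, float('inf'))
        best = min(best, w + back)
    return best

# Toy SCC; weights stand in for costs from the goal value function.
scc = {'acc': {'a': 1.0, 'b': 5.0}, 'a': {'b': 1.0}, 'b': {'acc': 1.0}}
print(shortest_cycle_through(scc, 'acc'))  # acc -> a -> b -> acc, cost 3.0
```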
**R6. How are the goals chosen for training the goal-conditioned policy during an episode?**
Our technique assumes the existence of a goal-labeling function $L$ that maps environment states to valid goals. The goals in ZoneEnv are colored zones: Yellow $y$, Red $r$, White $w$, and Jetblack $j$. For example, $r \in L(s)$ if and only if the agent steps onto a red zone at an environment state s. For each color, we use a fixed random vector as the goal representation for the goal-conditioned policy. The goals in Ant 16 rooms are in the form of $(r, c)$ that denotes the horizontal and vertical (integer) positions of the Mujoco ant. Given an environment state $s$, define $L(s) = (r, c)$ if the position of the ant in $s$ is close to $(r, c)$ within a threshold. During training, we randomly sample initial states and goals and correspondingly adapt the reward function and the goal representation. More details are in the supplementary material Sec E.
---
Rebuttal Comment 1.1:
Title: Single Path Problem
Comment: We thank the reviewer again for the insightful comments. We revisited the single-path problem and conducted further experiments during the rebuttal period.
> The LTL task is eventually reduced to following a single path in the automaton graph. But one might have to use different high-level strategies from different states of the MDP.
We experimented with an alternative high-level planning strategy to evaluate the necessity of different high-level strategies from different states of the MDP *in our context*, using the Ant 16-room environment. We applied LTL specifications 1 to 8 (line 700 Sec E.1) to this environment. Specifically, we employed the goal-conditioned policy to follow a high-level plan, attempting the next sub-goal in the shortest path (in our graph representation of the LTL specification) for $h$ time steps, and subsequently replanning from the current environment state $s_t$ for the remaining sub-goals every $h$ time steps. Before replanning, we updated the weight of any edge between two sub-goals $g_1$ and $g_2$ based on $s_t$ and the trained goal value function $\mathcal{V}(s_t, g_1, g_2)$. We tried various values of $h$. However, the results do not exhibit significant differences compared to our original setting without replanning. For instance, considering the most complex LTL specification $\phi_5$ (line 729), our average success rate over 3 trained policies (each evaluated 200 times) is 86.8% without replanning (Table 2), while with replanning ($h=50$), the success rate is 84.7%.
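The replan-every-$h$-steps loop we evaluated can be sketched as follows. All callables are hypothetical stand-ins (not our implementation) for the goal-conditioned policy step, the planner that re-weights the automaton graph with $\mathcal{V}(s_t, g_1, g_2)$, and the goal test.

```python
def execute_with_replanning(state, step_fn, plan_fn, reached_fn, h,
                            max_steps=1000):
    """Follow the next sub-goal on the current plan, recomputing the
    plan from the current state every `h` steps. Sketch only."""
    plan = plan_fn(state)            # ordered list of sub-goals
    steps_since_replan = 0
    for _ in range(max_steps):
        if not plan:
            return state, True       # all sub-goals achieved
        goal = plan[0]
        state = step_fn(state, goal)
        if reached_fn(state, goal):
            plan = plan[1:]
        steps_since_replan += 1
        if steps_since_replan == h:
            plan = plan_fn(state)    # re-weight edges with V(s_t, ., .)
            steps_since_replan = 0
    return state, False

# Toy 1-D world: states and goals are integers, the policy moves one
# unit toward the goal, and the "planner" orders goals nearest-first.
goals, done = [3, 1], set()
def plan_fn(s):
    return sorted((g for g in goals if g not in done), key=lambda g: abs(g - s))
def step_fn(s, g):
    return s + (1 if g > s else -1)
def reached_fn(s, g):
    if s == g:
        done.add(g)
        return True
    return False

final, ok = execute_with_replanning(0, step_fn, plan_fn, reached_fn, h=2)
print(ok)  # True
```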
In the experiments, our approach doesn't seem to rely on replanning (and different high-level strategies from different environment states) mainly because the high-level graph structure of a temporal task is already provided in its LTL specification and hence known to us. Prior approaches that combine learning and planning such as [1,2] require iterative replanning due to a lack of knowledge about temporal task specifications and thus have to *sample* the task's high-level graph structure for planning purposes. It's worth noting that our approach could be combined with [1] for improved sub-goal reaching. As illustrated in the rebuttal, integrating our technique with iterative planning can also address the current limitation of handling external environment signals (see R2). We will include the above discussion in our paper.
[1] Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine. Search on the Replay Buffer: Bridging Planning and Reinforcement Learning. NeurIPS 2019
[2] Kara Liu, Thanard Kurutach, Christine Tung, Pieter Abbeel, Aviv Tamar. Hallucinative Topological Memory for Zero-Shot Visual Planning. ICML 2020 | Summary: This paper presents a method to transfer learned or planned goal-directed skills in domains to novel tasks represented by linear temporal logic within the same domain. The key idea of this paper is to train goal conditioned policies to achieve (and avoid) Boolean goals, and to compose them temporally to achieve temporal logic goals.
The paper proposes to first convert the LTL specification into a Büchi automaton, which is then converted into a directed graph with a target state. The algorithm also estimates the cost-to-go heuristic at each node to estimate edge traversal costs, and finally combines an optimal graph search with learned goal-conditioned policies to achieve the temporal logic goals.
The authors primarily benchmark against LTL2Action, where the specification is embedded into a feature vector using a graph neural network, and this embedded latent feature vector is used alongside a state-feature vector in a deep RL algorithm to compute the final policy. The authors demonstrate that their approach beats LTL2Action on a range of randomly sampled tasks and two specific avoidance tasks.
Strengths: **Sound problem definition**: The authors are correct in their statement that, with competent goal-conditioned policies available to the agent, the agent can solve temporal LTL tasks through composition of these policies. There has been a lot of recent interest in this approach, and the authors have demonstrated it in two navigational environments that have been utilized in research on RL with temporal goals. There are, however, some issues with the assumptions made by the authors, as I describe in the following sections.
**Evaluations**: The authors might have chosen just two navigational domains, but have focused on evaluating over a wide range of temporal logic formulas. Such evaluations are much more valuable in the space of RL for temporal tasks as is considered in this paper.
**Originality and significance**: Prior work suffers from a lack of transferability to all novel tasks, as the library of learned skills is inadequate to transfer to all possible tasks. The authors propose to pre-train goal-conditioned skills that should offer more coverage of the logical transition space. The idea is well demonstrated in the zones and letters environments in the paper, but there are some additional concerns that I highlight in the next section.
Overall, the idea of composing pre-learned skills for novel task scenarios is not original in and of itself. But the combination presented in this paper is novel to the best of my knowledge. However, there are elements of similarity with prior work that have not been addressed adequately.
Weaknesses: **Positioning in context of prior work**: The core idea of skill reuse is not entirely novel. There are prior works addressing this [1], [2], [3] that appear to be missing from the discussion. In fact, these works handle a wider variety of logical compositions for the transition edges, which do not appear to be considered by this paper. While the core idea in this paper is distinct, it deserves to be discussed in the context of these works, which appear to handle the problem with greater generality.
**Correctness concerns**: The authors claim that their approach is applicable to all $\omega$-regular automata. However, their approach appears to have a strong reliance on a single unique accepting state in all the examples that they test on. The generalized Büchi acceptance condition requires that the system visit at least one state in each accepting set infinitely often, and the approach described here is incompatible with such a specification. An example of this would be the patrolling task $\square \diamond a \wedge \square \diamond b$. Here an accepting run would require the agent to visit both $a$ and $b$ infinitely often, but this would not be discoverable by the graph search algorithm described here.
A second correctness concern relates to the type of edge transitions that can be accomplished by the goal-conditioned policy. In general, an edge transition in an automaton is described by the self-transition edge that is maintained until the transition-triggering truth evaluation is reached. The goal-conditioned policy implicitly assumes that the trigger transition is reached as the first distinct transition in the truth values of the propositions. This might be true for navigational tasks, where each state has at most one proposition true, but may not be true in general when not all propositions are controlled by the agent, or even in cases where simultaneous satisfaction of multiple propositions might be required, for instance the specification $\diamond(a \wedge b)$. [2] would appropriately identify this specification as unsatisfiable and abort task execution, whereas the behavior of the system proposed in this paper is uncertain. In particular, none of the goal-conditioned policies are applicable to this outcome. In contrast, [1] can compose the policies logically to satisfy the specification if it is indeed satisfiable. If we study the types of transitions occurring within automata, there are many such edge cases pertaining to self-transitions and simultaneous truth-value changes that cannot be handled by this approach. These limitations must be explicitly acknowledged in the submission.
**Difficulty of training goal-conditioned policies**: This approach relies on having a good goal-conditioned policy to perform the task. However, this is in general a challenging problem, and I am not aware of any works that have managed to train competent goal-conditioned policies beyond grid-world domains that achieve good coverage over all possible logical goals. I would appreciate it if the authors added some text evaluating the quality of the goal-conditioned policies before using them for this algorithm, or pointed to works that address this issue.
[1] - Nangue Tasse, G., James, S. and Rosman, B., 2020. A boolean task algebra for reinforcement learning. Advances in Neural Information Processing Systems, 33, pp.9497-9507.
[2] - Liu, J.X., Shah, A., Rosen, E., Konidaris, G. and Tellex, S., 2022. Skill transfer for temporally-extended task specifications. arXiv preprint arXiv:2206.05096.
[3] - Xu, D. and Fekri, F., 2022. Generalizing LTL Instructions via Future Dependent Options. arXiv preprint arXiv:2212.04576.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: These questions pertain to the limitations of the proposed approach; please clarify whether your approach can handle each of them, or update the limitations to include them:
1. Can this approach handle simultaneity? For example, can it ensure goals $a$ and $b$ are true at the same time instant, as in $\diamond (a \wedge b)$?
2. Can this approach handle a self-transition that requires maintaining some proposition as true until a goal is reached? For example $\diamond b \wedge a U b$
3. Can the graph search be adapted for recurrence-type formulas from Manna and Pnueli's temporal hierarchy [4] (TLDR https://spot.lre.epita.fr/hierarchy.html), e.g., $\square \diamond a \wedge \square \diamond b$?
4. What are the implicit assumptions you are making on the environment for the validity of the proposed approach? Please clarify these explicitly
[4] - Manna, Z. and Pnueli, A., 1990, August. A hierarchy of temporal properties (invited paper, 1989). In Proceedings of the ninth annual ACM symposium on Principles of distributed computing (pp. 377-410).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: I do not believe that the limitations are adequately identified and acknowledged. Please refer to the weakness and questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful feedback and constructive comments!
**R1. Positioning in context of prior work**
We appreciate your suggestion to discuss the related work [1,2,3]. In particular, we will credit [1] for inspiring our approach to managing the logical composition of value functions. Briefly, our method differs by supporting $\omega$-regular LTL properties, which these prior works do not immediately support. Our approach provides equivalent expressivity in terms of logical composition for the transition edges compared to these prior works. Please refer to responses R3-R6 below.
**R2. Single Unique Accepting State**
- "However their approach appears to have a strong reliance on a single unique accepting state. An example of this would be the patrolling task $GF a \wedge GF b$. Here an accepting run would require the agent to visit both $a$, and $b$ infinitely often, but this would not be discoverable by the graph search algorithm described here."
We respectfully clarify that this is a misunderstanding of our technique. A Büchi automaton accepts an input if and only if it passes through an accepting state infinitely many times as it reads the input. Consider the converted graph representation of the Büchi automaton for the LTL task $GF w \wedge GF y$ depicted in Fig (a) of the global rebuttal. While there is just one accepting state, it only accepts trajectories in which the accepting state is reached infinitely often in a loop, meaning that the agent must visit the white zone ($w$) and the yellow zone ($y$) in Fig (b) infinitely often. This kind of infinite looping behavior $(wy)^\omega$ can be handled by our technique with a goal-conditioned agent capable of reaching $w$ and $y$. As illustrated in the global response, we also support multiple accepting states.
**R3. The type of edge transitions that can be accomplished by a goal-conditioned policy.**
- 3.1 Can we handle simultaneous satisfaction of multiple propositions?
Yes. Our approach supports simultaneous satisfaction such as $F(g_1 \wedge g_2)$. As illustrated in line 919, Sec F.3 in the supplementary material, to use a goal-conditioned policy $\pi$ to reach the overlapped goal space covered by $g_1$ and $g_2$, at any environment state $s$, we take an action $\arg\max_a{min(Q^\pi(s, g_1, a), Q^\pi(s, g_2, a))}$. Fig. (g) in the global rebuttal provides an example.
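As a toy illustration of this composition rule (the Q-values and the discrete action indexing here are hypothetical, purely for demonstration; the paper's actual policies operate on learned value functions over continuous actions):

```python
# Hypothetical Q-values at the current state s, one entry per discrete action.
q_g1 = [0.2, 0.7, 0.5]   # Q(s, g1, a) for a = 0, 1, 2
q_g2 = [0.6, 0.4, 0.5]   # Q(s, g2, a)

# Composition for reaching g1 AND g2 simultaneously:
#   argmax_a min(Q(s, g1, a), Q(s, g2, a))
composed = [min(a, b) for a, b in zip(q_g1, q_g2)]
action = max(range(len(composed)), key=composed.__getitem__)
# Action 2 wins here: it is moderately good for both goals, whereas
# actions 0 and 1 are good for only one of the two goals.
```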
- 3.2 Can we handle self-transition that requires maintaining some proposition as true until reaching a goal?
Yes. In our graph representation $G_\mathcal{B}$ of a Büchi automaton $\mathcal{B}$, a "self-transition" on a node $q$ describes the goal-related constraint $\phi$ that must be maintained until the agent can transition to a neighbor node of $q$ with the goal condition $\psi$. If $q$ is on the planned optimal path, the agent for $\mathcal{B}$ needs to use a goal-conditioned policy $\pi$ to ensure $\phi$ before accomplishing $\psi$. In the paper, we primarily focus on reach-avoidance, where $\phi = \bigwedge_k \neg g_k$ encodes regions in the goal space to avoid. At a current environment state $s$, when the value function $V^\pi(s, g_k)$ is greater than a threshold, we take a safe action $\arg\min_a {Q^\pi(s, g_k, a)}$ that moves the agent away from the dangerous zone $g_k$ (see line 908 for the formalization). Fig. (i) in the global rebuttal provides an example.
Dually, our strategy applies to cases where $\phi = \bigwedge_k g_k$ encodes goal regions that the agent must stay within before transitioning out from $q$. In such cases, if the value function $V^\pi(s, g_k)$ is less than a threshold, we can take an action $\arg\max_a {Q^\pi(s, g_k, a)}$ that encourages the agent to remain in $g_k$. As our navigation benchmarks do not support the evaluation of this feature, we plan to explore it in future work.
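The threshold rule above and its dual can be sketched in a few lines of Python; the threshold value, function name, and Q-value lists below are illustrative assumptions, not the authors' implementation:

```python
V_THRESHOLD = 0.3  # hypothetical threshold on V(s, g_k); not from the paper

def constrained_action(q_hazard, v_hazard, avoid=True):
    """Pick an action enforcing a self-transition constraint on goal region g_k.

    avoid=True  : phi includes NOT g_k -> move away when the value is high.
    avoid=False : phi includes g_k     -> stay inside when the value is low.
    Returns an action index, or None when the constraint is not active.
    """
    acts = range(len(q_hazard))
    if avoid and v_hazard > V_THRESHOLD:
        return min(acts, key=q_hazard.__getitem__)   # argmin_a Q(s, g_k, a)
    if not avoid and v_hazard < V_THRESHOLD:
        return max(acts, key=q_hazard.__getitem__)   # argmax_a Q(s, g_k, a)
    return None  # defer to the nominal goal-reaching action
```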
**R4. Training goal-conditioned policies**
Our training algorithms are illustrated in Sec. A of the supplementary material. We evaluated our goal-conditioned policies for ZoneEnv and Ant 16 rooms over 1000 rollouts. We randomly sample the initial agent state and the goal to reach in each rollout. The length of a rollout for ZoneEnv is 500 and for Ant 16 rooms is 1000. The goal-reaching success rate is 97.2% for ZoneEnv and 67.8% for Ant 16 rooms (evaluated over 3 trials of training). A typical failure case in Ant 16 rooms is shown in Fig. (o) in the global rebuttal, where the ant gets stuck at wall corners. Fig (p) highlights the usefulness of LTL instructions to guide the agent to explore a detoured path to the goal.
Indeed, goal-conditioned RL has made substantial progress in recent years. We recommend the following paper that surveys state-of-the-art algorithms for training competent goal-conditioned policies in high-dimensional continuous environments:
Minghuan Liu, Menghui Zhu, Weinan Zhang. Goal-Conditioned Reinforcement Learning: Problems and Solutions. CoRR abs/2201.08299 (2022)
**R5. Can this approach handle simultaneity? e.g. $F(a \wedge b)$?**
Yes. Please see R3.1.
**R6. Can this approach handle self transition that requires maintaining some proposition as true until a goal is reached e.g. $F b \wedge a U b$?**
Yes. Please see R3.2.
**R7. Can the graph search be adopted for recurrence type formulas from Manna and Pnueli's temporal hierarchy $(GF a \wedge GF b)$?**
Yes. Please see Fig (b) in the global rebuttal as an example.
**R8. What are the implicit assumptions you are making on the environment?**
We assume the atomic propositions in LTL properties *can only be goals* within the goal space of the underlying goal-conditioned policy e.g. colored zones in the ZoneEnv navigation benchmark. We do not allow other sources of atomic propositions e.g. external environment signals that are out of the agent's control. For example, our current algorithm does not apply when the agent needs to pursue different tasks based on an external signal. We will clarify this important assumption in the paper.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response
Comment: I apologise for missing the relevant sections in the appendix that address the problem of $\omega$-regular specifications beyond the simple ones described in the main text of the paper. I urge the authors to include at least one such example as part of the main paper. Further, I urge the authors to include the discussion of the prefix $p$ and the looping suffix $q^\omega$ as part of the main paper. Additionally, as the authors mentioned, [1] can be used to logically compose value functions almost as a plugin, and perhaps can be included in the main paper as well. Solving $F (a \wedge b)$ utilizes one of their composition operators, and the authors have already experimented with such specifications in the appendix. Finally, once the limitations on the subgoal representations of the propositions are included in the paper, my primary soundness concerns are alleviated. To me, addressing all of LTL was a major claim of the paper, and a result of major significance, hence the harsh rating.
The limitation of approximate handling of $\omega$-regular properties remains, but the authors are well aware of it from the rebuttals, and have provided examples in the appendix that are real-world relevant. While it may be heuristic, it is still improving the zero-shot transfer capability beyond prior approaches, and therefore makes a significant contribution.
One final limitation is that I don't understand how the method would perform if it were faced with an unsatisfiable task specification, e.g. $F (a\wedge b)$ where there are no regions of the state space where $a$ and $b$ overlap. Does it 'fail gracefully'? This can be added to the appendix if the authors agree that it would be valuable.
I am happy to update my score to an accept rating, and would also like to advocate for the paper to be accepted.
---
Reply to Comment 1.1.1:
Title: Thank you for your support
Comment: We appreciate your constructive feedback and your support for our paper!
We experimented with unsatisfiable task specifications. For properties such as $GF (a \wedge b)$, when the goal regions $a$ and $b$ are close but not overlapping, the agent's behavior mirrors that of $GF a \wedge GF b$, oscillating between $a$ and $b$. However, if these goal regions are far apart, our agent does not exhibit good-looking behavior. We will make sure to acknowledge this limitation in our paper and give concrete examples in the appendix. We will incorporate all of your suggestions into the paper, as we believe this will significantly improve its quality. | Summary: This paper considers the problem of learning to solve linear temporal logic (LTL) tasks in a Markov Decision Process (MDP). Given a fixed MDP, this is done by:
1. Pre-training a goal-conditioned policy to solve a uniform sampling of reach-avoid tasks,
where goals correspond to atomic propositions.
1. The input LTL sentence is translated into a Büchi automaton.
1. The Büchi automaton is transformed into a weighted graph.
- Weights are determined using the value function of the pre-trained policy.
1. A path is generated by solving a sequence of shortest path problems.
The approach is then experimentally validated against the LTL2Action method using
the prior work's domain and concept class.
---- update ----
After re-reading and being pointed to Appendix section D, it seems my major concerns are accounted for. What remains is the question of how to incorporate this into the main text. As such, I am increasing my score, erring toward accept.
Strengths: The approach tackles an important problem. Namely, learning to solve sparse
tasks represented in a formal specification language defined over infinite runs
of the system.
Weaknesses: 1. The proposed approach is ultimately heuristic and is susceptible to being
"catastrophically myopic." In particular, the greedy sequence of shortest
path problems is necessarily biased toward solutions that work for finite
horizons, but says nothing about the infinite time behavior, e.g., the "lassos"
generated by the sequence of shortest path queries. It is not hard to imagine constructing an adversarial Büchi automaton that uses a sequence of easy-to-reach accepting states to lead the agent into a long-term bad position.
1. The paper claims to address infinite horizon specifications, but then
compares against a regular language benchmark. This undercuts the stated
motivation. For example, all of the base-line problems have a finite
accepting prefix, e.g., as opposed to G(x -> F y).
1. The approach should be compared to hierarchical, meta RL, and compositional
RL works. For example, the graph approach seems very similar to [1] but adapted
to goal-conditioned policies.
[1] Jothimurugan, Kishor, Rajeev Alur, and Osbert Bastani. "A composable specification language for reinforcement learning tasks." Advances in Neural Information Processing Systems 32 (2019).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. How does the system generalize to infinite horizon queries?
1. When is the proposed heuristic guaranteed to work, and when can it fail arbitrarily?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: See weakness 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful feedback and constructive comments! We present our response to each of your concerns and questions below.
**R1. The proposed approach is ultimately heuristic and is susceptible to being "catastrophically myopic." In particular, the greedy sequence of shortest path problems is necessarily biased toward solutions that work for finite horizons, but says nothing about the infinite time behavior, e.g., the "lassos" generated by sequence of shortest path queries.**
As illustrated in the global rebuttal, our task planning algorithm considers both the shortest path $p$ to an accepting state and the shortest cycle $q$ from the accepting state. Please see Fig (j) and (k) in the global rebuttal as an example. For a complex LTL property $\varphi_1 \vee \varphi_2$ for Ant 16-room navigation in Fig (j), our algorithm picks $\varphi_2$ for the goal-conditioned agent to execute in Fig (k), as it deems $\varphi_1$ more costly, although the "lasso" in $\varphi_1$ is closer to the agent's initial position. We acknowledge that using a bounded $\omega$ overapproximates the true probability for optimal path selection $pq^\omega$, which is a limitation. However, we have the flexibility to use an arbitrarily large $\omega$ without incurring additional costs in planning.
**R2. When is the proposed heuristic guaranteed to work vs have arbitrary failures?**
Our task planning technique for temporal properties is based on a goal-conditioned agent trained for reachability properties. For an $\omega$-regular LTL property, while our technique has the flexibility to use an arbitrarily large $\omega$ in planning a path for LTL-satisfying runs, it can lead to good-looking bad policies, which induce trajectories that eventually fail to reach the desired accepting state but along the way have visited the accepting state arbitrarily many times (potentially prolonging until the heat death of the universe).
**R3. The paper claims to address infinite horizon specifications, but then compares against a regular language benchmark. This undercuts the stated motivation**
We included experimental results of evaluating our algorithm against $\omega$-regular LTL specifications in line 663, Sec. D of the supplementary material (as well as Fig. 14d) which is referred to in the submitted paper. We will incorporate these important experimental results in the main body of the revised paper. Please also see the additional evaluation results in Fig. 8 of the global rebuttal.
**R4. The approach should be compared to hierarchical, meta RL, and compositional RL works.**
We compared our work with DiRL [1], a state-of-the-art compositional RL algorithm for LTL properties. We used DiRL instead of SpecRL [2] as suggested by the reviewer because DiRL is a stronger baseline and empirically outperforms SpecRL on our benchmarks. **The experiments and results were illustrated in line 689, Sec. E of the supplementary material.**
The comparison is conducted on the Ant 16-room environment. The agent needs to navigate a Mujoco Ant in 16 rooms separated by thick walls (Fig. 3). We use 8 LTL specifications (line 700 Sec E.1) including 5 properties taken from DiRL. These tasks have increasing levels of difficulty as they require the agent to sequentially reach a growing number of sub-goals.
Similar to ours, DiRL leverages the compositional structure of a task specification to enable learning. However, there are some key differences. First, in DiRL, a unique low-level policy is learned for each sub-goal transition. Second, DiRL does not support $\omega$-regular LTL properties. Lastly, DiRL is not applicable to multi-task RL scenarios because the low-level policies are specific to a single environment setting and do not generalize across different environments. As a result, we trained a separate agent for each of the LTL specifications for DiRL. In contrast, our algorithm trains a single goal-conditioned agent and evaluates this agent over all 8 LTL specifications.
The results in Sec E.2 show that our agent is able to satisfy all specifications with ~90% success rate trained using 3e6 environment steps, whereas the DiRL method has to exercise 3e6 timesteps for each of the specifications to match the success rate of our approach. Our approach demonstrates immediate high success rates on arbitrarily new LTL specifications without the need for additional training. In contrast, DiRL requires retraining from scratch for new tasks. More comprehensive evaluation results are in Sec E.2 and E.3.
We also compared our approach with a state-of-the-art hierarchical RL algorithm, R-AVI [3]. We used the abstract graph of an LTL specification as input to R-AVI. We found that R-AVI does not scale to the complex LTL specifications for Ant 16-room navigation and can only achieve less than ~40% success rate for all of them when trained using 3e6 environment steps.
We did not compare our approach with DiRL and R-AVI using the colored ZoneEnv environment because this is a multi-task benchmark, which is out of the capability of the learning algorithm in DiRL and R-AVI.
We will move the comparison with DiRL and R-AVI to the main paper in a revised version.
[1] Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, and Rajeev Alur. "Compositional reinforcement learning from logical specifications." Advances in Neural Information Processing Systems 34 (2021).
[2] Kishor Jothimurugan, Rajeev Alur, and Osbert Bastani. "A composable specification language for reinforcement learning tasks." Advances in Neural Information Processing Systems 32 (2019).
[3] Kishor Jothimurugan, Osbert Bastani, and Rajeev Alur. Abstract value iteration for hierarchical reinforcement learning. International Conference on Artificial Intelligence and Statistics (2021).
**R5. How does the system generalize to infinite horizon queries?**
Please see R1 and the global rebuttal.
---
Rebuttal Comment 1.1:
Title: Reconsidering
Comment: Hi,
Thank you for the rebuttal. I will look into the appendix as that seems critical to answering my concerns.
I would urge the authors to fit this into the main text if possible, as it seems to have been a very common criticism.
---
Reply to Comment 1.1.1:
Title: Thank you for reconsidering
Comment: We appreciate the reviewer for reevaluating our paper. We promise that we will incorporate the important results regarding $\omega$-regular LTL properties, as highlighted in both the appendix and the rebuttal, into the main text. | Rebuttal 1:
Rebuttal: We greatly appreciate the valuable feedback and suggestions provided by the reviewers! We will begin by addressing the primary concern raised by the majority of the reviewers in the global rebuttal. We will address the concerns of each reviewer in the individual review responses.
### **How do we support $\omega$-regular LTL properties?**
Our technique can handle $\omega$-regular LTL specifications even though the underlying goal-conditioned agents have never seen such specifications during training time. ***We included experimental results of applying our algorithm to $\omega$-regular LTL specifications in the supplementary material line 663, Sec. D which is referred to in the submitted paper. We will incorporate these important experimental results in the main body of the paper.***
**1. Summary of the Task Planning Algorithm.** Our technique generates policies for $\omega$-regular LTL specifications based on a goal-conditioned RL agent and a learned goal value function as follows:
* We first convert $\varphi$ to a Büchi automaton $\mathcal{B}$, which is subsequently converted to a graph representation $G_\mathcal{B}$ using the technique illustrated in Sec 3.1.
* We associate edges on $G_\mathcal{B}$ with weight $w$ equal to the capability of the goal-conditioned agent to transition between the goals on the source and target nodes, estimated by the learned $\mathcal{V}$ value function. See the formalization of $\mathcal{V}$ in lines 185-188.
* We decompose $G_\mathcal{B}$ into strongly connected components (SCCs) using Tarjan’s algorithm.
* To find an optimal path on $G_\mathcal{B}$ for task execution to satisfy the LTL $\varphi$, we follow these steps:
* For each accepting state $s_a$ in a maximal SCC, we use Dijkstra's algorithm to find the shortest path from the initial state to $s_a$, denoted as $p$.
* Next, we apply Dijkstra's algorithm to find the shortest cycle from $s_a$ back to $s_a$ in the maximal SCC, denoted as $q$.
* The optimal path for the accepting state $s_a$ is $pq^\omega$ where $\omega \rightarrow \infty$ represents the number of times the shortest cycle is executed. The cost of the optimal path is calculated as $w(p) + \omega \cdot w(q)$ where $w(p)$ or $w(q)$ is the sum of the weights of the edges on $p$ or $q$. In the implementation, we use $\omega = 5$ to estimate the path cost.
* Finally, we select the optimal path on $G_\mathcal{B}$ as the least-cost path from the initial state to any accepting state of $G_\mathcal{B}$.
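The planning steps above can be sketched with a toy weighted-graph search. This is a simplified illustration, not the authors' implementation: it uses plain Dijkstra over a dict-of-lists graph, the edge weights stand in for the $-\log{\mathcal{V}}$ costs, and the SCC decomposition is omitted for brevity.

```python
import heapq

def dijkstra(graph, src):
    """graph: {node: [(neighbor, weight), ...]}. Returns shortest distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def shortest_cycle_cost(graph, node):
    """Cost of the shortest cycle through `node`: one outgoing edge plus the
    shortest path back to `node`."""
    best = float("inf")
    for v, w in graph.get(node, []):
        if v == node:
            best = min(best, w)  # self-loop
        else:
            back = dijkstra(graph, v).get(node, float("inf"))
            best = min(best, w + back)
    return best

def lasso_cost(graph, init, accepting, omega=5):
    """Approximate cost of the best lasso p q^omega: w(p) + omega * w(q)."""
    dist = dijkstra(graph, init)
    best = float("inf")
    for s_a in accepting:
        p = dist.get(s_a, float("inf"))
        q = shortest_cycle_cost(graph, s_a)
        best = min(best, p + omega * q)
    return best

# Toy Büchi-graph: q0 -> q1 (accepting), with a q1 -> q2 -> q1 cycle.
G = {"q0": [("q1", 1.0)], "q1": [("q2", 2.0)], "q2": [("q1", 3.0)]}
cost = lasso_cost(G, "q0", ["q1"])   # w(p) + 5 * w(q) = 1 + 5 * 5 = 26
```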
**2. Examples.** Please see the demonstrations in Fig. 8 of the attached PDF file where Fig (b), (c), (d), (e), (f), (k), (l) are trajectories produced by our trained goal-conditioned agents instructed by the corresponding $\omega$-regular LTL properties in the ZoneEnv and Ant 16-room environments respectively. We conducted 1000 evaluations for each of these $\omega$-regular LTL properties on randomly sampled environments to determine the task success rate. A task run is deemed successful if the loop from the accepting state on the optimal path chosen by our task planning algorithm can be consecutively executed 5 times within 2000 timesteps during our evaluation. Our success rate surpasses 90% for all these properties. ***We included more thorough experimental results in Sec. D of the supplementary material.***
**3. Justification.** The objective of our task planning algorithm is to find a policy $\pi^* = \arg\max_{\pi} \mathbb{E}_{\tau \sim \pi(\cdot \vert \cdot, \varphi)} \left[ \mathbb{1}[\tau \models \varphi] \right]$, which generates the maximum number of LTL-satisfying runs for a given LTL property $\varphi$. Our formalization of goal-conditioned RL uses a sparse binary reward function (Equation 2) - a reward of 1 is provided only when the specified goal is successfully achieved by the end of a training episode. In this setting, when the discount factor $\gamma \rightarrow 1$, the weight $w$ for each edge on $G_\mathcal{B}$, which is determined by the learned value function $\mathcal{V}$ (e.g. $w = -\log{\mathcal{V}(\cdot)}$), is inversely proportional to the (lower bound of the) probability of reaching the goal region represented by the target node from that represented by the source node of the edge. As such, graph search in the task planning algorithm seeks the optimal path as the one on which the agent is most likely to succeed. The main strength of our technique is its ability to adapt a goal-conditioned agent into one capable of achieving arbitrary LTL properties using atomic propositions from the goal space in a zero-shot manner. We acknowledge that the use of a bounded $\omega = 5$ overapproximates the true optimal probability, which is a limitation (even though we can use an arbitrarily large $\omega$ with no additional cost).
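The relationship between the $w = -\log{\mathcal{V}}$ edge weights and path success probabilities can be checked numerically; the per-edge probabilities below are made up purely for illustration:

```python
import math

# Hypothetical per-edge success probabilities (lower bounds estimated by V).
edge_probs = [0.9, 0.8, 0.95]

# With weights w = -log V, summing weights along a path corresponds to
# multiplying success probabilities, so a least-cost path under Dijkstra
# is a most-likely-to-succeed path.
weights = [-math.log(p) for p in edge_probs]
path_cost = sum(weights)
path_success_prob = math.exp(-path_cost)  # recovers the product 0.9 * 0.8 * 0.95
```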
Pdf: /pdf/6386178f2f8c47abc05eb4b27bab6ab21a7ea314.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Deciphering Spatio-Temporal Graph Forecasting: A Causal Lens and Treatment | Accept (poster) | Summary: The paper introduces CaST, a novel framework designed to address challenges in Spatio-Temporal Graph (STG) forecasting. CaST tackles issues related to temporal out-of-distribution and dynamic spatial causation by leveraging a causal lens and employing techniques such as back-door adjustment and front-door adjustment. Experimental results on real-world datasets demonstrate the effectiveness and practicality of CaST, outperforming existing methods while providing good interpretability.
Strengths: Novel design of CaST.
Thorough quantitative and qualitative analysis of the experiment results.
Weaknesses: There are too many design decisions made in CaST, making it hard to validate each individual component's contribution to the model's performance.
Missing complexity analysis.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Is the causal structure learned through a data-driven approach or based on heuristic reasoning?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: I am debating whether this is a limitation or not. Obviously the proposed method is composed of a lot of sophisticated design components. But maybe just considering causal structure in the time series data would be more compelling and easier to generalize.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your thorough evaluation of our paper and the feedback you provided. Thank you for recognizing the novelty in the design of our framework and the comprehensiveness of our analysis.
**[Weaknesses]**
**wrt complex design decisions.** Thank you for your feedback. In fact, our main contribution is to approach STG data from a causal perspective and use causal tools to address challenges in STG forecasting, as outlined in our first contribution in the Introduction and detailed in Section 3. While the framework may appear relatively intricate, it serves as a vehicle to implement our causality-focused approach, which we believe offers distinctive advantages and presents the core of our contribution. For the validation of these modules in CaST, we have carefully designed a comprehensive ablation study in Section 5 to test the impact of each component on overall performance. For instance, by removing the environment and entity features, we assess the effectiveness of our temporal disentangler in distinguishing these two features.
**wrt complexity analysis.** We apologize for not including this crucial part in our main text. We omit hidden dimensionality in the following analysis for simplicity. The complexity of the spatial module is $O(M^2)$, induced by the edge convolution operation (i.e., HL Deconfounder) [1], with $M$ denoting the number of nodes; the complexity of the temporal module (i.e., Env Disentangler) is $O(T^2)$, where $T$ represents the historical signal length. Generally, we have $T \ll M$, leading to $O(M^2)$ additional overhead over the original Backbone Encoder. We hope a revision including the above analysis can still be considered. Thank you.
[1] Edge representation learning with hypergraphs. NeurIPS 2021.
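As a hedged back-of-the-envelope illustration of the counting argument above (a sketch, not the paper's code; hidden dimensionality is omitted, as in the analysis): the edge-level convolution touches up to every node pair, so its cost grows as $O(M^2)$, while the temporal module scales as $O(T^2)$, and $T \ll M$ in typical STG settings.

```python
# Hypothetical operation counts behind the complexity claims above.
# Unit operations only; hidden dimensions omitted, as in the analysis.
def spatial_ops(num_nodes: int) -> int:
    # Edge convolution over a dense graph: up to M*(M-1)/2 edges -> O(M^2).
    return num_nodes * (num_nodes - 1) // 2

def temporal_ops(seq_len: int) -> int:
    # Pairwise interactions over T historical steps -> O(T^2).
    return seq_len * seq_len

# With T << M (e.g., T=24 steps, M=300 sensors), the O(M^2) spatial term
# dominates the additional overhead over the backbone encoder.
print(spatial_ops(300), temporal_ops(24))  # 44850 576
```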
**[Questions]**
- **Q.** Is the causal structure data-driven or heuristic?
**A.** Thank you for raising this question. For the CaST framework, the modules' weights derive from the edge convolution operation and can be trained through backpropagation. This indicates that CaST is a data-driven model. As for the SCM, after an extensive literature review, we introduced it specifically for STG data in our scenario, which is also the main contribution of our work.
**[Limitations]**
**wrt sophistication of design components.** Thank you for your thoughtful feedback. As noted in our response in the Weaknesses section, our primary contribution lies in introducing a causal perspective, with the novel deep learning framework serving as its implementation. Although CaST comprises multiple components, we believe each plays a vital role in addressing the challenges we've identified. Beyond the temporal causal effect, it is equally important to model the spatial one, given the strong spatial correlations in STG data. Thus, to address both the temporal OoD issue and dynamic spatial causation in STG forecasting, every component is essential.
Again, thank you for your feedback. We have refined our work based on your insights.
---
Rebuttal Comment 1.1:
Comment: Thanks for sharing your thought. I would keep the current grade.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your time and valuable comments! | Summary: The paper presents a novel framework, CaST, designed to address the challenges of temporal Out-Of-Distribution (OOD) issues and modeling the underlying dynamic spatial causation in Spatio-Temporal Graph (STG) forecasting. The authors construct a Structural Causal Model (SCM) to uncover the causal mechanisms of STG data, inspiring them to employ back-door and front-door adjustments to mitigate the confounding bias induced by temporal environment and spatial context, respectively. Moreover, they utilize the Hodge-Laplacian (HL) operator for edge-level convolution to capture the ripple effect of causation along space and time. Empirical results from experiments on three real-world datasets demonstrate that CaST outperforms existing methods while maintaining interpretability.
Strengths: 1. The authors provide an in-depth analysis of the STG forecasting problem, revealing the underlying generation mechanisms of STG data and identifying the key sources of spatio-temporal distribution shift through the lens of causality.
2. The authors offer an interpretation of the specialties of temporal and spatial confounders (i.e., temporal environment and spatial contexts), making the application of backdoor and frontdoor adjustments in CaST targeted and well-motivated.
3. Extensive experiments validate the model's superiority, including comparisons against state-of-the-art STGNNs, ablation studies, hyperparameter sensitivity analysis, and case studies with detailed visualizations of the model's ability to capture spatial causality and identify temporal environments.
4. The figures provide readers with an intuitive understanding of the main problem, key ideas, and compelling experimental results.
Weaknesses: 1. The concept of temporal disentanglement is not entirely novel. Although the specific designs of disentanglement headers for environment feature and entity feature may have some novelty, as shown in Figure 4(a), the authors did not adequately clarify this in the main content.
2. For the backdoor adjustment and corresponding temporal disentanglement module: a) The paper lacks a clear explanation regarding the connection between estimating $p(Y|do(X))$ via backdoor adjustment and temporal disentanglement, raising doubts about the effectiveness of the temporal disentanglement module. b) The proposed mutual information regularization cannot ensure that no causal features are leaked into the disentangled environment features $H_e$, possibly leading to the loss of essential causal information.
3. For the frontdoor adjustment and corresponding spatial context filtering module: a) The interpretation of the mediator variable $\hat{H}_i$ is confusing. The authors believe that $\hat{H}_i$ should be "a node representation containing only information propagated based on genuine causation within their spatial context". However, the spatial context $C$ is a confounder, and a mediator containing information from the confounder cannot satisfy the frontdoor criterion due to a backdoor path from the mediator to the label $Y,$ i.e., $\hat{H}_i \leftarrow C \rightarrow Y.$ b) The paper lacks a sufficient explanation regarding the connection between estimating $p(Y|do(X))$ via frontdoor adjustment and spatial context filtering, raising doubts about the effectiveness of the spatial context filtering. c) The reviewer fails to find an explanation for why using the HL operator on edge-graph can learn causation while other GNNs like GAT cannot.
4. The reference [1] addressed the spatio-temporal distribution shift in dynamic graphs. Since the spatio-temporal graph is a special type of dynamic graph, the authors should compare CaST with the approach described in [1].
5. The ablation studies are insufficient. Validating the fact that **w/o Env** and **w/o Ent** can cause performance degradation is unnecessary, as it is a natural result of eliminating information predictive of the label. Additionally, the ablation studies on loss functions corresponding to the two disentanglers are missing.
6. The visualization of dynamic spatial causal relationships in section 5.2 is insufficient since the authors do not check whether other SOTA STGNNs can also capture these relationships.
7. The model's sensitivity to the hyperparameters $\alpha$ and $\beta$, which control the importance of different terms in the final loss function, should be discussed.
[1] Zeyang Zhang, et al. Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift. In NIPS 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why do the authors assume that temporal environment $E$ and spatial context $C$ are independent? For example, the states of neighboring nodes of node i will also be influenced by the change of $E$.
2. How do the proposed modules output statistical estimands that are required to calculate $p(Y|do(X))$ in Eqn. (1) and (2)?
3. Why do the authors focus on learning $p(Y|do(X))$ rather than $p(Y|do(X), E, C)?$ Although $E$ and $C$ are confounders, they can also provide additional information for accurate prediction.
4. In ablation studies, why **w/o Edge**, i.e., replacing 'GCN with causal scores' with GCN, causes such a performance drop in PEMS08? The STGCN also adopts GCN for spatial message passing, but it achieves MAE 18.60 in Table 1.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback on our paper. We appreciate the time you've taken to provide valuable insights. Please find our responses below.
**[Weaknesses]**
**wrt the novelty of temporal disentanglement.** Thank you for your feedback. We acknowledge that the concept of temporal disentanglement has been explored in previous work. We have highlighted the novelty of our implementation of the temporal disentangler module in the revision.
**wrt backdoor adjustment and temporal disentanglement.**
- a) We appreciate your attention to this aspect of our work. The relationship between estimating $P(Y|do(X))$ via backdoor adjustment and temporal disentanglement is detailed in Section 4 (please refer to Lines 167-170).
- b) Regarding your concern about the potential loss of essential causal information, our ablation study in Section 5.2 addresses this issue. By removing either the environment or entity features for prediction, our results show that both components offer valuable predictive information, and that the entity contains more of the essential information that aids forecasting. This aligns with the aim of our temporal disentangler design: we expect the entity to capture more local and vital information, such as periodicity, while the environment captures more global information, such as the overall trend.
**wrt frontdoor adjustment and spatial context filtering.**
- a) Thank you for bringing up this insightful query. In our model, $\hat{H}_i$ acts as an estimate for $X^*$, derived from the general causal relationship, while $X^*$ itself is a surrogate that mimics the deconfounded $X$. As depicted in Figure 2c, when applying the frontdoor adjustment by introducing $X^*$, $X^*$ is a cause of $Y$. Consequently, all backdoor paths from $X^*$ to $Y$ can be blocked (i.e., there are no unmeasured confounders, e.g., $C$, between $X^*$ and $Y$). As a result, the relationships $X^* \leftarrow C \rightarrow Y $ and $\hat{H}_i\leftarrow C \rightarrow Y $ do not exist.
- b) The relationship between estimating $P(Y|do(X))$ via frontdoor adjustment and spatial context filtering is detailed in Section 4 (please refer to Lines 171-177).
- c) The HL operator was carefully chosen to address the ripple effect of causation, not merely the causation itself. This choice aligns with the second challenge we aim to tackle in our work, as outlined in the Introduction, Lines 40-46, and Figure 1b. The effectiveness of the HL operator lies in its ability to perform convolution on the graph's edges rather than nodes. The ablation study in Table 2 demonstrates this.
**wrt a related model for discussion.** Thank you for bringing the related work DIDA to our attention. It indeed offers significant insights. While both CaST and DIDA address spatio-temporal distribution shifts, they focus on different tasks: CaST on STG forecasting, and DIDA on link prediction. The distinct nature of these tasks makes a direct comparison less meaningful. Nevertheless, we recognize the significance of DIDA and will incorporate this discussion in Section 2.
**wrt hyperparameters sensitivity in the loss function.** Thanks for commenting on this. We've provided additional experiments on the AIR-BJ dataset (see the table below). We find that the $\alpha$ and $\beta$ weights in the loss function noticeably influence the model's performance: the settings $\alpha=0.5$, $\beta=1.5$ and $\alpha=1$, $\beta=1$ yield the lowest MAE and RMSE, suggesting an optimal trade-off.
| $\alpha$-$\beta$ | MAE | RMSE |
|:-------:|:-----:|:-----:|
| 0.5-0.2 | 23.04 | 35.51 |
| 0.5-0.5 | 23.06 | 35.41 |
| 0.5-1.0 | 23.06 | 35.52 |
| 0.5-1.5 | 22.80 | 34.79 |
| 1.0-0.2 | 22.93 | 35.23 |
| 1.0-0.5 | 22.96 | 35.35 |
| 1.0-1.0 | 22.81 | 34.91 |
| 1.0-1.5 | 22.95 | 35.33 |
| 1.5-0.2 | 22.96 | 35.39 |
| 1.5-0.5 | 22.96 | 35.27 |
| 1.5-1.0 | 22.96 | 35.33 |
| 1.5-1.5 | 22.92 | 35.28 |
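To make the role of these weights concrete, here is a minimal hedged sketch of how $\alpha$ and $\beta$ might combine the loss terms (the names `l_pre`, `l_cod`, `l_mi` follow the paper's notation for the prediction, codebook, and mutual-information losses; the numeric values below are illustrative placeholders, not results).

```python
# Hedged sketch: combining the loss terms with the weights swept in the
# table above. l_pre / l_cod / l_mi mirror the paper's L_pre, L_cod, L_mi;
# the numeric values are placeholders for illustration only.
def total_loss(l_pre: float, l_cod: float, l_mi: float,
               alpha: float, beta: float) -> float:
    return l_pre + alpha * l_cod + beta * l_mi

# Two of the best-performing settings reported in the table above:
print(total_loss(1.0, 0.2, 0.1, alpha=0.5, beta=1.5))
print(total_loss(1.0, 0.2, 0.1, alpha=1.0, beta=1.0))
```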
**wrt visualization of spatial causal relationships.** Thank you for pointing that out. Due to time and space constraints, we could not complete this during the rebuttal period, but we will certainly include such comparisons in the revision.
**[Questions]**
- **Q.** Why do the authors assume the independence of temporal and spatial factors.
**A.** Thank you for raising this valuable question. Please kindly see our response to Reviewer u4Nz’s Weaknesses Section.
- **Q.** How modules output statistical estimates.
**A.** Thank you for your question. As outlined in Section 4, our framework indeed implements both Eq.1 and Eq.2. For the former, we employ the Environment Disentangler to isolate environmental features from the input data, and an Environment Codebook to categorize these environments, fulfilling the requirements of Eq.1. As for Eq.2, we use the HL Deconfounder to discern the causal relationships between nodes, assisting in measuring the causal influence of $X$ on $X^*$, thereby meeting the essential criteria of Eq.2.
- **Q.** In ablation studies, why w/o Edge causes such a performance drop in PEMS08?
**A.** Referencing Line 311, 'w/o Edge' excludes the use of causal scores to guide spatial message passing, rather than replacing the process with GCN. This omission naturally leads to a substantial performance drop. We also tested various other methods for learning the causal scores for spatial message passing (Table 2), whose results are only marginally worse than those of edge convolution.
Thank you again for your valuable feedback. We have revised our manuscript based on your feedback.
---
Rebuttal Comment 1.1:
Comment: The rebuttal is acknowledged and I would like to keep the score
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and for your thoughtful feedback throughout the review process. | Summary: The paper introduces a model for out-of-distribution prediction in spatio-temporal data. The confounding effects are decoupled in spatial and temporal contexts, and treated with frontdoor and backdoor adjustments, respectively. Empirical evidence shows improved performance.
Strengths: - The tackled problem is extremely relevant.
- To my knowledge, the proposed model consisting of a combination of edge-level filtering, back-door, and front-door adjustment is novel and sound.
- The achieved performance is remarkable.
Weaknesses: - It's unclear to me to what extent the considered assumption of decoupled spatial and temporal environments is reasonable. For instance, the weather (mentioned in line 124) is related to both time and space.
- The experimental setup is not ideal in my opinion.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: ### Questions:
- Line 282: Most of the literature considered a different number of steps (see, eg, DCRNN and MTGNN). Although different choices are valid as well, considering the same setting provides a means to assess reproducibility and allows a broader comparison of the methods. Is there a particular reason for not considering these settings?
### Suggestions:
- The top and bottom halves of Fig. 3 referred to in Sec. 4 are not easily identified in the figure. I suggest highlighting them, e.g., with boxes or shaded areas.
- Line 211: This is guaranteed only if the KL divergence is 0. I suggest rephrasing it.
- The causal strength constitutes an important element. I suggest providing a proper definition and discussion about it.
- Other methods in the literature are generally winning over the methods considered in the paper (see e.g. [https://arxiv.org/abs/2005.11650](https://arxiv.org/abs/2005.11650) and the reference therein). I suggest considering some of them, like Graph WaveNet [48] which is also a quite established baseline from 2019.
### Typos
- Line 162: Eq. 8.
### After rebuttal
I have read the author's rebuttal which has addressed all the raised points.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review of our submission. We appreciate the insights and constructive feedback you have provided. We are grateful for your acknowledgment of the relevance, novelty, and performance of our work. Below, we address your comments in a point-by-point manner.
**[Weaknesses]**
**wrt assumption of decoupled spatial and temporal environments.** Thank you for raising a valid point. Our assumption of treating space and time separately aligns with numerous mainstream ST models, such as GraphWaveNet [1] and STGCN [2]. This treatment streamlines computation and reduces memory consumption, making it practical for real-world applications. Notably, performance is not compromised relative to models such as STSGCN [3], in which space and time are treated jointly. Based on these efficiencies and results, we adopted this assumption for our model.
[1] GraphWaveNet for Deep Spatial-Temporal Graph Modeling. IJCAI 2019. (citation: 1,100+)
[2] Spatio-temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. IJCAI 2018. (citation: 2,500+)
[3] Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. AAAI 2020.
**wrt experimental setup.** In the context of air quality prediction, forecasting 12 steps (namely 12 hours) is not practical since citizens usually plan their journey one day ahead. Therefore, we adopted a popular setting from air quality forecasting literature [4], i.e., using the previous 24 steps to predict the next 24 steps. We maintained this setting for PEMS08 for consistency. Furthermore, such longer-term forecasting is more challenging, which poses new hurdles to the prediction model.
[4] Airformer: Predicting nationwide air quality in china with transformers. AAAI 2023.
**[Questions]**
- **Q.** Reason for experiment setting?
**A.** Please kindly see our response to the Weaknesses Section.
**[Suggestion]**
**wrt Figure 3 enhancements.** Thanks for your suggestion. In fact, one of the co-authors raised this issue before paper submission. We have tried to improve the figure but have not yet managed to make it both clear and visually appealing. We will do our best to improve this figure in the revision. Thank you.
**wrt KL divergence clarification.** Thank you for pointing out this problem. We've revised the text to: "By minimizing the mutual information, the overlap in information between the environment and entity representations decreases. Only when this value reaches zero is each representation guaranteed to possess solely self-contained information."
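As a hedged, generic illustration of the point being clarified here (a toy Gaussian example, not the paper's MI estimator): for two jointly Gaussian features with correlation $\rho$, the mutual information is $I = \tfrac{1}{2}\ln\frac{1}{1-\rho^2}$, which vanishes only when $\rho = 0$, so only an exactly zero value certifies that the two representations share no information.

```python
import math

# Toy sketch (not the paper's estimator): mutual information between two
# jointly Gaussian variables with correlation rho. It equals zero only
# when rho is exactly 0, i.e. only a value of zero guarantees that the
# two representations carry no overlapping information.
def gaussian_mi(rho: float) -> float:
    return 0.5 * math.log(1.0 / (1.0 - rho ** 2))

print(gaussian_mi(0.0))  # 0.0: fully disentangled
print(gaussian_mi(0.5))  # small but nonzero: residual overlap remains
```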
**wrt definition on causal strength.** Thank you for your suggestions. We’ve defined ‘causal strength’ as the magnitude of the causal effect between a cause and its outcome. It measures how alterations in one variable directly impact another. A stronger causal strength shows a clear effect, while a weaker one suggests a less obvious relationship. We've included this definition in our revised manuscript.
**wrt considering other methods.** Thank you for your suggestion. Given the limited time, we've assessed Graph WaveNet [1] on two datasets (i.e., PEMS08 and AIR-BJ) for a broader comparison, as shown in the table below.
| Model | PEMS08 (MAE) | PEMS08 (RMSE) | AIR-BJ (MAE) | AIR-BJ (RMSE) |
|---------------|-----------------|----------------|----------------|----------------|
| GraphWaveNet | 16.94 ± 0.43 | 26.70 ± 0.68 | 23.48 ± 0.43 | 36.21 ± 0.69 |
| CaST (ours) | **16.44** ± 0.10| **26.61** ± 0.15| **22.90** ± 0.09| **34.84** ± 0.11|
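For reference, the metrics in the table above are the standard ones; a minimal sketch of MAE and RMSE follows (generic definitions, not the paper's evaluation code).

```python
import math

# Generic MAE / RMSE definitions, matching the metrics reported in the
# comparison table above (illustrative only).
def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

print(mae([1.0, 2.0, 3.0], [1.0, 3.0, 5.0]))   # 1.0
print(rmse([1.0, 2.0, 3.0], [1.0, 3.0, 5.0]))  # ~1.29
```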
**wrt Typo in Line 162.** Thank you for kindly bringing this to our attention. We have rectified this typo in our revision.
Thank you again for your detailed review and insightful feedback. We have carefully addressed and incorporated your suggestions into our revised manuscript. We believe the remaining concerns are not fundamental technical issues. Considering that the other three reviewers agree that our paper has good merits, such as satisfactory novelty and comprehensive evaluation, we sincerely hope that you could reconsider our score. Thank you so much!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing all my concerns.
To follow up:
> **wrt assumption of decoupled spatial and temporal environments.**
Implementation-wise, I agree that it's convenient. However, I am still not fully convinced that this assumption is reasonable for a model meant to be causal -- e.g., due to the weather (line 124).
> **wrt experimental setup** [...] We maintained this setting for PEMS08 for consistency [...]
In my opinion, consistency between traffic and air-quality problem setups is not needed. While I see the importance of considering more challenging scenarios, keeping some consistency wrt the literature would have been beneficial for credibility and reproducibility.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the follow-up comments.
**wrt assumption of decoupled spatial and temporal environments.** We appreciate your agreement on the convenience from an implementation standpoint. In the context of our paper regarding this assumption, we acknowledge that certain factors, like weather, can possess both spatial and temporal attributes simultaneously. However, our focus isn't on specific factors per se, but on the impact these factors exert on specific objects — not on the weather as a generalized concept, but its state at a particular location and time. Additionally, there are factors that change only spatially or only temporally, e.g., road networks and working days. Moreover, in the SCM we proposed, the $E$ and $C$ aren't concrete sets with explicitly enumerated factors. Instead, they act as latent, generalized variables. Their role is to encapsulate the broad temporal or spatial effects on $X$ and $Y$, rather than to detail every distinct influencing factor. With this understanding, we believe our assumption about the independence of spatial and temporal effects holds significance. We appreciate your feedback and apologize for any lack of clarity in our paper and previous response.
**wrt experimental setup.** We sincerely appreciate your feedback. Moving forward, we will strive to strike a balance between practical application scenarios and aligning with the reproducibility standards set by existing literature. Thank you. | Summary: This paper studies the problem of Spatio-Temporal Graph forecasting under the lens of causal treatments. The authors proposed a framework consisting of two major components to mitigate the commonly seen limitations for spatial-temporal GNNs, i.e., 1) the backdoor environment disentanglement block, which models the temporal environmental changes, and 2) the Hodge Laplacian deconfounder capturing the spatial context. The proposed framework is applied in the air and traffic flow datasets to study spatial-temporal interactions.
Strengths: 1. [Originality/Significance] The framework is proposed to deal with the limitation of the current STGNN models, i.e., the out of distribution issue and dynamic spatial causation. This paper’s originality comes from a novel combination of causal back-door treatment for temporal components, and a Hodge Laplacian decoupling layer for the spatial contexts.
1. [Quality] The overall experiments presented in this paper for supporting the proposed CaST pipeline are quite nice. It contains different aspects such as ablation study on the core components of the model, which can provide deep insight into the model.
1. [Clarity] The paper is well-written, with detailed descriptions starting from the background, STG data generation causal graph, causal treatment formulation, to each building block in the CaST pipeline.
Weaknesses: 1. From Appendix D (L646), it looks like the Hodge Laplacian used here is the down Hodge Laplacian $\partial_1^\top\partial_1$, rather than the full Laplacian (because $\partial_2$ is set to zero here). I would suggest mentioning this in the main text, because this is a big assumption to make. Additionally, there is some existing work using Hodge Laplacian on graphs (e.g., [A]), I would suggest citing this paper for completeness.
1. Related to #1, after applying the edge filter you created, the filtered edge signal will be divergence free [B,C] (due to the low frequency/null space of the down Laplacian being the curl or harmonic flows). This implies that any gradient edge signal (i.e., a flow $x\in\mathbb R^{|E|}$ that can be expressed as $x = \partial_1 y$ for some node function $y \in \mathbb R^{|V|}$) will very likely be filtered out after the convolutional layer, suggesting that the HL deconfounder can learn more information when the flow is divergence free (compared with a curl-free flow). Given that the traffic flow is incompressible (thus divergence free) while air (PM2.5) flow is not, it might also suggest why we see huge drop in performance for the ablation study (Figure 5a) when removing edge signal on the PEM508 (traffic flow) dataset, compared with the AIR_BJ (air flow). I would suggest discussing this assumption/limitation in more detail in the main text.
---
[A] Schaub, Michael T., Austin R. Benson, Paul Horn, Gabor Lippner, and Ali Jadbabaie. “Random Walks on Simplicial Complexes and the Normalized Hodge 1-Laplacian.” SIAM Review 62, no. 2 (2020): 353–91.
[B] Chen, Yu-Chia, Marina Meilă, and Ioannis G. Kevrekidis. “Helmholtzian Eigenmap: Topological Feature Discovery & Edge Flow Learning from Point Cloud Data.” ArXiv:2103.07626 [Stat.ML], March 13, 2021. https://arxiv.org/abs/2103.07626v1.
[C] Schaub, M. T., and S. Segarra. “Flow Smoothing And Denoising: Graph Signal Processing In The Edge-Space.” In 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 735–39, 2018. https://doi.org/10.1109/GlobalSIP.2018.8646701.
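For readers less familiar with the operator discussed in weakness #1, here is a minimal hedged sketch (a toy triangle graph in NumPy; a generic illustration of the down Hodge 1-Laplacian, not the paper's implementation, and assuming the node-by-edge incidence convention for $\partial_1$): cyclic, divergence-free flows lie in its null space, while gradient flows induced by node potentials do not.

```python
import numpy as np

# Toy sketch of the down Hodge 1-Laplacian L1_down = d1^T d1 for a
# triangle graph with oriented edges e0=(0->1), e1=(1->2), e2=(0->2);
# d1 is the node-by-edge incidence matrix.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])
L1_down = d1.T @ d1  # 3x3 operator acting on edge signals

# The cyclic (divergence-free) flow around the triangle is annihilated:
curl_flow = np.array([1, 1, -1])  # e0 + e1 - e2 circles the triangle
print(L1_down @ curl_flow)        # [0 0 0]

# ...whereas a gradient flow induced by a node potential y is not:
y = np.array([0.0, 1.0, 3.0])
grad_flow = d1.T @ y              # edge-wise differences of y
print(L1_down @ grad_flow)        # nonzero in general
```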
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Related to the weakness #1, have you tried/considered constructing a $\partial_2$ from e.g., a clique-complex (filling all triangles in the edges), and see if there is performance gain by using the full up and down Hodge Laplacian?
1. In L275, is there any specific reason to regularize only the scaling $\beta$ of $\mathcal L_{mi}$ rather than $\mathcal L_{cod}$? I understand that there is a regularization parameter $\alpha$ in Eq. (6), but how do you make sure the first term of $\mathcal L_{cod}$ is comparable with the log-probability ($\mathcal L_{pre}$)?
1. In L745-750 of Section G, the authors mentioned the edge filter being computationally intensive. If there is space, I would like to see an empirical comparison on the runtime.
1. How should I correctly understand the removal of the Environment codebook in the ablation study (Figure 5a)? Specifically, it seems like the gain for adding Env feature is not huge, does it mean that the environment disentangler is not disentangle the entity from environment well enough (so that some “environmental features” are still presented in the entity features)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed extensively the limitations as well as the social impact in the Appendix, therefore, I do not have any other aspects to add. While I appreciate the detailed discussions in Section G, I believe that it will improve the paper even more if the authors can at least have a brief overview (3-5 sentences) of the limitation/social impact in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank you for the time and effort put into reviewing our submission. Your feedback is invaluable and helps enhance the quality of our paper. We appreciate your acknowledgment of the originality, quality, and clarity of our work. Below, we respond to your comments point-by-point.
**[Weaknesses]**
**wrt the down HL and related work.** We sincerely thank you for the suggestions. We have updated the main text to include this assumption, ensuring clarity for readers. As for the existing work [A], we did cite it for completeness in our revision. Thanks.
**wrt the divergence-free property.** Thank you for your insightful comment. After investigating related literature in traffic flow theory, we agree that the traffic dataset is incompressible and divergence-free. In this case, message passing is predominantly driven by causal intensity. Conversely, PM$_{2.5}$ in the air quality datasets is compressible [1]. This introduces two challenges: first, discerning the real causal score becomes difficult due to potential influences from multiple stations, unlike the explicit relationships between road pairs (i.e., up/downstream influences) in traffic data; second, the vast distances between stations may weaken spatial correlations. We are glad to add the above discussion to our revision, as it considerably strengthens the theoretical grounding of our model.
[1] Hydrodynamic analysis of compliant foil bearings with compressible air flow[J]. J. Trib., 2004, 126(3): 542-546.
**[Questions]**
- **Q.** Have you tried constructing a $\partial_2$?
**A.** Thank you for raising this insightful question. We opted against constructing a $\partial_2$ for three primary reasons: (1) our emphasis on primary interactions through causation ripple effects on edges; (2) the lack of explicit meaning for triangles in our task; and (3) reduced computational load. However, for applications such as molecular structure modeling, where triangles (e.g., benzene rings) hold significance, a well-defined $\partial_2$ is essential. In light of your suggestion, we believe it would be valuable to explore the impact of constructing $\partial_2$ on our model's performance in future work.
- **Q.** Why is only scaling $\mathcal{L}_{mi}$?
**A.** Thank you for your inquiry. We chose not to scale $\mathcal{L}_{cod}$ because its magnitude aligns closely with that of the first term, following the work [2].
- **Q.** The computational intensity and runtime comparison?
**A.** We appreciate your insightful comment and apologize for any confusion caused by the lack of clarification in our paper. Indeed, the computational intensity stems from constructing the higher-order (edge) graph from the original dataset, which can be done once in the preprocessing stage. In other words, it adds no extra computational cost to the training phase, so its impact on overall training efficiency is negligible. We have followed your advice and clarified this part in our revision.
- **Q.** Does the slight gain from the Env feature suggest issues with the temporal disentangler?
**A.** Thank you for your insightful question. In designing the temporal disentangler, our intent is to distinctly identify the entity feature, capturing nuances of time series dynamics like periodicity, and the environment feature, aimed at reflecting global trends. As we had foreseen, the removal of the entity causes a significant decrease in performance, while the removal of the environment yields a subtler drop. This aligns precisely with the outcomes presented in our ablation study (see Figure 5a).
[2] Neural discrete representation learning. NeurIPS 2017.
**[Limitations]**
**wrt a brief overview of the limitations/social impact in the main text.** We appreciate your suggestion and have added a brief overview of them correspondingly in our revision.
Once again, thank you for your constructive feedback. We hope our revisions and clarifications address your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed response!! I have no further comments/questions. I am confident that this is a good paper for the NeurIPS community, so I will keep the score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your constructive feedback and valuable comments, which really contributed to the enhancement of our manuscript. Thank you very much!! | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our sincere gratitude to all the reviewers for their thorough evaluation and constructive feedback on our manuscript. Your insights have been invaluable in enhancing the quality and clarity of our work. We have made several revisions accordingly to address the concerns raised:
- **Enhanced Clarifications**: We have enhanced our manuscript with more in-depth explanations where necessary, such as an explanation of ablation studies via a divergence-free perspective and a detailed complexity analysis.
- **Additional Experiments**: Based on the feedback, we have conducted additional experiments to further validate the effectiveness of our proposed model. These include an ablation analysis focusing on hyperparameters within the loss function and a comparison with Graph WaveNet [1].
- **Clearer Definitions**: We have added clear definitions for terms like 'causal strength' to ensure readers have a comprehensive understanding of our method.
- **Expanded Literature Discussion**: We have expanded our discussion on related works, like [2, 3]. This helps in positioning our work in the broader context of the field.
We believe that these revisions have significantly improved our manuscript. We hope that our responses and the changes made address the concerns of the reviewers adequately.
Once again, thank you for your time and effort in reviewing our work. We look forward to your continued feedback.
Best regards,
Authors
**Reference**
[1] GraphWaveNet for Deep Spatial-Temporal Graph Modeling. IJCAI 2019.
[2] Random Walks on Simplicial Complexes and the Normalized Hodge 1-Laplacian. SIAM Review 62, no. 2 (2020): 353–91.
[3] Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift. NeurIPS 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Understanding the Dynamics of Gaussian-Stein Variational Gradient Descent | Accept (poster) | Summary: This paper focuses on the theoretical understanding and algorithmic contributions of Gaussian-Stein Variational Gradient Descent (Gaussian-SVGD) for Gaussian Variational Inference (GVI). The paper discusses the dynamics of Gaussian-SVGD and provides convergence rates in both mean-field and finite-particle settings when the target is Gaussian. It also shows that Gaussian-SVGD converges to the best Gaussian approximation to the target in KL-divergence for non-Gaussian targets. The authors propose two algorithmic frameworks for density-based and particle-based implementations of Gaussian-SVGD, which encompass various GVI algorithms proposed in the literature as special cases. The Gaussian-SVGD, and other related algorithms, are compared against one another on three examples: Gaussian target, Bayesian logistic regression, and a Gaussian mixture model. The authors highlight future directions for research, including establishing convergence results for particle-based Gaussian-SVGD algorithms under log-concave and more general targets, and exploring acceleration techniques.
Strengths: The paper is very well written and the mathematical content is clearly articulated. The authors have done a good job of relating this work to other similar works from the literature. The paper is well-structured and the authors' particular contributions to this line of research are clearly outlined.
The technical detail, as far as I can see, is correct and the contribution of this work to the ML community, in particular the new convergence rates, are of significant interest. The paper is closely linked to other related works but is still highly original.
The paper provides a nice balance between establishing important theoretical results and providing a usable algorithm for users to implement in practice.
Weaknesses: This is a nice piece of work and there are no areas where I can see significant weaknesses. However there are a few things that the authors may wish to consider to improve their paper:
* Starting from a very simple question, "Why should I use Gaussian-SVGD?" I think the authors have skipped some of the motivation for this work. If you know that the target is Gaussian, then of course you would just fit a Gaussian to it and would not require a particle-based variational inference approach. If the target is non-Gaussian, then why would you use Gaussian-SVGD over standard SVGD?
* It is not clear why regularized SVGD is introduced. What is the motivation here? Is it simply to show that you can derive a Gaussian-SVGD algorithm with the regularized Stein metric?
* The kernel $K(\cdot,y)$ is introduced in Def 2.2 without being defined. Additionally, I think in eq. (1) you need to replace $\rho$ with $\rho_t$.
* The simulation section is unfortunately a bit weak compared to other similar papers. The authors only consider three simple models, and the details are largely glossed over (e.g., what is $d$? How many particles? etc.). The authors seem to have run out of space and only provide a short simulation study for the logistic regression example. It would be better to include a more detailed simulation study in the main paper and to consider more challenging models.
* Additionally, on the point of experiments, it is a bit surprising that SVGD is not also included in this list.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * This sort of relates to my previous question about the motivation of the approach. It is well-known that SVGD does not scale well to high-dimensional target distributions. Is it possible to show, either theoretically (i.e. results like Theorem 3.7) or empirically, that Gaussian-SVGD is better than SVGD when $d$ is large?
* Similarly, what is known about how many particles are needed compared to SVGD?
* Particle-based methods are preferred over standard variational methods which fit a Gaussian distribution to the target because of their nonparametric nature. In the experiments section, it seems like the Gaussian mixture model would have been the more interesting example to include in the paper, particularly in the context of how good the approximation to the target is compared against SVGD. Could this be included?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper could discuss more the theoretical limitation of this work. However, from an ethical perspective, I do not believe that this paper touches on issues that would have a negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed review and incisive questions.
1.**About why we should use Gaussian-SVGD; "If the target is non-Gaussian, why would you use Gaussian-SVGD over standard SVGD?"**:
We interpret the question more as **when** we should use Gaussian-SVGD rather than **why**. It is important to note that **standard SVGD (e.g., using a bilinear or RBF kernel) does not solve GVI**. On one hand, if our goal is to find the best Gaussian approximation, we should no doubt use Gaussian-SVGD instead of SVGD. Furthermore, if the goal is to estimate the mean and variance of a target distribution, Gaussian-SVGD is still preferred because standard SVGD can be subject to significant error in estimating the covariance, especially when the dimension is high [B].
In contrast, as shown in [A], for a class of log-concave targets, samples generated via GVI (i.e., fitting a Gaussian approximation to the target) provide accurate estimators of the mean vector and the covariance matrix, and Gaussian-SVGD is essentially a deterministic algorithm to perform GVI efficiently and accurately.
For a large class of Bayesian inference problems, thanks to the Bernstein–von Mises theorem, the posterior distribution is approximately Gaussian in the limit of large samples under appropriate regularity conditions. In fact, there is an array of works providing **stochastic** algorithms (which are based on randomized discretizations of the BW gradient flow) for solving GVI (see, e.g., [17, 36]), and [55, 68, 70, 57] show the improved practical performance of GVI on various practical problems. We show in this work that there is a surprising **deterministic** discretization (BWPF) of the BW gradient flow if we aim to solve GVI.
Furthermore, when one refers to standard SVGD, one also needs to specify the choice of kernel. In a way, our work suggests a principled way of selecting the kernel (which could be one of the bilinear kernels) in standard SVGD for provably estimating the posterior mean and covariance for log-concave targets.
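For concreteness, the GVI objective that Gaussian-SVGD targets is the KL divergence from a Gaussian candidate to the target; when the target is itself Gaussian, this divergence has a well-known closed form. A minimal numerical sketch of ours (the helper `kl_gaussians` is our own illustration, not code from the paper):

```python
import numpy as np

def kl_gaussians(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) in d dimensions."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)          # trace term
        + diff @ cov1_inv @ diff           # Mahalanobis term
        - d                                # dimension offset
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))  # log-det ratio
    )

# identical Gaussians: divergence is zero
print(kl_gaussians(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))
```

Minimizing this quantity over the candidate's mean and covariance for a fixed Gaussian target recovers the target exactly, which is the Gaussian-target setting analyzed in the paper.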
[A] A Katsevich, P Rigollet. "On the approximation accuracy of gaussian variational inference." arXiv preprint arXiv:2301.02168, 2023.
[B] Ba, Jimmy, et al. "Understanding the variance collapse of SVGD in high dimensions." ICLR. 2021.
2.**About why regularized SVGD is introduced**: What we want to show is that the regularized Stein metric with kernel $K_2$ coincides with the Stein metric with a different bilinear kernel, which we call $K_4$. This phenomenon is interesting and also makes the algorithm fall under our general framework of Gaussian-SVGD. We will modify the paragraph on "Three Bilinear Kernels" on Page 3 to include $K_4$. Moreover, $K_4$ is important because it interpolates between $K_2$ and $K_3$. Note that $K_2$ and $K_3$ both have advantages and disadvantages, and $K_4$ can strike a balance between them through the choice of $\lambda$ (see Table 1 and Figure 1).
3.**Is Gaussian-SVGD better than SVGD when $d$ is large? And what is known about how many particles are needed compared to SVGD?**
These are actually two very deep questions from a theoretical perspective. For the first question, we do observe empirically that Gaussian-SVGD is better than SVGD (say, with Gaussian or Matern kernels) when $d$ is large, from our preliminary experiments on estimating posterior covariances of log-concave targets. The intuitive reason is that the parametric nature of Gaussian-SVGD incurs significantly less variance than nonparametric SVGD.
In an ongoing work, we are attempting a high-dimensional analysis (i.e., a double asymptotic analysis) of Gaussian-SVGD. Comparing to general SVGD theoretically in this regime, and providing a theoretical answer to the second question, are harder because we currently don't have tools to analyze SVGD well, even in the fixed- or low-dimensional setup. Nevertheless, a positive answer to these questions would be great, and we hope to get there someday!
4.**About Gaussian mixture model, simulation section and SVGD**:
First, we would like to point out that we have a Gaussian mixture model experiment in Section D of the appendix. Taking your suggestion, we plan to move all experiments to the main paper if we are allowed an additional page. If not, we will have the Gaussian mixture model in the main draft and other experiments in the appendix.
Our primary goal is to make progress towards understanding SVGD theoretically, given its widespread usage in practice. Nevertheless, we will be happy to add more numerical experiments, with different dimensions for the current models and also some new models.
For the experiments, when one refers to SVGD it is important to specify which kernel is used. Given that you simply refer to SVGD, we take it that you mean some non-bilinear kernel? That is a great suggestion, and we will be happy to add a comparison to SVGD with a Gaussian or Matern kernel in our experiments.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your response to my questions. I will maintain my current score. | Summary: This paper delves into an examination of Gaussian-SVGD through the analysis of mean-field PDE and discrete particle systems. It offers evidence for finite particle convergence and supports these findings with empirical validations.
Strengths: Through clear modeling, the article provides a comprehensive analysis for Gaussian-SVGD.
The paper gives new interpretations for Wasserstein Gradient Flow and SVGD under linear systems, which are very interesting.
Weaknesses: The setup of the paper is relatively artificial, because in actual scenarios, linear kernels are rarely used to implement SVGD.
The paper does not put much effort into justifying the significance of Gaussian variational inference, such as making comparisons with other distribution classes. If there were clear conclusions, it would have a significant impact on the significance of this paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Can you elaborate some advantages of Gaussian-SVGD compared with RBF-SVGD?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: The setup of the article is somewhat artificial. It would be more meaningful if the significance of this linear system could be demonstrated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1.**Linear kernels are rarely used to implement SVGD. It would be more meaningful if the significance of this linear system could be demonstrated. Significance of GVI**
First, we should emphasize again that Gaussian-SVGD is **NOT** SVGD with a linear kernel when the target is not Gaussian. We should use Gaussian-SVGD in order to perform GVI, and we cannot directly use RBF-SVGD for the same purpose. Moreover, there is no simple modification that makes RBF-SVGD work for the GVI task. (The linear approximation in Equation (26) does not guarantee Gaussian dynamics for the RBF kernel.) The linear kernel is used in our paper because it leads to a **deterministic** algorithm that can be used to solve GVI.
Second, we elaborate on why GVI (or, more generally, variational inference) is preferred to exact sampling in many situations. The key argument is that **if one is only interested in summary statistics of the target such as the mean and covariance (as is often the case in Bayesian inference), generating samples may not be the most suitable way to achieve this goal**. In contrast to exact sampling, variational inference (VI) aims to find, among all measures in a certain parameterized family P, the closest measure to the target distribution. In general, VI algorithms usually show better performance, both computationally and statistically, than sampling algorithms. Specifically, classical MCMC methods like MALA can be very expensive in terms of time cost, and it is notoriously difficult to identify clear-cut stopping criteria for them; the particle-based RBF-SVGD suffers from severe particle degeneracy, especially in high dimensions [C], and cannot guarantee good covariance estimation.
For a large class of Bayesian inference problems, thanks to the Bernstein–von Mises theorem, the posterior distribution is approximately Gaussian in the limit of large samples under appropriate regularity conditions. Thus, Gaussian VI is considered an important and basic task in this field. Notably, a recent work [A] shows that in terms of mean and covariance estimation (of the posterior), GVI has improved rates over other methods like the Laplace approximation. In fact, there is an array of works providing **stochastic** algorithms (which are based on randomized discretizations of the BW gradient flow) for solving GVI (see, e.g., [17, 36]), and [55, 68, 70, 57] show the improved practical performance of GVI on various practical problems. We, however, present **deterministic** algorithms for the same task, which are even more practically useful.
Finally, the flexibility offered by the non-parametric nature of SVGD has hindered researchers from obtaining theoretical results that align with practice. Given this, our contributions in this work, as Reviewer BMUk pointed out, provide "a nice balance between establishing important theoretical results and providing a usable algorithm for users to implement in practice". **Understanding SVGD on Gaussian families is an important step towards understanding the general situation. Surprisingly, this special case has not been well studied in the literature, and our work provides a nearly-complete solution to it.**
[A] A Katsevich, P Rigollet. "On the approximation accuracy of gaussian variational inference." arXiv preprint arXiv:2301.02168, 2023.
[C] Zhuo, Jingwei, et al. "Message passing Stein variational gradient descent." ICML, 2018.
To conclude, the **advantages of Gaussian-SVGD compared with RBF-SVGD** come from the following two perspectives:
- Gaussian-SVGD performs GVI, which is a more preferred approach for Bayesian inference.
- Gaussian-SVGD provides a theoretically principled framework for understanding SVGD that **aligns with practice**. Gaussian-SVGD provides a **deterministic algorithm** to implement GVI. Uniform-in-time propagation results (i.e., Theorem 3.7) are available for Gaussian-SVGD; such results are not currently available for RBF-SVGD.
If a practitioner asks "When should I use Gaussian-SVGD?", our paper provides a nearly-complete answer. However, if a practitioner asks "When should I use RBF-SVGD?", currently no convincing answer can be given; the best we can think of is something along the lines of "try it out and see if it works well", which is extremely ad hoc.
---
Rebuttal Comment 1.1:
Title: update/feedback
Comment: Dear Reviewer EWuB,
As the deadline for the discussion phase is fast approaching, we were curious if you had any feedback for our response. Please also let us know if you have any further questions. Thank you and looking forward to hearing from you.
Sincerely,
Authors | Summary: This paper studies the Stein variational gradient descent and its variants with a bilinear kernel on the space of Gaussian measures. The authors prove the rate of convergence of the dynamics, proposed finite particle algorithms and proved a uniform-in-time propagation of chaos, and finally prove the convergence rate of the finite particle algorithms.
Strengths: 1. The study of the dynamics properties of SVGD and its variants on Gaussian space seems to be interesting, which is a natural analog to the Bures-Wasserstein gradient flow. This paper gives a comprehensive analysis of such algorithms, including the well-posedness of the dynamics and the rate of convergence.
2. The proof and the results seem to be correct. I didn't check all the details in the proof, but both the proof and the results make sense to me.
3. The paper is in general well-written, the theorems are stated properly, and I think it is not hard for readers to understand the main idea.
Weaknesses: I do not observe an obvious weakness of this paper, but I do have a few questions, please refer to the "Questions" part.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Intuitive explanation of the convergence rate: as proved in the paper, the rate of convergence of SVGD for a centered Gaussian is $O(e^{-t})$, which does not depend on $\lambda$. This seems interesting, but I'm wondering if there's an intuitive explanation for this result? The dependence $O(e^{-t/\lambda})$ in the Wasserstein gradient flow case makes more sense to me intuitively, since a larger $\lambda$ will give a more "flat" potential.
2. Regarding the non-Gaussian target:
- It is shown in Theorem 4.1 that when $\rho_{\theta^*}$ be the unique Gaussian measure that minimize $D_{KL}(\rho_{\theta^*}||\rho^*)$, then the Gaussian-SVGD will converge to $\rho_{\theta^*}$. But when there are multiple $\rho_{\theta^*}$ that achieve the minimization, will the dynamic converge to one of them?
- Does the particle dynamics Algorithm 2 yield a similar uniform-in-time propagation of chaos results as in Theorem 3.7 with the same rate for Gaussian-SVGD?
Minor points and typos:
1. Some ambiguity in definitions:
- The definition of the kernels on page 3 (lines 111-118) is not entirely clear to me, in particular, what is $\mu$ and $\Sigma$ here? If I understand the latter part correctly, I think $\mu$ and $\Sigma$ will be the mean and covariance of $\rho_t$, which is actually time variant. I suggest the authors clarify this point in the definition.
- The same problem appears in equation (26) when defining $\widehat{\nabla V}$, since $\widehat{\nabla V}$ depends on $t$, which is a time-dependent vector field.
2. Definition 2, line 101: $G_{\rho}^{Wass}$ should be $G_{\rho}^{Stein}$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed review and incisive questions.
1.**Intuitive explanation of the convergence rate:**
The intuition can be obtained by looking at the corresponding ODE arising in the analysis. The mean-field dynamics of SVGD with a bilinear kernel ($K_1$, $K_2$ or $K_3$) takes the form of a linear ODE $\dot{\Sigma}_t \approx - \Sigma_t\Sigma^{-1}$, where $\Sigma$ is the scaling matrix of the kernel and $\Sigma_t$ is the covariance matrix of the solution of the mean-field dynamics of SVGD started from Gaussian initial data. This linear ODE yields a convergence rate for $\Sigma_t$ of $O(e^{-t})$ for $K_1,K_2$ and $O(e^{-\frac{t}{\lambda}})$ for $K_3$, since the scaling matrix $\Sigma$ equals the identity matrix in the first two cases and the covariance matrix of the target distribution in the case of $K_3$.
Furthermore, this also explains the fact that by choosing different regimes of $\lambda$, regularized SVGD can interpolate between the two extremes, WGF and SVGD (see Table 1).
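A scalar caricature of the linear ODE above (a toy of our own, not the paper's actual dynamics): if a covariance entry obeys $\dot{x} = -x/s$, it decays like $e^{-t/s}$, so a unit scaling (as for $K_1, K_2$) gives rate $e^{-t}$, while a scaling $\lambda$ gives $e^{-t/\lambda}$:

```python
import math

def euler_decay(s, t_max=5.0, dt=1e-4, x0=2.0):
    # forward-Euler integration of the scalar toy ODE dx/dt = -x/s,
    # whose exact solution is x0 * exp(-t/s)
    x = x0
    for _ in range(int(round(t_max / dt))):
        x -= dt * x / s
    return x

print(euler_decay(s=1.0), 2.0 * math.exp(-5.0))   # unit scaling: rate e^{-t}
print(euler_decay(s=2.0), 2.0 * math.exp(-2.5))   # scaling s: rate e^{-t/s}
```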
2.**Regarding the non-Gaussian target:**
In the context of Theorem 4.1, when there are multiple $\rho_{\theta^*}$ that achieve the minimum, the dynamics will converge to one of them, but which one it converges to depends on the initialization. We will clarify this further in the revision.
3.**Uniform-in-time propagation of chaos for particle dynamics**
Currently, we don't have results of uniform-in-time propagation of chaos for Algorithm 2 with a general target, but we believe this should hold for the class of log-concave targets. Rigorously proving this is an interesting and challenging direction for future work.
4.**Typos:**
Thanks very much for pointing out the typos. We will fix them, along with the ambiguities mentioned, in the revision. | Summary: This work performs a theoretical investigation of Gaussian SVGD, a special case of SVGD restricted to the submanifold of Gaussian densities by means of a bilinear kernel. The authors characterize the mean-field dynamics of Gaussian SVGD, with a particular focus on Gaussian targets and obtain finite-particle guarantees in certain settings. Moving beyond Gaussian targets, the authors show that the dynamics of mean-field Gaussian SVGD converges to the best Gaussian approximation (In KL divergence) to the target density, and obtain convergence rates for the mean-field dynamics with the target density is log-smooth and log-concave
Strengths: The work provides a thorough treatment of the mean-field dynamics of Gaussian SVGD. The experimental evaluation is also satisfactory.
Weaknesses: My primary concerns are as follows :
1. Most of the results presented in this work consider continuous-time mean-field dynamics under the assumption of a Bilinear kernel and Gaussian target density. In my opinion, these assumptions adversely impact the significance of these results, as they fail to explain one of the major appeals of SVGD, i.e., the fact that it provides a non-parametric approximation to a large class of target densities. (e.g. [1] shows discrete-time mean-field guarantees for subgaussian densities and [2] extends it to the class of densities satisfying a generalized Talagrand’s inequality). While the exponentially fast convergence guarantees for the mean-field dynamics are seemingly appealing, they fall short of providing a satisfactory explanation of the behavior of SVGD in practically relevant settings.
2. Despite the simplifying assumption of Gaussian targets and bilinear kernels, the work does not provide any quantitative discrete-time finite-particle convergence rates. To the best of my understanding, Theorem 3.9 is the only result that considers finite-particle discrete-time dynamics, but it does not provide any quantitative rates.
3. It is not clear why one should prefer Gaussian SVGD over the Bures-Wasserstein Gradient Flow [3] since the only setting where it seems to outperform Bures-Wasserstein Gradient Flow is that of centered Gaussians, which is a highly restrictive condition.
4. The results in Section 4 leave much to be desired. Considering the fact that the best known results for Bures-Wasserstein gradient flow [3] cover both log-concave and strongly log-concave densities (see also [4] for a computable JKO discretization of the Bures-Wasserstein flow with convergence guarantees for log-concave and log-strongly concave densities), it is not at all clear what the benefits of Gaussian SVGD are when the target is not Gaussian.
5. The overall presentation of the results requires a lot of work. For instance, it is not clearly stated where exactly in the Appendix each Theorem is proved.
6. In addition to the limited applicability of these results in explaining the behavior of SVGD (as the most practically relevant case is that of non-logconcave targets) and the absence of satisfactory finite-particle discrete-time guarantees, I am somewhat concerned about the technical novelty of these results. To the best of my understanding, It seems like the mean-field continuous-time guarantees of this work can be easily obtained from the well-established results on SVGD [5,6] by restricting to the (finite-dimensional) Gaussian submanifold. While limited theoretical contribution in itself is not a major weakness, I find it difficult to recommend acceptance given the unsatisfactory practical usefulness of these results.
[1] Salim et. al., “A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1”
[2] Sun et. al., “Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition”
[3] Lambert et. al., “Variational inference via Wasserstein gradient flows”
[4] Diao et. al., “Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space”
[5] Duncan et. al., “On the geometry of Stein variational gradient descent”
[6] Liu, “Stein Variational Gradient Descent as Gradient Flow”
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. Could you obtain discrete-time finite-particle rates for Gaussian SVGD under the settings considered in your results. If not, could you highlight what would be the technical challenges involved in obtaining such a result?
2. Could you clarify why the mean-field continuous-time Gaussian SVGD dynamics is of interest, when (to the best of my knowledge) : 1) It does not satisfactorily explain the behavior of SVGD in practical scenarios, 2) It is outperformed by Bures-Wasserstein Gradient Flows (as well as the practically implementable Bures-JKO scheme) in most settings
3. Could you comment on the technical novelty of these results? To the best of my understanding, the mean-field guarantees can be obtained by adapting the existing analysis of the Gradient Flow induced by SVGD [5,6] to the submanifold of Gaussian densities (a technique which is well-established for Wasserstein gradient flows in [3]).
4. I would recommend reworking the overall presentation. Explicitly stating where each theorem is proven in the Appendix would be a good starting point.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: I found the discussion on the limitations of this work to be somewhat unsatisfactory. The authors state that limitations are discussed on Section 6 (Other Related Works) and Section 7 (Conclusion). Section 6 discusses some of the prior literature and Section 7 highlights some future directions. Neither of these sections adequately discuss the limitations of this work (e.g. applicability is primarily restricted to centered Gaussian targets for most of the results, finite-particle discrete-time rates are absent in most settings)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1.***Could you obtain discrete-time finite-particle rates***
Yes. We have the following **quantitative** result:
**Theorem:** For a centered Gaussian target, suppose the SVGD particle system with $K_{1}$ or $K_{2}$ is initialized by $\bigl(\boldsymbol{x}\_{i}^{(0)}\bigr)\_{i=1}^{N}$ such that $\boldsymbol{\mu}\_{0}=\boldsymbol{0}$ and $C_{0}Q=QC\_{0}$. For $0<\epsilon< 0.5$, we have $\boldsymbol{\mu}\_{t}=\boldsymbol{0}$ and $\lVert C_{t}- Q \rVert\to 0$ as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $(0, 1+1/\epsilon)$. Furthermore, if we set $u_{\epsilon}$ to be the smaller root of the equation $f\_{\epsilon}'(u)=1-\epsilon$ (it has $2$ distinct roots) where $f\_{\epsilon}(x) := (1 + \epsilon(1-x))^2x$, then we have linear convergence, i.e.,
$$
\lVert C\_{t}- Q \rVert\leq (1-\epsilon)^{t}\lVert C_{0}-Q \rVert\leq e^{-\epsilon t}\lVert C\_{0}-Q \rVert
$$
as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $[u\_{\epsilon}, 1/3+1/(3\epsilon)]$.
This is a direct refinement of Theorem 3.9 and the first such result in the finite-particle, discrete-time setting that matches practice. The only other rates (for deterministic SVGD), by [62], do not align with practice. Obtaining similar results for general targets is extremely challenging and is one of the central open problems in SVGD.
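As a sanity check on the theorem's constants (our own illustration, not the paper's code): since $f_{\epsilon}'(x) = (1+\epsilon(1-x))(1+\epsilon-3\epsilon x)$ is decreasing on $[0,\, 1/3 + 1/(3\epsilon)]$ and vanishes at the right endpoint, the smaller root $u_{\epsilon}$ of $f_{\epsilon}'(u) = 1-\epsilon$ can be found by bisection:

```python
def f_prime(x, eps):
    # derivative of f_eps(x) = (1 + eps*(1 - x))**2 * x
    a = 1.0 + eps * (1.0 - x)
    return a * (a - 2.0 * eps * x)

def u_eps(eps, tol=1e-12):
    # smaller root of f_eps'(u) = 1 - eps; bisection works because
    # f_eps' decreases from (1 + eps)**2 at x = 0 to 0 at x = 1/3 + 1/(3*eps)
    lo, hi = 0.0, 1.0 / 3.0 + 1.0 / (3.0 * eps)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_prime(mid, eps) > 1.0 - eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(u_eps(0.3))  # ≈ 0.7258 for eps = 0.3, so the interval is about [0.7258, 1.4444]
```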
2.**Bilinear kernel and Gaussian family assumptions**.
None of the existing results in the literature cover this simple case. [49] requires radial kernels, which bilinear kernels are not. [27] relaxed this, but their analysis needs boundedness, which is also not satisfied by bilinear kernels.
For Gaussian targets, we have shown uniform-in-time propagation of chaos in Theorem 3.7. Prior works only derived bounds for the empirical measure that **grow with time $t$**. This is unrealistic because SVGD exactly recovers the target distribution (or solves the GVI problem) only when it converges at $t=\infty$, at which point their bounds become infinite.
**Understanding SVGD on Gaussian families is an important step towards understanding the general situation. This special case has not been well studied in the literature and our work provides nearly-complete solution to this important special case.**
3.**Relation with BWGF**. We suspect there is a slight misunderstanding here, and we apologize for the earlier lack of clarity.
- Gaussian-SVGD is not a single algorithm or a single flow. **It is a family of flows indexed by different bilinear kernels.** The BW flow is one of them (the one with kernel $K_3$; see Page 3). The WGF and $K_3$-SVGD agree, but only on the Gaussian submanifold, where they induce the BW flow.
- We show that each flow in this family induces two algorithms: one updates the density parameters (and is therefore **stochastic**), the other updates the particles (and is therefore **deterministic**). The particle-based one is more stable in practice. **BWGD is the stochastic algorithm from the BW flow that appeared in [3], whereas BWPF is the particle-based one proposed by us.** It is hard to derive the deterministic particle-based counterpart of BWGD from the perspective of the WGF, but from the perspective of $K_3$-SVGD it is natural.
4. **With all due respect, we simply disagree with the claim of a lack of theoretical novelty.**
- We prove the **first** result on **uniform-in-time** propagation of chaos in SVGD. Previously, such results were not available even for the setting we consider. Our proof ideas are generalizable to other settings, although those are admittedly more challenging.
- We explicitly solved the finite-particle dynamic system for Gaussian targets (See Theorem 3.6 and Theorem 3.8). Although there is no novelty in checking a given solution, **finding the solution is not trivial** in the first place.
- We point out that our results with bilinear kernels **cannot be obtained as special cases of previous works on non-parametric SVGD**. [1] and [2] did not cover bilinear kernels: they require their kernels to be bounded with bounded derivatives, which **their proofs essentially require** and which bilinear kernels do not satisfy. **Interestingly, this is not an issue of technique but rather an issue of bilinear kernels.** The fact that Gaussian-SVGD using bilinear kernels is capable of solving GVI for general targets is **non-trivial**, and that it provides deterministic algorithms for GVI is even more striking. No such **deterministic** particle method was available for GVI prior to our work.
- For the mean-field guarantee, [5] (Thm 22) or [6] is not applicable to our setting. Their condition cannot be easily checked in general (and in fact does not hold) for the Gaussian-SVGD setting we consider. Deriving tight results for special settings **rather than imposing unrealistic conditions for general settings** is a small but more solid step towards solving the general problem.
- Directly applying the technique of [3] to SVGD would make the calculation too complicated to proceed. Therefore, we choose a different approach by calculating the explicit form of the restricted Riemannian metric tensor (see, e.g., Theorem A.3). This approach is standard in Riemannian geometry but is novel in our context.
5. The results in **[3] and [4]** provide rates for discrete-time finite-particle algorithms in the log-concave and non-log-concave settings. However, their algorithm is stochastic and their results only hold in expectation. The fluctuations of their **stochastic algorithms** are not quantified.
Our finite-particle, discrete-time result is an exact guarantee for the **deterministic** algorithm! Unlike [3] and [4], however, it holds only for Gaussian targets. So our work complements the results in [3] and [4]. Extending the results in [3] and [4] to quantify the fluctuations, and extending our results to non-Gaussian targets, are both interesting future directions that would enable a fair comparison between these different algorithms.
6.**Thanks for your suggestion. We will explicitly state where each theorem is proven in the main section of the paper.**
---
Rebuttal Comment 1.1:
Title: update/feedback
Comment: Dear Reviewer 1EPQ,
As the deadline for the discussion phase is fast approaching, we were curious if you had any feedback for our response. Please also let us know if you have any further questions. Thank you and looking forward to hearing from you.
Sincerely,
Authors
---
Rebuttal Comment 1.2:
Comment: $\newcommand{\cN}{\mathcal{N}}$ I thank the authors for their response. Upon re-examining the proofs once again (which, admittedly, has been somewhat time-consuming given the issues with the structure and presentation) and reading the authors’ rebuttals, I am sorry to say that the rebuttal fails to adequately address my concerns. I discuss the major shortcomings below
**Section 3:** The results of Section 3 consider the problem of sampling from $\cN(b, Q)$. For this target, the potential is given by $V(x) = \tfrac{1}{2}(x-b)^{T} Q^{-1} (x-b)$ and $\nabla V(x) = Q^{-1}(x-b)$. It is clear that evaluating $V$ and $\nabla V$ requires knowledge of $b,Q$. To this end:
***Each of the dynamics under consideration in Section 3 requires knowledge of $b$ and $Q$ in order to sample from $\cN(b,Q)$. This is most apparent in the ODE system of Equation (3) and also observed in the finite particle systems of Equation (13) and Equations (19)***
This renders the practical utility of these results useless, as one can directly sample from $\cN(b,Q)$ if $b$ and $Q$ are known. The authors make the statement that *“Deriving tight results for special settings rather than imposing unrealistic conditions for general settings is a small but more solid step towards solving the general problem”*. Personally, I cannot think of a setting more unrealistic than sampling from $\cN(b,Q)$ with known $b, Q$.
While one could argue that, despite the absence of practical utility, these results shed light on the behavior of SVGD, I would find such an argument to be quite weak since the setup considered in Section 3 is very far removed from practice. In particular, **I do not believe there are any applications where one would sample from $\cN(b, Q)$ with known $b, Q$ by using SVGD. Secondly, practical applications of SVGD typically do not use bilinear kernels**. On this note, I do not find the boundedness assumption of prior works to be very restrictive, as it is satisfied by a large class of commonly used kernels (e.g. RBF, Laplace), whereas bilinear kernels are not commonly used.
**Theorem 3.7:** The uniform-in-time propagation of chaos bound applies only to the continuous-time particle system of Equation (13), i.e. Gaussian SVGD with Gaussian Target and Bilinear Kernel. An examination of the proof in Appendix K clearly shows that the key steps in the proof crucially depend on the precise form of the dynamics (which is specific to $\cN(b, Q)$ and requires knowledge of $b, Q$ in advance to implement). **Given the hyper-specific nature of this result, I fail to see how this is an important contribution of its own. I find the statement that *Our proof ideas are generalizable to other setting* to be an overclaim.**
**Section 4:** While the results on Section 4 admit some utility, I find the scope to be quite limited. **In particular, both Theorems 4.1 and 4.2 consider a continuous-time system and not a practical algorithm**. Convergence rates are only obtained under log-smoothness and log-concavity. On the contrary, **the work of Diao et. al. [4] actually gives an implementable discrete time algorithm with quantitative convergence rates for both log-concave and strongly log-concave targets**. Admittedly, the rates are in expectation but considering the fact that the stochasticity in their algorithm arises from Monte-Carlo estimation of Gaussian expectations, I believe high-probability guarantees should follow easily (as $ \nabla V$ is Lipschitz and $\nabla^2 V$ is bounded above and below in the PSD sense).
Overall, the absence of discrete-time finite-particle rates in Section 4 for the actual algorithm under consideration (i.e. Algorithm 2) significantly impacts the utility of these results. I find it concerning that the authors present the finite-particle system in Equation 26 to be an important contribution but do not prove any concrete nonasymptotic convergence guarantees.
In its current state, I find the theoretical contributions of this work to be quite incomplete and the presentation to be immensely sloppy. To this end, I choose to keep my current score.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer 1EPQ for their feedback. Although we have provided detailed arguments in the reply to the AC, we would still like to emphasize one issue here. Throughout the reviewer's page-long response, we believe **the only spot-on argument** for rejection is that we provide everything except a discrete-time finite-particle bound for general targets. We admit that we are currently not able to achieve this desired result, and would like to comment further regarding this concern.
**Firstly, obtaining a discrete-time finite-particle bound for general targets for Gaussian-SVGD is a genuinely difficult problem.** Indeed, in the rebuttal we have explained why the methods of previous works fail in our case. One key point is the troublesome unboundedness of bilinear kernels. (Why consider bilinear kernels? Note that bilinear kernels are used in Gaussian-SVGD for the purpose of performing GVI. Even for the original SVGD, we see no reason why such a special and elegant case should be ignored.) Moreover, as the reviewer carefully figured out from examining our proof, our new approach does not directly yield discrete-time results for general targets, even though the idea is general in principle.
**Secondly, this single weakness should by no means outweigh all the merits of our algorithmic and theoretical contributions.** As the reviewer pointed out in their first feedback, "our experimental evaluation is satisfactory". While a complete theory is not available, there is empirical evidence that the algorithm we propose has superior performance compared to previous methods, and such a deterministic particle-based algorithm is important from an algorithmic perspective. Moreover, we provide a general framework that unifies various previous algorithms via different bilinear kernels, along with a systematic way of comparing them. These algorithmic contributions are further strengthened by novel theoretical results with new insights, which we believe we have already emphasized sufficiently in the rebuttal. For example, if the reviewer is familiar with the literature on propagation of chaos, or has ever implemented SVGD, they should understand that uniform versus non-uniform in time is a significant issue. Also, we believe anyone with proper mathematical training could appreciate the beauty of our explicit solution for the finite-particle system with Gaussian targets and bilinear kernels. We feel it is a pity that all these important and interesting contributions were ignored by the reviewer in their response.
Therefore, we would request the reviewer to re-evaluate our contribution in a more comprehensive manner. For certain algorithms, it might take years to develop their full theory. If a paper should be rejected only because it has not achieved a complete theory for the proposed algorithm, then the SVGD paper by Liu \& Wang and many other famous algorithms could not have been published at all, which would have been a great loss for the whole community. | Rebuttal 1:
Rebuttal: General Clarifications
=================
1.**Quantitative results for finite particle, discrete-time setting**
We have an updated version of Theorem 3.9 (discrete-time, finite-particle) with the following **quantitative** convergence rates:
**Theorem:** For a centered Gaussian target, suppose the SVGD particle system with $K_{1}$ or $K_{2}$ is initialized by $\bigl(\boldsymbol{x}\_{i}^{(0)}\bigr)\_{i=1}^{N}$ such that $\boldsymbol{\mu}\_{0}=\boldsymbol{0}$ and $C_{0}Q=QC\_{0}$. For $0<\epsilon< 0.5$, we have $\boldsymbol{\mu}\_{t}=\boldsymbol{0}$ and $\lVert C_{t}- Q \rVert\to 0$ as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $(0, 1+1/\epsilon)$. Furthermore, if we set $u_{\epsilon}$ to be the smaller root of the equation $f\_{\epsilon}'(u)=1-\epsilon$ (it has $2$ distinct roots) where $f\_{\epsilon}(x) := (1 + \epsilon(1-x))^2x$, then we have linear convergence, i.e.,
$$
\lVert C\_{t}- Q \rVert\leq (1-\epsilon)^{t}\lVert C_{0}-Q \rVert\leq e^{-\epsilon t}\lVert C\_{0}-Q \rVert
$$
as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $[u\_{\epsilon}, 1/3+1/(3\epsilon)]$.
This is a direct refinement of Theorem 3.9 and is the first such result for the finite-particle, discrete-time setting that matches practice. The only other rates (for deterministic SVGD), due to [62], do not align with practice. Obtaining similar results for general targets is extremely challenging and is one of the central open problems in SVGD.
2.**Gaussian-SVGD versus bilinear SVGD**
We would like to clarify that our algorithm (i.e., Gaussian-SVGD) is **different** from SVGD with a bilinear kernel when the target is not Gaussian. Standard (bilinear or RBF) SVGD can sample from the target but does not solve Gaussian variational inference (GVI), whereas Gaussian-SVGD can solve GVI. To achieve this, in Equation (26) we use a linear approximation $\widehat{\nabla V}$ instead of the actual $\nabla V$. (Note that this linear approximation does not guarantee Gaussian dynamics for the RBF kernel.)
- If one wants to perform GVI, one essentially needs to use Gaussian-SVGD instead of bilinear SVGD or RBF-SVGD.
- To estimate the posterior mean or covariance of a **not necessarily Gaussian target**, one can use either Gaussian-SVGD or standard (bilinear or RBF) SVGD. The particle implementation of Gaussian-SVGD provides a **deterministic** algorithm for GVI, and is shown to perform well in our experiments. Furthermore, [A] shows that GVI provably does better than many other existing methods such as the Laplace approximation. In contrast, there are no such theoretical guarantees for bilinear SVGD, RBF-SVGD, or general SVGD in this setting.
[A] A Katsevich, P Rigollet. "On the approximation accuracy of gaussian variational inference." arXiv preprint arXiv:2301.02168, 2023.
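For a Gaussian target the two views coincide, and the deterministic particle behavior is easy to see in one dimension. Below is a minimal toy sketch (our own construction for illustration; the kernel $k(x,y)=xy+1$ is a generic bilinear kernel, not necessarily the paper's $K_1$–$K_3$): standard SVGD on a centered 1-d Gaussian target $\mathcal{N}(0,q)$, whose empirical second moment converges deterministically to $q$.

```python
import numpy as np

# Toy sketch (our own construction, not the paper's exact setup): SVGD with a
# generic bilinear kernel k(x, y) = x*y + 1 on a centered 1-d Gaussian target
# N(0, q).  The standard SVGD direction
#   phi(x_i) = (1/N) * sum_j [ k(x_j, x_i) * grad log p(x_j) + d/dx_j k(x_j, x_i) ]
# with grad log p(x) = -x/q reduces to a closed form in the empirical moments.
q = 2.0                              # target variance
eps = 0.1                            # step size
x = np.linspace(-3.0, 3.0, 40)       # symmetric init => zero mean

for _ in range(300):
    m1, m2 = x.mean(), (x ** 2).mean()
    phi = -(m2 * x + m1) / q + x     # deterministic update direction
    x = x + eps * phi

print(abs((x ** 2).mean() - q))      # empirical second moment -> q
```

In this toy, with the mean fixed at zero, the ratio $u_t = m_2/q$ evolves exactly as $u_{t+1} = (1+\epsilon(1-u_t))^2 u_t$, which is the same map $f_\epsilon$ that appears in the theorem above, consistent with the stated linear rate.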
3.**Further clarifications from prior works**
We now further explain the advantages of studying Gaussian-SVGD and the different types of results available in the literature on SVGD. First note that the flexibility offered by the *nonparametric* aspect of SVGD also leads to unintended consequences. On one hand, from a practical perspective, how to pick the right kernel for implementing the SVGD algorithm is unclear: existing approaches are mostly ad hoc and do not provide clear instructions on the selection of kernels. On the other hand, developing a deeper theoretical understanding of SVGD dynamics is challenging due to its nonparametric formulation. [49] derived the continuous-time PDE for the evolving density that emerges as the mean-field limit of the finite-particle SVGD systems, and showed the well-posedness of the PDE solutions. Furthermore, the following different types of convergence could be examined for SVGD, some of which have been analyzed previously in the literature:
- (A) Unified convergence of the empirical measure for $N$ finite particles to the continuous target as time $t$ and $N$ jointly grow to infinity;
- (B) Convergence of mean-field SVGD to the target distribution over time;
- (C) Convergence of the empirical measure for finite particles to the mean-field distribution at any finite given time $t\in [0,\infty)$;
- (D) Convergence of finite-particle SVGD to the equilibrium over time;
- (E) Convergence of the empirical measure for finite particles to the continuous target at time $t=\infty$.
From a practical point of view, (A) is the ideal type of result that fully characterizes the algorithmic behavior of SVGD; it could be obtained by combining either (B) and (C), or (D) and (E). Regarding (B), [44] showed the convergence of mean-field SVGD in kernel Stein discrepancy (KSD), which is known to imply weak convergence under appropriate assumptions. The works [19], [34], [60], and [65] sharpened these results with weaker conditions or explicit rates. The work [27] extended the above result to the stronger Fisher information metric and Kullback-Leibler divergence based on a regularization technique. The works [44] and [34] obtained time-dependent mean-field convergence (C) of $N$ particles under various assumptions using techniques from the propagation-of-chaos literature. The work [62] obtained even stronger results for (C) and combined them with (B) to get the first unified convergence (A) in terms of KSD. However, they have a rather slow rate $1/\sqrt{\log\log N}$, resulting from the fact that their bounds for (C) still depend on the time $t$ (the sum of step sizes) double-exponentially. Moreover, no prior work studies the convergences (D) and (E) for SVGD, which would illustrate a new way to characterize the unified convergence (A).
4.**Prior work** by [49] requires that the kernel be radial which rules out the important class of bilinear kernels that we consider. The work of [27] relaxed the radial kernel assumption. However, they required boundedness assumptions which we avoid in this work for the case of bilinear kernels. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work studies the behavior of Gaussian-SVGD (SVGD projected to the space of Gaussian distributions, using bilinear kernels) and its variants. The authors studied the behavior of the mean-field PDEs and established for Gaussian targets finite-particle convergence results which are significantly better than previous results for nonparametric kernels and targets. They have also derived density- and particle-based implementations of Gaussian-SVGD for general targets, which are shown to generalize recent works on Gaussian variational inference (GVI) with algorithmic guarantees. One of the proposed algorithms outperform recent works on a Bayesian logistic regression experiment.
Strengths: - The theoretical results appear interesting (to me as a non-expert, and note that I didn't check the proofs): the mean-field results sometimes improve over recent works on Gaussian variational inference with algorithmic guarantees, and the finite-particle results apply to standard SVGD, albeit with the other restrictions.
- The discussions on connections between Gaussian SVGD and existing GVI approaches are also interesting, and one of the proposed methods appear promising empirically.
Weaknesses: - While the restrictions to Gaussian families and affine kernels are similar to some of the recent works, it inevitably limits the scope of this work as much of the interest around SVGD arises from its flexibility.
- While Gaussian variational inference in general has demonstrated competitive empirical performance, it is unclear if the proposed approach maintains this property, as it is not clear if the more complex algorithms interact nicely with common modifications such as minibatching. This is also relevant because while some of the recent works (e.g., [17, 36]) also studied alternative GVI approaches, they come with full algorithmic guarantees, whereas for the present method there is no qualitative convergence guarantees for the discrete-time, finite-particle case.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the experiments it would be helpful to compare with ordinary gradient descent on the variational parameters, to provide some ideas on the practical utility.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed review and incisive questions.
1. **Restrictions to Gaussian families and affine kernels:**
- For a large class of Bayesian inference problems, thanks to the Bernstein-von Mises theorem, the posterior (target) distribution is approximately Gaussian in the limit of large samples under appropriate regularity conditions. Notably, a recent work [A] shows that, for mean and covariance estimation (of the posterior), GVI has improved rates over other methods such as the Laplace approximation. There is an array of works providing **stochastic** algorithms for solving GVI, based on randomized discretizations of the BW gradient flow (see, e.g., [17, 36]), and [55, 68, 70, 57] show the improved practical performance of GVI on various practical problems. We show in this work that there is a surprising **deterministic** discretization (BWPF) of the BW gradient flow if we aim to solve GVI. Furthermore, we also have **quantitative rates** for mean and covariance estimation for the **deterministic** SVGD algorithm in the finite-particle, discrete-time setting (which we describe next), similar to the results for the **stochastic** algorithms mentioned above.
[A] A Katsevich, P Rigollet. "On the approximation accuracy of gaussian variational inference."
arXiv preprint arXiv:2301.02168, 2023.
- In terms of assumptions on kernel in prior works on SVGD, we also highlight that prior work by [49] requires that the kernel be radial which rules out the important class of bilinear kernels that we consider. The work of [27] relaxed the radial kernel assumption. However, they required boundedness assumptions which we avoid in this work for the case of bilinear kernels.
**To sum up, understanding Gaussian-SVGD is an important step towards understanding the general situation with other kernels (note that the flexibility offered by fully-nonparametric SVGD suffers from the issue of kernel choice). Surprisingly, this special case has not been well studied in the literature, and our work provides a nearly-complete solution to it.**
2. **About discrete-time, finite-particle guarantees:**
- We have an updated version of Theorem 3.9 (discrete-time, finite-particle) with the following **quantitative** convergence rates:
**Theorem:** For a centered Gaussian target, suppose the SVGD particle system with $K_{1}$ or $K_{2}$ is initialized by $\bigl(\boldsymbol{x}\_{i}^{(0)}\bigr)\_{i=1}^{N}$ such that $\boldsymbol{\mu}\_{0}=\boldsymbol{0}$ and $C_{0}Q=QC\_{0}$. For $0<\epsilon< 0.5$, we have $\boldsymbol{\mu}\_{t}=\boldsymbol{0}$ and $\lVert C_{t}- Q \rVert\to 0$ as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $(0, 1+1/\epsilon)$. Furthermore, if we set $u_{\epsilon}$ to be the smaller root of the equation $f\_{\epsilon}'(u)=1-\epsilon$ (it has $2$ distinct roots) where $f\_{\epsilon}(x) := (1 + \epsilon(1-x))^2x$, then we have linear convergence, i.e.,
$$
\lVert C\_{t}- Q \rVert\leq (1-\epsilon)^{t}\lVert C_{0}-Q \rVert\leq e^{-\epsilon t}\lVert C\_{0}-Q \rVert
$$
as long as all the eigenvalues of $Q^{-1}C\_{0}$ lie in the interval $[u\_{\epsilon}, 1/3+1/(3\epsilon)]$.
The above result is essentially a direct refinement of the current result in Theorem 3.9, and is the first such result for the finite-particle, discrete-time setting that matches the observed empirical performance. The only other comparable result for **deterministic** SVGD is by [62], where the results are not in alignment with practice. Obtaining similar results for general targets is extremely challenging and is one of the central open problems regarding SVGD. In view of this, our result provides a first concrete step towards developing a theory of SVGD that aligns with practice.
We also remark here that the fully algorithmic results for GVI in e.g., [17, 36] are for **stochastic** algorithms and are only proved in expectation.
3. **About ordinary gradient descent:** We thank the reviewer for this great suggestion. We will be happy to add this result in our revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and I especially appreciate the added result on discrete-time finite-particle convergence. I am keeping my score unchanged to reflect the fact that I cannot vouch on the technical contributions of the results. I believe the work would be greatly strengthened if the authors could either
- add comparisons to OGD-implemented GVI in more practical scenarios (with minibatching, or on larger-scale datasets with computational constraints), or
- discuss in more detail how Gaussian-SVGD may help with the analysis of SVGD with general kernels,
in which case the contribution towards a broader audience would be more clear. On the latter point, I would note that I'm more positive with the use (and analysis) of affine kernels due to the connection to [46] and GVI, but radial kernels still appear to be the more viable choice if a non-Gaussian variational family is needed, which constitute an important scenario.
---
Reply to Comment 1.1.1:
Title: Official reply by Authors
Comment: Thanks for your acknowledgement.
1) Indeed, thanks to your earlier suggestion, we are currently running the requested comparison experiments and will be happy to add the results in our revision.
2) It is currently unclear to us what exact family of distributions is characterized (either in a parametric or non-parametric way) by using the RBF kernel. This is generally related to the lack of a deep understanding of SVGD itself. However, we are examining Elliptical-SVGD, i.e., with the metric being the Bures-Wasserstein metric corresponding to the family of elliptical distributions (instead of Gaussian as in our current work); see [A,B] for details. The kernel in this case could be obtained by a delicate and careful reverse computation. The set of tools and insights we developed in this work is extremely useful for establishing similar results (in both the discrete and continuous settings) for the problem of Elliptical Variational Inference, which has applications for performing variational inference under heavy tails [C]. While the technical details are in the initial phases of an ongoing work (and are definitely beyond the scope of this submission, requiring a separate paper to be completely presented), taking your suggestion, we will be happy to discuss this briefly in the draft as a potential direction for generalizing our work to non-Gaussian variational inference.
[A] Muzellec, Boris, and Marco Cuturi. "Generalizing point embeddings using the Wasserstein space of elliptical distributions." Advances in Neural Information Processing Systems 31 (2018).
[B] Gelbrich, Matthias. "On a formula for the L2 Wasserstein metric between measures on Euclidean and Hilbert spaces." Mathematische Nachrichten 147(1):185–203, 1990.
[C] Domke, Justin, and Daniel R. Sheldon. "Importance weighting and variational inference." Advances in Neural Information Processing Systems 31 (2018). | null | null | null | null | null | null
Tight Bounds for Machine Unlearning via Differential Privacy | Reject | Summary: This paper studies the machine unlearning problem from the perspective of differential privacy. Specifically, the authors propose to use differentially private models directly so that unlearning update is not necessary (or unlearning is an identity map), and the motivation is to make the unlearning procedure independent of side information (i.e., original training set) to avoid privacy leakage. A tight lower bound on the number of data points that such kind of DP model can be unlearned is shown in the paper, along with some new proving techniques.
Strengths: This paper uses the idea from Renyi DP and zero-concentrated DP to optimize the DP parameters, and thus refine the lower bound result proposed in previous work by Sekhari et al. An interesting combination of off-the-shelf results from DP literature is used to obtain the improvement of deletion capacity lower bound from $\widetilde{\Omega}\left(\frac{n \varepsilon}{\sqrt{d \log \left(e^\varepsilon/ \delta\right)}}\right)$ to $\Omega\left(\frac{n \varepsilon}{\sqrt{d \log (1 / \delta)}}\right)$. Furthermore, a deletion capacity upper bound is studied when the loss function is linear, showing that the proposed lower bound is actually tight.
Besides the improvement in the lower bound, the authors also introduce post-processing, chain-rule, and composition theorems for unlearning, analogous to classical DP. This could benefit future works studying unlearning via differential privacy.
Weaknesses: Although this paper may be interesting in theory, I do not think it fits nicely into machine unlearning problems. The main motivation in the paper for using DP directly for unlearning is to avoid the use of side information, which essentially means the original training set. However, in most unlearning scenarios, retaining the entire training set is not a critical issue, and is sometimes even a must when we need to perform model retraining. Consider the case where the number of data points that require deletion exceeds the deletion capacity. We have no choice but to retrain the model from scratch, and we need the remaining training samples to complete the retraining. So I do not think the motivation holds in the first place. Furthermore, a related discussion on the difference between DP and unlearning appears in the previous work by Sekhari et al. (Section 3.2: Strict separation between unlearning and differential privacy), where they show that if one designs the unlearning update carefully, the deletion capacity can be improved to $\Omega(\frac{n \sqrt{\varepsilon}}{(d \log (1 / \delta))^{1 / 4}})$, which enjoys better dependence on the dimension $d$ even compared to the results presented in this paper. As a matter of fact, many unlearning papers have pointed out that using DP directly can lead to large overhead and low utility. Therefore, whether the theoretical results presented in this paper can be used to design better unlearning algorithms is questionable.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness section for more details.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The results do not fit nicely with machine unlearning problems, and it would be hard to utilize the results to design unlearning algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s reading of our paper, and thank them for their comments; however, we believe that their assessment of our work hinges on a few misunderstandings, that we hope to clarify below:
* Regarding the situation where the number of deletion requests exceeds the deletion capacity: indeed, in this case, retraining from scratch on the remaining samples (or on an entirely new dataset, which is often unrealistic) is necessary. This actually makes the case for understanding exactly what the deletion capacity is (to know when this threshold is reached after many, typically small, unlearning requests), which our paper does. In addition, this would typically happen only after many requests, meaning that one can design (consistent with the motivation of our paper) unlearning settings where the original dataset is kept in a secure (albeit hard-to-access) location and only needs to be accessed very rarely, namely when this deletion capacity has been reached. This mitigates data-breach risks while still allowing unlearning in the setting considered.
* The result of Sekhari et al. mentioned by the reviewer does achieve better deletion capacity by storing additional information (namely, an additional statistic $T(S)$ of roughly $d^2$ bits about the dataset $S$), so while the deletion capacity is better this does not address the main concern considered in our work: offering **both** privacy and unlearning.
* Regarding the overhead due to DP algorithms. Indeed, DP algorithms typically incur a cost in terms of utility; we refer the reviewer to our response to Reviewer tB6F for a discussion of our motivation in that regard. In short, we do not deny the utility cost of DP, and as a result are not advocating using DP *when not required* just to achieve unlearning; instead, our motivation/use case is when privacy **is** required **and** unlearning is requested, in which case combining the two can (as our paper shows) come at essentially no cost until a large number of deletion requests is made (the deletion capacity, which our work pinpoints for convex and strongly convex losses).
We hope that our response clarifies some of the points raised by the reviewer, and will lead them to increase their score.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: I would like to thank the authors for making the effort to answer our questions. After going through all the comments, I feel that the major concern of most reviewers is how the proposed results can benefit the unlearning community. To elaborate, the authors show that when we are allowed to store some additional statistics $T(S)$ about the original dataset, we can get better results on deletion capacity; the proposed result can only outperform previous ones when we are not allowed to use any side information. In other words, if I understand correctly, the authors try to claim that the proposed result respects privacy more strictly, while previous results are only a trade-off between privacy and model utility. However, the "trade-offs" here are hard to define, as DP itself is also another trade-off via the $(\epsilon, \delta)$ parameters. So for applications like unlearning where we do have access to the original data, why one needs to exclude additional information beyond the model itself should be justified more clearly. This is also, I believe, one of the reasons why most unlearning works include some kind of experiments to show the final performance as a way to justify their method.
Personally, I really would like to see more theoretical papers in the field of unlearning. However, for such endeavors to be impactful, there needs to be a solid foundation of well-articulated motivations and assumptions. Given the current scope and clarity, I would like to keep the score at this moment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comments. We do agree that empirical and practical assessment of any proposed solution would be important (and necessary) before deployment. However, in the case of our work, we are not quite clear on what experiments would be meaningful and useful, given that our results either rely on analyzing theoretically the use of existing algorithms (DP) for unlearning, or on lower bounds (which as such are not amenable to experiments). What type of experiments do you have in mind? | Summary: This paper studies connections between machine unlearning and differential privacy (DP). In machine unlearning, the goal is to remove up to, say, $m$ of the examples from a dataset of size $n$, in such a way that the produced model is close (in some form of statistical or computational distance) to the model that would have been produced if we did not have the m examples to begin with.
A recent paper of SAKS'21 formulates unlearning in the same style in which DP is usually defined. This paper follows the same definition.
The main question studied in this paper is about "deletion capacity". Namely, how many examples can we delete from the dataset while losing model accuracy by at most a given parameter $\alpha$ and keeping the machine unlearning "secure" with specified parameters $(\epsilon,\delta)$ (defined similarly to DP)? More formally, $\alpha$ is the regret in the agnostic setting, which is the extra risk compared to the best model in the family.
The main result of the paper is to establish matching upper and lower bounds (up to constant factors) on the deletion capacity for algorithms that basically do not do any deletion and when certain (natural, but still limiting) properties hold on the models and the loss function. Namely, the paper studies how to achieve $(\epsilon,\delta)$ privacy when the comparison is made between datasets that have Hamming distance $m$, rather than $1$, and while the regret is bounded by $\alpha$.
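To spell out the condition I am referring to (my paraphrase, using standard notation; $O$ ranges over measurable output events, and the exact formulation in the paper may differ): the requirement is that for all datasets $S, S'$ at Hamming distance at most $m$,

```latex
\Pr[A(S) \in O] \;\le\; e^{\epsilon}\,\Pr[A(S') \in O] + \delta ,
\qquad \text{and symmetrically with } S \text{ and } S' \text{ swapped,}
```

i.e., standard $(\epsilon,\delta)$-DP with the neighboring relation relaxed from Hamming distance $1$ to Hamming distance $m$.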
At a technical level, the paper achieves its tight upper and lower bound (on the deletion capacity, within its own defined framework) by actually *not* caring about achieving differential privacy in its standard sense and directly aiming to satisfy the DP Lipschitz property over $m$-close databases out of the box. To achieve tight bounds the paper moves to other notions of DP (based on Rényi divergence) first and then comes back to DP after a more effective composition theorem (which exists for such DP-style definitions) is applied.
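Concretely, assuming the Rényi-based notion in question is zCDP (in the sense of Bun and Steinke), the two standard facts that, as I understand it, make this route tighter than direct approximate-DP group privacy are:

```latex
% Group privacy: if M satisfies \rho-zCDP, then over datasets at
% Hamming distance m it satisfies (m^2 \rho)-zCDP.
% Conversion: \rho'-zCDP implies, for every \delta > 0,
% (\rho' + 2\sqrt{\rho' \log(1/\delta)}, \delta)-DP. Chaining the two:
\rho\text{-zCDP}
\;\xrightarrow{\ \text{group of size } m\ }\;
m^2\rho\text{-zCDP}
\;\xrightarrow{\ \text{conversion}\ }\;
\Bigl(m^2\rho + 2m\sqrt{\rho\log(1/\delta)},\ \delta\Bigr)\text{-DP},
```

so that $\delta$ stays fixed and (for small $\rho$) $\epsilon$ grows roughly linearly in $m$, whereas iterating approximate-DP group privacy inflates $\delta$ by a factor of roughly $m e^{(m-1)\epsilon}$.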
The paper also studies composition of unlearning, but only the statements are given in the paper and no discussion is presented.
Strengths: At a technical level, studying DP for $m$-close databases (with the motivation of doing nothing for unlearning!) is interesting, and finding tight bounds for such a problem is cool. But I would have preferred a more direct depiction of the result up front, by saying that what happens here is not really unlearning and is about DP for $m$-close datasets. Then, maybe depicting unlearning as a potential application would be good, since the application to unlearning comes with some limitations that prevent us from using the full capacity of what unlearning allows.
The proofs also look interesting and as far as I could tell, the difference between this work and previous work is explained well.
Weaknesses: As explained above, I think the connection to unlearning is a bit far fetched, as it comes with strong limitations.
(has a related question)
What I understand is that the paper studies tight bounds for settings in which the unlearning is *not even done* and the closeness of the produced models holds due to the DP-like property over $m$-close data sets. I am not sure why this is equivalent to "not storing anything besides the model itself". If they are equivalent, this needs a proof.
Citations are not in good shape. Examples:
The work of Cohen et al. is cited for initiating a formal study of "the right to be forgotten", while works like https://eprint.iacr.org/2020/254 were done earlier.
The main question of the paper is about privacy vs unlearning, which is also studied in previous (uncited) works like
https://www.usenix.org/conference/usenixsecurity20/presentation/salem
https://arxiv.org/abs/2005.02205
https://arxiv.org/abs/2202.03460
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the question in the section above. In addition:
Why is Theorem 1.6 called a composition theorem? It seems it would be more natural for a composition theorem to allow up to $k$ batches of deletion in an adaptive way.
Line 207.5: why is the distribution $p$ and not $D$?
Line 229: why is $F^*$ defined like this, and not how it was defined in line 209?
The definition of the loss in line 277: why is the loss defined like that? The loss should take a model and a full *labeled* example and then output something. Your notion ignores the label (maybe $x$ itself has the label already?) and is defined for a set (it would be a risk, in that case, and it typically takes an average).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper is clear in that they only study specific "unlearning" methods that come with limitations. It is mentioned that the limitation is not to store anything other than the model, but my understanding is that their limitation is to not do anything when unlearning. These seem different to me, but hopefully the author(s) will clarify this limitation in rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and are grateful for their pointers to the literature. We agree that our literature review was lacking, and apologize for that: we will make sure to update it (using these pointers, and others) in the final version, to provide a clearer and more accurate picture of the work in this area.
Regarding the technical comments and typos: we will fix the latter (thank you!); as for the comment on the loss function, we look at the general formulation where the vector (data point) includes all information, including label/value; and the quantity defined here is the empirical loss on a dataset – the normalization by n was not included here for convenience, indeed, but would just be a normalizing factor. We will clarify this to avoid any ambiguity.
Finally, regarding the limitations of our work, and in particular of our notion of unlearning as “not even done”, we refer the reviewer to our response to Reviewer tB6F, which hopefully will clarify this aspect.
---
Rebuttal Comment 1.1:
Title: Ack
Comment: thanks for the response. | Summary: This paper addresses the concept of "machine unlearning" within the framework of differential privacy. The authors provide tight bounds on the maximum number of data points that can be successfully unlearned without significantly impacting the model's accuracy. They also establish the analog of key properties of DP for machine unlearning. The paper introduces novel results for convex and strongly convex loss functions, as well as properties of post-processing and composition of unlearning algorithms.
Strengths: 1. The paper addresses the important and practical problem of machine unlearning, which enables individuals to request the removal of their data from trained models.
2. The paper builds upon previous work and provides enhanced theoretical results. It closes the gap between upper and lower bounds on the deletion capacity achievable by differentially private machine unlearning algorithms.
Weaknesses: 1. The paper focuses primarily on theoretical analysis and proofs, but it lacks empirical experiments to validate the proposed machine unlearning algorithms.
2. The paper considers convex loss and strongly convex loss functions in its theoretical analysis. While these assumptions may hold for some models and applications, they may not be applicable to a wide range of real-world machine learning models, such as deep learning models, which often involve non-convex loss functions. This limitation restricts the generalizability and practical applicability of the proposed algorithms.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Considering that many real-world machine learning models, such as deep learning models, involve non-convex loss functions, how applicable are the proposed machine unlearning algorithms to these models?
2. How does the deletion capacity impact the overall utility and performance of the machine learning models in practical scenarios?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. Lack of experiments.
2. Assumptions on loss functions are constraints.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and comments, and address both of their questions together. Our paper does focus on convex and strongly convex losses, as is common in a significant part of the learning and optimization literature; we note that while this assumption on the loss is not always satisfied (as the reviewer correctly points out), analyzing these cases is, however, a good (and necessary) starting point, and one that provides not only a good rule of thumb but also surprisingly good results overall (as shown by many algorithms, starting with SGD, whose guarantees for convex losses appear to carry over surprisingly well in practice).
Now, as our results do study how DP guarantees imply unlearning ones, the applicability of these algorithms to practical unlearning scenarios will then follow from that of the corresponding DP algorithms for these scenarios. Put differently: design a good and practical differentially private algorithm (as many DP practitioners are working on), and get a good and practical unlearning guarantee from it.
Regarding the lack of experiments: our paper focuses on understanding the interplay between DP and unlearning, and the analogies between the two, from a fundamental point of view. Because of this, we believe that our results stand by themselves (in particular, lower bounds are not amenable to experiments) and are in scope for NeurIPS. We agree that continuing this direction of research further will lead to real-world use down the line, which will require experimental results and evaluation: and we do hope our work will spark interest in this line of research.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. No further questions as of now. | Summary: The paper studies approximate unlearning with procedures which do not store any side information (and satisfy differential privacy) in convex learning problems and establishes tight upper and lower bounds.
Strengths: 1. Machine unlearning has recently gained much interest owing to privacy regulations. The paper studies a certain formulation of approximate unlearning inspired by differential privacy, with the additional restriction of storing no side information. This formulation is particularly appealing since (a) storing side information can be expensive and prone to attacks, and (b) it does not require new algorithms for learning and unlearning, but simply (existing) private algorithms. The paper studies the limits of such a formulation for convex learning problems, improving the upper and lower bounds in the previous works. The question is very natural and the authors resolve the loose ends remaining in the prior works.
2. The proofs of improved upper and lower bound are simple yet interesting: the improved upper bound follows due to the use of stronger group-privacy properties of zCDP, as opposed to approximate DP; and the lower bound establishes an interesting "converse to group-privacy" due to the linear loss function under consideration.
Weaknesses: 1. While the problem is very natural, the final quantitative improvements are rather minor. For constant $\epsilon$, which is usually the case in DP, the bounds are the same. There is also very limited discussion of the quantitative improvements compared to prior work.
2. The scope of the paper seems limited; the paper essentially ties some loose ends present in the prior analysis, which is mostly interesting for theoretical reasons. I don't know if the contributions could have impact on the larger area of machine unlearning.
3. The techniques largely borrow from the differential privacy literature. The lower bound instance and the argument of padding the sample multiple times are present in prior works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would encourage the authors to provide an extended discussion of the quantitative improvements compared to prior work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The scope of the work is limited to procedures which store no side information and satisfy approximate unlearning (in the spirit of DP). These restrictions basically leave DP procedures as candidate learning/unlearning algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their time and valuable feedback; we address below their main question, regarding quantitative improvement upon previous work; and will incorporate these into the final version of our paper.
Our improvements regarding the deletion capacity of unlearning algorithms (setting aside, for the sake of this discussion, the other contributions of our paper: namely, the extension to “pure” unlearning algorithms, as well as the various additional results regarding, e.g., composition of unlearning algorithms) are twofold:
1. The first, the upper bound, can indeed appear minor, in that (as the reviewer points out) for small epsilon it is relatively small. Yet, it is important to mention that while “small epsilon” is usually desired, and the ideal setting, in practice epsilon is typically not small: see, for instance, https://desfontain.es/privacy/real-world-differential-privacy.html for a summary of typical settings in deployments at scale. The value of epsilon is almost always greater than 1, and often set to 8 to 16 (and sometimes even larger). For such values, our improvement becomes non-negligible. (Moreover, this is addressing the practical aspects of the upper bound improvement: we feel it important to mention that another key aspect lies in understanding the fundamental limits of this approach (unlearning via DP), from a theoretical and conceptual point of view.)
2. The second, the lower bound, goes much beyond this, as the previous lower bound did not feature any meaningful dependence on the deletion capacity at all! In that sense, our results together not only improve upon previous work, they also show what to expect overall – prior to our result, it was by and large open where the “right” limit of deletion capacity lay within the large gap left by prior work. The fact that our lower bound shows that this “right” limit happens to be close to the previously known upper bound (again, with the above large v. small epsilon caveat) while our improved upper bound closes the remaining gap does not imply that the situation was roughly well-understood before!
Finally, regarding the limitation, we point the reviewer to our response to Reviewer tB6F regarding the takeaway of our work; namely, that our work is not meant to rule out non-DP approaches to unlearning, but rather one of its main objectives is to deepen our understanding of whatever unlearning guarantees DP brings “for free”, valuable in situations where a DP solution would be required (and thus whichever unlearning guarantee is provided by it comes as a bonus/incentive).
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: I thank the authors for their detailed response. While I agree that this work solidifies our understanding of unlearning guarantees arising from DP-based algorithms, the scope is still (in my opinion) narrow, especially since both upper and lower bounds are based on improvements to results in prior work. Nonetheless, I increase my score to 5.
---
Rebuttal 2:
Title: final discussions
Comment: Dear Reviewer,
As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion.
Please note we take note of unresponsive reviewers.
Best regards,
SAC | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading of our submission, and are grateful for their detailed comments and suggestions. We will address their specific comments in the final version of our work, and respond individually to their questions and concerns below. | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work derives tight bounds for the deletion capacity of machine unlearning algorithms that are differentially private. These bounds are stated in terms of a deletion capacity formulated as the number of data points that can be removed before the estimation risk (an accuracy measure) becomes too large. The estimation risk (or the excess risk) is assessed over the sampling distribution of i.i.d. data points. The authors derive the results in detail for 1-Lipschitz convex loss functions and briefly present the parallel result for 1-Lipschitz strongly convex loss functions.
My assessment, consisting of strengths, weaknesses, and questions, can be found in the sections below.
Strengths: I find this a well-written paper. For a technical topic that is focused on the proof, the paper nevertheless walks the reader through while offering both clarity and insight. The structure of the paper is also quite reasonable, with the contribution overview very helpful.
Weaknesses: Please see my question in the Limitation section. I think the lack of comment for that question in this paper is its biggest weakness.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: My biggest concern has been raised as a question in the Limitation section, because it is more appropriate under that heading. Below are a few questions on the technical side that could benefit from some clarification.
- Your estimation risk, and the definition of deletion capacity, call for an expectation over the sampling variability of i.i.d. data points following a distribution $D$. I would like to understand better the impact that this assumption has, in particular the independence part, on your results. In particular, how would data dependence change the results? This is an important point of difference between your framework and DP, which does not take data variability into consideration (except Pufferfish, which you do not use).
- Related to this, perhaps you could consider including in the sketch proof of Theorem 3.3 (Line 293) some details concerning the reduction from population to empirical losses. I imagine that the issue of data variability comes up here.
- I am failing to appreciate the significance of requiring the data points to take values in $\pm 1/\sqrt{d}$, where $d$ is the dimension of the parameter space. If we are discussing an asymptotic regime (in which the bounds are situated, since they involve both $d$ and $n$), what does this “shrinking” ball of data mean?
- A minor point: the word “grouposition” can use some clarification. I think I know what the authors mean but this is not standard vocabulary.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Looking at the form of the unlearning algorithm $\bar{A}$ in Theorem 3.1, am I to conclude that the unlearning algorithm that is to achieve the tight bound as you present in this work is in fact an algorithm that literally *does nothing* with the deletion request, i.e., it outputs $A(S)$ just as before? If my understanding is correct, this point calls for a moral discussion: yes, the learning-unlearning pair satisfies Definition 2.5, but with the relaxation of $\alpha$, $\epsilon$, and $\delta$, this technical argument will provide a slippery slope for justified inaction. Maybe your paper is not the first to consider this, but each time one more paper gets published without paying due attention to these ethical questions, the literature grows a bit more oblivious to common sense.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their time, comments and positive assessment of our work; we address below their questions, starting with the main one (“Limitation”).
Indeed, the reviewer is correct, in that our paper analyzes unlearning algorithms which do not do anything (at deletion request time). However, it is not quite the case that the algorithm “does nothing” overall: instead, the point here is that the algorithms considered benefit from some “deletion bonus” somehow *for free*, as they were designed to already satisfy the very stringent notion of differential privacy (DP) [and, accordingly, paid the price in utility that comes with it].
Put differently, the aim of this paper is not to promote “justified inaction”, but instead to characterize what unlearning guarantee comes “for free” [and when it stops] if one decides to offer the strong guarantee of differential privacy. That is:
* “Plain” algorithms offer neither privacy nor unlearning
* Privacy comes at a cost (which has often been argued to be steep)
* Unlearning comes at a cost
* Sometimes, only one of the two is required; sometimes, both are desired. Do the “costs” add up, or does paying the cost for DP offer a head start in terms of unlearning?
Again, our aim is not to discourage unlearning-only solutions when DP is not required; but instead, by understanding the interplay between DP and unlearning, to show that the joint differential privacy+right to be forgotten requirement is more affordable than it seems. We will add a more detailed discussion of this point to clarify it.
Turning to the more technical questions:
* Indeed, our notion of risk (and the resulting definition of deletion capacity) is linked to the population risk, and our algorithms as a result assume that the dataset is drawn i.i.d. This is necessary to relate the empirical loss to the population loss, and is the standard setting in learning. While these notions (risk and deletion capacity) do rely on this standard i.i.d. assumption, it is important to note that the definition of $(\varepsilon, \delta)$-unlearning (Definition 2.5) itself does not: and, in that sense, is indeed analogous to the standard definition of DP (which is indeed assumption-free and non-distributional).
* We will expand on this reduction between empirical and population loss in the supplemental of the paper. As the reviewer correctly mentions, this correspondence is indeed where the i.i.d. assumption on the dataset comes in.
* This is a good question! The reparameterization to $\{\pm1/\sqrt{d}\}^d$ is mostly for convenience in the lower bound argument, and prevents any unwanted dependence on the dimension from going unaccounted for (e.g., in the Lipschitz constant of the loss function we consider for the lower bound). More precisely, this “shrinking” makes all datapoints considered unit vectors (with respect to the $\ell_2$ norm), which simplifies the argument and makes it “cleaner.”
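For completeness, the computation behind the unit-norm remark: for any $x \in \{\pm 1/\sqrt{d}\}^d$,

```latex
\|x\|_2^2 \;=\; \sum_{i=1}^{d} \Bigl(\pm\tfrac{1}{\sqrt{d}}\Bigr)^{2} \;=\; d \cdot \tfrac{1}{d} \;=\; 1 .
```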
---
Rebuttal Comment 1.1:
Comment: I thank the authors for taking the time to respond to my questions. | null | null | null | null | null | null |
Accelerating Monte Carlo Tree Search with Probability Tree State Abstraction | Accept (poster) | Summary: This paper presents a novel approach called Probability Tree State Abstraction (PTSA) to improve the efficiency of Monte Carlo Tree Search (MCTS) algorithms, which have shown remarkable performance in challenging tasks. The computational complexity of MCTS algorithms is influenced by the size of the search space, and the proposed PTSA algorithm aims to address this issue. The algorithm introduces a general tree state abstraction with path transitivity, which helps in reducing the number of mistakes during the aggregation step. The theoretical guarantees of transitivity and aggregation error bound are also provided. The PTSA algorithm is integrated with state-of-the-art MCTS-based algorithms, including Sampled MuZero and Gumbel MuZero, and experimental results on various tasks demonstrate its effectiveness. The PTSA algorithm accelerates the training process of these algorithms, achieving a search space reduction of 10% to 45%.
Strengths: 1. The approach of aggregation considers the entire path, not only a state, is novel and unique.
2. The PTSA algorithm presented in this paper can be applied, in a general way, with any of the state abstraction functions mentioned in previous studies.
3. The paper provides extensive experimental data. It includes environments such as Atari games, as well as tasks with continuous action spaces like CartPole and LunarLander, and board games like Gomoku. The rich variety of experimental environments demonstrates the effectiveness of the proposed method across various tasks.
4. Integrating PTSA with state-of-the-art algorithms can achieve comparable performance with smaller branching factors. In other words, PTSA provides a more efficient method with less computational cost.
Weaknesses: 1. The meaning of probability in PTSA (in Definition 4.3) is not well-defined and requires further clarification. This will be addressed in the Questions section below.
2. There are some errors in the proofs presented. This will be discussed in detail in the Questions section as well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why does $v_0.pruning$ do in line 17 in Algorithm 1? Any difference from $S_L.delete(b_j)$.
2. What role does the probability $\mathbb{P}$ in Definition 4.3 play in Algorithm 1? And how is $\phi$ calculated in line 15 of Algorithm 1? In other words, when does $\phi(b_i)=\phi(b_s)$ hold true? Are the two related?
3. In your paper, you mentioned a previous work titled "Monte Carlo Tree Search with Iteratively Refining State Abstractions." That method directly calculates the distance between states and performs aggregation if the distance, denoted as $d(s_1, s_2)$, is below a threshold. This approach differs from the method proposed in your paper, but both aim to reduce the branching factor of MCTS. Have you conducted any experiments comparing your method with the approach mentioned above? I couldn't find any analysis of that method in Table 1 or the experimental section below. Some insight into the reason for this omission should be provided.
4. This paper mentions "reducing the computation time" with abstraction. My question (or curiosity) is how much overhead the checking operations (in lines 14 and 15) incur. Note that in line 207 there is a time complexity that should be described in more detail, like the $\log N_s$ term.
5. Equation (19) in the appendix is written incorrectly and needs to be fixed. For example, the final $p_{bM}(b_2, b_3)$ should be $p_{bM}(b_1, b_3)$. Also fix some other wrong indices in (19).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your uplifting review and valuable feedback. We appreciate your comments and would like to address them below.
**Questions:**
1. We apologize for not providing a clear explanation of the pruning/delete/add actions in the paper. We have provided more detailed explanations of the actions and notation used in Algorithm 1:
$S_L $ is a list that records the searched paths in the current search tree. $S_L.delete(b)$ and $S_L.add(b)$ refer to removing and recording path $b$ in $S_L$ respectively. The $pruning(b_j)$ action denotes removing unique nodes of path $b_j$ compared to the other abstracted path in the search tree.
2. We sincerely apologize for any misunderstanding caused by the simplification of the abstraction decision in Algo. 1. $(\phi(b\_i)=\phi(b\_s))$ returns a boolean value, where "true" denotes aggregating $b\_i$ and $b\_s$. This boolean value is determined by calculating the probability $\mathbb{P}(\phi(b\_i)=\phi(b\_s))$ based on Equations (5) and (6). In the practical implementation, once the probability is computed, a random number in $[0, 1)$ is generated and compared to the probability. If the random number is less than the probability, $\phi(b\_i)=\phi(b\_s)$ holds true. We will provide a clearer explanation of this issue in the final version.
3. The work presented in "Monte Carlo Tree Search with Iteratively Refining State Abstractions" makes significant contributions by introducing an alternative approach called "abstraction refining" to replace progressive widening in MCTS. Their experiments primarily compare the "abstraction refining" method with progressive widening. As "abstraction refining" improves node selection and expansion, it does not conflict with our method, which performs abstraction after completing the backpropagation. Additionally, due to the challenge of accurately calculating the distance between hidden states, there is no clear way to integrate the "abstraction refining" method with the MuZero algorithm. Hence, this comparison is difficult to conduct in our experiments.
Furthermore, Table 1 focuses on describing the state abstraction functions used in previous model-free algorithms, which is why the "abstraction refining" method is not included in Table 1. In future work, we plan to study the impact of incorporating the criterion $d(s_1, s_2)$ in our algorithm.
4. According to your suggestion, we have conducted more experiments including 9 Atari games to further discuss the limitation of the added computational complexity. Experimental results (shown in Table 1 of the uploaded PDF) demonstrate that PTSA introduces an acceptable decrease in trajectory collection efficiency (less than 8% on average), which results in a significant reduction in the whole training time.
5. Thanks for your careful review. We have corrected the typos in Appendix Equation (19):
\begin{equation}
\begin{aligned}
p_{bM}\left(b_{1}, b_{2}\right) \wedge p_{bM}\left(b_{2}, b_{3}\right)
&= p_{vM}\left(v_{1}, v_{3}\right) \wedge p_{vM}\left(v_{2}, v_{4}\right) \wedge p_{vM}\left(v_{3}, v_{5}\right) \wedge p_{vM}\left(v_{4}, v_{6}\right) \\
&= p_{vM}\left(v_{1}, v_{5}\right) \wedge p_{vM}\left(v_{2}, v_{6}\right) \\
&= p_{bM}\left(b_{1}, b_{3}\right).
\end{aligned}
\end{equation}
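For concreteness, the sampling step described in answer 2 above amounts to a Bernoulli draw; the following is a minimal sketch, where the function name and interface are illustrative assumptions, not the authors' implementation:

```python
import random

def should_aggregate(p_same, rng):
    """Boolean decision for phi(b_i) = phi(b_s).

    p_same is the probability P(phi(b_i) = phi(b_s)) computed from
    Eqs. (5)-(6) of the paper; this helper only illustrates how the
    probability is turned into a boolean aggregation decision.
    """
    # Draw u ~ Uniform[0, 1); aggregate the two paths when u < p_same,
    # i.e. the decision is a Bernoulli(p_same) sample.
    return rng.random() < p_same

rng = random.Random(0)
# Over many draws the empirical aggregation rate approaches p_same.
rate = sum(should_aggregate(0.8, rng) for _ in range(10000)) / 10000
```

With `p_same = 0` the paths are never aggregated, with `p_same = 1` they always are, so the hard abstraction conditions of Table 1 can be recovered as special cases of this probabilistic decision.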
We appreciate your feedback and support. | Summary: This paper proposes a novel search algorithm, PTSA, to improve the search efficiency of MCTS. Empirical results show that PTSA can be integrated with Sampled MuZero and Gumbel MuZero and can reduce the original branching factor by 10% to 45%.
Strengths: The proposed PTSA algorithm can reduce the branching factor of MCTS and improve the computational efficiency of MCTS-based algorithms. The authors also provide both theoretical and empirical analyses.
Weaknesses: * The author claims that the proposed method can reduce the branching factor by 10% up to 45%. However, the result is based on only five Atari games. Based on Figure 3, the aggregation percentage varies across different Atari games. Can these five games represent all Atari-57 games? It would be more convincing to run more Atari games to support the claims.
* Moreover, it is unclear for the aggregation percentage on control tasks and Gomoku experiments. Without these experiments, it is inappropriate to claim “reduce branching factor by 10% up to 45%”.
* The time complexity of the proposed approach is higher than the original MCTS. It is unclear whether PTSAZero will still improve its efficiency when running under a larger simulation number. Currently, the authors only run “PTSAZero N=18” in Atari experiments. Will “PTSAZero N=30” perform better than “PTSAZero N=18”?
* Besides, in the board games such as Gomoku or Go, it is common to run large simulation numbers such as N=400 or N=800 during evaluation. It would be better to provide additional experiments/analyses to demonstrate the scale-up ability for PTSAZero. For example, providing the aggregation percentage/time usage/strength when using different simulation numbers.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * In Algorithm 1, line 15, if $b_i$ and $b_s$ have different lengths, will their $\phi_{Q_{\alpha}^{\psi}}(b)$ be different? In addition, what is the definition for $\phi_{Q_{\alpha}^{\psi}}(b)$? Definition 4.3 only shows the probability.
* In Algorithm 1, line 17, $v_0$ is the root node and $b_j$ is a selection path. What does $v_0$.pruning($b_j$) mean?
* In Figure 2, will PTSA get better performance when using a larger simulation (N=30)? Current experiments only used N=18. It would be better to add another experiment with a larger simulation to show the scale-up ability of PTSA.
* In the Gomoku experiment, what does the expert opponent stand for? How many simulations are used in the Gomoku evaluation? As Gomoku is a two-player game, why not compare PTSAZero to other methods directly?
* line 302: “The winning rates of different methods w.r.t. training time are shown in Figure 4”. Should the range of the win rate be between 0 and 1 in Figure 4?
* In Figure 3, it seems that the aggregation percentage varies across different Atari games. Which type of game may have a higher aggregation percentage? Why do you choose these games? Can these five games represent Atari-57 games? Do you have more experiments on other Atari games?
* In Atari experiments, “As Gumbel MuZero does not require large simulations for Atari and control tasks”. In fact, Gumbel MuZero improves training efficiency by only using N=2 in Pacman, and the result is comparable to N=50. It would be more convincing to add additional experiments to compare the training efficiency between “Gumbel MuZero N=2” and “PTSAGZero N=2“ in Atari experiments.
* In Figure 2 (f), the label of the green curve is “MuZero N=50”, should it be “MuZero N=30”?
* Line 17, typo: Muzero -> MuZero.
* Figure 2, typo: state-of-art -> state-of-the-art.
* Figure 3 is shown after Figure 4. Please fix the order of these figures.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and the explicitly listed concerns. We hope that our answers will help to resolve the concerns.
**Weaknesses**:
1. We appreciate your feedback and acknowledge your concern regarding the limited number of Atari games used in our evaluation. We have conducted more Atari experiments (Assault, Seaquest, Breakout, and MsPacman tasks). The experimental results can be found in Figure 1 (in the uploaded PDF), which demonstrate the effectiveness of our algorithm in training acceleration.
2. We would like to emphasize that our method improves the training efficiency with search space reduction. In our experiments including Gomoku, Control, and Atari tasks, we observe a 10%-45% search space reduction. As shown in Figure 2 (in the uploaded PDF), we provide additional evidence of reduced branching factors in more tasks to support this statement. We will revise our claim to eliminate this ambiguity.
3. Figure 1 in the uploaded PDF shows the training results of each method with $N \in \\{ 18,30\\}$, indicating that PTSA exhibits superior training efficiency compared to other algorithms with the same simulations. This result demonstrates that the PTSA also improves the training efficiency with larger simulations.
We also compare the performance of PTSA N=30 and PTSA N=18. The results show that PTSA N=18 achieves comparable performance as PTSA N=30 with less training time. The presentation of results for N=18 in the paper aims to highlight how PTSA enables MCTS-based algorithms to achieve comparable performance even at smaller N values, thereby enhancing training efficiency.
4. We appreciate your suggestion. To compare the effectiveness and aggregation percentage of PTSAZero with different numbers of simulations ($N$), we have conducted additional experiments as shown in Figure 2&3 (in the uploaded PDF). We observe that increasing the number of simulations does not significantly affect the aggregation percentage for the same environment and state abstraction function.
**Questions:**
1. The two abstracted paths are required to be equal in length, which aligns with the mapping principle of state abstraction theory. Given two paths to be abstracted, the mapping principle states that two nodes at the same depth should be mapped into the same abstracted state. If the two paths have different lengths, their abstracted path lengths will also differ, which conflicts with the mapping principle.
Similar to other state abstraction functions, the $\phi_{Q^{\psi}_{\alpha}}(b)$ function maps a given path to a path in the abstract space. In Section 4.3, we define a method to compute the probability of two paths being mapped to the same abstract path under $\phi\_{Q^{\psi}\_{\alpha}}(b)$. We will provide a clearer explanation in the final version.
2. We apologize for not providing a comprehensive explanation of the pruning operation in the paper. We have provided more detailed explanations of the actions and notation used in Algorithm 1, including the pruning/delete/add operations. $S_L$ is a list that records the searched paths in the current search tree. $S_L.delete(b)$ and $S_L.add(b)$ refer to removing and recording path $b$ in $S_L$ respectively. The $pruning(b_j)$ action denotes removing unique nodes of path $b_j$ compared to the other abstracted path in the search tree.
3. According to your suggestions, we have conducted additional experiments. The results can be found in Figure 1 (in the uploaded PDF), while the analysis is given in response to the weakness.
4. Two evaluation methods are commonly used for AI agents in board games: pitting the agents against each other, and employing the same rule-based agent as the opponent. Using the same rule-based agent as the opponent, as is common in previous works [1,2,3], is sufficient for a fair comparison in our experiments. Additionally, to maintain consistency, we use the same number of simulations for evaluation as during training.
5. We apologize for the incorrect use of the term "win rate." It should have been referred to as the "return" in the figure, where a value of 1 represents a win and -1 represents a loss during evaluation. We will rectify this typo in the final version.
6. You have raised an intriguing point that the characteristics of the state space in different tasks and the state abstraction function are the key factors influencing the reduction effect. This presents an important direction for future research.
We have conducted more Atari experiments, and the corresponding results can be found in the uploaded PDF. 9 Atari games are utilized in our experiment, including Pong, Alien, Breakout, etc., which are commonly used tasks in the previous works.
7. Our motivation is to enhance MCTS efficiency by reducing the branching factor. However, when N is very small, the advantages of the PTSA algorithm diminish due to the extremely limited search space. Considering that N=2 already results in a significantly small branching factor, it is unnecessary to evaluate the performance in such a small N scenario.
8-11. Thank you very much for pointing out these typos and mistakes; we will correct them and improve readability in the final version.
[1] Spending thinking time wisely: Accelerating MCTS with virtual expansions
[2] Action guidance with MCTS for deep reinforcement learning
[3] Opponent modeling based on action table for MCTS-based fighting game AI
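A minimal sketch of the $S_L$ bookkeeping and pruning semantics described in answer 2 above; the class and function names are illustrative assumptions, not the paper's code:

```python
class SearchedPaths:
    """The S_L list: records the searched paths in the current tree."""

    def __init__(self):
        self.paths = []            # each path is a tuple of node ids

    def add(self, b):              # S_L.add(b): record path b
        self.paths.append(b)

    def delete(self, b):           # S_L.delete(b): remove path b
        self.paths.remove(b)


def pruning(tree_nodes, b_j, b_kept):
    """v_0.pruning(b_j): remove the nodes unique to path b_j compared
    with the abstracted path b_kept it was aggregated with."""
    return tree_nodes - (set(b_j) - set(b_kept))


# Two paths share nodes 0 and 1, then diverge at the last node.
nodes = {0, 1, 2, 3}
b_kept, b_j = (0, 1, 2), (0, 1, 3)
S_L = SearchedPaths()
S_L.add(b_kept)
S_L.add(b_j)
S_L.delete(b_j)                      # b_j is aggregated into b_kept,
nodes = pruning(nodes, b_j, b_kept)  # so its unique node (3) is pruned
```

This also makes the distinction in the reviewer's question concrete: `delete` only removes the path record from $S_L$, while `pruning` removes the path's unique nodes from the tree itself.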
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses and additional experiments. Do you have experiments on using large simulation numbers (such as N=400 or N=800) on Gomoku? Since PTSA significantly reduces the branching factors, it would be great to see more reduction rates when using large simulations.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. Due to the several days required for training with N=400 and N=800, the results were not immediately presented in the uploaded PDF. The following table presents the average reduction rate with different simulations in the Gomoku-19$\times$19 task, where the reduction rate increases with the increase in the number of simulations.
| Simulations| reduction rate|
|------------------|------------|
| N=25 | 28.3% |
| N=100 | 33.4% |
| N=400 | 37.7% |
| N=800 | 40.3% | | Summary: The paper proposes a novel tree state abstraction function (PTSA) for use during MCTS. The primary contributions of the paper are algorithmic and empirical. The key idea involves aggregating paths in the tree if their Q-values (as probabilities) along the path closely match an existing path with the same parent node. An analysis of the abstraction quality and error bounds are included. Experiments on Atari and Gym environments show that a recent MCTS variant leveraging PTSA outperforms a number of strong baselines.
UPDATE: I thank the authors for their detailed response. After reading the other reviews and comments, I'm more inclined to recommend acceptance and have updated my score to reflect that.
Strengths: + The paper tackles an important problem of accelerating MCTS search. It does so using tree state abstraction. The approach is intuitively clear and is also supported by analysis.
+ The paper proposes a novel tree state abstraction function based on path transitivity. The abstraction function is based on the difference in the Q values of the nodes (converted to probabilities) in the path. Although important implementation details are not clear to me, the intuition that abstracting entire paths accelerates search makes sense as does the abstraction of only the most recent path during search leading to smaller trees during online search. The paper is accompanied by analysis showing the correctness of the approach and an error bound under certain conditions. Overall, the algorithm seems to have high novelty.
+ The experiments are conducted on a number of Atari and Gym environments. Sampled MuZero with the proposed abstraction (PTSA) outperforms a number of strong baselines by a significant margin. The implementation seems to work very well. This seems to be a new state of the art in state abstraction for modern MCTS variants.
Weaknesses: - The approach is intuitively clear and seems to perform well empirically, which increases confidence. However, I found the description of the implementation details of Algorithm 1 difficult to follow. Please consider including a more careful description of the implementation in Section 4. The issue is exacerbated by the absence of code. This is currently preventing me from giving the paper a higher score.
- For example, the actual implementation of the algorithm in L15 of Algorithm 1 is unclear to me. I expect it to involve Eq 5 with some value of $\alpha$ like 0.7. But Eq 5 returns a real-valued prob estimate for a path pair (b_i, b_s). How is that turned into a boolean value (True / False) in L15? It's probably not real-valued equality. This is a key detail so please explain.
- There are a number of other implementation details that are difficult to find or missing. See the questions for examples.
- Given that the primary contribution is algorithmic and empirical, I'd have hoped to see the source code included. Reproducibility is going to be challenging without it and since this paper seems to establish a new state of the art, I'd encourage the authors to release their code.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - I had a number of questions about the exact implementation
- What exactly is the implementation of the branching condition of L15 of Algorithm 1? How does it relate to Eq 5 and 6 which is defined as a function taking two inputs (b_i, b_j) and returning a probability?
- What exactly is learned during offline learning vs online search? $d, v, p$ are likely learned offline. What about the abstraction function $\phi$? This seems online to me. Correct?
- What is $l$ set to in the implementation? How does it value impact performance?
- What is the implementation of the pruning function in L17 of Algorithm 1?
- How are the legal actions for the abstracted state computed?
- What is the size of $S_L$? How was it chosen? How does varying it affect performance?
- As described in L346-349, there seem to be a number of choices for the designer to make. These are not clear to me at the moment besides the obvious ones (e.g., $\alpha, N$). Please enumerate what exactly needs to be hand-designed or set manually for a given domain and what can be used off-the-shelf.
- Is there a reason to not include code? Will code be included in the final version?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. We address each comment and concern below.
**Weaknesses:**
1. We appreciate your feedback and will provide a more careful description of the implementation details in the final version. We have provided more detailed explanations of the pruning/delete operation used in Algorithm 1. Detailed modifications can be found in the general response.
2. We will release our code upon acceptance, and conduct further research to explore how different state abstraction functions can be better tailored to fit the characteristics of the environment. We hope that this research will help to enhance the performance and applicability of our approach in a wider range of applications.
**Questions:**
1. Questions about the exact implementation:
1. We sincerely apologize for any misunderstanding caused by the simplification of the abstraction decision in Algo. 1. $(\phi(b\_i)=\phi(b\_s))$ returns a boolean value, where "true" denotes aggregating $b\_i$ and $b\_s$. This boolean value is determined by calculating the probability $\mathbb{P}(\phi(b\_i)=\phi(b\_s))$ based on Equations (5) and (6). In the practical implementation, once the probability is computed, a random number in $[0, 1)$ is generated and compared to the probability. If the random number is less than the probability, $\phi(b\_i)=\phi(b\_s)$ holds true. We will provide a clearer explanation of this issue in the final version.
2. Yes, your understanding is correct. Offline learning involves updating the dynamics, prediction, and value networks by sampling trajectories from a buffer. Online searching involves interacting with the environment to obtain high-quality trajectories, similar to MuZero algorithm. However, due to space limitations, Algorithm 1 in the paper only provides the MCTS search process, which may cause confusion for readers. We would like to clarify this issue in the final version.
3. The parameter $l$ is determined by the number of non-shared nodes between paths $b_i$ and $b_j$. A larger value of $l$ indicates that more nodes need to be evaluated to determine if they satisfy the aggregation condition. When $l=1$, path aggregation will degrade to node aggregation.
4. We apologize for not providing a clear explanation of the pruning/delete/add actions in the paper. We have provided more detailed explanations of the actions and notation used in Algorithm 1: $S\_L $ is a list that records the searched paths in the current search tree. $S\_L.delete(b)$ and $S\_L.add(b)$ refer to removing and recording path $b$ in $S\_L$ respectively. The $pruning(b\_j)$ action denotes removing unique nodes of path $b\_j$ compared to the other abstracted path in the search tree.
5. The process of obtaining legal actions is similar to MuZero, which stores hidden states rather than real states. Therefore, the legal actions at the root node are obtained based on the corresponding real states. For example, in a board game, the legal actions represent the legal moves available in the current real state.
6. $S\_L $ is a list that records the searched paths in the current search tree. In algorithms such as MuZero, SMuZero, and EfficientZero, the size of $S\_L$ equals the number of simulations conducted. In our proposed algorithm, the size of $S\_L$ is adjusted by the number of simulations subtracting the number of path aggregations. A larger $S\_L$ indicates a larger tree search space, which may lead to inefficient exploration.
2. We would like to resolve your concerns from the following two aspects:
a. State abstraction functions need to be selected based on different environments: For example, $\phi_{a^*}$ requires the values of two states to be exactly equal and have consistent optimal actions. Such abstraction conditions can be inefficient for some tasks, such as Atari, where two states may have similar but not equal values. For different tasks, it may be necessary to choose suitable state abstraction functions to achieve better abstraction performance.
b. The parameters of the state abstraction function require manual tuning: Some parameters of the state abstraction function need to be adjusted manually based on the characteristics of the environment. For instance, the parameter $d$ in $\phi\_{Q^*\_d}$ typically ranges from 0 to 1. If the Q-value distribution in the environment is relatively smooth, a smaller value for $d$ (e.g., 0.2) is generally preferred. The parameter $\epsilon$ in $\phi^{\epsilon}\_{Q^*}$ is adjusted according to the range of the reward function. When the reward function has a larger range, the value of $\epsilon$ tends to be relatively larger as well. Furthermore, you mentioned some parameters used in MCTS-based algorithms, such as simulations $N$. These parameters can be informed by previous experiences with MCTS work. For example, in Atari tasks, a common choice for the number of sampled actions $K=6$.
Studying how to select suitable state abstraction functions and fine-tune the parameters accordingly can be a direction for future research. We hope these responses can resolve your concerns.
3. We will release our code upon acceptance, and provide the training instructions along with the corresponding random seed settings.
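A minimal sketch of how the quantity $l$ from answer 1.3 above could be computed for two equal-length paths; this is a hypothetical helper, not taken from the paper:

```python
def non_shared_length(b_i, b_j):
    """l: the number of non-shared (diverging) nodes between two
    equal-length paths with a common prefix."""
    assert len(b_i) == len(b_j), "abstracted paths must be equal length"
    shared = 0
    for u, v in zip(b_i, b_j):
        if u != v:
            break                  # paths diverge from here on
        shared += 1
    return len(b_i) - shared

# l = 1: only the leaves differ, so path aggregation degenerates to
# plain node aggregation.
l1 = non_shared_length((0, 1, 2), (0, 1, 9))
# A larger l means more nodes must satisfy the aggregation condition.
l2 = non_shared_length((0, 1, 2, 3), (0, 7, 8, 9))
```

Here `l1` is 1 and `l2` is 3, matching the description that a larger $l$ requires evaluating more node pairs before the two paths can be aggregated.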
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. After reading the other reviews and comments, I'm more inclined to recommend acceptance and have updated my score to reflect that. | Summary: To accelerate MCTS, the paper proposed a novel probability tree state abstraction (PTSA) algorithm to improve the search efficiency of MCTS.
They define states that are similar by using path transitivity and claim that such a method can have fewer mistakes. According to the results of Atari and Gomoku, the method can be 10% ~ 45% faster.
Strengths: 1. The method provides some theoretical guarantee.
2. The experiments take place in many environments.
3. The ablation studies have tried many abstraction functions.
Weaknesses: 1. The intuition of the paper is weird. The method requires all states on the paths to be similar. However, there are two problems here. First, the V value might be more incorrect at the beginning. Second, even if the V value is correct for the whole path, this method reduces the chance of pruning more nodes. For example, in Atari, agents can reach the exact same state via different paths. Since the environment is an MDP, we should merge those two states.
2. It is unknown how the method performs when the number of simulations is higher. The abstraction error normally increases as the number of simulations increases. The method might delete some good paths that can only be identified after numerous simulations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How do you prune a path from a tree? What will happen to those branches that are on the path?
2. Have you considered abstraction functions that also require the best action should be the same[1]?
[1] Are AlphaZero-like Agents Robust to Adversarial Perturbations?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Stated in the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. We hope that our answers will help to resolve your concerns.
**Weaknesses:**
1. For the first problem, during the early stage of training, inaccurate estimation of the V values may prevent state abstraction methods from correctly aggregating states, which is a common issue for previous state abstraction methods. Addressing this issue is indeed a significant contribution of this paper. PTSA provides improved fault tolerance compared to previous abstraction functions by probabilistically avoiding incorrect aggregations, especially during the early stage of training, which is consistent with the results presented in Section 5.2 of our experiments.
For the second problem, aggregation cannot be applied to all similar or identical states in the tree search space. For instance, consider two paths with the same start node: $b_1=(v_0,v_1,v_2,\cdots,v_n)$ and $b_2=(v_0,v'_1,v'_2,\cdots,v'_n)$, where $v_n$ and $v'_n$ represent similar or identical states. Assume $v_n$ and $v'_n$ are aggregated into the same node. This operation creates a ring structure, which conflicts with the MCTS tree structure. Intuitively, the issue could be addressed by deleting the intermediate nodes $(v_1,\cdots,v_{n-1})$ or $(v'_1,\cdots,v'_{n-1})$. However, deleting these nodes decreases the probability of exploring from them, leading to inefficient exploration.
2. We have conducted additional experiments to demonstrate that the proposed state abstraction function does not affect the convergence and performance of the algorithm when using larger simulation counts. Please refer to Figure 3 in the uploaded PDF for the experimental results. One possible reason is that more simulations lead to more accurate estimated node values, which makes the judgment of state abstraction functions more accurate.
**Questions:**
1. We apologize for the lack of detail in our paper regarding the pruning operation. The $pruning(b_j)$ action denotes removing unique nodes of path $b_j$ compared to the other abstracted path in the search tree. We have provided more detailed explanations of the add/delete operations used in Algorithm 1. The specific modifications can be found in the general response.
2. Thanks for your suggestion. The requirement of the same best action is a very interesting and useful property, and it is consistent with the state abstraction $\phi^{\epsilon}_{a^*}$ described in Table 1. This property may help to improve the robustness of the existing state abstraction functions. We plan to explore and combine this property with our proposed state abstraction method in future work. Furthermore, we will cite and discuss this work [1] in the final version.
[1] Are AlphaZero-like Agents Robust to Adversarial Perturbations?
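To make the discussion of such abstraction conditions concrete, here is a hedged sketch of two thresholded Q-value predicates; the exact definitions are those in Table 1 of the paper, and the names and semantics below are one plausible reading, not the authors' implementation:

```python
def phi_eps_a_star(q1, q2, eps):
    """Aggregate two states when they share the same optimal action and
    their optimal values agree within eps (one plausible reading of the
    phi^eps_{a*} condition; illustrative only)."""
    a1 = max(range(len(q1)), key=q1.__getitem__)  # argmax over actions
    a2 = max(range(len(q2)), key=q2.__getitem__)
    return a1 == a2 and abs(q1[a1] - q2[a2]) <= eps

def phi_q_eps(q1, q2, eps):
    """Aggregate when the Q-values agree within eps for every action
    (an epsilon-approximate Q abstraction; illustrative only)."""
    return all(abs(x - y) <= eps for x, y in zip(q1, q2))

# Same best action (index 1) and close optimal values -> aggregated.
ok = phi_eps_a_star([0.1, 0.9], [0.3, 0.85], eps=0.1)
# Different best actions -> never aggregated, regardless of eps.
bad = phi_eps_a_star([0.9, 0.1], [0.1, 0.9], eps=1.0)
```

Here `ok` is true and `bad` is false, illustrating why the same-best-action requirement is stricter than a pure value-difference threshold.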
---
Rebuttal Comment 1.1:
Title: Official comment
Comment: Weaknesses:
1.1 Even when utilizing PTSA, there is potential for incorrect aggregations. Consider sequences $(s_1, s_2, s_3, \ldots, s_n)$ and $(s_1, s_2, s_3, \ldots, s_{n-1}, s'_n)$: if $v(s_n)$ is similar to $v(s'_n)$, then PTSA may aggregate them. However, $v(s'_n)$ might be wrong.
1.2 For MCTS, it is acceptable to have two paths leading to the same node (s1->s2->s3, s1->s4->s3) as long as there is no cycle (s1->s2->s3->s1). This approach enables the aggregation of more nodes.
2 You need to compare baselines with PTSA using the same simulation. For example, PTSAZero n=30 looks worse than MuZero n=30 in Pong.
Questions:
2 Sorry for not making [1] clear enough. Their main concept is that V can be wrong but will improve after looking forward (evaluating the next state).
Let $T(s, a)$ be the transition function. [1] will require $V(T(s_1,a^*_1))$, $V(T(s_1,a^*_2))$, $V(T(s_2,a^*_1))$, and $V(T(s_2,a^*_2))$ to have similar values. It can be extended from the optimal action to the optimal path.
---
Reply to Comment 1.1.1:
Comment: We appreciate your valuable comments and will address these points in the following response.
**Weakness**
1.1 Completely eliminating errors in aggregation remains a challenge, and existing state abstraction methods cannot guarantee the absence of incorrect aggregations. We have emphasized that our method aims to improve the fault tolerance of abstraction during the training process, rather than guaranteeing the absence of incorrect aggregations, which is consistent with the results presented in Section 5.2 of our experiments.
1.2 In MuZero, the nodes within the search tree represent hidden states rather than explicit states, posing challenges in determining the appropriate nodes to aggregate. Following your recommendations, we compare the performance of PTSAZero with the modified version (consider two similar nodes not only paths) in Pong, Freeway, and Boxing tasks. Results are given in the following table:
| Methods |Task: Time-Return | 60 mins | 120 mins | 180 mins | 240 mins | Average Reduction Rate |
| -------- | ------- | ------- | ------- | ------- | -------- | -------------- |
| PTSAZero N=18 | Pong | 8.7 | 16.8 | 17.5 | 18.1 | 16.5% |
| | Freeway | 27.8 | 28.3 | 28.8 | 27.1 | 10.6% |
| | Boxing| 35.4 | 62.5 | 89.1 | 90.2 | 41.3% |
| PTSAZero (modified) N=18 | Pong | -8.6 | -4.4 | 1.7 | 8.2 | 72.8% |
| | Freeway | 14.2 | 18.8 | 20.5 | 26.1 | 69.2% |
| | Boxing| 25.4 | 40.7 | 53.9 | 67.0 | 52.2% |
The results demonstrate that although there are more nodes to be aggregated, more incorrect aggregations may lead to worse performance.
2 Figure 1 in the uploaded PDF shows the comparison in Assault, MsPacman, Breakout, and Seaquest tasks with PTSA using the same number of simulations, which demonstrates that PTSAZero N=30 achieves comparable performance to MuZero N=30 with less training time. The following table presents the results in Pong, Freeway, and Boxing tasks:
| Methods | Task: Time-Return | 60 mins | 120 mins | 180 mins | 240 mins |
| -------- | ----------------- | ------- | -------- | -------- | -------- |
| PTSAZero N=30 | Pong | -11.2 | 4.5 | 18.5 | 19.2 |
| | Freeway | 16.7 | 27.3 | 28.5 | 27.0 |
| | Boxing| 37.2 | 64.9 | 68.5 | 81.3 |
| MuZero N=30 | Pong | -3.5 | 11.3 | 16.3 | 18.0 |
| | Freeway | 19.0 | 24.5 | 27.3 | 26.5 |
| | Boxing| 31.4 | 58.5 | 64.0 | 70.6 |
**Question**
2 Thanks for providing additional clarification for [1]. It appears that their main concept revolves around the idea that the value function V may initially be incorrect but can improve through forward-looking evaluations of the next state. PTSA compares the nodes at different depths along the path, which shares some similarities with the method described in [1]. Our contributions focus on extending the theory of state abstraction to tree structures and ensuring the transitivity of state abstractions within the tree space. Our method takes a different perspective compared to the method described in [1]. We will discuss [1] in the final version. | Rebuttal 1:
Rebuttal: ## General Response ##
We thank all reviewers for their valuable feedback. We have carefully considered your suggestions and conducted additional experiments (shown in the uploaded PDF) to address your concerns, as outlined below:
1. We have conducted more Atari experiments (Assault, Seaquest, Breakout, and MsPacman tasks). The experimental results can be found in Figure 1 (in the uploaded PDF), which demonstrate the effectiveness of our algorithm in training acceleration.
2. We have conducted more experiments to evaluate the aggregation percentage in Atari, Gomoku and Control Tasks. The experimental results can be found in Figure 2 (in the uploaded PDF), which further validate the reduction of the tree search space resulting from our proposed algorithm.
3. To compare the effectiveness and aggregation percentage of PTSAZero with different numbers of simulations ($N$), we have conducted additional experiments as shown in Figures 2 and 3 (in the uploaded PDF). We observe that increasing the number of simulations does not significantly affect the aggregation percentage for the same environment and state abstraction function.
4. As shown in Table 1 (in the uploaded PDF), we have also conducted an analysis of the time consumption. PTSA introduces an acceptable decrease in trajectory collection efficiency (less than 8% on average), which results in a significant reduction in the whole training time.
Furthermore, we have improved the presentation to make our paper more comprehensible, which will be incorporated into the final version:
(1) We provide more detailed explanations of the notations used in Algorithm 1, including the pruning/delete/add actions and the hidden state $h$:
$S_L$ is a list that records the searched paths in the current search tree. $S_L.delete(b)$ and $S_L.add(b)$ refer to removing and recording path $b$ in $S_L$ respectively. The $pruning(b_j)$ action denotes removing unique nodes of path $b_j$ compared to the other abstracted path in the search tree. $h$ denotes the hidden feature of the original real environment state, which is employed to prevent MCTS from interacting with the actual environment during simulation in MuZero.
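As a rough illustration of the bookkeeping described above, a hypothetical version of the $S_L$ operations might look as follows (paths are modeled as plain node lists; the actual Algorithm 1 operates on hidden states within the search tree):

```python
class SearchedPaths:
    """Hypothetical bookkeeping for S_L, the list of searched paths
    in the current search tree (a path here is a plain node list)."""

    def __init__(self):
        self.paths = []

    def add(self, path):
        # S_L.add(b): record path b
        self.paths.append(path)

    def delete(self, path):
        # S_L.delete(b): remove path b
        self.paths.remove(path)

    def pruning(self, path, kept_path):
        # pruning(b_j): drop the nodes unique to path b_j compared
        # to the abstracted path it is merged with
        return [node for node in path if node in kept_path]
```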
(2) We have corrected a typo in Appendix Equation (19). The corrected version is as follows:
\begin{equation}
p_{bM}\left(b_{1}, b_{2}\right) \wedge p_{bM}\left(b_{2}, b_{3}\right)
= p_{vM}\left(v_{1}, v_{3}\right) \wedge p_{vM}\left(v_{2}, v_{4}\right) \wedge p_{vM}\left(v_{3}, v_{5}\right) \wedge p_{vM}\left(v_{4}, v_{6}\right)
= p_{vM}\left(v_{1}, v_{5}\right) \wedge p_{vM}\left(v_{2}, v_{6}\right)
= p_{bM}\left(b_{1}, b_{3}\right)
\end{equation}
Finally, we would like to express our sincere gratitude to all the reviewers for their valuable suggestions. The code will be released upon acceptance. We hope that our response will resolve your concerns.
Pdf: /pdf/08257a3002e43c4b6d4830f66f2c3997803df612.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper suggests a method of abstracting the state space explored by a Monte Carlo Tree Search (MCTS) algorithm, in order to reduce the complexity of the search. We can create an abstraction of the state space for MCTS by considering an abstraction over entire paths in the tree - two paths of equal length, that start from the same node in the tree, can be aggregated into a single abstract state, thus reducing the search space. The paper proposes to use a probabilistic approach to the abstraction process, using the justification that this enables the algorithm to recover from aggregation errors that it commits early on. The specific probabilistic approach discussed relies on a divergence measure between the distribution of the value functions across the entire two paths, thus merging together with high probability actions that lead to similar distributions of the value function. This abstraction helps mitigate the worst weakness of MCTS - it reduces the large search space. Some theoretical guarantees are provided, as well as several experimental results for different game problems and for control tasks.
Strengths: The paper deals with the very important challenge of improving MCTS techniques. This type of research is applicable in many domains, as this is a well known and well used algorithm.
Strengths: The paper deals with the very important challenge of improving MCTS techniques. This type of research is applicable in many domains, as this is a well known and well used algorithm. The experimental results look extensive and well-presented, and are the main strength of the paper. I especially liked the comparison of different state abstraction functions, as it showcases the contribution of the paper in coming up with a specific one that seems to work well. Adding a successful novel approach to a well-established algorithm is not a trivial task, and the experimental results seem very promising. This seems like a strong enough reason to publish on its own.
Weaknesses: I thought the main weakness of the paper is its readability. I had a tough time understanding the approach and the logic behind it, even though I have some experience with MCTS specifically (though admittedly, it has been a while). More careful attention could be given to notation and explanations. The math in this paper requires close scrutiny, and some of the explanations seem to assume close familiarity with the specifics of MCTS, as well as with state abstraction functions. This results in a reduced reading experience and lower comprehension.
Some examples:
1. In equation (1), Q is never explicitly defined; Figure 1 appears much earlier in the paper than the definition of the probability state abstraction.
2. The complex distinction between paths, states, and nodes is not explicitly stated, and is sometimes ignored: Table 1 is referenced during the general RL discussion, which uses state notation (s1, s2), but the table uses a different notation that is later used for nodes (v1, v2).
3. Some of the notation within Algorithm 1 is never explained (e.g., actions like pruning/delete/add and the usage of a hidden state h).
4. The usage of Q* is never explained.
5. In the explanation after Definition 4.3, "encourages nodes that have the same candidate actions with similar value distribution expectations to be aggregated" — should that be "encourages paths"? The entire definition seems to be about aggregating paths rather than specific states, albeit paths that start from the same node.
It is fine to delegate some details to referenced works, but a paper should at least succinctly explain its own notations and be as self-contained as possible. I trust these weaknesses in explanations and paper organization can be fixed by the authors.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. Are you planning to publish the code you used?
2. Please check your math for some typos - eq. 19 in appendix A.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: Some limitations are briefly addressed, namely the need for hyper-parameter tuning and manually selecting the underlying abstraction function. I believe another limitation may lie in the added computational complexity of this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your uplifting review and valuable feedback. We appreciate your comments and would like to address them below.
**Weaknesses:**
Thanks for your feedback on the paper's readability. We will include clearer and more coherent explanations of our method to improve the readability.
1. $Q(s, a)$ denotes the value of action $a$ in state $s$. We will explicitly define all variables and concepts used in the final version. Moreover, we will revise the paper to ensure that the definition of the probability state abstraction is presented before Figure 1.
2. We will provide clearer and more explicit definitions of the concepts of paths, states, and nodes in the final version: a path is a sequence of nodes in the search tree; a node in the search tree denotes the representation of the corresponding state. Following your advice, we will use the more general state notation $s$ as the input to the abstraction function in Table 1.
3. We apologize for not providing a clear explanation of the pruning/delete/add actions in the paper. We have provided more detailed explanations of the actions and notation used in Algorithm 1:
$S_L $ is a list that records the searched paths in the current search tree. $S_L.delete(b)$ and $S_L.add(b)$ refer to removing and recording path $b$ in $S_L$ respectively. The $pruning(b_j)$ action denotes removing unique nodes of path $b_j$ compared to the other abstracted path in the search tree. $h$ denotes the hidden feature of the original real environment state, which is employed to prevent MCTS from interacting with the actual environment during simulation in MuZero.
4. $Q^*(s,a)$ denotes the value of the optimal action in state $s$.
5. We appreciate and agree with your opinion that using "path" instead of "state" is more appropriate. We will make the necessary changes in the final version.
**Questions:**
1. We will release our code upon acceptance, and also conduct further research to explore how different state abstraction functions can be better tailored to fit the characteristics of the environment. We hope that this research will help to enhance the performance and applicability of our method in a wider range of applications.
2. Thanks for your careful review. We have corrected the typos in Appendix Equation (19):
\begin{equation}
p_{bM}\left(b_{1}, b_{2}\right) \wedge p_{bM}\left(b_{2}, b_{3}\right)
= p_{vM}\left(v_{1}, v_{3}\right) \wedge p_{vM}\left(v_{2}, v_{4}\right) \wedge p_{vM}\left(v_{3}, v_{5}\right) \wedge p_{vM}\left(v_{4}, v_{6}\right)
= p_{vM}\left(v_{1}, v_{5}\right) \wedge p_{vM}\left(v_{2}, v_{6}\right)
= p_{bM}\left(b_{1}, b_{3}\right).
\end{equation}
We really appreciate your feedback.
**Limitations:**
Thanks for your feedback on the limitations of our work. To further discuss the limitation of the added computational complexity, we have conducted more experiments on 9 Atari games. Experimental results (shown in Table 1 of the uploaded PDF) demonstrate that PTSA introduces an acceptable decrease in trajectory collection efficiency (less than 8% on average), which results in a significant reduction in the whole training time. | null | null | null | null | null | null
Fine-Grained Theoretical Analysis of Federated Zeroth-Order Optimization | Accept (poster) | Summary: This paper provides generalization analysis of federated zeroth order optimization.
Strengths: The paper is well-written and addresses a relevant problem. The theoretical results appear correct, although I haven't thoroughly checked them.
Weaknesses: Since I'm not super familiar with this direction, I just have a few clarification questions I have asked below. The one limitation in my opinion is the lack of multiple-local updates at the clients.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Theory:
- Line 105: what is the intuitive reason for the requirement $b_2 \geq d$?
- In (3), aren't we considering multiple local steps at the clients?
- In (5), shouldn't there be an additional difference $\mathbb E[F_S(w(S))] - F(w^*)$?
- In the paragraph following Theorem 1 in lines 187-88, it is said that $\mathbb E[F_S(A(S))]$ has no adverse impact on the upper bound. Why so? If this term is not close to zero, doesn't it introduce a bias, irrespective of $\epsilon$?
- Line 206: one can always find constant $c$ such that $\beta c \geq 1$ holds. Shouldn't there be some more conditions for this statement to hold?
- Line 228 "while introducing the dependence $(nN)^{-1}$": but theorem 2 also has $(nN)^{-1}$
- Line 249: what is $\alpha$ in this bound?
- Are all the results stated assuming full-client participation, since only $N$ appears in the bounds, even though (3) is with partial client participation ($M$).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Line 105: what is the intuitive reason for the requirement $b_2\geq d$?
**A1:** Thanks for your constructive comments. The requirement $b_2\geq d$ is adopted in many previous works. For example, as mentioned in the second paragraph of the Introduction in [1], "deterministic zeroth-order approaches require at least $b_2\geq d+1$ queries". In Section 3.3 of [2], the objective function is also evaluated $2d$ times to estimate the gradients of all $d$ coordinates ($b_2=2d$). We have modified our explanation in line 105 of the main paper.
[1] K. Nikolakakis, et al. Black-box generalization: Stability of zeroth-order learning. NeurIPS, 2022.
[2] P. Chen, et al. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. AISec, 2017.
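For intuition on why on the order of $d$ queries arise, a deterministic coordinate-wise estimator in the style of [2] evaluates the objective twice per coordinate, i.e., $2d$ times in total. A minimal sketch (illustrative only; names and step size are ours, not the paper's estimator):

```python
import numpy as np

def coordinate_zo_grad(f, w, mu=1e-4):
    """Estimate the full gradient of f at w with 2d function
    evaluations: a central finite difference per coordinate."""
    d = w.shape[0]
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0  # i-th standard basis vector
        grad[i] = (f(w + mu * e) - f(w - mu * e)) / (2 * mu)
    return grad
```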
***
**Q2:** In (3), aren't we considering multiple local steps at the clients?
**A2:** Due to some obstacles in our proof techniques, we have not extended our analysis to the multiple-local-update case ($H>1$). If these obstacles are overcome in the future, we will further analyze the case you mentioned.
***
**Q3:** In (5), shouldn't there be an additional difference $\mathbb E[F_S(w(S))] - F(w^*)$?
**A3:** The complete form of Equation (5) should be $\mathbb{E}[F(A(S))-F(w^*)]\leq \mathbb{E}[F(A(S))-F_S(A(S))]+\mathbb{E}[F_S(A(S))-F_S(w(S))]$ since $\mathbb{E}[F_S(w(S))-F(w^*)]=F(w(S))-F(w^*)\leq 0$. To avoid ambiguity, we have corrected it with the complete inequality.
***
**Q4:** ... Why $\mathbb E[F_S(A(S))]$ has no adverse impact on the upper bound? If this term is not close to zero, doesn't it introduce a bias, irrespective of $\epsilon$?
**A4:** Thanks. In this paper, we assume the output global model has a small empirical risk [3][4] on the training set in order to study the theoretical generalization performance on the unknown testing set. If the term $\mathbb E[F_S(A(S))]$ is not small enough, we cannot expect the performance on the training set to generalize to the unknown testing set. Thus, we let $\mathbb E[F_S(A(S))]=\mathcal{O}(1/(nN))$. An explanation has been added in the paragraph following Theorem 1.
[3] Y. Lei et al. Fine-grained analysis of stability and generalization for stochastic gradient descent. ICML, 2020.
[4] S. Li, Y. Liu. High probability guarantees for nonconvex stochastic gradient descent with heavy tails. ICML, 2022.
***
**Q5:** Line 206: one can always find constant $c$ such that $\beta c\geq 1$ holds. Shouldn't there be some more conditions for this statement to hold?
**A5:** The results [5][6][7] in Table 1 are not concise due to some parameters, such as $\beta$ and the constant $c$, so our results cannot be directly compared with them. Thus, we select some specific cases (e.g., $\beta c \geq 1$) to show the advantage of our results, which is a standard strategy for comparing error bounds (see, e.g., Table 1 in [8] and Table 1 in [9]). We cannot ensure that one can always find a constant $c$ such that $\beta c \geq 1$ holds. Following your valuable comments, we have added a remark on the parameter conditions in line 206 of the main paper.
[5] M. Hardt, et al. Train faster, generalize better: Stability of stochastic gradient descent. ICML, 2016.
[6] W. Shen, et al. Stability and optimization error of stochastic gradient descent for pairwise learning. arXiv, 2019.
[7] K. Nikolakakis, et al. Black-box generalization: Stability of zeroth-order learning. NeurIPS, 2022.
[8]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[9] K. Nikolakakis, et al. Black-box generalization: Stability of zeroth-order learning. NeurIPS, 2022.
***
**Q6:** Line 228 "while introducing the dependence $(nN)^{-1}$": but theorem 2 also has $(nN)^{-1}$
**A6:** When we don’t take some specific value of $\mu$, the order of Theorem 3 is $\mathcal{O}(((nN)^{-1}\sqrt{\log T}+1)\mu T^{\frac{1}{2}}\log T)$. Compared with Theorem 2, the bound in Theorem 3 is independent of Lipschitz parameter. Meanwhile, the dependence on $\mu$ is improved from the partial dependence $(L/(nN) + \mu)$ to full dependence $((\sqrt{\log T}/(nN) + 1) \mu)$. We have corrected this statement in line 28 of our main paper.
***
**Q7:** Line 249: what is $\alpha$ in this bound?
**A7:** $\alpha$ is the parameter of PL condition (Assumption 3 in lines 171 and 172 of the main paper).
***
**Q8:** Are all the results stated assuming full-client participation, since only $N$ appears in the bounds, even though (3) is with partial client participation ($M$).
**A8:** Firstly, synchronous FedZO uses partial client participation. During the whole training process, each client has probability $\frac{M}{N}$ at each iteration of being selected to update the global model, which is why $N$ appears in our bounds. Meanwhile, for a single iteration, only $M$ clients are used to update the global model. In fact, our bounds would have the dependence $\frac{1}{M}$ if the term $\frac{M}{N}$ were relaxed to 1, since $M\in[1, N]$. The details can be found in the proofs of Theorems 2 and 3 (see lines 41 and 56). Secondly, asynchronous FedZO uses full-client participation, which is presented in Equation (7) of the main paper.
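A schematic of a partial-participation round of this kind can be sketched as follows. This is an illustrative toy (invented names, a generic randomized two-point gradient estimate, and a plain averaging step), not the FedZO implementation: the server samples $M$ of the $N$ clients, each returns a zeroth-order gradient estimate of its local loss, and the server averages the updates.

```python
import numpy as np

def fedzo_style_round(f_clients, w, M, eta=0.1, mu=1e-3, rng=None):
    """One synchronous round with partial client participation:
    sample M of the N clients, have each compute a two-point
    zeroth-order gradient estimate, and average the estimates."""
    rng = np.random.default_rng(rng)
    N, d = len(f_clients), w.shape[0]
    selected = rng.choice(N, size=M, replace=False)
    grads = []
    for i in selected:
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        g = d * (f_clients[i](w + mu * u) - f_clients[i](w)) / mu * u
        grads.append(g)
    return w - eta * np.mean(grads, axis=0)
```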
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: As I said earlier, I'm not super familiar with this space, so I just had clarification questions, all of which the authors have answered satisfactorily. I apologize for not increasing the score (owing to my own limited knowledge of the field, I cannot advocate for the paper too strongly), but I support the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your support of our work. | Summary: The analysis of Federated Zeroth-Order Optimization is limited now. This work considers the zeroth-order optimization in federated learning and establishes the generalization error bound of FedZO under the Lipschitz continuity and smoothness conditions.
Strengths: 1. The analysis of Federated Zeroth-Order Optimization is limited now. This work provides generalization bounds with theoretical analysis and asynchronous FedZO.
Weaknesses: There is no experimental analysis.
1. In deep learning, the first-order stochastic optimizer is very popular. Although some cases are mentioned where gradient information is expensive to obtain, it is necessary to use experiments to verify the necessity of the zero-order algorithm.
2. Experiments on the efficiency of the algorithms are also missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could you provide more motivation to study zeroth-order optimization in federated learning? In practice, we have many stochastic optimizers in federated learning. Does zeroth-order optimization have any advantage compared with these optimizers?
2. What is the key challenge of zeroth-order optimization in federated learning compared with the optimization in the single-machine setting?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No experiments are provided to verify their theoretical analysis
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** ... it is necessary to use experiments to verify the necessity of the zero-order algorithm.
**A1:** Thanks for your constructive comments. As you mentioned, there are cases where gradient information is expensive to obtain or even unavailable [1], such as federated hyperparameter tuning [2] or distributed black-box attacks on deep neural networks (DNNs) [3]. First-order optimizers are not suitable for these cases, which has been validated in much previous work [4][5][6].
[1]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[2]Z. Dai, et al. Federated bayesian optimization via thompson sampling. NeurIPS, 2020.
[3]X. Yi, et al. Zerothorder algorithms for stochastic distributed nonconvex optimization. arXiv, 2021.
[4]J. Nocedal and S. Wright, Numerical optimization, Springer Science & Business Media, 2006.
[5]A. Conn, et al. Introduction to derivative-free optimization, SIAM, 2009.
[6]L. Rios and N. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 2013.
***
**Q2:** Experiment about the efficiency of algorithms are also missing.
**A2:** Thanks. Some relevant experiments on FedZO have been provided in [7]. In addition, for the asynchronous version, many related works [8][9] validate that the asynchronous strategy can take full advantage of the clients' computation capabilities. It should be noted that, as far as we know, there is a gap in the theoretical generalization guarantees for federated zeroth-order optimization algorithms, especially for stability-based analysis, and our major contribution is providing the related stability-based theoretical generalization analysis. Our theoretical results conform to the empirical behaviors in [7][8][9] from both the generalization and optimization perspectives, as listed below.
**Generalization:** Our generalization bounds (Theorems 2, 3) are negatively correlated with the number M of selected clients at each iteration (the dependence $\frac{1}{M}$ can be seen in lines 41 and 56 in Appendix). It is consistent with Figures 1(b) and 4 of [7].
**Optimization:** Our optimization bounds (Theorems 4 and 6) are negatively correlated with the total iteration number $T$, which is presented in all figures of [7].
To further explicitly show the efficiency of the asynchronous version, some necessary experiments (similar to the ones of synchronous FedZO [7]) will be conducted later. We have utilized “global response” to provide Figure 1 to show the structure of asynchronous FedZO.
[7]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[8]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
[9]X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. NIPS, 2015.
***
**Q3:** Could you provide more motivation to study zeroth-order optimization in federated learning? ... Does zeroth-order optimization have any advantage compared with these optimizers?
**A3:** This paper aims to fill the gap in the generalization guarantee of zeroth-order optimization in federated learning, which is valuable for the development of related algorithms. Compared with many other stochastic optimizers in federated learning, zeroth-order optimization algorithms can tackle special cases where gradient information is unknown [10] (e.g., federated hyperparameter tuning [11] or distributed black-box attacks on DNNs [12]) and achieve satisfactory performance (e.g., FedZO can serve as a satisfactory alternative to the FedAvg algorithm [10]).
[10]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[11]Z. Dai, et al. Federated bayesian optimization via thompson sampling. NeurIPS, 2020.
[12]X. Yi, et al. Zerothorder algorithms for stochastic distributed nonconvex optimization. Automatica, 2022.
***
**Q4:** What is the key challenge of zeroth-order optimization in federated learning compared with the optimization in the single-machine setting?
**A4:** In federated learning, multiple clients use their own data to train a global model. Thus, the key challenge compared with the single-machine setting is how to handle the relationships among all clients, especially for the asynchronous version. We deal with this by introducing new formulations of the updates of FedZO and asynchronous FedZO (see Equation (4) in Appendix B.3 and Equation (9) in Appendix B.5), and by designing a new error decomposition strategy (see line 271). Besides, the unavailability of gradient information leads to another key challenge, i.e., estimating the real gradient of the loss function. In this paper, we use the standard finite difference method to estimate the gradient and the second-order Taylor expansion to make an approximation, respectively. Following your constructive comments, we have added a remark in line 118 of the main paper to demonstrate the above statements.
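The standard finite-difference estimator mentioned above can be sketched generically as a randomized two-point estimate. This is our own illustrative code under the usual assumptions (a Gaussian direction normalized to the unit sphere, smoothing radius $\mu$), not the paper's exact scheme or notation:

```python
import numpy as np

def two_point_zo_grad(f, w, mu=1e-3, rng=None):
    """Randomized two-point zeroth-order gradient estimate:
    d * (f(w + mu*u) - f(w)) / mu * u, with u uniform on the
    unit sphere; unbiased for the smoothed objective."""
    rng = np.random.default_rng(rng)
    d = w.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(w + mu * u) - f(w)) / mu * u
```

A single draw is a noisy estimate; averaging over many random directions recovers the gradient up to an $\mathcal{O}(\mu)$ smoothing bias.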
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I improve my score. Thanks.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition of our work. | Summary: This paper presents a detailed analysis of Federated Zeroth-Order Optimization (FedZO) by developing the analysis technique of on-average model stability. The authors establish generalization error bounds for FedZO and refine them using heavy-tailed gradient noise and second-order Taylor expansion. They extend the analysis to the asynchronous case and contribute to systematic assessments, on-average model stability technique, and improved bounds for practical FedZO applications. In short, the main contributions of the paper are the establishment of systematic theoretical assessments of FedZO, the development of the analysis technique of on-average model stability, and the refinement of generalization and optimization bounds for practical applications of FedZO.
Strengths: 1. To my best knowledge, the authors provide the first generalization error bound of FedZO under the Lipschitz continuity and smoothness conditions, and also refine generalization and optimization bounds by replacing bounded gradient with heavy-tailed gradient noise and utilizing the second-order Taylor expansion for gradient approximation.
2. This paper has also provided the theoretical analysis for asynchronous FedZO, which has in fact seldomly considered in the federated zeroth-order optimization field.
3. The theoretical results are generally sound and can be inspiring for the follow-up works on federated zeroth order optimization.
Weaknesses: 1. While the authors provide a new error decomposition strategy for the asynchronous case, they do not compare their results with existing error bounds for other asynchronous optimization algorithms.
2. This paper does not provide any experimental results to validate the theoretical findings, which may limit its practical relevance. Empirical verification may make this paper more sound. So, I encourage the authors to provide certain empirical experiments to validate their main theoretical results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can the authors provide more intuition behind the on-average model stability analysis technique? It may be helpful for readers who are not familiar with this concept to have a more intuitive understanding of how it relates to the generalization error.
2. The authors mention that their refined generalization and optimization bounds have important implications for practical applications of FedZO. Can they provide some examples of these practical applications and how their results could be used in these scenarios?
3. The paper focuses on the theoretical analysis of FedZO. Can the authors provide some insights into how their results could be used in practical applications of federated learning? For example, how could their results be used to improve the performance of real-world federated learning systems?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** ... compare with other asynchronous optimization algorithms.
**A1:** Thanks. Following your constructive comments, we have added comparisons with existing error bounds of asynchronous optimization algorithms [1] and [1][2] in Tables 1 and 2, respectively. Limited by the length of Rebuttal, parts of these new comparisons are listed below and in Tables 1, 2 of “global response”.
**Table 1**
|**Algorithm+Reference**|**Generalization bound**|**Tool**|**L**|**$\theta$**|**$v^2$**|**B.**|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|**AD-SGD** [1]|$\mathcal{O}(\frac{n-\lambda}{n(1-\lambda)}(1+\frac{\beta \eta_1 }{M})^T)$|Uni.|$\surd$|$\times$|$\times$|$\times$|
|**AD-SGD** [1]|$\mathcal{O}(\frac{nM-\lambda}{n(1-\lambda)}L^2T)$|Uni.|$\surd$|$\times$|$\times$|$\times$|
where $\lambda$ characterizes the properties of decentralized topology.
**Table 2**
|**Algorithm+Reference**|**Optimization bound**|**Step size**|**L**|**$\theta$**|**$\beta$**|**B.**|**$\sigma$**|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|**AD-SGD** [1]|$\mathcal{O}((r+\frac{C_{\lambda}}{\lambda^{t_0}}+\frac{t_0}{M})(\log T)^{-1})$|$\eta_t=\mathcal{O}(\frac{Mc}{t+1})$|$\surd$|$\times$|$\surd$|$\times$|$\times$|
|**AD-PSGD** [2]|$\mathcal{O}(T^{-1/2})$|$\eta_t=\mathcal{O}(\frac{n}{b_1(\sqrt{T}+1)})$|$\times$|$\times$|$\surd$|$\times$|$\surd$|
where $C_{\lambda}=\frac{4}{\lambda e^2\log\lambda^{-1}}+\frac{2}{\lambda\log\lambda^{-1}}$.
[1]X. Deng, et al. Stability-based generalization analysis of the asynchronous decentralized SGD. AAAI, 2023.
[2]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
***
**Q2:** ... provide empirical experiments ...
**A2:** Some works already provide relevant experiments on FedZO [3]. For the asynchronous version, many related works [4][5] validate that the asynchronous strategy can take full advantage of the clients' computation capabilities. It should be noted that our major contribution is exploring the theoretical generalization guarantee of the federated zeroth-order optimization algorithm, which, as far as we know, fills a gap, especially for stability-based analysis. Our theoretical results conform to these empirical behaviors [3] from both the generalization and optimization perspectives, as listed below.
**Generalization:** Our generalization bounds (Theorems 2, 3) are negatively correlated with the number M of selected clients at each iteration (the dependence $\frac{1}{M}$ can be seen in lines 41 and 56 in Appendix), which is consistent with Figures 1(b) and 4 of [3].
**Optimization:** Our optimization bounds (Theorems 4 and 6) are negatively correlated with the total iteration number $T$, which is presented in all figures of [3].
To further explicitly show the efficiency of the asynchronous version, some necessary experiments (similar to the ones for synchronous FedZO [3]) will be conducted later. We have utilized the “global response” to provide Figure 1, which shows the structure of asynchronous FedZO.
[3]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[4]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
[5]X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. NIPS, 2015.
***
**Q3:** ... provide more intuition behind the on-average model stability ...
**A3:** Compared with other tools such as uniform convergence, algorithmic stability enjoys independence of the dimension of the hypothesis parameter space [6][7]. We list the advantages of on-average model stability over several other stability tools as follows [8]:
**Vs. Uniform (model) stability:** On-average model stability is a weaker stability tool than uniform (model) stability.
**Vs. On-average stability:** On-average model stability measures the stability of model parameters $w$ instead of function values $f(w)$, which can improve our analysis.
More comparisons among stability tools can be found in Appendix C of [9]. We have added the above statements in line 137 of the main paper.
[6]W. Rogers et al. A finite sample distribution-free performance bound for local discrimination rules. The Annals of Statistics, 1978.
[7]L. Devroye et al. Distribution-free performance bounds for potential function rules. TIT, 1979.
[8]Y. Lei et al. Fine-grained analysis of stability and generalization for stochastic gradient descent. ICML, 2020.
[9]J. Chen, et al. On the Stability and Generalization of Triplet Learning. AAAI, 2023.
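As a concrete reference for the comparison above, the $\ell_1$ on-average model stability of [8] can be stated as follows (notation adapted to this discussion; $S^{(i)}$ denotes the training set $S$ of size $n$ with its $i$-th sample replaced by an independent copy drawn for $S'$):

```latex
% \ell_1 on-average model stability (Lei & Ying, ICML 2020 [8]):
% an algorithm A is \ell_1 on-average model \epsilon-stable if
\mathbb{E}_{S, S', A}\left[ \frac{1}{n} \sum_{i=1}^{n}
  \bigl\| A(S) - A\bigl(S^{(i)}\bigr) \bigr\| \right] \le \epsilon .
% Stability is measured on the model parameters A(S) themselves rather
% than on loss values f(A(S)) -- the point made in "Vs. On-average
% stability" above.
```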
***
**Q4:** ... provide some practical examples and how their results used?
**A4:** Our theoretical results conform to the related experimental results of FedZO in [10]. Please see **Q2** for some examples. In addition, our results are also consistent with some distributed zeroth-order optimization algorithms, e.g., [11].
[10]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[11]E. Kaya, et al. Communication-efficient zeroth-order distributed online optimization: Algorithm, theory, and applications. Access, 2023.
***
**Q5:** ... how their results could be used in practice ...
**A5:** Firstly, from the perspective of generalization theory, our results are consistent with the experimental results of previous work [12]. Secondly, from the perspective of practical application, our results can provide guidance for parameter choices (e.g. the number M of selected clients at each iteration, the total iteration number T and the step size $\mu$ of gradient estimate) under some special accuracy requirement.
[12]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022. | Summary: This paper studies the theoretical analysis of federated zeroth-order optimization (FedZO). The main contributions of this paper include 1) deriving the generalization bound of synchronous FedZO under different assumptions (bounded gradient, heavy-tailed gradient noise); the main technique is to establish the relationship between the generalization error and $\ell_1$ on-average model stability; 2) deriving the generalization and optimization bounds for asynchronous FedZO.
Strengths: 1. This paper derives the generalization bound of synchronous FedZO under different assumptions (bounded gradient, heavy-tailed gradient noise). The main technique is to establish the relationship between the generalization error and $\ell_1$ on-average model stability;
2. This paper derives the generalization and optimization bounds for asynchronous FedZO with a novel technique based on a new error decomposition strategy.
3. The paper is very well written. In particular, it explains the main technical challenges, main techniques and the implication of the results very well.
Weaknesses: I didn't find major weaknesses in this paper. Perhaps, it will be beneficial if the authors could provide some experimental study to validate the theories.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It seems that (5) does not equal $\mathbb{E}[F(A(S)) - F(\omega^*)]$. Could you please clarify?
2. In the paragraph below (4), do you mean Algorithm 1 instead of Algorithm A?
3. In Assumption 2, does $z_i^t$ mean a data sample?
4. In Theorems 2, 3 and so on, what is the physical meaning of $\mu$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is not much discussion of the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** Perhaps, it will be beneficial if the authors could provide some experimental study to validate the theories.
**A1:** Thanks for your constructive comments. Considering the gap in the generalization analysis of federated zeroth-order optimization algorithms, the major contribution of this paper is providing a stability-based theoretical guarantee, which is meaningful for the development of this algorithm. At present, some works provide relevant experiments on FedZO [1]. For the asynchronous version, many related works [2][3] validate that the asynchronous strategy can take full advantage of the clients' computation capabilities. Our theoretical results conform to these empirical behaviors [1][2][3] from both the generalization and optimization perspectives, as listed below.
**Generalization:** Our generalization bounds (Theorems 2, 3) are negatively correlated with the number M of selected clients at each iteration (the dependence $\frac{1}{M}$ can be seen in lines 41 and 56 in Appendix). It is consistent with Figures 1(b) and 4 of [1].
**Optimization:** Our optimization bounds (Theorems 4 and 6) are negatively correlated with the total iteration number $T$, which is presented in all figures of [1].
To further explicitly show the efficiency of the asynchronous version, some necessary experiments (similar to the ones of synchronous FedZO [1]) will be conducted later. We have utilized “global response” to provide Figure 1 to show the structure of asynchronous FedZO.
[1]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[2]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
[3]X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. NIPS, 2015.
***
**Q2:** It seems that (5) does not equal to $\mathbb{E}[F(A(S)) - F(\omega^*)]$.
**A2:** The complete form of Equation (5) should be $\mathbb{E}[F(A(S)) - F(w^*)] \leq \mathbb{E}[F(A(S))-F_S(A(S))]+\mathbb{E}[F_S(A(S))-F_S(w(S))]$ since $\mathbb{E}[F_S(w(S))-F(w^*)]=F(w(S))-F(w^*)\leq 0$. To avoid ambiguity, we have corrected it with the complete inequality.
***
**Q3:** ... do you mean Algorithm 1 instead of Algorithm A?
**A3:** Algorithm A denotes a general federated learning algorithm, including Algorithm 1 (synchronous FedZO). Theorem 1 is developed for Algorithm A, while the remaining results (Theorems 2-6) are developed for Algorithm 1 and its asynchronous version (synchronous and asynchronous FedZO). We have added the above explanation at line 111 of the main paper.
***
**Q4:** In Assumption 2, does $z_i^t$ means a data sample?
**A4:** Your understanding is correct. $z_i^t$ denotes the sample of the $i$-th client used at the $t$-th iteration.
***
**Q5:** ... what is the physical meaning of $\mu$?
**A5:** The meaning of $\mu$ has been briefly explained before Equation (3). According to the definition of the gradient estimate, $\mu$ represents the distance between the two parameters used to estimate the gradient. We have modified the related explanation to improve the readability of our paper.
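To make the role of $\mu$ concrete, here is a minimal sketch of a generic two-point zeroth-order gradient estimator. This is an illustrative sketch, not necessarily the exact estimator of FedZO; the function names and constants are our own. $\mu$ appears as the finite-difference radius: the distance between the two query points along each random direction.

```python
import math
import random

def zo_grad(f, w, mu=1e-3, num_dirs=20000, seed=0):
    """Two-point zeroth-order gradient estimate of f at w.

    mu is the smoothing radius: the distance between the two query
    points w + mu*u and w - mu*u used in each finite difference."""
    rng = random.Random(seed)
    d = len(w)
    g = [0.0] * d
    for _ in range(num_dirs):
        # random direction on the unit sphere
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in u))
        u = [x / norm for x in u]
        wp = [wi + mu * ui for wi, ui in zip(w, u)]
        wm = [wi - mu * ui for wi, ui in zip(w, u)]
        # d * directional derivative * u is an unbiased estimate of grad f
        scale = d * (f(wp) - f(wm)) / (2.0 * mu)
        for i in range(d):
            g[i] += scale * u[i] / num_dirs
    return g

# f(w) = ||w||^2 has gradient 2w; the estimate should be close to [2, -4].
f = lambda w: sum(x * x for x in w)
g_hat = zo_grad(f, [1.0, -2.0])
```

Only function evaluations of $f$ are used, which is the point of the zeroth-order setting; shrinking $\mu$ reduces the Taylor-expansion bias at the cost of numerical sensitivity.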
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for answering my questions. I am keeping my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition and support of our work. | Rebuttal 1:
Rebuttal: Thanks for the comments of all reviewers. Considering the limitation of character count, we provide two figures and three tables in "global response".
Figure 1 denotes the structure of the asynchronous FedZO algorithm.
Figure 2 denotes some sub-Weibull survival curves with varying tail parameters $\theta$ inspired by [3].
Table 1 denotes some new comparisons with the stability-based generalization bounds of asynchronous optimization algorithms [1].
Table 2 denotes some new comparisons with the optimization bounds of asynchronous optimization algorithms [1][2].
Table 3 denotes some new comparisons with the optimization bounds of distributed zero-order optimization algorithms [6][7].
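For Figure 2, the survival curves follow the standard sub-Weibull tail bound $P(|X| \ge x) \lesssim \exp(-(x/K)^{1/\theta})$ from [3]. A minimal sketch (the scale $K=1$ and threshold are illustrative choices of ours):

```python
import math

def sub_weibull_survival(x, theta, K=1.0):
    """Tail bound P(|X| >= x) ~ exp(-(x / K) ** (1 / theta)) for a
    sub-Weibull(theta) random variable; theta = 1/2 recovers the
    sub-Gaussian tail and theta = 1 the sub-exponential tail."""
    return math.exp(-((x / K) ** (1.0 / theta)))

# At the same threshold, a larger tail parameter theta leaves more
# probability mass in the tail, i.e., a heavier tail.
tails = [sub_weibull_survival(3.0, theta) for theta in (0.5, 1.0, 2.0)]
assert tails[0] < tails[1] < tails[2]
```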
**Table 1**
| | | | | | | |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|**Algorithm+Reference**|**Generalization bound**|**Tool**|**L**|**$\theta$**|**$v^2$**|**B.**|
|**AD-SGD**[1]|$\mathcal{O}(\frac{n-\lambda}{n(1-\lambda)}(1+\frac{\beta \eta_1 }{M})^T)$|Uni.|$\surd$|$\times$|$\times$|$\times$|
|**AD-SGD**[1]|$\mathcal{O}(\frac{nM-\lambda}{n(1-\lambda)}L^2T)$|Uni.|$\surd$|$\times$|$\times$|$\times$|
| | | | | | | |
**Table 2**
| | | | | | | | |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|**Algorithm+Reference**|**Optimization bound**|**Step size**|**L**|**$\theta$**|**$\beta$**|**B.**|**$\sigma$**|
|**AD-SGD**[1]|$\mathcal{O}((r+\frac{C_{\lambda}}{\lambda^{t_0}}+\frac{t_0}{M})(\log T)^{-1})$|$\eta_t=\mathcal{O}(\frac{Mc}{t+1})$|$\surd$|$\times$|$\surd$|$\times$|$\times$|
|**AD-PSGD**[2]|$\mathcal{O}(T^{-1/2})$|$\eta_t=\mathcal{O}(\frac{n}{b_1(\sqrt{T}+1)})$|$\times$|$\times$|$\surd$|$\times$|$\surd$|
| | | | | | | | |
**Table 3**
| | | | | | | | |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|**Algorithm+Reference**|**Optimization bound**|**Step size**|**L**|**$\theta$**|**$\beta$**|**B.**|**$\sigma$**|
|**EF-ZO-SGD**[6]|$\mathcal{O}((d/T)^{1/2} + d/T)$|$\eta_t=\mathcal{O}(1/\sqrt{dT})$|$\surd$|$\times$|$\surd$|$\times$|$\times$|
|**FED-EF-ZO-SGD**[6]|$\mathcal{O}((d/T)^{1/2} + (d/T)^{3/2})$|$\eta_t=\mathcal{O}(1/\sqrt{dT})$|$\surd$|$\times$|$\surd$|$\times$|$\times$|
|**Distributed ZO Primal–Dual**[7]|$\mathcal{O}((d/(MT))^{1/2})$|$\eta_t=\mathcal{O}(d^{-1/2}(t+d^{1/(2\theta)})^{-\theta})$|$\times$|$\times$|$\surd$|$\times$|$\surd$|
| | | | | | | | |
[1]X. Deng, et al. Stability-based generalization analysis of the asynchronous decentralized SGD. AAAI, 2023.
[2]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
[3]M. Vladimirova, et al. Sub-weibull distributions: Generalizing sub-gaussian and sub-exponential properties to heavier tailed distributions. Stat, 2020.
[6]E. Kaya, et al. Communication-efficient zeroth-order distributed online optimization: Algorithm, theory, and applications. Access, 2023.
[7]X. Yi, et al. Zeroth-order algorithms for stochastic distributed nonconvex optimization. Automatica, 2022.
Pdf: /pdf/0eae70e3675bfb1d354cb2d283f4edeadc13a5d1.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper fills the gap in theoretical guarantees for the federated zeroth-order optimization (FedZO) algorithm. It provides the initial generalization error bound for FedZO and presents refined generalization and optimization bounds. The structure of the paper is logical, and the accompanying theoretical proofs in the supplementary material are comprehensive and robust.
Strengths: 1. The need for filling the theoretical gap is important.
2. The authors' contribution is solid.
3. This paper is well-organized and easy to follow.
Weaknesses:
This paper addresses a topic that isn't directly within my area of expertise, so my comments may not fully capture the nuances specific to this field, but I hope my remarks will prove useful to the authors.
**1.** In Assumption 3, the authors refer to the PL condition but they do not explain what PL stands for earlier in the text.
**2.** Regarding the assumptions made throughout the paper, it would be helpful if the authors could elaborate on whether each assumption is considered strong, mild, or weak, or if it is a relaxed version of a concept from previous work.
**3.** The term "heavy-tailed" is used but not clearly defined. It would benefit readers if the authors could briefly describe what they mean by "heavy-tailed" early in the paper. In the context of imbalanced regression problems, is "heavy-tailed" equivalent to "long-tailed"? In my understanding, both terms refer to highly biased or imbalanced data.
**4.** There are minor grammatical errors that need to be corrected.
In conclusion, this paper is well grounded and presents detailed, solid findings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors did not provide the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** ... not explain what PL stands for ...
**A1:** Thanks for your constructive comments. As is well known, under the non-convex condition, a locally optimal model is not necessarily a globally optimal model. Assumption 3 simply requires that the gradient grows faster than a quadratic function as we move away from the optimal function value [1], which implies that every stationary point ($|\nabla F_S(w)|=0$) is a global minimum [2]. With this assumption, we can study the optimization error in the form $F_S(w)-F_S(w(S))$ rather than $|\nabla F_S(w)|$ [3] under the non-convex condition. We have added a remark clarifying the meaning of the PL (Polyak-Łojasiewicz) condition following Assumption 3.
[1]H. Karimi, et al. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. ECML, 2016.
[2]L. Lei, et al. Non-convex finite-sum optimization via scsg methods. NeurIPS, 2017.
[3]S. Li, Y. Liu. High probability guarantees for nonconvex stochastic gradient descent with heavy tails. ICML, 2022.
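In symbols, the PL condition described above can be written as follows (the PL constant symbol $\mu_{PL}$ is our notation for this sketch):

```latex
% Polyak-Lojasiewicz (PL) condition [1] with constant \mu_{PL} > 0:
\frac{1}{2}\,\bigl\| \nabla F_S(w) \bigr\|^2 \;\ge\; \mu_{PL}
  \bigl( F_S(w) - F_S(w(S)) \bigr) \qquad \text{for all } w .
% If |\nabla F_S(w)| = 0, the inequality forces F_S(w) = F_S(w(S)),
% i.e., every stationary point is a global minimum [2], so the
% optimization error F_S(w) - F_S(w(S)) can be controlled via gradients.
```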
***
**Q2:** ... it would be helpful if the authors could elaborate on whether each assumption is considered strong, mild, or weak, or if it is a relaxed version of a concept from previous work.
**A2:** The strength of assumptions is crucial for learning theoretical analysis. The related illustrations for all assumptions are listed as follows.
(1)**Lipschitz continuity (bounded gradient):** It is one of the most general assumptions and is considered strong. Some milder related conditions are provided in previous work, e.g., Assumption 2.8 in [4]. We assume the Lipschitz condition only in our first case. In the remaining cases, we introduce a milder assumption, i.e., bounded gradient noise (heavy-tailed gradient noise), to avoid the dependence on $L$.
(2)**Smoothness:** It can be called the bounded second-order gradient assumption, which is also considered strong. Generally, Hölder continuity is a milder condition than smoothness [5].
(3)**Heavy-tailed gradient noise:** As mentioned in (1), heavy-tailed gradient noise can be regarded as a milder condition than Lipschitz continuity. It corresponds to a refined bounded-variance condition on the gradient, similar to [6].
(4)**PL condition:** It is commonly assumed in non-convex optimization [1][2][3]. [1] shows that the PL condition is weaker than the main conditions that have been explored to show linear convergence rates without strong convexity, e.g., essential strong convexity [7][8].
We have added the above four illustrations following the corresponding assumptions respectively. In the last part of the Appendix, we have discussed whether our current assumptions can be relaxed, e.g., whether smoothness can be relaxed to Hölder continuity.
[4]S. Li, Y. Liu. High probability guarantees for nonconvex stochastic gradient descent with heavy tails. ICML, 2022.
[5]Y. Lei, Y. Ying. Fine-grained analysis of stability and generalization for stochastic gradient descent. ICML, 2020.
[6]Y. Zhou, et al. Understanding generalization error of SGD in nonconvex optimization. Machine Learning, 2022.
[7]J. Liu, et al. An asynchronous parallel stochastic coordinate descent algorithm, ICML, 2014.
[8]I. Necoara, et al. Linear convergence of first order methods for non-strongly convex optimization. Math Program, 2019.
***
**Q3:** ... It would benefit readers if the authors could briefly describe what they mean by "heavy-tailed" early in the paper. ...
**A3:** Your understanding of "heavy-tailed" is correct. "Heavy-tailed" is equivalent to "long-tailed". In this paper, we use a special heavy-tailed distribution, i.e., sub-Weibull distribution. We have briefly described the meaning of this distribution following Definition 2.
***
**Q4:** There are minor grammatical errors that need to be corrected.
**A4:** Thanks. We have carefully checked the whole manuscript and corrected all grammatical errors. For example, we have corrected Equation (5) to “$\mathbb{E}[F(A(S))-F(w^*)]\leq\mathbb{E}[F(A(S))-F_S(A(S))]+\mathbb{E}[F_S(A(S))-F_S(w(S))]$”.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thank you for your detailed responses.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition of our work. | Summary: This paper studies the generalization and optimization analysis of the Federated zeroth-order optimization (FedZO) algorithm. It develops tailored techniques for the federated setting to establish generalization bounds. One bound relies on Lipschitz continuity, and the second removes this dependency. The optimization error in the order of $\mathcal{O}(1/T)$ is also developed for FedZO. The authors extend results to the asynchronous federated setting, where all the workers participate throughout the update process and asynchrony may cause inconsistency in local workers within the same iteration.
Strengths: The paper is well-crafted, and the authors effectively elucidate the distinctions from prior methods.
The theoretical results are quite comprehensive. Existing techniques in the literature may not be directly applicable to the FedZO algorithm. The authors fill in this gap by first developing error decomposition and estimation techniques and then presenting the first algorithmic stability-based generalization analysis for FedZO.
Weaknesses: The paper demonstrates a thorough analysis of the existing federated learning algorithm, FedZO, which is commendable. However, to further enhance its novelty and impact, it would have been valuable if the authors had introduced a new algorithm and conducted a comparative analysis against FedZO.
Furthermore, to gain deeper insights into the analysis, it is recommended to include empirical evaluations exploring the effects of different parameters on the bounds. This empirical investigation would provide valuable practical implications and strengthen the overall findings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Unlike other existing papers, all the findings in this paper rely on the $\theta$ tail parameter of the Sub-Weibull distribution. The extent of this dependence's impact is uncertain in practical settings, and it would be helpful if the authors could provide insights or comments on this matter.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** ... introduced a new algorithm and conducted a comparative analysis against FedZO.
**A1:** Thanks for your constructive comments. Our major contribution is exploring the theoretical generalization upper bounds of the federated zeroth-order optimization algorithm, which seems to fill a gap, especially for stability-based analysis. Not limited to FedZO [1], we extend FedZO to an asynchronous version and give a similar theoretical generalization bound. From the theoretical perspective, the bounds for asynchronous FedZO recover the bounds for FedZO, which indicates that this extension is theoretically reasonable. From the practical perspective, many related works [2][3] validate that the asynchronous strategy can take full advantage of the clients' computation capabilities. To further explicitly show its effectiveness, some necessary experiments (similar to the ones for synchronous FedZO [1]) will be conducted later. We have used the “global response” to provide Figure 1, which shows the structure of asynchronous FedZO.
[1]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[2]C. Xu, et al. Asynchronous federated learning on heterogeneous devices: A survey. arXiv, 2021.
[3]A. Koloskova, et al. Decentralized stochastic optimization and gossip algorithms with compressed communication. ICML, 2019.
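The asynchronous structure sketched in Figure 1 might be illustrated as follows. This is a hypothetical toy sketch under our own simplifying assumptions (one-direction forward-difference estimator, random arrival order within a round), not the paper's exact asynchronous FedZO update rule:

```python
import math
import random

def async_round(w, client_objs, eta=0.05, mu=1e-3, rng=None):
    """One asynchronous-style round: every client estimates a one-direction
    zeroth-order gradient on the (stale) model it pulled at the start of
    the round, and the server applies each update as soon as it arrives
    instead of waiting for all clients."""
    rng = rng or random.Random(0)
    d = len(w)
    stale = [list(w) for _ in client_objs]   # model each client pulled
    order = list(range(len(client_objs)))
    rng.shuffle(order)                       # updates arrive out of order
    for i in order:
        f, wi = client_objs[i], stale[i]
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        nrm = math.sqrt(sum(x * x for x in u))
        u = [x / nrm for x in u]
        wp = [a + mu * b for a, b in zip(wi, u)]
        g = [d * (f(wp) - f(wi)) / mu * ui for ui in u]
        w = [a - eta * b for a, b in zip(w, g)]  # applied immediately
    return w

# Toy run: three clients share the objective f(w) = ||w||^2,
# which decreases over rounds despite stale local models.
f = lambda w: sum(x * x for x in w)
w = [1.0, 1.0]
rng = random.Random(1)
for _ in range(200):
    w = async_round(w, [f, f, f], rng=rng)
```

The point of the sketch is the inconsistency mentioned in the review: within one round the server model moves while clients still hold the snapshot they pulled, which is exactly the staleness the asynchronous analysis has to control.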
***
**Q2:** ... it is recommended to include empirical evaluations exploring the effects of different parameters on the bounds ...
**A2:** Thanks. Our theoretical results are consistent with the relevant experiments in [4]. For example, our generalization bounds indeed have negative dependence on the number M of selected clients at each iteration (see lines 41, 56 in Appendix) which is presented in Figures 1(b) and 4 of [4]. Our optimization bounds (Theorems 4 and 6) are also negatively dependent on the total iteration number $T$, which is presented in all figures of [4]. As mentioned in A1, we will provide more empirical evaluations.
[4]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
***
**Q3:** ... The extent of the impact of the dependence on $\theta$ is uncertain in practical settings ...
**A3:** Some previous work indicated that the distribution of the gradient noise of SGD exhibits heavier tails than sub-Gaussian (e.g., sub-Weibull [5][6][7]). In our paper, the distribution of the gradient noise of each local client is equivalent to that of SGD with the same model. Thus it is reasonable to assume that the real first-order gradient noise of each local client is a sub-Weibull random vector (Assumption 2). It should be noted that the purpose of considering the heavy-tail condition is to characterize the theoretical impact of heavy-tailed phenomena on the generalization performance of the FedZO algorithm. As you mentioned, due to the unknown gradient information, the dependence on the heavy-tail parameters $\theta$ and $K$ is uncertain in practice. Fortunately, our generalization bounds indicate that, in federated zeroth-order optimization, the degradation of generalization performance caused by heavy-tailed phenomena is mild, which is similar to some previous theoretical results (e.g., Theorems 3.3, 3.9 in [5]). We have provided some sub-Weibull survival curves with varying tail parameters $\theta$, inspired by [8], via the “global response” (Figure 2).
[5]S. Li, Y. Liu. High probability guarantees for nonconvex stochastic gradient descent with heavy tails. ICML, 2022.
[6]L. Madden, et al. High-probability convergence bounds for non-convex stochastic gradient descent. arXiv, 2020.
[7]M. Gurbuzbalaban, et al. The heavy-tail phenomenon in SGD. ICML, 2021.
[8]M. Vladimirova, et al. Sub-weibull distributions: Generalizing sub-gaussian and sub-exponential properties to heavier tailed distributions. Stat, 2020.
---
Rebuttal Comment 1.1:
Comment: Your responses are greatly appreciated. Taking into account the responses, I will uphold my current score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your recognition of our work. | Summary: This paper provides theoretical guarantees for both synchronous and asynchronous FedZO algorithms. It first establishes a generalization error bound of FedZO under conventional assumptions. Then, the bounds are further improved by using the second-order Taylor expansion and heavy-tailed gradient noise. The theoretical results seem promising; however, some technical concerns remain unresolved.
Strengths: This paper focuses on providing theoretical guarantees for the newly proposed FedZO algorithms. The mathematical tools used in the analysis are relatively advanced in recent literature. In addition to analyzing the original FedZO algorithm, this paper also extends its analysis to include asynchronous FedZO.
Weaknesses: This paper primarily focuses on providing further theoretical guarantees for a recently proposed algorithm, which addresses a minor problem. As a result, the contributions and impacts to the federated learning community are limited. Furthermore, some assumptions and results are not well-justified, as discussed below.
[Related to federated learning]
In federated optimization literature, heterogeneous data distribution across clients is a significant challenge. In the original FedZO paper [16], Assumptions 3 and 4 were used to describe the impacts of such heterogeneity. However, this paper seems to have overlooked this necessary aspect in the analysis, which could significantly affect the proof of Theorem 2.
[Experiment]
This paper introduces asynchronous FedZO and provides learning guarantees compared to FedZO. However, the original FedZO does not evaluate the asynchronous version. It is crucial to include experimental evaluations of both synchronous FedZO and asynchronous FedZO in this paper to support the proposed theories.
[Organization]
One key aspect of this paper is the approximation of the first-order gradient using a second-order Taylor expansion and Assumption 2. However, the details of these methods are missing in the main paper, making it challenging to follow the arguments.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: [Comparison of generalization bounds]
This paper has mentioned, "Due to the essential difference between FedZO and ZoSS, the previous analysis technique in [36] cannot be used for federated learning directly." So, why are the generalization bounds comparable with ZoSS in Table 1? It seems more reasonable to compare them with distributed zero-order optimization.
[Comparison of optimization bounds]
A vital challenge of zero-order optimization is the dimension issue of model parameters, as shown in Table 1 in FedZO. In this paper, how does it remove the dimension $d$ from the optimization bound? Is this improvement applicable to arbitrary zero-order optimization?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations are discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** ... overlooked Assumptions 3 and 4 of [1] ...
**A1:** Thanks for your constructive comments. In the original FedZO [1], the reason for making Assumptions 3 and 4 is to connect the gradient of local empirical loss and the gradient of global population risk. The gradient of population risk characterizes the convergence of the FedZO algorithm which is one of the major theoretical contributions of [1].
Compared with [1], there are two reasons why we did not make such assumptions.
(1)Theorem 2: We can directly bound the gradient of local empirical loss with the Lipschitz parameter $L$ (see line 42 in Appendix).
(2)Theorems 3, 4, 5, 6: We consider the gradients of all clients simultaneously (see Equations (8) and (13) in lines 60 and 90 of the Appendix) rather than that of a single client as in Assumption 4 of [1]. After careful checking, we found some mistakes in (8) and (13) (e.g., the decomposition strategy of the norm). Fortunately, they do not change the order of our results. We have corrected them in our new manuscript.
Following your comment, we will try to improve our results by further considering the impact of such heterogeneity.
[1]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
***
**Q2:** ... experimental evaluations...
**A2:** Thanks. Some relevant experiments on FedZO have been provided in [2]. For the asynchronous version, many related works validate that the asynchronous strategy can take full advantage of the clients' computation capabilities [3][4].
Filling the gap of theoretical generalization analysis is crucial for the development of federated zeroth-order algorithm, which is the major contribution in this paper. Our results conform to these empirical behaviors from both generalization and optimization perspectives which are listed as follows.
**Generalization:** Our generalization bounds (Theorems 2, 3) are negatively correlated with the number M of selected clients at each iteration (the dependence $\frac{1}{M}$ can be seen in lines 41 and 56 in Appendix). It is consistent with Figures 1(b) and 4 of [2].
**Optimization:** Our optimization bounds (Theorems 4 and 6) are negatively correlated with the total iteration number $T$, which is presented in all figures of [2].
To further provide empirical observations of the asynchronous version, some necessary experiments (similar to the ones for synchronous FedZO [2]) will be conducted later. We have utilized the “global response” to provide Figure 1, which shows the structure of asynchronous FedZO.
[2]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[3]X. Lian, et al. Asynchronous decentralized parallel stochastic gradient descent. ICML, 2018.
[4]X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. NIPS, 2015.
***
**Q3:** ... Taylor expansion and Assumption 2 are missing in the main paper ...
**A3:** Thanks. As shown in line 36 of the Appendix, the unknown gradient is estimated by a second-order Taylor expansion. Following your valuable comment, we have supplemented the detailed steps of the Taylor expansion in line 100 of the main paper. As for Assumption 2, we utilize some properties of the sub-Weibull distribution to bound the gradient noise after estimating the gradient. We have also added a related remark following Assumption 2 to improve the readability of our paper. For readers who are not familiar with this distribution, we have provided some sub-Weibull survival curves with varying tail parameters $\theta$, inspired by [5], via the “global response” (Figure 2).
[5]M. Vladimirova, et al. Sub-weibull distributions: Generalizing sub-gaussian and sub-exponential properties to heavier tailed distributions. Stat, 2020.
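As a hedged illustration (our own sketch, not the authors' Figure 2), the tail behavior behind those survival curves can be computed directly: a sub-Weibull variable with tail parameter $\theta$ satisfies $P(|X| \ge t) \le 2\exp(-(t/K)^{1/\theta})$, so a larger $\theta$ means a heavier tail. The scale $K = 1$ below is an arbitrary choice for illustration.

```python
import math

def subweibull_survival(t, theta, K=1.0):
    """Standard upper bound on P(|X| >= t) for a sub-Weibull(theta) variable with scale K."""
    return min(1.0, 2.0 * math.exp(-((t / K) ** (1.0 / theta))))

# Larger theta => heavier tail: the bound decays more slowly in t.
for theta in (0.5, 1.0, 2.0):  # sub-Gaussian-like, sub-exponential-like, heavier-tailed
    print(theta, [round(subweibull_survival(t, theta), 6) for t in (1, 4, 9)])
```

For instance, at $t = 9$ the bound for $\theta = 2$ is far larger than for $\theta = 0.5$, which is exactly the heavier-tail effect the survival curves visualize.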
***
**Q4:** ...why comparable with ZoSS rather than distributed zero-order optimization?
**A4:** Table 1 shows the comparisons of stability-based generalization bounds. The reason to compare with ZoSS is that it is the only work using stability tools to analyze the theoretical generalization performance of zeroth-order optimization algorithms. Indeed, as you mentioned, it is reasonable to make comparisons with distributed zeroth-order algorithms. Thus, we have added some comparisons of the optimization bounds, e.g., [6][7], in Table 2. Limited by the length of the rebuttal, part of these new comparisons is provided in Table 3 of the “global response”.
[6]E. Kaya, et al. Communication-efficient zeroth-order distributed online optimization: Algorithm, theory, and applications. Access, 2023.
[7]X. Yi, et al. Zeroth-order algorithms for stochastic distributed nonconvex optimization. Automatica, 2022.
***
**Q5:** ... how remove $d$ from the optimization bound? Is this improvement applicable to arbitrary zero-order optimization?
**A5:** For the optimization bound, the common technique relies on dimension-dependent tools (e.g., uniform convergence), which is the major reason why removing the dimension $d$ is challenging. Departing from this technique, the key of our proof is to directly build the iteration sequence $t(t-c)\mathcal{E}[F_S(w^{t+1})-F_S(w(S))]$ ($c$ denotes a constant) via the smoothness assumption, the PL condition, and the step-size setting $\eta_t=\eta_1/(\alpha(t+a))$. After iterative computation, we obtain the optimization bound on $\mathcal{E}[F_S(w^T)-F_S(w(S))]$. Our proof cannot be applied to zeroth-order algorithms without these settings. Note that our optimization bounds rely on the quality of the initial model, like many previous works (e.g., [8][9]). We have added a remark after Theorem 4 to state this.
[8]W. Fang, et al. Communication-efficient stochastic zeroth-order optimization for federated learning. TSP, 2022.
[9]H. Yu, et al. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning. AAAI, 2019.
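For intuition only (a standard hedged sketch under smoothness and the PL condition, with symbols as in the rebuttal above; the authors' exact derivation may differ), the dimension-free argument typically rests on a one-step recursion of the form:

```latex
% One-step progress under L-smoothness and the PL condition
% (alpha is the PL constant; the O(eta_t^2) term collects noise and
% gradient-estimation error):
\mathbb{E}\big[F_S(w^{t+1}) - F_S(w(S))\big]
  \le (1 - \alpha \eta_t)\,\mathbb{E}\big[F_S(w^{t}) - F_S(w(S))\big]
      + O(\eta_t^{2}).
% With the step size eta_t = eta_1 / (alpha (t + a)), weighting both sides
% by a polynomial factor such as t(t - c) and summing telescopes the
% recursion, yielding a bound on E[F_S(w^T) - F_S(w(S))] of order O(1/T)
% with no explicit dependence on the dimension d.
```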
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Most of my concerns have been addressed. I can improve the score. Given that data heterogeneity stands out as a crucial assumption, I would greatly appreciate it if the authors could conduct a thorough and comprehensive analysis of this aspect, as they have also mentioned in their rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your constructive comments and recognition of our work. Our current results may be refined by further considering data heterogeneity which measures the dissimilarity among the data of different clients. As mentioned in our rebuttal, we will try to conduct a thorough and comprehensive analysis of this aspect. | null | null |
On the Convergence of No-Regret Learning Dynamics in Time-Varying Games | Accept (poster) | Summary: This paper considers the problem of online learning in time-varying games under different setups. Specifically, the authors consider the case where all the players apply the optimistic gradient descent (OGD) algorithm with a certain choice of learning rate. The main result is that in the two-player bilinear game setup, the sum of squared duality gaps is bounded by $O(1+V_{\epsilon-NE}+V_A)$, recovering the best-known result in the stationary game setup. This result is based on the bounded second-order path length of the learning dynamics and the important observation that the sum of dynamic regrets with respect to (approximate) Nash equilibria is (almost) non-negative. Next, they extend this result to the strongly convex-concave game with multiple steps and obtain similar duality-gap guarantees. The authors also consider the potential game and general-sum game setups, with guarantees on the duality gap and the CE gap, respectively. Experiments are also done to support their theoretical results.
Strengths: - The problem considered in this paper is important and the authors show that the classic OGD algorithm achieves desirable average duality-gap and other equilibrium-related gap bound with provable guarantees.
- The authors also do experiments on time-varying potential games and time-varying zero-sum games to verify their obtained NEgap bound.
Weaknesses: The main concern is the novelty of this paper. Specifically, this paper shares similarities with the results in [60] in many respects, although the authors explain in many places in the paper how their results differ from [60].
- One of the main lemmas, Lemma A.1, is the same as the DRVU property shown in [60].
- Property A.3 is very similar to Eq.(32) shown in [60].
- The example shown in Proposition 3.2 is almost the same as the example shown in Appendix C, case 2 of [60].
- From the technical perspective, I feel the analysis is very similar to the one shown in [60]. While [60] does not consider the approximate-NE path length, it is not hard to extend their analysis to it by replacing $P_T$ with the $\epsilon$-NE path length $+ T\epsilon$. Also, the boundedness of the second-order dynamic is shown in Lemmas 16/18 of [60].
- In addition, as mentioned by the authors, there are parameter-tuning issues in achieving better individual regret guarantees in bilinear and strongly convex-concave games, which is also handled by the meta-base structure proposed in [60].
The authors also derive results for general-sum games and potential games. Given Property 3.8, i.e., that the sum of regrets with respect to CE is positive, it seems that the average CE-gap bound is also not very hard to obtain from the bounded second-order dynamic of OGD.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - As mentioned in the above section, can the authors explain in more detail the main technical difficulties and challenges compared to the analysis shown in [60]?
- In Property 3.1, the authors argue that this extends the exact NE path-length in [60] to the approximated NE path-length. In [60], they also provide a bound with respect to W_T, which is the variance of the game matrix. Can the authors explain more on the comparison between the epsilon-NE path-length and W_T? Is there a case where both the exact NE path-length and the W_T are large but the epsilon-NE path-length is very small?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their constructive feedback. Below, we stress the key differences between our results and the ones in [60].
Starting from Section 3.1, we indeed build on a number of ingredients from [60], as we carefully acknowledge throughout the paper; this includes the dynamic RVU bound (Lemma A.1) and the use of second-order path lengths, as the reviewer points out. We stress, however, that using RVU-type bounds and second-order path lengths is very standard in this line of work, so we do not believe that those similarities weaken our contribution. Indeed, we provide a number of new insights that are of interest, leading to our main result in Theorem 3.3, which differs from the results in [60] in many aspects. First, we use a variation measure depending on an approximate sequence of NE; note that while the example in Proposition 3.2 is similar to that in Appendix C of [60], as the reviewer points out, it serves a different purpose in our context. Furthermore, we connect nonnegativity of dynamic regret with the MVI property from variational inequalities. This allows extensions to settings such as polymatrix zero-sum games and convex-concave games; given the tremendous interest such settings have received, we believe that those extensions are important. Our results are also based on a simpler algorithm: simply run optimistic gradient descent (OGD)—or variants of optimistic mirror descent—with a time-invariant (constant) learning rate. We believe that this is of independent interest given how well-studied those algorithms are in the static setting, and it is also worth noting that several prior papers have motivated using a constant learning rate—which has not been done in this context. In contrast, the algorithm of [60] has further layers of complications, which are of course there to handle issues such as parameter tuning; while such issues are crucial in the setup of [60] in order to minimize regret, they are not present in our setting precisely because our focus is different. Namely, our focus (in Section 3.1) is to characterize the equilibrium gap of OGD.
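As a hedged numerical sketch (our own illustration, not an experiment from the paper), constant-step-size optimistic gradient descent-ascent on the simple static bilinear game $\min_x \max_y xy$ converges to the saddle point $(0, 0)$, whereas plain gradient descent-ascent spirals away; the step size $0.1$ and the horizon below are arbitrary choices.

```python
def ogda(x, y, eta=0.1, steps=2000):
    """Optimistic GDA on f(x, y) = x * y; the gradients are (y, x)."""
    gx_prev, gy_prev = y, x  # previous gradients (initialized with current)
    for _ in range(steps):
        gx, gy = y, x  # current gradients, evaluated simultaneously
        x = x - eta * (2 * gx - gx_prev)  # optimistic descent step for x
        y = y + eta * (2 * gy - gy_prev)  # optimistic ascent step for y
        gx_prev, gy_prev = gx, gy
    return x, y

x_T, y_T = ogda(1.0, 1.0)
print(abs(x_T), abs(y_T))  # both shrink toward the saddle point at 0
```

With the same step size, the non-optimistic update `x -= eta * y; y += eta * x` would instead drift away from the origin, which is the classical motivation for the optimistic correction term.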
We also want to highlight that our results and our technical approach beyond Section 3.1, namely in Sections 3.2-3.4, are in general different from those in [60]; this includes time-varying potential games, general-sum games, and strongly convex-concave settings. Each of those settings introduces their own challenges that are in general different from the ones encountered for bilinear saddle-point problems.
So, despite a number of similarities with [60] that we carefully point out throughout the paper, most of those similarities are technical tools used consistently in this line of work (such as RVU-type properties and second-order path lengths), and we do not believe that they weaken our contribution. We believe that we provide a number of new insights beyond what was known in prior work, and our results are overall complementary with [60].
We finally answer the reviewer's following question.
*“Is there a case where both the exact NE path-length and the $W_T$ are large but the epsilon-NE path-length is very small?”*
Yes. Consider the sequence of matrices $A^{(1)}, \dots, A^{(T)}$ provided in Proposition 3.2. Now let us instead take the sequence $A^{(1)} + c^{(1)} I, \dots, A^{(T)} + c^{(T)} I$, where $c^{(1)}, \dots, c^{(T)} \in \mathbb{R}$ and $I$ is the all-ones matrix. It is clear that the variation measures that depend on the Nash equilibrium are exactly the same no matter how $c^{(1)}, \dots, c^{(T)}$ are chosen. On the other hand, by suitably selecting the sequence of $c^{(1)}, \dots, c^{(T)}$ we can make $W_T$ to be arbitrarily large. This is a quite trivial example, but demonstrates the volatility of $W_T$ as a variation measure since it is not robust to transformations that retain (approximate) Nash equilibria.
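As a hedged numerical check of this invariance (our own sketch, with arbitrary matrices), shifting a payoff matrix by $c$ times the all-ones matrix changes every bilinear payoff by exactly $c$, since mixed strategies sum to one; hence best responses and (approximate) Nash equilibria are untouched, while the matrix-variation measure $W_T$ can be inflated at will.

```python
import random

def payoff(A, x, y):
    """Bilinear payoff x^T A y for mixed strategies x and y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def shift(A, c):
    """Add c times the all-ones matrix to A."""
    return [[a + c for a in row] for row in A]

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
x = [0.2, 0.5, 0.3]  # any mixed strategies (entries sum to 1)
y = [0.6, 0.1, 0.3]
c = 1000.0
# The shift changes the payoff by exactly c, uniformly over all (x, y), so
# equilibrium-based variation measures are unchanged while W_T grows with c.
print(payoff(shift(A, c), x, y) - payoff(A, x, y))
```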
---
Rebuttal Comment 1.1:
Title: Thanks for the authors' response
Comment: Thanks for the authors' response to my questions and the comparison between the submission and [60]. However, I am still not convinced of the technical contribution of this submission compared with [60].
- While the authors argue that "using RVU-type bounds and second-order path lengths is very standard in this line of work", the main theorem (Theorem 3.1) of this submission is obtained by directly combining the dynamic RVU-type bound proposed in [60] and the (almost) non-negative sum-of-regret property, which has already been shown in [60] (see Equation (32) of [60]) and Section 3.1 of [Anagnostides et al., 2022]. As mentioned, the only difference I see is that [60] considers the exact NE divergence while the authors consider the approximate NE divergence, which is not hard to obtain by extending the analysis in [60], replacing $P_T$ with the $\epsilon$-NE path length $+ T\epsilon$ following Lemmas 16/18 in [60]. From the technical perspective, the extension to polymatrix zero-sum games, convex-concave games, and general-sum games (using CE) is not complicated given the (almost) non-negative sum-of-regret property, as shown in Proposition 3.2 of [Anagnostides et al., 2022].
- While the authors argued about the choice of $\eta$, I believe that in [60], if the measure is only the equilibrium gap (duality gap in [60]), a constant learning rate is also enough to prove the results, as the meta-base algorithm design in [60] is mainly about adapting to different metrics. Also, as mentioned in Section A.1.5, to achieve a better regret bound, the adaptive tuning method proposed in [60] is necessary, which is exactly what is proposed in [60].
Based on the above, I am not convinced that the contribution is significant.
[Anagnostides et al., 2022] On Last-Iterate Convergence Beyond Zero-Sum Games, ICML 2022
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the constructive feedback.
We first want to point out that the paper of Anagnostides et al. referenced by the reviewer deals solely with static games; unlike [60], it does not rely on the nonnegativity of dynamic regret, which is an important ingredient in the dynamic setting. All similarities of our approach with [60] (including all points made by the reviewer above) are already carefully explained throughout the paper, especially in the subsection describing our contributions.
To address the reviewer's point, many of the extensions we provide, including using an improved variation measure based on approximate Nash equilibria, time-varying variational inequalities based on the MVI property, and correlated equilibria, are not complicated, but do require combining multiple suitable ingredients, which we believe to be a valuable contribution. And in any event those are new results concerning well-studied problems not derived in prior work, so we consider them to be an important addition to the existing literature. For example, the bilinear formulation of correlated equilibria we employ is definitely not standard, which is another reason why that important setting was not addressed in prior work. Our results also cover time-varying potential games and time-varying strongly convex-concave games, which depart considerably from [60]. | Summary: In this work the authors consider no-regret learning in multiagent games where the underlying game varies across different rounds. They study several classes of games and various learning algorithms that the agents can use. Naturally, the results they obtained are parametrized by variation measures of the underlying game that the agents participate in. To be more precise:
* For time varying zero-sum games, they focus on the setting where both of the agents are using optimistic gradient descent (OGD), which is a variant of gradient descent that puts a bias on more recent rounds of the game. Interestingly, they show that almost all iterates of OGD are approximate Nash equilibria provided that some variation measures related to changes in the set of approximate equilibria of the games and the underlying payoff matrices are $o(T)$.
* Then, they consider sequences of games where the games in each round are strongly convex-concave. For this class of games, they are able to show a similar result as above, but under weaker variation conditions for the underlying sequence of games.
* Finally, they consider time-varying general-sum games. Naturally, since Nash equilibria are not tractable in this setting, they consider convergence to correlated equilibria. They prove similar results as above, but now the variation measure they use is related to the set of correlated equilibria of the game.
Their results have implications to other settings as well such as meta-learning and dynamic regret guarantees in static games.
Strengths: * The paper studies a very natural problem and provides strong results under various settings of interest, which also have implications in other settings, as I mentioned in the summary.
* For the most part, the paper is easy to follow and the authors have done a good job placing their work in the literature.
* The authors are not trying to oversell the proof technique, which is heavily inspired by prior works but uses some natural and clever modifications. For example, instead of letting the variation measure depend on variations of the set of exact Nash equilibria (which would make the problem very difficult to handle, since this set is very sensitive to any changes in the payoff matrices), the authors consider variations of the set of approximate Nash equilibria, which behaves much more nicely. Since the results are strong and general, I don't think that the authors should be penalized for the fact that the proof techniques are not very novel.
Weaknesses: * Some parts of the paper might be a bit hard to follow for non-experts, especially in Section 1.1. For example, the MVI property and the RVU bound were not defined. I think the authors could make the transition to this section a bit smoother, although I understand that the space limitations are making it trickier.
* Even though the variation measures the authors use are intuitive and it makes sense that the regret should scale with these quantities, there are no lower bounds to show the extent to which these results are optimal.
Some minor comments:
* In Proposition 3.10, it might be useful to state which dynamic benchmark you consider for their dynamic regret bound.
* With this bibliography style it is a bit hard to keep track of the references, although I understand that it saves some valuable space.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * What are the technical challenges to generalize the results to the bandit or some other partial feedback model?
* Would there be any benefit if you considered a variational measure wrt approximate correlated equilibria in general sum games instead of exact correlated equilibria?
* To what extent do you think that the results are tight?
* Do the results for two-player zero-sum games generalize to multiagent zero-sum polymatrix games? I don't see any inherent obstacles to do that using your approach, but I might be missing something.
* Another class of general sum games that is tractable in the single-shot setting are games in which the underlying matrices are rank-1. Is there any hope to obtain similar results for this class of games? I would imagine that the techniques would need to be substantially different from your approach.
* In Theorem 3.3 (and similar results) if the parameter $L$ is not known does the usual guess-and-double trick work to get the bound?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“Some parts of the paper might be a bit hard to follow for non-experts, especially in Section 1.1.”*
We will make sure to introduce further background in the revised version of the introduction.
*“What are the technical challenges to generalize the results to the bandit or some other partial feedback model?”*
An important challenge beyond the full-feedback setting is that it is not known whether RVU-type properties hold, which is crucial for our analysis; see, for example, the paper “More Adaptive Algorithms for Adversarial Bandits” by Luo and Wei. So it seems that an entirely different approach is needed beyond the full-feedback setting.
*“Would there be any benefit if you considered a variational measure wrt approximate correlated equilibria in general sum games instead of exact correlated equilibria?”*
Indeed, Theorem 3.9 can be more generally expressed with respect to approximate correlated equilibria (as Theorem 3.3). In light of Proposition 3.2, this can lead to a substantial improvement in the convergence bounds; we will point this out in the revised version.
*“To what extent do you think that the results are tight?”*
We believe that the dependence on the variation measure $\mathcal{V}^{(T)}_{\epsilon-NE}$ is unavoidable for any online learning algorithm, but it is less clear whether the dependence on $\mathcal{V}_A^{(T)}$ is necessary; closing the upper and lower bounds here requires further work.
*“Do the results for two-player zero-sum games generalize to multiagent zero-sum polymatrix games? I don't see any inherent obstacles to do that using your approach, but I might be missing something.”*
Indeed, as we point out in Remark A.5, Property 3.1 can be generalized to any time-varying variational inequality problem that satisfies the MVI property, which includes zero-sum polymatrix games. As such, our analysis readily carries over; we will point this out in the revised version.
*“Another class of general sum games that is tractable in the single-shot setting are games in which the underlying matrices are rank-1. Is there any hope to obtain similar results for this class of games? I would imagine that the techniques would need to be substantially different from your approach.”*
This is an interesting question. We believe that the MVI property no longer holds when the sum of the matrices is only known to be rank-1. So we agree that it seems to require very different techniques.
*“In Theorem 3.3 (and similar results) if the parameter $L$ is not known does the usual guess-and-double trick work to get the bound?”*
Depending on the normalization assumptions that we make, $L$ can be upper bounded by a parameter that depends on the number of actions of each player, and it is a mild assumption that this is known to the players. Alternatively, one could also use the doubling trick, as the reviewer suggested.
---
Rebuttal Comment 1.1:
Title: Authors' Rebuttal
Comment: I would like to thank the authors for their detailed response. I don't have any further questions. | Summary: This paper studies learning dynamics in games that change over time. This is a similar setting to [60], but while [60] focuses on regret guarantees, this paper focuses on iterate convergence to NE.
The main result states that for bilinear zero-sum games, running optimistic gradient descent (OGD) guarantees that,
$$
\sum_{t=1}^T EqGap_t = O(V_{NE-\epsilon}^T + V_A^T),
$$
where $EqGap_t$ is the NE gap (i.e., the difference between the player's utility and the best response), $V_{NE-\epsilon}^T$ is the variation of the $\epsilon$-approximate NEs of the games $+\epsilon T$ (in fact, they allow a different $\epsilon$ for each $t$), and $V_A^T$ measures the variation of the game matrices. $V_{NE-\epsilon}^T$ can be much smaller than the variation of the exact NEs.
The paper continues by providing variation-dependent bounds for strongly convex games. Next, they provide a bound on the sum of NE gaps for general-sum potential games (which depends on some notion of variation of the potential functions), as well as a bound on the sum of CE gaps in general games. Finally, they present results for *dynamic* regret in static games.
Strengths: The paper provides a set of interesting results. These include,
- Various results on the sum of NE gaps. Specifically, I find the notion of $V_{NE-\epsilon}^T$ very elegant, and indeed, it seems like a much more reasonable notion for characterizing the complexity of time-varying-games instance.
- Bounds on the dynamic regret in static games - even though this is a basic question, according to the authors these are the first results that show $\sqrt{T}$ dynamic regret in games (they also show $\log T$ dynamic regret under a stronger feedback model)
Weaknesses: - The main text lacks proofs/proof sketches. So it is impossible to understand the main ideas and techniques, even at a high level, without diving into the full technical proofs in the appendix (+ it makes it hard to evaluate the technical contribution of the paper).
- In several places, it is quite hard to follow the text. Specifically, Section 1.1 is rather technical, given that it is part of the intro. In addition, the section on general games that starts in line 287 was not sufficiently clear to me. For example, the authors mention that *"there exist matrices
$A_1, . . . , A_n$, with each matrix $A_i$ depending solely on the utility of Player i..."*, but what exactly do these matrices represent? What does the value of the optimization problem in (3) represent? Why is it that *"incorporating the 0 vector will be useful"*? And why does there exist a $\mu^\star$ that satisfies the condition in line 300?
- Lack of comparison to previous work: what is the result of [30] and how does it compare to your result on "meta-learning"? How does Corollary 3.4(2) compare to the result in [60] (except for the difference between $V_{NE-\epsilon}^T$ and $V_{NE}^T$)?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the difference between the sum NE gaps (Corollary 3.4) and dynamic regret (line 342)?
See additional questions in the Weaknesses part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
*“The main text lacks proofs/proof sketches. So it is impossible to understand the main ideas and techniques, even at a high level, without diving into the full technical proofs in the appendix”*
We will make sure to provide high-level proof sketches in the main body, so that the main body is self-contained in the revised version.
*“the section on general games that starts in line 287 was not sufficiently clear to me.”*
We will provide a more self-contained presentation regarding the derivation of the bilinear formulation of correlated equilibria in the revised version; a textbook treatment can be found, for example, in Chapter 12 in the book “Game Theory Basics” by Von Stengel. Below, we address the reviewer’s questions regarding the bilinear formulation.
This bilinear formulation represents a game between an additional player (namely, the mediator) and the set of players. Each player tries to optimally deviate from the mediator strategy, while the mediator is trying to find a strategy so that no player has an incentive to deviate, which is by definition a correlated equilibrium. So, each matrix $A_i$ encodes the deviation benefit of player $i$ (assuming that all other players are following the mediator's recommendation), and the value of the optimization problem (3) is the sum of the players’ deviation benefits. Notice that since a correlated equilibrium exists, there exists a mediator strategy $\mu^\star$ such that $(\mu^\star)^\top A_i x_i \leq 0$, for any player $i$. The same of course holds by allowing players to select strategies in conv$(X_i, 0)$ (since we just multiply by a nonnegative scalar). So, the bilinear formulation remains legitimate after taking the convex hull with $0$, and it proves that a $\mu^\star$ satisfying the condition of Line 300 indeed exists.
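As a hedged sketch in our own notation (the paper's exact matrices $A_i$ and constraint sets may differ), the mediator-vs-players view described above can be written as a bilinear saddle-point problem:

```latex
% mu is the mediator's strategy over joint action profiles; each player i
% selects a deviation x_i. The matrix A_i encodes player i's benefit from
% deviating when all others follow the mediator's recommendation.
\min_{\mu \in \Delta} \; \max_{x_1, \dots, x_n}
    \; \sum_{i=1}^{n} \mu^{\top} A_i x_i .
% Since a correlated equilibrium mu* exists, (mu*)^T A_i x_i <= 0 for every
% player i and deviation x_i, so the value of this problem is at most 0;
% allowing x_i in conv(X_i, {0}) (i.e., incorporating the 0 vector) keeps
% the formulation valid and pins the value at exactly 0.
```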
*“Lack of comparison to previous work: what is the result of [30] and how is it compars to your result in "meta-learning"?”*
Compared to the results in [30], we note that our guarantees depend on different similarity metrics, which are in general incomparable; yet, we remark that there are settings in which our similarity metric can be arbitrarily smaller than the one in [30], even in zero-sum games. Furthermore, for general-sum games, we obtain algorithm-independent similarity metrics, which was left open in that work.
The algorithms we employ are also different. In particular, unlike [30], our approach is essentially agnostic to the meta-learning in that we do not need to know the boundaries of each game. Instead, our meta-learning guarantees are byproducts of our results for time-varying games, which is a more general problem than meta-learning.
*“How does Corollary 3.4(2) compare to the result in [60]?”*
First, the authors of [60] focus on regret minimization in time-varying games, while Corollary 3.4 provides guarantees of iterate-convergence to Nash equilibria. Those two problems are in general unrelated; for example, even in static games an algorithm can have vanishing regret but at the same time all iterates can have a large Nash equilibrium gap. Moreover, all of our results concern the behavior of optimistic mirror descent (OMD), while the algorithm in [60] is more complicated. Given the tremendous amount of interest OMD has received in recent years, understanding its behavior is a question of independent interest. Finally, leveraging the connection we make with the MVI property (Remark A.5), our Corollary 3.4 directly applies to time-varying variational inequality problems as well, such as time-varying zero-sum polymatrix games, not just time-varying bilinear saddle-point problems.
*“What is the difference between the sum NE gaps (Corollary 3.4) and dynamic regret (line 342)?”*
Those are indeed the same; as such, notice that Line 342 is in fact a direct consequence of Corollary 3.4.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have no further questions. | Summary: The paper studies optimistic gradient descent for time-varying games. Authors prove convergence bounds for zero sum games involving the first order variation of approximate nash equilibrium, which can be significantly tigher than variation bounds involving exact nash equilibria and second order bounds in payoff matrices. The paper also includes refined convergence bounds involving second-order variation for strongly-convex-concave games. The results also have implications for meta-learning, where games are repeated many times.
The authors also extend results to time-varying general-sum multiplayer games with correlated equilibria, extending exisiting meta-learning similarity measures to general sum games and proving new single-player regret bounds. Techniques are applied to static games, improving our understanding of dynamic regret.
Strengths: Nonstationarity is an important and challenging area of study for learning in games, and is under-explored.
Bounds involving approximate nash equilibrium variation can be significantly tighter than existing bounds involving variation in exact nash equilibria.
The results for general-sum games solve two independent open problems.
Ideas provide improved understanding of dynamic regret for static games, including both positive and negative results.
The paper is well written, including a variety of results while still providing technical exposition on insights behind the proofs and contextualizing the result.
Weaknesses: Observation 3.11 requires two-point feedback, so it is less clear that this is a significant improvement.
The paper could probably be a bit more self-contained. The paper borrows ideas from [60], such as a dynamic RVU bound, but it is hard to follow without additional context.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Typo 88: accelerates-> accelerated
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for their feedback.
We will make sure to provide further background in order to make the paper more self-contained in the revised version. We also thank the reviewer for spotting a typo; we will fix it in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have no other comments at this time. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Lovász Principle for Unsupervised Graph Representation Learning | Accept (poster) | Summary: The paper introduces the "Lovasz Principle", an unsupervised or semi-supervised graph representation learning approach inspired by the Lovasz number. The motivation for using the Lovasz Principle is well established, and the authors extensively discuss related work and similar approaches. The experimental setup is sound, and the Lovasz Principle is tested against many widely used graph representation learners, with better results in all experiments.
Strengths: - The paper is well-written and easy to follow, with compelling arguments and intuitive explanations.
- The steps taken to derive the Lovasz Principle are well laid out.
- The experimental setup is sound and contains many different approaches to graph representation learning. Variants of the Lovasz Principle consistently obtain the best scores on all datasets. I appreciate the inclusion of Figure 2 for a quick visual summary.
- Compared to InfoMax, the Lovasz Principle is simpler because it does not require a discriminator for the Jensen-Shannon MI estimator. The Lovasz Principle also has a faster runtime.
Weaknesses: - The authors discuss the non-uniqueness of $U_i^*$ and $c_i^*$ and give some intuition regarding why this is the case, but mention that they "hope that similar graphs have similar representations". It would be interesting and make the argument more compelling if the authors computed $c^*$ for various "similar" and "different" graphs and showed empirically that they are indeed similar or different.
- Similarly, it would be interesting if the authors empirically showed that $\mathcal{F}_W$ is actually a good approximator for $(U_i^*, c_i^*)$ by comparing them with $\mathcal{A}(G_i)$, where $\mathcal{A}$ is some solver.
- It would make the paper more readable if the authors moved the footnote from page 4 to the caption of Tab. 1.
- I appreciate that the authors go into great detail regarding hyperparameter sensitivity in the Supplementary material, but they don't mention the effect of the hyperparameters in the main paper. The reader should be aware that hyperparameters can significantly affect the final performance, and one should consider this when using the Lovasz Principle.
- I could not find any code provided by the authors for reproducibility.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - While Figure 1 and the description for the pentagon example are great, I was slightly confused when I initially read the part regarding the orthogonal vector pairs. This is because I've read $(u_i^*, u_j^*)$ as being a vector, not a pair. It would be great if the authors somehow clarified the intuition about the orthogonality.
- I assume that $\odot$ in Eq. 8 is the Hadamard product? It would be nice to state this somewhere explicitly.
- The authors mention obtaining node-level representations via the $F(\cdot, \cdot, \theta)$ function but are tackling only graph-level tasks. Have you tried node-level tasks? It would be interesting to see the approach tested on them.
- Have you tried assessing if $\mathcal{F}_W$ is a good approximator for $(U_i^*, c_i^*)$? Maybe the formulation helps only in a representation-learning sense and doesn't produce a good approximator.
- Will the code be publicly available? I will update my score accordingly if this is the case.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The Lovasz Principle adds some new hyperparameters that could affect the final performance if not correctly chosen. The authors have included an extensive discussion regarding this in the Supplementary to the paper, but I believe this should also be mentioned in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1: Handle vector comparison between "similar" and "different" graphs**
**Response:** Given two graph sets $\mathcal{S}_ 1$ and $\mathcal{S}_ 2$, we denote by $c_ i$ the handle vector of graph $G_ i$. We define the following cross-set difference:
$\ell_ {c}(\mathcal{S}_ 1, \mathcal{S}_ 2) = \frac{1}{|\mathcal{S}_ 1| \times |\mathcal{S}_ 2|} \sum_{G_ i \in \mathcal{S}_ 1, G_ j \in \mathcal{S}_ 2} \Vert c_ i - c_ j \Vert_ 2^2.$
Given a graph dataset $\mathcal{G}$, it is natural to assume that graphs in the same class are "similar" and graphs in different classes are "different". We then select $\mathcal{S}_1$ and $\mathcal{S}_2$ from the first class of $\mathcal{G}$ such that $\mathcal{S}_1 \cap \mathcal{S}_2 = \emptyset$, and $\mathcal{S}_3$ from the second class of $\mathcal{G}$. We use InfoGraph as our GNN framework by substituting its InfoMax loss with the Lovasz loss Eqn. (9). For each dataset, we select 30 graphs for each subset. We repeat the experiment 10 times and report the results as follows:
| Difference | MUTAG | PROTEINS | DD | NCI1 |
|---|---|---|---|---|
| $\ell_{c}(\mathcal{S}_ 1, \mathcal{S}_ 2)$ | 0.24 ± 0.13 | 0.27 ± 0.08 | 0.23 ± 0.10 | 0.26 ± 0.12 |
| $\ell_{c}(\mathcal{S}_ 1, \mathcal{S}_ 3)$ | 0.31 ± 0.16 | 0.33 ± 0.11 | 0.28 ± 0.15 | 0.35 ± 0.17 |
The results show that the handle vectors of "similar" graphs are close, while those of "different" graphs are far apart.
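For concreteness, the cross-set difference above is just the mean pairwise squared Euclidean distance between the handle vectors of the two sets. A minimal sketch (plain Python with hypothetical names; the actual experiment uses handle vectors produced by the trained GNN):

```python
def cross_set_difference(handles_a, handles_b):
    """Mean squared Euclidean distance over all pairs of handle vectors
    drawn from two graph sets -- the metric l_c defined above."""
    total = 0.0
    for c_i in handles_a:
        for c_j in handles_b:
            total += sum((a - b) ** 2 for a, b in zip(c_i, c_j))
    return total / (len(handles_a) * len(handles_b))

# Toy usage with 2-D handle vectors:
s1 = [[0.0, 0.0], [1.0, 0.0]]
s2 = [[0.0, 1.0]]
print(cross_set_difference(s1, s2))  # (1 + 2) / 2 = 1.5
```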
**Weakness 2: $\mathcal{F}_W$ vs $\mathcal{A}(G)$**
**Response: please refer to our response to your Question 4.**
**Question 1: intuition about orthogonality**
**Response:** In [1], a coding source with symbols is represented as a graph $G = (V, E)$, where each vertex represents one symbol. If the symbols of vertex $i$ and vertex $j$ share some information in common, these two symbols can be confused when coding a message. This confusability relationship is represented as an edge between vertex $i$ and vertex $j$. László Lovász proposed to represent the information of a coding symbol as an orthonormal vector. If vertex $i$ and vertex $j$ are not adjacent, they share no information in common, and thus their vector representations $v_i$ and $v_j$ should be orthogonal to each other, i.e., $v_i^\top v_j = 0$. That is the intuition about the orthogonality, which is derived from the information-theoretic view.
[1] László Lovász. On the Shannon capacity of a graph
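To make this concrete with the pentagon example from Figure 1 of the paper, the following numerical sketch (added here for illustration; not the authors' code) builds Lovász's "umbrella" orthonormal representation of the 5-cycle $C_5$, checks that non-adjacent vertices indeed receive orthogonal vectors, and recovers the known value $\vartheta(C_5)=\sqrt{5}$:

```python
import math

# Lovász "umbrella" for the pentagon C5: five unit "rib" vectors
# u_0..u_4 on a cone around the handle c = (0, 0, 1), opened just
# wide enough that every NON-adjacent pair (k and k+2 mod 5) is
# orthogonal.
c = (0.0, 0.0, 1.0)
h2 = math.cos(math.pi / 5) / (1 + math.cos(math.pi / 5))  # squared rib height
h, s = math.sqrt(h2), math.sqrt(1 - h2)                   # height, radius

u = [(s * math.cos(2 * math.pi * k / 5),
      s * math.sin(2 * math.pi * k / 5),
      h) for k in range(5)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Valid orthonormal representation: unit norms, non-adjacent pairs orthogonal.
assert all(abs(dot(u[k], u[k]) - 1.0) < 1e-12 for k in range(5))
assert all(abs(dot(u[k], u[(k + 2) % 5])) < 1e-12 for k in range(5))

# The min-max objective evaluated at this (c, U) gives the Lovász number:
theta = max(1.0 / dot(c, u_k) ** 2 for u_k in u)
print(theta)  # ≈ 2.2360679... = sqrt(5)
```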
**Question 2: Is $\odot$ a Hadamard product?**
**Response:** Yes. We will state it explicitly.
**Question 3: Lovasz principle for node-level tasks**
**Response:** Yes. Our Lovasz principle can also be applied to node-level tasks, but it may not provide very good results. The reason is that the Lovasz principle aims to learn the global representation of a graph and may not provide discriminative representations for individual nodes. Here we apply the Lovasz principle to GCN and then evaluate the node-level representations on the edge prediction task, comparing with VGAE [2]. We report the AUC scores as follows.
| Method | Cora | Citeseer | Pubmed |
|---|---|---|---|
| VGAE | 91.4 ± 0.01 | 90.8 ± 0.02 | 94.4 ± 0.02 |
| Lovasz | 84.3 ± 0.24 | 83.1 ± 0.96 | 87.2 ± 0.37 |
VGAE outperformed the Lovasz principle. Nevertheless, if we convert the node-level tasks to subgraph tasks, i.e., representing each node by a small (local) subgraph containing the node, we may obtain better results for Lovasz principle. We will add more discussion about the application to node-level tasks in the supplementary material.
[2] Kipf, T. N., and Welling, M. Variational graph auto-encoders.
**Question 4: Quality of solver Approximation**
**Response:** Solving for the Lovasz number is a problem with computational complexity $O(n^3)$, whose solution can be learned by a GNN with sufficient capacity.
Given a GNN model $\mathcal{F}_W$ trained via the Lovasz principle, the predicted Lovasz number is defined as
$\hat{\vartheta}(G) = \max_ {p \in V} \frac{1}{(\hat{z}^\top \hat{h} _ p)^2}, ~~\text{with} ~~(\hat{z}, \hat{H}) = \mathcal{F} _ W(G)$.
The ground truth Lovasz number of $G$ is denoted as $\vartheta(G)$, which can be computed by the optimization method in [3]. We define an evaluation metric as
$r = \frac{|\hat{\vartheta}(G) - \vartheta(G)|}{\vartheta(G)}$
[3] Wen, Z., and Yin, W. A feasible method for optimization with orthogonality constraints.
We use InfoGraph as our GNN $\mathcal{F}_W$ by substituting its InfoMax loss with the Lovasz loss.
We also propose a constrained optimization method in the **Concern about the regularization approximation instead of exact constraints** part of the rebuttal for Reviewer JpeL [4].
[4] Reviewer JpeL: \url{https://openreview.net/forum?id=0vdEHDwamk&noteId=J2z1RvNm3g}
We select 50 graphs from each of four datasets and report the Lovasz number regression rate $r$ for both the regularized ($\mu = 10$) optimization and the constrained optimization for the Lovasz principle as follows.
| $r$ | MUTAG | PROTEINS | DD | NCI1 |
|---|---|---|---|---|
| regularized opt. | 0.097 ± 0.034 | 0.082 ± 0.021 | 0.063 ± 0.011 | 0.102 ± 0.036 |
| constrained opt. | 0.065 ± 0.024 | 0.073 ± 0.016 | 0.061 ± 0.012 | 0.085 ± 0.023 |
We see that the estimation errors given by $\mathcal{F}_W$ are less than 10% in almost all cases. The constrained optimization method [3] performs better than the regularized optimization.
**Question 5: Will the code be publicly available?**
**Response:** Sure, we will make all of our code publicly available and upload it to GitHub. In fact, our Lovasz principle is a loss that can be used by almost all GNN methods.
**Limitation**
**Response:** We will move more results from the supplementary material to the main paper, since NeurIPS allows an additional page in accepted papers.
---
Rebuttal Comment 1.1:
Comment: I extend my gratitude to the authors for diligently addressing both my concerns and those of the other reviewers. The inclusion of new experiments and observations has strengthened the paper, and I'm convinced it constitutes a valuable contribution worthy of acceptance for publication at NeurIPS.
I would urge the authors to incorporate the less favorable results (e.g., VGAE vs. the Lovasz principle) into their Supplementary materials, as a thorough discussion of these aspects could enhance the work's completeness.
In light of my concerns being satisfactorily resolved, and the authors' commitment to making their code publicly accessible, I have revised my score from 6 to 7.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback
Comment: We sincerely thank you for the feedback on our rebuttal and for recognizing our work and raising the score. We will follow your suggestion and make the paper more complete. | Summary: This paper presents a technique for unsupervised graph-level (and potentially node-level) representation learning on graphs. The main idea behind the proposed method is the concept of the *Lovász number*, a graph invariant that is related to several graph properties and is computed by solving a min-max optimisation problem on the graph. In particular, to compute this, one needs to optimise a graph-level representation (named the *handle vector*), as well as a set of node-level representations (named the *optimal graph representation*), under certain constraints.
Although the optimisation problem can be solved in polynomial time, it has several disadvantages (it is computationally expensive, it does not have a unique solution, it cannot generalise to unseen graphs, etc.). The authors propose to overcome them by replacing the conventional optimiser with a neural network-based one, i.e. they train a neural network to map each graph to the aforementioned representations, by optimising the Lovasz number objective (regularised by the constraints) with gradient descent. This idea is extended to a more fine-grained setup in order to incorporate subgraph-level information via the so-called Lovasz kernel (a kernel that compares subgraph-level Lovasz numbers). The method is evaluated on several scenarios (unsupervised, semi-supervised and transfer learning), showing consistently competitive performance, and ablated with regards to some algorithmic choices and hyperparameters.
Strengths: **Presentation and motivation**: The paper is generally well-written (with the exception of some clarity issues – see weaknesses). The idea of the Lovasz number is well-explained and illustrated, as well as its role as a graph description is well-motivated. Moreover, the arguments in favour of devising a learning algorithm instead of a conventional optimiser are well-presented.
**Originality and potential impact**. The concept of the Lovasz number is under-explored by the graph ML community (especially from the perspective of graph neural networks), therefore this paper brings a refreshing idea into the field, that might be useful in topics beyond unsupervised learning and pose some interesting new research questions (I mention some of them in the weaknesses section). It may also incentivise more research exploring other graph properties and combinatorial optimisation problems for unsupervised learning.
**Empirical results**. The method seems to provide consistent performance improvements over multiple unsupervised learning baselines, which indicates its practicality. Moreover, judging from the reported results in the appendix, it seems relatively robust to hyperparameter choices.
Weaknesses: **Lack of theoretical justification & ambiguity regarding what the model is learning**. Given the empirical results, it seems that the method has some appealing property that makes it work well in practice. However, perhaps the biggest weakness of this paper is that it remains unclear what this property is and the authors have not discussed this adequately. In particular,
- It is unclear if the GNN is actually learning the handle vector + optimal graph representations (when optimising Eq. (8)). To be more precise, it is unclear if a GNN *can* express this (mapping a graph to representations that optimise Eq. (2), using a GNN). Given the computational complexity of the problem, I suspect that a GNN (of linear complexity), probably will *not* be able to optimally succeed in this task, but perhaps maybe only approximate it? These are important theoretical questions that should be discussed by the authors.
- Additionally, the constraints are not guaranteed to hold, which casts further doubt on the ability to converge to the desired representations. I think it would be useful to actually test (e.g. for the models whose performance is reported in the tables) how far are the resulting representations from satisfying the constraints, and how good are they in approximating the Lovasz number. Moreover, do the node features (that are taken into account by the GNN, but not in Eq. (2)) affect the resulting representations? Reporting these results will provide further insights into what these models are actually learning.
- I am wondering if it is actually desired to learn to infer the handle vector since the results in Table 1 (reported as Lovasz number) are significantly worse than the proposed method (reported as Lovasz principle). Isn’t this contradictory to the initial motivation?
- Eventually, I am wondering why is this method so good (and better than the Lovasz number) at solving the tested downstream tasks. What kind of information do the resulting representations contain? Maybe something related to substructures? Note that the arguments mentioned by the authors are attractive properties of the Lovasz number, but not the handle vector.
**Clarity issues and inadequate justification of design choices**. Some of the choices made by the authors are not sufficiently justified and I’d like to encourage them to provide more details. In particular,
- Section 3, Eq. (8): Wouldn’t it be possible to guarantee the satisfaction of the 2nd constraint (unit-length constraint for the handle vector) by construction? For example, isn’t it possible to extract an unconstrained graph representation using a GNN and then divide it by its norm? It would be interesting to discuss if something similar could happen for the 1st constraint.
- Section 3, Eq. (10): Similarly, here why does one need a second encoder (and therefore the third regularisation term)? Why not use the same encoder and compute both objectives with that?
- Section 4. This section lacks the required level of detail and some parts are inadequately explained.
- I am a bit confused by the iterative nature of the algorithm here. Could the authors elaborate? Do the iterations refer to gradient iterations, i.e. is this akin to bi-level optimisation? If this is the case, why is the first term of Eq. (14) not indexed by $t$?
- How many subgraphs are needed to obtain a good estimate of the kernel?
- The role of the spectral embedding idea is unclear to me. Why are the handle vectors encouraged to be orthogonal in Eq. (13), and most importantly what is the role of the last term?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: **Experiments**.
- Could the authors discuss how the loss terms behave in practice (for both Eq. (8) and Eq. (13))? Does one term dominate over the others, and how far are the constraints from being satisfied (this is also related to my first concern)?
- More experimental/implementation details are needed regarding the transfer learning experiment. The authors mentioned that these can be found in the supplementary material, but I couldn't find enough details. For example, it was unclear how the pre-training is performed and where the Lovasz number loss is used.
- Appendix Section 3.2.: I found it a bit puzzling that the performance does not deteriorate when $\mu$ is too large (e.g. 1000). I would expect that these values would imply an unsuccessful optimisation of the first term of the objective. Have the authors tested this (e.g. by testing if the learned graph and node-based representations lead to a value close to the Lovasz number)?
- Appendix section 4.1. is a bit confusing. It seems that in several cases, using the orthonormality constraint deteriorates performance. Could the authors comment on this?
- Did the authors reimplement the baselines, in order to ensure fairness of comparison?
**Minor**:
- If I am not mistaken the Lovasz number is computed by also optimising over the dimension of the handle vector. Could the authors account for this as well?
- Could the authors provide more details on the computational complexity of computing the Lovasz number (this is relevant to the question if a GNN can solve this problem to optimality)?
- In the last two sections of Table 1, I assume that the authors simply used the neural network architectures of the mentioned papers, however the way the results are presented is a bit confusing. Maybe explicitly mentioning it in the caption would help. Moreover, the important element that should be highlighted in this table, is if the Lovasz optimisation objective improves against the baseline one, so maybe the authors would like to reorder the rows in the table to emphasise this comparison.
- It might be also useful to add some results in Table 1 from end-to-end supervised learning methods, in order to obtain a more complete picture of the capabilities of the unsupervised methods.
- Have the authors considered using the node-level representations for node-wise tasks?
- L291: “For those contrastive…naïve strategy”. Do the authors refer to the baselines? This should be clarified to avoid confusion.
- Typo: Section 4.3.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors make a short discussion on the limitations in the conclusion, but I think more should be mentioned (especially regarding the ambiguity regarding the learned representations - see the weaknesses section). No foreseeable negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We highly appreciate your comprehensive review, insightful comments, and positive assessment.**
**Weakness: Lack of theoretical justification & ambiguity regarding what the model is learning**
* We assumed that the training data ($N$ graphs) are drawn from the same distribution; thus, the graphs should share some common structures or properties. For instance, when two graphs are in the same class, their orthonormal representations should be similar, and the Lovasz numbers of the graphs and of their subgraphs are similar [Johansson et al., 2014]. Therefore, it is expected that a sufficiently large neural network is able to learn the intrinsic structure from the training graphs and then solve the optimization related to the orthonormal representation and handle vector in an end-to-end manner. The idea is similar to the general idea of learning to optimize [Li and Malik, ICLR 2017; Chen et al., JMLR 2022]. Please also refer to our response to Q4 of Reviewer 2QoV.
* Although in our paper we only reported the results given by the regularized optimization, we have found that its performance is similar to that of the constrained optimization (solved by projected updates or the exact penalty method). Moreover, the ablation study in the supplementary material shows that the method is not very sensitive to the penalty parameter $\mu$. This is due to the good quality of the handle vector, even though it is not the exact solution. In addition, the node features are indeed useful even though Eq. (2) does not involve them.
We define the following metric:
$r = \frac{|\hat{\vartheta}(G) - \vartheta(G)|}{\vartheta(G)}$
We have the following results (see our response to Reviewer 2QoV). The estimation errors are less than 10% in almost all cases.
| $r$ | MUTAG | PROTEINS | DD | NCI1 |
|---|---|---|---|---|
| regularized opt. | 0.097 ± 0.034 | 0.082 ± 0.021 | 0.063 ± 0.011 | 0.102 ± 0.036 |
| constrained opt. | 0.065 ± 0.024 | 0.073 ± 0.016 | 0.061 ± 0.012 | 0.085 ± 0.023 |
* The bad performance of the Lovasz number in Table 1 stems from the fact that the orthonormal representation and handle vector of a graph are not unique, as explained in lines 149-157 of the main paper. Thus, even if two graphs are exactly the same, the solver for (2) may return two very different handle vectors, which leads to bad performance in downstream tasks.
* In our Lovasz principle, because of the neural network and the end-to-end formulation, the orthonormal representation and handle vector of each graph are unique and rotation-invariant. That is why it is much better than the Lovasz number in Table 1. Moreover, the end-to-end formulation successfully learned the distribution information of the $N$ graphs, whereas solving the Lovasz number for each graph independently with a solver failed to capture this distribution information.
**Clarity issues and inadequate justification...**
* Yes. We have provided a solution in the response to Reviewer JpeL. Besides this projection-based strategy, we can use the exact penalty method, i.e., increasing $\mu$ gradually, which is a common strategy for constrained optimization.
* The InfoGraph method used two encoders (see Eq. (17)) for the semi-supervised learning and the works (GraphCL, AD-GCL, JOAO, AutoGCL, etc.) followed the convention. To ensure a fair comparison, we considered two encoders in our work.
* -----
* The optimization related to (14) is not a bi-level optimization. In each iteration $t$, the objective function is (14) and the parameters are updated by mini-batch optimization. We can regard $K_{ij}^{(t-1)}$ in (13) as a constant unrelated to the network parameters, so the optimization problem changes in every iteration. The idea is very similar to that of iteratively reweighted least squares. We will provide more explanation in the revision.
* We follow the implementation code of the Lovasz kernel [Joh+14] in the GraKeL framework [1]. When dealing with large graphs, GraKeL selects subgraphs of sizes 3, 4, and 5, while excluding others. This is their code implementation for achieving a well-approximated Lovasz kernel using samples as in [Joh+14, Theorem 1]. [1] GraKeL: \url{https://ysig.github.io/GraKeL/0.1a8/kernels/lovasz_theta.html}
* The orthogonality of the handle vectors and the last term (sum to 1) are constraints of spectral embedding [2,3].
[2] Belkin, M., and Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation.
[3] \url{https://perso.telecom-paristech.fr/bonald/documents/spectral.pdf}
**Experiments**
* The loss behavior is reported in the attached PDF file.
* We will provide more details about the experimental setting of transfer learning.
* The performance is indeed not sensitive to $\mu$ if it is in the range of $[0.1,1000]$. When $\mu$ is too large, the learned Lovasz number is quite different from the one given by a solver.
* Yes, in some cases, the orthonormality constraint reduced the accuracy. One possible reason is that the baseline method on a specific dataset learned diverse features, which makes it difficult to render them orthonormal.
* We reproduced the results of the baselines and found that they are very close to the results reported in the original papers. Therefore, we just report the authors' original results in the tables. This convention has been followed by previous works such as [Hu et al., 2019; You et al., 2021; Yin et al., 2022]. It should be pointed out that for our methods, based on the code released by [You et al., 2021; Yin et al., 2022], we simply replaced the InfoMax principle with our Lovasz principle, without changing any structure or optimization parameter in the original code. Therefore, the comparison is fair.
**Minor issues**
We thank the reviewer's detailed comments again and we will address these minor issues in the revised paper.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Dear Authors,
Thank you for your reply. Although some of my questions have been answered, my major concern regarding what the GNN actually learns and if vanilla GNNs can express the Lovasz number remains.
- An intuitive explanation about why the method works so well is still missing. Why should one choose to learn to solve for the Lovasz number and not for a different graph property? The lack of sensitivity to $\mu$ makes me even more sceptical.
- In their response to Reviewer 2QoV, the authors claim that “Solving for the Lovasz number […] can be learned by a GNN with sufficient capacity”. However, this is not proven but only tested experimentally, which in fact indicates that the Lovasz number can be approximated, not optimally computed. I am still sceptical if a GNN can indeed express the solution to this problem, mainly due to its computational complexity, as I already mentioned in my initial review; note that many graph properties are proven to be incomputable by vanilla GNNs. In case this is also true here, one possible proof idea is to find a pair of graphs (counterexample) that have the same Lovasz number but are indistinguishable by vanilla GNNs.
*Minor*. Additionally, the motivation behind adapting the spectral embedding idea is still not adequately explained.
For the time being, I will keep my score unchanged due to the above reservations.
---
Reply to Comment 1.1.1:
Title: Further clarification
Comment: Dear Reviewer,
We thank you for your further comments. Our responses are as follows.
1. The good performance is owing to the good property of the handle vector of Lovasz number. According to Figure 1 (Pentagon example for Lovász number, also presented in the first figure of the PDF in the global rebuttal) in our paper, intuitively, the handle vector can be viewed as the handle of an umbrella, where the ribs are nodes of the graph. Thus, the handle vector is able to capture the global structure of a graph and becomes a natural vector representation of the graph.
Importantly, according to the definition of Lovasz number, i.e.,
$$\qquad\vartheta(G):=\min _{\boldsymbol{c}, \boldsymbol{U} \in \mathcal{U}} \max _{p \in V} \frac{1}{\left(\boldsymbol{c}^{\top} \boldsymbol{u}_p\right)^2}$$ we see that the handle vector $\mathbf{c}$ is the vector with the smallest maximum angle to the vectors of the most spread-out orthonormal representation of a graph. So the handle $\mathbf{c}$ can be regarded as a 'centroid' of the ribs of the umbrella. The computation of $\mathbf{c}$ is based on the representations of all nodes in a graph.
We actually have studied other graph properties but did not find such a vector. It is worth noting that Lovasz number is polynomial-time solvable (e.g. SDP) while most other graph properties such as the clique number are NP-hard.
2. The Lovasz number is polynomial-time computable and can be solved by SDP [Grötschel et al., 1981; Galli and Letchford, 2017]. Currently, we cannot prove that the Lovasz number can be exactly solved by a GNN, but we think this is an important problem to be studied in the future, and it is difficult to solve all problems in one paper. There are many successful papers on learning to optimize. They use neural networks to solve convex $\ell_1$-minimization [1], neural network training [2], black-box optimization [3], combinatorial optimization [4], etc., but they **did not prove** that the corresponding problems can be exactly solved by the neural networks.
[1] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th international conference on international conference on machine learning, pages 399–406, 2010.
[2] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in neural information processing systems, pages 3981–3989, 2016.
[3] Yutian Chen, Matthew W Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to learn without gradient descent by gradient descent. In International Conference on Machine Learning, pages 748–756, 2017.
[4] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. Advances in neural information processing systems, 30:6348–6358, 2017.
In our response to Reviewer 2QoV, we used experiments to show that the solutions given by our neural network $\mathcal{F}_W$ are close to the solutions given by an exact solver. Given a GNN model $\mathcal{F}_W$ trained via our Lovasz principle, the predicted Lovasz number is defined as
$$
\qquad\hat{\vartheta}(G)=\max _{p \in V} \frac{1}{(\hat{z}^\top \hat{h}_p)^2}, \text { with }(\hat{z}, \hat{H})=\mathcal{F}_W(G)
$$
The ground truth Lovasz number of $G$ is denoted as $\vartheta^\ast(G)$, which is computed by the solver exactly. We define an evaluation metric as $$\qquad r=\frac{|\hat{\vartheta}(G)-\vartheta^\ast(G)|}{\vartheta^\ast(G)},$$
the smaller the better.
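The metric is a plain relative error; a minimal sketch with hypothetical values:

```python
def relative_error(theta_hat, theta_star):
    """r = |theta_hat - theta_star| / theta_star; smaller is better."""
    return abs(theta_hat - theta_star) / theta_star

# Hypothetical example: predicted 2.4 vs. ground-truth 2.0 gives r = 0.2.
r = relative_error(2.4, 2.0)
```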
We use InfoGraph as our GNN $\mathcal{F}_ W$ by substituting its InfoMax loss with the Lovasz loss. We also consider a constrained optimization to train $\mathcal{F}_ {W}$.
We select 50 graphs from four datasets and report the Lovasz number regression rate $r$ for both the regularized $(\mu=10)$ optimization and the constrained optimization of the Lovasz principle as follows.
**Table: Approximation Error for the exact Lovasz number**
\begin{matrix}
\hline
& MUTAG & PROTEINS & DD & NCI1 \\\\
\hline regularized opt. & 0.097 \pm 0.034 & 0.082 \pm 0.021 & 0.063 \pm 0.011 & 0.102 \pm 0.036 \\\\
constrained opt. & 0.065 \pm 0.024 & 0.073 \pm 0.016 & 0.061 \pm 0.012 & 0.085 \pm 0.023 \\\\
\hline
\end{matrix}
We see that the estimation errors given by $\mathcal{F}_W $ (regularized or constrained) are **less than 10%** in almost all cases. This demonstrates that the solutions provided by the neural network $\mathcal{F}_W$ are quite close to the exact solutions given by a solver. By the way, in our response to Reviewer 2QoV, we showed that the handle vectors are discriminative for graphs from different classes.
We hope this clarification alleviates your concerns. We are still considering how to prove whether neural networks are able to solve the Lovasz number exactly, though it is very challenging. Thank you again.
Sincerely,
Authors | Summary: This paper centers on graph-level representation learning, aimed at converting graphs into vectors useful for downstream tasks like graph classification. The authors propose a unique learning principle named the Lovász principle, inspired by the Lovász number in graph theory. The Lovász number, a real number serving as an upper bound for a graph's Shannon capacity, has strong ties to various global graph characteristics. The authors suggest that the handle vector, used for calculating the Lovász number, could be an effective choice for graph representation given its ability to capture global graph properties. However, its direct application poses challenges. To address these, the authors propose using neural networks to offer the Lovász principle. They also present an enhanced Lovász principle capable of directly and efficiently utilizing subgraph Lovász numbers. Experimental results demonstrate competitive performance of these Lovász principles in comparison to the baselines in both unsupervised and semi-supervised graph-level representation learning tasks.
Strengths: 1. The authors' proposal of Lovász theta kernels for graph representation learning shows great promise, thanks to its theoretical foundation.
2. The paper includes thorough experiments conducted in various settings, including unsupervised, semi-supervised, and transfer learning. These experiments provide strong evidence of the effectiveness of the proposed methods.
Weaknesses: 1. Although the author proposed a new graph kernel, the contribution seems slightly small to me.
2. The author did not present the framework of the model.
3. Some classic baselines were not compared.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I strongly suggest that the author discuss the application of graph kernel in updated scenarios, such as graph prompt learning. It is highly recommended that the authors consider citing the prompt learning on graph[1,2] paper and explore its application in conjunction with the proposed method in this paper, especially for unsupervised settings. This would enhance the discussion and potential application of prompt-based models.
[1] Graphprompt: Unifying pre-training and downstream tasks for graph neural networks. Z Liu, X Yu, Y Fang, X Zhang - Proceedings of the ACM Web Conference 2023, 2023
[2] Sun, M., Zhou, K., He, X., Wang, Y. and Wang, X., 2022, August. Gppt: Graph pre-training and prompt tuning to generalize graph neural networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
2. Providing a framework of the proposed model can indeed enhance the clarity of its functionality.
3. Why are semi-supervised learning baselines such as GCN, GAT, and GIN not compared in the paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1: Applications for graph prompt learning**
**Response:** We appreciate your suggestion and will cite papers [1,2] in the revised paper. Graph prompt learning involves transferring learning in graphs. Challenges arise when refining a pre-trained Graph Neural Network (GNN) for specific tasks using limited labeled data. Fortunately, when the GNN is initially pre-trained based on the Lovász principle, it acquires representations geared towards solving the Lovász number problem, rather than being tailored to a specific task. This pre-trained GNN, rooted in the Lovász principle, can seamlessly adapt to downstream tasks' unlabeled data to learn their orthogonal representations. Moreover, we can determine the true Lovász number of the downstream task's data and employ the pre-trained GNN to predict the Lovász number for the same data. The regression loss between the true Lovász number and the predicted Lovász number serves as a prompt to fine-tune the pre-trained GNN. The Lovász principle, being a graph-based learning principle, is exceptionally well-suited for graph prompt learning.
**Question 2: Lack of model framework**
**Response:** In this rebuttal, we provide a model framework in the attached PDF file.
The Lovász principle in Eqn. (8) is actually the loss function in Eqn. (9). This loss function can be used in all graph learning models satisfying Eqn. (7), including examples like InfoGraph [3] and GraphCL [4]. To illustrate, consider InfoGraph [3]: by substituting the original InfoMax loss with the loss function from Eqn. (9), we establish an InfoGraph framework guided by the Lovász principle. Importantly, the Lovász principle's applicability isn't confined to any specific GNN model or graph data; rather, it offers a versatile approach across various contexts. Its generality parallels that of the InfoMax principle.
[3] Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization.
[4] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations.
**Question 3: Comparison with some basic GNN models**
**Response:** When it comes to learning graph-level representations, our primary comparison baselines (InfoGraph[3], GraphCL[4], AD-GCL[5], JOAO[6], AutoGCL[7]) stand out as the most current and influential methods spanning from 2019 to 2022, each boasting high citations on Google Scholar. Notably, these approaches are considerably more sophisticated and effective compared to fundamental GNN models like GCN, GAT, and GIN. Furthermore, these updated methods incorporate basic GNNs as foundational elements, with GIN serving as a fundamental component in all five techniques. Particularly, InfoGraph employs five GINs to acquire $H$ and $z$. It's worth mentioning that graph-level representation learning achieved by merely adding a READOUT function [8] to basic GNNs doesn't lend itself well to direct comparison.
[5] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning.
[6] Yuning You, Tianlong Chen, Yang Shen, and Zhangyang Wang. Graph contrastive learning automated.
[7] Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, and Xiang Zhang. Autogcl: Automated graph contrastive learning via learnable view generators.
[8] Tian Xie and Jeffrey C Grossman. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties.
**We sincerely thank the reviewer for recognizing the contribution of our work.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive response. It has thoroughly addressed my queries. Consequently, I've adjusted my score to 7.
---
Reply to Comment 1.1.1:
Title: Authors' feedback
Comment: Dear Reviewer 4DKt,
We thank you so much for your constructive comments and for adjusting your score.
Sincerely,
Authors | Summary: The authors present a new method for graph representation in supervised, semi-supervised and transfer learning based on the Lovasz theta function [Lov79]. They also incorporate local information using Lovasz subgraph numbers inspired by the work on the Lovasz theta kernel [Joh+14]. They also present an empirical evaluation and show that their method is competitive with the state of the art.
Strengths: - The paper is easy to follow for the most part.
- The empirical evaluation shows the method performs well compared to existing methods.
Weaknesses: Some rewriting needed on Section 4 and minor typos. Also ablation studies and final parameter selection still a bit unclear.
**SLN**
- Lines 209-211. Please define "subgraph Lovasz number" explicitly.
- Lines 214-218. The description of Lovasz $\vartheta$ kernel [Joh+14] is misleading.
- Lines 214-216. Possibly I misunderstood this, but subgraph lovasz value as defined in [Joh+14, Definition 2] can be well-approximated using $O(n \log n)$ samples [Joh+14, Thm. 1].
- Lines 217-218. In so far as the kernel being a pair-wise method and not capturing the "global structure" of $\cal G$, here is the trivial equivalent using the kernel: $\sum_{i=1}^{|\mathcal{G}|} \sum_{j=1}^{|\mathcal{G}|} \hat{k}_{\tt Lo}(G_i, G_j)$.
- Lines 226-229. How is $K^{(t)}_{ij}$ computed, more precisely the number of sampled subgraphs and how those are selected?
**Ablation and default parameters**
- The ablation study has explored parameter choices for $\mu$, $\eta$, etc. over log-ranges for the different graph datasets. However, what is not so clear is the influence of graph dataset properties (e.g., size of the graphs, density, node properties, etc.)
- It would be great if the authors could present, preferably in the main text or the appendix, a set of sane starting points. Of course, HPO should be done for each use case, but in practice practitioners often go with defaults initially.
**Minor typos**
- Appendix. Section 4. lines 80-91 "orthnormal" -> "orthonormal"
- Appendix. Section 5. line 95. "guarantee" -> "guaranteed"
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- It would be interesting to see in eqn (9) how the different loss terms during training correspond to the theta-value ($l1$) and the orthonormal representations ($l2$ and $l3$), and similarly for SLN, when thinking of the final READOUT as a non-linear analogue of the Lovasz representation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We sincerely thank the reviewer for recognizing our contribution. Our responses are as follows.**
**On SLN**
* The definition of "subgraph Lovasz number" is in [Joh+14, Definition 2]. Given a graph $G = (V, E)$, let $G[S]$ be the subgraph of $G$ induced by a vertex subset $S \subseteq V$. The "subgraph Lovász number" is defined as
$ \vartheta_S (G) = \min_c \max_{u_i \in U_{G|S}} \frac{1}{(c^\top u_i)^2}, $
where $U_{G|S} := \{u_i \in U_G | i \in S\}$ and $U_G$ is the orthonormal representations associated with $\vartheta(G)$. Note that in general $\vartheta_S (G) \neq \vartheta(G[S])$. We added this definition to the revised paper.
* You're right. In the case of larger graphs, [Joh+14] doesn't calculate the Lovasz kernel over all subgraphs; instead, they employ a sampling strategy to decrease the run-time. Similarly, in our experiments involving large graphs, we also utilize sampling techniques to enhance run-time efficiency. The equivalent fully summed kernel you mentioned is indeed effective in capturing the graph information across the entire graph dataset $\mathcal{G}$. We will modify the formulation accordingly.
* We follow the implementation code of the Lovasz kernel [Joh+14] in the Grakel framework [1]. When dealing with large-size graphs, Grakel [1] selects subgraphs of sizes 3, 4, and 5, while excluding others. This is their code implementation for achieving a well-approximated Lovasz kernel using samples as in [Joh+14, Theorem 1].
[1] Grakel \url{https://ysig.github.io/GraKeL/0.1a8/kernels/lovasz_theta.html}
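The sampling strategy can be sketched as follows (our simplified illustration, not Grakel's actual implementation): draw a fixed budget of random vertex subsets of sizes 3 to 5 instead of enumerating all $2^{|V|}$ subsets of a large graph.

```python
import random

def sample_vertex_subsets(n_vertices, sizes=(3, 4, 5), budget=50, seed=0):
    """Sample `budget` random vertex subsets with sizes drawn from `sizes`,
    instead of enumerating all 2^n subsets of a large graph (sketch)."""
    rng = random.Random(seed)
    vertices = list(range(n_vertices))
    subsets = []
    for _ in range(budget):
        k = rng.choice(sizes)
        subsets.append(tuple(sorted(rng.sample(vertices, k))))
    return subsets

# Hypothetical usage: 50 small induced subgraphs of a 100-vertex graph.
subsets = sample_vertex_subsets(n_vertices=100, budget=50)
```

The subgraph Lovász values would then be evaluated only on these sampled subsets, which is what keeps the kernel tractable on large graphs.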
**On ablation study and default parameters**
* Thanks for pointing it out. We will add a comprehensive description of the data statistics, parameter settings, and starting points for each dataset in the supplementary material. We will also release our codes publicly.
**Question: Analysis of Losses and the non-linear READOUT analogues**
* The role of each loss has been analyzed in the supplementary material. In Appendix 3 (Parameter sensitivity analysis), we choose the weight of each loss from $10^{-3}$ to $10^6$. The performance of very small hyperparameters can be regarded as the ablation for each loss. The Ablation study in Appendix 4 also indicates that orthonormal regularization is of great importance in our Lovasz principle.
By the way, instead of using a fixed $\mu$, we can increase the value of $\mu$ gradually, which corresponds to the exact penalty method and is able to well approximate the constrained optimization.
* The role of the Lovász principle is very similar to that of the READOUT function or pooling layers in graph-level representation learning. Typical READOUT [2] methods tend to be straightforward, like average pooling or max pooling. The Lovász principle, on the other hand, is firmly rooted in graph theory, stemming from its connection to the Lovász number.
[2] William L Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning.
**We're grateful for your suggestion. We plan to enhance the clarity of the definition and presentation of SLN in the main text, and we'll also include additional experimental details about the data and default parameters in the supplementary materials.**
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns which have been satisfactorily addressed.
---
Reply to Comment 1.1.1:
Title: Authors' comment
Comment: We highly appreciate your feedback and your recognition of our work. | Rebuttal 1:
Rebuttal: * We thank the area chair and all reviewers for processing our submission. We have provided detailed responses to every reviewer. In the revision, we will fix minor issues pointed out by the reviewers and improve the presentation of this work. We will also add a comprehensive description of the data statistics, parameter settings, and starting points for each dataset in the supplementary material. We will make all of our codes publicly available.
* The attached PDF contains three figures:
1) the intuitive example of orthonormal representation and handle vector;
2) the flowchart (framework) of our method;
3) the iteration performance of the loss function and its terms.
* Here we briefly summarize a few key points of our rebuttal.
1) For Reviewer JpeL, as well as Reviewer JWhm and Reviewer 2QoV, we provide an experimental comparison between regularized optimization and constrained optimization. The results showed that the solutions are close.
2) For Reviewer JWhm and Reviewer 2QoV, we showed the learning ability of $\mathcal{F}_W$ for solving the Lovasz problem. We defined $r=\frac{|\hat{\vartheta}(G)-\vartheta^\ast(G)|}{\vartheta^\ast(G)}$ to measure the quality of $\mathcal{F}_W$. The results showed the approximation error is less than 0.1 in almost all cases. This means $\mathcal{F}_W$ can indeed learn an end-to-end solver for the Lovasz problem.
3) For Reviewer 2QoV, we defined a metric $\ell_c(\mathcal{S}_1,\mathcal{S}_2)$ to quantify the similarity or dissimilarity between graphs in the same class or in different classes. The results showed that the handle vectors are effective in providing discriminative graph-level representations.
Pdf: /pdf/0d0524473025fe199dcec1b89b7aa030d8d88a74.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This submission presents a new approach to graph-level representation learning inspired by the Lovász number in graph theory. Specifically, the study proposes using the Lovász principle as a novel framework for unsupervised/semi-supervised graph representation learning. It offers a method to utilize handle vectors, which capture a graph's global features through neural networks. An enhanced Lovász principle is also proposed, which efficiently uses subgraph Lovász numbers, thereby ensuring similar graphs have similar representations. Experimental results show that the Lovász principles outperform other graph representation methods (such as InfoMax) in unsupervised learning, semi-supervised learning, and transfer learning. Therefore, the Lovász principles offer a promising new approach to graph-level representation learning.
Strengths: There are three main strengths:
1. A new approach based on the Lovász principle is proposed for graph-level representation learning. This principle, inspired by the Lovász number in graph theory, adds a new perspective and strategy to graph learning, shifting from traditional methods.
2. Some subgraph tricks are used to enhance the performance of Loasz-based GNN models.
3. The Lovász principles outperform other methods in experiments on unsupervised learning, semi-supervised learning, and transfer learning.
Weaknesses:
1. Compared with current GNN models, such as InfoMax-based and kernel-based methods, there are indeed some gains in terms of ACC on the task of graph classification. However, the overall performance gain in Tables 1, 2, and 3 is not very significant, especially when you consider the variances. This raises the question of whether the Lovász Principle can provide more useful global information than InfoMax. The other concern is that it is unclear how the approximation affects the overall performance, since the final vectors used in the proposed models are approximated.
2. In terms of the novelty of using the Lovász Principle for designing GNN models, one of my main concerns is that this idea is not very novel. There is a lack of discussion on the difference between current work and previous works. See related works in [1]. In terms of run-time comparison, there is no significant reduction compared with InfoMax based on the run-time table in the appendix.
[1] Yadav, P., Nimishakavi, M., Yadati, N., Vashishth, S., Rajkumar, A. and Talukdar, P., 2019, April. Lovasz convolutional networks. In The 22nd international conference on artificial intelligence and statistics (pp. 1978-1987). PMLR.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, this paper has a good idea of using the Lovasz Principle (the handle vector) to present the graph and incorporate this idea to design GNN models. However, there is some related work, and the authors should have discussed it in the submission.
Q1: How could you deal with large-size graphs? For example, the datasets RDT-B and RDT-M5K were used in [2]. My concern is that it is unclear how you can estimate C_{S_i,S_j} accurately when n, the number of nodes in the graph, is large.
Q2. Compared with InfoMax, why the performance of InfoMax-based (in terms of best ones) are very close to the proposed? More discussions are needed.
[2] Sun, Fan-Yun, Jordan Hoffmann, Vikas Verma, and Jian Tang. "Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization." arXiv preprint arXiv:1908.01000 (2019).
Some minors:
1. It is helpful to list out all dataset statistics for people who are unfamiliar with this area.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Concern about the novelty**
**Response:** We have to clarify that our method is **very different** from [1] (Lovasz Convolutional Networks (LCN)). LCN was motivated by the observation that removing certain vertices from a graph doesn't affect the graph's global properties such as the Lovasz number. LCN replaces the affinity matrix $\hat{A}$ of classical GCN by a Lovasz kernel $K = \frac{A}{-\lambda_{\text{min}}(A)} + I$ and hence yields the following model: $f({X}, {K}) = \text{softmax}({K} \text{ReLU}({K}{X}{W}^{(0)}){W}^{(1)})$. In contrast, our Lovasz principle is inspired by the umbrella analogy in Figure 1. We summarize the differences as follows:
* The goal of LCN is node embedding and classification while the goal of our Lovasz principle is graph representation learning.
* More importantly, according to $f({X}, {K})$, we see that LCN does not involve any optimization related to Lovasz number. It just replaced $\hat{A}$ of classical GCN with a predefined $K$. In contrast, our Lovasz principle transcends specific GNN designs and it solves the optimization of Lovasz number and orthogonal representation via deep neural networks:
$\mathcal{L}_ {\mathrm{Lo}}=\sum_{i=1}^{|\mathcal{G}|} \max _{p \in V_i} \frac{1}{\left(\left(\boldsymbol{z}_i^\phi\right)^{\top} \boldsymbol{h}_p^\theta\right)^2}+\mu\left(\left\Vert\boldsymbol{M}_i \odot\left(\boldsymbol{H}_i^\theta\left(\boldsymbol{H}_i^\theta\right)^{\top}-\boldsymbol{I}_n\right)\right\Vert_F^2+\left(\left(\boldsymbol{z}_i^\phi\right)^{\top} \boldsymbol{z}_i^\phi-1\right)^2\right)$
* Our Lovasz principle yields better graph representations than other methods such as the Infomax principle.
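For a single graph, the regularized objective above can be translated into plain Python (our illustrative sketch, not the authors' code; `H` is the list of node vectors, `z` the handle vector, `M` a 0/1 mask selecting the constrained Gram-matrix entries, `mu` the penalty weight):

```python
def lovasz_loss_single_graph(H, z, M, mu):
    """Regularized Lovász loss for one graph (illustrative sketch):

        max_p 1/(z^T h_p)^2 + mu * ( ||M ⊙ (H H^T - I)||_F^2 + (z^T z - 1)^2 )

    H: list of n node vectors (each of length d), z: handle vector (length d),
    M: n x n 0/1 mask, mu: penalty weight.
    """
    n = len(H)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Lovász-number term: worst-aligned node with respect to the handle z.
    lovasz_term = max(1.0 / dot(z, h) ** 2 for h in H)
    # Orthonormality penalty on the masked Gram matrix H H^T - I.
    gram_penalty = sum(
        (M[i][j] * (dot(H[i], H[j]) - (1.0 if i == j else 0.0))) ** 2
        for i in range(n) for j in range(n)
    )
    # Unit-norm penalty on the handle vector.
    unit_penalty = (dot(z, z) - 1.0) ** 2
    return lovasz_term + mu * (gram_penalty + unit_penalty)
```

With an orthonormal `H` and a unit handle, both penalties vanish and only the Lovász-number term remains, which is exactly what the principle optimizes.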
We sincerely thank the reviewer for pointing out the reference and we will include it in our revised paper and provide a discussion.
**Concern about the regularization approximation instead of exact constraints**
**Response:** We use regularization instead of the constraint because its optimization is much easier and its performance is very close to that of the constrained optimization. For the constrained optimization ("strict Lovász principle"), we consider the following projection-based method:
Step 1: $\hat{H}^{t} = F({A}, {X}; \theta^t)$ and ${\hat{z}}^{t} = f({A}, {X}; \phi^t)$
Step 2: ${H}^{t} = \text{Proj}_U ({\hat{H}}^{t})$ and ${z}^{t} = \frac{{\hat{z}}^{t}}{\|{\hat{z}}^{t}\|}$
Step 3: obtain $\theta^{t+1}, \phi^{t+1}$ by SGD updating
The $\text{Proj}_U$ project ${\hat{H}}^{t}$ to the orthonormal representation space, which is similar to the Gram–Schmidt process. We define:
$\text{proj}_w (h) := \frac{\langle h, w\rangle}{\langle w, w\rangle} w$.
Let $W_k$ be the set $W_k := \{{w}_1, {w}_2, ..., {w}_k\}.$ For each vertex $i \in V$, we denote by $\Omega_i$ the set of vectors ${w}_j$ where $j$ ranges over the vertices not adjacent to vertex $i$. Then ${H}^{t} = \text{Proj}_U ({\hat{H}}^{t})$ is computed as follows:
--------------------------------
$ {w}_1 = \hat{h}_1^t, \quad {e}_1 = \frac{{w}_1}{\|{w}_1\|}$
...
$w_k = \hat{h}_k^t - \sum_{w \in W_{k-1} \cap \Omega_k} \text{proj}_w (\hat{h}_k^t), \quad e_k = \frac{w_k}{\|w_k\|},$
...
Output ${H}^{t+1} = [{e}_1, {e}_2, ..., {e}_n]^\top$
--------------------------------
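A runnable sketch of this masked Gram–Schmidt projection (our own illustration; the function name and the edge-list representation are assumptions). Each node vector is orthogonalized only against the already-processed vectors of non-adjacent vertices, then normalized:

```python
import math

def project_orthonormal(H_hat, edges):
    """Masked Gram-Schmidt (sketch): orthogonalize each node vector only
    against previously processed vectors of NON-adjacent vertices."""
    n = len(H_hat)
    adjacent = {(i, j) for i, j in edges} | {(j, i) for i, j in edges}
    W = []
    for k in range(n):
        w = list(H_hat[k])
        for j in range(k):
            if (j, k) in adjacent:
                continue  # orthogonality is only required for non-adjacent pairs
            wj = W[j]
            coef = sum(a * b for a, b in zip(w, wj)) / sum(a * a for a in wj)
            w = [a - coef * b for a, b in zip(w, wj)]
        norm = math.sqrt(sum(a * a for a in w))
        W.append([a / norm for a in w])
    return W

# Path graph 0-1-2: only the pair (0, 2) is non-adjacent, so only e_0 and e_2
# are forced to be orthogonal.
E = project_orthonormal([[1, 0, 0], [1, 1, 0], [1, 0, 1]],
                        edges=[(0, 1), (1, 2)])
```

In this toy run, each output vector is unit-norm and the non-adjacent pair ends up orthogonal, which is exactly the orthonormal-representation constraint of the strict Lovász principle.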
The comparisons between the regularized ($\mu = 1$) optimization and the constrained optimization for two methods on four datasets are as follows.
\begin{matrix}
\hline
&method & MUTAG & PROTEINS & DD & NCI1 \\\\ \hline
regularized\ opt. & InfoGraph & 89.67\pm 1.54 & 75.26 \pm 1.43 &74.13\pm 1.49 &78.21\pm 1.35 \\\\
regularized\ opt. &GraphCL &87.24\pm1.96 & 75.87\pm 2.17 & 79.14 \pm 1.67 &79.13\pm 1.27 \\\\
constrained\ opt. &InfoGraph &86.12\pm 2.32 &75.49\pm 1.52 &76.42\pm 1.56 & 77.80\pm 1.24 \\\\
constrained\ opt. &GraphCL &87.52 \pm 2.75 & 76.11\pm 1.36 & 78.54 \pm 2.21 &77.63\pm 1.58 \\\\ \hline
\end{matrix}
We see that the approximation has little effect on performance. We also tested the **exact penalty** method, i.e., increasing $\mu$ gradually, and the performance is similar to those in the above table.
**Question 1: Dealing with large-size graphs**
**Response:** This is a good question. Our first formulation $\mathcal{L}_ {Lo}$ is more efficient than the Infomax principle and can handle large graphs easily. In our second formulation $\mathcal{L}_ {SLN}$, we use the truncated Lovasz kernel [8] for large-size graphs. Instead of calculating the Lovasz number for all $2^{|V|}$ vertex subsets $V$, we evaluate it on a more manageable set of subgraphs sampled from the $2^{|V|}$ possibilities. This sampling strategy significantly mitigates the computational complexity on large graphs and works very well in practice [8]. We will include the experiments on RDT-B and RDT-M5K [2] in the revised paper.
[8] Johansson F et al. Global graph kernels using geometric embeddings[C] ICML 2014.
**Question 2: Concerns about the significance of performance improvement over InfoMax principle**
**Response:** Actually, the improvement of our methods, especially the second method $\mathcal{L}_{ELo}$, over the InfoMax principle is large in most cases. For instance, in Table 1, in terms of InfoGraph on NCI1, the performance of the InfoMax principle is 76.20±1.06, while the performances of our two methods are 78.21±1.35 and 79.36±1.57 respectively. Considering that we have eight datasets, we apply the paired t-test on the mean scores over the datasets to show the significance of our methods over each of the baselines. The p-values are as follows. A p-value less than 0.05 indicates a significant difference. We see that 9 out of 10 cases are significant. This demonstrates the significance of the gains given by our methods.
\begin{matrix}
\hline
& InfoGraph &GraphCL &AD-GCL &JOAOv2 &AutoGCL \\\\ \hline
InfoMax\ vs\ Lovasz& 0.00067 & 0.00286 & 0.02238 & 0.07346 & 0.00059 \\\\
InfoMax\ vs\ Enhanced\ Lovasz & 0.00005 & 0.01625 & 0.01540 & 0.01319 & 0.00035 \\\\ \hline
\end{matrix}
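The paired t-test statistic over per-dataset mean scores can be computed as below (our sketch with hypothetical accuracy values; turning the statistic into a p-value additionally requires the t-distribution CDF, e.g. from `scipy.stats`):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """t = mean(d) / (stdev(d) / sqrt(n)) for paired differences d."""
    d = [a - b for a, b in zip(scores_a, scores_b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical per-dataset mean accuracies for two methods (8 datasets).
t = paired_t_statistic([79, 76, 74, 78, 88, 75, 80, 72],
                       [76, 75, 73, 76, 86, 74, 78, 71])
# With 7 degrees of freedom, |t| > 2.365 roughly corresponds to p < 0.05
# (two-sided), so this hypothetical t would be significant.
```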
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. The additional experiments provided are valuable. I've reviewed the comments from the other reviewers, and it seems they didn't mention this significant related work. The distinction you made between the proposed method and existing work is also insightful. I have no further concerns at this time and will reserve any final remarks (changes) for the discussion period.
---
Reply to Comment 1.1.1:
Title: Authors' comments
Comment: Thank you for your response to our rebuttal. Please feel free to let us know if you have any questions. | null | null | null | null | null | null |
The Adversarial Consistency of Surrogate Risks for Binary Classification | Accept (poster) | Summary: This paper proves the necessary and sufficient condition for a loss function to be adversarially consistent.
In the previous literature, either adversarial consistency for restricted hypothesis spaces or negative results for adversarial consistency has been known.
This paper follows this research line to provide a general condition to characterize loss functions.
The condition only requires $C_\phi^*(1/2) < \phi(0)$, which is quite simple to check.
This even holds for nonconvex loss functions.
The proof technique relies on the strong duality and complementary slackness results between adversarial surrogate loss minimization and the optimal coupling between benign and adversarial distributions.
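The condition $C_\phi^*(1/2) < \phi(0)$, where $C_\phi^*(1/2)=\inf_\alpha \tfrac12(\phi(\alpha)+\phi(-\alpha))$, can be checked numerically. Below is our own illustrative sketch (grid approximation of the infimum) showing that the hinge loss fails the strict inequality while the $\rho$-margin loss satisfies it:

```python
def C_half(phi, grid):
    """Grid approximation of C_phi^*(1/2) = inf_a (phi(a) + phi(-a)) / 2."""
    return min((phi(a) + phi(-a)) / 2 for a in grid)

grid = [i / 100 for i in range(-500, 501)]  # alpha in [-5, 5]

hinge = lambda a: max(0.0, 1.0 - a)
rho_margin = lambda a, rho=1.0: min(1.0, max(0.0, 1.0 - a / rho))

# Hinge: C*(1/2) = 1 = phi(0), so the strict inequality fails
# (hinge is not adversarially consistent under this characterization).
# rho-margin: C*(1/2) = 1/2 < 1 = phi(0), so the condition holds.
```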
Strengths: - Very general condition to characterize adversarially consistent losses: Unlike previous results of adversarial $\mathcal{H}$-consistency (Awasthi et al. (2021)) and negative results for adversarial consistency (Meunier et al. (2022)), this work contributes to show the necessary and sufficient condition for a loss function to be adversarially consistent. This is a new insight for the community and may help design loss functions in adversarial training.
- The condition applies even to nonconvex losses: Traditionally, the theory of calibrated losses mainly concerns convex losses, as in Bartlett et al. (2006), because their proof technique essentially relies on the first-order optimality condition when characterizing loss minimizers. This is a transparent proof technique yet excludes nonconvex losses. In contrast, the proof technique of this paper first translates the optimality of the adversarial loss into the optimal coupling (Propositions 4 and 5) and then deals with the standard consistency analysis for the adversarial distribution $\mathbb{P}^*$ (by leveraging Lemma 1).
Weaknesses: Overall, I do not see any concerns about this paper.
There are a few minor comments and questions, which are mentioned in the following "Questions."
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - (Comment) In the introduction, you may consider emphasizing that the main result shown in this paper concerns consistency for all measurable functions, not $\mathcal{H}$-consistency, to clarify how the result differs from the previous works.
- (Question) Regarding Proposition 2: The counterexample $\mathbb{P}_0 = \mathbb{P}_1$ seems very malicious and rarely happens in practice. Are there any other counterexamples for which the corresponding loss function is not adversarially consistent?
- (Comment) In the definition of $W_\infty$, it is better to explain what $(x,y) \sim \gamma$ means.
- (Typo) In l.233 "fo" -> "of"
- (Typo) In l.260 $R$ -> $R^\\epsilon$
- (Comment) In l.275, it seems better to discuss the existence of the coupling $\\gamma\_i$.
- (Comment) In l.307, I don't think Theorem 4 immediately implies adversarial consistency because it is unclear whether the inequality is tight for any distributions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors mention the limitations in the conclusion: the extension of the convergence rate to general loss functions is left open.
This is theoretical work, and potential negative societal concerns are not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Thank you for catching those typos. We have corrected them in our paper.
- Regarding Proposition 2: Yes, you are correct that the counterexample $\mathbb P_0=\mathbb P_1$ is particularly malicious. We are currently writing a follow-up paper that identifies all the counterexamples to consistency for $\phi$ satisfying $C_\phi^*(1/2)=\phi(0)$.
- About Theorem 4 and consistency: Let $f_n$ be a minimizing sequence of $R_{\phi_\rho}^\epsilon$: then $\lim_{n\to \infty} R_{\phi_\rho}^\epsilon(f_n)=R^\epsilon_{\phi_\rho,*}$.
The inequality in Theorem 4 immediately implies that $\lim_{n\to \infty} R^\epsilon(f_n)=R^\epsilon_*$, so $f_n$ is a minimizing sequence for the adversarial classification risk. Thus the $\rho$-margin loss is consistent.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the clarification. I misunderstood the inequality in Theorem 4 when reading. I look forward to seeing the extension of Proposition 2. | Summary: The paper provides a necessary and sufficient condition for a surrogate loss to be adversarially consistent, and presents the $\rho$-margin loss, which satisfies the proposed condition and can therefore replace the 0-1 loss.
Strengths: Understanding the consistency of losses in the adversarial setting is an active topic that has not been fully addressed yet. The paper studies the consistency of surrogate losses for robust binary classification and presents examples of loss functions that satisfy the calibration condition, which is a good contribution to the community. Past work shows that no convex loss (which people often use in practice) is adversarially consistent. This paper further provides a necessary and sufficient condition for what kind of surrogate loss is adversarially consistent. The presentation is clear and easy to follow.
Weaknesses: It should not be surprising that the $\rho$-margin loss can be a good surrogate loss for adversarial training, since when $\rho$ is extremely small the loss approximates the 0-1 loss. Yet such a loss is non-differentiable and therefore hard to use in practice.
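To make the approximation claim concrete, here is a tiny sketch using one common parameterization of the $\rho$-margin (ramp) loss as a function of the margin $\alpha$ (an assumed form; the paper's exact definition may differ):

```python
def rho_margin(alpha, rho):
    """Ramp loss: 1 for margins alpha <= 0, 0 for alpha >= rho,
    linear in between (assumed parameterization)."""
    return min(1.0, max(0.0, 1.0 - alpha / rho))

def zero_one(alpha):
    """0-1 loss as a function of the margin."""
    return 1.0 if alpha <= 0 else 0.0
```

As $\rho$ shrinks, the ramp matches the 0-1 loss at every fixed nonzero margin, yet it keeps kinks at $\alpha = 0$ and $\alpha = \rho$, which is exactly the non-differentiability the review points out.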
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In the paper, the author also presents another shifted sigmoid loss that satisfies the consistency property. I wonder if the author can shed any light on the performance of such a loss function used in adversarial training on a simple dataset like MNIST.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Somewhat surprisingly, there are surrogate losses very close to the $0$-$1$ loss that are not adversarially consistent. Let $\psi(\alpha)=1/(1+\exp(\alpha))$ be the sigmoid loss and define $\phi_K(\alpha)=\psi(K\alpha)$. Notice that for large $K$, $\phi_K$ is also a close approximation to the 0-1 loss. However, our results imply that $\phi_K$ is not adversarially consistent.
This example illustrates that the consistency of the $\rho$ margin loss does not follow merely from the fact that it approximates the $0$-$1$ loss well.
- Yes, running an experiment on MNIST comparing different losses would definitely be interesting. To limit the scope of this paper, we focus on theoretical aspects of adversarial learning and leave experiments for future work.
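The scaled-sigmoid example in the first bullet can be checked numerically (a small sketch with an arbitrary $K$; not the paper's code):

```python
import math

def psi(a):                  # sigmoid loss from the rebuttal
    return 1.0 / (1.0 + math.exp(a))

def phi_K(a, K=50.0):        # scaled sigmoid: near the 0-1 loss for large K
    return psi(K * a)

def cond_risk_half(a, K=50.0):
    # conditional risk at eta = 1/2: C(1/2, a) = (phi_K(a) + phi_K(-a)) / 2
    return (phi_K(a, K) + phi_K(-a, K)) / 2.0
```

Here $\phi_K$ is essentially 0-1 away from the origin, yet $C(1/2,\cdot)$ is identically $1/2 = \phi_K(0)$, so the strict condition $C_\phi^*(1/2) < \phi(0)$ fails, matching the claim that approximating the 0-1 loss well is not enough for adversarial consistency.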
---
Rebuttal Comment 1.1:
Title: Replying to Rebuttal by Authors
Comment: Thank you for answering my questions. It's indeed surprising and a bit confusing, as the author mentioned in the paper that the shifted sigmoid loss is adversarially consistent, but from the rebuttal, the scaled sigmoid loss without shifting isn't. So it seems that for the sigmoid loss the shift is important. I'm curious whether the author has any intuitive explanation or whether it is purely based on the technical details of the proof.
---
Reply to Comment 1.1.1:
Comment: The fundamental reason losses satisfying $C_\phi^*(1/2)<\phi(0)$ are adversarially consistent is that minimizers of $C_\phi(\eta,\cdot)$ are bounded away from zero. Shifting the sigmoid function changes its behavior near zero, while scaling it does not. This difference accounts for their different properties.
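This shift-versus-scale difference can be checked numerically against the condition $C_\phi^*(1/2) < \phi(0)$ (a sketch over an arbitrary grid; $\psi$ is the sigmoid loss defined earlier in the thread):

```python
import math

def psi(a):
    return 1.0 / (1.0 + math.exp(a))

def inf_cond_risk_half(phi, grid):
    """Approximate inf_a C_phi(1/2, a) = inf_a (phi(a) + phi(-a)) / 2."""
    return min((phi(a) + phi(-a)) / 2.0 for a in grid)

grid = [i / 10.0 for i in range(-300, 301)]    # a in [-30, 30]

tau = 1.0
shifted = lambda a: psi(a - tau)    # shifting changes behavior near zero
scaled = lambda a: psi(10.0 * a)    # scaling does not
```

Numerically, the shifted loss has $\inf_a C(1/2, a) \approx 1/2$ strictly below $\phi(0) = \psi(-\tau) > 1/2$, so the condition holds, while for the scaled loss $C(1/2,\cdot)$ is identically $1/2 = \phi(0)$ and the condition fails.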
Notice that when 0 does not minimize $C_\phi(1/2, \cdot)$, the counterexample of Proposition 2 breaks down, and so the condition $C_\phi^*(1/2)<\phi(0)$ rules out this counterexample. | Summary: The paper analyzes the consistency of surrogate losses in the case where there is an adversary that perturbs the sample data. Unlike in empirical risk minimization, where many convex surrogate losses have been proven to be consistent, the authors show that no convex surrogate losses are adversarially consistent. The authors provide a theoretical analysis to back up the claim. In addition, the authors design a non-convex surrogate loss that is adversarially consistent.
Strengths: - The paper presents an important analysis of the consistency of surrogate losses under adversarial examples setting.
- The authors provide a thorough theoretical analysis of the property of surrogate losses under adversarial examples setting.
- The authors propose a surrogate loss that is adversarially consistent.
Weaknesses: - The presentation of the paper is a bit hard to parse. For example, some of the notation is used before it is defined, as in the last few paragraphs of the introduction.
- To improve the clarity of the presentation, I suggest the authors incorporate some figures to illustrate the difference in the consistency property between the adversarial and regular cases.
- It would also be good to show the benefit of the adversarially consistent surrogate loss empirically. This could be experiments on real-world data or even some synthetic data to demonstrate the effect of having an adversarially consistent surrogate loss in practice.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please answer my concerns in the previous section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: No concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - We apologize that we forgot to define $\tau$ towards the end of the introduction. In the shifted sigmoid loss, the constant $\tau$ is any positive number.
The remaining notation in this paragraph seems to be properly defined, but in our revision we will try to clarify this part of the paper further.
- Thank you for the suggestion to include figures. Space permitting, we will include several simple examples in the revised version to illustrate the main concepts.
- Yes, running an experiment on MNIST comparing different losses would definitely be interesting. To limit the scope of this paper, we focus on theoretical aspects of adversarial learning and leave experiments for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal.
I'd still strongly suggest including a simple experiment in the paper to show the benefit of a consistent surrogate loss empirically, as a good theory should also be practical. | Summary: This paper tackles the problem of consistency in adversarial classification. Consistent losses are losses whose minimization leads to the minimization of the 0/1 loss. Although consistent losses have been known for a long time in standard classification, they were not known in the adversarial setting. This paper shows a simple necessary and sufficient condition for a loss to be adversarially consistent.
Strengths: The existence of consistent losses in adversarial classification is a very important problem, and this paper solves it in some way.
Up to section 3, the paper is very well written and easy to understand.
Also, section 5 provides very useful bounds on margin losses, which are the counterpart of what was published in the non-adversarial setting by Bartlett, Zhang, and others.
Weaknesses: Section 4 is much harder to understand. For example, to understand the propositions in that section, reading [Frank and Niles-Weed] helps a lot. In my opinion, the authors should work on improving the clarity of this section.
details:
- Proposition 2, Line 139: a unit ball has radius 1, but here the radius is $R = \epsilon/2$.
- Proposition 3, Line 211: recall that the inequality holds thanks to Assumption 1, to make things clearer.
- Why put the proof of Proposition 2 in the paper? The proof is exactly the same as Meunier's proof of the same result. Removing it would free up space to improve the clarity of Section 4.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. It seems like Theorem 1 is already known. In particular, points (1) and (2) boil down to the equivalence between calibration and consistency in the non-adversarial setting, isn't that right?
2. In Proposition 4: aren't $\mathbb{P}^*_0$ and $\mathbb{P}^*_1$ maximizers of $\bar{R}_\phi$ instead of $\bar{R}$? I don't see the logic here.
Make explicit that there exists a pair $\mathbb{P}^*_0$ and $\mathbb{P}^*_1$ maximizing both $\bar{R}$ and $\bar{R}_\phi$. I also don't see how this is related to the reference [Frank and Niles-Weed]. Please make this clearer.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses:
- We will look for ways to clarify Section 4 by adding additional context for the main Propositions. Regarding Proposition 2, the statement is of course crucial for our main theorem, and this example is both simple and illuminating.
- Your comments on lines 139, 211: we have incorporated your feedback.
Questions:
1. Yes, you are right. We included a proof of this result only because we couldn't find it stated in this precise form in prior work. We will clarify that Proposition 1 is already known and add a citation to results on calibration.
2. Yes, that was a typo, thank you for finding this mistake! To clarify, $\mathbb P_0^*$ and $\mathbb P_1^*$ are maximizers of $\bar R_\phi$, and we do not need to assume that they are maximizers of $\bar R$.
Relation to the reference [Frank and Niles-Weed]:
Lemma 16 of [Frank and Niles-Weed] proves an approximate complementary slackness condition for a convex relaxation of $R_\phi^\epsilon$ which they call $\Theta$. Later, in Lemma 26, they prove that minimizing the convex relaxation $\Theta$ is equivalent to minimizing $R_\phi^\epsilon$. Combining the approximate complementary slackness result (Lemma 16 of [Frank and Niles-Weed]) with the equivalence of minimizing $\Theta$ and $R_\phi^\epsilon$ (Lemma 26 of [Frank and Niles-Weed]) yields Proposition 4 of our paper.
We will clarify this in our revised version.
To avoid confusion, we will include a self-contained proof of this result in our appendix.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. Yes, a self-contained proof would *definitely* improve the readability of the paper. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Two-Stage Predict+Optimize for MILPs with Unknown Parameters in Constraints | Accept (poster) | Summary: Predict+Optimize is an emerging paradigm that lies in the intersection of classical optimization (particularly mixed integer programming) and machine learning. Specifically, it considers the setting where a parameterized optimization problem:
$$ x^{\star}(\theta) = \operatorname*{argmin}_{x} f(x;\theta) \text{ subject to } C(x;\theta) $$
must be solved yet the parameters $\theta$ are unknown. Here $C(x;\theta)$ represents constraints that must be respected, which could be equality, inequality, or inclusion. In the last 4--5 years this topic has seen heightened interest. Most approaches, including the one proposed in this paper, assume access to observed features $A$ that correlate with $\theta$, and aim to learn a predictor $f(A)$ so that $\hat{\theta} := f(A) \approx \theta$. Then, a problem of the form above is solved with $\hat{\theta}$ in place of $\theta$ to yield a predicted minimizer $x^{\star}(\hat{\theta})$. Hopefully, $x^{\star}(\hat{\theta}) \approx x^{\star}(\theta)$.
The paper in question is one of the first to actively consider the setting where the constraints depend on $\theta$; prior work mostly considered constraints of the form $C(x)$. The method proposed is interesting, intuitive and general. They start from the observation that any scheme handling parametrized constraints must be allowed to make post-hoc adjustments to $x^{\star}(\hat{\theta})$ after the true parameters $\theta$ are revealed, as $x^{\star}(\hat{\theta})$ might not be feasible. Thus, they propose to solve a second, penalized problem:
$$ x_2^{\star} = \operatorname{argmin}_x f(x;\theta) + r(x^{\star}(\hat{\theta}), x) \text{ subject to } C(x;\theta) $$
where $r(\bullet, \bullet)$ penalizes the discrepancy between the stage 1 solution $x^{\star}(\hat{\theta})$ and the stage 2 solution $x_2^{\star}$.
The bulk of the paper is devoted to motivating, introducing and implementing this proposed new method. Numerical experiments illustrate that this new method performs well. Finally, there is a little bit of theory (contained in the appendix) relating this new approach to older approaches to predict+optimize with parametrized constraint sets.
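To make the two-stage pipeline concrete, here is a minimal sketch on a toy 0-1 knapsack with an unknown capacity, using brute-force enumeration in place of a MILP solver (all numbers and the per-item flip penalty are hypothetical, not taken from the paper):

```python
# Toy sketch of the two-stage idea on a 0-1 knapsack with unknown
# capacity (hypothetical numbers; not the authors' implementation).
from itertools import product

values = [7, 5, 4, 3]
weights = [5, 4, 3, 2]
theta_true, theta_hat = 12, 7    # true vs predicted capacity
lam = 2                          # penalty per item added or dropped later

def solve(capacity, commit=None):
    """Brute-force the small knapsack; if `commit` is given, charge an
    L1 penalty of lam per deviation from the Stage 1 commitment."""
    best, best_x = None, None
    for x in product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) > capacity:
            continue                            # infeasible pack
        obj = sum(v * xi for v, xi in zip(values, x))
        if commit is not None:                  # Stage 2: pay for deviating
            obj -= lam * sum(abs(a - b) for a, b in zip(x, commit))
        if best is None or obj > best:
            best, best_x = obj, x
    return best_x

x1 = solve(theta_hat)               # Stage 1: soft commitment under prediction
x2 = solve(theta_true, commit=x1)   # Stage 2: adjust once truth is revealed
```

Here Stage 1 packs conservatively under the predicted capacity; once the larger true capacity is revealed, Stage 2 adds an item because the gain outweighs the deviation penalty, which is exactly the kind of post-hoc adjustment the framework formalizes.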
Strengths: - I agree with the assertion in the appendix that the proposed approach is _simple and powerful_. I think this is an elegant solution to an important problem.
- The paper is well-written; I did not find any typos or unclear sentences. The running example of stocking a store is useful.
- I appreciated the choice of experiments---they are closer to "real-world" problems than many experiments I have encountered in Neurips papers.
- The Appendices are thoughtfully written, addressing many potential reader's questions and providing several nice extensions of the model at hand.
- The framework proposed is indeed original, but I think some additional comparisons with existing literature is needed, see "weaknesses".
Weaknesses: - With regard to novelty, I think the authors should compare their work to various differentiable quadratic program (QP) solvers. For example, in [1] formulas for the derivatives of $x^{\star}$ with respect to constraint (_e.g._ $\partial x^{\star}/\partial A$, to reference the notation of your equation (5)) are given. Note that the MILP in (4) can be turned into a QP by using a quadratic regularizer (as done in [2]) instead of the logarithmic regularizer of Mandi & Guns.
- I found the use of the name "Two-stage Predict+Optimize" for your proposed framework a bit confusing. In much of the Predict+Optimize literature, the two-stage approach refers to training a predictor $f(A)$ to minimize the mean square error $\|f(A) - \theta\|^2$ (see for example [3]). I strongly suggest you add a remark discussing this towards the beginning of your paper.
- I think the benchmarks considered are a little weak. For example, all 5 classical regression methods are slight variations on the "two-stage approach" mentioned above. You should include some SOTA approaches for (one-stage) predict+optimize, e.g. training with the SPO+ loss, Perturbed Optimization [7], blackbox backpropagation [8], etc., or justify why such approaches are incompatible with the proposed Two-Stage Predict+Optimize framework. The PyEPO benchmarking suite [4] could be useful.
- I'd like to see more discussion on the computational cost of your proposed approach. Given that it is a tri-level (!) problem, I think this is something to address. For the experiments, could you indicate the dimension of $x$ and the number of constraints, as well as the time required to train using the 2S and IntOpt-C approaches? It might be nice to add a remark about possible approaches to scaling your proposed framework, see [5,6].
[1] _Optnet: Differentiable optimization as a layer in neural networks_ Amos & Kolter, 2017.
[2] _Melding the data-decisions pipeline: Decision focused learning for combinatorial optimization_ Wilder _et al_, 2019
[3] _Interior point solving for LP-based prediction+optimization_ Mandi & Guns, 2020
[4] _PyEPO: A PyTorch-based end-to-end predict-then-optimize library for linear and integer programming_ Tang & Khalil 2022
[5] _Faster predict-and-optimize using three-operator splitting_ McKenzie _et al_, 2023
[6] _Backpropagation through combinatorial solvers: Identity with projection works_, Sahoo _et al_, 2022
[7] _Learning with differentiable perturbed optimizers_ Berthet _et al_, 2020
[8] _Differentiation of blackbox combinatorial solvers_ Vlastelica _et al_, 2019
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The two-stage nature of your proposed approach reminds me a lot of Model-agnostic Meta-Learning (MAML) [2]. Moreover, I think there is an interesting analogy between the connection between your work and that of Hu et al [3] and the connection between MAML and later implicit MAML (I-MAML) works [4]. I think the appeal of your work could be broadened by adding a discussion of this connection.
- An interesting variant of predict+optimize is the setting where at train time the true cost vectors ($c$ in the notation of your equation (4)) are never revealed, rather only the true solutions $x^{\star}$ are accessible (see [1] for a discussion on this). Do you have any thoughts on how to extend your approach to this setting?
[1] _Faster predict-and-optimize using three-operator splitting_ McKenzie _et al_, 2023
[2] _Model-agnostic meta-learning for fast adaptation of deep networks_ Finn _et al_, 2017
[3] _Predict+Optimize for packing and covering LPs with unknown parameters in constraints_ Hu _et al_ 2022
[4] _Meta-learning with implicit gradients_ Rajeswaran _et al_, 2019.
-----
After discussion with the authors I've raised my score 6--> 7
-------
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned above, I think the computational complexity of the proposed approach could be a limitation. This should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive review of our paper, concrete suggestions and also your questions. We respond to your points below to address your remaining reservations about our work. Hopefully you are more convinced by our contributions. Please let us know if you have any additional questions.
-----
Response to "weaknesses":
1. We added comparisons to cvxpylayers using quadratic regularization (and no regularization and log-barrier regularization). Please see the overall response and our response to reviewer Xm8w.
2. We will clarify in the paper, thank you for pointing this out.
3. One-stage Predict+Optimize methods referenced in the review, by definition, cannot handle unknown parameters in constraints (since Hu et al. was the first Predict+Optimize framework for unknowns in constraints), and therefore they do not apply and are incompatible with our new Two-Stage Predict+Optimize framework. See also Table 1 in the PyEPO paper for reference. Moreover, we added additional comparisons with cvxpylayers.
4. For the experiments in the paper, see the following table for the number of decision variables (dimension of $x$), constraints, unknown parameters and features. Training times are reported in Appendix E, and as discussed in the overall response, the training time of 2S is rather comparable to that of IntOpt-C. In the rebuttal experiments, our training times are also faster than those of cvxpylayers instantiated with various regularizations.
We agree that it is an interesting future work direction to speed up training and further improve the training-time scalability; the main focus of our work is on getting good learning performance.
| Problem name | Brass alloy production | Titanium-alloy production | 0-1 knapsack | Nurse scheduling problem |
|------------------------------|:----------------------|:-------------------------|:--------------|:--------------------------|
| Dimension of $x$ | 10 | 10 | 10 | 315 |
| Number of constraints | 12 | 14 | 21 | 846 |
| Number of unknown parameters | 20 | 40 | 10 | 21 |
| Number of features (per parameter) |4096 | 4096 | 4096 | 8 |
------
**Q1**: Thank you for pointing out the high-level similarities between our framework and that of MAML. Here is a brief comparison between the two.
*Similarities*: In both MAML and our setting, the prediction isn't evaluated/used directly. Instead, after prediction happens, new information is revealed by the environment that allows us to adapt the model. In our framework, this is the second stage optimization after the true parameters are revealed. In the meta-learning setting, it is the true task being revealed, and the model can be fine-tuned. Training needs to be aware of this adaptation, in order to perform well.
*Differences*: The most important difference of course is that the adaptations are quite different: second stage optimization (our setting) vs fine-tuning (meta-learning). Also, in MAML, there are no features for the future task, and the future task is drawn purely distributionally (and we assume that the training algorithm has access to samples). On the other hand, in Two-Stage Predict+Optimize, we have features for predicting the true parameters. In this sense, MAML is closer to traditional stochastic programming, and our framework in a sense is a *contextual* variant. Though of course, one could plausibly also formulate a contextual version of MAML to bring it closer to our setting.
**Q2**: This is a really interesting question, and we believe it is enough scope for another paper. Here we present our initial thoughts on the problem, with basic theory. It remains to empirically test whether this approach would actually work.
We'll consider the same two-stage setting as in our paper, where we get features in the first stage, need to make a soft commitment using predictions, and then once the true parameters are revealed we solve for a second stage solution.
The difference is that, now, for training, we get feature-solution pairs $(A, x^\ast)$, where $x^\ast$ is the optimal solution under the true parameters $\theta$, instead of feature-(true parameter) $(A,\theta)$ pairs.
Assuming the penalty is non-negative and that the penalty for no solution modification is 0, we train a predictor $\hat{\theta}(A)$ so as to minimize
$Pen(x_1 \to x^\ast,\hat{\theta}(A))$
where $x_1 = \mathrm{argmin}\text{ } obj(x, \hat{\theta}(A))$ s.t. $C(x, \hat{\theta}(A))$ holds.
This is somewhat similar to what McKenzie et al. already does, except that our first stage optimization also has unknowns in constraints, and that we use the problem-specific penalty function as opposed to a generic l2 loss.
Note the following two basic lemmas justifying this training loss, which are straightforward to prove:
1. It is minimized at $\hat{\theta}(A)$ that induces $x_1 = x^\ast$, yielding a training loss of 0. So for example, $\hat{\theta}(A) = \theta$ is a minimizer.
2. Suppose further that the penalty function has no explicit dependence on the unknown parameters, and only depends on the first and second stage solutions. Then, for every $\hat{\theta}(A)$, we have $\text{training loss} + obj(x^\ast,\theta) = Pen(x_1 \to x^\ast) + obj(x^\ast, \theta) \ge Pen(x_1 \to x_2) + obj(x_2, \theta) = \text{test loss}$, where $x_2$ is the second stage solution induced by the predicted parameters $\hat{\theta}(A)$ and the true parameters $\theta$.
Since $obj(x^\ast,\theta)$ does not depend on the predictions $\hat{\theta}(A)$, it does make sense to train to minimize the training loss, in order to minimize the test loss.
We believe that this is a reasonable starting point for this different Predict+Optimize problem, but this is out of the scope of the current paper.
---
Rebuttal Comment 1.1:
Comment:
Thanks to the authors for their thorough rebuttal! After reading the other reviews, I have some additional questions. Then, I will respond to some of the points raised in your response.
--------
1. Reviewer Pw5Y points out that your method can only use linear penalty functions. I looked over the paper and couldn't find any discussion of what a suitable penalty function should be, aside from an oblique reference in lines 223-224. Could you elaborate on which penalty functions are admissible and why?
2. I am still a little confused by the benchmark approaches of Section 5. If I understand correctly, you use ridge regression, k-NN, etc. to learn a prediction function $\hat{\theta} = f(A)$ by solving
$$ \min_{f}\sum_{i=1}^{n}\|f(A^i) - \theta^i\|_2^2 $$
(or similar loss function). Then, at test time you solve Stage 1 using $\hat{\theta}$, then Stage 2 using $\hat{\theta}, \theta$ to obtain $x^{\star}_2$. Then $x^{\star}_2$ is plugged into $\mathrm{PReg}(\theta, \hat{\theta})$ to obtain the values listed in Table 2 and Table 3. Is this correct?
--------
Response to response to "weaknesses"
1. Thanks for implementing additional benchmarks, particularly cvxpylayers. In your implementation, do you use it to solve both Stage 1 and Stage 2?
3. You're right. Most of the standard P&O methods I listed can't handle parameters in the constraints, so you are justified in not including them as benchmarks.
4. Thanks for providing this table! I highly recommend you include it in the final version.
------
Thanks for humoring my two somewhat off-topic questions. Some further thoughts:
**Q1:** In addition, the point I was trying to convey is that the evolution from MAML to iMAML was triggered by realizing that a one-step correction process (in MAML this is a single step of gradient descent) can be reframed as a secondary optimization problem. I think this is analogous to the evolution from the method of Hu et al. to the proposed method.
**Q2:** Again, thanks for exploring this connection! It is interesting, but I agree it is out of the scope of this paper.
------
I am satisfied with the authors' response. Assuming the paper is edited so as to include the additional benchmarks and expand the literature review, I would be happy to see it accepted at NeurIPS. I will adjust my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional comments and questions. We will incorporate the discussion raised in the reviews and our rebuttal, including the additional benchmarks and literature review, into the final version.
-----
**Questions**
1. The design of the penalty function is guided by the real-world application and is thus a modeling issue. The precise requirement in our Section 4 is that the Stage 2 optimization (which includes the penalty function) be expressible as a MILP. This includes linear penalties, and also convex/concave piecewise-linear penalties (for minimization/maximization respectively) through the introduction of auxiliary variables. Linear penalties are widely applicable in real-world scenarios and can cover many realistic problems. Even if the penalty is non-linear in nature, we can often use a linear approximation, which is sufficient for most practical purposes.
2. Yes, your understanding is correct.
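To illustrate the piecewise-linear case mentioned in answer 1, here is a minimal sketch (hypothetical numbers, not from the paper) of the standard epigraph trick: a convex piecewise-linear penalty $p(d) = \max(0, 2d - 1)$ on the deviation $d = x - x_1$ is handled by one auxiliary variable $t$ and one linear constraint per affine piece:

```python
from scipy.optimize import linprog

x1_commit = 1.0     # hypothetical Stage 1 commitment
# Stage 2 sketch: minimize -x + p(x - x1_commit) for x in [0, 4],
# with p(d) = max(0, 2d - 1). Epigraph form: add t >= 0 (as a bound)
# and t >= 2(x - x1_commit) - 1, i.e. 2x - t <= 2 * x1_commit + 1.
c = [-1.0, 1.0]                       # objective: -x + t
A_ub = [[2.0, -1.0]]
b_ub = [2.0 * x1_commit + 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 4), (0, None)])

# brute-force check of the same objective on a fine grid
def direct(x):
    return -x + max(0.0, 2.0 * (x - x1_commit) - 1.0)

grid_best = min(direct(i / 1000.0) for i in range(0, 4001))
```

The LP and the brute-force evaluation agree on the optimum, showing how such a penalty stays within the MILP-expressible class the authors describe.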
-----
**Response to response**
1. Yes, our implementation uses cvxpylayers in both Stages 1 and 2.
-----
**"Off-topic" questions**
1. Thank you for further pointing out the similarities to (i)MAML. We will digest this more fully and discuss in the final version. | Summary: The paper presents a novel '2-stage' framework for Predict+Optimize with uncertain parameters in the constraints. In the first "stage" a soft commitment is made based on the predictions, and in the second, the commitment is updated based on updated information in such a way that the objective value plus a penalty for deviating from the commitment is minimized. This generalizes the framework of Hu et al. [9]
Strengths: 1. Great idea: I find the new framework simple and sensible; I think this is the right framework for Predict+Optimize with unknown constraints. The biggest plus is that it allows for improving soft commitments in which constraints aren't violated, which is not something that Hu et al. allow for but makes a lot of practical sense. It also removes the need to create a differentiable projection.
1. Clarity: The paper is well written and covers most bases (the tables could be bigger, though!)
Weaknesses: 1. Requires linear penalties: The penalties have to be linear in the decision variables. Hu et al. [9] doesn't require that.
1. Is slower than Hu et al.: Because Hu et al. only require running a differentiable projection, which can be quite cheap, it is cheaper than 2S (Appendix E confirms this).
1. Generalization of Hu et al.'s framework: While Hu et al. propose a specific projection that is limited to packing and covering problems, there has been work in the literature on differentiable projections for enforcing feasibility constraints generally [A] and efficiently [B] that could have been used instead. It would have been great to compare against those in the experiments.
1. I understand that the space is limited, but a lot of important information, like the description of the experiments and runtimes are in the Appendix. It would have been useful to have summaries in the main text...
_References:_
[A] Chen, Bingqing, et al. "Enforcing policy feasibility constraints through differentiable projection for energy optimization." Proceedings of the Twelfth ACM International Conference on Future Energy Systems. 2021.
[B] Sanket, Shah, et al. "Solving online threat screening games using constrained action space reinforcement learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 02. 2020.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Table 3, the difference between 2S and the other methods *decreases* as the penalty factor increases. Why is that? I would have expected the opposite (as in Tables 2 and 4).
1. On line 355 the paper says, "On the other hand, the advantage of our 2S method over other approaches actually becomes more significant as the capacity increases, demonstrating the superior accuracy of our approach." However, the difference between 2S and other approaches for the lowest penalty factor actually decreases in absolute value?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are not described, despite having responded 'yes'. The fact that the paper does not acknowledge any of the limitations highlighted in the 'weaknesses' section is worrying.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review of our work.
We also appreciate the points you raised in the weaknesses section.
Please see our response below.
The paper is also further strengthened now by our additional experiments in the overall response.
We hope to further convince you of the merits of our work; please let us know if you have any additional questions.
---
Response to "weaknesses":
1. Linear penalties: thanks for pointing out that Hu et al. can handle non-linear penalties; we will add a comment to clarify this in the paper. We also want to point out that our framework, as stated, can already handle special cases of non-linearity: for example, one can (in many cases) express the absolute-value function in linear programs. In addition, the Section 4/Appendix B gradient calculations can in fact be adapted to handle general differentiable non-linear objectives just like Hu et al., though of course with the caveat that the second-stage optimization problem needs to be efficiently solvable for the framework to be useful. We chose to present only MILPs as the main overarching application of this paper because of their widespread use in discrete optimization and their readily available solvers.
2. Yes, training for 2S is slower than IntOpt-C, but as we pointed out in the overall response, the runtimes are quite comparable.
3. The projection of [B] is identical to the correction function proposed by Hu et al. The projection of [A] on the other hand is $\ell_2$ projection, which we hadn't compared against. For this rebuttal, we ran an experiment analogous to Table 1 in the main paper, for the alloy production problem (a covering LP) and the $\ell_2$ projection. We find that the $\ell_2$ projection performs even worse than the [B]/Hu et al. correction, for the linear penalty function used in the alloy production problem. Please see the overall response for more details.
4. We did struggle with the page limit. If accepted, we will move more information to the main body of the paper, given the extra page available.
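As a minimal sketch of the absolute-value remark in point 1 (a toy instance of our own, not one of the paper's benchmarks): minimizing $|x - 3|$ can be posed as an LP by introducing an auxiliary variable $t$ with $t \ge x - 3$ and $t \ge 3 - x$.

```python
# Minimize |x - 3| over 0 <= x <= 10 as a linear program:
# variables [x, t]; minimize t subject to x - 3 <= t and 3 - x <= t.
from scipy.optimize import linprog

res = linprog(
    c=[0, 1],                     # objective: minimize t
    A_ub=[[1, -1], [-1, -1]],     # x - t <= 3  and  -x - t <= -3
    b_ub=[3, -3],
    bounds=[(0, 10), (0, None)],  # 0 <= x <= 10, t >= 0
)
# optimum: x = 3 with t = |x - 3| = 0
```

The same epigraph trick carries over when $x$ is a vector of MILP decision variables.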
---
**Q1**: In Table 3, we omitted penalty factors $\ge 1$ for space reasons. Additionally, in that problem setting (proxy buyer knapsack), large penalty factors are unrealistic anyway. We now include the corresponding rows here.
| PReg | Penalty factor | 2S | Ridge | k-NN | CART | RF | NN | TOV |
|---------|:--------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| | 1 | 10.90±0.15 | 10.93±0.19 | 11.11±0.17 | 11.16±0.14 | 11.01±0.31 | 11.26±0.23 | |
| cap=100 | 2 | 12.31±0.16 | 12.48±0.20 | 12.49±0.21 | 13.77±0.26 | 12.60±0.39 | 12.78±0.30 | 29.68±0.14 |
| | 4 | 14.54±0.15 | 15.57±0.25 | 15.68±0.39 | 19.01±0.56 | 15.77±0.62 | 15.84±0.50 | |
| | 1 | 10.23±0.12 | 10.46±0.23 | 10.40±0.18 | 10.46±0.19 | 10.49±0.21 | 10.86±0.30 | |
| cap=150 | 2 | 11.18±0.15 | 11.88±0.30 | 11.63±0.20 | 12.56±0.31 | 11.83±0.19 | 12.12±0.17 | 40.23±0.19 |
| | 4 | 13.20±0.16 | 14.71±0.49 | 14.43±0.33 | 16.75±0.63 | 14.53±0.29 | 14.65±0.41 | |
| | 1 | 6.77±0.36 | 7.67±0.18 | 7.51±0.27 | 7.71±0.20 | 7.67±0.16 | 8.00±0.65 | |
| cap=200 | 2 | 8.19±0.12 | 8.84±0.22 | 8.69±0.26 | 9.24±0.30 | 8.80±0.20 | 8.97±0.37 | 48.13±0.24 |
| | 4 | 9.71±0.35 | 11.17±0.40 | 11.06±0.32 | 12.29±0.59 | 11.05±0.46 | 10.91±0.53 | |
| | 1 | 1.37±0.08 | 3.08±0.19 | 2.94±0.16 | 3.17±0.17 | 3.05±0.25 | 3.28±0.96 | |
| cap=250 | 2 | 3.34±0.15 | 3.80±0.20 | 3.73±0.15 | 3.94±0.20 | 3.79±0.26 | 3.89±0.58 | 53.43±0.26 |
| | 4 | 4.46±0.09 | 5.25±0.35 | 5.32±0.27 | 5.47±0.35 | 5.29±0.48 | 5.11±0.39 | |
With these extra rows, we can see that the trend of Table 3, in terms of the difference between 2S and the other methods, is in fact identical to the trend in Table 2: the difference first decreases, then increases, as the penalty factor increases. We can explain this phenomenon as follows.
First, when the penalty factor is small, the rational behavior for the buyer is to just take every order, and only decide which orders to drop when the true parameters are revealed (at close to no cost). 2S identifies and exploits this behavior for small penalty, while classic regression methods are agnostic to this possible tactic. Thus, the advantage of 2S compared to classic regression methods is large in the small penalty case.
Second, when the penalty factor is large, 2S will analogously learn to be conservative, such that the first stage solution likely remains feasible under the true parameters, in order to avoid the necessary (and high) penalty due to having to change to a feasible solution. Again, classic regression methods will be agnostic to this possible tactic, leading to a large advantage of 2S over the classic methods.
Table 4 only has the increasing trend from the large penalty, since it is neither a covering nor a packing program, and so there is no analogous tactic/exploitation for small penalty.
**Q2**: Here, "advantage" refers to the improvement in *percentage* of our method over other approaches.
Taking penalty factor = 0.05 as an example, the improvement percentage is (8.67 - 1.26)/8.67 = 85.47\% when capacity is 100, 91.37\% when capacity is 200, 94.73\% when capacity is 300, and 96.79\% when capacity is 400.
We will clarify in the paper.
---
Limitations: as discussed above, we will clarify the points you raised in the paper. Thank you for pointing them out.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your response, especially for running extra experiments with the L2 penalty (very interesting to know that it does worse!) and explaining the results in Table 3. I am still slightly on the fence about whether the proposed method is *always* better than Hu et al. (e.g., if solving the problem with nonlinear penalties is too expensive), but I hope that the authors add this to the limitations section. Overall, I'm satisfied with the authors' response to my concerns and have increased my score to a 7.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We thank the reviewer for their further comments and improved evaluation of our work. If accepted, we will add a "Limitations" section in the paper to discuss the points raised in the reviews and our rebuttals. | Summary: The authors propose a framework for learning latent variables in optimization problems that appear both in the constraints and objective. In this setting, the user is given features and asked to provide a solution to an optimization problem where the objective and constraints of the optimization problem are partially observed and related to the features. Additionally, the user can optimally modify the solution once the true parameters are known at a cost based on the change. The overall goal is to ensure that the total regret is low where the regret is the total value of the final solution after fixing minus the fixing cost and minus the objective value of the optimal solution in hindsight. The authors propose that the second stage solution should be considered the output of an optimization problem which is given the first stage solution as input and then backpropagates through both optimization problems to update the weights of the predictive model predicting parameters for the first stage optimization problem. The authors propose differentiating through continuous relaxations of these optimization problems using previous work that differentiates through iterates of an interior point method.
The authors evaluate their approach in several settings to demonstrate improved predictive performance over the investigated baselines.
Strengths: The main strength of the work is that it penalizes the recourse using a flexible optimization problem, rather than via a domain-dependent method as was done in previous work for packing and covering. Additionally, the paper itself is easy to read.
Weaknesses: Given that the main contribution of this work is that the framework has good empirical performance, it would be good to strengthen the experiments by evaluating against relevant baselines.
The work does not compare against relevant baselines, and claims generality to MILPs to discount several differentiable continuous optimization baselines, when in practice the proposed approach simply relaxes the integrality constraints and differentiates through a continuous LP. Given that this approach treats differentiating a MILP with respect to its constraints as differentiating the constraints of its LP relaxation, the authors should evaluate against methods that can differentiate through the constraints of an LP, which include cvxpylayers [1] and OptNet with quadratic regularization [2,3].
Additionally, the previous CombOptNet work [19 in the paper] which is formulated explicitly for learning constraints in combinatorial settings, and whose datasets are used for two of the three settings, is not compared against.
[1] Agrawal, Akshay, et al. "Differentiable convex optimization layers." Advances in neural information processing systems 32 (2019).
[2] Amos, Brandon, and J. Zico Kolter. "Optnet: Differentiable optimization as a layer in neural networks." International Conference on Machine Learning. PMLR, 2017.
[3] Wilder, Bryan, Bistra Dilkina, and Milind Tambe. "Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there any specific reasons that previous work on learning constraints for LPs, or CombOptNet, is not applicable to the investigated settings?
Are there any specific components of this method which are specialized to handle integrality that cannot be handled by simply applying previous approaches for differentiating through continuous problems?
It might be helpful to compare the gradients of this approach on a subjective level as well. In equation (3) are the gradients from one component relatively large compared to gradients from the other? How do they differ overall from previous work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Relevant limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pushing us on running more experiments and comparing with more baseline methods, which strengthens the paper. We believe we have adequately addressed your concerns through the additional experiments presented in the overall response, along with the following discussion.
We are happy to answer any additional questions you may have.
-----
Response to "weaknesses": originally, we didn't compare with cvxpylayers because cvxpylayers is a conic generalization of OptNet, and Mandi and Guns had already given empirical evidence that their approach is better than QPTL (which uses OptNet for differentiating through quadratic programs). However, after you pointed out the lack of comparison, we ran additional experiments comparing 2S against cvxpylayers (without regularization, with quadratic regularization, and with log-barrier regularization) in place of our Section 4 gradient computation approach. We find (see Tables 1 and 2 in the new pdf) that 2S offers at least as good solution quality while being around 30-50\% faster. Please see the overall response for more details and the precise description of the experiments.
We also added comparisons to CombOptNet, which is not designed to learn solutions with good post-hoc regret, but rather to learn $\hat{x}$ close to the optimal $x^*$. Our additional experiments find that CombOptNet has far inferior solution quality in post-hoc regret, takes far longer to train, and needs more data to generalize reasonably. Please again see the overall response for more details.
-----
**Q1**:
They are applicable and we now have further experiments, see above and overall response.
**Q2**: No, and we added experiments comparing 2S (using the Section 4 gradient computations) with using cvxpylayers (with different regularizations). Please see the above.
**Q3**: This is a very interesting question. We re-ran the experiments for 2S to investigate. First note that in Equation 3, the terms correspond to gradients coming from the second and first stages of optimization respectively, and they share common factors that we will ignore. We find the following pattern: the gradient due to the second-stage optimization is small compared to its first-stage analogue. Furthermore, as training continues, the gap widens. The effect is especially pronounced when the penalty factor is large. This can be explained as follows: if the penalty factor is large, then the model has an incentive to give predictions yielding an $x^\ast_1$ that changes minimally to $x^\ast_2$ in the second-stage optimization. Thus, as training progresses, we can expect the $\frac{\partial x^\ast_2}{\partial x^\ast_1}$ term to decrease in magnitude. The same effect persists, though less pronounced, even when the penalty factor is smaller.
---
Rebuttal Comment 1.1:
Title: followup
Comment: **literature**
With regards to the related literature, it seems there is more work on making predictions for unknown parameters in the constraints [1]. Additionally there is followup work from Hu et al for prediction in constraints [2], although I believe it was released after the neurips submission deadline.
I agree that this space has been much less investigated than predicting hidden objective coefficients. However, there has been work on constraint prediction + optimization, although it is unclear how directly applicable those approaches are to this work. It would be helpful to tie in more related work to explain which other methods are in a similar vein to this approach and why they are or are not applicable. I believe this might also help with the literature weakness mentioned by Reviewer FnYQ. Overall, a more comprehensive explanation of related work not only serves to explain what methods inspired the proposed approach, but also helps readers understand how this approach compares to existing work when deciding whether it is suitable for their problem at hand.
For the experiments against cvxpylayers, it is promising that the proposed approach gives similar or slightly improved performance. However, in my experience, cvxpylayers is considerably slower than the QPTH code released with OptNet due to implementation. As a result, the runtime improvements of this approach are unclear and in any case may be due to implementation details. Overall, it would be important to explain why the previous approaches perform well in terms of solution quality, and to either explain how a previously possible approach of applying OptNet fails, or how the view taken by the proposed approach allows for new capabilities.
[1] Nandwani, Yatin, Rishabh Ranjan, and Parag Singla. "A Solver-Free Framework for Scalable Learning in Neural ILP Architectures." Advances in Neural Information Processing Systems 35 (2022): 7972-7986.
[2] Hu, Xinyi, Jasper CH Lee, and Jimmy HM Lee. "Branch & Learn with Post-hoc Correction for Predict+ Optimize with Unknown Parameters in Constraints." International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research. Cham: Springer Nature Switzerland, 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional concrete and constructive comments. In view of your comments and other reviews, we have added a new overall response (see "official comment" to our overall rebuttal) to discuss at a high-level the related literature that isn't directly within the Predict+Optimize line of work. The one-line summary is that these works are either solving a different problem by design (though technically applicable, but will perform badly in our setting), or they are technical tools for differentiating through mathematical programs that are orthogonal to our new framework (the "Two-Stage"-ness with the post-hoc regret), which is our primary contribution. If accepted, we will incorporate this discussion into the paper. We understand your concern about further contextualizing our work, and we believe our new overall response addresses it.
For the two additional references you pointed to, you are correct that the new Hu et al. [2] is very recent and concurrent work. This followup work still uses their prior "correction function" framework from AAAI'23, in contrast to our new two-stage framework. As for the Nandwani et al. paper, as explained in our new overall response, it is designed for a completely different goal and is unlikely to work well for post-hoc regret. We have spent over 12 hours attempting to run the knapsack benchmark on their implementation, but we had trouble setting up their environment. Even their installation and demo code failed to run properly; we tried this on multiple machines to make sure it is not a machine-specific problem. We have contacted the authors for help. If possible before the discussion deadline, we will endeavor to provide experimental results on the Nandwani et al. method, in addition to the CombOptNet results we already have, to further confirm our analysis.
Please let us know whether we have addressed your concerns or if you have any remaining questions.
-----
Here we also respond directly to your comments about the implementation of cvxpylayers (and related tools such as QPTH).
We agree that the runtime improvements may be due to implementation details, that QPTH (OptNet) for example might be a faster implementation for quadratic regularization.
However, as our rebuttal experiments show, quadratic regularization gives *worse learning performance* than our implementation in terms of post-hoc regret (for small penalty, the difference is small, but the difference grows with the penalty).
The learning performance, unlike runtime, is independent of the implementation.
Our rebuttal experiments thus confirm our choice of log-barrier regularization as giving the best learning performance, and we point out that OptNet supports *only* quadratic programs.
As we point out in our overall responses, the primary contribution of our paper is in the framework. Tools for differentiating through LPs are just for *instantiating* our framework. Even **if** there are other tools that work better than our Appendix B calculations, these hypothetically better tools would not diminish our main contributions and should in fact further demonstrate the applicability of the framework. | Summary: The authors develop a two-stage predict-and-optimize approach. There is a recent paper of Hu et al. [9] that extends the predict-and-optimize framework to having unknown parameters in the constraints. The authors argue that the Hu et al. approach, which requires defining both a "correction function" and a "penalty function", could be simplified by solving a single optimization problem. The authors show how to run the Mandi and Guns [15] solver to get the answer to this optimization problem. They then demonstrate a wide range of examples for their approach, including some real-world examples.
Strengths: The authors have written a clear and easy-to-understand paper. I have not checked the proofs with great detail, but the mathematics looks very reasonable. The authors are getting impressive results on the examples that they explore.
The novelty is that the authors are combining the work of Hu et al. with the work of Mandi and Guns.
This is a fairly important problem within the Predict+Optimize framework and getting good solutions to these problems is important, so the significance is that this approach could hopefully be used. For the examples that the authors demonstrate, their approach is clearly better than Hu et al.
Weaknesses: As a point of order, it's frustrating that papers sometimes cite references so narrowly. I understand this a bit better when a paper is very mathematical, for instance building a proof based on another proof. But in this paper, the proofs are fairly algorithmic/arithmetic and involve well-known components. I haven't tried to discover who the authors are, but I notice that 7 of the 23 references [1, 2, 3, 7, 14, 15, 16] are to the same group of close collaborators. Then, another 11 of the 23 references [6, 8, 11, 12, 13, 17, 18, 19, 20, 21, 22] are to software packages, test problems and textbooks. This leaves only 5 of the 23 references to papers that are somehow intellectually related but not from a single group of collaborators (of these 5 remaining papers, two are to Elmachtoub and co-workers, two are to Hu and co-workers, and one is to Wilder et al.).
I am anxious about how many of the papers (7) refer to one closely-related group when only 5 of the cited papers are to work outside the group (I discard the 11 software packages / textbooks / test instances because these don't require the same work in linking ideas). Work that singularly focuses on just a single small group of collaborators sometimes neglects to take the broad perspective that often characterizes good science.
The reason that I'm marking "Fair" in the "Presentation" tick mark is this very limited interaction with the literature. Other than this problem, the paper is quite clear.
****
At least as the paper is worded, there is a Hu et al. [9] framework that has been introduced that matches the authors' setting, but everything would be better if only we would "adapt the approach of Mandi and Guns [15]" [Section 1, line 79]. For example, most of Section 4 (lines 211-268) discusses an adaptation of Mandi and Guns [15]. The point of the paper often seems to be that if only Hu et al. had read Mandi and Guns a bit more carefully, Hu et al. would have done things differently.
Overall, the current paper reads like a "letter to the Editor" objecting to elements of the work of Hu et al. [9] and trying to correct it. The Hu et al. framework is described in detail (lines 40-82 in the introduction and also Section 3 seem dedicated to Hu et al. and why what they're doing is wrong). Then, Section 4 proposes fixes for the Hu et al. framework using the Mandi and Guns framework.
The impression this leaves is that the authors' work is a bit incremental. As the authors acknowledge, their framework and the framework of Hu et al. are "mathematically equivalent in expressiveness". So the only difference is that the Hu et al. [9] paper develops a "correction function" and a "penalty function" whereas these authors solve an additional optimization problem (this optimization problem seems very similar to the one already proposed in Mandi and Guns).
The reason that I'm marking the "Contribution" as "Fair" is that I can't see anything except modifying Hu et al. to work more like Mandi and Guns.
****
I disagree with the authors that their approach "should be the canonical framework for the Predict+Optimize setting". The authors have a lot of impressive results, especially in comparison to Hu et al., but there are times where solving an optimization problem may take too long and it may be better to have a function that is quick-to-evaluate.
This small note (that the authors are claiming that it's always better to solve an optimization problem) is the only reason that I'm marking the "Soundness" as "Good" rather than "Excellent".
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I've tried to be pretty specific as to what are the weaknesses of the paper so that the authors can correct as relevant. Please take the "Weaknesses" section as a set of questions where I'm happy to be corrected.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This part of the paper is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and detailed review of our work. Below, we address the points you raise in "weaknesses".
**Significance**: We believe there might be a misunderstanding of the main message of our work. Our point is not "everything is better if we do it the way of Mandi and Guns". In fact, Hu et al. also use the work of Mandi and Guns as a technical component for differentiating through LPs. As we argue, however, their high-level view of Predict+Optimize with unknowns in constraints, by imposing a correction function, is sub-optimal. As a result, they're using Mandi and Guns's work not in the best way possible. We give a simple change in perspective, which enables better algorithmic use of differentiating through LPs.
Please also refer to "main message" in our overall response for more details.
We also point out that, at least in our opinion (a view that seems shared by reviewers Pw5Y and Jj2a), a simple idea that leads to much better performance is valuable, rather than a mark against the quality of the work.
The fact that we presented a detailed comparison with the Hu et al. framework should also be a plus and not a minus.
**Canonicity** of the framework: we agree that there is a tradeoff in runtime and learning capabilities/solution quality. We will tone down the claim in the paper by remarking *up front* the tradeoffs. Thank you for pointing out this nuance that we forgot to address. Please also see our overall response concerning the training times of these different approaches.
**Literature**: While we have cited all the works that directly inspire our paper, we understand that we have indeed missed some related references. We are happy to incorporate and cite the missing works, and welcome additional suggestions. We additionally wish to point out that, as far as we understand, the Stuckey group and the Guns group are two distinct research groups working on Predict+Optimize, despite Guns occasionally collaborating with the Stuckey group. The focuses of the two groups also seem somewhat different: the Stuckey group works more directly on combinatorial optimization, whereas the Guns group works more on "continuous" optimization techniques, as far as we can tell.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: **Significance.** Agreed that the detailed comparison to the Hu et al. framework is not a weakness, the issue is that just adjusting one paper (Hu et al.) to work more like another paper (Mandi & Guns) seems insufficient for a NeurIPS publication, especially since the Mandi & Guns paper seems to be among the 7 papers from a close group of collaborators where the authors focus their attention. If the authors had considered the literature more broadly before deciding on the Mandi & Guns paper to modify the Hu et al. framework, this would be more manageable.
**Canonicity.** Thanks to the authors for agreeing to tone down the claim.
**Literature.** Thanks to the authors for acknowledging that they have missed some papers and for promising to incorporate them. I don't have knowledge on the relationship between research groups in this area, I've only observed that the 7 papers identified share a lot of the same authors.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for further engaging.
**Significance**:
We believe that there is still a misunderstanding of the contribution of our paper.
The point of our work is **not** to "work more like Mandi and Guns".
As explained in our overall response, our main contribution is the new Predict+Optimize framework for handling unknown constraints.
The learning method we propose, in order to substantiate our new framework, is one for training a neural network.
As such, it requires a way to compute training gradients, and in fact so does the training method proposed by Hu et al. for their prior framework.
Both papers chose the machinery provided by Mandi and Guns only as *a technical tool* for differentiating through LPs.
Thus, the improvement of our paper over Hu et al. is **not** in making things "more like Mandi and Guns", but in **how** we use those tools (that we use it *also* for a second-stage optimization problem).
As the reviewer already noted, we get impressive empirical results from this change of perspective.
In our rebuttal to reviewer Xm8w, we explained that we chose Mandi and Guns because their work had already compared with QPTL (LP+quadratic regularization using OptNet, and cvxpylayers is just a conic extension to OptNet).
Our choice was therefore **principled**, and not because we only "focus [our] attention on a close group of collaborators".
In the rebuttal experiments, we have further demonstrated that this choice is indeed the best, with our 2S method outperforming all the reviewers-suggested alternatives, including cvxpylayers and CombOptNet.
We also want to re-emphasize that Hu et al. was the **first** and **only** paper to date to propose a Predict+Optimize framework that handles unknown parameters in constraints. Naturally, our paper explains how we are different from and how we improve upon that work.
We found that a *simple* idea (which, again, is *not* "be more like Mandi and Guns") significantly improves performance, and this should be seen as a benefit rather than something to be considered incremental.
Finally, as the reviewer recognized, both Predict+Optimize and specifically the problem of handling unknowns in constraints, are important research directions for the NeurIPS/ICML community.
Given that our paper gives a simple idea which yields substantial empirical improvements, as verified both by the in-paper and rebuttal experiments, we strongly believe that our work will be a significant and valuable contribution to the community.
**Literature**: As we previously explained in the rebuttal, the Stuckey Group and the Guns Group are distinct. Both groups have made significant contributions to the Predict+Optimize area, and have published many related works. Given that the main theme of the paper is Predict+Optimize, we don't understand the issue of citing these groups. If the reviewer is aware of additional missing references, we are more than happy to cite them. | Rebuttal 1:
Rebuttal: Thank you for your constructive and in-depth feedback for improving the paper.
We are encouraged by the reviewers recognizing that our paper 1) tackles an important problem (reviewers FnYQ, Jj2a) in Predict+Optimize, 2) presents a simple/sensible, elegant and flexible solution (reviewers Pw5Y, Jj2a, Xm8w), and 3) gives experiments that are close to real-life (reviewers FnYQ, Jj2a).
We are also grateful for the constructive criticisms which can improve our paper.
In this overall response, we wish to emphasize again the main conceptual contribution of this paper, as well as address the criticisms on our experiments and citations.
We will additionally respond to remarks and questions separately for each individual review.
We believe that our rebuttals below have adequately addressed your criticisms, and hope that you will improve your evaluation of our work based on these responses. Please let us know if you have remaining concerns. We will address them.
**Main message**: While Hu et al. gave the first Predict+Optimize framework to handle unknowns in constraints, their framework required specifying an ad-hoc correction/recourse/differentiable projection. In this work, we provide a *simple* change of perspective (Section 3), viewing the recourse action itself *naturally* as the solution of an optimization problem. This simple change (a) yields much better test-time performance (see e.g. experiments in Table 1 in the paper), (b) allows for post-hoc correction even when the stage 1 solution (a soft commitment) doesn't violate constraints under the true parameters (as was recognized by reviewer Pw5Y), and (c) enables the algorithmic training methodology for generalizing to handle MILPs in Section 4. We emphasize that, especially in the context of frameworks, simplicity and the associated flexibility is a virtue and not a downside (as was recognized by reviewers Pw5Y, Jj2a, Xm8w).
**Experiments**: Thank you for pushing us on additional experiments, which strengthen the empirical part of our paper. Based on reviewer feedback, we ran two new experiments comparing with the suggested methods.
1. We applied cvxpylayers and CombOptNet to the 0-1 knapsack benchmark to compare with our 2S method. See Table 1 in the new pdf for post-hoc regrets and Table 2 for the training times. More precisely, for cvxpylayers, we use it with various regularizations (a. LP with no regularization, b. with quadratic regularization, c. with log-barrier as in our paper) to replace the Section 4/Appendix B gradient calculations. For CombOptNet, we just run it as is, since it is a method for learning unknowns in constraints. These methods are evaluated at test-time using the Two-Stage framework, as we did in the paper.\
We find that cvxpylayers never gives better solution quality, while 2S is 30\%--50\% faster. For CombOptNet, the solution quality is very poor, since it was designed to learn a first-stage solution $\hat{x}$ close to $x^\ast$, not to learn for small post-hoc regret. CombOptNet is also drastically slower. We further observe that, using only 700 training samples (as in the experiments in the paper and rebuttal), CombOptNet does not generalize well. Only with the full 4500 training samples used in the CombOptNet paper do we get reasonable generalization. See Figures 1-4 for training+test loss curves (using loss $\|\hat{x}-x^\ast\|^2$), for 700 training samples (Figures 1,2) and 4500 training samples (Figures 3,4). This is evidence that CombOptNet is more data-hungry than our 2S method.
2. We tested the differentiable projection idea in references [A,B] given by reviewer Pw5Y, on the Alloy Production benchmark. The projection in [B] is identical to the Hu et al. correction function, and so we only additionally tested the $\ell_2$ projection method in [A], implemented using cvxpylayers. The experiment set-up follows that of Table 1 in the submission: both training and testing use $\ell_2$ projection in the second stage, as opposed to solving the second stage optimization problem defined in Section 3. Table 3 in the new pdf shows both the post-hoc regret and training time for $\ell_2$ projection. We find that, not only is $\ell_2$ projection slow, but it has even worse post-hoc regret than the Hu et al. correction. We suspect that this is due to the Hu et al. correction function preserving the direction of the solution vector whereas $\ell_2$ projection can change the direction, and that this makes a difference for Alloy Production. In any case, this experiment confirms again that our Two-Stage framework has better post-hoc regret than a framework based on differentiable projections, reinforcing the main message of our paper.
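The direction-preservation point above can be illustrated with a small sketch (illustrative numbers only, not the paper's benchmark): for a single violated budget constraint, a scale-down correction in the spirit of the Hu et al. correction function keeps the solution vector's direction, while the $\ell_2$ projection onto the feasible halfspace generally does not.

```python
import numpy as np

a = np.array([1.0, 2.0])        # constraint weights (illustrative)
b = 4.0                          # realized budget
x_hat = np.array([3.0, 2.0])     # first-stage solution; violates a @ x <= b (a @ x_hat = 7)

# Direction-preserving correction: scale the solution down until feasible,
# in the spirit of the Hu et al. correction function
x_scale = x_hat * (b / (a @ x_hat))

# l2 projection onto the halfspace {x : a @ x <= b}
x_proj = x_hat - ((a @ x_hat - b) / (a @ a)) * a

cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos(x_scale, x_hat))  # 1.0: direction preserved
print(cos(x_proj, x_hat))   # < 1.0: direction changed
```

Both corrected vectors satisfy the constraint exactly, but only the scaled one stays parallel to the original solution; on problems where the direction of the solution vector matters (as conjectured for Alloy Production), the two corrections can therefore behave very differently.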
**Runtime**: Appendix E gives the training time for each method tested in the submission. In the Alloy Production problem, which is the *only* setting where the Hu et al. IntOpt-C method can be applied, their running time is quite comparable with our proposed 2S method. In the additional experiments in the rebuttal (see Tables 2,3 in the new pdf), the 2S method is also faster to run than cvxpylayers and CombOptNet, while offering at least as good and sometimes substantially better learning performance. We believe that our method presents a reasonable tradeoff between runtime and learning in practice. If accepted, we will use the extra page to include a runtime discussion in the main paper.
**Literature**: We thank the reviewers for the additional references. The Predict+Optimize literature, and more generally the decision-aware learning literature, has grown rapidly in recent years, with different research groups sometimes even calling the same concepts by different names. As such, it has become increasingly difficult to keep track of all the relevant literature. While we have cited all the works that directly inspired ours, we have nonetheless missed some others. We are grateful to the reviewers for pointing us to works we have missed, and will address them in the paper.
Pdf: /pdf/82160d59a85874583997b26843f1160dcc224d90.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Collaborative Score Distillation for Consistent Visual Editing | Accept (poster) | Summary: The paper introduces an approach to achieve consistent visual editing by leveraging a pre-trained pix2pix diffusion model. The authors propose a generalization of the SDS loss (originating from DreamFusion) to a CSD loss, which utilizes Stein variational gradient descent. This new loss function enables the joint distillation of multiple samples from a text-to-image diffusion model. The CSD loss is applicable to diverse visual editing scenarios, including panorama images, videos, and 3D scenes. Additionally, the CSD loss can be utilized for text-to-3D generation as well.
Strengths: - The authors provided results for various visual editing applications, such as panorama images, videos, and 3D scenes, as well as text-to-3D generation. This demonstrates the promise of the suggested method's flexibility for numerous applications.
- The paper reads well and flows smoothly.
- The authors presented all the required preliminaries, making the paper self-contained for newcomers.
Weaknesses: - The paper lacks reproducibility due to several crucial missing details:
- The paper does not specify the meaning or selection process of parameter N in all applications, nor how to choose the final parameter theta from the set of N parameters.
- The formulation is missing information on the aggregation over multiple views/frames/crops, making it unclear how to implement this aspect.
- The explicit specifications of classifier-free guidance weights are missing, particularly in the 3D generation application where DreamFusion originally requires a high weight to produce desired results.
- I don't quite understand why the additional epsilon is required in both equations 8 and 9, when equation 7 makes clear that only the score (epsilon_phi) is given. I find this poorly justified, as is the change from random noise to the image-conditioned prediction in equation 9. Several other options seem available here, and the specific choice appears arbitrary. The provided ablation in section 4.4 is insufficient to justify these choices, as it does not cover all possible alternatives (such as using only the score or another deterministic predicted noise).
- The paper lacks preliminary information on multidiffusion, despite being depicted in figure 2. The related work section is generally lacking and should elaborate on other methods beyond DreamFusion.
- Since the suggested method is presented as a generalization of the SDS loss, a more extensive comparison to SDS should have been provided in all applications. This is an important baseline that is currently missing.
- Figure 1 is overcrowded and confusing. It would be clearer if the generated panorama on the left side corresponded to the text description rather than being identical to the source image.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - What is the reason behind the focus on editing applications in most of the experiments? Was panorama generation also explored by the authors?
- How does the paper provide an explanation for the observed comparison in the panorama editing results shown in figure 2? Was there an investigation into the potential use of higher weighting parameters for the baselines or other configurations?
- The results obtained for text-to-3D generation appear to be quite similar to the results achieved with DreamFusion using the SDS loss. Can the authors offer an explanation for this similarity in performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discussed limitations; however, the limitation figure is shown solely in the appendix. Moreover, the authors did not discuss training times, which could serve as one of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer yGH2,
We sincerely appreciate your efforts and comments to improve the manuscript. We respond to your comment in what follows.
---
**[W1] Lack of detailed explanation for reproducibility**
As for reproducibility, we comprehensively included the details required to implement our method, both in the main manuscript and appendix:
To clarify, N denotes the total number of images, and at each iteration we select a minibatch of B samples to update. The implementation details are in Appendix D. Also, we provide additional ablation studies on the effect of batch size B in response [Q3] of Reviewer qUzm.
Implementation details regarding aggregation can be found in Section 3.3 and Appendix D.
Details on classifier-free guidance weights can be found in Appendix D for text-guided editing tasks, and in Appendix B.2 for text-to-3D generation
For further clarification, we offer line-by-line explanations alongside our submitted code for video editing. The submitted code shows that B is determined by the number of video frames, i.e., B = N (line 132), how the final edited images are obtained (line 247), and where scores from multiple samples are combined for gradient updates (lines 227 and 229).
Lastly, we note that the code has been submitted as part of the supplementary material. We advocate for reproducible research and will open-source our code that reproduces results in the paper.
---
**[W2] Detailed analysis about the choice of subtracted baseline noise**
Our rationale behind the image-conditional noise is elaborated in Appendix A, as well as in Sections 3.2 and Section 4.4. Furthermore, we elucidate the choice of using image-conditioned noise as follows:
- Subtraction of random noise: Directly applying Eq. (7) solely with the score function of the target distribution leads to a high variance of gradient, which can severely hinder convergence. Thus, as also demonstrated in DreamFusion, subtracting random noise is crucial as an effective regularization.
- Introduction of image-conditional noise: However, as shown in Figures 7 and 12 in our manuscript, the default choice of SDS results in severely blurred outputs, as the noise-denoise process of SDS blurs the image. Thus, we propose to subtract image-conditional noise so that the diffusion noise only alters the part where the text instructs to change. This approach is supported by the principle of Wasserstein gradient flow, where the optimal gradient flow in variational inference is given as the difference between the target score function and source score function (See response [Q1] to Reviewer v8pz for further details).
- Other choices for baseline noise: Recent works make different choices of subtracted baseline noise: Delta Denoising Score [1] estimates the noise of the source image by providing a suitable source prompt, and ProlificDreamer [2] fine-tunes the U-Net to obtain the noise of the source distribution (also refer to the response [Q1] to Reviewer v8pz). However, these are not favorable in our case: source prompts are not given in real-image editing, and fine-tuning the U-Net is computationally expensive.
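The role of the baseline noise can be sketched schematically (a toy NumPy sketch with stand-in noise predictions, not InstructPix2Pix or the authors' actual implementation): the distillation update direction is the difference between the instruction-conditioned prediction and a baseline, so with an image-conditional baseline the update vanishes wherever the two predictions agree, leaving unedited regions untouched.

```python
import numpy as np

def distillation_grad(eps_target, eps_base, weight=1.0):
    """Schematic SDS/CSD-Edit-style update direction: the difference
    between the target (instruction-conditioned) noise prediction and a
    baseline. SDS subtracts the injected random noise; CSD-Edit subtracts
    an image-conditional prediction instead."""
    return weight * (eps_target - eps_base)

rng = np.random.default_rng(0)
latent = rng.normal(size=(4, 8, 8))       # toy latent, shape (C, H, W)

# Stand-in predictions; a real setup would query a diffusion U-Net.
eps_image = 0.9 * latent                  # image-conditional prediction
eps_edit = eps_image.copy()
eps_edit[:, :4, :] += 0.5                 # the instruction changes the top half only

g = distillation_grad(eps_edit, eps_image)
# the bottom half receives a zero update: those regions are left unedited
```

With a random-noise baseline instead (as in plain SDS), the residual noise would spread nonzero updates over the whole image, which is consistent with the blurring the rebuttal describes.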
---
**[W3] Missing information about detailed explanation of baseline methods**
In response, we will elaborate more details about baseline methods and clarify the difference from our method in our final manuscript for a comprehensive understanding.
---
**[W4] Lack of extensive comparison with SDS in visual editing experiments**
While SDS is the most relevant approach to our method, the quality of SDS-edited images is surpassed by more recent works, which we considered as baselines for comparison, as explained in [W2]. Thus, we primarily compared our method with these much stronger, state-of-the-art baselines for each modality to show its remarkable performance. Nevertheless, we compared our method with SDS in the ablation studies, as shown in Figure 7 and Appendix C (random noise stands in for SDS), and qualitative examples are in Figure 12. We will clarify this more explicitly in our final manuscript.
---
**[W5] Concept figure is confusing**
Thank you for the detailed suggestions. We revised Figure 1 in supplementary PDF to better illustrate our method in overview.
---
**[Q1] Reasons behind the focus on editing applications rather than generation**
We primarily focused on editing applications in most of our experiments due to the inherent limitations associated with SDS. Specifically, SDS tends to converge towards certain modes, resulting in blurred outputs with fewer details. Given this issue, we have opted to focus on text-guided editing of panorama images utilizing image-conditioned noise, rather than generation using random noise. However, we think extending our idea for generation tasks would be an interesting future work to explore.
---
**[Q2] More detailed explanation for the results shown in Figure 2**
Regarding additional explanation for the panorama image experiments, we refer to [GR1] in the common response. Our intention in Figure 2 was to emphasize that our method has more controllability than InstructPix2Pix+MultiDiffusion by varying the guidance scale when editing images.
---
**[Q3] Reason for the similarly obtained results between SDS and CSD for text-to-3D synthesis**
The reason for the similar results is because of the identical experimental setup (e.g., hyperparameters for NeRF training, see Appendix B.2), especially using the same random seeds. Nonetheless, CSD presents finer details compared to SDS, as shown in Table 3 and Figure 13 in the appendix, illustrating the qualitative benefits of CSD.
---
**[L1] Lack of comparison of computation time**
Following your suggestion, we measure the computation time compared to the baselines in [GR2] in the global response.
---
**Reference**
[1] Hertz et al., Delta Denoising Score, ICCV 2023
[2] Wang et al., ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv 2023
---
Rebuttal Comment 1.1:
Title: Please check other's reviews and authors' responses
Comment: Dear Reviewer yGH2,
Could you check the other reviewers' comments and authors' responses?
Do you have further questions for the authors?
Thanks, Your AC
---
Rebuttal Comment 1.2:
Title: Post rebuttal
Comment: I want to thank the authors for making an effort in their rebuttal and addressing the reviewers' concerns.
The authors responded to most of my concerns. The revised method figure is better and serves the method now.
However, I agree that the novelty and the results are not impressive enough, and I suspect that the revision required to address the mentioned clarifications would be quite significant (I still think the N parameter can be confused with multiple instances of a panorama/video/3D scene).
Regarding the limited-innovation concern raised by other reviewers: if the goal is to address the limitations of SDS, I would ask again why mostly editing results are presented. It seems that the main contribution of the paper is the claim of better editing capabilities over SDS; is that correct?
For that reason, I will stay with my current score, and I will make a final decision after a joint discussion with the other reviewers.
---
Reply to Comment 1.2.1:
Title: Response to reviewer yGH2
Comment: Dear reviewer yGH2
Thank you for your response and we are happy to hear that we have addressed most of your concerns. However, we realized that some of our previous responses (e.g., for your inquiry [Q1]) may cause some confusion, which we would like to clarify in what follows.
We clarify that our goal is neither "addressing the limitation of SDS" nor "having better editing capabilities over SDS". Instead, we aim to develop a visual editing method that handles the consistency arising in high-dimensional, versatile modalities, including panorama images, videos, and 3D scenes. To this end, we first formulate the manipulation of such modalities as a multi-particle variational inference problem, interpreting the complex visuals as a set of images that satisfy modality-specific consistency. Then, we propose an effective algorithm that adopts SVGD for diffusion models; importantly, our derivation shows that this adaptation is indeed a generalization of SDS (which may be the source of the confusion), but it is not just a workaround to overcome the limitations of SDS. Furthermore, we show that providing a better baseline noise, which approximately estimates the score of the source distribution, can improve the editing quality. As recognized by Reviewers v8pz and qUzm, we believe that our problem formulation and approach are novel, i.e., we address a completely new problem, rather than overcoming a limitation of existing approaches. We also remark that although resolving the limitations of SDS, e.g., mode collapse, is beyond our scope, generating panoramic images, videos, or 3D scenes using our idea could be an interesting direction to explore in the future.
Finally, we sincerely appreciate that our manuscript has been improved by incorporating valuable feedback from the reviewers. For instance, following your suggestion, we will include a detailed explanation of the definition of parameter N for different modalities to enhance the presentation. However, we believe that the requested clarifications will not change the essential value of our original paper.
If you have any further concerns, questions, or suggestions, please do not hesitate to let us know.
Thank you very much,
Authors | Summary: This paper presents a novel method called Collaborative Score Distillation (CSD) for consistent visual synthesis and manipulation. The proposed CSD-Edit utilizes pre-trained text-to-image models and can be competent for panorama image editing, video editing and 3D scene editing tasks, and generate inter-sample consistent results. Sufficient experiments on several tasks have demonstrated that the effectiveness of the proposed method.
Strengths: - This paper proposes an optimization strategy called Collaborative Score Distillation (CSD) for text-to-image models to perform consistent visual editing, which has demonstrated impressive effectiveness on a variety of tasks including visual editing of panorama images, videos and 3D scenes.
- The visualization is impressive, qualitatively illustrating the effectiveness of the proposed method and its ability to generate consistent results.
- The quantitative results also demonstrate the effectiveness of the proposed CSD and its superiority over the existing baseline methods to some extent.
Weaknesses: - The innovation of the proposed Collaborative Score Distillation is somewhat limited, e.g., combining the SDS presented in DreamFusion [26] with SVGD to obtain a general form of SDS.
- The panorama image editing results tend to produce duplicate content, especially in Figure 2 and Figure 10, and it appears that the diversity of images generated by CD-Edit is limited.
- The improvement in the numerical results of the quantitative comparison cannot solidly support the superiority of the proposed method, because the randomness of generative models' generated/edited results may lead to relatively large fluctuations in the numbers.
- Considering that the proposed method is optimization-based, the time required to optimize a scene should also be taken into account when comparing with other baseline methods. The supplementary material includes the number of iterations in the implementation details, but there is still no objective time comparison.
- Metrics assessing intra-image and inter-image consistency are expected; if objective metrics are lacking, a subjective user study may be considered.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Considering the randomness of the results of the generative models, I’m curious about the criteria used to select the results compared in the paper. This detail should be explained for both the baseline methods and the proposed method to eliminate artificially introduced bias.
- Repeatability in the generated/edited images may be worth further analysis and ablation study.
- Other concerns have already been mentioned in Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer j3eS,
We sincerely appreciate your thoughtful comments, efforts, and time to improve our manuscript. We respond to each of your questions and concerns one-by-one in what follows. Please let us know if you have any comments/concerns that we have not addressed up to your satisfaction.
---
**[W1] Limited innovation of proposed method**
As also recognized by Reviewers v8pz and qUzm, we emphasize that our method is not just a naive combination of existing techniques, but rather a novel composition of ideas that addresses the limitation of SDS in a principled manner. Specifically, we first identify the lack of inter-sample consistency under SDS, which hinders its broader potential applications, e.g., higher-dimensional visual synthesis. We address this challenge by casting it as a multiple-sample variational inference problem and propose a method that uses SVGD. This reinterpretation serves not only as a simple solution to the problem but also as a practical innovation that scales to diverse recent tasks, e.g., adapting text-to-image diffusion models for high-dimensional manipulation. We believe that this could be a useful addition to the field, extending the applicability of SDS for consistent synthesis, as also highlighted by reviewer v8pz.
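For readers unfamiliar with SVGD, a minimal NumPy sketch of one update on a toy Gaussian target (not the paper's diffusion setup) shows the two ingredients the response refers to: a kernel-smoothed score term that couples the particles, and a repulsive kernel-gradient term that keeps them diverse.

```python
import numpy as np

def svgd_step(X, score, h=1.0, lr=0.05):
    """One SVGD update on particles X of shape (n, d): each particle
    follows a kernel-smoothed score (driving all particles toward the
    target) plus a repulsive kernel-gradient term (keeping them spread)."""
    diff = X[:, None, :] - X[None, :, :]          # x_i - x_j, shape (n, n, d)
    K = np.exp(-(diff ** 2).sum(-1) / h)          # RBF kernel k(x_i, x_j)
    gradK = (-2.0 / h) * diff * K[..., None]      # grad of k w.r.t. x_i
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K @ score(X) + gradK.sum(axis=0)) / X.shape[0]
    return X + lr * phi

# Toy target: standard normal, whose score is simply -x.
score = lambda X: -X
rng = np.random.default_rng(0)
X = rng.normal(3.0, 1.0, size=(50, 2))            # particles start far off-target
start_norm = np.linalg.norm(X.mean(axis=0))
for _ in range(500):
    X = svgd_step(X, score)
# the particle cloud drifts toward the target mean at the origin
```

In the paper's setting the score function would come from a text-to-image diffusion model and the "particles" are the images being edited; the kernel coupling is what enforces inter-sample consistency.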
---
**[W2 & Q2] Lack of diversity and repeatability in the edited results when applying CSD-Edit**
First, we note that our goal is to generate samples consistently in order to demonstrate the power of our method, which allows synchronous editing across a set of images. Nevertheless, it is possible to control the diversity of the generated image with different mini-batch sizes. Note that using a large minibatch size B would encourage more consistency among samples (see our response to reviewer qUzm [Q3] for more explanation). As shown in Figure 3 in the supplementary PDF file, varying the minibatch size B can control the diversity of the generated output: given similar penguins in the source image, the diverse chickens can be generated by using a small batch size. It is important to note that even with this diversity, coherent structures are preserved. This is because randomly selected samples are processed synchronously during each optimization iteration. Lastly, please refer to [Q3] of response to Reviewer qUzm in additional ablation study on the effect of batch size.
---
**[W3 & Q1] Consideration of randomness in evaluation**
Since the variance in editing (not generation) tasks from different random seeds is relatively small, we did not highlight the randomness in the evaluation. Instead, we ensured that the hyperparameters were fine-tuned for baseline methods for fair comparison. However, to address your concern, we conducted an additional evaluation with the randomness of generation models into consideration. We repeated under the identical experimental setup using 5 different random seeds and reported average scores along with standard deviations for each method to evaluate desired edits. As demonstrated in Table 1 and Table 2 of the supplementary pdf, CSD-Edit consistently outperforms the baselines, highlighting the robustness of our approach across different runs in achieving desired edits.
---
**[W4] Lack of comparison of computation time**
To address your concern, we measure the computation time and compare with the baselines. Please refer to [GR2] in the common response for further details.
---
**[W5] Lack of subjective user study**
In addition to our objective evaluation, we conduct additional user studies to compare with video editing and 3D scene editing baselines. Notably, ours outperforms the baselines by a large margin. Please refer to [GR3] in the common response.
---
Rebuttal Comment 1.1:
Title: Please check others' review and authors' responses
Comment: Dear Reviewer j3eS,
Could you check the other reviewers' comments and authors' responses?
Do you have further questions for the authors?
Thanks, Your AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the responses to my comments!
I have carefully read the other reviewers' comments and the authors' responses.
This rebuttal has resolved most of my questions. Although I still think the novelty is not very impressive, the completeness and writing of this work are satisfactory. Therefore, I have decided to raise my score from 4 to 5.
---
Reply to Comment 1.2.1:
Title: Thank you for the response
Comment: Dear reviewer j3eS
Thank you for your response! We again sincerely appreciate your efforts and time in reviewing and providing incisive comments on our paper.
We are pleased to hear that our rebuttal addressed your concerns well. If you have any further concerns, questions, or suggestions, please do not hesitate to let us know.
Thank you very much!
Authors | Summary: The paper presents a novel method, Collaborative Score Distillation (CSD), for diffusion models. The authors propose a new approach to score distillation that leverages the inter-sample relationships to generate more consistent and coherent images. The paper also introduces CSD-Edit, an extension of CSD, which enables the editing of images, videos, and 3D representations. The paper demonstrates the effectiveness of the proposed approach through extensive experiments, showing that the method outperforms existing methods in various tasks, including high-resolution image editing, video editing, and 3D scene editing.
Strengths: (+) The paper presents a novel approach to score distillation that takes into account the inter-sample relationships. This is a departure from existing methods, which typically focus on individual samples. The idea of using Stein Variational Gradient Descent (SVGD) to enforce consistency among samples shows promise in improving the quality of generated images. The ability to edit images, videos, and 3D could have wide-ranging applications in various fields.
(+) The paper is well-written and the proposed methods are clearly explained.
(+) The authors demonstrate the effectiveness of their approach through extensive examples (in the main paper, supplementary document, and webpage).
Weaknesses: (-) The results presented in the paper are not entirely convincing. For instance:
* In Figure 2, the style editing results between CSD-Edit and Instruct-Pix2Pix + MultiDiffusion are difficult to discern quantitatively. Moreover, noticeable stitching artifacts are visible in the results.
* The video editing results available on the webpage exhibit a prominent flickering effect, which detracts from the perceived quality of the output.
* In Table 2, the advantages of CSD-Edit over Instruct-NeRF2NeRF in 3D scene editing are marginal, which raises questions about the practical superiority of the proposed method.
(-) The paper does not provide a robustness analysis of the proposed method. It would be beneficial to understand how the method performs under different conditions or with different types of input.
(-) Please refer to the Questions section for more comments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could the authors provide more details on the computational efficiency of their method, particularly in comparison with other score distillation methods such as SDS and SJC? It would be beneficial to understand the trade-offs between the quality of the results and the computational resources required.
- How does the method handle errors or biases in the pre-trained Instruct-Pix2Pix model?
- How is the number of samples (N) determined and selected in different experiments and applications (as per Eq. (8))? This appears to be a crucial hyperparameter in the proposed method, and a more detailed explanation of how it is chosen would be helpful.
- In Section 4.4, how are view-dependent prompts and CSD unified and integrated for use in text-to-3D applications?
- The multi-head (or Janus) problem is a well-known issue in text-to-3D applications. Can CSD be used to address this problem? If so, how effective is it, and are there any limitations or challenges in using CSD for this purpose?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss some limitations of their approach in the supplementary material, but these could be more prominently addressed in the main paper. In particular, the issue of artifacts in high-resolution image editing and the flickering effect in video editing are significant limitations that should be discussed in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer qUzm,
We sincerely appreciate your efforts and comments toward improving the manuscript. We respond to your comments in what follows.
---
**[W1] Some of the results which are not entirely satisfactory**
For your information, we further clarify our experimental results and commit to including these findings in our final manuscript to provide a more comprehensive assessment of our proposed method.
- (Comparison with InstructPix2Pix+MultiDiffusion): The superiority of our method over InstructPix2Pix+MultiDiffusion lies in the controllability over instruction-fidelity across different guidance scales. Please refer to [GR1] in the global response for further details.
- (Regarding the remaining stitching and flickering effects): While our CSD-Edit shows improved consistency compared to prior state-of-the-art baselines across all modalities, it may not be the ultimate approach for ensuring absolute consistency, due to limitations of the underlying diffusion models and autoencoders. Thus, injecting a modality-specific data prior could be a promising direction for future work. We will discuss this limitation more prominently in our final manuscript.
- (Quantitative results in 3D scene editing): To address your concerns regarding the quantitative results for 3D scene editing, we conduct additional user studies including other editing tasks. Notably, ours outperforms the baselines by a large margin. Please refer to [GR3] in the global response for further details.
---
**[W2 / Q2] Robust analysis of the proposed method / Errors and bias inherited from pre-trained Instruct-Pix2Pix**
Here, we present a detailed explanation of how our method performs under different conditions and with different types of input. Since we aim to distill the generative prior of InstructPix2Pix, our method performs well on tasks at which InstructPix2Pix excels. For instance, our method shows remarkable performance in stylization or in changing multiple objects into others. Also, our method is able to perform image editing with multiple prompts, i.e., region-wise editing of a panorama image, as shown in Appendix B.1.
Besides, as mentioned in the limitations section, the errors and biases of InstructPix2Pix might transfer to our method. In particular, InstructPix2Pix often edits undesired objects. Therefore, when editing a panorama image without SVGD, we often observe that unwanted objects are changed in some patches, breaking the consistency of the output image. On the other hand, we empirically observe that our score mixing in CSD acts as a regularizer that prevents abrupt changes in images, ensuring better consistency. For instance, see Figure 4 in the supplementary PDF for visual examples: a tiger is generated in an unwanted region of the source image when SVGD is not applied in the update.
---
**[Q1] Trade-offs between the quality of the results and required computational resources compared with SDS and SJC**
We measure the computation time and compare it with the baselines in [GR2] of the global response.
---
**[Q3] Detailed explanation about the number of samples N**
For clarification, we denote N as the total number of images (e.g., total patches of a panorama image or total frames of a video), and we update a minibatch of B samples per iteration. Intuitively, a large B encourages more consistency among samples but is more computationally expensive. We also observed that large batch sizes dilute the effect of editing. Thus, the batch size controls the trade-off between computation time, editing quality, and consistency preservation. To further verify our choice of B, we provide an additional ablation study on the panorama image editing experiments. Using the same experimental setup as in Section 4.1 with a fixed guidance scale of 7.5, we swept over B = 4, 8, 12 and measured the CLIP source-target image similarity, the CLIP directional image-text similarity, and the computation time (iterations per second over 200 total iterations, measured on a single A100 40GB GPU). The table below demonstrates the effect of batch size.
\begin{array}{lccc}
\text{Batch size} & \text{CLIP Image Sim.} & \text{CLIP Directional Sim.} & \text{Time (iter / sec)} \newline
\hline
\text{B=4} & 0.6392 & 0.2401 & 2.86 \newline
\text{B=8} & 0.6917 & 0.2165 & 1.47 \newline
\text{B=12} & 0.7394 & 0.1953 & 1.02 \newline
\end{array}
Also, we provide qualitative examples of the effect of batch size in Figure 3 of the supplementary PDF. We show that one can control the diversity of the generated output (e.g., identical penguins are changed into diverse chickens) by choosing an appropriate batch size. We will include this ablation study in our final manuscript.
---
**[Q4 & Q5] View-dependent prompting and handling Janus problem in text-to-3D synthesis via CSD**
In Figure 6 of our manuscript, we show that CSD enables consistent visual synthesis of 2D images when view-dependent prompting is applied. While these view-dependent prompts might guide the generation of different shapes or contents for each image, CSD helps to generate coherent objects across different views by updating their scores synchronously. Based on this observation, we apply CSD into text-to-3D synthesis by computing CSD on sampled views at each iteration.
Through the experimental results shown in Table 3 and Figure 13 in the Appendix, we demonstrate the effectiveness of CSD on text-to-3D generation, particularly its potential to mitigate the Janus problem. We believe that the consistent update among views, facilitated by the reduced variance, results in learning better geometry and thus better 3D synthesis. Nonetheless, it is important to note that our approach does not entirely remedy the fundamental cause of the Janus problem, which resides in diffusion models' insufficient understanding of 3D geometry and the lack of diverse views during their training, specifically views other than front faces.
---
Rebuttal Comment 1.1:
Title: Please check others' reviews and authors' responses
Comment: Dear Reviewer qUzm,
Could you check the other reviewers' comments and authors' responses?
Do you have further questions for the authors?
Thanks,
Your AC
---
Rebuttal Comment 1.2:
Comment: Dear Authors,
Thank you for providing a detailed rebuttal in response to the initial reviews. At this time, I do not have any further questions regarding the paper. I will make my final decision after a collective discussion with the other reviewers.
Best,
Reviewer qUzm
---
Reply to Comment 1.2.1:
Title: Thank you for the response
Comment: Dear reviewer qUzm
Thank you for your response! We again sincerely appreciate your efforts and time in reviewing and providing incisive comments on our paper.
We are pleased to hear that our rebuttal addressed your concerns well. If you have any further concerns, questions, or suggestions, please do not hesitate to let us know.
Thank you very much!
Authors | Summary: This paper presents a novel method for achieving consistent visual synthesis using a diffusion model. Specifically, the authors extend Score Distillation Sampling to accommodate more complex visual modalities, represented as multiple images. They introduce a principled method to jointly optimize these multiple samples (referred to as particles), ensuring that each sample matches the distribution of an image (evaluated by a pre-trained diffusion model) while maintaining consistency with each other. This methodology is rooted in Stein Variational Gradient Descent. The authors successfully demonstrate a substantial performance improvement in visual quality and consistency across a diverse range of visual editing and synthesis tasks, including panorama editing, video editing, and 3D editing.
Strengths: S1: The proposed method is well-founded and extends the original Score Distillation Sampling (SDS) concept to a novel setting that requires the simultaneous optimization of multiple images. Achieving consistency among different samples is non-trivial and often depends on domain-specific structure (e.g., NeRF in DreamFusion, cross-attention control in video synthesis). This paper introduces a more principled technique for consistent synthesis, marking a significant contribution to the field.
S2: The overall presentation of the paper is commendable, with a clear structure and explanation that makes it easy to follow.
S3: The method proposed is theoretically sound and delivers impressive results across a wide range of editing tasks. Besides its theoretical contribution, the method demonstrates strong practical applications, incorporating application-specific structure for panorama, 3D, and video. This demonstrates its potential as a solid baseline for future methods in the field.
Weaknesses: While the current approach achieves a degree of consistency by jointly optimizing different images using Stein Variational Gradient Descent, it doesn't necessarily guarantee flawless consistency. This is evidenced by the presence of artifacts at patch boundaries and flickering between video frames. Therefore, consistent synthesis may still necessitate the incorporation of more general structures, such as volumetric rendering, in future visual synthesis pipelines (3D or Video Editing).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This is not related to my evaluation, but could the authors compare this paper with a concurrent paper, ProlificDreamer [1]? Specifically, in Section 3.2, the authors mention the need to substitute the random noise with the score predicted by an unconditional model. I am interested in understanding whether this is related to the second noise prediction feature present in ProlificDreamer.
[1] Wang, Zhengyi, et al. "ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation." arXiv preprint arXiv:2305.16213 (2023).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer v8pz,
We sincerely appreciate your thoughtful comments, efforts, and time to improve our manuscript. We respond to each of your questions and concerns one-by-one in what follows. Please let us know if you have any comments/concerns that we have not addressed up to your satisfaction.
---
**[W1] How to ensure flawless and complete consistency going further from proper consistency between a set of images?**
At a high level, CSD aims at preserving the consistency between the source images when updating with pre-trained text-to-image diffusion models. In principle, we cast this as a multi-particle variational inference problem and apply the Stein Variational Gradient Descent algorithm. Through empirical validation, we show that CSD can generate images that follow the text instruction without breaking consistency. However, we acknowledge that our method might not guarantee 'perfectly flawless' consistency. We also agree with your suggestion that incorporating modality-specific generators, such as neural fields, or utilizing better priors to ensure consistency, e.g., optical flow for temporal consistency, would definitely be an interesting direction to explore when adapting text-to-image diffusion models to high-dimensional visual synthesis.
---
**[Q1] Additional comparison to a concurrent work, ProlificDreamer [1] to understand the role of subtracted noise**
Thank you for bringing up the concurrent work, ProlificDreamer, which is very relevant to our paper! Both CSD and ProlificDreamer are based on similar methods, but the tasks of interest are quite different, leading to different choices of the second noise for each: we are interested in the manipulation of various modalities including images, videos, and 3D scenes, while ProlificDreamer only considers text-to-3D generation. At their core, CSD and ProlificDreamer have identical objectives in distilling the generative prior of pre-trained text-to-image diffusion models using a weighted KL divergence (i.e., Eq. (4) in our manuscript). The particle-based optimization using the Wasserstein gradient flow [2] is then given by the difference between the score functions of the target distribution $p$ and the source distribution $q$:
$$d\mathbf{x} = (\nabla_{\mathbf{x}} \log p(\mathbf{x}) - \nabla_{\mathbf{x}} \log q_t(\mathbf{x}))dt $$
Here, the score of target distribution is given by a pre-trained diffusion model, while the score function of source distribution is not present as default. In our paper, we use the image-conditional noise estimate of InstructPix2Pix, which approximately estimates the score of source distribution. In ProlificDreamer, they resort to online fine-tuning of a diffusion model with respect to the current NeRF scenes. Notably, we additionally compute kernels and mix the scores of multiple samples, which enhances consistency. We believe that ProlificDreamer and CSD are complementary works that the idea of CSD can be used in ProlificDreamer to enhance 3D consistency in text-to-3D generation, which we leave as future work.
---
**Reference**
[1] Wang et al., ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv 2023
[2] Richard Jordan, David Kinderlehrer, and Felix Otto. “The variational formulation of the Fokker–Planck equation”. In: SIAM journal on mathematical analysis 29.1 (1998). Publisher: SIAM, pp. 1–17.
---
Rebuttal Comment 1.1:
Title: Please check others' review and authors' responses
Comment: Dear Reviewer v8pz,
Could you check the other reviewers' comments and authors' responses?
Do you have further questions for the authors?
Thanks,
Your AC
---
Rebuttal Comment 1.2:
Title: Reply to Response
Comment: Thank you for the response! It addressed my original questions. Other reviewers' concerns about the inference efficiency and qualitative results are still valid. I will finalize the rating based on discussions with the other reviewers.
---
Reply to Comment 1.2.1:
Title: Thank you for the response
Comment: Dear reviewer v8pz
Thank you for your response and we are pleased that we have addressed your concerns.
Furthermore, we would like to clarify that our method can achieve better computational efficiency compared to baselines while maintaining the superior editing quality simultaneously. We may have inadvertently given the impression that our method is less computationally efficient in our original submission, as computation time was not our primary focus.
However, this is not true: one can easily enhance its computational efficiency by reducing the total optimization iterations and utilizing a higher learning rate, allowing for more edits in each iteration. To further clarify this, we have conducted additional experiments focusing on both computational efficiency and editing quality. Please refer to [GR2-1] in the global response for more details.
In addition, regarding the qualitative results, could you provide more detailed information if possible? Your insights are invaluable, and giving further feedback on them will greatly help us to strengthen our manuscript. We believe that we thoroughly addressed the concerns about the qualitative results raised by other reviewers (qUzm, j3eS). Nonetheless, we are willing to address any remaining concerns you might have.
Once again, we deeply appreciate your time and efforts on our paper.
Sincerely,
Authors | Rebuttal 1:
Rebuttal: Dear reviewers and AC,
We sincerely appreciate your valuable time and effort spent reviewing our manuscript.
As the reviewers highlighted, we believe our paper presents a principled and novel method (v8pz, qUzm) that performs effective visual editing of versatile modalities (v8pz, qUzm, j3eS, yGH2), validated by extensive experiments both quantitatively and qualitatively (v8pz, qUzm, j3eS), and accompanied by a comprehensive presentation (v8pz, qUzm, yGH2).
Here, we collected the common questions that multiple reviewers have asked and responded to each question one-by-one in what follows. We also kindly ask you to check out the attached supplementary PDF file together. Please let us know if you have any comments/concerns that we have not addressed up to your satisfaction.
---
**[GR1] More detailed explanation for the results shown in Figure 2 (Reviewer qUzm, yGH2)**
Figure 2 illustrates two benefits of our method: spatial consistency and instruction-fidelity. Patch-wise editing of a panorama image results in spatial inconsistencies due to visible patch boundaries (Figure 2, top right). InstructPix2Pix+MultiDiffusion (Figure 2, middle row) mitigates this by using overlapping patches, but the edited image loses its fidelity to the instruction because each patch's score is diluted by the others: one patch may respond to the instruction much more or much less than the rest, so the overall effect is weakened. In contrast, CSD mitigates this diluting effect by optimizing with a subset of images. Thus, in Figure 2, given the same guidance scale, our method shows better fidelity to the instruction.
The effect of guidance scales on the instruction-fidelity of image editing is demonstrated in Figure 5 of our manuscript. Each dot on the graph represents a different guidance scale, and a direct comparison at each guidance scale shows a noticeable gap between InstructPix2Pix+MultiDiffusion and CSD-Edit in terms of CLIP directional scores. This underscores the superiority of our method in balancing source-target image consistency and instruction-fidelity across scales, thereby highlighting its controllability, which is particularly valuable given the subjective nature of achieving a desired edit. To improve the presentation, we have revised Figure 5 (now Figure 2 in the supplementary PDF) following the reviewers' comments and will include the revised figure along with a detailed explanation in our final manuscript.
---
**[GR2] Measurement and comparison of computation time (Reviewer qUzm, j3eS, yGH2)**
Regarding the computational efficiency of our method, we measure the computation times and compare them with the baselines of each task. All these evaluations were conducted on a single NVIDIA A100 80GB and AMD EPYC 7V13 64-Core Processor.
- For the panorama image editing experiments, we compare with the baseline InstructPix2Pix+MultiDiffusion. Note that the computation time of both methods depends on the input image resolution, and we show that our method becomes more efficient as the resolution increases. The table below shows how the total computation time (in seconds) varies with the size of the input image.
\begin{array}{lcc}
\text{Method} & \text{Resolution} & \text{Total time (sec.)} \newline
\hline
\text{InstructPix2Pix+MultiDiffusion} & 1920\times640 & 62 \newline
\text{CSD-Edit (Ours)} & 1920\times640 & 68 \newline
\hline
\text{InstructPix2Pix+MultiDiffusion} & 3968\times 4352 & 487 \newline
\text{CSD-Edit (Ours)} & 3968\times 4352 & 275 \newline
\end{array}
Note that the baseline method requires computing noise estimates of every patch at each diffusion step, while our method only requires computing a minibatch of patches per iteration.
- In the video editing experiments, we measure the total computation time of our method and the baseline methods to obtain the results shown in Figure 11. The results are reported in the table below:
\begin{array}{lc}
\text{Method} & \text{Total time (sec.)} \newline
\hline
\text{FateZero} & 192 \newline
\text{Pix2Video} & 77 \newline
\text{CSD-Edit (Ours)} & 423 \newline
\end{array}
Although our method requires more computation time than the baselines, users can stop the optimization early once they achieve a desired edit, an option the baseline methods lack since they rely on diffusion samplers.
- In the text-to-3D generation experiments, when directly comparing with DreamFusion (or SJC), there is a slight increase in computational cost due to the use of the LPIPS metric as the distance for the RBF kernel. For instance, it takes 1 hour to generate a 3D model with DreamFusion, while using CSD takes 84 minutes. We note that the increase is not substantial, and ours yields better quality with finer details, as shown in Table 3 and Figure 13 in the Appendix of our manuscript.
We will add this information to our final manuscript.
---
**[GR3] User study (Reviewer qUzm, j3eS)**
While we have primarily relied on objective metrics for assessing consistency and instruction-fidelity, we agree that a subjective user study could provide valuable insights, especially given the subjective nature of editing tasks. Thus, we conducted additional user studies in which we asked three questions to evaluate the editing methods: the consistency of the edited results, the frame-wise instruction-fidelity, and the editing quality. For each of the three studies, we asked 20 subjects to rank the different methods. As shown in Tables 3 and 4 in the supplementary PDF, our method consistently outperforms the others, achieving the best user preference across all three aspects. We commit to including these findings in the final version of our manuscript to provide a more comprehensive assessment of our proposed method.
Pdf: /pdf/4d30ed5b389a20583b1b252a256bb13ddf8063bc.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Transformer-based Planning for Symbolic Regression | Accept (poster) | Summary: The paper introduces TPSR, a Transformer-based Planning strategy for Symbolic Regression. TPSR incorporates Monte Carlo Tree Search into the transformer decoding process, enabling the integration of non-differentiable feedback such as accuracy and complexity. Experimental results show that TPSR outperforms existing methods in terms of fitting-complexity trade-off, extrapolation abilities, and robustness to noise.
Strengths: - The paper is well written and easy to understand.
- The idea of enhancing large-scale pre-trained Transformers with improved search capabilities is very promising in the context of symbolic regression.
- The model shows good performance both compared to the E2E baseline and the GP methods.
Weaknesses: - My main concern is about the novelty of the approach. A very similar idea has been recently investigated in [1] where the authors also combine MCTS with pre-trained Transformers. I would be grateful if the authors could clarify any eventual differences between the two approaches.
- The impact of $\lambda$ seems quite significant in your experiments. However, it is not clear to me how one should select it in practice.
[1] Kamienny, Pierre-Alexandre, Guillaume Lample, and Marco Virgolin. "Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search." (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness part above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback on our work. We appreciate your positive comments on the clarity and potential of our work in symbolic regression.
---
> * My main concern is about the novelty of the approach. A very similar idea has been recently investigated in [1] where the authors also combine MCTS with pre-trained Transformers. I would be grateful if the authors could clarify any eventual differences between the two approaches.
>
We understand your concern about the novelty of our approach and its potential similarity to the work [1]. Allow us to clarify the distinct differences between our Transformer-based Planning for Symbolic Regression (TPSR) and the DGSR-MCTS approach:
* **General Approach:** The general mechanism used for generating equations differs between DGSR-MCTS and TPSR. DGSR-MCTS exploits a pretrained mutation policy M to generate an expression by following a series of mutations starting from an empty expression (the root). In contrast, TPSR follows the seq2seq approach of E2E to generate the expression token by token. Consequently, TPSR uses the pretrained E2E model as its backbone, whereas DGSR-MCTS pretrains its mutation policy from scratch.
* **Definition of MCTS and Search Strategy:** One fundamental distinction lies in the definition and application of Monte-Carlo Tree Search (MCTS). In DGSR-MCTS, the search tree consists of full mathematical equations, with each node representing a distinct equation and edges corresponding to mutations between equations. In contrast, our TPSR employs MCTS as a decoding strategy in the context of the transformer model. Each node in the search tree of TPSR represents the current state of generated tokens, potentially forming non-complete sequences, with edges corresponding to mathematical operators or variables. As a result, the search tree of DGSR-MCTS with "n" nodes includes "n" different equations, while the TPSR search tree includes intermediate decoding sequences, and completed equations only exist at the terminal nodes. This distinction inherently leads to major differences in selection, expansion, and back-propagation mechanisms within the MCTS algorithm.
* **Parameter Update and Learning:** DGSR-MCTS utilizes MCTS to update and learn the distribution of mutations for a group of out-of-distribution datasets. The approach involves fine-tuning an actor-critic-like model to adjust the pre-trained model on a group of symbolic regression instances. On the other hand, TPSR uses the pre-trained transformer's learned distribution to guide the expansion during the search process, without updating any specific parameters for in-domain or out-of-domain equations (without fine-tuning). Consequently, the same settings and pre-trained model are applied to both in-domain and out-of-domain equations in TPSR.
* **Computation Time:** Another notable difference is the computational requirements of the two approaches. DGSR-MCTS involves pre-training a mutation policy, a critic network, and performing fine-tuning stages for these networks, leading to significantly higher computation time (a limit of 24hrs and 500K evaluations as stated in their original paper). In contrast, TPSR has substantially lower computation time and the number of evaluations, typically in the order of $10^2$ equations, taking approximately $10^2$ seconds (as shown in Fig. 6 and 7 of the main paper). This renders TPSR more suitable for applications where fast yet accurate equation discovery is critical.
[1] Pierre-Alexandre Kamienny, Guillaume Lample, Sylvain Lamprier, and Marco Virgolin. "Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search." (2023).
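To make the decoding-level distinction above concrete, the following minimal sketch (purely illustrative, not the TPSR implementation) runs MCTS over token *prefixes*: the uniform toy policy and shaped reward are hypothetical stand-ins for the pre-trained transformer's next-token distribution and the fitting/complexity score used in TPSR.

```python
import math
import random

# Illustrative MCTS over token prefixes (TPSR-style): each node is a partial
# token sequence, each edge appends one token, and only terminal nodes hold
# complete equations. VOCAB, TARGET, policy, and reward are toy assumptions.

VOCAB = ["x", "+", "1", "<eos>"]
TARGET = ["x", "+", "1", "<eos>"]  # pretend this is the best-fitting equation
MAX_LEN = 4

def policy(prefix):
    # Uniform toy prior; TPSR would query the pre-trained transformer here.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def reward(seq):
    # Shaped toy reward: fraction of the target prefix matched. TPSR instead
    # fits constants and combines fitting accuracy with equation complexity.
    match = 0
    for a, b in zip(seq, TARGET):
        if a != b:
            break
        match += 1
    return match / len(TARGET)

class Node:
    def __init__(self, prefix):
        self.prefix, self.children = prefix, {}
        self.visits, self.value = 0, 0.0

    def terminal(self):
        return bool(self.prefix) and (self.prefix[-1] == "<eos>" or len(self.prefix) >= MAX_LEN)

def select(node, c=1.4):
    # PUCT-like selection: mean value plus a prior-weighted exploration bonus.
    prior = policy(node.prefix)
    return max(node.children.values(),
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * prior[ch.prefix[-1]] * math.sqrt(node.visits) / (1 + ch.visits))

def rollout(prefix):
    # Random completion; TPSR would sample a completion from the transformer.
    seq = list(prefix)
    while len(seq) < MAX_LEN and (not seq or seq[-1] != "<eos>"):
        seq.append(random.choice(VOCAB))
    return reward(seq)

def mcts(iterations=2000):
    random.seed(0)
    root = Node([])
    for _ in range(iterations):
        node, path = root, [root]
        while node.children:                     # selection
            node = select(node)
            path.append(node)
        if node.terminal():
            r = reward(node.prefix)
        else:                                    # expansion + simulation
            for tok in VOCAB:
                node.children[tok] = Node(node.prefix + [tok])
            node = random.choice(list(node.children.values()))
            path.append(node)
            r = reward(node.prefix) if node.terminal() else rollout(node.prefix)
        for n in path:                           # back-propagation
            n.visits += 1
            n.value += r
    out, node = [], root                         # read out most-visited path
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        out.append(node.prefix[-1])
    return out

print(" ".join(mcts()))
```

Note how a tree with n nodes here contains mostly incomplete prefixes, whereas in DGSR-MCTS every one of the n nodes would be a full equation.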
---
> * The impact of $\lambda$ seems quite significant in your experiments. However, it is not clear to me how one should select it in practice.
>
Indeed, the impact of $\lambda$ as the complexity regularizer is significant in most cases. We evaluated and discussed different values of $\lambda$, as it affects the trade-off between accuracy and complexity, as shown in Table 1 and Fig. 10 (Appendix D.1). The appropriate choice of this hyperparameter may depend on the specific use case: whether one prioritizes finding an accurate function at the cost of higher complexity, or emphasizes interpretability and equation simplicity over marginal gains in accuracy. However, we agree that proposing default settings for hyperparameters would be beneficial for general use. Based on our results, particularly Fig. 10 in Appendix D.1, we conclude that setting $\lambda = 0.1$ achieves high accuracy while reducing complexity and avoiding overfitting (please also see Fig. 6). We will make sure to include this discussion in the updated manuscript.
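As a concrete (hypothetical) illustration of this trade-off, the snippet below scores candidate equations with an accuracy term minus a $\lambda$-weighted complexity penalty; the exact reward used in TPSR may differ in form, but the role of $\lambda$ is the same.

```python
# Hypothetical sketch of a lambda-regularized selection criterion: reward is
# fitting accuracy (R^2, clipped at 0) minus lambda times equation complexity
# (e.g., token count). Candidate equations and scores below are made up.

def regularized_reward(r2, complexity, lam):
    return max(r2, 0.0) - lam * complexity

def best_equation(candidates, lam):
    # candidates: list of (equation string, R^2 score, complexity)
    return max(candidates, key=lambda c: regularized_reward(c[1], c[2], lam))

candidates = [
    ("x + 1",                 0.90, 3),   # simple, slightly less accurate
    ("x + 1 + 0.01*sin(9*x)", 0.93, 11),  # complex, marginally more accurate
]

print(best_equation(candidates, lam=0.1)[0])  # lambda = 0.1 favors the simple form
print(best_equation(candidates, lam=0.0)[0])  # lambda = 0 picks raw accuracy only
```

With $\lambda = 0.1$, the simple candidate wins (0.90 − 0.3 = 0.60 vs. 0.93 − 1.1 = −0.17); with $\lambda = 0$, the marginally more accurate but overfit-prone one is selected.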
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my review.
I have raised the rating after reading the rebuttal. I would suggest the authors update the manuscript to better clarify the above points.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for reviewing our rebuttal. We are glad that our response has resolved your concerns and appreciate the raised score. We will make sure to update the manuscript accordingly and include the above points in the updated version.
---
Rebuttal 2:
Title: Looking forward to discussion
Comment: Dear Reviewer ijKU,
Thank you for your feedback during the review process! If there are any concerns or questions, please do not hesitate to let us know - before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you,
Paper13018 Authors | Summary: Authors propose a transformer-based planning (using MCTS) strategy to solve symbol regression task. Different from traditional decoding method, the new method is able to integrate non-differentiable feedback into the transformer-based process of equation generation. Experiments demonstrate the significent performance.
Strengths: Distilling symbolic equations from noisy data is intractable. Recent progress has been achieved by training neural networks to generate candidate symbolic expressions, which is really promising.
This work combines Monte Carlo Tree Search with a pretrained transformer-based symbolic regression model for equation generation. Compared with genetic programming methods, the new approach not only leverages pre-trained priors but also incorporates feedback during the generation process.
Weaknesses: There is not much originality in the new method. It demonstrates a new application of a combination of two existing methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are there experiments showing how performance changes if the selected set of mathematical operators and symbols is changed?
Can out-of-distribution data be identified and used to improve the symbolic regression process?
It might not be difficult for a symbolic regression method to find laws such as f = ma. Could it find E = mc^2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Monte Carlo Tree Search is statistical and the pre-trained transformer is trained on data; the integration of the two methods remains within the traditional paradigm of machine learning, so it may not work well for out-of-distribution data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and questions on our paper. We appreciate your positive remarks regarding the importance of this study.
---
> * Are there experiments showing how performance changes if we change the selection set of mathematical operators and symbols?
>
In symbolic regression, the choice of mathematical operators and symbols significantly impacts the equations obtained. Our work proposes a transformer-based decoding strategy with MCTS, which requires adhering to the vocabulary of operators defined by the pre-trained transformer SR model (As mentioned in Appendix E, limitations, line 768). This constraint ensures compatibility with the pre-trained model but may restrict the set of available mathematical symbols.
Generally, the selection of mathematical symbols in symbolic regression involves a trade-off between expressivity and problem complexity. Larger vocabularies provide greater expressivity, allowing the method to represent more diverse equations. However, this increase in expressivity can also enlarge the search space, making the problem more complex. To strike a balance, most recent symbolic regression works use common mathematical operators that are prevalent in benchmark problems and scientific datasets, such as the Feynman dataset.
---
> * Can out-of-distribution data be identified and used to promote the symbolic regression process?
>
We believe that this is a very important question in symbolic regression, especially regarding pretrained SR models. We would like to discuss this from two aspects.
First, we would like to discuss the comparison of our TPSR with pretrained models. Pretrained symbolic regression methods, in contrast with search methods, are trained with a large set of synthetic equations, originating from a distribution $\Omega$. In fact, these equations are generated using an expression generator with specific settings, and the points are sampled in specific ways. Therefore, equations and/or datasets that are not generated from the same generator can be considered out-of-distribution. In our work, we evaluate our TPSR model and the pretrained E2E model on both in-domain equations from the same distribution of data, as well as SRBench equations which are considered out-of-distribution compared to the training samples. As shown in Table 1 of the main paper, using lookahead planning in TPSR significantly improves the performance of the pretrained E2E model on out-of-distribution SRBench datasets. We have also observed that the performance improvement gap between TPSR and E2E is higher for SRBench datasets compared to this gap in in-domain datasets (also discussed in lines 271-275).
Second, from a broader perspective, since pretrained models have learned parameters conditioned on the input datasets and equations (in comparison to search methods), while they have the advantage of leveraging priors learned from large-scale data, they are limited in handling datasets that are very far from the training distribution and discovering very different equation forms from what was generated during the training. TPSR can help to remedy this issue by searching and lookahead planning in the decoding stage; however, it is still limited to the degree that it can be applied to out-of-distribution datasets. This is because of the dependency of TPSR on the fixed priors of the pretrained SR model. We believe that improving SR models for this purpose is an exciting line of research that should be considered for future works, as also mentioned in our conclusion section (lines 360-362). Some potential ideas include fine-tuning the weights of the SR model using non-differentiable rewards for the new out-of-distribution datasets to improve its performance.
---
> * It might not be difficult for a symbolic regression method to find laws such as f = ma. Could it find E = mc^2?
>
Thank you for raising this thought-provoking question. We believe that this question can be examined from various angles, and we have tried to address your concerns below.
Symbolic regression methods operate by fitting equations to datasets of observations (features X and corresponding y). When considering the applicability of these methods, it's important to acknowledge that the original benchmark datasets, including well-known cases like the Feynman dataset, often do not span extreme ranges of values for X. As an example, Equation I.48.20 from the Feynman dataset ($e = \frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}$) samples values in the range of $U(1,5)$ for $m$, $U(3,10)$ for $c$, and $U(1,2)$ for $v$. In contrast, the value of $c$, representing the speed of light, is a physical constant with the much larger value of $2.998\times10^8$, which is far from the simplified range covered in the Feynman dataset. In fact, when operating under simplified assumptions and utilizing large quantities of synthetic observations, it is possible to recover complex equation forms using advanced symbolic regression models. However, real-world scenarios present greater complexity due to factors such as the diversity, ranges, and precision of observations.
Besides the range of observations, situations like the equation $E=mc^2$ pose a challenge when assuming $c$ to be a constant. In such cases, identifying $c$ requires additional constraints, such as employing dimensional analysis. Notably, due to the inherent constant nature of $c$, even if a diverse dataset encompassing various value ranges for $m$, $c$, and $E$ is employed, the relationship between $m$ and $e$ would exhibit a linear correlation. Accordingly, we recognize that there are inherent limitations in scientific discovery when using contemporary symbolic regression models. Addressing these limitations offers an exciting avenue for future advancements in the field.
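To make this point concrete, here is a tiny numerical illustration (our own toy example, not from the paper or its experiments): when $c$ is held constant, observations generated from $E = mc^2$ are exactly linear in $m$, so any regressor can only recover a slope equal to $c^2$ and cannot disentangle the physical meaning of the constant.

```python
import numpy as np

# Toy illustration (not the authors' experiment): with c fixed, (m, E)
# pairs from E = m * c**2 are perfectly linear in m, so a symbolic
# regressor can at best recover E = k * m with k = c**2.
c = 2.998e8                    # speed of light (m/s), constant in every sample
m = np.linspace(1.0, 5.0, 50)  # masses sampled over a simple range
E = m * c**2                   # generated observations

# A plain linear fit already explains the data exactly.
slope, intercept = np.polyfit(m, E, 1)
print(np.isclose(slope, c**2), abs(intercept) < 1e-4 * c**2)
```

The fitted slope matches $c^2$ and the intercept is negligible, which is exactly the linear-degeneracy issue described above.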
We hope our response addresses your concern. Please do let us know if we have correctly interpreted your concern or if further clarification is needed.
---
Rebuttal 2:
Title: Looking forward to discussion
Comment: Dear Reviewer QJmi,
Thank you for your feedback during the review process! If there are any concerns or questions, please do not hesitate to let us know - before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you,
Paper13018 Authors | Summary: This paper proposes to incorporate Monte Carlo Tree Search (MCTS) on top of pretrained transformer-based SR models to guide equation sequence generation. This addresses the challenge that existing methods rely purely on the pretrained transformer’s output without accounting for external performance requirements. In MCTS, the authors develop a reward function to encourage a balance between fitting accuracy and regulated complexity in SR generation. Also, caching tricks are employed to improve the implementation efficiency. SR benchmark datasets are used to demonstrate the improved performance of the proposed method over the state-of-the-art.
Strengths: Including performance feedback in the pipeline of SR equation generation from pre-trained transformer-based SR models is well-motivated. To achieve this, the paper proposes including MCTS as the decoder in this pipeline, imparting the external requirement via a reward function in MCTS, to eventually improve the performance of equation generation. The extensive experiments and baseline comparisons clearly show the effectiveness of the proposed method in terms of fitting-complexity trade-off, extrapolation ability, and robustness to noise.
The presentation of techniques is clear, and the evaluation in my opinion is solid. Overall, this paper makes a good contribution in the SR field.
Weaknesses: I only have two comments:
- In Equ. (1), how to select $\beta(s)$? It would be better to show its effect on the performance in the ablation study as well.
- Currently, the method still relies on a pre-trained transformer SR model. The authors could give some perspective about how (or if it is possible) the MCTS can also be incorporated in the transformer training (or fine-tuning) process.
-A typo in line 151: trnasformer--> transformer
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See my comments in the above section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See my comments in the above section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our motivation and contribution. The typo that you raised has been corrected and the response to your comments is provided below.
---
> * In Equ. (1), how to select $\beta(s)$? It would be better to show its effect on the performance in the ablation study as well.
>
In response to your comment, we conducted an ablation study with different values of $\beta(s)$ on the Feynman SRBench datasets. Our findings, illustrated in Fig. 1 of the response PDF (please check the global response), show: (1) small values of $\beta(s)$ (e.g., $\beta(s)$=0) offer lower performance, probably due to limited exploration. (2) large values of $\beta(s)$ (e.g., $\beta(s)$=100) also affect performance negatively due to excessive exploration compared to exploitation. (3) Optimal results are seen for $\beta(s)$ between 0.1 and 10. Also, Fig. 1(b) shows that as $\beta(s)$ increases, more equation sequence candidates emerge, signaling more exploration. However, beyond $\beta(s)$>0.1, the candidate count does not grow much, possibly due to repetitive sequences frequently activating the caching mechanisms. We will make sure to include this ablation study in the updated version of the paper.
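To illustrate the role of an exploration coefficient like $\beta(s)$, here is a minimal, hypothetical sketch of a PUCT-style selection score in which the coefficient scales the prior-weighted exploration bonus. All names and the exact functional form are our own assumptions, not the authors' code; it only shows qualitatively why very small or very large coefficients skew selection.

```python
import math

def uct_score(q_value, prior, visits_parent, visits_child, beta):
    """Hypothetical PUCT-style score: exploitation term q plus a
    beta-scaled exploration bonus driven by the model prior."""
    exploration = beta * prior * math.sqrt(visits_parent) / (1 + visits_child)
    return q_value + exploration

def select_action(children, beta):
    """Pick the child index maximizing the score; each child carries its
    running value 'q', model prior 'prior', and visit count 'n'."""
    n_parent = sum(ch["n"] for ch in children) + 1
    return max(range(len(children)),
               key=lambda i: uct_score(children[i]["q"], children[i]["prior"],
                                       n_parent, children[i]["n"], beta))

children = [{"q": 0.9, "prior": 0.1, "n": 10},  # well-explored, high value
            {"q": 0.2, "prior": 0.8, "n": 0}]   # unexplored, high prior
print(select_action(children, beta=0.0))   # beta = 0: pure exploitation
print(select_action(children, beta=10.0))  # large beta: exploration dominates
```

With $\beta=0$ the well-explored high-value child wins; with a large $\beta$ the unexplored high-prior child is selected instead, mirroring the under-/over-exploration behavior seen in the ablation.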
---
> * Currently, the method still relies on a pre-trained transformer SR model. The authors could give some perspective about how (or if it is possible) the MCTS can also be incorporated in the transformer training (or fine-tuning) process.
>
We thank the reviewer for raising this very interesting question! Incorporating MCTS into the training/fine-tuning of the transformer SR models is indeed an intriguing direction for future work (we have also briefly mentioned this in the conclusion section (Lines 361-362)). To train/fine-tune transformer SR models with non-differentiable equation-specific feedback, one way is to employ deep reinforcement learning techniques such as policy gradient so that we can backpropagate to update model weights. However, if we want to use the search ability of MCTS instead of RL and policy gradient methods, following these directions might help: **(1)** Utilizing the transformer SR model as both the policy and value network. This actor-critic process can predict action probabilities, hence guiding MCTS to more promising search paths. Additionally, it can estimate the value of specific states, assisting MCTS during reward backpropagation. **(2)** Leveraging trajectories generated from MCTS, or its rollouts, to enrich the training set for the transformer model. These trajectories, particularly the novel solutions discovered during MCTS explorations, can expose the model to non-obvious equation formulations it might not have encountered in traditional training sets. **(3)** Crafting a co-adaptation feedback loop between MCTS and the transformer model where MCTS and the transformer model parameters can be adjusted iteratively based on each component's performance feedback.
We would like to point out that these potential directions will certainly involve intricate challenges and may demand rigorous experimentation to ascertain their efficacy. We acknowledge the constructive nature of your comment, and it further strengthens our resolve to delve into this direction in future works.
---
Rebuttal 2:
Title: Looking forward to discussion
Comment: Dear Reviewer 6B7y,
Thank you for your feedback during the review process! If there are any concerns or questions, please do not hesitate to let us know - before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you,
Paper13018 Authors | Summary: This submission proposes a neural network-based approach to symbolic regression (SR), namely generating equations as sequences. It leverages the power of pretrained SR transformer models and the MCTS algorithm to tradeoff the fitting accuracy and equation complexity. Experimental results on the SRBench and the In-domain Synthetic datasets demonstrate that the proposed approach outperforms the backbone E2E transformer model.
Strengths: Soundness:
The techniques employed in the proposed approach are sound. The approach is able to use any non-differentiable target function to guide the training of a neural model for symbolic regression. Experiments demonstrate that it outperforms a state-of-the-art transformer model which is used as the backbone in the proposed approach, indicating that the implementation of the proposed approach is likely to be correct.
Presentation:
The submission is in general well written and organized, easy to follow.
Weaknesses: Presentation:
There is a minor issue with the term single-instance symbolic regression introduced in Related Work. According to the description of the algorithms therein (GP, RL, GP+RL, and MCTS), the difference between them and the proposed approach mainly lies in not employing pretrained knowledge. Thus, the term single-instance is strange and does not convey the true difference from the proposed approach.
Contribution:
The proposed approach seems to be a combination of the E2E transformer model [18] and the MCTS framework [26]. Although the transformer model can be replaced with other neural network-based models, the contributions beyond [18] and [26] are not significant. Moreover, the current evaluation cannot confirm that either the proposed approach is a general framework for enhancing any neural network-based model for symbolic regression, or the approach achieves the truly state-of-the-art performance. For the former confirmation, the authors need to compare multiple implementations having different backbone models with the original backbone models. For the latter confirmation, the authors need to compare the proposed approach with more state-of-the-art solutions such as [30] and [31].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why are the algorithms GP, RL, GP+RL and MCTS used for symbolic regression called single-instance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: As far as I can see, the authors have adequately addressed the limitations through sufficient discussions in the supplemented material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable review of our submission. We appreciate your positive feedback and would like to address your concerns.
**Terminology: Single-Instance SR.** We acknowledge that the distinction might not be clearly conveyed by the term "single-instance SR" itself. The intention behind using this term was to highlight that algorithms like GP and RL for SR typically focus on finding the best-fit equation for a "single" dataset at hand, without leveraging pretrained knowledge from a large set of datasets. However, we understand that this term may not fully capture the differentiation from the methods discussed in the next paragraph of Related Work which use pretrained knowledge for SR. To enhance clarity, we will revise this and use "symbolic regression without learned priors" instead of "single-instance symbolic regression". We hope that this clarification addresses your concern. If you have any further suggestions or insights, we would appreciate your input.
**Contribution.** While we integrate both pretrained SR models and MCTS, our contribution goes beyond merely combining two existing techniques. The MCTS planning employed in our work takes inspiration from similar methods used in NLP, such as [34], where MCTS is applied for text generation. However, utilizing MCTS planning for equation generation in SR introduces significant differences from its application in NLP. Incorporating the MCTS planning algorithm into the pretrained SR backbone demands a thorough redesign and the introduction of novel techniques to effectively address the distinctive challenges posed by this integration. We firmly believe that this differentiation constitutes an innovative aspect in itself (refer to Appendix C.2 and C.3, as well as Figures 8 and 9).
We want to again emphasize the following contributions:
* We are the first to combine MCTS as a planning-based decoding module with pretrained SR models for the task of equation generation. TPSR not only achieves significantly better performance than the backbone pretrained model but also holds competitive performance compared to the established baselines.
* We have designed the interfaces between these two components (pretrained SR model and planning search) effectively and dealt with unique challenges of making the framework computationally more efficient.
* We have also showcased the versatility of TPSR across different objectives, from fitting accuracy to complexity. Notably, TPSR allows for optimizing equation learning based on varying objectives without necessitating finetuning of the large pretrained SR models.
**Model-Agnostic and SOTA Confirmation.**
In the current version of the manuscript, we have explicitly indicated that our framework is model-agnostic and holds the potential to enhance sequence generation in a variety of pretrained SR models. This includes both existing SR models and potential future models that might exhibit greater capabilities. However, we originally provided the results of using TPSR only on the E2E backbone [18], as E2E is the SOTA pretrained SR model with open source code, and publicly accessible model's weights and logits. In response to your valuable suggestions, we explored integrating TPSR with other pretrained SR backbones to illustrate its model-agnostic enhancement capabilities. Consequently, we integrated our TPSR planning strategy with "Neural Symbolic Regression that Scales" (NeSymReS) by Biggio et al. [16], a pioneering work that proposes large-scale pretraining for SR. However, NeSymReS does have some limitations, including its acceptance of datasets with a maximum of three dimensions. Also, this approach predicts equation skeletons and requires a more complex constant optimization process. Despite these challenges, we evaluated both NeSymReS and its TPSR-enhanced version using a dataset comprising 52 Feynman equations with a dimensionality of $d \leq 3$. Results are provided in Table 1 of the response PDF (please check global response), showing that TPSR has significantly improved the fitting accuracy without changing the average complexity of the equations when $\lambda$=0.1 and with a slight increase when $\lambda=0$. We will include these results in the appendix and refer the readers to that section when discussing the model-agnostic feature of our framework. We would like to remark that due to the limitation of NeSymReS to very low-dimension problems, it cannot be evaluated on the SRBench datasets for which we have provided our main results.
Besides confirming that our TPSR framework is indeed model-agnostic and can be used as a planning module for future models, we have shown in our manuscript that the current TPSR model applied on E2E performs SOTA in some benchmarks and is a competitor in others. In line with your suggestion, we also investigated a comparison with the mentioned SOTA works [30] and [31]. We would like to note that the code for [31] was released two months after the NeurIPS submission deadline. Additionally, while the code of [30] was partially released approximately a month before our submission (as part of the DSO package), we observed that at least two out of the five main components of this model (pretrained weights and AI-Feynman), along with some details, were not included in their current release. We tried our best to conduct new experiments to compare our results with [30], utilizing their current code version. Results (shown in Table 2 of the attached PDF (please check global response)) represent that TPSR outperforms [30] in black-box datasets, and performs competitively in the Feynman dataset. However, we would like to note that since the current results of [30] are evaluated without some of the main components of the model, we think it is not a fair comparison to be included in the main results of our paper. Also, we again would like to emphasize that being model-agnostic is a much more important aspect of this work as the SR models are rapidly improving.
---
Rebuttal 2:
Title: Looking forward to discussion
Comment: Dear Reviewer zZmW,
Thank you for your feedback during the review process! If there are any concerns or questions, please do not hesitate to let us know - before the author discussion period ends. We will be happy to answer them during the discussion.
Thank you,
Paper13018 Authors | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for dedicating their time and expertise to review our manuscript. Please refer to the attached PDF where we have included the Tables and Figures referenced in the subsequent responses. We hope our clarifications address your concerns and look forward to further discussions.
Pdf: /pdf/bb4601c55a6af2871eadd2e6efed7c87adc6f45f.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper proposes to combine pretrained symbolic regression models with an MCTS procedure to improve SR performance without finetuning the pretrained models. Experiments are conducted to demonstrate the improved performance of the proposal.
Strengths: 1. Proposed a new MCTS-based decoding procedure to improve pretrained SR models' performance without finetuning the models.
2. The paper is clearly written and easy to follow.
3. The experiments clearly demonstrate the performance improvement over the baselines and the E2E model's decoding.
Weaknesses: 1. A few key building blocks need to be summarized from the literature to make the paper self-contained, e.g., how the datasets are embedded.
2. The methodology contribution is minor, as only MCTS is introduced on top of SR models, although experimental performance improvement is observed. Although this is a valuable contribution, it might not meet the bar for NeurIPS.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Please provide more statistics of equation (1)'s terms. For example, would N(s) be mostly 1s in your setting? Please be more specific about what "visit count" means exactly. Will a sequence generated by the beam-search sub-routine be counted as a visit? Will a cache hit be counted as a visit during the beam-search sub-routine?
2. Figure 7, please clarify how you count the number of generated candidates. Are the equations generated by the beam-search sub-routine counted? Will this difference render the results differently?
Minors:
1. In the abstract, "GP-based methods" should use the full name of GP for broader readers' convenience.
2. Line 48, there should be an indent space at the beginning of the sentence.
3. In section 3, to be self-contained, please succinctly re-iterate the key components of the underlying SR pretrained models, e.g. dataset embedding.
4. Line 188, please briefly explain why this is still a Q function in standard MDP framework.
5. Section 4, please be specific whether E2E and TPSR are using exactly the same experiment settings except the MCTS and beam search difference. If possible, a table in the supplemental materials comparing the experiment settings across different approaches might be helpful.
6. Line 244, in the equation of R^2, is \bar{y} the average value of y in N_{test}? Please clarify.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and thoughtful questions. Please find our answers below.
**Summary of Pretrained SR Details.** We adopted the pretrained SR model backbone from [18], using its embedding module to embed data points, and leveraging Transformers encoder and decoder modules for representation and expression generation. Given the potential for large input sequences with tokenized numeric data, and the quadratic complexity of Transformers, [18] introduced a linear embedder module to map inputs to a single embedding space before feeding them to the Transformers encoder. We left out these details because of page limits and relied on references. However, we agree on the importance of clarity. We'll make sure to include a summary of these key points in our updated paper.
**Methodology Contribution.** We hope the clarifications provided below address your concern about contribution.
Integration is Not Simply Concatenation: While we leverage both MCTS and pretrained SR models, our contribution isn't a straightforward 'stacking' of the two. The fusion demands a meticulous redesign, modification, and the introduction of new methods to counter the distinct challenges of such integration. This distinction, we believe, is an innovation in itself and is elaborated upon in Appendix C.2 and C.3, with visual insights provided in Fig. 8 and 9.
Differentiating from Related Works: As highlighted in our related works, other works have used MCTS and LLMs for Planning in NLP. A notable example, [34], merges MCTS with a pre-trained discriminator.
This discriminator can assess both partial and complete states, streamlining its combination with MCTS to determine a node's (or state's) value. Contrarily, our SR equation generation model only allows evaluations and feedback upon equation completion, complicating and increasing the cost of the planning process. We'll need to generate complete equations using beam search simulations and design caching mechanisms to reduce repetitive generation calls to the pretrained SR model.
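To make the caching idea concrete, here is a minimal sketch of memoizing completed rollouts keyed by the partial token prefix. This is purely illustrative (class and function names are our own assumptions, not the authors' implementation); the point is that a repeated prefix is served from the cache instead of triggering another expensive beam-search completion call to the pretrained model.

```python
class RolloutCache:
    """Illustrative cache mapping a partial token prefix to a previously
    completed equation sequence, avoiding repeated beam-search calls."""
    def __init__(self, complete_fn):
        self.complete_fn = complete_fn  # expensive beam-search completion
        self.store = {}
        self.hits = 0                   # served without a new model call
        self.calls = 0                  # actual completion calls made

    def complete(self, prefix):
        key = tuple(prefix)
        if key in self.store:
            self.hits += 1
            return self.store[key]
        self.calls += 1
        seq = self.complete_fn(prefix)
        self.store[key] = seq
        return seq

# Toy stand-in for beam search: just append an end-of-sequence token.
cache = RolloutCache(lambda p: list(p) + ["<eos>"])
cache.complete(["add", "x", "mul"])
cache.complete(["add", "x", "mul"])  # same prefix -> served from cache
print(cache.calls, cache.hits)
```

Here the second request for the same prefix is a cache hit, so only one completion call is ever made.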
We want to again emphasize the following contributions:
* We are the first to combine MCTS as a planning-based decoding module with pretrained SR models for the task of equation generation. TPSR not only achieves significantly better performance than the backbone pretrained model but also holds competitive performance compared to the established baselines.
* We have designed the interfaces between these two components (pretrained SR model and planning search) effectively and dealt with unique challenges of making the framework computationally more efficient.
* We have also showcased the versatility of TPSR across different equation-specific goals, from fitting accuracy to complexity. Notably, TPSR allows for optimizing equation learning based on varying objectives without necessitating finetuning of the large pretrained SR models.
**Number of Visits Clarification.** The term $N(s)$ in equation (1) represents the "visit count" of state $s$, indicating the number of times that state $s$ has been encountered in the tree search during decoding. In our experiments, the value of $N(s)$ varies greatly depending on the complexity of the symbolic regression task and the state $s$ at hand. For simpler problems, or at early stages of the tree search, $N(s)$ might indeed be closer to 1, as states would not have been explored as thoroughly. For more complex problems, or deeper into the search, $N(s)$ can increase significantly as the same state may be visited multiple times in search. In our MCTS setting, a "visit" means that a state-action pair $(s,a)$ has been passed through during the tree search, and the corresponding child state $s'$ has been added to the tree. Sequences that are generated as part of the beam search sub-routine of simulations in the evaluation stage of MCTS are not directly considered as visits to the nodes corresponding to these sequences. Instead, they serve the purpose of completing the partial equation to allow for feedback computation. As for cache hits, they are also not counted as visits. The reason is that caching in this context is used to save computation by storing previously computed values, and a cache hit simply means retrieving a stored value rather than performing a new visit.
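A minimal sketch of the visit-count and value bookkeeping described here (purely illustrative; the authors' actual update rules may differ): during backpropagation, each state on the path from the root to the evaluated node has its count $N(s)$ incremented and its $Q$ value updated with the completed equation's reward.

```python
from collections import defaultdict

# Illustrative MCTS bookkeeping: N counts how often each state was passed
# through in the tree search; Q keeps a running average of rewards for
# each (state, action) pair along the backpropagated path.
N = defaultdict(int)
Q = defaultdict(float)

def backpropagate(path, reward):
    """path: (state, action) pairs from the root to the evaluated node.
    The reward of the completed equation is pushed back up the path."""
    for state, action in path:
        N[state] += 1
        # incremental running-average update of Q(s, a)
        Q[(state, action)] += (reward - Q[(state, action)]) / N[state]

path = [("root", "add"), ("root.add", "x")]
backpropagate(path, reward=1.0)
backpropagate(path, reward=0.0)
print(N["root"], Q[("root", "add")])
```

After two backpropagations through the same path, the root has been visited twice and its Q value is the average of the two rewards; beam-search completions and cache hits never enter these counts, matching the clarification above.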
**Number of Generated Candidates.** It's important to note that the visit count is just used in the selection step of the search to promote exploration (as explained in equation (1) ), while the number of generated equation candidates (shown in Fig. 7 as budget), refers to the total number of complete equation sequences that have been generated by each method, i.e., the sample size in the E2E with sampling decoding baseline, and the number of function calls of beam search sub-routine in TPSR (refer to Alg. 1), excluding instances where cached sequences were identified and utilized through sequence caching.
Thanks for raising these insightful questions! We will make sure to include clarification on these points in the main paper.
**Response to Minor Comments:**
We will address each of the points you raised in the updated version. Answering some of the questions:
* **Line 188:** The term $Q$ function here represents the value associated with each node/state $s$ when action $a$ is taken (leading to node $s'$). If the node is part of the trajectory to the current state's root, this value is updated after the backpropagation step.
* **Section 4:** We assure that both E2E and TPSR use the same experimental settings, with the only difference being the MCTS and beam search/sampling implementations. Some details of the settings are already included in Lines 251-261. We agree that adding a table would make it more clear. We'll make sure to include it in the appendix of the updated version.
* **Line 244:** You are correct. The reported $R^2$ is on the test set, therefore, $\bar{y}$ represents the average values of $y$ in $N_{test}$. We will add a brief note to clarify this.
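For concreteness, a small sketch of the test-set $R^2$ computation as clarified above, with $\bar{y}$ the mean of $y$ over the test points (a standard definition; variable names are our own):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination on the test set:
    R^2 = 1 - sum((y - yhat)^2) / sum((y - ybar)^2),
    where ybar is the mean of y over the test points."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    y_bar = y_true.mean()
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_bar) ** 2)
    return 1.0 - ss_res / ss_tot

y_test = [1.0, 2.0, 3.0, 4.0]
print(r2_score(y_test, y_test))                # perfect fit
print(r2_score(y_test, [2.5, 2.5, 2.5, 2.5]))  # predicting the mean
```

A perfect fit gives $R^2 = 1$, while always predicting $\bar{y}$ gives $R^2 = 0$, which is the baseline this metric is measured against.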
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my review. I've updated my scores slightly. Please keep improving the paper.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for reviewing our rebuttal and raising the score. We will make sure to update the manuscript in line with your suggestions.
---
Rebuttal 2:
Title: Looking forward to discussion
Comment: Dear Reviewer i1kS,
Thank you for your feedback during the review process! If you have any remaining concerns or questions, please do not hesitate to let us know before the author discussion period ends. We will be happy to address them during the discussion.
Thank you,
Paper13018 Authors | null | null | null | null | null | null |
Convex-Concave Zero-Sum Markov Stackelberg Games | Accept (poster) | Summary: This paper develops policy gradient methods that use stochastic gradient estimates from the trajectories of play to compute, in polynomial time, Stackelberg equilibria in convex-concave games, most notably a certain class of reach-avoid problems. The authors also demonstrate through experiments the benefit of Stackelberg equilibria over their simultaneous-move counterparts in reach-avoid problems.
Strengths: Prior work by Goktas and Greenwald [23] addressed the problem of computing Stackelberg equilibria in a certain class of min-max optimization problems using an exact gradient oracle. The present paper extends those prior results by providing polynomial-time guarantees even when only stochastic gradient estimates from the trajectories of play are available, filling a gap that was left in prior work. Moreover, the paper nicely motivates the underlying problem, providing a number of concrete applications, especially with the experiments of Section 5 that demonstrate the benefit of computing Stackelberg versus Nash equilibria in a certain class of problems. Overall, the paper addresses an important problem with a theoretically sound approach, and provides non-trivial improvements over the prior state of the art.
The paper is also well-written, and the ideas and concepts are clearly exposed. The results are also accurately placed into the existing literature.
Weaknesses: The main weakness of this paper is that the overall contribution is arguably somewhat incremental in light of prior work, especially papers [23] and [25]. The techniques employed to extend the analysis from exact to stochastic gradient information are overall standard, and the main results are certainly not surprising. That being said, the paper closes a theoretical gap left from prior work, and I do believe that the technical contribution is non-trivial.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Some minor issues:
1. Line 12: Missing space before the first set of references
2. Line 50: A Stackelberg equilibria -> A Stackelberg equilibrium
3. Lines 59-60: Technically this is a pseudo-polynomial time algorithm since the dependence on $1/\epsilon$ is polynomial
4. Why are you using the notation $\mathbb{N}_+$ instead of just $\mathbb{N}$? (those are presumably the same sets)
5. Lines 132 - 134: The game being convex-concave is surely not the only condition that makes the problem amenable to first-order methods
6. Theorem 3.1: then, in expectation over all runs of the algorithm, then (I would remove the second repetition of the word "then")
7. Line 673: Missing dot
8. Line 694: as as
9. There are many overfull equations in pages 21-23 of supplementary; I strongly encourage to fix those
10. Overall there are missing punctuation marks in the equations throughout the entire paper
11. Line 839: the argument is incomplete
12. It would be helpful to introduce the notion of stochastic convexity since it is quite non-standard
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed all limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for taking the time to point out several minor issues!
**Regarding the weaknesses.**
Although our results extend the convergence guarantees of Goktas et al. [1] to a setting with a stochastic first-order oracle, we view our main contribution to be the convergence of nested policy gradient methods in zero-sum stochastic Stackelberg games, and the applicability of these methods to reach-avoid problems. Our results in this vein make use of novel assumptions (based on the extensively studied notion of convex stochastic dominance) and proof techniques to show that, under suitable conditions, zero-sum stochastic Stackelberg games are convex-concave w.r.t. the leader's and follower's policy parameters, thereby allowing us to obtain convergence to a recursive Stackelberg equilibrium. Perhaps more importantly, in Appendix C, we show that the important class of reach-avoid problems [2] can be modeled as convex-concave zero-sum stochastic Stackelberg games (see Theorem C.1 in Appendix C). Using this characterization, we obtain novel polynomial-time solution methods for these problems, which, as we show in experiments, outperform known solution methods (i.e., Nash equilibrium).
**References**
[1] Goktas, Denizalp, and Amy Greenwald. "Convex-concave min-max stackelberg games." Advances in Neural Information Processing Systems 34 (2021): 2991-3003.
[2] Jaime F. Fisac, Mo Chen, Claire J. Tomlin, and S. Shankar Sastry. Reach-avoid problems with time-varying dynamics, targets and constraints. In Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control, HSCC '15, pages 11–20, New York, NY, USA, 2015. Association for Computing Machinery.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed reply. | Summary: This paper considers a convex-concave min-max Stackelberg game and proposes an algorithm that converges to the Stackelberg equilibrium. The paper also proposes a policy-gradient-based mechanism that converges to the Stackelberg equilibrium for the Markov game.
Strengths: Stackelberg games are an important problem in multi-agent setups and also for hyper-parameter tuning. Thus, this paper seeks to address some interesting questions and close the gap. The results are clean and thus have value.
Weaknesses: 1. The contribution is not clearly delineated, and it appears minor. There are recent works on first-order methods for Stackelberg games [A1]; however, the paper does not compare with those approaches.
[A1]. Maheshwari, Chinmay, S. Shankar Sasty, Lillian Ratliff, and Eric Mazumdar. "Convergent first-order methods for bi-level optimization and stackelberg games." arXiv preprint arXiv:2302.01421 (2023).
2. The paper has made a lot of assumptions in order to achieve its results (especially for the MDP setting). However, can they be satisfied in practice? In particular, the assumption that the action correspondence is concave in $x$ seems very strong. Why does $f(x,y)$ being convex in $y$ mean that it must be affine?
3. For general simultaneous games, the average iterate of gradient descent-ascent converges to the saddle point of the min-max game. This result likewise only shows that the average converges to the max-min solution. A discussion of the simultaneous game and the technical difficulties must be included. Further, the algorithm seems to avoid simultaneous-game complications by finding the solution or saddle point $(y, \lambda)$ of the lower-level game (assuming a saddle-point solver or a simultaneous-game setup whose results are fairly well known). Hence, the technical contributions seem limited. Further, there is no last-iterate convergence guarantee, whereas the min-max game (simultaneous play) has already achieved that (using extragradient or optimistic gradient methods).
4. All the algorithms seem to have large sample complexity, given that they rely on finding the lower-level solution for each upper-level point $x$. How can this algorithm be implemented in practice? Will the leader simply pause updating its strategy while the followers update theirs?
5. The paper is not very well written. For example, the MDP setup is not at all clear. Do the leader and follower take decisions in turn at the same state? Does the state transition to the next state after both the leader and follower take actions?
6. The numerical setup should also be expanded.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The paper uses a policy-gradient approach. However, it relies on a generator model where a state-action pair is drawn from the state-action occupancy measure. Is that practical? Further, the paper seems to work with a continuous state space, while convergence results exist only for finite state spaces, and only limited results exist for the function-approximation setup (relying heavily on the realizability assumption), which this paper does not consider. Does the policy-gradient algorithm have any convergence guarantee? Is the action space also continuous? Standard policy gradient does not work with continuous actions.
2. Two-time-scale approximations are usually employed for Stackelberg games to reduce the sample complexity. Can that be done in this case as well? For example, see the recent works [A2, A3].
[A2]. Li, Haochuan, Farzan Farnia, Subhro Das, and Ali Jadbabaie. "On convergence of gradient descent ascent: A tight local analysis." In International Conference on Machine Learning, pp. 12717-12740. PMLR, 2022.
[A3]. Lin, Tianyi, Chi Jin, and Michael Jordan. "On gradient descent ascent for nonconvex-concave minimax problems." In International Conference on Machine Learning, pp. 6083-6093. PMLR, 2020.
3. Are Theorems 3.1 and 4.1 correct? In the averaging of $y$, why are the numerators multiplied by $\eta_x$ instead of $\eta_y$? I tried to find this in the proof but could not. Can the authors point to where it is shown that the average is taken over $y_t$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Not as such.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
1) Our model is not captured or covered by [A1], because in [A1], the action space of the followers does not depend on the leader’s action. Due to space limitations, we were unable to include a discussion of bi-level optimization more broadly; however, we can use the additional page in the camera-ready version to provide further context.
2) The assumptions that we present are specifically satisfied by reach-avoid problems (see Appendix C and Theorem C.1.). Reach-avoid problems are important because of their applicability to robotics problems, such as autonomous driving. The fact that our assumptions capture this important class of problems substantiates their utility. Further, these assumptions enable polynomial-time computational guarantees, and thus open the door to roboticists to build practical solutions for this important class of problems.
3) $f$ is assumed to be convex in $(x,y)$ and concave in $y$; a function that is both convex and concave in $y$ must be affine in $y$, so $f$ is affine in $y$.
4) Although we agree that proving convergence in last iterates would be of interest, even in simultaneous settings, assuming access only to a stochastic first-order oracle, convergence in last iterates is *not* known. The results for extragradient descent ascent hold only for *exact* first-order oracle settings [1].
5) Additionally, although our proof techniques are an extension of the convergence technique provided by Goktas et al. [2] to a setting with a stochastic first-order oracle, we want to note that our main technical novelty pertains to the convergence of nested policy gradient methods in zero-sum stochastic Stackelberg games, and their application to reach-avoid problems.
The algorithm is sequential by nature, meaning that the leader has to wait for the follower to best respond in order to converge to a (recursive) Stackelberg equilibrium. Our experiments suggest that our algorithm is nonetheless sample efficient in practice.
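As a toy sketch of this sequential structure (our own illustration with a made-up quadratic objective $f(x,y) = x^2 + 2xy - y^2$, whose follower best response is $y^{\ast}(x) = x$; this is not our actual policy-gradient implementation and omits the coupled constraints):

```python
def nested_gradient_descent_ascent(grad_x, grad_y, x0, y0,
                                   eta_x=0.05, eta_y=0.1,
                                   outer_steps=100, inner_steps=50):
    """For each leader iterate x, the follower first runs gradient ascent until it
    (approximately) best responds; only then does the leader take one descent step."""
    x, y = x0, y0
    for _ in range(outer_steps):
        for _ in range(inner_steps):   # follower best-responds to the fixed x
            y = y + eta_y * grad_y(x, y)
        x = x - eta_x * grad_x(x, y)   # leader updates against the follower's response
    return x, y

# Gradients of the toy objective f(x, y) = x**2 + 2*x*y - y**2.
grad_x = lambda x, y: 2 * x + 2 * y
grad_y = lambda x, y: 2 * x - 2 * y
```

In this toy game both iterates approach the Stackelberg solution at the origin, with $y$ tracking $x$ after every inner loop.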
6) The leader and follower take action sequentially. The leader first commits to an action, after which the follower, having observed the leader’s action, takes its own action. Once both players have made their moves, the game moves to a new state. This description can be found in lines 29-33. For more explanation of such dynamics, we refer you to the introduction of Goktas et al. [2].
**Regarding your questions.**
1. Our generator model is not based on the state-action visitation distribution, but rather the history distribution. This model is realistic, since simulating a trajectory of the game is possible given the policies of both players. Additionally, we note that our convergence results apply to both continuous and discrete state spaces, but not to continuous action spaces.
2. Two-time scale approaches do not allow for convergence to a Stackelberg equilibrium, when the leader’s action determines the action space of the follower (See Example 3.3. in [4]).
3. This is a typo; the $\eta$ in each numerator should match the averaged variable. Note that the average-iterate convergence result for $\mathbf{y}^{(t)}$ is due to Theorem 3.15 of Nemirovski et al. [3], as described in lines 768-770 of Appendix E. We realize this point may not be clear to the reader, so we will add an explicit reference in the camera-ready version.
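For concreteness, the corrected average iterates (a sketch of the intended statement in our notation, with each numerator's step sizes matching the averaged variable, as in standard weighted averaging) would read:

```latex
\bar{\mathbf{x}}^{(T)} = \frac{\sum_{t=1}^{T} \eta_x^{(t)} \mathbf{x}^{(t)}}{\sum_{t=1}^{T} \eta_x^{(t)}},
\qquad
\bar{\mathbf{y}}^{(T)} = \frac{\sum_{t=1}^{T} \eta_y^{(t)} \mathbf{y}^{(t)}}{\sum_{t=1}^{T} \eta_y^{(t)}}
```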
**References**
[1] Golowich, Noah, et al. "Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems." Conference on Learning Theory. PMLR, 2020.
[2] Goktas, Denizalp, and Amy Greenwald. "Convex-concave min-max Stackelberg games." Advances in Neural Information Processing Systems 34 (2021): 2991-3003.
[3] Nemirovski, Arkadi, et al. "Robust stochastic approximation approach to stochastic programming." SIAM Journal on optimization 19.4 (2009): 1574-1609.
[4] Goktas, Denizalp, and Amy Greenwald. "Gradient Descent Ascent in Min-Max Stackelberg Games." arXiv preprint arXiv:2208.09690 (2022).
---
Rebuttal Comment 1.1:
Title: Follow up Questions
Comment: Thank you for your detailed answers. I now understand your contributions. I have a few more follow-up questions.
1. The authors repeatedly mentioned the convergence of the nested policy gradient algorithm to the Stackelberg equilibrium as their main contribution. However, I have a few comments on this.
a. In order to prove convergence, the authors rely on Theorem 3.1. However, that theorem heavily depends on the structure of the function, such as convexity in $x$ and affineness in $y$. I am wondering whether this restricts the practicality of the approach. For example, it is well known that the value function is in general not concave in the policy space. The authors did mention sufficient conditions on the reward; I am wondering in which settings those would hold.
b. This leads to my second question. The proof of Lemma 1 is not correct: the authors have used the concavity of $v$ to prove the concavity of $q$, and then the concavity of $q$ to prove the concavity of $v$.
2. The reviewer is also confused about the structure of the constraint for the MDP. Is it related to the CMDP setup [1,2] (meaning that some cumulative cost/utility has to be less than or equal to some threshold), or is it something else, e.g., a constraint that must be satisfied at every state? The paper considers $\underline{g}$, which means taking the minimum across all states. How can one evaluate this value, given that one needs to evaluate the policy at every possible state?
3. I do not understand the statement "Our generator model is not based on the state-action visitation distribution, but rather the history distribution". The algorithm (Algorithm 2) needs to generate samples for the current policies $\pi_x,\pi_y$. Hence, it needs access to some type of generator or simulator model (whether it depends on the history distribution is immaterial). Furthermore, I don't think the gradient computed in Algorithm 1 is an unbiased estimator, as the authors claim (if the horizon is infinite). Please see [3] on how to ensure unbiasedness in the Q-function evaluation.
[1]. Efroni, Y., Mannor, S. and Pirotta, M., 2020. Exploration-exploitation in constrained mdps. arXiv preprint arXiv:2003.02189.
[2]. Ghosh, A., Zhou, X. and Shroff, N., 2022. Provably efficient model-free constrained rl with linear function approximation. Advances in Neural Information Processing Systems, 35, pp.13303-13315.
[3]. Zhang, K., Koppel, A., Zhu, H. and Basar, T., 2020. Global convergence of policy gradient methods to (almost) locally optimal policies. SIAM Journal on Control and Optimization, 58(6), pp.3586-3612.
---
Reply to Comment 1.1.1:
Title: Answers to Follow-Up
Comment: Thank you for your reply! We really appreciate the time you are putting into reviewing our work!
Regarding the points you have made.
**1.**
**a.** Theorem 3.1 holds for any convex-concave min-max Stackelberg game for which we only have access to a noisy estimate of the gradient and is thus not necessarily restricted to stochastic Stackelberg games (unlike Theorem 4.1). As such, Theorem 3.1, has applications beyond stochastic Stackelberg games (which we discuss below), and also has applications to a number of problems of interest which are modeled as convex-concave min-max Stackelberg games such as resource allocation and automated test generation (see for instance [1] or [2]).
We understand your point that the state-value function of an MDP is in general not concave in the parameters of the policy, and it only satisfies a gradient dominance condition. However, this result holds in extremely broad settings (i.e., with no assumption on the transition probability function); in a number of interesting problems, the state-value function can be guaranteed to be concave in the parameters of the policy, as we show is the case with reach-avoid problems [3]. An important property of reach-avoid problems is that the transition function is deterministic and affine in the actions of the players (see Appendix C for more details), which allows us to show that the game is convex-concave.
Additionally, we would like to note that Assumption 2 is only sufficient, and *not* necessary, for the game to be convex-concave. Moreover, our results also generalize directly to what we would call convex-incave min-max Stackelberg games (see Footnote 4 for an explanation). This class of games covers any zero-sum stochastic Stackelberg game in which the leader's marginal state-value function (see line 248 for a definition) is convex, and the follower's problem is a discrete state/action MDP.
**1.b.** We disagree with you that Lemma 1 is incorrect. First of all, we would like to note that even if you disagree with our proof of the concavity of the state-value function, it is a known result in the economics literature (see for instance Theorem 1 of [4] and the discussion above Theorem 1). We include a proof of this fact in Lemma 1 only for completeness. We chose to present a compact version of the argument since it would otherwise require us to replicate steps in the proof of the Banach fixed point theorem. All of that said, our argument is simply an inductive one, and one simple way to understand it is as follows:
Consider the policy improvement process which consists of applying the Bellman expected operator iteratively, which is known to converge to the state-value function associated with a given policy. Suppose that we initialize the value function for this process to be a continuous, concave, bounded function (base case). Our proof then shows that the Bellman expected operator preserves concavity (inductive step), hence the process can only converge to a continuous, concave, bounded state-value function. You can find a similar argument in the proof of Theorem 1 of [4]. We hope this helps clarify our results. If you still disagree with us for some reason, we would be interested to understand your objection in more detail, since incorrectness of this result would invalidate a large body of results in mathematical economic theory.
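Schematically (a generic sketch of the argument, assuming a deterministic transition $h$ that is affine in its arguments and a reward $r$ that is concave, as in the reach-avoid setting of Appendix C), the inductive step is that the Bellman expected operator preserves concavity:

```latex
(\mathcal{T}^{\pi} v)(s) = r\big(s, \pi(s)\big) + \gamma\, v\big(h(s, \pi(s))\big)
```

If $v$ is concave and $h$ is affine, then $v \circ h$ is concave, so $\mathcal{T}^{\pi} v$ is a sum of concave functions and hence concave; since $\mathcal{T}^{\pi}$ is a contraction, iterating from a continuous, concave, bounded initialization converges to a concave fixed point $v^{\pi}$.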
**2.** The constraints in our setting are different than the way constraints are traditionally represented in the constrained MDP literature. In constrained MDPs, there are cumulative cost functions, which are required to satisfy a certain constraint; in our setting, the constraints are not cumulative, but rather required to be satisfied only locally. That is, in your own terms, “at every state a constraint is required to be satisfied”. Since the constraints are much less computationally complex in our setting, $\underline{g}$ can be easily computed in discrete-state MDPs by just checking the value of the constraint at each state, and in continuous-state settings by simply running gradient descent.
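As a toy sketch (with a hypothetical scalar constraint function `g`, not taken from the paper), evaluating $\underline{g}$ in the two cases could look like:

```python
def min_constraint_discrete(g, states):
    """Discrete state space: evaluate the constraint at every state and take the min."""
    return min(g(s) for s in states)

def min_constraint_continuous(g, grad_g, s0, lr=0.1, steps=200):
    """Continuous state space: approximate the minimum of g by gradient descent."""
    s = s0
    for _ in range(steps):
        s = s - lr * grad_g(s)
    return g(s)
```

For a non-convex `g`, the gradient-descent variant would of course only find a local minimum; under convexity-style assumptions this local search suffices.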
**3.** The goal of our statement was to point out that the gradient can be computed by unrolling trajectories of play. This is a standard assumption in literature on learning in games (see for instance [6]). It is our understanding that Zhang et al’s [7] goal is to obtain an unbiased gradient estimate using only *finite* horizon trajectories; however, in line with the learning in games literature (see for instance [6]), we do not restrict ourselves to finite horizon trajectories, and in such cases the REINFORCE estimator remains unbiased (see Lemma 2 of [6]). That said, we do agree that Zhang et al’s gradient estimate (Equation 3.6 of [7]) has better properties and we are happy to use that estimate should you think it is more appropriate. Our results hold with either estimate. | Summary: The authors propose a policy gradient method to solve the zero-sum stochastic Stackelberg game from noisy gradient estimates computed from observed trajectories of play. When the games are convex-concave, the authors prove that the proposed algorithms converge to Stackelberg equilibrium in polynomial time.
Strengths: The authors have completed a thorough theoretical analysis of the proposed policy gradient method and prove it converges to Stackelberg equilibrium in polynomial time assuming the game is convex-concave.
Weaknesses: 1. The authors claim they solved the convex-concave zero-sum Stackelberg problem but also restrict the application to a reach-avoid problem. The authors mention that "We also prove that reach-avoid problems are naturally modeled as convex-concave zero-sum stochastic Stackelberg games." in the abstract but in fact do not really PROVE it in the manuscript.
2. Besides, as the authors mention, the reach-avoid game is usually formulated as a single-agent problem, and there exist some works formulating it as a two-player zero-sum game. Why do the authors not address standard zero-sum problems, such as pricing or allocating goods across agents and time, as in reference [25]? Experimenting merely on a reach-avoid game is not enough to demonstrate the effectiveness or superiority of the proposed approach compared to existing solution concepts such as Nash equilibrium.
3. The authors make several critical assumptions (convexity-concavity, convergence assumptions, etc.). How feasible these assumptions are in reality, and how to validate them, is not discussed. The authors should address these limitations and discuss potential extensions to more realistic scenarios.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
**Regarding the weaknesses.**
We did include a proof of reach-avoid games being an instance of convex-concave zero-sum stochastic Stackelberg games in Appendix C (see Theorem C.1. and the ensuing proof). However, it seems like we failed to add a forward reference to this theorem in the main body of the paper; we will correct this oversight in the camera-ready version. And in the meantime, we are happy to answer any questions you might have about this result, or its proof.
The reason why we do not use our algorithm to solve, for example, the zero-sum stochastic Stackelberg resource allocation game introduced in [1] is because that game is not convex-concave, and as such it lies beyond the scope of our theory and paper. We focused specifically on reach-avoid problems, as they *are* naturally convex-concave (Theorem C.1.).
We refer you to part (2) of our response to common reviewer concerns.
We do provide one simple way of validating the convex-concavity of games, which we use in Appendix C to prove the convex-concavity of reach-avoid problems. In particular, it suffices to check that the problem satisfies Assumption 2.
**References**
[1] Denizalp Goktas, Sadie Zhao, and Amy Greenwald. Zero-sum stochastic stackelberg games. Advances in Neural Information Processing Systems, 35:11658–11672, 2022
[2] Tsaknakis, Ioannis, Mingyi Hong, and Shuzhong Zhang. "Minimax problems with coupled linear constraints: computational complexity, duality and solution methods." arXiv preprint arXiv:2110.11210 (2021).
[3] Goktas, Denizalp, and Amy Greenwald. "Convex-concave min-max stackelberg games." Advances in Neural Information Processing Systems 34 (2021): 2991-3003.
[4] Jaime F. Fisac, Mo Chen, Claire J. Tomlin, and S. Shankar Sastry. Reach-avoid problems with time-varying dynamics, targets and constraints. In Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control, HSCC '15, pages 11–20, New York, NY, USA, 2015. Association for Computing Machinery.
[5] Badithela, Apurva, et al. "Synthesizing reactive test environments for autonomous systems: testing reach-avoid specifications with multi-commodity flows." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I appreciate the authors' thorough response. | Summary: The paper considers the setting of convex-concave zero-sum stochastic Stackelberg games. In these games, there are two players, a leader and a follower. The leader's strategies constrain the feasible strategy set of the follower. First, the leader commits to a certain strategy, and then, the follower best-responds to that strategy using a strategy that is feasible.
Previous work mainly focuses on the static version of that game where the utility of the two players is monotone (convex-concave games). The authors define what is a convex-concave stochastic zero-sum stochastic Stackelberg game and then use some machinery from min-max optimization to solve the problem of computing a Stackelberg equilibrium.
---
I acknowledge that I have read and evaluated the rebuttal. The rebuttal answered my concerns and reinforced my initial positive assessment.
Strengths: * The exposition of previous work and motivation is clear.
* The paper uses standard methods and indicates a good understanding of the min-max optimization and constrained optimization literature.
* The guarantees are for the stochastic setting (i.e., no full information gradient is needed.)
Weaknesses: * The assumption of convexity-concavity seems restrictive. Is there any other way to ensure the existence of Stackelberg equilibria AND convergence?
* The results seem a little straightforward given the existing machinery in the literature of min-max optimization.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Is the assumption of convexity-concavity necessary to prove convergence? Could some other assumption guarantee it?
* Do you believe that the convergence rates are optimal? Could you use some other optimization method to get better rates?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: * The main limitation in my opinion is the fact that convexity-concavity is assumed. I think the authors could elaborate more on the justification of such an assumption. (It could also be the case that this assumption is a necessity though.)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
**Regarding the weaknesses.**
Beyond convex-concave domains, computing a Stackelberg equilibrium unfortunately becomes NP-hard [1]. As global convergence guarantees to (recursive) Stackelberg equilibria have not been obtained for more general zero-sum (stochastic) Stackelberg games under the assumption of a stochastic first-order oracle, we chose to restrict our attention to convex-concave domains.
We want to push back slightly on your comment that our results are straightforward. Although extending the convergence guarantees of Goktas et al. [2] to a setting with a stochastic first-order oracle may be straightforward, our main contribution concerns the convergence of nested policy gradient methods in zero-sum stochastic Stackelberg games, and their application to reach-avoid problems. Our results in this direction make use of novel assumptions (based on the extensively studied notion of convex stochastic dominance) and proof techniques to develop suitable conditions under which zero-sum stochastic Stackelberg games can be shown to be convex-concave w.r.t. the leader's and follower's policy parameters, allowing us to obtain convergence to a recursive Stackelberg equilibrium. Our assumptions, proof techniques, and results open the door to proving the convergence of policy gradient methods in a new class of continuous-state, discrete-action MDPs/Markov games.
Perhaps more importantly, in Appendix C, we show that the well-established class of reach-avoid problems [3] can be modeled as convex-concave zero-sum stochastic Stackelberg games (see Theorem C.1 in Appendix C). Using this characterization of reach-avoid problems, we are able to obtain novel polynomial-time solution methods for these problems, which, as we show in experiments, outperform known solution methods (i.e., Nash equilibrium).
**Regarding your questions.**
We refer you to part (2) of our response to common reviewer concerns.
We believe that our convergence rates are "nearly" optimal, in the sense that, excluding the use of Nesterov's momentum or other acceleration methods, our method is optimal. Our convergence rate could be improved by an order of magnitude using acceleration, but to keep our paper accessible, e.g., to roboticists, and thus potentially more impactful, we chose not to present such an algorithm. Additionally, we note that the nested nature of our algorithm is essential, because single-loop algorithms cannot converge to Stackelberg equilibria in convex-concave zero-sum Stackelberg games with coupled/dependent action spaces (see Example 3.3 in [5]).
**References**
[1] Tsaknakis, Ioannis, Mingyi Hong, and Shuzhong Zhang. "Minimax problems with coupled linear constraints: computational complexity, duality and solution methods." arXiv preprint arXiv:2110.11210 (2021).
[2] Goktas, Denizalp, and Amy Greenwald. "Convex-concave min-max stackelberg games." Advances in Neural Information Processing Systems 34 (2021): 2991-3003.
[3] Jaime F. Fisac, Mo Chen, Claire J. Tomlin, and S. Shankar Sastry. "Reach-avoid problems with time-varying dynamics, targets and constraints." Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control (HSCC '15), pages 11–20. New York, NY, USA: Association for Computing Machinery, 2015.
[4] Badithela, Apurva, et al. "Synthesizing reactive test environments for autonomous systems: testing reach-avoid specifications with multi-commodity flows." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.
[5] Goktas, Denizalp, and Amy Greenwald. "Gradient Descent Ascent in Min-Max Stackelberg Games." arXiv preprint arXiv:2208.09690 (2022). | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their time!
**Summary of our contributions**: We present polynomial-time first-order methods to compute Stackelberg equilibrium in convex-concave min-max Stackelberg games, assuming access only to a first-order gradient oracle (Theorem 3.1). We then introduce the class of convex-concave zero-sum stochastic Stackelberg games, provide sufficient conditions to validate convex-concavity (Lemma 1 and Lemma 2), and obtain polynomial-time convergence guarantees to a recursive Stackelberg equilibrium via a policy-gradient-type algorithm in discrete/continuous state and discrete action convex-concave zero-sum stochastic Stackelberg games (Theorem 4.1). Finally, we show that reach-avoid games [1], which have found important applications in robotics, can be modeled as convex-concave zero-sum stochastic Stackelberg games (Appendix C, Theorem C.1, part 1). Using this result, we obtain polynomial-time solution methods for such games (Appendix C, Theorem C.1, part 2), and run experiments using neural policy classes.
**Summary of common concerns of reviewers**:
1) Reviewers gXac, sSzA, and 5fZF suggested our contributions could be seen as incremental.
2) Reviewers yZL6 and gXac requested additional motivation for the assumption of convex-concavity.
**Answer to common concerns**:
1) While our results and proof techniques necessitate extensions of Goktas et al.’s [2] results to a setting with a stochastic first-order oracle, we do not consider these extensions to be the main contribution of our work. Rather, our main contribution is the convergence of nested policy gradient methods in a novel class of zero-sum stochastic Stackelberg games, and their application to reach-avoid problems.
Our main results make use of novel assumptions (based on the extensively studied notion of convex stochastic dominance) and proof techniques to prove that, under suitable assumptions, zero-sum stochastic Stackelberg games are convex-concave with respect to the parameters of the leader's and follower's policies, allowing us to obtain convergence to a recursive Stackelberg equilibrium in discrete/continuous state and discrete action space games. Our assumptions, proof techniques, and results imply convergence of policy gradient methods in a new class of MDPs, and open the door to proving the convergence of policy gradient methods in a new class of continuous/discrete state and discrete action space Markov games.
Even more importantly, we provide one of the first polynomial-time convergence guarantees for the well-studied class of reach-avoid problems [1] by modeling these problems as convex-concave zero-sum stochastic Stackelberg games. We are hopeful that our techniques might be picked up by roboticists who build robots to navigate real-world environments.
2) Beyond convex-concave domains, convergence to a Stackelberg equilibrium is NP-hard [3]. As global convergence guarantees to (recursive) Stackelberg equilibrium have not been obtained for more general zero-sum (stochastic) Stackelberg games under the assumption of a stochastic first-order oracle, we chose to restrict attention to convex-concave domains. It may also be possible to prove a local convergence result; however, such a result would overlook the fact that the popular class of reach-avoid problems [1] can be naturally formulated as convex-concave zero-sum stochastic Stackelberg games (Appendix C, Theorem C.1), implying that we provide efficient solutions to these problems. A number of other problems of interest have also been shown to satisfy the convex-concavity assumptions (see for instance [4] or [5]), further motivating the class of convex-concave zero-sum stochastic Stackelberg games.
**References**
[1] Jaime F. Fisac, Mo Chen, Claire J. Tomlin, and S. Shankar Sastry. "Reach-avoid problems with time-varying dynamics, targets and constraints." Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control (HSCC '15), pages 11–20. New York, NY, USA: Association for Computing Machinery, 2015.
[2] Goktas, Denizalp, and Amy Greenwald. "Convex-concave min-max Stackelberg games." Advances in Neural Information Processing Systems 34 (2021): 2991-3003.
[3] Tsaknakis, Ioannis, Mingyi Hong, and Shuzhong Zhang. "Minimax problems with coupled linear constraints: computational complexity, duality and solution methods." arXiv preprint arXiv:2110.11210 (2021).
[4] Badithela, Apurva, et al. "Synthesizing reactive test environments for autonomous systems: testing reach-avoid specifications with multi-commodity flows." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.
[5] Goktas, Denizalp, and Amy Greenwald. "Gradient Descent Ascent in Min-Max Stackelberg Games." arXiv preprint arXiv:2208.09690 (2022). | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Graph Contrastive Learning with Stable and Scalable Spectral Encoding | Accept (poster) | Summary: The paper proposes a method for Graph Contrastive Learning (GCL) by contrasting the spatial and spectral views ($Sp^2GCL$). The spatial view is obtained using a message-passing GNN. For the spectral view the authors propose an equivariant model called EigenMLP. EigenMLP precomputes the k smallest eigenvalues and corresponding eigenvectors for a graph. The eigenvectors are made sign-invariant using positive and negative eigenvectors as in SignNet. Permutation/basis equivariance is obtained by learning MLP weights from Fourier features of the eigenvalues. The method uses representations of the same node/graph across the two views as positive views and representations of different nodes/graphs as negative views. The InfoNCE contrastive function is used as the objective. After the self-supervised training, a linear classifier is used on the downstream task of node/graph classification/prediction.
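The SignNet-style sign invariance described above can be sketched in a few lines; the two-layer encoder below is a hypothetical stand-in, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))  # hypothetical encoder weights
W2 = rng.standard_normal((16, 4))

def phi(v):
    """A small stand-in MLP applied to one eigenvector."""
    return np.tanh(v @ W1) @ W2

def sign_invariant_encode(v):
    # phi(v) + phi(-v) is unchanged if v is replaced by -v,
    # so the eigensolver's arbitrary sign cannot affect the output.
    return phi(v) + phi(-v)

v = rng.standard_normal(8)            # an eigenvector, sign arbitrary
z_pos = sign_invariant_encode(v)
z_neg = sign_invariant_encode(-v)     # identical by construction
```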
Strengths: 1) Paper proposes an important direction of combining spatial and spectral views
2) The method obtains competitive results compared with baselines
Weaknesses: Regarding the views for a node/graph, the spectral views are obtained from the eigendecomposition, made equivariant to sign and basis, and the spatial views are obtained from an MPNN. This yields a fixed view for every node/graph, which raises the concern of how to obtain multiple positive views for a given node/graph. In the absence of multiple views, the learning may be limited to fixed representations and may not scale well to larger models.
**Minor Typos:**
1. The eigenvalues and eigenvectors **encoder** the global shapes [13] and node absolute positions $\rightarrow$ The eigenvalues and eigenvectors **encode** the global shapes [13] and node absolute positions
2. It can **encoder** both the information of eigenvalues and eigenvectors $\rightarrow$ It can **encode** both the information of eigenvalues and eigenvectors
3. In practice, the sign-invariant neural networks may slow down model **converge** $\rightarrow$ In practice, the sign-invariant neural networks may slow down model **convergence**
4. Therefore, EigenMLP can learn more stable representations **again** structural perturbations $\rightarrow$ Therefore, EigenMLP can learn more stable representations **against** structural perturbations
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Regarding the views for a node/graph, the spectral views are obtained from the eigendecomposition, made equivariant to sign and basis, and the spatial views are obtained from an MPNN. This yields a fixed view for every node/graph; how can multiple positive views be obtained for a given node/graph? In the absence of multiple views, the learning may be limited to fixed representations and may not scale well to larger models.
2. As pointed out by the authors, the spatial method learns local features and the spectral method learns global properties. Is it always the right approach to contrast them in the proposed manner? In which cases would it work and how do practitioners make a decision?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Some of the limitations have been addressed in the paper. Please refer to the Weakness and Questions section to address further limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments!
> **Q1: How to obtain multiple positive views? In the absence of multiple views the learning may be limited to fixed representations and may not scale well for larger models.**
A1: Constructing multiple positive views is important for contrastive learning. To achieve this goal, the most widespread approach is to augment the input data.
Several approaches exist for augmenting spatial features [1][2], including randomly dropping edges, nodes, and features. On the other hand, for spectral features, we can create multiple positive views by selecting different numbers of eigenvectors. For example, in descending order of eigenvalues, we can select the first $k$, $2k$, ..., eigenvectors. Each of these views captures distinct frequencies of node positions, thereby enabling the construction of multi-scale representations.
[1] Graph Contrastive Learning with Augmentations. NeurIPS 2020.
[2] Graph Contrastive Learning Automated. ICML 2021.
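A minimal sketch of the multi-view construction suggested above (the toy graph and the value of $k$ are illustrative): the eigendecomposition is computed once, and different numbers of leading eigenvectors are sliced out as positive views.

```python
import numpy as np

# Toy undirected graph: a 6-node cycle (illustrative only).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian

# One eigendecomposition, reused for every view.
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order

# Two positive views of the same graph: the first k and the first 2k
# eigenvectors capture increasingly high-frequency positional info.
k = 2
view_1 = eigvecs[:, :k]
view_2 = eigvecs[:, :2 * k]
```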
> **Q2: Minor Typos**
A2: Thank you for carefully checking our paper. We will polish our paper based on your suggestions in the revision.
> **Q3: Is it always the right approach to contrast them in the proposed manner? In which cases would it work and how do practitioners make a decision?**
A3: The definition of positive and negative views, as well as the selection of the contrastive objective function, continue to be open challenges in the field of contrastive learning. Consequently, spatial-spectral contrast may not always be the right approach, depending on the properties of the data.
Intuitively, the spatial methods encode the local feature information through message-passing and the spectral methods learn the positional information. Therefore, spatial-spectral contrast works well **if the positional information can complement the feature information**. However, it may not yield favorable results if the feature information dominates the classification accuracy. For example, in some heterophilic datasets, where two connected nodes tend to have different labels, the performance of MLP is better than GNNs [3]. In such cases, the positional information may not complement the feature information.
[3] Graph Neural Networks for Graphs with Heterophily: A Survey
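For reference, the InfoNCE objective used to contrast the two views (per the review summary) can be sketched as follows; the temperature and embedding sizes are illustrative, and this is a generic formulation rather than the paper's exact implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss treating matched rows of z1/z2 (e.g., spatial and
    spectral embeddings of the same node) as positives and all other
    rows as negatives; a generic sketch, not the paper's exact code."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # pairwise similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((32, 16))
loss_aligned = info_nce(z, z)                             # identical views
loss_random = info_nce(z, rng.standard_normal((32, 16)))  # mismatched views
```

Well-aligned views yield a smaller loss than mismatched ones, which is what the contrastive training exploits.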
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks to the authors for the clarifications and it helps my understanding of the proposed method. I will think through the discussed points in detail and consider my review in light of the responses.
Many Thanks
---
Reply to Comment 1.1.1:
Title: Thanks for your response
Comment: Hi, we are pleased to receive your prompt reply. If you have any questions, we are willing to discuss and clarify.
Best regards. | Summary: The authors present a novel approach called Sp2GCL that combines spatial and spectral views of graphs using EigenMLP, an informative and stable spectral encoder. The proposed method shows promising results in learning effective graph representations and outperforms other spectral-based methods in terms of both performance and efficiency.
Strengths: This work proposes contrasting two views in the spectral domain and the spatial domain, and introduces a novel encoder called EigenMLP to encode spectral-domain information, which has not been done by prior work.
Weaknesses: - The contribution of the article is considered limited, as the traditional Graph Neural Network (like GCN), which can be understood as filtering in the spectral domain, is similar to the EigenMLP proposed in this work.
- Simply contrasting these two representations might not lead to significant improvements, as indicated in the results where there is no clear enhancement and suspicion of cherry-picking.
- The article suggests that the proposed method may not offer significant improvement over Graph Convolutional Network methods, and the results indicate a lack of noticeable enhancement. This raises concerns about the effectiveness of the proposed approach.
The article mentions that the contribution of the work is relatively limited, indicating that the proposed method may not introduce significant advancements beyond existing approaches.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How are the scalar eigenvalues extended to high-dimensional Fourier features in EigenMLP?
- What are the overheads of training and inference in EigenMLP?
- How does EigenMLP handle the sign and basis ambiguity issues in spectral features?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful suggestions!
> **Q1: The contribution of the article is considered limited, as the traditional Graph Neural Network (like GCN), which can be understood as filtering in the spectral domain, is similar to the EigenMLP proposed in this work.**
A1: EigenMLP is fundamentally different from the spectral filtering applied in Graph Neural Networks (GNNs) for several reasons:
1. EigenMLP serves as a method for encoding the **positional information** of nodes. In contrast, spectral filtering of GNNs utilizes graph spectrum to filter the noise present in **node features**.
2. The expressive power of traditional GNNs is limited by the Weisfeiler-Lehman test [1]. In contrast, incorporating positional information can surpass this limitation [2]. Therefore, the expressive power of EigenMLP is better than that of message-passing GNNs.
[1] How powerful are graph neural networks? ICLR 2019.
[2] Graph neural networks with learnable structural and positional representations. ICLR 2022.
> **Q2: Simply contrasting these two representations might not lead to significant improvements, as indicated in the results where there is no clear enhancement and suspicion of cherry-picking.**
A2: We conduct an additional experiment to validate the effectiveness of the spatial-spectral contrast. Specifically, we directly concatenate the spatial features $A^{2}X$ and the spectral features $U\rho(\Lambda)$ as non-contrastive representations, i.e., $[A^{2}X, U\rho(\Lambda)]$. Then we use a linear classifier, the same as in Sp2GCL, to evaluate the performance of the non-contrastive representations. The experiment is conducted on the Pubmed and Flickr datasets. Results are shown below, from which we can see that the spatial-spectral contrast contributes substantially to learning graph representations.
| | Contrastive | Non-contrastive |
:-: | :-: | :-:
| Pubmed | 82.3±0.3 | 80.1±0.2 |
| Flickr | 52.05±0.33 | 50.27±0.47 |
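The non-contrastive baseline features described above can be assembled roughly as follows; the toy graph, the dimensions, and the identity choice for $\rho$ are all illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 5, 3                          # illustrative sizes
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = A + A.T                                # random undirected graph
X = rng.standard_normal((n, d))            # node features

L = np.diag(A.sum(axis=1).astype(float)) - A
eigvals, U = np.linalg.eigh(L)
U, eigvals = U[:, :k], eigvals[:k]         # k smallest eigenpairs

spatial = A @ (A @ X)                      # A^2 X: two-hop propagation
spectral = U * eigvals                     # U rho(Lambda), with rho = identity
non_contrastive = np.concatenate([spatial, spectral], axis=1)
```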
> **Q3: The article suggests that the proposed method may not offer significant improvement over Graph Convolutional Network methods, and the results indicate a lack of noticeable enhancement. This raises concerns about the effectiveness of the proposed approach. The article mentions that the contribution of the work is relatively limited, indicating that the proposed method may not introduce significant advancements beyond existing approaches.**
A3: We need to clarify that graph convolutional networks (GCNs) are **semi-supervised methods**, implying that they need label supervision to update the model parameters. In contrast, our model is an **unsupervised method** that does not require the usage of labels. Simply comparing the performance of GCNs and our method is unfair. Moreover, we can see that on some datasets our model outperforms GCNs considerably. For example, on the Pubmed dataset, our method (accuracy = 82.3) has an improvement of 4% over GCNs (accuracy = 79.0), which demonstrates the effectiveness of the proposed method.
Additionally, we do not mention that the contribution of our work is relatively limited. We consistently highlight that our model is an effective and efficient method.
- In terms of effectiveness, our model achieves comparable performance over various graph-related tasks.
- In terms of efficiency, Sp2GCL is 10 times faster than the state-of-the-art spectral-based GCL method, and EigenMLP is 30 times faster than existing sign- and basis-invariant spectral encoders.
> **Q4: How are the scalar eigenvalues extended to high-dimensional Fourier features in EigenMLP?**
A4: *We describe the Fourier features of eigenvalues in Equation 5.*
Specifically, we stack the sin and cos values of the scalar eigenvalues with different periods to construct its high-dimensional Fourier features, i.e., $[\sin(\lambda), \cos(\lambda), \sin(2\lambda), \cos(2\lambda), \cdots \sin(T\lambda), \cos(T\lambda)]$.
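A small sketch of this construction (the eigenvalues and the value of $T$ below are illustrative):

```python
import numpy as np

def fourier_features(eigvals, T=4):
    """Map each scalar eigenvalue to the 2T bounded features
    [sin(lam), cos(lam), ..., sin(T*lam), cos(T*lam)] of Equation 5."""
    t = np.arange(1, T + 1)                    # periods 1..T
    angles = np.outer(eigvals, t)              # shape (k, T)
    feats = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(len(eigvals), 2 * T)  # interleaved sin/cos

lam = np.array([0.0, 0.5, 1.3])   # illustrative eigenvalues
F = fourier_features(lam, T=4)    # shape (3, 8), entries in [-1, 1]
```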
> **Q5: What are the overheads of training and inference in EigenMLP?**
A5: *The time overheads of EigenMLP and Sp2GCL are shown in Tables 9 and 10.*
Here we briefly report the results for a quick reference. In summary, Sp2GCL is 10 times faster than the state-of-the-art spectral-based GCL method, e.g., SPAN, and EigenMLP is 30 times faster than existing sign- and basis-invariant spectral encoders.
| Table 9 | Pre-processing | Training (100 epochs) | Inference ($\times 10^{-3}$) |
:-: | :-: | :-: | :-:
| SpCo | 127.16 | 22.81 | 4.5 |
| SPAN | 658.62 | 142.92 | - |
| Sp2GCL | 66.44 | 17.79 | 6.2 |
| Table 10 | Facebook (k = 100) | moltox21 (7831 Graphs) |
:-: | :-: | :-:
| MLP | 2.24 | 2.09 |
| SAN | 127.23 | 60.58 |
| BasisNet | 169.83 | 84.64 |
| EigenMLP | 5.34 | 3.14 |
> **Q6: How does EigenMLP handle the sign and basis ambiguity issues in spectral features?**
A6: *We describe how EigenMLP solves the sign and basis ambiguities in Section 4.2. We also theoretically prove this in Theorem 1, Section 5.1.*
- For sign-ambiguity, EigenMLP simultaneously takes the positive and negative eigenvectors as input, thus learning sign-invariant representations.
- For basis-ambiguity, EigenMLP leverages the learned eigenvalues to weight eigenvectors, thus eliminating the influence of coordinate rotation. | Summary: In this paper, the authors propose EigenMLP, an informative, stable, and scalable spectral encoder, which is invariant to rotation and reflection transformations on eigenvectors and robust against perturbations. Based on EigenMLP, a spatial-spectral contrastive framework is proposed to capture the consistency between the spatial and spectral information.
Strengths: 1. EigenMLP is well motivated. The motivation and the theoretical analysis are convincing.
2. Based on EigenMLP, the spectral augmentation is bounded by the perturbation $\Delta L$.
Weaknesses: There are some major issues:
1. I think EigenMLP is the main contribution, but I don't know why the authors try to highlight the spatial-spectral contrastive framework as a contribution. In my opinion, this is not the first work to use a spectral-based GNN and a spatial-based GNN as two different views for GCL.
2. Because of the sparsity, eigenvalues are usually obtained by randomized SVD, whose computational cost is usually $nk^2$ rather than $n^2k$. I guess you do not use randomized SVD but one designed for dense matrices; if so, randomized SVD may help you.
3. The novelty of eigenMLP: the eigenMLP looks like SignNet + position embedding.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. What is the T in Eq.8?
2. Is EigenMLP able to outperform other spectral-based GNNs in the usual semi-supervised setting?
3. Why is the MLP so close to EigenMLP in Table 5?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments!
> **Q1: I think the eigenMLP is the main contribution but I don't know why the authors try to highlight that the spatial-spectral contrastive framework is a contribution.**
A1: We agree that the main contribution of our paper is the design of EigenMLP, which is an effective and efficient spectral encoder. On the other hand, most GCL methods focus on a single domain, i.e., spatial or spectral, and fewer methods work on both domains. Therefore, we list the spatial-spectral contrastive framework as our contribution. In the revision, we will discuss the difference between our model and existing two-view GCL methods.
> **Q2: If you do, you can check randomized SVD, which may help you.**
A2: Thank you for this valuable feedback. Subsequent testing confirms that randomized SVD can further reduce the complexity of the preprocessing step. Coupled with the fact that our model is efficient during the training phase, the whole pipeline becomes more scalable and efficient.
Specifically, we compare the dense eigenvalue decomposition (EVD) routine (numpy.linalg.eigh) and the randomized SVD algorithm (sklearn.utils.extmath.randomized_svd). The outcomes demonstrate a notable acceleration in computation. For instance, on the Pubmed dataset, the EVD procedure takes approximately **70 seconds**, while the randomized SVD completes in about **2 seconds**.
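For intuition, a bare-bones randomized eigensolver in the spirit of sklearn.utils.extmath.randomized_svd can be sketched as follows; the range-finder design, oversampling, and synthetic test matrix are illustrative, not the exact routine used in the rebuttal experiments.

```python
import numpy as np

def randomized_evd(A, k, n_oversample=10, n_iter=4, seed=0):
    """Bare-bones randomized top-k eigensolver for a symmetric matrix
    (Halko-style range finder + Rayleigh-Ritz), in the spirit of
    sklearn.utils.extmath.randomized_svd. With sparse matvecs the cost
    scales roughly as nnz(A)*k + n*k^2 instead of the dense O(n^2 k)."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((A.shape[0], k + n_oversample))
    for _ in range(n_iter):           # power iterations sharpen the range
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A @ Q                   # small projected (Rayleigh-Ritz) problem
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]  # top-k eigenpairs, descending
    return vals[top], Q @ vecs[:, top]

# Sanity check on a synthetic symmetric matrix with a known spectrum.
rng = np.random.default_rng(1)
Qfull, _ = np.linalg.qr(rng.standard_normal((200, 200)))
d = 2.0 * 0.8 ** np.arange(200)       # geometrically decaying eigenvalues
A = (Qfull * d) @ Qfull.T             # A = Qfull @ diag(d) @ Qfull.T
approx_vals, _ = randomized_evd(A, k=5)
```

Because only matrix-vector products with $A$ are needed, the same sketch applies unchanged to sparse adjacency or Laplacian matrices.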
> **Q3: The novelty of eigenMLP: the eigenMLP looks like SignNet + position embedding.**
A3: Generally, both SignNet and EigenMLP can be seen as positional encoding methods. The novelty of EigenMLP comes from two perspectives:
1. SignNet can only address the sign-ambiguity problem, while its advanced version, BasisNet, comes with a heavy computational burden. In contrast, EigenMLP presents an efficient and effective approach that simultaneously tackles the sign- and basis-ambiguity issues. As demonstrated in Table 10, EigenMLP is 30 times faster than BasisNet.
2. EigenMLP provides an efficient way to perturb the graph spectrum for better contrastive learning. Existing spectral augmentation methods, e.g., SpCo [1] and SPAN [2], need to decompose and reconstruct the graph structure, which is inefficient. EigenMLP can directly learn new eigenvalues from the Fourier features of the raw eigenvalues. However, as we can see from Table 1, neither SignNet nor BasisNet can utilize the eigenvalues.
> **Q4: What is the T in Eq.8?**
A4: The symbol $T$ is the period of the Fourier features, which is a hyper-parameter in our model.
> **Q5: Is EigenMLP able to outperform other spectral-based GNNs in the usual semi-supervised setting?**
A5: We conduct the semi-supervised experiment in the Pubmed dataset because it has the standard semi-supervised data split, i.e., 20 nodes per class for training and 1000 randomly sampled nodes for test [1]. We choose three competitive spectral GNNs as baselines: GPR-GNN [2], ChebyNet [3], and BernNet [4]. Notably, EigenMLP does not use feature information, while spectral GNNs use graph spectrum to filter the node features. Therefore, to perform a fair comparison, we use two different settings:
1. **Pubmed w/ features**: In this setting, we concatenate the representations learned by EigenMLP and the original node features, and feed them into an MLP for classification. Spectral GNNs remain unchanged.
2. **Pubmed w/o features**: In this setting, we replace the node feature matrix with an identity matrix, and feed it into spectral GNNs for classification. As for EigenMLP, we only use the eigenvectors and eigenvalues.
The results are shown as follows, from which we can see that EigenMLP offers a substantial improvement over spectral GNNs in learning positional information. Remarkably, even with the inclusion of node features, EigenMLP still maintains its superiority over the baselines.
| | Pubmed w/ features |Pubmed w/o features |
:-: | :-: | :-:
| EigenMLP | **80.15±0.43** | **75.62±0.16** |
| GPR-GNN | 79.92±0.38 | 71.42±0.12 |
| ChebyNet | 78.53±0.26 | 52.44±0.38 |
| BernNet | 79.86±0.24 |61.76±0.45 |
[1] Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.
[2] Adaptive Universal Generalized PageRank Graph Neural Network. ICLR 2021.
[3] Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. NeurIPS 2016.
[4] BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation. NeurIPS 2021.
> **Q6: Why the MLP is so close to EIgenMLP in Table 5?**
A6: In Table 5, Pubmed and Facebook are transductive datasets, while Molbace is not. In transductive datasets, the training, validation, and test nodes share the same adjacency matrix. Consequently, the signs and coordinates of eigenvectors remain consistent during the training and inference stages. Therefore, the model performance will not be influenced by sign- and basis-ambiguity. As a result, the performance of MLP is close to that of EigenMLP on Pubmed and Facebook. On the contrary, EigenMLP outperforms MLP by a clear margin on the Molbace dataset.
The same phenomenon can also be observed in the stability experiment (Section 6.5). When the training and test data employ the same perturbation, the performance of MLP is comparable to that of EigenMLP, as evident from the results in the diagonal entries of Tables 7 and 8. However, in cases where different perturbations are applied, EigenMLP consistently outperforms MLP, as indicated by the results in the off-diagonal entries. | Summary: This paper proposes a graph contrastive learning model with spatial and spectral augmentations, with a novel spectral encoder EigenMLP that could address the stability issue arising from eigendecomposition. To exploit the strengths of the spatial and spectral domains, SP2GCL deploys two augmentation views for the contrastive framework; in the spectral view, it introduces tricks to alleviate the sign and basis ambiguities in order to stabilize the training process. The node-level and graph-level experiments, as well as the transfer learning tasks, show the advantages of the proposed framework.
Strengths: 1. Contrastive learning for graph data is a prosperous and promising field, especially for the label-sparse settings and for the exploration of properties of non-Euclidean data.
2. The spectral part in the proposed framework attempts to address the stability and overhead issues, which are the main obstacle in the spectral methods for graphs.
3. Bridging among the spatial and spectral views is an interesting attempt, which could also make full use of their complementary properties.
Weaknesses: 1. As the authors emphasize the stability of their method, more theoretical and empirical analyses are expected to validate it.
2. I feel confused about the introduction of Fourier features in Line 169; their rationale and influence could be detailed further.
3. The performance is not always competitive in the experiment section; it may help to explain this based on the properties of the data sets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In the 3rd paragraph of the Introduction, it is stated that spatial methods capture local features and spectral ones capture global features; is this still true for the augmentations? Is it possible that some spectral augmentations merely perturb some kind of edges (such as low-degree or deviant ones)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions!
> **Q1: As the authors emphasize the stability of their method, more theoretical and empirical analysis are expected to validate it.**
A1: Here are the theoretical and empirical analyses:
1. **Theoretical analysis**: In Section 5.1, we theoretically analyze the stability of our method against structural perturbations. Specifically, Theorem 2 states that the inverse of the spectral gap bounds the change of EigenMLP, while Lemma 1 shows that the change of MLP is unbounded. Therefore, EigenMLP is more stable than MLP.
2. **Empirical analysis**: In Section 6.5, we conduct an experiment to validate the theoretical analysis, where two types of perturbations are used to evaluate the stability of EigenMLP and MLP. Detailed results are shown in Tables 7 and 8.
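The role of the spectral gap can also be illustrated with a small Davis-Kahan-flavored toy (an illustration of the general phenomenon, not the paper's Theorem 2): under the same perturbation, the leading eigenvector of a matrix with a large spectral gap moves far less than that of a matrix with a small gap.

```python
import numpy as np

def top_vec(M):
    """Leading (largest-eigenvalue) eigenvector of a symmetric matrix."""
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -1]

def alignment_drop(M, E):
    """How much the leading eigenvector rotates under perturbation E
    (sign-insensitive: 0 means unchanged, 1 means orthogonal)."""
    return 1.0 - abs(top_vec(M) @ top_vec(M + E))

E = np.array([[0.0, 0.1], [0.1, 0.0]])   # one small symmetric perturbation
large_gap = np.diag([1.0, 5.0])          # spectral gap 4
small_gap = np.diag([1.0, 1.2])          # spectral gap 0.2

drop_large = alignment_drop(large_gap, E)
drop_small = alignment_drop(small_gap, E)  # far larger rotation
```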
**To further verify the adversarial robustness**, we conduct a new experiment that injects adversarial edges into the graph structures and tests the performance of Sp2GCL and other GCL methods on the Pubmed dataset. We consider three competitive baselines, GRACE [1], SPAN [2], and SP-AGCL [3].
Specifically, we consider two graph adversarial attacks, i.e., Nettack [4] and Metattack [5]. To perform a fair comparison, we employ the data generated by Pro-GNN [6]. The training, validation, and testing nodes are randomly divided in a ratio of 1:1:8. The results are shown below, where Sp2GCL consistently learns stable representations against various adversarial attacks, while other GCL methods are more vulnerable.
| | Clean | Nettack ($n=5$) | Metattack ($p=0.2$) |
:-: | :-: | :-: | :-:
| GRACE | **85.9±0.1** | 74.5±1.3 ($\downarrow$13.3%) | 71.4±0.2 ($\downarrow$16.9%) |
| SPAN | 81.5±0.3 | 76.4±2.1 ($\downarrow$6.3%) | 72.7±0.6 ($\downarrow$10.8%) |
| SP-AGCL | 85.5±0.3 | 78.1±1.6 ($\downarrow$8.7%) | 75.1±0.5 ($\downarrow$12.2%) |
| Sp2GCL | 83.3±0.5 | **79.1±1.7 ($\downarrow$5.0%)** | **75.4±0.4 ($\downarrow$9.5%)** |
[1] Deep Graph Contrastive Representation Learning.
[2] Spectral Augmentation for Self-Supervised Learning on Graphs. ICLR 2023.
[3] Similarity Preserving Adversarial Graph Contrastive Learning. KDD 2023.
[4] Adversarial Attacks on Neural Networks for Graph Data. KDD 2018.
[5] Adversarial Attacks on Graph Neural Networks via Meta Learning. ICLR 2019.
[6] Graph Structure Learning for Robust Graph Neural Networks. KDD 2020.
> **Q2: I feel confused about the introduction of Fourier features in Line 169; the reasons and influences of them could be more detailed.**
A2: Before introducing the role of Fourier features, we first explain the role of eigenvalues. Eigenvalues are instrumental in addressing the basis-ambiguity issue because they are equivariant to rotations of the eigenvectors, i.e., $\mathbf{U}\mathbf{Q} \cdot (\Lambda\mathbf{Q})^{\top} = \mathbf{U} {\Lambda}^{\top}$, where $\mathbf{Q}$ is an arbitrary rotation (orthogonal) matrix.
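This equivariance can be checked numerically. The sketch below (with an assumed random orthogonal $\mathbf{Q}$) verifies that the product is unchanged when the eigenbasis is rotated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric matrix and its eigendecomposition A = U diag(lam) U^T.
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2
lam, U = np.linalg.eigh(A)
Lam = np.diag(lam)

# Random orthogonal matrix Q (a rotation of the eigenbasis).
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

# (U Q)(Lam Q)^T = U Q Q^T Lam^T = U Lam^T: invariant to the rotation.
left = (U @ Q) @ (Lam @ Q).T
right = U @ Lam.T
assert np.allclose(left, right)
```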
Notably, the raw eigenvalues tend to assign greater significance to higher frequencies, whereas in practice low-frequency information is also of considerable importance [7]. Using the raw eigenvalues therefore falls short of optimal results. To overcome this limitation, it becomes necessary to learn a set of new eigenvalues to reweight the eigenvectors.
In this paper, we choose to learn new eigenvalues from the Fourier features of the raw eigenvalues, i.e., $\lambda_{new}=[\sin(\lambda), \cos(\lambda), \ldots, \sin(T\lambda), \cos(T\lambda)]\mathbf{W}$. This design has been widely used in other fields; for example, the Transformer uses positional encoding to preserve the order information of input tokens [8].
The advantages of Fourier features are two-fold:
- Fourier features provide a multi-scale representation of scalar eigenvalues and let neural networks learn high-frequency information [9].
- Fourier features are bounded in [-1, 1], which are more stable than other methods, such as polynomials.
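A minimal sketch of this encoding (the learned weight matrix $\mathbf{W}$ is omitted, and `fourier_features` is an illustrative name, not the paper's implementation):

```python
import numpy as np

def fourier_features(lam, T=4):
    """Multi-scale encoding [sin(t*lam), cos(t*lam)] for t = 1..T.

    lam: 1-D array of eigenvalues; returns shape (len(lam), 2*T).
    """
    t = np.arange(1, T + 1)
    angles = np.outer(lam, t)                       # (n, T)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

lam = np.linspace(0.0, 2.0, 5)   # toy eigenvalues of a normalized Laplacian
phi = fourier_features(lam, T=4)
assert phi.shape == (5, 8)
assert np.all(np.abs(phi) <= 1.0)   # bounded in [-1, 1], unlike polynomials
# The new eigenvalues would then be phi @ W for a learned weight matrix W.
```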
[7] Revisiting Graph Neural Networks: All We Have is Low-Pass Filters.
[8] Attention Is All You Need. NeurIPS 2017.
[9] Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. NeurIPS 2020.
> **Q3: The performances are not always competitive in the experiment section; it may help to explain based on the properties of the datasets.**
A3: In some chemical datasets, the performance of EigenMLP drops slightly. This may be due to the fact that the properties of the molecule are determined by some important substructures [10] rather than the global structure. However, eigenvectors encode the global position information and are not good at modeling substructures. A possible solution is to use data augmentation to mask some nodes and obtain the eigenvectors of subgraphs.
[10] Convolutional networks on graphs for learning molecular fingerprints. NeurIPS 2015.
> **Q4: Is it still true for the augmentations? Is it possible that some spectral augmentations merely perturb some kinds of edges (such as low-degree or deviant ones)?**
A4: We think this statement still holds for graph augmentations. According to the definition of eigenvalue decomposition, the graph structure is composed of different eigenspaces, i.e., $\mathbf{A}=\mathbf{U} \Lambda \mathbf{U}^{\top} = \sum \lambda_{i} u_{i} {u_{i}}^{\top} $.
Because the eigenvector $u_{i}$ is a dense vector, the induced eigenspace $u_{i} {u_{i}}^{\top}$ is a dense matrix. Therefore, a small perturbation in the eigenvalues will result in a global change in the graph structure, i.e., $\Delta \mathbf{A} = \Delta \lambda_{i} u_{i} {u_{i}}^{\top}$.
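This global effect is easy to verify numerically. The sketch below uses a small path graph as an assumed example; perturbing a single eigenvalue produces a rank-one but fully dense change to the adjacency matrix:

```python
import numpy as np

# Path graph on 6 nodes: a sparse adjacency structure.
n = 6
A = np.zeros((n, n))
for j in range(n - 1):
    A[j, j + 1] = A[j + 1, j] = 1.0
lam, U = np.linalg.eigh(A)

# Perturbing eigenvalue i by delta changes A by delta * u_i u_i^T,
# a rank-one update that touches every entry of the matrix.
i, delta = 2, 0.1
dA = delta * np.outer(U[:, i], U[:, i])
assert np.linalg.matrix_rank(dA) == 1
assert np.all(np.abs(dA) > 0)   # every entry of A is affected
```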
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification from authors. I have carefully read the rebuttal, which addressed most of my concerns. I will raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your appreciation
Comment: We express our sincere gratitude for your response. Thanks for your appreciation of our paper. | Rebuttal 1:
Rebuttal: We extend our gratitude to all the reviewers for their valuable feedback and insightful suggestions. We have diligently addressed the majority of the questions and suggestions raised during the official review process, and have provided comprehensive responses to individual reviewers in the corresponding rebuttals.
We are delighted to be recognized for our efforts in this research. We would like to express our appreciation to Reviewer 9zn6 for acknowledging the novelty and significance of our model in the context of spectral-based methods. Our thanks also go to Reviewer xYrm for endorsing our motivation and theoretical contributions. We thank Reviewer QJUG for the helpful suggestions. Lastly, we would like to convey our gratitude to Reviewer ieZM for their strong acknowledgment of the novelty and effectiveness of our work. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Model Sparsity Can Simplify Machine Unlearning | Accept (spotlight) | Summary: This paper considers the benefit of leveraging sparsity to improve standard unlearning techniques. They empirically verify, across a wide range of datasets and architectures, that sparsity benefits unlearning.
Strengths: 1. The recap of this paper on the unlearning literature and metrics is very thorough and helpful for readers who are more unfamiliar with the setting of unlearning.
2. The intuition of this paper is quite elegant and helpful. The treatment of different pruning methodologies is insightful.
3. The empirical results are seemingly quite promising.
Overall, this paper uses a simple idea to yield strong benefits. While a theoretical analysis would have been nice, the paper is strong as it is. I therefore vote strongly for acceptance.
I also believe this work is missing a citation. Pruning has also been studied in relation to generalization error (see "Generalization Bounds for Magnitude-Based Pruning via Sparse Matrix Sketching"). This, however, does not affect my score.
Weaknesses: 1. There is a slight inconsistency with the empirical results. There are times when sparsity significantly hurts the unlearning. There does not seem to be an observable pattern to this. I believe such an analysis is necessary.
2. Is it possible for the authors to conduct a small experiment to see how the effect of sparsity on unlearning scales with model size? For example, with varying Resnet sizes, does the effect of sparsity on unlearning change? I think this is a slight concern for me.
3. There are no theoretical analyses here. However, I won't be too harsh about this as the paper does mention this in the limitations.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. My main question is, do the readers have an intuition for when sparsity helps or hurts the unlearning process? Across several of the experiments, while it is true that sparsity generally improves unlearning, there are several cases where this is not the case. Are there patterns the authors noticed? For a practitioner, I think a deeper understanding of this is helpful.
2. It seems that most of the architectures are restricted to convolution-based architectures. Have the authors tried extending this to other architecture, such as transformers?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I would have liked it if the authors had recognized the slight inconsistency of their empirical results. However, on the whole, the limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your insightful comments that precisely recognize the strengths of our work. We are particularly excited about citing the referenced paper that establishes a connection between model pruning and generalization. Below, we offer our detailed responses to your comments, categorized by **[W]** for weaknesses and **[Q]** for questions.
**Response to W1, Q1**: Thank you for raising these questions.
As previously indicated in the literature [R4, R9] and highlighted within our paper (e.g., Line 145, Tab. 3, and Fig. 3), the effectiveness of MU is assessed by its performance gap to the gold-standard retrained model (Retrain). Following this criterion, our experimental results (e.g., Tab. 2 and 3) consistently demonstrate that incorporating sparsity enhances unlearning efficacy (UA, MIA-Efficacy) and narrows the performance gap (i.e., the numbers marked by the blue color in Tab. 2 and 3) between approximate unlearning and Retrain. In **Tab. R3** presented in the attached PDF, we illustrate the averaged unlearning performance disparity against Retrain across various methods. The advantage of sparsity in unlearning is evident. On the other hand, we acknowledge that an excessively aggressive sparsity choice could potentially compromise generalization performance and/or remaining accuracy (RA). This tradeoff is akin to the one encountered in model pruning alone. Nevertheless, with an appropriate sparsity level or the integration of soft sparsity-aware regularization, we can achieve significant gains in unlearning efficacy without substantially sacrificing generalization.
**Response to W2:** Thank you for your suggestion. In response, we conducted additional experiments involving both Resnet20s and Resnet50 on CIFAR-10, in addition to ResNet-18 in the paper (Tab. 3). The outcomes of these experiments are detailed in **Tab. R1** and **Tab. R2** of the attached PDF. In these new experiments, we assess the performance of both the "prune first, then unlearn" approach and the "sparsity-aware unlearning" technique (note that the latter is applied to the dense model rather than a predefined sparse model). During the rebuttal period, we opted to exclude the IU-based unlearning baseline due to its considerable computational demands and the challenge of tuning hyperparameters for optimal MU performance. Notably, our results reveal that across different model sizes, sparsity consistently diminishes the unlearning gap with Retrain (indicated by highlighted blue numbers, where smaller values are preferable). It's worth noting that while both ResNet20s and ResNet50 benefit from sparsity, the suggested sparsity ratio is 90% for ResNet20s and slightly lower than 95% for ResNet50 when striking the balance between MU and generalization.
**Response to W3:** Thank you for your valuable feedback. We acknowledge that while Prop. 2 theoretically demonstrates the reduction in unlearning error of gradient ascent-based unlearning with the presence of model sparsity, this result does not universally apply to all the unlearning methods we examined, despite their promising empirical performance. To address this, we will incorporate a discussion of this limitation in the Limitations section of our paper (Appendix D).
**Response to Q2:** Thank you for your insightful suggestion. We've included an additional experiment in our study, focusing on the application of Swin Transformer to CIFAR-10. This new experiment is presented in **Tab. R5** of the attached PDF. To facilitate a comparison between the assessed approximate unlearning methods (including the FT baseline and the proposed $\ell_1$-sparse MU, which also uses the FT loss as the unlearning objective function) and Retrain, we train the transformer from scratch on CIFAR-10. This could potentially result in a decrease in testing accuracy when compared with fine-tuning on a pre-trained model over a larger, pre-trained dataset.
In Tab. R5, the results are noteworthy: in both class-wise forgetting and random data forgetting contexts, substantial enhancements were observed using our proposed $\ell_1$-sparse MU, leading to a much smaller performance gap with Retrain compared to FT. In particular, class-wise forgetting exhibited a remarkable 90.24% increase in UA, accompanied by a slight reduction in RA.
Motivated by this comment, we will also explore the application of our approach to language models in the future. We will incorporate this aspect into our Conclusion and Limitations sections.
---
Rebuttal Comment 1.1:
Title: Thank you to the authors
Comment: I acknowledge this response. I appreciate the improved experiments with larger model sizes. I maintain my score and vote to accept this paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Dear Reviewer 8xJa,
We sincerely appreciate your prompt response and are pleased that you found our additional experiments beneficial. We're thrilled that your score will be maintained.
Thank you once more for your valuable inputs in enhancing our submission.
Best regards, | Summary: The authors propose to consider network sparsification as a way to improve machine unlearning (MU), the task of unlearning evidence of a given set of training samples from a neural network. In particular, they consider LTH based pruning schemes as well as a regularized training loss to complement standard MU approaches and show that this enhances the unlearning process in a variety of benchmarks and evaluation metrics. For gradient ascent-based MU, they further show an improvement in an error bound with mask-based sparsification, such as LTH pruning. Overall, this paper introduces NN sparsification to the MU domain and shows the effectiveness of this combination on standard benchmarks.
EDIT: After the fruitful discussion and additional experiments, I raise my score to 7 (Accept).
Strengths: To the best of my knowledge, this is the first attempt to combine the recent advances of networks sparsification with the field of machine unlearning and hence presents itself as an original work.
The paper is clearly written and organized, which greatly eases the reading experience. The experiments are on a variety of tasks, benchmark datasets and models, which mostly support the success of combining model sparsity approaches for machine unlearning. To evaluate the successful unlearning apart from standard metrics, the authors propose a membership inference attack-based metric that allows a proper evaluation of how well a model forgot a given sample, which is an interesting and intuitive way to measure MU success.
Weaknesses: The overall idea of combining sparsification with MU is not necessarily innovative, especially given that it is currently applied in almost any subfield given the hype around the LTH. While this itself is not bad---often the simplest ideas can make a lasting impact---I feel the authors make their life a bit too easy by considering all MU approaches at once and speaking about general statements that often do not hold across datasets and the breadth of presented results. Especially since the focus is on the extensive results rather than theoretical insights, I would expect a much more faceted discussion, with the different effects of sparsification on different MU approaches that is evident from the tables (see Questions). Similarly, I would expect sparsity-aware unlearning, as the suggested algorithm of this paper, to be compared to in all experiments.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Given the focus of this work on the experiments (i.e., what is the impact of sparsification beyond GA approaches), my main concerns are around the results (tables are by the way very tiny), my questions are in order of appearance.
1. The bottom line of Tab. 2 drawn in the paper is that the “linearly decreasing gamma scheduler outperforms other schemes”, which I do not see, as neither approach is consistently better in all metrics; for example, in UA and MIA the constant gamma is better. How do you explain this difference and what are the potential effects?
2. Table 3 is broadly summarized in 5.2 as generally showing improvement of metrics through sparsification. Yet, for random data forgetting we see that the sparsification partially reduces UA and MIA, depending on the method and especially MIA is bad. For SVHN, UA and MIA completely break down. For CIFAR100, it becomes evident that we essentially have a strong trade-off between RA,TA and UA,MIA performance between original model and sparsified model. The paper would greatly benefit from insights and critical discussion of these results.
3. One of the biggest issues I have is that sparsity-regularized unlearning does not appear in any of these experiments as comparison. This is highly suspicious, as this was introduced in this paper as another approach of model sparsification. Given the limited success of l1-based pruning in the LTH field, I would really like to see how this performs here. Moreover, the results in Fig.5 are a comparison against FT, claiming to beat FT in terms of UA and MIA. But FT was literally described as trading of UA and MIA for better RA and TA in the sparse regime before and hence is likely the easiest to beat. Please properly compare your method against all other methods on the considered benchmarks.
4. Trojan model cleanse: Again, why is sparsity-aware unlearning not in the comparison? What would be its performance?
5. Why do you not compare to finetuning with LTH sparsification?
6. Proof of Prop. 2: I do not fully understand how you build the “diagonal of the mask $\mathbf{m}$” after equation A12 so that $\mathrm{diag}(\mathbf{m})\,\theta$ actually corresponds to the original $\mathbf{m} \odot \theta$. I feel this is an essential step from the known proof towards your proposition. Could you please explain in more detail?
Minor:
1. Figure 1 is not instructive; from the visualization it is not clear what the left part ‘Data’ has to do with the rest. It is also unclear how pruning and unlearning are combined here; it is an overly simplistic pictorial of the title. I would suggest removing it to gain space for a proper discussion of the results.
2. In proposition 1, L has not been introduced before.
3. Sometimes you reference your methods as sparsity-regularized unlearning, sometimes sparsity-aware unlearning. Please be consistent.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: As indicated by my questions, the limitations of this paper are not properly discussed in terms of the obtained results. In particular, differences in trends within a benchmark and across different datasets are not discussed, as well as the proposed regularized approach not compared to the prune-and-unlearn approach, which raises some concerns about limitations.
That being said, there is no free lunch, so a *critical* comparison and *appropriate discussion* of results and limitations answering my raised concerns would benefit the paper and consequently improve my rating.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the very insightful comment. Below, we provide our point-to-point responses to the comments, denoted by [W] for weaknesses and [Q] for questions.
**W1:** The overall idea of combining sparsification with MU is not necessarily innovative.
**Response to W1:** We are sorry to hear the concern about the novelty of our work. Please allow us to make the following clarifications.
1. As Reviewer r13y pointed out in the strengths section *"To the best of my knowledge, this is the first attempt to … hence presents itself as an original work."*, our work offers an innovative concept to probe the effect of sparsity on unlearning performance, which is substantiated by our thorough investigation across various approximate unlearning methods and metrics.
2. We respectfully disagree that *"our life is made a bit too easy by considering all MU approaches at once and speaking about general statements that often do not hold across datasets"*. It's important to note that we measure unlearning performance based on the smallest performance gap with the gold-standard retrained model (Retrain), as emphasized in Line 145 and indicated by blue numbers in Tab. 2 and 3. Our findings clearly demonstrate that sparsity indeed plays a pivotal role in reducing the unlearning gap across datasets (further elaborated in our response to the next question). Furthermore, our contribution goes beyond incremental advancements in the model pruning (or LTH) field. Our investigation into "Why sparsity for MU" encompasses crucial aspects:
a. We provided theoretical insights into the benefit of model sparsity (Prop. 2).
b. We showed that what is best for LTH (IMP) does not necessarily lead to the best unlearning performance and the pruning criteria for MU need to be carefully examined (Lines 220-257 and Fig. 4).
c. The proposed sparsity-aware unlearning is also a great improvement in integrating sparsity with MU. The above, together with our extensive empirical studies, clearly shows the novelty of our work.
**Response to Q1:** Thanks for raising this question. However, we believe this stems from a misunderstanding of what constitutes "good" unlearning performance. As previously mentioned in the literature [R4, R9] and emphasized within our paper (e.g., Line 145, Tab. 3, and Fig. 3), the effectiveness of MU is gauged by how close its performance is to that of the gold-standard retrained model (Retrain). We invite the reviewer to revisit Tab. 2 for a detailed comparison. The decaying schedule, as evident in the table, results in the smallest performance gap with Retrain across the MU metrics highlighted in blue: (0.06, 0.41, 2.61, 3.16).
**Response to Q2:** Given our prior explanation regarding Q1, we invite the reviewer to reassess the outcomes in Tab. 3. To facilitate a more accessible comparison, we extended it to **Tab. R3**. This updated table introduces a consolidated metric termed "Disparity Average," which essentially computes the average of the performance gaps between each unlearning method and Retrain across all metrics. By consulting this metric, it becomes evident that sparsity consistently yields advantages for different MU methods.
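For concreteness, the "Disparity Average" metric can be sketched as follows; the numbers below are placeholders for illustration, not values from Tab. R3:

```python
# Mean absolute gap between an unlearning method and the gold-standard
# retrained model, averaged over the reported metrics
# (UA, RA, TA, MIA-Efficacy). All numbers here are made up.
retrain = {"UA": 5.4, "RA": 99.9, "TA": 94.3, "MIA": 13.0}
method  = {"UA": 4.8, "RA": 99.5, "TA": 93.8, "MIA": 10.9}

disparity_avg = sum(abs(method[k] - retrain[k]) for k in retrain) / len(retrain)
# Smaller is better: the method behaves more like the retrained model.
assert abs(disparity_avg - 0.9) < 1e-9
```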
**W2:** I expect sparsity-aware unlearning to be compared in all experiments.
**Response to W2, Q3, Q4:** Thank you for raising these questions. We apologize for any confusion regarding the experiments on sparsity-aware unlearning.
1. We opted not to directly juxtapose "sparsity-aware unlearning" with "prune first, then unlearn" due to the fact that the former is employed on a dense model (unlike OMP, L1 regularization doesn't involve a hard thresholding operation on model weights), while the latter operates on a sparse model. Given the distinct initial models for MU, we were cautious about the fairness of such a comparison. Nonetheless, in response to the reviewer's query, we have extended Tab. 3 to Tab. R3 to encompass the performance outcomes of "sparsity-aware unlearning."
2. The reason for only comparing sparsity-aware unlearning with FT in Fig. 5 lies in the fact that the objective function used in sparsity-aware unlearning is specified by the fine-tuning (FT) loss; see Line 254. However, we did extend the scope of comparisons to include sparsity-aware unlearning vs. Retrain, FT, and IU (influence unlearning) in Tab. A6 (Appendix). We refrained from incorporating GA due to its substantial generalization drop. For a complete comparison, including GA, please refer to **Tab. R6**.
3. In Fig. 6, the omission of sparsity-aware unlearning in model cleansing stems from our intention to demonstrate the unlearning performance and the backdoor attack success rate for different models' sparsity levels. Since sparsity-aware unlearning is applied to a dense model (lacking the hard thresholding for model weight sparsity), it contributes just a single data point at sparsity = 0% in Fig. 6. **Fig. R1** includes the absent model cleansing performance using sparsity-aware unlearning. Clearly, it can also effectively remove the backdoor effect while largely preserving the model's generalization.
**Response to Q5:** We would like to clarify that Fig. 4 has a comparison within the context of the LTH regime. Specifically, the label **Dense** signifies the FT-based model unlearning method applied directly to the dense model, while **IMP** represents the pruning approach recommended by LTH. It's worth noting that, in the figure, even though IMP exhibits enhanced generalization performance, it does not necessarily translate to improved unlearning efficacy (measured by UA and MIA-efficacy) in comparison to the 'Dense' method. In contrast, the OMP and SynFlow pruning techniques, which exhibit reduced reliance on training data, demonstrate significant improvements in model unlearning. Further details regarding this analysis can be found in Lines 220-257.
For the response to the Q6 and minor questions, please refer to the **Supplement Response to Reviewer r13y** part of the General Response.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: Thank you for the detailed response. I acknowledge the additional work put into this rebuttal. Below, my answer to the rebuttal to make clear which points need further discussion.
Weakness W1:
1. Given the original submission, as explained in my review, I disagreed with the "thorough investigation", the comparisons seemed rather selective. I appreciate the additional information on experiments provided in the rebuttal and consider this point resolved.
2. + Q1: I would appreciate an explanation on why overall performance should be close to the retrained model (rather than measured in absolute terms). What is the rationale behind similar performance serving as proxy for unlearning success?
W2:
1. I appreciate the effort. While the underlying approach is different, the resulting model is the same: a sparsified network. In theory, one could even control the sparsity of MU through \gamma (from the paper: " | Summary: This paper studies the machine unlearning problem from the perspective of model sparsification. Specifically, the paper proposes two types of model sparsification methods: data-independent (e.g., OMP) and data-dependent (e.g., sparsity-aware unlearning). An extensive set of simulations are shown in the paper to validate the unlearning performance for the sparse models.
Strengths: This paper performs a comprehensive empirical study on the influence of model sparsity on the unlearned model performance, which is an intuitive but less explored direction in machine unlearning literature. The authors give some theoretical insights on how model sparsity will affect the "unlearning error" in Proposition 2, although the loss there would be restricted to be convex and the unlearning mechanism is gradient ascent. For the simulations, different types of unlearning methods are evaluated with the same pipeline to study the unlearned model performance under dense and sparse regimes. Furthermore, the authors show two possible applications of machine unlearning, namely data cleaning and transfer learning, where the model sparsity may also help with the final model performance.
Weaknesses: Although this paper provides extensive simulations to show that sparse models can help increase the unlearning accuracy and the MIA efficacy while maintaining good remaining accuracy and testing accuracy, it still does not fully answer the very basic question "How to decide the level of model sparsity?" It is obvious that when we set the sparsity to 100%, the unlearned model should perform perfectly on UA and MIA-Efficacy, so there is no surprise that they should still perform well at 95% sparsity. As for the good performance on RA and TA, such observations have already been well studied in previous literature on the lottery ticket hypothesis and model compression. So if one wants to incorporate sparse models for unlearning purposes, the first and most important question would be the sparsification level. Unlike previous problems where a sparse model is good when it performs well on the training statistics, in unlearning we do not know the forget set in advance, so we are not able to decide beforehand how well a sparse model will behave regarding the UA and MIA-Efficacy. The authors try to answer this question via the sparsity-aware fine-tuning, but again it falls back to deciding the trade-off coefficient $\gamma$, and the current regularization scheduler is purely heuristic with little insight. Nevertheless, I understand that this is a challenging question even in general LTH problems and there would be no easy solution for that, especially in the context of unlearning.
As for the presentation of the paper, sometimes the notations or the terms are introduced without explanations. For example, in Proposition 1, the term $\mathbf{1}/N$ is used without explaining that $\theta_o=\theta(\textbf{1}/{N})$; in Proposition 2, what does the learning error $e(\mathbf{m})$ really mean? Also, Proposition 1 does not seem to have any relationship with the remaining content and it is just a reformulation of the results in previous literature, so maybe it can be moved to the appendix to save space for the figures and tables.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Overall this is a good empirical paper and I do not have questions about the simulations. Please see the weakness section for the concerns on methodology.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I think the biggest limitation is that it would be hard to perform theoretical analysis for the update rule Eq (3) like those influential function-based methods which require the loss to be strongly convex. Also, the current simulations are all on computer vision tasks with CNN-based models, which limits the application domain. CV models are known to be redundant and remember training samples within coefficients, so it would be good to try other tasks in different domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer hHiz for providing a detailed summary of our strengths. Below, we present our detailed responses to the comments, indicating **[W]** for weaknesses and **[Q]** for questions.
**W1:** How to decide the level of model sparsity? And how to decide the trade-off coefficient $\gamma$?
**Response to W1:** Thank you for posing these insightful questions. As noted by Reviewer hHiz, *"Nevertheless, I understand that determining the best sparsity level is a challenging question even in general LTH problems."* Indeed, identifying the optimal sparsity level for unlearning can be intricate, especially considering variations across datasets and architectures. For instance, similar to LTH, our empirical findings indicate that the optimal sparsity level to improve the efficacy of MU on ImageNet is approximately 80%, different from the 95% observed for CIFAR-10. While the optimal sparsity level for MU may differ, we possess some general intuition to aid in sparsity selection. The main criterion is to pinpoint a sparsity level that improves the unlearning efficacy while maintaining the generalization performance comparable to that of the original model.
Furthermore, the sparsity-aware unlearning method (Eq. 3) could help circumvent the imposition of a strict threshold on model sparsity. As clarified in Lines 277-279, the optimal choice for the tradeoff coefficient $\gamma$ tends to align with a decaying scheduler, as indicated by the minimized unlearning performance gap with Retrain in Tab. 2. This schedule underscores the advantage of emphasizing sparsity enhancement during the initial stages of unlearning, gradually transitioning to heightened attention on refining fine-tuning accuracy over the retained dataset.
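To make the decaying-scheduler idea concrete, here is a minimal, self-contained sketch (not the paper's implementation) of $\ell_1$-regularized unlearning via proximal gradient descent on a toy retained-data loss; the coefficient schedule and all numbers are illustrative assumptions, chosen so that sparsity is emphasized early and fine-tuning accuracy later:

```python
import numpy as np

def soft_threshold(theta, tau):
    # Proximal operator of tau * ||theta||_1 (component-wise soft-thresholding).
    return np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)

def sparse_unlearn(theta0, grad_retain, gamma0=1e-2, lr=0.1, steps=100):
    """Proximal gradient descent on the retained-data loss plus gamma_t * ||theta||_1,
    with gamma_t decaying linearly (one plausible decaying scheduler)."""
    theta = theta0.copy()
    for t in range(steps):
        gamma_t = gamma0 * (1.0 - t / steps)  # sparsity emphasized early, accuracy later
        theta = soft_threshold(theta - lr * grad_retain(theta), lr * gamma_t)
    return theta

# Toy retained-data loss 0.5 * ||theta - theta_star||^2, so its gradient is theta - theta_star.
theta_star = np.array([1.0, 0.0, -2.0, 0.0])
theta_unlearned = sparse_unlearn(np.array([0.9, 0.3, -1.8, -0.2]),
                                 lambda th: th - theta_star)
```

Because the $\ell_1$ weight vanishes by the final steps, the iterate converges close to the unregularized minimizer while the early thresholding drives small coordinates toward zero.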
**W2:** In Prop. 1, the term 1/N is used without explaining. In Prop. 2, the term e(m) lacks explanation.
**Response to W2:** Thank you for the careful reading. $w=1/N$ signifies the uniform weights employed for Empirical Risk Minimization (**ERM**) training. Thus, $\theta(1/N)$ pertains to the original model trained via ERM, as elaborated in Line 87.
Regarding $e(m)$ in Prop. 2, it pertains to the unlearning error when comparing the Gradient Ascent (**GA**)-based unlearning with the retrained model under the model sparsity mask $m$, see Eq. (A12) in Appendix B. When $m=1$ (no pruning involved), this concept was initially introduced in [Sec. 5.1, R11] to calculate the reverse GA steps that need to be reintegrated into the trained model for unlearning.
**W3:** Prop. 1 is just a reformulation of results in previous literature and should be moved to the appendix.
**Response to W3:** We will move Prop. 1 to the Appendix. However, there's a specific reason behind the detailed exposition of IU (Prop. 1) in the initial submission. Our derived IU approach exhibits a minor yet crucial distinction from existing methods in the literature, such as [Eq. 1, R5] and [Eq. 7, R12]. As outlined in Lines 132-134, our work has accounted for the normalization effect of data influence weights ($\mathbf 1^T \mathbf w = \mathbf 1$) during the IU approach derivation. In practical terms, we have observed that IU with weight normalization outperforms existing IU methods, given their sensitivity to hyperparameter tuning.
**Limitations:** The current simulations are all on computer vision tasks with CNN-based models.
**Response to Limitations:** Thank you for your insightful suggestions. We've included an additional experiment in our study, focusing on the application of the Swin Transformer to the CIFAR-10 dataset. This new experiment is presented in **Tab. R5** of the attached PDF. To facilitate a comparison between the assessed approximate unlearning methods (including the FT baseline and the proposed $\ell_1$-sparse MU) and Retrain, we train the transformer from scratch on CIFAR-10. This could potentially result in a decrease in testing accuracy compared with fine-tuning a model pre-trained on a larger dataset.
In Tab. R5, the results are noteworthy: substantial enhancements were observed using our proposed $\ell_1$-sparse MU, leading to a much smaller performance gap with Retrain compared to FT. In particular, class-wise forgetting exhibited a remarkable 90.24% increase in UA, accompanied by a slight reduction in RA.
Motivated by this comment, we will also explore the application of our approach to language models in the future. We will incorporate this aspect into our Conclusion and Limitations sections.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: I would like to thank the authors for preparing the detailed responses and additional experiments. I just have one quick follow-up question. So for the IU approach, are you trying to say that you are considering an averaged-ERM instead of the sum-ERM in previous works (i.e., [1] section 3.1)? And that makes a lot of difference? If so, I am interested to learn for what metrics the new IU method outperforms existing ones.
[1] Chuan Guo, Tom Goldstein, Awni Hannun, and Laurens Van Der Maaten, “Certified data removal from machine learning models,” arXiv preprint arXiv:1911.03030, 2019.
---
Reply to Comment 1.1.1:
Title: Additional response to Reviewer hHiz (Part 1)
Comment: Thank you for your prompt feedback. Below is our response to the follow-up question.
Yes, there exist differences between IU under sum-ERM and that under ave-ERM. To provide clarity on these differences, let's repeat the notations introduced in Appendix 1.
Recall that $\mathbf w$ signifies the influence weights assigned to training data points. If $w_i = 0$, then the $i$th training point $\mathbf z_i$ will be unlearned. And $L(\mathbf w,\boldsymbol\theta) =\sum_{i=1}^N [w_i L_i (\boldsymbol\theta,\mathbf z_i)]$ represents the weighted ERM loss. This loss corresponds to the ave-ERM when $\mathbf w$ is subject to the simplex constraint (i.e., $\mathbf 1^T\mathbf w=1$ and $\mathbf w\geq\mathbf 0$). By contrast, the sum-ERM does not impose the above constraint. Furthermore, let $\boldsymbol\theta_{\mathrm{o}}$ represent the original model trained through conventional ERM, which uses the weighted ERM loss with $w_i=c$ ($\forall i$) for a positive constant $c$.
Given the unlearning scheme (encoded in $\mathbf w$), the IU approach aims to delineate the model parameter adjustments required by MU from the initial model $\boldsymbol\theta_{\mathrm{o}}$. Such a model weight modification is represented as $$\Delta(\mathbf{w})=\boldsymbol\theta(\mathbf w)-\boldsymbol\theta_{\mathrm{o}},$$ where $\boldsymbol\theta(\mathbf w)$ denotes the Retrain solution of using either ave-ERM or sum-ERM given $\mathbf w$, i.e., $\boldsymbol\theta(\mathbf w):=\arg\min_{\boldsymbol\theta} L(\mathbf w,\boldsymbol\theta)$ with $\boldsymbol\theta$ being the optimization variables.
The difference between ave-ERM and sum-ERM would play a role in deriving $\Delta(\mathbf{w})$, since IU resorts to the first-order **Taylor expansion** of $\boldsymbol\theta (\mathbf w)$ (which is viewed as a function of $\mathbf w$).
* When the sum-ERM [R1] is considered, the linearization point is typically given by $\mathbf w = \mathbf 1$. This leads to $$\begin{align*}\Delta^{\mathrm{(sum)}}(\mathbf{w})&=\boldsymbol\theta(\mathbf w)-\boldsymbol\theta(\mathbf 1)\\&\approx\boldsymbol\theta(\mathbf 1)+\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}\Big|_{\mathbf w=\mathbf 1}(\mathbf w-\mathbf 1)-\boldsymbol\theta(\mathbf 1)\\&=\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}\Big|_{\mathbf w=\mathbf 1}(\mathbf w-\mathbf 1),\end{align*}$$ where we used the fact that $\boldsymbol\theta_{\mathrm{o}}=\boldsymbol\theta(\mathbf 1)$ for sum-ERM, and
$\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}$ is known as implicit gradient [R2] since it is defined upon an implicit optimization problem $\boldsymbol\theta(\mathbf w)=\arg\min_{\boldsymbol\theta} L(\mathbf w,\boldsymbol\theta)$.
* When the ave-ERM (Appendix 1) is considered, the linearization point is given by $\mathbf w=\mathbf 1/N$. This leads to $$\begin{align*}\Delta^{\mathrm{(ave)}}(\mathbf{w})&=\boldsymbol\theta(\mathbf w)-\boldsymbol\theta(\mathbf 1/N)\\&\approx\boldsymbol\theta(\mathbf 1/N)+\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}\Big|_{\mathbf w=\mathbf 1/N}(\mathbf w-\mathbf 1/N)-\boldsymbol\theta(\mathbf 1/N)\\&=\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}\Big|_{\mathbf w=\mathbf 1/N}(\mathbf w-\mathbf 1/N),\end{align*}$$ where we used the fact that $\boldsymbol\theta_{\mathrm{o}}=\boldsymbol{\theta}(\mathbf 1/N)$ for ave-ERM.
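The ave-ERM linearization can be checked numerically. Below is a small illustrative script (not from the paper) using a ridge-regularized weighted least-squares problem, where $\boldsymbol\theta(\mathbf w)$ has a closed form and the implicit gradient reduces to $\frac{d\boldsymbol\theta}{dw_i}=H^{-1}\mathbf x_i(y_i-\mathbf x_i^T\boldsymbol\theta)$; all problem sizes and the removed index are arbitrary assumptions. It verifies that the first-order IU update moves the original model toward the exact retrained one:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 50, 3, 0.1
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def theta_of_w(w):
    # Minimizer of sum_i w_i (x_i^T theta - y_i)^2 + lam * ||theta||^2 (closed form).
    H = X.T @ (w[:, None] * X) + lam * np.eye(d)
    return np.linalg.solve(H, X.T @ (w * y))

w_ave = np.full(N, 1.0 / N)        # ave-ERM linearization point (simplex weights)
theta_o = theta_of_w(w_ave)        # "original" model

j = 7                              # sample to unlearn
w_new = np.full(N, 1.0 / (N - 1))  # retrain weights stay on the simplex
w_new[j] = 0.0
theta_retrain = theta_of_w(w_new)  # exact Retrain solution

# Implicit gradient at w_ave: d theta / d w_i = H^{-1} x_i (y_i - x_i^T theta).
H = X.T @ (w_ave[:, None] * X) + lam * np.eye(d)
r = y - X @ theta_o
dtheta_dw = np.linalg.solve(H, (X * r[:, None]).T)  # shape (d, N)

theta_iu = theta_o + dtheta_dw @ (w_new - w_ave)    # first-order ave-ERM IU update

err_iu = np.linalg.norm(theta_iu - theta_retrain)
err_none = np.linalg.norm(theta_o - theta_retrain)
```

The sum-ERM variant is analogous with the linearization taken at $\mathbf w=\mathbf 1$; the ridge term is what makes the two weightings genuinely different here, since an unregularized least-squares solution is invariant to rescaling $\mathbf w$.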
Note that the derivation of the implicit gradient $\frac{d\boldsymbol\theta(\mathbf w)}{d\mathbf w}$ has been provided in Appendix 1. | Summary: The paper proposes that model sparsity leads to models that are easier to "unlearn from". The authors discuss in depth the technical measures that are and that should be used to evaluate various methods of unlearning, and suggest and demonstrate that sparsity is an effective tool in boosting these measures across a wide variety of unlearning applications.
Strengths: The paper strongly supports its main claim, with extensive discussion and experimental evaluation that shows beyond much doubt that model sparsity leads to easier and stronger unlearning results in practice.
The writing is extremely clear and all internal referencing and definitions help readers easily follow and understand the motivations and results. I particularly appreciate the use of emphasis and acronyms, and the way in which the paper makes it easy to flip back and forth to find where a term was defined or where a term is used.
Weaknesses and questions below notwithstanding, the paper is quite solid albeit with a narrow scope. I appreciate the time and detail that went into the experimental evaluations.
Weaknesses: I have two main concerns:
1) The authors don't seem to discuss or acknowledge that sparsity seems to lead to simply better (read: better generalizing) models, and as such we would expect better generalization to lead to less dependence on specific subsets of the training data, and thus easier unlearning. In essence, how does sparsity as a proxy for generalization aid unlearning more than other generalization methods? I would of course expect that strong regularization would affect performance, but perhaps other methods for better generalization that maintain performance may also "simplify machine unlearning".
2) My other main concern with this work is the sidelining of the body of work on $\epsilon-\delta$ forgetting. The authors reference this work in their Related Work section (probabilistic, DP), but there are many places within the main text up to that point where I was left wondering how those approaches may stack up. Simple methods such as Guo et al.'s [54] seem like an easy enough place to do a quick comparison. I acknowledge that those methods tend to depend heavily on hyperparameter choices (as stated by the authors), but a lot of that work seems highly relevant. The discussion in Section 2 around Proposition 1 is very suggestive and brings to mind work in Sekhari [57], and this citing paper seems to be trying to solve a similar problem as the authors here with approximating the Hessian inverse:
Deep Unlearning via Randomized Conditionally Independent Hessians. Ronak Mehta, Sourav Pal, Vikas Singh, Sathya N. Ravi. CVPR 2022.
(not major: I wonder if updating a subset of parameters is "similar" in some form to a sparsity approach?)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Why were approximate methods in eps-delta not compared? If the authors strongly feel that this is out of scope I think it needs to be adequately justified.
2. Classwise and random-data methods were primarily evaluated; why not individual samples?
3. Gradient norms are often used to evaluate removal success, any reason why there were excluded?
4. Some main paper experimental results were limited to 95% sparsity, and there are a large number of references to the appendix with additional results.
Minor:
1. Depending on how the authors treat the approximate/DP/eps-delta, those might be included in the "approximate MU methods" in Section 2; I was concerned that this highly relevant work was not mentioned as I was reading through.
2. The metrics described in Section 2 are very similar to the "read-out functions" in [12], might be worth mentioning/referencing.
3. It could be helpful to clearly indicate "higher is better" / "lower is better" for the various metrics, perhaps using simple up or down arrows, in the text and in the tables. I do appreciate the authors' detailed discussion of results, however; careful reading covers all bases. Just thought it may help people skimming.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: 1. It's unclear if sparsity-promoting methods help unlearning moreso than other methods that improve model performance generally.
2. A reasonable set of related work in the form of $(\epsilon,\delta)$ forgetting is largely left un-evaluated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer MsfJ for acknowledging the contributions, soundness, and presentation quality of our paper, and we greatly appreciate the insightful questions. Below, we provide our responses to the comments, denoted by **[W]** for weaknesses and **[Q]** for questions.
**Response to W1:**
As shown in L162-165 and Fig. 2, sparsity indeed benefits model generalization, particularly when using iterative magnitude pruning (**IMP**). However, we refrain from concluding that better generalization simplifies and improves machine unlearning (**MU**), as the method of achieving generalization improvement strongly influences MU performance. For instance, in the comparison of pruning methods shown in Fig. 4, IMP exhibits the best generalization performance compared to other pruning methods (OMP and SynFlow) but leads to the worst unlearning accuracy (with the largest gap from Retrain). The reason is IMP's strong dependence on the forgetting dataset, as stated in L224-225 and L252-255.
Thus, generalization improvement alone may not be a precise indicator for easier unlearning; it depends on the approach used to achieve generalization. To further substantiate this point, we performed additional experiments using Sharpness-aware minimization (**SAM**) [R1] during model training to enhance generalization before unlearning; see **Tab. R4** in the attached PDF. We did not observe a significantly reduced performance gap between FT and Retrain (against the former's variance) when compared to empirical risk minimization (**ERM**) training.
We propose that *if the generalization improvement method does not rely on additional dependence on the forgetting dataset, it could serve as an indicator for easier unlearning*. In such cases, improved generalization may suggest that the model suffers less from spurious correlations [R2] in the training data, potentially aiding in unlearning, as suggested by the reviewer. However, a more comprehensive investigation is warranted. This question poses valuable insights for future research.
**Response to W2 & Q1:** First, we will include a discussion on $\epsilon-\delta$ forgetting in Sec. 2. We will highlight its connection with influence unlearning (**IU**). Recent unlearning works [Sec 2.2, R3], [Sec 5.1, R4] have considered $\epsilon-\delta$ forgetting as part of IU.
Second, our focus in this paper is on efficient approximate unlearning on pre-trained models. However, the MU approach in [R5] requires modifying the model training pipeline and integrating it into the certified data removal process (Algorithms 1, 2 of [R5]). In addition, their MU paradigm is limited to linear classifiers or linear probing, which only updates the linear classification head for DL models. This setup differs from ours, where we investigate unlearning on the full DL model. This limitation was also noted in [Tab. 4, R6; Related work, R7]. Upon reviewing the implementation code of [R5], we found that even in the case of linear probing, they considered binary classifiers rather than the multi-class prediction head used with ResNet. Considering these factors, [R5] may not be an ideal candidate for efficient unlearning comparison, although we are happy to provide further elaboration on the distinctions in the paper.
Third, we sincerely appreciate your suggestion to consider reference [R8]. It indeed provides relevant insights into MU. In that work, they utilized a portion of the parameters to approximate the inversion of the Hessian matrix, enhancing the Hessian-based (IU) method. In contrast, our study focuses on revealing a crucial factor, weight sparsity, which impacts various MU methods. Our research encompasses both practical and theoretical aspects (Sec. 3 & Sec. 5), novel MU methods (Sec. 4), and emerging MU applications (Sec. 5.2).
**Response to Q2:** We chose not to consider individual samples for unlearning due to several compelling reasons.
First, we did not explore unlearning individual samples, as it can lead to substantial variance in unlearning performance depending on the selected forgetting sample. For instance, the UA for an individual sample would be either 100 or 0, resulting in significant variability that hampers meaningful comparisons across different settings or methods.
Additionally, we performed a literature review to validate the prevalence of class-wise and random data forgetting as primary unlearning settings. Supporting evidence for our approach can be found in [Tab. 1, R9], [Fig. 2, R7] for class-wise forgetting, and [Tab. 1, R9] along with [Sec 5.1, R10] for random data forgetting.
Lastly, we focus on random data forgetting and class-wise forgetting due to their direct relevance to the applications discussed in our paper. The former is aligned with the use case of model cleansing, while the latter is particularly relevant for enhancing transfer learning performance.
**Response to Q3**: We admit that *gradient residual norms* (**GRN**) could be a useful metric for evaluating MU methods. Although this was introduced by [R5] to offer insights into approximation errors, we exclude it due to the following reason.
Existing unlearning metrics center around three primary aspects: 1) efficiency (RTE); 2) fidelity (RA, TA); 3) efficacy (UA, MIA-Efficacy). In relation to GRN, it unveils the convergence of model retraining over the retained dataset, making it akin to a fidelity metric similar to RA (remaining accuracy). As evidenced in [Fig. 5, R8], GRN exhibits a closely aligned trend with RA under the same architecture, albeit with greater variance. Hence, we opt for RA to provide a more intuitive performance measure.
**Response to Q4:** We will move more results from the appendix to the main paper in our revised version.
For the response to the minor questions, please refer to the **Supplement Response to Reviewer MsfJ** part of the General Response.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' responses to my questions and others. I am quite satisfied with the responses and have increased my score.
I do have some concern about how much work has been done during the review/rebuttal period, but I don't think that takes away from the authors' quality submission. Unfortunately that's how the game is played now...
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Dear Reviewer MsfJ,
Thank you for your swift response and for recognizing our efforts in addressing your previous questions. We are pleased to hear that our responses have been satisfactory. We will certainly make the revisions as discussed to enhance the quality of our work.
Thanks,
Authors, | Rebuttal 1:
Rebuttal: Dear Reviewers, ACs, and PCs:
We are glad to receive valuable and constructive comments from all the reviewers. We have made a substantial effort to clarify reviewers' doubts and enrich our experiments in the rebuttal phase. In our responses, **Tab. R**xx or **Fig. R**xx refers to the new **R**ebuttal results in the attached PDF, while **Tab. A**xx or **Fig. A**xx refers to the existing results in the **A**ppendix. Below is a summary of our responses:
**Reviewer [MsfJ](https://openreview.net/forum?id=0jZH883i34&noteId=JyAsNNFA2c):**
1. We endeavored to elucidate the relationship between model generalization and unlearning efficacy, supplemented by additional experiments (**Tab. R4**) to substantiate our viewpoint.
2. We clarified the importance of $\epsilon-\delta$ forgetting and the rationale behind not using it as a comparison method in our work.
3. We provided an explanation for our decision not to use individual samples as an unlearning setting.
4. We provided clarification regarding our decision not to use the gradient residual norm as an evaluation metric.
**Supplement Response to Reviewer [MsfJ](https://openreview.net/forum?id=0jZH883i34&noteId=JyAsNNFA2c):**
- **Response to Minor Q1:** Thank you for bringing this to our attention. We will enhance Sec. 2 by offering a more comprehensive discussion about the methodologies of approximate/DP/$\epsilon-\delta$. Furthermore, we will explicitly elucidate why these methods were not incorporated in our study, providing clear explanations that align with the responses we have already given for W2 and Q1.
- **Response to Minor Q2:** Thank you for pointing this out. We will mention this in revision.
- **Response to Minor Q3:** Thank you for bringing this to our attention. As highlighted in Lines 144-146, an unlearned model exhibiting performance closer to Retrain should be deemed superior. This explains our practice of presenting the performance gap with Retrain in blue within our results (e.g., Tab. 2 and Tab. 3). Consequently, drawing a simple conclusion that higher values are always superior, or vice versa, might not accurately capture the scenario.
**Reviewer [hHiz](https://openreview.net/forum?id=0jZH883i34&noteId=0l9gILoXgT):**
1. We have provided further clarification on how we determine the pruning ratio and the parameters of sparsity-aware unlearning.
2. We included an experiment on MU using Swin Transformer (**Tab. R5**).
**Reviewer [r13y](https://openreview.net/forum?id=0jZH883i34&noteId=QsW2VshsPd):**
1. We provided extra clarifications regarding our contributions.
2. We further clarified the evaluation metrics and improved the presentation of the main table (**Tab. R3**).
3. We extended the main tables to include a more thorough comparison of the unlearning performance between the sparsity-aware unlearning and the "prune first, then unlearn" approach (**Tab. R3 and R6**).
4. We included sparsity-aware unlearning on the Trojan model cleanse application (**Fig. R1**).
**Supplement Response to Reviewer [r13y](https://openreview.net/forum?id=0jZH883i34&noteId=QsW2VshsPd):**
- **Response to Q6:** Thank you for pointing this out. We used the following facts: $\mathrm{diag}(\mathbf m)$ is the $d \times d$ diagonal matrix whose diagonal is the $d$-dimensional vector $\mathbf m$, and thus the matrix-vector product yields $\mathrm{diag}(\mathbf m)\boldsymbol \theta = \mathbf m \odot \boldsymbol \theta$, where $\odot$ is the element-wise product. We will make this clearer in our revision.
- **Response to Minor Questions:** We appreciate your suggestion regarding the visual representation in Fig. 1. We will certainly consider your recommendation to remove the teaser figure and provide a more detailed results analysis. Regarding Prop. 1, "L" corresponds to the empirical risk minimization loss function, as indicated in Line 123. Additionally, we will make sure that our references to "sparsity-aware unlearning" remain consistent throughout the paper.
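As a quick sanity check of the identity noted in the Q6 response above, the following snippet (with an arbitrary example mask and parameter vector) confirms $\mathrm{diag}(\mathbf m)\boldsymbol\theta = \mathbf m \odot \boldsymbol\theta$ numerically:

```python
import numpy as np

m = np.array([1.0, 0.0, 1.0, 0.0])       # binary sparsity mask
theta = np.array([0.5, -1.2, 2.0, 0.3])  # model parameters
masked = np.diag(m) @ theta              # matrix-vector product with diag(m)
# masked equals the element-wise (Hadamard) product m * theta,
# zeroing exactly the pruned coordinates.
```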
**Reviewer [8xJa](https://openreview.net/forum?id=0jZH883i34&noteId=pIntq1EXoH):**
1. We provided explanations on the observed inconsistency in our results and revised the main table (**Tab. R3**).
2. We included additional experiments on the ResNet-20 and ResNet-50 architectures (**Tab. R1, R2**).
3. We conducted an additional experiment on the Swin Transformer architecture (**Tab. R5**).
**References used in authors' response:**
> [R1] Foret et al. Sharpness-aware minimization for efficiently improving generalization. ICLR 2021.
>
> [R2] Sagawa et al. An investigation of why overparameterization exacerbates spurious correlations. ICML 2020.
>
> [R3] Wang et al. Federated unlearning via class-discriminative pruning. WWW 2022.
>
> [R4] Xu et al. Machine Unlearning: A Survey. ACM Computing Surveys 2023.
>
> [R5] Guo et al. Certified data removal from machine learning models. ICML 2020.
>
> [R6] Nguyen et al. A survey of machine unlearning. arXiv 2022.
>
> [R7] Graves et al. Amnesiac machine learning. AAAI 2021.
>
> [R8] Mehta et al. Deep unlearning via randomized conditionally independent hessians. CVPR 2022.
>
> [R9] Golatkar et al. Eternal sunshine of the spotless net: Selective forgetting in deep networks. CVPR 2020.
>
> [R10] Golatkar et al. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. ECCV 2020
>
> [R11] Thudi et al. Unrolling sgd: Understanding factors influencing machine unlearning. EuroS&P 2022.
>
> [R12] Warnecke et al. Machine unlearning of features and labels. NDSS 2021.
Pdf: /pdf/a07ac6e6b9b51b862c328cc8d211b65f612e98cb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Described Object Detection: Liberating Object Detection with Flexible Expressions | Accept (poster) | Summary: The paper presents a new multi-modal computer vision task, called Described Object Detection, which is a superset of existing OVD and REC tasks. In particular, the DOD task seeks to create models which can detect multiple instances of something in images, from textual descriptions, which could include describing the absence or presence of something. The paper then goes on to create a new dataset for this task, based on an existing one, called $D^3$. It then shows how existing OVD, REC, Bi-Functional, and proposed a baseline method perform on the data set.
Strengths: The paper is attacking a significant problem, is of high experimental quality, and is novel. The idea of having free-text descriptors for finding things in images is extremely important to the adoption of ML for various important, real-world tasks. In fact, I was surprised to learn that given the importance of this concept, something like DOD had not been proposed before. The paper is also of high quality in that it not only identifies this shortcoming of real-world, open-set detection tasks (i.e. DOD), but also creates a dataset, tests it across a range of existing methods, and proposes a new baseline. Because the paper does propose a new formal problem definition for something that is desired in the real-world performance of ML systems, and therefore realizes how current tasks like REC and OVD fall short of what is actually needed for things like open-set, zero-shot object detection it is a novel look at the problem domain. Finally, it is also worth mentioning that the inclusion of the absence examples is a novel idea, which shows in the poor performance of models on absence instances in the data set.
Weaknesses: The only major weakness of the paper is its clarity. This lack of clarity manifests primarily in two places. First, the description of the creation of the data set needs more detail. Specifically, how was CLIP utilized for adding complete annotations? Were full images given to CLIP with all of the possible annotations and then the most probable selected as an annotation for the image? And, if so, how did you deal with multiple annotations being present in the same images? Additionally, how were the negative image annotations created? Were these done by hand, and, if so, by how many people? Similarly, how many people were involved with creating multiple-instance bounding boxes? The annotation section might benefit from a diagram to better explain this process.
Second, the baseline method is poorly described. It's not clear from the main paper or the semantic similarity how the different proposed components come together into the OFA-DOD model. The paper would greatly benefit from a figure displaying the OFA model and the OFA-DOD model to better understand the architecture of the OFA-DOD model. As it stands now, a reader would have a hard time recreating the OFA-DOD model from the given text.
Finally, there are some minor typo issues that need to be addressed. For example, line 130 page 4 repeats “their” twice, and line 212 on page 8 has the wrong subject-verb agreement (i.e. “fails” instead of “fail”). The paper could use another proofread for typos and grammar.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Please see the weaknesses section on the questions related to the annotation process and the OFA-DOD model.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper adequately addresses its limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
**Due to the page limit, please refer to the *[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)* and the PDF file there for the description and diagram of the annotation process of the D3 dataset, and the figures showing the model structures of OFA and OFA-DOD.**
We address the comments below.
### 1. Clarity on the creation and annotation process of the proposed dataset.
Thanks for the helpful suggestion.
We have added a diagram to show the annotation process, along with text to describe the details of each step, in the ***[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)*** (diagram in the PDF). We sincerely hope the reviewers will look into this, and believe this diagram and the text will be able to answer the previous questions regarding the clarity of the annotation process.
Additionally, we answer the specific questions related to the annotation process as below:
- [Simple explanation of the annotation process] The images are divided into groups, and descriptions (refs) from one group are unlikely to appear in images from other groups. For each image, the refs in its own group are used directly. Refs from other groups may also apply, but with a smaller probability, so we use CLIP to select a large number of candidates from these out-of-group refs. We manually verified by statistics that this CLIP filtering rarely misses positive refs. Annotators then select the positive refs from these candidates (rather than from all refs in the dataset) and add boxes for each image.
- [How to use CLIP] CLIP is used merely to filter the references from other groups and decide some negative refs (not all). The refs kept are candidate refs.
- [How to decide positive and negative for an image, and how to deal with multiple annotations in an image] The annotators select the positive refs (one or more) from the candidate refs. Candidate refs include both the refs from the image's own group and the out-of-group refs kept by CLIP filtering. The annotators add boxes manually.
- [How were the negative image annotation created] For each image, the negative refs are (1) those filtered out by CLIP, which the annotators will check and make sure no positive exists, or (2) the candidate refs decided as not positive by the annotators. So, all refs not labeled positive are negative, and they all have manual negative certificates from the annotators.
- [Human annotation cost] The number of person involved in the annotation process:
- Data source and step 1: the authors (3 people, 3 days)
- Step 2: automatic, by programs (with CLIP)
- Step 3: trained annotators (5 people, 1 week)
- Step 4: trained annotators (15 people, 3 weeks)
- Step 5: trained annotators (15 people, 2 weeks)
As the annotation process is rather complicated, we recommend to look into the annotation process description in the general response first.
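For illustration only, the CLIP-based filtering in step 2 could be organized as sketched below. Here `clip_score` is a hypothetical stand-in for a pretrained image-text similarity model, and `keep_top_k` is an assumed parameter; the actual selection rule and thresholds used for D3 are not specified in this response:

```python
from typing import Callable, List, Tuple

def filter_candidate_refs(
    image_id: str,
    out_of_group_refs: List[str],
    clip_score: Callable[[str, str], float],  # hypothetical image-text scorer
    keep_top_k: int = 20,
) -> Tuple[List[str], List[str]]:
    """Keep the top-k out-of-group refs by similarity as extra candidate refs;
    the rest become filter-decided negatives (later spot-checked by annotators)."""
    ranked = sorted(out_of_group_refs,
                    key=lambda ref: clip_score(image_id, ref),
                    reverse=True)
    return ranked[:keep_top_k], ranked[keep_top_k:]
```

A real implementation would replace `clip_score` with, e.g., cosine similarity between CLIP image and text embeddings; the function above only captures the split into candidates and automatic negatives.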
### 2. Clarity on the proposed OFA-DOD baseline and a figure of model structure.
Thanks for the question. We did not elaborate on the details of OFA-DOD due to the limited pages of the manuscript and its smaller contribution compared with other parts, so we provided a more detailed description of the components differentiating OFA and OFA-DOD in the supplementary material, including the model structure, training settings, and inference strategies.
In this response, we provide figures showing the differences between OFA (Fig. 2) and OFA-DOD (Fig. 3) in the author rebuttal PDF file. As shown in the figures,
- The first modification, granularity decomposition, corresponds to replacing the shared decoder with two parallel decoders, one for global tasks and one for local tasks. This is likely to alleviate the conflicts between tasks of different granularity and make the model more suitable for localization tasks.
- The second modification, reconstructed data, refers to the reconstructed OVD & REC data for the local decoder. After the reconstruction, for both tasks, the input can be one or multiple references (or object category names), and they can correspond to zero, one, or multiple targets.
- The third modification, task decomposition, is depicted by adding a binary classification task in the global branch. This task determines whether a bounding box and a description match, and is used as a second step to reject negative instances during inference.
### 3. Some minor typo issues.
Thanks for the suggestions. We just fixed the typos you mentioned together with a few others we found and proofread the whole paper several times. Thank you again for the detailed suggestion.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: After having read the rebuttal to my questions, as well as the replies to the other reviewers' questions, I have no further questions at this point and I will stand by my current evaluation score. | Summary: In this paper, the authors propose the definition, dataset, evaluation metrics and benchmark results of a new task - Described Object Detection (DOD). DOD is designed to detect objects aligned with full, presence or absence language descriptions. The proposed evaluation metrics have three groups based on full, presence or absence descriptions. The authors also propose the Intra-scenario mAP, which only detects the categories appearing in the image, and the Inter-scenario mAP, which detects all the categories. The authors provide benchmark results on the proposed task with a new OFA-DOD baseline.
Strengths: - The authors analyze the limitations of previous tasks
- The authors provide comprehensive benchmark results on the proposed new settings
Weaknesses: First, I want to clarify two different tasks: category-level object detection and visual grounding. Category-level tasks like OVD focus on recognizing pre-defined C categories, e.g., "cat," "dog," etc. These categories are commonly mutually exclusive (COCO) or share a parent-child relationship (LVIS). The OVD task divides the C categories into two disjoint groups, base and novel, and models are required to train on base classes and test on novel classes to verify their generalization ability. Instead of detecting C classes, grounding tasks like REC aim to align each region to a text phrase. REC datasets like RefCOCO do not pre-define a set of categories.
Based on my personal understanding on the OVD and REC tasks, I think this paper has the following weaknesses:
1. The authors argue that DOD is a superset of OVD (#35 and the right part of Figure 1). But I disagree with this argument. The OVD task focuses on evaluating the generalization ability. However, for the DOD task, the proposed dataset and evaluation metrics do not consider dividing base/novel groups. Thus, OVD is suitable for evaluating the generalization ability, but DOD is not. If the authors think that the DOD task does not focus on the generalization ability, they should not argue that it is a "superset of OVD".
- 1.1 For the proposed D3 dataset for DOD, the authors pre-defined 422 categories. What happens to the other expressions that are not pre-defined? The authors may add some novel classes in the test set that are not available during training to simulate this situation. I think the base/novel setting is one advantage of OVD over DOD. If the proposed D3 dataset already has some novel expressions in the test set, the authors may consider adding evaluation results on base/novel groups.
- 1.2 The left part of Figure 1 is the definition of object detection rather than open-vocabulary object detection with the base/novel setting.
- 1.3 From my perspective DOD is focused on fine-grained language understanding (e.g., the unrestricted language description in #123-127), which is not the main focus of OVD.
2. #188-122: For the REC task, the model is required to align the region with the input language expression given in the REC datasets. In my opinion, I don't consider "complete annotation" to be a disadvantage of REC. For instance, REC datasets such as RefCOCO & RefCOCO+ consist of 141,564 language queries, which is significantly larger than the proposed D3 dataset with only 422 language queries. Can we conclude that the vocabulary size of REC datasets is larger than that of DOD? Can we argue that due to the cost of annotation, the "complete annotation" requirement restricts the vocabulary size?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 3. In the proposed DOD task, the category-level definition is vague and leaves me confused. The authors pre-defined 422 expressions or categories and use these 422 categories for evaluating the Inter-scenario mAP. But are these categories mutually exclusive or overlapping? For example, I guess some short expressions (e.g., 'backpack') may be a parent node of other long expressions (e.g., 'backpack with yellow color'). The authors may consider this point when designing their evaluation metrics, like adopting the positive/negative labels in the OpenImage and LVIS datasets.
- 3.1 Figure 3 (b) and (c) use the "number of instances", does it mean "number of categories"?
4. The motivation for the new DOD task is not clear to me. Can the authors provide straightforward application scenarios or real-world examples that previous tasks are limited while the proposed new DOD task offers broader possibilities?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In #341-345, the authors have discussed their limitations regarding potential abuse and carbon emissions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments. **Due to the page limit, please refer to the *general response (author rebuttal)* for the motivation of DOD task.**
## 1. Is DOD a superset of OVD or not?
We want to clarify that DOD is a superset of OVD: The D3 dataset is designed solely for evaluation and does not include a training set, so models are trained on OVD/REC datasets and then evaluated on D3. Since the descriptions in D3 have little overlap with existing datasets, the categories in D3 can be regarded as novel categories relative to the training categories (regarded as base) in the OVD/REC datasets.
The evaluation of D3 is conducted in a zero-shot/open-vocabulary manner, which is analogous to the evaluation of novel categories in OVD. So DOD also "focuses on the generalization ability." Based on the clarification in the reviewer's comments, DOD does qualify as a superset of OVD.
### 1.1 Base/novel split in D3.
We can regard all classes in training datasets (OVD, REC) as "base" and all the classes in D3 as "novel".
As mentioned above, D3 is exclusively designed for evaluation, and its categories have little overlap with the categories in the training datasets (OVD, REC, etc.), given that the 422 reference categories in D3 are specifically designed. Consequently, D3 can be seen as a test set comprising novel categories, which corresponds to the novel split in OVD. Therefore, the results reported on D3 so far can be regarded as zero-shot generalization results on "novel" categories.
We add a table to clarify this:
| task | training set | test set |
| --- | --- | --- |
| OVD (e.g. COCO OVD) | base group in OVD dataset (e.g. COCO-base) | novel group in OVD dataset (e.g. COCO-novel) |
| REC (e.g. RefCOCO) | training split of REC dataset (e.g. RefCOCO train set) | testing split of REC dataset (e.g. RefCOCO test set) |
| DOD | Existing OVD/REC datasets (e.g. COCO, RefCOCO, etc.) | D3 |
### 1.2 Fig. 1 (a) defines object detection rather than OVD (with base/novel setting).
The illustration of Fig. 1 was simplified to facilitate presentation of different tasks. In (a), our primary intention was to illustrate that for OVD (1) only short category names are provided without lengthy descriptions, and (2) a category may or may not appear in an image. It does not emphasize the aspect of zero-shot generalization (given that both OVD and DOD tasks inherently involve zero-shot scenarios).
Actually, Fig. 1 (a) is not specific to either OD or OVD, as we did not specify whether categories like "oven," "dog," "person," etc. in the example exist in the training dataset or not. The figure does not focus on generalization and does not distinguish between training and testing, and thus does not delve into concepts such as base/novel divisions.
### 1.3 DOD focuses on fine-grained language understanding, which is not the main focus of OVD.
The DOD task includes short or long descriptions, not always fine-grained. It is the D3 dataset, rather than the DOD task itself, that focuses more on fine-grained descriptions (though it still covers different granularities).
In the case of the DOD task, when dealing with longer descriptions, it indeed involves fine-grained language understanding. However, when descriptions are shorter (especially just one or two words), it is more similar to category-based detection. This can be observed in Fig. 2 (b) of the paper and the examples in supp. Fig. 2. When descriptions are shorter, the task aligns with the essence of OVD, hence making DOD a superset of OVD.
For the proposed D3 dataset, its annotations do lean towards relatively longer descriptions, focusing more on fine-grained understanding. This design choice was intended to introduce higher levels of challenge.
## 2. In REC and DOD, complete annotation limits the scale of references, so not having complete annotation is not a drawback of REC.
In REC datasets, forgoing "complete annotation" did lead to more references being defined compared to DOD. However, lacking such complete annotation makes REC inapplicable to many practical detection-related scenarios (as described in the general response), which makes the DOD task with complete annotation useful.
## 3. Are categories in D3 mutually exclusive or overlapped? Consider this when designing the evaluation metrics.
These categories are not necessarily mutually exclusive; they can overlap in certain instances (hence, the classification in the DOD task is multi-label, not single-label classification), but there's no hierarchical inclusion relationship.
When designing the dataset's categories, we deliberately avoided including categories with hierarchical relationships (such as "backpack" and "backpack with yellow color") to prevent D3 from becoming too straightforward in terms of difficulty.
In D3, for an image, categories not positively labeled are manually verified as negative. In other words, our annotations are exhaustive, eliminating the need for partial negative labels to explicitly denote some negative categories in the federated annotation manner of datasets like OpenImage or LVIS. Therefore, naive mAP is suitable for D3, while for OpenImage and LVIS, positive/negative labels need to be considered.
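As a concrete illustration of why exhaustive annotation permits naive AP: every detection that fails to match a ground-truth box is a definite false positive, with no "unverified" categories to exclude. The sketch below is our simplification (function name and rectangle-rule integration are assumptions, not the benchmark's exact evaluation code):

```python
def naive_ap(dets, num_gt):
    """Per-category average precision under exhaustive annotation.

    dets: list of (confidence, is_true_positive) pairs for one category.
    With complete annotation, is_true_positive is known for every
    detection, so precision/recall can be accumulated directly.
    """
    if num_gt == 0:
        return 0.0
    dets = sorted(dets, key=lambda d: -d[0])  # rank by confidence
    tp = fp = 0
    ap = prev_recall = 0.0
    for _conf, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1  # guaranteed false positive: annotations are exhaustive
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # rectangle rule
        prev_recall = recall
    return ap
```

In a federated setting (OpenImage, LVIS) the `fp += 1` branch would be wrong for unverified categories, which is why those benchmarks need explicit positive/negative labels.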
### 3.1 "Number of instances" or "number of categories" in Figure 3 (b) and (c)?
As shown in the caption of Fig. 3 (b) and (c), they mean "number of instances". More specifically, Fig. 3 (b) shows the number of positive instances across all images for one description (i.e., one category), indicating that each description has a sufficient number of instances across the dataset. Fig. 3 (c) shows the number of positive instances in a positive image for one description, indicating that a description can have one or multiple (usually 2 to 5) instances in an image.
Note that for "number of categories", we show the distribution of number of descriptions (categories) on one image in supp. Fig 4 (a).
## 4. Motivation of DOD and real-world application that previous tasks are limited?
Please see the ***general response***.
---
Rebuttal Comment 1.1:
Comment: > Is DOD a superset of OVD or not?
1. Results on *base* classes
Before reading the rebuttal, I thought the proposed method OFA-DOD was trained on the proposed DOD dataset and evaluated on the DOD dataset, so my concern was about missing *novel* classes for generalization. According to the rebuttal, OFA-DOD is trained on OVD/REC datasets and evaluated on the DOD dataset. So my concern is now about missing the results on *base* classes.
For the OVD setting, the evaluation is conducted on both the *base* and *novel* classes. However, the authors only report the performance on *novel* classes (e.g., on their proposed new dataset.) But the performance on *base* classes (e.g., on the Open-vocabulary COCO dataset, which is the training set used for DOD) is not provided. So I think missing the evaluation results on *base* classes is a weakness if the authors still argue that the DOD is a superset of OVD.
2.
The authors replied that “D3 is exclusively designed for evaluation, and its categories have little overlap with the categories in the training datasets (OVD, REC, etc.), given that the 422 reference categories in D3 are specifically designed.”
The ‘little overlap’ is not a rigorous term for an academic paper. How many classes overlap? For long expressions, are there any synonyms in the proposed new dataset? The authors may provide more details.
> Are categories in D3 mutually exclusive or overlapped? Consider this when designing the evaluation metrics.
The authors replied that "When designing the dataset's categories, we deliberately avoided including categories with hierarchical relationships (such as "backpack" and "backpack with yellow color") to prevent D3 from becoming too straightforward in terms of difficulty."
**I think the hierarchical relationship is common in real-world scenarios**. Many existing datasets have a hierarchical relationship, e.g., the ImageNet / LVIS/ Open-Images datasets. But the authors deliberately avoid the hierarchical relationship, which raises my concerns about the authors arguing that their new setting is a real-world task.
> Other feedbacks
Some of my questions are not answered:
1. For the proposed D3 datasets for DOD, the authors pre-defined 422 categories. **What happens to the other expressions that are not pre-defined?**
2. REC datasets such as RefCOCO & RefCOCO+ consist of 141,564 language queries, which is significantly larger than the proposed D3 dataset with only 422 language queries. Can we conclude that the vocabulary size of REC datasets is larger than that of DOD? **Can we argue that due to the cost of annotation, the "complete annotation" requirement restricts the vocabulary size?**
---
Reply to Comment 1.1.1:
Title: Response to Reviewer YHXi's Feedback
Comment: Thank you for the responses.
### 1. Missing the results on *base* classes if DOD is a superset of OVD.
- **1**. By definition, OVD aims to generalize beyond *base* classes during training, and detect *novel* classes defined by an open vocabulary at inference. The definition of DOD is the same, except the classes are unrestricted references rather than only short class names. $D^3$ qualifies as a DOD evaluation set as it provides *novel* classes.
- **2**. **OVD task primarily focuses on evaluating performance on *novel* classes** after training on *base* classes, and evaluation on *base* is mainly an experimental setting to check the capability on seen classes and the upper bounds on unseen. (Some methods, e.g., MEDet and the SOTA CORA, do not provide LVIS *base* performance). The reviewer also mentioned in the original comment that OVD focuses on the generalization on *novel*, and our zero-shot evaluation on $D^3$ also reflects this perspective.
- **3**. In our evaluation, different baselines are trained on varying tasks and datasets, making it challenging to assess the performance on their *base* classes.
- **4**. In response to the suggestion, we **reconstruct $D^3$ for training and evaluation.** We split $D^3$ and obtained a training set (*base*: 259 classes) and two test sets (*base*: 259, *novel*: 126). We conducted training on *base* and evaluated on both *base* and *novel*. The results are presented in the table below.
| Model | Novel | Base |
| --- | --- | --- |
| OwlViT | 9.7 | 15.2 |
| OFA_base | 4.3 | 11.3 |
| UNINEXT_huge | 18.6 | 23.8 |
| OFA-DOD | 21.6 | 25.1 |
### 2. Quantify the minimal overlap between *base* and *novel* classes for DOD task on $D^3$.
Thanks for the suggestion regarding rigor of the expression. We've analyzed category overlap between *base* (OVD dataset: COCO/LVIS; REC dataset: RefCOCO/+/g) and *novel* ($D^3$), by:
- For OVD, we used ChatGPT to generate synonyms from categories, matching them against $D^3$ references. Overlaps: COCO 0.4%, LVIS 0.9%.
- For COCO, which has fewer categories, we also performed a manual check, resulting in 0.7% overlap with $D^3$.
- For REC, we apply a threshold on the sentence similarity calculated via HuggingFace's `bert-base-cased-finetuned-mrpc` model. Overlaps of $D^3$ with RefCOCO/+/g: 0.0%, 0.2%, 0.7%.
Thus, *novel* classes ($D^3$) overlap <1% with *base* classes (OVD & REC).
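The overlap estimate above can be sketched with a generic similarity-threshold check. The helper below is a hypothetical simplification with a pluggable `sim_fn` (in the actual analysis this would be, e.g., the MRPC paraphrase score or a synonym match), not the authors' exact script:

```python
def overlap_rate(novel_refs, base_refs, sim_fn, threshold=0.9):
    """Fraction of novel refs judged equivalent (similarity at or above
    the threshold) to at least one base ref."""
    hits = sum(
        1 for n in novel_refs
        if any(sim_fn(n, b) >= threshold for b in base_refs)
    )
    return hits / len(novel_refs)


# Toy usage with exact string match standing in for a learned similarity:
# "dog" overlaps with the base vocabulary, "cat running" does not.
rate = overlap_rate(
    ["dog", "cat running"], ["dog", "bird"],
    sim_fn=lambda a, b: 1.0 if a == b else 0.0,
)
```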
### 3. Avoiding hierarchical relationship in the dataset may raise concern on if the new setting is real-world task.
We argue that avoiding such hierarchy does not affect the real-world attributes. The possible references in real world are infinite and we can only annotate some of them.
Considering our restricted annotation capacity, **we forgo hierarchy to concentrate more on intricate categories and make the descriptions more diverse and rich.** Keeping hierarchical refs like "backpack" and "yellow backpack" seems redundant compared to others, given their similarity. Due to annotator limitations, our categories are limited. Thus, we prioritize informative, diverse refs.
**Our detection task is multi-label, not single-label, accommodating complex relationships between refs.** We include intricate references like "clothed dog" and "dog not lead by rope outside," with partial overlap and no simple inclusive hierarchy.
### 4. What happens to expressions not pre-defined, except the 422 in the dataset?
Sorry but we didn't quite understand the reviewer's question. If the reviewer's inquiry pertains to whether the $D^3$ dataset supports evaluation beyond the 422: $D^3$ does not support evaluation for categories beyond those defined, as the ground truth annotations are exclusively available for these categories. These categories represent a subset of reality, used to assess method performance, similar to detection/REC datasets. No dataset we know so far can cover all possible references in the world.
If the reviewer is inquiring about whether the DOD method supports inference for categories beyond the 422 defined: These baselines could theoretically infer from categories outside the 422, but performance might vary significantly, as seen in zero-shot results differences on the 422 categories.
### 5. Is the vocabulary of REC datasets larger than $D^3$ for DOD? Did the need for "complete annotation" limit vocabulary due to annotation costs?
Yes to both. Though our positive and negative sample count surpasses REC's test set, REC's vocabulary remains notably larger than DOD's due to the complete annotation requirement of DOD.
Enforcing complete annotation at the scale of REC's large reference vocabulary is infeasible for us due to high costs. Creating our 422-category dataset involved the contributions of a team of 15, spanning nearly 2 months, costing around $11,000.
We believe a reference-based dataset with complete annotation can be a valuable community resource. It acts as a starting point, potentially inspiring diverse and comprehensive future DOD datasets. | Summary: This paper introduces a new Described Object Detection (DOD) task, which extends the existing Open-Vocabulary Object Detection (OVD) and Referring Expression Comprehension (REC) tasks into a more general paradigm. For this new task, the authors build a Description Detection Dataset (D3), and find the troublemakers that currently hinder current REC, OVD, and bi-functional methods. They further propose a baseline that outperforms existing methods on the DOD task.
Strengths: 1. Focusing on language-driven object detection, the authors propose a new DOD task, which they argue is more practical and presents more challenges than the current OVD and REC tasks. They also introduce a new dataset D3 for this task.
2. On the DOD task, the authors thoroughly investigate the challenges faced by current REC, OVD, and bi-functional approaches, and they put forth a baseline method with state-of-the-art performance on this task.
Weaknesses: 1. While the authors introduce this new DOD task, instead of constructing a new training dataset or setting, they focus on analyzing the performance of models trained on the old OVD or REC tasks for this new task. This may not fairly represent the performance of existing methods for the new task.
2. As the DOD task can be seen as a more generalized version of the OVD task, where category names are extended to language descriptions, the experimental results and conclusions for OVD methods highly depend on their training data and settings. It would be beneficial to assess OVD baselines trained following the style and setting of the DOD task.
3. The compatibility of the DOD task with the existing OVD and REC tasks is not studied. Given that the DOD task is essentially a superset of the OVD and REC tasks, it would be beneficial to evaluate the baseline OFA-DOD in comparison with other methods on the OVD and REC tasks.
4. The existing methods and the proposed baseline are evaluated and compared under a zero-shot setting. How would the performance vary when the models are fine-tuned on the D3 dataset?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations and broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### 1. Analysis of existing methods on DOD may not be fair because this work does not introduce a new training dataset.
Our approach focuses on evaluating DOD using OVD/REC-trained models and providing insights for transitioning to DOD, emphasizing differences in training tasks and formats.
Currently, the evaluation is zero-shot for existing baselines. Ideally, we would also provide a dedicated DOD training set. However, due to the substantial annotation costs, we have not included a separate training set. Instead, we offer a dedicated test set with accurate annotations as a benchmark.
Furthermore, existing OVD/REC data can be leveraged to train DOD models, given the similarity in task formats between OVD/REC and DOD. Taking into account the huge annotation costs for a DOD training set, creating a new training set would provide limited value, and is not necessary or cost-effective.
Therefore, this work focuses on:
- Comparing generalization performance: We analyze the performance of existing methods on D3, aiming to uncover the capabilities of different models across various types and tasks for DOD. The primary goal is to reveal distinctions between training tasks and formats (OVD/REC/bi-functional), rather than a direct comparison of superiority. Thus, we allow for variations in architecture and data among these models.
- Providing insights and guidelines: We offer insights and guidelines for transitioning from existing OVD/REC to DOD, and training DOD models using available data. These insights are not limited to a specific baseline. OFA-DOD is to demonstrate that simple adjustments can enhance the effectiveness of an REC model originally unsuitable for DOD.
### 2. Assess OVD baselines trained following DOD settings rather than original setting.
We modified and trained OVD methods with adjustments similar to those applied in OFA-DOD (i.e., the reconstructed-data step, which converts REC data into the DOD format). The results are shown in the table below, where the methods trained under the DOD setting are denoted as OWLViT-DOD and CORA-DOD. Notably, the performance of these models on D3 surpasses the original baselines (OWLViT and CORA). This shows the proposed modifications over existing methods like OFA are transferable to other existing OVD/REC methods.
| Model | FULL | PRES | ABS |
| --- | --- | --- | --- |
| OWLViT | 9.6 | 10.7 | 6.4 |
| CORA | 6.2 | 6.7 | 5.0 |
| OFA | 3.4 | 3.0 | 4.3 |
| OWLViT-DOD | 12.1 | 12.8 | 10.1 |
| CORA-DOD | 7.9 | 8.2 | 7.1 |
| OFA-DOD | 21.6 | 23.7 | 15.4 |
### 3. Evaluate OFA-DOD on OVD and REC in comparison with other methods (to study the compatibility of DOD with OVD and REC).
We evaluated OFA-DOD on OVD/REC datasets. The results indicate **substantial improvements over OFA** for both OVD and REC. This shows the improvements of OFA-DOD over OFA make it better for DOD/OVD/REC, and that when a model is improved to be more suitable for DOD, it also exhibits corresponding performance gains on REC and OVD. This implies that the DOD task is compatible with REC and OVD.
Compared with SOTAs on REC (without fine-tuning), OFA-DOD outperforms SOTA methods like G-DINO. Compared with SOTAs on OVD (fine-tuned on base classes), OFA-DOD is not as good as CORA, but outperforms Detic on novel classes, showing good generalization ability. We argue that the reason OFA-DOD does not achieve SOTA on OVD is that the original OFA is not suitable for detection tasks: it is incapable of rejecting negative instances, lacks compatibility with multi-target outputs, and yields poor results. Although OFA-DOD has augmented its detection ability and improved its performance on OVD by more than 20 mAP, it is still far from perfect for OVD and DOD. This is no surprise, as it is only a baseline for future research.
REC results:
| Benchmark | refcoco | | | refcoco+ | | | refcocog | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Split | val | testA | testB | val | testA | testB | val-u | test-u |
| OFA-base | 61.00 | 62.24 | 58.59 | 45.02 | 48.36 | 38.70 | 46.96 | 46.96 |
| OFA-DOD-base | 75.92 | 78.74 | 72.16 | 64.98 | 71.32 | 59.25 | 71.52 | 71.76 |
| G-DINO-L | 73.98 | 74.88 | 59.29 | 66.81 | 69.91 | 56.09 | 71.06 | 72.07 |
OVD results:
| Benchmark | COCO-OVD | | |
| --- | --- | --- | --- |
| Split | novel | all | base |
| OFA | 3.2 | 7.4 | 8.9 |
| OFA-DOD | 28.4 | 30.1 | 30.7 |
| Detic | 27.8 | 45.0 | 47.1 |
| CORA_R50 | 35.1 | 35.4 | 35.5 |
### 4. How would the performance vary when the models are fine-tuned on D3 rather than zero-shot eval?
This work does not propose a new training set (due to the reasons in Q1, including huge annotation costs and availability of existing detection & REC data), making fine-tuning unfeasible. However, to address the question from the reviewer, we partitioned the D3 dataset into training (80 groups, 238 presence and 80 absence refs) and testing (26 groups, 78 presence and 26 absence refs) subsets and fine-tune various baseline methods on the training subset, subsequently evaluating the performance on the testing subset.
We show the results of methods without or with (*) fine-tuning on the training subset, evaluated on the testing subset (not the complete D3). As the results show, OVD/REC/bi-functional/DOD methods are improved by certain margins (2 to 5 mAP). Note that D3 remains an evaluation benchmark and does not provide a training set. The models are expected to be trained on datasets such as OVD/REC and then tested on the D3 dataset. The introduced training/testing split here is solely for validation purposes.
| Task | Model | FULL | PRES | ABS |
| --- | --- | --- | --- | --- |
| OVD | OWLViT | 9.7 | 10.5 | 7.1 |
| | OWLViT* | 13.0 | 13.8 | 10.7 |
| | CORA | 5.7 | 5.8 | 5.4 |
| | CORA* | 8.6 | 9.1 | 7.2 |
| REC | OFA_base | 4.0 | 3.9 | 4.1 |
| | OFA_base* | 8.2 | 8.1 | 8.4 |
| bi-func | UNINEXT_huge | 20.2 | 21.7 | 15.6 |
| | UNINEXT_huge* | 22.3 | 22.8 | 20.9 |
| DOD | OFA-DOD | 21.4 | 22.6 | 17.7 |
| | OFA-DOD* | 24.2 | 25.8 | 19.5 |
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response. I appreciate their efforts in addressing my concerns. However, I still believe that having a proper training dataset is more critical and necessary for this DOD task and could greatly enhance the impact of this work. Using existing OVD/REC data to train DOD models may not fully exploit the true potential of current methods or models. Therefore, I would like to maintain my original rating. | Summary: Brief Summary: The paper presents a new task Described Object Detection which extends open vocabulary object detection (OVD) to use phrases. This, in turn, extends referring expression (REC) to include objects not seen in the training data. To this end, the authors introduce a new dataset called D3 building on existing GRD dataset [44].
The key ideas behind creating D3 are to have complete annotation (i.e., each referring expression has a bounding box in each image if present), to use natural language (extending OVD), to include absence expressions (i.e., objects NOT having a particular feature/attribute, such as a blackboard with no signs), and to allow one expression to refer to more than one instance.
The authors experiment on the provided D3 dataset and provide a detailed benchmark with multiple baselines ranging from REC methods like OFA, OVD methods like OWL-ViT and bifunctional method like Grounding-DINO. The authors further propose a new baseline OFA-DOD which changes some pre-training schemes such as including additional localization tasks and find that it outperforms competitive baselines.
Strengths: Pros:
1. Dataset contribution is always welcome. It is clear the authors have put thought into the dataset construction. In particular, absence expression are quite interesting.
2. A number of baselines are considered and the takeaways are quite interesting that REC methods fail at this task due to being unable to effectively choose more than one boxes. While OVD methods perform better, there is a very large gap compared to bifunctional methods like grounding-dino. The proposed baseline OFA-DOD makes sense and it is good to see that it outperforms other baselines.
3. Visualization of the dataset as well as results in suppl. are very useful. Ablative studies on the baseline such as effect of training data (Table 5b) are interesting.
Weaknesses: Cons:
1. The authors should compare their work with zero-shot grounding [Ref1], which also extends REC to new objects. The obvious difference is that DOD can have more than one instance, but a clear distinction would be helpful.
2. The main idea behind DOD is to encompass both OVD and REC. The authors need to motivate this setting more. In my opinion, having the two cases separate can be much more revealing than trying to combine the two. OVD is strictly object detection (with phrases) while REC explicitly requires disambiguation between different objects. For instance, "oversized glove on left-hand" in suppl fig 2 (last row), is simply object detection of glove and doesn't require reasoning whether it is "oversized" or on "left-hand". To me a more natural setting is to separate the two. I would like to know about the author's motivation for the task.
3. It is unclear in the text but in OFA-DOD which OFA is chosen, base or large? To have a fair comparison with Grounding-DINO it should be base but it is not clear (in main text as well as in suppl).
4. In suppl. Table 2, it seems Grounding-DINO outperforms OFA-DOD by considerable margin on Average Recall. Why is this the case? For practical use, wouldn't one prefer using Grounding-DINO?
[Ref1]: Sadhu, Arka, Kan Chen, and Ram Nevatia. "Zero-shot grounding of objects from natural language queries." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4694-4703. 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors have discussed some limitations but more could be included such as:
1. The evaluation is heavily dependent on the choice of 412 phrases and the dataset used for GRD.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback on the contribution and design of the dataset, as well as the visual and experimental analysis and findings.
**Due to the page limit, please refer to the *[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)* for the motivation of the DOD task.**
We address the reviewer's comments below.
### 1. Comparison with zero-shot grounding.
Thanks for your reminder. Zero-shot grounding is an intriguing study focused on locating concepts absent from the training set. However, it assumes the presence of objects described by query phrases in images (as shown in Figure 4 of their paper), still falling under the REC task. In contrast, DOD aims to detect objects described by flexible expressions throughout the dataset. Thus, there can be zero, one, or multiple objects described by the language reference in an image. The specific differences include:
- assuming the existence of objects (zero-shot grounding) vs. no such assumption (DOD).
- one target only vs. multiple targets.
- short phrases vs. varied language description (from short category name, to phrases, and long descriptions).
DOD and zero-shot grounding have different focuses, and zero-shot grounding can be regarded as a variant of REC and a subset of DOD. We will incorporate this discussion into the manuscript to avoid potential misunderstandings. Thank you for your suggestions.
### 2. Motivation for the DOD task. Why not having DOD and REC separated.
As another reviewer also asked about the motivation, we put the clarification on the motivation in the ***[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)***.
Additionally, regarding the mentioned "oversized glove on left hand", simplifying it to "glove" for object detection is used to highlight the second attribute (unrestricted description) of this dataset. For illustration purposes, only positive examples are included in the figure to avoid introducing information related to the first attribute (complete annotation). In reality, a glove might not necessarily be on the left hand, nor is it guaranteed to be oversized. Thus, the annotation of "oversized glove on left hand" does not align with the label "glove," and there are cases where a positive example of "glove" is a negative example for "oversized glove on left hand". For instance, in supp. Fig. 1 top row, "partially damaged car" cannot be simplified to "car." Applying a "car" object detector would lead to detecting numerous undamaged cars, generating an excessive number of detection results that do not achieve the intended objective.
### 3. In OFA-DOD which OFA is chosen, base or large?
Thanks for the reminder. We build and evaluate OFA-DOD based on OFA-base. We will add this note in the text and add the "base" subtext in Tab. 2 in the manuscript.
### 4. In supp. Table 2, why is Grounding-DINO better than OFA-DOD on Average Recall? Wouldn't one prefer G-DINO in practical case?
We discuss the metric average recall in the supp., Lines 180 - 183. Recall is a metric used by the REC task, which only requires the model to locate an object known to exist, with no need to reject false positives. A model pursuing high recall can predict as many false targets as it wants as long as the ground-truth targets are included. Recall is not suitable for detection tasks like OVD or DOD, which require the model to distinguish and reject negative instances.
In the evaluation of models for DOD, we use mAP to evaluate models' ability to both locate positive instances and reject negative instances. Average Recall is merely a metric for analyzing the characteristics of different models and its value does not reflect the quality or applicability of a model. Actually, we show in Sec. 5.2 of the manuscript that REC or bi-functional methods like G-DINO are difficult to reject false positive instances.
The proposed model is lower on Average Recall compared to G-DINO. This implies it is more "conservative" in prediction and tends to predict instances only when it is rather certain. The choice of model depends on the use case. For most detection settings where false positives are not welcome, the proposed baseline should be more competitive in performance. When false positives are acceptable and the user just wants to cover as many targets as possible, G-DINO is a nice choice, also considering its wide application and the integration available in the community.
### 5. More discussion on limitation.
Indeed. Thanks for the suggestion. We will add more discussion as below in the "limitation" section of the manuscript:
> As a human-curated dataset, D3 benchmark inevitably contains some bias during data collection and annotation. When designing the dataset, though we try our best to cover as many scenes as possible and make the image distribution and language diversity very broad, the evaluation is still heavily dependent on the choices of language descriptions and the distribution of images.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Additional details on the annotation process are helpful, as well as the added motivation for the task.
The visualizations provided in the paper however don't have "oversized" glove, and my example was based on that example. It is unclear how many other such examples are there and perhaps more qualitative analysis could be useful.
As such, I keep my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer SdJt's Feedback
Comment: Thanks for the feedback.
The reviewer questioned whether the annotation of "oversized glove on left hand" (last row) in Fig. 2 of the supplementary material is in fact annotation of "glove".
We want to clarify that the "oversized glove" refers to gloves significantly larger than regular size (i.e. human hand size). As evidence for this, in the rightmost image of this row, there are two gloves of regular size on both hands of the baseball player on the left. As these two gloves are **not "oversized" and "on left hand" at the same time, they are not annotated. This shows we are actually annotating "oversized glove on left hand" rather than "glove".** Gloves not significantly larger than regular size, or gloves on the right hand, were not annotated.
In this figure we mainly show the "unrestricted descriptions" characteristic of the dataset, so we use mostly positively annotated samples of "oversized glove on left hand", which are also positive for "glove". As a better example, we show some samples of cars damaged (annotated) and not damaged (not annotated) for the "partially damaged car" category in Fig. 1 of the supplementary material, which focuses on "complete annotation" (of both positive and negative samples) for the dataset.
We hope this would clarify the reviewer's question on annotation and we would add more qualitative cases, along with discussions, in the manuscript. Thanks for the suggestions. | Rebuttal 1:
Rebuttal: We thank the reviewers (R1: Q5jG, R2: SdJt, R3: EwWL, R4: YHXi, R5: wkHv) for their positive feedback, such as the contribution of the dataset (R2, R5), the significance of the target problem (R3, R5), the thoroughly considered design of the dataset (R2), the interesting absence-description characteristic (R2, R5), the informative findings in the comparison of different tasks (R2, R4, R5), the interesting and comprehensive experimental and visual analysis (R2, R3, R4, R5), the effective proposed baseline (R2, R3), and the clear writing (R1).
In the general response we answer some questions asked by more than one reviewer. We address each reviewer's questions in the individual response for them. We will revise the paper accordingly.
## Annotation process of the proposed $D^3$ dataset.
A diagram illustrating the annotation process of the proposed dataset is in the PDF file. Here we describe the steps for annotation as below:
Data source: 106 groups from GRD with about 100 images and 3 ~ 4 designed refs for each group. Each group belongs to a different scenario and the overlap between refs from different groups is small (i.e., a ref for one group is not likely (but possible) to appear in an image from another group). Now we have 10000+ images and 300+ refs.
1. [Manual] Adding absence refs: design 1 ~ 2 absence refs based on the images for each group and add them to the corresponding groups. Now we have 400+ refs.
2. [Automatic] Selecting possible positive refs: for each image, select **all the refs** (4 ~ 6) from the group it belongs to, and also from the other 105 groups (top-n refs out of the 400+ refs, by CLIP similarity between the image and each description). Now for each image, we have n+4 ~ n+6 candidate refs and all the other refs are filtered out. n is set to 40 initially.
3. [Manual] Verification: randomly choose 5 groups of images, and check whether any positive refs were wrongly filtered out. If so, increase n to cover those refs and go back to step 2.
4. [Manual] Manual annotation: annotation by trained annotators on all images. The annotation of boxes (and instance masks) are instance-level, dataset-wise complete, and includes absence refs.
5. [Manual] Quality check: this includes 3 small steps:
1. Discarding some images (ambiguous, etc., unsuitable for annotation) or categories from the dataset. About 8% samples are discarded.
2. Quality check on 100% of samples. For each group, if more than 2% of images contain errors, the group is returned for re-annotation. Otherwise the errors are fixed and the group passes this step.
3. Final check on 5% of samples. For each group, if any image contains an error, the group is returned; otherwise it is accepted.
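The CLIP-based candidate selection in step 2 could be sketched as below. This is an illustrative assumption, not the authors' released code: the function name `topn_candidate_refs` and all other names are hypothetical, and it assumes image and ref embeddings have already been extracted with CLIP.

```python
import numpy as np

def topn_candidate_refs(image_emb, ref_embs, own_group_refs, n=40):
    """Candidate refs for one image: every ref from the image's own group,
    plus the top-n refs from other groups ranked by cosine similarity."""
    # Cosine similarity between the image embedding and each ref embedding.
    sims = ref_embs @ image_emb / (
        np.linalg.norm(ref_embs, axis=1) * np.linalg.norm(image_emb) + 1e-8)
    own = set(own_group_refs)
    # Refs outside the image's group, sorted by similarity (descending).
    others = [i for i in np.argsort(-sims) if i not in own]
    # Everything beyond the top n is filtered out and treated as negative,
    # subject to the manual verification in step 3.
    return sorted(own) + others[:n]
```

Step 3 then acts as the safety valve: if a manual check finds a positive ref among the filtered-out ones, n is increased and this selection is rerun.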
## Motivation of the proposed DOD task.
Both the OVD and REC tasks have their respective limitations.
- OVD can only perform detection based on categories, where the detection targets are limited to "certain object classes" rather than "objects with specific attributes/relationships." This approach lacks an understanding of contextual information within images and cannot leverage language to precisely control detection targets and requirements. This inflexibility prevents it from meeting specific application demands.
- REC, while capable of comprehending longer object descriptions with attributes or relationships, assumes the existence of such objects in the image. In cases where the described object doesn't exist, REC lacks the ability to reject or filter, leading to false positive errors. This issue poses a significant problem for practical applications and limits its direct usability.
Consider a practical scenario, such as detecting "individuals without helmets" in a construction site using camera data. An OVD method can detect objects like "helmets" and "people" and generalize, but it can't determine the relationship between people and helmets, rendering it unsuitable for direct application. On the other hand, the REC method produces localization results in any image, but often generates false positives, making it impractical.
The current solutions involve breaking down the process, first detecting "people" and "helmets," then training a separate model to determine the relationship between them, or determining presence first, followed by the REC method for localization. This approach requires multiple specialized models, tailored for each scenario, which is far from practical and is inefficient in terms of development.
Hence, there is a significant demand for detection based on language descriptions – a model with strong generalization capabilities, capable of determining whether the described object exists in the image and localizing it based on arbitrary language descriptions. This is where our proposed DOD task comes in.
The introduced DOD task has various practical applications, including:
- Urban security, like detecting "individuals without helmets" in construction sites, "dog outside without leash" in communities, "clothes hung outdoors" on a street, "overloaded vehicles" and "fallen trees on roadsides" on the road, etc.
- Network security, where sensitive images containing bloodshed or violence need to be detected within a massive image dataset.
- (Fine-grained) photo album retrieval based on language (descriptions, keywords, etc.).
- Retrieval and filtering of web image data.
- Detection of specific events in autonomous driving, such as "pedestrians crossing the road".
These scenarios are beyond the capabilities of both OVD and REC. This is the motivation behind DOD.
## Content of the attached PDF
In this PDF file we include 3 additional figures:
- Figure 1 is the diagram of the annotation process for $D^3$.
- Figure 2 and Figure 3 show the model structures of OFA and the proposed OFA-DOD.
Pdf: /pdf/60aba7d5e02c4f27ecf8b747aa009d2cec8800bb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a task called described object detection which involves detecting objects through free form text queries, encompassing referring expression comprehension as well as open vocabulary detection.
Strengths: ## Clarity
The paper is written quite clearly.
## Originality, Quality and Significance: Please see weaknesses
Weaknesses: # Major issues:
## Quality [Annotating using CLIP is insufficient]
* DOD does not provide manually annotated explicit negative certificates for the images that are deemed negative for a given text query. The negatives are extracted using a CLIP matching score for each image with all the possible text queries. It has been demonstrated in several works that CLIP has no fine-grained understanding of the image [1,2,3], is largely incapable of performing spatial reasoning off-the-shelf, obtaining chance performance even on simple synthetic images [1], and also behaves as a bag of words when understanding text [2]. This implies that any complex query that is more than just a category name cannot be appropriately distinguished as being relevant or not for a given image, and using this as a step in the annotation pipeline is sure to introduce errors. Using any sort of image-text model during the annotation process inherits the biases of the underlying model and in my opinion is not a viable approach for constructing an evaluation benchmark. For a benchmark that is characterized as a detection dataset, having accurate negatives is paramount, and the authors have not demonstrated that this criterion can be met using CLIP matching in the annotation process.
* Further, testing the ability of models to distinguish the presence or absence of a textual query, and localize it, would be truly tested in the cases of having unlikely phrases or combinations of objects and attributes or unlikely relations. Using CLIP in the annotation process would completely fail in these cases, as it has been shown that CLIP has a strong Concept Association Bias [3], frequently giving the highest matching score to the most likely completion, without paying attention to the image. This would further exacerbate the difficulty in accurately evaluating models on examples that might be especially hard and interesting (especially relevant to the more challenging "absence" type of queries).
## Originality, Significance [Proposed task is equivalent to existing benchmarks]
* As far as I can tell, the proposed task does not differ from the Phrase Detection task proposed by [4], which addresses the problem of both identifying whether the phrase is relevant to an image and also localizing the phrase, across a whole dataset. Departing from referring expression comprehension, they allow prediction of multiple boxes per phrase, and different from object detection, they evaluate text prompts that are longer than simple category names. Missing this line of literature completely is quite a red flag given that it has been around for quite some years now.
* Another benchmark, "COPS-Ref", has also been proposed [5] that focuses on referring expressions with varying degrees of complexity and in which the localization must be done across multiple images, also containing distractors. This work also does not acknowledge or differentiate from COPS-Ref.
[1] ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension. Sanjay Subramania et al, 2022
[2] When and why vision-language models behave like bags-of-words, and what to do about it? Mert Yuksekgonul et al, 2023.
[3] When are Lemons Purple? The Concept Association Bias of CLIP. Yutaro Yamada et al, 2022
[4] Revisiting Image-Language Networks for Open-ended Phrase Detection. Plummer et al, 2020
[5] Cops-Ref: A new Dataset and Task on Compositional Referring Expression Comprehension. Zhenfang Chen, 2020
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: ## Suggestions
* Line 103: "Currently, OFA holds the SOTA among REC methods." I believe this comment is quite outdated and can be updated (joint OVD & REC methods such as FIBER [1] outperform it).
* For the rebuttal, the authors could explain the difference between the proposed task and phrase detection.
[1] Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone. Dou et al, 2022
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 3 good
Contribution: 1 poor
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. **Due to the page limit, please refer to the *[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)* for the description of the annotation process of the D3 dataset.**
### 1. Annotation using CLIP is insufficient. DOD does not provide manually annotated explicit negative certificates for the images deemed as negative for a text query.
Thanks for the question.
We want to clarify that we do provide **manually annotated negative certificates**. We apologize for not describing the annotation process in detail in the paper, and have added a diagram to illustrate this process, together with a detailed explanation, in the ***[general response (author rebuttal)](https://openreview.net/forum?id=0hwq2vOHT4&noteId=EtVOyLQxeQ)*** above. We hope the reviewer will look into this.
Actually, we provide **negative certificates for all categories except positive categories** on an image. We do not adopt the federated annotation manner of large-scale, many-class detection datasets, which labels only partial negative categories for an image due to the large annotation cost of negative classes.
In our annotation process, we combine the following measures to ensure the negative labels are accurate and not missed:
1. For the **data source**, the images are divided into different scenarios (groups), and the refs from different groups are manually designed to have small overlap with each other (i.e., the refs of one group are not likely (but possible) to appear in images from another group).
2. For image A1 from group A, we select (1) all the refs from group A, which are likely positive, and (2) partial refs from groups other than A, proposed by top-n according to the matching score between the image and refs by **CLIP**. To avoid this operation filtering out some positive refs that should be kept, we use a rather large value of n (initially 40).
3. The **annotators** select 5% of images to check that the selected candidate refs cover all positive refs and that the filtered-out refs are negative. If a positive ref is filtered out, the value of n is increased to cover that ref, and we go back to step 2. After this check, the selected refs include both positives and negatives, and the refs filtered out are negative only.
4. The **annotators** annotate each image by selecting positive refs from candidate refs and adding boxes. They check that other candidate refs are negative refs.
5. The **annotators** check all the images for 2 rounds, the first on all images and the second on 5% images.
With the 5 conditions above, we make sure that for each image, the refs not labeled as positive will be given a manual negative certificate. Such exhaustive and complete annotation is made possible by limiting the scale to only 10000+ images, as an evaluation benchmark, and by utilizing CLIP. But we do not rely on CLIP to decide whether a category is positive or negative for an image.
### 2. Using CLIP in the annotation process would fail in cases of having unlikely phrases, attributes or relations.
Thanks for the insightful question. As shown in the annotation process (diagram and text) and the answer #1, in the annotation process of D3, CLIP merely provides some initial candidate refs. Since
(1) refs likely to be positive are bound with the image's group and always kept as candidates,
(2) the selection percentage of CLIP is large (>10%) and adjusted based on manual check,
(3) the refs not selected by CLIP are manually checked by annotators to be negative,
(4) the annotators decide a ref is positive or negative,
(5) the final annotations are checked by annotators twice,
we argue that the proposed D3 dataset offers manually labeled, accurate negative and positive labels with explicit negative and positive certificates, and CLIP only serves as a tool for accelerating the annotation process without deciding positive/negative labels or harming the annotation accuracy.
### 3. Difference with Phrase Detection task and existing benchmark COPS-Ref.
Apologies for leaving out these two relevant works. The main difference between DOD and Phrase Detection [4] is that Phrase Detection lacks explicit negative certificates. Negative instances are not labeled, so Phrase Detection is not a detection task. In DOD, we ensure that positive instances are annotated exhaustively and all the other references are reliably negative labels. Additionally, Phrase Detection focuses solely on the form of phrases, while DOD encompasses OVD and REC, allowing expressions to be words, phrases, or even sentences.
Cops-Ref [5] focuses on assessing the grounding capability of the REC method in difficult negative regions with related/distracting targets, ensuring explicit negative certificates for a small set of images in their benchmark. Thus, achieving the 'explicit negative certificates across a whole dataset' attribute, like in detection tasks, is only feasible in DOD.
We will include this discussion in the manuscript. Thanks for your valuable suggestions.
### 4. Update REC SOTAs like FIBER. OFA is not SOTA.
Thanks for the reminder. In this work, we divide existing methods into REC, OVD and bi-functional methods. As FIBER handles both detection and REC, we believe it is more suitable to be classified as bi-functional methods rather than REC methods. Therefore, we think the expression "OFA holds the SOTA among REC methods" is valid. We also make comparison to joint OVD & REC methods like UNINEXT (outperforming FIBER on most metrics of REC) and Grounding-DINO (comparable to FIBER) in the paper.
We will add methods like FIBER in bi-functional methods and MDETR in REC methods in the related work of the manuscript. Thanks for your suggestions.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the clarifications and for the detailed response to my questions. I have one follow up question after which I would be happy to raise my score. Reviewer YHXi brought up the point about evaluation on phrases that may be contained within a longer phrase (such as "backpack" and "yellow backpack"). Could you please clarify how this is handled by clearly explaining how the mAP metric is calculated in these cases or in the case of hypernyms like boat / canoe?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Q5JG' Feedback
Comment: Thank you very much for reviewing our responses. We are grateful for the feedback! We address the new query with the following points:
1. **Regarding category relationships, parent-child or synonym are avoided in design, but partial overlap is acceptable**. When designing the categories, we intentionally avoided incorporating such parent-child relationships between categories, and also synonym relationships, to ensure greater diversity and challenge within the dataset. However, there is some partial overlap between categories. For example, "dog not lead by rope outside" and "clothed dog" do not have a parent-child relationship but can overlap in certain cases.
2. **Detection on $D^3$ is multi-label, making it suitable for categories with relationships**. Considering possible relationship between categories, detection on $D^3$ is multi-label rather than single-label. An effective detector should assign all relevant positive categories (e.g., "dog not lead by rope outside" and "clothed dog" for a clothed dog not lead by rope outside) for an instance.
3. Given the multi-label setting, **our exhaustively labeled dataset does not require a specially designed metric for category relationships.** In $D^3$, as all positive and negative labels are known for an instance, the relationships between different categories will not affect the evaluation, so we can use consistent evaluation for each category across all images. Comparatively, for datasets like LVIS with non-exhaustive federated labeling, when relationships between categories exists, the partial labels can introduce errors on unknown categories, so such categories may need to be handled specially. $D^3$ is not susceptible to this issue.
4. **We use the standard detection mAP as the evaluation metric.** The evaluation is similar to COCO mAP and we base its implementation (to be open-sourced) on `pycocotools`. For inference, an instance predicted with categories A and B is regarded as an instance for category A and an instance for category B. The AP for each category is computed as follows: *Predictions for each category across all images are sorted by score in descending order, and those with a ground-truth IoU exceeding a threshold are counted as TPs (and the ground truth is marked as taken), while the rest are counted as false positives.* With these TP and FP instances, we calculate the precision, recall, and AP following COCO. The mAP is calculated by *averaging the AP across all categories*.
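The greedy matching and AP computation described in point 4 can be sketched as below. This is a simplified single-IoU-threshold illustration under our own naming (`average_precision` and its arguments are hypothetical), not the `pycocotools`-based implementation to be released:

```python
def average_precision(preds, num_gt, iou, iou_thresh=0.5):
    """preds: list of (score, pred_id); iou[pred_id][gt_id] is the overlap.
    Greedy matching: highest-scored predictions claim ground truths first."""
    taken = set()
    tps = []
    for score, pid in sorted(preds, reverse=True):  # descending score
        # Best-overlapping ground truth not yet claimed, if any.
        best = max((g for g in range(num_gt) if g not in taken),
                   key=lambda g: iou[pid][g], default=None)
        if best is not None and iou[pid][best] >= iou_thresh:
            taken.add(best)   # true positive: mark the ground truth as taken
            tps.append(1)
        else:
            tps.append(0)     # false positive
    # Average the precision observed at each true-positive rank.
    ap, tp_cum = 0.0, 0
    for rank, hit in enumerate(tps, start=1):
        if hit:
            tp_cum += 1
            ap += tp_cum / rank
    return ap / num_gt if num_gt else 0.0
```

mAP then averages this per-category quantity over all categories (and, in COCO style, over several IoU thresholds).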
In conclusion, the exhaustive annotation in our dataset, unlike federated datasets such as LVIS, and the multi-label setting, which accommodates categories with relationships, ensures that direct AP evaluation for each category is suitable and does not introduce errors. This is attributed to our dedicated design of dataset metadata and annotation process. The standard mAP metric adopted shows our dataset adheres to the stringent requirements of a standard detection dataset and meets the demands of the DOD task. If the reviewer has any further inquiries, we will be happy to answer. Thanks for your help in making this work better. | null | null | null | null | null | null |
Rethinking Semi-Supervised Imbalanced Node Classification from Bias-Variance Decomposition | Accept (poster) | Summary: The paper studies the imbalanced node classification problem and proposes a novel perspective to understand graph imbalance via bias-variance decomposition. By leveraging graph data augmentation, the paper develops a regularization technique to approximate the model's variance. The effectiveness of the method is evaluated across multiple settings and datasets, where the proposed method largely outperforms the compared baselines.
Strengths: - The idea is novel and has solid theoretical motivation.
- Strong performance improvement + extensive experiments.
- The paper is clearly written and well-structured.
Weaknesses: - As someone not working directly in the same field, I think it would be very helpful if the related work section could be more comprehensive than its current shape. Plus, the method design in Sec. 4 shares similarity with many existing techniques. For example, the "confidence-based label-guided consistency regularization" in Eq. 8 has been widely used in standard semi-supervised learning (UDA, FixMatch, FlexMatch) for a long time. And the idea of intra-class aggregation for contrastive learning is also studied in Supervised Contrastive Learning paper. IMHO, it would be better if the paper could discuss the difference with them and tune down accordingly, which does not harm the originality of this paper.
Unconfident comments:
- I appreciate the analysis showing that overall variance increases with the imbalance ratio. But it is a bit surprising that there was not any analysis like this before, even in the field of long-tailed recognition. Could the authors please comment on this?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What value does "v" at Line 208 take? Is it a fixed value across the training or it is dynamically adapted?
- What does it mean in Sec. 2.2 that "We randomly mask node properties"? Does it mean dropping nodes and the corresponding edges of the original graph at random? Is this a standard way of performing data augmentation on graph? What are other alternative data augmentation methods? It would be great if the authors could give more information on this either in Sec. 2.2 or in related work.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments that the theoretical and experimental contributions of RVGNN are strong for tackling imbalance problems. We address all your concerns below:
***
**Q1**:The related work section could be more detailed. The method in Sec. 4 resembles existing techniques. Discussing differences and making adjustments won't affect the paper's originality.
**Answer1**:
We appreciate your pointing out the similarities between our method and existing techniques [1,2,3], and we agree that a more detailed comparison with these techniques would enhance the paper. We would like to kindly draw your attention to **Appendix A, where we have provided a more detailed related work section**, including Imbalanced Learning in the Vision Domain and Graph Contrastive Learning. Below is an attempt to address your concern.
- **Related work of our confidence-based label-guided consistency regularization**: You are right in pointing out that the method in Eq. 8 shares similarities with widely-used semi-supervised learning techniques like UDA, FixMatch, and FlexMatch. Both UDA [1] and FixMatch [2] enforce consistent labels after strong data augmentation, and the consistency loss is skipped for low-confidence samples. FlexMatch [3] challenges the fixed threshold, advocating adaptive adjustment of pseudo-label status via curriculum learning during training.
- **We distinguish our work from them as follows**:
- First, the confidence-based label-guided consistency regularization we use is intended **to estimate the variance of the model more accurately** and to reduce the interference from samples that are poorly predicted.
- **Our approach considers the underlying graph structure, exploiting the relationships between nodes to guide the consistency regularization process. This adaptation leads to more coherent propagation of information across graph structures, which distinguishes our method from the conventional techniques mentioned**. However, we acknowledge the importance of making these distinctions more explicit and will revise the section to articulate these differences more clearly.
- **Related work of our Intra-class Aggregation for Contrastive Learning**: We also acknowledge the similarities between our method and the ideas studied in the Supervised Contrastive Learning [4] paper. Our intra-class aggregation is indeed conceptually similar, but we employ this concept differently to suit the particularities of graph-structured data.
- By considering the relational information in graphs and employing an innovative aggregation strategy, our method strives to capture deeper inter-node dependencies, which differentiates it from the approach taken in Supervised Contrastive Learning. We will provide a more explicit comparison between these methods in the revised manuscript to clarify the novel aspects of our approach. **Importantly, within the GNN domain, we are the first to introduce the idea of supervised contrastive learning into imbalance node classification, which can inspire follow-up work, and we think this is one of our contributions.**
**We agree with your suggestion that discussing these differences and drawing connections with existing methods would not harm the originality of our paper.**
[1] Xie, Qizhe, et al. "Unsupervised data augmentation for consistency training." Advances in neural information processing systems 33 (2020): 6256-6268.
[2] Sohn, Kihyuk, et al. "Fixmatch: Simplifying semi-supervised learning with consistency and confidence." Advances in neural information processing systems 33 (2020): 596-608.
[3] Zhang, Bowen, et al. "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling." Advances in Neural Information Processing Systems 34 (2021): 18408-18419.
[4] Khosla, Prannay, et al. "Supervised contrastive learning." Advances in neural information processing systems 33 (2020): 18661-18673.
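For concreteness, the shared idea behind Eq. 8 and the FixMatch-style methods discussed above, confidence-thresholded consistency, can be sketched as follows (function name, inputs, and details are illustrative, not the paper's actual implementation):

```python
import math

def consistency_loss(p_weak, p_strong, v=0.9):
    """FixMatch-style confidence-thresholded consistency loss (sketch).

    p_weak, p_strong: per-node class-probability lists from two augmented
    views of the same graph. v: the confidence threshold (matching the
    hyperparameter "v" discussed in the rebuttal; value is illustrative).
    Nodes whose weak-view confidence falls below v are skipped.
    """
    total, used = 0.0, 0
    for pw, ps in zip(p_weak, p_strong):
        conf = max(pw)
        if conf < v:                 # skip nodes the model is unsure about
            continue
        label = pw.index(conf)       # hard pseudo-label from the weak view
        total += -math.log(max(ps[label], 1e-12))  # CE on the strong view
        used += 1
    return total / max(used, 1)
```

The graph-aware version described above would additionally propagate these pseudo-labels along the graph structure rather than treating nodes independently.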
***
**Q2**: The reviewer appreciates the analysis of variance versus imbalance ratio but is surprised that such an analysis is absent from the long-tailed recognition literature.
**Answer2**:
- We sincerely value the reviewer's keen insights and feedback on this matter. Upon conducting an exhaustive literature review, it appears that **our study is among the pioneering works to investigate the relationship between variance and imbalance ratio**. We also believe that our focus on optimizing the model's variance through theoretical analysis in both graph learning and traditional imbalanced learning is novel. Recognizing the importance of this contribution, we have been meticulous in our theoretical approach. As detailed in Section 6, we have plans to extend RVGNN and its foundational theories to broader areas, including computer vision and natural language processing.
- To further clarify, we'd like to emphasize that our work currently seems best suited for graph-specific settings. In optimizing the approximate variance, graph data augmentation (GDA) is crucial as it serves to emulate diverse training sets. It's worth noting that in this work GDA is distinct from traditional data augmentation: here, GDA acts as a form of dataset augmentation, not only perturbing the features of individual nodes but also significantly modifying the graph dataset by cropping or adding edges. However, we recognize that applying our framework to traditional settings may not be straightforward.
***
**Q3**: Value of "v" at Line 208.
**Answer3**: The value "v" serves as a threshold to determine whether a node's prediction is confident. It is treated as a fixed hyperparameter rather than being dynamically adapted during training.
***
**Q4**: The meaning of "randomly mask node properties" and other alternative graph augmentation methods.
**Answer4**:
- Randomly masking node properties involves randomly **replacing or hiding specific attributes of certain nodes** within the graph.
- Other augmentation methods include node/edge perturbation, subgraph sampling, feature noise addition, and so on.
We will expand on these points in **detail in the related work section** to provide a more comprehensive explanation.
---
Rebuttal Comment 1.1:
Title: Further Clarification on Q4 of Reviewer 8yVE.
Comment: ## 1. Explanation of Randomly Masking Node Properties
- "Randomly masking node properties" [1,2,3,4,5] does not refer to randomly dropping nodes and the corresponding edges of the original graph. Instead, it's a data preprocessing technique that involves randomly replacing or hiding specific attributes of certain nodes within the graph. This process is typically used during the training phase and helps the model learn to extract useful information from incomplete or partially corrupted data.
[1] You, Yuning, et al. "Graph contrastive learning with augmentations." Advances in neural information processing systems 33 (2020): 5812-5823.
[2] Zhao, Tong, et al. "Data augmentation for graph neural networks." Proceedings of the aaai conference on artificial intelligence. Vol. 35. No. 12. 2021.
[3] Zhao, Tong, et al. "Graph data augmentation for graph machine learning: A survey." arXiv preprint arXiv:2202.08871 (2022).
[4] Ding, Kaize, et al. "Data augmentation for deep graph learning: A survey." ACM SIGKDD Explorations Newsletter 24.2 (2022): 61-77.
[5] Zhou, Jiajun, Jie Shen, and Qi Xuan. "Data augmentation for graph classification." Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020.
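A minimal sketch of this masking operation (function name, rates, and the mask value are illustrative, not the paper's implementation):

```python
import random

def mask_node_features(features, mask_rate=0.2, mask_value=0.0, seed=None):
    """Randomly mask node properties (sketch).

    features: list of per-node attribute lists. Each individual attribute
    is replaced by `mask_value` with probability `mask_rate`. Nodes and
    edges are left untouched, matching the clarification above that this
    is not node/edge dropping.
    """
    rng = random.Random(seed)
    return [
        [mask_value if rng.random() < mask_rate else f for f in node]
        for node in features
    ]
```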
***
## 2. Is this a Standard Way of Performing Data Augmentation on Graph?
- Randomly masking node properties can be considered a data augmentation method, particularly in the training of Graph Neural Networks (GNNs). By presenting the model with some masked or perturbed node information, it can enhance the robustness and generalization ability of the model.
***
## 3. What are Other Alternative Data Augmentation Methods?
In addition to randomly masking node properties, there are many other methods [1,2,3,4,5,6,7,8] for data augmentation in graphs, including but not limited to:
- Node/Edge Perturbation: Altering the graph's structure by randomly adding or removing some nodes or edges.
- Subgraph Sampling: Randomly sampling subgraphs from the original graph for use as training samples.
- Feature Noise Addition: Adding random noise to the features of nodes or edges.
- Graph Rotation and Reflection: Applying geometric transformations like rotation and reflection to the graph structure.
- Adjacency Matrix Perturbation: Changing the connectivity of the graph by altering the weights of the adjacency matrix.
These methods can be used independently or in combination to enrich the training data and enhance the model's robustness and generalization performance. **We will expand on these points in detail in either Sec. 2.2 or in the related work section, to provide a more comprehensive background and explanation.** Thank you again for your valuable feedback!
[1] You, Yuning, et al. "Graph contrastive learning with augmentations." Advances in neural information processing systems 33 (2020): 5812-5823.
[2] Zhao, Tong, et al. "Data augmentation for graph neural networks." Proceedings of the aaai conference on artificial intelligence. Vol. 35. No. 12. 2021.
[3] Zhao, Tong, et al. "Graph data augmentation for graph machine learning: A survey." arXiv preprint arXiv:2202.08871 (2022).
[4] Ding, Kaize, et al. "Data augmentation for deep graph learning: A survey." ACM SIGKDD Explorations Newsletter 24.2 (2022): 61-77.
[5] Zhou, Jiajun, Jie Shen, and Qi Xuan. "Data augmentation for graph classification." Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020.
[6] Zhu, Yanqiao, et al. "Graph contrastive learning with adaptive augmentation." Proceedings of the Web Conference 2021. 2021.
[7] Liu, Yixin, et al. "Graph self-supervised learning: A survey." IEEE Transactions on Knowledge and Data Engineering 35.6 (2022): 5879-5900.
[8] Zhao, Tong, et al. "Graph data augmentation for graph machine learning: A survey." arXiv preprint arXiv:2202.08871 (2022).
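As a concrete illustration of the first bullet above (node/edge perturbation), a minimal sketch with hypothetical names and rates:

```python
import random

def perturb_edges(edges, num_nodes, drop_rate=0.1, add_rate=0.1, seed=None):
    """Edge perturbation sketch: randomly drop existing edges and add
    random new ones. `edges` is a list of (u, v) pairs; rates and names
    are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    # keep each existing edge with probability (1 - drop_rate)
    kept = [e for e in edges if rng.random() >= drop_rate]
    # add a proportional number of random edges
    n_add = int(len(edges) * add_rate)
    added = [
        (rng.randrange(num_nodes), rng.randrange(num_nodes))
        for _ in range(n_add)
    ]
    return kept + added
```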
---
Reply to Comment 1.1.1:
Title: We've Carefully Addressed Each of the Questions You Raised and Eagerly Hoping for Your Valuable Feedback
Comment: Dear Reviewer 8yVE,
Thank you deeply for your thoughtful review and valuable insights. We've taken every question you've raised to heart and have responded in detail where needed. We sincerely hope you'll take a moment to reflect on our responses, trusting that they meet your considerations. Your time and expertise in reviewing our work mean so much to us.
Warm Regards,
Authors | Summary: This paper focuses on the imbalance problem in graph node classification. The authors establish the relationship between model variance and the degree of dataset imbalance by adopting the Bias-Variance Decomposition. Furthermore, they derive a regularization term for approximating the variance of the model from the above theoretical analysis.
Strengths: This paper discovers the relation between imbalance and variance, estimates the variance with graph augmentation, and solves the imbalance problem by adopting two proposed regularization terms, addressing an important problem in node classification.
The regularization term is theoretically derived from the Bias-variance Decomposition, and the author, for the first time, fits this theory into the field of imbalance by making two weak assumptions, which is a good contribution to this field.
Weaknesses: - Overall, this paper contains many typos and abuses of notation that may make the reader uncomfortable. I list those points in the limitations.
- In this work, the authors make a strong assumption that all embeddings follow the multivariate normal distribution with the same sample variance ($h(x)^T\Lambda^ih(x)$ in the paper) between each class. And the authors use this assumption to derive the core relation between variance and imbalance, then the regularization term.
However, this assumption is strongly related to the quality of the extracted embeddings.
One important guarantee for this assumption is a well-trained feature extractor with a clear boundary between each class.
However, at the initial stage, the model is not well-trained, which implies a large variance of $h(x)^T\Lambda^ih(x)$ within each class.
It is unclear how this will affect the power of regularization though the overall results look good, and the authors did not make a discussion of the above problem in the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why do the authors assume node embeddings follow the multivariate normal distribution?
2. Could the authors provide the significance of the results in Fig1 (e.g., using hypothesis testing)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The author could improve their manuscript by solving the below points and typos:
- line 25-27: sentence should be shortened.
- line 76: no definition for $N$, $F$ and $D$.
- Section 3.1: Is there a lack of citations of Bias-variance Decomposition?
- The term between Eq(2) and line 111, line 114 is different.
- line 140: "condition on" should be replaced by "conditioning on".
- line 208: indicator function is often denoted as $\\mathbf{1}_{ \\{ \\cdots \\} }$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the comments. We appreciate your recognition that the theoretical and experimental contributions of RVGNN are strong for tackling imbalance problems. We address all your concerns below:
***
**Q1**: This paper contains many typos and abuse of the notation that may make the reader uncomfortable.
**Answer1**:
Thanks for highlighting the concerns related to typos and notation inconsistencies. We have **taken every point you listed into consideration. We have thoroughly revised the paper, making sure to correct all the typos and standardize the notation throughout**. Also based on this, we have **updated the notation table (Appendix G)** of the full text we compiled earlier to facilitate the reading of the article. We believe these changes have enhanced the readability and clarity of the manuscript. **Details are as follows:**
- For the line 25-27 which sentence should be shortened, we modified it to: "Graph-structured data often requires consideration of the data's topology and environment, and empirical evidence indicates that topological asymmetries can also affect the performance of models."
***
- For the line 76 which is no definition for N, F, D, we modified it to: "N, F, and D are the dimensions of features and models, and they have the same meaning as n, f, and d of Preliminaries. We unify them."
***
- For Section 3.1 which is a lack of citations of Bias-variance Decomposition, we have cited some of the most relevant papers[1,2,3,4].
[1] Belkin, Mikhail, et al. "Reconciling modern machine-learning practice and the classical bias–variance trade-off." Proceedings of the National Academy of Sciences 116.32 (2019): 15849-15854
[2] Kohavi, Ron, and David H. Wolpert. "Bias plus variance decomposition for zero-one loss functions." ICML. Vol. 96. 1996.
[3] Von Luxburg, Ulrike, and Bernhard Schölkopf. "Statistical learning theory: Models, concepts, and results." Handbook of the History of Logic. Vol. 10. North-Holland, 2011. 651-706.
[4] Neal, Brady. "On the bias-variance tradeoff: Textbooks need an update." arXiv preprint arXiv:1912.08286 (2019).
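For readers' convenience, the decomposition referenced above can be written in its textbook form (our notation here is generic and may differ from the paper's):

```latex
% Bias--variance decomposition of the expected squared error of a
% predictor \hat{f}_D trained on a random training set D, with
% \bar{y}(x) = \mathbb{E}[y \mid x] the noise-free target:
\mathbb{E}_{D}\!\left[\big(\hat{f}_{D}(x) - \bar{y}(x)\big)^{2}\right]
  = \underbrace{\big(\mathbb{E}_{D}[\hat{f}_{D}(x)] - \bar{y}(x)\big)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}_{D}\!\left[\big(\hat{f}_{D}(x) - \mathbb{E}_{D}[\hat{f}_{D}(x)]\big)^{2}\right]}_{\text{variance}}
```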
***
- Regarding the inconsistent terms between Eq. (2), line 111, and line 114: we suspect the confusion concerns the definition of $x_{i}$ in line 111. For better readability, we modified the sentence to: "We make two assumptions in our approach. Firstly, we assume that the node embeddings $h^i$ of node $x^i$ extracted by a graph neural network for nodes belonging to class $i$ follow a multivariate normal distribution $h^{i} \sim N(\mu^{i}, \Lambda^{i})$." As for $h_{n_{i}}^{i}$, it denotes the embedding of the $n_{i}$-th node belonging to class $i$. We recognize that this notation can also cause some confusion, and we have carefully corrected it in the paper.
***
- For the line 140 where "condition on" should be replaced by "conditioning on", we have corrected it.
***
- For the line 208 that indicator function is often denoted as $\boldsymbol{1}_{\{...\}}$, we have followed your idea and have corrected it.
***
Please let us know if there are any other aspects that need further attention.
***
**Q2**: The strong assumption that all embeddings follow a multivariate normal distribution with the same sample variance ($h(x)^{T}\Lambda^{i}h(x)$ in the paper) across classes, which is used to derive the core relation between variance and imbalance and then the regularization term.
**Answer2**:
We appreciate the reviewer's feedback. Our work indeed assumes, for illustrative purposes, that different classes share the same $\Lambda^i$ when demonstrating the relationship between variance and imbalance. However, it is vital to clarify that our regularization term is not contingent on this assumption. By employing graph augmentation, we generate pseudo-node pairs belonging to an identical class, subsequently reducing the feature discrepancy within these pairs to diminish the class-specific variance. It's worth noting that this variance approximation remains unaffected by the discrepancy of different $\Lambda^i$ values. The optimization is conducted concurrently with the classification training. As you astutely pointed out, by the conclusion of the training phase, the relationship between variance and dataset imbalance is evident. Given that our algorithm consistently yields a model with reduced variance, we confidently assert that the resultant model exhibits robustness in imbalanced scenarios.
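To make the mechanism above concrete, a minimal sketch of reducing the feature discrepancy between augmentation-induced pseudo-node pairs (names are illustrative, not the paper's code):

```python
def variance_regularizer(emb_a, emb_b):
    """Sketch of the variance-approximation term described above.

    emb_a, emb_b: embeddings (lists of per-node float vectors) of the
    same nodes under two random graph augmentations, i.e. the
    "pseudo-node pairs belonging to an identical class". Shrinking their
    mean squared discrepancy shrinks the per-class variance estimate,
    independently of any shared-covariance assumption.
    """
    assert len(emb_a) == len(emb_b)
    return sum(
        (a - b) ** 2
        for va, vb in zip(emb_a, emb_b)
        for a, b in zip(va, vb)
    ) / len(emb_a)
```

In practice this term would be minimized jointly with the classification loss, as the rebuttal describes.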
***
**Q3**: Could the authors provide the significance of the results in Fig1 (e.g., using hypothesis testing)?
**Answer3**:
Thank you for your constructive suggestions. **We computed the Pearson correlation coefficient between variance and imbalance ratio (log), as presented in the table below**. The Pearson correlation coefficient, denoted as $r$, is a prevalent metric for gauging linear correlations. This coefficient lies between $-1$ and $1$ and reflects both the magnitude and direction of the correlation between two variables. An $r$ value greater than $0.5$ indicates a strong positive correlation. Furthermore, the p-value results from a hypothesis test with the null hypothesis $H_{0}: \rho = 0$ and the alternative hypothesis $H_{a}: \rho \neq 0$, where $\rho$ represents the population correlation coefficient.
| | Citeseer-GCN | Citeseer-GAT | Citeseer-SAGE | PubMed-GCN | PubMed-GAT | PubMed-SAGE |
|------------|--------------|--------------|---------------|------------|------------|-------------|
| **$r$** | 0.751 | 0.786 | 0.642 | 0.694 | 0.760 | 0.747 |
| **P-value**| 3.203e-14 | 1.516e-14 | 5.107e-07 | 2.233e-09 | 2.344e-14 | 5.634e-13 |
**Given that the Pearson correlation coefficient between variance and imbalance ratio exceeds $0.5$, and the p-value is below $0.01$, we deduce that there is a robust correlation between variance and imbalance ratio. This relationship is statistically significant at the $0.01$ significance level.**
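For reproducibility, the coefficient in the table can be computed as follows (a pure-Python sketch; the p-values additionally require a Student-t CDF with $n-2$ degrees of freedom, as provided by e.g. scipy.stats.pearsonr):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient (sketch).

    Returns (r, t), where t is the test statistic for the hypothesis
    test H0: rho = 0; the p-value follows from a Student-t distribution
    with n - 2 degrees of freedom.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(
        sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    )
    r = num / den
    t = r * math.sqrt((n - 2) / (1 - r * r)) if abs(r) < 1.0 else float("inf")
    return r, t
```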
---
Rebuttal Comment 1.1:
Title: I'll raise my score.
Comment: The authors clearly addressed my concern. Thus, I will raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Expressing Gratitude and Seeking Further Consideration
Comment: Thank you for your constructive feedback and the subsequent score adjustment. **We truly value your insights and are pleased that our rebuttal addressed your concerns.**
We earnestly believe that **our research provides a notable contribution to the domain, which is the first to concentrate on optimizing the model's variance, both in graph imbalance learning and traditional imbalanced learning.** Given your expertise, we would be grateful if you could **reconsider areas of our work that might further align with the conference's criteria**, possibly leading to an even more favorable assessment.
Once again, we deeply appreciate your time and thoughtful evaluation. We eagerly await your final review.
Best regards | Summary: This paper focuses on semi-supervised imbalanced node classification tasks. Specifically, the authors first establish a theoretical result that connects the imbalance ratio with the model variance and then propose a new regularization term related to the variance based on the graph augmentation technique. Experimental results show the effectiveness of the proposal.
Strengths: 1. The authors give theoretical results that connect the imbalance ratio to the model variance. The results are insightful.
2. The experimental results show the proposal can achieve the SOTA performance on various class-imbalanced datasets.
Weaknesses: 1. The theoretical results rely on strong assumptions, which may be difficult to satisfy in real-world tasks.
2. From Figure 3, it looks like the performance is highly influenced by the hyper-parameters. So how to determine the hyper-parameter should be discussed in the paper.
3. Moreover, I suspect that the performance of the proposal is not related to the theoretical results; it may instead rely on graph augmentation. Thus, more ablation studies should be conducted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How to determine the hyper-parameters in different datasets?
2. How to demonstrate the superiority of algorithm performance is related to theoretical results?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the comments. We appreciate your recognition that the theoretical and experimental contributions of our work are strong. We address all your concerns below:
***
**Q1**: The theoretical results rely on strong assumptions, which may be difficult to satisfy in real-world tasks.
**Answer1**:
We acknowledge and appreciate your observation regarding our assumption that the embeddings follow a multivariate normal distribution to approximate the variance. **Our rationale is inspired by the Central Limit Theorem (CLT)**. The generation of GNN embeddings involves aggregating information from several random processes, such as random walks and aggregations from neighboring nodes [1,2]. Each of these can be viewed as a distinct random variable. As we aggregate many such variables, the CLT suggests that their aggregate effect tends toward a normal distribution, given that they possess finite expectations and variances.
**We recognize this as an approximation and aimed to streamline our theoretical framework**. In practice, this assumption has proven to be a satisfactory fit. Nonetheless, we are cognizant of its limitations and value your feedback on this matter.
[1] Cavallari, Sandro, et al. "Learning community embedding with community detection and node embedding on graphs." Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 2017.
[2] Xu, Mengjia. "Understanding graph embedding methods and their applications." SIAM Review 63.4 (2021): 825-853.
***
**Q2**: From Figure 3, it looks like the performance is highly influenced by the hyper-parameters. So how to determine the hyper-parameter should be discussed in the paper.
**Answer2**:
- We would like to **kindly draw your attention to Appendix E.4 of the original submission**, where we have provided detailed range for our hyperparameter search.
- In determining the hyperparameters for different datasets, we follow a rigorous and systematic approach rather than relying on heuristic methods. **Our methodology involves conducting a hyperparameter sweep in the conventional manner**.
To begin, we partition the dataset into training, validation, and test sets. During the hyperparameter sweep, we carefully configure the range for each hyperparameter and employ strategies to generate candidate sets of hyperparameters. **We then search for the set of hyperparameters that yields the best performance on the validation set, using the F1 score as the key evaluation metric**. In our experiments, we observed that **the hyperparameter selection process for our model is exceptionally robust.**
To facilitate this process, we leverage the wandb platform (wandb.ai) to organize our experiments and **utilize the built-in Bayes strategy within the hyperparameter sweep**. This allows for efficient exploration of the hyperparameter space, enabling us to identify the optimal combination of hyperparameters for each dataset.
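As an illustration, such a Bayes-strategy sweep can be configured roughly as below (hyperparameter names and ranges are hypothetical, not our actual search space):

```python
# Illustrative wandb sweep configuration. With wandb installed, this dict
# would be passed to wandb.sweep(...) and runs launched via
# wandb.agent(...), maximizing validation F1. All names/ranges here are
# placeholders for exposition only.
sweep_config = {
    "method": "bayes",  # wandb's built-in Bayesian search strategy
    "metric": {"name": "val_f1", "goal": "maximize"},
    "parameters": {
        "lr": {
            "distribution": "log_uniform_values",
            "min": 1e-4,
            "max": 1e-1,
        },
        "weight_decay": {"values": [0.0, 5e-4, 1e-3]},
        "lambda_vr": {"min": 0.1, "max": 10.0},  # weight of the VR term
        "lambda_ir": {"min": 0.1, "max": 10.0},  # weight of the IR term
    },
}
```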
***
**Q3**: Moreover, I doubt that the performance of the proposal is not related to the theoretical results, it may rely on graph augmentation. Thus, more ablation studies should be conducted.
**Answer3**:
- The core idea of our work is to optimize the variance of the model. As the key technique we use to **approximate the variance, graph data augmentation plays a very important role**. In fact, the **theoretical derivation also depends on graph data augmentation**.
- We are not entirely sure we understand your point. We believe your concern may be that the contrastive loss ($L_{IR}$), used together with data augmentation, plays the key role. Through **the following detailed ablation analysis**, we clearly show that $L_{VR}$ plays the most important role.
| Model | Method | CiteSeer-Semi F1 (%) | PubMed-Semi F1 (%) | Computers-Semi F1 (%) |
|:------|:----------|:---:|:---:|:---:|
| **SAGE** | Sup | 44.43 | 64.80 | 77.16 |
| | Sup+IR | 60.92 | 68.35 | 78.62 |
| | Sup+VR | 62.33 | 73.68 | 79.54 |
| | Sup+IR+VR | 64.91 | 76.44 | 80.40 |
| **GAT** | Sup | 48.08 | 65.91 | 74.04 |
| | Sup+IR | 65.35 | 67.04 | 74.35 |
| | Sup+VR | 65.67 | 72.87 | 77.65 |
| | Sup+IR+VR | 65.70 | 74.46 | 78.12 |
| **GCN** | Sup | 43.98 | 62.28 | 73.54 |
| | Sup+IR | 53.94 | 66.16 | 76.32 |
| | Sup+VR | 57.43 | 72.34 | 77.09 |
| | Sup+IR+VR | 59.73 | 74.48 | 78.49 |
---
Rebuttal Comment 1.1:
Title: We've Carefully Addressed Each of the Questions You Raised and Eagerly Hoping for Your Valuable Feedback
Comment: Dear Reviewer j7Kz,
From the depths of our hearts, we express our sincerest gratitude for your thoughtful and perceptive review. Every question you posed has been carefully contemplated, and we've strived to offer thorough answers where appropriate. We cordially invite you to peruse our responses, with the hope that they align with your insights. The time and wisdom you've invested in reviewing our work profoundly moves us.
Warm Regards,
Authors | Summary: This paper introduces a new approach to address the issue of class imbalance in graph neural networks (GNNs) for learning on graph-structured data. It also provides a novel theoretical perspective for addressing the problem of imbalanced node classification in GNNs.
Strengths: 1 The article is well written and easy to understand.
2 The authors prove their claim by theoretical derivation.
3 The experimental results further corroborate the authors' view.
Weaknesses: 1 The use of L to represent the set of labeled nodes is still relatively rare in the definition of graphs, and the authors are advised to describe the definition involved further in the opening paragraph of the method.
2 Is the authors considering restructuring the article? Putting the theoretical derivation before the introduction of the method may not give the reader a good reading experience.
3 I am curious about this one: the authors point out that the full graph noise distribution is taken into account when sampling the graph in order to sample the training set that has approximate variance. However, if one considers graph learning in an open world, then the noise from the added data might be quite different from the original graph data.
4 In Table 1, the authors can consider further reporting the percentage of performance improvement compared to SOTA.
5 In Fig 3(a), it seems that the performance improvement from VR loss is not significant. Have you considered the Sup+VR combination? Also, can you further detail why VR loss gives a smaller boost to GAT?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: No more questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the comments. We appreciate your recognition that the theoretical and experimental contributions of our model are strong. We address all your concerns below:
***
**Q1**: The use of L to represent the set of labeled nodes is still relatively rare in the definition of graphs, and the authors are advised to describe the definition involved further in the opening paragraph of the method.
**Answer1**:
We have modified our notation to **use $V_{L}$ and $V_{U}$** to represent the sets of labeled and unlabeled nodes, respectively, replacing the previous $L$ and $U$. This revised notation aligns better with common conventions.
***
**Q2**: Is the authors considering restructuring the article? Putting the theoretical derivation before the introduction of the method may not give the reader a good reading experience.
**Answer2**:
We sincerely appreciate your valuable feedback concerning the structure of our paper. We will **thoughtfully consider your suggestions and discuss potential refinements to the article's organization** with our co-authors. Our goal is to enhance the clarity and readability for the audience.
***
**Q3**: I am curious about this one: the authors point out that the full graph noise distribution is taken into account when sampling the graph in order to sample the training set that has an approximate variance. However, if one considers graph learning in an open world, then the noise from the added data might be quite different from the original graph data.
**Answer3**:
Thank you for raising this intriguing scenario. Our original focus **revolved around semi-supervised learning within the "close-world" setting, where all labeled and unlabeled nodes are accessible during the training stage**. Therefore, we would like to clarify that the concern you've mentioned won't be applicable to our current paper.
We do recognize that in the "open-world" setting, the regularization term introduced in our paper **may not be able to handle the variance of unseen data effectively**. However, it still serves as a valuable regularization term for addressing the imbalance present within the known dataset. We appreciate your acknowledgment of the potential value in estimating variance within the "open-world" setting, and we agree that it could pave the way for generalizing our algorithm.
***
**Q4**: In Table 1, the authors can consider further reporting the percentage of performance improvement compared to SOTA.
**Answer4**:
We agree that including the percentage of performance improvement compared to the state-of-the-art (SOTA) would provide additional clarity. **We have updated the tables** to reflect this comparison and believe it enhances the understanding of our method's effectiveness.
***
**Q5**: In Fig 3(a), it seems that the performance improvement from VR loss is not significant. Are you considering the Sup+VR combination? Also, can you further discuss why VR loss gives a smaller boost to GAT?
**Answer5**:
1. Thank you for pointing that out. It's true that the notable performance enhancement from the VR loss isn't as pronounced in the CiteSeer-GAT instance. **However, in a majority of the other cases, the role of $L_{VR}$ is quite significant**. We would like to **kindly draw your attention to Appendix D of the original submission**, where we have provided more detailed ablation experiments across additional datasets like PubMed and Computers, and using various model scenarios such as GCN, GAT, and SAGE. We hope this addresses your concerns.
2. Based on your insightful feedback, we **rigorously incorporated the combination of Sup+$L_{VR}$ into our ablation study as follows**. The results affirm a fundamental insight of our work: optimizing the model's variance proves to be an effective method to address the long-tail problem.
| Model | Method | CiteSeer-Semi F1-Score (%) | PubMed-Semi F1-Score (%) | Computers-Semi F1-Score (%) |
|:---|:---|:---:|:---:|:---:|
| **SAGE** | Sup | 44.43 | 64.80 | 77.16 |
| | Sup+IR | 60.92 | 68.35 | 78.62 |
| | Sup+VR | 62.33 | 73.68 | 79.54 |
| | Sup+IR+VR | 64.91 | 76.44 | 80.40 |
| **GAT** | Sup | 48.08 | 65.91 | 74.04 |
| | Sup+IR | 65.35 | 67.04 | 74.35 |
| | Sup+VR | 65.67 | 72.87 | 77.65 |
| | Sup+IR+VR | 65.70 | 74.46 | 78.12 |
| **GCN** | Sup | 43.98 | 62.28 | 73.54 |
| | Sup+IR | 53.94 | 66.16 | 76.32 |
| | Sup+VR | 57.43 | 72.34 | 77.09 |
| | Sup+IR+VR | 59.73 | 74.48 | 78.49 |
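Answer 4 above mentions reporting relative improvements alongside raw scores; such percentages follow directly from a table like this one. A minimal sketch (the helper name is illustrative, not from the paper's code), using the SAGE / CiteSeer-Semi column:

```python
# Relative (%) improvement of each variant over the plain supervised
# baseline, using the SAGE / CiteSeer-Semi F1 scores from the table.
def pct_improvement(baseline: float, score: float) -> float:
    """Relative improvement of `score` over `baseline`, in percent."""
    return (score - baseline) / baseline * 100.0

sage_citeseer = {
    "Sup": 44.43,
    "Sup+IR": 60.92,
    "Sup+VR": 62.33,
    "Sup+IR+VR": 64.91,
}

baseline = sage_citeseer["Sup"]
gains = {name: round(pct_improvement(baseline, f1), 1)
         for name, f1 in sage_citeseer.items() if name != "Sup"}
print(gains)  # Sup+IR+VR gives a ~46.1% relative F1 gain over Sup
```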
---
Rebuttal Comment 1.1:
Title: We've Carefully Addressed Each of the Questions You Raised and Eagerly Hope for Your Valuable Feedback
Comment: Dear Reviewer vtyV,
From the bottom of our hearts, we thank you for your kind and insightful review. Each question you brought up has been tenderly considered, and we've endeavored to provide comprehensive answers where necessary. We warmly invite you to take a moment to go through our responses, hoping they resonate with your thoughts. The time and wisdom you've shared in reviewing our work touches us deeply.
Warm Regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the response. Based on the authors' rebuttal and comments from other reviewers, I decided to keep my score.
---
Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive and insightful feedback.
- We have thoroughly revised the paper, making sure to correct all the typos and standardize the notation throughout.
- We **have uploaded our revised Figure 2 (Pipeline of RVGNN) here**; in this figure, additional elements that were not previously marked have been incorporated.
Pdf: /pdf/ee02af6861667440e366fb329e218d863a4a3196.pdf
(Source: NeurIPS_2023_submissions_huggingface, 2023)
---
Summary: This paper proposes a theory that relates data imbalance to model variance and designs a method to mitigate the bias of class imbalance.
For the theory, this paper finds that the variance of each class is proportional to the inverse of the number of samples in that class.
For the method, this paper uses a regularization term to approximate the model variance and construct a varied training distribution with graph augmentation. The regularization term is added to the original loss function.
Results on public-split datasets and naturally imbalanced datasets verify the proposal.
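The inverse-proportionality claim summarized above parallels a textbook fact: the variance of a sample-mean estimator is sigma^2 / n, so classes with fewer samples are estimated with proportionally higher variance. A quick Monte Carlo illustration (not the paper's estimator; purely a sketch of the scaling):

```python
import numpy as np

# Var(mean of n i.i.d. samples) = sigma^2 / n, so a class with 10x
# fewer samples has roughly 10x the estimator variance.
rng = np.random.default_rng(0)
sigma = 2.0

def class_mean_variance(n_samples: int, n_trials: int = 20000) -> float:
    draws = rng.normal(0.0, sigma, size=(n_trials, n_samples))
    return float(draws.mean(axis=1).var())

v_minority = class_mean_variance(10)    # small class
v_majority = class_mean_variance(100)   # 10x larger class
print(v_minority / v_majority)  # close to 100 / 10 = 10
```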
Strengths: This paper first connects the model variance and class imbalance in graph learning.
Weaknesses: 1. L_IR objective is more closely related to variance than contrastive learning. It would help to clarify the difference between your work and contrastive graph learning.
2. This paper does not discuss previous work on variance and imbalance in traditional imbalanced learning. Since the variance calculated in this paper does not use graph-specific settings, it is not unique to graph learning.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Why can you assume that node embeddings follow a multivariate normal distribution?
2. Many elements of Figure 2 need descriptions. E.g., What do yellow and blue circles mean?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: This paper does not discuss limitations. But I think the novelty and unique contribution is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We sincerely thank you for the comments. We appreciate your acknowledgment of our work's novelty as the first to connect model variance and the class imbalance problem on graphs. We address all your concerns below:
***
**Q1**: $L_{IR}$ objective is more closely related to variance than contrastive learning. It would help to clarify the difference between your work and contrastive graph learning.
**Answer1**:
- The main focus of our study is to utilize graph augmentation to formulate the loss function $L_{VR}$ for approximating the model's variance. It's worth clarifying that **$L_{IR}$ is not directly linked to the model's variance**. We hope this provides a clearer understanding.
- We acknowledge that there might have been confusion caused by a misleading sentence in Sec 4.2, where we mentioned, "In this section, we propose an extension to the concept of graph contrastive learning that emphasizes the invariance of node representations in semi-supervised scenarios." We understand that this statement could imply a direct relation between $L_{IR}$ and variance, which was not our intention. **In the revision, we will carefully rephrase this section to clarify our intentions** and avoid any potential misinterpretations.
- For the term $L_{IR}$, our experiments revealed that the inclusion of an additional loss term, $L_{IR}$, **is beneficial in ensuring that the graph augmentation does not negatively influence GNN feature extraction**. This concept is inspired by graph contrastive learning, which we have tailored to better fit our specific setting.
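To make the distinct roles of the two terms concrete, here is an illustrative sketch (not the paper's exact formulation; the shapes, noise level, and weights are hypothetical) of a variance-style regularizer computed over augmented views, alongside an invariance-style term:

```python
import numpy as np

# Hypothetical sketch: given logits on K augmented views of a graph,
# an L_VR-style term penalizes prediction variance across views,
# while an L_IR-style term keeps augmented-view representations close
# to the clean-view representations.
rng = np.random.default_rng(1)
K, n_nodes, n_classes = 4, 8, 3

clean_logits = rng.normal(size=(n_nodes, n_classes))
# Logits on K augmented graphs (perturbed features / edges).
aug_logits = clean_logits + 0.1 * rng.normal(size=(K, n_nodes, n_classes))

l_vr = aug_logits.var(axis=0).mean()              # variance across views
l_ir = ((aug_logits - clean_logits) ** 2).mean()  # invariance to augmentation

# A total loss would combine these with the supervised loss, e.g.
# L = L_sup + alpha * l_vr + beta * l_ir (alpha, beta hyperparameters).
print(l_vr, l_ir)
```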
***
**Q2**: This paper does not discuss previous work on variance and imbalance in traditional imbalanced learning. Since the variance calculated in this paper does not use graph-specific settings, it is not unique to graph learning.
**Answer2**:
- First, we truly appreciate the reviewer's attention and feedback about this concern. After an in-depth literature review, we found that our study is among **the first to concentrate on optimizing the model's variance, both in graph learning and traditional imbalanced learning**. We consider this a significant contribution and have approached the challenge from a thoughtful theoretical perspective. As mentioned in Section 6, our future directions include extending RVGNN and its foundational theories to the realms of computer vision and natural language processing. Considering this focus, we feel that an extensive discussion about the combination of variance and imbalance in traditional imbalanced learning might not be central to our current study. Once again, thank you for your invaluable insights and suggestions.
- Regarding your second concern, we would like to clarify that in our work, we indeed incorporate graph-specific settings when approximating the variance of the model.
- In optimizing the approximate variance, graph data augmentation (GDA) is crucial as it serves to emulate diverse training sets. In this work, it's worth noting that GDA is distinct from traditional data augmentation. Here, **GDA acts as a form of dataset augmentation**. It not only perturbs the features of individual nodes but also significantly modifies the graph dataset by cropping or adding edges. However, we recognize that **applying our framework to traditional settings may not be straightforward**.
- As detailed in our paper, we believe that **addressing the graph imbalance problem by optimizing the model's variance offers a more meaningful approach compared to traditional imbalanced learning domains**. Unlike conventional imbalanced learning, devising GNN imbalanced learning methods for graph-structured data presents distinct challenges. Graph-structured data often necessitate careful consideration of the data’s topology and surroundings. Traditional strategies such as oversampling and loss function engineering often fall short in achieving satisfactory results. More specifically, the modeling approach for data personalization shows marked deficiencies in scalability and generalization capability. Therefore, a more fundamental and theoretical viewpoint is urgently needed to address the imbalance node classification issue. In this work, we introduce a novel perspective to comprehend graph imbalance through the prism of Bias-Variance Decomposition. This forms the motivation for our work.
***
**Q3**: The node embeddings follow a multivariate normal distribution.
**Answer3**:
We acknowledge and appreciate your observation regarding our assumption that the embeddings follow a multivariate normal distribution. **Our rationale is inspired by the Central Limit Theorem (CLT)**. The generation of graph embeddings involves aggregating information from several random processes, such as random walks and aggregations from neighboring nodes [1,2]. Each of these can be viewed as a distinct random variable. As we aggregate many such variables, the CLT suggests that their aggregate effect tends to approach a normal distribution, given they possess finite expectations and variances.
**We recognize this as an approximation and aimed to streamline our theoretical framework**. In practice, this assumption has proven to be a satisfactory fit. Nonetheless, we are cognizant of its limitations and value your feedback on this matter.
[1] Cavallari, Sandro, et al. "Learning community embedding with community detection and node embedding on graphs." Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 2017.
[2] Xu, Mengjia. "Understanding graph embedding methods and their applications." SIAM Review 63.4 (2021): 825-853.
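The CLT argument above is easy to check numerically. In this illustrative sketch (not the paper's model), each "embedding" coordinate aggregates many i.i.d. non-Gaussian neighbor messages, and the aggregate comes out approximately normal:

```python
import numpy as np

# Aggregate many i.i.d. *non-Gaussian* neighbor messages; by the CLT
# the aggregate is approximately normal. Illustrative only.
rng = np.random.default_rng(0)
n_nodes, n_neighbors = 50_000, 200

# Uniform(-1, 1) messages: clearly non-normal (excess kurtosis -1.2).
messages = rng.uniform(-1.0, 1.0, size=(n_nodes, n_neighbors))
emb = messages.mean(axis=1)  # one aggregated "embedding" per node

z = (emb - emb.mean()) / emb.std()
skew = float((z ** 3).mean())
excess_kurtosis = float((z ** 4).mean() - 3.0)
print(skew, excess_kurtosis)  # both near 0, as for a normal distribution
```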
***
**Q4**: Many elements of Figure 2 need descriptions. E.g., What do yellow and blue circles mean?
**Answer4**:
We have revised Figure 2 to provide more detail. In this figure, additional elements that were not previously marked have been incorporated (yellow and blue represent different classes). **We have placed the modified Figure 2 in the uploaded PDF file.**
---
Rebuttal Comment 1.1:
Title: We've Carefully Addressed Each of the Questions You Raised and Eagerly Hope for Your Valuable Feedback
Comment: Dear Reviewer bhDb,
Thank you profoundly for your insightful review and invaluable feedback. We have earnestly considered each query you posed and have addressed them in depth where necessary. We genuinely hope that you will take some time to review our detailed responses, confident that they address your concerns. Your expertise and the time you've invested in reviewing our work are deeply appreciated by us.
Warm Regards,
Authors
---
Modeling Dynamics over Meshes with Gauge Equivariant Nonlinear Message Passing
Paper Decision: Accept (poster)
Summary: This paper studies the problem of gauge equivariant convolutional and attentional architectures on meshes and proposes to introduce non-linear activations to enhance the model. The experiments on three models show the performance of the proposed method.
Strengths: S1. The studied problem is important.
S2. The presentation is good.
S3. Equivariance is an important property in message passing neural networks.
Weaknesses: W1. The most important point is the novelty. Actually, Equivariant Mesh Attention Networks have already combined gauge equivariance with MPNN. Your work adds a non-linear activation [16] into the original models. The technical contribution is weak.
W2. Given your claim that "the combination of nonlinear message passing and gauge equivariance has not been proposed", you should introduce "nonlinear" into the title and place more emphasis on its significance.
W3. Besides the performance, I didn't see too many deep insights about the benefit of nonlinear terms. For example, how to choose the non-linear activation? How the "non-linear" activation function influence the solution of non-linear PDE function. Since the difference of our method is minor, you should introduce deep insights to strength your contribution.
W4. How about adding non-linear activations in different places? I hope to see some deeper results.
W5. The performance of your method seems not good on FAUST, which was used in EMAN. Why did you skip TOSCA? Can your method be compared with [R1,R2,R3] on some standard benchmarks reported in [R1,R2,R3]?
[R1] Learning Mesh-Based Simulation with Graph Networks, ICLR 21
[R2] EAGLE- Large-scale Learning of Turbulent Fluid Dynamics with Mesh Transformers, ICLR 23
[R3] Predicting Physics in Mesh-reduced Space with Temporal Attention, ICLR 22.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank you for the detailed feedback and hope to have addressed all of your concerns.
> The most important point is the novelty.
We respectfully disagree that our work only differs from [16] by adding a non-linear activation. Our work explores non-linear message passing with gauge-equivariance on meshes, an important design decision to consider with respect to the three flavors of message passing. Previous works have highlighted the importance of nonlinear message passing for modeling complex interactions [A, B] and also for equivariant graph networks [C]. It is thus natural to explore how nonlinear message passing would benefit gauge-equivariant methods. There is also a clear conceptual difference between Hermes and simply adding a nonlinearity to GemCNN. Nonlinear message passing effectively decouples the hop distance in the graph from the number of nonlinear layers and gives practitioners the ability to choose the receptive field independently of the network depth. This makes the input graph and the computational graph different, which may be useful for certain tasks. Throughout the literature, various methods have been proposed to perform this decoupling, such as graph sparsification [D], diffusion [E], or dynamic rewiring [F]. In our work, we demonstrate that this decoupling may be crucial in modeling complex interactions and surface dynamics on meshes.
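The contrast drawn here between linear (convolutional) and nonlinear messages can be sketched in plain NumPy. This ignores Hermes' gauge-equivariant kernels entirely; the weights and graph are hypothetical, and the point is only where the nonlinearity sits relative to aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
h = rng.normal(size=(5, d))               # node features
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]  # (target, source) pairs

W = rng.normal(size=(d, d))       # linear kernel
W1 = rng.normal(size=(2 * d, d))  # MLP weights (illustrative)
W2 = rng.normal(size=(d, d))

def linear_message(h_i, h_j):
    # Convolutional flavor: the message is linear in the source feature.
    return h_j @ W

def nonlinear_message(h_i, h_j):
    # Nonlinear flavor: an MLP over (target, source) computes the
    # message, so the interaction is nonlinear *before* aggregation.
    z = np.concatenate([h_i, h_j]) @ W1
    return np.maximum(z, 0.0) @ W2  # ReLU MLP

def aggregate(message_fn):
    out = np.zeros_like(h)
    for i, j in edges:
        out[i] += message_fn(h[i], h[j])
    return out

lin_out = aggregate(linear_message)
nonlin_out = aggregate(nonlinear_message)
print(lin_out.shape, nonlin_out.shape)
```

Scaling a source feature scales the linear message proportionally, while the MLP message responds nonlinearly, which is the modeling capacity the rebuttal argues for.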
> Given that your claim that "the combination of nonlinear message passing and gauge equivariance has not been proposed", you should introduce "nonlinear" into the title
We agree and we will include the word “nonlinear” in the title of the final version. We adopted our terminology from [G] which labels the three flavors of GNN as convolutional, attentional, or message-passing. Sometimes the last is also referred to as “general message passing,” and we tried to be consistent with the literature.
> Besides the performance, I didn't see too many deep insights about the benefit of nonlinear terms.
We emphasize that the contribution of our method is not the addition of specific nonlinearity, but rather the proposal of nonlinear functions for the message and update functions to compute nonlinear interaction messages in gauge-equivariant networks. This stems from an important design decision to compute nonlinear interactions between vertices, which we demonstrate to be important in predicting complex surface dynamics. Furthermore, this also decouples one axis of architecture design (network depth) from the receptive field of the network, which is important in improving model performance without additional learnable parameters. The specific nonlinearity used is a hyperparameter in our method, dependent on the task.
>How about add non-linear activation in different places?
We point out that adding non-linear activation in different places does not change the nonlinearity of the message and update functions. The “degree” of nonlinearity depends on the network depth and so nonlinear activations should be interspersed with the convolution layers in the architecture. Thus the axis to consider is the number of layers in the message and update functions. In all datasets, we tuned the Hermes architecture so that the message and update functions consist of a different number of layers and activations.
>Why you skip TOSCA? Can your method compared with [R1,R2,R3] on some standard benchmarks reported in [R1,R2,R3].
We were unable to obtain the TOSCA dataset as the public download URL is down. We clarify that we are not chasing state-of-the-art results but propose a new architecture that combines nonlinear message passing with gauge equivariance for meshes. We demonstrate the importance of computing nonlinear messages in predicting complex surface dynamics. However, we agree that having more baselines would be beneficial. We have included MeshGraphNet [A] as a baseline and also add two non-equivariant, non mesh-aware baselines (GCN and MPNN), an E(3)-equivariant non mesh-aware baseline (EGNN), and a non-equivariant, mesh-aware method (SpiralNet++). All methods use a similar number of parameters. See Table 1 in the uploaded figures/tables page for the comparison of features of each method. Results show that Hermes outperforms other baselines on all tasks, with the exception of MeshGraphNet on the Heat dataset. Interestingly, Hermes significantly outperforms MeshGraphNet on the test mesh dataset for Wave and Cahn-Hilliard, suggesting that Hermes can more accurately learn the dynamics function without being mesh-specific.
We also add the FlagSimple dataset from [A] and the results show that Hermes outperforms MeshGraphNet. We will include these additional results in the final version.
References:
- [A] Battaglia, P., Pascanu, R., Lai, M., & Jimenez Rezende, D. (2016). Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems.
- [B] Kipf, T., Fetaya, E., Wang, K. C., Welling, M., & Zemel, R. (2018). Neural relational inference for interacting systems. In International conference on machine learning.
- [C] Brandstetter, J., Hesselink, R., van der Pol, E., Bekkers, E. J., & Welling, M. (2021). Geometric and Physical Quantities improve E (3) Equivariant Message Passing. In International Conference on Learning Representations.
- [D] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. Advances in neural information processing systems.
- [E] Gasteiger, J., Weißenberger, S., & Günnemann, S. (2019). Diffusion improves graph learning. Advances in neural information processing systems.
- [F] Gutteridge, B., Dong, X., Bronstein, M. M., & Di Giovanni, F. (2023). DRew: Dynamically Rewired Message Passing with Delay. In International Conference on Machine Learning.
- [G] Bronstein, M. M., Bruna, J., Cohen, T., & Veličković, P. (2021). Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: Thanks for your response. First, can you please summarize your technical contribution regarding the model and make a detailed comparison with related works? To me, decoupling the hop distance is still somewhat trivial in the graph community. Second, nonlinear functions for the message and update functions date back to [1], and a range of graph methods work on this, which is usually considered an implementation detail in some recent works. Third, since NeurIPS is a top venue in machine learning, I suggest doing more comparisons with more state-of-the-art methods.
[1] Simplifying Graph Convolutional Networks, ICML 19.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. Please see our response below.
> can you please summarize your technical contribution about the model and make a detailed comparison with the related works?
Our main contribution is proposing a novel method that combines gauge-equivariance and nonlinear message-passing. While both convolutional and attentional gauge-equivariant architectures have been introduced (GemCNN and EMAN), such methods can only linearly approximate the interactions between node neighbors, i.e. linear message passing. Hermes generalizes these to pass nonlinear messages. Modifying GemCNN and EMAN to pass nonlinear messages requires a more conceptual change than simply adding a nonlinearity after the layer: one needs to use nonlinear function approximators such as neural networks for the node and edge networks within the layer. Nonlinear message passing has been shown to be essential in object-relational reasoning and physics environments [A, B, C] and this is a central contribution of these papers. We specifically show how nonlinear gauge-equivariant message passing is more beneficial in modeling complex surface dynamics over convolutional and attentional versions. Additionally, we note that making message passing networks gauge-equivariant is non-trivial and a novel contribution. The objective of our paper is not to create a new architecture to be the state-of-the-art in standard benchmarks, but rather to extend the design space of neural networks for meshes (particularly with respect to equivariant networks) and to add insights about in which scenarios nonlinear message passing should be used.
References:
- [A] Battaglia, P., Pascanu, R., Lai, M., & Jimenez Rezende, D. (2016). Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems.
- [B] Kipf, T., Fetaya, E., Wang, K. C., Welling, M., & Zemel, R. (2018). Neural relational inference for interacting systems. In International conference on machine learning.
- [C] Brandstetter, J., Hesselink, R., van der Pol, E., Bekkers, E. J., & Welling, M. (2021). Geometric and Physical Quantities improve E (3) Equivariant Message Passing. In International Conference on Learning Representations.
> nonlinear functions for the message and update functions can date from [1], and a range of graph methods work on this, which is usually considered implementation details in some recent works
If we understand correctly, the reference [1] removes nonlinearities from GCN layers and combine the weights into a single linear layer called SGC. A key difference between the topic of [1] and our paper is that [1] removes nonlinearities from GCN layers where the aggregation step is performed directly after the convolution and before the nonlinearity. Thus both GCN layers and their proposed SGC use only linear messages. In our paper, we specifically use nonlinear functions before the aggregation step, leading to nonlinear message passing.
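The order-of-operations point made here is easy to demonstrate: with linear messages (as in GCN/SGC), the nonlinearity sits after aggregation, whereas nonlinear message passing applies it per message before aggregation, and the two are genuinely different operators. An illustrative NumPy check (not SGC's or Hermes' actual code; weights are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
neighbors = rng.normal(size=(6, 3))  # features of 6 neighbor nodes
W = rng.normal(size=(3, 3))

def relu(x):
    return np.maximum(x, 0.0)

# GCN/SGC-style linear messages: aggregate first, nonlinearity after.
agg_then_relu = relu(neighbors.sum(axis=0) @ W)

# Nonlinear message passing: nonlinearity per message, *then* aggregate.
relu_then_agg = relu(neighbors @ W).sum(axis=0)

# ReLU does not commute with summation, so these generally differ.
print(np.allclose(agg_then_relu, relu_then_agg))
```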
> I suggest to do more comparison with more state-of-the-arts.
We have added several new baselines including a state-of-the-art (SOTA) mesh method (MeshGraphNet), a popular and standard mesh method (SpiralNet++), and also a SOTA E(3)-equivariant method (EGNN). We aimed to compare against baselines along several different design axes, to assess how helpful gauge equivariance or nonlinear message passing is with respect to modeling dynamics. We note that our goal was not to introduce a new SOTA method but rather to investigate the three flavors of message passing and when nonlinear message passing works better.
---
Summary: The authors describe a message-passing mesh-based gauge equivariant architecture. The architecture is described as the natural follow-up from previous equivariant architectures. In short, an edge network aggregates information between source nodes, target nodes, and edge features. Then, these "messages" are aggregated with a gauge equivariant convolution to the target node, accounting also for self-interactions.
Such an architecture is tested on four applications: shape correspondences, object interactions, and PDEs on meshes, namely the heat equation and the Cahn-Hilliard equation. The authors compare it with other message-passing architectures, demonstrating the soundness and effectiveness of the technique.
Strengths: The described architecture is the natural follow-up from previous works (as the authors describe it). Its description and illustration are well crafted, and the text is clear and well-structured. The experiments are properly designed to show the benefits of this approach with respect to previous methods, hence showing numerical and qualitative improvements. Furthermore, the authors address important concerns such as mesh fineness and roughness.
Weaknesses: The main weakness of the method is the requirement of a reasonable topology, which cannot be overcome. This may severely limit the application of these types of methods.
Other than the above, as stated by the authors, the method is relatively slow (or slower) compared to baselines (e.g., GemCNN).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Overall I am quite happy with the paper, there are a few concerns I wish the authors would address:
- has this method been tested on incredibly large meshes (>1M vertices)?
- I would be interested to know what are the performance be wrt a method like MeshCNN in applications like mesh classification? both in terms of time and accuracy. (if the authors know this it would be great)
- based on the visual results, it looks like the long-term roll out is far from GT, do the authors have any idea on how to fix this?
- would alternating different message passing layers (conv->hermes->...) help in terms of speed up without excessively harming the performances?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I do not see any negative societal impact.
I would recommend the authors add in the next submission (if any) other tasks to the paper such as mesh segmentation and classification and compare with know baselines. This should show the strength of the method. It would also be interesting to assess the robustness of the method with corrupted topologies, this could be also an interesting point for possible follow-up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed comments and positive feedback.
> The main weakness of the method is the requirement of a reasonable topology ... the method is relatively slow (or slower) compared to baselines (eg GemCNN).
We respectfully argue that meshes can approximate any Riemannian manifold, which covers many possible surfaces and objects encountered in the real world. Furthermore, most objects can be approximated as manifold meshes where there are no singularities or unreasonable boundary conditions. Many previous mesh-based methods assume manifold meshes [A, B, C], including all 3 gauge-equivariant methods (GemCNN, EMAN, Hermes). We feel that the assumption of a manifold mesh is not a severe limitation. To handle unreasonable topologies or non-manifold meshes, one could perform a classic meshing technique [D] to make the mesh a manifold. It may also be possible to modify Hermes to handle such meshes by not using gauge equivariance on the problematic vertices and edges.
Regarding the computation time, we recorded the forward computation time of each model during inference on the test time dataset (Table 3 of the tables page). We find that Hermes actually has lower computation time in the forward pass than GemCNN, due to the fact that it performs a smaller number of aggregations than GemCNN and with a similar number of parameters. We will adjust the discussion to reflect this.
> has this method been tested on incredibly large meshes (>1M vertices)?
We restricted our experiments to meshes with up to 4670 vertices. As meshes inherently contain more data than other 3D representations such as point clouds or voxels, they require more memory to process. Though we simplify this and process meshes as graphs, even graph neural networks cannot scale to large graphs due to memory constraints [E, F]. We are not aware of any message passing graph networks that can scale up to 1M vertices on a single GPU and scaling to such large meshes would likely require significant distributed training and/or multi-scale methods with graph subsampling. It would be a good avenue to explore to apply such techniques to Hermes.
> ... the performance be wrt a method like MeshCNN in applications like mesh classification? both in terms of time and accuracy.
As a nonlinear message passing and gauge-equivariant method, Hermes is more suited to predicting complicated interactions and irregular, rough meshes, which is supported by the performance increase on surface PDE datasets. In mesh classification, it seems that it is more important to recognize local shapes and surface curvatures and we hypothesize that interactions between neighboring vertices would likely be close to linear. Methods such as MeshCNN would likely perform better in these scenarios. Regarding computation time, all three flavors of gauge-equivariant methods require more computation time than regular graph neural networks. As MeshCNN effectively performs a convolution over the mesh faces, it would likely be more computationally efficient than either GemCNN or Hermes, which perform convolutions over edges.
> ... it looks like the long-term roll out is far from GT
We would argue that the long-term rollouts for Hermes do not look that far from GT, as the local patches seem to match GT in size and sign, though the magnitudes may differ slightly. One possible way to improve performance is to use normalization for the node and edge input features or to predict differentials (e.g. $x_{t+1} - x_t$) as targets as done in [3]. Another way could be to use multi-step predictions in the training loss. We note that these techniques are general and not specific to our method, and performance may depend on the task.
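The differential-target idea mentioned above can be sketched generically (a toy 1-D system with a hypothetical linear model, not the Hermes training code): the model is fit on increments rather than next states, and rollout adds the predicted increment back each step.

```python
import numpy as np

# Toy system x_{t+1} = A * x_t; train on increments x_{t+1} - x_t.
rng = np.random.default_rng(0)
A = 0.95

xs = [np.array([1.0])]
for _ in range(50):
    xs.append(A * xs[-1])
xs = np.stack(xs)  # ground-truth trajectory, shape (51, 1)

inputs = xs[:-1]
deltas = xs[1:] - xs[:-1]  # differential targets
# Least-squares fit delta = w * x; w should recover A - 1 = -0.05.
w = np.linalg.lstsq(inputs, deltas, rcond=None)[0].item()

# Rollout: repeatedly add the predicted increment.
x = xs[0].copy()
for _ in range(50):
    x = x + w * x
print(w, x[0])  # w ~ -0.05; rollout matches the ground-truth endpoint
```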
>would alternating different message passing layers (conv->hermes->...) help in terms of speed up without excessively harming the performances?
It’s an interesting idea! It would combine the parameter/computational efficiency of convolution with the ability to compute nonlinear messages. One could also use the gauge-equivariant convolution from GemCNN to preserve gauge equivariance for the entire model.
>... other tasks to the paper such as mesh segmentation and classification and compare with know baselines ... assess the robustness of the method with corrupted topologies.
As mentioned above, mesh segmentation and classification tasks are likely not well suited to our method; the node correspondence results on FAUST seem to support this. We do agree that adding more baselines and datasets would be useful, and so we include several more baselines (GCN, MPNN, MeshGraphNet, EGNN, SpiralNet++).
We agree that experimenting with different corrupted topologies would strengthen the paper. One could also consider non-manifold meshes with open geometries, internal faces, or non-manifold edges or vertices; we previously mentioned some possible techniques for processing such meshes.
References
- [A] Hanocka, R., et al. (2019). Meshcnn: a network with an edge. ACM Transactions on Graphics.
- [B] Pfaff, T., et al. (2020). Learning Mesh-Based Simulation with Graph Networks. In International Conference on Learning Representations.
- [C] De Haan, P., et al. 2020. “Gauge Equivariant Mesh CNNs: Anisotropic Convolutions on Geometric Graphs.”
- [D] Shimada, K., & Gossard, D. C. (1995). Bubble mesh: automated triangular meshing of non-manifold geometry by sphere packing. In Proceedings of the third ACM symposium on Solid modeling and applications.
- [E] Wu, Z., et al. (2020). A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems.
- [F] Duan, K., et al. (2022). A comprehensive study on large-scale graph training: Benchmarking and rethinking. Advances in Neural Information Processing Systems.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to respond in depth to my concerns and for the additional evaluation.
### Topology
I agree meshes can approximate any surface; my point was more related to the manifoldness and self-intersection of the mesh itself. Your response addressed my concerns.
### 1M vertices
I understand handling such large meshes is difficult; I was wondering whether that is the case for this method. Thank you for your response.
### Mesh classification
Interesting. I understand that the inductive bias required for mesh classification is different from that for PDEs, and I wonder whether the authors have any insight/intuition on how to change the current pipeline to adapt it to such tasks? Perhaps with informed vertex clustering to massively reduce the mesh resolution?
### Roll out
I disagree; based on visual analysis, the results are close on average but quite different in the details. The residual approach would probably further improve the result. Thank you for the response.
---
Reply to Comment 1.1.1:
Comment: For a global-level task such as mesh classification (compared to node classification), modeling long-range interactions between local shape structures on the object is likely key. Informed vertex clustering such that each cluster represents local shapes would be a great approach. One could perform the clustering in a non-learned manner using additional information about the object structure, such as its topology or other global features. One could also use a learned approach by using multi-scale or hierarchical methods. At coarse scales, different local basic shapes on the object that are far away from each other could interact and pass messages, while the finer features within the local shapes could be handled at higher resolutions. We note that this idea has been proven successful for the analogous task of graph classification [A, B, C].
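The clustering step described above could be sketched as a single hard-assignment pooling operation, in the spirit of the hierarchical pooling methods cited below. This is a minimal NumPy illustration with a dense adjacency matrix; the function name `cluster_pool` and the one-hot assignment format are our own assumptions, not the cited implementations.

```python
import numpy as np

def cluster_pool(x, adj, assign):
    """One coarsening step: pool node features and adjacency with a hard
    cluster assignment matrix S of shape (N, K) (one-hot rows).
    x: (N, F) node features, adj: (N, N) adjacency.
    Returns summed cluster features (K, F) and inter-cluster
    connectivity (K, K)."""
    x_coarse = assign.T @ x               # (K, F) summed cluster features
    adj_coarse = assign.T @ adj @ assign  # (K, K) edge counts between clusters
    return x_coarse, adj_coarse
```

Applying this repeatedly yields the coarse scales at which distant local shapes can exchange messages, while fine scales keep the original resolution.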
References:
- [A] Ying, Z., You, J., Morris, C., Ren, X., Hamilton, W., & Leskovec, J. (2018). Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31.
- [B] Lee, J., Lee, I., & Kang, J. (2019, May). Self-attention graph pooling. In International conference on machine learning (pp. 3734-3743). PMLR.
- [C] Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., & Stoica, I. (2021). Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34, 13266-13279. | Summary: The authors propose a gauge equivariant method for simulating PDEs on the surface of meshes. Different from the convolutional and attentional prior works, the authors use non-linear message passing with gauge equivariant layers. They compare to the prior works in the FAUST shape classification and the simulation of three PDEs. They find that the convolutional method is best for the shape classification, while on the PDEs, their method achieves the best results.
Strengths: - It's great to see the code included
- I agree with the authors that it's good to have non-linear message passing as an additional gauge-equivariant method.
- The paper is clearly written.
Weaknesses: - The authors should include a reference to [1], which simulates fluid dynamics with gauge equivariant methods.
- It appears that the PDE experiments merely use scalar features without information on geometry. This seems insufficient. Both [1] and [2] have suggestions for gauge equivariant geometric input features.
- The paper would benefit from an evaluation on more problems, as well as a comparison to pointcloud based methods (as done in [1]).
----------
The weaknesses have been sufficiently addressed by the rebuttal. I increase my score.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In [3], it is noted in a footnote that the kernel could be made dependent on the radius, but that the authors found it not beneficial. The method proposed here seems to also have a kernel insensitive to distance. Have the authors verified whether this is still the best choice?
- Could the authors clarify in their manuscript that the edge feature $e_{pq}$ is a feature situated on the fiber at $p$ ?
- Could the authors add a discussion of the computational cost of the various methods in their paper?
- The regular non-linearity used in [3] is only approximately equivariant. Could the authors clarify that in their paper?
References:
- [1] Suk, Julian, Pim de Haan, Phillip Lippe, Christoph Brune, and Jelmer M. Wolterink. 2022. “Mesh Neural Networks for SE(3)-Equivariant Hemodynamics Estimation on the Artery Wall.”
- [2] Basu, Sourya, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, and Taco Cohen. 2022. “Equivariant Mesh Attention Networks.”
- [3] De Haan, P., M. Weiler, T. Cohen, and M. Welling. 2020. “Gauge Equivariant Mesh CNNs: Anisotropic Convolutions on Geometric Graphs.”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors fairly reflect the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and thoughtful review. Please see our response below.
>The authors should include a reference to [1], which simulates fluid dynamics with gauge equivariant methods.
Thank you for pointing out this paper. It is indeed relevant and we will include it in the final version.
>It appears like the PDE experiments merely using the scalar features without information on geometry. This seems insufficient. Both [1] and [2] have suggestions for gauge equivariant geometric input features.
We tried using the relative tangent features proposed in [2] and observed slightly worse performance. We note that although the input/output features are scalars in the PDE datasets, the intermediate latent features are not scalars and consider irreducible representations up to order 2. In both [1] and [2], equivariant input features are constructed using either the vertex normals and/or the relative distance vector or their projections onto the tangent plane at the source node. These features are already incorporated into our model: vertex normals are used to achieve gauge equivariance and positions are used as inputs. Thus the proposed equivariant geometric input features do not necessarily convey additional information, and an expressive model should be able to learn such equivariant features from data. This is rather a question of feature engineering, and we agree that incorporating these features may make it easier for the model to learn the task. However, we choose the simpler option and let the model learn such features on its own, without withholding any additional information.
>The paper would benefit from an evaluation on more problems, as well as a comparison to pointcloud based methods (as done in [1]).
We agree that more evaluation would be beneficial and add the FlagSimple dataset from [A] where the task is to predict the positions of a flag blowing in the wind. We do not perform normalization or noisy training for simplicity. On this dataset, Hermes outperforms MeshGraphNet on the test dataset for this task (see Table 4 of uploaded page).
Additionally, we experiment with more baselines. Specifically we consider two non-equivariant, non mesh-aware baselines (GCN and MPNN), a SOTA non-equivariant, mesh-aware method (MeshGraphNet [A]), an E(3)-equivariant non mesh-aware baseline (EGNN [B]), and a non-equivariant, mesh-aware method (SpiralNet++ [C]). EGNN can be considered a point cloud based method as it can infer edges. See Table 1 in the uploaded tables page for the comparison of features of each method and Table 2 for the results. Hermes outperforms baselines in most settings and does slightly worse than MeshGraphNets on Heat. We will include these results in the final version.
>In [3], it is noted in a footnote that the kernel could be made dependent on the radius, but that the authors found it not beneficial. The method proposed here seems to also have a kernel insensitive to distance. Have the authors verified whether this is still the best choice?
Yes, the kernel used here is also independent of the radius. As the datasets contain generally homogeneous graphs with similar edge distances, we surmise that having a radius-dependent kernel would not change much and do not include it for computational efficiency. As our method easily accommodates edge features, we can make the messages radius dependent by adding the relative position vector and its distance as an edge feature. We tried this, but the performance was similar. For the FlagSimple dataset, we do use the edge features as described in [A].
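The radius-dependent edge-feature construction mentioned above could be sketched as follows. This is an illustrative NumPy version (the helper name `radius_edge_features` is ours, not from our codebase): for each directed edge, concatenate the relative position vector and its norm into one feature.

```python
import numpy as np

def radius_edge_features(pos, edges):
    """For each directed edge (p, q), concatenate the relative position
    vector pos[q] - pos[p] and its Euclidean norm into one edge feature,
    making downstream messages radius-dependent.
    pos: (N, 3) vertex positions, edges: (E, 2) index pairs."""
    src, dst = edges[:, 0], edges[:, 1]
    rel = pos[dst] - pos[src]                           # (E, 3)
    dist = np.linalg.norm(rel, axis=-1, keepdims=True)  # (E, 1)
    return np.concatenate([rel, dist], axis=-1)         # (E, 4)
```

Feeding such features into the message function makes the kernel sensitive to edge length without changing the rest of the architecture.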
>Could the authors clarify in their manuscript that the edge feature e_pq is a feature situated on the fiber at p?
Yes, we will clarify this point in the manuscript. We do this because edge features are often vectors relative to p and/or the edge distance which is not necessarily tied to the geometry of the edge. One could conceivably use the fiber at q or use the midpoint between p and q and parallel transport the edge features accordingly, but we choose the simpler option. We note that we did not see much performance improvement by adding edge features to the PDE datasets.
>Could the authors add a discussion of the computational cost of the various method in their paper?
We include the forward computation time during inference on the test time dataset in Table 4 of the additional page. Surprisingly, we find that Hermes is slightly faster computation-wise than GemCNN as we generally use a small number of message passing layers and use a similar number parameters. As expected, all 3 gauge-equivariant methods are significantly more computationally expensive than standard graph neural networks.
>The regular non-linearity used in [3] is only approximately equivariant. Could the authors clarify that in their paper?
We will clarify in Section 3 and in Proposition 1 that the regular nonlinearity is only equivariant in the limit as the number of samples goes to infinity. In our experiments, we used an increased number of intermediate samples (101 vs. 7) in each regular nonlinearity compared to [3], leading to an equivariance error under random gauge transformations of approximately $10^{-5}$ for the entire model.
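A toy version of such an equivariance check might look like the following NumPy sketch. It measures how much a map on per-node 2D vector features deviates from commuting with random per-node rotations (a simplified stand-in for gauge transformations); `gauge_equivariance_error` is a hypothetical name and this is far simpler than the check on the full model.

```python
import numpy as np

def gauge_equivariance_error(model, feats, n_trials=10, seed=0):
    """Estimate equivariance error: apply random per-node 2D rotations R
    to vector features, run the model, and compare against rotating the
    model's output by the same R. `model` maps (N, 2) -> (N, 2)."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        theta = rng.uniform(0.0, 2.0 * np.pi, size=len(feats))
        c, s = np.cos(theta), np.sin(theta)
        # Per-node rotation matrices R_n = [[c, -s], [s, c]], shape (N, 2, 2)
        R = np.stack([np.stack([c, -s], axis=-1),
                      np.stack([s, c], axis=-1)], axis=-2)
        rot_after = np.einsum('nij,nj->ni', R, model(feats))
        rot_before = model(np.einsum('nij,nj->ni', R, feats))
        errs.append(np.abs(rot_after - rot_before).mean())
    return float(np.mean(errs))
```

An exactly equivariant map yields an error at floating-point precision, while a non-equivariant map (e.g., an elementwise square) yields a clearly nonzero error.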
References:
- [A] Pfaff, T., Fortunato, M., Sanchez-Gonzalez, A., & Battaglia, P. (2020, October). Learning Mesh-Based Simulation with Graph Networks. In International Conference on Learning Representations.
- [B] Satorras, V. G., Hoogeboom, E., & Welling, M. (2021, July). E (n) equivariant graph neural networks. In International conference on machine learning (pp. 9323-9332). PMLR.
- [C] Gong, S., Chen, L., Bronstein, M., & Zafeiriou, S. (2019). Spiralnet++: A fast and highly efficient mesh convolution operator. In Proceedings of the IEEE/CVF international conference on computer vision workshops (pp. 0-0).
---
Rebuttal Comment 1.1:
Title: Better experiments, increased score
Comment: I thank the authors for their rebuttal. With their additional experiments, I will raise my score.
There's one thing I'd like to clarify though.
> Thus the proposed equivariant geometric input features do not necessarily convey additional information and an expressive model should be able to learn such equivariant features from data.
I disagree. For example, a cylinder is a flat manifold. Only the global topology indicates it is different from the plane, and global topology might be hard for a convolution to detect. In general, the local surface geometry (intrinsic curvature) - which is all that a gauge equivariant method sees - might not inform how this manifold is embedded in $\mathbb R^3$ (extrinsic curvature). Similarly, in the 1D case, any curve has a flat 1D geometry, thus a 1D gauge equivariant method cannot detect whether the line is embedded in the ambient space flat or curved. Additional features can inform the gauge equivariant method about the embedding and address this.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising their score and we are pleased to have answered most questions.
It’s a good point that the proposed equivariant input features in both [A, B], which use either signals based on the relative position/distance between neighbors or vertex normals, may convey extrinsic information related to the global embedding and thus may be useful for certain tasks. While these features are local, they convey information regarding the embedding that may make it easier to understand the global topology.
In order to provide more direct global topological information, another easy way would be to include the absolute node positions as inputs, as done in the FAUST experiments in GemCNN and also in our experiments on FAUST and Objects. (This does sacrifice E(3)-equivariance, however.) Another way would be to "rewire" the mesh [C] or to create edges (and faces) between nodes that are close in the embedded space [D].
In our experiments, the PDEs are inherently based on intrinsic local geometry and so extrinsic global information may not be as important. Indeed when we added absolute positions as inputs for the PDE datasets, we did not see much improvement in the results, likely because local dynamics dominate in these PDEs.
References:
- [A] Suk, Julian, Pim de Haan, Phillip Lippe, Christoph Brune, and Jelmer M. Wolterink. 2022. “Mesh Neural Networks for SE(3)-Equivariant Hemodynamics Estimation on the Artery Wall.”
- [B] Basu, Sourya, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, and Taco Cohen. 2022. “Equivariant Mesh Attention Networks.”
- [C] Gutteridge, B., Dong, X., Bronstein, M. M., & Di Giovanni, F. (2023). DRew: Dynamically Rewired Message Passing with Delay. In International Conference on Machine Learning.
- [D] Pfaff, T., et al. (2020). Learning Mesh-Based Simulation with Graph Networks. In International Conference on Learning Representations. | Summary: This paper aims at solving complex partial differential equations on surfaces. Given the fact that most existing work neither incorporate surface geometry nor consider local gauge symmetries of the manifolds, this paper proposes a novel gauge equivariant network, known as Hermes, that can achieve higher performance than existing convolutional or attentional networks in certain cases. In addition, authors investigate in which cases their method has advantages over other methods.
Strengths: - Propose a new gauge equivariant network, Hermes, for learning signal on meshes.
- Hermes outperforms both convolutional and attentional architectures on complex and nonlinear dynamics such as surface PDEs.
- Authors investigate in which situations nonlinear message passing should be preferred over convolutional or attentional counterparts.
Weaknesses: - It is not clear what is the relationship between Hermes and previous methods that use graph networks to perform mesh-based simulation, such as [1]. Could authors elaborate on the differences with the method in [1]? And I would like to see the performance comparison between Hermes and [1] in some scenarios, for example, FlagDynamic and CylinderFlow in [1].
- In addition, Geo-FNO [2] can also perform PDE learning on irregular geometries. What are the advantages of Hermes over Geo-FNO in terms of accuracy in solving PDEs on irregular domains?
[1] Pfaff, Tobias, et al. "Learning mesh-based simulation with graph networks." arXiv preprint arXiv:2010.03409 (2020).
[2] Li, Zongyi, et al. "Fourier neural operator with learned deformations for pdes on general geometries." arXiv preprint arXiv:2207.05209 (2022).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - See the weakness part. I would like to raise the score if the authors can well address my concerns.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations of this paper are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful references and hope to have addressed all of your questions.
>It is not clear what is the relationship between Hermes and previous methods that use graph networks to perform mesh-based simulation, such as [1]. Could authors elaborate on the differences with the method in [1]? And I would like to see the performance comparison between Hermes and [1] in some scenarios, for example, FlagDynamic and CylinderFlow in [1].
We did not initially compare to GNN approaches such as [1] since our primary focus is on determining when nonlinear message passing would be beneficial over simpler linear message passing schemes for gauge-equivariant networks and how decoupling the network depth from the receptive field of the neurons improves expressivity. The main difference between Hermes and MeshGraphNet is that Hermes learns directly on the two-dimensional mesh (mesh space) and does not depend on the embedding to 3D space (world space). It performs convolutions in a way that preserves local gauge symmetries and thus incorporates the geometry of the mesh intrinsically. On the other hand, MeshGraphNet uses both node positions in mesh space and world space as input node features and additionally creates edges between nodes that are close in world space (see Figure 3 of [1]). It also normalizes the input node and edge features and encodes them before performing several iterations of message passing.
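MeshGraphNet's world-space edge creation mentioned above could be sketched roughly as follows; this is an illustrative O(N^2) NumPy version under our own naming, not the actual implementation from [1].

```python
import numpy as np

def world_space_edges(pos, radius):
    """Sketch of MeshGraphNet-style world-space edge creation: connect
    every pair of nodes whose embedded (world-space) positions lie
    within `radius` of each other, excluding self-loops.
    pos: (N, 3) node positions. Returns (E, 2) directed edge pairs."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    src, dst = np.nonzero((dist < radius) & (dist > 0.0))
    return np.stack([src, dst], axis=1)
```

Such edges let physically close but topologically distant regions (e.g., a folding cloth touching itself) exchange messages, which the intrinsic mesh connectivity alone would not allow.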
We agree that having more baselines to compare against is useful. We have included MeshGraphNet [1] as a baseline for the PDE datasets. We scaled the hidden dimension and the number of message passing iterations so that the number of parameters is roughly equal to Hermes and kept the rest of the hyperparameters the same as in [1]. We do not use node/edge normalization to keep the RMSE values comparable. We also add two non-equivariant, non mesh-aware baselines (GCN and MPNN), an E(3)-equivariant, non mesh-aware baseline (EGNN), and a non-equivariant, mesh-aware method (SpiralNet++). See Table 1 in the uploaded figures/tables page for a comparison of the features of each method. We note that, due to time constraints, we were only able to conduct a coarse hyperparameter search for MeshGraphNet and mostly used the hyperparameters reported in [1].
Table 2 in the uploaded tables page shows that MeshGraphNet outperforms Hermes on Heat, does worse on Wave, and performs similarly or slightly worse on Cahn-Hilliard. Interestingly, we point out that MeshGraphNet does noticeably worse than Hermes on the test mesh dataset, suggesting that Hermes more accurately learns the dynamics function itself rather than just fitting the specific trajectories seen during training. This property is important, as one may use different discretizations and scales of the mesh when the underlying dynamics are the same.
Since our method assumes triangular static meshes, we could not perform experiments on FlagDynamic and CylinderFlow in [1] and instead include FlagSimple. Table 3 shows that MeshGraphNet performs slightly better than Hermes.
We will include these additional results in the final version.
> In addition, Geo-FNO [2] can also perform PDE learning on irregular geometries, what is the advantages of Hermes over Geo-FNO in terms of accuracy in solving PDE on irregular domains.
Thank you for the reference. The paper [2] indeed tackles similar problems. Geo-FNO assumes that there exists a diffeomorphic deformation between the embedding/input space and the computational mesh; it learns this deformation and then learns a Fourier neural operator in the computational space. A key difference between our method and Geo-FNO is that Hermes does not depend on the embedding space of the mesh (e.g. embedding a rough 2D mesh in 3D) and works directly on the intrinsic mesh surface. Geo-FNO mostly evaluates on flat 2D disks (with or without holes), where vertex normals would all be parallel and thus local symmetry would not be beneficial. We hypothesize that, due to this difference, Hermes would outperform Geo-FNO on more irregular curvatures and rougher surfaces. Furthermore, we note that the mesh sizes considered there are < 2k vertices (we consider meshes with up to 4670 vertices). It would be interesting to test Hermes on different topologies such as the torus in future work. We will include this discussion in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for taking the effort to conduct additional experiments. Most of my concerns have been addressed so I will increase the score to 5. And I highly recommend adding an experimental comparison with Geo-FNO in the camera-ready version.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising their score and are glad that we have answered most of their concerns. We will definitely consider and look into adding Geo-FNO in the final version. | Rebuttal 1:
Rebuttal: ## Summary of Response
We thank the reviewers for the detailed feedback and constructive suggestions and hope that we have addressed all concerns. The reviewers all appreciate the importance of the problem and the clear benefit of combining nonlinear message passing with a gauge-equivariant method for predicting complex dynamics. The main concern raised by reviewers seems to be more comparisons with standard baselines and on more datasets, and also an evaluation of computation time amongst the gauge equivariant methods.
We have added several new baselines with different features and also a new dataset for a stronger evaluation. We display the main results here and have also added all results in the additional figures/tables page.
Summary of additions:
1. Added several new baselines: non-equivariant GNN variants (GCN and MPNN), a SOTA message passing, mesh-aware method (MeshGraphNet), an equivariant, non mesh-aware message passing network (EGNN), a non-equivariant, mesh-aware baseline (SpiralNet++). Results are shown in Table 2 of the uploaded figures/tables page.
2. Included a detailed comparison of features of each method in Table 1 of the uploaded figures/tables page.
3. Added a new FlagSimple dataset to compare Hermes and MeshGraphNet (Table 2 of uploaded page).
4. Recorded the average computation time during inference for each method in Table 3 and discussed the difference in computation between GemCNN, Hermes, and non gauge-equivariant methods.
5. Performed an ablation study on residual connections in Hermes.
Additional results with new baselines and new dataset:
| | | Hermes | GCN | MPNN | MeshGraphNet | EGNN | SpiralNet++
|---|---|---|---|---|---|---|---|
|Heat | Test time ($\times 10^{-3}$) | $1.18 \scriptstyle \pm 0.3$ | $152 \scriptstyle \pm 1.2$ | $2.66 \scriptstyle \pm 0.8$|$\textbf{0.93} \scriptstyle \pm 0.2$ | $3.09 \scriptstyle \pm 1.2$ | $2.82 \scriptstyle \pm 0.2$
| | Test init ($\times 10^{-3}$) | $1.16 \scriptstyle \pm 0.3$ | $152 \scriptstyle \pm 0.9$ | $2.63 \scriptstyle \pm 0.8$|$\textbf{0.93} \scriptstyle \pm 0.2$ | $3.07 \scriptstyle \pm 1.2$ | $6.44 \scriptstyle \pm 0.1$
| | Test mesh ($\times 10^{-3}$) |$\textbf{1.01} \scriptstyle \pm 0.3$ | $127 \scriptstyle \pm 2.2$ | $2.36 \scriptstyle \pm 0.7$|$2.41 \scriptstyle \pm 1.1$ | $8.96 \scriptstyle \pm 5.0$ | $22.0 \scriptstyle \pm 0.2$
|Wave | Test time ($\times 10^{-3}$) | $\textbf{5.43} \scriptstyle \pm 0.8$ | $162 \scriptstyle \pm 5.0$ | $9.07 \scriptstyle \pm 1.2$|$6.26 \scriptstyle \pm 0.9$ | $45.9 \scriptstyle \pm 6.1$ | $8.88 \scriptstyle \pm 1.2$
| | Test init ($\times 10^{-3}$) | $\textbf{3.72} \scriptstyle \pm 1.3$ | $158 \scriptstyle \pm 5.9$ | $5.24 \scriptstyle \pm 1.1$|$4.24 \scriptstyle \pm 0.6$ | $12.1 \scriptstyle \pm 3.5$ | $8.47 \scriptstyle \pm 0.6$
| | Test mesh ($\times 10^{-3}$) | $\textbf{3.79} \scriptstyle \pm 1.3$ | $164 \scriptstyle \pm 5.1$ | $6.29 \scriptstyle \pm 1.3$ | $7.01 \scriptstyle \pm 1.9$ | $54.5 \scriptstyle \pm 18$ | $10.8 \scriptstyle \pm 0.8$
|Cahn-Hilliard | Test time ($\times 10^{-3}$) | $\textbf{4.23} \scriptstyle \pm 0.9$ | $250 \scriptstyle \pm 7.6$ | $7.25 \scriptstyle \pm 3.1$|$\textbf{4.49} \scriptstyle \pm 0.6$ | $8.36 \scriptstyle \pm 1.4$ | $11.6 \scriptstyle \pm 3.3$
| | Test init ($\times 10^{-3}$) | $5.21 \scriptstyle \pm 1.2$ | $383 \scriptstyle \pm 6.0$ | $7.52 \scriptstyle \pm 3.0$|$\textbf{4.64} \scriptstyle \pm 0.6$ | $10.6 \scriptstyle \pm 1.1$ | $12.9 \scriptstyle \pm 2.8$
| | Test mesh ($\times 10^{-3}$) | $\textbf{5.34} \scriptstyle \pm 0.9$ | $391 \scriptstyle \pm 8.6$ | $7.63 \scriptstyle \pm 3.0$|$18.7 \scriptstyle \pm 7.2$ | $9.38 \scriptstyle \pm 1.7$ | $13.4 \scriptstyle \pm 2.5$
|Flag | Test ($\times 10^{-3}$) | $\textbf{5.87} \scriptstyle \pm 0.0$| - | - |$9.01 \scriptstyle \pm 0.1$ | - | - |
Average forward computation time during inference of each considered method on the test time dataset.
| | GemCNN | EMAN | Hermes | GCN | MPNN | MeshGraphNet | EGNN | SpiralNet++
|---|---|---|---|---|---|---|---|---|
|Heat (s) | 0.0124 | 0.0195 | 0.0108 | 0.0015 | 0.0012 | 0.0022 | 0.0014 | 0.0009 |
|Wave (s) | 0.0125 | 0.0177 | 0.0105 | 0.0014 | 0.0013 | 0.0022 | 0.0013 | 0.0009 |
|Cahn-Hilliard (s) | 0.0069 | 0.0092 | 0.0062 | 0.0019 | 0.0015 | 0.0021 | 0.0016 | 0.0013 |
Pdf: /pdf/6033569b74f37a0e7cdb8afe490716ead8fd0d12.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a novel graph neural network architecture for gauge-equivariant learning on meshes. Compared to prior work using convolutional or attentional methods, this paper shows a nonlinear message passing method for their gauge-equivariant graph neural network. A comparison with two baselines, GemCNN and EMAN, shows that the novel method, Hermes, outperforms baselines in a diverse set of example applications.
Strengths: Great introduction to the problem setting, background well-explained step-by-step, making it very easy for the reader to follow. Example applications are clear, and always have arguments for why they are relevant problems in practice. Ablation studies also provide a full overview of the proposed method. A well-varied set of problems was used to show the performance increase of the proposed method.
Weaknesses: 1. In Line 143 the authors noted the addition of a residual connection to the HermesBlocks, for more expressivity. Was this perhaps tested in an ablation study? Did the model perform noticeably better?
2. In Figure 3 the claim is that Hermes is accurate for all datasets, even though in the Wave example Hermes is diverging faster than EMAN.
3. Line 312 mentions there is a runtime increase and parameter increase involved with Hermes, this would be good to add to the main paper to understand the drawbacks of the method, as well as scalability. Although somewhat high-dimensional meshes (170k vertices) have already been tried out in this paper, it is unclear how much larger the authors are considering when mentioning the limitation in scalability in Line 314.
4. The paper concludes with an insightful remark, summarizing the findings from the paper. Ideally future work can be described as well, seeing what direction is most promising from the authors' opinion.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. For Table 1 the reported values of RMSE feel less interpretable than perhaps relative errors, since it might not be clear if all the datasets are normalized for the PDEs. Would relative errors make sense in this context?
2. What is the intuitive reason that all perform so poorly for the wave equation in rollout testing?
3. As far as I understood, both GemCNN and EMAN in this paper have the gauge-equivariant adjustments, correct? Would it make sense to have a baseline that is not gauge-equivariant?
4. In Line 212 the authors mention how EMAN has far more parameters and constraining it would limit expressivity. Were the test examples hence using the constrained EMAN? I.e., did all models have the same number of parameters? It would also be interesting to see if the computation time during inference is different between the models, and how much they vary.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Very well detailed limitations are mentioned, and potential future work to tackle them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the supportive and insightful feedback.
> In Line 143 the authors noted the addition of a residual connection to the HermesBlocks ... tested in an ablation study?
We performed an ablation study; results are shown in Table 5 of the uploaded page. The impact on model performance was mixed: having a residual connection improves performance on Heat, but decreases performance slightly on Wave and Cahn-Hilliard. We will include these results in the final version and note that the residual connection should be treated as a task-dependent design dimension.
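As a sketch of the design dimension in question, a generic (non-gauge-equivariant) message passing block with an optional residual connection might look like the following in NumPy. This is illustrative only and not the HermesBlock implementation.

```python
import numpy as np

def message_passing_block(x, adj, weight, residual=True):
    """One simplified block: aggregate neighbor features via the
    adjacency matrix, apply a linear map and a nonlinearity, and
    optionally add a residual (skip) connection.
    x: (N, F) features, adj: (N, N) adjacency, weight: (F, F)."""
    agg = adj @ x                 # sum over neighbors
    out = np.tanh(agg @ weight)   # nonlinear update
    return x + out if residual else out
```

The residual path lets each block learn a perturbation of the identity, which can help on smooth dynamics (as on Heat) but, as the ablation suggests, is not universally beneficial.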
> In Figure 3 the claim is that Hermes is accurate for all datasets, even though in the Wave example Hermes is diverging faster than EMAN.
Thank you for pointing this out. We will adjust the wording to accurately reflect the results in the final version.
> Line 312 mentions there is a runtime increase and parameter increase involved with Hermes ... mentioning the limitation in scalability in Line 314.
We show the mean forward computation time during inference on the test time dataset in Table 4 of the uploaded page. Interestingly, Hermes is actually slightly faster than GemCNN, likely because Hermes performs fewer aggregations than GemCNN when using a similar number of parameters. Adding more message-passing iterations would increase the runtime over GemCNN.
Regarding the number of vertices, we reduced the original meshes (~170k vertices) to <5000 vertices to keep training tractable. It is well known that GNNs do not scale to larger graphs [A, B] due to memory constraints. We are not aware of any message passing graph networks that can scale up to 170k vertices on a single GPU and scaling to this many vertices likely requires distributed computing and/or graph subsampling. Applying such techniques on Hermes to scale to larger graphs would be a good future research direction.
> Ideally future work can be described as well, seeing what direction is most promising from the authors' opinion.
Thank you for your comment. One future direction is to consider different dynamics such as non-stationary or chaotic dynamics and other PDEs important in real-world applications. Another direction is to analyze the design space of gauge-equivariant networks. While GNNs have been extensively studied, far less work exists for mesh methods. GNNs are often highly task-specific and there are many design dimensions (e.g. residual connections, message passing iterations, etc.) to consider [C]. It would be particularly helpful for practitioners to have guidelines on when to use gauge equivariance and/or message passing over simpler approaches. This work aims to be a first step in this direction by demonstrating Hermes as a good fit for predicting nonlinear dynamics on meshes.
> For Table 1 the reported values of RMSE feel less interpretable than perhaps relative errors
We chose RMSE as it would emphasize large differences from the ground truth and is a standard way to measure error [D,F]. While relative error is also a standard method and may be more interpretable as a unitless quantity, it also depends on where the zero point of the units is (e.g. measurements in Celsius vs. Kelvin would give different relative errors even though the absolute error is equivalent) and therefore relative error may not always be preferred over RMSE. We observe roughly 2-4% relative absolute errors for Hermes on the Heat and Cahn-Hilliard datasets.
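To illustrate the zero-point issue concretely (with hypothetical numbers, not values from our experiments): a constant 2-degree prediction error yields identical RMSE in Celsius and Kelvin, but relative errors that differ by more than an order of magnitude, since relative error depends on where the scale's zero point sits.

```python
import numpy as np

# Hypothetical temperatures: truth and a prediction off by 2 degrees everywhere.
true_c = np.array([20.0, 25.0, 30.0])   # Celsius
pred_c = true_c + 2.0
true_k = true_c + 273.15                # the same measurements in Kelvin
pred_k = pred_c + 273.15

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
rel = lambda a, b: np.mean(np.abs(a - b) / np.abs(b))

print(rmse(pred_c, true_c), rmse(pred_k, true_k))  # identical absolute error
print(rel(pred_c, true_c), rel(pred_k, true_k))    # relative error shrinks in Kelvin
```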
>What is the intuitive reason that all perform so poorly for the wave equation in rollout testing?
One possible reason that all models diverge on long-term predictions is that the wave amplitude oscillates around 0 between [-1, 1] multiple times over the course of one training trajectory. In contrast, the temperature in the heat PDE diffuses toward the mean and the concentration in the Cahn-Hilliard equation approaches the boundaries. It may be more difficult to predict the periodic nature of the wave and the exact times of the wave peaks.
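This intuition can be illustrated with a toy 1-D analogue (not our trained models or data): a slightly wrong decay rate for a diffusive signal keeps the rollout error small and shrinking, while a slightly wrong frequency for an oscillatory signal produces phase drift and eventually O(1) error.

```python
import numpy as np

t = np.arange(200, dtype=float)
heat = np.exp(-0.05 * t)        # diffusive signal: decays monotonically to the mean
wave = np.sin(0.3 * t)          # oscillatory signal: crosses zero repeatedly

# A rollout model with a tiny systematic error in each regime:
heat_pred = np.exp(-0.051 * t)  # slightly wrong decay rate
wave_pred = np.sin(0.31 * t)    # slightly wrong frequency -> phase drift

heat_err = np.abs(heat - heat_pred)  # stays small: both curves shrink together
wave_err = np.abs(wave - wave_pred)  # grows to O(1) once the phases decorrelate
print(heat_err.max(), wave_err.max())
```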
>Would it make sense to have a baseline that is not gauge-equivariant?
Yes, both GemCNN and EMAN are gauge-equivariant methods. We considered this comparison most relevant as our primary goal was to elucidate when nonlinear message passing is more beneficial than linear messages on meshes. We have also added several new baselines: two standard GNN variants (GCN and MPNN), a non-equivariant mesh method (MeshGraphNet [D]), an E(3)-equivariant baseline (EGNN [E]), and a non-equivariant, mesh-aware method (SpiralNet++ [F]). See Table 1 of the uploaded page for a comparison of each method's features and Table 2 for the results. The results show that Hermes outperforms EGNN and SpiralNet++ on all datasets and also outperforms MeshGraphNet on the test mesh.
> ... did all models have the same number of parameters? ... computation time during inference.
Yes, for our tests, we use a similar number of parameters for each method including EMAN (see Table 7 in the Appendix and Table 5 in the additional tables page). We also measured the mean computation time during inference on the test time dataset and surprisingly find that Hermes is slightly faster than GemCNN and roughly 1.5-2x faster than EMAN. All three gauge-equivariant methods are much slower than regular graph networks (e.g. GCN, EGNN, SpiralNet++). We will clarify the number of parameters and include the computation time in the final version.
References:
- [A] Wu, Z., et al. (2020). A comprehensive survey on graph neural networks.
- [B] Duan, K., et al. (2022). A comprehensive study on large-scale graph training: Benchmarking and rethinking.
- [C] You, J., et al. (2020). Design space for graph neural networks.
- [D] Pfaff, T., et al. (2020). Learning Mesh-Based Simulation with Graph Networks.
- [E] Satorras, V. G., et al. (2021). E (n) equivariant graph neural networks.
- [F] Gong, S., et al. (2019). Spiralnet++: A fast and highly efficient mesh convolution operator.
---
Rebuttal Comment 1.1:
Comment: Thank you for the extensive rebuttal and additional results. These very much strengthen the paper and make it a much more convincing novel framework. One question that remains is what the authors think of the new MeshGraphNet baseline, and how it impacts the novelty of the proposed method. MeshGraphNet outperforms Hermes in some cases and performs similarly in others, while running at much faster inference times. The authors mentioned in future work it would be interesting to see when to use gauge equivariance; how would the authors answer this question currently with the new results? Is there a clear area where gauge equivariance is beneficial whereas in other areas it is not? My other comments have been properly addressed; I'd like to thank the authors for their detailed response.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the quick and thoughtful reply and are glad that we have answered most concerns.
It is true that MeshGraphNet outperforms Hermes on Heat, underperforms on Wave, and performs similarly on Cahn-Hilliard. We would like to point out that Hermes performs substantially better on the test mesh datasets, which may indicate that Hermes can generalize to the true dynamics function rather than the specific dynamics seen in the training trajectories.
More generally in the equivariant networks literature, it is well known that using symmetry as an inductive bias guarantees generalization, as opposed to data augmentation approaches, often leading to better sample efficiency and faster convergence. Gauge equivariance specifically improves generalization to local frame transformations, so Hermes would likely be more useful in the small data regime or where the local curvatures are not very homogeneous across the mesh. Although our experiments did not specifically evaluate sample efficiency, we can see some support for this on the test mesh dataset, where Hermes outperforms MeshGraphNet on all PDE datasets.
A Finite-Particle Convergence Rate for Stein Variational Gradient Descent | Accept (poster) | Summary: This paper provides an analysis of the convergence rate of finite-sample Stein Variational Gradient Descent (SVGD) for sub-Gaussian targets with Lipschitz scores. In contrast to previous works such as Liu 2017, Duncan et al. 2019 and Korba et al. 2020, the presented results offer convergence guarantees that hold for finite samples and rely on weaker assumptions. The authors present two key results regarding the discretization error of finite-sample SVGD compared to infinite-sample SVGD, measured in terms of Wasserstein (Theorem 1) and KSD (Theorem 2), respectively. These results are then combined to establish a finite-sample bound on the KSD error between a finite-sample approximation and the target distribution (Theorem 3). By carefully selecting step sizes, the authors demonstrate that this error decays at a rate of $1 / \sqrt{\log \log n}$ (Corollary 2).
Strengths: **Quality**: This paper provides a rigorous study of the convergence of finite-sample SVGD and delivers clear and well-supported results. The main findings (Theorem 1, 2, 3) build upon existing results (Lemma 1 and 4) but require sophisticated combinations and detailed arguments. The authors provide extensive discussions and overview the proof strategy, which appears reasonable and comprehensive (although I did not examine the proofs in depth).
**Novelty**: This paper offers a crucial contribution by bridging the gap between the existing convergence guarantees of infinite-sample SVGD and the practical implementation of finite-sample approximations, which is currently lacking in the literature. Whilst the established convergence rate is slow, this work stands out as the first to provide an explicit, non-asymptotic guarantee for SVGD approximations under reasonably mild assumptions on the target distribution. This, in my view, is the primary novelty and significance of this paper.
**Clarity**: The paper demonstrates clear motivation and objectives. Despite the technical nature of the content, the exposition is highly comprehensible. The intuitive explanations and remarks provided before and after each main theorem are particularly valuable in aiding understanding.
Weaknesses: **Slow convergence rate**: As mentioned above and in the paper, the established rate $1 / \sqrt{\log \log n}$ is notably slow, suggesting that the bound (3) is likely to be very loose. Although the authors acknowledge the potential of the presented proof strategy as a starting point for refining the bounds, it is not entirely clear whether or not or how the proof could be adapted to achieve an improved bound. Including discussions on potential avenues for enhancing the convergence rate would be valuable and insightful.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. There are a few places that say “the Algorithm 2 outputs $\mu_r^n = SVGD(\mu_0^n, r)$…”, e.g. L166. Since this output is based on a discrete measure $\mu_0^n$, should it be “the **Algorithm 1** outputs …” instead?
2. Could you elaborate on the suggested step size scheme in Corollary 2? Specifically, what is the dependence on the sample size $n$ and on the dimension $d$? How easy is it to construct the upper bounds $(\overline{w}_{0, n}, \overline{A}, \overline{B}, \overline{C})$ in practice?
3. As also mentioned in the paper, the established rate $1 / \sqrt{\log \log n}$ is very slow. Could you provide some insights into which parts of the proof strategy could possibly have led to this slow rate? Is the bound (3) tight? How does this convergence rate compare with empirical performance in numerical simulations?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The assumptions on the target distribution, its score function and the positive definite kernel are summarised in Section 2, accompanied by interpretations and discussions on their connections to related literature. Limitations are also discussed in Section 9. Overall, I think the discussions on assumptions and limitations are adequately covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We are pleased that the reviewer found our proof strategy reasonable, our contribution crucial, and our exposition comprehensive. We respond to the detailed comments below.
### “the Algorithm 2 outputs …”, e.g. L166
Thank you for pointing that out! Indeed, it should be Algorithm 1.
### “Could you elaborate on the suggested step size scheme in Corollary 2?”
The step size scheme is constructed such that the two terms in the unified error bound (Theorem 3) balance with each other as there is a tradeoff between them—the discretization error bound (i.e., $a_{t-1}$) grows with the step size sum while the continuous SVGD error decreases proportional to it. The optimal step size sum is
$O(\\log\\log(e^e+\\frac{1}{ \\bar{w}_{0,n} })),$
and after plugging in the upper bound in eq. (9):
$O(\\log\\log(e^e + \\frac{ \\delta n^{1/(2\\vee d)} }{ M_{\\mu_0^{\\infty}}\\log(n)^{\\mathbf{1}[d=2]} }) ).$
As is used above, an upper bound for $w_{0,n}$ is given in eq. (9).
To get upper bounds $\\bar{A}, \\bar{B}, \\bar{C}$ it suffices to produce the following upper bounds:
* Kernel constant upper bounds: $\\kappa_1, \\kappa_2, \\gamma$. These are straightforward to compute explicitly, and we have provided these values in Appendix A for Gaussian and IMQ kernels.
* Lipschitz constant of the score upper bound: This can be derived from the observable score function (which does not require knowledge of the normalizing constant of the target density) and is a standard input to score-based distributional approximation methods like Langevin Monte Carlo.
* Moment upper bounds: It suffices to provide any upper bound on $\\mathbb{E}_{P}[\\|\\cdot\\|_2]$ as
$m_{P, x^*} \\leq \\|x^*\\|_2 + \\mathbb{E}_P [\\|\\cdot\\|_2],$
$m_{\\mu_0, x^*} \\leq \\mathbb{E}_{\\mu_0}[\\|\\cdot\\|_2] + \\mathbb{E}_P [\\|\\cdot\\|_2],$
where $x^*$ can be identified efficiently by running gradient ascent to find a stationary point, and $\\mathbb{E}_{\\mu_0}[\\|\\cdot\\|_2]$ can be numerically estimated.
### Is the bound (3) tight? How does this convergence rate compare with empirical performance in numerical simulations?
We believe the first part of the bound ($a_{t-1}$) can be further improved (please see our response to questions shared by the reviewers for detailed reasons) and as suggested in the submission we expect our worst case rate (which holds for any initialization and a broad class of target distributions and step size sequences) to be slower than the true convergence rate observed in practice for many distributions, initializations, and step size settings. We believe that developing a non-trivial lower bound for SVGD performance to assess tightness is an important open question.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which have answered my questions. I would like to keep my scores. | Summary: In this work, the authors present a novel analysis of finite-particle Stein Variational Gradient Descent (SVGD) and derive a unified convergence bound for this algorithm. The convergence bound provides an explicit measure of how close the finite-particle SVGD algorithm gets to its target. To establish this convergence bound, the authors first introduce a bound on the discretization error of the 1-Wasserstein distance between the finite-particle and continuous SVGD. They make certain assumptions that are commonly satisfied in SVGD applications and compatible with Kernelized Stein Discrepancy (KSD) weak convergence control.
Overall, this work contributes to the understanding of finite-particle SVGD and provides a unified convergence bound that quantifies the algorithm's convergence to its target. The derived bounds enable better control and evaluation of the accuracy of finite-particle SVGD in practical applications.
Strengths: The strengths are:
* The authors have proved the first unified convergence bound and rate for finite-particle SVGD.
* They show that SVGD with n-particles drives the KSD to zero at an order $1/ \sqrt{\log\log(n)}$ rate.
Weaknesses: In my perspective, this paper would be more suitable for an optimization journal, as it would provide an environment where the technical contributions of the study can undergo a more comprehensive evaluation through an extended rebuttal cycle.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our contributions and for the feedback. We respond to the comment about suitable venues below.
### Optimization journal vs. NeurIPS
We firmly believe that NeurIPS is an ideal venue for this work, as the original SVGD algorithm and analysis were published at NeurIPS [17], the bulk of all subsequent analyses were published at NeurIPS or ICML [16,9,14,20,25], and SVGD has since gained widespread use in the machine learning community in particular. We thus believe that this work will be of the greatest interest and relevance to the NeurIPS community. | Summary: The authors provide the first convergence guarantee for finite particle Stein Variational Gradient Descent (SVGD). Although I am not an expert on this topic, I believe this problem remained open for a long time, and it should be the first of many finite particle results to come.
While $(\log\log n)^{-1/2}$ is a fairly slow rate, I believe we should overlook the rate and instead consider the significance of the technical leap taken by the authors to achieve a finite-particle result at all. For this reason, I will recommend accept for this paper.
Strengths: 1. This is the first finite particle convergence guarantee for SVGD.
2. The contents are well organized and presented.
3. The proof is concise and clean.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Given that I am not an expert on this subject, I would like the authors to clarify a couple of questions for me.
1. What was the main conceptual challenge in establishing a finite particle guarantee, and how did this work overcome it? For me Theorem 3 reads like a bit of magic, and the desired result just appears without much intuition. I would like to understand how this came to be.
2. What do the authors perceive as the next challenge preventing improvements to this result? Similar to the previous question, I don't quite see on an intuitive level where the $log log n$ dependence came about, which the authors also believe can be improved. I would appreciate the authors can elaborate further on this topic.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive feedback. We are glad that you found our contribution significant, our work well presented, and our proof concise and clean. We provide responses to the detailed comments below.
### Main conceptual challenge in establishing a finite particle guarantee, and how this work overcomes it.
SVGD is originally derived by finding descent directions of the KL divergence between an approximation and the target distribution. Most prior convergence analyses [16,14,20] rely heavily on this KL-descent property, which ultimately yields a bound on KSD error but applies only to continuous SVGD, because KL divergence is ill-defined between the n-particle discrete approximation and the continuous target distribution. Our work overcomes this difficulty by explicitly controlling the 1-Wasserstein distance between continuous and n-particle SVGD through a discretization error bound (a key challenge in deriving the explicit Theorem 1 was that the Wasserstein pseudo-Lipschitz constant of SVGD itself depends on the moment growth of the measures being compared, so our proof carefully tracks this growth in tandem with the discretization error) and then, unlike [9], translating the resulting Wasserstein error into KSD error so that it can be combined with convergence results for continuous SVGD.
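For concreteness, the n-particle update that Algorithm 1 iterates can be sketched as follows (a minimal illustration with an RBF kernel and a standard Gaussian target; the bandwidth, step size, and initialization are illustrative choices, not the settings analyzed in the paper):

```python
import numpy as np

def svgd_step(x, score, h=1.0, eps=0.2):
    """One n-particle SVGD update:
    phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ],
    with an RBF kernel k; the kernel-gradient term repels particles from each other."""
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]                     # diffs[i, j] = x_i - x_j
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * h ** 2))   # kernel matrix
    attract = K @ score(x)                                    # kernel-weighted scores
    repulse = np.sum(K[:, :, None] * diffs, axis=1) / h ** 2  # kernel gradients
    return x + eps * (attract + repulse) / n

# Toy run: standard Gaussian target (score(z) = -z), particles initialized far away.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(200, 1))
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
# The particle cloud should now be spread around the target mean of 0.
```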
### Intuition behind Theorem 3
We will clarify that the aim of Theorem 3 is to combine Theorem 2 (the error of using n-particle SVGD to approximate continuous SVGD) and Corollary 1 (the error bound of continuous SVGD) into an error bound of n-particle SVGD. Since both bounds are formulated with the KSD metric, we used the triangle inequality of KSD to prove the result.
### The next challenge preventing improvements to this result; where the $\\log \\log n$ dependence came about
Please see our response to questions shared by the reviewers.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the reply. I believe my questions are answered. While I am not an expert on this subject, and it's hard for me to justify raising the score, I would like to see this paper accepted given what I understand about it now.
Therefore I will raise my score to 8, mostly in context of other reviews that are too pessimistic in my opinion and for non-technical reasons. I hope the AC will consider evaluating this work on a more fair benchmark, especially given this is the first finite particle guarantee. | Summary: This work studies the non-asymptotic convergence rate of Stein variational gradient descent (SVGD), an algorithm for approximating a target probability distribution with a collection of particles. This work presents a finite-particle convergence rate for SVGD, which provides a measure of how quickly the algorithm converges to the target distribution with a finite number of particles. The convergence rate formula drives the kernel Stein discrepancy to zero at an order 1/√log log n rate, but the authors suspect that the dependence on n can be improved and hope that their proof strategy will serve as a template for future refinements.
Strengths: This work provides the first finite-particle convergence rate for Stein variational gradient descent (SVGD), which is a popular algorithm for approximating a probability distribution with a collection of particles. This work presents an explicit, non-asymptotic proof strategy for the convergence rate formula --- which drives the kernel Stein discrepancy to zero at an order $1/\sqrt{\log \log n}$ rate, providing a measure of how quickly the algorithm converges to the target distribution with a finite number $n$ of particles. Prior to this work, relatively little was known about SVGD's non-asymptotic approximation quality, despite the fact that SVGD has demonstrated promising results for various inferential tasks. The authors also claim that it serves as a template for future refinements to improve the dependence on $n$. I strongly believe this, due to the soundness of the technical tools the work adopted.
The authors provide a thorough discussion of the assumptions and conditions required for the convergence rate formula to hold, which helps to clarify the limitations and applicability of the formula. Finally, this work also includes a comprehensive list of references to related work, which provides a useful starting point for further research on SVGD and related algorithms.
Weaknesses: Despite its success in providing the first non-asymptotic convergence rate, this work assumes a level of familiarity with the mathematical concepts and notation for experts, which may make it difficult for readers without a strong background in probability theory and optimization to follow. In addition, this work does not provide any experimental results or comparisons with other algorithms to demonstrate the practical usefulness of the convergence rate formula, and does not provide any specific information on how the dependence on $n$ can be improved, which may limit its usefulness for researchers looking to optimize the performance of SVGD. Lastly, this work focuses exclusively on SVGD and does not provide any insights or comparisons with other algorithms for approximating probability distributions, which may limit its broader relevance to the field.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I am at an educative level but quite enjoy reading on this topic. I quite like the generic / infinite-particle continuum manner SVGD Algorithm 2 is written instead of the $n$-particle SVGD Algorithm 1. I wondered if this continuous-time approximation is the critical reason for the success of the non-asymptotic rate first established by the authors.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: This paper contributes as a theoretical work and does not raise negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We are pleased that the reviewer found our technical tools sound, our discussion of assumptions thorough, and also shared our vision for future refinements. Below is our detailed response:
### Assumed familiarity of optimization/probability theory concepts and notations
Thank you for this feedback. We endeavor to make the final version maximally accessible, with all notation and concepts defined in Section 2. If there are particular concepts that the reviewer finds unclear, please let us know!
### Experimental results and comparisons with other algorithms
The main focus of our work was not to propose a new algorithm or even to advocate for SVGD over alternative algorithms but rather to address the longstanding open question of whether a unified convergence bound for finite-particle SVGD could be derived. As such, we believe that an experimental comparison with other algorithms would be out of scope in this work. There are however many empirical comparisons of SVGD with other algorithms in the cited literature, and, in the introduction, we do compare our theoretical result with prior work (including Liu [16], Gorham et al. [9], Korba et al. [14], and Salim et al. [20]) that studies the convergence of SVGD without providing unified convergence bounds or rates.
### How the dependence on n can be improved
Please see our response to questions shared by the reviewers.
### This work focuses only on SVGD.
Thank you for highlighting this opportunity to further contextualize our work. While the focus of this work is wholly on solving the open problem of establishing any unified convergence bound for finite-particle SVGD and we believe this initial rate will be improved in the future, we will better contextualize the initial rate and highlight what may be achievable in the future by comparing to other algorithms for approximating probability distributions. For example, MCMC methods like the unadjusted Langevin algorithm admit a much faster rate (see, e.g., the polynomial rate bounds of Balasubramanian et al. (2022)). However, as agreed upon by the reviewer and other reviewers (igw1,awG8), this work is still a “significant technical leap” to achieve the first non-asymptotic convergence rate for SVGD. We will also discuss a promising follow-up to this work by an independent research group (for anonymity reasons we omit the citation here) that has already begun investigating improved rates by modifying the SVGD algorithm.
Our analysis also provides a template for studying convergence rates for SVGD-like algorithms. For example, Shi et al. (2021) proposed SVGD-like methods for sampling in constrained domains. However, their convergence analysis lacks a unified bound and rate. We could apply similar proof techniques in this submission to obtain a convergence rate for their algorithm.
Reference:
* Balasubramanian, K., Chewi, S., Erdogdu, M. A., Salim, A., & Zhang, S. (2022). Towards a theory of non-log-concave sampling: first-order stationarity guarantees for langevin monte carlo. In Conference on Learning Theory (pp. 2896-2923).
* Shi, J., Liu, C., & Mackey, L. (2021). Sampling with mirrored Stein operators. arXiv preprint arXiv:2106.12506.
### The role of infinite-particle continuum manner SVGD
We appreciate your positive response to how we present the continuous SVGD. The formulation in Algorithm 2 is indeed critical to our analysis. Our proof relies on an error bound for the continuous SVGD established by Corollary 1 and uses a discretization error bound (Theorem 2) to relate the continuous SVGD to n-particle SVGD.
---
Rebuttal Comment 1.1:
Comment: Thank you for your informative response, especially the clarification of infinite-particle continuum limit. Indeed I believe this work should not be made obsolete due to its providing the first non-asymptotic convergence rate "as a significant leap". I have raised my score from 5 to 6 accordingly. | Rebuttal 1:
Rebuttal: # Response to questions shared by Reviewer g2wP, igw1, and awG8:
### Source of $n$ dependence, potential avenues for rate improvement, and challenges involved.
The unified error bound of Theorem 3 reveals that the dependence on n arises from the tradeoff between the KSD discretization error bound ($a_{t-1}$), which grows double exponentially as the step size sum $b_{t-1}$ increases, and the infinite-particle SVGD error, which decreases proportionally to $\sqrt{b_{t-1}}$. The $\log\log n$ dependence is mainly caused by the double exponential growth of $a_{t-1}$, which can be traced back to Theorem 1. A key challenge in deriving the explicit Theorem 1 was that the Wasserstein pseudo-Lipschitz constant of SVGD itself depends on the moment growth of the measures being compared, so our proof carefully tracks this growth in tandem with the discretization error. Any improvement in this discretization error bound growth rate would translate immediately into an improved approximation error rate for SVGD. Moreover, one may be able to derive tighter bounds by analyzing the discretization error of KSD directly or by using an alternative intermediate metric in place of the 1-Wasserstein distance; finding the right metric that simultaneously remains small across the SVGD trajectory and is tractable to analyze is the main challenge.
Alternatively, instead of measuring convergence to an arbitrary sub-Gaussian target with respect to a large non-parametric measure like KSD, one could focus on the convergence of a more restricted set of moments (like means and variances) or a more restricted set of targets. Our submission has already stimulated promising follow-up work (we have withheld the title and reference to preserve anonymity, but please let us know if we should reveal it) from an independent research group demonstrating rate improvements when the function class is more restricted and the target is Gaussian or strongly log concave. Since the posting of our preprint on arXiv, another independent research group has built upon our work to show that a variant of SVGD converges at a much faster O(1/poly(n)) rate. We will highlight these avenues for improvement and cite these follow-up works in the revision. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |